Abstracts Research Seminar Winter Term 2018/19
Walter Farkas: Intrinsic Risk Measures
Monetary risk measures classify a financial position by the minimal amount of external capital that must be added to the position to make it acceptable. We propose a new concept: intrinsic risk measures. The definition via external capital is avoided and only internal resources appear. An intrinsic risk measure is defined as the smallest percentage of the currently held financial position that has to be sold and reinvested in an eligible asset such that the resulting position becomes acceptable. We show that this approach requires less nominal investment in the eligible asset to reach acceptability. It provides a more direct path from unacceptable positions towards the acceptance set and implements desired properties such as monotonicity and quasi-convexity solely through the structure of the acceptance set. We derive a representation on cones and a dual representation on convex acceptance sets, and we detail the connections of intrinsic risk measures to their monetary counterparts.
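In symbols (a sketch only: write A for the acceptance set, S_0 and S_T for the initial price and payoff of the eligible asset, and pi(X) for the current value of the position; the latter notation is ours, not necessarily the authors'), the two notions compare as

    \rho(X) = \inf\{\, m \in \mathbb{R} \;:\; X + \tfrac{m}{S_0}\, S_T \in \mathcal{A} \,\}  \qquad \text{(monetary)}
    R(X) = \inf\{\, \lambda \in [0,1] \;:\; (1-\lambda)\, X + \lambda\, \tfrac{\pi(X)}{S_0}\, S_T \in \mathcal{A} \,\}  \qquad \text{(intrinsic)}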
Torsten Hothorn: Transformation Forests
Regression models for supervised learning problems with a continuous response are commonly understood as models for the conditional mean of the response given predictors. This notion is simple and therefore appealing for interpretation and visualisation. Information about the whole underlying conditional distribution is, however, not available from these models. A more general understanding of regression models as models for conditional distributions allows much broader inference from such models, for example the computation of prediction intervals. Several random forest-type algorithms aim at estimating conditional distributions, most prominently quantile regression forests (Meinshausen, 2006, JMLR). We propose a novel approach based on a parametric family of distributions characterised by their transformation function. A dedicated novel “transformation tree” algorithm able to detect distributional changes is developed. Based on these transformation trees, we introduce “transformation forests” as an adaptive local likelihood estimator of conditional distribution functions. The resulting predictive distributions are fully parametric yet very general and allow inference procedures, such as likelihood-based variable importances, to be applied in a straightforward way. The procedure allows general transformation models to be estimated without the necessity of a priori specifying the dependency structure of parameters. Applications include the computation of probabilistic forecasts, the modelling of differential treatment effects, and the derivation of counterfactual distributions for all types of response variables.
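As a point of reference for the quantile regression forests mentioned above, the following sketch (using scikit-learn's RandomForestRegressor; variable names and data are illustrative, and this is not the transformation-forest algorithm of the talk) shows how a fitted forest yields a conditional distribution estimate through leaf co-membership weights:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def qrf_cdf(forest, X_train, y_train, x_new, y_grid):
        """Weighted empirical conditional CDF at x_new, in the spirit of
        quantile regression forests (leaf co-membership weights)."""
        leaves_train = forest.apply(X_train)              # (n_train, n_trees)
        leaves_new = forest.apply(x_new.reshape(1, -1))   # (1, n_trees)
        # weight of training point i: average over trees of 1/(leaf size)
        # if i falls into the same leaf as x_new, and 0 otherwise
        same_leaf = (leaves_train == leaves_new)          # (n_train, n_trees)
        leaf_sizes = same_leaf.sum(axis=0)                # points per matching leaf
        w = (same_leaf / np.maximum(leaf_sizes, 1)).mean(axis=1)
        # weighted empirical CDF evaluated on y_grid
        return np.array([w[y_train <= y].sum() for y in y_grid])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = X[:, 0] + (1 + np.abs(X[:, 1])) * rng.normal(size=500)   # heteroscedastic toy data
    forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=10,
                                   random_state=0).fit(X, y)
    grid = np.linspace(-5, 5, 50)
    cdf_hat = qrf_cdf(forest, X, y, np.zeros(3), grid)    # conditional CDF at x = 0

Transformation forests replace this purely nonparametric weighting scheme by an adaptive local likelihood fit of a parametric transformation model.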
Matthias Fengler: Textual Sentiment, Option Characteristics, and Stock Return Predictability
We distill sentiment from a huge assortment of NASDAQ news articles by means of machine learning methods and examine its predictive power in single-stock option markets and equity markets. We provide evidence that single-stock options react to contemporaneous sentiment. Next, examining return predictability, we discover that while option variables indeed predict stock returns, sentiment variables add further informational content. In fact, both in a regression and a trading context, option variables orthogonalized to public and sentimental news are even more informative predictors of stock returns. Distinguishing further between overnight and trading-time news, we find the former to be more informative. From a statistical topic model, we uncover that this is attributable to the differing thematic coverage of the two archives. Finally, we show that sentiment disagreement commands a strong positive risk premium above and beyond market volatility and that lagged returns predict future returns in concentrated sentiment environments.
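To fix ideas on one step of such an analysis, orthogonalising an option variable with respect to news and sentiment measures amounts to using the residual of a linear projection as the predictor; the sketch below is purely illustrative, with made-up variable names, and may differ from the construction actually used in the paper.

    import numpy as np

    def orthogonalize(option_var, controls):
        """Residual of an OLS projection of option_var on controls (plus a
        constant), i.e. the component orthogonal to news/sentiment information."""
        X = np.column_stack([np.ones(len(controls)), controls])
        beta, *_ = np.linalg.lstsq(X, option_var, rcond=None)
        return option_var - X @ beta

    rng = np.random.default_rng(1)
    sentiment = rng.normal(size=250)                  # hypothetical daily sentiment score
    news_count = rng.poisson(5, size=250)             # hypothetical news coverage
    iv_skew = 0.3 * sentiment + rng.normal(size=250)  # hypothetical option variable
    iv_skew_orth = orthogonalize(iv_skew, np.column_stack([sentiment, news_count]))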
Nestor Parolya: Testing for Independence of Large Dimensional Vectors
In this paper new tests for the independence of two high-dimensional vectors are investigated. We consider the case where the dimension of the vectors increases with the sample size and propose multivariate analysis of variance-type statistics for the hypothesis of a block diagonal covariance matrix. The asymptotic properties of the new test statistics are investigated under the null hypothesis and the alternative hypothesis using random matrix theory. For this purpose we study the weak convergence of linear spectral statistics of central and (conditionally) non-central Fisher matrices. In particular, a central limit theorem for linear spectral statistics of large dimensional (conditionally) non-central Fisher matrices is derived which is then used to analyse the power of the tests under the alternative.
The theoretical results are illustrated by means of a simulation study, where we also compare the new tests with several alternatives, in particular with the commonly used corrected likelihood ratio test. It is demonstrated that the latter test does not keep its nominal level if the dimension of one sub-vector is relatively small compared to the dimension of the other sub-vector. On the other hand, the tests proposed in this paper provide a reasonable approximation of the nominal level in such situations. Moreover, we observe that one of the proposed tests is the most powerful under a variety of correlation scenarios.
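In the notation suggested by the abstract, with the stacked vector x = (x_1', x_2')' and joint covariance matrix Sigma, the tested hypothesis is (a sketch of the setting, not necessarily the authors' exact notation)

    H_0 : \Sigma = \begin{pmatrix} \Sigma_{11} & 0 \\ 0 & \Sigma_{22} \end{pmatrix}
    \quad \text{versus} \quad
    H_1 : \Sigma_{12} \neq 0,

which under Gaussianity is equivalent to independence of the two sub-vectors.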
Tobias Fissler: The Elicitation Problem or The Quest of Comparing Forecasts in a Meaningful Way
A proven strategy in decision-making to cope with unknown or uncertain future events is to rely on forecasts for these events. Examples range from weather forecasts for agriculture, airlines or a convenient everyday life, to forecasts for supply and demand in a business context, to risk assessment in finance or predictions for GDP growth and inflation for prudential economic policy. In the presence of multiple different forecasts, a core challenge is to assess their relative quality and to eventually rank them in terms of their historical performance. This calls for an accuracy measure, which is commonly given in terms of a loss function specifying the discrepancy between a forecast and the actual observation. Examples include the zero-one loss, the absolute loss or the squared loss. If the ultimate goal of the forecasts is specified in terms of a statistical functional such as the mean, a quantile, or a certain risk measure, the loss should incentivise truthful forecasts in that the expected loss is strictly minimised by the correctly specified forecast. If a functional possesses such an incentive-compatible loss function, it is called elicitable (a notion formalised in the sketch after this abstract). Besides enabling meaningful forecast comparison, the elicitability of a functional allows for M-estimation and regression. Acknowledging that there is a wealth of elicitable functionals (mean, quantiles, expectiles) and non-elicitable functionals (variance, Expected Shortfall), this talk addresses aspects of the following Elicitation Problem:
1) When is a functional elicitable?
2) What is the class of incentive compatible loss functions?
3) What are distinguished loss functions to use in practice?
4) How to cope with the non-elicitability of a functional?
The emphasis will lie on the main achievements for multivariate functionals such as the pair of risk measures (Value-at-Risk, Expected Shortfall). The talk will also give an outlook on very recent achievements in the realm of set-valued functionals, which are suited to set-valued measures of systemic risk as well as to confidence intervals and regions.
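For reference, the formal definition behind the talk: a functional T is elicitable if there is a loss function S that is strictly consistent for it, i.e.

    T(F) = \arg\min_{x} \, \mathbb{E}_{Y \sim F}\big[ S(x, Y) \big] \qquad \text{for all } F \text{ in the relevant class of distributions.}

Classical examples are the mean, elicited by the squared loss S(x, y) = (x - y)^2, and the \alpha-quantile, elicited by the pinball loss S(x, y) = (\mathbf{1}\{y \le x\} - \alpha)(x - y); the variance and Expected Shortfall admit no such loss on their own, which is precisely what motivates the joint treatment of pairs such as (Value-at-Risk, Expected Shortfall).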
Clara Grazian: Bayesian analysis of semiparametric copula models
Approximate Bayesian computation (ABC) is a recent class of algorithms which allows for managing complex models whose likelihood function may be considered intractable. Complex models are usually characterized by a dependence structure that is difficult to model with standard tools. Copula models have been introduced as a probabilistic way to describe general multivariate distributions by considering the marginal distributions and a copula function which captures the dependence structure among the components of the vector. While it is often straightforward to produce reliable estimates of the marginals, estimating the dependence structure is more complicated, in particular in high-dimensional problems. Major areas of application include econometrics, engineering, biomedical science, signal processing and finance.
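The decomposition referred to here is Sklar's theorem: every d-dimensional distribution function F with margins F_1, ..., F_d can be written as

    F(x_1, \dots, x_d) = C\big( F_1(x_1), \dots, F_d(x_d) \big),

where the copula C is unique whenever the margins are continuous; dependence quantities such as Spearman's coefficient or the tail dependence indices are functionals of C alone.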
We consider the general problem of estimating some specific quantities of interest of a generic copula (such as the tail dependence index or Spearman's coefficient) by adopting an approximate Bayesian approach based on computing the empirical likelihood as an approximation of the likelihood function for the quantity of interest.
The approach is general, in the sense that it can be adapted both to parametric and nonparametric modelling of the marginal distributions and to a parametric or semiparametric estimation of the copula function. We will show how the Bayesian procedure based on ABC exhibits better properties than the classical inferential solutions available in the literature, and we apply the method to both simulated and real examples.
Christa Cuchiero: Contemporary stochastic volatility modeling - theory and empirics
Stochastic volatility modeling has been at the center of finance and econometrics since the groundbreaking results of Black & Scholes and Merton on their famous deterministic volatility model. It is a very natural approach to resolve the shortcomings of the Black-Scholes-Merton model and to match both time series and option data much more accurately. In the last few years, the by now classical continuous-time stochastic volatility models based on low-dimensional diffusions, e.g. the Heston or the SABR model, have been challenged and might be fully replaced by two modern developments, namely rough volatility and local stochastic volatility.
The rough volatility paradigm asserts that the trajectories of assets' volatility are rougher than those of Brownian motion, a revolutionary perspective that has radically changed certain persistent paradigms. It considers volatility as a stochastic Volterra process and provides a universal approach to capturing the econometric and microstructural foundations of markets.
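As a sketch of what "volatility as a stochastic Volterra process" means (generic coefficients b and sigma; the concrete models discussed in the talk may differ), the spot variance V solves

    V_t = V_0 + \int_0^t K(t-s)\, b(V_s)\, \mathrm{d}s + \int_0^t K(t-s)\, \sigma(V_s)\, \mathrm{d}W_s,
    \qquad K(t) = \frac{t^{H - 1/2}}{\Gamma(H + 1/2)}, \quad H \in (0, \tfrac{1}{2}),

where the singular fractional kernel K produces sample paths rougher than those of Brownian motion.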
Rough volatility is complemented by local stochastic volatility, which combines classical stochastic volatility with perfect calibration to implied volatility smiles or skews, a theoretically and practically still very intricate task. In this talk we provide a novel infinite-dimensional point of view on both directions. It allows us to resolve the generic non-Markovianity of the at first sight naturally low-dimensional volatility process. In particular, this approach enables us to treat the challenging problem of multivariate rough covariance models for more than one asset. We also consider (non-parametric) estimation techniques and tread new paths to calibration by using machine learning methods.
Johannes Heiny: Assessing the dependence of high-dimensional time series via autocovariances and autocorrelations
In the first part of this talk, we provide asymptotic theory for certain functions of the sample autocovariance matrices of a high-dimensional time series with infinite fourth moment. The time series exhibits linear dependence across the coordinates and through time. Assuming that the dimension increases with the sample size, we provide theory for the eigenvectors of the sample autocovariance matrices and find explicit approximations of a simple structure, whose finite-sample quality is illustrated for simulated data. We also obtain the limits of the normalized eigenvalues of functions of the sample autocovariance matrices in terms of cluster Poisson point processes. In turn, we derive the distributional limits of the largest eigenvalues and of functionals acting on them.
In the second part, we consider the sample correlation matrix R associated with n observations of a p-dimensional time series. In our framework, we allow that p/n may tend to 0 or to a positive constant. If the time series has a finite fourth moment, we show that the sample correlation matrix can be approximated by its sample covariance counterpart for a wide variety of models. This result is very important for data analysts who use principal component analysis to detect structure in high-dimensional time series. From a theoretical point of view, it allows one to derive a plethora of ancillary results for functionals of the eigenvalues of R. For instance, we determine the almost sure behavior of the largest and smallest eigenvalues, and the limiting spectral distribution of R. The optimal condition for the convergence of the empirical spectral distributions turns out to be slightly weaker than the normal domain of attraction condition. In the case of time series with infinite (2 - ε)-moments, a new class of Marchenko-Pastur type laws appears as limiting spectral distributions of R.
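A quick simulation sketch (illustrative only, not the authors' code) of the covariance/correlation comparison in the second part: for data with finite fourth moment, the eigenvalues of the sample correlation matrix R stay close to those of the sample covariance matrix of the data standardised by their true scales, even when p grows with n.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 2000, 400                                  # dimension grows with sample size
    scales = rng.uniform(0.5, 3.0, size=p)
    X = rng.standard_t(df=6, size=(n, p)) * scales    # t(6) entries: finite fourth moment
    true_sd = scales * np.sqrt(6.0 / (6.0 - 2.0))     # exact standard deviation of the entries

    Y = X / true_sd                                   # standardised by the true scales
    S = Y.T @ Y / n                                   # sample covariance of standardised data
    R = np.corrcoef(X, rowvar=False)                  # sample correlation matrix of raw data

    diff = np.abs(np.sort(np.linalg.eigvalsh(S)) - np.sort(np.linalg.eigvalsh(R)))
    print("largest eigenvalue discrepancy:", diff.max())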
Alexander McNeil: Spectral backtests of forecast distributions with application to risk management
We study a class of backtests for forecast distributions in which the test statistic is a spectral transformation that weights exceedance events by a function of the modeled probability level. The choice of the kernel function makes explicit the user's priorities for model performance. The class of spectral backtests includes tests of unconditional coverage and tests of conditional coverage. We show how the class embeds a wide variety of backtests in the existing literature, and propose novel variants as well. The tests are illustrated by extensive simulation studies in which we consider the performance when essential features of the forecast model are neglected, such as heavy tails and volatility. In an empirical application, we backtest forecast distributions for the overnight P&L of ten bank trading portfolios.
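Under our reading of the abstract, writing P_t = F_t(L_t) for the probability integral transform of the realised loss L_t under the forecast distribution F_t, a spectral transformation weights the exceedance indicators over probability levels by a measure (kernel) \nu on the unit interval,

    W_t = \int_{[0,1]} \mathbf{1}\{ P_t \ge u \} \, \nu(\mathrm{d}u),

so that a point mass at a single level \alpha recovers the familiar value-at-risk exceedance indicator, while a continuous kernel weights an entire range of tail levels.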
Rodney Strachan: Reducing Dimensions in a Large TVP-VAR
This paper proposes a new approach to estimating high-dimensional time-varying parameter structural vector autoregressive models (TVP-SVARs) by taking advantage of an empirical feature of TVP-(S)VARs. TVP-(S)VAR models are rarely used with more than 4-5 variables. However, recent work has shown the advantages of modelling VARs with large numbers of variables, and interest has naturally increased in modelling large dimensional TVP-VARs. A feature that has not yet been utilized is that the covariance matrix for the state equation, when estimated freely, is often near singular. We propose a specification that uses this singularity to develop a factor-like structure to estimate a TVP-SVAR for 15 variables. Using a generalization of the recentering approach, a rank-reduced state covariance matrix and judicious parameter expansions, we obtain efficient and simple computation of a high-dimensional TVP-SVAR. An advantage of our approach is that we retain a formal inferential framework within which we can conduct inference on impulse responses, variance decompositions and, importantly for our model, the rank of the state equation covariance matrix. We show clear empirical evidence in favour of our model and improvements in estimates of impulse responses.
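To make the factor-like structure concrete in generic notation (not necessarily the authors'): with measurement equation y_t = X_t \beta_t + \varepsilon_t and random-walk states \beta_t = \beta_{t-1} + \eta_t, \eta_t \sim N(0, \Omega), a reduced-rank state covariance \Omega = \Lambda \Lambda' with \Lambda of dimension K \times r and r \ll K implies

    \beta_t = \beta_0 + \Lambda f_t, \qquad f_t = f_{t-1} + u_t, \quad u_t \sim N(0, I_r),

so that the K time-varying parameters are driven by only r stochastic factors.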
Veronika Rockova: Dynamic Sparse Factor Analysis
Its conceptual appeal and effectiveness have made latent factor modeling an indispensable tool for multivariate analysis. Despite its popularity across many fields, there are outstanding methodological challenges that have hampered practical implementations. One major challenge is the selection of the number of factors, which is exacerbated for dynamic factor models, where factors can disappear, emerge, and/or reoccur over time. Existing tools that assume a fixed number of factors may provide a misguided representation of the data mechanism, especially when the number of factors is crudely misspecified. Another challenge is the interpretability of the factor structure, which is often regarded as an unattainable objective due to the lack of identifiability. Motivated by a topical macroeconomic application, we develop a flexible Bayesian method for dynamic factor analysis (DFA) that can simultaneously accommodate a time-varying number of factors and enhance interpretability without strict identifiability constraints. To this end, we turn to dynamic sparsity by employing Dynamic Spike-and-Slab (DSS) priors within DFA. Scalable Bayesian EM estimation is proposed for fast posterior mode identification via rotations to sparsity, enabling data analysis at scales that would previously have been infeasible. We study a large-scale balanced panel of macroeconomic variables covering multiple facets of the US economy, with a focus on the Great Recession, to highlight the efficacy and usefulness of our proposed method.
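In generic notation (the exact specification in the talk may differ), the dynamic factor model reads y_t = B_t f_t + \varepsilon_t with the factors f_t following an autoregression, and the Dynamic Spike-and-Slab prior mixes, entry by entry, a near-degenerate spike with a diffuse slab,

    B_{t,jk} \mid \gamma_{t,jk} \;\sim\; \gamma_{t,jk}\, \pi_{\text{slab}} + (1 - \gamma_{t,jk})\, \pi_{\text{spike}},

with time-evolving inclusion indicators \gamma_{t,jk}, so that entire columns of B_t can be shrunk towards zero in some periods, which is how factors disappear, emerge and reoccur.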
Zachary Feinstein: Pricing debt in an Eisenberg-Noe interbank network with comonotonic endowments
In this talk we present formulas for the pricing of debt and equity of firms in a financial network under comonotonic endowments. We demonstrate that the comonotonic setting provides a lower bound to the price of debt under Eisenberg-Noe financial networks with consistent marginal endowments. Such financial networks encode the interconnection of firms through debt claims. The proposed pricing formulas consider the realized, endogenous, recovery rate on debt claims. Special consideration will be given to the setting in which firms only invest in a risk-free bond and a common risky asset following a geometric Brownian motion.
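For background, the Eisenberg-Noe clearing problem itself can be solved in a few lines: with nominal liabilities matrix L, total obligations p_bar_i = sum_j L_ij, relative liabilities Pi_ij = L_ij / p_bar_i and endowment vector x, the clearing payment vector is the greatest fixed point of p -> min(p_bar, x + Pi' p). The sketch below iterates this map on a made-up three-bank network; it does not reproduce the pricing formulas for comonotonic endowments presented in the talk.

    import numpy as np

    def eisenberg_noe_clearing(L, x, tol=1e-12, max_iter=10_000):
        """Greatest clearing payment vector of the Eisenberg-Noe model via
        fixed-point iteration of p -> min(p_bar, x + Pi^T p)."""
        p_bar = L.sum(axis=1)                       # total nominal obligations
        Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
                       where=p_bar[:, None] > 0)    # relative liabilities
        p = p_bar.copy()                            # start from full payment
        for _ in range(max_iter):
            p_new = np.minimum(p_bar, x + Pi.T @ p)
            if np.max(np.abs(p_new - p)) < tol:
                break
            p = p_new
        return p

    # Toy 3-bank network (illustrative numbers only)
    L = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 2.0],
                  [1.0, 1.0, 0.0]])
    x = np.array([0.5, 0.5, 3.0])                   # external endowments
    p = eisenberg_noe_clearing(L, x)
    print("clearing payments:", p)
    print("defaulting banks:", np.where(p < L.sum(axis=1) - 1e-9)[0])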
Matteo Mogliani: Bayesian MIDAS penalized regressions: Estimation, selection, and prediction
We propose a new approach to modeling and forecasting with mixed-frequency (MIDAS) regressions in the presence of a large number of predictors. Our approach resorts to penalized regressions such as the Lasso and the Group Lasso, hence addressing the issue of simultaneously estimating and selecting the model, and relies on Bayesian techniques for estimation. In particular, the penalty hyper-parameters driving the model shrinkage are automatically tuned via an adaptive MCMC algorithm. To achieve sparsity and improve variable selection, we also consider a Group Lasso model augmented with a spike-and-slab prior. Simulations show that the proposed models have good in-sample and out-of-sample performance, even when the design matrix exhibits very high correlation. When applied to a forecasting model of US GDP, the results suggest that high-frequency financial variables may have some, although limited, short-term predictive content.
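With the lag polynomial of each high-frequency predictor treated as one group g with coefficient block \beta_g of size d_g, the penalised criterion behind the Group Lasso component can be sketched as

    \hat{\beta} = \arg\min_{\beta} \; \Big\| y - \textstyle\sum_{g=1}^{G} X_g \beta_g \Big\|_2^2 \; + \; \lambda \sum_{g=1}^{G} \sqrt{d_g}\, \| \beta_g \|_2,

whose Bayesian analogue replaces the penalty by a corresponding prior on each block; the spike-and-slab variant additionally mixes that prior with a point mass at zero, so that entire groups can drop out of the model.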
Wolfgang Hörmann: Stochastic disease spread models
Stochastic SIR (susceptible-infectious-recovered) models for homogeneous and non-homogeneous populations are considered. The first part of the talk presents a new way to calculate the attack rate distribution of a continuous-time Markov process model under the assumption of homogeneous mixing. In the second part, a discrete-time SIR model with a general non-homogeneous mixing structure is defined. We use it to measure the effectiveness of different simple disease intervention methods such as vaccination and quarantine, by quantifying their capability to decrease the basic reproduction number R0 below its critical value of 1, as this makes outbreaks impossible.
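For concreteness, the homogeneous-mixing continuous-time Markov SIR model of the first part can be simulated with the standard Gillespie construction, and the attack rate read off as the fraction of the initially susceptible population that is ever infected; for this model R0 = beta/gamma. The sketch below (parameter values are made up) is a plain Monte Carlo illustration, not the exact computation method presented in the talk.

    import numpy as np

    def sir_attack_rate(n, i0, beta, gamma, rng):
        """One trajectory of the Markov SIR model with homogeneous mixing.
        Infection rate beta*S*I/n, recovery rate gamma*I. Only the embedded
        jump chain is simulated, since event times do not affect the final size."""
        s, i = n - i0, i0
        while i > 0:
            rate_inf = beta * s * i / n
            rate_rec = gamma * i
            if rng.random() < rate_inf / (rate_inf + rate_rec):
                s -= 1
                i += 1      # infection event
            else:
                i -= 1      # recovery event
        return (n - i0 - s) / (n - i0)   # fraction of susceptibles ever infected

    rng = np.random.default_rng(3)
    beta, gamma = 1.5, 1.0               # R0 = beta / gamma = 1.5 (illustrative values)
    attack_rates = [sir_attack_rate(1000, 5, beta, gamma, rng) for _ in range(200)]
    print("mean attack rate:", np.mean(attack_rates))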
Rémi Piatek: A multinomial probit model with latent factors, with an application to the study of inequality in educational attainment
We develop a parametrization of the multinomial probit model that yields greater insight into the underlying decision-making process, by decomposing the error terms of the utilities into a small set of latent factors. The latent factors are identified without a measurement system, and they can be meaningfully linked to an economic model. We provide sufficient conditions that make this structure identifiable and interpretable. For inference, we design a Markov chain Monte Carlo sampler based on marginal data augmentation. A simulation exercise shows the good numerical performance of our sampler and reveals the practical importance of alternative identification restrictions. Our approach can generally be applied to any setting where researchers can specify an a priori structure on a few drivers of unobserved heterogeneity.
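In generic notation (the paper's identification scheme is richer than this sketch), the parametrization replaces an unrestricted error covariance of the latent utilities by a low-dimensional factor decomposition,

    U_{ij} = x_{ij}'\beta_j + \theta_i'\lambda_j + \varepsilon_{ij}, \qquad y_i = \arg\max_j U_{ij},

with idiosyncratic errors \varepsilon_{ij} and individual-specific factors \theta_i whose loadings \lambda_j generate the correlation across alternatives.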
We apply this framework to bring a fresh perspective to inequality in educational attainment, suggesting occupational sorting as an unexplored channel that may depress education outcomes in children from less advantaged families, in addition to established considerations such as school readiness and financing constraints. To study this channel, education and occupation choices are analyzed jointly, whereas existing research usually treats them as separate. We therefore develop a model of educational choice in which the formation of wage expectations accounts for anticipated occupational choices. For our empirical application, we use a 5% representative sample of US high schoolers to determine the impact of multiple cognitive and non-cognitive skills on occupational choice, relative to parental background.