Abstracts Research Seminar Winter Term 2020/21
Alejandra Avalos-Pacheco:
Factor Regression for Dimensionality Reduction and Data Integration Techniques with Applications to Cancer Data
Two key challenges in modern statistical applications are the large amount of information recorded per individual and the fact that such data are often not collected all at once but in batches. These batch effects can be complex, causing distortions in both mean and variance. We propose a novel sparse latent factor regression model to integrate such heterogeneous data. The model provides a tool for data exploration via dimensionality reduction while correcting for a range of batch effects. We study the use of several sparse priors (local and non-local) to learn the dimension of the latent factors. Our model is fitted in a deterministic fashion by means of an EM algorithm for which we derive closed-form updates, contributing a novel scalable algorithm for non-local priors of interest beyond the immediate scope of this paper. We present several examples, with a focus on bioinformatics applications. Our results show an increase in the accuracy of the dimensionality reduction, with non-local priors substantially improving the reconstruction of factor cardinality, and demonstrate the need to account for batch effects to obtain reliable results. Our model provides a novel approach to latent factor regression that balances sparsity with sensitivity and is highly computationally efficient.
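As a much-simplified illustration of the factor-regression-with-batch-correction idea (not the authors' sparse EM with non-local priors), the following sketch simulates data from two batches with a batch-specific mean shift, centres each batch, and extracts latent factors with an off-the-shelf EM-fitted factor model; all sizes and the use of the scikit-learn estimator are assumptions for illustration only.

```python
# Minimal sketch: plain factor model after a crude batch-mean correction.
# The sparse/non-local priors and the joint EM of the paper are not reproduced.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p, k = 200, 50, 3                      # samples, features, latent factors

Lambda = rng.normal(size=(p, k))          # shared loadings
scores = rng.normal(size=(n, k))          # latent factor scores
batch = rng.integers(0, 2, size=n)        # two batches
batch_shift = np.where(batch[:, None] == 0, 0.0, 2.0)  # batch effect on the mean
X = scores @ Lambda.T + batch_shift + rng.normal(scale=0.5, size=(n, p))

# Naive batch correction: centre each batch separately before factor extraction.
Xc = X.copy()
for b in np.unique(batch):
    Xc[batch == b] -= Xc[batch == b].mean(axis=0)

fa = FactorAnalysis(n_components=k)       # EM-fitted Gaussian factor model
Z = fa.fit_transform(Xc)                  # estimated factor scores for exploration
print(Z.shape)                            # (200, 3)
```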
Klaus Nordhausen:
Blind Source Separation for Spatial Data
Blind source separation has a long tradition for iid data and multivariate time series. However, blind source separation methods for multivariate spatial observations have so far received little attention in the literature. We therefore suggest a blind source separation model for spatial data and show how the latent components can be estimated using two or more scatter matrices. The statistical properties and merits of these estimators are derived and verified in simulation studies. A real data example illustrates the method.
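A rough sketch of the generic two-scatter idea behind such estimators, assuming a one-dimensional transect of locations: whiten with the sample covariance, then eigendecompose a second, spatially local scatter (here a symmetrised lag-one autocovariance). The actual estimators use proper spatial kernels and possibly joint diagonalization of several local scatters; this is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3

# Latent spatially smooth sources, mixed by an unknown matrix A.
S = np.cumsum(rng.normal(size=(n, p)), axis=0)
S = (S - S.mean(0)) / S.std(0)
A = rng.normal(size=(p, p))
X = S @ A.T

Xc = X - X.mean(0)
cov = Xc.T @ Xc / n                            # first scatter: covariance
evals, evecs = np.linalg.eigh(cov)
W_white = evecs @ np.diag(evals ** -0.5) @ evecs.T
Y = Xc @ W_white                               # whitened data

# Second scatter: spatial autocovariance at lag 1 (symmetrised).
M = Y[:-1].T @ Y[1:] / (n - 1)
M = (M + M.T) / 2
_, U = np.linalg.eigh(M)
S_hat = Y @ U                                  # estimated latent components
print(S_hat.shape)
```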
Hannes Leeb:
Conditional Predictive Inference for High-Dimensional Stable Algorithms
We investigate generically applicable and intuitively appealing prediction intervals based on leave-one-out residuals. The conditional coverage probability of the proposed intervals, given the observations in the training sample, is close to the nominal level, provided that the underlying algorithm used for computing point predictions is sufficiently stable under the omission of single feature/response pairs. Our results are based on a finite sample analysis of the empirical distribution function of the leave-one-out residuals and hold in non-parametric settings with only minimal assumptions on the error distribution. To illustrate our results, we also apply them to high-dimensional linear predictors, where we obtain uniform asymptotic conditional validity as both sample size and dimension tend to infinity at the same rate. These results show that despite the serious problems of resampling procedures for inference on the unknown parameters (cf. Bickel and Freedman, 1983; El Karoui and Purdom, 2015; Mammen, 1996), leave-one-out methods can be successfully applied to obtain reliable predictive inference even in high dimensions.
Joint work with Lukas Steinberger.
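A minimal sketch of the leave-one-out interval recipe described in the abstract, with an ordinary least-squares point predictor standing in for any sufficiently stable algorithm; the paper's exact construction and its high-dimensional guarantees are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 5
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + rng.standard_t(df=5, size=n)    # heavy-tailed errors allowed

def fit_predict(X_tr, y_tr, X_te):
    """Least-squares point predictor; swap in any stable algorithm."""
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ coef

# Leave-one-out residuals.
loo_res = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    loo_res[i] = y[i] - fit_predict(X[mask], y[mask], X[i])

# Prediction interval for a new point: point prediction plus empirical
# quantiles of the leave-one-out residuals.
alpha = 0.1
lo, hi = np.quantile(loo_res, [alpha / 2, 1 - alpha / 2])
x_new = rng.normal(size=d)
y_hat = fit_predict(X, y, x_new)
print("90% prediction interval:", (y_hat + lo, y_hat + hi))
```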
Igor Cialenco:
Adaptive Robust Stochastic Control with Applications to Finance
We propose a new methodology, called adaptive robust control, for solving a discrete-time control problem subject to model (Knightian) uncertainty. We apply this framework to an optimal portfolio allocation problem where the uncertainty comes from the fact that the true law of the underlying model is only known to belong to a certain family of probability laws. We develop a learning algorithm that reduces the model uncertainty through progressive learning about the unknown system. In particular, we study a class of time-inconsistent terminal Markovian control problems and provide a machine learning algorithm for numerically solving the mean-variance portfolio selection problem under model uncertainty.
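A toy, heavily simplified illustration of the "learn, then act robustly" loop (not the paper's Bellman-type adaptive robust control): the unknown mean return is only known to lie in a shrinking confidence box around its running estimate, and at each step mean-variance weights are computed against the worst-case mean in that box. All numerical values and the box-shaped uncertainty set are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 3
mu_true = np.array([0.05, 0.03, 0.01])
Sigma = 0.02 * (0.5 * np.eye(p) + 0.5)          # covariance, assumed known here
gamma = 5.0                                      # risk aversion

returns = []
for t in range(1, 201):
    returns.append(rng.multivariate_normal(mu_true, Sigma))
    R = np.array(returns)

    mu_hat = R.mean(axis=0)                      # progressive learning of the mean
    half_width = 1.96 * R.std(axis=0, ddof=1) / np.sqrt(t) if t > 1 else np.ones(p)
    mu_worst = mu_hat - half_width               # worst case for a long-only investor

    # Markowitz weights against the worst-case mean, clipped and renormalised.
    w = np.linalg.solve(gamma * Sigma, mu_worst)
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(p, 1 / p)

print("final robust weights:", np.round(w, 3))
```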
Konstantin Posch:
Correlated Parameters to Accurately Measure Uncertainty in Deep Neural Networks
In this talk, a novel approach for training deep neural networks using Bayesian techniques is presented. The Bayesian methodology allows for an easy evaluation of model uncertainty and, additionally, is robust to overfitting; these are commonly the two main problems that classical, i.e. non-Bayesian, architectures struggle with. The proposed approach applies variational inference in order to approximate the intractable posterior distribution. In particular, the variational distribution is defined as the product of multiple multivariate normal distributions with tridiagonal covariance matrices. Every single normal distribution belongs either to the weights or to the biases corresponding to one network layer. The layerwise a posteriori variances are defined based on the corresponding expectation values and, furthermore, the correlations are assumed to be identical. Therefore, only a few additional parameters need to be optimized compared with non-Bayesian settings. The performance of the new approach is evaluated and compared with other recently developed Bayesian methods. The performance evaluations are based on the popular benchmark data sets MNIST and CIFAR-10. Among the considered approaches, the proposed one shows the best predictive accuracy. Moreover, extensive evaluations of the provided prediction uncertainty information indicate that the new approach often yields more useful uncertainty estimates than the comparison methods.
Joint work with Jürgen Pilz.
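A small numerical sketch of the layer-wise variational distribution described above: a multivariate normal over one layer's weights with a tridiagonal covariance, standard deviations tied to the means (here simply a fixed fraction of the absolute mean, an illustrative assumption) and a single shared correlation, sampled via the usual reparameterisation w = mu + L eps.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 20                                     # number of weights in one layer
mu = rng.normal(size=m)                    # variational means
sigma = 0.1 * np.abs(mu) + 1e-3            # std. dev. tied to the mean (illustrative)
rho = 0.3                                  # shared correlation of neighbouring weights

# Tridiagonal correlation matrix (positive definite for |rho| < 0.5),
# scaled entrywise to a covariance matrix.
C = np.eye(m) + rho * (np.eye(m, k=1) + np.eye(m, k=-1))
Sigma = np.outer(sigma, sigma) * C

L = np.linalg.cholesky(Sigma)
eps = rng.normal(size=m)
w_sample = mu + L @ eps                    # one Monte Carlo draw of the layer weights
print(w_sample[:5])
```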
Irene Monasterolo:
On the Dependence of Investor’s Probability of Default on Climate Policy Scenarios
Financial supervisors have recognized that climate change represents a new source of risk for financial stability and that investors should assess their exposure to climate risks. However, the nature of climate risk challenges traditional approaches to risk management. In this paper, we integrate forward-looking climate policy scenarios in macroeconomic and corporate risk analysis. We develop a model that allows us to compute the valuation adjustment of corporate bonds conditional on climate transition risk, based on available knowledge of climate policy scenarios provided by climate economic models. We assess how the Probability of Default (PD) of an investor’s portfolio of corporate bonds can be affected by climate policy shock scenarios. We find that the sensitivity of the Climate Value at Risk and Expected Shortfall to forward-looking climate policy shocks on the PD of the bonds is low. In contrast, the PD of the investor’s portfolio is highly sensitive to small changes in the PD of the bond conditional on adverse climate policy shocks and their probability of occurrence. Thus, climate financial risk management strategies that do not consider adverse climate scenarios can massively underestimate portfolio risk. This, in turn, could lead companies and managers to increase their climate risk exposure, neglecting opportunities from climate mitigation and slowing down the low-carbon transition. Our results provide critical insights to investors and financial supervisors for mainstreaming climate regulatory considerations in their financial risk assessment and management.
Keywords: Climate policy scenarios, investor’s probability of default, corporate bond portfolio, climate Value at Risk, Expected Shortfall, climate financial risk assessment.
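A toy Monte Carlo illustration of the kind of portfolio risk measures discussed in the abstract: bond PDs are inflated under an adverse climate policy scenario occurring with some probability, defaults are drawn independently for simplicity, and VaR and Expected Shortfall are read off the simulated loss distribution. This is not the paper's valuation-adjustment model, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bonds, n_sims = 50, 100_000
exposure = np.full(n_bonds, 1.0 / n_bonds)   # equally weighted portfolio
pd_base = np.full(n_bonds, 0.02)             # baseline bond PDs
pd_adverse = pd_base * 3.0                   # PDs conditional on an adverse scenario
p_scenario = 0.1                             # probability of the adverse scenario
lgd = 0.6                                    # loss given default

adverse = rng.random(n_sims) < p_scenario
pd_used = np.where(adverse[:, None], pd_adverse, pd_base)
defaults = rng.random((n_sims, n_bonds)) < pd_used
losses = (defaults * exposure * lgd).sum(axis=1)

alpha = 0.99
var = np.quantile(losses, alpha)             # Value at Risk at level alpha
es = losses[losses >= var].mean()            # Expected Shortfall beyond the VaR
print(f"VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")
```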
Johanna G. Nešlehová:
On Attainability of Kendall's Tau Matrices and Concordance Signatures
The concordance signature of a random vector or its distribution is defined to be the set of concordance probabilities for margins of all orders. In this talk, I will show that the concordance signature of a copula is always equal to the concordance signature of some unique mixture of so-called extremal copulas. This result has a number of interesting consequences, which I will explore, such as a characterization of the set of Kendall rank correlation matrices as the cut polytope, and a method for determining whether a set of concordance probabilities is attainable. I also use it to show that the widely-used elliptical distributions yield a strict subset of the attainable concordance signatures as well as a strict subset of the attainable Kendall rank correlation matrices, and prove that the Student t copula converges to a mixture of extremal copulas sharing its concordance signature with all elliptical distributions that have the same correlation matrix. Finally, I will discuss a method of estimating an attainable concordance signature from data, and highlight applications to Monte Carlo simulations of dependent random variables as well as expert elicitation of consistent systems of Kendall's tau dependence measures.
The talk is based on a joint paper with Alexander J. McNeil and Andrew D. Smith.
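A small empirical illustration of the order-two part of a concordance signature: for continuous data (no ties) the pairwise concordance probability equals (1 + tau)/2, with tau Kendall's rank correlation. The higher-order concordance probabilities and the attainability results of the talk are not reproduced here.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)
n, d = 1000, 3
# A Gaussian sample with some dependence, standing in for arbitrary continuous data.
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Empirical pairwise Kendall's tau and the implied concordance probabilities.
for i in range(d):
    for j in range(i + 1, d):
        tau, _ = kendalltau(X[:, i], X[:, j])
        print(f"pair ({i}, {j}): tau = {tau:.3f}, "
              f"concordance prob = {(1 + tau) / 2:.3f}")
```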
Tilmann Gneiting:
Isotonic Distributional Regression (IDR): Leveraging Monotonicity, Uniquely So!
There is an emerging consensus in the transdisciplinary literature that the ultimate goal of regression analysis is to model the conditional distribution of an outcome, given a set of explanatory variables or covariates. This new approach is called "distributional regression" and marks a clear break from the classical view of regression, which has focused on estimating a conditional mean or quantile only. Isotonic Distributional Regression (IDR) learns conditional distributions that are simultaneously optimal relative to comprehensive classes of relevant loss functions, subject to monotonicity constraints in terms of a partial order on the covariate space. The IDR solution is exactly computable and requires neither approximations nor implementation choices, except for the selection of the partial order. Despite being an entirely generic technique, IDR is strongly competitive with state-of-the-art methods in a case study on probabilistic precipitation forecasts from a leading numerical weather prediction model.
Joint work with Alexander Henzi and Johanna F. Ziegel.
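A minimal sketch of IDR in the simplest case of a single real covariate (a total order): for each threshold y, the estimated conditional CDF P(Y <= y | x) is the antitonic least-squares regression of the indicators 1{Y_i <= y} on x_i, computed here with scikit-learn's pool-adjacent-violators implementation. The full method also handles partial orders on multivariate covariates, which this sketch does not cover.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(7)
n = 300
x = rng.uniform(0, 10, size=n)
y = rng.gamma(shape=2.0, scale=0.5 + 0.3 * x)       # Y stochastically increasing in x

thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
x_eval = np.array([2.0, 5.0, 8.0])

# For each threshold, fit an antitonic regression of the exceedance indicators on x.
cdf = np.empty((len(x_eval), len(thresholds)))
for j, t in enumerate(thresholds):
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=False,
                             out_of_bounds="clip")
    iso.fit(x, (y <= t).astype(float))
    cdf[:, j] = iso.predict(x_eval)

for xe, row in zip(x_eval, cdf):
    print(f"x = {xe}: estimated conditional CDF at thresholds =", np.round(row, 2))
```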
Christa Cuchiero:
From Signature SDEs to Affine and Polynomial Processes and Back
Modern universal classes of dynamic processes, based on neural networks or signature methods, have recently entered the field of stochastic modeling, in particular in Mathematical Finance. This has opened the door to more data-driven and thus more robust model selection mechanisms, while first principles like no arbitrage still apply. We focus here on signature SDEs, i.e. SDEs whose characteristics are linear functions of the process' signature, and present methods for learning these characteristics from data.
From a more theoretical point of view, we show how these new models can be embedded in the framework of affine and polynomial processes, which, due to their tractability, were the dominant process classes prior to the new era of highly overparametrized dynamic models. Indeed, we prove that generic classes of diffusion models can be viewed as infinite dimensional affine processes, which in this setup coincide with polynomial processes. A key ingredient to establish this result is again the signature process. This then allows us to obtain (formal) power series expansions for expected values of analytic functions of the process' marginals, which also apply to signature SDEs.
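As a small numerical illustration of the signature process that underlies both results, the following sketch computes the truncated (level-two) signature of a piecewise linear path, segment by segment, via Chen's identity; it illustrates only the signature itself, not the signature-SDE models or the affine embedding.

```python
import numpy as np

def truncated_signature_level2(path):
    """Level-1 and level-2 signature terms of a piecewise linear path (n_points, d).

    For a linear segment with increment delta, the level-1 term is delta and the
    level-2 term is outer(delta, delta) / 2; segments are concatenated via
    Chen's identity S2(a*b) = S2(a) + S2(b) + S1(a) (x) S1(b).
    """
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        seg2 = np.outer(delta, delta) / 2.0
        s2 = s2 + seg2 + np.outer(s1, delta)
        s1 = s1 + delta
    return s1, s2

rng = np.random.default_rng(8)
path = np.cumsum(rng.normal(size=(100, 2)), axis=0)   # a 2-D random-walk path
s1, s2 = truncated_signature_level2(path)
print("level 1:", np.round(s1, 3))
print("level 2:\n", np.round(s2, 3))
```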
Immanuel M. Bomze:
Towards Responsible Game Theory - From Kant to a Copositive View on a Parametric QP
In game theory, players are usually assumed to choose their strategies exclusively in terms of their own benefit. Recent work (Alger/Weibull, Econometrica 2013) considers morally motivated agents whose objective partly includes the social welfare that would result if everybody did likewise, in the spirit of Immanuel Kant's categorical imperative (1785). Some finite and symmetric two-player games have no (pure or mixed) Nash equilibrium when played by partly morally motivated players. The reason is that the "right thing to do" may be not to randomize. We provide necessary and sufficient conditions for the existence of equilibrium. To do so, we employ an optimization perspective, because the model results in a quadratic optimization problem (QP) parametrized by strategies. For partnership games, this new concept yields an interesting intermediate solution between global and local optimality for the Standard (= simplex-constrained) QP.
Coauthors: Werner Schachinger, University of Vienna; Jörgen Weibull, Stockholm School of Economics.
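A small sketch of the Standard Quadratic Program mentioned above, max x'Qx over the simplex, approached with discrete-time replicator dynamics, a classical local method from evolutionary game theory; this illustrates only the optimization problem, not the talk's equilibrium analysis for partly morally motivated players. The payoff matrix here is random and shifted to nonnegative entries, which adds a constant to the objective on the simplex and leaves the maximisers unchanged.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                          # symmetric payoff matrix
Q = Q - Q.min()                            # shift to nonnegative entries

x = np.full(n, 1.0 / n)                    # start at the barycentre of the simplex
for _ in range(2000):
    fitness = Q @ x
    x = x * fitness / (x @ fitness)        # replicator update; stays in the simplex

print("local solution x:", np.round(x, 3))
print("objective x'Qx :", round(float(x @ Q @ x), 4))
```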