Abstracts Research Seminar Winter Term 2016/17
Natesh Pillai: Bayesian Factor Models in High Dimensions
Sparse Bayesian factor models are routinely implemented for parsimonious dependence modeling and dimensionality reduction in high-dimensional applications. We provide theoretical understanding of such Bayesian procedures in terms of posterior convergence rates for inferring high-dimensional covariance matrices, where the dimension can be larger than the sample size. We will also discuss other high-dimensional shrinkage priors and place them in the context of factor models.
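To fix ideas (in standard notation, not taken verbatim from the talk): a k-factor model for p-dimensional observations posits y_i = \Lambda \eta_i + \varepsilon_i with latent factors \eta_i \sim N_k(0, I_k) and idiosyncratic errors \varepsilon_i \sim N_p(0, \Sigma_0), so that \mathrm{Cov}(y_i) = \Lambda \Lambda^\top + \Sigma_0 for a p x k loadings matrix \Lambda with k much smaller than p and a diagonal \Sigma_0. Sparsity priors on \Lambda induce shrinkage on the covariance matrix, and the question addressed in the talk is how fast its posterior contracts around the truth when p may exceed n.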
Peter Bank: Hedging with Temporary Price Impact
We consider the problem of hedging a European contingent claim with temporary price impact as proposed by Almgren and Chriss. The hedging problem can be regarded as a cost-optimal tracking problem for the frictionless hedging strategy. We solve this problem explicitly for general predictable target hedging strategies. It turns out that, rather than towards the current target position, the optimal policy trades towards a weighted average of expected future target positions. Our findings thus complement a number of previous studies in the literature on optimal strategies in illiquid markets, where the frictionless hedging strategy is confined to diffusions.
This is joint work with Mete Soner (ETH Zürich) and Moritz Voß (TU Berlin).
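Schematically (with notation introduced here only for illustration), the tracking problem is of the form: minimize, over absolutely continuous positions X with turnover rate u_t = \dot X_t, the expected cost E[ \int_0^T ( \kappa_t u_t^2 + \lambda_t (X_t - \xi_t)^2 ) dt ], where \xi is the predictable frictionless target strategy, the first term captures the temporary price impact cost of trading and the second the risk of deviating from the target. The explicit solution described in the abstract then prescribes a turnover rate that steers X_t towards a suitably weighted average of the conditionally expected future targets E_t[\xi_u], u \ge t, rather than towards \xi_t itself.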
Achim Zeileis: Examining Exams Using Rasch Models and Assessment of Measurement Invariance
Many statisticians regularly teach large lecture courses on statistics, probability, or mathematics for students from other fields such as business and economics, social sciences and psychology, etc. The corresponding exams often use a multiple-choice or single-choice format and are typically evaluated and graded automatically. Although further analyses of these exams would be of interest, they are frequently not carried out. For example, a measurement scale for the difficulty of the questions (or items) and the ability of the students (or subjects) could be established using psychometric item response theory (IRT) models. Moreover, based on such a model it could be assessed whether the exam is really "fair" for all participants or whether certain items are easier (or more difficult) for certain subgroups of subjects.
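For reference, the Rasch model referred to below specifies the probability that subject i solves item j as P(Y_{ij} = 1) = \exp(\theta_i - \beta_j) / (1 + \exp(\theta_i - \beta_j)), with subject ability \theta_i and item difficulty \beta_j; measurement invariance then means that the item difficulties \beta_j are the same across all subgroups of subjects.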
Here, several recent methods for assessing "measurement invariance" and for detecting "differential item functioning" in the standard Rasch IRT model are discussed and applied to results from a first-year mathematics exam with single-choice items. Several covariates such as gender, prior experience, and prior mathematics knowledge are available to form potential subgroups with differential item functioning. Differential item functioning across these subgroups can be tested using the classical likelihood ratio, score, or Wald tests (with suitable item anchoring). Alternatively, generalizations of the score test can also assess changes along continuous variables, and trees can form subgroups recursively in a data-driven way. Finally, finite mixtures of Rasch models provide a general strategy for testing violations of measurement invariance that can also be applied without further covariates.
All analyses can be carried out using the psycho* family of R packages (psychotools, psychotree, psychomix), which provide a unified approach to estimating, visualizing, testing, mixing, and partitioning a range of psychometric models. The empirical exam results come from the "Mathematics 101" course for business and economics students at Universität Innsbruck and have been generated with the R package "exams". This allows a large variety of similar exercises to be drawn automatically and rendered into formats for written exams, learning management systems (such as Moodle, Blackboard, etc.), or live quizzes with voting via smartphones or tablets.
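A minimal R sketch of such an analysis, assuming a data frame exam that contains a binary item response matrix resp together with the covariates mentioned above (all variable names are illustrative, not the actual course data), could look as follows:

    library("psychotools")
    library("psychotree")
    library("psychomix")

    ## Rasch model for the full sample (conditional maximum likelihood)
    rm_all <- raschmodel(exam$resp)
    itempar(rm_all)                      # estimated item difficulties

    ## model-based recursive partitioning: DIF subgroups along covariates
    rt <- raschtree(resp ~ gender + experience + knowledge, data = exam)
    plot(rt)                             # item profiles per terminal node

    ## finite mixtures of Rasch models, no covariates needed
    rmix <- raschmix(resp ~ 1, data = exam, k = 1:3)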
Mathias Beiglböck: The Geometry of Model Uncertainty
The over-confidence in mathematical models and the failure to account for model uncertainty have frequently been blamed for their infamous role in financial crises. Serious consideration of model ambiguity is vital not only in the financial industry and for proficient regulation but also for university level teaching. Remarkably, it remains an open challenge to quantify the effects of model uncertainty in a coherent way. From a mathematical perspective, this is a delicate issue which touches on deep classical problems of stochastic analysis. In recent work, we establish a new link to the field of optimal transport. This yields a powerful geometric approach to the problem of model uncertainty and, more generally, the theory of stochastic processes.
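One way to make the link to optimal transport concrete (a schematic formulation, not the full generality of the talk): given the marginal laws \mu and \nu of an asset at two dates, as recovered for instance from liquid call option prices, model-independent bounds for the price of a payoff c(S_1, S_2) take the form sup (resp. inf) of E_Q[ c(S_1, S_2) ] over all probabilities Q under which S_1 \sim \mu, S_2 \sim \nu and E_Q[ S_2 | S_1 ] = S_1. This is an optimal transport problem with an additional martingale constraint, and its geometry governs which models are consistent with observed prices and how wide the resulting intervals of prices can be.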
Ulrich Horst: Optimal Trade Execution with Stochastic Resilience in a Non-Markovian Framework
We study an optimal execution problem in illiquid markets with both instantaneous and permanent price impact and stochastic resilience, allowing only for absolutely continuous trading strategies. In such a model the value function can be described by a three-dimensional system of backward stochastic differential equations (BSDEs) with a singular terminal condition. We prove existence and uniqueness of a solution to the BSDE system, extending the asymptotic expansion approach introduced in Graewe et al. (2015) to higher dimensions. We characterize the optimal execution strategy in terms of the unique solution to the BSDE system.
The talk is based on joint work with Paulwin Graewe.
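A heuristic for the singular terminal condition (simplified notation, not the talk's exact formulation): if the position X must be fully liquidated by time T, then in the simplest case the value function is quadratic in the remaining position, v(t, x) = Y_t x^2, and compatibility with the constraint X_T = 0 forces Y_t to explode as t approaches T, i.e. the backward equation for Y carries the terminal condition +\infty. Including the price deviation driven by the stochastic resilience as a further state variable turns the single coefficient Y into the three-dimensional system of coefficient processes mentioned above, one component of which inherits this singular terminal condition.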
Petros Dellaportas: Identifying and predicting jumps in financial time series
We deal with the problem of identifying jumps in multiple financial time series consisting of daily stock returns. A stochastic volatility model combined with a doubly stochastic Poisson process is used to model the evolution of the returns across time. The propagation of jumps over time and across stocks is modeled using dynamic factor models for the intensity of the Poisson process. We develop efficient MCMC algorithms to estimate the parameters and the latent states of the proposed model, and we compare their efficiency with that of existing MCMC algorithms. Our methodology is tested on simulated data and applied to the daily returns of the 600 stocks of the Euro STOXX index over the period 2008–2014.
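A schematic version of such a model (notation chosen here for illustration; the exact specification in the talk may differ): the return of stock i on day t is r_{it} = \exp(h_{it}/2) \varepsilon_{it} + \sum_{k=1}^{N_{it}} Z_{itk}, where h_{it} follows an autoregressive stochastic volatility process, \varepsilon_{it} \sim N(0,1), the jump counts are doubly stochastic Poisson, N_{it} \sim \mathrm{Poisson}(\lambda_{it}), and the log-intensities carry the dynamic factor structure \log \lambda_{it} = \alpha_i + b_i^\top f_t with latent factors f_t that propagate jump activity over time and across stocks; Z_{itk} denote the jump sizes.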
Claudia Ceci: On the Hedging Strategies for Defaultable Claims Under Incomplete Information
In this paper we investigate the problem of hedging a defaultable claim with recovery at default time via the local risk-minimization approach when investors have restricted information on the market. We assume that the dynamics of the stock price process depend on an exogenous unobservable stochastic factor and that, at any time, investors may observe the risky asset price and know whether default has occurred or not. Following the progressive enlargement approach, we first enlarge the natural filtration of the underlying Brownian motions driving the market dynamics to make the random default time a stopping time; second, we introduce an enlargement of the natural filtration of the risky asset price, describing the information available to investors. We characterize the optimal strategy in terms of the integrand in the Galtchouk-Kunita-Watanabe decomposition of the defaultable claim under the minimal martingale measure and the available information flow. Finally, we provide an explicit formula via the predictable projection of the corresponding full-information hedging strategy onto the natural filtration of the risky asset price under the minimal martingale measure; in a Markovian setting this leads to filtering problems.
(Co-authors: Katia Colaneri and Alessandra Cretarola)
Keywords: Defaultable markets, intensity-based model, hazard rate, partial information, local risk-minimization.
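For orientation (written schematically, in the paper's spirit rather than its exact notation): under the minimal martingale measure and with respect to the investors' information flow, the defaultable claim H admits the Galtchouk-Kunita-Watanabe decomposition H = H_0 + \int_0^T \phi_t dS_t + L_T, where S is the risky asset price and L is a martingale orthogonal to S. The locally risk-minimizing strategy invests \phi_t in the risky asset, and it is this integrand that the paper expresses via the predictable projection of its full-information counterpart.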
Matt Taddy: Deep Counterfactual Prediction using Instrumental Variables
We are in the middle of a remarkable rise in the use and capability of artificial intelligence. Much of this growth has been fueled by the success of deep machine learning architectures: models that map from observables to latent representations via multiple layers of, in most cases, a neural network. These deep ML algorithms are effective tools for reduced-form prediction, and they can be combined in AI systems to solve complex automated reasoning problems. Our research is focused on directing deep ML towards Economic Artificial Intelligence: structural and causal inference for economic systems. This paper provides a recipe for combining ML algorithms to solve for causal effects in the presence of instrumental variables – sources of treatment randomization that are conditionally independent of the response. We show that a flexible IV specification resolves into two prediction tasks that can be solved with deep neural nets: a first-stage network for treatment prediction and a second-stage network whose loss function involves integration over the conditional treatment distribution. Our framework imposes restrictions on the stochastic gradient descent routine used for training, but it is general enough that we can take advantage of off-the-shelf ML capabilities and avoid extensive algorithm customization. Crucially, we outline how to obtain out-of-sample causal validation in order to avoid overfitting, and we describe schemes for both Bayesian and frequentist inference. The methods are illustrated in simulation and in an application studying the comparative effectiveness of digital advertisement channels.
Joint work with Jason Hartford, Greg Lewis and Kevin Leyton-Brown.
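In schematic form (notation ours, following the description above): the first-stage network estimates the conditional treatment distribution \hat F(p | x, z) given covariates x and instruments z; the second-stage network h is then trained to minimize \sum_i ( y_i - \int h(p, x_i) d\hat F(p | x_i, z_i) )^2, with the integral approximated by Monte Carlo draws from \hat F inside each stochastic-gradient step. The fitted h(p, x) is read as the counterfactual response to setting the treatment to p while holding the covariates fixed.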
Christian Kleiber: Majorization and the Lorenz order in statistics, applied probability, economics and beyond
Majorization is a deceptively simple concept (a preorder) for comparing two vectors of equal length with identical sums of (nonnegative) entries. In a sense, it measures the degree to which the vector elements differ. Majorization is intimately connected with Schur convexity. Over the last 100 years, it has found a multitude of uses in applied mathematics, statistics, applied probability, economics, and beyond. Several generalizations exist, the Lorenz order being a prominent example. The talk provides a survey of basic properties of majorization and the Lorenz order along with a number of applications, among them statistical distribution theory, economics, and the social sciences. If time permits, we may branch out to related stochastic orders.
Keywords: Inequalities, Lorenz order, majorization, Schur convexity, stochastic orders.
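A two-line check in R (an illustrative snippet, not part of the talk) makes the definition concrete: x majorizes y when both vectors have the same total and every partial sum of the decreasingly ordered entries of x dominates the corresponding partial sum for y.

    majorizes <- function(x, y) {
      stopifnot(length(x) == length(y), isTRUE(all.equal(sum(x), sum(y))))
      all(cumsum(sort(x, decreasing = TRUE)) >= cumsum(sort(y, decreasing = TRUE)))
    }
    majorizes(c(5, 2, 1), c(3, 3, 2))  # TRUE: (5, 2, 1) is the more "spread out" vector
    majorizes(c(3, 3, 2), c(5, 2, 1))  # FALSE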
Arnulf Jentzen: Stochastic algorithms for the approximative pricing of financial derivatives
In this talk we present a few recent results on approximation algorithms for forward stochastic differential equations (SDEs) and forward-backward stochastic differential equations (FBSDEs) that appear in models for the approximative pricing of financial derivatives. In particular, we review strong convergence results for Cox-Ingersoll-Ross (CIR) processes, high-dimensional nonlinear FBSDEs, and high-dimensional nonlinear parabolic partial differential equations (PDEs). CIR processes appear in interest rate models and in the Heston equity derivative pricing model as instantaneous variance processes (squared volatility processes). High-dimensional nonlinear FBSDEs and high-dimensional nonlinear PDEs, respectively, are frequently employed as models for the value function linking the price of the underlying to the price of the financial derivative in pricing models incorporating nonlinear effects such as the default risk of the issuer and/or the holder of the financial derivative.
The talk is based on joint works with Weinan E (Beijing University & Princeton University), Mario Hefter (University of Kaiserslautern), Martin Hairer (University of Warwick), Martin Hutzenthaler (University of Duisburg-Essen), and Thomas Kruse (University of Duisburg-Essen).
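For reference, the CIR process mentioned above solves the SDE dX_t = \kappa (\theta - X_t) dt + \sigma \sqrt{X_t} dW_t with X_0 = x_0 \ge 0; its square-root diffusion coefficient is not globally Lipschitz, which is precisely what makes strong convergence rates of discretization schemes delicate (under the Feller condition 2\kappa\theta \ge \sigma^2 the process stays strictly positive).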
Guido Consonni: Objective Bayes Learning of Graphical Models
Graphical models are widely used to model conditional independencies among a set of variables in various scientific areas. One important objective is to learn the structure of a graph based on a sample of observations. In this talk I will consider two problems in structural learning. The first one relates to Gaussian graphical models having a regression structure (covariate-adjusted graphical models), where the aim is to learn the graph after having taken into account the effect of some potential confounders. The second problem concerns learning a Directed Acyclic Graph (DAG) model. It is well known that different DAGs can encode the same set of conditional independencies, so that one cannot distinguish between DAGs using observational data, but only between Markov equivalence classes of DAGs. The latter are uniquely identified by the corresponding essential graph, which accordingly becomes the object of our learning procedures.
Joint work with Luca La Rocca (Università di Modena e Reggio Emilia) and Stefano Peluso (Università Cattolica del Sacro Cuore, Milan)
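A standard three-node example illustrates the equivalence-class issue: the DAGs X -> Y -> Z, X <- Y -> Z and X <- Y <- Z all encode exactly the conditional independence of X and Z given Y and therefore cannot be distinguished from observational data; their common essential graph is the undirected chain X - Y - Z. The v-structure X -> Y <- Z, in contrast, encodes the marginal independence of X and Z and forms a Markov equivalence class of its own.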
Sara Biagini: The robust Merton problem of an ambiguity averse investor
(Co-author: Mustafa Ç. Pınar)
We derive a closed-form portfolio optimization rule for an investor who is diffident about mean return and volatility estimates and has CRRA utility. Confidence is represented here using ellipsoidal uncertainty sets for the drift, given a (compact-valued) volatility realization. This specification affords a simple and concise analysis, as the agent becomes observationally equivalent to one with constant, worst-case parameters. The result is based on a max–min Hamilton–Jacobi–Bellman–Isaacs PDE, which extends the classical Merton problem and reverts to it for an ambiguity-neutral investor.
Keywords: Robust optimization, Merton problem, Volatility uncertainty, Ellipsoidal uncertainty on mean returns, Hamilton–Jacobi–Bellman–Isaacs equation
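For comparison (in standard notation), the classical Merton rule to which the robust solution reverts for an ambiguity-neutral CRRA investor prescribes holding the constant fraction \pi^* = (\mu - r) / (\gamma \sigma^2) of wealth in the risky asset, with drift \mu, risk-free rate r, volatility \sigma and relative risk aversion \gamma; under ambiguity, drift and volatility are replaced by their worst-case values over the uncertainty sets, consistent with the observational equivalence noted in the abstract.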