Abstracts Research Seminar Summer Term 2018
Jin Ma: Optimal Dividend and Investment Problems under Sparre Andersen Model
This talk concerns an open problem in Actuarial Science: the optimal dividend and investment problems under the Sparre Andersen model, that is, when the claim frequency is a “renewal” process. The main feature of the problem is that the underlying reserve dynamics, even in its simplest form, is no longer Markovian. By using the backward Markovization technique we recast the problem in a Markovian framework with an added random “clock”, from which we validate the dynamic programming principle (DPP). We will then show that the corresponding (dynamic) value function is the unique constrained viscosity solution of the (non-local) HJB equation. We shall further discuss the possible optimal strategy, or ε-optimal strategy, by addressing the regularity of the value function.
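For orientation, here is a minimal sketch, in generic notation (not the talk's exact model, which also carries dividend and investment controls), of the reserve dynamics and of the backward Markovization idea:
\[
  R_t \;=\; x + c\,t \;-\; \sum_{i=1}^{N_t} U_i ,
\]
where $N=(N_t)_{t\ge 0}$ is a renewal claim-arrival process with arrival times $T_1<T_2<\dots$ and the $U_i$ are claim sizes. Because the inter-arrival times need not be exponential, $R$ alone is not Markovian. Backward Markovization augments the state with the elapsed time since the last claim,
\[
  W_t \;=\; t - T_{N_t},
\]
so that the pair $(R_t, W_t)$ is Markovian and dynamic programming can be carried out on this enlarged state.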
This talk is based on joint work with Lihua Bai and Xiaojing Xing.
Andrzej Ruszczynski: Risk-Averse Control of Partially Observable Markov Systems
We consider risk measurement in controlled partially observable Markov systems in discrete time. In such systems, part of the state vector is not observed, but affects the transition kernel and the costs. We introduce new concepts of risk filters and study their properties. We also introduce the concept of conditional stochastic time consistency. We derive the structure of risk filters enjoying this property and prove that they can be represented by a collection of law invariant risk measures on the space of functions of the observable part of the state. We also derive the corresponding dynamic programming equations. Then we illustrate the results on a clinical trial problem and a machine deterioration problem. In the final part of the talk, we shall discuss risk filtering and risk-averse control of partially observable Markov jump processes in continuous time.
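As a rough illustration of the kind of dynamic programming equation involved (a generic risk-averse recursion with illustrative notation, not the exact equations derived in the talk):
\[
  v_t(\xi_t) \;=\; \min_{u \in U}\Big\{ c_t(\xi_t, u) \;+\; \sigma_t\big( v_{t+1}(\xi_{t+1}) \,\big|\, \xi_t, u \big) \Big\},
\]
where $\xi_t$ is the information (belief) state generated by the observable part of the system, $\sigma_t$ is a one-step conditional risk mapping (represented, under conditional stochastic time consistency, by law invariant risk measures as described above), and $v_T$ is the terminal cost. Replacing $\sigma_t$ by the conditional expectation recovers the classical risk-neutral equations.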
Cosimo-Andrea Munari: Existence, uniqueness and stability of optimal portfolios of eligible assets
In a capital adequacy framework, risk measures are used to determine the minimal amount of capital that a financial institution has to raise and invest in a portfolio of pre-specified eligible assets in order to pass a given capital adequacy test. From a capital efficiency perspective, it is important to identify the set of portfolios of eligible assets that allow the institution to pass the test by raising the least amount of capital. We study the existence and uniqueness of such optimal portfolios as well as their sensitivity to changes in the underlying capital position. This naturally leads to investigating the continuity properties of the set-valued map associating to each capital position the corresponding set of optimal portfolios. We pay special attention to lower semicontinuity, which is the key continuity property from a financial perspective. This "stability" property is always satisfied if the test is based on a polyhedral risk measure, but it generally fails once we depart from polyhedrality, even when the reference risk measure is convex. However, lower semicontinuity can often be achieved if one is willing to focus on portfolios that are close to being optimal. Besides capital adequacy, our results have a variety of natural applications to pricing, hedging, and capital allocation problems.
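In symbols, the underlying optimization can be sketched as follows (generic notation, assuming an acceptance set formulation of the test): given a capital position $X$, eligible payoffs $S_1,\dots,S_n$ with prices $\pi_1,\dots,\pi_n$, and the acceptance set $\mathcal{A}$ of the test,
\[
  \rho(X) \;=\; \inf\Big\{ \textstyle\sum_{i=1}^{n} \pi_i \lambda_i \;:\; \lambda \in \mathbb{R}^n,\ X + \textstyle\sum_{i=1}^{n} \lambda_i S_i \in \mathcal{A} \Big\},
\]
and the optimal portfolios are the minimizers $\lambda^{\ast}(X)$. The talk studies existence and uniqueness of $\lambda^{\ast}(X)$ and the continuity, in particular lower semicontinuity, of the set-valued map $X \mapsto \lambda^{\ast}(X)$.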
This is joint work with Michel Baes and Pablo Koch-Medina.
Marica Manisera and Paola Zuccolotto: Basketball data science
The research seminar will deal with statistical analysis of basketball data, with special attention to the most recent advances showing how selected statistical methods and data mining algorithms can be applied in basketball to solve practical problems. After a brief description of the state of the art of basketball analytics, we will introduce different data sources that can be fruitfully used to perform basketball analytics. Then, in order to show basketball data science in action, we will discuss four case studies, focused on: (i) the proposal of new positions in basketball; (ii) the analysis of the scoring probability when shooting under high-pressure conditions; (iii) performance variability and teamwork assessment; (iv) sensor data analysis.
The authors are scientific coordinators of the international project BDsports (Big Data Analytics in Sports, bodai.unibs.it/bdsports), whose main aims include scientific research, education, dissemination and practical implementation of sports analytics.
Eric Eisenstat: Efficient Estimation of Structural VARMAs with Stochastic Volatility
This paper develops Markov chain Monte Carlo algorithms for structural vector autoregressive moving average (VARMA) models with fixed coefficients and time-varying error covariances, modeled as a multivariate stochastic volatility process. A particular benefit of allowing for time variation in the covariances in this setting is that it induces uniqueness in terms of fundamental and various non-fundamental VARMA representations. Hence, it resolves an important issue in applying multivariate time series models to structural macroeconomic problems. Although computation in this setting is more challenging, the conditionally Gaussian nature of the model renders efficient sampling algorithms feasible. The algorithm presented in this paper uses two innovative approaches to achieve sampling efficiency: (i) the time-varying covariances are sampled jointly using particle Gibbs with ancestor sampling, and (ii) the moving average coefficients are sampled jointly using an extension of the Whittle likelihood approximation. We provide Monte Carlo evidence that the algorithm performs well in practice. We further employ the algorithm to assess the extent to which commonly used SVAR models satisfy their underlying fundamentalness assumption and the effect that this assumption has on structural inference.
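One common parameterization consistent with this description (illustrative notation, not necessarily the paper's exact specification) is a VARMA with stochastic-volatility errors:
\[
  y_t \;=\; \sum_{i=1}^{p} A_i\, y_{t-i} \;+\; \varepsilon_t \;+\; \sum_{j=1}^{q} M_j\, \varepsilon_{t-j},
  \qquad
  \varepsilon_t \mid h_t \;\sim\; \mathcal{N}\!\big(0,\; B\,\mathrm{diag}(e^{h_{1t}},\dots,e^{h_{nt}})\,B^{\top}\big),
\]
with fixed coefficient matrices $A_i$, $M_j$, $B$ and log-volatilities $h_{kt}$ evolving over time (e.g., as random walks $h_{kt} = h_{k,t-1} + \eta_{kt}$). It is the time variation in the error covariance that delivers uniqueness among the fundamental and non-fundamental representations discussed above.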
Nicole Bäuerle: Optimal Control of Partially Observable Piecewise Deterministic Markov Processes
In this talk we consider a control problem for a Partially Observable Piecewise Deterministic Markov Process of the following type: after each jump of the process the controller receives a noisy signal about the current state, and the aim is to control the process continuously in time in such a way that the expected discounted cost of the system is minimized. We solve this optimization problem by reducing it to a discrete-time Markov Decision Process. This includes the derivation of a filter for the unobservable state. Imposing sufficient continuity and compactness assumptions we are able to prove the existence of optimal policies and show that the value function satisfies a fixed point equation. A generic application is given to illustrate the results.
The talk is based on a joint paper with Dirk Lange.
Ioannis Kosmidis: Location-adjusted Wald statistics
Inference on a scalar parameter of interest is commonly constructed using a Wald statistic, on the grounds of the validity of the standard normal approximation to its finite-sample distribution and computational convenience. A prominent example is given by the individual Wald tests for regression parameters that are reported by default in regression output in the majority of statistical computing environments. The normal approximation can, though, be inadequate, especially when the sample size is small or moderate relative to the number of parameters. In this talk, the Wald statistic is viewed as an estimate of a transformation of the model parameters and is appropriately adjusted so that its null expectation is asymptotically closer to zero. The bias adjustment depends on the expected information matrix, the first-order term in the bias expansion of the maximum likelihood estimator, and the derivatives of the transformation, all of which are either readily available or easily obtainable in standard software for a wealth of well-used models. The finite-sample performance of the location-adjusted Wald statistic is examined analytically in simple models and via simulation in a series of more realistic modelling frameworks, including generalized linear models, meta-regression and beta regression. The location-adjusted Wald statistic is found to deliver significant improvements in inferential performance over the standard Wald statistic, without sacrificing any of its computational simplicity.
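Schematically, and with generic notation (a sketch of the construction described above, not the paper's exact formulas): for a scalar parameter $\theta_j$ with null value $\theta_{j0}$, the Wald statistic
\[
  t \;=\; \frac{\hat\theta_j - \theta_{j0}}{\hat\kappa_j}
  \qquad\text{is viewed as an estimator of}\qquad
  T(\theta) \;=\; \frac{\theta_j - \theta_{j0}}{\kappa_j(\theta)},
\]
where $\kappa_j(\theta)$ is the standard error implied by the expected information. The location-adjusted statistic then has the form $t^{\ast} = t - \hat{B}$, where $\hat{B}$ estimates the leading term of the null expectation of $t$ and is built from the first-order bias of the maximum likelihood estimator, the expected information matrix, and the derivatives of $T$, so that $t^{\ast}$ has null expectation asymptotically closer to zero.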
Kemal Dinçer Dingeç: Evaluating CDF and PDF of the Sum of Lognormals by Monte Carlo Simulation
Evaluating the cumulative distribution function (CDF) and probability density function (PDF) of a sum of lognormal random variates by Monte Carlo simulation is a topic discussed in several recent papers. Our experiments show that, in particular for variances smaller than one, conditional Monte Carlo (CMC) in a well-chosen main direction already leads to a quite simple algorithm with large variance reduction.
For the general case, the implementation of the CMC algorithm requires numerical root finding, which can be implemented robustly using upper and lower bounds for the root. Adding importance sampling (IS) to the CMC algorithm can lead to a large additional variance reduction. For the special case of independent and identically distributed (IID) lognormal random variates the root is obtained in closed form. Importantly, for this case the optimal importance sampling density is very close to the product of its conditional densities, so the product of the approximate one-dimensional conditional densities is used as the multivariate IS density.
By applying different approximation methods to the one-dimensional conditional densities, importance sampling can yield a significant additional variance reduction over the pure CMC algorithm. When the density of the lognormal sum is also required, it is important that an approximating function with a continuous first derivative is available.
In this talk the variance reduction factors obtained with the different approximation methods and the necessary setup times of the random variate generation algorithm are compared. The influence of the selected main direction is also analyzed.
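To make the basic CMC idea concrete, the following is a minimal, self-contained Python (NumPy/SciPy) sketch of plain conditional Monte Carlo along a single main direction with root finding for the CDF of a lognormal sum. The direction choice, the importance sampling layer, and all tuning discussed in the talk are omitted; the function name, the default direction, and the bracketing constants are illustrative assumptions, not the speaker's implementation.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def lognormal_sum_cdf_cmc(gamma, mu, Sigma, n_samples=10_000, rng=None):
    """Plain CMC estimate of P(sum_i exp(mu_i + X_i) <= gamma), X ~ N(0, Sigma).

    Given the component of the driving standard normal vector orthogonal to a
    'main direction' v, the lognormal sum is monotone in the remaining scalar
    coordinate Z, so the conditional CDF equals Phi(z*) with z* found by root
    finding.  Minimal sketch only.
    """
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, float)
    d = len(mu)
    L = np.linalg.cholesky(Sigma)

    # One simple main direction: v proportional to L^{-1} 1, which makes
    # (L v)_i > 0 for every i and hence S(z) strictly increasing in z.
    v = np.linalg.solve(L, np.ones(d))
    v /= np.linalg.norm(v)
    Lv = L @ v                       # strictly positive by construction

    estimates = np.empty(n_samples)
    for k in range(n_samples):
        w = rng.standard_normal(d)
        w_perp = w - (v @ w) * v     # component orthogonal to the main direction
        offset = mu + L @ w_perp

        def s_of_z(z):               # lognormal sum along the main direction, minus gamma
            return np.exp(offset + z * Lv).sum() - gamma

        # S(z) decreases to 0 as z -> -inf and increases to +inf, so a root exists;
        # expand the bracket until it encloses the root, then solve robustly.
        lo, hi = -40.0, 40.0
        while s_of_z(hi) < 0:
            hi *= 2
        while s_of_z(lo) > 0:
            lo *= 2
        z_star = brentq(s_of_z, lo, hi)
        estimates[k] = norm.cdf(z_star)   # P(S <= gamma | w_perp)

    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_samples)

# Example (illustrative parameters): 10 exchangeable lognormals with variance
# 0.25 and correlation 0.2.
# est, se = lognormal_sum_cdf_cmc(
#     gamma=12.0, mu=np.zeros(10),
#     Sigma=0.25 * (0.8 * np.eye(10) + 0.2 * np.ones((10, 10))))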
Wayne Oldford: Exploratory visualization of higher dimensional data
Visualization is an important asset to data analysis, both in communicating results and in explicating the analysis narrative which led to them. However, it is sometimes at its most powerful when used prior to commitment to any analysis narrative, simply to explore the data with minimal prejudice. This is exploratory visualization and its goal is to reveal structure in the data, especially unanticipated structure. Insights gained from exploratory visualization can inform and possibly significantly affect any subsequent analysis narrative.
The size of modern data, in dimensionality and in numbers of observations, poses a formidable challenge for exploratory visualization. First, dimensionality is limited to at most three physical dimensions both by the human visual system and by modern display technology. Second, the number of observations that can be individually displayed on any device is constrained by the size and resolution of its display screen. The challenge is to develop methods and tools that enable exploratory visualization of modern data in the face of such constraints.
Some methods and software which we have designed to address this challenge will be presented in this talk (based on joint work with Adrian Waddell, Adam Rahman, Marius Hofert, and Catherine Hurley). Most of the talk will focus on the problem of exploring higher dimensional spaces, largely through defining, following, and presenting “interesting” low dimensional trajectories through high dimensional space. Both spatial and temporal strategies will be used to allow visual traversal of the trajectories. Software which facilitates exploration via these trajectories will be demonstrated (based mainly on the interactive and extendible exploratory visualization system ‘loon’ and on ‘zenplots’, each of which is available as an R package on CRAN). If time permits, our methodology (and software) for reducing the number of observations (without compromising too much either the empirical distribution or important geometric features of the high dimensional point-cloud) will also be presented.
Peter Filzmoser: Robust and sparse estimation methods for linear and logistic regression in high dimensions
The elastic net estimator has been introduced for different models, such as for linear and logistic regression. We propose a robust version of this estimator based on trimming. It is shown how outlier-free data subsets can be identified and how appropriate tuning parameters for the elastic net penalties can be selected. A final reweighting step is proposed which improves the statistical efficiency of the estimators. Simulations and data examples underline the good performance of the newly proposed method, which is available in the R package enetLTS on CRAN.
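Schematically, one way to write a trimmed elastic net objective of this kind is the following (generic notation; the precise weighting of the penalty, the tuning parameter selection, and the reweighting step follow the enetLTS implementation described above):
\[
  \hat\beta \;=\; \arg\min_{\beta}\; \min_{H:\,|H| = h}\; \sum_{i \in H} \ell\big(y_i, x_i^{\top}\beta\big)
  \;+\; \lambda\Big( \alpha\,\|\beta\|_1 + \tfrac{1-\alpha}{2}\,\|\beta\|_2^2 \Big),
\]
where $\ell$ is the squared-error loss for linear regression or the deviance for logistic regression, $H$ ranges over subsets of $h < n$ observations (the trimming step that removes potential outliers from the fit), and $(\lambda, \alpha)$ are the elastic net tuning parameters.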
John M. Maheu: Nonparametric Dynamic Conditional Beta
This paper derives a dynamic conditional beta representation using a Bayesian semiparametric multivariate GARCH model. The conditional joint distribution of excess stock returns and market excess returns is modeled as a countably infinite mixture of normals. This allows for deviations from the elliptical family of distributions. Empirically we find the time-varying beta of a stock nonlinearly depends on the contemporaneous value of excess market returns. In highly volatile markets, beta is almost constant, while in stable markets, the beta coefficient can depend asymmetrically on the market excess return.
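To fix ideas (schematic notation, not the paper's exact specification): with $r_t$ the stock's excess return and $r_{m,t}$ the market excess return, a conditional beta can be read off the conditional mean,
\[
  \beta_t(r_{m,t}) \;=\; \frac{\partial}{\partial r_{m,t}}\,
  \mathrm{E}\big[\, r_t \mid r_{m,t}, \mathcal{F}_{t-1} \big].
\]
Under a conditionally Gaussian (or, more generally, elliptical) joint model this derivative does not depend on $r_{m,t}$ and reduces to the familiar $\mathrm{Cov}_{t-1}(r_t, r_{m,t}) / \mathrm{Var}_{t-1}(r_{m,t})$. Under the infinite mixture of normals the conditional mean is a nonlinear function of $r_{m,t}$, which is what allows the estimated beta to be nearly constant in volatile markets and to respond asymmetrically to the market excess return in stable markets.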