Abstracts
Christian Genest:
Bayesian Hierarchical Modeling of Spatial Extremes
Climate change and global warming have increased the need to assess and forecast environmental risk over large domains and to develop models for the extremes of natural phenomena such as droughts, floods, torrential precipitation, and heat waves. Because catastrophic events are rare and evidence is limited, Bayesian methods are well suited for the areal analysis of their frequency and size. In this talk, a multi-site modeling strategy for extremes will be described in which spatial dependence is captured through a latent Gaussian random field whose behavior is driven by synthetic covariates from climate reconstruction models. It will be seen through two vignettes that the site-to-site information sharing mechanism built into this approach not only generally improves inference at any given location but also allows for smooth interpolation over large, sparse domains.
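To fix ideas, one generic form of such a hierarchy (a schematic sketch; the model presented in the talk may differ in its details) places an extreme-value likelihood at each site s and lets a latent Gaussian field, driven by the climate-model covariates x(s), tie the sites together:

    \[
      Y(s) \mid \mu(s), \sigma(s), \xi \;\sim\; \mathrm{GEV}\big(\mu(s), \sigma(s), \xi\big),
      \qquad
      \mu(s) = x(s)^\top \beta + W(s),
      \qquad
      W(\cdot) \sim \mathcal{GP}(0, C_\theta),
    \]

where C_\theta is a spatial covariance kernel; the posterior on the latent field W is what shares information across sites and supports interpolation at unmonitored locations.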
The first application will concern the quantification of the magnitude of extreme surges on the Atlantic coast of Canada as part of the development of an overland flood protection product by an insurance company. The second illustration will show how coherent estimates of extreme precipitation over several durations, based on a Bayesian hierarchical spatial model, enhance current methodology for the construction, at monitored and unmonitored locations alike, of the intensity-duration-frequency (IDF) curves commonly used in infrastructure design, flood protection, and urban drainage or water management.
Jakša Cvitanić:
Truth-Incentive Surveys
In this talk, I will present some results on the problem of eliciting honest responses to a multiple-choice question (MCQ) in a survey of a sample of respondents, as might appear in a market research study, opinion poll, or economics experiment. Since the original "Bayesian Truth Serum" (BTS) of Prelec (2004), many other truth-incentive mechanisms have been found. We introduce a new one, which is particularly simple. Under our so-called choice-matching mechanism, respondents are compensated through an auxiliary task, e.g., a personal consumption choice or a forecast. Their compensation depends both on their own performance on the auxiliary task and on the performance of those respondents who matched their response to the MCQ. I will also discuss conditions under which BTS is the unique mechanism that correctly ranks experts.
The talk is based on joint papers with D. Prelec, S. Radas, B. Riley, H. Sikic and B. Tereick.
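As a schematic illustration in R (the payment rule below is a simplified stand-in with a hypothetical weight w, not necessarily the exact rule of the papers), choice matching can be mimicked by pooling the auxiliary-task scores of respondents who picked the same MCQ answer:

    # Schematic choice-matching payments.
    # answers:   each respondent's MCQ answer (length n)
    # aux_score: performance on the auxiliary task, e.g., forecast accuracy (length n)
    choice_matching_pay <- function(answers, aux_score, w = 0.5) {
      # Leave-one-out mean auxiliary score of respondents giving the same answer.
      match_mean <- ave(aux_score, answers,
                        FUN = function(s) (sum(s) - s) / pmax(length(s) - 1, 1))
      # Pay depends on own auxiliary performance and on the matched group's.
      w * aux_score + (1 - w) * match_mean
    }

    set.seed(1)
    answers   <- sample(c("A", "B", "C"), 20, replace = TRUE)
    aux_score <- runif(20)
    round(choice_matching_pay(answers, aux_score), 3)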
Patrick Mair:
Multidimensional Scaling in Action: Recent Developments and Implementation
Multidimensional scaling (MDS) is a widely applied multivariate exploratory technique used in many fields of research. MDS represents proximities among objects as distances among points in a low-dimensional configuration space (with given dimensionality), allowing researchers to explore similarity structures among these objects. The main MDS package in R is called "smacof". This talk gives an overview of various recent MDS developments and corresponding implementations in smacof. In the first part, the focus is on MDS goodness-of-fit assessment, options for interpreting the configuration, Procrustes alignment, and various MDS variants. The second part deals with a popular MDS variant called "unfolding", where the aim is to jointly scale the rows and columns of a rectangular dissimilarity matrix consisting of rankings or ratings.
Within this context, ordinal unfolding, row-conditional unfolding, and configurationally restricted solutions are presented.
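A minimal usage sketch in R with smacof (simulated data; the function names below are as documented in the package):

    # install.packages("smacof")
    library(smacof)

    # Ordinal MDS on a dissimilarity matrix (here: Euclidean distances of random data).
    set.seed(123)
    delta <- dist(matrix(rnorm(100), nrow = 20))
    fit <- mds(delta, ndim = 2, type = "ordinal")
    fit$stress                 # Stress-1 goodness-of-fit value
    permtest(fit, nrep = 100)  # permutation test as one fit assessment
    plot(fit)                  # configuration plot

    # Unfolding: jointly scale rows and columns of a rectangular
    # dissimilarity matrix, e.g., rankings of 6 objects by 15 judges.
    pref <- t(replicate(15, sample(1:6)))
    ufit <- unfolding(pref, ndim = 2)
    plot(ufit)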
René Carmona:
Non-Standard Stochastic Control With Nonlinear Feynman-Kac Costs
We consider the conditional control problem introduced by P.L. Lions in his lectures at the Collège de France in November 2016. In those lectures, Lions emphasized some of the major differences with the analysis of classical stochastic optimal control problems and, in so doing, raised the question of possible differences between the value functions resulting from optimization over the class of Markovian controls as opposed to the general family of open-loop controls. The goal of the paper is to elucidate this quandary and provide elements of response to Lions’ original conjecture. First, we justify the mathematical formulation of the conditional control problem by describing a practical model from evolutionary biology. Next, we relax the original formulation by introducing soft, as opposed to hard, killing and, using a mimicking argument, reduce the open-loop optimization problem to an optimization over a specific class of feedback controls. After proving the existence of optimal feedback control functions, we prove a superposition principle allowing us to recast the original stochastic control problems as deterministic control problems for dynamical systems of Gibbs probability measures. Next, we characterize the solutions by forward-backward systems of coupled nonlinear partial differential equations (PDEs), very much in the spirit of the Mean Field Game (MFG) systems. From there, we identify a common optimizer, proving the conjectured equality of the value functions. Finally, we illustrate the results with convincing numerical experiments.
Joint work with Mathieu Laurière and Pierre-Louis Lions.
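For orientation, the prototypical MFG system alluded to above couples a backward Hamilton-Jacobi-Bellman equation for a value function u with a forward Kolmogorov-Fokker-Planck equation for a flow of measures m (schematic only; the forward-backward PDE systems arising in the paper differ in their details):

    \[
      -\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m_t), \qquad u(T, \cdot) = g(\cdot, m_T),
    \]
    \[
      \partial_t m - \nu \Delta m - \operatorname{div}\!\big(m \, \partial_p H(x, \nabla u)\big) = 0, \qquad m_0 = \mu_0.
    \]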
Petros Dellaportas:
Can Independent Metropolis Beat Monte Carlo?
Assume that we would like to estimate the expected value of a function f with respect to a density π. We prove that if π is close enough, in Kullback-Leibler divergence, to another density q, then an estimator based on an independent Metropolis sampler that obtains samples from π with proposal density q, enriched with a variance-reduction strategy based on control variates, achieves smaller asymptotic variance than the crude Monte Carlo estimator. The control-variate construction requires no extra computational effort but assumes that the expected value of f under q is available. We illustrate our results in marginal likelihood estimation problems. Furthermore, we propose an adaptive independent Metropolis algorithm and demonstrate its applicability to Bayesian inference problems.
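A minimal sketch of the idea in R, under simplifying assumptions (univariate toy target, a plug-in control-variate coefficient; the paper's actual construction is more refined):

    # Independent Metropolis (IM) with a simple control variate.
    # Toy target pi = N(0.3, 1.2^2), proposal q = N(0, 1), f(x) = x^2,
    # so E_q[f] = 1 is known and h(Y) = f(Y) - 1 has mean zero under q.
    set.seed(42)
    n  <- 1e5
    lp <- function(x) dnorm(x, 0.3, 1.2, log = TRUE)  # log pi
    lq <- function(x) dnorm(x, log = TRUE)            # log q
    f  <- function(x) x^2
    x  <- 0
    xs <- ys <- numeric(n)
    for (t in 1:n) {
      y <- rnorm(1)                                   # i.i.d. proposal from q
      if (log(runif(1)) < (lp(y) - lq(y)) - (lp(x) - lq(x))) x <- y
      xs[t] <- x
      ys[t] <- y
    }
    h   <- f(ys) - 1                  # control variate built from the proposals
    lam <- cov(f(xs), h) / var(h)     # plug-in coefficient (ignores autocorrelation)
    c(im    = mean(f(xs)),            # plain IM estimate
      im_cv = mean(f(xs) - lam * h),  # variance-reduced estimate
      truth = 0.3^2 + 1.2^2)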
Ralf Wunderlich:
Stochastic Models and Optimal Control of Epidemics Under Partial Information
Mathematical models of epidemics such as the COVID-19 pandemic often use compartmental models dividing the population into several compartments. Based on a microscopic setting describing the temporal evolution of the subpopulation sizes in the compartments by stochastic counting processes, one can derive macroscopic models for large populations describing the average behavior by associated ODEs, such as the celebrated SIR model. Further, diffusion approximations make it possible to address fluctuations around the average and to describe the state dynamics also for smaller populations by stochastic differential equations (SDEs).
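As a small illustration of such a diffusion approximation, here is a generic SIR diffusion simulated by Euler-Maruyama in R (hypothetical parameter values; a schematic model, not necessarily the one used in the talk):

    # Diffusion-approximated SIR model: fractions s (susceptible) and i (infected)
    # in a population of size N; each reaction contributes noise of size sqrt(rate/N),
    # so the ODE limit is recovered as N -> infinity.
    set.seed(1)
    N <- 1e4; beta <- 0.3; gamma <- 0.1
    dt <- 0.1; Tend <- 200; n <- Tend / dt
    s <- i <- numeric(n + 1)
    s[1] <- 0.99; i[1] <- 0.01
    for (k in 1:n) {
      inf <- beta * s[k] * i[k]     # infection rate
      rec <- gamma * i[k]           # recovery rate
      dW1 <- sqrt(dt) * rnorm(1)    # noise driving infections
      dW2 <- sqrt(dt) * rnorm(1)    # noise driving recoveries
      s[k + 1] <- max(s[k] - inf * dt - sqrt(inf / N) * dW1, 0)
      i[k + 1] <- max(i[k] + (inf - rec) * dt + sqrt(inf / N) * dW1 - sqrt(rec / N) * dW2, 0)
    }
    plot(seq(0, Tend, by = dt), i, type = "l", xlab = "time", ylab = "infected fraction")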
Usually not all of the state variables are directly observable, and we face the so-called "dark figure" problem, concerning, for example, the unknown number of asymptomatic and undetected infections. Such unobservable states are problematic when it comes to computing characteristics of the epidemic, such as the effective reproduction rate and the prevalence of the infection within the population. Further, the management and containment of epidemics rely on solutions of (stochastic) optimal control problems, and the associated feedback controls need observations of the current state as input.
The estimation of unobservable states based on records of the observable states leads to a non-standard filtering problem for partially observable stochastic models. We adopt an extended Kalman filter approach to cope with the nonlinearities in the state dynamics and the state-dependent diffusion coefficients in the SDEs. Based on these filtering results, we study a stochastic optimal control problem under partial information arising in the cost-optimal management of epidemics.
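A compact sketch of such a filter in R, under strong simplifications (only a noisy measurement of the infected fraction is observed; hypothetical parameters; a continuous-discrete extended Kalman filter for the SIR diffusion sketched above):

    # EKF for a partially observed SIR diffusion: state x = (s, i),
    # observations y_k = i(t_k) + measurement noise.
    set.seed(2)
    N <- 1e4; beta <- 0.3; gamma <- 0.1; dt <- 0.1; n <- 600; R <- 1e-5
    drift <- function(x) c(-beta * x[1] * x[2], beta * x[1] * x[2] - gamma * x[2])
    Fjac  <- function(x) matrix(c(-beta * x[2], -beta * x[1],
                                   beta * x[2],  beta * x[1] - gamma), 2, 2, byrow = TRUE)
    Qmat  <- function(x) {   # state-dependent diffusion covariance (per unit time)
      a <- beta * x[1] * x[2] / N; b <- gamma * x[2] / N
      matrix(c(a, -a, -a, a + b), 2, 2)
    }

    # Simulate a latent path and noisy observations of the infected fraction only.
    x <- c(0.99, 0.01); ys <- numeric(n)
    for (k in 1:n) {
      L <- t(chol(Qmat(x) + 1e-12 * diag(2)))
      x <- pmax(as.numeric(x + drift(x) * dt + L %*% rnorm(2) * sqrt(dt)), 1e-8)
      ys[k] <- x[2] + rnorm(1, sd = sqrt(R))
    }

    # EKF: Euler prediction of mean and covariance, then a linear Kalman update.
    H <- matrix(c(0, 1), 1, 2)
    m <- c(0.95, 0.05); P <- diag(0.01, 2)
    for (k in 1:n) {
      m  <- m + drift(m) * dt                            # mean prediction
      Fk <- Fjac(m)
      P  <- P + (Fk %*% P + P %*% t(Fk) + Qmat(m)) * dt  # covariance prediction
      K  <- P %*% t(H) %*% solve(H %*% P %*% t(H) + R)   # Kalman gain
      m  <- as.numeric(m + K %*% (ys[k] - H %*% m))      # measurement update
      P  <- (diag(2) - K %*% H) %*% P
    }
    m   # filtered estimate of (s, i), including the unobserved "dark" component s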
Balasubramanian Narasimhan:
Elastic Net Regularization for GLMs and Extensions
The R package 'glmnet' is widely used for fitting lasso and elastic net models and has undergone continuous development over the years [1-4]. Recent updates [5] allow for fitting elastic net regularized regressions for all generalized linear model families, fitting Cox models to start-stop data, and fitting relaxed lasso models. I will discuss the design and implementation of these features together with some related examples; a short usage sketch follows the references below. This is joint work with Kenneth Tay and Trevor Hastie.
1. Friedman, Hastie, and Tibshirani. “Regularization Paths for Generalized Linear Models via Coordinate Descent.” doi.org/10.18637/jss.v033.i01
2. Simon, Friedman, and Hastie. “A Blockwise Descent Algorithm for Group-Penalized Multiresponse and Multinomial Regression.” doi.org/10.48550/arXiv.1311.6529
3. Simon, Friedman, Hastie, and Tibshirani. “Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” doi.org/10.18637/jss.v039.i05
4. Tibshirani, Bien, Friedman, Hastie, Simon, Taylor, and Tibshirani. “Strong Rules for Discarding Predictors in Lasso-Type Problems.” doi.org/10.1111/j.1467-9868.2011.01004.x
5. Tay, Narasimhan, and Hastie. “Elastic Net Regularization Paths for All Generalized Linear Models.” doi.org/10.18637/jss.v106.i01
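A minimal usage sketch of these features, following the glmnet interface documented in [5] (simulated data):

    library(glmnet)
    library(survival)
    set.seed(1)
    x <- matrix(rnorm(100 * 20), 100, 20)

    # Any GLM family object now works, not only the built-in family strings.
    y_pois <- rpois(100, exp(x[, 1]))
    fit1 <- glmnet(x, y_pois, family = poisson())

    # Cox model on (start, stop] data via a survival::Surv response.
    t0 <- runif(100); t1 <- t0 + rexp(100); event <- rbinom(100, 1, 0.5)
    fit2 <- glmnet(x, Surv(t0, t1, event), family = "cox")

    # Relaxed lasso: refit each active set without penalty, then blend.
    y <- x[, 1] + rnorm(100)
    fit3 <- cv.glmnet(x, y, relax = TRUE)
    predict(fit3, newx = x[1:5, ], s = "lambda.min", gamma = "gamma.min")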
Giorgia Callegaro:
Functional Quantization of Rough Volatility and Applications to the VIX
We develop a product functional quantization of rough volatility. Since the quantizers can be computed offline, this new technique, built on the insightful works of Luschgy and Pagès, becomes a strong competitor in the new arena of numerical tools for rough volatility. We concentrate our numerical analysis on pricing VIX futures in the rough Bergomi model and compare our results to other recently suggested benchmarks.
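For reference, the rough Bergomi variance process being quantized is commonly written, for Hurst index H < 1/2, forward variance curve \xi_0, and vol-of-vol \eta, as

    \[
      V_t = \xi_0(t) \, \exp\!\Big( \eta \sqrt{2H} \int_0^t (t - s)^{H - \frac{1}{2}} \, dW_s \;-\; \frac{\eta^2}{2} \, t^{2H} \Big),
    \]

and it is the Gaussian (Riemann-Liouville) integral inside the exponential that the product functional quantizers approximate offline.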