The recreation area in front of the D4 building, above the fountain.

Abstracts

  • Hansjörg Albrecher - Climate Change and Pooling of NatCat Risks

    In this talk, some challenges for the insurance of losses due to natural catastrophes will be discussed, including some consequences of climate change. We then study the potential of pooling such risks over time and space, quantify the diversification potential in some concrete cases, and look into techniques for identifying pools of regions or countries that would best collaborate to deal with such risks.

  • Valeria Bignozzi - Risk measurement under parameter uncertainty

    In the intense debate on risk measure properties, much attention has recently been devoted to elicitability. A statistical functional is elicitable if it can be written as the minimiser of an expected loss function; the mean, the quantile and the expectile are prominent examples. Elicitability is also related to the idea of regression; indeed, the loss function can be used to measure the “distance” between a given variable Y and a regression function. These concepts have been employed for fair valuation in actuarial mathematics, where the expected loss function is used to find a portfolio that is as close as possible to an insurance liability while having a residual risk of zero. In this work we use elicitability to find the risk estimator that best approximates a financial loss in a context of model uncertainty. When the probability distribution of the loss is unknown, the risk measure is estimated from (historical) data and takes different values depending on the realisation of the sample used. Our goal is to find the best strategy/risk-measure estimator, one that also reflects the riskiness arising from distribution uncertainty. In particular, focusing on the family of location-scale distributions, we consider elicitable risk measures and different estimators, study their properties, and evaluate their accuracy. (A schematic definition of elicitability is recalled after this abstract.)

    Based on joint work with Salvatore Scognamiglio and Andreas Tsanakas.
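
    For orientation, the standard definition of elicitability referenced above (textbook material, not the talk's contribution): a functional T is elicitable if there is a loss (scoring) function S such that

        \[
          T(F) \in \arg\min_{x \in \mathbb{R}} \; \mathbb{E}_{Y \sim F}\,[\,S(x, Y)\,].
        \]

    The mean is elicited by the squared loss S(x, y) = (x - y)^2, and the \alpha-quantile by the pinball loss S(x, y) = (\mathbf{1}\{y \le x\} - \alpha)(x - y).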

  • Valérie Chavez - Causal Discovery in Multivariate Extremes

    Understanding the causal dynamics between different climate factors at their extreme level is crucial for effective climate risk management.
    Causal asymmetry results from the principle that an event is a cause only if its absence would not have been a cause. From there, causal discovery becomes a matter of comparing a well-defined score in both directions. Motivated by studying causal effects at extreme levels of a random vector, we propose to construct a model-agnostic causal score relying solely on the assumption of the existence of a max-domain of attraction. (A toy illustration of the directional score comparison follows this abstract.)
    Based on a representation of a Generalized Pareto random vector, we construct the causal score as the Wasserstein distance between the margins and a well-specified random variable. The proposed methodology is illustrated on a hydrologically simulated dataset of different characteristics of catchments in Switzerland: discharge, precipitation and snow melt. Joint work with Linda Mhalla and Philippe Naveau.
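
    A toy sketch of the "compare a score in both directions" idea. Everything here is a hypothetical stand-in: the data are simulated, and the score (a Wasserstein distance between standardized regression residuals and a Gaussian reference) is only illustrative, not the Generalized-Pareto-based construction of the talk.

        import numpy as np
        from scipy.stats import wasserstein_distance

        rng = np.random.default_rng(0)

        # Simulated example where x drives y through heavy-tailed noise.
        x = rng.pareto(3.0, 2000) + 1.0
        y = 2.0 * x + rng.pareto(3.0, 2000)

        def directional_score(cause, effect):
            # Illustrative score: fit a linear model effect ~ cause and measure
            # how far the standardized residuals are from a Gaussian reference.
            slope, intercept = np.polyfit(cause, effect, 1)
            resid = effect - (slope * cause + intercept)
            resid = (resid - resid.mean()) / resid.std()
            return wasserstein_distance(resid, rng.normal(size=resid.size))

        print("score x -> y:", directional_score(x, y))
        print("score y -> x:", directional_score(y, x))
        # Decision rule (illustrative): the direction with the smaller score
        # is declared causal.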

  • Katia Colaneri - Portfolio Insurance strategies: a few recent results in a market model with jumps

    We investigate an optimal investment problem associated with proportional portfolio insurance (PPI) strategies in the presence of jumps in the underlying dynamics. PPI strategies enable investors to mitigate downside risk while still retaining the potential for upside gains. This is achieved by maintaining an exposure to risky assets proportional to the difference between the portfolio value and the present value of the guaranteed amount. While PPI strategies are known to be free of downside risk in diffusion modeling frameworks with continuous trading (see, e.g., Cont and Tankov (2009)), real market applications exhibit a significant, non-negligible risk, known as gap risk, which increases with the multiplier value. We determine the optimal PPI strategy in a setting where gap risk may occur due to downward jumps in the asset price dynamics. We consider a loss-averse agent who aims at maximizing the expected utility of the terminal wealth exceeding a minimum guarantee. Technically, we model the agent's preferences with an S-shaped utility function to accommodate the possibility that gap risk occurs, and address the optimization problem via a generalization of the martingale approach that turns out to be valid under market incompleteness. (A toy simulation of gap risk is sketched below.)
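
    A toy simulation of gap risk under a PPI rule (a minimal sketch with assumed dynamics and parameters: daily rebalancing, Gaussian diffusion returns, and a rare downward jump; not the model or calibration of the talk):

        import numpy as np

        rng = np.random.default_rng(1)

        n, dt = 252, 1.0 / 252               # daily rebalancing over one year
        mu, sigma = 0.05, 0.20               # assumed diffusion parameters
        jump_prob, jump_size = 0.01, -0.40   # rare 40% downward jump (assumption)
        m = 5.0                              # PPI multiplier
        V, floor = 100.0, 90.0               # wealth and guaranteed floor

        for _ in range(n):
            cushion = max(V - floor, 0.0)
            exposure = m * cushion           # risky exposure proportional to the cushion
            ret = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if rng.random() < jump_prob:
                ret += jump_size             # a return below -1/m wipes out the cushion
            V += exposure * ret

        print(f"terminal wealth: {V:.2f}, floor breached: {V < floor}")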

  • Vicky Fasen-Hartmann - Risk contagion in financial networks: the effect of a Gaussian copula

    Systemic risk measurements play a crucial role in assessing the stability of complex financial systems. Empirical evidence suggests that returns from various financial assets exhibit heavy-tailed behavior. Additionally, these returns often demonstrate asymptotic independence, meaning extreme values are less likely to occur simultaneously.
    Surprisingly, asymptotic independence in dimensions larger than two has received limited attention, both theoretically and in financial risk modeling. In this talk, we introduce the concept of mutual asymptotic independence in general dimension d. We compare it with the traditional notion of pairwise asymptotic independence and apply both concepts to a Gaussian copula. (The standard bivariate definition is recalled after this abstract.)
    Furthermore, we consider a financial network model using a bipartite graph of banks and assets, with portfolios of possibly overlapping heavy-tailed risky assets having a Gaussian copula. For such models, we provide precise asymptotic expressions for various (conditional) tail risk probabilities and associated CoVaR measures for assessing systemic risk.
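
    For orientation, the standard bivariate notion (textbook material, not the talk's d-dimensional extension): X_1 and X_2 with distribution functions F_1, F_2 are asymptotically (tail) independent if

        \[
          \lim_{u \to 1^-} P\big( F_1(X_1) > u \,\big|\, F_2(X_2) > u \big) = 0 .
        \]

    A bivariate Gaussian copula with correlation \rho < 1 satisfies this property, which is what makes the higher-dimensional behavior of Gaussian-copula models a natural object of study.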

  • Christian Genest - A latent-vine factor-copula time series model for extreme flood insurance losses

    Vine and factor copula models are handy tools for statistical inference in high dimension. However, their use for tail modeling and prediction is subject to caution when extreme data are sparse. Motivated by the need to assess the risk of co-occurrence of large insurance losses in the American National Flood Insurance Program (NFIP), I will describe a novel class of copula models that can account for spatiotemporal dependence within clustered sets of time series. This new class, which combines the advantages of vine and factor copula models, provides great flexibility in capturing tail dependence while maintaining interpretability through a parsimonious latent structure. Using NFIP data, I will show the value of this approach in evaluating the risks associated with extreme weather events. This talk is based on joint work with Xiaoting Li and Harry Joe.

  • Marius Hofert - Think stochastic: A new dependence model construction

    Recently, a publication suggested the use of EFGM copulas as dependence models in risk management. In terms of a stochastic representation of EFGM copulas, we explain intuitively why such copulas are only able to model a limited range of dependence (the standard EFGM form and its limits are recalled below). We then introduce a new class of copulas based on similar ideas but without such limitations and investigate some of its properties.
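
    For reference, the classical bivariate EFGM (Eyraud-Farlie-Gumbel-Morgenstern) copula is

        \[
          C_\theta(u, v) = uv \,\big[ 1 + \theta (1 - u)(1 - v) \big], \qquad \theta \in [-1, 1],
        \]

    for which Kendall's tau equals 2\theta/9 and is thus confined to [-2/9, 2/9]; this is the limited dependence range alluded to above (a standard fact, stated here for context).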

  • Marco Oesting - The Functional Peaks-Over-Threshold Approach – What if the Risk is (Partially) Unobserved?

    In order to describe the extremal behaviour of some stochastic process X, approaches from univariate extreme value theory, such as the peaks-over-threshold approach, are commonly generalized to the spatial or spatio-temporal domain. In this setting, extreme events can be flexibly defined as exceedances of a risk functional r (for example, a spatial average) applied to X. Inference for the resulting limit process, the so-called r-Pareto process, requires the evaluation of r(X) and thus knowledge of the whole process X. In practical applications, we face the challenge that observations of X are only available at single sites.

    To overcome this issue, we propose a two-step MCMC algorithm in a Bayesian framework. In the first step, we sample from the distribution of X conditional on the observations, in order to evaluate which observations lead to r-exceedances. In the second step, we use these exceedances to sample from the posterior distribution of the parameters of the limiting r-Pareto process. Alternating these steps results in a full Bayesian model for the extremes of X. We show that, under appropriate assumptions, the probability of classifying an observation as an r-exceedance in the first step converges to the desired probability. Furthermore, given the first step, the distribution of the Markov chain constructed in the second step converges to the posterior distribution of interest. Our procedure is compared to the Bayesian version of the standard procedure in a simulation study. (A schematic of the alternating sampler is sketched after this abstract.)

    This is joint work with Max Thannheimer.
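
    A purely schematic skeleton of the alternating two-step scheme described above. Every name below is a hypothetical placeholder (the actual conditional samplers, priors and r-Pareto posterior updates are those of the paper):

        def alternating_sampler(obs, theta0, n_iter, sample_X_given_obs, r,
                                threshold, update_theta_given_exceedance):
            # Hypothetical skeleton, not the authors' code.
            theta = theta0
            for _ in range(n_iter):
                # Step 1: impute the unobserved parts of X from the site
                # observations, then check for an r-exceedance.
                X = sample_X_given_obs(obs, theta)
                if r(X) > threshold:
                    # Step 2: update the r-Pareto parameters via the exceedance.
                    theta = update_theta_given_exceedance(X, theta)
                yield theta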

  • Sebastian Lerch - Uncertainty quantification for data-driven weather models

    Modeling and quantifying climate and weather risks typically relies on ensemble simulations from physical models of the atmosphere, the generation of which requires tremendous amounts of computational resources. Over the past few years, artificial intelligence (AI)-based data-driven weather models have experienced rapid progress. Recent studies, with models trained on reanalysis data, achieve impressive results and demonstrate substantial improvements over state-of-the-art physics-based numerical weather prediction models across a range of variables and evaluation metrics. Beyond improved predictions, the main advantages of data-driven weather models are their substantially lower computational costs and the faster generation of forecasts once a model has been trained. In my talk, I will give an overview of these recent developments, highlighting both the potential of data-driven weather models and their limitations and open questions.
    In particular, most efforts in data-driven weather forecasting have been limited to deterministic, point-valued predictions, making it impossible to quantify forecast uncertainties, which is crucial for risk modeling and for optimal decision making in applications. I will present results from recent work on uncertainty quantification methods to generate probabilistic weather forecasts from a state-of-the-art deterministic data-driven weather model, Pangu-Weather. Specifically, we compare approaches for quantifying forecast uncertainty based on generating ensemble forecasts via perturbations to the initial conditions with the use of statistical and machine learning methods for post-hoc uncertainty quantification. (A toy sketch of the initial-condition perturbation approach follows this abstract.)

    This presentation is based on joint work with Christopher Bülte, Nina Horat and Julian Quinting.
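
    A toy sketch of the initial-condition perturbation idea (the forecast model below is an arbitrary placeholder, not Pangu-Weather, and Gaussian perturbations are an illustrative choice):

        import numpy as np

        rng = np.random.default_rng(42)

        def forecast_step(state):
            # Placeholder for one step of a deterministic data-driven model.
            return 0.9 * state + 0.1 * np.tanh(state) + 0.05

        def ic_ensemble(state0, n_members=10, noise_scale=0.05, n_steps=20):
            # Perturb the initial condition and roll the deterministic model
            # forward once per member; the member spread quantifies uncertainty.
            members = []
            for _ in range(n_members):
                s = state0 + noise_scale * rng.standard_normal(state0.shape)
                for _ in range(n_steps):
                    s = forecast_step(s)
                members.append(s)
            return np.stack(members)

        ens = ic_ensemble(np.zeros(4))
        print("ensemble mean:", ens.mean(axis=0), "spread:", ens.std(axis=0))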

  • Alexander McNeil -  Vine copulas for stochastic volatility

    Models from the GARCH class have proved extremely useful for forecasting volatility and measuring risk in financial time series. However, they are something of a black box with respect to their serial dependence structure, and they may not be the best models for all time series exhibiting stochastic volatility.
    To shed more light on how GARCH models work, we examine the bivariate copulas that describe their serial dependencies and higher-order partial serial dependencies. We show how these copulas can be approximated using a combination of standard bivariate copulas and uniformity-preserving transformations known as v-transforms (the simplest example is recalled below). The insights help us to construct stationary d-vine models that rival, and often surpass, the performance of GARCH processes in modelling volatile financial return series.
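
    For intuition, the simplest v-transform (a standard example, not the talk's full construction) is

        \[
          \mathcal{V}(u) = |2u - 1|,
        \]

    which is uniformity-preserving: if U is uniform on (0,1), then so is \mathcal{V}(U). Applied on the probability scale of a return series, it encodes the idea that volatility reacts to the magnitude of returns rather than to their sign.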

  • Ostap Okhrin - Plant growth stages and weather insurance design

    We investigate innovative approaches to weather index insurance design for agricultural producers, focusing on soybean production in Illinois, USA, and winter barley cultivation.
    A novel weather index insurance product is proposed for soybean production, dividing the vegetation cycle into four growth stages to mitigate basis risk. Using a nonparametric approach based on a Generalized Additive Model (GAM) with Penalized B-spline (P-spline) methodology, the study demonstrates significant improvements in basis risk mitigation compared to traditional whole-cycle models.
    In the case of winter barley cultivation, the study addresses the challenge of separating plant growth stages when phenology information is unavailable. Employing a data-driven phase-division method, various statistical and machine learning estimation methods are evaluated to model the weather-yield relationship, offering promising avenues for enhancing weather index insurance design. (A minimal GAM fitting sketch follows this abstract.)
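
    A minimal sketch of fitting a weather-yield relationship with per-stage penalized B-spline smooths (synthetic data; the pygam library is an assumed stand-in, and the studies' actual index construction and estimation details are as described in the talk):

        import numpy as np
        from pygam import LinearGAM, s  # GAM with penalized B-spline terms

        rng = np.random.default_rng(7)

        # Synthetic stand-in data: one cumulative weather index per growth
        # stage (e.g., rainfall sums) and the resulting crop yield.
        X = rng.uniform(0, 100, (300, 2))
        y = (3.0 + 0.04 * X[:, 0] - 0.0003 * (X[:, 1] - 50.0) ** 2
             + rng.normal(0.0, 0.2, 300))

        # One P-spline smooth per growth stage; smoothing parameters are
        # chosen by a grid search over the penalty strength.
        gam = LinearGAM(s(0) + s(1)).gridsearch(X, y)
        gam.summary()

        # An index-insurance payout could then be derived from the fitted
        # yield model, e.g. payout = max(strike - predicted_yield, 0).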

  • Johanna Ziegel - Isotonic distributional regression and CRPS decompositions

    Isotonic distributional regression (IDR) is a nonparametric distributional regression approach under a monotonicity constraint. It has found application as a generic method for uncertainty quantification, in statistical postprocessing of weather forecasts, and in distributional single-index models. IDR has favorable calibration and optimality properties in finite samples. Furthermore, it has an interesting population counterpart, called isotonic conditional laws, which generalizes conditional distributions with respect to σ-algebras to conditional distributions with respect to σ-lattices. In this talk, an overview of the theory is presented. Furthermore, it is shown how IDR can be used to decompose the mean CRPS for assessing the predictive performance of models with regard to their calibration and discrimination ability. (The form of this decomposition is recalled below.)
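
    The decomposition referred to above takes the standard miscalibration-discrimination-uncertainty form (notation assumed here for context; the precise IDR-based construction is the subject of the talk):

        \[
          \overline{\mathrm{CRPS}} = \mathrm{MCB} - \mathrm{DSC} + \mathrm{UNC},
        \]

    where MCB quantifies miscalibration (what recalibrating the forecasts would gain), DSC quantifies discrimination (what the recalibrated forecasts gain over the unconditional climatological forecast), and UNC is the mean CRPS of that climatological forecast.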