Abstracts Research Seminar Summer Term 2022
Paul Eisenberg:
Affine Models for Energy Markets
The so-called spot market for electricity is a standardised market where electricity for delivery is traded for each of the 24 hours of the next day. Electricity futures are exchange-traded contracts covering longer standardised delivery periods (e.g. calendar weeks, months, years). Prices on the spot market are settled once per day, while prices for futures are settled similarly to shares on an exchange. In this talk, we look at some generic price models for spot and electricity futures markets, with a special focus on finite factor models – in the case of futures markets these are called affine due to their affine geometric structure. The aim of the talk is to present some of the recent theoretical results around these models.
Rafael M. Frongillo:
How (Not) to Run a Forecasting Competition: Incentives and Efficiency
Forecasting competitions, wherein forecasters submit predictions about future events or unseen data points, are an increasingly common way to gather information and identify experts. One of the most prominent platforms for these competitions is Kaggle, which has run machine learning competitions with prizes of up to 3 million USD. The most common approach to running such a competition is also the simplest: score each prediction given the outcome of each event (or data point), and pick the forecaster with the highest score as the winner. Perhaps surprisingly, this simple mechanism has poor incentives, especially when the number of events (data points) is small relative to the number of forecasters. Witkowski et al. (2018) identified this problem and proposed a clever solution, the Event Lotteries Forecasting (ELF) mechanism. Unfortunately, to choose the best forecaster as the winner, ELF still requires a large number of events. This talk will give an overview of the problem and introduce a new mechanism which achieves robust incentives with far fewer events. Our approach borrows ideas from online machine learning; if time permits, we will see how the same mechanism solves an open question for online learning from strategic experts.
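As a concrete illustration of the simple mechanism described above, a minimal sketch (names and data hypothetical): each forecaster is scored by the Brier score on all events, and the best scorer wins. With only a few events, an all-in overconfident forecaster can beat a well-calibrated one by pure luck, which hints at the incentive problem the talk addresses.

```python
# Minimal sketch of the simple "highest score wins" mechanism (names and data
# hypothetical): score each forecaster with the Brier score and pick the best.

def brier_score(predictions, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes
    (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

def pick_winner(forecasts, outcomes):
    """Return the forecaster with the lowest (i.e. best) Brier score."""
    return min(forecasts, key=lambda name: brier_score(forecasts[name], outcomes))

# Three forecasters predict the probability of rain on four days.
forecasts = {
    "alice": [0.9, 0.8, 0.2, 0.7],   # well calibrated
    "bob":   [0.6, 0.5, 0.5, 0.6],   # hedging
    "carol": [1.0, 1.0, 0.0, 0.0],   # all-in overconfident
}
outcomes = [1, 1, 0, 0]  # carol's extreme guesses happen to be exactly right

winner = pick_winner(forecasts, outcomes)  # carol wins by luck, not skill
```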
Rodney Sparapani:
Nonparametric Machine Learning and Efficient Computation With Bayesian Additive Regression Trees (BART)
We introduce the BART R package on The Comprehensive R Archive Network (CRAN). BART is a Bayesian nonparametric, machine learning, ensemble predictive modeling method for continuous, binary, categorical and time-to-event outcomes. To motivate this methodology, we will introduce childhood development growth charts for continuous outcomes and the surrounding inference challenges. BART is a tree-based, black-box method which fits the outcome to an arbitrary random function of the covariates via the BART prior. The BART technique is relatively computationally efficient compared to its competitors, but large sample sizes can be demanding. Therefore, the BART package includes efficient state-of-the-art implementations that can take advantage of modern off-the-shelf hardware and software multi-threading technology (as a descendant of the Message Passing Interface BART source code). Besides BART itself, we will touch upon related methods adapted to heteroskedasticity and monotonicity, each with its own software package.
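The sum-of-trees form underlying BART can be written compactly (standard notation from the BART literature, not specific to this talk):

```latex
y_i = f(x_i) + \varepsilon_i, \qquad
f(x) = \sum_{j=1}^{m} g(x;\, T_j, M_j), \qquad
\varepsilon_i \sim \mathcal{N}(0, \sigma^2),
```

where each g(x; T_j, M_j) is the step function of a binary regression tree T_j with leaf values M_j, and the BART prior shrinks each tree toward a weak learner so that the ensemble, rather than any single tree, captures the regression function.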
Katia Colaneri:
Discrete-Time Optimization Problems in Insurance With a Final Distribution Constraint
We study a few optimization problems for an insurance company that additionally aims to reach a surplus with a given target distribution at a fixed time T. First, we consider the dividend problem, where dividends are distributed at deterministic dates and the surplus is modelled under the Gaussian approximation. In this setting we investigate strategies that either maximise the dividends or minimise the ruin probability, while leading to a specific distribution of the surplus at a pre-specified future date. The constraint on the distribution makes the problem non-standard and has important implications in terms of risk management. Second, we consider an insurance company that aims to buy a reinsurance contract for a pool of insured clients, with a surplus modelled as an arithmetic Brownian motion. To achieve a certain level of sustainability (i.e. the collected premia should be sufficient to buy reinsurance and to pay the occurring claims), the initial capital is set to zero. We only allow for piecewise constant reinsurance strategies producing a normally distributed terminal surplus, whose mean and variance lead to a given Value-at-Risk at some confidence level alpha. We investigate which admissible reinsurance strategy produces the smallest ruin probability when the ruin checks are due at discrete deterministic points in time.
This talk is based on joint work with Julia Eisenberg and Benedetta Salterini.
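The link in the second problem between a normally distributed terminal surplus and its Value-at-Risk can be made concrete in a few lines. This is a minimal sketch with illustrative numbers, not the talk's model; the sign convention here treats VaR as a quantile of the loss -X.

```python
# For a normally distributed terminal surplus X ~ N(mu, sigma^2), the
# Value-at-Risk of the loss -X at confidence level alpha is
#   VaR_alpha = z_alpha * sigma - mu,
# where z_alpha is the standard normal alpha-quantile. Illustrative numbers only.
from statistics import NormalDist

def var_of_normal_surplus(mu, sigma, alpha):
    """VaR at level alpha of the loss -X for X ~ N(mu, sigma^2)."""
    z = NormalDist().inv_cdf(alpha)
    return z * sigma - mu

# A terminal surplus with mean 2.0 and standard deviation 1.0 at the 99% level:
var_99 = var_of_normal_surplus(mu=2.0, sigma=1.0, alpha=0.99)  # about 0.326
```

Fixing the VaR at level alpha therefore pins down one linear combination of the mean and standard deviation of the terminal surplus, which is exactly the kind of distributional constraint the talk imposes.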
Bret M. Hanlon:
Regularized Ordinal Regression and the ordinalNet R Package
Regularization techniques such as the lasso (Tibshirani 1996) and elastic net (Zou and Hastie 2005) can be used to improve regression model coefficient estimation and prediction accuracy, as well as to perform variable selection. Ordinal regression models are widely used in applications where the use of regularization could be beneficial; however, these models are not included in many popular software packages for regularized regression. We propose a coordinate descent algorithm to fit a broad class of ordinal regression models with an elastic net penalty. Furthermore, we demonstrate that each model in this class generalizes to a more flexible form that can be used to model either ordered or unordered categorical response data. We call this the elementwise link multinomial-ordinal class; it includes widely used models such as multinomial logistic regression (which also has an ordinal form) and ordinal logistic regression (which also has an unordered multinomial form). We introduce an elastic net penalty class that applies to either model form; additionally, this penalty can be used to shrink a non-ordinal model toward its ordinal counterpart. Finally, we introduce the R package ordinalNet, which implements the algorithm for this model class.
Joint work with Michael J. Wurm and Paul J. Rathouz.
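For reference, the elastic net combines the lasso (L1) and ridge (L2) penalties; a minimal sketch of the penalty itself with illustrative coefficients, not of the ordinalNet fitting algorithm:

```python
# The elastic net penalty combines the lasso and ridge penalties:
#   P(beta) = lambda * (alpha * sum|beta_j| + (1 - alpha)/2 * sum beta_j^2),
# where alpha = 1 recovers the lasso and alpha = 0 the ridge penalty.

def elastic_net_penalty(beta, lam, alpha):
    """Elastic net penalty for coefficient vector beta."""
    l1 = sum(abs(b) for b in beta)          # lasso part: encourages sparsity
    l2 = sum(b * b for b in beta)           # ridge part: shrinks smoothly
    return lam * (alpha * l1 + (1 - alpha) / 2 * l2)

beta = [1.0, -2.0, 0.0, 0.5]                # illustrative coefficients
lasso = elastic_net_penalty(beta, lam=0.1, alpha=1.0)   # pure L1: 0.1 * 3.5
ridge = elastic_net_penalty(beta, lam=0.1, alpha=0.0)   # pure L2: 0.1 * 5.25 / 2
```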
Marius Hofert:
Quasi-Random Sampling for Multivariate Distributions via Generative Neural Networks
Generative moment matching networks (GMMNs) are introduced for generating approximate quasi-random samples from multivariate models with any underlying copula in order to compute estimates with variance reduction. So far, quasi-random sampling for multivariate distributions required a careful design, exploiting specific properties (such as conditional distributions) of the implied parametric copula or the underlying quasi-Monte Carlo (QMC) point set, and was only tractable for a small number of models. Utilizing GMMNs allows one to construct approximate quasi-random samples for a much larger variety of multivariate distributions without such restrictions, including empirical ones from real data with dependence structures not well captured by parametric copulas. Once trained on pseudo-random samples from a parametric model or on real data, these neural networks only require a multivariate standard uniform randomized QMC point set as input and are thus fast in estimating expectations of interest under dependence with variance reduction. Numerical examples are considered to demonstrate the approach. Emphasis is put on ideas rather than mathematical proofs.
Joint work with Avinash Prasad and Mu Zhu.
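The sampling pipeline can be sketched end to end: a randomized low-discrepancy point set of uniforms is pushed through a generator for the target dependence structure. In the sketch below, a randomly shifted Halton sequence stands in for the randomized QMC point set and an analytic Gaussian transform stands in for a trained GMMN; all names and parameters are assumptions for illustration.

```python
# Sketch of quasi-random sampling under dependence: randomized QMC uniforms
# are mapped through a "generator" and an expectation is estimated.
import random
from statistics import NormalDist

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    out = []
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        out.append(x)
    return out

def rqmc_uniforms(n, shifts):
    """Randomly shifted 2-D Halton points (Cranley-Patterson rotation)."""
    h2, h3 = halton(n, 2), halton(n, 3)
    return [((u + shifts[0]) % 1.0, (v + shifts[1]) % 1.0) for u, v in zip(h2, h3)]

def generator(u, v, rho=0.7):
    """Stand-in for a trained network: uniforms -> correlated bivariate normals."""
    nd = NormalDist()
    z1, z2 = nd.inv_cdf(u), nd.inv_cdf(v)
    return z1, rho * z1 + (1 - rho ** 2) ** 0.5 * z2

random.seed(0)
n = 512
points = rqmc_uniforms(n, (random.random(), random.random()))
# Estimate E[X1 * X2], which equals rho = 0.7 for this dependence model:
estimate = sum(x * y for x, y in (generator(u, v) for u, v in points)) / n
```

In the talk's approach the analytic transform is replaced by the trained GMMN, so the same randomized QMC input yields variance-reduced estimates for dependence structures with no tractable conditional distributions.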
Christian Hennig:
Testing in Models That Are Not True
The starting point of my presentation is the apparently popular idea that in order to do hypothesis testing (and more generally frequentist model-based inference) we need to believe that the model is true, and the model assumptions need to be fulfilled. I will argue that this is a misconception. Models are, by their very nature, not "true" in reality. Mathematical results secure favourable characteristics of inference in an artificial model world in which the model assumptions are fulfilled. For using a model in reality we need to ask what happens if the model is violated in a "realistic" way. One key approach is to model a situation in which certain model assumptions of, e.g., the model-based test that we want to apply, are violated, in order to find out what happens then. This, somewhat inconveniently, depends strongly on what we assume, how the model assumptions are violated, whether we make an effort to check them, how we do that, and what alternative actions we take if we find them wanting. I will discuss what we know and what we can't know regarding the appropriateness of the models that we "assume", and how to interpret them appropriately, including new results on conditions for model assumption checking to work well, and on untestable assumptions.
Joint work with Iqbal Shamsudheen.
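The programme of "violate an assumption and see what happens" can be made concrete with a small simulation (illustrative, not from the talk): a one-sample t-test assumes normal data, and here it is fed skewed exponential data with the null hypothesis true, so the simulated rejection rate shows the test's actual type I error.

```python
# Simulate the actual type I error of a nominal-5% one-sample t-test when the
# normality assumption fails: the data are exponential with the null mean.
import random
import statistics

random.seed(42)
n, reps, mu0 = 10, 4000, 1.0
t_crit = 2.262  # two-sided 5% critical value of the t distribution with df = 9

rejections = 0
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]  # true mean = mu0 = 1
    m, s = statistics.mean(sample), statistics.stdev(sample)
    t = (m - mu0) / (s / n ** 0.5)
    if abs(t) > t_crit:
        rejections += 1

# Nominal level is 0.05; skewness of the data distorts the actual level.
type1_rate = rejections / reps
```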
Fred Espen Benth:
Pricing Options on Flow Forwards by Neural Networks in Hilbert Space
We propose a methodology for pricing options on flow forwards by applying infinite-dimensional neural networks. We recast the option pricing problem as an optimization problem in a Hilbert space of real-valued functions on the positive real line, which is the state space for the forward price term structure dynamics. This optimization problem is solved by means of a feedforward neural network architecture designed for approximating continuous functions on the state space. The proposed neural net is built upon the basis functions of the Hilbert space. We present a case study showing the numerical efficiency of the approach, with improved performance over a classical neural network trained on discretely sampled term structure curves.
This is joint work with Nils Detering (University of California at Santa Barbara) and Luca Galimberti (Norwegian University of Science and Technology).
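A toy sketch of the design choice of building the network on basis functions of the Hilbert space: represent a curve by its coefficients in a truncated basis and feed those coefficients to a small feedforward net. The cosine basis, the toy forward curve, and the untrained random weights are all hypothetical stand-ins, not the architecture of the talk.

```python
# Toy sketch: curve -> truncated basis coefficients -> feedforward net.
import math
import random

def curve_to_coeffs(curve, n_basis, xs):
    """Project a sampled curve onto cosine basis functions on [0, 1] (toy basis)."""
    coeffs = []
    for k in range(n_basis):
        basis = [math.cos(math.pi * k * x) for x in xs]
        coeffs.append(sum(c * b for c, b in zip(curve, basis)) / len(xs))
    return coeffs

def feedforward(coeffs, weights1, weights2):
    """One hidden ReLU layer, scalar output (e.g. an option price proxy)."""
    hidden = [max(0.0, sum(w * c for w, c in zip(row, coeffs))) for row in weights1]
    return sum(w * h for w, h in zip(weights2, hidden))

random.seed(1)
n_basis, n_hidden = 4, 8
xs = [i / 100 for i in range(101)]
curve = [1.0 + 0.5 * math.exp(-x) for x in xs]   # a toy forward curve
coeffs = curve_to_coeffs(curve, n_basis, xs)
w1 = [[random.gauss(0, 1) for _ in range(n_basis)] for _ in range(n_hidden)]
w2 = [random.gauss(0, 1) for _ in range(n_hidden)]
price_proxy = feedforward(coeffs, w1, w2)        # untrained, illustrative only
```

The point of the design is that the network input has a fixed, discretization-free dimension: two different samplings of the same curve yield (approximately) the same coefficients, unlike a net fed the raw sampled values.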
Gunter Löffler:
How Efficiently Do Green Bonds Help the Environment?
We examine whether the magnitude of financial benefits derived from corporate green bond issuance is associated with the magnitude of future reductions in carbon emissions of non-financial corporates. We find a significantly negative relationship between the volume of issued green bonds and future carbon intensity. The relationship is limited to firms with higher credit risk and higher financial constraints. The association between carbon reductions and pricing advantages of green bonds is weaker and inconsistent. The findings suggest that green bonds can help firms to finance carbon reductions, but they also indicate that a considerable fraction of green bond financing does not have direct effects.
Joint work with Mona ElBannan.
Zachary Feinstein:
Axioms and Properties of Automated Market Makers
Automated market makers (AMMs) are a decentralized approach to creating a financial market. In contrast to a centralized exchange, AMMs are able to operate without the need for a trust-based relationship and they do not require custodial services. AMMs use mathematical formulas to rebalance their reserves, automatically quoting market prices and executing swaps. Beyond their simple structure, a primary benefit of AMMs is that they allow individuals to pool their assets to become liquidity providers and collect the proceeds associated with serving that vital market function. In this talk, we will consider an axiomatic construction of AMMs so as to provide mathematical properties that all market makers should satisfy. These properties are compared with AMMs that exist in practice.
This is joint work with Maxim Bichuch.
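A widely used concrete instance of such a reserve formula is the constant-product rule (used, e.g., by Uniswap v2). A minimal fee-free sketch, not tied to the axioms of the talk:

```python
# Minimal constant-product AMM: reserves x, y satisfy x * y = k after every
# (fee-free) swap; the quoted marginal price is the ratio of reserves.

class ConstantProductAMM:
    def __init__(self, reserve_x, reserve_y):
        self.x = reserve_x
        self.y = reserve_y
        self.k = reserve_x * reserve_y   # invariant preserved by swaps

    def price_y_per_x(self):
        """Marginal price of asset X in units of asset Y."""
        return self.y / self.x

    def swap_x_for_y(self, dx):
        """Deposit dx of X, receive dy of Y, keeping x * y = k (no fees)."""
        new_x = self.x + dx
        new_y = self.k / new_x
        dy = self.y - new_y
        self.x, self.y = new_x, new_y
        return dy

amm = ConstantProductAMM(1000.0, 1000.0)  # initial price of X is 1.0
dy = amm.swap_x_for_y(100.0)              # a large trade moves the price down
```

The formula automatically quotes a worse price the larger the trade, which is how the AMM balances its reserves without an order book.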
Firdevs Ulus:
Direction-Free Approximation Algorithms for Bounded Convex Vector Optimization Problems
In the literature, various algorithms exist for solving convex vector optimization problems in the sense of finding polyhedral approximations to the Pareto frontier. These algorithms iteratively solve scalarization models, which in general require a direction parameter in the objective space, and their performance depends on the choice of this parameter. For bounded convex vector optimization problems, we propose a norm-minimizing scalarization and a primal algorithm based on solving it iteratively. Both the scalarization and the algorithm are free of direction bias. We also propose a modification of this algorithm, obtained by introducing a suitable compact subset of the upper image, which allows us to prove, for the first time, the finiteness of an algorithm for convex vector optimization.
We then construct a geometric dual algorithm based on a geometric duality relation between the primal and dual images. Different from the existing approaches, the dual algorithm does not depend on a fixed direction parameter and is hence aligned with the proposed primal algorithm. For a primal problem with a q-dimensional objective space, we formulate a dual problem with a (q+1)-dimensional objective space. Consequently, the resulting dual image is a convex cone. The constructed geometric dual algorithm gives a finite ϵ-solution to the dual problem; moreover, it gives a finite weak ϵ’-solution to the primal problem, where ϵ’ is determined by ϵ and the structure of the underlying ordering cone.
We perform some computational tests and observe promising results for the proposed algorithms.
Çağın Ararat:
A Mixed-Integer Programming Approach for Computing Systemic Risk Measures
Systemic risk is concerned with the instability of an interconnected financial system. In the literature, several systemic risk measures have been proposed to determine capital requirements for the members subject to joint risk considerations. We address the problem of computing systemic risk measures for networks with sophisticated clearing mechanisms. More precisely, going beyond the standard Eisenberg-Noe model, we consider default costs as in the Rogers-Veraart model as well as operating cash flows that are unrestricted in sign. We propose novel mixed-integer programming problems for calculating clearing vectors in this signed Rogers-Veraart model. By combining the network model with a polyhedral acceptance set for total payments, we obtain a set-valued systemic risk measure which has nonconvex values in general. On a general probability space, we provide theoretical results for the weighted-sum and Pascoletti-Serafini scalarizations of the systemic risk measure. Then, through computational experiments on a finite probability space, we assess the sensitivity of the systemic risk measure with respect to structural parameters.
This is joint work with Nurtai Meimanjan.
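For orientation, the baseline Eisenberg-Noe clearing problem mentioned above can be solved by simple fixed-point iteration; a minimal sketch of the standard model follows (the default costs and signed cash flows of the talk's extension, which necessitate the mixed-integer formulation, are omitted).

```python
# Eisenberg-Noe clearing: bank i owes bar_p[i] in total, split across the
# network by relative liabilities Pi[i][j]; clearing payments solve
#   p = min(bar_p, e + Pi^T p)   (componentwise).
# Picard iteration from p = bar_p converges to the greatest clearing vector.

def clearing_vector(bar_p, Pi, e, tol=1e-12, max_iter=10_000):
    n = len(bar_p)
    p = list(bar_p)
    for _ in range(max_iter):
        # Inflow to bank i: endowment plus payments received from others.
        inflow = [e[i] + sum(Pi[j][i] * p[j] for j in range(n)) for i in range(n)]
        new_p = [min(bar_p[i], max(0.0, inflow[i])) for i in range(n)]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
    return p

# Two banks owing only each other; bank 0 has no outside cash:
bar_p = [10.0, 3.0]               # total liabilities
Pi = [[0.0, 1.0], [1.0, 0.0]]     # each owes everything to the other
e = [0.0, 2.0]                    # outside endowments
p = clearing_vector(bar_p, Pi, e)  # bank 0 defaults, paying 3.0 of its 10.0
```

Once default costs are added (Rogers-Veraart), the payment map becomes discontinuous in the default indicator, which is why the talk resorts to mixed-integer programming rather than such a fixed-point iteration.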