
Vienna Congress on Mathematical Finance - VCMF 2016
Mon–Wed, Sept. 12–14, 2016

VCMF Educational Workshop
Thu–Fri, Sept. 15–16, 2016

Program and Abstracts




Panel Discussion

"Role of mathematical models in financial risk management and regulation (broadly defined)"

We bring together representatives from regulators, industry, and academia:

Panellists:

  • Gabriela de Raaij
    Head of "Off-Site Supervision Division – Significant Institutions", Oesterreichische Nationalbank (OeNB)
  • Thomas Steiner
    Member of the Managing Board and Managing Director of the Division "Risk Management / Operations", Austrian Treasury (OeBFA)
  • Johann Strobl
    Member of the Managing Board, Deputy CEO, Chief Risk Officer, Raiffeisen Bank International AG (RBI)
  • Josef Teichmann
    Full Professor of Financial Mathematics, ETH Zürich

Moderator:

  • Walter Schachermayer
    Full Professor of Mathematics, University of Vienna

Abstract:

In the aftermath of the financial crisis, the role of mathematical models in financial risk management was heavily debated. For instance, Lord Turner, then chairman of the British Financial Services Authority, named "misplaced reliance on apparently sophisticated maths" as one (of many) sources of the crisis. On the other hand, many academics argue that better mathematical models and better-trained quantitative analysts are needed to improve risk management systems in the financial industry. The panel discussion takes up this debate and reviews it from many angles (industry, regulators, and academia). Panellists will also discuss implications of the crisis for research and teaching in quantitative and mathematical finance and risk management.



Plenary Talks


Invited plenary talk: LC.0.100, Wed, 9:00

Freddy Delbaen  (ETH & UZH, Zurich, and Tokyo Metropolitan Univ.)

Risk Measures on Orlicz Spaces

The usual definition of monetary utility functions is given on the space $L^\infty$. For dual spaces $L^\Phi$ of Orlicz-$\Delta_2$ spaces $L^\Psi$, there are two generalisations: one uses norm-bounded sets, the other order intervals. We show that a monetary utility function has a dual representation with a penalty function defined on $L^\Phi$ if the utility function is upper semicontinuous for the convergence in probability on order intervals. More precisely, we show that a convex set $C \subseteq L^\Phi$ is $\sigma(L^\Phi, L^\Psi)$-closed if, for each order interval $[-\eta, \eta] = \{\xi \mid -\eta \le \xi \le \eta\}$ (with $0 \le \eta \in L^\Phi$), the intersection $C \cap [-\eta, \eta]$ is closed for the convergence in probability. The result is based on the following technical lemma: for a norm-bounded sequence $(\xi_n)_n$ in $L^\Phi$ which converges in probability to $0$, there exist forward convex combinations $\zeta_n \in \operatorname{conv}\{\xi_n, \xi_{n+1}, \ldots\}$ as well as an element $\eta \in L^\Phi$ such that $\zeta_n \to 0$ almost surely and $|\zeta_n| \le \eta$.


Invited plenary talk: LC.0.100, Mon, 9:00

Hans Föllmer  (Humboldt-Universität zu Berlin)

Conditional Aspects of Systemic Risk

We discuss some mathematical aspects of systemic risk in large financial networks, in particular some problems of consistency under conditioning that are motivated by the interplay between local and global risk assessment.
We focus on the systemic risk measures proposed by Chen, Iyengar, and Moallemi (2013). They involve an aggregation procedure and a convex risk measure that is applied to the aggregate position. Such a structural decomposition can be regarded as a consistency property. From this point of view, the dual representation of a systemic risk measure reduces to a consistency criterion that is well known in the context of time-consistency. We then study the structure of systemic risk measures that are consistent with a given family of local conditional risk measures for smaller subsystems, and in particular the appearance of phase transitions at the global level. This may be seen as a non-linear extension of the analysis of Gibbs measures in statistical mechanics.


Invited plenary talk: LC.0.100, Tue, 9:00

Peter Friz  (TU and WIAS Berlin)

Option Pricing in the Moderate Deviations Regime

We consider call option prices in diffusion models close to expiry, in an asymptotic regime ("moderately out of the money") that interpolates between the well-studied cases of at-the-money options and out-of-the-money fixed-strike options. First and higher order small-time moderate deviation estimates of call prices and implied volatility are obtained. The expansions involve only simple expressions of the model parameters, and we show in detail how to calculate them for generic stochastic volatility models. (Joint work with S. Gerhold and A. Pinter.) Time permitting, I will report on similar results for classes of rough volatility models.


Invited plenary talk: LC.0.100, Tue, 9:50

Emmanuel Gobet  (École Polytechnique, Paris)

Data-driven regression Monte Carlo

Our goal is to solve certain dynamic programming equations associated with a given Markov chain X, using a regression-based Monte Carlo algorithm. This type of equation arises when computing the price of American options or solving non-linear pricing rules.
More specifically, we assume that the model for X is not known in full detail: only a root sample of size M of this process is available. We investigate a new method that bypasses the calibration step.
By a stratification of the space and a suitable choice of a probability measure ν, we design a new resampling scheme that allows us to compute local regressions (on basis functions) in each stratum. The combination of stratification and resampling makes it possible to compute the solution of the dynamic programming equation (possibly in large dimensions) using only a relatively small set of root paths. To assess the accuracy of the algorithm, we establish non-asymptotic error estimates in $L^2(\nu)$. Our numerical experiments illustrate the good performance, even with M = 20–40 root paths.
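
For readers less familiar with regression Monte Carlo, the following minimal Python sketch prices a Bermudan put by backward dynamic programming with polynomial regressions (plain Longstaff-Schwartz on freshly simulated paths; the stratified resampling scheme and the root-sample setting of the talk are not reproduced here, and all parameter values are illustrative):

import numpy as np

def bermudan_put_lsmc(S0=1.0, K=1.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=50_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Geometric Brownian motion paths; exercise dates dt, 2*dt, ..., T.
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = lambda s: np.maximum(K - s, 0.0)
    V = payoff(S[:, -1])                       # value at maturity
    for t in range(n_steps - 2, -1, -1):       # backward induction
        V *= np.exp(-r * dt)                   # discount one step
        itm = payoff(S[:, t]) > 0              # regress on in-the-money paths
        if not itm.any():
            continue
        A = np.vander(S[itm, t], 4)            # cubic polynomial basis
        beta, *_ = np.linalg.lstsq(A, V[itm], rcond=None)
        exercise = payoff(S[itm, t]) > A @ beta   # immediate > continuation
        V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])
    return np.exp(-r * dt) * V.mean()          # discount first date to time 0

The regression replaces the conditional expectation in the dynamic programming equation by a projection on finitely many basis functions; the contribution of the talk is to reuse a small set of observed root paths via stratified resampling instead of simulating from a calibrated model.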


Invited plenary talk: LC.0.100, Wed, 9:50

Mathieu Rosenbaum  (École Polytechnique, Paris)

The characteristic function of rough Heston models

It has been recently shown that rough volatility models, where the volatility is driven by a fractional Brownian motion with small Hurst parameter, provide very relevant dynamics in order to reproduce the behavior of both historical and implied volatilities. However, due to the non-Markovian nature of the fractional Brownian motion, they raise new issues when it comes to derivatives pricing. Using an original link between nearly unstable Hawkes processes and fractional volatility models, we compute the characteristic function of the log-price in rough Heston models. In the classical Heston model, the characteristic function is expressed in terms of the solution of a Riccati equation. Here we show that rough Heston models exhibit quite a similar structure, the Riccati equation being replaced by a fractional Riccati equation.
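
Schematically, with coefficients $c_0(u)$, $c_1(u)$, $c_2$ determined by the Heston parameters (the precise parametrisation is that of the paper), the classical characteristic function involves the solution of the Riccati ODE

\[ \partial_t \psi(u,t) = c_0(u) + c_1(u)\,\psi(u,t) + c_2\,\psi(u,t)^2 , \]

which in the rough case is replaced by a fractional Riccati equation of order $\alpha = H + 1/2 \in (1/2, 1)$,

\[ D^\alpha \psi(u,t) = c_0(u) + c_1(u)\,\psi(u,t) + c_2\,\psi(u,t)^2 , \]

where $D^\alpha$ denotes a fractional derivative.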

This is joint work with Omar El Euch.


Invited plenary talk: LC.0.100, Wed, 16:00

Josef Teichmann  (ETH Zurich)

Affine processes and non-linear (partial) differential equations

Affine processes have been used extensively to model financial phenomena, since their marginal distributions are very tractable from an analytic point of view (up to the solution of a non-linear differential equation). It is well known from works of Dynkin, McKean, LeJan, and Sznitman that one can turn this point of view around and represent solutions of non-linear PDEs by affine processes. Recent advances in mathematical finance in this direction have been contributed by Henry-Labordere and Touzi. We shall provide some general theory in this direction from the affine point of view and introduce stochastic representations of fully non-linear PDEs.
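
For orientation, the analytic tractability referred to is the affine transform formula: for an affine process $X$,

\[ \mathbb{E}\big[\, e^{\langle u, X_T\rangle} \,\big|\, X_t = x \big] = \exp\big( \phi(T-t,u) + \langle \psi(T-t,u), x \rangle \big), \]

where $\phi$ and $\psi$ solve generalized Riccati equations, i.e. the non-linear differential equations mentioned above.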

Joint work with Georg Grafendorfer and Christa Cuchiero.


Invited plenary talk: LC.0.100, Mon, 9:50

Almut Veraart  (Imperial College London)

Modelling multivariate serially correlated count data in continuous time

A new continuous-time framework for modelling serially correlated count and integer-valued data is introduced in a multivariate setting. The main modelling component is a multivariate integer-valued trawl process, which is obtained by kernel smoothing of an integer-valued Lévy basis. The key feature making trawl processes highly suitable for applications is the fact that their marginal distribution and their serial dependence can be modelled independently of each other. We demonstrate the flexibility of this new modelling paradigm by presenting various ways of describing both serial and cross-sectional dependence.
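
For intuition, here is a minimal univariate special case, assuming a homogeneous Poisson Lévy basis with intensity nu and an exponential trawl function d(u) = exp(-rho u), so that the marginal law is Poisson(nu/rho) and the autocorrelation at lag h is exp(-rho h); the multivariate models of the talk are richer (parameter names are illustrative):

import numpy as np

def poisson_exponential_trawl(nu=2.0, rho=0.5, T=100.0, dt=0.1,
                              burn_in=40.0, seed=0):
    # Poisson basis points (s, x) on [-burn_in, T] x (0, 1] with intensity nu.
    rng = np.random.default_rng(seed)
    n = rng.poisson(nu * (T + burn_in))
    s = rng.uniform(-burn_in, T, size=n)
    x = rng.uniform(0.0, 1.0, size=n)
    # X_t counts the basis points inside the trawl set
    # A_t = {(s, x) : s <= t, 0 < x <= exp(-rho * (t - s))}.
    grid = np.arange(0.0, T, dt)
    X = np.array([np.sum((s <= t) & (x <= np.exp(-rho * (t - s))))
                  for t in grid])
    return grid, X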

Moreover, we develop efficient methods for statistical inference within the trawl framework. These methods have been tested in detailed simulation studies, and their finite-sample performance turns out to be very good. Finally, we apply the new methodology to financial data.



Plenary Lectures at the Educational Workshop



Invited lecture: TC.2.01, Thu, 16:00 & Fri, 10:50

Nicole Bäuerle  (KIT, Germany)

Markov Decision Processes with Applications to Finance and Insurance

Markov Decision Processes are controlled Markov processes in discrete time. They appear in various fields of application, e.g. economics, finance, operations research, engineering, and biology. The aim is to maximize the expected (discounted) reward of the process over a given time horizon. We consider problems with arbitrary (Borel) state and action spaces, with both finite and infinite time horizon. Solution methods and the Bellman equation are discussed, as well as the existence of optimal policies. For problems with infinite horizon we give convergence conditions and present solution algorithms like Howard's policy improvement or linear programming. The statements and results are illustrated by examples from finance and insurance, like consumption-investment problems and dividend pay-out problems.
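
As a small illustration of the Bellman equation on finite state and action spaces (a toy sketch in Python, not taken from the lecture material; array shapes are illustrative):

import numpy as np

def value_iteration(P, r, beta=0.95, tol=1e-10):
    # P[a, i, j]: probability of moving from state i to j under action a.
    # r[i, a]: one-period reward in state i under action a.
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman operator: Q(a, i) = r(i, a) + beta * E[V(next) | i, a].
        Q = r.T + beta * np.einsum('aij,j->ai', P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # value function, optimal policy
        V = V_new

The maximization over actions and the expectation over transitions mirror the Bellman equation $V(x) = \sup_a \{\, r(x,a) + \beta\, \mathbb{E}[V(X_1) \mid X_0 = x, a] \,\}$ discussed in the lecture.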

In the second lecture we investigate the problem of maximizing a certainty equivalent of the total or discounted reward which is generated by a Markov Decision Process. The certainty equivalent is defined by $U^{-1}(\mathbb{E}\,U(X))$, where U is an increasing function. In contrast to a risk-neutral decision maker, this optimization criterion takes the variability of the reward into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We illustrate our results with the help of a risk-sensitive dividend pay-out problem.


Invited lecture: TC.2.01, Thu, 14:00 & Fri, 9:00

Alexander McNeil (Heriot-Watt University, Edinburgh)

Market Risk Models and Backtesting

The outcome of the "Fundamental Review of the Trading Book" (FRTB) is that the capital requirement for banks using an internal model approach for their trading books will be based on the expected shortfall (ES) risk measure. However, the process of gaining internal model approval will continue to be based on backtesting value-at-risk (VaR) estimates at the 99% level and the approval process will be extended to individual trading desk level; desks that submit unsatisfactory backtest results may lose internal model approval. The Basel documentation also suggests that banks will be expected to go beyond the basic backtesting requirements by considering VaR exceptions at multiple confidence levels, tests based on expected shortfall and tests based on so-called realized p-values.

To understand these regulatory developments better, we will look at the following topics in this educational workshop:

  1. methods used by banks to measure the market risks in their trading books, including the risk-factor mapping process and the statistical and econometric modelling methods used to estimate risk measures like VaR;
  2. the current backtesting regime based on VaR exceptions and its shortcomings;
  3. the recent academic debate surrounding the backtesting of alternative risk measures, such as expected shortfall;
  4. more advanced backtesting approaches based on realized p-values that can address some of the ambitions of regulators as expressed in the FRTB.

The workshop will be partly based on material from the book "Quantitative Risk Management: Concepts, Techniques & Tools" (PUP, 2015) by McNeil, Frey & Embrechts, as well as new research. Some R examples will be integrated into the presentation.
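
As a concrete instance of the exception-based backtesting in item 2, a minimal unconditional-coverage (Kupiec proportion-of-failures) test could look as follows; this sketch is in Python, whereas the workshop itself uses R, and the function name is illustrative:

import numpy as np
from scipy import stats

def kupiec_pof_test(losses, var_forecasts, alpha=0.99):
    # Exception indicator: realized loss exceeds the VaR forecast.
    exceptions = losses > var_forecasts
    n, x = len(exceptions), int(exceptions.sum())
    p = 1.0 - alpha                  # expected exception probability
    p_hat = x / n                    # observed exception frequency
    # Likelihood ratio test of p_hat against p (assumes 0 < x < n).
    lr = -2.0 * (x * np.log(p) + (n - x) * np.log(1.0 - p)
                 - x * np.log(p_hat) - (n - x) * np.log(1.0 - p_hat))
    return x, lr, stats.chi2.sf(lr, df=1)    # exceptions, LR stat, p-value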


Invited lecture: TC.2.01, Thu, 9:10 & Fri, 13:30

Johannes Muhle-Karbe  (University of Michigan)

Option Pricing and Hedging with Model Uncertainty

The starting point for classical option pricing theory is a probabilistic model for the future evolution of the underlying asset. In reality, every such model is of course at best a useful approximation. Hence, it is important to assess the susceptibility of classical results to model uncertainty and to derive robust decision rules that take it into account in an appropriate manner.

In these lectures, we discuss some approaches to tackle this problem. We start from the so-called "uncertain volatility model", where one considers a whole class of possible scenarios for the volatility process of the underlying. The goal then is to determine robust superhedging strategies that eliminate all risk in all of these conceivable scenarios.

We then move on to more moderate attitudes towards uncertainty, where different scenarios are not treated in the same way but instead weighted by their plausibility, measured in terms of their "distance" from a given reference model.

Finally, we discuss how the above results change if liquidly traded vanilla options are available as additional instruments for static or dynamic hedging.

References:
[1] Herrmann, S. and Muhle-Karbe, J. and Seifried, F. (2015): Hedging with small uncertainty aversion. Preprint, available at http://ssrn.com/abstract=2625965.
[2] Herrmann, S. and Muhle-Karbe, J. (2016): Model uncertainty, recalibration, and the emergence of delta-vega hedging. Preprint, available at http://ssrn.com/abstract=2694718.


Invited lecture: TC.2.01, Thu, 11:10 & Fri, 15:20

Peter Tankov  (Université Paris-Diderot)

Asymptotic Methods for Optimal Tracking: Lower Bounds, Feedback Strategies and Applications in Finance

In this lecture, we shall present recently developed methods for approximate solution of stochastic control problems in the asymptotic regime where the costs for applying the control are small. We consider the problem of tracking a target whose dynamics is modeled by a continuous Itô semimartingale. The aim of the controller is to minimize both deviation from the target and tracking efforts.

We shall first establish the existence of asymptotic lower bounds for this problem, depending on the cost structure. These lower bounds can be related to the time-average control of Brownian motion, which is characterized as a deterministic linear programming problem. Furthermore, we shall provide a comprehensive list of examples for which the lower bound is sharp and is attained by an explicit feedback strategy. Finally, applications to various control problems arising in mathematical finance (option hedging in discrete time, utility maximization with transaction costs) will be discussed.

This lecture is based on the following papers:
[1] Jiatu Cai, Mathieu Rosenbaum and Peter Tankov. Asymptotic Lower Bounds for Optimal Tracking: a Linear Programming Approach. Preprint, arXiv:1510.04295.
[2] Jiatu Cai, Mathieu Rosenbaum and Peter Tankov. Asymptotic Optimal Tracking: Feedback Strategies. Preprint, arXiv:1603.09472.



Invited Talks


Invited talk: LC.0.100, Tue, 14:00

Beatrice Acciaio  (London School of Economics)

Model-independent pricing with additional information: a Skorokhod embedding approach

We analyze the pricing problem of an agent having additional (potentially insider) information on the market in a model-independent setup. Following Hobson's approach we reformulate this problem as a constrained Skorokhod embedding problem, and show a natural superreplication result. Furthermore, we establish a monotonicity principle for the constrained SEP, giving a geometric characterisation of the support of the optimisers (in the spirit of Beiglboeck, Cox and Huesmann (2014)), which allows us to link the additional information with geometric properties of the optimisers of the constrained embedding problem. Surprisingly, for certain types of information the absence of arbitrage can be easily checked by considering only unconstrained solutions. We give some numerical evidence of the value of the informed agent's information, in terms of the change in price of variance options.

The talk is based on joint work with Alex Cox and Martin Huesmann.


Invited talk: LC.0.100, Mon, 14:00

Elisa Alos (Pompeu Fabra University Barcelona)

On the link between the implied volatility skew and the Malliavin derivative operator

In this talk, we use Malliavin calculus techniques to obtain an expression for the short-time behavior of the at-the-money implied volatility skew. This expression depends on the derivative of the volatility in the sense of Malliavin calculus. We will show that this result can be useful in applications, such as in modelling problems or in option pricing approximations.


Invited talk: LC.0.100, Mon, 14:45

Christian Bayer  (WIAS Berlin)

Pricing under rough volatility

From an analysis of the time series of realized variance (RV) using recent high-frequency data, Gatheral, Jaisson and Rosenbaum (2014) previously showed that log-RV behaves essentially as a fractional Brownian motion with Hurst exponent H of order 0.1, at any reasonable time scale. The resulting Rough Fractional Stochastic Volatility (RFSV) model is remarkably consistent with financial time series data. We now show how the RFSV model can be used to price claims on both the underlying and integrated variance. We analyze in detail a simple case of this model, the rBergomi model. In particular, we find that the rBergomi model fits the SPX volatility markedly better than conventional Markovian stochastic volatility models, and with fewer parameters. Finally, we show that actual SPX variance swap curves seem to be consistent with model forecasts, with particularly dramatic examples from the weekend of the collapse of Lehman Brothers and the Flash Crash.
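
For reference, the rBergomi variance process is commonly written (in the notation of the underlying paper) as

\[ v_t = \xi_0(t)\, \exp\!\Big( \eta\,\sqrt{2H} \int_0^t (t-s)^{H-\frac{1}{2}}\, dW_s - \frac{\eta^2}{2}\, t^{2H} \Big), \]

with forward variance curve $\xi_0$, vol-of-vol parameter $\eta$, Hurst parameter $H \approx 0.1$, and $W$ a Brownian motion correlated with the one driving the spot.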

Joint work with Peter K. Friz and Jim Gatheral.
For details see: http://ssrn.com/abstract=2554754.


Invited talk: LC.0.100, Mon, 16:00

Agostino Capponi  (Columbia University, New York)

Liability Concentration and Systemic Losses in Financial Networks

We develop a novel framework to assess the impact of structural policies targeting systemic risk. We focus on policies centered around the notion of concentration of interbank exposures, which are at the heart of too-big-to-fail risk. We quantify concentration by applying the majorization order to the liability matrix that captures the interconnectedness of banks in the financial network.
We develop notions of highly and lowly capitalized networks to bring out the qualitatively different implications of exposure concentration on the systemic loss profile.
Our analysis suggests that systemic losses increase as interbank exposures become more concentrated if the system is highly capitalized, while the opposite holds if the system is lowly capitalized. We conduct an empirical analysis of the European sovereign banking network, and find that it is persistently highly capitalized. This empirical finding, along with the concentration results, supports structural policies put forward by the Basel Committee aimed at limiting the size of gross exposures toward individual counterparties.

Joint work with David D. Yao and Peng Chu Chen.


Invited talk: LC.0.100, Tue, 14:45

Patrick Cheridito  (ETH Zurich)

Duality formulas for robust pricing and hedging in discrete time

In this paper we derive robust super- and subhedging dualities for contingent claims that can depend on several underlying assets. In addition to strict super- and subhedging, we also consider relaxed versions which, instead of eliminating the shortfall risk completely, aim to reduce it to an acceptable level. This yields robust price bounds with tighter spreads. As applications we study strict super- and subhedging with general convex transaction costs and trading constraints as well as risk based hedging with respect to robust versions of the average value at risk and entropic risk measure. Our approach is based on representation results for increasing convex functionals and allows for general financial market structures. As a side result it yields a robust version of the fundamental theorem of asset pricing.

Joint work with Michael Kupper and Ludovic Tangpi.
For details see: http://arxiv.org/abs/1602.06177.


Invited talk: LC.0.100, Tue, 16:00

Ulrich Horst  (Humboldt-Universität zu Berlin)

Mathematical Modeling of Limit Order Books

We review recent limit theorems for limit order books with state-dependent order arrival dynamics. We discuss both "law of large numbers" results, where the limiting dynamics can be described by non-linear, non-local PDEs, and diffusion-type scalings, where the limiting dynamics follows a coupled PDE-SPDE system.

The talk is based on joint work with Christian Bayer, Dörte Kreher and Jinniao Qiu.


Invited talk: LC.0.100, Wed, 11:55

Rüdiger Kiesel  (University of Duisburg-Essen)

Modelling day-ahead and intraday electricity markets

Trading in the intraday electricity markets has increased rapidly since the opening of the market. This may be driven by the need of photovoltaic and wind power operators to balance their production forecast errors, i.e. deviations between forecasted and actual production. Evidence for this is a jump in the volume of intraday trading when the direct marketing of renewable energy was introduced. Furthermore, there may be a generally increased interest in intraday trading activities due to proprietary trading. We will start with an overview of recent developments. Our focus will be on the structure of intraday trading of electricity and the identification of the price-driving factors. In addition, we will discuss some aspects of market making in the intraday markets.


Invited talk: LC.0.100, Tue, 16:45

Dörte Kreher  (Humboldt-Universität zu Berlin)

On a Functional Convergence Theorem for Interpolated Markov Chains to an Infinite Dimensional Diffusion

In this talk we discuss a functional convergence theorem for a certain class of discrete Markov dynamics, which appear in the modeling of state dependent limit order books. It is shown that under suitable assumptions the sequence of discrete models is relatively compact in a localized sense and that any limit point satisfies a certain infinite dimensional SDE. Under additional assumptions on the dependence structure we then construct a class of models, which fit in the general framework, such that the limiting SDE admits a unique solution.

The talk is based on joint work with Ulrich Horst.


Invited talk: LC.0.100, Wed, 11:10

Antonis Papapantoleon  (University of Mannheim, Germany)

An equilibrium model for spot and forward prices of commodities

We consider a model that consists of financial investors and producers of a commodity. Producers store some of their production for future sale and short futures contracts, while speculators invest in futures to diversify their portfolios. The forward and spot equilibrium commodity prices are endogenously derived as the outcome of the interaction between producers and investors. We provide semi-explicit expressions for the equilibrium prices and analyze their dependence on the model parameters.


Invited talk: LC.0.100, Mon, 16:45

Philipp Schönbucher (Financialytic GmbH, Bonn)

Free Lunches that Nobody is Eating - Market Dislocations since 2008

Since the financial crisis of 2008, many violations of the fundamental "absence of arbitrage" assumptions of the theory of mathematical finance have kept appearing with astonishing persistence and apparently without being exploited away. These dislocations include large and persistent negative and (more recently) positive bond/CDS basis, cross-currency swap basis, index/component basis, and swap spreads. We analyse the genesis of these dislocations and the specific obstacles that prevented (or still prevent) market participants from arbitraging them away. These obstacles stem from new and increasingly binding constraints in funding, regulation, and accounting, most of which have not yet been incorporated properly in asset pricing models.


Invited talk: LC.0.100, Wed, 15:00

Jorge P. Zubelli  (IMPA, Rio de Janeiro)

Calibration of Local-Volatility Models from Option Data for Commodities

Local volatility models are extensively used and well-recognized for hedging and pricing in financial markets. They are frequently used, for instance, in the evaluation of exotic options so as to avoid arbitrage opportunities with respect to other instruments.

Derivatives for commodities are extensively traded instruments in financial markets, especially in energy and oil markets. They are particularly challenging since the spot prices are not directly observable, thus requiring a further modeling effort.

The ill-posed character of local volatility surface calibration from market prices requires the use of regularization techniques either implicitly or explicitly. Such regularization techniques have been widely studied for a while and are still a topic of intense research. We have employed convex regularization tools and recent inverse problem advances to deal with the local volatility calibration problem.

In this talk we shall describe ongoing work on the use of local volatility models in the context of commodity markets, in particular energy and oil markets. This work is part of an ongoing collaboration with V. Albani (Vienna), U. Ascher (Toronto), and Xu Yang (IMPA).



Contributed Talks


Contributed talk: TC.2.03, Wed, 11:10

Suhan Altay (TU Wien)

Term structure of defaultable bonds: an approach with Jacobi processes

In this study, we propose a novel defaultable term structure model that is capable of capturing the negative instantaneous correlation between credit spreads and the risk-free rate documented in the empirical literature, while preserving the positivity of the default intensity and the risk-free rate. Given a multivariate Jacobi (Wright-Fisher) process and a certain functional, we are able to compute the zero-coupon bond prices, both defaultable and default-free, in a relatively tractable way, by using the exponential change of measure technique with the help of the 'carré du champ' operator, as well as by using the transition density function of the process. The resulting formula involves series of ratios of gamma functions together with fast-converging exponential decay functions. The main advantage of the proposed reduced-form model is that it provides a more flexible correlation structure between the state variables governing the (defaultable) term structure within a relatively tractable framework for bond pricing. Moreover, in higher dimensions one does not need to rely on numerical schemes for the associated differential equations, which may be difficult to handle (e.g., multi-dimensional Riccati equations in affine and quadratic term structure frameworks), because the transition density function of the state variables is given in relatively explicit form.
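
For orientation, the one-dimensional Jacobi (Wright-Fisher) diffusion, which remains in $[0,1]$ and can therefore support positive and bounded intensities and rates, reads

\[ dX_t = \kappa(\theta - X_t)\,dt + \sigma\sqrt{X_t(1 - X_t)}\,dW_t, \qquad X_0 \in [0, 1]; \]

the multivariate specification used in the talk couples several such coordinates.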

This is joint work with Uwe Schmock.


Contributed talk: TC.3.01, Mon, 11:40

Julio D. Backhoff Veraguas (TU Wien)

Root and Rost embeddings, their symmetry, and the robust superhedging of variance options

Recent works by A. Cox and J. Wang have highlighted the connection between the Rost/Root solutions to the Skorokhod Embedding Problem and certain optimal stopping problems. In this talk we present a simple probabilistic argument for this connection (as opposed to the original analytical ones) which furthermore implies an interesting relationship/symmetry between the Rost and Root solutions. Within this framework we shall further explore the associated robust superhedging problem for variance options.

This is joint work with Mathias Beiglböck, Alexander Cox and Martin Huesmann.


Contributed talk: TC.3.01, Mon, 12:10

Andrea Barletta (Aarhus University)

Retrieving Risk-Neutral Densities Embedded in VIX Options: a Non-Structural Approach

We propose a non-structural pricing method to retrieve the VIX risk-neutral density (RND) directly from related option prices. Non-structural means that the method only imposes mild regularity conditions on the shape of the RND and therefore entails reduced exposure to misspecification risk. Our approach is based on the classic methodology of expanding a density function in a finite sum of orthogonal polynomials weighted by a known kernel function, where the expansion coefficients are inferred from option prices through ordinary least squares regression. Our methodology can be thought of as an alternative to the classic works of Madan and Milne (1994), Coutant et al. (2001), and Jondeau and Rockinger (2001), where the kernel is not a Gaussian distribution but is supported on the positive real axis. In particular, we extend the classic Laguerre expansions, used e.g. by Filipovic et al. (2013), by introducing a family of kernels that encompasses well-known distributions such as the exponential, the Gamma, the Weibull, and the GIG. To handle the problem of multi-collinearity arising as the expansion order increases, we rely on principal component analysis. Positivity and unitary mass of the estimated RND are guaranteed by construction. Numerical illustrations are provided, with the estimation being performed on option prices generated by a benchmark RND, as well as on real market data.
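
The backbone of such an expansion-plus-regression scheme can be sketched in a few lines (a naive illustration with an exponential kernel and Laguerre polynomials; unlike the paper's construction, positivity and unit mass of the fitted density are not enforced here, and no principal component step is included):

import numpy as np
from numpy.polynomial import laguerre

def fit_rnd_from_calls(strikes, call_prices, n_terms=5, x_max=10.0, n_x=4000):
    # Approximate the density as f(x) = w(x) * sum_k c_k L_k(x), with
    # exponential kernel w(x) = exp(-x) and Laguerre polynomials L_k.
    x = np.linspace(0.0, x_max, n_x)
    w = np.exp(-x)
    basis = np.stack([laguerre.Laguerre.basis(k)(x) for k in range(n_terms)])
    # Undiscounted call price is linear in the coefficients:
    # C(K) = sum_k c_k * integral of (x - K)^+ w(x) L_k(x) dx.
    payoff = np.maximum(x[None, :] - np.asarray(strikes)[:, None], 0.0)
    design = payoff @ (w * basis).T * (x[1] - x[0])
    coef, *_ = np.linalg.lstsq(design, call_prices, rcond=None)
    return x, w * (coef @ basis)               # grid and fitted density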

This is joint work with Paolo Santucci de Magistris and Francesco Violante.


Contributed talk: TC.2.01, Wed, 11:40

Daniel Bartl (University of Konstanz)

Robust exponential hedging in discrete time

In this talk we focus on the robust exponential utility maximization problem with random endowment in discrete time. An investor is allowed to invest dynamically in the market without transaction costs and tries to maximize his/her worst-case expected exponential utility of the endowment plus terminal wealth with respect to a family of non-dominated probabilistic models. Under a no-arbitrage condition on the probabilistic models we establish the existence of an optimal trading strategy, defined simultaneously under all models. Further, under additional tightness and closedness assumptions on this family, we provide duality for measurable endowments and show that the (entropic) penalty function has a closed form.

The talk is based on joint work with Patrick Cheridito and Michael Kupper.


Contributed talk: TC.2.01, Wed, 11:10

Christoph Belak (University of Trier)

Utility Maximization in a Factor Model with Constant and Proportional Costs

We study the problem of maximizing expected utility of terminal wealth for an investor facing a mix of constant and proportional transaction costs. While the case of purely proportional transaction costs is by now well understood and existence of optimal strategies is known to hold for very general price processes extending beyond semi-martingales [1], the case of constant costs remains a challenge since the existence of optimal strategies is not even known in tractable models such as the Black-Scholes model. In this talk, we present a novel approach which allows us to construct optimal strategies in a multidimensional diffusion market with price processes driven by a factor process and for general lower-bounded utility functions.

The main idea is to characterize the value function associated with the optimization problem as the pointwise infimum V of a suitable set of superharmonic functions. The advantage of this approach is that the pointwise infimum inherits the superharmonicity property, which in turn allows us to prove a verification theorem for candidate optimal strategies under mild regularity assumptions on V. Indeed, for the verification procedure based on superharmonic functions to be applicable, it suffices that the pointwise infimum V is continuous.

In order to establish the continuity of V, we adapt the stochastic Perron's method [2] to our situation to show that V is a discontinuous viscosity solution of the associated quasi-variational inequalities. A comparison principle for discontinuous viscosity solutions then closes the argument and shows that V is continuous. With this, the verification theorem becomes applicable and it follows that the pointwise infimum V coincides with the value function and that the candidate optimal strategies are indeed optimal.

This is joint work with Sören Christensen.

References:
[1] Christoph Czichowsky and Walter Schachermayer. Portfolio optimisation beyond semi-martingales: Shadow prices and fractional Brownian motion. To appear in Annals of Applied Probability, 2016.
[2] Erhan Bayraktar and Mihai Sirbu. Stochastic Perron's method for Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim., 51(6):4274-4294, 2013.

A preprint of the paper is available at http://ssrn.com/abstract=2774697.


Contributed talk: TC.2.01, Mon, 16:00

Maxim Bichuch (Johns Hopkins University, Baltimore)

Optimal Investment with Transaction Costs and Stochastic Volatility

Two major financial market complexities are transaction costs and uncertain volatility, and we analyze their joint impact on the problem of portfolio optimization. When volatility is constant, the optimal investment problem with transaction costs has a long history, especially in the use of asymptotic approximations when the cost is small. Under stochastic volatility, but with no transaction costs, the Merton problem under general utility functions can also be analyzed with asymptotic methods. Here, we look at the finite-horizon optimal investment and consumption problem when both complexities are present, using separation-of-time-scales approximations. We find the first term of the asymptotic expansion, in the time-scale parameter, of the optimal value function, the optimal consumption, and the optimal strategy, for fixed small transaction costs. We give a proof of accuracy in the case of fast mean-reverting stochastic volatility. Additionally, we deduce the optimal long-term growth rate.

This is joint work with Ronnie Sircar.


Contributed talk: LC.0.100, Mon, 12:10

Tilmann Blümmel (TU Wien)

Understanding the structure of no arbitrage

The fundamental theorem of asset pricing (FTAP) relates the existence of an equivalent σ-martingale measure (non-emptiness of the set $\mathcal{M}_\sigma$ of such measures) to a no-arbitrage condition, the "no free lunch with vanishing risk" condition (NFLVR). The latter is equivalent to the conjunction of the classical "no arbitrage" condition (NA) and the "no unbounded profit with bounded risk" condition (NUPBR). For continuous semimartingales, (NUPBR) is equivalent to the structure condition (SC), which allows for an explicit characterization of the elements of $\mathcal{M}_\sigma$. Even more importantly, it provides a natural candidate for an equivalent σ-martingale measure, the so-called minimal martingale measure (MMM). Unfortunately, for non-continuous semimartingales the MMM is, if it exists, in general only a signed measure. Hence the following natural questions arise: Does there exist a natural candidate for an equivalent σ-martingale measure if the semimartingale is not continuous? Does there exist a characterization of the elements of $\mathcal{M}_\sigma$? Moreover, is the natural candidate, as in the case of a continuous semimartingale, related to a particular structure condition on the underlying semimartingale? The aim of the talk is to answer these questions for quasi-left-continuous semimartingales in a rather basic/didactic way that could be part of a course on continuous-time mathematical finance.


Contributed talk: TC.2.01, Tue, 14:30

Alice Buccioli (Aarhus University)

Constant Proportion Portfolio Insurance Strategies in Contagious Markets

In portfolio insurance, CPPI strategies are popular among investors, as they allow them to gear up the upside potential of a stock index while limiting its downside risk. From the issuer's perspective it is important to adequately assess the risks associated with the CPPI, both in order to charge the correct "gap" fees and for risk management.

The literature on CPPI modeling typically assumes diffusive or Lévy-driven dynamics for the risky asset underlying the strategy. In either case the self-contagious nature of asset prices is not taken into account, meaning that the models inherently underestimate the frequency of clustered large negative movements in returns. In this paper, we introduce self-exciting jumps in the underlying dynamics via Hawkes processes, in order to account for contagion while preserving analytical tractability. Within this richer framework we estimate measures of the risk involved in the practical implementation of discrete-time rebalancing rules governing the CPPI product.
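
A minimal sketch of the self-exciting ingredient, simulating a univariate Hawkes process with exponential kernel by Ogata's thinning algorithm (parameter names and values are illustrative, not those of the paper):

import numpy as np

def hawkes_times(mu=0.5, alpha=0.8, beta=1.5, T=100.0, seed=0):
    # Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i));
    # stationarity requires alpha < beta.
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while t < T:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)    # candidate next event
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if t < T and rng.uniform() <= lam_t / lam_bar:
            events.append(t)                   # accept with prob lam_t/lam_bar
    return np.array(events)

Each accepted jump raises the intensity, so jumps cluster; this is the contagion effect the paper builds into the index dynamics underlying the CPPI.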

Moreover, we gauge the CPPI performance by comparing different re-balancing rules that take into account both the presence of market frictions, such as transactions costs, and the stochastic nature of the index dynamics.

This is joint work with Thomas Kokholm.


Contributed talk: TC.2.01, Mon, 11:40

Ngoc Huy Chau (Alfréd Rényi Institute of Mathematics, Budapest)

Optimal investment with long memory processes

In a continuous-time market model, Larsen et al. (2014) prove that the value function of an optimization problem is Gâteaux differentiable with respect to the market price of risk. We push this idea further by proving that the value function has not only a Gâteaux derivative but also a Fréchet derivative on a certain domain. We then apply this new result to some particular market models with long-range dependence. More precisely, the long memory is characterized by a fractional Brownian motion with Hurst parameter H ∈ (0, 1). One of the main results of the talk is that the value function depends continuously on the Hurst parameter, which deepens our knowledge of the effect of memory in optimization. Some further applications are also introduced.

This is joint work with Miklos Rasonyi.


Contributed talk: TC.2.03, Mon, 12:10

Katia Colaneri (University of Perugia)

Locally risk-minimizing strategies for defaultable claims under incomplete information

We study the hedging problem for a defaultable claim with recovery at default time via the local risk-minimization approach, for investors who have restricted information on the market. Precisely, we consider a financial market model with a riskless asset (whose discounted price is equal to 1) and a risky asset whose (discounted) price is given by a geometric diffusion process. We also assume that the stock price process depends on an exogenous unobservable stochastic factor. On this market there is a defaultable claim with recovery at the default time τ, seen as a payment stream over the stochastic interval [0, T∧τ], where T represents the maturity of the contract. In this setting we assume that, at any time, investors may observe the risky asset price and know whether default has occurred or not.

However, the default intensity, as well as the drift in the risky asset price dynamics, is not directly observable. Therefore, we deal with an incomplete market model, and we choose to apply the local risk-minimization approach for payment streams in the context of partial information in order to find an optimal hedging strategy for the given defaultable claim. We model the default time τ as a random variable which is not a stopping time with respect to the available information. This requires an enlargement of the filtration which makes τ a stopping time. Moreover, we assume that the default time is not predictable and consider the so-called intensity-based approach (see, e.g., [2, 3]).

Locally risk-minimizing hedging strategies for European-type contingent claims can be characterized via the Föllmer-Schweizer decomposition of the random variable representing their payoff; see, e.g., [4, 5] for more details. Here, we provide an analogous description of the optimal strategies in the context of payment streams associated with defaultable claims under partial information. We assume that hedging stops after default: this allows us to work with stopped price processes and guarantees that the Brownian motion that drives the risky asset price dynamics remains a continuous martingale, and hence a Brownian motion, also with respect to the enlarged filtration, without assuming the martingale invariance property. Then, we introduce the (stopped) Föllmer-Schweizer decompositions under full and partial information and the corresponding Galtchouk-Kunita-Watanabe decompositions with respect to the minimal martingale measure, and characterize the optimal hedging strategy in terms of the integrands appearing in these decompositions. In this sense, we extend the results obtained in [1] to the partial information framework.

Moreover, as a further achievement, we characterize the optimal strategy under partial information in terms of the projection of the corresponding hedging strategy under full information onto the natural filtration of the risky asset price process, under the minimal martingale measure.

Finally, we discuss an example in the Markovian setting which allows us to compute the hedging strategy in a more explicit form via filtering results.

This is joint work with Claudia Ceci and Alessandra Cretarola.

References:
[1] Francesca Biagini and Alessandra Cretarola. Local risk-minimization for defaultable claims with recovery process. Applied Mathematics & Optimization, 65(3):293-314, 2012.
[2] Tomasz R. Bielecki and Marek Rutkowski. Credit risk: modeling, valuation and hedging. Springer Finance. Springer-Verlag Berlin, Heidelberg, New York, 2002.
[3] Christophette Blanchet-Scalliet and Monique Jeanblanc. Hazard rate for credit risk and hedging defaultable contingent claims. Finance and Stochastics, 8(1):145-159, 2004.
[5] Martin Schweizer. A guided tour through quadratic hedging approaches. In E. Jouini, J. Cvitanic, and M. Musiela, editors, Option Pricing, Interest Rates and Risk Management, pages 538-574. Cambridge University Press, 2001.


Contributed talk: LC.0.100, Mon, 11:10

David Criens (Technical University of Munich)

Martingality in Terms of Semimartingale Problems

The martingality of a local martingale lies at the heart of many fields of mathematical finance. The existence of an equivalent martingale measure, or more generally of changes of probability measures, is deeply connected to the question when a candidate density process is a true martingale. Thereby it relates to the absence of arbitrage and builds the foundation for risk-neutral pricing. On the other hand, strict local martingales are used to model financial bubbles. We raise the question how martingality of non-negative local martingales driven by Hilbert-space-valued semimartingales on stochastic intervals relates to path properties of the driving processes. This not only leads to valuable characterizations of martingality, but also reveals new existence and uniqueness results for semimartingale problems. As a case study we derive explicit conditions for the martingality of stochastic exponentials driven by infinite-dimensional Brownian motion.

This is joint work with Kathrin Glau.


Contributed talk: TC.3.01, Tue, 11:40

Ryan Donnelly (EPFL, Lausanne)

Insider Trading with Residual Risk

We consider an extension of the Kyle (1985) model in which the insider is risk-averse and does not have complete information about the terminal value of the traded asset. The simultaneous addition of both risk aversion and residual risk changes the nature of linear equilibrium for both the market-maker and the insider: the market-maker is now required to estimate the insider's inventory level in addition to the insider's private signal; the insider now bases her trades on her present inventory level and the market-maker's estimation of her inventory level, in addition to the market-maker's valuation error.

This is joint work with Pierre Collin-Dufresne.


Contributed talk: TC.2.01, Tue, 11:10

Karl-Theodor Eisele (University of Strasbourg)

Traded financial flows and their values

This paper treats the evaluation of traded securities in a financial market. Contrary to the classical theory of mathematical finance, our basic concept of a financial security is not the evolution of its market value, which leads to the well-known no-arbitrage theory; instead, we conceive of a security as its flow of payments at future times.

The market value of such a security stems from an evaluation of its financial flow via a risk assessment (the negative of a risk measure). However, the argument of the risk assessment is not the incremental flow of the security, but its cumulative counterpart.

A frictionless market evaluates portfolio compositions of the securities in a linear way. The two other important properties of the risk assessment are market sensitivity and time consistency. Together they imply the known no-arbitrage property.

The main result of the paper is a representation theorem for the market value of portfolios as a combination of the momentary financial flow and the future value of the portfolio. In the case where all flows are reduced to terminal payments, the theorem is equivalent to the well-known Dalang-Morton-Willinger theorem.


Contributed talk: TC.2.03, Wed, 15:00

Zehra Eksi Altay (WU Vienna)

Shall I sell or shall I wait: Optimal liquidation under partial information with feedback effects

We study the problem of a trader who wants to maximize the expected reward from liquidating a given position. We model the stock price dynamics as a pure jump process with local characteristics driven by an unobservable finite-state Markov chain and by the liquidation rate; this reflects uncertainty about the state of the market and feedback effects from trading, respectively. A model of this form captures typical features of high-frequency data. We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes (PDMPs for short), and we apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function and characterize the value function as the unique viscosity solution of the associated dynamic programming equation. The talk concludes with a detailed analysis of specific examples and numerical results illustrating the impact of partial information and of the feedback effect on the value function and on the optimal liquidation rate.

This is joint work with Katia Colaneri, Rüdiger Frey and Michaela Szolgyenyi.


Contributed talk: TC.2.03, Tue, 11:10

Morgan Escalera (Rose-Hulman Institute of Technology, Indiana)

Sovereign Adaptive Risk Modeling

In the wake of the 2008 financial crisis, the FSB (Financial Stability Board) and the BCBS (Basel Committee on Banking Supervision) created a list of Global Systemically Important Banks with the intention of determining which financial institutions were important enough to the global market that their failure would result in total systemic collapse. The purpose of this research is to use econometric analysis to create a model that generalizes the BCBS's five indicators and to apply these measures of systemic stress to governmental bodies. The five criteria are size, interconnectedness, cross-jurisdictional activities, complexity, and substitutability. Our model utilizes weighted directed graphs to simulate default scenarios of central banks in the system, as well as creating implied measures of default for each country based on five-year bond yields and CDS spreads. The original application of the model was a five-year time series tracking the troubled economy of Greece in the Eurozone, comparing its risk to European stability with that of the other member states of the monetary union.
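
A generic default-cascade step on a weighted directed exposure network, of the kind such simulations build on (a toy sketch with illustrative names, not the authors' calibrated model):

import numpy as np

def default_cascade(exposures, capital, shocked):
    # exposures[i, j]: amount node i is owed by node j; a node defaults
    # once its accumulated credit losses exceed its capital buffer.
    defaulted = np.zeros(len(capital), dtype=bool)
    defaulted[list(shocked)] = True
    while True:
        # Loss of each node: total exposure to currently defaulted nodes.
        losses = exposures[:, defaulted].sum(axis=1)
        newly = (losses > capital) & ~defaulted
        if not newly.any():
            return defaulted
        defaulted |= newly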

This is joint work with Wayne Tarrant.


Contributed talk: TC.2.03, Wed, 11:40

Tolulope Rhoda Fadina (ETH Zurich)

Credit risk with ambiguity on the default intensity

We introduce the concept of no-arbitrage in a credit risk market under ambiguity. We consider an intensity-based framework where we assume that the default intensity is strictly positive (an investor is always exposed to risk). This assumption is economically intuitive, as it is equivalent to an approach where at every time credit risk is present and not negligible. However, we consider the realistic case where the intensity is not precisely known, i.e. there is ambiguity about the intensity. By means of the Girsanov theorem, we start from a reference measure under which the intensity is equal to 1 and define equivalent measures P^h under which the intensity is h. Ambiguity is considered in the sense that h lies between an upper and a lower bound. From this viewpoint, the credit-risky case turns out to be similar to the case of drift uncertainty in the nonlinear expectation framework. In addition, we discuss good-deal bounds.

This is joint work with Thorsten Schmidt.


Contributed talk: TC.2.01, Mon, 14:00

Tobias Fissler (University of Bern)

Higher Order Elicitability: Expected Shortfall is jointly elicitable with Value at Risk - Implications for Backtesting

A statistical functional, such as the mean or the median, is called elicitable if there is a scoring function or loss function such that the correct forecast of the functional is the unique minimizer of the expected score. Such scoring functions are called strictly consistent for the functional. The elicitability of a functional opens the possibility to compare competing forecasts and to rank them in terms of their realized scores.

In the first part of this talk, we explore the notion of higher order elicitability, that is, we investigate the question of elicitability for higher-dimensional functionals. As a result of particular applied interest, we show that the pair (Value at Risk, Expected Shortfall), (VaR, ES) for short, is elicitable despite the fact that ES itself is not. More generally, we give a characterization of the class of strictly consistent scoring functions for this pair, making use of a higher-dimensional version of Osband's principle.

In the second part of the talk, we discuss the consequences of this result for backtesting ES-forecasts. We introduce comparative backtests of Diebold-Mariano type using a strictly consistent scoring function for the pair (VaR, ES). Comparative backtests open the possibility to choose a conservative null hypothesis in contrast to the current state of the art. Emphasizing our argument with a brief simulation study, we demonstrate that the change of the null hypothesis in comparative backtests amounts to a reversed onus of proof in backtesting decisions. This appears to be beneficial to all stakeholders, including banks, regulators, and society at large.
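
To make the comparative backtest concrete, the following sketch scores two (VaR, ES) forecasters with one convenient member of the strictly consistent class (often called the FZ0 loss, stated here for returns-denominated VaR and ES, both negative; an illustration, not necessarily the talk's specific choice) and forms a Diebold-Mariano-type statistic on the realized score differences:

import numpy as np
from scipy import stats

def fz0_loss(y, var, es, alpha):
    # FZ0 score for the pair (VaR, ES) at level alpha; y are returns,
    # var and es the forecasts, all in returns units with var, es < 0.
    hit = (y <= var).astype(float)
    return -hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0

def comparative_backtest(y, var1, es1, var2, es2, alpha=0.025):
    # t-statistic on mean score difference; significantly negative
    # values favor forecaster 1 (lower realized score is better).
    d = fz0_loss(y, var1, es1, alpha) - fz0_loss(y, var2, es2, alpha)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return t, 2.0 * stats.norm.sf(abs(t))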

This is joint work with Johanna F. Ziegel and Tilmann Gneiting.

References:
[1] T. Fissler and J. F. Ziegel (2016). Higher order elicitability and Osband's principle. Ann. Statist. 44 (4), 1680–1707.
[2] T. Fissler, J. F. Ziegel and T. Gneiting (2016). Expected Shortfall is jointly elicitable with Value at Risk – Implications for backtesting. Risk Magazine, January 2016.


Contributed talk: TC.2.03, Tue, 16:00

Niushan Gao (University of Lethbridge)

The Fatou property and w*-representations of risk measures

In 2002, Delbaen proved that a risk measure on $L_\infty$ satisfying the Fatou property admits a $w^*$-representation. It has since been an intriguing problem to extend this result to more general classes of underlying spaces. In this talk, we present a solution to this problem without making use of the C-property (see S. Biagini, M. Frittelli, On the extension of the Namioka-Klee theorem and on the Fatou property for risk measures). Precisely, we present a representation theorem for risk measures on dual Banach lattices that satisfy a suitable version of the Fatou property. This theorem applies, in particular, to risk measures on all Orlicz spaces over [0, 1] other than $L_1[0, 1]$. Our approach is essentially based on uo-convergence, a new tool from Banach lattice theory.

This is joint work with Foivos Xanthos.
A preprint is available at: http://arxiv.org/abs/1511.03159.


Contributed talk: TC.2.01, Tue, 14:00

Kathrin Glau (Technical University of Munich)

Magic Points in Finance: Empirical Integration for Parametric Option Pricing

We propose an interpolation method for parametric option pricing tailored to the persistently recurring task of pricing liquid financial instruments. The method supports the acceleration of such essential tasks of mathematical finance as model calibration, real-time pricing, and, more generally, risk assessment and parameter risk estimation. We adapt the empirical magic point interpolation method of Barrault et al. (2004) to parametric Fourier pricing. For a large class of combinations of option types, models and free parameters, the approximation converges exponentially in the degrees of freedom and moreover has explicit error bounds. Numerical experiments confirm our theoretical findings and show a significant gain in efficiency, even for examples beyond the scope of the theoretical results. The method thus seems highly promising for further developments and applications to multivariate Fourier integration.
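
A schematic of the greedy point/basis selection underlying empirical ("magic point") interpolation, reduced to a finite snapshot matrix (an illustrative sketch with hypothetical names, not the paper's Fourier-pricing specialization):

import numpy as np

def magic_points(snapshots, n_terms, tol=1e-12):
    # snapshots[m, i]: the m-th parameter snapshot sampled on a fixed grid.
    Q, pts = [], []
    R = snapshots.copy()                       # interpolation residuals
    for _ in range(n_terms):
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) < tol:
            break
        Q.append(R[i] / R[i, j])               # new basis: worst residual,
        pts.append(j)                          # normalized at new magic point
        B = np.array(Q).T                      # (n_grid, len(Q))
        # Interpolate every snapshot through the magic points chosen so far.
        C = np.linalg.solve(B[pts], snapshots[:, pts].T)
        R = snapshots - (B @ C).T
    return np.array(Q), np.array(pts)

In the talk's setting, the role of the snapshots is played by the parametric pricing integrands; the greedy selection is what drives the exponential convergence in the degrees of freedom.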

This is joint work with Maximilian Gaß and Maximilian Mair.

References:
[1] Barrault, Maday, Nguyen and Patera (2004), An empirical interpolation method: application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Mathématique 339 (9), 667-672.
[2] Gaß and Glau (2015), Parametric Integration by Magic Point Empirical Interpolation, Preprint on http://arxiv.org/abs/1511.08510.
[3] Gaß, Glau and Mair (2015), Magic Points in Finance: Empirical Interpolation for Parametric Option Pricing, Preprint on http://arxiv.org/abs/1511.00884.


Contributed talk: TC.3.01, Tue, 12:10

Florence Guillaume (University of Antwerp)

Stochastic modelling of herd behavior indices

The higher upside potential and the usually lower market risk of diversification have sparked off an unprecedented boom in investment in basket structured products. Despite its potentially higher risk-adjusted return, multi-name derivative trading has exposed investors to a new kind of risk, the so-called correlation risk, which is driven by adverse movements in the correlation between the underlying assets. In particular, a substantial increase in co-movement can annihilate the benefit of diversification and even increase the risk of the financial position. Such an adverse scenario typically occurs during severe systemic crises, such as the global financial crisis of 2007-2008, and can then have dramatic consequences for the worldwide economic and financial system when not properly accounted for. Hence, sound monitoring of the degree of co-movement in the market, allowing for well-timed entry into and exit from any dispersion strategy, should become a priority. Recently, herd behavior indices, such as the HIX, have emerged as model-free barometers of market diversification. The HIX takes values between 0 and 1, a value of 1 indicating that no diversification is possible. Although the HIX reflects the expected level of co-movement perceived by today's investors, using only current values of the HIX when taking investment decisions may not be sufficient, since one does not then have an idea of the future direction of the HIX. As a result, the diversification one initially hopes for may be less than expected. By using a stochastic model calibrated to observed HIX values, one can capture the future trend of market diversification and use this information to adjust the investment strategy.

The aim of this talk is to propose different diffusion processes to model herd behavior indices such as the HIX index. These models arise by combining popular mean-reverting processes with simple algebraic functions mapping the definition domain of the underlying mean-reverting process to the unit interval. The resulting Itô processes preserve, to some extent, the mean-reverting trend of the underlying process while satisfying the fundamental properties of the so-called herd behavior indices. In the numerical study, we calibrate the different model settings to time series data of the HIX index and investigate their ability to predict the future behavior of herd behavior indices.
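
One concrete instance of the construction described above (an illustrative choice, not necessarily among the specifications studied in the talk) is an Ornstein-Uhlenbeck process pushed through a logistic map:

\[ dX_t = \kappa(\theta - X_t)\,dt + \sigma\,dW_t, \qquad H_t = \frac{1}{1 + e^{-X_t}} \in (0, 1). \]

The mean reversion of $X$ is inherited, to some extent, by $H$, while the logistic map guarantees values in the unit interval.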

Reference:
Guillaume, F. and Linders, D. (2015). Stochastic modelling of herd behavior indices. Quantitative Finance, 15(12), pp. 1963-1977.


Contributed talk: TC.2.01, Mon, 11:10

Hani Abidi (Linnaeus University)

Infinite horizon impulse control problem with jumps using doubly reflected BSDEs

We establish existence results for adapted solutions of infinite horizon doubly reflected backward stochastic differential equations with jumps. We apply these results to obtain the existence of an optimal impulse control strategy for an infinite horizon impulse control problem. The properties of the Snell envelope reduce our problem to the existence of a pair of right-continuous processes with left limits. Finally, we give some numerical results.

This is joint work with Rim Ammami and Monique Pontier.


Contributed talk: TC.2.03, Mon, 14:30

Michael Hanke (University of Liechtenstein)

No-Arbitrage ROM Simulation Matching Multivariate Skewness and Kurtosis

This paper extends previous work on ROM simulation (Ledermann et al. 2011, LinAlgAppl; Geyer et al. 2014, JEconDynContr) to generate discrete samples of multivariate distributions which match pre-specified means, covariances, and multivariate measures of skewness and kurtosis. Instead of the "overly simplistic" Mardia measures used in the previous literature, we use richer measures of multivariate skewness and kurtosis introduced by Kollo (2008, JMultivarAnal). We show how these higher moments can be computed both from empirical data and for theoretical multivariate distributions, such as the multivariate extended skew-normal (see, e.g., Adcock et al., 2015, EuropJFin).

The method is suitable for all applications where multivariate samples matching pre-specified means, covariances, and multivariate skewness and kurtosis need to be simulated. Since it extends the No-Arbitrage ROM algorithm introduced by Geyer et al. (2014), this includes financial applications such as (large-scale) risk management, asset allocation, and portfolio optimization.

This is joint work with Spiridon Penev, Wolfgang Schief and Alex Weissensteiner.


Contributed talk: TC.2.03, Mon, 16:00

Dieter Hendricks (University of the Witwatersrand, Johannesburg)

Using real-time cluster configurations of streaming asynchronous features as online state descriptors in financial markets

We present a scheme for online, unsupervised state discovery and detection from streaming, multi-featured, asynchronous data in high-frequency financial markets. Online feature correlations are computed using an unbiased, lossless Fourier estimator. A high-speed maximum likelihood clustering algorithm is then used to find the feature cluster configuration which best explains the structure in the correlation matrix. We conjecture that this feature configuration is a candidate descriptor for the temporal state of the system. Using a simple cluster configuration similarity metric, we are able to enumerate the state space based on prevailing feature configurations. The proposed state representation removes the need for human-driven data pre-processing for state attribute specification, allowing a learning agent to find structure in streaming data, discern changes in the system, enumerate its perceived state space and learn suitable action-selection policies.


Contributed talk: TC.2.03, Tue, 16:30

Asgar Jamneshan (University of Konstanz)

Vector duality via a conditional extension

A duality result for lower-semicontinuous and convex vector-valued functions by means of conditional set theory is discussed. The result is useful for representations of vector-valued risk measures and vector optimization.

This is joint work with Samuel Drapeau and Michael Kupper.


Contributed talk: LC.0.100, Tue, 11:40

Martin Keller-Ressel (TU Dresden)

Affine Processes with Stochastic Discontinuities

Motivated by applications in finance, such as credit risk, we study affine processes without the common assumption of stochastic continuity. Such processes are semimartingales, but usually not quasi-left-continuous and may exhibit jumps at pre-determined times. We derive the associated Riccati equations that determine the characteristic function of the process and discuss some results on existence.

This is joint work with Thorsten Schmidt and Robert Wardenga.


Contributed talk: TC.3.01, Tue, 11:10, no-show / cancelled

Nikolai Kolev (University of Sao Paulo)

Cointegrating Jumps and Applications to Financial Markets

Recent studies show that the spot dynamics of various financial markets is subject to mean reversion, seasonality and jumps. We investigate the problem of dependence by considering two-dimensional jump diffusion processes with a two-dimensional compound Poisson component. We introduce a new methodology for modeling the dependence of a two-dimensional Poisson process, based on exponential random variables X and Y generated by the stochastic representation

(1) (X,Y) = [min(U,T),min(V,T/a)],

where a > 0 and the random variables U, V and T are independent and exponentially distributed. Relation (1) implies the presence of a singular component along the line {x = ay} through the origin of the corresponding bivariate exponential distribution. When a = 1, the classical Marshall-Olkin bivariate exponential distribution results.

Hence, we are able to build a two-dimensional Poisson process with dependent marginals, where the univariate Poisson processes are linked by cointegration between their jumps. Our model allows us to describe cases where the second random time event Y does not occur jointly with the first, "unprotected", one X when a common "fatal shock", identified by the random variable T in (1), arises. Such a construction helps to answer typical questions arising in finance, for example: "How long should the boss wait to take action after an implicit indication of a shock (political news, say) onto a dependent market?", or "What is the impact of insurance risk if various companies are linked?"
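
For concreteness, a small simulation sketch of the stochastic representation (1), with purely illustrative parameter values:

    import numpy as np

    rng = np.random.default_rng(42)

    def bivariate_exponential(n, lam_u, lam_v, lam_t, a):
        """Draw n samples of (X, Y) = (min(U, T), min(V, T/a)) with U, V, T
        independent exponentials; a = 1 recovers the Marshall-Olkin case."""
        U = rng.exponential(1.0 / lam_u, n)
        V = rng.exponential(1.0 / lam_v, n)
        T = rng.exponential(1.0 / lam_t, n)
        return np.minimum(U, T), np.minimum(V, T / a)

    a = 2.0
    X, Y = bivariate_exponential(100_000, lam_u=1.0, lam_v=1.5, lam_t=0.5, a=a)
    # the singular component sits on the line {x = a y}: the common shock T
    # hits both coordinates at the same (scaled) time
    print(f"mass on the line x = {a}*y: {np.mean(np.isclose(X, a * Y)):.3f}")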

Risk-neutral formulas will be presented for plain vanilla and spread options when the price dynamics are driven by an exponential mean-reverting geometric Ornstein-Uhlenbeck process and different pre-specified two-dimensional Poisson components. We will illustrate our approach with examples. For instance, we consider the Merton model (a pure geometric Brownian motion plus jump model) as a base case and compare the results obtained by our approach to those available in several recent surveys, assuming that the two compound Poisson processes are either independent or share a common Poisson component.


Contributed talk: TC.2.01, Tue, 15:00

Steven Kou (National University of Singapore)

EM Algorithm and Stochastic Control

We propose a Monte Carlo simulation-based approach, called the dynamic EM algorithm, to solve stochastic control problems. In the special case of just searching for an optimal parameter, the algorithm reduces to the classical Expectation-Maximization (EM) algorithm of statistics. The new algorithm extends the existing literature as follows: (1) we do not assume any particular dynamics of the stochastic processes, such as diffusions or jump diffusions; (2) we show the monotonicity of the performance improvement in every iteration, which leads to the convergence results; (3) we focus on finite-time horizon problems, where the optimal policy is not necessarily stationary. Various applications are given, such as real business cycles, stochastic growth, and airline network revenue management. This is joint work with Paul Glasserman, Xianhua Peng, and Xingbo Xu.
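
As a reminder of that classical special case (textbook EM for a two-component Gaussian mixture, not the authors' dynamic extension; data and starting values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    # synthetic data from a two-component Gaussian mixture
    x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

    w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: posterior responsibility of each component for each point
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters; the likelihood never decreases
        n_k = r.sum(axis=0)
        w, mu = n_k / len(x), (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

    print("weights:", w.round(3), "means:", mu.round(3))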


Contributed talk: TC.2.01, Tue, 12:10

Johann Kronthaler (KPMG Austria GmbH)

Modelling risks in banks and insurance companies - a slight difference between theory and practice

Financial and actuarial mathematics have developed a broad range of theoretical methodologies to measure risk. This talk discusses the difference between what should be done from a theoretical point of view and what can be done, and is actually done, in practice. What are the regulatory requirements, and where are the real constraints? Examples range from banks having to model probabilities of default (PDs) under IFRS 9 to insurers having to simulate a value at risk (VaR) at the 99.5% level under Solvency II. In both cases, the underlying data often show undesired properties or are not available to the desired extent. Nevertheless, solutions need to be found to meet regulatory requirements, sometimes even under the principle of proportionality, which requires companies to provide sound argumentation and documentation of why the chosen methodologies are appropriate. Furthermore, these regulatory requirements are continually developed and extended. Often they are made more complex, and an increasing need for the analysis of a company's own data is becoming apparent. Within the talk, we will provide an overview of some current developments and of how companies find solutions for meeting them in practice.

This is joint work with Birgit Ondra.


Contributed talk: TC.2.01, Tue, 11:40

Peng Luo (ETH Zurich)

Portfolio Optimization under Probability and Discounting Uncertainty

We consider the portfolio optimization problem with random endowment under a general concave non-linear expectation given by a maximal subsolution of a BSDE. We provide a characterization of the optimal strategy in terms of a fully coupled FBSDE, with an auxiliary BSDE dealing with the discounting process. We will present several concrete examples in which we construct the solutions explicitly.

This is joint work with Samuel Drapeau and Dewen Xiong.


Contributed talk: TC.2.03, Mon, 15:00

Mirco Mahlstedt (Technical University of Munich)

Calibration to American Options: Numerical Investigation of the De-Americanization Method and Presentation of the Reduced Basis Method

American options are the reference instruments for model calibration for the large and important class of single stocks. For this task a fast and accurate pricing algorithm is indispensable. The literature mainly discusses pricing methods for American options that are based on Monte Carlo, tree and partial differential equation methods. We present an alternative approach that has become popular under the name De-Americanization in the financial industry. The method is easy to implement and enjoys fast run-times. Since it is based on ad hoc simplifications, however, theoretical results guaranteeing reliability are not available. We therefore empirically test the performance of the De-Americanization method for calibration in a local volatility, a stochastic volatility and a jump diffusion model. We find scenarios where De-Americanization performs very well and cases where it is less accurate and serious attention is called for.

These cases, especially in combination with the desire for error control, create the need for a reliable and fast method. For the calibration to American options we therefore propose a PDE-based method, the reduced basis method. Reduced basis methods are explicitly designed to reduce the computational time for solving parametric differential equations while delivering sufficient error control. In a numerical study we show that this method is suitable for calibration to American options with comparable accuracy and reduced computational costs.

This is joint work with Olena Burkovska, Maximilian Gaß, Kathrin Glau, Wim Schoutens and Barbara Wohlmuth.


Contributed talk: TC.2.03, Mon, 11:40

Olaf Menkens (Dublin City University)

Pricing and Hedging of European Plain Vanilla Options under Jump Uncertainty

This talk studies the pricing and hedging problem of European plain vanilla options in a modified Black-Scholes market. That is, the price of the risky asset is allowed to jump, where the timing and the size of the jump are unknown (with no jump being possible as well). Using a superhedging approach, worst-case pricing formulae, Greeks, and superhedging strategies for call and put options will be given explicitly and discussed. Moreover, the worst-case prices explain the volatility smile which can be observed in market data. Finally, the model is calibrated to market data.

Reference:
Olaf Menkens, Pricing and Hedging of European Plain Vanilla Options under Jump Uncertainty, Working Paper, http://ssrn.com/abstract=2773246, May 2016.


Contributed talk: TC.2.01, Mon, 17:00

Markus Michaelsen (University of Hamburg)

Marginal Consistent Dependence Modeling using Weak Subordination for Brownian Motions

We present an approach to modeling dependence in exponential Lévy market models with arbitrary margins, built from time-changed Brownian motions. Using the weak subordination of Buchmann and Madan (2015) improves on models based on pathwise subordination, since weakly subordinated Lévy processes are not required to have independent components when multivariate stochastic time changes are considered. We apply a subordinator able to incorporate any joint or idiosyncratic information arrivals. We emphasize multivariate variance gamma (VG) and normal inverse Gaussian processes and state explicit formulae for the Lévy characteristics. We estimate a multivariate VG model on bivariate market data using maximum likelihood and show that the model is highly preferable to the ordinary approaches of Leoni and Schoutens (2008) and Semeraro (2008). Consistent values of multi-asset options under given marginal pricing models are achieved, generating a non-flat implied correlation surface.

This is joint work with Alexander Szimayer.


Contributed talk: LC.0.100, Tue, 12:10

Rosa Maria Mininni (University of Bari Aldo Moro)

A generalized Cox-Ingersoll-Ross equation

We study a generalized initial value parabolic problem including, as a special case, the Cox-Ingersoll-Ross equation [1] for the price of a zero-coupon bond. As a first result, we prove the existence and uniqueness of the solution on spaces of continuous functions using semigroup theory. We also derive a generalized Feynman-Kac type formula that enables us to obtain the unique solution of the initial value problem as a limit of approximating solutions given in explicit (albeit very complicated) closed form. This result is a useful tool for understanding additional properties of the solution itself from a mathematical finance point of view. Our results were obtained in joint work with G. Ruiz Goldstein, J.A. Goldstein and S. Romanelli [2].

References:
[1] J.C. Cox, J.E. Ingersoll, and S.A. Ross, A theory of the term structure of interest rates, Econometrica, 53 (1985), 385-407.
[2] G. Ruiz Goldstein, J.A. Goldstein, R.M. Mininni, S. Romanelli, The semigroup governing the generalized Cox-Ingersoll-Ross equation, Adv. Differential Equations, 21 (2016), no. 3/4, 235-264.


Contributed talk: TC.2.03, Wed, 14:00

Marvin Mueller (ETH Zurich)

SPDE Models for the Limit Order Book

Motivated by observations in high-frequency markets, we introduce a model for the order book density based on parabolic stochastic partial differential equations. While the celebrated free boundary model for price formation of Lasry-Lions (2007) was the starting point for a wide range of price-time continuous models for limit order books, tractability is one of the main issues when working with infinite-dimensional systems. We discuss the existence of finite-dimensional realizations, which reduce the complexity of the model drastically. Following empirical observations by Cont, Kukanov and Stoikov (2014), the so-called order flow imbalance naturally induces a model for short-term price dynamics which becomes explicit in this particular framework.

This is joint work with Rama Cont and Martin Keller-Ressel.


Contributed talk: TC.2.01, Mon, 15:00

Cosimo Munari (University of Zurich)

Do coherent risk measures take a liability holders' perspective?

The primary objective of solvency regimes for financial institutions is to ensure that liability holders are protected against default risk at an acceptable level of security. This fundamental goal translates into a normative requirement for capital adequacy tests, called surplus invariance, according to which the capital adequacy assessment should only depend on the default profile of financial institutions. This requirement, which had already appeared in a first version of the landmark paper by Artzner, Delbaen, Eber and Heath (1999), has regained attention in a variety of recent papers, such as Cont, Deguest and He (2013) and Staum (2013).

By means of duality methods we characterize capital adequacy tests satisfying surplus invariance and show that the only capital adequacy tests that simultaneously satisfy surplus invariance and coherence are those based on test scenarios. In particular, capital adequacy tests based on Expected Shortfall fail to be surplus invariant. This finding challenges the widespread agreement that coherent risk measures, in particular Expected Shortfall, take a liability holders' perspective and implies that, when choosing a regulatory risk measure, tradeoffs that may ultimately undermine regulatory objectives become necessary. Special attention is paid to regulation based on Value-at-Risk and Expected Shortfall.

The presentation is based on the two recent papers Koch-Medina and Munari (2016), published in the Journal of Banking & Finance, and Koch-Medina, Munari and Sikic (2016), forthcoming in Mathematics and Financial Economics.


Contributed talk: TC.2.03, Wed, 12:10

Yukio Muromachi (Tokyo Metropolitan University)

Reformulation of the arbitrage-free pricing method under the multi-curve environment

This paper proposes a unified framework for the pricing of derivatives under the multi-curve setting. It is shown that any derivative security can be replicated by appropriately using the underlying assets, a collateral account and a funding account. A risk-neutral measure is defined accordingly, under which the derivative price is determined uniquely. This idea is extended to the pricing of OIS and LIBOR discount bonds and interest-rate derivatives under the risk-neutral measure, which explains the simultaneous existence of multiple yield curves in the market. Some specific models are given to demonstrate the usefulness of our approach. Through numerical examples, we find that the discrepancy of derivative prices under the multi-curve setting from the classical ones becomes significant when the spread volatility between the collateral and funding rates exceeds some level.

This is joint work with Masaaki Kijima.


Contributed talk: LC.0.100, Wed, 14:30

Ciprian Necula (University of Zurich)

A generalized Bachelier formula for pricing basket and spread options

In this paper we propose a closed-form pricing formula for European basket and spread options. Our approach is based on approximating the risk-neutral probability density function of the terminal value of the basket using a Gauss-Hermite series expansion around the Gaussian density. The new method is quite general, as it can be applied to baskets with a large number of assets and to all dynamics for which the joint characteristic function of log-prices is known in closed form. We provide a simulation study to show the accuracy and speed of our methodology.
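
To illustrate the expansion idea in its simplest form (a sketch only: the coefficients are estimated here from simulated draws of a standardized variable, whereas the paper obtains them from the joint characteristic function of log-prices):

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from math import factorial

    rng = np.random.default_rng(7)
    # stand-in for the standardized terminal basket value: a skewed sample
    z = rng.gamma(4.0, 1.0, 200_000)
    z = (z - z.mean()) / z.std()

    # Gauss-Hermite (Gram-Charlier-type) series around the standard normal phi:
    # f(x) ~ phi(x) * sum_n c_n He_n(x), with c_n = E[He_n(Z)] / n!
    N = 8
    c = np.zeros(N + 1)
    for n in range(N + 1):
        e = np.zeros(n + 1)
        e[n] = 1.0                       # selects the probabilists' Hermite He_n
        c[n] = hermeval(z, e).mean() / factorial(n)

    x = np.linspace(-4.0, 6.0, 500)
    f_approx = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi) * hermeval(x, c)
    print("leading coefficients:", c[:4].round(4))  # c0 = 1, c1 = c2 = 0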

This is joint work with Fulvia Fringuellotti.


Contributed talk: TC.2.01, Wed, 14:30

Marco Nicolosi (University of Perugia)

Optimal Asset Allocation In Money Management Under Mean-Reverting Processes

We find the optimal strategy for a portfolio manager whose compensation depends on the relative performance with respect to a benchmark, when expected returns are mean reverting. The optimization problem is non-standard, as it involves a non-concave objective function and a stochastic market price of risk. We solve it by using a concavification method together with a Fourier transform approach. We provide a semi-closed form expression for the optimal strategy by solving a system of Riccati equations.

This is joint work with Flavio Angelini and Stefano Herzel.


Contributed talk: TC.2.03, Tue, 14:30

Salvador Ortiz-Latorre (University of Oslo)

A new pricing measure in the Barndorff-Nielsen & Shephard model for commodity markets

For a commodity spot price dynamics given by an Ornstein-Uhlenbeck process with Barndorff-Nielsen and Shephard stochastic volatility, we price forwards using a class of pricing measures that simultaneously allow for change of level and speed in the mean reversion of both the price and the volatility. It is demonstrated that we can obtain the flexible shapes that are typically observed in energy markets. In particular, our pricing measure preserves the affine model structure and decomposes into a price and a volatility risk premium. In the geometric spot price model we need to resort to a detailed analysis of a system of Riccati equations, for which we show asymptotic properties that explain the possible risk premium profiles. Among the typical shapes, the risk premium allows for a stochastic change of sign, and can attain positive values in the short end of the forward market and negative values in the long end.

This is joint work with Fred Espen Benth.


Contributed talk: TC.2.01, Tue, 16:30

Oliver Pfante (Frankfurt Institute for Advanced Studies)

Uncertainty Estimates in Stochastic Volatility Models via Fisher Information

Modeling European option prices via a stochastic volatility approach is vastly popular, as these models capture the empirically observed volatility smile, which is beyond the scope of the classical Black-Scholes model. Essentially, stochastic volatility models are two-dimensional diffusion processes: the first process captures the dynamics of the random volatility in terms of a mean-reverting process; the second one couples to the volatility, driving the price of the asset and therefore the price of a derivative traded on this asset. European call and put prices are strictly increasing and decreasing functions, respectively, in the volatility, holding everything else equal. Hence, there is a one-to-one mapping from volatility to European option prices, allowing one to back out volatility from such prices. This so-called implied volatility is not only widely used among quants but has also led to volatility indices like the VIX, roughly representing the implied volatility of a 30-day variance swap on the S&P 500. Despite the impressive prevalence of implied volatility, there are no estimates of the uncertainty left about volatility and the parameters of the unobserved volatility process when estimated from European option price data.

This omission is in stark contrast to related work estimating volatility from observed asset prices alone. Here, not least the rise of Bayesian methods enables an accurate quantification of the uncertainty inherent in volatility estimates. Indeed, in a previous paper, we found that stock prices provide almost no information about the unobserved volatility process. In this work, we address the information content of European option prices on implied volatility in terms of the Fisher information matrix. In statistical terms, Fisher information quantifies the uncertainty of a maximum likelihood estimate by the curvature of the likelihood function around its maximum. A shallow maximum has low information, as the parameters are only weakly determined, while a sharply peaked maximum has high information.

Heston's stochastic volatility model gives rise to an analytic expression, in terms of Fourier transforms, for the price of a European option. Thus, assuming that observed option prices are centered on the theoretical price and disturbed by additive Gaussian noise, the curvature of the corresponding likelihood function, and therefore the Fisher information, can readily be computed from the Greeks of Heston's model. We find that option prices allow reliable estimates of implied volatility with negligible uncertainty as long as volatility is large enough. Interestingly, if volatility drops below a critical value, inferences from option prices become impossible because Vega, the derivative of a European option price w.r.t. volatility, nearly vanishes.
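
To make the noise model concrete, here is a minimal sketch of the Fisher information computation, using the Black-Scholes vega as a stand-in for the Heston Greeks (model, strikes and noise level below are purely illustrative):

    import numpy as np
    from scipy.stats import norm

    def bs_vega(S, K, T, r, sigma):
        """Black-Scholes vega, standing in for the Heston vega."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        return S * norm.pdf(d1) * np.sqrt(T)

    # observed prices = model price + N(0, s^2) noise, so the Fisher
    # information of the volatility is I(sigma) = sum_i vega_i^2 / s^2
    S, r, T, s = 100.0, 0.01, 0.5, 0.05
    strikes = np.linspace(80.0, 120.0, 21)
    for sigma in (0.05, 0.20):
        info = np.sum(bs_vega(S, strikes, T, r, sigma) ** 2) / s ** 2
        print(f"sigma={sigma:.2f}: Fisher information {info:.3e}, "
              f"Cramer-Rao std {1.0 / np.sqrt(info):.2e}")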

This is joint work with Nils Bertschinger.


Contributed talk: TC.2.03, Mon, 16:30

Davide Pirino (Scuola Normale Superiore, Pisa)

EXcess Idle Time

We introduce a novel economic indicator, named excess idle time (EXIT), measuring the extent of sluggishness in financial prices. Under a null and an alternative hypothesis grounded in no-arbitrage (the null) and microstructure (the alternative) theories of price determination, we derive a complete limit theory for EXIT leading to formal tests for staleness in the price adjustments. In agreement with changing levels of liquidity, empirical implementation of the theory indicates that financial prices are often more sluggish than implied by the (ubiquitous, in frictionless continuous-time asset pricing) semimartingale assumption. We show that EXIT provides, for each trading day, an effective proxy for the extent of illiquidity which is easily implementable using transaction prices only. Using EXIT, a sizable compensation for long-run illiquidity risk in market returns is uncovered.

This is joint work with Federico Bandi and Roberto Renò.


Contributed talk: TC.2.03, Wed, 14:30

Mathias Pohl (University of Vienna)

The Option Value of a Limit Order and its Implied Volatility

The fact that a limit order can be interpreted as an option is apparent: a limit order gives the trader an option to trade at a fixed price. We derive a remarkable analogy between option pricing and the pricing of limit orders by showing that a limit order can be understood as a lookback-type option on the underlying order-flow process.

Based on this novel pricing formula and a framework that allows one to link the order-flow process and the price process, we present two important contributions. First, the methodology allows us to "price" limit orders and thus enhances order submission strategies based on order-flow information. Second, the limit order pricing formula makes it possible to determine the implied volatility associated with a limit order submission.

This notion of a limit order's implied volatility reflects that, in volatile markets, limit orders are placed far from the current price. A crucial implication of this concept is that we can forecast implied volatility based on high-frequency order book data. Hence, an empirical application employing limit order book data complements the theoretical results.

This is joint work with Gökhan Cebiroglu.


Contributed talk: TC.2.01, Wed, 14:00

Miklos Rasonyi (Alfréd Rényi Institute of Mathematics, Budapest)

Optimal investment in markets with frictions

We consider a model of an illiquid market where trading faster leads to more unfavourable prices and the cost of illiquidity is superlinear in the trading speed. We consider an investor who, in the spirit of cumulative prospect theory, may have a non-concave utility function and may distort probabilities by exaggerating the likelihood of extreme events. The existence of optimal strategies for such agents is shown in great generality. Our main tool will be an extension of the well-known Skorohod representation theorem for sequences of weakly convergent random variables.

This is joint work with Ngoc Huy Chau.


Contributed talk: TC.2.03, Tue, 11:40

Imke Redeker (Brandenburg University of Technology)

A structural model for credit risk with a switching barrier

A first-passage model of corporate default risk is considered, where the default event is specified in terms of the evolution of the total value of the firm's assets and the default barrier. The special feature of the model is that the barrier at which the firm is liquidated is allowed to switch. This describes changes in the economy or the appointment of a new firm management. Further, it is assumed that public bond investors have only incomplete information about the default barrier. In this setup we derive the default probability and consider the pricing of default-sensitive contingent claims.

This is joint work with Michaela Szölgyenyi and Ralf Wunderlich.


Contributed talk: LC.0.100, Tue, 11:10

Emanuela Rosazza Gianin (University of Milano-Bicocca)

The term structure of Sharpe-Ratios: a new approach for arbitrage-free asset pricing in continuous time

Motivated by some recent empirical studies on the dependence of Sharpe ratios (or the market price of risk) on the time horizon considered, we present a theoretical framework of asset pricing in continuous time that accommodates such a phenomenon.

The approach departs from an arbitrage-free and incomplete market setting where different pricing measures are possible. Asset pricing is done by means of an EMM-string formed by a continuum of equivalent martingale measures (EMMs), where at any evaluation time an EMM can be chosen according to the time horizon of the claim or to other factors, such as updated information.

This results in a time-inconsistent pricing scheme, which can be captured by a special type of backward stochastic Volterra integral equation (BSVIE, see Yong (2006)).

This is joint work with Patrick Beissner.


Contributed talk: TC.2.03, Mon, 11:10

Alet Roux (University of York)

Game options in currency models with proportional transaction costs

The pricing, hedging, optimal exercise and optimal cancellation of game or Israeli options are considered in a multi-currency model with proportional transaction costs. Efficient constructions for optimal hedging, cancellation and exercise strategies are presented, together with numerical examples, as well as probabilistic dual representations for the bid and ask price of a game option.


Contributed talk: TC.2.01, Wed, 15:00

Jörn Sass (University of Kaiserslautern)

Optimized expected utility risk measures and ratings by implied risk aversion

We introduce an optimal expected utility risk measure (OEU) that is generated by a utility function via an associated optimal investment problem. A financial position is evaluated by finding the capital to be borrowed and added to the position in order to maximize the discounted certainty equivalent of the future payoff. We derive properties of OEU and relate them to alternative risk measures. For constant relative risk aversion and proper discounting, OEU is non-trivial and coherent. OEU reacts more sensitively to slight changes in the probability of a financial loss than (average) value at risk. This motivates the use of implied risk aversion based on OEU as a coherent rating methodology for structured financial products, one that takes into account both upside potential and downside risks and is easily interpreted in terms of an individual investor's risk aversion. We illustrate our approach with a case study.

This is joint work with Holger Fink, Sebastian Geissel and Frank T. Seifried.


Contributed talk: TC.2.03, Tue, 14:00

Carlo Sgarra (Politecnico di Milano)

Optimal Investment in Market with Over and Under-Reaction to Information

In this paper we introduce a jump-diffusion model of shot-noise type for stock prices, taking into account over- and under-reaction of the market to incoming news.

We focus on the expected (logarithmic) utility maximization problem and provide the optimal investment strategy in explicit form, both under full information (i.e., from the insider's point of view, aware of the right kind of reaction at any time) and under partial information (i.e., from the standard investor's viewpoint, who needs to infer the kind of reaction from data). We test our results on market data for Enron and Ahold.

The three main contributions of this paper are: the introduction of a new market model dealing with over- and under-reaction to news; the explicit computation of the optimal filter dynamics, using an original approach combining enlargement of filtrations with innovation theory; and the application of the optimal portfolio allocation rule to market data.

This is joint work with Giorgia Callegaro, Mhamed Gaigi and Simone Scotti.


Contributed talk: LC.0.100, Mon, 11:40

Elena Shmileva (Higher School of Economics St. Petersburg)

The most probable sample paths for some Lévy processes

The present study examines shifted small deviations for Lévy processes in the Wiener domain of attraction and for symmetric stable Lévy processes. This allows us to obtain the most probable sample paths and the Onsager-Machlup functional for these Lévy processes.


Contributed talk: TC.2.01, Tue, 16:00

Tomáš Sobotka (University of West Bohemia)

Robustness and uncertainty analyses of stochastic volatility models

In this talk we focus on robustness and uncertainty analyses of several continuous-time stochastic volatility (SV) models with respect to the task of market calibration. First, we perform bootstrapping of several market data sets to compare the stability of the calibrated parameters under data uncertainty. Second, we validate the hypothesis of the importance of the jump term in the underlying jump-diffusion dynamics. This is done, unlike in Campolongo (2006), without assuming independence of the calibrated parameters, so that real market data sets can be used. In the light of the new fractional SV models, we also measure the impact of the Hurst parameter for an approximative fractional model.

This talk is based on the joint work with J. Pospíšil and P. Ziegler.


Contributed talk: TC.2.03, Tue, 12:10

Stephan Sturm (Worcester Polytechnic Institute (WPI), Boston)

Arbitrage-free XVA

We introduce a framework for computing the Total Valuation Adjustment (XVA) of a European claim, accounting for funding costs, counterparty risk, and collateral mitigation. We derive the nonlinear BSDEs associated with the replicating portfolios of long and short positions, and define the buyer's and seller's XVA. When borrowing and lending rates coincide, we provide a fully explicit expression for the XVA. When they differ, we derive the semi-linear PDEs and conduct a numerical analysis.

This is joint work with Maxim Bichuch and Agostino Capponi.


Contributed talk: TC.2.03, Mon, 14:00

Michaela Szölgyenyi (WU Vienna)

A strong order 1/2 method for solving multidimensional SDEs appearing in mathematical finance

When solving certain stochastic optimization problems in mathematical finance, the optimal control policy sometimes turns out to be of threshold type, meaning that the control depends on the controlled process in a discontinuous way. The stochastic differential equations (SDEs) modeling the underlying process then typically have a discontinuous drift and a degenerate diffusion coefficient. This motivates the study of a more general class of such SDEs. We prove an existence and uniqueness result based on a certain transformation of the state space by which the drift is "made continuous". As a consequence, the transformation becomes useful for the construction of a numerical method. The resulting scheme is proven to converge with strong order 1/2. This is the first result of its kind for such a general class of SDEs. We will first present the one-dimensional case and subsequently show how the ideas can be generalized to higher dimensions. Thereby we find a geometric interpretation of our weakened non-degeneracy condition.
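
For orientation, a naive Euler-Maruyama discretisation of a one-dimensional SDE with threshold-type (discontinuous) drift is sketched below; note that this is not the transformation-based scheme of the talk, whose point is precisely to prove strong order 1/2 for such coefficients:

    import numpy as np

    rng = np.random.default_rng(3)

    # dX = mu(X) dt + dW with a drift that jumps at the threshold x = 0,
    # as arises for threshold-type optimal controls (illustrative choice)
    mu = lambda x: np.where(x < 0.0, 1.0, -1.0)

    T, n, n_paths = 1.0, 1000, 10_000
    dt = T / n
    X = np.full(n_paths, 0.5)
    for _ in range(n):
        X += mu(X) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

    print(f"sample mean of X_T: {X.mean():.3f}")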

This is joint work with Gunther Leobacher.


Contributed talk: TC.3.01, Mon, 11:10

Antonella Tolomeo (University of Torino)

Disentangling Overlapping Shocks in Portfolio Choice

In a market where price shocks result from the sum of several mean-reverting shocks, this paper finds the optimal trading policies and their welfare for informed investors, who observe all individual shocks, and uninformed investors, who estimate them from the aggregate shock alone. All investors have constant relative risk aversion. When at least three shocks are present, uninformed investors ascribe more of the price change to shocks with lower frequency. Shocks that are uncorrelated for the informed are rationally perceived as negatively correlated by the uninformed, and their correlation weakens as the difference of their frequencies increases.

This is joint work with Paolo Guasoni.


Contributed talk: TC.2.01, Mon, 14:30

Radu Tunaru (University of Kent)

Regulatory Capital Requirements: Saving Too Much for Rainy Days?

Model risk needs to be recognized and accounted for in addition to market risk. Uncertainty in risk measure estimates may lead to false security in financial markets. We argue that quantile-type risk measures are at least as good as expected shortfall. We demonstrate how a bank can choose among competing models for measuring market risk and account for model risk. Some BCBS capital requirement formulas currently in effect lead to excessive capital buffers even on an unstressed basis. We highlight that the loss to society associated with the inefficient minimum capital requirement calculations is economically substantial over time.

This is joint work with Walter Farkas and Fulvia Fringuellotti.


Contributed talk: TC.2.01, Mon, 16:30

Misha van Beek (University of Amsterdam)

The regime switching affine process

We introduce the regime switching affine process. This is a Markov process that behaves as a different affine process conditional on each regime, with some parameter restrictions. The regime switches are driven by a Markov chain. We prove that the joint process of the Markov chain and the conditionally affine part is again an affine process on an enlarged state space. This result unifies several semi-analytical solutions found in the literature for pricing derivatives of very specific regime switching processes on smaller state spaces. It also provides an overarching theory that allows us to introduce regime switching to the pricing of many derivatives of the broad class of affine processes. Examples include European options and term structure derivatives with stochastic volatility and default. Essentially, whenever there is a pricing solution based on an affine process, we can extend this to a regime switching affine process without sacrificing the analytical tractability of the affine process.

This is joint work with Michel Mandjes, Peter Spreij and Erik Winands.


Contributed talk: TC.2.01, Mon, 12:10

Christian Vonwirth (University of Kaiserslautern)

Explicit optimal high-dimensional portfolio policies under partial information and convex constraints

We consider an investor who wants to maximize her expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks. The stock returns are driven by a Brownian motion, and the drift is an unknown random variable that has to be estimated from the observable stock prices in addition to some expert's opinion, as proposed in Cvitanic et al. [2]. The best estimate given these observations is the Kalman-Bucy filter.

The investor is restricted to portfolio strategies satisfying several convex constraints (for instance due to legal restrictions or fund characteristics) covering in particular no-short-selling and no-borrowing constraints. One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas [1]. They introduce auxiliary markets with shifted parameters and obtain corresponding dual problems.

First we solve these dual problems in the cases of logarithmic and power utility using stochastic control. Here we apply a reverse separation approach in order to obtain regions where the value function is differentiable. It turns out that these regions have a straightforward interpretation, allowing us to distinguish between active stocks (which are invested in) and passive stocks.

Afterwards we solve the auxiliary markets given the optimal dual process and obtain explicit optimal portfolio policies. A verification theorem guarantees the validity of our results.

Following our approach, the resulting optimal strategies can be calculated entirely explicitly. To this end, we have to consider all possible subsets of active stocks. An efficient algorithm is presented, and we close with an analysis of simulated and historical data.

This is joint work with Jörn Sass.

References:
[1] Cvitanic, J., Karatzas, I. (1992). Convex duality in constrained portfolio optimization. The Annals of Applied Probability 2, no.4, 767-818.
[2] Cvitanic, J., Lazrak, A., Martellini, L., Zapatero, F. (2006). Dynamic portfolio choice with parameter uncertainty and the economic value of analysts' recommendations. The Review of Financial Studies, 19, 1113-1156.


Contributed talk: TC.2.01, Tue, 17:00

Kim Weston (Carnegie Mellon University, Pittsburgh)

Market Stability and Indifference Prices

Consider a derivative security whose underlying is not replicable yet is highly correlated with a traded asset. As the correlation between the underlying and the traded asset increases to 1, do the claim's indifference prices converge to the arbitrage-free price? In this talk, I will first present a counterexample in a Brownian setting with a power utility investor where the indifference prices do not converge. The counterexample's degeneracies are alleviated for utility functions on the real line, and a positive convergence result will be presented in this case.


Contributed talk: TC.2.01, Wed, 12:10

Dorothee Westphal (University of Kaiserslautern)

Expert Opinions and Logarithmic Utility Maximization for Multivariate Stock Returns with Gaussian Drift

We investigate a financial market with multivariate stock returns where the drift is an unobservable Ornstein-Uhlenbeck process. Information is obtained by observing stock returns and unbiased expert opinions.

The optimal trading strategy of an investor maximizing expected logarithmic utility of terminal wealth depends on the conditional expectation of the drift given the available information, the filter. We investigate properties of the filters and their conditional covariance matrices. This includes the asymptotic behaviour for an increasing number of expert opinions on a finite time horizon and conditions for convergence on an infinite time horizon with regularly arriving expert opinions.

The main difficulty in this extension of Gabih, Kondakji, Sass and Wunderlich (2014) from the univariate to the multivariate case lies in the lack of explicit solutions.

We deduce properties of the value function using its representation as a function of the conditional covariance matrices.

This is joint work with Jörn Sass and Ralf Wunderlich.


Contributed talk: LC.0.100, Wed, 14:00

Sander Willems (EPFL and Swiss Finance Institute, Lausanne)

Exact Term-Structure Estimation Using the Pseudo-Inverse

We introduce a novel method to estimate the discount curve from market quotes, based on the Moore-Penrose pseudo-inverse, such that 1) the market quotes are exactly replicated, 2) the curve has maximal smoothness, 3) no ad hoc interpolation is needed, and 4) no numerical root-finding algorithms are required. We provide a full theoretical framework as well as practical applications for both single-curve and multi-curve estimation.
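
A stylised numerical sketch of the exact-replication property (toy cash-flow matrix and quotes; the maximal-smoothness weighting of the actual method is omitted here): with quotes p and a cash-flow matrix C acting on the sampled discount curve d, the pseudo-inverse returns the minimum-norm curve that reprices all instruments exactly.

    import numpy as np

    # toy setting: 3 quoted instruments, discount curve sampled on 10
    # semi-annual dates (all numbers hypothetical)
    C = np.zeros((3, 10))
    C[0, 1] = 100.0                      # zero-coupon bond paying at year 1
    C[1, 3] = 100.0                      # zero-coupon bond paying at year 2
    C[2, [1, 3, 7]] = [3.0, 3.0, 103.0]  # coupon bond maturing at year 4
    p = np.array([98.0, 95.5, 99.2])     # market quotes

    # Moore-Penrose pseudo-inverse: minimum-norm solution of C d = p,
    # with no interpolation scheme and no root-finding involved
    d = np.linalg.pinv(C) @ p
    print("repricing error:", np.abs(C @ d - p).max())  # ~ machine precision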

This is joint work with Damir Filipovic.


Contributed talk: TC.2.03, Tue, 15:00

Ralf Wunderlich (Brandenburg University of Technology Cottbus-Senftenberg)

Partially Observable Stochastic Optimal Control Problems for an Energy Storage

We address the valuation of an energy storage facility in the presence of stochastic energy prices as it arises in the case of a hydro-electric pump station. The valuation problem is related to the problem of determining the optimal charging/discharging strategy that maximizes the expected value of the resulting discounted cash flows over the lifetime of the storage. We use a regime switching model for the energy price which allows for a changing economic environment described by a non-observable Markov chain.

The valuation problem is formulated as a stochastic control problem under partial information in continuous time. Applying filtering theory we find an alternative state process containing the filter of the Markov chain, which is adapted to the observable filtration. For this alternative control problem we derive the associated Hamilton-Jacobi-Bellman (HJB) equation which is not strictly elliptic. Therefore we study the HJB equation using regularization arguments.

We use numerical methods for computing approximations of the value function and the optimal strategy. Finally, we present some numerical results.

The talk is based on the paper
A.A. Shardin and R. Wunderlich (2016): Partially Observable Stochastic Optimal Control Problems for an Energy Storage. Stochastics, in press.


Contributed talk: TC.2.03, Tue, 17:00

Anastasiia Zalashko (University of Vienna)

Causal transport in discrete time and applications

We study optimal transport under the causal constraint, which introduces the arrow of time into the problem. Loosely speaking, causal transport plans are a relaxation of adapted processes in the same sense as Kantorovich transport plans extend Monge-type transport maps. The corresponding causal version of the transport problem has recently been introduced by R. Lassalle. Working in a discrete-time setup, we establish a recursive formulation of the problem that links the causal transport problem to the non-linear transport problems recently introduced by Gozlan et al. (2015). Moreover, the causal analogues of the Brenier maps are identified under precise conditions. A first consequence of this is the application of strengthened transport-information inequalities in the context of stochastic optimization, complementing the works of Pflug et al. (2009-2012), which serve to gauge the discrepancy between stochastic programs driven by different noise distributions. Finally, the developed techniques shed new light on some classical problems in mathematical finance for which the time-information structure is central, such as enlargement of filtrations and optimal stopping.

This is joint work with Julio Backhoff, Mathias Beiglböck and Yiqing Lin.


 

 

Poster Presentations


Poster presentation: second session, starting on Tuesday

Emre Akdoğan (Middle East Technical University, Ankara)

A Survey on Stochastic Control of Itô-Lévy Processes: Applications in Finance and Insurance

In this study, the literature, recent developments and new achievements in stochastic optimal control theory are surveyed. Optimal control theory is an important direction of mathematical optimization for deriving control policies subject to time-dependent processes whose dynamics follow stochastic differential equations. In this study, this methodology is used to deal with those infinite-dimensional optimization programs for problems from finance and insurance that are motivated by real life. Stochastic optimal control problems can be treated and solved along different avenues, two of the most important being (i) Pontryagin's maximum principle together with stochastic adjoint equations (within both necessary and sufficient optimality conditions), and (ii) the dynamic programming principle together with Hamilton-Jacobi-Bellman (HJB) equations (within necessary and sufficient versions, e.g., a verification analysis). Here we introduce the needed instruments from economics and from Itô calculus, such as the theory of jump-diffusion and Lévy processes. In particular, we present the dynamic programming principle, HJB equations, a verification theorem and a sufficient maximum principle for stochastic optimal control of jump diffusions, and we state connections and differences between the maximum principle and the dynamic programming principle. Our discussion also takes into account the potential of our two avenues for future improvement and generalization. We give examples from financial mathematics and actuarial science, namely stochastic portfolio selection in the stock market and the optimal investment and liability ratio for an insurer, respectively. In our examples, we refer to various utility functions, such as exponential, power and logarithmic ones, and to different parameters of risk averseness. The paper ends with a conclusion and an outlook on future studies, addressing elements of information, memory and stochastic games.

This is joint work with Yeliz Yolcu Okur and Gerhard-Wilhelm Weber.


Poster presentation: second session, starting on Tuesday

Suhan Altay (TU Wien)

Stein-Chen approximation and error bounds for the sum of path-dependent indicators of stochastic processes

We study the approximation by a Poisson distribution of the distribution of a sum of indicators that are jointly determined by the maximum and the minimum of stochastic processes during successive time intervals. Using dependence structures such as association and/or positive cumulative dependence and applying the well-known Stein-Chen methodology, we show how such sums of path-dependent indicators can be approximated by a Poisson distribution and derive the corresponding error bounds.
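
For orientation, the classical Stein-Chen bound for independent indicators reads

    dTV(L(W), Poi(λ)) ≤ min(1, 1/λ) ∑i pi²,  where W = ∑i Xi with independent Xi ~ Bernoulli(pi) and λ = ∑i pi;

the contribution here is to obtain explicit bounds of this type when the indicators are dependent path functionals.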


Poster presentation: second session, starting on Tuesday

Mária Bohdalová (Comenius University in Bratislava)

Copula Quantile Regression Hedging

Our study deals with the problem of hedging portfolio returns. Many practitioners and academics try to solve the problem of how to accurately calculate the optimal hedge ratio. In this study we compare estimates of the hedge ratio obtained from the classical approach and from linear quantile regression at selected quantiles (such as the median) with those from a non-linear quantile regression approach. To estimate the hedge ratios, we calibrate Student t distributions for the marginal densities and a Student t copula for the portfolio returns. We create two portfolios of assets, one with equal weights and one with optimal weights in the sense of minimal risk. Our findings show that the assumption of Student t marginals leads to a better estimation of the hedge ratio.
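
As a point of comparison for the classical (least-squares) hedge ratio, a linear quantile-regression hedge ratio can be sketched as follows (synthetic heavy-tailed data; the copula-based non-linear variant of the study is not reproduced here):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    # synthetic futures and spot returns with heavy (Student t) tails
    f = rng.standard_t(df=4, size=2000) * 0.01
    s = 0.9 * f + rng.standard_t(df=4, size=2000) * 0.004

    X = sm.add_constant(f)
    h_ols = sm.OLS(s, X).fit().params[1]              # classical hedge ratio
    h_med = sm.QuantReg(s, X).fit(q=0.5).params[1]    # median regression
    h_tail = sm.QuantReg(s, X).fit(q=0.05).params[1]  # lower-tail quantile
    print(f"OLS: {h_ols:.3f}  median: {h_med:.3f}  5%-quantile: {h_tail:.3f}")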

This is joint work with Michal Greguš and Ondrej Bohdal.


Poster presentation: first session, starting on Monday

Camilla Damian (WU Vienna)

Filter-Based Discrete-Time EM Algorithm with Diffusion and Point Process Observation

The poster focuses on statistical inference in a dynamic, reduced form, partial-information model for Eurozone sovereign credit spreads. The main assumption is that default intensities are driven by an unobservable finite-state Markov chain.

Regarding methodology, an extension of the EM algorithm is employed: instead of pure diffusion information (see Elliott, 1993), both diffusive and point-process observations are considered. In the financial application, the point process represents the default history of a given country. The techniques employed in Frey and Schmidt (2012) are used to solve the nonlinear filtering problems arising in the E-step, while the approaches of Clark (1978) and James et al. (1996) provide a framework for obtaining a robust discretization of the continuous-time filters.

The goal is to estimate the model parameters, in particular the infinitesimal generator of the underlying Markov chain. Moreover, as shown in James et al. (1996), using the filter-based discrete-time EM algorithm makes it possible to obtain an MLE also for the observation noise variance. The results presented are those of a simulation analysis, essential for checking the performance, accuracy and stability of the algorithm before applying it to real data.

This is joint work with Zehra Eksi-Altay and Rüdiger Frey.


Poster presentation: first session, starting on Monday

Tobias Fissler (University of Bern)

Testing the maximal rank of the volatility process for continuous diffusions observed with noise

We present a test for the maximal rank of the volatility process in continuous diffusion models observed with noise. Such models are typically applied in mathematical finance, where latent price processes are corrupted by microstructure noise at ultra-high frequencies. Using high-frequency observations, we construct a test statistic for the maximal rank of the time-varying stochastic volatility process. Our methodology is based on a combination of a matrix perturbation approach and pre-averaging. We show the asymptotic mixed normality of the test statistic and obtain a consistent testing procedure. We complement the presentation with a simulation and an empirical study showing the performance on finite samples.

This is joint work with Mark Podolskij.

References:
T. Fissler and M. Podolskij (2016). Testing the maximal rank of the volatility process for continuous diffusions observed with noise. To appear in Bernoulli.


Poster presentation: first session, starting on Monday

Lei Ge (City University of Hong Kong)

An accurate approximate solution for portfolio selection under stochastic volatility and consumption

We study the optimal portfolio and consumption problem in finite horizon and continuous time. We assume that volatility is stochastic and that investors have hyperbolic absolute risk aversion (HARA) utility for both consumption and terminal wealth. Commonly studied utility functions, such as constant relative risk aversion (CRRA)/power, quadratic, logarithmic and exponential utilities, are special cases of the HARA utility function. The HARA utility function we consider therefore represents a wide class of investors.

This is a difficult dynamic optimization problem involving a nonlinear partial differential equation, and numerical computation has been the main tool for studying it. We develop closed-form approximate solutions for the optimal portfolio and consumption rules. The theoretical predictions from our solutions are in good agreement with the numerical solutions. In particular, the relative errors of our approximate solutions are smaller than the relative errors in the parameter estimation. For practical purposes, our approximate solutions can therefore be treated as "exact".

This is joint work with Qiang Zhang.


Poster presentation: second session, starting on Tuesday

Martin Glanzer (University of Vienna)

Robust acceptability pricing of contingent claims: A stochastic programming approach

Classical superhedging bounds for the price of a contingent claim are often too large to be practically useful. Therefore, instead of requiring superreplication almost surely w.r.t. the physical measure P corresponding to the underlying market model, we allow for an accepted occurrence of negative events, controlled by acceptability functionals. Typically, this lowers the superhedging price. On the other hand, with this approach it is not only the null sets of the probability model P which are taken into account, as is the case with almost-sure superhedging. Thus, since it is practically impossible to pose/estimate the true probability model for the underlying asset price evolution, introducing acceptability into the pricing procedure puts us in a classical situation of Knightian uncertainty. In particular, we incorporate this model uncertainty problem in the following sense: the hedging strategy is required to superreplicate the payoff(s) of a claim w.r.t. the specified acceptability functional(s) under all probability models which lie in a nested distance neighbourhood (the 'ambiguity set') around some baseline model. We present duality results, as well as algorithms which allow the computation of explicit superhedging prices within this setting, i.e., taking into account the whole ambiguity set when only the maximum nested distance to some baseline scenario tree model is given. Moreover, we examine the behaviour of the superhedging price surface w.r.t. the level of acceptability as well as the size of the ambiguity set. We discuss some particularly interesting applications of the presented framework and show some numerical results.

This is joint work with Georg Pflug.


Poster presentation: second session, starting on Tuesday

Margarita Grushanina (Erste Group Bank AG, Vienna)

Forecasting government bond yields using error-correction model and neural networks

In this study I analyse the long- and short-term dynamics of the government bond yields of 10 core euro area countries. First, I use a panel cointegration approach, which takes into account financial and economic interdependencies between the countries, for the analysis of both long- and short-term dynamics. As long-term determinants I use the debt-to-GDP ratio, the current account balance and the potential GDP growth rate, while in the short run the government bond yields are determined by inflation, the short-term interest rate, real exchange rates, the budget deficit and the output gap. As a second step, I apply machine learning methods (neural networks) to the same panel, aiming to minimize the prediction error of the forecast. I argue that the machine learning technique shows significantly better prediction accuracy in forecasting short-term dynamics, while for the long-run analysis the error-correction model performs better. This is in line with the theoretical assumption that in the long run the dynamics of government borrowing costs are determined by fundamental factors, while in the short run factors come into play which are difficult to formalise (e.g. policy uncertainty).


Poster presentation: first session, starting on Monday

Lingqi Gu (University of Vienna)

On the existence of shadow prices for optimal investment with random endowment

In this paper, we consider a numéraire-based utility maximization problem under proportional transaction costs and random endowment. Assuming that the agent cannot short sell assets and is endowed with a strictly positive contingent claim, a primal optimizer of this utility maximization problem exists. Moreover, we observe that the original market with transaction costs can be replaced by a frictionless shadow market that yields the same optimizer. On the other hand, we present an example showing that in some cases where these constraints are relaxed, the existence of shadow prices is still warranted.

This is joint work with Yiqing Lin and Junjian Yang.


Poster presentation: first session, starting on Monday

Rainer Hirk (WU Vienna)

Multivariate analysis of corporate credit ratings

Credit risk modeling, including the measurement of credit quality, has been intensively investigated by academics and practitioners over the past decades. This work contributes to this field of research by developing a framework for jointly modeling ordinal credit ratings and, possibly, defaults as outcomes. When ratings from different rating sources are available, the joint modeling of the panel of raters can uncover important information about the correlation in the different rating processes and possible systematic and rater-specific patterns. The model is based on the assumption that each ordinal outcome is a coarser version of an underlying latent process which depends on firm-specific and global covariates. This rating-implied creditworthiness score is modeled as a consensus score plus rating errors. Focusing on the class of multivariate cumulative link mixed models, several existing estimation procedures (e.g., composite likelihood methods) for ordinal data models are investigated and extended. Firm-level and stock price data for publicly traded North American companies, as well as long-term issuer credit ratings from the big three rating agencies (Standard & Poor's, Moody's and Fitch), are collected and analyzed to illustrate the proposed framework.

This is joint work with Laura Vana.


Poster presentation: first session, starting on Monday

Chun-Sung Huang (University of Cape Town)

Efficient Option Pricing under the Double Jump Model with Stochastic Volatility and Stochastic Interest Rate Based on Fourier-Cosine Expansions

This paper focuses on the pricing of European options when the underlying asset follows a double exponential jump-diffusion model with stochastic volatility and stochastic interest rates. In particular, we explore an efficient pricing methodology based on Fourier-cosine expansions (COS) and compare its efficiency to the widely used fast Fourier transform (FFT). Our numerical results show that the COS method is not only more efficient but also more accurate than the FFT when benchmarked against the existing closed-form solution. Furthermore, we show that variability in the correlation between volatility and the underlying asset, as well as the interest rate dynamics, has a significant impact on the resulting option prices across a range of strikes and maturities.

This is joint work with John G. O'Hara and Sure Mataramvura.
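
The COS method itself is easy to sketch. The following minimal implementation (a Fang-Oosterlee-style cosine expansion) prices a European call under Black-Scholes, the simplest case with a known characteristic function; for the double-jump model with stochastic volatility and interest rates of the paper, only the characteristic function and the cumulant-based truncation range would change.

```python
import numpy as np
from scipy.stats import norm

def cos_call(S0, K, T, r, sigma, N=128, L=10.0):
    """European call by the COS method, here under Black-Scholes."""
    x0 = np.log(S0 / K)
    c1 = x0 + (r - 0.5 * sigma**2) * T       # first cumulant of log(S_T/K)
    c2 = sigma**2 * T                        # second cumulant
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)
    u = np.arange(N) * np.pi / (b - a)

    # Characteristic function of log(S_T/K) under Black-Scholes
    phi = np.exp(1j * u * c1 - 0.5 * c2 * u**2)

    # Payoff cosine coefficients V_k of the call on [0, b]
    def chi(c, d):
        return ((np.cos(u * (d - a)) * np.exp(d)
                 - np.cos(u * (c - a)) * np.exp(c)
                 + u * (np.sin(u * (d - a)) * np.exp(d)
                        - np.sin(u * (c - a)) * np.exp(c))) / (1.0 + u**2))
    def psi(c, d):
        out = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
        return np.concatenate([[d - c], out])
    V = 2.0 / (b - a) * K * (chi(0.0, b) - psi(0.0, b))

    terms = np.real(phi * np.exp(-1j * u * a)) * V
    terms[0] *= 0.5                           # first term gets weight 1/2
    return np.exp(-r * T) * terms.sum()

# Sanity check against the Black-Scholes formula
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
d1 = (np.log(S0/K) + (r + 0.5*sigma**2)*T) / (sigma*np.sqrt(T))
bs = S0*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d1 - sigma*np.sqrt(T))
print("COS:", cos_call(S0, K, T, r, sigma), " BS:", bs)
```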


Poster presentation: second session, starting on Tuesday

Tijana Levajkovic (University of Innsbruck)

Numerical framework for the stochastic linear quadratic control problem

Many problems in mathematical finance can be formulated as stochastic linear quadratic control (SLQR) problems: the dynamics of the system are linear and the cost functional is quadratic. For a well-posed SLQR problem, the optimal control is given in terms of a Riccati equation. We provide a numerical framework for solving the SLQR problem using a polynomial chaos expansion approach in the white noise setting. After applying the polynomial chaos expansion to the state equation, we obtain a system of infinitely many deterministic partial differential equations in terms of the coefficients of the state and control variables. We set up a control problem for each equation, which results in a family of deterministic linear quadratic regulator problems. Solving these control problems, we find the optimal coefficients of the state and the control. We prove the optimality of the solution expressed in terms of this expansion by comparison with a direct approach. Our approach applies to the very general case where the coefficients, both in the state equation and in the cost functional, are random. Moreover, the results can be extended to control problems whose dynamics are driven by fractional Brownian motion.
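
Each member of the resulting family of deterministic problems is a standard linear quadratic regulator. As a hedged illustration (our toy example, not the authors' solver), the scalar deterministic LQR reduces to a backward Riccati ODE:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar deterministic LQR: minimize  int_0^T (q x^2 + r u^2) dt + f x(T)^2
# subject to dx/dt = a x + b u.  The optimal feedback is
# u(t) = -(b/r) P(t) x(t), where P solves a backward Riccati ODE.
a, b, q, r, f, T = -0.5, 1.0, 1.0, 0.1, 2.0, 1.0

def riccati(s, P):
    # Backward equation -dP/dt = 2 a P - (b^2/r) P^2 + q with P(T) = f,
    # integrated forward in reversed time s = T - t.
    return 2 * a * P - (b**2 / r) * P**2 + q

sol = solve_ivp(riccati, [0.0, T], [f], dense_output=True, rtol=1e-8)
P = lambda t: sol.sol(T - t)[0]               # map reversed time back to t
print("P(0):", P(0.0), " feedback gain at t=0:", -(b / r) * P(0.0))
```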


Poster presentation: second session, starting on Tuesday

Duc Hoang Luu (Max Planck institute for mathematics in the sciences, Leipzig)

Stationary solution of the variance process in fractional Heston model

It is well known that the variance process in the classical Heston model possesses a stationary distribution to which other measures converge at an exponential rate. In this poster the fractional Heston model is considered, in which the variance process, although still mean-reverting, is driven by a fractional Brownian motion. Since this process is not Markovian, little is known about the long-term behavior of the solution. The approach uses the theories of rough paths, random dynamical systems and random attractors. The main results are the existence, uniqueness and positivity of the solution, and the existence of a global random attractor for the random dynamical system generated by the initial equation.
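
A simulation sketch may help fix ideas. The following toy example (the dynamics and parameters are our assumption, not necessarily the poster's exact model) generates one path of a mean-reverting process driven by fractional Brownian motion, with the fBm path obtained from a Cholesky factorization of its exact covariance; note that positivity is not guaranteed by this naive Euler scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
H, n, T = 0.7, 500, 1.0                 # Hurst index, grid size, horizon
t = np.linspace(T / n, T, n)

# Exact fBm covariance: C(s,u) = 0.5 (s^{2H} + u^{2H} - |u - s|^{2H})
S, U = np.meshgrid(t, t)
C = 0.5 * (S**(2*H) + U**(2*H) - np.abs(S - U)**(2*H))
B = np.linalg.cholesky(C) @ rng.normal(size=n)     # one fBm path

# Euler scheme for dV = kappa (theta - V) dt + sigma dB^H
kappa, theta, sigma, V0 = 2.0, 0.04, 0.1, 0.04
dB = np.diff(np.concatenate([[0.0], B]))
V = np.empty(n + 1); V[0] = V0
for i in range(n):
    V[i+1] = V[i] + kappa * (theta - V[i]) * (T / n) + sigma * dB[i]
print("terminal variance:", V[-1])
```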


Poster presentation: second session, starting on Tuesday

Giovanni Masala (University of Cagliari)

Electricity derivatives: an application to IDEX Italian Market

The liberalization of electricity markets has produced more volatile electricity prices and increased trading in electricity derivatives.

Moreover, electricity markets are currently facing significant changes with the introduction of renewable energies and new demand-response mechanisms.

The pricing of electricity derivatives is a challenging task due to electricity's peculiar characteristics compared to stocks and commodities (non-storability, generation constraints, transmission constraints, seasonality and weather dependence, for example).

Another important aspect in pricing electricity derivatives is the fact that electricity must be produced in the same quantity as it is consumed, in real time, in order to avoid damaging network collapses.

Energy risk management uses futures to hedge against spot price fluctuations during the delivery period. Futures contracts are sold and bought to lock in prices in advance for the planned generation or consumption of the coming years, quarters and months, so that spot trading is only used to optimize the procurement and sale of power in the short run. Futures are also the most natural vehicles for investors willing to take positions in power markets without the underlying physical constraints.

We then focus on the Italian market. For this purpose, we consider IDEX, the Energy Derivatives segment of IDEM, the Italian derivatives market managed by "Borsa Italiana" (a company of the London Stock Exchange Group). It was launched in 2008 and represents a regulated market where Italian power derivatives are traded. It currently trades both baseload and peakload power futures (on a monthly, quarterly and yearly basis).

It is well known that the features of electricity price dynamics lead to high price volatility. For this reason, power futures allow operators within the industry to face this kind of risk by providing hedging tools in a safe trading environment. The goal is to enable efficient business planning with greater operational profitability.

The underlying power spot market, namely the day-ahead market, is managed by a state-owned company, GME ("Gestore Mercati Energetici"). In addition, the single national purchase price PUN ("Prezzo Unico Nazionale") is calculated for every hour as a weighted average of the zonal prices determined on the day-ahead market.

In this survey, we investigate the relationships between IDEX futures prices and the electricity price and load for the Italian market.

This is joint work with Laura Casula.

Main references:
[1] Aïd R. Electricity Derivatives (2015), Springer, ISBN 978-3-319-08394-0.
[2] Benth F.E. and Krühner P. Derivatives Pricing in Energy Markets: An Infinite-Dimensional Approach (2015), SIAM J. Financial Math., Vol. 6, pp. 825–869.
[3] Caporin M., Preś J. and Torro H. Model based Monte Carlo pricing of energy and temperature Quanto options (2012), Energy Economics, Vol. 34, pp. 1700–1712.
[4] Füss R., Mahringer S. and Prokopczuk M. Electricity derivatives pricing with forward-looking information (2015), Journal of Economic Dynamics & Control, Vol. 58, pp. 34–57.
[5] Weron R. Modeling and forecasting electricity loads and prices. A statistical approach (2006), John Wiley & Sons Ltd.


Poster presentation: first session, starting on Monday

Ethan Owen Petersen (Rose-Hulman Institute of Technology, Indiana)

The Stock Price Effect of Apple Keynotes

In this paper, we analyze the volatility of Apple's stock from January 3, 2005 to October 9, 2014, focusing on the range from 30 days before each product announcement until 30 days after. Product announcements are filtered: announcements whose 60-day range is free of other events are separated out. This filtration is chosen to isolate, and study, a potential cross-effect. For Apple keynotes there are two significant dates: the day the invitations to the event are received and the day of the event itself. As such, the statistical analysis is conducted for both invite-centered and event-centered time frames. A comparison to the VIX is made to determine whether the trend simply follows the market or deviates from it. Regardless of the filtration, we find a clear deviation from the market. Comparing the data sets, there are significantly different trends: isolated events show a steadily decreasing, erratic trend in volatility, while an increasing, linear trend is observed for clustered events. According to the Efficient Market Hypothesis we would expect a change when new information becomes publicly known, and the results of this study support this claim.
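
A minimal sketch of the event-window computation, on synthetic prices and a hypothetical keynote date rather than the actual Apple data:

```python
import numpy as np
import pandas as pd

# Realized volatility of daily log returns in a +/-30 day window around
# an announcement date (synthetic price path; illustrative only).
rng = np.random.default_rng(2)
dates = pd.bdate_range("2005-01-03", "2014-10-09")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.02, len(dates)))),
                   index=dates)
returns = np.log(prices).diff().dropna()

def window_vol(event_date, half_width=30):
    """Annualized 5-day rolling volatility around one event date."""
    loc = returns.index.searchsorted(event_date)
    win = returns.iloc[max(loc - half_width, 0):loc + half_width + 1]
    return win.rolling(5).std() * np.sqrt(252)

print(window_vol(pd.Timestamp("2010-01-27")).dropna().head())
```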


Poster presentation: second session, starting on Tuesday

Christian Pötz (Technical University of Munich)

Chebyshev interpolation for analytic and non-analytic option prices

Based on Gaß et al. (2015), we investigate the Chebyshev interpolation method in the context of parametric option pricing. Gaß et al. (2015) propose Chebyshev interpolation of option prices in the parameters to gain efficiency for recurrent pricing tasks. They show that for a large set of options, models and free parameters, prices are analytic functions of the parameters and the interpolation converges (sub)exponentially. For some interesting models, however, the price function is not analytic but still smooth. We apply a convergence result for differentiable functions and obtain polynomial error decay for the approximation of smooth option prices and their derivatives. We analyse the regularity of the option price in Lévy models using Fourier pricing techniques. As an example of a model which is non-analytic in some of its parameters, we investigate the Normal Inverse Gaussian (NIG) model in detail. For a numerical convergence study, the method is implemented for 1, 2 and 3 free parameters in the NIG model and, for comparison, in the Black-Scholes model. The error decay is observed for different parameter constellations and compared to the theoretical bound. The empirical results confirm the high accuracy and efficiency of the method. (A one-parameter illustration follows the reference below.)

This is joint work with Kathrin Glau.

Reference:
Gaß, M., Glau, K., Mahlstedt, M., and Mair, M. (2015). Chebyshev interpolation for parametric option pricing. arXiv preprint arXiv:1505.04648.
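
The idea of the method is easy to demonstrate in one parameter. The following sketch (Black-Scholes in the volatility, not the NIG setting of the poster) interpolates the price function at Chebyshev nodes and reuses the cheap polynomial:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch
from scipy.stats import norm

# Fit the Black-Scholes call price as a function of volatility on
# [0.1, 0.5] by a degree-n Chebyshev interpolant (illustrative setup).
S0, K, T, r = 100.0, 100.0, 1.0, 0.05

def bs_call(sigma):
    d1 = (np.log(S0/K) + (r + 0.5*sigma**2)*T) / (sigma*np.sqrt(T))
    return S0*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d1 - sigma*np.sqrt(T))

lo, hi, n = 0.1, 0.5, 10
nodes = np.cos((2*np.arange(n+1) + 1) * np.pi / (2*(n+1)))  # nodes on [-1,1]
sig_nodes = 0.5*(hi - lo)*nodes + 0.5*(hi + lo)             # mapped to [lo,hi]
coef = Ch.chebfit(nodes, bs_call(sig_nodes), n)             # interpolant

sig = 0.2347                                                # query point
approx = Ch.chebval(2*(sig - lo)/(hi - lo) - 1, coef)
print("interpolated:", approx, " exact:", bs_call(sig),
      " error:", abs(approx - bs_call(sig)))
```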


Poster presentation: second session, starting on Tuesday

Emel Savku (Middle East Technical University)

Maximum Principle For A Delayed Stochastic Hybrid Model and An Application to Finance

Stochastic hybrid systems are natural and efficient candidates for modeling abrupt changes in financial markets, given their heterogeneous nature. We study a stochastic optimal control problem for a stochastic hybrid model within the framework of regime switches. The necessary and sufficient maximum principle for a Markov regime-switching jump-diffusion with delay is developed. The associated adjoint equations are shown to satisfy an anticipated backward stochastic differential equation (ABSDE). We provide an existence and uniqueness result for such ABSDEs. We illustrate our results with an application to optimal consumption from a cash flow with delay.


Poster presentation: first session, starting on Monday

Sergei Sidorov (Saratov State University)

Optimal payoffs for an investor with asymmetric attitude to gains and losses

The description of Cumulative Prospect Theory (CPT) includes three important parts: a value function over outcomes, v; a weighting function over cumulative probabilities, w; and the CPT-utility, the unconditional expectation of the value function v under the probability distortion w. In this paper we consider the problem of choosing a CPT-investor's portfolio in the case of a complete market. The problem of finding the optimal portfolio for a CPT-investor is to maximize the unconditional expectation of the value function v under the probability distortion w over terminal consumption, subject to a budget constraint on initial wealth. We find the optimal payoffs for the CPT-investor in the classic Black-Scholes environment, assuming that there are a single lognormally distributed stock and a risk-free bond. We compare the optimal payoffs of the CPT-investor with those of an investor who maximizes expected power utility over terminal payoffs subject to the same budget constraint (the EU-investor). Moreover, we examine the problem of optimal portfolio choice for PT- and CPT-investors when dynamic trading is prohibited. We consider a simple stochastic model with one risk-free asset and one risky asset that follows a geometric Brownian motion, and we prove the existence of a non-trivial optimal choice of asset weights under some conditions on the parameters of the value function v.

This is joint work with Sergei Mironov.
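
For readers unfamiliar with the CPT ingredients, the following sketch evaluates a finite lottery with the standard Tversky-Kahneman (1992) parametrization; the poster's value and weighting functions may be parametrized differently.

```python
import numpy as np

# Tversky-Kahneman parameters: curvature alpha, loss aversion lam,
# probability weighting gamma (illustrative standard values).
alpha, lam, gamma = 0.88, 2.25, 0.61

def v(x):                        # S-shaped value function
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

def w(p):                        # inverse-S probability weighting
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def cpt_value(outcomes, probs):
    """CPT value of a finite lottery via rank-dependent decision weights."""
    x = np.asarray(outcomes, dtype=float)
    p = np.asarray(probs, dtype=float)
    order = np.argsort(x)
    x, p = x[order], p[order]               # sort from worst to best
    weights = np.empty(len(x))
    for i in range(len(x)):
        if x[i] >= 0:                       # gains: distort the upper tail
            weights[i] = w(p[i:].sum()) - w(p[i + 1:].sum())
        else:                               # losses: distort the lower tail
            weights[i] = w(p[:i + 1].sum()) - w(p[:i].sum())
    return float((weights * v(x)).sum())

# A 50/50 gamble (+100 / -100) has negative CPT value: loss aversion.
print(cpt_value([100.0, -100.0], [0.5, 0.5]))
```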


Poster presentation: second session, starting on Tuesday

Wayne Tarrant (Rose-Hulman Institute of Technology, Indiana)

Effective historical risk measures

In this talk we look at the efficacy of typical historical risk measures on different markets. We consider Value at Risk and Expected Shortfall as the two most common risk measures, computing them over varying time frames and confidence levels in order to determine whether some historical measures predict the level of future losses with any reliability.

We are most interested in stock market indices, bonds, foreign exchange rates, and commodities. We consider times of recession and expansion to see if there are differences in the efficacy of the measures. In the end we compute a spectral risk measure for each market and time frame to see whether any single measure is consistently accurate across market type, confidence level and/or time frame.
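
The two historical measures under comparison are straightforward to compute from a window of past losses; a minimal sketch on synthetic heavy-tailed data:

```python
import numpy as np

rng = np.random.default_rng(3)
losses = -rng.standard_t(df=4, size=1000) * 0.01   # heavy-tailed daily P&L

def hist_var(losses, level=0.99):
    """Historical Value at Risk: empirical quantile of the losses."""
    return np.quantile(losses, level)

def hist_es(losses, level=0.99):
    """Historical Expected Shortfall: mean loss beyond the VaR."""
    var = hist_var(losses, level)
    return losses[losses >= var].mean()

for level in (0.95, 0.99):
    print(f"{level:.0%}  VaR: {hist_var(losses, level):.4f}"
          f"  ES: {hist_es(losses, level):.4f}")
```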


Poster presentation: first session, starting on Monday

Mailan Trinh (University of Sussex)

Performance of model selection within a class of models for intraday trading data

Analysis of high-frequency data suggests that intraday trading exhibits non-stationary behaviour. Therefore, we propose a simple model for non-stationary returns based on a non-homogeneous normal compound Poisson process. For practical reasons, we are mainly interested in the performance of information criteria (IC) for model selection within this class of models. In a Monte Carlo experiment, we test the commonly used IC: Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Hannan-Quinn information criterion (HQ). The IC perform relatively well in cases with few parameters, but performance decreases as the number of parameters grows. This is joint work with Linda Ponta (University of Genoa), Marco Raberto (University of Genoa), Enrico Scalas (University of Sussex) and Silvano Cincotti (University of Genoa).
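
The three criteria differ only in their complexity penalty. A minimal sketch (a toy Gaussian selection task, not the compound Poisson setting of the poster):

```python
import numpy as np

# log-likelihood llh, k parameters, n observations
def aic(llh, k, n): return -2 * llh + 2 * k
def bic(llh, k, n): return -2 * llh + k * np.log(n)
def hq(llh, k, n):  return -2 * llh + 2 * k * np.log(np.log(n))

# Toy selection task: does each IC prefer the true 1-parameter model
# over an overparametrized 2-parameter one?  (Gaussian mean/variance fit.)
rng = np.random.default_rng(4)
x = rng.normal(loc=0.0, scale=1.0, size=200)
n = len(x)
llh1 = np.sum(-0.5*np.log(2*np.pi) - 0.5*(x - x.mean())**2)          # sigma = 1 known
s2 = x.var()
llh2 = np.sum(-0.5*np.log(2*np.pi*s2) - 0.5*(x - x.mean())**2 / s2)  # sigma estimated

for name, ic in [("AIC", aic), ("BIC", bic), ("HQ", hq)]:
    print(name, " model 1:", round(ic(llh1, 1, n), 2),
          " model 2:", round(ic(llh2, 2, n), 2))
```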


Poster presentation: first session, starting on Monday

Barbara Trivellato (Politecnico di Torino)

Exponential models and utility maximization by Orlicz spaces

Applications of statistical exponential models built on Orlicz spaces arise in several fields, such as differential geometry, algebraic statistics and information theory. Their use in finance is quite new, although the importance of Orlicz spaces in utility maximization problems and in the theory of risk measures is known. We explore some theoretical concepts in the theory of maximal exponential models and address the classical problem of exponential utility maximization in incomplete markets. In the exponential framework, the starting point is the notion of maximal exponential model centered at a given positive density p, introduced by Pistone and Sempi (1995). In that paper, the set of (strictly) positive densities is endowed with a structure of exponential Banach manifold, using the Orlicz space associated to an exponentially growing Young function.

One of the main results of Cena and Pistone (2007) states that any density belonging to the maximal exponential model centered at p is connected to p by an open exponential arc, and vice versa (by "open" we essentially mean that the two densities are not the extremal points of the arc).

Many examples in the literature show that the minimal entropy martingale (density) measure q is connected to p = 1 by an open exponential arc, and this obviously reflects on the solution of the primal problem. In Santacroce, Siri and Trivellato (2016), the equivalence between the equality of the maximal exponential models centered at two (connected) densities p and q and the equality of the Orlicz spaces associated with the same densities is proved. By duality methods (see Biagini (2008), Biagini and Frittelli (2008) for an Orlicz space approach), this result is used in ongoing research to characterize the solution of the maximization problem in terms of the solution of the dual problem.


Poster presentation: second session, starting on Tuesday

Robert Wardenga (TU Dresden)

Continuous tenor affine LIBOR models and XVA

We consider the class of affine LIBOR models with multiple curves, which is an analytically tractable class of discrete tenor models. By introducing an interpolating function, we extend the affine LIBOR models to a continuous tenor and derive expressions for the instantaneous forward rate and the short rate. We show that the continuous tenor model is arbitrage-free, that the analytical tractability is retained under the spot martingale measure, and that under mild conditions an interpolating function can be found such that the extended model fits any initial forward curve. This allows us to compute value adjustments (i.e. XVAs) consistently. As an application, we compute the price and value adjustments for a basis swap, and study the model risk associated with different interpolating functions.

This is joint work with Antonis Papapantoleon.


Poster presentation: first session, starting on Monday

Christoph E. Weiss (University of Cambridge)

Modelling and Forecasting Inflation Rate Volatility

The adverse effects of inflation volatility on economic growth and welfare are well known. The generating process that underlies inflation volatility is not as well understood as it should be.
Using the monthly data underlying the Retail Price Index for the U.K., for the period since inflation targeting was introduced in October 1992, we analyse the drivers of the inflation rate and its volatility. We explore patterns in the time-varying covariation among product inflation rates, which aggregate up to category inflation rates that in turn aggregate up to the overall inflation rate. We find that aggregate inflation volatility closely tracks the time path of covariation, which turns out to be driven mainly by the variances of common shocks shared by all products and the covariances of idiosyncratic, product-level shocks.
Using a forecasting system that comprises models for the mean and the variance, following the disaggregated approach of the hierarchical time series forecasting framework, we exploit the index structure of the aggregate inflation rate and obtain forecasts that - depending on the choice of forecast horizon and proxy for 'actual' inflation volatility - are between 16% and 108% more accurate than a GARCH(1,1) for the aggregate inflation rate/volatility.

This is joint work with Paul Kattuman.
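
The GARCH(1,1) benchmark referred to above follows the recursion sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2; a minimal filter with illustrative (not estimated) parameters:

```python
import numpy as np

def garch_filter(eps, omega=1e-6, alpha=0.08, beta=0.90):
    """Run the GARCH(1,1) variance recursion over residuals eps and
    return the one-step-ahead variance forecast."""
    sigma2 = eps.var()                     # initialize at sample variance
    for e in eps:
        sigma2 = omega + alpha * e**2 + beta * sigma2
    return sigma2

rng = np.random.default_rng(5)
eps = rng.normal(0, 0.01, size=240)        # e.g. 20 years of monthly residuals
print("next-month volatility forecast:", np.sqrt(garch_filter(eps)))
```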


Poster presentation: first session, starting on Monday

Bilgi Yilmaz (Middle East Technical University, Ankara)

A Stochastic Model Approach to Determine the Pattern in House Prices

The growing interest in real estate as an investment tool calls for more sophisticated methods, as traditional valuation methods have become inadequate for analyzing changes in house prices. Common valuation methods such as hedonic and multiple-regression models enable researchers to display the importance and the impact of significant house characteristics on the price. Even though econometric methods are the most commonly used, their estimation power is questionable when required conditions such as normality, independence and linearity are not satisfied. Therefore, there is a need for new approaches that properly capture the important factors and the unexpected fluctuations in house prices.

The aim of this study is to design a model, based on stochastic processes, for house prices and related indicators. Such a model, which captures the pattern of house prices indirectly through other underlying stochastic variables, is expected to characterize the price structure. Based on systems of stochastic differential equations (SDEs), we aim to determine theoretical fair house prices. One of the main advantages of this approach is to identify house prices that share common time series components with explanatory variables, such as mortgage rates. The model allows arbitrary correlation between house prices and mortgage rates and therefore generalizes and combines the historical house prices and mortgage rates, using all the information in the initial term structures of both. Full probabilistic inference for the model parameters is facilitated by adapting a Monte Carlo (MC) algorithm to the formulation of the proposed model. The critical part of this pricing approach is the precise description of the stochastic processes governing the behavior of the house price and the interest rate, since the characteristics of these processes determine the exact nature of both variables. The study contributes to the existing literature by introducing SDEs into the econometric analysis of the housing market.

This is joint work with A. Sevtap Selçuk-Kestel.
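
As a purely illustrative sketch of such a coupled system (the concrete dynamics and parameters below are our assumption, not the authors' specification), one can simulate a lognormal house-price index correlated with a Vasicek-type mortgage rate:

```python
import numpy as np

rng = np.random.default_rng(6)
n, T = 520, 10.0
dt = T / n
rho = -0.4                                  # price/rate correlation
mu, sigma_H = 0.03, 0.08                    # house-price drift, volatility
kappa, theta, sigma_m = 0.3, 0.05, 0.01     # rate mean reversion

H, m = np.empty(n + 1), np.empty(n + 1)
H[0], m[0] = 100.0, 0.04
for i in range(n):
    z1, z2 = rng.normal(size=2)
    dW_H = np.sqrt(dt) * z1
    dW_m = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)  # correlated driver
    H[i+1] = H[i] * np.exp((mu - 0.5*sigma_H**2) * dt + sigma_H * dW_H)
    m[i+1] = m[i] + kappa * (theta - m[i]) * dt + sigma_m * dW_m
print("terminal index:", H[-1], " terminal rate:", m[-1])
```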


Poster presentation: first session, starting on Monday

Yeliz Yolcu Okur (Middle East Technical University, Ankara)

Option Pricing under Heston Stochastic Volatility Model using Discontinuous Galerkin Finite Elements

We consider an interior penalty discontinuous Galerkin finite element method (dGFEM) for a variable-coefficient diffusion-convection-reaction equation to discretize the Heston PDE for the numerical pricing of European options. The mixed derivatives in the cross-diffusion term are handled in a natural way compared to finite difference methods. The advantages of dGFEM space discretization and the Crank-Nicolson method with Rannacher smoothing as time integrator for the Heston model with non-smooth initial and boundary conditions are illustrated in several numerical examples for European call, butterfly spread and digital options. The convection-dominated Heston PDE for vanishing volatility is solved efficiently using the adaptive dGFEM algorithm. Numerical experiments illustrate that dGFEM is highly accurate and very efficient for pricing financial options. (A simplified sketch of the time stepping follows the references below.)

This is joint work with Sinem Kozpinar, Murat Uzunca and Bülent Karasözen.

References:
[1] Heston, S. L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies, 6, 327-343.
[2] Riviere, B. (2008). Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations, Theory and Implementation, SIAM.
[3] Winkler, G., Apel, T., Wystup, U. (2001). Valuation of options in Heston's stochastic volatility model using finite element methods, Foreign Exchange Risk, 283-303.
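
The time integrator is easy to illustrate in a simplified setting. The sketch below applies Crank-Nicolson with a Rannacher start-up (a few implicit-Euler steps to damp oscillations caused by the non-smooth payoff) to the one-dimensional Black-Scholes PDE with central finite differences; the poster treats the two-dimensional Heston PDE with dG elements instead.

```python
import numpy as np

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S = np.linspace(0.0, 300.0, 301); dS = S[1] - S[0]
V = np.maximum(S - K, 0.0)                 # call payoff (kink at S = K)
M, dt = 100, T / 100

# Black-Scholes spatial operator  L V = 0.5 sigma^2 S^2 V'' + r S V' - r V
n = len(S)
Lmat = np.zeros((n, n))
for i in range(1, n - 1):
    a = 0.5 * sigma**2 * S[i]**2 / dS**2
    b = r * S[i] / (2 * dS)
    Lmat[i, i-1], Lmat[i, i], Lmat[i, i+1] = a - b, -2*a - r, a + b
I = np.eye(n)

for step in range(M):
    theta = 1.0 if step < 4 else 0.5       # Rannacher: 4 implicit-Euler steps
    A = I - theta * dt * Lmat
    rhs = (I + (1 - theta) * dt * Lmat) @ V
    tau = (step + 1) * dt                  # time to maturity after this step
    A[0, :] = 0;  A[0, 0] = 1;   rhs[0] = 0.0               # V(0) = 0
    A[-1, :] = 0; A[-1, -1] = 1; rhs[-1] = S[-1] - K * np.exp(-r * tau)
    V = np.linalg.solve(A, rhs)
print("price at S=100:", np.interp(100.0, S, V))
```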


Poster presentation: first session, starting on Monday

Yeliz Yolcu Okur (Middle East Technical University, Ankara)

Pricing Equity Options under a Double-Exponential Jump-Diffusion Process in the presence of Stochastic Barrier

We assume that the firm defaults when its asset value hits a stochastic barrier related to its outstanding obligations. We derive a partial integro-differential equation for pricing equity options where the underlying security is driven by a double-exponential jump-diffusion model. In order to solve the corresponding partial integro-differential equation numerically, the infinite domain is localized and a finite difference scheme is applied. We also compare the approximate solutions with the results of Monte Carlo simulation to validate our findings. (A Monte Carlo sketch of the underlying jump-diffusion follows the references below.)

This is joint work with Sinem Kozpinar, Omur Ugur and Cansu Evcin.

References:
[1] Cont, R., Voltchkova, E. (2005). A finite difference scheme for option pricing in jump diffusion and exponential Lévy models, SIAM Journal on Numerical Analysis, 43 (4), 1596-1626.
[2] Kou, S. (2002). Jump diffusion model for option pricing, Management Science, 48 (8), 1086-1101.
[3] Sepp, A. (2006). Extended credit grades model with stochastic volatility and jumps, Wilmott magazine, 50-62.
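
The Monte Carlo benchmark is straightforward for the underlying process. The sketch below simulates the double-exponential jump-diffusion of Kou (2002) and prices a vanilla call (without the stochastic default barrier, which the poster adds on top):

```python
import numpy as np

rng = np.random.default_rng(7)
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
lam, p, eta1, eta2 = 1.0, 0.4, 10.0, 5.0    # jump intensity, two-sided tails

n_paths = 100_000
N = rng.poisson(lam * T, n_paths)           # number of jumps per path
# Sum of jumps: each jump is +Exp(eta1) w.p. p, -Exp(eta2) w.p. 1-p.
J = np.array([
    np.sum(np.where(rng.random(k) < p,
                    rng.exponential(1/eta1, k),
                    -rng.exponential(1/eta2, k)))
    for k in N])
# Martingale (compensator) correction so that E[S_T] = S0 e^{rT}
zeta = p * eta1/(eta1 - 1) + (1 - p) * eta2/(eta2 + 1) - 1
drift = (r - 0.5*sigma**2 - lam*zeta) * T
ST = S0 * np.exp(drift + sigma*np.sqrt(T)*rng.normal(size=n_paths) + J)
print("Kou MC call price:",
      np.exp(-r*T) * np.maximum(ST - K, 0.0).mean())
```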


Poster presentation: second session, starting on Tuesday

Daisuke Yoshikawa (Hokkai-Gakuen University)

On the market of contingent claims in models with uncertainty

As the bankruptcy of Lehman Brothers shows, the probability of a huge loss is often surprisingly misjudged. Nevertheless, some agents remain in the financial market without sufficient information on the true loss probability. For the stabilization of the financial market, it is necessary to pay attention to such agents. In this paper, we focus on this problem in the derivative market in particular.

Every investor expects higher satisfaction from trading derivatives, even if investors' knowledge of the true probability differs from one another. Thus, they will offer derivative prices that improve on a standard which is at least acceptable to them. For such a standard, we use the framework of indifference pricing: we derive the utility indifference price when the true probability is not specified. To this end, we consider that an investor driven to pessimism by a lack of information on the probability will prepare for losses by assuming the worst scenario, even if this scenario occurs with extremely small probability. This principle of decision making is called maxmin expected utility (MEU). That is, we derive derivative prices under MEU.

Based on the resulting derivative prices with and without MEU, we consider how transactions in derivatives are formed. Further, we analyse the sensitivity of the volume of derivative transactions with respect to investors' initial wealth and risk aversion.
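
Numerically, the MEU criterion is simply a worst case over candidate models. A minimal sketch with CARA utility and hypothetical scenarios:

```python
import numpy as np

def meu(payoffs, models, gamma=1.0):
    """Worst-case expected exponential (CARA) utility over a set of models."""
    u = -np.exp(-gamma * payoffs)                    # CARA utility of P&L
    return min(float(p @ u) for p in models)

payoffs = np.array([-1.0, 0.0, 2.0])                 # derivative P&L scenarios
models = [np.array([0.1, 0.6, 0.3]),                 # baseline model
          np.array([0.3, 0.5, 0.2]),                 # pessimistic tilt
          np.array([0.5, 0.4, 0.1])]                 # worst-case-heavy model
print("MEU value:", meu(payoffs, models))
```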


Poster presentation: first session, starting on Monday

Dariusz Zawisza (Jagiellonian University in Krakow)

A consumption-investment problem when some coefficients might be unbounded

During the talk I will present our recent results concerning an existence theorem for HJB equations arising in certain consumption-investment problems. Asset prices are diffusions whose dynamics are affected by a correlated, non-tradable (but observable) stochastic factor, and our investor tries to maximize the standard HARA utility functional. We put the emphasis on problems in which the interest rate or the market price of risk is an unbounded function of the factor process. We consider both the finite and the infinite horizon formulation. Our results generalize many other optimization problems. The talk will be based on the paper by Zawisza [1] and current research.

[1] D. Zawisza, Smooth solutions to discounted reward control problems with unbounded discount rate and financial applications, arXiv:1602.00899v2.


Poster presentation: second session, starting on Tuesday

Aleksandra Zhukova (Russian Academy of Sciences)

A model of optimal consumption with random times of obtaining loans

This work continues studies of the influence of random moments of change in control on the behavior of a consumer. In this paper we introduce random (Poisson) moments of access to credit. The model assumes an infinitely lived agent maximizing discounted utility (of CRRA type) who has external income and may take loans to purchase consumption goods at a known price. The result of the analysis is that the optimal control is such that, in the high-frequency limit, the consumption expenditure is a non-random function depending only on time. Perturbation methods suggest that this function depends on the integral characteristics of the external income. We solve the optimal control problem via sufficient optimality conditions using Lagrange multipliers.

This is joint work with Igor Pospelov.


Gold Sponsors

Raiffeisen Bank International
KPMG Austria GmbH Wirtschaftsprüfungs- und Steuerberatungsgesellschaft
Erste Group Bank AG
Deloitte

Silver Sponsors

Raiffeisen Capital Management
d-fine Austria GmbH

Organisers

WU Wien - Vienna University of Economics and Business
FAM @ TU Wien - Vienna University of Technology
University of Vienna