Rathaus Wien / Vienna City Hall, (c) WienTourismus/Christian Stemper

21st International Congress on
Insurance: Mathematics and Economics - IME 2017

Vienna, Mon–Wed, July 3–5, 2017

IME Educational Workshop
Vienna, Thu–Fri, July 6–7, 2017

Program and Abstracts




 

Panel Discussion

"Ultra-low interest rates in insurance business"

Panelists:

  • Paul Embrechts 
    Full professor of Mathematics, ETH Zurich, CH
  • Eva Fels 
    Austrian Association of Insurance Companies, AT
  • Rüdiger Frey 
    Full professor for Mathematics and Finance, WU Wien, AT
  • Axel Helmert 
    Managing Director of msg life central europe and msg life Austria, DE & AT
  • Ralf Korn 
    Full professor at the Financial Mathematics Group, TU Kaiserslautern, DE
  • Werner Matula 
    Head of the Actuarial Department, Vienna Insurance Group, AT

Moderator:

  • Reinhold Kainhofer 
    Since April 2016: Deputy responsible actuary (Life), Generali Versicherung AG, AT;
    previously: Financial and actuarial analyst in insurance supervision, Austrian Financial Market Authority, AT

Abstract:

In the past, economists traditionally assumed that nominal interest rates are bounded below by zero. Over the last 7-8 years, however, interest rates have fallen significantly, even turning negative in some countries. In fact, the European Central Bank cut its fixed rate to zero and the deposit facility rate to -0.40% on 16 March 2016. The ultra-low interest rate environment remains a key concern for the insurance business. For instance, life insurers' income from investments might be insufficient to meet guaranteed obligations, and non-life insurers might have to raise insurance prices to compensate for the reduction in investment income.
An anticipated rise in interest rates should improve the situation, but it is not free of dangers.

In the panel discussion, the problem of ultra-low interest rates and various tools to address it will be reviewed from different angles. Panelists will also discuss the general implications of the ultra-low interest rate environment for industry trends and regulatory demands.


 

Plenary Talks at IME Congress


Invited plenary talk: Monday, 09:10, Room 1

Corina Constantinescu-Loeffen  (Department of Mathematical Sciences, University of Liverpool, United Kingdom)

Ruin probabilities in insurance risk models

Abstract: Starting from the classical compound Poisson collective risk model of Cramér and Lundberg, more complex models have been proposed to account for real phenomena in insurance business, such as the impact of financial investments or the dependence among events. We present some exact and asymptotic ruin probabilities results in different insurance risk models. For their derivation we advance both probabilistic and analytical arguments, employing differential equations, Laplace transforms, fractional calculus and regular variation theory.
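
For readers new to the area, a standard way to write the classical model mentioned above (textbook notation, not taken from the talk) is

    \[ U(t) = u + ct - \sum_{i=1}^{N(t)} X_i, \qquad \psi(u) = P\Big(\inf_{t \ge 0} U(t) < 0 \,\Big|\, U(0) = u\Big), \]

where u is the initial capital, c the premium rate, N a Poisson claim-counting process with intensity λ and the X_i are i.i.d. claim sizes. Under the net profit condition c > λE[X] and light-tailed claims, Lundberg's inequality ψ(u) ≤ e^{-Ru} holds, where the adjustment coefficient R solves λ(E[e^{RX}] - 1) = cR.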


Invited plenary talk: Wednesday, 12:05, Room 1

Catherine Donnelly  (Department of Actuarial Mathematics and Statistics, Heriot-Watt University, United Kingdom)

What is the future of pensions?

The pensions industry is leaving what looks like a golden age. What is it leaving behind? Where does the industry appear to be heading? What could it offer an individual, to help them plan, save and invest for a decent retirement?

We hope to stimulate discussion on what we as academic researchers can do to improve the retirement outcomes of our society.


Invited plenary talk: Tuesday, 09:00, Room 1

Paul Embrechts  (Department of Mathematics, ETH Zurich, Switzerland)

Quantile-Based Risk Sharing and Equilibria

We address the problem of risk sharing among agents using a two-parameter class of quantile-based risk measures, the so-called Range-Value-at-Risk (RVaR), as their preferences. The family of RVaR includes the Value-at-Risk (VaR) and the Expected Shortfall (ES), the two popular and competing regulatory risk measures, as special cases. We first establish an inequality for RVaR-based risk aggregation, showing that RVaR satisfies a special form of subadditivity. Then, the Pareto-optimal risk sharing problem is solved through explicit construction. To study risk sharing in a competitive market, an Arrow-Debreu equilibrium is established for some simple, yet natural settings. Further, we investigate the problem of model uncertainty in risk sharing, and show that, generally, a robust optimal allocation exists if and only if none of the underlying risk measures is a VaR. Practical implications of our main results for risk management and policy makers are discussed, and several novel advantages of ES over VaR from the perspective of a regulator are thereby revealed.
This talk is based on joint work with Haiyan Liu and Ruodu Wang (University of Waterloo).
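
For orientation, one common parameterization of the risk measure family named above (conventions differ across papers) is

    \[ \mathrm{RVaR}_{\alpha,\beta}(X) \;=\; \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta}\mathrm{VaR}_{\gamma}(X)\,d\gamma, \qquad 0<\alpha<\beta<1, \]

where VaR_γ(X) denotes the γ-quantile of the loss X. With this convention, VaR_α is recovered in the limit β ↓ α and ES_α in the limit β → 1, which is the sense in which VaR and ES are special cases of RVaR.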


 

Plenary Lectures at IME Educational Workshop


Invited lecture: Thursday, 11:00, and Friday, 13:20, Room 1

Anna Rita Bacinello  (Department of Economics, Business, Mathematics and Statistics "Bruno de Finetti", University of Trieste, Italy)

Market-Consistent Valuation of Life Insurance Liabilities

Providers of life insurance and pension benefits have always had to deal with a number of risks of various natures affecting their business: financial-economic, biometric, behavioural, and so on. Nevertheless, increasing competition has led them to offer more and more complex products, in order to meet the needs of their customers as closely as possible. As a result, modern life and pension products are designed as packages of various riders that may or may not be included in the insurance contract along with its basic element. A typical example is variable annuities, which package several types of options and guarantees exercisable at the policyholder's discretion. Appropriate modelling tools are therefore necessary in order to value such products. The aim of this course is to provide a market-consistent valuation framework for life and pension insurance liabilities, with a special focus on variable annuities.


Invited lecture: Thursday, 9:00, and Friday, 10:50, Room 1

René Dahms  (Department of Mathematics, ETH Zurich, Switzerland)

Stochastic reserving in P&C insurance

Claim reserves are usually the largest item on the liability side of a P&C (re)insurer's balance sheet, so their estimation is very important. Moreover, under Solvency II, the SST (the Swiss counterpart of Solvency II) and the forthcoming accounting standard IFRS 17, it is essential to estimate not only the expected value but also the corresponding uncertainties. In this mini-course we give a short introduction to the reserving of P&C liabilities and the different types of reserving methods used in practice. We discuss the concept of best estimate reserves and different concepts of the corresponding uncertainties, using the example of Mack's Chain-Ladder model, probably the most widely used method in practice. Finally, we take a short look at Linear Stochastic Reserving methods, a large class of reserving methods that includes many classical methods used in practice.
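
As a minimal illustration of the Chain-Ladder mechanics discussed in the course (a sketch with made-up numbers, not material from the lecture), the volume-weighted development factors and the completed cumulative triangle can be obtained in a few lines of R:

    # Hypothetical 4x4 cumulative run-off triangle (rows = accident years,
    # columns = development years); NA marks future, unobserved cells.
    tri <- matrix(c(100, 150, 175, 180,
                    110, 168, 192,  NA,
                    120, 175,  NA,  NA,
                    130,  NA,  NA,  NA),
                  nrow = 4, byrow = TRUE)
    n <- nrow(tri)

    # Latest observed diagonal, one value per accident year
    latest <- sapply(1:n, function(i) tri[i, n - i + 1])

    # Volume-weighted development factors:
    # f_j = sum_i C_{i,j+1} / sum_i C_{i,j}, over rows with both cells observed
    f <- sapply(1:(n - 1), function(j) {
      idx <- which(!is.na(tri[, j + 1]))
      sum(tri[idx, j + 1]) / sum(tri[idx, j])
    })

    # Complete the triangle by rolling each row forward with the factors
    for (i in 2:n) {
      for (j in (n - i + 2):n) {
        tri[i, j] <- tri[i, j - 1] * f[j - 1]
      }
    }

    reserves <- tri[, n] - latest   # estimated outstanding claims per accident year
    reserves

The CRAN package ChainLadder implements this estimator together with Mack's uncertainty estimates (function MackChainLadder).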


Invited lecture: Thursday, 15:30, and Friday, 8:30, Room 1

Vincent Goulet  (École d'actuariat, Université Laval, Québec, Canada)

Computational Actuarial Science with R

Numerical and computational aspects play an ever-increasing role in the risk modelling and evaluation process. Participants in the workshop will improve their general programming skills and expand their knowledge of R for quantitative risk analysis.
The workshop focuses on best practices and adopts a hands-on approach with many demonstrations and exercises. We first review the basic notions of R programming from an actuarial perspective, study the most important tools and learn to use the language efficiently. Because it is an important topic for any programmer, we devote some time to floating point numbers and roundoff error. Based on a case study, the second part of the workshop follows a typical risk analysis process: manipulation and modelling of insurance data, estimation, measurement of risk, evaluation and simulation. In closing, participants learn to do more and be more effective in their work with literate programming and version control.
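
As a small base-R illustration of the roundoff issue mentioned above (not taken from the workshop material):

    0.1 + 0.2 == 0.3                  # FALSE: 0.1 and 0.2 have no exact binary representation
    print(0.1 + 0.2, digits = 17)     # 0.30000000000000004
    isTRUE(all.equal(0.1 + 0.2, 0.3)) # TRUE: comparison with a numerical tolerance
    .Machine$double.eps               # machine epsilon, about 2.2e-16
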
A basic knowledge of R and of standard statistical and actuarial procedures is assumed.

Technical requirements:
Laptop computer with the most recent version of R installed and either a good programmer's editor for R or an R IDE. We recommend:
 • GNU Emacs with ESS (macOS, Windows);
 • RStudio.

New: Documentation/slides:
https://vigou3.github.io/ime-2017-workshop-computational-actuarial-science-r/.

Short bio:
Vincent Goulet is professor at the School of Actuarial Science of Université Laval, Québec, Canada. He began using R when the version number still started with a 0. Vincent is the maintainer of actuar, the first R package specifically devoted to actuarial science and extreme value distributions. He has been teaching R programming to future actuaries for more than 12 years, an experience he distilled into the open document Introduction à la programmation en R (in French).


Invited lecture: Thursday, 13:30, and Friday, 15:10, Room 1

Stefan Thonhauser  (Institute of Analysis and Number Theory, Graz University of Technology, Austria)

Optimization problems in risk theory

In recent years, a large share of research in risk theory has dealt with stochastic optimization problems. Classical contributions put the main focus on the minimization of the ruin probability and the maximization of expected dividend payments, two objectives that are inconsistent with one another in terms of the valuation of an insurer's portfolio. In this lecture we recap the basic theory of stochastic optimal control and review the classical risk-theoretic results. Subsequently, we meet different variations of the classical value functions and underlying surplus processes. These modifications typically change the nature of the optimization problems, and explicit solutions are hardly available. As examples, we discuss the introduction of transaction costs in the problem of minimizing the ruin probability by investment, and parameter uncertainty in the dividend problem. Finally, we survey further extensions and solution techniques.


 

Contributed Talks


Contributed talk: Monday, 13:55, Room 7

Lourdes Belchior Afonso (CMA/FCT - Universidade Nova Lisboa, Portugal)

Heuristic approach to evaluate the fire risk sub-module in Solvency II

Commission Delegated Regulation (EU) 2015/35 of 10 October 2014 contains implementing rules for Solvency II. In order to apply the requirements set out in Article 132 for the non-life catastrophe risk sub-module, the man-made catastrophe risk sub-module and the fire risk sub-module, it is necessary to evaluate the largest sum insured of all buildings that are partly or fully located within a radius of 200 meters. Consequently, operational research techniques are becoming increasingly important as decision support tools for actuarial analysis.

Consider the following problem: given a list of clients within a territory, find the centre of the 200-meter-radius circle that aggregates the largest sum insured, considering all buildings located partly or fully inside that circle. This problem can be formulated as a binary linear programming problem, which raises the usual computational complexity issues.

In this work the authors propose a formulation of this problem and a heuristic approach for solving it.

Keywords: Solvency II; Fire risk sub-module; Binary Linear Programming; Heuristics.

The talk is based on a joint work with Joana Fradinho, Nelson Chibeles-Martins and Isabel Gomes.


Contributed talk: Tuesday, 13:30, Room 4

Jae Youn Ahn (Ewha Womans University, Republic of Korea (South Korea))

On random-effect models for testing the heterogeneity in Bonus-Malus system.

Heterogeneity among claim observations plays a critical role in the ratemaking systems of the insurance industry. It is common practice in insurance to model such heterogeneity using random effects, and the posterior distribution of the random effect can be used for the ratemaking of the policyholder. In particular, in bonus-malus systems the Poisson-Gamma mixture model is a common choice for modelling claim frequency, where the number of claims is modeled with a Poisson distribution and subject heterogeneity is explained by a common random effect from a Gamma distribution. However, the Poisson-Gamma mixture model often shows a lack of fit for overdispersed claim frequency data. In this presentation, we address various issues related to the Poisson-Gamma mixture model, including inaccurate ratemaking and inappropriate hypothesis testing results. In particular, we show that the variance component test based on the Poisson-Gamma mixture model is often severely biased and that the bonus-malus factor is overestimated. We provide an alternative random-effect model that minimizes these problems.

The talk is based on a joint work with Whoojoo Lee.


Contributed talk: Tuesday, 13:30, Room 5

Daniel Alai (University of Kent, United Kingdom)

Lifetime Dependence Modelling using a Generalized Multivariate Pareto Distribution

An important driver of longevity risk is uncertainty in old-age mortality, especially surrounding potential dependence structures. We explore a generalized multivariate Pareto distribution that is closely related to Archimedean survival copulas. It can be used in a variety of applications, from portfolios of standard annuities to joint-life annuity products for couples. Past work has shown that even a little dependence between lives can lead to much higher uncertainty. Therefore, the ability to assess and incorporate the appropriate dependence structure, whilst allowing for extreme observations, significantly improves the pricing and risk management of life-benefit products.

The talk is based on a joint work with Zinoviy Landsman.


Contributed talk: Monday, 13:55, Room 2

Sara Ali Alokley (King Faisal University, Saudi Arabia)

Clustering of Extremes in Financial Returns: A Comparison Between Developed and Emerging Markets

This paper investigates the dependency, or clustering, of extremes in financial returns data by estimating the extremal index value θ∈[0,1]: the smaller the value of θ, the more clustering there is. We apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values, and compare the dependency structure of extremes in developed and emerging markets. We use the financial returns of the stock market indices of the developed markets of the US, UK, France, Germany and Japan and the emerging markets of Brazil, Russia, India, China and Saudi Arabia. The results show that more clustering occurs in the emerging markets for high threshold values, while the developed countries show the same clustering behaviour over a number of threshold values. This study helps in understanding the dependency structure of financial returns data; moreover, understanding the clustering of extremes in these markets helps investors reduce their exposure to extreme financial events such as financial crises.
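
As background on the quantity being estimated (standard extreme value theory, not specific to this study): for a stationary sequence with marginal distribution F and extremal index θ, the sample maximum M_n = max(X_1, ..., X_n) satisfies, for suitable high thresholds u_n,

    \[ P(M_n \le u_n) \;\approx\; F(u_n)^{n\theta}, \]

and 1/θ is approximately the mean size of clusters of threshold exceedances, so θ = 1 corresponds to asymptotically independent extremes and small θ to strong clustering.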

The talk is based on a joint work with Mansour Saleh Albarrak.


Contributed talk: Tuesday, 13:55, Room 3

Jennifer Alonso Garcia (UNSW Sydney, Australia)

Pricing and Hedging Guaranteed Minimum Withdrawal Benefits under a General Lévy Framework using the COS Method

This paper extends the Fourier-cosine (COS) method (Fang and Oosterlee, 2008) to the pricing and hedging of variable annuities embedded with guaranteed minimum withdrawal benefit (GMWB) riders. The COS method facilitates efficient computation of prices and hedge ratios of the GMWB riders when the underlying fund dynamics evolve under the influence of the general class of Lévy processes (Papapantoleon, 2008). Formulae are derived to value the contract at each withdrawal date using a backward recursive dynamic programming algorithm. Numerical comparisons are performed with results presented in Bacinello et al. (2014) and Luo and Shevchenko (2014) to confirm the accuracy of the method. The efficiency of the proposed method is assessed by making comparisons with the approach presented in Bacinello et al. (2014). We find that the COS method presents highly accurate results with notably fast computational times. The valuation framework forms the basis for GMWB hedging. A local risk minimisation approach to hedging inter-withdrawal date risks is developed (Kolkiewicz and Liu, 2012). A variety of risk measures are considered for minimisation in the general Lévy framework. While the second moment and variance have been considered in the existing literature, we show that the value-at-risk may also be of interest as a risk measure to minimise risk in variable annuity portfolios.
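
For context, the core of the COS method of Fang and Oosterlee (2008) is the Fourier-cosine expansion of the unknown transition density on a truncation interval [a,b]:

    \[ f(x) \;\approx\; {\sum_{k=0}^{N-1}}' \, \frac{2}{b-a}\, \mathrm{Re}\!\left\{ \varphi\!\left(\tfrac{k\pi}{b-a}\right) e^{-ik\pi \frac{a}{b-a}} \right\} \cos\!\left(k\pi\,\tfrac{x-a}{b-a}\right), \]

where φ is the characteristic function of the log-return (available in closed form for the Lévy models considered here) and the prime indicates that the first term is halved; pairing these coefficients with analytically known cosine coefficients of the payoff yields the option value, which is what makes the backward recursion over withdrawal dates fast.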

References:

Bacinello, A. R., Millossovich, P., and Montealegre, A. (2014), “The valuation of GMWB variable annuities under alternative fund distributions and policyholder behaviours,” Scandinavian Actuarial Journal, 1-20.
Bauer, D., Kling, A., and Russ, J. (2008), “A Universal Pricing Framework for Guaranteed Minimum Benefits in Variable Annuities,” Astin Bulletin, 38, 621-651.
Condron, C. M. (2008), “Variable Annuities and the New Retirement Realities,” The Geneva Papers on Risk and Insurance - Issues and Practice, 33, 12-32.
Fang, F. and Oosterlee, C. W. (2008), “A Novel Pricing Method for European Options Based on Fourier-Cosine Series Expansions,” SIAM Journal on Scientific Computing, 31, 826-848.
Fung, M. C., Ignatieva, K., and Sherris, M. (2014), “Systematic mortality risk: An analysis of guaranteed lifetime withdrawal benefits in variable annuities,” Insurance: Mathematics and Economics, 58, 103-115.
Hanif, F., Finkelstein, G., Corrigan, J., and Hocking, J. (2007), “Life insurance, global variable annuities,” Morgan Stanley Research in co-operation with Milliman.
IRI (2015), “Second-Quarter 2015 Annuity Sales Report,” https://www.myirionline.org/docs/default-source/news-releases/iri-issues-second-quarter-2015-annuity-sales-report-%28pdf%29.pdf?sfvrsn=0
Kolkiewicz, A. and Liu, Y. (2012), “Semi-Static Hedging for GMWB in Variable Annuities,” North American Actuarial Journal, 16, 112-140.
Ledlie, M. C., Corry, D. P., Finkelstein, G. S., Ritchie, A. J., Su, K., and Wilson, D. C. E. (2008), “Variable Annuities,” British Actuarial Journal, 14, 327-389.
Luo, X. and Shevchenko, P. V. (2014), “Fast numerical method for pricing of variable annuities with guaranteed minimum withdrawal benefit under optimal withdrawal strategy,” International Journal of Financial Engineering, 2, 1-24.
Papapantoleon, A. (2008), “An introduction to Lévy processes with applications in finance,” arXiv preprint arXiv:0804.0482.

The talk is based on a joint work with Oliver Wood and Jonathan Ziveyi.


Contributed talk: Wednesday, 09:00, Room 3

Suhan Altay (TU Wien, Austria)

Yield Curve Scenario Generation with Independent Component Analysis

Recent years have witnessed considerable research and development of interest rate models dealing with various needs of financial and insurance companies. In particular, in the current low-interest-rate environment, generating realistic yield curve scenarios becomes an important issue for assessing the risks associated with assets and liabilities, such as those of life insurance companies and pension funds. In this work, we propose a yield curve scenario generator capable of extracting the state variables (factors) driving the dynamics of default-free and defaultable bonds. Our objective is to extend the methodology proposed by Jamshidian and Zhu (1997) so that the original state variables are transformed into a new set of state variables via Independent Component Analysis (ICA) instead of Principal Component Analysis (PCA). The main advantage of ICA is that it extracts components that are statistically independent, which makes the estimation of parameters and the generation (simulation) of future yield curves more efficient. Another feature of the method is its ability to extract non-Gaussian factors, because the estimation requires non-Gaussian independent components. Under the proposed model, the yield curve is described by a linear mixture of independent components, which are assumed to follow non-Gaussian dynamics such as non-Gaussian Lévy-driven Ornstein-Uhlenbeck processes.
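
A rough sketch of the PCA-versus-ICA factor extraction step in R (illustrative only; it assumes the CRAN package fastICA is installed and uses simulated data, not the model of the talk):

    # Factor extraction from a panel of yields: PCA vs. ICA
    library(fastICA)

    set.seed(1)
    # toy panel: 500 days x 6 maturities, built from two non-Gaussian factors
    f1 <- rt(500, df = 4)
    f2 <- rexp(500) - 1
    loadings <- matrix(runif(12), nrow = 2)
    yields <- cbind(f1, f2) %*% loadings + matrix(rnorm(500 * 6, sd = 0.05), 500, 6)

    pca <- prcomp(yields, center = TRUE, scale. = FALSE)  # orthogonal factors
    ica <- fastICA(yields, n.comp = 2)                    # statistically independent factors

    # ica$S holds the estimated independent components and ica$A the mixing
    # matrix; simulating univariate dynamics for each component and mixing
    # them back gives yield curve scenarios, in the spirit described above.
    head(ica$S)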


Contributed talk: Tuesday, 14:20, Room 1

Ayse Arik (Hacettepe University, Turkey)

Valuation of Defined Benefit Pension Schemes Based on Solvency II

Solvency II regulations encourage insurers to apply stochastic models for the valuation of contracts and to consider the dependency between financial and insurance markets when examining capital adequacy requirements. In this study, we introduce a dependence structure between the short rate and the transition rates in a continuous-time Markovian setting for a hypothetical defined benefit pension plan. Moreover, we calculate the pension obligations using stochastic models in this setting.

Keywords: Defined Benefit Pension Plan, Solvency II, Markov Model, Dependency, Stochastic Models.

References:
Arik, A., Okur, Y.Y., Sahin, S., Ugur, Ö., "Pricing Pension Buy-outs under Stochastic Interest and Mortality Rates," submitted to Scandinavian Actuarial Journal in January 2017, under review.
Buchardt, K., Dependent Interest and Transition Rates in Life Insurance, Insurance: Mathematics and Economics, 2014(55), pp. 167-179.
Haberman, S., Pitacco, E., Actuarial Models for Disability Insurance, Chapman & Hall/CRC, 1999.

The talk is based on a joint work with Yeliz Yolcu Okur.


Contributed talk: Monday, 16:05, Room 5

Séverine Arnold (University of Lausanne - HEC, Switzerland)

Forecasting Mortality: Why Is It Also Risky To Use Generational Life Tables?

Given past mortality improvements, it is now well recognised that future mortality evolution needs to be taken into account by the actuary when a life or death product is priced, or when the appropriate reserves for such products are set up. Therefore, any life insurer or pension fund faces the question of which mortality table it should use. Should it be a period life table or a generational one? A period life table, also known as a static life table, indicates death rates at a specific point in time, while a generational life table presents death rates for a specific generation, that is, for people born in the same year. Nowadays, generational mortality tables are usually recognised as best practice, and thus a growing proportion of institutions are using them. This is the case in Switzerland.

In this paper, we highlight why generational mortality tables may not be the optimal solution and we illustrate it through the current situation of Swiss pension funds. A large proportion of practitioners believe that the use of generational mortality tables is safer and prevents the constitution of a longevity reserve, a reserve aiming to reduce the impact of some unexpected mortality improvements. We show that this is not the case and that, on the contrary, pension funds using generational mortality tables are more sensitive to mortality changes than institutions using period tables.


Contributed talk: Tuesday, 13:55, Room 9

Hirbod Assa (University of Liverpool, United Kingdom)

Insurances on commodities

In this talk, I will investigate optimal insurance contracts designed on commodities. There are three main challenges in designing such contracts. The first is to choose a suitable model for commodity prices. Commodities are goods and are therefore affected by demand, supply, speculation and economic risk variables (e.g., inflation); a good model has to include the impact of all these economic factors. The second challenge is to find correct premium rules for insurance on commodities: since commodities are usually non-traded, this leads us to deal with market prices of risk. The third challenge is to design an optimal insurance contract. The no-moral-hazard assumption, along with the so-called Marginal Indemnification Function (MIF) method, can help to find optimal solutions.

Considering all these challenges, in this talk I will design an optimal insurance contract on commodities and examine how economic factors affect the optimal design. The methodology developed in this talk is general in the sense that it can obtain the optimal solution for any agent that uses a distortion risk measure to assess her risk. I will show that optimal solutions have a multi-layer structure.


Contributed talk: Monday, 10:40, Room 7

Dibu Sasidharan Athanikal (National Institute of Technology Calicut, India)

On the Gerber-Shiu function of a MAP risk model with possible delayed Phase type by-claims

In this paper, we consider an insurance risk model governed by a Markovian arrival claim process and two kinds of phase-type claims: main claims and by-claims. Every main claim may induce a by-claim with probability θ, and the payment of the by-claim is delayed until the next claim arrival. The model is relevant when payments of by-claims are approved only after an investigation. We analyse the Gerber-Shiu discounted penalty function of the model using a suitably defined auxiliary model. Defective renewal equations in matrix form connecting the Gerber-Shiu functions are derived using the transient analysis of the corresponding Markovian fluid flow models adapted to insurance risk processes. Expressions are provided for the triple Laplace transform of the time to ruin, the surplus prior to ruin and the deficit at ruin, and hence the discounted joint and marginal moments of the surplus prior to ruin and the deficit at ruin are obtained. Finally, some special cases are discussed to illustrate the method.
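
For reference, the Gerber-Shiu discounted penalty function analysed here is usually written (generic notation, not taken from the paper) as

    \[ m_\delta(u) \;=\; E\Big[\, e^{-\delta \tau}\, w\big(U(\tau-),\,|U(\tau)|\big)\, \mathbf{1}_{\{\tau<\infty\}} \,\Big|\, U(0)=u \Big], \]

where τ is the ruin time, U(τ-) the surplus prior to ruin, |U(τ)| the deficit at ruin, δ ≥ 0 a discount rate and w a penalty function; the triple Laplace transform and the discounted moments mentioned above correspond to particular choices of δ and w.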

The talk is based on a joint work with Jacob M J.


Contributed talk: Tuesday, 15:40, Room 6

Benjamin Avanzi (UNSW Sydney, Australia)

Optimal dividends under Erlang(2) inter-dividend decision times

In the classical dividend problem, dividend decisions may be made at any time. Under such a framework, the optimal dividend strategies are often of barrier or threshold type, which can lead to very irregular dividend payments over time. In practice, however, companies distribute dividends on a periodic basis. In that spirit, "Erlangisation" techniques have been used to approximate problems with fixed inter-dividend decision times. This method has found particular success in finding solutions related to the expected present value of periodic dividends, and in deriving associated results relating to the probability of ruin.

In contrast, when studying the optimality of such strategies, the existing literature focuses exclusively on the special case of exponential, that is Erlang(1), inter-dividend decision times, mainly because higher-dimensional models are surprisingly difficult to study. While this difficulty continues to exist in higher dimensions, in this paper we provide a full proof of the optimality of periodic barrier strategies when inter-dividend decision times are Erlang(2) distributed. Results are illustrated.

The talk is based on a joint work with Vincent Tu and Bernard Wong.


Contributed talk: Monday, 11:30, Room 7

Florin Avram (Université de Pau, France)

A review of the scale functions method for spectrally negative Lévy processes

First passage problems for spectrally negative Lévy processes, possibly absorbed and/or reflected, have been widely applied in mathematical finance, risk, queueing, and inventory/storage theory.

Historically, such problems were tackled by taking the Laplace transform of the associated Kolmogorov integro-differential equations involving the generator operator. In recent years an alternative approach has appeared, based on the computation of two scale functions W and Z, which solve the two-sided exit problem from an interval. Since many other problems can be reduced to this one, one ends up with a dictionary furnishing formulas "standardized" in terms of the "W, Z alphabet" for a great variety of problems.
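
For readers unfamiliar with the "W, Z alphabet", the defining relations are (standard notation for a spectrally negative Lévy process with Laplace exponent ψ and right inverse Φ):

    \[ \int_0^\infty e^{-\beta x} W_q(x)\,dx = \frac{1}{\psi(\beta)-q} \ \ (\beta>\Phi(q)), \qquad Z_q(x) = 1 + q\int_0^x W_q(y)\,dy, \]

and the two-sided exit problem from [0,a] is solved by

    \[ E_x\big[e^{-q\tau_a^+}\mathbf{1}_{\{\tau_a^+<\tau_0^-\}}\big] = \frac{W_q(x)}{W_q(a)}, \qquad E_x\big[e^{-q\tau_0^-}\mathbf{1}_{\{\tau_0^-<\tau_a^+\}}\big] = Z_q(x) - Z_q(a)\frac{W_q(x)}{W_q(a)}, \]

where τ_a^+ and τ_0^- denote the first passage times above a and below 0.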

We collect here our favorite recipes from this dictionary, including two recent ones which generalize the classic De Finetti and Shreve-Lehoczky-Gaver dividend problems, and whose optimization may provide useful tools for the valuation of financial companies.

One interesting use of the dictionary is for recognizing relationships between apparently unrelated problems. Another is checking whether a formula is already known, which may not be altogether trivial given that at least four related strands of literature need to be checked. Last but not least, it seems that formulas for the classic absorbed and reflected Lévy processes also hold for Lévy processes with refraction, and with Parisian absorption and/or reflection, once the classic W, Z are replaced with appropriate generalizations.

The talk is based on a joint work with Mine Çaglar and Ceren Vardar.


Contributed talk: Tuesday, 11:05, Room 6

Pablo Azcue (Universidad Torcuato di Tella, Argentine Republic)

Optimal time of merger of two insurance companies

We consider a two-dimensional optimal dividend problem in the context of two insurance companies with compound Poisson surplus processes that have the possibility of merging at any time; at that moment, the companies pool their surplus and pay the claims of both companies. Both companies, and the eventual merged company, pay out dividends to their common shareholders. The goal is to find both the optimal dividend payment policy and the merger time that maximize the expected cumulative discounted dividend pay-outs until the ruin time; we also consider the possibility of a merger cost. This is a mixed singular control/optimal stopping problem and was proposed by Gerber and Shiu (2006).

We characterize the optimal value function as the smallest viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation. We also find a verification result to check optimality even in the case where the optimal value function is not differentiable: if a limit of value functions of admissible strategies is a viscosity super-solution of the HJB equation, then it is the optimal value function.

Since the associated HJB equation is a two-dimensional integro-differential equation with obstacle, it is difficult to find numerical solutions. We present a numerical scheme to construct epsilon-optimal strategies and show some examples with non-trivial free boundaries.

The talk is based on a joint work with Nora Muler.


Contributed talk: Tuesday, 13:30, Room 2

Taehan Bae (University of Regina, Canada)

On the mixtures of length-biased Weibull distributions for severity modelling

In this talk, a new class of length-biased Weibull mixtures will be introduced and its basic distributional properties reviewed. As a generalisation of the Erlang mixture distribution, the length-biased Weibull mixture has increased flexibility to fit various shapes of data distributions, including heavy-tailed ones. The Expectation-Maximization algorithm for statistical estimation in the presence of a data collection threshold will also be discussed, with examples of loss data sets from operational risk modelling.
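
For orientation (a generic definition, not necessarily the exact parameterization of the talk): if a positive claim size X has density f and finite mean μ = E[X], its length-biased version has density

    \[ \tilde f(x) \;=\; \frac{x\,f(x)}{\mu}, \qquad x>0, \]

and the class discussed here consists of finite mixtures of such length-biased Weibull components, fitted by the EM algorithm with the component memberships treated as missing data.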


Contributed talk: Tuesday, 14:45, Room 3

Raj Kumari Bahl (University of Edinburgh, United Kingdom)

General Price Bounds for Guaranteed Annuity Options

In this paper, we are concerned with the valuation of Guaranteed Annuity Options (GAOs) under the most generalised modelling framework, where both interest rate and mortality risk are stochastic and correlated. Pricing these types of options in the correlated environment is a challenging task and no closed-form solution is available in the literature. We employ doubly stochastic stopping times to incorporate the randomness about the time of death and use a suitable change of measure to facilitate the valuation of the survival benefit, thereby expressing the payoff in terms of the payoff of a basket call option. We derive general price bounds for GAOs by employing the theory of comonotonicity and the Rogers and Shi (1995) approach. We carry out Monte Carlo simulations to estimate the price of a GAO and illustrate the strength of the bounds for a variety of affine processes governing the evolution of mortality and the interest rate.

Keywords: Guaranteed Annuity Option (GAO), affine process, interest rate risk, mortality risk, change of measure, comonotonicity.

Reference:
L.C.G. Rogers and Z. Shi. The Value of an Asian Option. Journal of Applied Probability, 32(4):1077-1088, 1995.

The talk is based on a joint work with Sotirios Sabanis.


Contributed talk: Tuesday, 16:05, Room 3

Anastasios Bardoutsos (University of Groningen, The Netherlands)

Coherent mortality projections for the Netherlands taking into account mortality delay and smoking

Background: Estimates of future mortality often prove inaccurate because conventional extrapolative methods capture neither the impact of smoking nor mortality delay: the shift in the age-at-death distribution towards older ages.

Objective: We estimate future life expectancy for the Netherlands by simultaneously taking into account the effect of smoking, developments in mortality delay, and the mortality experience of other countries and the opposite sex.

Methods: We used sex-specific lung-cancer and all-cause mortality data for the Netherlands and 10 European countries (40-100+, 1960-2012). We project non-smoking-related mortality up to 2050 by extrapolating the delay and compression parameters of the CoDe 2.0 model, and combine this with projected smoking-attributable mortality fractions to obtain future all-cause mortality.

Results: Increases in the modal age at death (indicating mortality delay) are more linear and more similar for men and women for non-smoking-related mortality than for all-cause mortality. Our extrapolation of non-smoking-related mortality, taking into account mortality delay, resulted in a higher life expectancy in 2050 than a Lee-Carter projection. Adding projected smoking-related mortality resulted in a decline in projected life expectancy, especially for women. Applying the past delay in non-smoking-related mortality among French women resulted in a strong increase in projected life expectancy, especially among men.

Conclusions: Taking into account smoking when performing projections based on mortality delay is essential for the Netherlands. Our coherent mortality projection for the Netherlands taking into account mortality delay and smoking resulted in higher life expectancy in 2050 and more deaths at higher ages.

The talk is based on a joint work with Fanny Janssen and Joop de Beer.


Contributed talk: Wednesday, 11:35, Room 5

Lkhamjav Batsaikhan (Da-Yeh University, Taiwan)

Interest Rate Volatility, Contract Value, and Default Risk in Participating Life Insurance

For life insurers, guaranteed interest rates in participating insurance present well-known tradeoffs, with high interest rates drawing more customers, but possibly posing a threat to solvency. This has brought the actuarial problems of contract valuation and default risk firmly into the domain of public policy. The goal of insurance regulation is to reduce insolvency risk while offering policyholders attractive returns. European standards cap guaranteed interest at 60% of the national interest rate, a requirement that has been applied to countries with divergent economic conditions. A potential drawback of this type of cap is that a unidimensional standard, based only on the magnitude of the interest rate, may be too simplistic. If default probability depends on other factors such as interest rate volatility, then a more developed formula may be needed.

This paper uses the framework in Grosen and Jorgensen (2000) to determine how interest rate volatility affects participating life insurance contract value and default probability. Like Grosen and Jorgensen, we allow contract value and default probability to depend on asset volatility and expected interest rate. However, we develop their simulation model further by incorporating stochastic interest rates and the regulatory interest rate cap as additional variables. We find that volatility is associated with lower contract value and higher default probability. The ramifications for regulatory caps are discussed.

The talk is based on a joint work with Carol Anne Troy and Hsu Wenyen.


Contributed talk: Tuesday, 11:05, Room 1

Jean-François Bégin (Simon Fraser University, Canada)

Deflation Risk and Implications for Life Insurers

Life insurers are exposed to deflation risk: falling prices could lead to insufficient investment returns, and inflation-indexed protections could make insurers vulnerable to deflation. In this spirit, this paper proposes a market-based methodology for measuring deflation risk based on a discrete framework: the latter accounts for the real interest rate, the inflation index level, its conditional variance, and the expected inflation rate. US inflation data are then used to estimate the model and show the importance of deflation risk. Specifically, the distribution of a fictitious life insurer’s future payments is investigated. We find that the proposed inflation model yields higher risk measures than the ones obtained using competing models, stressing the need for dynamic and market-consistent inflation modelling in the life insurance industry.


Contributed talk: Monday, 13:30, Room 3

Fabio Bellini (University of Milano-Bicocca, Italy)

An expectile-based measure of implied volatility

We show how to compute the expectiles of the risk-neutral distribution from the prices of European call and put options, and analyse their empirical properties on a dataset of options on the Italian stock index FTSE MIB. We show that the interexpectile difference has interesting theoretical and empirical properties and is a natural measure of the variability of the risk-neutral distribution. We study its statistical properties and compare it with those of the VIX and SKEW indexes, and with the implicit VaR and CVaR introduced in Barone Adesi (2016).
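
For reference, the τ-expectile of an integrable random variable X is the unique minimizer of an asymmetric quadratic loss,

    \[ e_\tau(X) \;=\; \arg\min_{m\in\mathbb{R}} \; E\big[\tau\,(X-m)_+^2 + (1-\tau)\,(m-X)_+^2\big], \qquad \tau\in(0,1), \]

with e_{1/2}(X) = E[X]; the interexpectile difference mentioned above is a quantity of the form e_τ(X) - e_{1-τ}(X) for some τ > 1/2, computed under the risk-neutral distribution.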

The talk is based on a joint work with Lorenzo Mercuri and Edit Rroji.


Contributed talk: Monday, 16:55, Room 6

Zied Ben Salah (American University in Cairo, Egypt)

Optimal Premiums for a Risk Model with Capital Injections

In this paper, a risk model with capital injections is considered. These injections provide additional capital after each ruin event. It is assumed that after each injection the insurer adjusts the premium rate; this premium adjustment aims to protect the insurer from the impact of future deficits. In order to measure the impact of the premium changes on the future deficit, the expected discounted value of capital injections (EDVCI) is calculated in terms of the different premium rates. The paper provides an optimal premium strategy for a given value of the EDVCI. The general optimization problem is considered under a risk model with a subordinator and Brownian perturbation; the special cases of the Sparre Andersen risk model and the classical model are then considered. Numerical examples are provided for some specific claim size distributions.

The talk is based on a joint work with Khouzeima Moutanabir.


Contributed talk: Monday, 10:40, Room 4

Lluís Bermúdez (University of Barcelona, Spain)

A bivariate regression model for panel count data

A bivariate INAR(1) regression model is adapted to the ratemaking problem of pricing an automobile insurance contract with two types of coverage (the third-party liability guarantee and other guarantees), taking into account both the correlation between claims from different types of coverage and the serial correlation between the observations of the same policyholder over time. A numerical application using an automobile insurance claims database is conducted. The main finding is that the improvement of the BINAR(1) model over the simplest models is very large, implying that both time and cross correlation need to be considered to fit the data at hand.
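
As background on the model class (generic notation, not the exact specification of the paper): a univariate INAR(1) count process evolves as

    \[ N_t \;=\; \alpha \circ N_{t-1} + \varepsilon_t, \qquad \alpha \circ N \;=\; \sum_{i=1}^{N} B_i, \quad B_i \overset{iid}{\sim} \mathrm{Bernoulli}(\alpha), \]

where the binomial thinning operator ∘ replaces the scalar multiplication of a Gaussian AR(1) and the ε_t are i.i.d. count innovations; bivariate versions couple two such recursions, typically through jointly distributed (e.g., bivariate Poisson) innovations, so that the innovations capture the cross-coverage dependence while the thinning captures the serial dependence.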

The talk is based on a joint work with Montserrat Guillen and Dimitris Karlis.


Contributed talk: Tuesday, 14:20, Room 2

Corina Birghila (University of Vienna, Austria)

Insurance premium under ambiguity

The insurance industry relies on statistical models of losses for premium calculation. Especially in the case of extreme events, the high ambiguity concerning the occurrence and the magnitude of losses increases the difficulty of managing and estimating risk. In this talk we propose a method to incorporate model error into the pricing and design of an insurance contract. Our proposed premium compensates, on the one hand, for the aleatoric uncertainty of the loss model and, on the other hand, for the epistemic uncertainty that characterizes model misspecification. While the former ambiguity is handled by the insurer's risk-averse attitude, captured by classical premium principles, the latter is connected to the insurer's uncertainty aversion toward extreme events. To guarantee the robustness of our insurance premium, we consider a worst-case approach over all models within some neighborhood of the reference model.


Contributed talk: Wednesday, 09:25, Room 8

Tim Boonen (University of Amsterdam, The Netherlands)

Price competition in general insurance markets: a dynamic game-theoretic approach

In insurance markets, the number of product-specific policies from different companies has increased significantly. The strong market competition has boosted the demand for competitive premiums. In actuarial science, scant literature exists on how competition actually affects the calculation and the pricing cycles of companies' premiums. In this paper, we model premium dynamics via a differential game and study the insurers' equilibrium premium pricing in a competitive market. We apply an optimal control approach to determine the open-loop Nash equilibrium premium strategies.

Two models are investigated. The market power of each insurance company is characterized by a price-sensitivity parameter, and the business volume is affected by the solvency ratio. Considering the average market premium, the first model studies an exponential relation between premium strategies and volume of business. The other model first characterizes the competition between any selected pair of insurers and then aggregates all the pairwise competitions in the market. A numerical example illustrates the premium dynamics and shows that premium cycles may exist in equilibrium.

The talk is based on a joint work with Athanasios Pantelous and Renchao Wu.


Contributed talk: Monday, 14:45, Room 3

Marcel Bräutigam (UPMC & ESSEC & Labex MME-DII, France)

Predicting Risk with Risk Measures: An Empirical Study

In this study we empirically assess the performance of the historical VaR in predicting the future risk of a financial institution. Our contribution is threefold:

Firstly, we use a stochastic process called the Sample Quantile Process (SQP) as a risk measure (process) to provide a dynamic and generalized framework. The SQP can be seen as a generalization of the rolling-window historical VaR.

Secondly, we introduce a new quantity to measure the quality of risk prediction and thereby assess the validity of VaR capital requirements. This is different from backtesting where, in the case of VaR, one quantifies the VaR violations but does not capture the size of over- or underestimation. We then use this new quantity to explore the behaviour of the SQP as a risk predictor by varying its parameters.

Thirdly, we study the behaviour of the future risk as a function of past volatility. We show that if past volatility is low, the historical computation of the risk measure underestimates the future risk, while in periods of high volatility the risk measure overestimates the risk, confirming that the current way financial institutions measure their risk is highly procyclical. Using a simple GARCH(1,1) model, we see that part of this behaviour can be attributed to volatility clustering and mean reversion. Still, we observe that the overestimation of risk during high-volatility periods is more systematic in the historical data than under the GARCH(1,1) model.

The talk is based on a joint work with Michel Dacorogna and Marie Kratz.


Contributed talk: Tuesday, 11:30, Room 1

Kristian Buchardt (PFA Pension, Denmark)

On cash flows dependent on investment returns in life and pension insurance

In investment and insurance contracts, certain future payments may depend on investment returns. Examples of this could be tax payments or investment costs. We study the problem of determination, hedging and valuation of such cash flows. We consider a simple contract with a guaranteed payment at a future time point, in a set-up with taxes and investment costs that are affine functions of the investment returns, and determine the market value. We decompose the value into the tax part, the investment cost part and the benefit part, and determine the associated hedging strategies. In particular, we identify the (interest rate dependent) expected future tax payments and investment costs. Finally, we consider the special case of affine interest rates, where explicit results can be obtained.

The talk is based on a joint work with Thomas Moller.


Contributed talk: Monday, 16:30, Room 6

Abel Cadenillas (University of Alberta, Canada)

The Optimal Size Band of Government Stabilization Funds

To mitigate the negative consequences of a crisis, an instrument of fiscal policy is the stabilization fund, which is a mechanism to save money in the good times to be used in the bad economic times. We present a model to study the optimal management of a stabilization fund.

The talk is based on a joint work with Ricardo Huaman-Aguilar.


Contributed talk: Tuesday, 16:30, Room 4

Enrique Calderin (University of Melbourne, Australia)

Fat-Tailed Regression with the Double Pareto Lognormal Distribution Applied to Bodily Claims Data

Traditional regression models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the idea of parametric regression, we discuss fat-tailed regression with the double Pareto lognormal distribution. This model is obtained as the mixture of a lognormal distribution with a double Pareto distribution. We apply this parametric family to modelling bodily claim amounts.

The talk is based on a joint work with Kevin Fergusson and Xueyuan Wu.


Contributed talk: Monday, 15:40, Room 8

Arian Cani (University of Lausanne, Switzerland)

On randomized reinsurance contracts

The design of optimal reinsurance treaties is a classical problem in risk theory, and the resulting optimality results are typically based on a deterministic reinsurance rule. In this paper, in the framework of a one-year reinsurance model including regulatory solvency constraints and the associated cost of capital, we propose a randomized stop-loss reinsurance strategy and investigate the effects of randomization on the expected profit after reinsurance. We provide an analytical characterization of the resulting optimal stop-loss retention level. The proposed randomized strategy turns out to outperform the classical deterministic strategy in a number of cases.

The talk is based on a joint work with Hansjoerg Albrecher.


Contributed talk: Monday, 14:45, Room 8

David A. Carozza (Université du Québec à Montréal & XL Catlin, Canada)

Spatial and temporal diversification of climate-driven flood and hurricane risk for re/insurers

Natural catastrophes due to atmospheric perils such as flood, tropical cyclones, severe weather, drought, and wildfire make up a majority of disaster losses, which resulted in over $100 billion in global economic losses in 2015 and made up over 95% of the increase in global losses from 1980 to 2008. The re/insurance industry works to minimize the risk from these catastrophes by building diversified portfolios. Atmospheric perils are heterogeneously distributed throughout the globe and strongly influenced by patterns of interannual natural climate variability, such as the El Nino Southern Oscillation (ENSO). Climate patterns such as these can drive atmospheric perils to be correlated to one another. In a recent report, Aon Benfield found that economic and insured losses were considerably lower during El Nino years (warm phase of ENSO), both globally and in a number of regions, than during La Nina years (cold phase of ENSO). Global losses tied to tropical cyclones, flooding, severe thunderstorm and winter storms were all individually smaller during El Nino years as well. Currently, due to a paucity of research targeted at understanding the confluence between climate patterns, extreme weather events and re/insured assets, the re/insurance industry is largely unable to appropriately minimize the climate-driven risk in geospatially diversified portfolios.

As an initial step toward developing a statistical framework that will allow us to better understand the spatial and temporal distribution of atmospheric perils with the goal of informing risk management, we focus on the global distribution of two perils, flood and tropical cyclones, in the context of the Dartmouth Flood Observatory and IBTRaCS global tropical cyclone databases. To gain insight into these perils, we investigate associations between regions through correlations on frequency and intensity variables, between peril variables and key indices of climate oscillations, and finally between the two perils themselves. We then apply spatial statistics methods to further examine the link between frequency, intensity, and the climate patterns, for each of the perils individually as well as for the interdependence between them. This methodology sets the foundation for catastrophe portfolio risk management that captures the impact of climate oscillations on the frequency, intensity, and spatial distribution of climate-driven catastrophes. The future integration of losses into our framework will allow for the analysis of optimal risk management strategies for re/insurance portfolios.

The talk is based on a joint work with Mathieu Boudreault.


Contributed talk: Monday, 16:05, Room 1

Claudia Ceci (University of Chieti-Pescara, Italy)

Unit-linked life insurance policies: optimal hedging in partially observable market models

In this paper we investigate the hedging problem of a unit-linked life insurance contract via the local risk-minimization approach, when the insurer has restricted information on the market. In particular, we consider an endowment insurance contract, that is, a combination of a term insurance policy and a pure endowment, whose final value depends on the trend of a stock market where the premia paid by the policyholder are invested. To allow for mutual dependence between the financial and the insurance markets, we use the progressive enlargement of filtration approach. We assume that the stock price dynamics depend on an exogenous unobservable stochastic factor that also influences the mortality rate of the policyholder. We characterize the optimal hedging strategy in terms of the integrand in the Galtchouk-Kunita-Watanabe decomposition of the insurance claim with respect to the minimal martingale measure and the available information flow. We provide an explicit formula by means of the predictable projection of the corresponding hedging strategy under full information with respect to the natural filtration of the risky asset price and the minimal martingale measure. Finally, we discuss applications in a Markovian setting via stochastic filtering.

The talk is based on a joint work with Katia Colaneri and Alessandra Cretarola.


Contributed talk: Monday, 10:40, Room 2

Maria de Lourdes Centeno (CEMAPRE / Universidade de Lisboa, Portugal)

Diagonal inflated models and ratemaking of dependent risks

Considering that dependencies among risks are of major importance for some classes of business, we revisit some regression models that can be used for ratemaking.

We start by considering several regression models to deal with dependent claim numbers, namely bivariate Poisson regression models, the generalised bivariate negative binomial and the bivariate Poisson-Laguerre polynomial. As the heterogeneity of some data is not explained in a satisfactory way by these models, we enlarge the family of distributions by allowing diagonal effects. We compare this approach with copula-based models, using a data set provided by a major Portuguese insurance company.

The talk is based on a joint work with João Manuel Andrade e Silva.


Contributed talk: Wednesday, 09:25, Room 1

Francesca Centrone (Università degli Studi del Piemonte Orientale, Italy)

Capital Allocation à la Aumann and Shapley for non differentiable risk measures

To face future uncertainty about their net worth, firms, insurers and, in general, portfolio managers are often required to hold so-called risk capital, that is, an amount of riskless assets held in order to hedge themselves. This raises the issue of how to share this immobilized capital in an a priori fair way among the different lines or business units (see, for example, [4], [6]).

As risk capital is commonly accepted in the literature to be modelled through risk measures ([1], [4], [5], [9], [10]), capital allocation problems in risk management and the theory of risk measures are naturally linked.

Starting from Deprez and Gerber's ([7]) work on convex risk premiums, Tsanakas ([11]) defines a Capital Allocation Rule (C.A.R.) for Gateaux-differentiable risk measures inspired by the game-theoretic concept of the Aumann-Shapley value ([2]), and studies its properties for some widely used classes of convex risk measures, also providing explicit formulas. His analysis, however, leaves substantially open the case of general non-Gateaux-differentiable risk measures (although he treats distortion-exponential risk measures, it is easy to find other meaningful examples of convex and quasiconvex non-Gateaux-differentiable risk measures), as well as the study of quasiconvex risk measures, whose importance has been well recognized quite recently in the literature ([3], [8]).

The purpose of this work is to fill these gaps, though not in full generality. To this aim, we propose a family of C.A.R.s based on the dual representation theorems for risk measures, study their properties and show that they reduce to Tsanakas' rule when Gateaux differentiability is assumed. We also discuss the suitability of quasiconvex risk measures for capital allocation purposes.
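
For concreteness, the Aumann-Shapley style rule of [11] for a Gateaux-differentiable risk measure ρ and a portfolio X = Σ_i X_i allocates to unit i the amount

    \[ K_i \;=\; \int_0^1 \rho'(\gamma X;\,X_i)\,d\gamma, \qquad \rho'(Y;Z) := \lim_{\varepsilon\to 0}\frac{\rho(Y+\varepsilon Z)-\rho(Y)}{\varepsilon}, \]

so that Σ_i K_i = ρ(X) - ρ(0); for positively homogeneous ρ this reduces to the Euler allocation K_i = ρ'(X; X_i). The extension proposed here replaces the Gateaux derivative by suitable elements of the dual representation when ρ is not differentiable.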

References
[1] Artzner, P., Delbaen, F., Eber, J.M., Heath, D. (1999), Coherent measures of risk, Mathematical Finance 9/3, 203-228.
[2] Aumann, R.J., Shapley, L.S. (1974), Values of Non-Atomic Games: Princeton University Press, Princeton.
[3] Cerreia-Vioglio, S., Maccheroni, F., Marinacci M., Montrucchio L. (2011), Risk measures: rationality and diversification, Mathematical Finance 21/4, 743-774.
[4] Delbaen, F. (2000), Coherent Risk Measures: Lecture notes, Scuola Normale Superiore, Pisa, Italy.
[5] Delbaen, F. (2002), Coherent Risk Measures on General Probability Spaces, in: Advances in Finance and Stochastics, K. Sandmann and P.J. Schonbucher eds. (2002), Springer-Verlag, 1-37.
[6] Denault, M. (2001), Coherent allocation of risk capital, Journal of Risk 4/1, 1-34.
[7] Deprez, O., Gerber, H.U. (1985), On convex principles of premium calculation, Insurance: Mathematics and Economics 4, 179-189.
[8] El Karoui, N., Ravanelli, C. (2009), Cash sub-additive risk measures and interest rate ambiguity, Mathematical Finance 19, 561-590.
[9] Follmer H., Schied A. (2002a), Convex measures of risk and trading constraints, Finance & Stochastics 6, 429-447.
[10] Frittelli M., Rosazza Gianin E. (2002), Putting order in risk measures, Journal of Banking & Finance 26, 1473-1486.
[11] Tsanakas, A. (2009), To split or not to split: Capital allocation with convex risk measures, Insurance: Mathematics and Economics 44, 268-277.

The talk is based on a joint work with Emanuela Rosazza Gianin.


Contributed talk: Tuesday, 10:40, Room 3

Linus Fang-Shu Chan (Soochow University, Taiwan)

On the Effective Durations and Effective Convexities of Participating Life Insurance Reserves: The Simultaneous Impacts of Surrender Option and Bonus Option

Pricing participating policies and evaluating their reserves have been reshaped and encouraged by financial valuation techniques, and both issues play important roles in the recent insurance literature. However, an important issue, the interest rate risk of participating policies' reserves, remains obscure in the literature. In this project, we use the effective duration and effective convexity to measure the interest rate risk of a participating whole life insurance policy. We apply the CIR term structure of interest rates and a modified arctangent function to incorporate both interest-rate-sensitive surrender behavior and the impact of the participating feature on surrenders. Based on this model setting, we examine how the participating scheme affects the effective durations and effective convexities when both surrenders and participation are driven by the interest rate spread. We also re-examine whether the term structure of effective duration in Chan and Tsai (2010) remains robust, or needs further interpretation, when applied to a participating insurance policy.
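
For reference, the effective duration and effective convexity used here are the standard finite-difference measures (generic notation, not taken from the paper): if P(y) denotes the reserve recomputed under a parallel shift of the interest rate scenario to level y, then

    \[ D_{\mathrm{eff}} = \frac{P(y_0-\Delta y)-P(y_0+\Delta y)}{2\,P(y_0)\,\Delta y}, \qquad C_{\mathrm{eff}} = \frac{P(y_0-\Delta y)+P(y_0+\Delta y)-2P(y_0)}{P(y_0)\,(\Delta y)^2}, \]

where the recomputation lets the cash flows themselves (here, the interest-sensitive surrenders and bonuses) change with the shift, which is what distinguishes effective from modified duration.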


Contributed talk: Wednesday, 09:50, Room 8

Ke Xin Chen (The Chinese University of Hong Kong, Hong Kong S.A.R. (China))

Optimal Investment of Insurers under Regime-Switching Cointegration

While cointegration models with constant parameters are shown to generate statistical arbitrage, the cointegration feature may change and even disappear subject to regime shifts. We consider the optimal investment problem of an insurer in a continuous-time Markov regime-switching cointegrated financial market. The insurer is also subject to a random insurance liability. We allow the cointegration among risky assets to depend on the market mode, and the insurance claims are governed by a Markov regime-switching compound Poisson process. Within the time-consistent mean-variance framework, the optimal investment strategy is derived by means of a game-theoretic approach. Empirical and numerical examples are given.

The talk is based on a joint work with Hoi Yin Wong.


Contributed talk: Tuesday, 15:40, Room 8

Yunzhou Chen (University of Liverpool, United Kingdom)

An Optimal Control Approach to Optimal Reciprocal Reinsurance Policy

In this paper, we consider the problem of reciprocal optimal reinsurance design, where the risk is measured by a distortion risk measure and the premium is given by a distortion risk premium. We assume that while one party (e.g., the insurance company) optimizes its risk, the other party (e.g., the reinsurance company) controls its total risk. With this in mind, we introduce three types of reciprocal reinsurance problems: Ceding-Optimal/Reinsurance-Control, Reinsurance-Optimal/Ceding-Control and Ceding-Reinsurance/Optimal-Control. We characterize the optimal solutions to these three problems by using the Marginal Indemnification Function method and Lagrangian duality theory. We then move to a more realistic situation, where we assume that the policies traded are either stop-loss, stop-loss after quota-share, or quota-share after stop-loss, and show how one can find the optimal retention levels of each policy. We will also discuss some particularly interesting cases at the end.

The talk is based on a joint work with Hirbod Assa.


Contributed talk: Wednesday, 09:25, Room 5

Ze Chen (Tsinghua University & KU Leuven, Belgium)

Differentiating liability valuations: relative conservativeness, gain-loss asymmetry and loss aversion

In this paper, we investigate how liability valuations can be differentiated in terms of relative conservativeness of hedging, gain-loss asymmetry and loss aversion, including the market-consistent valuation and fair valuation of Dhaene et al. (2017). The risk margin, asymmetry and loss aversion of a valuation reveal the extent of its conservativeness. Relative conservativeness between valuations is introduced and studied under the principle of valuating the remaining risk of another valuation method; this principle is also applied to categorize valuations by their degree of conservativeness. The symmetry of a valuation, which measures the difference in how it prices potential gains versus losses, is investigated. In addition, the loss-averse hedger and the valuation based on it, in which decision makers are distinctly more sensitive to losses than to gains, are investigated within the framework of convex hedge-based valuation. Asymmetric mean-variance valuations with loss aversion are also investigated.

The talk is based on a joint work with Jan Dhaene and Bingzheng Chen.


Contributed talk: Monday, 16:30, Room 5

Chunli Cheng (Hamburg University, Germany)

Surrender Risk and Mortality: The Impact of a Population-wide Health Shock on Life Insurance

We study the fair valuation of participating life insurance policies with surrender risk linked to mortality via a population-wide health shock, for example the smog currently prevalent in Asia. A population-wide health shock results in higher mortality rates in the risk pool, which modify policyholders' surrender behavior. In this paper, we propose two after-health-shock surrender scenarios: first, due to a high liquidity demand for financing medical care, policyholders become more impatient in surrendering their policies; second, on the contrary, policyholders exert more effort in making their surrender decisions. An early default mechanism is imposed by a regulator to protect policyholders. We discuss the impacts of a population-wide health shock and of the regulator's solvency intervention on the fair valuation of the contract, both with and without taking into account the link between surrender risk and mortality.


Contributed talk: Monday, 10:40, Room 1

Patrick Cheridito (ETH Zürich, Switzerland)

Variable annuities with high water mark withdrawal benefit

In this paper, we develop a continuous-time model for variable annuities allowing for periodic withdrawals proportional to the high water mark of the underlying account value, as well as early surrender of the policy. We derive an HJB variational inequality characterizing the minimal superhedging price of such a contract and the worst-case policyholder behavior from an issuer's perspective. Based on these results, we construct a dynamic trading strategy which superreplicates the contract. In addition, we show how early surrender has to be penalized to disincentivize a worst-case policyholder from using this option. To treat the problem numerically, we develop a semi-Lagrangian scheme based on a discretization of the underlying noise process.

The talk is based on a joint work with Peiqi Wang.


Contributed talk: Tuesday, 16:05, Room 8

Wing Fung Chong (The University of Hong Kong; King's College London, Hong Kong S.A.R. (China))

Optimal Reinsurance under Law-invariant Coherent Risk Measures

In this talk, we study a general optimal reinsurance design problem, which minimizes a general law-invariant coherent risk measure of the total retained loss of the insurer, under a general law-invariant coherent reinsurance premium principle, together with a premium budget constraint. Solving this general problem involves three crucial steps. Firstly, the objective function and the premium principle are represented in terms of distortion functions. Secondly, the mini-max theorem for infinite dimensional spaces is applied to interchange the infimum on the space of indemnities and the supremum on the space of probability measures. Thirdly, a recent result based on a Neyman-Pearson type argument in Lo (2017) is applied to solve the problem. Illustrative examples are given at the end of the talk.

Key words: Optimal reinsurance design; General law-invariant coherent risk measure; Budget constraint; Distortion functions; Sion's Mini-max Theorem; Neyman-Pearson.

The talk is based on a joint work with Ka Chun Cheung and Ambrose Lo.


Contributed talk: Monday, 14:20, Room 4

Hwei-Lin Chuang (National Tsing Hua University, Taiwan)

The Impact of Financial Crisis on Skilled/Unskilled Wage Gap: Evidence from the Insurance Workers in Taiwan

The financial crisis has affected not only the financial market but also the labor market. This study aims to examine the impact of the 2008 financial crisis on the skilled/unskilled wage gap, with an emphasis on insurance workers in the finance industry. Using data from the Manpower Utilization Survey in Taiwan, this study applies educational attainment as the classification criterion to distinguish skilled and unskilled labor. The econometric specification is based on Human Capital Theory, and the estimation results of the wage equation are consistent with its expectations. In particular, the schooling variable has a positive influence on wages, and its impact on wages for skilled labor is greater than for unskilled labor. The results from the Chow test suggest that the wage structures for total labor, skilled labor and unskilled labor have all changed after the financial crisis. This study further conducts the Blinder-Oaxaca wage decomposition to examine the impact of the financial crisis on the skilled/unskilled wage gap. The sex and work experience variables have narrowed the skilled/unskilled wage gap, and these effects have increased after the financial crisis. The schooling variable narrowed the skilled/unskilled wage gap before the crisis, while it widens the gap after the crisis. Paying special attention to insurance workers in the finance industry, we find that insurance workers tend to widen the wage gap in the finance industry, and this effect has increased after the financial crisis, from 15.18% to 27.31%.

The talk is based on a joint work with Cheng-Kai Huang.


Contributed talk: Monday, 15:40, Room 1

Katia Colaneri (University of Perugia, Italy)

Indifference price of unit linked life insurance contracts under partial information

In this paper we investigate the pricing problem of a unit-linked life insurance contract when the insurer has limited information on the mortality intensity of the policyholders. In this type of contract, the final value depends on the death time of the insured as well as on the trend of a portfolio traded in the financial market.

We assume that the financial market consists of a riskless asset, a risky asset and a longevity bond written on a mortality index defined on the same age cohort of the population as the policyholder. We propose a modeling framework that allows for mutual dependence between the financial and the insurance markets. We consider dynamics of the risky asset and the mortality index governed by diffusion processes whose coefficients depend on the same observable stochastic process, representing economic and environmental factors. Furthermore, we assume that at any time the insurance company knows whether the policyholder is still alive, but cannot directly observe her mortality intensity, which depends on an exogenous latent factor.

The mortality intensities of the population and of the policyholder do not coincide in general. This translates into the presence of a basis risk that, even in the context of complete information, does not permit perfect replication of the contract via self-financing strategies. Therefore, as an alternative to arbitrage pricing, we use expected utility maximization under exponential preferences as the valuation approach, which leads to the so-called indifference price. Under partial information this methodology requires filtering techniques that reduce the original control problem to an equivalent problem under complete information. We analyze the problem using the Bellman optimality principle and characterize the value function, as well as the indifference price, in terms of the solution to a backward stochastic differential equation.

The talk is based on a joint work with Claudia Ceci and Alessandra Cretarola.


Contributed talk: Monday, 13:30, Room 1

Massimo Costabile (University of Calabria, Italy)

Computing Risk Measures of Life Insurance Policies through Lattice-Based Models

The construction of an efficient risk management system in insurance relies upon the correct measurement of the risks affecting the value of the firm's business. Among the different sources of risk, market risk concerns the possible decline of an investment's value due to market factors such as stock prices, interest rates, etc. Several risk measures have been proposed over the years to properly assess the market risk of financial intermediaries. Among them, the Value at Risk (VaR), defined in its most general form as the loss level that will not be exceeded with a certain confidence level during a certain period of time, plays a prominent role. Nevertheless, VaR has important limitations. For example, it gives no information about the size of the maximum possible loss. Moreover, VaR is not a coherent measure of risk and, in particular, it does not satisfy the property of subadditivity. A possible alternative which overcomes these drawbacks is the Expected Shortfall, defined as the average loss in excess of a given VaR level.
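
In standard notation (shown here only for orientation, not as the authors' specific definitions), for a loss L and confidence level \(\alpha\),

\[
\mathrm{VaR}_\alpha(L) = \inf\{\ell \in \mathbb{R} : P(L \le \ell) \ge \alpha\},
\qquad
\mathrm{ES}_\alpha(L) = E\big[L \mid L \ge \mathrm{VaR}_\alpha(L)\big],
\]

the latter equality for the Expected Shortfall holding for continuous loss distributions.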

Evaluating risk measures of modern life insurance policies, with benefits depending on financial state variables, is complicated by the fact that two different probability measures are usually applied: the physical (real-world) probability measure along the risk horizon, and a risk-neutral probability measure along the remaining time interval until the policy maturity. This change of measure implies that a straightforward application of the Monte Carlo method is no longer possible, and one has to resort to time-consuming nested simulations or to the least-squares Monte Carlo approach.

We will show that lattice-based models can be applied to determine risk measures of life insurance policies. In particular, the main advantage of the proposed approach lies in the fact that it is possible to construct a single lattice describing the evolution of the key financial variables both along the risk horizon, where the physical probability measure is used, and along the remaining time interval, where the risk-neutral probability measure is needed. This allows us to construct a very efficient evaluation scheme that computes highly accurate estimates of the considered risk measures.


Contributed talk: Monday, 11:05, Room 5

Jonas Crevecoeur (KU Leuven, Belgium)

Modeling reporting delay dynamics for claims reserving

Holding sufficient capital is essential for an insurance company to ensure its solvency. Hence, predicting in a stable and accurate way the amount of capital needed to fulfill liabilities with respect to past exposure years is an important actuarial task. Recent research focuses on the use of detailed information regarding the development of individual claims (Antonio and Plat (2014); Avanzi et al. (2016); Badescu et al. (2016); Verbelen et al. (2017); Wüthrich (2016)). This is called the micro-level or granular reserving approach. Reserve calculations are required both for claims that are not yet settled and for claims that have already occurred in past exposure years but have not yet been reported to the insurer. We focus on estimating the count of these latter claims by modeling the time between occurrence and reporting of claims, the so-called reporting delay. Using data at the daily level, we propose a micro-level model for the heterogeneity in reporting delay caused by calendar-day effects, such as the weekday pattern and holidays, thereby extending the work of Verrall and Wüthrich (2016), who recently presented a first detailed study of reporting delay features. These methods are illustrated by case studies using multiple real-life insurance datasets.
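
A minimal sketch of how calendar-day effects in reported-claim counts could be captured with a Poisson regression (synthetic data and hypothetical covariates; an illustration only, not the authors' model):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = pd.date_range("2016-01-01", periods=730, freq="D")
weekend = (days.weekday >= 5).astype(float)              # Saturday/Sunday indicator
holiday = (rng.random(len(days)) < 0.03).astype(float)   # placeholder holiday indicator
exposure = rng.uniform(80.0, 120.0, len(days))           # e.g. occurred-but-unreported claims

# Simulate daily reported counts with a weekend and holiday dip in reporting.
rate = exposure * np.exp(0.5 - 0.8 * weekend - 1.0 * holiday)
counts = rng.poisson(rate)

X = sm.add_constant(pd.DataFrame({"weekend": weekend, "holiday": holiday}))
fit = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(exposure)).fit()
print(fit.params)   # estimated calendar-day effects on the reporting intensity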

The talk is based on a joint work with Katrien Antonio and Roel Verbelen.


Contributed talk: Monday, 16:30, Room 7

Camilla Damian (WU Vienna University of Economics and Business, Austria)

EM Algorithm for Markov Chains Observed via Gaussian Noise and Point Process Information: Theory and Numerical Experiments

In this paper we obtain an Expectation-Maximization (EM) algorithm for the setting where the state variable follows a finite-state Markov chain observed via diffusive and point process information. This has practical significance, since in recent years there has been increasing interest in such model settings in the context of corporate and sovereign credit risk modeling. Another possible application is in insurance, where the point process models large losses. The E-step of the EM algorithm amounts to the derivation of filters for several quantities of interest, such as occupation times and level integrals. In this context, we obtain finite-dimensional filters in both exact and unnormalized form. For practical implementations, it is useful to derive a version of the resulting filters that depends continuously on the observations (so-called robust filters). In this sense, we compute a discretized, robust version of the unnormalized filters. Additionally, we propose goodness-of-fit tests and we run an extensive simulation analysis in order to test the speed and accuracy of the algorithm. This analysis suggests that the method yields satisfactory results in terms of convergence, and that robust filtering improves the stability of the algorithm even on a finer grid. Finally, we provide a real data application in the context of corporate credit risk modeling.

The talk is based on a joint work with Zehra Eksi and Rüdiger Frey.


Contributed talk: Tuesday, 15:40, Room 9

Angelos Dassios (London School of Economics, United Kingdom)

Parisian options, truncated Lévy measures and insurance mathematics

Parisian options are path-dependent options whose payoffs depend not only on the final value of the underlying asset, but also on the trajectory of the underlying above or below a predetermined barrier L. For example, the owner of a Parisian down-and-out call loses the option when the underlying asset price S reaches the level L and remains constantly below this level for a time interval longer than D, while for a Parisian down-and-in call, the same event gives the owner the right to exercise the option. Parisian options are a kind of barrier option. However, they have the advantage of not being as easily manipulated by an influential agent as a simple barrier option, and thus protect against easy arbitrage.

This talk is a survey of older and more recent results. We will present two approaches for explicit calculations. One is a recursive formula for the density of one- and two-sided Parisian stopping times. These formulae do not require any numerical inversion of Laplace transforms. This approach will also help us derive some asymptotic results.

Another versatile approach is the exact simulation of the stopping time viewed as a random variable. We will look at a more complicated problem and propose an algorithm to efficiently simulate the drawdown stopping time and the associated maximum at this time. The method is straightforward and fast to implement, and avoids simulating sample paths, thus eliminating discretisation bias. We show how the simulation algorithm is useful for pricing more complicated derivatives such as multiple drawdown options.

The two approaches will allow us to highlight a connection to subordinators with truncated Lévy measures. We will then investigate possible applications of these results to insurance mathematics.


Contributed talk: Tuesday, 15:40, Room 5

Piet de Jong (Macquarie University, Australia)

A more meaningful parameterization of the Lee-Carter model

A new parameterization of the Lee-Carter model is introduced. The new parameterization has two advantages. First, the LC parameters are normalized such that they have a direct and intuitive interpretation and are directly comparable across populations. Second, the model is reframed in terms of the "needed-exposure" (NE). The NE is the number required in order to get one expected death and is closely related to the "needed-to-treat" measure used to communicate risks and benefits of medical treatments. In the new parameterization, time parameters are readily and directly interpretable as an overall across-age NE. Age parameters are interpretable as age-specific elasticities: percentage changes in the NE at a particular age in response to a percentage change in the overall NE. A similar approach can be used to confer interpretability on parameters of other mortality models.
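
To fix ideas (this is only one possible reading based on the classical Lee-Carter form and a particular normalization, not the authors' exact parameterization), with \(\log m_{x,t} = a_x + b_x k_t\) the needed-exposure at age x in year t is

\[
\mathrm{NE}_{x,t} = \frac{1}{m_{x,t}} = e^{-a_x - b_x k_t},
\]

so that if an overall needed-exposure index is defined via \(\log \mathrm{NE}_t = -k_t\), then \(\partial \log \mathrm{NE}_{x,t} / \partial \log \mathrm{NE}_t = b_x\), i.e. the age parameter acts as an elasticity.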

The talk is based on a joint work with Leonie Tickle.


Contributed talk: Tuesday, 16:30, Room 3

Ana Debón (Universitat Politècnica de València, Spain)

Spatial Modeling of Old-Age Mortality Risk in the US

Modeling longevity risk in the USA has gained much attention in recent years in the areas of insurance risk management and the securitization of mortality-based products. Many studies of US mortality rates over time are based on macro-level data and do not consider spatial dependence. Some of these studies established the relationship between mortality rates, economic variables, and climate change for the USA, focusing on state-by-year or region-by-year fixed effects models in various insurance applications. There is also evidence in the literature that climate change impacts mortality rates. However, these studies have not considered the spatial dependence of significant clusters through a spatial panel data model.

We propose a spatial panel econometric model for the mortality rates of the age group 65 and older in the continental USA. The objective of this project is to explain the behavior of the mortality rates for the subgroups 65-75, 75-85, and 85+, by gender and by state, for the period 1970-2013, depending on demographic and economic variables. A spatial lag model with fixed effects is proposed. The methodology takes into account the neighboring relationships between the states. The performance of the model is assessed using goodness-of-fit measures, the residual variance, and the coefficient of determination.
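
A generic spatial lag panel specification of this kind (shown schematically; the exact covariates and effects are those described above) reads

\[
y_{it} = \rho \sum_{j=1}^{N} w_{ij}\, y_{jt} + x_{it}^{\top}\beta + \mu_i + \varepsilon_{it},
\]

where \(w_{ij}\) are the entries of a (row-standardized) spatial weight matrix encoding neighboring states, \(\mu_i\) are state fixed effects, and \(\rho\) measures the strength of the spatial dependence.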

Our analysis uses a rich data set that includes: 1) death and population records from the USA National Center for Health Statistics provided with a signed Data Use Agreement, 2) Gini coefficient obtained from the US Internal Revenue Service, 3) GDP provided by the US Bureau of Labor, 4) Population projections from the US Census, and 5) property damages from the natural disasters obtained from the Hazard & Vulnerability Research Institute.

We identified spatial-temporal mortality clusters, which in turn can be taken into account in the modeling of longevity risk and in insurance risk management practice.

The talk is based on a joint work with Tatjana Miljkovic and Patricia Carracedo.


Contributed talk: Wednesday, 10:15, Room 7

CANCELLED: Silvia Dedu (Bucharest University of Economic Studies, Romania)

General entropy measures based approach to risk assessment in actuarial models

Risk assessment is a topic of increasing interest, since it allows choosing the optimal strategy in many real-world problems. Entropy is a fundamental concept used to evaluate the degree of uncertainty associated with random variables or phenomena. It can be used as a measure of variability for continuous random variables or as a measure of diversity over the possible values of discrete random variables. Due to their widespread applicability, the derivation of closed-form expressions for various entropy measures corresponding to univariate and multivariate distributions is a subject of great importance. The aim of this paper is to develop a general information-measure-based approach to risk assessment for actuarial models involving truncated and censored random variables. Using general information measures, such as the Tsallis or Kaniadakis entropies, the effect of different partial insurance schemes upon the entropy of losses is investigated. Analytic expressions for the per-payment and per-loss entropies, and relationships between them, are obtained. Also, the entropies of losses corresponding to proportional hazards and proportional reversed hazards models are derived. The applications presented show that an information-theoretic approach using general entropy measures for loss models allows a higher degree of flexibility. Computational results are provided.
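
For orientation, the (continuous) Tsallis entropy of order q of a loss density f, one of the general measures mentioned above, can be written as

\[
S_q(f) = \frac{1}{q-1}\left(1 - \int f(x)^q\, \mathrm{d}x\right), \qquad q \neq 1,
\]

which recovers the Shannon differential entropy \(-\int f(x)\log f(x)\,\mathrm{d}x\) in the limit \(q \to 1\); the truncated and censored versions used in the paper follow the authors' definitions.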

The talk is based on a joint work with Vasile Preda.


Contributed talk: Monday, 11:30, Room 6

Łukasz Delong (Warsaw School of Economics, Poland)

Optimal investment for insurance company with exponential utility and wealth-dependent risk aversion coefficient

We investigate an exponential utility maximization problem for an insurer who faces a stream of claims. The insurer's risk aversion coefficient changes over time and depends on the insurer's current net asset value (the excess of the assets over the liabilities). The exponential utility maximization problem with a time-varying risk aversion coefficient is time-inconsistent. We use the notion of an equilibrium strategy and derive the HJB equation for our time-inconsistent optimization problem. We assume that the insurer's risk aversion coefficient consists of a constant risk aversion perturbed by adding a small amount of wealth-dependent risk aversion. The value function, which solves the HJB equation, is expanded in the parameter controlling the degree of wealth-dependent risk aversion. We find the first-order approximation to the optimal equilibrium investment strategy and the first-order approximation to the solution to the HJB equation. We use BSDEs and PDEs to describe the value function and the equilibrium strategy. Numerical examples will be presented.


Contributed talk: Tuesday, 16:05, Room 4

Sander Devriendt (KU Leuven, Belgium)

Sparse modeling of risk factors in insurance analytics

Insurance companies use predictive models for a variety of analytic tasks, including pricing, marketing campaigns, claims handling, fraud detection and reserving. Typically, these predictive models use a selection of continuous, ordinal, nominal and spatial risk factors to differentiate risks. Such models should not only be competitive, but also interpretable by stakeholders (including the policyholder and the regulator) and easy to implement and maintain in a production environment. That is why the current actuarial literature focuses on generalized linear models where risk cells are constructed by binning risk factors up front, using ad hoc techniques or professional expertise. In the statistical literature, penalized regression is often used to encourage the selection and fusion of predictors in predictive modeling. Most penalization strategies work for data where predictors are of the same type, such as the LASSO for continuous variables and the Fused LASSO for ordered variables. We design an estimation strategy for generalized linear models which includes variable selection and the binning of risk factors through L1-type penalties. We consider the joint presence of different types of covariates and a specific penalty for each type of predictor. Using the theory of proximal operators, our estimation procedure is computationally efficient, since it splits the overall optimization problem into easier-to-solve sub-problems per predictor and its associated penalty. As such, we are able to simultaneously select, estimate and group, in a statistically sound way, any combination of continuous, ordinal, nominal and spatial risk factors. We illustrate the approach with simulation studies, an analysis of Munich rent data, and a case study on motor insurance pricing.
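
Schematically (and only as an illustration of L1-type penalties, not the paper's exact predictor-specific formulation), such a penalized GLM estimate takes the form

\[
\hat\beta \;=\; \arg\min_{\beta}\; -\ell(\beta)
\;+\; \lambda_1 \sum_{j} |\beta_j|
\;+\; \lambda_2 \sum_{j} |\beta_{j+1} - \beta_j|,
\]

where \(\ell\) is the GLM log-likelihood, the first (LASSO) penalty encourages selection, and the second (Fused LASSO) penalty encourages fusion of adjacent levels of ordered factors into bins.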

Keywords: generalized linear model, generalized additive model, smoothing splines, penalization, binning, variable selection, sparse modeling, insurance analytics, LASSO, Fused LASSO.

The talk is based on a joint work with Katrien Antonio, Edward W. Frees and Roel Verbelen.


Contributed talk: Monday, 16:30, Room 4

Dimitrina S. Dimitrova (Cass Business School, City, University of London, United Kingdom)

On the double boundary non-crossing probability for a class of compound risk processes with applications

We present a numerically efficient method for computing the probability that a non-decreasing, pure jump stochastic risk process, in the form of a compound point process with independent increments, will not exit the strip between two non-decreasing, possibly discontinuous, time-dependent boundaries within a finite time interval. The class of such risk processes is relatively broad, including the compound Poisson and compound negative binomial processes as special cases, the latter playing an important role in modelling aggregate insurance claims. The method is illustrated on both single and double boundary non-exit problems, among which are computing the non-ruin probability in the insurance and dual risk models, and computing the double boundary non-exit probability for Brownian motion. The latter problem has attracted a lot of attention in the literature since it has numerous applications in many fields; e.g., it naturally arises in the context of option pricing in insurance and finance, which is also briefly addressed.

The talk is based on a joint work with Zvetan G. Ignatov, Vladimir K. Kaishev and Senren Tan.


Contributed talk: Tuesday, 16:05, Room 5

Erengul Dodd (University of Southampton, United Kingdom)

Stochastic Modelling and Projection of UK Mortality Improvements Allowing for Overdispersion

Recent mortality improvements have led to higher life expectancies in most countries. Modelling and projecting mortality have therefore become important due to their social implications, for example for pensions and healthcare. We propose a comprehensive and coherent approach to mortality projection, based on a negative binomial model, to overcome one of the several limitations of existing approaches: over-fitted and insufficiently robust mortality projections resulting from an error model (e.g. Poisson) that provides a poor fit to the data. We also incorporate smoothness in parameter series which vary over age, cohort, and time, using generalised additive models (GAMs). GAMs, being a flexible class of semi-parametric statistical models, allow us to differentially smooth components, such as cohorts, more aggressively in areas of sparse data for the component concerned. While GAMs can provide a reasonable fit for the ages where there is adequate data, estimation and extrapolation of mortality rates using a GAM at higher ages is problematic due to the high variation in crude rates. At these ages, parametric models can give a more robust fit, enabling a borrowing of strength across age groups. Our projection methodology is based on a smooth transition between a GAM at lower ages and a fully parametric model at higher ages.
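
To illustrate the overdispersion point (in a common NB2-type parameterization, not necessarily the paper's exact specification), for death counts \(D_{x,t}\) with exposure \(e_{x,t}\) and mortality rate \(\mu_{x,t}\),

\[
E(D_{x,t}) = e_{x,t}\,\mu_{x,t},
\qquad
\operatorname{Var}(D_{x,t}) = e_{x,t}\,\mu_{x,t}\Big(1 + \tfrac{e_{x,t}\,\mu_{x,t}}{\phi}\Big) \;>\; E(D_{x,t}),
\]

so the negative binomial model accommodates variance in excess of the mean, whereas the Poisson model forces the two to be equal.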

The talk is based on a joint work with Jon J. Forster, Jakub Bijak and Peter W. F. Smith.


Contributed talk: Tuesday, 11:30, Room 3

Karl-Theodor Eisele (Université de Strasbourg, France)

Guaranteed accounts and profit sharing in life-insurance

The implementation of Solvency II in the European regulatory system for insurance companies since 2016 has revealed a number of problems when it comes to accounting principles. In particular, the integration of profit sharing rules into Solvency II still seems to need a comprehensive treatment.

Historically, the entitlement to profit sharing is rooted in the ideas of Solvency I, characterized by prudential actuarial interest rates and biased mortality tables. The central calculatory tool is the notion of the mathematical provision. In its standard version, the mathematical provision is both a prospective estimation value and a retrospective book value (see e.g. U. Gerber: Life Insurance, or K. Wolfsdorf: Versicherungsmathematik I.4, Satz 5). The legal regulation of profit sharing refers to the mathematical provision as an essential element (e.g. Code des assurances, article R 331-3). However, the mathematical provision no longer appears explicitly in Solvency II as an accounting item.

We present a comprehensive modelling of life-insurance contracts which integrates major parts of Solvency I and at the same time opens the transition to Solvency II. The guaranteed accounts associated with each contract serve as a key tool. These book values provide a sound basis for the procedures of profit sharing. Their sum reappears as the 'hard part' (i.e. the book-value part) of the 'best estimate of liability' within the transition to Solvency II.



Contributed talk: Wednesday, 11:10, Room 8

Hampus Engsner (Stockholm University, Sweden)

The value and replicating portfolio of a liability cash flow in discrete time

Given a liability cash flow, a set of financial replication instruments, including a numéraire asset, and a dynamic monetary risk measure, we derive a replicating portfolio whose market price is taken as the definition of the value of the liability cash flow. This replicating portfolio includes a specific book-keeping strategy for the position in the numéraire asset throughout the runoff of the liability. We show that, under natural conditions, the value of the liability coincides with a value obtained from multi-period valuation using cost-of-capital arguments.

The talk is based on a joint work with Filip Lindskog and Kristofer Lindensjo.


Contributed talk: Wednesday, 11:35, Room 7

Debora Daniela Escobar (University of Vienna, Austria)

Insurance premium in energy markets

We propose to price futures contracts in energy markets using the distortion premium principle. We allow pricing below the net premium due to the change of sign of the risk premia in this market. Using this framework, we formulate the risk premia in terms of Wasserstein distances and show continuity of the distortion premium principle with respect to this distance. Finally, we incorporate uncertainty in the model using Wasserstein balls as ambiguity sets. We apply this framework to the European Power Exchange (EPEX SPOT) in the German/Austrian area and consider Phelix Futures contracts of the European Energy Exchange (EEX).
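
For a non-negative risk X, the distortion premium principle referred to above can be written in its standard form (shown here only for orientation) as

\[
\pi_g(X) = \int_0^{\infty} g\big(P(X > x)\big)\, \mathrm{d}x,
\]

where \(g: [0,1] \to [0,1]\) is increasing with \(g(0)=0\) and \(g(1)=1\); a concave g yields a premium above the net premium E[X], while other choices of g allow pricing below it, consistent with a sign change of the risk premium.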


Contributed talk: Wednesday, 09:25, Room 3

Zhenzhen Fan (Nankai University, China)

Currency risk management under equity-currency contagion

Investors are exposed to currency risks when they invest internationally. While there is a vast literature on equity risk management, the management of currency risk has not gone beyond the universal hedging formula (that is, in equilibrium every investor has the same hedging ratio towards any foreign currency, regardless of the investor's home currency) proposed by Black (1990, JF) 27 years ago.

We propose a mutually exciting jump-diffusion model and characterize "safe-haven" currencies by a small equity-currency excitor, indicating that a price plunge in the equity market is not likely to trigger a depreciation of that currency. An "investment" currency, on the other hand, is characterized by a large equity-currency excitor, indicating that a price plunge in the equity market is likely to trigger a substantial depreciation of that currency. We examine how, in the long term when all investors hold the market equity portfolio, they discriminate between safe-haven currency risk and investment currency risk.

We derive equilibrium currency hedging strategies in this economy and find that all else equal, investors hedge less safe-haven currency risk than investment currency risk, a result that challenges the classic Black (1990) universal hedging formula. Our results shed light on the currency risk management in the long term when equity risk and currency risk are contagious.

References
Black, F. (1990). Equilibrium exchange rate hedging. The Journal of Finance 45(3), 899–907.

The talk is based on a joint work with Roger J. A. Laeven.


Contributed talk: Monday, 16:55, Room 1

Runhuan Feng (University of Illinois, United States of America)

Exponential functionals of Lévy processes and variable annuity guaranteed benefits

Exponential functionals of Brownian motion have been extensively studied in financial and insurance mathematics due to their broad applications, for example, in the pricing of Asian options. The Black-Scholes model is appealing because of mathematical tractability, yet empirical evidence shows that geometric Brownian motion does not adequately capture features of market equity returns. One popular alternative for modeling equity returns consists in replacing the geometric Brownian motion by an exponential of a Lévy process. In this paper we use this latter model to study variable annuity guaranteed benefits and to compute explicitly the distribution of certain exponential functionals.

The talk is based on a joint work with Alexey Kuznetsov and Fenghao Yang.


Contributed talk: Monday, 13:55, Room 8

Kevin John Fergusson (University of Melbourne, Australia)

Application of the Double Pareto Lognormal Distribution to Rainfall Events

The double Pareto lognormal distribution is obtained as the distribution of the stopped wealth where the wealth process is geometric Brownian motion and the random stopping time is exponentially distributed. We apply this to modelling rainfall levels where the accumulation of moisture in the cloud system follows geometric Brownian motion and the lifetime of the system is exponentially distributed.
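
A minimal simulation sketch of this construction (hypothetical parameters, not the authors' calibration):

import numpy as np

rng = np.random.default_rng(42)
mu, sigma, lam, x0 = 0.05, 0.30, 1.0 / 3.0, 1.0   # GBM drift, volatility, stopping rate, start value
n = 100_000

T = rng.exponential(scale=1.0 / lam, size=n)        # exponentially distributed lifetimes
Z = rng.standard_normal(n)
X = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)  # GBM evaluated at the stopping time

# X then follows a double Pareto lognormal law: a lognormal body with Pareto-type tails at both ends.
print("mean:", X.mean(), "  99.9% quantile:", np.quantile(X, 0.999))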

The talk is based on a joint work with Enrique Javier Calderin and Xueyuan Wu.


Contributed talk: Tuesday, 15:40, Room 2

Anselm Fleischmann (BELTIOS GmbH, Austria)

Recent Developments and Selected Models in Long-Term Care Insurance

In many countries, the ongoing demographic change is putting pressure on the financial feasibility of long-term care benefits in social insurance or public welfare. As a consequence, private insurers are increasingly focusing on this matter. A continuous-time, time-homogeneous multi-state Markov model for modelling long-term care products is introduced. The (weak) lumpability property is used to allow for consistent model calibration based on Austrian data. Possible applications range from modelling lifelong long-term care insurance products to forecasting expenditures in social insurance. As an alternative concept, activities of daily living (ADL) are addressed. Selected results from the literature are presented.


Contributed talk: Tuesday, 16:55, Room 3

Farid Flici (Centre for Research in Applied Economics for Development, Algeria)

Construction of a dynamic life-table based on the Algerian retirees mortality experience

Life expectancy is still improving in developing countries, and this improvement differs across sub-populations. Mortality of the retired population is often lower than that of the global population, so the use of dynamic life tables based on global population data might distort calculations when used for pension plan reserving. Life tables adapted to the retirees' mortality experience are more suitable for this purpose. Usually, data on the insured population are not available over a period long enough to allow a robust forecast. Moreover, such data come from a reduced population sample, which can lead to high irregularities related to the small population at risk. In such a case, the direct use of stochastic mortality models such as the Lee-Carter [4] or Cairns-Blake-Dowd [2] models to predict future mortality trends is not practical at all. For this reason, methods have been proposed to take into account the particularities of insured population mortality while ensuring good fitting quality and strong forecasting capacity. These methods aim to position the experience life table relative to an external reference [5], [6]. The main idea is to define a regression relationship between the specific death rates and the baseline death rates. This process is principally based on the Brass logit system [1]. The use of a baseline life table to estimate mortality schedules from incomplete or imperfect mortality data has become common practice for experience life-table construction, both in developed and developing countries. Kamega (2011) used the same approach to estimate actuarial life tables for some Central African countries [7], taking the French life tables (TGH05 and TGF05) as an external reference.

The main objective of the present work is to construct a prospective life table based on the mortality data of the Algerian retired population. The data are available for ten years (2004 to 2013) and for the ages [50, 99], arranged in five-year age intervals. They comprise the observed number of deaths and the number of survivors at the end of each year of the observation period. We have previously constructed a prospective life table based on global population mortality data [3]; the length of the observation period allows a robust forecast. Here, we use this life table as the baseline mortality to position the experience life table that we aim to construct in the present work. Finally, the obtained results are used for life-annuity pricing and reserving, in comparison with the results obtained with the global population life table.
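
Schematically, the positioning step relies on a Brass-logit-type relational model of the form (shown here in a simplified version; the exact specification, e.g. Brass's one-half factor or the use of survivorship rather than death probabilities, varies across the cited references)

\[
\operatorname{logit}\big(q_x^{\mathrm{exp}}\big) = a + b\, \operatorname{logit}\big(q_x^{\mathrm{ref}}\big),
\qquad
\operatorname{logit}(q) = \ln\!\frac{q}{1-q},
\]

where \(q_x^{\mathrm{ref}}\) are the baseline death probabilities and the coefficients a and b position the experience table relative to the reference.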

References
1. Brass, W. (1971). On the scale of mortality. In: W. Brass (ed.), Biological Aspects of Demography. London: Taylor and Francis.
2. Cairns, A.J.G., Blake, D., Dowd, K. (2006). A two-factor model for stochastic mortality with parameter uncertainty: Theory and calibration. Journal of Risk and Insurance 73, 687-718.
3. Flici, F. (2014). Longevity and life-annuities reserving in Algeria. Conference paper, East Asian Actuarial Association Conference, Taipei, Taiwan.
4. Lee, R.D., Carter, L.R. (1992). Modeling and forecasting U.S. mortality. Journal of the American Statistical Association 87(419), 659-671.
5. Planchet, F. (2005). Tables de mortalité d'expérience pour des portefeuilles de rentiers. Note méthodologique de l'Institut des Actuaires.
6. Planchet, F. (2006). Construction des tables de mortalité d'expérience pour les portefeuilles de rentiers : présentation de la méthode de construction. Note méthodologique de l'Institut des Actuaires.
7. Kamega, A. (2011). Outils théoriques et opérationnels adaptés au contexte de l'assurance vie en Afrique subsaharienne francophone - Analyse et mesure des risques liés à la mortalité. Doctoral dissertation, Université Claude Bernard - Lyon I, France.

The talk is based on a joint work with Frederic Planchet.


Contributed talk: Monday, 11:30, Room 4

Edward W. Frees (University of Wisconsin-Madison, United States of America)

Joint Modeling of Customer Loyalty and Risk in Personal Insurance

This work connects two strands of research on modeling personal (automobile and homeowners) insurance. One strand involves understanding the joint outcomes of separate personal insurance contracts, e.g., do higher automobile claims suggest more severe homeowner claims? Joint modeling of personal insurance is complicated by the fact that the outcomes typically have a mass at zero, corresponding to no claims, and when there are claims, the distributions tend to be right-skewed and long-tailed. Moreover, it is important to account for the insured's personal characteristics as well as characteristics of the contract and, in the case of auto and homeowners, features of the automobile and the house. A second strand of the literature involves understanding the determinants of customer loyalty. For example, we now know that when a customer cancels one insurance contract, he or she is likely to cancel all other contracts soon after.

This paper examines longitudinal data from a major Spanish insurance company that offers automobile and homeowners insurance. The dataset tracks 890,542 clients over five years, many of whom subscribed to both automobile and homeowners insurance (75,536, or approximately 8.5%). To represent this data, we use copula regression to model the joint outcomes of auto and home claims as well as customer loyalty. Including customer loyalty, or duration with the company, is complicated because of the censoring of this time variable as well as the discreteness. Although customers may cancel the contract at any time, cancelation typically occurs at contract renewal, making this variable essentially a discrete outcome. Composite likelihood and generalized method of moments techniques allow us to address the special features of this data structure.

Other findings as of this writing are preliminary, and we look forward to discussing our results with conference participants. Consistent with findings from other studies, we do know that intertemporal dependencies are important, e.g., high auto claims in one year signal high auto claims for the following year. Work is ongoing to develop strategies that will allow the insurance manager to identify profitable portfolios through measurement of a customer loyalty index.

The talk is based on a joint work with Catalina Bolancé, Montserrat Guillén and Emiliano Valdez.


Contributed talk: Tuesday, 14:20, Room 8

Rüdiger Frey (Vienna University of Economics and Business, Austria)

Value adjustments and dynamic hedging of reinsurance counterparty risk under partial information

Reinsurance counterparty risk is the risk that a reinsurance company fails to honor its obligations from a reinsurance treaty, for instance because the company defaults prior to maturity of the contract. While this risk is of high concern to practitioners and regulators, for instance under the Solvency II regulatory regime, there is very little quantitative research on measuring and hedging reinsurance counterparty risk.

In this paper we attempt to fill this gap. We compute valuation adjustments for reinsurance counterparty risk and we study the hedging of this risk by trading in credit default swaps on the reinsurance company. Perfect hedging is typically not possible and we resort to the (local) risk-minimization approach. We consider a partial information framework where the intensity of the loss process of the primary insurance contract is unobservable and correlated to the default intensity of the reinsurer. Moreover there might be direct contagion effects.

To determine the hedging strategy, we make use of an orthogonal decomposition of the market value of the reinsurance contract into a hedgeable and a non-hedgeable part (Galtchouk-Kunita-Watanabe decomposition). Moreover, we characterize the optimal hedging strategy in the full and the partial information framework by means of predictable projections. Stochastic filtering is used to compute the value adjustments and the hedging strategy under partial information.

The talk is based on a joint work with Claudia Ceci and Katia Colaneri.


Contributed talk: Tuesday, 16:30, Room 2

Michel Fuino (University of Lausanne - HEC, Switzerland)

Long-term care models and dependence probability tables by acuity level: new empirical evidence from Switzerland

Due to the demographic changes and population aging occurring in many countries, the financing of long-term care (LTC) poses a systemic threat. The scarcity of knowledge about the probability of an elderly person needing help with activities of daily living has hindered the development of insurance solutions that complement existing social systems. In this paper, we consider two models: a frailty level model that studies the evolution of a dependent person through mild, moderate and severe dependency states to death and a type of care model that distinguishes between care received at home and care received in an institution. We develop and interpret the expressions for the state- and time-dependent transition probabilities in a semi-Markov framework. Then, we empirically assess these probabilities using a novel longitudinal dataset covering all LTC needs in Switzerland over a 20-year period. As a key result, we are the first to derive dependence probability tables by acuity level, gender and age for the Swiss population. We discuss significant differences in the transition probabilities by gender, age and duration. Using sociodemographic covariates, we reveal the importance of household composition and geographical region of residence for selected transitions.

The talk is based on a joint work with Joël Wagner.


Contributed talk: Monday, 14:20, Room 5

Christian Furrer (University of Copenhagen / PFA Pension, Denmark)

Credibility for Markov chains

We study multidimensional credibility in the Markov chain life insurance setup. In the classic parametric model without credibility, estimation of the transition intensities can be split into a number of subproblems, which can be solved separately using traditional methods. We show that similar dimension reducing techniques apply when credibility is introduced in a specific manner, where the crucial assumption is independence between the individual credibility variables. Finally, the concepts and results of the study are discussed in the framework of experience rating for group disability insurance in a model with recovery.


Contributed talk: Monday, 11:55, Room 5

Guangyuan Gao (Renmin University of China, China)

Claims reserving using claims amounts and payments counts: a Bayesian compound Poisson model approach

We consider the claims reserving problem when both the claims amount run-off triangle and the payments count run-off triangle are available. The compound Poisson model proposed by Wüthrich (2003) is studied in a Bayesian framework. We derive analytical results for the process variance and the estimation variance of the predicted unpaid claims. A Gibbs sampler is proposed under conjugate priors. We show that the Bayesian compound Poisson model with non-constant dispersion can largely reduce the prediction uncertainty compared with the models in Wüthrich (2003).
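
Schematically, the compound Poisson cell structure underlying this approach (shown here in simplified notation; the parameterization and priors follow the cited paper) models the incremental payment \(S_{ij}\) in accident year i and development year j as

\[
S_{ij} = \sum_{k=1}^{N_{ij}} Y_{ij,k},
\qquad
N_{ij} \sim \mathrm{Poisson}(\lambda_{ij}),
\quad
Y_{ij,k} \overset{\mathrm{iid}}{\sim} \mathrm{Gamma},
\]

with \(N_{ij}\) the payment count in that cell; in the Bayesian version, priors are placed on the model parameters and posterior prediction of the unpaid claims is carried out via the Gibbs sampler.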

References:
Wüthrich, M. V. (2003). Claims reserving using Tweedie's compound Poisson model. ASTIN Bulletin 33, 331-346.

The talk is based on a joint work with Shengwang Meng.


Contributed talk: Tuesday, 16:55, Room 4

Catalina García García (Granada University, Spain)

Classification and claim prediction through logistic regression and support vector machines.

Insurers commonly use a number of a priori variables to differentiate risk levels among policyholders. This classification procedure is part of the pricing task and supports the equilibrium of the insurance market. Classical and Bayesian logit regressions are usually applied to study the relationship between a dichotomous response variable and one or more explanatory variables. Initially, these methodologies considered only symmetric and unimodal variables, but skewed versions can also be found in the scientific literature. In this paper, we propose to apply the different versions of logit regression (classical/Bayesian, symmetric/skew) to find factors that allow the classification of risky and non-risky policies. In addition, goodness-of-fit measures for the models and the significance of the asymmetry factor and of the remaining variables are discussed. Finally, the classification performance of these methodologies is compared with that of support vector machines, which perform the classification by finding the hyperplane that separates the two classes. All the analyzed models are fitted and tested on Australian automobile claim data.
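
A hypothetical illustration of such a comparison (synthetic data and made-up covariates, not the Australian claim data used in the paper):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(40, 12, n),          # driver age (hypothetical)
    rng.exponential(2.0, n),        # years since last claim (skewed, hypothetical)
    rng.integers(0, 2, n),          # urban indicator (hypothetical)
])
logit = -2.0 - 0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = risky policy (at least one claim)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # symmetric logit classifier
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)  # separating-hyperplane classifier

print("logistic accuracy:", accuracy_score(y_te, logreg.predict(X_te)))
print("SVM accuracy:     ", accuracy_score(y_te, svm.predict(X_te)))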

The talk is based on a joint work with Victor Blanco Izquierdo, Jose Maria Perez Sanchez and Roman Salmeron Gomez.


Contributed talk: Monday, 11:05, Room 4

Jose Garrido (Concordia University, Canada)

Hybrid Hidden Markov Models and GLMs for Auto Insurance Premiums

We describe a new approach to estimating the pure premium for automobile insurance. Using the theory of hidden Markov models (HMM), we derive a Poisson-gamma HMM and a hybrid between HMMs and generalized linear models (HMM-GLM). The hidden state represents a driver's skill, an unobservable variable. The Poisson-gamma HMM and the HMM-GLM have two emissions, claim severity and frequency, making them easier to compare to current actuarial models.

The proposed models help deal with the overdispersion problem in claim counts and allow dependence between claim severity and frequency. We derive MLEs for the parameters of the proposed models and then, using simulations with the Expectation-Maximization (EM) algorithm, we compare the three methods: GLMs, HMMs and HMM-GLMs. We show that in some instances the HMM-GLM outperforms the standard GLM, while the Poisson-gamma HMM under-performs the other models. Thus, in certain situations the added complexity of an HMM-GLM may be worth it.

The talk is based on a joint work with Lucas Berry.


Contributed talk: Wednesday, 10:15, Room 4

Vincent Goulet (Université Laval, Canada)

A foray into the insurance of things, or how to price individual objects without prior data

Imagine insuring not only the home of a philatelist for a nominal amount, but each individual stamp in her collection at its current market value. Until recently, this was economically feasible only for very large or high-profile collections. The Internet of Things brought us interconnected physical devices (from smoke detectors and door locks, to cars and whole buildings) collecting and exchanging data over a network, usually the Internet. For the insurance industry, this digital technology induces what we will refer to as the Insurance of Things: very finely grained coverage for a body of objects monitored constantly in one way or another. In this talk, we will review some challenges actuaries will face when pricing such insurance policies, notably the lack of prior data, varying effects from the different causes of peril and the inherent dependence between the insured goods. We will also present how we tackled these problems in an actual application.


Contributed talk: Tuesday, 11:55, Room 6

Peter Grandits (TU Wien, Austria)

A two dimensional dividend problem for collaborating companies and an optimal stopping problem

We consider two insurance companies with wealth processes described by two independent Brownian motions with drift. The goal of the companies is to maximize their expected aggregated discounted dividend payments until ruin. The companies are allowed to help each other by means of transfer payments, but they are not obliged to do so, if one company faces ruin. We show that the problem is equivalent to a mixture of a one dimensional singular control problem and an optimal stopping problem. The value function is characterized as the unique viscosity solution of the HJB equation, and we construct the value function as well as the optimal strategy rather explicitly.


Contributed talk: Wednesday, 11:10, Room 7

William Miguel Guevara Alarcon (University of Lausanne, Switzerland)

Modelling marine liability losses: The long and heavy tail of sinking ships

Marine is the oldest type of insurance coverage. However, unlike cargo and hull covers, marine liability is a rather young line of business whose losses can have very long tails. Additionally, the accumulation of losses from the risks insured by Protection and Indemnity (P&I) Clubs can provoke extreme claims on a marine portfolio. This work describes the recent evolution of the marine liability market and its extreme losses. The modelling of large losses for pricing high layers of non-proportional reinsurance contracts for this type of coverage is presented.


Contributed talk: Monday, 15:40, Room 2

Lukas Josef Hahn (University of Ulm and Institute for Finance and Actuarial Sciences (ifa), Germany)

Multi-year non-life insurance risk for dependent loss portfolios

Projection of own funds and capital requirements over a multi-year horizon has become a fundamental component in modern risk- and value-based business planning and regulatory requirements for insurance companies. For example, the Own Risk and Solvency Assessment (ORSA) process under Solvency II requires forecasting of the Overall Solvency Needs, i.e. the capital requirements based on the undertaking-specific risk profile, tolerance, and business plans, especially when the assumptions for the Solvency Capital Requirement (SCR) of the Solvency II standard formula are violated.

In this talk, we derive non-parametric and parametric bootstrap models to simulate the full predictive distribution of the undertaking-specific multi-year technical result of a non-life insurance company. Its business may consist of an arbitrary number of possibly dependent loss portfolios that meet the assumptions of classical distribution-free loss reserving models such as the chain ladder model. The full predictive distribution allows us to quantify multi-year non-life insurance risk and its reserve and premium risk components through corresponding risk measures, e.g. the Value-at-Risk as in Solvency II, applied to the change in the basic own funds of the insurance company over a multi-year time horizon.

Based on data of a fictional non-life insurance company, we conduct an extensive and insightful case study in light of the ORSA process by calculating an SCR according to our full undertaking-specific non-life insurance risk and benchmarking it against the SCR for the non-life insurance risk module according to the Solvency II standard formula (with and without undertaking-specific parameters). We further survey the performance of closed-form estimators of the mean squared error of prediction, as a second-moment risk measure, from recent analytic approaches (including those postulating a posteriori distributional assumptions), in comparison with the empirical findings from our bootstrap model.

The talk is based on a joint work with Marc Linde.


Contributed talk: Tuesday, 13:30, Room 6

Bingyan Han (The Chinese University of Hong Kong, Hong Kong S.A.R. (China))

Optimal consumption and investment problem with ambiguous correlation

Consider an economy with a risk-free asset and two risky assets (or two stochastic variables) with an ambiguous correlation. An ambiguity- and risk-averse investor determines the optimal investment and consumption strategy robust to the uncertainty in the correlation. Our investigation is based on subsistence consumption, so that there is also a lower bound for the consumption rate. The problem becomes a utility maximization problem over the worst-case scenario with respect to the possible correlations. As the correlation ambiguity leads to robust optimal decisions over a set of nonequivalent probability measures, the G-expectation framework is adopted to formulate the problem as a maximin optimization. The maximin-type HJB equation is solved in closed form for a general class of utility functions, including power and exponential utility functions, and the optimal investment and consumption policy is obtained. Under certain regularity conditions, we prove the verification theorem under the G-framework. Economic interpretations drawn from the optimal investment and consumption rule with ambiguous correlation can better explain investors' behavior.

The talk is based on a joint work with Hoi Ying Wong.


Contributed talk: Monday, 14:45, Room 1

Hamza Hanbali (KU Leuven, Belgium)

Systematic Risk in Long Term Life Insurance Business: The need for appropriate indexing mechanisms

The Law of Large Numbers (LLN) is the fundamental concept on which classical insurance business is built. In the framework of this probabilistic law, the realizations of insurance risks in a given portfolio are considered as independent random variables. The independence property implies that the risks are devoid of interaction. The LLN guarantees that the gains and losses of the insurer average out when the portfolio size is sufficiently large. This way of eliminating risk is known as `diversification'.

In practice, actuaries rely on past data to perform forecasts of actuarial and financial quantities. Very often, the main quantities to be estimated are the probabilities of occurrence of the risk (e.g. death or sickness probabilities), the claim amounts in case they are random (for instance, in health insurance contracts) and the interest rates for discounting future cash flows. These estimated values allow the insurer to obtain a fair price, or premium. Thereafter, both the premium and its underlying estimates are used as a basis for other tasks, such as reserving or Asset and Liability Management. Therefore, in order for the insurance company to meet its obligations toward both the client and the regulator, forecasts must be performed with high accuracy.

However, given that the quantities needed in the pricing are stochastic processes, accurate estimates are often very hard to obtain, as they depend on the underlying model and the data. Moreover, errors that occur in the pricing process are magnified when the portfolio size is increased, which can lead to significant losses for the insurer. This form of risk, which stems from the uncertainty about the actuarial and financial quantities, is called `systematic risk', and cannot be diversified away using traditional insurance techniques.
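A textbook illustration (not taken from the paper) makes the distinction precise: for exchangeable risks $X_1,\ldots,X_n$ with common variance $\sigma^2$ and pairwise correlation $\rho\ge 0$,

\[
\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^n X_i\right)
= \frac{\sigma^2}{n} + \Big(1-\frac{1}{n}\Big)\rho\,\sigma^2
\;\longrightarrow\; \rho\,\sigma^2 \quad (n\to\infty),
\]

so the diversifiable part vanishes as the portfolio grows, while the term $\rho\sigma^2$, driven by the common (systematic) factor, does not.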

In our paper, we do not claim that forecasting is a vain exercise. Rather, we suggest combining forecasting with other tools that can guarantee the solvency of the insurer. In particular, we focus on the systematic risk stemming from the uncertainty on the life table. It is demonstrated that appropriate updating mechanisms make it possible to cope with both the uncertainty on the estimates and the systematic risk inherent in the insurance business. First, we examine the consequences of working with unknown survival probabilities for which only a prediction is available. Then, we investigate solutions consisting of transferring part of the risk back to the clients. Finally, we show that this approach helps to achieve solvency at an affordable cost.

The talk is based on a joint work with Michel Denuit, Jan Dhaene and Julien Trufin.


Contributed talk: Wednesday, 11:35, Room 1

Dinah Heidinger (Friedrich-Alexander University Erlangen-Nürnberg, Germany)

Awareness, Determinants, and Value of Reputation Risk Management: An Empirical Study in the Banking and Insurance Industry

Reputation risk has become increasingly important, especially in the financial services industry where trust plays a crucial role. The aim of this paper is to provide a holistic view of the practice of reputation risk management based on a sample of US and European banks and insurers. This is done by focusing on three central aspects: First, we investigate how the awareness and management of reputation risk as reflected in annual reports has developed over the last ten years by adopting a text mining approach. Second and third, having identified firms with an implemented reputation risk management, we empirically study its determinants and firm characteristics as well as its value-relevance, which to the best of our knowledge has not been done so far. Our results show that the awareness of reputation risk has increased and that it has also gained in importance relative to other risks. Moreover, we find that less leveraged and more profitable firms are significantly more likely to implement a reputation risk management. This also holds for firms that belong to the banking industry, are situated in Europe, have a higher awareness of their reputation and face fewer risks. Finally, we obtain first indications that reputation risk management can increase firm value.

The talk is based on a joint work with Nadine Gatzert.


Contributed talk: Tuesday, 14:20, Room 6

Lars Frederik Brandt Henriksen (PFA Pension, Denmark)

On the distribution of the excedents of funds with assets and liabilities in presence of solvency and recovery requirements

In this paper, we consider a profitable, risky setting with two separate, correlated asset and liability processes. The company that is considered is allowed to distribute excess profits (traditionally referred to as dividends in the literature), but is regulated and is subject to particular regulatory (solvency) constraints (the nature of which is inspired by Paulsen, 2003 and Avanzi and Wong, 2012). Importantly, because of its bivariate nature, such distributions of excess profits can take two alternative forms. These can originate from a reduction of assets (and hence a payment to owners), but also from an increase of liabilities (when these represent the wealth of owners, such as in pension funds). The latter is particularly relevant if leakages do not make sense because of the context, such as in pension funds where assets are locked until retirement.

A bivariate geometric Brownian motion was introduced in Gerber and Shiu (2003). They considered two problems: (a) to keep the funding ratio (ratio of assets to liabilities) within a band, by equalising inflows and outflows at the boundaries of the band (they conjectured a fund “should” do so); and (b) to maximize (in absence of inflows) the expected present value of outflows (dividends). They conjectured that a barrier dividend strategy should be optimal. Decamps et al. (2006) extended (a) to a finite time horizon, while Decamps et al. (2009) proved that the conjecture in (b) is correct. Also, Chen and Yang (2010) extended the results of Gerber and Shiu (2003) to a regime-switching environment. Avanzi et al. (2016) determined that barrier type distributions are optimal in presence of a solvency constraint (such as in Paulsen, 2003) or in presence of forced rescue measures below a pre-specified level.

In this paper, we extend the model of Gerber and Shiu (2003) and consider recovery requirements (see, for instance, Avanzi and Wong, 2012) for the distribution of excess funds. The recovery requirements are an extension of the plain vanilla solvency constraints considered in Paulsen (2003) and Avanzi et al. (2016) and require funds to reach a higher level of funding than the solvency trigger level before they can distribute excess funds again. We obtain closed form expressions for the expected present value of distributions (asset decrements or liability increments) when a distribution barrier is used. The optimal barrier level is obtained, and its existence and uniqueness are discussed.
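Schematically, and in our own notation, the Gerber and Shiu (2003) setting underlying the paper consists of two correlated geometric Brownian motions for assets and liabilities,

\[
\mathrm{d}A_t = A_t\big(\mu_A\,\mathrm{d}t+\sigma_A\,\mathrm{d}W_t^{(1)}\big),\qquad
\mathrm{d}L_t = L_t\big(\mu_L\,\mathrm{d}t+\sigma_L\,\mathrm{d}W_t^{(2)}\big),\qquad
\mathrm{d}\langle W^{(1)},W^{(2)}\rangle_t=\rho\,\mathrm{d}t,
\]

with funding ratio $F_t=A_t/L_t$; distributions of excess funds then correspond to decrements of $A$ or increments of $L$ triggered when $F$ reaches a barrier, subject to the solvency and recovery constraints described above.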

References
Avanzi, B., Henriksen, L. F. B., Wong, B., 2016. Optimal dividends in an asset-liability surplus model under solvency considerations. Tech. Rep. 2016ACTL02, UNSW Australia Business School.
Avanzi, B., Wong, B., 2012. On a mean reverting dividend strategy with Brownian motion. Insurance: Mathematics and Economics 51 (2), 229–238.
Chen, P., Yang, H., 2010. Pension funding problem with regime-switching geometric Brownian motion assets and liabilities. Applied Stochastic Models in Business and Industry 26 (2), 125–141.
Decamps, M., Schepper, A. D., Goovaerts, M., 2006. A path integral approach to asset-liability management. Physica A: Statistical Mechanics and its Applications 363 (2), 404 – 416.
Decamps, M., Schepper, A. D., Goovaerts, M., 2009. Spectral decomposition of optimal asset-liability management. Journal of Economic Dynamics and Control. 33 (3), 710–724.
Gerber, H. U., Shiu, E. S. W., 2003. Geometric Brownian motion models for assets and liabilities: From pension funding to optimal dividends. North American Actuarial Journal 7 (3), 37–56.
Paulsen, J., Oct. 2003. Optimal dividend payouts for diffusions with solvency constraints. Finance and Stochastics 7 (4), 457–473.

The talk is based on a joint work with Benjamin Avanzi and Bernard Wong.


Contributed talk: Tuesday, 13:55, Room 1

Peter Hieber (University of Ulm, Germany)

Tonuity: A novel individual-oriented retirement plan

For insurance companies in Europe, the introduction of Solvency II leads to a tightening of rules for solvency capital provision. In life insurance, this especially affects retirement products that contain a significant portion of longevity risk (for example conventional annuities). This could be an incentive for insurance companies to offer retirement products that shift longevity risk (at least partially) to policyholders. An extreme case is a so-called tontine product where the insurance company's role is merely administrative and longevity risk is shared within a pool of policyholders. From the policyholder’s perspective, this extreme is not desirable as it leads to high uncertainty of retirement income, especially at old ages. For this reason, this talk suggests a new product – the tonuity – that combines the appealing features of reduced solvency capital requirements (tontine) and income security (conventional annuity).

The talk is based on a joint work with An Chen and Jakob Klein.


Contributed talk: Tuesday, 16:05, Room 2

Jonas Hirz (BELTIOS GmbH, Austria)

Actuarial Applications of MCMC in Mortality and Morbidity

Embedded in Bayesian statistics, Markov chain Monte Carlo (MCMC) is a powerful tool for high-dimensional parameter estimation, able to provide answers to numerous actuarial questions. In this talk a glimpse of its rich applicability in mortality and morbidity modelling is given. Starting with high-dimensional estimation of death probabilities and corresponding mortality trends, it is shown how MCMC can be used to quantify parameter uncertainty, as well as how to detect selection effects by checking the appropriateness of a second-order life table against company data. As an application to morbidity modelling, we use MCMC to estimate transition rates and the expected years lived in disability within a multi-state Markov model for long-term care insurance, using Austrian data.
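As a minimal illustration of the type of computation involved (not the talk's actual model or data), the following Python sketch runs a random-walk Metropolis sampler for a single one-year death probability q with a Binomial likelihood and a Beta prior; all figures are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
deaths, exposure = 12, 1000        # hypothetical company data
a, b = 2.0, 200.0                  # Beta(a, b) prior on the death probability q

def log_post(q):
    # log posterior up to a constant: Binomial likelihood times Beta prior
    if not 0.0 < q < 1.0:
        return -np.inf
    return (deaths + a - 1) * np.log(q) + (exposure - deaths + b - 1) * np.log(1 - q)

q, chain = 0.01, []
for _ in range(20000):
    prop = q + 0.002 * rng.standard_normal()      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(q):
        q = prop                                  # accept
    chain.append(q)

post = np.array(chain[5000:])                     # discard burn-in
print(post.mean(), np.quantile(post, [0.05, 0.95]))

The same mechanics carry over, with multivariate proposals, to the high-dimensional mortality and transition-rate settings discussed in the talk.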


Contributed talk: Tuesday, 16:55, Room 2

Wen-Yen Hsu (Feng Chia University, Taiwan)

Adverse Selection and the Decision to Lapse and Reinstate Policies

This paper examines whether reinstated long-term health insurance policies have higher claim experience than continuously-in-force policies. If the reinstatement decision is made independently of health condition, then the difference between the claim experience of reinstated policies and that of continuously-in-force policies should be insignificant. However, if the reinstatement decision involves adverse selection, we will observe reinstated policies having higher claim experience than continuously-in-force policies. The medical treatment decision is seen as a two-step process, in which the patient first determines whether to visit the doctor, and the doctor then determines the extent of the treatment. Thus, we employ a two-part model. Our empirical results show that reinstated policies have a higher claim probability, but not a higher claim severity, than continuously-in-force policies. Our results indicate that reinstatement decisions may be subject to adverse selection, and that adverse selection exists in the probability of the insured visiting a doctor, but not in the extent of treatment. Finally, our robustness tests indicate that lapsed policies have significantly lower pre-lapse claim experience than continuously-in-force policies. This shows that insurers are subject to two-fold adverse selection: one in the decision to lapse and one in the decision to reinstate.
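For readers unfamiliar with two-part models, the Python/statsmodels sketch below (simulated data; the variable names and the severity specification are ours, not the paper's) fits a logistic part for the decision to claim and a log-linear part for the claim amount conditional on claiming.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "reinstated": rng.integers(0, 2, n),          # hypothetical indicator of a reinstated policy
    "age": rng.integers(30, 70, n),
})
lin = -2.0 + 0.4 * df["reinstated"] + 0.01 * df["age"]
df["has_claim"] = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-lin))
df["claim_amt"] = np.where(df["has_claim"], rng.gamma(2.0, 500.0, n), 0.0)

X = sm.add_constant(df[["reinstated", "age"]])

# Part 1: probability of any claim (the decision to visit the doctor)
part1 = sm.GLM(df["has_claim"].astype(float), X, family=sm.families.Binomial()).fit()

# Part 2: (log) claim amount, conditional on a positive claim (extent of treatment)
pos = df["has_claim"]
part2 = sm.OLS(np.log(df.loc[pos, "claim_amt"]), X.loc[pos]).fit()

print(part1.params["reinstated"], part2.params["reinstated"])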

The talk is based on a joint work with Gene Lai and Karen Su.


Contributed talk: Monday, 14:45, Room 6

Hong-Chih Huang (National Chengchi University, Taiwan)

Optimal Allocations of Natural Hedging Strategies for Insurance Companies

This research investigates a natural hedging strategy and attempts to find an optimal allocation of insurance products that can deal with longevity risks for insurance companies. Currently, many actuaries still misprice both life and annuity products because mortality improvement is not sufficiently taken into account. This mispricing problem commonly exists in countries with official static life or annuity tables issued by governments or actuarial societies, where all insurance companies follow these official tables to price life or annuity products (e.g., Taiwan, Korea, Japan). Although the purpose of such an annuity table is valuation, almost all insurance companies follow it to calculate premiums due to a lack of real experience. Thus, even if insurance companies have good mortality models that account for actual future improvements in mortality, they may not be able to price and sell their annuity products using these derived mortality rates: it would be too expensive to sell them in competitive markets, because most insurance companies currently still use static mortality tables to price annuity products. For the same reason, they are not willing to sell their life products at lower prices based on these derived mortality rates, because most insurance companies charge higher prices by using static mortality tables to price life products. Thus, greater longevity risk implies that life insurers earn profits while annuity insurers suffer losses. Natural hedging provides an alternative way to address this mispricing problem due to longevity risk.

This paper aims to address the three shortcomings of the previous literature and provides a more flexible, feasible natural hedging strategy for insurance companies in practice. Our numerical results show that insurance companies should hold more annuity premiums than life premiums in order to reduce the mispricing volatility. Minimizing the volatility of mispricing is an important issue; however, if the constraint of non-negative profit is ignored and only volatility risk is considered, insurance companies could suffer a deficit from mispricing. With the constraint of non-negative profit, insurance companies should hold higher proportions of life premiums for younger policyholders and higher proportions of annuity premiums for older policyholders in order to minimize the volatility of mispricing without incurring a deficit due to longevity risk. We find that this strategy is quite different from the one obtained without the non-negative profit constraint, and that it is far from the current pattern of premium proportions in practice, even though its objective best meets realistic needs. Therefore, this paper further sets another constraint under which the suggested premium proportions for each policyholder age must fall within a certain range of the current allocation of insurance companies. With this setting, the suggested strategy can provide a feasible optimal solution that meets the requirement of natural hedging with a non-negative or positive profit.

The talk is based on a joint work with Chou-Wen Wang.


Contributed talk: Wednesday, 11:35, Room 2

Isaudin Ismail (University of Leicester, United Kingdom)

Risk aggregation using a mixture of copulas

Insurance and reinsurance companies have to calculate the solvency capital requirement in order to ensure that they can meet their future obligations to policyholders and beneficiaries. The solvency capital requirement is an essential risk management tool, especially when extreme catastrophic events result in a high number of possibly interdependent claims. This paper focuses on the problem of aggregating the risk coming from several insurance lines of business. Our starting point is the Hierarchical Risk Aggregation method, which was initially based on 2-dimensional elliptical copulas. We use copulas from the Archimedean family and a mixture of copulas. The results show that a mixture of copulas can provide a better fit to the data and consequently avoid overestimation or underestimation of the capital requirement of an insurance company.
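As a self-contained illustration of the kind of building block involved (a sketch only; the mixture weights, copula families and marginals below are hypothetical and not those of the paper), the following Python code samples from a two-component mixture of a Clayton and a Gaussian copula and aggregates two lines of business.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def clayton_sample(n, theta, d=2):
    # Marshall-Olkin algorithm: V ~ Gamma(1/theta), U_i = (1 + E_i/V)^(-1/theta)
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    E = rng.exponential(size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)

def gaussian_sample(n, rho, d=2):
    # Gaussian copula with common correlation rho
    cov = np.full((d, d), rho)
    np.fill_diagonal(cov, 1.0)
    Z = rng.multivariate_normal(np.zeros(d), cov, size=n)
    return norm.cdf(Z)

def mixture_sample(n, w=0.6, theta=2.0, rho=0.5):
    # Draw from w * Clayton + (1 - w) * Gaussian
    pick = rng.uniform(size=n) < w
    U = np.empty((n, 2))
    U[pick] = clayton_sample(int(pick.sum()), theta)
    U[~pick] = gaussian_sample(int((~pick).sum()), rho)
    return U

U = mixture_sample(100_000)
losses = np.exp(norm.ppf(U)).sum(axis=1)      # two lines with standard lognormal marginals
print(np.quantile(losses, 0.995))             # a simple VaR-type aggregate capital figure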

The talk is based on a joint work with Alexandra Dias and Aihua Zhang.


Contributed talk: Tuesday, 10:40, Room 5

Petar Jevtic (Arizona State University (forthcoming), United States of America)

Longevity Bond Pricing in Equilibrium

We consider a partial equilibrium model for pricing a longevity bond in a model with stochastic mortality intensity that affects the income of economic agents. The agents trade in a risky financial security and in the longevity bond in order to maximize their utilities. The agents' risk preferences are of monetary type and are described by backward stochastic differential equations (BSDEs). The endogenous equilibrium bond price is characterized by a BSDE. By using the Clark-Haussmann formula, we prove that our longevity bond completes the market. Illustrative numerical examples are provided.

The talk is based on a joint work with Minsuk Kwak and Traian Pirvu.


Contributed talk: Tuesday, 11:05, Room 7

Lanpeng Ji (University of Applied Sciences of Western Switzerland, Switzerland)

Ruin problem of a two-dimensional fractional Brownian motion risk process

In this talk we shall discuss the ruin probability and ruin time of a two-dimensional fractional Brownian motion risk process. The two fractional Brownian motion risk processes model the surplus processes of two insurance companies (or two branches of the same company) that divide between them both claims and pure premiums in some specified proportions. The ruin problem considered is that of the two-dimensional risk process first entering the negative quadrant, that is, the simultaneous ruin problem. We derive both asymptotics of the ruin probability and approximations of the scaled conditional ruin time as the initial capital tends to infinity.

The talk is based on a joint work with Stephan Robert.


Contributed talk: Tuesday, 14:45, Room 1

Peter Løchte Jørgensen (Aarhus University, Denmark)

EIOPA’s/Solvency II's Smith‐Wilson method for discounting pension liabilities

After a lengthy development process and many delays the European Union’s new regulatory framework for insurance and reinsurance undertakings – Solvency II – finally came into effect on January 1st, 2016. The sections of the Solvency II Directive (and its various amendments) that deal with how pension liabilities should be valued introduce a specific mathematical model for the determination of the term structure of risk-free interest rates that should be used for valuation purposes. The model goes by the name “The Smith-Wilson model” and originates from a proprietary report by the British actuarial consultancy firm Bacon & Woodrow (Smith & Wilson (2000)). Not only is this model fairly unknown in the existing and quite comprehensive academic literature on term structure estimation, it must also be applied with several of its key parameters being dictated by EIOPA. For example, EIOPA has decided that the Smith-Wilson model’s zero-coupon interest rates should converge towards a level which is fixed at 4.2% by one of the model parameters, referred to as the “ultimate forward rate”. This approach should call for some concern in light of the fact that long-term risk-free market interest rates are currently far below this level in many countries in Europe.

The first objective of the project is to present a complete and in-depth mathematical description of the Smith-Wilson technique and to provide a thorough analysis of the model’s properties. We compare it to the Nelson-Siegel/Svensson class of term structure models, which is a common and popular choice among both academics and practitioners (and most central banks) for the estimation of the term structure of interest rates.
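For orientation, the Smith-Wilson discount function is usually written as follows (reproduced here from memory as a sketch of the conventions in EIOPA's technical documentation, so details such as scaling may differ):

\[
P(u) = e^{-\omega u} + \sum_{j=1}^{N}\zeta_j\,W(u,u_j),\qquad
W(u,v) = e^{-\omega(u+v)}\Big[\alpha\min(u,v)-e^{-\alpha\max(u,v)}\sinh\!\big(\alpha\min(u,v)\big)\Big],
\]

where $\omega=\ln(1+\mathrm{UFR})$ encodes the ultimate forward rate, $\alpha$ governs the speed of convergence, $u_j$ are the maturities of the market instruments, and the weights $\zeta_j$ are solved for so that $P$ reproduces the observed market prices.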

Secondly, the Smith-Wilson model’s practical implementation will be demonstrated and discussed, and we will analyze how the valuation results produced by the model are affected by the parameter restrictions imposed by EIOPA. We also plan to take a critical look at some of the other adjustments and modifications to the risk-free term structure curve that are allowed by EIOPA and which do not all seem well justified by financial theory.

A preliminary conclusion from the project is that there seems to be a conflict between, on one hand, how EIOPA requires the term structure of risk-free interest rates to be estimated and, on the other hand, one of the main overall objectives of Solvency II which is that the assessment of the financial position of pension and insurance companies should be market consistent and “should rely on sound economic principles and make optimal use of the information provided by financial markets”.

EIOPA = European Insurance and Occupational Pensions Authority
ESFS = European System of Financial Supervision
Smith & Wilson (2000), “Fitting Yield Curves with Long Term Constraints”, Bacon & Woodrow report.


Contributed talk: Monday, 15:40, Room 4

Meelis Käärik (University of Tartu, Estonia)

Estimation of claim frequency with the emphasis on model comparison principles

We consider a problem of estimating claim frequency with different methods. More precisely, we compare the classification and regression trees (C&RT) approach with a more dynamic local regression approach. The C&RT model is appealing because of its relatively simple construction and a possibly complex but easily interpretable model. Local regression, on the other hand, avoids the "price shock" issue, which is one of the main problems with static classification methods.
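To fix ideas, the following Python sketch (simulated data; nothing here is from the paper) contrasts a simple regression tree with a Nadaraya-Watson local regression for claim frequency as a function of a single rating factor.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 3000)                               # hypothetical rating factor
lam = 0.05 + 0.10 * np.exp(-((age - 25) / 10.0) ** 2)         # hypothetical true claim frequency
claims = rng.poisson(lam)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200)
tree.fit(age.reshape(-1, 1), claims)

def local_regression(x0, x, y, h=5.0):
    # Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(20, 75, 6)
print(tree.predict(grid.reshape(-1, 1)))                      # piecewise-constant tree estimates
print([local_regression(g, age, claims) for g in grid])       # smoothly varying local estimates

The tree produces piecewise-constant tariff cells (hence the potential "price shock" at cell boundaries), whereas the local estimate varies smoothly with the rating factor.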

In a situation when one applies different methods to data and tries to find the most suitable one, the question of the precision of the prediction errors always arises. We will focus on the estimation problem from this aspect and provide some tools to answer the question of when one method is really better than another.

A case study with a real vehicle casco insurance dataset is included, in which the estimates obtained by local regression, "global" regression and the C&RT approach are compared.

The talk is based on a joint work with Raul Kangro and Liina Muru.


Contributed talk: Wednesday, 11:10, Room 4

Vladimir K. Kaishev (Cass Business School, City, University of London, United Kingdom)

Geometrically Designed Splines in Generalized (Non-)Linear Models with Actuarial Applications

We present a method for constructing what we call Geometrically Designed Splines (GeDS) with variable knots in the framework of Generalized (Non-)Linear Models. The latter is a generalization to the exponential family of the GeDS methodology developed by Kaishev et al. (2016) in the normal case. It utilizes a novel geometric interpretation of the estimation of the spline regression coefficients and knots. GeDS is based on the so-called variation diminishing (shape preserving) properties of spline functions, combined with a data driven phase of recovering the underlying control polygon of the spline. The method produces simultaneously linear, quadratic, cubic (and possibly higher order) spline fits with one and the same number of B-spline coefficients and is very efficient for both smooth and wiggly underlying functional dependencies. Small/large sample properties of the GeDS estimator are explored and it is compared with competitors implemented within the families of semi-parametric models (SPM), Generalized Smoothing Spline ANOVA models (GSS) and Generalized Additive Models (GAM). We demonstrate how the GeDS method is applied in the context of multivariate Archimedean copula estimation. The GeDS estimation procedure is further illustrated numerically, based on simulated and real data examples from actuarial modelling and materials science.

The talk is based on a joint work with Dimitrina S. Dimitrova, Andrea Lattuada and Richard J. Verrall.


Contributed talk: Monday, 13:30, Room 4

Alan Ker (University of Guelph, Canada)

Borrowing Information across Space and Time from Possibly Similar Data Generating Processes: Implications for Rating Crop Insurance Contracts

The goal is to nonparametrically estimate the time-conditional density of a random variable with a time-varying (beyond the first two moments) data generating process in the presence of small T and an abundance of spatially correlated data from other possibly similar data generating processes (dgps). This situation is common in rating crop insurance contracts. In this manuscript various estimators that borrow information (shrinkage estimators, smoothing methods, Bayesian methods) are proposed and evaluated in the context of rating crop insurance contracts.

Crop yield data are plagued by a time-varying dgp (caused by changing innovations), a paucity of historical data, but a wealth of yield data from other areas that have possibly similar dgps. Moreover, the realized yield data are spatially correlated and the correlation structures are non-trivial. We want to estimate the yield distribution for a given area at a given time (usually T+1) using historical yield data from a set of areas. The estimated densities are used to recover producer premium rates for crop insurance programs.

Crop insurance in the developed world is the main avenue to funnel monies to the agricultural production sector. The US averages $6 billion annually and this figure is trending upwards. In Canada, monies seem to have plateaued at $1 billion annually.


Contributed talk: Monday, 14:20, Room 2

A. Sevtap Kestel (Middle East Technical University, Turkey)

An Internal Model for an Emerging Market under Copula Approach: Turkish MTPL Case

Besides the standard approach, partial or full internal models are used in the determination of the required capital in Solvency II. The solvency capital requirement (SCR), which is composed of sub-modules for the calculation of different risks, takes into account a predetermined dependence structure among risk classes. In this study, we aim to analyze the compliance and impact of the Solvency II requirements on emerging markets, such as Turkey, based on real data. The study focuses on non-life premium and reserve risk calculations using both the standard formula and the internal model for insurance companies with different market sizes, whose dependence structures are also analyzed by using copulas. Based on the real-life occurrences in the motor third-party liability (MTPL) and other motor (motor) segments over the years 2009-2015, internal model results for the SCR are compared to the results of the Solvency II standard formula and the current Turkish solvency capital regime. Additionally, the present value of the mean pure premium per policy is estimated based on the dependence structure defined by a Clayton copula in terms of the joint distribution of the development factor and the claim size. In addition, in case the underwritten premium without loading and taxes remains lower than the pure premium, an alternative type of technical reserve, called the premium risk reserve, is introduced to aid the regulators.

The talk is based on a joint work with Mehmet Höbek and Erdener Usta.


Contributed talk: Tuesday, 11:05, Room 4

Marie-Claire Koissi (University of Wisconsin-Eau Claire, United States of America)

A Survey of Insurance Applications of the FAHP

In recent years, the Analytic Hierarchy Process (AHP) and the Fuzzy Analytic Hierarchy Process (FAHP) have been predominantly used in the areas of engineering, manufacturing, and finance, to solve increasingly complex problems involving pair-wise comparison, evaluation and selection, allocation and planning, and other multiple-criteria decisions in general.
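As a reminder of the basic AHP computation (a generic textbook example in Python, unrelated to the talk's applications), the priority weights are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix:

import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix for three criteria
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                          # AHP priority weights

ci = (vals.real[k] - A.shape[0]) / (A.shape[0] - 1)   # consistency index
print(w, ci)

The FAHP replaces the crisp entries of A by fuzzy numbers and aggregates the resulting fuzzy weights.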

Multi-criteria decision making methods (MCDM) are not yet widely used in insurance, and actuarial applications of these methods are not extensive in the literature. However, it is undeniable that actuarial science is a very promising field for MCDM. Possible actuarial applications of the FAHP are in risk management, fraud detection, portfolio and asset management, bond rating, and financial and strategic decision-making.

This talk will present possible applications of the AHP (and FAHP) to insurance and actuarial science problems.

The talk is based on a joint work with Arnold F. Shapiro.


Contributed talk: Wednesday, 09:50, Room 1

Thomas Kokholm (Aarhus University, Denmark)

Constant Proportion Portfolio Insurance Strategies in Contagious Markets

We study the risk embedded in the Constant-Proportion Portfolio Insurance trading strategy in a jump-diffusion model where the price of the underlying asset may experience self-contagion. Constant Proportion Portfolio Insurance (CPPI) strategies are popular dynamic portfolio strategies that allow investors to gear up the upside potential of a risky asset while limiting the downside risk.

In an idealized setting under continuity assumptions imposed both on the trading frequency and the dynamics of the risky asset, the risk of violating the guarantee embedded in the CPPI is zero. However, in practice both assumptions are violated: first, as the CPPI is often written on risky funds with low liquidity the rebalancing frequency of the CPPI is typically performed monthly or even quarterly making the risk of breaching the floor between trading dates non-negligible. Second, it has been widely documented that price trajectories contain jumps, which introduces a risk of breaching the floor even under continuous-time trading.
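Before turning to the literature on these two points, the gap-risk mechanism can be made concrete with a small Monte Carlo sketch (Python; all parameters are hypothetical and the jump specification is a crude stand-in for the paper's self-exciting model): a CPPI with multiplier m invests m times the cushion in the risky asset and the rest at the risk-free rate, rebalancing monthly.

import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 12, 100_000     # one year, monthly rebalancing
dt = T / n_steps
m, r = 4.0, 0.01                           # CPPI multiplier and risk-free rate
mu, sigma = 0.05, 0.15                     # diffusion part of the risky asset
lam, jump = 1.0, -0.15                     # Poisson jump intensity and log jump size
G = 90.0                                   # guaranteed amount at maturity

V = np.full(n_paths, 100.0)
for k in range(n_steps):
    floor = G * np.exp(-r * (T - k * dt))            # discounted guarantee
    E = np.clip(m * (V - floor), 0.0, V)             # risky exposure (no leverage, no shorting)
    jumps = rng.poisson(lam * dt, n_paths) * jump
    gross = np.exp((mu - 0.5 * sigma**2) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal(n_paths) + jumps)
    V = E * gross + (V - E) * np.exp(r * dt)         # rebalance at the end of the period

print("estimated P(V_T < guarantee):", np.mean(V < G))

With continuous rebalancing and no jumps this probability would be zero; discrete rebalancing and downward jumps make it strictly positive, which is exactly the gap risk being priced.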

The first point is addressed in Balder et al. (2009) where the effectiveness of CPPI strategies under discrete-time trading is studied. They analyze a discretely rebalanced CPPI under the assumption that the risky asset evolves as a geometric Brownian motion. The second point is addressed in Cont and Tankov (2009), where they relax the continuity assumption of the price process and study the risk of the CPPI in models driven by Lévy processes and under continuous-time trading.

We conduct a study of the gap risk coverage associated with CPPI strategies in a setting where the risky asset has a self-exciting jump component. While in Lévy-driven models the intensity of adverse shocks is constant, self-exciting jump processes account for the risk of jump clustering documented in real markets. First, under the assumption of frictionless and continuous trading, the modeling setup we propose allows for an analytical expression for the probability of breaching the floor. Second, to bring the level of our analysis closer to wealth allocations in actual CPPI issuances, we employ discrete-time trading and we investigate the risk of a CPPI portfolio attributable to both price crashes and discrete-time readjustments. In such a discrete trading context, we estimate measures of the risk involved in the practical implementation of the CPPI strategy. For models calibrated to option prices, we find that the impact of contagion on the fair "gap risk fee" is of smaller magnitude. However, for risk measures corresponding to the tail of the loss distribution we find that neglecting to take contagion into account will significantly underestimate those measures for the CPPI when the rebalancing is done at a frequency lower than weekly. Moreover, we add illiquidity to the model and impose a cap on the daily trading in the risky asset. We find that for illiquid assets the impact of contagion on tail risk measures is significant even in the case of daily rebalancing, since the CPPI might lose additional value after the floor is breached if the risky exposure cannot be sold off in one block. Finally, we compare the Constant-Mix trading strategy to the CPPI in terms of risk measures and find that, for low levels of rebalancing and in distressed market conditions, the risk measures of the CPPI are higher than those of the Constant-Mix strategy, despite the fact that the former strategy has a capital guarantee built in.

The talk is based on a joint work with Alice Buccioli.


Contributed talk: Tuesday, 16:30, Room 9

Otto Konstandatos (University of Technology Sydney, Australia)

An Analytical Approach for Reset Employee Share Option Valuation Incorporating Voluntary Early Exercise and Involuntary Attrition Adjustment using Survival Functions

Employee share options (ESOs) are a common performance-based method to reward employees and represent major items of corporate liability. They are particularly common for senior executives. The IFRS2 and AASB2 financial reporting standards require public corporations to include the cost of these liabilities in their financial statements. Professional bodies such as the US Society of Actuaries and the Australian Institute of Actuaries provide explicit guidance notes advising on the applicable standards in the actuarial valuation of ESOs. Typically an ESO has a vesting period where voluntary exercise is not allowed, followed by an exercise window where the employee may voluntarily exercise their option provided the employee has survived in employment beyond the vesting period. Following a substantial stock price decline before vesting, an ESO may be deeply out-of-the-money. ‘Reset’ (or ‘reload’) ESOs typically allow the option to be cancelled and re-issued with a lower exercise price and/or later maturity, to re-incentivise the employee. In this work we produce a novel analytical valuation formula to evaluate a Reset ESO structure allowing for the simultaneous resetting of vesting period, exercise window, reset level and maturity. Extending the analysis of the simple ESO structure studied in Kyng et al (2016), and modelling voluntary early exercise using the Hull and White (2004) exercise-multiple characterisation, we decompose the reload ESO into a combination of non-standard sequential barrier options with different barriers during the vesting and exercise periods. We apply the non-standard Method of Images (Buchen (2001)) and utilise several new lemmas to express our results as portfolios of standardised European binary power option instruments. Typically death, disability or ill health retirement will lead to forfeiture before vesting or to involuntary early exercise of the contingent employee benefit after vesting. We incorporate survival analysis in our valuation through the use of empirically determined survival functions from Life Insurance Tables. We construct portfolios of our analytical result weighted using the empirically determined survival functions and present numerical results for survival adjustment illustrating the effect of several different levels of disability on the reset ESO valuation. In accordance with IFRS2 and AASB2 reporting requirements and professional actuarial guidance notes, our modelling approach allows us to express the survival-adjusted original ESO price and reset component separately. Our valuation formulae are relatively simple and readily implementable by Actuarial practitioners.


Contributed talk: Wednesday, 09:50, Room 7

Dimitrios G. Konstantinides (University of the Aegean, Greece)

Tail behavior of randomly weighted infinite sums

Let $X_1,X_2,\ldots$ be a sequence of pairwise asymptotically independent and real-valued random variables, and $\Theta_1,\Theta_2,\ldots$ be another sequence of nonnegative and nondegenerate at zero r.v.s., independent of $X_1,X_2,\ldots$. In this paper, we investigate the tail behavior of randomly weighted infinite sums under the assumption that $S_\infty^+=\sum_{n=1}^\infty\Theta_nX_n^+$ (where $X_n^+=\max\{X_n,0\}$) has a consistently varying tail, as well as some moment conditions on $\Theta_1,\Theta_2,\ldots$. Our result is more easily verifiable than some existing ones, which impose restrictive conditions on $X_1,X_2,\ldots$.

The talk is based on a joint work with Yang Yang and KaiYong Wang.


Contributed talk: Monday, 16:55, Room 3

Daniel Kostner (Free University of Bozen, Italy)

Cone distribution functions and quantiles for multivariate random variables

Set-valued quantiles for multivariate distributions with respect to a general convex cone are introduced which are based on a family of (univariate) distribution functions rather than on the joint distribution function. It is shown that these quantiles enjoy basically all the properties of univariate quantile functions. Relationships to families of univariate quantile functions and to depth functions are discussed. Finally, a corresponding Value at Risk for multivariate random variables as well as stochastic orders are introduced via the set-valued approach.

The talk is based on a joint work with Andreas Hamel.


Contributed talk: Tuesday, 16:05, Room 6

Paul Krühner (TU Wien, Austria)

On suboptimal control and application to an insurance problem

The typical approach to optimal stochastic control is to guess the optimal control or the structure of the value function and use a verification theorem to show its optimality. However, in many problems it is impossible to find or guess the optimal control. A different approach is to simply pick a control and to somehow measure its performance. We find a way to measure the performance deficit of a feedback strategy relative to the optimal control without knowledge of the optimal control. Our findings are applied to the following optimal reinsurance problem. We consider a company whose surplus is modelled by a diffusion approximation with random drift and volatility. This company may choose to reinsure part or all of its business and seeks to maximise its total capital at a later point in time. In this setup we simply consider constant reinsurance strategies and compare them to the unknown optimal strategy.


Contributed talk: Tuesday, 16:55, Room 9

CANCELLED: Timothy James Kyng (Macquarie University, Australia)

Numerical Experiments and Hybrid Methods for Valuation of Multi-Asset Multi-Period Executive Share Options

Executive stock options (ESOs) are equity instruments that may be granted to senior employees as part of their compensation. There are two significant dates in the life of an ESO, the vesting date and the maturity date. Death or exit from employment before the vesting date results in the cancellation of the ESO. Typically ESOs are granted to the employee conditionally subject to a set of hurdles that must be met on the vesting date and contingent on survival in employment till that time. After that the ESO usually has American style exercise rights up to the maturity date. Exercise may be triggered by financial considerations or automatically in the event of death or ill health of the employee. Accordingly these ESOs are a hybrid of an insurance contract and an option contract. Accounting standards such as IFRS2 and AASB2 require public companies to report the costs of providing these ESOs. The ESO valuations are often performed by Actuaries. The Society of Actuaries in the USA and the Institute of Actuaries in Australia both provide guidance notes / professional standards for their members relating to ESO valuation. There are many different possible designs for the structure of an ESO. Consequently, there is no analytic formula that applies generally and analytic or numerical valuations must be developed on an ad hoc basis. This paper adopts the Hull White exercise multiple approach to modelling voluntary early exercise behaviour of ESO holders and incorporates attrition induced involuntary early exercise for an ESO with a 3 asset hurdle which reflects both absolute and relative performance of the company’s stock. The paper explores the use of hybrid numerical methods for ESO valuation using Monte Carlo simulation combined with either analytic formulae or binomial lattice methods via a series of numerical experiments and utilising various methods for improving the convergence of these numerical methods. We provide analytic formulae for the version of the ESO that does not allow for early exercise, derived using the Skipper Buchen result. These formulae allow us to test the hybrid numerical methods for convergence and accuracy. This paper extends the work of Kyng, Konstandatos and Bienek (2016).

The talk is based on a joint work with Otto Konstandatos and Fabian Gatzka.


Contributed talk: Monday, 16:05, Room 3

Zinoviy Landsman (University of Haifa, Israel)

Multivariate Tail-based Measures for Systemic Risk

The growing complexity and globalization of financial services have reinforced the interconnection between financial institutions. While this interdependence may have promoted efficiency and economic growth, it has also increased the chance of systemic risk, the kind that brought the world economy to the brink of a dangerous collapse. Since interdependencies can be modeled by multivariate structures which, by definition, describe the potential impact of one institute on another, the proposed research is designed to focus on measures quantifying the tail behavior of multivariate distributions in a framework of a variety of dependence structures known for their relevance in modeling heavy losses.

In the talk we propose two types of novel risk measures:
1. Measures which extend the existing common univariate-based measures of Tail Conditional Expectation. These measures focus on quantile-based thresholds of severe risks and the mean of such risks (the so-called MTCE, recently proposed in Landsman et al. (2016); see the sketch after this list), allowing the conditioning on distress in other institutions.
2. Measures which aim at quantifying the dispersion of multivariate risk from MTCE-type risk measures. Such measures quantify the simultaneous tail dependence of several financial institutions.
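Schematically, and as we read the cited reference (so the exact conventions may differ), for a risk vector $X=(X_1,\ldots,X_n)$ and level $q\in(0,1)$,

\[
\mathrm{MTCE}_q(X) \;=\; \mathbb{E}\!\left[\,X \,\middle|\, X > \mathrm{VaR}_q(X)\right],
\qquad
\mathrm{VaR}_q(X)=\big(\mathrm{VaR}_q(X_1),\ldots,\mathrm{VaR}_q(X_n)\big),
\]

with the inequality understood componentwise, so that the conditioning is on all institutions being simultaneously in distress.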

The proposed multivariate risk measures will be illustrated with different dependence structures and real data.

References:
Landsman, Z., Makov, U., & Shushi, T. (2016). Multivariate tail conditional expectation for elliptical distributions. Insurance: Mathematics and Economics, 70, 216-223.

The talk is based on a joint work with Udi Makov and T. Shushi.


Contributed talk: Tuesday, 14:20, Room 4

Yohann Le Faou (UPMC, France)

Random Forest for Regression of a Censored Variable

In the insurance broker market, commissions received by brokers are closely related to the surrender of the insurance contracts. In order to optimize a commercial process, a scoring of prospects should therefore take this surrender component into account. We propose a weighted Random Forest model to predict the surrender factor, which is part of the scoring. Our model handles censoring of the observations, a classical issue when working on surrender mechanisms. Through careful studies of real and simulated data, we compare our approach with other standard methods which apply in our setting. We show that our approach is competitive in terms of quadratic error for the given problem.
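One standard way to build such weights, shown here purely as an illustration and not necessarily the paper's construction, is inverse-probability-of-censoring weighting (IPCW) with a Kaplan-Meier estimate of the censoring distribution, after which a random forest is fitted with the resulting sample weights (Python/scikit-learn, simulated data):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                               # hypothetical covariates
true_t = np.exp(1.0 + 0.5 * X[:, 0] + 0.3 * rng.standard_normal(n))
cens = rng.exponential(scale=np.exp(1.5), size=n)         # independent censoring times
t = np.minimum(true_t, cens)
delta = (true_t <= cens).astype(float)                    # 1 = event observed, 0 = censored

def km_censoring_survival(t, delta):
    # Kaplan-Meier estimate of the censoring survival function, evaluated just before each t_i
    order = np.argsort(t)
    d_s = delta[order]
    at_risk = np.arange(len(t), 0, -1)
    factors = 1.0 - (d_s == 0) / at_risk                  # censoring "events" are delta == 0
    G_sorted = np.cumprod(factors)
    G = np.empty(len(t))
    G[order] = np.concatenate(([1.0], G_sorted[:-1]))     # left-continuous version
    return G

w = delta / np.maximum(km_censoring_survival(t, delta), 1e-3)   # IPCW weights (0 for censored rows)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X, t, sample_weight=w)                             # weighted random forest on the observed times
print(rf.predict(X[:5]))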

The talk is based on a joint work with Arnaud Cohen, Guillaume Gerber, Olivier Lopez and Michael Trupin.


Contributed talk: Tuesday, 10:40, Room 1

Hangsuck Lee (Sungkyunkwan University, Republic of Korea (South Korea))

Crediting Strategy for an Optimal Universal Life Contract

The determination of the crediting rate in universal life products is an important decision-making problem faced by insurers when they apportion the investment return between policyholders and themselves. Since it affects the policies in force and new contract sales, this paper focuses on finding an optimal crediting rate such that the utilities of both parties can be maximized. Under the restriction that the insurer's expected utility is maintained above a certain level, we attempt to achieve the optimal solution that maximizes the policyholder's utility. For this purpose, we employ the model of Holmström (2016 Nobel laureate) for the principal-agent problem. This paper adopts the policyholder and the insurer as the principal and the agent, respectively, and regards a share of the investment return as an incentive that will be given to the agent. As a result, we find that the optimal portion of the crediting rate in the investment return is inversely proportional to the excess return over the risk-free rate of interest. In other words, the optimal portion of the insurer's share is proportional to the excess return. An empirical study based on the insurer's investment return, the crediting rate, and the excess return verifies that our theoretical finding is statistically significant.

The talk is based on a joint work with Hyungsuk Choi and Bangwon Ko.


Contributed talk: Tuesday, 11:05, Room 5

Jin-Ping Lee (Feng Chia University, Taiwan)

Hedging and Valuation of Longevity Swap with Counterparty Risk

Longevity swaps are the most popular instruments for life insurers to transfer their longevity risk to the capital market. Longevity swaps are typically bilateral contracts, and their values are determined not only by the reference longevity rates but also by the financial positions of the contracting parties. This paper develops a Merton-type structural model with stochastic interest rates to examine how default risk, asset risk, the size of the swap, and the size of both contracting parties affect the valuation and the hedging effectiveness of the longevity swap. For index-based longevity contracts, we further look into how basis risk affects the valuation and the hedging effectiveness. We set up the dynamics of assets and liabilities for both contracting parties in a multi-period environment with stochastic interest rates and specify the payoffs of longevity swaps. We then compute the spreads of longevity swaps in a risk-neutral pricing framework via Monte Carlo simulation. Our results show how the spreads and hedging effectiveness of the longevity swap change with default risk, the size of the swap, basis risk, interest rate risk, and the relative size of the contracting parties.

The talk is based on a joint work with Chao Lee and Min-Teh Yu.


Contributed talk: Monday, 16:55, Room 4

Simon C.K. Lee (The University of Hong Kong, Hong Kong S.A.R. (China))

Delta Boosting Machine with Application to General Insurance

In this paper, we introduce Delta Boosting(DB) as a new member of the boosting family. Similar to the popular Gradient Boosting (GB), this new member is presented as a forward stagewise additive model that attempts to reduce the loss at each iteration by sequentially fitting a simple base learner to complement the running predictions. Instead of relying on the negative gradient, as is the case for GB, DB adopts a new measure called delta as the basis. Delta is defined as the loss minimizer at an observation level.

We also show that DB is the optimal boosting member for a wide range of loss functions. The optimality is a consequence of DB solving for the split and adjustment simultaneously to maximize loss reduction at each iteration. In addition, we introduce an asymptotic version of DB that works well for all twice-differentiable strictly convex loss functions. This asymptotic behavior does not depend on the number of observations, but rather on a high number of iterations which can be augmented through common regularization techniques. We show that the basis in the asymptotic extension only differs from the basis in GB by a multiple of the second derivative of the log-likelihood. The multiple is considered to be a correction factor, one that corrects the bias towards the observations with high second derivatives in GB. When negative log-likelihood is used as the loss function, this correction can be interpreted as a credibility adjustment for the process variance.
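Schematically, and in our own notation rather than the paper's, the contrast can be summarised as follows: with $g_i=\partial L(y_i,F(x_i))/\partial F(x_i)$ and $h_i=\partial^2 L(y_i,F(x_i))/\partial F(x_i)^2$ for a twice-differentiable loss $L$,

\[
\text{GB fits the base learner to } -g_i,
\qquad
\text{whereas the second-derivative-corrected basis is } -\frac{g_i}{h_i},
\]

i.e. a Newton-type step that downweights observations with large curvature $h_i$, which is the correction factor referred to above.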

Simulation studies and the real-data applications conducted in this paper suggest that DB is a significant improvement over GB. The performance of the asymptotic version is less dramatic, but the improvement is still compelling. Like GB, DB provides high transparency to users, and we can review the marginal influence of variables through relative importance charts and partial dependence plots. We can also assess the overall model performance by evaluating the losses, lifts and double lifts on the holdout sample.

The talk is based on a joint work with Sheldon X. Lin.


Contributed talk: Monday, 14:20, Room 8

Wing Yan Lee (Hang Seng Management College, Hong Kong S.A.R. (China))

Systemic Weather Risk and Agricultural Insurance Pricing

Systemic weather risk has been cited as one of the major reasons for crop insurance market failure, since it induces spatial correlation in agricultural losses and usually causes huge damage to the insurance portfolio. The objective of this paper is to detect outliers and structural changes in crop yields that are associated with systemic weather risk and to incorporate them into agricultural insurance design and pricing. In doing so, a novel methodology is proposed that is able to capture the data trend and detect trend shifts, as well as identify the outliers of the data. Specifically, we propose a parameter-free envelope-based decomposition algorithm, which is not sensitive to noise, to capture the general trend of a time series. In order to identify the outliers of the data, we apply the Minimum Covariance Determinant (MCD) estimator, which has been proved theoretically and empirically to be effective in outlier identification in an extremely noisy environment. Lastly, the trend shift is detected by taking into consideration the slope change of the data. Empirical analysis indicates that integrating systemic risk factors would improve the agricultural insurance design and pricing framework, and ultimately increase the efficiency of the agricultural insurance program.
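For readers unfamiliar with the MCD estimator, the following scikit-learn sketch (simulated two-dimensional data with a few planted outliers; not the paper's data or tuning) flags observations whose robust Mahalanobis distance exceeds a chi-square cutoff:

import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # hypothetical detrended yield/weather features
X[:5] += 6.0                                  # a few planted outliers

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                       # squared robust Mahalanobis distances
outliers = d2 > chi2.ppf(0.975, df=X.shape[1])
print(np.where(outliers)[0])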

The talk is based on a joint work with Wenjun Zhu, Siu Kai Choy, Shu Yan Lam and Kwok Wai Yu.


Contributed talk: Tuesday, 11:30, Room 2

Yung-Tsung Lee (National Chiayi University/Taiwan, Taiwan)

On the Valuation of Reverse Mortgages with Surrender Options

This study investigates the cost of reverse mortgage (RM) insurance by incorporating the borrower’s surrender option. Like traditional mortgages, RM borrowers can repay the loan early. Under the Home Equity Conversion Mortgage (HECM) program, a RM borrower would not terminate the contract in a sluggish housing market, since there is a non-recourse clause. Conversely, the motivation for early repayment increases when the housing price appreciates, as the borrower can then enjoy the benefits exclusively. Prior studies consider the termination of a RM contract through exogenous decrements such as the death of the borrower. This study tries to fill this gap in the literature by incorporating the borrower’s surrender behavior and examines the effect on the cost and risk profile of RM insurance.

Accordingly, besides the death of the borrower, we consider the early repayment of a RM contract. The termination of a RM loan is thus determined by two factors: surrender and mortality. Since an early repayment may depend on the evolution of the housing price, the problem is one of dynamic programming. A numerical approach will be employed to solve this problem due to the complexity of the structure of RMs. The cost and risk profile of RM insurance with and without considering the surrender option will be examined. We expect that the cost and tail risk will both increase, and this gives a pioneering understanding of how to manage the risk of RM insurance.

The talk is based on a joint work with Tianxiang Shi.


Contributed talk: Wednesday, 09:25, Room 6

Susanna Levantesi (Sapienza University of Rome, Italy)

Optimal Product Mix in Long Term Care Insurance

We investigate the application of natural hedging strategies for long term care insurers by diversifying both the longevity and disability risks affecting long term care annuities. We propose two approaches to natural hedging: one built on a multivariate duration, the other on the minimization of the Conditional Value-at-Risk of the unexpected loss. Both approaches are extended to long term care insurance using a multiple state framework. In order to represent the future evolution of mortality and disability transition probabilities, we use the stochastic model of Cairns et al. (2009) with cohort effect under parameter uncertainty. We calculate the optimal level of a product mix and measure the effectiveness provided by the interaction of stand-alone long term care, deferred annuity and whole-life insurance, also taking into account in the multivariate duration approach that the transition probabilities might change by the same amount or by different amounts. We compare the results obtained by the two approaches and find that the approach based on the Conditional Value-at-Risk minimization produces better hedging results with respect to the multivariate duration approach, especially when combining two lines of business.


Contributed talk: Monday, 11:05, Room 7

Ghislain Leveille (Université Laval, Canada)

Compound trend renewal and Cox processes with discounted claims

Leveille et al. (2010) obtained exact analytical formulas for the Laplace transforms (LT) and distribution functions of several compound renewal sums with discounted claims. More recently, the same authors looked further into this subject by providing other important examples of these distribution functions and by presenting a truncated series solution method of the differential equation involving this LT in order to obtain an accurate approximation of the distribution function of our aggregate risk process, when inversion is too complex.

In this talk, we will extend the previous works to compound trend renewal sums with discounted claims, where the renewal and the non-homogeneous Poisson processes are two important cases of the trend renewal counting process. In parallel, we will also examine the inversion problem of the characteristic function of the compound Cox process with stochastic discounted claims. Several examples and applications will be presented.
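In generic notation, the object of interest is the aggregate discounted claim amount

\[
Z(t)=\sum_{k=1}^{N(t)} e^{-\delta T_k}\,X_k ,
\]

where $N$ is the claim counting process (here a trend renewal or Cox process), $T_k$ its arrival times, $X_k$ the claim sizes and $\delta$ the force of interest; with a stochastic interest rate the factor $e^{-\delta T_k}$ is replaced by $e^{-\int_0^{T_k}\delta_s\,\mathrm{d}s}$.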

Keywords: Aggregate discounted claims; Compound renewal sums; Cox process; Distribution functions; Stochastic interest rate; Trend renewal process.

References:
Leveille, G., Garrido, J. and Wang, Y. F. (2010) Moment generating function for compound renewal sums with discounted claims. Scandinavian Actuarial Journal 3, 165-184.
Wang, Y. F., Garrido, J. and Leveille, G. (2016) The distribution of discounted compound PH-renewal processes. Methodology and Computing in Applied Probability. doi:10.1007/s11009-016-9531-6.
Leveille, G. and Hamel, E. (2017) Conditional, non-homogeneous and doubly stochastic compound Poisson processes with stochastic discounted claims. Methodology and Computing in Applied Probability. doi:10.1007/s11009-017-9555-6.


Contributed talk: Tuesday, 11:05, Room 2

CANCELLED: Jacques Levy Vehel (INRIA, France)

Causal relations between monetary policies and market behaviour

The effects of monetary policies, and most notably the celebrated Quantitative Easing (QE) measures taken recently by several central banks are the subject of intense debates as to their real impacts on markets.

In this work, we use the statistical theory of causality in the frame of a Structural Vector AutoRegression (SVAR) model to elucidate certain aspects of this problem on a firm theoretical and empirical basis. Our main contribution lies in the choice of the components of the vector used to build the SVAR model. The starting point is to recognise that, while it is true that QE is typically correlated with a decrease in market volatility, it also coincides with an increase in the probability of the occurrence of jumps. In other words, where QE has been applied, one observes simultaneously less "nervous" markets, but ones which are more prone to discontinuities.

In order to give a mathematical meaning to these considerations, we estimate, on daily observations of the SP500, both the instantaneous volatility and the local intensity of jumps. The latter quantity is defined as the stability exponent of the stable motion that is tangent to the record at hand at the considered time. This intensity ranges in the interval (0,2], where a larger value means a smaller intensity of jumps, the value 2 corresponding to no jumps at all. We first verify numerically that more jumps (that is, a smaller intensity) correlate strongly with less volatility: markets thus seem to trade more smoothness for more discontinuities.

We then incorporate both the instantaneous volatility and the local intensity of jumps of the SP500 into an SVAR model, along with several log series: the BofA Merrill Lynch US Broad Market Index, the BofA Merrill Lynch US Corporate Index, the Federal Reserve Short-term Funds Rates and the Federal Reserve Balances.
We consider two separate models, one from 1996 to 2008 and the second from 2009 to the present. The second period corresponds to the one in which QE has taken place.

We use two different methodologies to extract the causal structure of the data: the first relies on classical exclusion algorithms for Directed Acyclic Graphs and only makes the assumption that no cycles are present in the causal structure. The second assumes in addition that at most one of the series follows a Gaussian distribution, a fact that we verify numerically on our data. The acyclic assumption is often made in such studies, as it fits with standard economic wisdom.

Our main findings are as follows:

  • Pre-QE period: the causal analysis shows that Reserves and Rates influence the BofA indices, but not their volatility or jump intensity. Causal relations also exist between the indices and the SP500, both in terms of their values and in terms of their volatility/jump intensity.
  • QE period: Reserves have a causal impact on both the indices and the SP500, but a far larger effect on their volatility and jump intensity. Signs in the SVAR model indicate that an increase in the Reserves causes a drop in volatility and an increase in the jump intensity.

The net output of our analysis is that it is indeed the case that, compared to the pre-QE era, the massive increase in Reserves did have an effect on, e.g., the behaviour of the SP500. In particular, on the positive side, it has helped to reduce its volatility. However, this action is counterbalanced by the fact that the volatility reduction is accompanied by an increase in the jump intensity: in plain words, markets are less nervous on a day-to-day basis, but are more prone to sudden movements now and then, and this modification may, at least in part, be attributable to QE.

We believe that our analysis is the first to highlight such fine effects of QE on markets and that it can help assess and guide monetary policies.

The talk is based on a joint work with Philippe Desurmont, Anne Philippe and Marie-Anne Vibet.


Contributed talk: Tuesday, 11:55, Room 8

Danping Li (University of Waterloo, Canada)

Optimality of Excess-of-Loss Reinsurance under a Mean-Variance Criterion

In this paper, we study an insurer's reinsurance-investment problem for a general reinsurance form under a mean-variance criterion. We show that excess-of-loss is the equilibrium reinsurance strategy under a spectrally negative Lévy insurance model when the reinsurance premium is computed according to the expected value premium principle. Furthermore, we obtain the explicit equilibrium reinsurance-investment strategy by solving the extended Hamilton-Jacobi-Bellman equation. Finally, we compare our result with some related problems, such as the optimal strategy maximizing the expected exponential utility of terminal wealth and the pre-commitment strategy for the mean-variance problem.

The talk is based on a joint work with Dongchen Li and Virginia R. Young.


Contributed talk: Wednesday, 09:50, Room 2

Han Li (University of New South Wales, Australia)

Modeling multi-state health transitions in China: A generalized linear model with time trends

Population ageing has reached a new dimension in China. Therefore, there is an increasing need to understand and analyze ill-health transitions among the Chinese elderly. In this paper, we propose a generalized linear model to estimate the transition rates of functional disability. We use individual-level panel data from the Chinese Longitudinal Healthy Longevity Survey (CLHLS) for the period 1989-2012. We make a formal comparison between male and female residents, for both rural and urban areas. Based on the results from the proposed model, we also predict the demand for aged care services in China.
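As a rough sketch of the type of model described (not the authors' specification), the following fits a Poisson GLM for transition counts with an exposure offset and a linear time trend on hypothetical panel data; all variable names and numbers are invented and do not come from the CLHLS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical panel of transition counts by age band, sex, region and year.
rng = np.random.default_rng(1)
grid = pd.MultiIndex.from_product(
    [range(65, 100, 5), ["male", "female"], ["rural", "urban"], range(2000, 2013)],
    names=["age", "sex", "region", "year"]).to_frame(index=False)
grid["exposure"] = rng.uniform(200, 2000, len(grid))
true_rate = 0.02 * np.exp(0.08 * (grid["age"] - 65)) * np.where(grid["sex"] == "male", 1.2, 1.0)
grid["transitions"] = rng.poisson(true_rate * grid["exposure"])

# Poisson GLM with log link: log transition rate = f(age, sex, region) + time trend.
model = smf.glm(
    "transitions ~ age + C(sex) + C(region) + year",
    data=grid, family=sm.families.Poisson(),
    offset=np.log(grid["exposure"])).fit()
print(model.summary())
```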

The talk is based on a joint work with Katja Hanewald and Adam Shao.


Contributed talk: Tuesday, 16:55, Room 6

Shu Li (University of Illinois at Urbana-Champaign, United States of America)

Analysis of the omega-killed Markov additive process

In this talk, we consider the Markov Additive Process (MAP) with the omega-killed feature, where the killing (or bankruptcy) rate is level-dependent. Such a distinction between ruin and bankruptcy was first proposed in Albrecher, Gerber and Shiu (2011), where they solve for the optimal constant dividend barrier. As a generalization of Li and Palmowski (2016), we extend the results of the two-sided exit problem, potential measure, and occupation time in red to the MAP. The optimality of the dividend strategy will be further discussed.

The talk is based on a joint work with Irmina Czarna, Adam Kaszubowski and Zbigniew Palmowski.


Contributed talk: Wednesday, 10:15, Room 2

Zhongfei Li (Sun Yat-sen University, China)

Pre-Commitment and Equilibrium Strategies for DC Pension Fund with Regime Switching and a Return of Premiums Clause

We study an optimal investment problem for defined-contribution (DC) pension plans during the accumulation phase. During this phase, a pension member contributes a predetermined amount of money as premiums and the manager of the pension fund invests the premiums in a financial market to increase the value of the accumulation. To protect the rights of pension members who die before retirement, we introduce a return of premiums clause under which a member who dies can withdraw any premiums she has contributed. We assume that the financial market consists of one risk-free asset and multiple risky assets, that the returns of the risky assets depend on the market states, that the evolution of the market states is described by a Markov chain, and that the transition matrices are time-varying. The pension fund manager aims to maximize the expected terminal wealth of every surviving member at retirement and to minimize the risk measured by the variance of his/her terminal wealth, which are two conflicting objectives. We formulate the investment problem as a discrete-time mean-variance model. Since the model is time-inconsistent, we seek its pre-commitment and equilibrium strategies. Using the embedding technique and the dynamic programming method, we obtain the pre-commitment strategy and the corresponding efficient frontier in closed form. Applying game theory and the extended Bellman equation, we derive analytical expressions for the equilibrium strategy and the corresponding efficient frontier. Some interesting theoretical and numerical results are found for the two investment strategies, the two efficient frontiers, and the impact of regime switching and the return of premiums clause.

The talk is based on a joint work with Lihua Bian and Haixiang Yao.


Contributed talk: Tuesday, 11:55, Room 1

Tzuling Lin (National Chung Cheng University, Taiwan)

Hedging mortality/longevity risk for multiple years

In this article, we develop strategies for hedging mortality/longevity risks over multiple years for a life insurer/annuity provider and for a financial intermediary. Hedges over more than one year for life insurance and annuity products involve two uncertain factors: the mortality rates and the numbers of life insureds/annuity recipients. Treating both factors as random variables, we derive closed-form formulas for the optimal units of mortality-linked securities for hedging the mortality (longevity) risk of a life insurer (an annuity provider) over several years. Numerical illustrations show that purchasing the optimal units of mortality-linked securities can significantly hedge the downside risk of loss due to mortality (longevity) risk for the life insurer (annuity provider); for a financial intermediary, adopting an optimal weight for a portfolio of life and annuity business can reduce extreme losses from longevity risk but could slightly increase losses from mortality risk.

The talk is based on a joint work with Cary Chi-Liang Tsai.


Contributed talk: Monday, 15:40, Room 5

Mathias Lindholm (Stockholm University, Sweden)

On connections between some classical mortality laws and proportional frailty

We provide a simple frailty argument that produces the Gompertz-Makeham mortality law as the population hazard rate under the assumption of proportional frailty given a common exponential hazard rate. Further, based on a slight generalisation of the result for the Gompertz-Makeham law, the connections to Perks' and Beard's mortality laws are discussed. Moreover, we give conditions on the functional forms of the baseline hazard that yield proper frailty distributions, given that we want to retrieve a certain overall population hazard within the proportional frailty framework.
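For orientation, recall the standard proportional-frailty relation that underlies this kind of argument (our notation, not the authors' specific derivation): with baseline hazard h, cumulative baseline hazard H, and frailty Z with Laplace transform L_Z, the population hazard is

```latex
\mu_{\text{pop}}(x) \;=\; h(x)\,\mathbb{E}\!\left[Z \mid T > x\right]
\;=\; -\,h(x)\,\frac{L_Z'\big(H(x)\big)}{L_Z\big(H(x)\big)},
\qquad H(x) = \int_0^x h(s)\,\mathrm{d}s .
```

The Gompertz-Makeham form, population hazard a + b c^x, then corresponds to a particular choice of baseline hazard and frailty distribution, as discussed in the talk.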


Contributed talk: Wednesday, 11:35, Room 8

Filip Lindskog (Stockholm University, Sweden)

Insurance valuation: A computable multi-period cost-of-capital approach

We present an approach to market-consistent multi-period valuation of insurance liability cash flows based on a two-stage valuation procedure. First, a portfolio of traded financial instruments aimed at replicating the liability cash flow is fixed. Then the residual cash flow is managed by repeated one-period replication using only cash funds. The latter part takes capital requirements and costs into account, as well as the limited liability and risk averseness of capital providers. The cost-of-capital margin is the value of the residual cash flow. We set up a general framework for the cost-of-capital margin and relate it to dynamic risk measurement.

Moreover, we present explicit formulas and properties of the cost-of-capital margin under further assumptions on the model for the liability cash flow and on the conditional risk measures and utility functions. Finally, we highlight computational aspects of the cost-of-capital margin, and related quantities, in terms of an example from life insurance.

The talk is based on a joint work with Hampus Engsner and Mathias Lindholm.


Contributed talk: Wednesday, 09:50, Room 6

Fangda Liu (Central University of Finance and Economics, China)

Optimal Insurance Design in the Presence of Exclusion Clauses

The present work studies the design of an optimal insurance policy from the perspective of an insured, where the insurable loss is mutually exclusive from another loss that is denied in the insurance coverage. To reduce ex post moral hazard, we assume that both the insured and the insurer would pay more for a larger realization of the insurable loss. When the insurance premium principle preserves the convex order, we show that any admissible insurance contract is suboptimal to a two-layer insurance policy under the criterion of minimizing the insured's total risk exposure quantified by value at risk, tail value at risk or an expectile. The form of optimal insurance can be further simplified to be one-layer by imposing an additional weak condition on the premium principle. Finally, we use Wang's premium principle and the expected value premium principle to illustrate the applicability of our results, and find that optimal insurance solutions are affected not only by the size of the excluded loss but also by the risk measure chosen to quantify the insured's risk exposure.

The talk is based on a joint work with Yichun Chi.


Contributed talk: Monday, 16:05, Room 8

Haiyan Liu (University of Waterloo, Canada)

Pareto-optimal reinsurance arrangements under general model settings

In this paper, we study Pareto optimality of reinsurance arrangements under general model settings. We give necessary and sufficient conditions for a reinsurance contract to be Pareto-optimal and characterize all Pareto-optimal reinsurance contracts under more general model assumptions. We also obtain sufficient conditions that guarantee the existence of Pareto-optimal reinsurance contracts. When the losses of an insurer and a reinsurer are both measured by Tail-Value-at-Risk (TVaR) risk measures, we obtain the explicit forms of the Pareto-optimal reinsurance contracts under the expected value premium principle. For practical purposes, we use numerical examples to show how to determine the best Pareto-optimal reinsurance contracts among the available ones such that both the insurer's aim and the reinsurer's goal can be met.

The talk is based on a joint work with Jun Cai and Ruodu Wang.


Contributed talk: Tuesday, 16:30, Room 5

I-Chien Liu (National Taichung University of Science and Technology, Taiwan)

Cohort Mortality Model Innovations with Non-Gaussian Distributions

The paper aims to combine the mortality models proposed by Renshaw and Haberman (2006) and Mitchell et al. (2013). The mortality model of Renshaw and Haberman (2006) is a pioneer for the cohort effect. Mitchell et al. (2013) use a technique to revise the Lee-Carter model and show that their model outperforms others. We attempt to revise the model of Renshaw and Haberman (2006) using the same technique. In addition, we add five non-Gaussian distributions, namely Student's t-distribution, the jump diffusion distribution, the variance gamma distribution, the normal inverse Gaussian distribution and the generalized hyperbolic skew Student's t-distribution, to the error terms of our mortality model. The paper derives the formulae for the parameter estimation procedure. Using Newton-method iterations, we obtain the corresponding parameters of our model. We collect mortality data from the Human Mortality Database website and determine which distribution is best for the error terms of the mortality model based on the log-likelihood function, the Akaike information criterion, the Bayesian information criterion and three testing methods (Kolmogorov-Smirnov test, Anderson-Darling test, Cramér-von-Mises test). Finally, we examine the mortality projection ability of our model using percentiles of mean absolute percentage errors.
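As a small illustration of the distribution-comparison step described above, the sketch below fits two of the candidate error distributions by maximum likelihood on simulated placeholder residuals and compares log-likelihood, AIC and a Kolmogorov-Smirnov statistic; the remaining distributions, criteria and tests of the paper are handled analogously but not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder residuals standing in for the mortality-model error terms.
residuals = stats.t.rvs(df=5, scale=0.02, size=500, random_state=rng)

candidates = {
    "normal":    stats.norm,
    "student_t": stats.t,
}

for name, dist in candidates.items():
    params = dist.fit(residuals)                      # maximum likelihood fit
    loglik = np.sum(dist.logpdf(residuals, *params))
    aic = 2 * len(params) - 2 * loglik
    ks = stats.kstest(residuals, dist.cdf, args=params)
    print(f"{name:10s}  logL={loglik:8.2f}  AIC={aic:8.2f}  KS p-value={ks.pvalue:.3f}")
```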

Keywords: Non-Gaussian Distributions; Cohort Effect; Mortality Model.

References
Mitchell, D., Brockett, P., Mendoza-Arriaga, R., Muthuraman, K., 2013, Modeling and forecasting mortality rates, Insurance: Mathematics and Economics, 52, 275-285.
Renshaw, A. E., Haberman, S., 2006, A Cohort-Based Extension to the Lee-Carter Model for Mortality Reduction Factors, Insurance: Mathematics and Economics, 38, 556-570.

The talk is based on a joint work with Hong-Chih Huang and Chou-Wen Wang.


Contributed talk: Tuesday, 16:30, Room 7

Jiajun Liu (Xi'an Jiaotong-Liverpool University, China)

Precise Estimates for the ruin probability with Dependent Insurance and Financial risks

In this talk, we consider a discrete-time risk model with insurance and financial risks, where the insurance net loss within period i and the overall stochastic discount factor follow a certain dependence structure via the conditional tail probability. Under the assumption that the distribution of the insurance risk within one time period belongs to the subexponential class (hence, heavy-tailed) or to the class of convolution-equivalent distributions (hence, light-tailed), respectively, precise estimates for finite-time ruin probabilities are derived. Furthermore, as an extension, we also derive some precise formulas for the tail probability of finite-time ruin under the assumption that the distribution of the net insurance loss is Gamma-like tailed.


Contributed talk: Monday, 16:30, Room 8

Ambrose Lo (University of Iowa, United States of America)

Pareto-optimal reinsurance policies in the presence of individual rationality constraints

The notion of Pareto optimality is commonly employed to forge decisions that reconcile the conflicting interests of multiple agents with possibly different risk preferences. In the context of a one-period distortion-risk-measure-based reinsurance model, we characterize the set of Pareto-optimal reinsurance policies analytically and expeditiously. The resulting solutions not only cast light on the structure of the Pareto-optimal contracts, but also allow us to portray the trade-offs between the insurer and reinsurer geometrically. A strikingly simple graphical search of Pareto-optimal policies in the presence of the insurer's and reinsurer's individual rationality constraints is illustrated in the special cases of Value-at-Risk and Tail Value-at-Risk.


Contributed talk: Wednesday, 09:00, Room 7

Stéphane Loisel (ISFA, Univ. Lyon 1, France)

On some properties of Schur-constant vectors

In this talk we present some properties of Schur-constant vectors (including a Markov property) as well as connections to various fields and applications in actuarial science.

The talk is based on a joint work with Claude Lefevre and Sergey Utev.


Contributed talk: Tuesday, 15:40, Room 1

Yi Lu (Simon Fraser University, Canada)

Optimal Investment Strategies and Intergenerational Risk Sharing for Target Benefit Pension Plans

A stochastic model for a target benefit pension fund in continuous time is studied, where the plan member’s contributions are set in advance while the pension payments depend on the financial situation of the plan, implying risk sharing between different generations. The pension fund is invested in both a risk-free asset and a risky asset. In particular, stochastic salary rates and the correlation between the salary movement and the financial market fluctuation are considered. Using the stochastic optimal control approach, closed-form solutions are obtained for optimal investment strategies as well as optimal benefit payment adjustments, which minimize the combination of benefit risk (in terms of deviating from the target) and intergenerational transfers. Numerical analysis is presented to illustrate the sensitivity of the optimal strategies to parameters of the financial market and salary rates.

The talk is based on a joint work with Suxin Wang and Barbara Sanders.


Contributed talk: Wednesday, 09:50, Room 4

Anne MacKay (Université du Québec à Montréal, Canada)

Analysis of VIX-linked fees for GMWBs via explicit solution simulation methods

We consider the VIX-linked fee presented in Cui et al. (2017) in the context of variable annuity contracts with guaranteed minimum withdrawal benefits (GMWB). Our goal is to assess the effectiveness of the new fee structure in decreasing the sensitivity of the insurer's liability to volatility risk. Since the GMWB is highly path-dependent, it is particularly sensitive to volatility risk, and can also be challenging to price, especially in the presence of the VIX-linked fee. In this paper, following Kouritzin (2016), we present an explicit weak solution for the value of the VA account and use it in Monte Carlo simulations to value the GMWB guarantees. Numerical examples are provided to assess the impact of the VIX-linked fee on the sensitivity of the liability to volatility risk.

The talk is based on a joint work with Michael A. Kouritzin.


Contributed talk: Monday, 15:40, Room 3

Melina Mailhot (Concordia University, Canada)

Robust Multivariate Risk Measure

In this presentation, we introduce a new multivariate risk measure, the multivariate Range Value-at-Risk. This risk measure exhibits several desirable properties, such as translation invariance, positive homogeneity and monotonicity. We show that it can be formulated in a way that allows robustness to be established. We will compare this new risk measure with its univariate counterpart and with other common multivariate risk measures, such as the multivariate VaR and TVaR. The differences between this risk measure and other multivariate frameworks will be highlighted. We will present several numerical examples and illustrations of the results.


Contributed talk: Tuesday, 16:05, Room 1

Hong Mao (Shanghai Second Polytechnic University, China)

Optimal Contribution and Investment in A Defined Benefit Pension Plan When The Return Rate of Risky Assets Is Time Series Correlated and Cyclical Change

This work considers stochastic models of defined benefit pension plans. Stochastic salary growth rates and stochastic mortality are allowed in evaluating the pension plan.

Borrowing money at the risk-free interest rate is allowed. Importantly, based on Momon (2004), we extend the Vasicek model to multi-dimensional cases and use it to model the return rates of the invested risky assets and the wage growth rate with time-series correlation and cyclical changes. We apply time-inconsistent dynamic programming and establish an objective function that minimizes the sum of the volatilities of the contribution and of the return rate of the investment portfolio, in order to determine the dynamically optimal contribution rate and investment strategy.

The talk is based on a joint work with Zhongkai Wen.


Contributed talk: Monday, 16:05, Room 2

Etienne Marceau (Université Laval, Canada)

Aggregation and Risk Measurement of Exchangeable Risks, assuming Dependence Uncertainty

In this paper, we consider the computation of risk measures, such as the VaR and the TVaR, for a portfolio of n dependent losses, assuming that the marginal distributions of the loss random variables are known but that the dependence structure is only partially known. We consider a portfolio of exchangeable loss random variables for which the dependence relationship is defined through a common factor random variable (rv). We suppose the distribution of the common factor rv to be unknown, while its first moments, such as the mean, the variance, and the skewness, are assumed to be known. Based on the link between the joint distribution of the vector of n losses and the moments of the common factor rv, we propose an approach to derive upper and lower bounds on risk measures of the aggregate losses of the portfolio. Briefly, using stochastic ordering arguments, it is possible first to derive distributional lower and upper bounds on the distribution of the common factor rv. Then, we obtain lower and upper bounds on risk measures, such as the VaR and the TVaR, of the aggregate losses of the portfolio. For example, assuming the probability of occurrence of a default and the covariance between the occurrences of two defaults in a portfolio of credit risks to be known, we are able to find the smallest and the largest value of the VaR and the TVaR of the aggregate losses of the portfolio. Numerical examples are provided to illustrate our proposed approach.

The talk is based on a joint work with Hélène Cossette, Itre Mtalai and Areski Cousin.


Contributed talk: Monday, 11:05, Room 1

Agnieszka Marciniuk (Wrocław University of Economics, Poland)

Marriage reverse annuity contract and dread disease insurance as a one product

The aim of this presentation is to propose a new combined financial and insurance product which consists of a marriage reverse annuity contract and dread disease insurance for a husband and for a wife. A marriage reverse annuity contract is a financial product offered to elderly spouses. The owners receive annuity-due benefits in return for the transfer of ownership to the company (mortgage fund), and they retain the right to live in the property until their death. Dread disease (or critical illness) insurance (offered to individuals, separately for the wife and for the husband) provides the policyholder with a lump sum in case of a dread disease included in the set of diseases specified by the policy conditions, such as heart attack, cancer or stroke. The benefit is paid on diagnosis. We analyse a stand-alone cover, which means that the insurance policy ceases immediately after payment of the sum assured.

This product could be addressed to elderly spouses since it takes advantage of both the financial and the insurance product: it improves the living conditions of pensioners and provides additional funds in case of a critical illness. Moreover, the combined product reduces the maintenance expenses of three separate products (one company).

A multiple state model for the combined product is constructed and its probabilistic structure is estimated under the assumptions that the future lifetimes of the husband and the wife are independent random variables and that the benefit is paid until the death of the last surviving spouse (the last-survivor status). To make the calculations easier, matrix formulas are determined and applied for:

  • annual marriage reverse annuity paid when at least one of the spouses is alive,
  • dread disease lump sum benefit,
  • net single premium.

The idea of the new product is that the net premiums are covered by the customer's capital from the reverse annuity contract (a percentage of the value of the property). The amount b of the annual marriage reverse annuity is divided into three parts: bβ, the amount of annual marriage reverse annuity paid when at least one of the spouses is alive, and a period premium for the dread disease insurance (the same amount for the wife and for the husband), where β is the reverse annuity parameter; the value b(1−β)/2 is the net premium for the wife and for the husband. The dread disease lump sum benefits for the wife and the husband are calculated separately.

All numerical analyses are carried out for spouses aged between 65 and 85, using our own interfaces written in MATLAB. It is assumed that the reverse annuity parameter equals 99% and that the dread disease cover concerns cancer risk, using lung cancer as an example. The fixed long-term interest rate of 5.79% was estimated from real Polish market data on the yield to maturity of fixed-interest bonds and Treasury bills from 2008, using the Nelson-Siegel model. The interest-rate data were taken from 2008 because the hospital patients' data are also from that year.

The talk is based on a joint work with Beata Zmyslona and Joanna Debicka.


Contributed talk: Monday, 14:20, Room 7

Tatiana D. Margulies (Universidad de Buenos Aires, Argentine Republic)

Actuarial Implications of Peer-To-Peer (P2P) Insurance

Peer-to-peer (P2P) insurance is essentially a risk sharing arrangement to cover losses among similar individuals within a collective or group. At the heart of P2P insurance is the application of new technologies to the approach used at the beginning of modern insurance, when insurance was obtained through membership in fraternal orders, guilds, and/or friendly societies. Under P2P insurance, a group of individuals ("peers") typically pool their resources in order to insure similar goods or services. These individuals determine which risks associated with their goods are covered, the premiums to be charged, and the events that constitute a claim. Compared to traditional insurance, P2P insurance is informal in nature and technologically based. The main vehicle for the propagation of P2P insurance is social networking, and the hope or expectation is that the cohesion of the social network reduces moral hazard, thus resulting in fewer false claims for losses. It is anticipated that P2P insurance will use technology to simplify the insurance process and eliminate many of the intermediaries between the insured and the insurer, thus reducing the costs associated with the procurement of policies and the settlement of claims.

P2P insurance is generally viewed as having the potential to radically change and "disrupt" the insurance landscape if P2P insurance becomes the dominant form of risk transfer among low-risk segments of the population-at-large. Those who share this view expect that P2P insurance can be as disruptive to insurance companies as Uber is to taxis, Airbnb is to hotels, Netflix is to cinemas, and Amazon and Alibaba are to retailing. The objective of this paper is to explore the strengths, weaknesses, and limitations of P2P insurance. We are especially interested in determining if P2P insurance is an actuarially viable form of insurance, especially for large risks, or if it is just a passing trend that will fade away in the near future.

The talk is based on a joint work with Colin Ramsay and Victor Oguledo.


Contributed talk: Wednesday, 11:10, Room 5

Lorenzo Mercuri (University of Milan, Italy)

Stochastic mortality models: some extensions based on Lévy CARMA models

The force of mortality is defined using the exponential function of Legendre polynomials, as in Renshaw et al. (1996), plus an extra term which captures mortality shocks. The approach of Ahmadi et al. (2015) is extended, as we substitute the Ornstein-Uhlenbeck process with a Lévy-driven Continuous Autoregressive Moving Average (CARMA) model. To estimate the model coefficients, i.e. the explanatory variables and the Lévy process, the Generalized Linear Model is used and the log-likelihood function is maximized. Fitting results on US male life tables are presented.

The talk is based on a joint work with Asmerilda Hitaj and Edit Rroji.


Contributed talk: Tuesday, 13:55, Room 4

Tatjana Miljkovic (Miami University, United States of America)

A Cautionary Note for Risk Management Purposes on Application of Finite Mixture Modeling of Insurance Losses

An adequate assessment of risk measures is critical to the pricing of individual contracts as well as to determining solvency capital levels. We use finite mixture models for model fitting. We propose a new approach in risk management that evaluates models based on model selection criteria in concert with tail risk measures. With this, we enable a risk manager to consider trade-offs between the best model selected using common criteria such as the negative log-likelihood (NLL), the Akaike Information Criterion (AIC), or the Bayesian Information Criterion (BIC) and frequently used risk measures such as value-at-risk (VaR) and conditional tail expectation (CTE). For illustration purposes, we develop a new grid map which considers the model selection criteria and the risk measures jointly.

Motivated by the recent popularity of finite mixture models in the area of loss modeling, we consider the application of the proposed approach to finite mixture modeling of left-truncated losses. This is an extension of the work done on finite mixture modeling of insurance losses using non-Gaussian distributions. Here, we develop a new finite mixture model based on Gamma, Lognormal, and Weibull distributions. The EM algorithm is utilized to find the optimum number of mixture components. In addition to fitting mixture models with components from the same parametric family, we also consider finite mixture models based on any combination of Gamma, Lognormal, and Weibull distributions. The EM algorithm is initialized using the Euclidean distance-based stochastic initialization strategy known as emEM in computational statistics. In addition to performing a simulation study, we also illustrate our proposed approach on two real data sets.
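The sketch below illustrates the general workflow on invented data: a lognormal mixture is fitted (as a Gaussian mixture on log-losses, via scikit-learn's EM implementation) for several component counts, and the AIC is reported next to an empirical VaR and CTE simulated from the fitted model. It is not the authors' Gamma/Lognormal/Weibull mixture, their emEM initialization, or their treatment of left truncation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Placeholder losses: mixture of "small claims" and "large claims" lognormals.
losses = np.concatenate([rng.lognormal(7.0, 0.8, 4000), rng.lognormal(10.0, 1.2, 400)])
x = np.log(losses).reshape(-1, 1)   # lognormal mixture == Gaussian mixture on the log scale

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(x)
    sample = np.exp(gm.sample(200_000)[0].ravel())       # losses simulated from the fitted model
    var99 = np.quantile(sample, 0.99)
    cte99 = sample[sample >= var99].mean()
    print(f"k={k}: AIC={gm.aic(x):10.1f}  VaR99={var99:12.0f}  CTE99={cte99:12.0f}")
```

Placing these triples on a grid of selection criterion versus tail risk measure mimics, in spirit, the kind of trade-off map described above.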

The talk is based on a joint work with Martin Blostein and Petar Jevtic.


Contributed talk: Tuesday, 13:30, Room 9

Manuel Morales (University of Montreal, Canada)

On an Agent-based Simulator Model for the Limit-Order-Book and its Applications to Measuring Price Impact

In this paper we discuss an agent-based simulator model for financial market prices through Limit-Order-Book (LOB) dynamics. We study its implementation as a tool to measure the price impact function of meta-orders. Based on the existing literature on agent-based models, we build an event-driven simulation model for the order flow dynamics in a limit order book. The proposed model retains much of the sought-after simplicity while still reproducing key stylized features of the LOB. Moreover, the event-driven procedure allows for an efficient dialogue with trading algorithms, making it suitable for further exploring the universality of the so-called square-root law of price impact. In order to carry out such a study, we calibrate our model to LOB data from the Toronto Stock Exchange and use this simulator to empirically study the price impact function. Our conclusions point to a simulator engine capable of replicating price impact functions that are compatible with the findings reported in the literature. We argue that a potential application of such a calibrated model could be that of a benchmark testing environment for intra-day trading strategies seeking to minimize their market impact imprint.
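As a deliberately minimal illustration of what an event-driven order-book simulation looks like, the toy sketch below lets limit orders, cancellations and market orders arrive as random events and tracks the best quotes; the event probabilities, tick size and initial price are arbitrary placeholders, and it does not implement the authors' calibrated Toronto Stock Exchange model or any price-impact measurement.

```python
import random
from collections import defaultdict

random.seed(0)
bids, asks = defaultdict(int), defaultdict(int)   # price -> resting volume
mid0, tick = 100.0, 0.01

def best_bid(): return max(bids) if bids else mid0 - tick
def best_ask(): return min(asks) if asks else mid0 + tick

for _ in range(50_000):                           # one order-book event per iteration
    u = random.random()
    if u < 0.45:                                  # limit order placed near the touch
        if random.random() < 0.5:
            bids[round(best_ask() - tick * random.randint(1, 5), 2)] += 1
        else:
            asks[round(best_bid() + tick * random.randint(1, 5), 2)] += 1
    elif u < 0.80:                                # cancellation of a random resting order
        book = bids if random.random() < 0.5 else asks
        if book:
            p = random.choice(list(book))
            book[p] -= 1
            if book[p] == 0:
                del book[p]
    else:                                         # market order hits the opposite best quote
        book = asks if random.random() < 0.5 else bids
        if book:
            p = min(book) if book is asks else max(book)
            book[p] -= 1
            if book[p] == 0:
                del book[p]

print("best bid / best ask after 50k events:", best_bid(), best_ask())
```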

The talk is based on a joint work with Nazim Regnard.


Contributed talk: Tuesday, 16:30, Room 1

Hélène Morsomme (Université catholique de Louvain, Belgium)

Stochastic optimal control of public pension schemes

Population ageing undermines the current social security pension system. In this context, new pay-as-you-go pension systems are considered in order to maintain its sustainability. Solidarity between generations can result in risk sharing between the pensioners and the contributors. In classical pension design, there are essentially two kinds of pension schemes: Defined Contribution (DC) and Defined Benefit (DB) plans. Alternative pension plans based on a mix between DC and DB are considered.

Currently, automatic balance mechanisms are studied with discrete dynamic programming. In order to generalize this approach in continuous time, we apply the stochastic optimal control theory in the Brownian environment and we optimize a quadratic loss function based on the processes of the replacement rate and the contribution rate. As a result, we propose an optimal risk sharing between DC and DB.

The talk is based on a joint work with Pierre Devolder.


Contributed talk: Tuesday, 10:40, Room 6

Nora Muler (Universidad Torcuato di Tella, Argentine Republic)

Optimal multi-band policies in the problem of optimal cash management for compound Poisson processes with two-sided jumps

In this presentation, we deal with a continuous-time model of cash management where the changes of the money stock between controls are described by a compound Poisson process with two-sided jumps and a negative drift; this drift corresponds to the daily money demand, and the negative and positive jumps correspond to outflows or inflows that occur in a very short time at random epochs. The money manager continuously monitors the cash flow and at any time he/she can increase or decrease the amount of cash to prevent excess and shortage. The involved cost has two components: the opportunity cost of holding the money and the adjustment costs of decreasing and increasing the amount of cash; we consider here that both adjustment costs have a positive fixed component, and we include the possibility of a proportional component as well. The goal is to find the optimal policy in order to minimize the total expected value of the discounted cost. Moreover, we require that cash should be injected whenever the money stock becomes negative. Note that the fixed cost component leads to an impulse control problem. This optimal cash management problem goes back to the classical paper of Miller and Orr (1966), where the uncontrolled cash flow follows a stationary random walk.

We show that the value function of the cost-minimization problem is a viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation and can be characterized as the largest viscosity supersolution; we also give a verification result. We derive the structure of the optimal impulse strategy, which could (in principle) have more than one band. We study the regularity of the optimal value function and see that a jump in the derivatives may occur at the trigger points. Finally, we show that, unlike the Brownian motion case, there are examples in which the optimal strategy has more than one band. We also give examples where the value function is neither smooth nor convex, even when the compound Poisson process has exponentially distributed jump sizes.

The talk is based on a joint work with Pablo Azcue.


Contributed talk: Monday, 13:55, Room 3

Alfred Müller (Universität Siegen, Germany)

On consistency of expectiles and the Omega ratio with stochastic dominance rules

In this talk the relationship between the risk measures called expectiles and the performance measures known as Omega ratios is explained. For both of them consistency with respect to stochastic dominance rules is investigated. In particular a new stochastic order based on expectiles is introduced that turns out to have some unexpected properties. Conditions are given under which expectiles and Omega ratios are consistent with classical first and second order stochastic dominance and with respect to the recently introduced fractional stochastic dominance between first and second order.
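For reference, the standard definitions involved (our notation): the τ-expectile of X and the Omega ratio of X at threshold θ are

```latex
e_\tau(X) = \operatorname*{arg\,min}_{m \in \mathbb{R}}
  \mathbb{E}\!\left[\tau\,\big((X-m)_+\big)^2 + (1-\tau)\,\big((m-X)_+\big)^2\right],
\qquad
\Omega_\theta(X) = \frac{\mathbb{E}\big[(X-\theta)_+\big]}{\mathbb{E}\big[(\theta-X)_+\big]} .
```

The first-order condition for the expectile, τ E[(X−m)_+] = (1−τ) E[(m−X)_+], shows that the Omega ratio evaluated at the threshold θ = e_τ(X) equals (1−τ)/τ, which is one standard link between the two notions.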

The talk is based on a joint work with Bernhard Klar.


Contributed talk: Monday, 14:20, Room 3

Cosimo Munari (University of Zurich, Switzerland)

Comonotonic risk measures in a world without risk-free assets

We study comonotonicity of risk measures in the context of capital adequacy in terms of the primitives of the theory of risk measures: acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. These findings severely question the assumption of comonotonicity in a world of "discounted" capital positions and seem to call for a renewed discussion about the meaning and the role of comonotonicity within a capital adequacy framework.

The talk is based on a joint work with Pablo Koch-Medina and Gregor Svindland.


Contributed talk: Tuesday, 16:55, Room 1

Poontavika Naka (University of Liverpool, United Kingdom)

Annuitisation Divisors for Notional Defined Contribution (NDC) Pension Schemes

A non-financial pension scheme, also known as notional defined contribution accounts (NDCs), combines a traditional defined contribution (DC) rule for calculating the initial pension with pay-as-you-go (PAYG) financing. Under this scheme, participants pay a fixed contribution rate on earnings into their account throughout their working life, and their contributions are credited with a notional interest rate, generally linked to wage or GDP growth rather than to a return on financial assets. At retirement, the value of the accumulated contributions is converted into a life annuity by a so-called annuity divisor, usually determined by the average (unisex) life expectancy, the indexation of pensions and the expected rate of return.
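As a stylized illustration of this conversion (our notation; the divisor actually used in an NDC scheme may differ in detail), the annuity divisor at retirement age x_r and the initial pension can be written as

```latex
AD_{x_r} \;=\; \sum_{t \ge 0} {}_{t}p_{x_r}\left(\frac{1+\lambda}{1+g}\right)^{\!t},
\qquad
\text{initial pension} \;=\; \frac{\text{notional account balance at retirement}}{AD_{x_r}},
```

where ${}_{t}p_{x_r}$ denotes the (unisex) survival probabilities, λ the pension indexation rate and g the expected rate of return.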

The use of a uniform annuity divisor introduces an intra-generational redistribution from short-lived towards long-lived individuals; this entails a transfer of wealth from males (who generally live shorter lives) to females (who live longer), and from low-educated persons (who live shorter lives) to highly educated persons (who live longer).

In this study, we aim to quantify the lifetime redistribution of a generic NDC pension system by calculating the ratio between the present value of expected pension benefits and the present value of the contributions paid during the participant's working life. This measure enables us to assess the expected money's worth of participation in the pension system. Also, building differential mortality tables by level of education and gender, we compute different annuity divisors to assess redistribution among socioeconomic groups.

The talk is based on a joint work with Carmen Boado-Penas.


Contributed talk: Monday, 16:05, Room 4

Anastasia Novokreshchenova (University of Turin, Italy)

Estimation of the price of risk in the Heston model

In this paper we study the problem of calibrating and modelling the market price of risk in the context of stochastic volatility models. The Heston (1993) model has been widely used for equity option pricing, which is done under a risk-neutral measure. In the context of Solvency II, internal models are required to produce both risk-neutral and real-world simulations, in particular for the calculation of the Solvency Capital Requirement (SCR). Under the Heston model it is possible to define a price of risk in such a way that the state variable follows a square-root process under both the objective probability measure and an equivalent martingale measure. This is a convenient framework in terms of modelling. The problem of calibrating the price of risk is crucial in this setting. We propose a calibration procedure based on the work of Ait-Sahalia and Kimmel (2007). Their methodology involves approximating the unknown likelihood function and identifying the unobserved volatility state variable by inverting option prices under the Heston model. Using joint observations of the Eurostoxx50 index and its options over the last 17 years, we study the stability of the obtained market-price-of-risk parameters and the coherence of the solution with respect to risk-neutral valuation. Using Monte Carlo simulations we compute a yearly VaR, which is crucial for the SCR. To make the valuation more realistic, we incorporate a stochastic interest rate in the model.
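As a rough illustration of the real-world simulation step only (not of the likelihood-based calibration itself), the following sketch simulates the Heston model with a full-truncation Euler scheme and reads off an empirical one-year VaR; all parameter values are placeholders rather than calibrated estimates, and the stochastic interest rate extension is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder real-world Heston parameters (not calibrated values).
mu, kappa, theta, xi, rho = 0.06, 2.0, 0.04, 0.3, -0.7
s0, v0, T, n_steps, n_paths = 100.0, 0.04, 1.0, 252, 100_000
dt = T / n_steps

s = np.full(n_paths, s0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                      # full-truncation Euler scheme
    s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

ret = s / s0 - 1.0
var_995 = -np.quantile(ret, 0.005)                  # one-year 99.5% VaR of the index return
print(f"1y 99.5% VaR of the index return: {var_995:.2%}")
```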

The talk is based on a joint work with Celine Azizieh.


Contributed talk: Monday, 13:55, Room 4

Endar H. Nugrahani (Bogor Agricultural University, Indonesia)

Assessment on Financial Performance of Indonesian Insurance Companies

Good financial performance of an insurance industry is important because it heavily influences public confidence in insurance companies. The more credible an insurer's financial capability in managing its customers, the more interested prospective customers are in purchasing its insurance products. Therefore, it is necessary to assess the performance of such companies. This paper presents an analysis of the financial performance of insurance companies in Indonesia. Since the published financial reports of life insurance companies in Indonesia are usually presented in the form of ratio values, the proposed models are logit and probit regressions with binary dependent variables. Moreover, some predictions of the future financial performance of the insurance companies under consideration are also given. A sensitivity analysis shows that the variables which significantly affect a company's financial performance are the gross premium and equity, as well as claims, business, and commissions.
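A minimal sketch of the kind of binary-response regression described, on hypothetical ratio data (the covariate names and coefficients are invented, not the paper's variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical financial ratios for n insurers and a binary "sound performance" flag.
rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "gross_premium_ratio": rng.normal(1.0, 0.3, n),
    "equity_ratio":        rng.normal(0.5, 0.2, n),
    "claims_ratio":        rng.normal(0.6, 0.2, n),
})
lin = 1.5 * df["equity_ratio"] - 2.0 * df["claims_ratio"] + 0.5 * df["gross_premium_ratio"]
df["sound"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["gross_premium_ratio", "equity_ratio", "claims_ratio"]])
logit_res = sm.Logit(df["sound"], X).fit(disp=False)
probit_res = sm.Probit(df["sound"], X).fit(disp=False)
print(logit_res.summary())
print("Probit vs logit AIC:", probit_res.aic, logit_res.aic)
```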

The talk is based on a joint work with Hadi Sumarno.


Contributed talk: Tuesday, 14:20, Room 9

Ramin Okhrati (University of Southampton, United Kingdom)

Hedging of defaultable securities under delayed data

We investigate a hedging problem for certain defaultable securities assuming that there is a delay in receiving data (i.e. lagged data). From a financial point of view, this indicates that traders are not up to date and do not have full access to the accounting data. In our analysis, different levels of information are distinguished, including full market, company management, and investor information. We apply filtration expansion theory and compensator techniques to obtain semi-explicit solutions for locally risk-minimizing hedging strategies from the investors' perspective. The results are presented in terms of solutions of partial differential equations.


Contributed talk: Monday, 11:05, Room 6

Iqbal Owadally (Cass Business School, City, University of London, United Kingdom)

Optimal Investment for Retirement with Deferred Annuities

We construct an optimal investment portfolio model with deferred annuities for an individual investor saving for retirement. The objective function consists of power utility in terms of secured retirement income increments from the deferred annuity purchases, as well as bequest from the remaining wealth invested in equity, bond, and cash funds. The asset universe is governed by a vector autoregressive model incorporating the Nelson-Siegel term structure and accumulated equity returns. We use multi-stage stochastic programming to solve the optimization problem numerically. In the previous literature, the power utility has been linearized, whereas we use powerful new non-linear solvers directly. Our numerical results show that the availability of deferred annuity purchases significantly changes the portfolio of investors saving for retirement.

The talk is based on a joint work with Chul Jang and Andrew Clare.


Contributed talk: Monday, 11:55, Room 6

Mustafa Asım Ozalp (Hacettepe University, Turkey)

Optimal Investment and Insurance Policy for an Insurer with Random Size Jump-Diffusion Risk Process

We obtain an optimal investment strategy and liability ratio for an insurer who hedges herself in a financial market composed of one riskless and one risky asset. The risky asset follows a geometric Brownian motion and the insurer's risk process is assumed to follow a jump-diffusion process. Since claim payments arrive in different sizes over time, we relax the assumption of a constant jump parameter for the claim sizes. The Hamilton-Jacobi-Bellman equation is then solved to obtain explicit solutions under different utility forms and under different probability distributions assigned to the claim magnitudes.

The talk is based on a joint work with Yeliz Yolcu Okur and Kasirga Yildirak.


Contributed talk: Tuesday, 11:55, Room 4

Silvana Manuela Pesenti (Cass Business School, United Kingdom)

Reverse Sensitivity Testing

Sensitivity and uncertainty analyses are important components of model building, interpretation and validation. We propose a model-independent framework for sensitivity analysis that reflects uncertainty in the input variables as well as sensitivity in the output. A model comprises a vector of random input factors and an aggregation function, mapping risk factors to a random output. A typical example is that of an internal model used by an insurer to calculate capital requirements. Our reverse sensitivity testing method works on a set of Monte-Carlo samples generated from a baseline model and proceeds in three stages. First, a stress on the model output distribution is specified, for example an increase in output VaR and/or Expected Shortfall. Second, a stressed model is identified as a re-weighting of the original Monte-Carlo sample. The stressed model has minimal relative entropy with respect to the baseline model, while giving rise to the required output stress. Third, information regarding the deviation of the risk factor distributions between the baseline and stressed model is combined with considerations of uncertainty around those distributions, to generate a ranking of input variables, reflecting the interplay of sensitivity and uncertainty analysis. The implementation is akin to importance sampling and is numerically efficient, circumventing the need for the computationally expensive repeated evaluations of the aggregation function that are common in standard sensitivity analyses. We illustrate our approach through a numerical example of a simple insurance portfolio.
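A minimal sketch of the re-weighting idea on a toy two-factor portfolio is given below. For simplicity the stress is an increase in the output mean, for which the minimum-relative-entropy re-weighting is an exponential tilt, rather than the VaR/Expected Shortfall stresses of the paper, and the "ranking" is reduced to comparing weighted and unweighted input means; the model and numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
n = 200_000
# Toy baseline model: output Y aggregates two risk factors.
x1 = rng.lognormal(0.0, 0.5, n)
x2 = rng.gamma(2.0, 1.0, n)
y = x1 + 0.5 * x2**2

target_mean = 1.10 * y.mean()          # stress: +10% on the output mean

def tilt_weights(theta):
    # Weights proportional to exp(theta * y); subtracting the max is only
    # for numerical stability and cancels after normalization.
    w = np.exp(theta * (y - y.max()))
    return w / w.sum()

def weighted_mean(theta):
    return np.sum(tilt_weights(theta) * y)

# The exponential tilt is the minimum relative-entropy re-weighting that
# achieves a prescribed mean; solve for the tilt parameter theta.
theta = brentq(lambda t: weighted_mean(t) - target_mean, 0.0, 1.0)
w = tilt_weights(theta)

for name, x in [("x1", x1), ("x2", x2)]:
    shift = np.sum(w * x) / x.mean() - 1.0
    print(f"{name}: relative shift of the mean under the stressed model = {shift:+.2%}")
```

The factor whose distribution moves most under the stressed weights is, loosely speaking, the one the output stress is most sensitive to.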

The talk is based on a joint work with Pietro Millossovich and Andreas Tsanakas.


Contributed talk: Monday, 13:30, Room 5

Michal Pesta (Charles University, Czech Republic)

Granular loss modeling with copulae

To meet all future claims arising from policies, it is requisite to quantify the outstanding loss liabilities. Loss reserving methods based on aggregated data from run-off triangles are predominantly used to calculate the claims reserves. Conventional reserving techniques have some disadvantages: loss of information on the policy and the claims' development due to aggregation; zero or negative cells in the triangle; a usually small number of observations in the triangle; only few observations for recent accident years; and sensitivity to the most recently paid claims.

To overcome these dilemmas, granular loss reserving methods for individual claim-by-claim data will be presented. Moreover, reserve estimation is a crucial part of the risk valuation process, which is now a front-burner issue in economics. Since there is a growing demand for the prediction of total reserves for different types of claims or even multiple lines of business, a copula framework for granular reserving will be established.

The talk is based on a joint work with Ostap Okhrin.


Contributed talk: Wednesday, 09:00, Room 6

Georg Pflug (University of Vienna, Austria)

Design of insurance contracts under ambiguity

We consider the problem of optimally designing an insurance contract with respect to costs (premium payments) and benefits (risk reduction). For an optimal design, the loss distribution has to be precisely specified. However, in many situations, the estimation of the loss distribution comes with an estimation error (which we call model ambiguity), which is usually ignored.

Under (nonparametric) model ambiguity the optimal design problem extends to a maximin problem, i.e. a saddlepoint problem. We give some properties of the saddlepoint solution and demonstrate this with some examples from CAT insurance.

This is joint work with Corina Birghila.


Contributed talk: Tuesday, 14:20, Room 5

Pierrick Piette (Université Claude Bernard Lyon 1 / Sinalys, France)

Mortality Rates Improvements Forecasting with High-Dimensional Vector-Autoregression Models

The mortality rate forecasting problem involves the analysis of high-dimensional time series, especially in multi-population modelling. Most usual mortality models decompose the mortality rates into several latent factors to reduce this complexity. These approaches, in particular those using cohort factors, have a good fit, but they are less reliable for forecasting purposes. One of the major challenges is to determine the spatial-temporal dependence structure between mortality rates given a relatively moderate sample size. This paper proposes a large vector autoregressive (VAR) model fitted on the differences of the log-mortality rates, ensuring the existence of long-run relationships between mortality rate improvements. Our contribution is threefold. First, sparsity in fitting the model is ensured by using high-dimensional variable selection techniques without imposing arbitrary constraints on the dependence structure. The main interest is that the structure of the model is directly driven by the data, in contrast to the main mortality forecasting models. Additionally, our estimation allows a one-step process, as we do not need to estimate hyper-parameters; the variance-covariance matrix of the residuals is then estimated through a parametric form. Second, our approach can be used to detect non-intuitive age dependence in the data, beyond the cohort effect captured by our model. Third, our approach is natural for modelling several populations from a long-run perspective. Finally, in an out-of-sample forecasting study for mortality rates, we obtain a significant performance increase compared to classical mortality models using French, US and UK data. We also show that our results shed light on the so-called cohort effect for these populations.
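A hedged sketch of the core estimation step: each age's mortality-improvement series is regressed on the lagged improvements of all ages with an L1 penalty, so the dependence structure is selected from the data. The data are simulated placeholders, and the sketch uses cross-validated lasso (scikit-learn's LassoCV) rather than the hyper-parameter-free selection, residual covariance modelling and multi-population features described in the talk.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n_years, n_ages, p_lag = 50, 40, 1           # placeholder dimensions

# Simulated log-mortality improvements (first differences of log rates), years x ages.
d_log_m = rng.normal(-0.02, 0.01, size=(n_years, n_ages))

Y = d_log_m[p_lag:]                           # responses: improvements at time t
X = d_log_m[:-p_lag]                          # predictors: improvements at time t-1

coefs = np.zeros((n_ages, n_ages))
for j in range(n_ages):                       # one sparse regression per age
    fit = LassoCV(cv=5).fit(X, Y[:, j])
    coefs[j] = fit.coef_

print("share of non-zero VAR(1) coefficients:", np.mean(coefs != 0).round(3))
```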

The talk is based on a joint work with Quentin Guibert and Olivier Lopez.


Contributed talk: Monday, 13:55, Room 5

Georgios Pitselis (University of Piraeus, Greece)

Quantile regression techniques with a working correlation model for credibility premium estimation

In this work we show how credibility techniques can be accommodated within the theory of quantile regression models for longitudinal data, combining the between- and within-subject estimating functions for parameter estimation. The model takes into account the variation and correlations of insurance contracts. The proposed method is robust to the error correlation structure, improves the efficiency of the parameter estimators, and is useful in actuarial applications including premium estimation and loss reserving.
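As a purely illustrative companion, the sketch below fits cross-sectional quantile regressions with statsmodels on invented data; the longitudinal between/within-subject estimating functions and the credibility weighting of the paper are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 1000
df = pd.DataFrame({"exposure_score": rng.uniform(0, 1, n)})
# Hypothetical claim experience with heteroscedastic noise.
df["loss_ratio"] = (0.6 + 0.3 * df["exposure_score"]
                    + rng.gumbel(0, 0.1 + 0.2 * df["exposure_score"], n))

for q in (0.5, 0.75, 0.9):
    res = smf.quantreg("loss_ratio ~ exposure_score", df).fit(q=q)
    print(f"tau={q}: intercept={res.params['Intercept']:.3f}, "
          f"slope={res.params['exposure_score']:.3f}")
```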


Contributed talk: Tuesday, 14:20, Room 7

Konstadinos Politis (University of Piraeus, Greece)

Some monotonicity properties for solutions of renewal equations

Many actuarial quantities of interest satisfy an equation of renewal-type. In risk theory, in particular, renewal equations have played traditionally an important role in developing new results. In addition, it is often insightful to study monotonicity properties for the function of interest, a typical example being (several special cases of) the Gerber-Shiu function. In the present paper, we discuss conditions under which the solution of a renewal equation is monotone and attempt to put several existing results in a unified framework.

The talk is based on a joint work with Vaios Dermitzakis.

Acknowledgement: This work has been partly supported by the University of Piraeus Research Center.


Contributed talk: Tuesday, 16:30, Room 8

Chi Seng Pun (Nanyang Technological University, Singapore)

Non-zero-sum Reinsurance Games subject to Ambiguous Correlations

This paper studies the economic implications of ambiguous correlation in a non-zero-sum game between two insurers. We establish the general framework of Nash equilibrium for the coupled optimization problems. For the constant absolute risk aversion (CARA) insurers, we show that the equilibrium reinsurance strategies admit closed-form solutions. Our results indicate that the ambiguous correlation leads to an increase in the equilibrium demand of reinsurance protection for both insurers. Numerical studies examine the effect on the quality of the correlation estimations.

The talk is based on a joint work with Chi Chung Siu and Hoi Ying Wong.


Contributed talk: Tuesday, 10:40, Room 7

Haoyu Qian (University of Liverpool, United Kingdom)

Ruin probability of the shot-noise Cox process

The Cox risk process with shot noise is a special case in modelling collective insurance risk, in which the average number of claims is a time-dependent parameter. In this paper, we first model the shot noise of a Cox process as a function which dynamically switches between good and bad events/states (referred to as a dynamic shot-noise type process in this paper). A Seal-type integro-differential equation is applied to generate explicit results for the probability of finite-time ruin under several claim distributions. We then derive an explicit solution for the classical shot-noise Cox process by applying the total probability theorem over an infinite time horizon.
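For orientation, here is a minimal simulation sketch of a claim-arrival process whose intensity is a decaying shot noise, using Ogata-style thinning; the parameter values are placeholders, and neither the good/bad state-switching version nor the Seal-type ruin computations of the talk are implemented.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 10.0                                # time horizon (years)
lam0, shot_rate, delta = 5.0, 2.0, 1.5  # baseline intensity, shot rate, decay (placeholders)

# 1. Simulate the shots (catastrophic events) driving the intensity.
n_shots = rng.poisson(shot_rate * T)
shot_times = np.sort(rng.uniform(0, T, n_shots))
shot_sizes = rng.exponential(3.0, n_shots)

def intensity(t):
    mask = shot_times <= t
    return lam0 + np.sum(shot_sizes[mask] * np.exp(-delta * (t - shot_times[mask])))

# 2. Ogata thinning: claims arrive as a Cox process with this intensity.
lam_bar = lam0 + shot_sizes.sum()       # valid upper bound (decay only lowers the intensity)
t, claims = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_bar)
    if t > T:
        break
    if rng.uniform() < intensity(t) / lam_bar:
        claims.append(t)

print(f"{len(claims)} claims over [0, {T}], average claim rate {len(claims) / T:.2f} per year")
```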


Contributed talk: Wednesday, 10:15, Room 8

François Quittard-Pinon (EMLYON Business School, France)

Risk Control of Variable Annuities with Ratchet: The GMAB Case

This paper suggests a unified methodology for the management of Guaranteed Minimum Accumulation Benefit (GMAB) contracts. Using a non-Gaussian setting in line with many of the stylized features observed in the market, we address the pricing, hedging and risk control of these contracts from an operational risk management perspective. Since the well-known and widely used delta-hedging ratio is not optimal, one of the most important problems raised is the issue of hedging. The literature suggests many theoretical solutions whose efficiency from a computational point of view is controversial and rarely studied. We propose two different forms of the hedging ratio, one of which allows a closed-form expression for Merton's model, while the other uses a numerical solution by FFT. In the empirical part of the paper, we give a simple rule for designing a hedging policy appropriate to the actual financial environment that proves useful both for insurers and for regulators.

The talk is based on a joint work with Abdou Kelani.


Contributed talk: Monday, 11:55, Room 1

Colin Ramsay (University of Nebraska-Lincoln, United States of America)

The Annuity Puzzle and an Outline of Its Solution

In his seminal paper, Yaari (1965) showed that, assuming actuarially fair annuity prices, uncertain lifetimes, and no bequest motives, utility-maximizing retirees should annuitize all of their wealth upon retirement. Nevertheless, the markets for individual life annuities in the U.S., the U.K., and several other developed countries have been small relative to other financial investment outlets competing for retirement savings. Researchers have found this situation puzzling, hence the so-called "annuity puzzle." There are many possible explanations for the annuity puzzle, including "rational" explanations such as adverse selection, bequest motives, and incomplete markets, and "behavioral" explanations such as mental accounting, cumulative prospect theory, and mortality salience. We will review the literature on the various plausible explanations given for the existence of the annuity puzzle and we will suggest a few of the ingredients needed for possible solutions.

Yaari, M. 1965. Uncertain Lifetime, Life Insurance, and Theory of the Consumer. Review of Economic Studies 32, 2: 137-150.

The talk is based on a joint work with Victor Oguledo.


Contributed talk: Tuesday, 13:55, Room 7

Lewis Ramsden (University of Liverpool, United Kingdom)

Ruin Probabilities Under Solvency II Constraints

Under Pillar 1 of the Solvency II (SII) directive, the Solvency Capital Requirement (SCR) should reflect a level of funds that enables insurance (and reinsurance) undertakings to absorb significant losses and give reasonable assurance to policyholders and beneficiaries. In more detail, insurance firms are required to guarantee that the SCR coverage ratio stays above a certain level with a large enough probability. Failure to remain above this level may trigger regulatory actions to ensure this obligation is fulfilled and the policyholders are protected against insolvency. In this paper, we generalise the classic Poisson risk model to comply with SII regulations (in the above sense). We derive an explicit expression for the `probability of insolvency' (which is different from the classic ruin probability) in terms of the classic ruin quantities, and establish a relationship between the probability of insolvency and the classic ruin measure. In addition, under the assumption of exponentially distributed claim sizes, we show that the probability of insolvency is simply a constant factor of the classic ruin function. Finally, in order to better capture reality, dividend payments to the company's shareholders are considered and an explicit expression for the probability of insolvency is derived under this modification.

Keywords: Solvency Capital Requirements, Minimum Capital Requirements, Solvency II, Insolvency Probabilities, Ruin Probabilities, Classic Risk Model.

The talk is based on a joint work with Apostolos Papaioannou.


Contributed talk: Monday, 16:55, Room 8

Jiandong Ren (University of Western Ontario, Canada)

On Pareto-Optimal Reinsurance With Constraints Under Distortion Risk Measures

This paper studies Pareto-optimal reinsurance policies, where both the insurer's and the reinsurer's risks and returns are considered. We assume that the risks of the insurer and the reinsurer, as well as the reinsurance premium, are determined by some distortion risk measures with different distortion operators. Under the constraint that a reinsurance policy is feasible only if the resulting risk of each party is below some pre-determined value, we derive explicit expressions for the optimal reinsurance policies. Methodologically, we show that the generalized Neyman-Pearson method, the Lagrange multiplier method, and dynamic control methods can be utilized to solve the optimization problem with constraints. Special cases when both parties' risks are measured by Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) are studied in great detail. Numerical examples are provided to illustrate the practical implications of the results.

The talk is based on a joint work with Wenjun Jiang and Hanping Hong.


Contributed talk: Monday, 15:00, Room 7

Jacques Resing (Eindhoven University of Technology, The Netherlands)

Some two-dimensional risk models with correlated net input rates

In this talk we study two two-dimensional risk models with correlated net input rates. The first is a model in which the net input rates of two companies depend on the state of the economy. In the model there are alternating periods during which the net input rates of both companies are positive and periods during which both net input rates are negative. The surplus processes of the companies in this model can be related to the surplus processes in a two-dimensional risk model with simultaneous arrivals; as a consequence, the joint ruin probabilities can also be obtained by using results from Badila, Boxma, Resing and Winands (2014) for the model with simultaneous arrivals.

The second is a model for two competitors. There are alternating periods during which the net input rate of the first company is positive and that of the second company is negative, and periods during which the net input rate of the first company is negative and that of the second company is positive. In this model, ruin probabilities can be found by studying hitting probabilities of a compound Poisson process with two, parallel or non-parallel, linear boundaries.

The talk is based on a joint work with Emil Johanneson.


Contributed talk: Monday, 11:30, Room 3

Emanuela Rosazza Gianin (University of Milano-Bicocca, Italy)

Time-consistency of risk measures: how strong is such a property?

Quite recently, great interest has been devoted to time-consistency of risk measures in its different formulations (see Delbaen [4], Foellmer and Penner [7], Bion-Nadal [2], Delbaen et al. [5], Laeven and Stadje [9], among many others). However, almost all of these papers address coherent or convex risk measures satisfying cash-invariance.

In the present work we study time-consistency for more general dynamic risk measures where either only cash-invariance or both cash-invariance and convexity are dropped. This analysis is motivated by the recent papers of El Karoui and Ravanelli [6] and Cerreia-Vioglio et al. [3], who discussed and weakened the axioms above by introducing cash-subadditivity and quasiconvexity. In particular, we investigate and discuss whether the notion of time-consistency is too restrictive when considered in the general framework of quasiconvex and cash-subadditive risk measures and, consequently, leads to a very special class of risk measures. Finally, we provide some conditions guaranteeing time-consistency in this more general framework.

References:
[1] Acciaio, B., Penner, I. (2011): Dynamic risk measures. Advanced Mathematical Methods for Finance. Springer Berlin Heidelberg. p. 1-34.
[2] Bion-Nadal, J. (2008): Dynamic risk measures: Time consistency and risk measures from BMO martingales. Finance and Stochastics 12, 219 -244.
[3] Cerreia-Vioglio, S., Maccheroni, F., Marinacci, M., Montrucchio, L. (2011): Risk measures: rationality and diversification, Mathematical Finance 21/4, 743-774.
[4] Delbaen, F. (2006): The structure of m-stable sets and in particular of the set of risk neutral measures. In: Memoriam Paul-André Meyer, Lecture Notes in Mathematics 1874, pp. 215-258.
[5] Delbaen, F., Peng, S., Rosazza Gianin, E. (2010): Representation of the penalty term of dynamic concave utilities. Finance and Stochastics 14/3, 449-472.
[6] El Karoui, N., Ravanelli, C. (2009): Cash sub-additive risk measures and interest rate ambiguity, Mathematical Finance 19, 561-590.
[7] Foellmer, H., Penner, I. (2006): Convex risk measures and the dynamics of their penalty functions. Statistics and Decisions 24/1, 61-96.
[8] Kloeppel, S., Schweizer, M. (2007): Dynamic utility indifference valuation via convex risk measures. Mathematical Finance 17/4, 599-627.
[9] Laeven, R. J. A., Stadje, M.A. (2014): Robust portfolio choice and indifference valuation, Mathematics of Operations Research 39, 1109-1141.

The talk is based on a joint work with Elisa Mastrogiacomo.


Contributed talk: Monday, 16:55, Room 5

Edit Rroji (Università degli Studi di Milano-Bicocca, Italy)

Dealing with mortality at highest age groups

Mortality modeling is a fast-growing topic in the actuarial science literature. Small datasets at the highest age groups increase uncertainty in the selection of a proper model for describing death patterns. The aim of the proposed analysis is to investigate empirically the enlargement of age intervals for reliable model fitting. Results are presented both for forecasting models and for approaches based on mortality laws. In this framework, to measure the impact of longevity risk on life annuity pricing, we generate bootstrapped samples. We simulate future scenarios for yearly probabilities of death and use them in life annuity pricing. The impact of parameter uncertainty is assessed by decomposing a relative risk index for an increasing number of annuitants in a portfolio.

The talk is based on a joint work with Ermanno Pitacco.


Contributed talk: Wednesday, 09:00, Room 1

Iegor Rudnytskyi (University of Lausanne, Switzerland)

Stochastic Programming for Asset Allocation in Pension Funds

Stochastic programming (SP), as an alternative to the common choice of Monte Carlo (MC) simulation methods, has been shown to be a powerful approach for the asset and liability management (ALM) of pension funds and life insurance companies. While MC searches for a "sufficiently good" solution, SP returns an approximation of the true optimal solution, which makes SP superior to MC in many contexts. However, SP requires significantly more effort (both in computational and mathematical terms) compared to the more easily treatable MC methods. The SP multistage problem formulation relies heavily on the concept of a scenario tree, which represents the evolution of stochastic parameters (e.g., future performance of the economy, demographic variables, etc.) that are typically described by time series models. Such scenario trees allow one to convert the SP problem to an approximate deterministic formulation, which typically stays within a linear framework. In our work, we list the scenario tree generation methods mostly used in practice and focus on the bracket-mean method. Then, we investigate the convergence towards the optimal solution with respect to the bushiness of the scenario tree. Furthermore, the sensitivity of the optimal solution to changes in the planning horizon and the target wealth is analyzed. For our ALM scenarios we fit a time series model to historical data. Finally, we compare the performance of the SP and MC approaches by means of a simple example, allowing us to provide practical insights and conclusions.

The talk is based on a joint work with Joël Wagner.
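
As background for the bracket-mean scenario construction mentioned above, here is a minimal Python sketch (not from the talk; parameter values are purely illustrative) that discretizes a one-period normal return shock into an equal-probability scenario fan by taking the conditional mean of each probability bracket, the basic building block of a scenario tree. A multistage tree would be obtained by applying the same discretization recursively at each node.

import numpy as np
from scipy.stats import norm

def bracket_means(mu, sigma, n_branches):
    """Equal-probability bracket-mean discretization of N(mu, sigma^2).

    Each of the n_branches brackets has probability 1/n_branches, and the
    node value is the conditional mean of the shock within that bracket.
    """
    p = np.linspace(0.0, 1.0, n_branches + 1)   # bracket boundaries in probability
    z = norm.ppf(p)                             # boundaries in z-space (+-inf at the ends)
    # E[Z | a < Z < b] = (phi(a) - phi(b)) / (Phi(b) - Phi(a)) for a standard normal
    cond_mean_z = (norm.pdf(z[:-1]) - norm.pdf(z[1:])) * n_branches
    return mu + sigma * cond_mean_z

# one-period fan of yearly log-returns (illustrative parameters)
nodes = bracket_means(mu=0.04, sigma=0.12, n_branches=5)
print(nodes)          # five equally likely return scenarios
print(nodes.mean())   # equals mu by construction (telescoping sum)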


Contributed talk: Tuesday, 10:40, Room 8

Wolfgang J. Runggaldier (University of Padova, Italy)

Reinsurance and investment in the financial market in view of minimizing expected capital injections

We consider an insurance model with reinsurance and investment in the financial market and with the possibility of capital injections to prevent ruin. The objective is to choose reinsurance and investment to minimize expected total capital injections. The reserve evolves as a piecewise deterministic process with random discontinuities triggered either by the arrival of a claim or by a change in the prices of the assets in which the company invests. We propose a value iteration type approach and present a convergent approximation methodology to actually compute a solution. As a by-product of the approximations we obtain an endogenous lower bound on the reserve, below which the company should be considered ruined in spite of the possibility of capital injections.

The talk is based on a joint work with Michele Antonello and Luca Cipani.


Contributed talk: Tuesday, 11:05, Room 3

Emilio Russo (University of Calabria, Italy)

Fair valuation of participating policies embedding a minimum guaranteed bonus rate and a surrender option in a stochastic interest rate framework

Participating policies are innovative life insurance products that are gaining popularity in financial and insurance markets because they may combine financial and demographic risks, and provide benefits linked to the company asset returns. Indeed, interest is credited periodically, generally at each contract anniversary, to the policy according to some bonus distribution rules. In general, such policies are also equipped with a minimum interest rate guarantee and with a surrender option, having the feature of an American put option, which allows the holder to sell the policy back to the insurance company before its maturity in exchange for the payment of a cash surrender value. Hence, numerical methods are required to evaluate participating policies due to the unknown distribution of the optimal surrender time.

Under a constant interest rate framework, some models to evaluate participating policies with a minimum interest guarantee have been proposed, among others, by Bacinello (2001), Chu and Kwok (2006), and Bauer et al. (2006), while, for instance, Grosen and Jorgensen (2000) and Bacinello (2003) propose evaluation models for participating policies embedding both a minimum guaranteed bonus rate and a surrender option. In practice, however, such insurance instruments are generally long-term contracts for which it is more appropriate to consider stochastic dynamics for the interest rate than to keep it fixed for the entire policy lifetime. In this sense, a first attempt has been made by Zaglauer and Bauer (2008), who incorporate stochastic interest rate dynamics in the Bauer et al. (2006) method to evaluate participating contracts not embedding a surrender option.

The proposal moves in this direction in that it is suitable for evaluating participating policies embedding not only a minimum guaranteed bonus rate but also a surrender option in a stochastic interest rate framework. The model is flexible in that it may accommodate the different specifications for the stochastic interest rate widely used in finance, which are directly discretized by means of a recombining lattice approximation. Taking into account the stochastic rate, a similar lattice method is used to discretize the company asset dynamics. Then, the two lattices are combined in order to establish a bivariate tree where the participating policy may be evaluated by discounting the policy payoff over the lattice branches, allowing early exercise at each contract anniversary to model the surrender decision.


Contributed talk: Tuesday, 11:55, Room 2

Aliki Sagianou (University of the Aegean, Greece)

Multiple component stochastic mortality modeling and assessment

Due to the continuous increase in longevity, the number of pension plans and the compensations paid are likely to push pension funds and life insurance companies off-budget. In order to maintain the balance of an organization's reserve funds and control the inherent risks, the Solvency II Directive introduces a uniform system of calculating capital requirements with the aim of ensuring organizations' solvency and risk management ability. An essential step in this direction is to model mortality rates using stochastic mortality models and to predict the future trend of mortality accurately. Such a prediction can assist the determination of the capital requirements of an insurance company in a more reliable way.

To this end, the main goal of this work is to provide a comprehensive comparison among various stochastic mortality models, analyze their strengths and weaknesses, and determine the most appropriate model for predicting mortality rates. The comparison engages five mortality models. More specifically, we compare the Lee-Carter [1], Renshaw-Haberman [2], APC [3], Plat [4] and Hatzopoulos-Haberman (HH) [5,6] models using mortality data from several countries in order to evaluate their efficiency and eventually single out the one that fits and projects mortality rates most precisely. It has to be stated that in this work we also introduce some significant improvements to the HH model. The latter adopts a different methodology for parameter estimation, as Generalized Linear Models (GLMs) are utilized to graduate the crude mortality rates. Then, sparse principal component analysis (SPCA) [7] is applied to the table of the GLM parameter estimates. The process of estimating the incorporated parameters of SPCA is driven by both quantitative and qualitative approaches.

In our experiments, we evaluate the efficiency of the models using data from Greece, France and England & Wales, for calendar years 1961-2013, individual ages 0-84, and each gender. In order to analyze the models from diverse perspectives and determine the most effective among them, our application incorporates several kinds of criteria: the Bayesian Information Criterion (BIC), the Akaike Information Criterion (AIC) and error measures such as the Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). We also consider the Unexplained Variance (UV) for each added component as another criterion. Moreover, we use a backtesting framework with a 10-step-ahead forecast for each model, and we compare the out-of-sample forecasted mortality rates to the actual ones in order to quantify the effectiveness of each model. The forecast process uses dynamic linear regression (DLR) and ARIMA models.

To sum up, based on the criteria mentioned above, we analyze which of the mortality models fits the mortality rates best in order to obtain the best possible prediction and therefore the appropriate basis for computing the capital requirements. Our experimental results show that, in order to capture the shape of the mortality trend in various age ranges, mortality models should incorporate more than one period term. This is because the behavior of the mortality trend varies depending on the examined age range, and mortality models that include more terms are able to fit and predict mortality rates more accurately.

Keywords: Cohort; Mortality forecasting; Generalized Linear Models; Sparse Principal Component analysis; Factor analysis; Dynamic Linear Regression; Bootstrap confidence intervals.

References
[1] R. D. Lee, L. R. Carter, Modeling and Forecasting U.S. Mortality, Journal of the American Statistical Association 87 (419) (1992) 659 - 671. doi: 10.1080/01621459.1992.10475265.
[2] Renshaw, S. Haberman, A cohort-based extension to the Lee-Carter model for mortality reduction factors, Insurance: Mathematics and Economics 38 (3) (2006) 556 - 570. doi: 10.1016/j.insmatheco.2005.12.001.
[3] D. Currie, M. Durban, P. H. Eilers, Smoothing and forecasting mortality rates, Statistical Modelling 4 (4) (2004) 279 - 298. doi: 10.1191/1471082X04st080oa.
[4] R. Plat, On Stochastic Mortality Modeling, Insurance: Mathematics and Economics 45 (3) (2009) 393 - 404. doi: 10.1016/j.insmatheco.2009.08.006.
[5] P. Hatzopoulos, S. Haberman, A parameterized approach to modeling and forecasting mortality, Insurance: Mathematics and Economics 44 (1) (2009) 103 - 123. doi: 10.1016/j.insmatheco.2008.10.008.
[6] P. Hatzopoulos, S. Haberman, A dynamic parameterization modeling for the age-period-cohort mortality, Insurance: Mathematics and Economics 49 (2) (2011) 155 - 174. doi: 10.1016/j.insmatheco.2011.02.007.
[7] J. R. G. Lansangan, E. B. Barrios, Principal components analysis of nonstationary time series data, Statistics and Computing 19 (2) (2008) 173. doi: 10.1007/s11222-008-9082-y.

The talk is based on a joint work with Petros Hatzopoulos.
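
For context on the Lee-Carter baseline [1] used in the comparison above, the following is a minimal SVD-based fitting sketch in Python on synthetic data (not the authors' code; the HH model's GLM/SPCA machinery is considerably more involved).

import numpy as np

rng = np.random.default_rng(0)

# synthetic log central death rates, ages x years (purely illustrative)
ages, years = np.arange(0, 85), np.arange(1961, 2014)
true_ax = -9.0 + 0.09 * ages
true_bx = np.full(ages.size, 1.0 / ages.size)
true_kt = np.linspace(20.0, -20.0, years.size)
log_m = true_ax[:, None] + np.outer(true_bx, true_kt) + rng.normal(0, 0.02, (ages.size, years.size))

# Lee-Carter: log m(x,t) = a_x + b_x k_t, fitted by SVD of the centered matrix
a_x = log_m.mean(axis=1)                      # age effect = row average
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x, k_t = U[:, 0], s[0] * Vt[0]

# usual identification constraints: sum(b_x) = 1, sum(k_t) = 0
k_t = k_t * b_x.sum()
b_x = b_x / b_x.sum()
a_x = a_x + b_x * k_t.mean()                  # re-absorb the mean of k_t into a_x
k_t = k_t - k_t.mean()

fitted = a_x[:, None] + np.outer(b_x, k_t)
print("RMSE of fit:", np.sqrt(np.mean((fitted - log_m) ** 2)))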


Contributed talk: Monday, 11:55, Room 2

Ryota Saito (Waseda University, Japan)

Information criteria for bivariate compound Poisson risk models with dependent claims

When constructing multivariate insurance risk models, it is important to consider the dependence structure between the components. The Levy copula, introduced in Cont and Tankov (2004), is a useful tool for introducing dependence between two risk processes driven by Levy processes. However, the choice of Levy copula is critical in practice since it completely determines the dependence structure. In this paper, we deal with the model selection problem for bivariate compound Poisson risk models whose jumps are dependent via a Levy copula. In such a context, maximum likelihood values are often used as a criterion to choose a better model, e.g., Avanzi et al. (2011, ASTIN Bulletin), where the model with the higher likelihood value is preferred. However, as is well known, such a "maximum likelihood criterion" sometimes chooses a model that is unsuitable for predicting future observations. In this paper, we give some AIC-type information criteria for the statistical model selection of not only Levy copulas but also Levy measures in bivariate compound Poisson risk processes.

Our information criterion is based on the extended Kullback-Leibler divergence (see, e.g., Shimizu (2009, J. Statist. Plan. Infer.)), since Levy measures of compound Poisson processes are not necessarily probability measures but finite measures in general. We construct an information criterion as an asymptotically unbiased estimator of the extended Kullback-Leibler divergence by a suitable bias correction. We also show some numerical examples in which our criterion works well although the maximum likelihood criterion can select unsuitable models.

The talk is based on a joint work with Yasutaka Shimizu.


Contributed talk: Wednesday, 11:10, Room 1

Marta Sánchez Sánchez (Universidad de Cádiz, Spain)

A Bayesian sensitivity study in actuarial context

In the context of robust Bayesian analysis, we focus on a new class of prior distributions based on stochastic orders and distortion functions defined in Arias-Nicolás et al. (2016). The problem of computing the most common premium principles in risk theory is analysed in this talk. We consider that uncertainty with regard to the prior distribution can be represented by the assumption that the unknown prior distribution belongs to this new class, and we examine the ranges of the Bayesian premium when the priors belong to such a class. In order to measure the uncertainty induced by the class, as well as its effect on the Bayesian premiums, we use the Kolmogorov and Kantorovich metrics. Finally, we present the results obtained and their interpretation.

Key Words: Robust Bayesian analysis, prior class, stochastic orders, distortion functions, premiums.

References: Arias-Nicolás, J.P., Ruggeri, F. and Suárez-Llorens, A. (2016). New classes of priors based on stochastic orders and distortion functions. Bayesian Analysis, 11, 4, pp. 1107-1136

The talk is based on a joint work with Miguel Angel Sordo Diaz and Alfonso Suarez Llorens.


Contributed talk: Monday, 14:45, Room 2

Jose Maria Sarabia (University of Cantabria, Spain)

Aggregation of Dependent Risks in Mixtures of Exponential Distributions and Extensions

The distribution of the sum of dependent risks is a crucial aspect in actuarial sciences, risk management and in many branches of applied probability. In this paper, we obtain analytic expressions for the probability density function (pdf) and the cumulative distribution function (cdf) of the aggregated risks, where the risks are modeled according to a multivariate mixture of exponential distributions.

First, we review the properties of the multivariate mixture of exponentials including a characterization theorem (in terms of the copula generator and the marginal distributions), dependence conditions (total positivity of order two in pairs and associated random variables), dependence measures, joint moments, the copula (which belongs to the Archimedean family) and other relevant features. We continue with the analytical formulation for the pdf and the cdf of the aggregated distribution. Moreover, we include expressions for the survival function, the moments and the value at risk (VaR).

Then, we study in detail some specific multivariate models with claims of the type Pareto (Sarabia, J.M., Gomez-Deniz, E., Prieto, F., Jorda, V., Insurance Math. Econom. 71, 2016), Gamma, Weibull with shape parameter 1/2, general Weibull, inverse Gaussian mixture of exponentials (Whitmore, G.A., Lee, M-L.T., Technometrics 33, 1991) and other parent distributions. In particular, the models with Pareto and Weibull claims have Clayton and Gumbel copulas, respectively. As regards the other multivariate models, their dependence structure is characterized by new families of copulas, which are obtained and studied.

For all these models, we obtain specific expressions for the aggregated distribution, and we study some of their main properties. Specifically, we compute some risk measures including VaR, tail value at risk and other tail measures. Explicit ruin formulas (Albrecher, H., Constantinescu, C., Loisel, S., Insurance Math. Econom. 48, 2011) are also discussed.

Finally, some extensions of the basic multivariate model are studied. The first extension is specified by conditional distributions of the power type, with a given baseline cdf (Albrecher, H., Constantinescu, C., Loisel, S., Insurance Math. Econom. 48, 2011). A second extension is based on mixtures of gamma and gamma product-ratio claims (Sibuya, M., Ann. Instit. Statist. Math. 31, 1979). We also include some numerical applications.

The talk is based on a joint work with Emilio Gomez-Deniz, Faustino Prieto and Vanesa Jorda.
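
As a small illustration of the type of model discussed above, the following Monte Carlo sketch in Python (illustrative parameters, not the paper's analytic route) aggregates dependent Pareto (Lomax) risks generated as exponentials mixed over a common gamma frailty, which corresponds to the Pareto/Clayton case mentioned in the abstract, and reads off the VaR of the sum; the paper instead derives the aggregate pdf, cdf and VaR in closed form.

import numpy as np

rng = np.random.default_rng(1)
n_sim, d = 200_000, 3            # number of simulations, number of risks
a, b = 3.0, 10.0                 # Lomax shape and scale (illustrative)

# common gamma frailty -> conditionally i.i.d. exponential claims
lam = rng.gamma(shape=a, scale=1.0 / b, size=n_sim)              # mixing rate
X = rng.exponential(scale=1.0 / lam[:, None], size=(n_sim, d))   # each margin is Lomax(a, b)

S = X.sum(axis=1)                                                # aggregated risk
for alpha in (0.95, 0.99):
    print(f"VaR_{alpha:.2f}(S) approx. {np.quantile(S, alpha):.2f}")

# sanity check of one margin against the Lomax survival function (1 + x/b)^(-a)
x0 = 5.0
print((X[:, 0] > x0).mean(), (1 + x0 / b) ** (-a))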


Contributed talk: Monday, 10:40, Room 6

David Saunders (University of Waterloo, Canada)

Optimal Investment Strategies for Participating Contracts using the Martingale Method

Participating contracts are popular insurance policies, for which the payoff to a policyholder is linked to the performance of a portfolio managed by the insurer. We consider the portfolio selection problem of an insurer that offers participating contracts and has an S-shaped utility function. Applying the martingale approach, closed-form solutions are obtained. The resulting optimal strategies are compared with portfolio insurance hedging strategies. We also study numerical solutions of the portfolio selection problem with constraints on the portfolio weights.

The talk is based on a joint work with Hongcan Lin and Chengguo Weng.


Contributed talk: Monday, 13:55, Room 1

Frank Schiller (Munich Re, Germany)

Predictive model for mental illness in disability insurance in Germany

There are several areas of application for data analytics methods in insurance; one is to improve the understanding of the drivers of disability claims in life insurance. In this presentation we demonstrate how to successfully apply modern predictive models to mental illness prediction in German disability insurance. Over 30 variables have been analyzed using various predictive models. Potential applications arise in risk assessment, claims handling and portfolio management. However, a useful model should not only allow a good prediction but must also enable the findings to be interpreted and validated by experts.


Contributed talk: Tuesday, 11:05, Room 8

Maren Diane Schmeck (Bielefeld University, Germany)

The challenge of finding the optimal reinsurance strategy in a Markov-switching model.

Consider an insurance entity with a surplus modelled by a Brownian motion with drift. The company is allowed to buy proportional reinsurance. Furthermore, the drift and the volatility of the diffusion surplus depend on an observable continuous-time Markov chain, representing macroeconomic changes.

Our objective is to minimize the value of expected discounted capital injections, which are necessary to keep the risk process above zero. Thus, we aim to find the value function, defined as the infimum of expected discounted capital injections over all admissible reinsurance strategies, and to derive the optimal strategy leading to the value function.

Unlike in the case without Markov switching, an explicit general-form solution cannot be given. The solution is proved to be a fixed point of an algorithm consisting of the recursive solution of differential equations.

The talk is based on a joint work with Julia Eisenberg.


Contributed talk: Tuesday, 11:30, Room 6

Hanspeter Schmidli (University of Cologne, Germany)

On Dividends with Tax and Capital Injection

We consider a risk model driven by a spectrally negative Levy process. Dividends are paid from the surplus, and capital injections have to be made in order to keep the surplus positive. In addition, tax has to be paid on dividends, but injections lead to an exemption from tax. We show that the optimal dividend strategy is a two-barrier strategy. The barrier depends on whether an immediate dividend will be taxed or not.


Contributed talk: Wednesday, 11:10, Room 2

Klaus D. Schmidt (Technische Universität Dresden, Germany)

Estimators for a Class of Measures of Concordance for Bivariate Copulas

We propose and study estimators for a wide class of bivariate measures of concordance for copulas. These measures of concordance are generated by a copula and generalize Spearman's rho and Gini's gamma.

In the case of Spearman's rho and Gini's gamma the estimators turn out to be the usual sample versions of these measures of concordance.

The talk is based on a joint work with Sebastian Fuchs.


Contributed talk: Tuesday, 14:45, Room 8

Steffen Schumann (SCOR Global P&C Deutschland, Germany)

Application of Cluster Analysis for the Projection of Individual Large Claims for Long Tail Non-Proportional Reinsurance Pricings

For (re)insurance companies the accurate estimation of the ultimate values of large losses is of vital importance and particularly challenging for long-tail segments. With regard to the impact analysis of excess of loss reinsurance, it is useful to estimate ultimate values based on the development of individual losses (see Drieskens et al. (2012), Murphy, McLennan (2006), Mack (2002) or Höhn, Bollmann (2006)). One of the core issues of such methods is deciding how to determine the future loss development factors for each loss from the set of available loss development factors. According to the model introduced in Drieskens et al. (2012), one possibility is to divide the available loss development factors based on their underlying incurred value using a threshold, whose calculation and value is not further specified and, thus, unknown. In the approach by Mack (2002), the loss development factors are determined according to similar losses. However, in many cases a suitable similar loss may not be available in real data.

Thus, clustering techniques are used to analyze the structure of observed loss development factors using the information available to a reinsurance company: the incurred values, the paid pattern, and the combination of both. The deviation of the calculated ultimate values of the large losses can be reduced with a more realistic projection of the losses, leading to a more reliable estimation and pricing. The optimal number of clusters is determined using the Elbow method by Thorndike (1953), the Gap statistic by Tibshirani et al. (2001), and the silhouette coefficient by Rousseeuw (1987). After choosing the number of clusters, the CLARA algorithm by Kaufman, Rousseeuw (2009) is used to cluster the available data. This algorithm is selected because it is appropriate for large applications, applying the PAM algorithm, an improvement of the k-means algorithm, to data samples (e.g. Rousseeuw (1987)).

Following this approach, the optimal number of clusters differs in most situations depending on the type of information applied. The goodness of the clustering is measured using the silhouettes of the determined clusters and the comparison of the estimated ultimate values for observed reinsurance claim data using an adjusted method based on Drieskens et al. (2012). It remains questionable, however, which of the available information (the incurred value, the paid pattern or both) is more trustworthy. In the end, using the combined information gives a reasonable clustering, resulting in the smallest deviation of the ultimate values compared with the Chain Ladder model applied to excess of loss reinsurance data.

Keywords: CLARA; Cluster analysis; Large loss reserving; Reinsurance; Clustering validity

The talk is based on a joint work with Thomas Hartung.
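
A minimal Python sketch of the cluster-number selection step described above, using the silhouette coefficient; it uses k-means from scikit-learn as a stand-in for the CLARA/PAM algorithm employed in the paper, and random data in place of real loss development factors.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)

# stand-in features per large loss: first development factor and paid-to-incurred ratio
X = np.vstack([
    rng.normal([1.1, 0.3], 0.05, size=(60, 2)),   # "benign" losses
    rng.normal([1.8, 0.1], 0.10, size=(40, 2)),   # "volatile" losses
])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
    print(f"k = {k}: average silhouette = {scores[k]:.3f}")

best_k = max(scores, key=scores.get)
print("chosen number of clusters:", best_k)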


Contributed talk: Tuesday, 15:40, Room 3

Malgorzata Seklecka (University of Liverpool, United Kingdom)

Effects of Temperature and Economic Changes in the United Kingdom

It is well known that the health of a population is affected by social, environmental, and economic factors. Therefore, changes in death rates may occur due to climate and economic changes. In this article, we extend a previous study (Seklecka et al., 2017) on excess deaths resulting from climate change to also investigate the impact of economic changes, using annual data for the United Kingdom. A new stochastic mortality model is proposed which extends the Lee and Carter (1992) model and recent extensions by including an additional economic factor alongside a temperature-related term. This model is shown to provide better fitting and forecasting results.

The talk is based on a joint work with Lydia Dutton, Athanasios A. Pantelous and Colin O'Hare.


Contributed talk: Monday, 13:30, Room 6

Kristina Sendova (University of Western Ontario, Canada)

How about the relative security loading?

In this paper we introduce a non-homogeneous compound-birth process as the claim-counting process of an insurance company. The main feature of this process is that it may account for batch claim arrivals. As a result, the insurer's detailed record of costs resulting from claim processing may be used for fine-tuning the relative security loading. Further, assuming a simpler version of the claim-counting process, we study the Gerber-Shiu function and some of its special cases in more detail.

The talk is based on a joint work with Leda Minokova.


Contributed talk: Tuesday, 10:40, Room 4

Arnold F. Shapiro (Pennsylvania State University, United States of America)

Probabilistic fuzzy systems

It is generally conceded that the two major sources of uncertainty are randomness and fuzziness, and that they are complementary. This study extends this viewpoint to an integrated framework where both types of uncertainty exist concurrently within a model, and where each of the randomness and fuzziness components, while necessary, is not sufficient on its own to formulate the model. Many practical applications are of this sort.

Probabilistic fuzzy systems (PFSs) were developed to accommodate such an integrated framework. Essentially, the PFS is a methodology that is built on a fuzzy inference system, which has been modified to accommodate a probabilistic fuzzy rule base. This provides a stochastic input-output mapping between the input fuzzy sets associated with the antecedent part of the rule base and the output fuzzy sets associated with the consequent part.

The purpose of this talk is to present an overview of PFSs. The talk begins with an introduction to PFSs and a discussion of their architecture. Next, we explain the key features of their methodology. Given this background, we present some examples of application. The talk concludes with a comment on the findings and suggestions for further studies.


Contributed talk: Tuesday, 16:55, Room 8

Yang Shen (York University, Canada)

A stochastic Stackelberg differential game between an insurer and a reinsurer

This paper proposes a new continuous-time framework to analyze optimal reinsurance, in which an insurer and a reinsurer are the two players of a stochastic Stackelberg differential game, i.e., a stochastic leader-follower differential game. This allows us to determine optimal reinsurance from the joint interests of the insurer and the reinsurer, which is rarely considered in a continuous-time setting. In the Stackelberg game, the reinsurer moves first and the insurer moves subsequently to achieve a Stackelberg equilibrium towards an optimal reinsurance arrangement. More precisely, the reinsurer is the leader of the game and decides on the optimal reinsurance premium to charge, while the insurer is the follower of the game and chooses the optimal proportional reinsurance to purchase. We solve the game problem in two cases: exponential utility maximization and mean-variance optimization. We find that the reinsurer always applies the variance premium principle to calculate the optimal reinsurance premium, and that the insurer's optimal ceding/retained proportion of insurance risk depends not only on its own risk aversion but also on that of the reinsurer.


Contributed talk: Tuesday, 13:30, Room 3

Pavel Shevchenko (Macquarie University, Australia)

Impact of management fees on pricing of variable annuity guarantees

As a type of retirement income product, variable annuities allow equity exposure for a policyholder's retirement fund while the downside risk can be limited by electing additional guarantees. The exposure to the equity market and the protection from financial risk attract charges of a management fee and an insurance fee, respectively, to the policyholder. In this paper we investigate the impact of management fees on the pricing of variable annuities with guaranteed minimum withdrawal benefits under optimal withdrawal strategies from a policyholder's and an insurer's perspective. As a result, two different optimal pricing problems are formulated, which are solved by dynamic programming and finite difference methods. When management fees are present, we show that the rational policyholder's withdrawal behaviors under the two different optimization criteria can deviate significantly from each other, leading to a substantial difference in the fair insurance fees charged. Numerical examples are provided for a range of scenarios to demonstrate the impact of management fees on the fair pricing of the withdrawal guarantees.

The talk is based on a joint work with Jin Sun and Man Chung Fung.


Contributed talk: Tuesday, 16:05, Room 7

Yasutaka Shimizu (Waseda University, Japan)

Asymptotic theory of parametric inference for ruin probability under Levy insurance risks

The classical insurance ruin theory and related fields have attracted renewed interest in the context of modern Enterprise Risk Management (ERM) because the theory provides many tools for dynamic risk management. A central issue in this context is estimating the ruin probability under certain spectrally negative jump processes. Under a parametric assumption on the claim process, it is not so hard to construct an asymptotically normal estimator of the ruin probability via the delta method, given an asymptotically normal estimator of the unknown parameters. However, the asymptotic variance of the estimator includes the derivative of the ruin probability with respect to the parameters, which is not easy to compute. To construct a confidence interval, we give an approximation for the derivatives under a large initial surplus, and obtain an approximate confidence interval.
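
The following toy Python sketch (my illustration, not the talk's general Levy setting) shows the delta-method construction in the one case where the ruin probability is explicit: the Cramer-Lundberg model with exponential claims, where psi(u) = (lambda*mu/c) * exp(-(1/mu - lambda/c)*u). The gradient is taken numerically, which is exactly the step the talk replaces by a large-initial-surplus approximation in more general models.

import numpy as np

def psi(u, lam, mu, c):
    """Ruin probability in the Cramer-Lundberg model with mean-mu exponential claims."""
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)

# "observed" data: claim count over [0, T] and claim sizes (simulated here)
rng = np.random.default_rng(3)
lam_true, mu_true, c, T, u = 1.0, 2.0, 2.5, 2_000.0, 20.0
n = rng.poisson(lam_true * T)
claims = rng.exponential(mu_true, size=n)

lam_hat, mu_hat = n / T, claims.mean()
est = psi(u, lam_hat, mu_hat, c)

# delta method: grad(psi)^T Cov(lam_hat, mu_hat) grad(psi), gradient by central differences
eps = 1e-6
grad = np.array([
    (psi(u, lam_hat + eps, mu_hat, c) - psi(u, lam_hat - eps, mu_hat, c)) / (2 * eps),
    (psi(u, lam_hat, mu_hat + eps, c) - psi(u, lam_hat, mu_hat - eps, c)) / (2 * eps),
])
cov = np.diag([lam_hat / T, mu_hat**2 / n])    # asymptotic variances, independent estimators
se = np.sqrt(grad @ cov @ grad)
print(f"psi_hat = {est:.4f}, 95% CI = ({est - 1.96*se:.4f}, {est + 1.96*se:.4f})")
print(f"true value = {psi(u, lam_true, mu_true, c):.4f}")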


Contributed talk: Monday, 16:55, Room 2

Nariankadu Shyamalkumar (The University of Iowa, United States of America)

On Tail Dependence Matrices

Modelling dependence has recently gained a lot of interest in actuarial science, especially with a focus on inter-relatedness when one or more variables assume extreme values. The latter is referred to in the literature as tail dependence, and, as for dependence in general, there are many ways of measuring it. For a d-dimensional vector, an analogue of the correlation matrix is the tail dependence matrix, and recently Embrechts, Hofert and Wang gave a characterization of such matrices. Nevertheless, as pointed out by them, an algorithm to determine whether a given matrix equals the tail dependence matrix of a distribution has been lacking. In this talk we discuss such an algorithm and provide some heuristics on the possible computational complexity of this problem.

The talk is based on a joint work with Siyang Tao.


Contributed talk: Wednesday, 09:00, Room 8

Mario Sikic (University of Zurich, Switzerland)

Market-Consistent Valuation and Financial Management of an Insurance Firm

We analyze the value-maximizing financial policies (dividend payments, capital raising, and investments) of a limited-liability insurance firm with a diffuse shareholder base. We assume the company operates in an environment where both raising and holding capital are subject to frictions, i.e. they are costly. We put special emphasis on the investment policy. Our results show that it is sometimes optimal to engage in risky investments. This is at odds with the celebrated results by Froot and Stein (1998) and Froot (2007) stating that financial institutions (and insurers, in particular) should hedge all of their exposure to liquidly traded risks. In contrast to these authors, however, we explicitly take into account the possibility of the company defaulting on its obligations.

Our analysis relies on dynamic-programming methods. We use tangible capital as the state variable and dividends, capital injections and the investment policy as the controls. Typically, the assumption of the firm being risk neutral, which is consistent with its value-maximization objective, entails determining firm value by computing the expected value of discounted future dividends until liquidation under the objective probability. However, as we also allow for investments in traded securities, the valuation rule needs to be consistent with market prices. This implies that, when taking expectations, we cannot resort to the objective probability measure but need to use a particular risk-adjusted probability measure Q that is uniquely determined by the following properties: i) computing discounted expected values with respect to Q reproduces market prices when applied to the payoffs of traded assets and ii) Q is consistent with risk neutrality for payoffs that are not correlated with the financial market.

The talk is based on a joint work with Pablo Koch-Medina, Santiago Moreno-Bromberg and Claudia Ravanelli.


Contributed talk: Tuesday, 13:30, Room 1

Søren Kærgaard Slipsager (Aarhus University, Denmark)

The Real Risk in Pension Forecasting

The planning of saving and investing over the life cycle with an aim towards retirement is to many people a daunting task. Further complexities are added to the problem when considering the role of inflation whose importance to the actuary was emphasized in both Wilkie (1981) and Wilkie (1984). Even though pension savers tend to think in nominal terms they should be thinking in real terms as it is their purchasing power over the retirement period that matters. A common way to incorporate changes in the purchasing power is by using a consumer price index (CPI). In retirement planning, an easy and thus popular approach is to assume that the CPI grows deterministically at a fixed rate.

The purpose of this paper is to shed light on some of the flaws in the forecasting approach undertaken by the pension industry. Specifically, it considers the treatment of inflation and shows that the current modeling framework is too simplistic. I identify the flaws of the existing regulatory framework and provide an alternative full model framework constructed around the three-factor diffusion model recently proposed by the Danish Society of Actuaries.

By use of a simulation study I compare the deterministic inflation scheme applied in the industry to a stochastic scheme and show that the real value of the pension saver’s investment portfolio at retirement is highly dependent on the inflation scheme. As the deterministic scheme does not take state variable correlations into account it overestimates the expected portfolio value in real terms compared to the stochastic scheme. Moreover, the deterministic scheme gives rise to a more heavy-tailed distribution implying a misestimation of downside risk and upside potential. Finally, it is shown in a realistic case study that the pension saver’s expected retirement payout profile is heavily affected.
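
A stripped-down Python version of the comparison described above (illustrative parameters, a single lognormal fund and CPI instead of the full three-factor model): real terminal wealth under a stochastic, correlated CPI versus the deterministic fixed-rate scheme.

import numpy as np

rng = np.random.default_rng(4)
n, T = 100_000, 30                      # scenarios, years to retirement
mu, sigma = 0.05, 0.12                  # nominal fund drift / volatility (illustrative)
pi, s, rho = 0.02, 0.015, 0.3           # CPI drift / volatility / correlation with the fund

z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

wealth_nom = np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z1)   # per unit invested
cpi_stoch = np.exp((pi - 0.5 * s**2) * T + s * np.sqrt(T) * z2)
cpi_det = np.exp(pi * T)

real_stoch = wealth_nom / cpi_stoch
real_det = wealth_nom / cpi_det

for name, x in [("stochastic CPI", real_stoch), ("deterministic CPI", real_det)]:
    print(f"{name}: mean = {x.mean():.3f}, 5% quantile = {np.quantile(x, 0.05):.3f}")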


Contributed talk: Monday, 16:30, Room 3

Miguel A. Sordo (University of Cádiz, Spain)

Stochastic orders and co-risk measures under positive dependence

An important problem in portfolio risk analysis is to evaluate the systemic risk, which is related to the risk that the failure or loss of a component X spreads to another component Y or even to the whole portfolio. To address this issue, the literature offers different conditional risk measures (called co-risk measures) and contribution measures. While co-risk measures evaluate the risk of large losses of the components via dependence-adjusted versions of measures usually employed to assess isolated risks (such as the CoVaR and the CoES), contribution measures quantify how a stress situation for a component X affects another component Y (or the whole portfolio). The aim of this paper is to study the consistency of some co-risk measures and risk contribution measures with respect to various stochastic orderings of the marginals under different positive dependence assumptions. Some of the stochastic orders considered in this talk are the hazard rate order, the increasing convex order, the dispersive order and the excess wealth order.

The talk is based on a joint work with Alfonso Suárez-Llorens and Alfonso J. Bello.
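
For readers unfamiliar with the co-risk measures mentioned above, this is a minimal Monte Carlo sketch in Python (my illustration) of one common variant of CoVaR and CoES, conditioning Y on X exceeding its own VaR, under a bivariate normal dependence with illustrative parameters.

import numpy as np

rng = np.random.default_rng(5)
n, rho, alpha, beta = 500_000, 0.6, 0.95, 0.95

# bivariate normal losses (X, Y) with correlation rho
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

var_x = np.quantile(x, alpha)                  # VaR_alpha of X
stressed_y = y[x >= var_x]                     # losses of Y in the stress event {X >= VaR_alpha(X)}

covar = np.quantile(stressed_y, beta)          # CoVaR: beta-quantile of Y given the stress event
coes = stressed_y[stressed_y >= covar].mean()  # CoES: mean of Y beyond CoVaR, given the stress event
print(f"VaR_{beta}(Y) = {np.quantile(y, beta):.3f}, CoVaR = {covar:.3f}, CoES = {coes:.3f}")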


Contributed talk: Monday, 16:30, Room 2

Jaap Spreeuw (Cass Business School, City, University of London, United Kingdom)

Fitting Archimedean copula models based on distance between generators

In several recent publications, parametric Archimedean copula models have been introduced with generators that do not have a closed-form expression for the inverse generator. Examples can be found in McNeil and Nešlehová (JMVA, 2010), Hua and Joe (JMVA, 2011) and Hua (IME, 2015). As a consequence, closed-form expressions for the copula itself as well as for the Kendall function are not available either. Such copulas are also harder to fit by common estimation methods. Parameters can no longer be estimated by minimizing the distance between the empirical and theoretical Kendall functions, while a grid search would be required to employ pseudo-maximum likelihood. Although estimation by inversion of Kendall's tau could still work, this is only feasible for one-parameter families, and only produces unbiased estimates if the data are uncensored.

In this talk, a new estimation method is introduced that is based on minimizing the distance between the empirical and theoretical generators. The fact that generators are only defined up to an arbitrary scaling factor is accommodated by adding a scaling variable to the parameter vector of the theoretical generator. The method is applied to non-life (complete) as well as life (censored) insurance data, both used in the literature before. To assess the quality of the method, a comparison with pseudo-maximum likelihood is made for both data sets. In addition, a new copula family is introduced, for which the inverse of the generator is not available in closed form, and an existing family is expanded. Both families perform (very) well, at least for the life data set.
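
As a point of reference for the estimation methods compared above, here is a minimal Python sketch (my own, for the one-parameter Clayton family, which does have a closed-form inverse generator) of the tau-inversion benchmark mentioned in the abstract; the generator-distance method of the talk is designed precisely for families where such shortcuts are unavailable.

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)
theta_true, n = 2.0, 2_000

# sample a Clayton copula via its gamma frailty representation
v = rng.gamma(shape=1.0 / theta_true, scale=1.0, size=n)
e = rng.exponential(size=(n, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta_true)      # (U1, U2) ~ Clayton(theta_true)

# inversion of Kendall's tau: for Clayton, tau = theta / (theta + 2)
tau, _ = kendalltau(u[:, 0], u[:, 1])
theta_hat = 2.0 * tau / (1.0 - tau)
print(f"tau = {tau:.3f}, theta_hat = {theta_hat:.3f} (true theta = {theta_true})")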


Contributed talk: Monday, 11:30, Room 1

Gabriele Stabile (Sapienza-Università di Roma, Italy)

On the free boundary of an annuity purchase

Locating the optimal age (time) at which to purchase an irreversible life annuity is a problem that has received considerable attention in the literature over the past decade. As Hainaut and Deelstra noticed (J. Econ. Dyn. Control 44 (2014) 124-146), one might think that individuals should annuitize their wealth if the value of their financial investments is decreasing, in order to prevent a further decrease in value. Alternatively, in order to obtain an acceptable annuity payment, individuals should switch to annuities if the financial performance is high enough. This paper sets up a free boundary analysis to study the optimal time to switch from a financial investment to a life annuity and to investigate the shape of the regions where it is optimal, respectively, to delay or to immediately purchase an annuity (the so-called continuation and stopping regions). The optimal annuitization time is characterized as the first time the individual's wealth hits an unknown boundary, which divides the time-wealth plane into the continuation and stopping regions. The problem of finding the optimal stopping boundary is converted into a parabolic free boundary problem. From this free boundary set-up we deduce an integral equation for the boundary, which can be used to compute its values numerically. A variety of numerical examples are presented for the Gompertz-Makeham mortality and proportional hazard rate models.

The talk is based on a joint work with Tiziano De Angelis.


Contributed talk: Monday, 15:40, Room 6

Mogens Steffensen (University of Copenhagen, Denmark)

Approximations to expected utility optimization in continuous time

In this paper, we explore approximate solutions to optimal control problems that cannot be solved analytically with existing techniques. Inspired by the mean-variance analysis of the single-period environment, an advanced and a simple method are developed in order to approximate optimal investment strategies in continuous time. In the advanced method, the original problem is approximated by a Taylor series expansion in the conditional mean of terminal wealth. As the point of expansion is thereby continuously changing, the approximation results in a non-standard optimal control problem that can be characterised by an extended HJB equation. In the simple method, the problem is expanded in the initial mean, leading to a problem that can be solved using the classical HJB equation in an unconventional way. The advanced approximated problem inherits more features from the original problem than the simple approximated problem. In a numerical example, we illustrate how the advanced approximate strategy gives a better approximation than the simple approximate strategy. An approximate solution is determined for a prospect theory investment problem, utilising the advanced method of approximation. The solution reflects the same behaviour as the classical life-cycle investment strategy, where the proportion of wealth invested in the risky asset is decreasing over time and independent of the level of wealth.

The talk is based on a joint work with Maj-Britt Nordfang.


Contributed talk: Tuesday, 14:20, Room 3

Xiaoshan Su (EMLyon Business School, France)

Pricing Defaultable Participating Contracts with Regime Switching and Jump Risk

This paper presents a regime-switching jump diffusion framework for pricing participating life insurance contracts in which both credit risk and the market's long-term regime-switching and short-term jump risks are taken into account. The assets portfolio is assumed to be fully invested in the financial market and to be financed with equity and life insurance policies. The framework assumes that the value of the assets portfolio evolves as a geometric regime-switching double exponential jump diffusion, which simultaneously captures the long-term regime-switching risk and the short-term jump risk. A structural approach characterizes the potential default of the insurance company, where default happens when the value of the assets portfolio crosses a level that is related to the initial policyholders' premium. The first passage time of the corresponding process is involved, and the definition and implementation of an associated matrix Wiener-Hopf factorization are presented. The regime-switching double exponential jump diffusion model preserves the analytical tractability of the regime-switching Brownian motion and of the double exponential jump diffusion models. This makes the price of life insurance contracts computable in semi-closed form by combining closed-form expressions with numerical Laplace inversion. An illustration concludes the paper and addresses the respective impacts of the different risk sources on the price of the life insurance contracts.

The talk is based on a joint work with Olivier Le Courtois and François Quittard-Pinon.
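
To make the asset dynamics above concrete, here is a small Euler-type simulation sketch in Python (my illustration, with made-up parameters) of a two-regime geometric double exponential (Kou) jump diffusion; the paper itself works with the matrix Wiener-Hopf factorization rather than simulation.

import numpy as np

rng = np.random.default_rng(7)
T, n_steps = 5.0, 5_000
dt = T / n_steps

# regime-dependent parameters (illustrative): drift, volatility, jump intensity
mu = np.array([0.06, 0.01])
sigma = np.array([0.10, 0.25])
lam = np.array([0.5, 2.0])
q = np.array([0.5, 1.0])                 # switching rates out of regimes 0 and 1
p_up, eta_up, eta_dn = 0.4, 15.0, 10.0   # double exponential jump-size parameters (log scale)

log_s, regime = np.log(100.0), 0
path = np.empty(n_steps + 1)
path[0] = np.exp(log_s)
for i in range(n_steps):
    if rng.random() < q[regime] * dt:                       # regime switch
        regime = 1 - regime
    jump = 0.0
    if rng.random() < lam[regime] * dt:                     # a Kou jump occurs
        sign, eta = (1.0, eta_up) if rng.random() < p_up else (-1.0, eta_dn)
        jump = sign * rng.exponential(1.0 / eta)
    log_s += (mu[regime] - 0.5 * sigma[regime] ** 2) * dt \
             + sigma[regime] * np.sqrt(dt) * rng.standard_normal() + jump
    path[i + 1] = np.exp(log_s)

print("terminal asset value:", path[-1])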


Contributed talk: Monday, 13:30, Room 7

Anatoliy Swishchuk (University of Calgary, Canada)

Risk Model Based on Compound Hawkes Process

The Hawkes process (Hawkes (1971)) is a simple point process that has the self-exciting property, a clustering effect and long memory. It has been widely applied in seismology, neuroscience, DNA modelling and many other fields, including finance (Embrechts et al. (2011)) and insurance (Stabile et al. (2010)).
In this talk, we introduce a new model for the risk process, based on a compound Hawkes process (CHP) for the arrival of claims: R(t) := u + pt - [a(X(1)) + a(X(2)) + … + a(X(N(t)))], where u is the initial capital of an insurance company, p is the rate at which premiums are paid, X(k) is a continuous-time Markov chain in a state space X, a(x) is a continuous and bounded function on X, and N(t) is a Hawkes process (Hawkes (1971)). If the a(X(k)) = X(k) are i.i.d. random variables and N(t) is a homogeneous Poisson process, then R(t) is the classical risk process, also known as the Cramer-Lundberg risk model (Asmussen and Albrecher (2010)). In the latter case the outgoing claims form a compound Poisson process (CPP). By analogy, we call our risk process, with claims a(X(1)) + a(X(2)) + … + a(X(N(t))), a risk model based on a compound Hawkes process (CHP). To the best of the author's knowledge, this risk model is the most general one in the existing literature. In comparison to simple Poisson arrivals of claims, the CHP model accounts for the risk of contagion and clustering of claims.

We study the main properties of this new risk model: the ruin time (i.e., the ultimate ruin time), the net profit condition, and the premium principle.
We note that Stabile & Torrisi (2010) were the first to replace the Poisson process by a simple Hawkes process in studying the classical problem of the probability of ruin. Dassios and Zhao (2011) considered the same ruin problem using a marked mutually-exciting process (dynamic contagion process).
Jang & Dassios (2012) implement Dassios & Zhao (2011) to calculate insurance premiums and suggest that higher premiums should be set in general across different insurance product lines.

References:
Asmussen, S. and Albrecher, H. (2010): Ruin Probabilities, 2nd edition, World Scientific, Singapore.
Dassios, A. and Zhao, HB. (2011): A dynamic contagion process, Adv. in Appl. Probab. Volume 43, Number 3, 814-846.
Dassios, A. and Jang, J. (2012): A Double Shot Noise Process and Its Application in Insurance. J. Math. System Sci., 2, 82-93
Embrechts, P., Liniger, T. and Lin, L. (2011): Multivariate Hawkes Processes: An Application to Financial Data, Journal of Applied Probability. 48A, 367-378.
Hawkes, A. G. (1971): Spectra of some self-exciting and mutually exciting point processes. Biometrika. 58, 83-90.
Jang, J. and Dassios, A. (2013): A bivariate shot noise self-exciting process for insurance. Insurance: Mathematics and Economics, 53 (3), 524-532.
Stabile, G. and G. L. Torrisi. (2010): Risk processes with non-stationary Hawkes arrivals. Methodol. Comput. Appl. Prob. 12, 415-429.
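
A minimal Python simulation sketch of the CHP risk process defined above, with an exponential-kernel Hawkes process generated by Ogata's thinning algorithm and a two-state Markov chain of claim sizes (all parameters are purely illustrative).

import numpy as np

rng = np.random.default_rng(8)

def intensity(t, events, lam0, alpha, beta):
    """Hawkes intensity lam0 + alpha * sum(exp(-beta*(t - t_i))) over past events."""
    if not events:
        return lam0
    return lam0 + alpha * np.sum(np.exp(-beta * (t - np.array(events))))

def hawkes_thinning(lam0, alpha, beta, T):
    """Simulate event times on [0, T] by Ogata's thinning (exponential kernel)."""
    t, events = 0.0, []
    while True:
        lam_bar = intensity(t, events, lam0, alpha, beta)   # upper bound until the next event
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        if rng.random() <= intensity(t, events, lam0, alpha, beta) / lam_bar:
            events.append(t)

# claim sizes a(X_k) = X_k follow a two-state Markov chain on {1.0, 3.0}
states = np.array([1.0, 3.0])
P = np.array([[0.7, 0.3], [0.4, 0.6]])

u, prem_rate = 20.0, 4.0                           # initial capital u and premium rate p
claim_times = hawkes_thinning(lam0=1.0, alpha=0.5, beta=1.2, T=50.0)

k, claims = 0, []
for _ in claim_times:
    claims.append(states[k])
    k = rng.choice(2, p=P[k])                      # move to the next state of the chain

surplus = u + prem_rate * claim_times - np.cumsum(claims)   # R(t) just after each claim
print("number of claims:", len(claim_times), "| minimum surplus:", surplus.min())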


Contributed talk: Monday, 14:20, Room 6

Maissa Tamraz (University of Lausanne - HEC, Switzerland)

Some mathematical aspects of price optimisation

Calculation of an optimal tariff is a principal challenge for pricing actuaries. In this contribution we are concerned with renewal insurance business, discussing various mathematical aspects of the calculation of an optimal renewal tariff. Our motivation comes from two important actuarial tasks, namely a) the construction of an optimal renewal tariff subject to business and technical constraints, and b) the determination of an optimal allocation of certain premium loadings. We consider both continuous and discrete optimisation and then present several algorithmic sub-optimal solutions. Additionally, we explore some simulation techniques. Several illustrative examples show both the complexity and the importance of the optimisation approach.

Key Words: market tariff; optimal tariff; price optimisation; renewal business; sequential quadratic programming.

The talk is based on a joint work with Enkelejd Hashorva, Gildas Ratovomirija and Yizhou Bai.
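
The sequential quadratic programming ingredient mentioned in the keywords can be illustrated with a toy renewal-tariff problem in Python (an entirely hypothetical model and numbers, not the authors' formulation): choose per-segment rate changes to maximise expected renewed profit under a logistic retention model, subject to a cap on the premium-weighted average rate change.

import numpy as np
from scipy.optimize import minimize

# hypothetical portfolio: current premium and expected claims per tariff segment
premium = np.array([100.0, 150.0, 80.0, 120.0])
claims = np.array([70.0, 125.0, 50.0, 95.0])

def retention(r):
    """Hypothetical logistic retention probability as a function of the rate change r."""
    return 1.0 / (1.0 + np.exp(8.0 * (r - 0.05)))

def neg_expected_profit(r):
    return -np.sum(retention(r) * (premium * (1.0 + r) - claims))

cons = ({"type": "ineq", "fun": lambda r: 0.03 - np.average(r, weights=premium)},)  # avg increase <= 3%
bounds = [(-0.05, 0.15)] * premium.size

res = minimize(neg_expected_profit, x0=np.zeros(premium.size),
               method="SLSQP", bounds=bounds, constraints=cons)
print("optimal rate changes:", np.round(res.x, 4))
print("expected renewal profit:", -res.fun)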


Contributed talk: Tuesday, 11:30, Room 8

Muhsin Tamturk (University of Leicester, United Kingdom)

Quantum mechanics approach to the reinsurance with capital injections

The finite-time ruin probability of a modified surplus process under a reinsurance agreement is computed via the quantum mechanics approach, which provides a powerful alternative to traditional probability calculations (see [1]).
According to the reinsurance agreement, the reinsurance premium is paid in advance, and the insurance company receives capital injections when its capital falls below a specific retention level.

The Dirac-Feynman path integral approach is applied in the finite-time method. For large times, our finite-time method is compared with an infinite-time method introduced in [2] in terms of the ruin probability and the expected total injection amount for different retention levels. In addition, we analyse whether the reinsurance agreement is reasonable with respect to various parameters such as the initial capital, the reinsurance premium, the claim frequency and the mean claim size.

Keywords: Ruin probability, Dirac-Feynman Path integral approach, Quantum mechanics, Reinsurance.

References
[1] Baaquie, Belal E. Quantum finance: Path integrals and Hamiltonians for options and interest rates. Cambridge University press. 2007.
[2] Nie, C., Dickson, D.C. and Li, S., 2011. Minimizing the ruin probability through capital injections. Annals of Actuarial Science, 5(02), pp.195-209.

The talk is based on a joint work with Sergey Utev.


Contributed talk: Wednesday, 09:00, Room 5

Biqi Tan (Central University of Finance and Economics, China)

On the valuation and risk measurement of variable annuities with flexible premium

In this paper, we consider the computation of balance fees for variable annuities with flexible premiums, together with two risk measures of the net liability of this product, Value at Risk and Conditional Tail Expectation, under the Black-Scholes framework. While the existing literature on variable annuities with flexible premiums relies on numerical analysis, we propose stratified approximations based on a conditional moment matching technique to improve the computational efficiency of this problem.

To calculate the balance fees and the risk measures of flexible premium variable annuities, it is essential to compute the conditional distribution of the time integral of the underlying equity fund given the terminal value of the underlying equity fund. We use gamma and lognormal distributions to approximate this conditional distribution and fit their parameters by matching the first and second conditional moments of the conditional integral, which reduces the highly oscillating double integral to a single integral. Combining the financial risk with the actuarial risk, we can determine a fair price of flexible premium variable annuities and assess whether it is reasonable to use the same price for both single and flexible premium products. Finally, numerical tests show that the proposed method greatly improves the computational efficiency of the existing approach.
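
As a small, generic illustration of the moment-matching idea (not the authors' specific conditional-moment formulas), the sketch below fits a lognormal distribution to a given pair of first and second moments; the moment values m1 and m2 are placeholders.

# Illustrative sketch: fit a lognormal by matching the first two raw moments.
# In the paper these would be the conditional moments of the time integral
# given the terminal fund value; here m1 and m2 are placeholders.
import numpy as np
from scipy.stats import lognorm

m1, m2 = 1.2, 1.6                     # assumed first and second raw moments

sigma2 = np.log(m2 / m1**2)           # matching E[X] and E[X^2] for a lognormal
mu = np.log(m1) - 0.5 * sigma2
approx = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

print("check mean:", approx.mean(), " check E[X^2]:", approx.moment(2))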

The talk is based on a joint work with Biqi Tan.


Contributed talk: Wednesday, 11:35, Room 4

Senren Tan (Cass Business School, City, University of London, United Kingdom)

Efficient Computation of the Kolmogorov-Smirnov Distribution with Applications in Insurance and Finance

The distribution of the Kolmogorov-Smirnov (K-S) test statistic has been widely studied under the assumption that the underlying theoretical cdf, F(x), is continuous. However, there are many real-life applications in which fitting discrete or mixed distributions is required. Nevertheless, due to inherent difficulties, the distribution of the K-S statistic when F(x) has jump discontinuities has been studied to a much lesser extent and no exact and efficient computational methods have been proposed in the literature.

In this paper, we provide a fast and accurate method to compute the (complementary) cdf of the K-S statistic when F(x) is discontinuous, and thus obtain exact p-values of the K-S test. Our approach is to express the complementary cdf through the rectangle probability for uniform order statistics, and to compute it using the Fast Fourier Transform (FFT). We also give a useful extension of Schmid's asymptotic formula for the distribution of the K-S statistic, relaxing his requirement for F(x) to be increasing between jumps and thus allowing for any general mixed or purely discrete F(x). The numerical performance of the proposed FFT-based method is illustrated on various examples, including some from (re)insurance, for F(x) mixed, purely discrete, and continuous. The performance of the general asymptotic formula is also studied.

The talk is based on a joint work with Dimitrina Dimitrova and Vladimir Kaishev.


Contributed talk: Monday, 10:40, Room 8

Qihe Tang (University of Iowa, United States of America)

CAT Bond Pricing under a Product Probability Measure with EVT Risk Characterization

Frequent large losses from recent catastrophes have caused great concern among insurers/reinsurers, who as a result have started to seek mitigation of such catastrophe risks by issuing catastrophe (CAT) bonds and thereby transferring the risks to the bond market. However, the pricing of CAT bonds remains a challenging task, mainly because the CAT bond market is incomplete and the pricing usually requires knowledge about the tail of the risks. In this paper, we propose a general pricing framework that utilizes a product pricing measure in conjunction with extreme value theory (EVT). While the EVT approach is used to uncover the tail risk, the product measure combines a distorted probability measure that prices the catastrophe risks underlying the CAT bond with a risk-neutral probability measure that prices interest rate risk, to provide an integrated pricing framework. A case study using California earthquake data is shown with numerous sensitivity analyses to demonstrate the impact of certain risk parameters on the CAT bond price.
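
As a hedged sketch of the EVT ingredient only (the product pricing measure itself cannot be condensed into a few lines), the code below fits a generalized Pareto distribution to exceedances of synthetic losses over a high threshold and reads off a tail probability via the peaks-over-threshold formula; the data are placeholders, not the California earthquake data used in the paper.

# Illustrative EVT sketch: peaks-over-threshold fit of a generalized Pareto
# distribution (GPD) to synthetic catastrophe losses.  Data are placeholders.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.pareto(2.0, size=5000) * 10.0      # synthetic heavy-tailed losses

u = np.quantile(losses, 0.95)                   # high threshold
exceedances = losses[losses > u] - u

xi, loc, beta = genpareto.fit(exceedances, floc=0)   # GPD fit, location fixed at 0

# Tail estimate P(L > x) for x above the threshold u.
p_u = np.mean(losses > u)
x = u + 50.0
tail_prob = p_u * genpareto.sf(x - u, xi, loc=0, scale=beta)
print("estimated shape xi:", round(xi, 3), "  P(L > x):", tail_prob)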

The talk is based on a joint work with Zhongyi Yuan.


Contributed talk: Monday, 11:05, Room 2

Bertrand Tavin (EMLyon Business School, France)

Measuring exposure to dependence risk with random Bernstein copula scenarios

This paper considers the problem of measuring the exposure to dependence risk carried by a portfolio gathering an arbitrary number of two-asset derivative contracts. We develop a worst-case risk measure computed over a set of dependence scenarios within a divergence-restricted region. The set of dependence scenarios corresponds to Bernstein copulas obtained by simulating random doubly stochastic matrices. We then devise a method to compute hedging positions when a limited number of hedging instruments are available for trading. In an empirical study we show how the proposed method can be used to reveal an exposure to dependence risk where usual sensitivity methods fail to do so. We also illustrate the ability of the proposed method to generate parsimonious hedging strategies that reduce the exposure to dependence risk of a given portfolio.
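
One standard way to simulate random doubly stochastic matrices, which could serve as the backbone of such Bernstein copula scenarios, is Sinkhorn-Knopp normalisation of a random positive matrix; the sketch below illustrates that generic approach and is not the authors' exact sampling scheme.

# Generic sketch: random doubly stochastic matrices via Sinkhorn-Knopp
# normalisation of a random positive matrix (an assumed, standard approach).
import numpy as np

def random_doubly_stochastic(n, rng, n_iter=200):
    m = rng.uniform(0.0, 1.0, size=(n, n)) + 1e-9
    for _ in range(n_iter):
        m /= m.sum(axis=1, keepdims=True)   # normalise rows
        m /= m.sum(axis=0, keepdims=True)   # normalise columns
    return m

rng = np.random.default_rng(0)
M = random_doubly_stochastic(5, rng)
print("row sums:", np.round(M.sum(axis=1), 6))
print("column sums:", np.round(M.sum(axis=0), 6))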


Contributed talk: Wednesday, 09:25, Room 7

Liivika Tee (University of Tartu, Estonia)

Lambert W random variables and their applications in non-life insurance

We consider the Lambert W distribution as an alternative approach in loss modelling. The Lambert W approach can be viewed as a transformation of a known random variable rather than the creation of a new one; thus, we can construct a skewed Lambert W version of any distribution. The properties and applications of Lambert W random variables are of interest when dealing with asymmetric/skewed data. Since the Lambert W function is double-valued, we distinguish the corresponding branches. Both the principal and non-principal branches are analyzed theoretically, and we obtain the values of the skewness parameters leading to the extreme values of the Lambert W function. In the practical part, the suitability of the corresponding location-scale distributions as well as the Lambert W transformed exponential distribution is evaluated on real insurance data, and the results are compared with common loss distributions.
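
A minimal sketch of the skewing transform typically used for Lambert W random variables (in the spirit of the Lambert W x F framework, assumed here rather than taken from the talk): a standardised variable U is mapped to Z = U exp(gamma U), and the principal branch of the Lambert W function inverts the map where it is invertible.

# Minimal sketch of a Lambert W-type skewing transform and its inverse
# via the principal branch of the Lambert W function.
import numpy as np
from scipy.special import lambertw

gamma = 0.2                                   # skewness parameter (assumed value)
rng = np.random.default_rng(0)

u = rng.standard_normal(100_000)              # base (symmetric) variable
z = u * np.exp(gamma * u)                     # Lambert W-type skewing transform

# The principal branch inverts the transform where 1 + gamma*u >= 0; elsewhere
# the non-principal branch (k=-1) is needed -- the branch issue of the abstract.
ok = 1.0 + gamma * u >= 0.0
u_back = np.real(lambertw(gamma * z[ok], k=0)) / gamma
print("max inversion error on principal branch:", np.max(np.abs(u_back - u[ok])))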

The talk is based on a joint work with Meelis Käärik.


Contributed talk: Monday, 13:55, Room 6

Julie Thøgersen (Aarhus University, Denmark)

Optimal premium as function of the deductible: Customer analysis and portfolio characteristics.

An insurance company offers an insurance contract (p,K), consisting of a premium p and a deductible K. In this paper we consider the problem of choosing the premium optimally as a function of the deductible. The insurance company faces a market of N customers, each characterized by their personal claim frequency and risk aversion. When a customer is offered an insurance contract, she will, based on these characteristics, choose whether or not to insure. The decision process of the customer is analyzed in detail. Since the customer characteristics are unknown to the company, it models them as iid random variables. Depending on the distributions, expressions for the portfolio size and the average claim frequency in the portfolio are obtained. Knowing these, the company can choose the premium optimally, mainly by minimizing the ruin probability.


Contributed talk: Wednesday, 09:25, Room 4

Stefan Thonhauser (Graz University of Technology, Austria)

On a QMC method for Gerber-Shiu functions

In risk theory many quantities of interest, such as ruin probabilities, penalty functions or expected dividend payments, can be characterized as solutions to particular integral equations, and their numerical evaluation boils down to the computation of high-dimensional integrals. Consequently, QMC integration is a potential tool for such problems. In this talk we consider a risk model of piecewise-deterministic Markov type. This particular type of model allows for various extensions of the classical risk model and can be used to overcome its static parameter choice, i.e., non-constant drift and jump distribution parameters can be introduced. In particular, one can smooth the process's parameters to make QMC methods applicable. We show that novel QMC results can be exploited in the proposed framework, and the results will be illustrated by an evaluation of the Gerber-Shiu function, which generalizes the traditional ruin probability.
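
As a generic illustration of QMC integration (not the authors' Gerber-Shiu algorithm), the sketch below compares a scrambled Sobol' estimate of a smooth five-dimensional integral with a plain Monte Carlo estimate using scipy.stats.qmc; the integrand is a placeholder.

# Generic QMC illustration: scrambled Sobol' points vs. plain Monte Carlo
# for a smooth 5-dimensional integral (placeholder integrand).
import numpy as np
from scipy.stats import qmc
from scipy.special import erf

d = 5
f = lambda x: np.exp(-np.sum(x**2, axis=1))          # integrand on [0,1]^d

exact = (0.5 * np.sqrt(np.pi) * erf(1.0)) ** d       # (int_0^1 e^{-t^2} dt)^d

sobol = qmc.Sobol(d=d, scramble=True, seed=0)
x_qmc = sobol.random_base2(m=12)                     # 2^12 Sobol' points
qmc_est = f(x_qmc).mean()

rng = np.random.default_rng(0)
x_mc = rng.uniform(size=(2**12, d))
mc_est = f(x_mc).mean()

print("exact:", exact, " QMC error:", abs(qmc_est - exact),
      " MC error:", abs(mc_est - exact))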

The talk is based on a joint work with Michael Preischl and Robert F. Tichy.


Contributed talk: Monday, 10:40, Room 5

Dongzi Tian (The University of Hong Kong, Hong Kong S.A.R. (China))

Analysis of the calendar year effect in claims reserving: From ultimate to one-year perspectives

This paper studies the calendar year effect (CYE) on the estimation of incurred but unpaid claims (losses), which is required to calculate reserves for a single line of business and for the overall portfolio of an insurance company. Three different types of CYE models, namely (i) common CYE, (ii) independent CYE, and (iii) dependent CYE, are considered to develop the claim payment triangles for each business line. To this end, the CYE factor is added as a covariate to the systematic component of the incremental payments. A Bayesian method with plug-in estimates, as well as a full Bayesian analysis, is adopted for parameter estimation. Then, the aggregated reserves for the overall portfolio are obtained via a Gaussian copula which captures a cell-wise dependence between lines of business. To illustrate the application of the proposed models, we use the data set from Schedule P of the National Association of Insurance Commissioners (NAIC), which includes the claim payments for personal auto and commercial auto. Under different CYE models, the predictive (undiscounted) outstanding claim payments are obtained for each line of business and the overall amounts for the company. This result is related to the ultimate reserve risk, which is recognized over the lifetime of the liabilities. Furthermore, the reserve risk at different confidence levels is analyzed over the one-year time horizon, which is more useful from the regulator’s perspective. Lastly, we numerically demonstrate the performance of the full Bayesian method on reserve estimation.

The talk is based on a joint work with Jae-Kyung Woo, Victoria Rivas and Viktor H. de la Pena.


Contributed talk: Tuesday, 14:45, Room 4

Carol Anne Troy (Da-Yeh University, Taiwan)

Auditor Choice, Insurer Characteristics, and the Property-Liability Reserve Error: A Utility Maximization Model

Estimation of the property-liability loss reserve presents tradeoffs between the needs of insurers and their auditors. This paper models the reserve estimate as an equilibrium value jointly determined by both parties (Antle & Nalebuff, 1991). Large and small auditors have distinct utility functions characterized by differing levels of uncertainty and risk aversion. The utility function, based on work by Cozzolino and Kleiman (1982), imposes asymmetric penalties for understatement and overstatement. Within this context, auditors maximize their own utility by balancing the competing requirements of accuracy and conservatism.

Using Monte Carlo simulation, this paper derives evidence from the model supporting four conclusions concerning the reported reserve estimate. First, auditors respond to asymmetric risk by using a conservative estimation strategy. Second, the optimal estimates of large auditors are less conservative than those of smaller auditors, and therefore more accurate. Third, financially weak insurers exert downward pressure on reserve estimates; however, this impact is less pronounced among large auditors. Finally, when the client makes an “atypical” hiring choice (e.g., when an insurer expected to choose a small auditor instead selects a Big 4 auditor), the auditor guards against information asymmetry by imposing more conservative estimates, with a more pronounced effect among large auditors.

Antle, R., & Nalebuff, B. (1991). Conservatism and auditor-client negotiations. Journal of Accounting Research, 29(Special Issue: Studies on Accounting Institutions in Markets and Organizations), 31-54.

Cozzolino, J., & Kleiman, N. (1982, May 23-26). A Capacity Management Model Based on Utility Theory. Paper presented at the Pricing, Underwriting and Managing the Large Risk, Casualty Actuarial Society, Palm Beach, Florida.

The talk is based on a joint work with Wenyen Hsu.


Contributed talk: Tuesday, 15:40, Room 4

Julien Trufin (Université Libre de Bruxelles, Belgium)

Bounds on concordance-based validation statistics in regression models for binary responses

Association measures based on concordance, such as Kendall's tau, Somers' delta or Goodman and Kruskal's gamma, are often used to measure explained variation in regression models for binary outcomes.

As responses only assume values in {0,1}, these association measures are constrained, which makes their interpretation more difficult as a relatively small value may in fact strongly support the fitted model. In this paper, we derive the set of attainable values for concordance-based association measures in this setting so that the closeness to the best-possible fit can be properly assessed.

The talk is based on a joint work with Michael Denuit and Mhamed Mesfioui.


Contributed talk: Monday, 11:30, Room 2

Jeffrey Tsai (National Tsing Hua University, Taiwan)

Measuring Underwriting Risk on Multivariate Loss Distributions for U.S. Property-Casualty Insurance Industry

This article evaluates the underlying risk of the U.S. property-casualty insurance industry under a by-line multivariate framework. We propose a SUR-copula model with multivariate singular spectrum analysis to measure the by-line loss ratio marginal distributions and their dependence structure across lines of business. The contemporaneous risk between lines of business is significant and contributes distinct diversification effects to the suggested RBC formula. The empirical studies also show that the VaRs are underestimated if the joint by-line loss distributions are assumed to be multivariate normal. These findings provide insightful information for insurers, regulators, and decision makers in managing their total liabilities and reserves.


Contributed talk: Tuesday, 14:45, Room 7

Spyridon M. Tzaninis (University of Piraeus, Greece)

Change of Measures for Compound Renewal Processes with Applications to Premium Calculation Principles

Given a compound renewal process S under a probability measure P we characterize all probability measures Q on the domain of P such that Q and P are progressively equivalent and S remains a compound renewal process under Q. As a consequence we prove that any compound renewal process can be converted into a compound Poisson process through change of measure, and we show how this approach is related to equivalent martingale measures and to premium calculation principles.

The talk is based on a joint work with Nikolaos D. Macheras.


Contributed talk: Monday, 14:45, Room 4

George Tzougas (London School of Economics and Political Science, United Kingdom)

Confidence Intervals of the Premiums of Optimal Bonus-Malus Systems

Bonus-Malus Systems, BMS in short, are experience rating mechanisms which impose premium surcharges (or maluses) on policyholders responsible for one or more accidents and reward policyholders who had a claim-free year with discounts (or bonuses). Optimal BMS are financially balanced for the insurer, i.e. the total amount of bonuses must be equal to the total amount of maluses, and fair for the policyholder, i.e. the premium paid by each policyholder is proportional to the risk that they impose on the pool.

However, even though the construction of optimal BMS has been a basic interest of the actuarial literature for over four decades, scientific attention has mainly focused on deriving credibility updates of the claim frequency based on the employment of an abundance of alternative parametric distributions, nonparametric distributions and advanced regression models. In this respect, a major drawback in the design of such systems has been neglected: namely, the fact that they do not give a measure of the uncertainty of the resulting premium estimates by providing a confidence interval that contains plausible values.

In this paper we extend the BMS literature by addressing the problem of building confidence intervals for the premiums determined by an optimal BMS in the following ways. Firstly, we consider a flexible class of nonparametric mixtures of Poisson distributions for assessing claim frequency. A variant of the EM algorithm adjusted for jumping between different numbers of components is proposed in order to approximate the unknown mixing, or risk, distribution based on nonparametric maximum likelihood estimation (NPMLE). The use of the nonparametric estimate of the risk distribution allows for a rich family of claim frequency distributions instead of restricting attention to particular laws such as the negative binomial distribution, which has been widely applied for modelling claim count data. On the path toward actuarial relevance the Bayesian view is taken and the NPMLE of the risk distribution is used to calculate premiums as posterior means. Also, it is shown that the NPMLE-based posterior mean claim frequency is asymptotically normal. Based on this asymptotic normality, Wald-type two-sided confidence intervals are constructed. The Wald CIs are not degenerate and are therefore more useful than the corresponding intervals based on model analogy or ad hoc reasoning. Specifically, Efron percentile bootstrap confidence intervals are investigated and compared to the Wald-type confidence intervals obtained directly from the NPML estimates. Our analysis reveals that Efron percentile bootstrap intervals on certain occasions improve the asymptotic normal approximation used by Wald intervals. The aforementioned constructions of NPMLE and bootstrap based CIs account for the uncertainty as well as the fluctuations of the individual premium estimates. In an experience ratemaking scheme the use of such intervals leaves room for the informed judgment of the actuary to select the final premiums to be charged to each policyholder, based on the fluctuations that occur equally on either side of the posterior mean claim frequency. In this respect, the insurance company can be responsive to the needs of different constituencies, such as broader economic trends in the insurance market in which it operates.
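
As a small, generic illustration of the Efron percentile bootstrap idea mentioned above, applied here to a simple posterior-mean premium under an assumed Poisson-gamma setup rather than the NPMLE machinery of the paper (all data are synthetic):

# Generic Efron percentile-bootstrap sketch for a credibility-type premium.
# Assumed setup (not the paper's NPMLE model): claim counts are Poisson with a
# gamma risk distribution across the portfolio, hyperparameters estimated by
# moments; the premium is the posterior mean frequency of one policyholder.
import numpy as np

rng = np.random.default_rng(42)
portfolio = rng.poisson(rng.gamma(2.0, 0.05, size=2000))   # one year of counts per policy
insured = np.array([0, 1, 0, 0, 2])                        # one policyholder, 5 years

def premium(counts_portfolio, counts_insured):
    m, v = counts_portfolio.mean(), counts_portfolio.var(ddof=1)
    a = m**2 / max(v - m, 1e-12)                  # method-of-moments gamma shape
    b = m / max(v - m, 1e-12)                     # method-of-moments gamma rate
    n = len(counts_insured)
    return (a + counts_insured.sum()) / (b + n)   # posterior mean claim frequency

B = 2000
boot = np.array([premium(rng.choice(portfolio, size=len(portfolio), replace=True),
                         rng.choice(insured, size=len(insured), replace=True))
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print("premium:", round(premium(portfolio, insured), 4),
      "  95% percentile bootstrap CI:", (round(lo, 4), round(hi, 4)))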

The talk is based on a joint work with Dimitris Karlis and Nicholaos Frangos.


Contributed talk: Tuesday, 13:55, Room 6

Servaas van Bilsen (University of Amsterdam, The Netherlands)

How to Invest and Draw-Down Wealth? A Utility-Based Analysis

This paper explores how Baby Boomers should invest and draw down their accumulated wealth over the rest of their lives. To answer this question we build a consumption and portfolio choice model with multiplicative internal habit formation and stochastic differential utility. We show analytically that after a wealth shock it is optimal to adjust both the level and the future growth rates of consumption, implying a gradual response of consumption to financial shocks. Furthermore, fostering the ability to keep catching up with the internal habit creates upward pressure on expected consumption growth. Welfare losses associated with popular alternative investment and draw-down strategies can be large.

The talk is based on a joint work with Roger Laeven and Lans Bovenberg.


Contributed talk: Monday, 14:45, Room 7

Eleni Vatamidou (INRIA, France)

Approximations for Gerber-Shiu type functions with two-sided jumps

In this talk, we study the Gerber-Shiu function of a two-sided Lévy risk model, where the negative jumps describe claims and the positive ones describe stochastic income. In particular, we assume that the positive jumps are of phase-type, while the negative jumps are heavy-tailed. With the aid of fluid embedding, this model can be recast as a spectrally negative Markov additive process (MAP), for which occupation densities and the scale matrix need to be evaluated. Note that closed-form expressions for the Gerber-Shiu function are available when the negative claims are also of phase-type. Therefore, by relating our model to a Lévy risk model with two-sided phase-type jumps, our aim is to derive accurate approximations for the Gerber-Shiu function. More precisely, we intend to derive a ‘matrix’ expansion of the Gerber-Shiu function by utilising perturbation analysis alongside results for spectrally negative Lévy processes, and from this to construct our approximations.

The talk is based on a joint work with Zbigniew Palmowski.


Contributed talk: Tuesday, 13:55, Room 5

Michel Vellekoop (University of Amsterdam, The Netherlands)

Dependency structures in models for human mortality

The required financial reserves for future payments in life insurance companies and pension funds strongly depend on the future number of survivors in their portfolios. Predictions for future mortality rates, and an assessment of the uncertainty in their impact on the reserves, often benefit from a decomposition of the dynamics in terms of factors which only affect the population of interest and factors which play a role in a larger population, since data from the larger population can then be used to improve the parameter estimation procedure [1]. This requires, however, a careful statistical separation of the different stochastic factors and an analysis of the associated dependency structure, see for example [2].

We propose a systematic procedure to do this, which combines classical maximum likelihood estimators with L1-regularization methods that have been introduced more recently, and investigate to what extent the use of multiple populations enhances prediction accuracy. Numerical results are given for a case study which involves a large dataset from a variety of European countries.

[1] Li, N. and Lee, R. (2005): Coherent mortality forecasts for a group of populations: An extension of the Lee-Carter method. Demography, 42(3), 575-594.
[2] Enchev, V., Kleinow, T. and Cairns, A.J.G. (2017): Multi-population mortality models: Fitting, Forecasting and Comparisons. Scandinavian Actuarial Journal, 2017(4), 319-342.


Contributed talk: Monday, 11:55, Room 4

Raluca Vernic (Ovidius University of Constanta, Romania)

Multivariate count data generalized linear models: two approaches based on Sarmanov's distribution

Since quantifying the risk of accidents is a very important aspect of pricing in insurance, there is a large amount of literature on the question “What is the accident risk of an insured?”; however, most papers approach this question from a univariate perspective, i.e., they deal with only one insurance line.

In this paper, we consider a multivariate approach by taking into account three types of accident risk and the possible dependence between them. Driven by a real data set, we propose two trivariate Sarmanov distributions with GLM marginals, hence incorporating some individual characteristics of the policyholders by means of explanatory variables. Since the data set was collected over a longer time period of 10 years, we also include the exposure at risk of each individual.

Regarding the proposed models, while the first one is simply a trivariate Sarmanov distribution with GLM marginals and exponential kernels, the second one is obtained by mixing a trivariate distribution built from independent Poisson distributions with a trivariate Sarmanov distribution having gamma marginals and exponential kernels. Moreover, these two models are also compared with the simpler trivariate Negative Binomial GLM. The challenging part of the study consisted in estimating the parameters of the two Sarmanov distributions; to this end, we propose a partial Expectation-Maximization algorithm combined with a pseudo-maximum-likelihood method.

The numerical results obtained for the real data set show the good fit of the Sarmanov distributions, which proved to be better than the simpler NB GLM. We also mention the structure of the data set, which consists of two insurance lines: home and auto, the auto line being split into material damage and bodily injury.

The presenting author gratefully acknowledges financial support from the University of Barcelona.

The talk is based on a joint work with Catalina Bolance.


Contributed talk: Tuesday, 13:55, Room 8

Richard Verrall (City, University of London, United Kingdom)

Micro models for reinsurance reserving based on aggregate data

This paper addresses a new problem in the literature, which is how to consider reserving issues for a portfolio of general insurance policies when there is excess-of-loss reinsurance. This is very important for pricing considerations and for decision making regarding capital issues. The paper sets out how this is currently often tackled in practice and provides an alternative approach using recent developments in stochastic claims reserving. These alternative approaches are illustrated and compared in an example using real data. The stochastic modelling framework used in this paper is Double Chain Ladder, but other approaches would also be possible. The paper sets out an approach which could be explored further and built on in future research.
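
For readers less familiar with the building block behind Double Chain Ladder, the sketch below computes plain chain-ladder development factors and completes a small run-off triangle; the figures are invented, and neither the Double Chain Ladder extension nor the excess-of-loss treatment of the paper is reproduced.

# Basic chain-ladder sketch on an invented cumulative run-off triangle.
import numpy as np

tri = np.array([[1000., 1800., 2100., 2200.],
                [1100., 2000., 2300., np.nan],
                [1200., 2150., np.nan, np.nan],
                [1300., np.nan, np.nan, np.nan]])   # cumulative payments

n = tri.shape[0]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

completed = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * factors[j]

latest_diagonal = tri[np.arange(n), n - 1 - np.arange(n)]
reserve = completed[:, -1].sum() - latest_diagonal.sum()
print("development factors:", np.round(factors, 4))
print("estimated outstanding reserve:", round(reserve, 1))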

The talk is based on a joint work with Valandis Elpidorou and Carolin Margraf.


Contributed talk: Wednesday, 09:50, Room 3

Thomas Viehmann (Deloitte, Germany)

Simultaneous calibration of an interest rate model to multiple valuation dates

The typical approach to market consistent valuation of life technical provisions is to adjust the parameters of capital market models in order to match model prices of selected instruments to market prices observed at the valuation date. This is in contrast to modelling in physics, for example, where models are parametrised by fundamental physical constants and only the (initial) state varies from one application to another. Taking a practitioner's view, we explore the possibility of sharpening the distinction between parameters and state for interest rate models.


Contributed talk: Monday, 16:05, Room 6

Elena Vigna (Università di Torino and Collegio Carlo Alberto, Italy)

Tail optimality and preferences consistency for intertemporal optimization problems

When an intertemporal optimization problem over a time interval $[t_0,T]$ is linear and can be solved via dynamic programming, Bellman's principle holds and the optimal control map has the desirable feature of being tail-optimal on the right tail; moreover, the optimizer keeps solving the same problem at any time $t$ with updated conditions: we will say that he is preferences-consistent.

Conversely, when an intertemporal optimization problem is non-linear and cannot be tackled via dynamic programming, Bellman's principle does not hold and, according to the existing literature, the problem gives rise to time inconsistency. Typical examples are the mean-variance problem and non-exponential discounting. Currently, there are three different ways to attack a time-inconsistent problem: (i) the precommitment approach, (ii) the game theoretical approach, (iii) the dynamically optimal approach. The three approaches coincide when the problem can be solved via dynamic programming. None of the three approaches simultaneously presents the two features of tail optimality and preferences consistency that hold for linear problems.

In this paper, given an optimization problem and the control map associated to it, we formulate the four notions of local and global tail optimality of the control map, and local and global preferences consistency of the optimizer. While the notion of tail optimality of a control map is not new in optimization theory, to the best of our knowledge the notion of preferences consistency of an optimizer is novel.

We prove that, due to the validity of Bellman's principle, in the case of a linear problem the optimal control map is globally tail-optimal and the optimizer is globally preferences-consistent. Conversely, in the case of a non-linear problem, global tail optimality and global preferences consistency cannot coexist. For the precommitment approach, there is local tail optimality and local preferences consistency. For the dynamically optimal approach, there is global preferences consistency, but not even local tail optimality. For the game theoretical approach, there is neither local tail optimality nor local preferences consistency with respect to the original non-linear problem, but there is global tail optimality and global preferences consistency with respect to a different linear problem.

This analysis should underline the key characteristics of tail optimality and preferences consistency satisfied by linear problems and their optimal solutions, and should shed light on the price to be paid in terms of tail optimality and preferences consistency with each of the three approaches currently available for time inconsistent non-linear problems.


Contributed talk: Tuesday, 13:30, Room 8

Léonard Vincent (University of Lausanne, Switzerland)

On event-based retentions and limits for XL reinsurance

In this paper, we investigate the potential of higher-order event retentions, which are particular XL covers for which the retention and the size of the layer applied to the claims vary in a pre-defined way according to the chronological order of their appearance. We show that even for slight modifications of the classical XL it is possible to improve several performance measures, such as the expected profit of the retained portfolio when taking into account the cost of capital for the required solvency capital, as well as the return on risk-adjusted capital (RORAC) of the cedent.

The talk is based on a joint work with Hansjoerg Albrecher.


Contributed talk: Tuesday, 14:45, Room 5

Chou-Wen Wang (National Sun Yat-sen University, Taiwan)

Annuity Portfolio Management with Correlated Age-Specific Mortality Rates

In this article, we propose to model the mortality dynamics for each age by a stochastic process in which the drift rate can be simply and effectively modeled as an overall time trend driving mortality changes for all ages, and the distribution of the error terms can be fitted by one of several distributions (Normal, Student t, JD, VG and NIG). We then use one-factor copula models with several distribution combinations for the factors (Normal-Normal, Normal-Student t, Student t-Normal, Student t-Student t, Skewed t-Normal and Skewed t-Student t) to capture the inter-age mortality dependence. We apply our model to managing three kinds of annuity portfolios (Barbell, Ladder and Bullet), built by using an approximation of the change in portfolio values in response to static changes in mortality rates, together with Value at Risk values for the portfolios in response to dynamic mortality changes.

The talk is based on a joint work with Tzuling Lin and Cary Chi-Liang Tsai.


Contributed talk: Wednesday, 10:15, Room 5

Jennifer L. Wang (National Chengchi University, Taiwan)

Explaining the Risk Premiums of Life Settlements

Scholars have paid attention to the determinants of rate spreads on various investment products, and this paper extends the literature to life settlements, an emerging alternative investment. We show that the premium for non-systematic mortality risk is substantial but the systematic premium is insignificant. The impact of tax on life settlements’ spreads is material. We further find that life settlements have negative betas and are quality assets to fly to in market turmoil. The proprietary information provided by medical underwriters and the surrender behavior of the underlying policyholders are also significant determinants of the rate spreads on life settlements.

The talk is based on a joint work with Ming-Hua Hsieh, Ko-Lun Kung, Jin-Lung Peng and Chenghsien Tsai.


Contributed talk: Wednesday, 09:00, Room 2

Kili Wang (Tamkang University, Taiwan)

Solution or spillover? Exploring the impact of Taiwan’s DRG payment system on the private health insurance market

In this study we investigated variation in claimed hospitalization days before and after the implementation of the diagnosis-related group (DRG) payment system. We determined that the non-DRG (NDRG) hospitalization days of people covered by high-coverage private health insurance significantly increased after the DRG system was implemented (when services were provided by district hospitals). We additionally identified the existence of a moral hazard: when Taiwan’s Ministry of Health and Welfare declared that the DRG system effectively restricts hospitalization days in the National Health Insurance system, expenditure seemed to shift to the private sector and NDRG items, instead of decreasing. The public sector solution generated a problem for the private sector, and district hospitals played a "push" role in this spillover effect.

The talk is based on a joint work with Chia-Ling Ho.


Contributed talk: Monday, 10:40, Room 3

Ruodu Wang (University of Waterloo, Canada)

Scenario-based risk evaluation and compatibility of scenarios

We aim to bridge the gap between a few practical considerations in risk measurement for internal management and external regulation. We provide a unified risk measure framework that takes into account three relevant issues: statistical and simulational tractability (which typically results in a law-based risk measure), scenario sensitivity (which typically results in a non-law-based risk measure), and robustness (which could be either law-based or not). In the course of our study, the compatibility issue of scenarios arises naturally, with rather surprising mathematical implications. This talk is based on parts of ongoing joint research projects with Damir Filipovic, Yi Shen, Jie Shen, Bin Wang, and Johanna Ziegel.


Contributed talk: Monday, 11:05, Room 3

Yunran Wei (University of Waterloo, Canada)

Characterization, Robustness and Aggregation of Signed Choquet Integrals

This article contains various results on a class of non-monotone risk functionals, called the signed Choquet integrals. A functional characterization and some theoretical properties are established, including four equivalent conditions for a signed Choquet integral to be convex. We proceed to address two practical issues recently popular in risk management, namely, various continuity properties and risk aggregation with dependence uncertainty, for signed Choquet integrals. Our results generalize in several directions those in the literature of risk functionals. From the results obtained in this paper, we see that many profound and elegant mathematical results in the theory of risk measures hold for the general class of signed Choquet integrals; thus they do not rely on the assumption of monotonicity.

The talk is based on a joint work with Ruodu Wang.


Contributed talk: Wednesday, 09:00, Room 4

Alasdair David Wilkie (InQA Limited, United Kingdom)

Improving the realism of actuarial simulations

Random simulation methods are now widely used, especially in models for the assets of insurance companies and pension funds. The simplest models might use fixed parameter values and normally distributed innovations. But we do not know the correct parameter values; we can only make estimates, based either on past data or on hypothetical assumptions. Maximum likelihood estimation allows an estimate of the covariance matrix of the parameter values to be calculated, and one can use this, with the maximum likelihood estimates as the means, to treat the parameter values as random variables with a multivariate normal distribution. But there are complications with this when the parameter values are required to be within limited ranges.
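
A minimal sketch of the parameter-uncertainty idea described above, assuming hypothetical maximum likelihood point estimates and covariance matrix for a simple AR(1)-type inflation model (the figures are placeholders, not the Wilkie model's):

# Minimal sketch: treat parameter values as multivariate normal around the MLE,
# then simulate an AR(1)-type inflation path for each parameter draw.
# The point estimates and covariance matrix below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

mle = np.array([0.03, 0.6, 0.01])                    # (mu, a, sigma): placeholders
cov = np.array([[1e-5, 0.0,  0.0],
                [0.0,  4e-3, 0.0],
                [0.0,  0.0,  1e-6]])                 # placeholder covariance of the MLE

def simulate_path(mu, a, sigma, n_years=20):
    x, path = mu, []
    for _ in range(n_years):
        x = mu + a * (x - mu) + sigma * rng.standard_normal()
        path.append(x)
    return np.array(path)

paths = []
for _ in range(1000):
    mu, a, sigma = rng.multivariate_normal(mle, cov)
    a = np.clip(a, -0.99, 0.99)      # crude handling of the restricted-range issue
    sigma = abs(sigma)               # ditto for the positivity restriction
    paths.append(simulate_path(mu, a, sigma))

paths = np.array(paths)
print("mean and 95th percentile of the year-20 rate:",
      round(paths[:, -1].mean(), 4), round(np.percentile(paths[:, -1], 95), 4))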

Many asset variables have fatter-tailed residuals than normal, and there are alternative distributions, some of which fall into what we call the series of conical distributions: normal, Laplace, hyperbolic. This gives rise to further complications, since parameters often have restricted ranges, so we need to use modified versions of the usual parameters. Further, if we adopt a non-normal distribution for the residuals, we should also use it for the maximum likelihood estimation.

We discuss these features, using as an example the very simple model for the consumer price index in the Wilkie investment model. But the principles apply to any simulation modelling used by actuaries.

The talk is based on a joint work with Sule Sahin.


Contributed talk: Monday, 11:30, Room 8

Rafal Wojcik (AIR Worldwide, United States of America)

Probabilistic aggregation of correlated catastrophe losses: A predictability study.

When aggregating catastrophe (CAT) losses at different locations to a portfolio level, it is important to account for the spatial correlation between these losses relative to the CAT model estimate. Such conditional dependency is designated as the correlation of model errors. To obtain an estimate of the compound (aggregate) loss distribution, (i) a methodology to quantify the spatial variability of CAT model errors is combined with (ii) a computationally efficient loss aggregation algorithm. Here, we use a hierarchical linear model to parameterize (i) and a mixture method, i.e., a convex sum of convolution (independent model errors) and comonotonization (maximally correlated model errors), to induce the desired correlation structure and implement (ii). The compound distribution constitutes a probabilistic forecast of CAT losses. We assess the predictability, or quality, of such probabilistic forecasts as verified by their realizations, i.e., insurance claims data collected after CAT events. We first introduce an information-theoretic measure of forecasting skill. This function, called a logarithmic scoring rule, is a modified version of relative entropy. Then, we aggregate ground-up losses induced by various past CAT events for hurricane, earthquake and flood perils. Loss analysis is performed on portfolios of several major insurance companies using logarithmic scores. To tackle the problem of sparsity of claims data, we randomly create a number of artificial portfolios (varying in size) by re-sampling the total portfolios and study the predictive distributions in each of them. The predictability scores obtained with the mixture method in (ii) are compared with those for convolutions. We also evaluate the adequacy of the correlation structure in (i) by comparing it to a more flexible yet computationally expensive process convolution approach. The latter allows the spatial dependence structure to vary as a function of location by convolving a very simple, perhaps independent, spatial process with a kernel or point spread function.
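
As a toy illustration of the mixture method in (ii) only (a convex combination of the aggregate-loss cdf under independence, obtained by convolution, and under comonotonicity, obtained by quantile addition), with invented weights and distributions and without the hierarchical error model in (i):

# Toy sketch of the mixture aggregation for two discretised location losses.
import numpy as np

grid = np.arange(0, 21)                       # loss grid 0..20 for each location
p = np.exp(-0.3 * grid); p /= p.sum()         # pmf of location-1 loss (placeholder)
q = np.exp(-0.2 * grid); q /= q.sum()         # pmf of location-2 loss (placeholder)

# (a) independent model errors: pmf of the sum via convolution
pmf_ind = np.convolve(p, q)
s_grid = np.arange(pmf_ind.size)
cdf_ind = np.cumsum(pmf_ind)

# (b) maximally correlated model errors: comonotonic sum via quantile addition
u = (np.arange(1, 100_001) - 0.5) / 100_000
qx = grid[np.searchsorted(np.cumsum(p), u)]
qy = grid[np.searchsorted(np.cumsum(q), u)]
com_sum = qx + qy
cdf_com = np.array([(com_sum <= s).mean() for s in s_grid])

# (c) mixture: convex combination of the two aggregate cdfs
w = 0.7                                        # weight on the independent part (assumed)
cdf_mix = w * cdf_ind + (1 - w) * cdf_com

var_99 = s_grid[np.searchsorted(cdf_mix, 0.99)]
print("99% VaR of the aggregate loss under the mixture:", var_99)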

The talk is based on a joint work with Charlie Wusuo Liu and Jayanta Guin.


Contributed talk: Tuesday, 15:40, Room 7

Jeff Wong (University of Waterloo, Canada)

Parisian-type Ruin for Spectrally Negative Lévy Processes with Poisson Observations

In this paper, a model with a hybrid observation mechanism to monitor an insurance surplus process is proposed. At the same time, it blends the ultimate bankruptcy and Parisian ruin concepts in defining ruin. Specifically, the business is inspected at the arrival epochs of a Poisson process with rate λ1 unless a negative surplus is detected, in which case the observation rate is increased to λ2 while an exponential Parisian clock sets in at the same time. Ruin is defined as the moment when a surplus lower than the bankruptcy level is detected, or the moment when the Parisian clock rings before a positive surplus is detected, whichever comes first. For a spectrally negative Lévy risk process (SNLP), it turns out that the potential measures of an SNLP killed on exiting an interval under Poisson discrete observations, a natural counterpart of the classical potential measures with continuous observations, play an indispensable role in the analysis of risk quantities pertaining to the ruin time. Using the new potential measures, a closed-form expression of the Gerber-Shiu type function in relation to the ruin time is derived.

The talk is based on a joint work with David Landriault, Bin Li and Di Xu.


Contributed talk: Tuesday, 10:40, Room 2

Chin-Wen Wu (Nanhua University, Taiwan)

Automatic Trading Strategies with Rule-Based Technical Pattern Recognition

An aging society, characterized by a growing proportion of the retired to the active working population, is primarily due to either declining fertility rates or mortality improvement. Population aging affects the fiscal sustainability of pension funds. A partial solution to this problem is to use automatic trading strategies to improve the performance of pension funds. Consequently, in this paper, focusing on systematic and automatic trading strategies with rule-based technical pattern recognition, we apply this method to a large number of stocks in the Taiwanese stock market, with a lengthy sample period running from 1990 to 2016, to evaluate the effectiveness of technical analysis. Our empirical evidence shows that several rule-based technical patterns could well prove to be profitable for Taiwanese stocks after accounting for transaction costs of 50 basis points, and that their performance can be further improved by controlling the volatility and momentum factors when the buying signals are generated. Finally, the performance of automatic trading strategies with rule-based technical pattern recognition is also robust to alternative top and bottom formations and to a variety of control variables.

The talk is based on a joint work with Chou-Wen Wang.


Contributed talk: Monday, 11:55, Room 8

Yang-Che Wu (Feng Chia University, Taiwan)

Feasibility of Equity–Efficiency Trade-off in the Natural Catastrophe Insurance Market

This study establishes a stakeholder framework for the natural catastrophe insurance market: insurers charge policyholders the full insurance premium and pay the public catastrophe insurance scheme (PCIS) contributions for the contingent bailout. The government subsidises policyholders and taxes insurers. Furthermore, a series of accounting procedures is developed to illustrate how the stakeholders’ cash flows change. Numerical analysis reveals that both the PCIS and the subsidy policy can achieve long-term self-financing under specific tax rates, contribution rates, and subsidy conditions. The results show that the short-term inequity of favouring insurers and policyholders can promote long-term equity and efficiency for all stakeholders.


Contributed talk: Monday, 11:55, Room 3

Yue Xing (The Chinese University of Hong Kong, Hong Kong S.A.R. (China))

Importance Sampling based Simulation for Non-linear Portfolios’ Risk Measures

Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are two standard risk measures adopted in both the financial and insurance industries. Estimating tail quantiles of portfolios is known to be a challenging problem, with two difficulties being especially prominent: non-linear structures and the high dimensionality of the underlying assets. To tackle problems of this sort, we propose the use of variable selection and a variance reduction technique within the existing simulation-based framework. In particular, we study importance sampling for high-dimensional distributions and extend the existing single-direction mean-shift approach to CVaR; theoretical results are also presented to verify the effectiveness of the proposed methods.
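
A generic one-dimensional illustration of the mean-shift importance sampling idea (not the proposed high-dimensional, variable-selection-based method): estimating a small tail probability of a normal loss by sampling from a shifted normal and reweighting with the likelihood ratio.

# Generic mean-shift importance sampling sketch for P(L > c) with L ~ N(0,1);
# the shift theta = c is a common heuristic choice.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
c, n = 4.0, 100_000
theta = c

# Plain Monte Carlo
x = rng.standard_normal(n)
mc = (x > c).mean()

# Importance sampling: sample from N(theta, 1), reweight by the likelihood ratio
y = theta + rng.standard_normal(n)
weights = np.exp(-theta * y + 0.5 * theta**2)      # phi(y) / phi(y - theta)
is_est = ((y > c) * weights).mean()

print("true value:", norm.sf(c))
print("plain MC estimate:", mc, "  IS estimate:", is_est)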

The talk is based on a joint work with Tony Sit and Hoi Ying Wong.


Contributed talk: Tuesday, 13:55, Room 2

Fan Yang (University of Waterloo, Canada)

Testing the multivariate regular variation model for extreme risks

Extreme risks in insurance and finance have the features of heavy-tailedness and strong dependence. Multivariate regular variation (MRV) is a semi-parametric model which imposes no restriction in the normal range. It has heavy-tailed marginals and is flexible in capturing dependence structures. Typical examples include the multivariate t distribution, multivariate stable distributions and copulas, such as Archimedean and extreme value copulas. However, given a data set, testing whether an MRV model fits can be very complicated. This paper proposes an easily implemented approach to test the MRV model. Extensive simulation studies are provided. This work can be regarded as an extension of the test of whether the tail index is constant for non-identically distributed observations in Einmahl et al. (2016).

Reference
Einmahl, J. H., de Haan, L., & Zhou, C. (2016). Statistics of heteroscedastic extremes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(1), 31-51.
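
As a hedged illustration of one ingredient of such tail analyses (tail-index estimation, not the proposed MRV test itself), a standard Hill estimator can be computed as follows on synthetic Pareto data:

# Standard Hill estimator of the tail index on synthetic heavy-tailed data.
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 3.0
x = 1.0 + rng.pareto(alpha_true, size=10_000)   # classical Pareto(alpha_true) sample

def hill(data, k):
    xs = np.sort(data)[::-1]                    # descending order statistics
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

for k in (50, 200, 500):
    print("k =", k, "  Hill estimate of the tail index:", round(hill(x, k), 3))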

The talk is based on a joint work with Chen Zhou.


Contributed talk:

Lin Yang (Xi'an Jiaotong-Liverpool University, China, People's Republic of)

Robust H-infinity control for Premium-Reserve models in a stochastic nonlinear discrete-time varying framework

The premium pricing process and the medium- and long-term stability of the reserve policy under conditions of uncertainty present very challenging issues for the insurance world. Following [1, 2, 3], the paper investigates the robustness and stabilization of a reserve-premium system, as well as the insurer's strategy over a period. Analytically, the current work develops a stochastic nonlinear model in a discrete-time varying framework subject to Lipschitz-type conditions. A robust H-infinity controller for the premium-reserve process is designed to guarantee the stability of the system under structured parameter uncertainties. In our case, as an extension of the classical quadratic condition, one-sided Lipschitz conditions are considered and a nonconvex feasibility problem is solved. Finally, a numerical example is presented to illustrate the applicability of the theoretical results.

[1] Pantelous, A.A. and Papageorgiou, A. (2013): On the robust stability of pricing models for non-life insurance products. European Actuarial Journal, 3(2), 535-550.
[2] Pantelous, A.A. and Yang, L. (2014): Robust LMI stability, stabilization and H-infinity control for premium pricing models with uncertainties into a stochastic discrete-time framework. Insurance: Mathematics and Economics, 59, 133-143.
[3] Yang, L., Pantelous, A.A. and Assa, H. (2016): Robust stability, stabilisation and H-infinity control for premium-reserve models in a Markovian regime switching discrete-time framework. ASTIN Bulletin, 46(3), 747-778.

The talk is based on a joint work with Rong Li and Athanasios Pantelous.


Contributed talk: Monday, 13:30, Room 2

Sharon Yang (National Central University, Taiwan)

Systematic Risk of Housing Price and Its Impact on Valuation and Risk Analysis for a Portfolio of Home Equity Conversion Mortgages

The U.S. Home Equity Conversion Mortgage (HECM) program uses mortgage insurance to manage the lender’s inherent risk. The program operates on a nationwide basis and pools the risk across different cities. To evaluate the sustainability of the HECM program, a valuation and risk analysis framework that accounts for the dependence structure of the housing price dynamics is essential. We therefore propose a multi-city housing price model based on the dynamic copula approach. The dependence structure is investigated empirically, and our results strongly indicate that housing data exhibit dependence risk and show time-varying correlation across cities. As a result, lenders could significantly underestimate the risk of the HECM program if they ignore the dependence structure of housing prices across cities, especially when measuring tail risk with value at risk and conditional tail expectations. In addition, among all dependence structures, the time-varying t dependence structure has the most significant effect on the risk of the HECM program.

The talk is based on a joint work with Jr-Wei Huang.


Contributed talk: Monday, 11:05, Room 8

Min-Teh Yu (Providence University, Taiwan)

Prediction Markets for Catastrophe Risk: Evidence of Catastrophe Bond Markets

This paper examines the efficiency of prediction markets by studying the market for catastrophe (CAT) bonds, an existing large-scale financial market, whereas previous studies of prediction markets have used small-scale observational field data or experiments. We collect actual catastrophe loss data, match the defined trigger events of each CAT bond contract, and then employ an empirical pricing framework to obtain the excess CAT premiums in order to represent the market-based forecasts. Our results indeed show that the market-based forecasts have more significant predictive content for future CAT losses than professional forecasts that use natural catastrophe risk models. Although the predictive information for CAT events is specialized and complex, our evidence supports the view that CAT bond markets are successful prediction markets which efficiently aggregate information about future CAT losses. Our results also highlight the discovery role of future risk in CAT bond spreads and provide a new explanation for excess CAT bond spreads.

The talk is based on a joint work with Hwei-Lin Chaung and Yang Zhao.


Contributed talk: Tuesday, 16:05, Room 9

CANCELLED: Zhongyi Yuan (The Pennsylvania State University, United States of America)

Insurers’ Contingent Convertibles with Regulation Consistent Triggers

After the financial system failed the stress test of the 2007-2010 crisis, its stability has caused great concern among governments, the financial industry, and the research community. An early proposal of using contingent capital to enhance the stability of the banking system has since received serious consideration. Discussions on pivotal issues of using contingent capital, proposals of variants of contingent convertibles (CoCos), and pricing and hedging techniques have started to appear in the recent literature, with a focus on issuance by banks.

In this paper, we consider a CoCo bond issued by an insurance company, as more such issuances are now anticipated by the market. While the CoCo currently issued by the insurance industry uses the solvency ratio as the trigger, we propose a new trigger that is both forward looking and consistent with the current insurance regulation. We then price the CoCo using a canonical approach, and also discuss the use of stratification as a variance reduction method in the pricing exercise.

The talk is based on a joint work with Dan Zhu.


Contributed talk: Monday, 13:30, Room 8

Yuan Yue (University of Amsterdam, The Netherlands)

Earthquake risk premia in property prices: Evidence from five Japanese cities

This paper analyzes the impact of long-run and short-run earthquake risk on Japanese property prices. We use a rich panel dataset of housing characteristics, ward attractiveness information, macroeconomic variables, seismic hazard data and historical earthquake occurrences, supplemented with objective short-run earthquake probabilities derived from a self-exciting Epidemic Type Aftershock Sequence (ETAS) model. We develop a hedonic price model with a multivariate error components structure and design an associated maximum likelihood estimation procedure. Our approach allows us to identify the total compensation for earthquake risk embedded in property prices and to decompose it into pieces stemming from objective long-run risk, objective short-run risk, and subjective short-run risk.

The talk is based on a joint work with Masako Ikeuji, Roger J. A. Laeven and Jan R. Magnus.


Contributed talk: Tuesday, 11:30, Room 4

Fei Lung Yuen (Hang Seng Management College, Hong Kong S.A.R. (China))

On the Uncertainty of Individual Risk

Risk measures are an important tool for risk management, control, and many other decision-making processes. They usually represent the loss under extreme adverse conditions, as with value at risk (VaR). One practical issue for risk management is to determine a robust risk measure under uncertainty about the loss distribution. In this presentation, we propose a model for the distribution uncertainty of individual risk, which is used to identify a more robust VaR. The properties of the worst scenario and the associated VaR will be discussed.

The talk is based on a joint work with Ka Chung Cheung.


Contributed talk: Monday, 16:30, Room 1

Ana Zalokar (University of Primorska, Andrej Marušič Institute, Slovenia)

Optimal switching among hedging strategies in equity-linked products

Equity-linked insurance policies are offered by most insurance companies. In many cases such contracts have guarantees like a minimum return over the lifetime of the policy. Liabilities arising from such guarantees must be hedged by suitable investments. There are restrictions on hedging strategies in many jurisdictions but with the more flexible regulatory framework of Solvency II there are alternative ways to hedge certain guaranteed products using derivative securities. In this talk we investigate when it is optimal to switch from one hedging strategy to the other in the case when options are valued in the framework of the Cox-Ross-Rubinstein model.
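
As a minimal, generic sketch of valuation in the Cox-Ross-Rubinstein framework referred to above (a plain European put with placeholder parameters; the switching analysis of the talk is not reproduced):

# Minimal Cox-Ross-Rubinstein sketch: European put value by backward induction.
# All parameters are placeholders.
import numpy as np

S0, K, r, sigma, T, n = 100.0, 100.0, 0.01, 0.2, 1.0, 200

dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
q = (np.exp(r * dt) - d) / (u - d)            # risk-neutral up probability
disc = np.exp(-r * dt)

j = np.arange(n + 1)                          # number of up moves at maturity
S_T = S0 * u**j * d**(n - j)
value = np.maximum(K - S_T, 0.0)              # terminal payoffs

for _ in range(n):                            # backward induction through the tree
    value = disc * (q * value[1:] + (1.0 - q) * value[:-1])

print("CRR value of the European put:", round(float(value[0]), 4))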

The talk is based on a joint work with Mihael Perman.


Contributed talk: Wednesday, 09:50, Room 5

Ling Zhang (Guangdong University of Finance, China)

Robust portfolio choice for a DC pension plan with stochastic income and interest rate

This paper considers a robust portfolio choice problem for a defined contribution (DC) pension plan with stochastic income and stochastic interest rate. The investment objective of the pension plan is to maximize the expected utility of the wealth at the retirement time. We assume that the financial market consists of a stock, a zero-coupon bond and a risk-free asset, and that the member of the DC pension plan is ambiguity-averse, which means that the member is uncertain about the expected return rates of the bond and the stock. Meanwhile, the member's ambiguity-aversion levels toward these two financial assets are quite different. The closed-form expressions of the robust optimal investment strategy and the corresponding value function are derived by adopting the stochastic dynamic programming approach. Furthermore, a sensitivity analysis of the model parameters on the optimal investment strategy is presented. We find that the member's aversion to model ambiguity increases her hedging demand and has a remarkable impact on the optimal investment strategy. Moreover, we demonstrate that ignoring model uncertainty leads to significant utility loss for the ambiguity-averse member (AAM), and that the model uncertainty about the stock dynamics has a greater effect on the outcome of the investment than that about the bond.

Keywords: DC pension plan; Robust control; Utility loss; Stochastic interest; Stochastic income.

The talk is based on a joint work with Yujing Chen and Yongzeng Lai.


Contributed talk: Tuesday, 16:55, Room 5

Ying Zhang (Simon Fraser University, Canada)

A Multi-Dimensional Bühlmann Credibility Approach to Modeling Multi-Population Mortality Rates

In this talk, we first propose a multi-dimensional Bühlmann credibility approach to forecasting mortality rates for multiple populations, and then compare the forecasting performance of the proposed approach with that of the joint-k, co-integrated and augmented common factor Lee-Carter models. The model is applied to mortality data for both genders of several developed countries, for a given age span and a wide range of fitting year spans. Empirical results show that the proposed credibility approach contributes to more accurate forecasting performance, measured by MAPE (mean absolute percentage error), than that based on the Lee-Carter model.
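
For readers less familiar with the credibility building block, the sketch below computes classical one-dimensional Bühlmann credibility factors from a small synthetic panel of population-specific observations; the multi-dimensional extension of the talk is not reproduced.

# Classical (one-dimensional) Buhlmann credibility sketch on a synthetic panel
# y[i, t] of observations for several populations.
import numpy as np

rng = np.random.default_rng(0)
n_pop, n_years = 5, 15
true_means = rng.normal(-0.02, 0.005, size=n_pop)          # population-specific levels
y = true_means[:, None] + rng.normal(0.0, 0.01, size=(n_pop, n_years))

ind_means = y.mean(axis=1)
overall   = ind_means.mean()
v_hat = y.var(axis=1, ddof=1).mean()                        # expected process variance
a_hat = max(ind_means.var(ddof=1) - v_hat / n_years, 1e-12) # variance of hypothetical means

k = v_hat / a_hat
Z = n_years / (n_years + k)                                 # Buhlmann credibility factor
cred = Z * ind_means + (1.0 - Z) * overall

print("credibility factor Z:", round(Z, 3))
print("credibility estimates:", np.round(cred, 4))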

The talk is based on a joint work with Cary Chi-Liang Tsai.


Contributed talk: Monday, 16:55, Room 7

Yinglin Zhang (LMU Munich, Germany)

Robust reduced-form framework

The first part of this talk deals with the pricing and hedging problem for payment streams under model uncertainty and establishes several equivalent versions of robust superhedging duality with a generic family of possibly nondominated probability measures. In the second part, we construct a consistent robust framework which extends the classical reduced-form setting, applicable to both credit and insurance market. A consistent robust conditional expectation in this context is explicitly defined and the superhedging problem in this framework is studied as well. This work extends the robust framework for financial market to credit and insurance markets.

The talk is based on a joint work with Francesca Biagini.


Contributed talk: Monday, 16:05, Room 7

Yiying Zhang (The University of Hong Kong, Hong Kong S.A.R. (China))

Ordering the Largest Claim Amounts and Ranges from Two Sets of Heterogeneous Portfolios

In this talk, we discuss the ordering properties of the largest claim amounts and sample ranges arising from two sets of heterogeneous portfolios. First, some sufficient conditions are provided in the sense of the usual stochastic ordering to compare the largest claim amounts from two sets of independent or interdependent claims. Second, comparison results on the largest claim amounts in the sense of the reversed hazard rate and hazard rate orderings are established for two batches of heterogeneous independent claims. Finally, we present sufficient conditions to stochastically compare sample ranges from two sets of heterogeneous claims by means of the usual stochastic ordering. Some numerical examples are also given to illustrate the theoretical findings. The results established here not only extend and generalize those known in the literature, but also provide insight that will be useful for setting the annual premiums of policyholders.
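
For readers less familiar with the orderings used above, the standard (textbook) definitions are
\[
X\le_{\mathrm{st}}Y \iff \Pr(X>t)\le\Pr(Y>t)\ \text{for all }t,\qquad
X\le_{\mathrm{hr}}Y \iff \frac{\Pr(Y>t)}{\Pr(X>t)}\ \text{is increasing in }t,
\]
\[
X\le_{\mathrm{rh}}Y \iff \frac{\Pr(Y\le t)}{\Pr(X\le t)}\ \text{is increasing in }t.
\]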

The talk is based on a joint work with Narayanaswamy Balakrishnan and Peng Zhao.


Contributed talk: Wednesday, 09:25, Room 2

CANCELLED: Lili Zheng (Central University of Finance and Economics, China)

Excess demand and supplier-induced demand for social health insurance: Evidence from China

The social health insurance market exhibits significant information asymmetry, which generates both excess demand and supplier-induced demand. We employ a social health insurance model to examine the effect of both simultaneously. First, we test excess demand and supplier-induced demand from the perspective of demand and supply variables, using different variables in the same model, including the four-part model. Second, we define characteristic variables of health insurance according to the different types of social health insurance in China, in contrast to the dummy variables used in the existing literature. Third, we distinguish excess demand from the release of previously unmet demand in medical expenditure, and we also distinguish induced supply from an accessibility effect, which makes our conclusions more robust.

This article examines demand in a sample from the China Health and Nutrition Survey (CHNS) database between 1989 and 2011, covering 10018 observations, and tests for excess demand (ED) and supplier-induced demand (SID). We examine whether individuals with health insurance exhibit more moral hazard. Using medical expenses as a proxy for demand, and the numbers of medical institutions and physicians as proxies for SID, we find significant evidence that insurance copayment ratios affect medical expenditure: medical expenses increase by 10.1% when the health insurance copayment ratio increases by 1%, with corresponding figures of 11.9% for outpatient and 12.8% for inpatient care. Outpatient medical expenditure per capita increases by 10.4% and inpatient medical expenditure per capita by 5% when the number of health institutions per one hundred thousand people increases by 1%, while outpatient medical expenditure per capita increases by 9% and inpatient expenditure per capita by 0.2% as the number of physicians per thousand population increases.

A further test on excess demand shows that there is excess demand caused by medical insurance when self-paid medical expenses amount to less than 40% of income, whereas when self-paid medical expenses exceed 40% of income the growth in health spending mainly reflects the release of previously unmet demand. A further test on SID shows that SID is present in regions rich in medical resources, while an accessibility effect dominates in regions with poor medical resources.

Our study provides a theoretical and empirical basis for further improving China's social health insurance. First, the health insurance copayment has a significant impact on excess demand, so a reasonable copayment ratio should be set. Second, SID has a significant impact on medical expenditure, so supplier-induced demand should be addressed by designing policies that influence medical providers' behavior. Lastly, medical insurance institutions play an important role in the reform: China should not only strengthen the construction of medical insurance institutions but also improve the negotiation mechanism of health insurance.

The talk is based on a joint work with Hua Cen.


Contributed talk: Tuesday, 16:30, Room 6

Jinxia Zhu (The University of New South Wales, Australia)

Dividend optimization for a linear diffusion model with time inconsistent preferences

We investigate the optimal dividend control problem for a general linear diffusion model when the management uses non-constant discounting. Non-constant discounting leads to time inconsistency, and therefore optimal dividend distribution strategies are not implementable. So instead of studying "optimal" dividend strategies, we employ a game-theoretic approach and look for subgame perfect Markov equilibrium (PME) strategies. We show that a barrier strategy with an optimal barrier is a PME strategy, and that this barrier is lower than the barrier of the optimal strategy in the corresponding time-consistent problem. We also show that in some cases an optimal barrier does not exist, so that the widely used barrier strategies are no longer solutions.

The talk is based on a joint work with Tak Kuen Siu and Hailiang Yang.


Contributed talk: Tuesday, 13:30, Room 7

Wei Zhu (University of Liverpool, United Kingdom)

A first application of Fractional Differential Equations in Risk Theory

In risk theory, the classical risk model assumes that the waiting times between claims are exponentially distributed. This work generalises the classical risk model to a gamma-time risk model and a fractional Poisson risk model. In both cases, the ruin probabilities satisfy corresponding fractional integro-differential equations, which have explicit solutions under some assumptions on the claim size distributions.
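
As background (this is the classical, non-fractional case), in the compound Poisson model with premium rate $c$, claim intensity $\lambda$ and claim size distribution $F$, the survival (non-ruin) probability $\varphi(u)$ satisfies the first-order integro-differential equation
\[
c\,\varphi'(u)=\lambda\,\varphi(u)-\lambda\int_{0}^{u}\varphi(u-x)\,dF(x);
\]
the gamma-time and fractional Poisson models of the talk replace this by fractional-order analogues.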

The talk is based on a joint work with Corina Constantinescu.


Contributed talk: Tuesday, 11:30, Room 5

Jonathan Ziveyi (UNSW Sydney, Australia)

Cohort and Value-Based Multi-Country Longevity Risk Management

Longevity risk management for guaranteed lifetime income streams requires consideration of both interest rate and mortality risks. This paper develops a cohort-based affine term structure model for multi-country mortality developments and uses an arbitrage-free multi-country Nelson-Siegel model for the dynamics of interest rates. These are used to construct value-based longevity indexes for multiple cohorts in two different countries that take into account the major sources of risk impacting life insurance portfolios: mortality and interest rates. Index-based longevity hedging strategies have liquidity and cost benefits but are exposed to basis risk. Graphical risk metrics provide valuable visual demonstrations of the relationship between an insurer's portfolio and hedging strategies. We demonstrate the application and effectiveness of the value index for longevity risk management between two countries with the aid of graphical basis risk metrics. We use Australia and the U.K. as domestic and foreign countries, with both interest rate and mortality risk, and the male populations of the Netherlands and France, with common interest rates so that basis risk arises only from differences in mortality.

The talk is based on a joint work with Michael Sherris and Yajing Xu.


Contributed talk: Tuesday, 11:30, Room 7

Pierre Zuyderhoff (Université Libre de Bruxelles, Belgium)

Some comparison results for finite time ruin probabilities in the classical risk model

In this talk, we aim at showing how an ordering of claim amounts can influence finite time ruin probabilities. The influence of claim sizes on finite time ruin probabilities has been very little studied so far. A notable exception is the paper of De Vylder and Goovaerts (1984) who showed that contrary to the infinite time case, a more dangerous claim amount in the convex order sense does not necessarily imply larger ruin probabilities over finite time horizons.

In this talk, we go further in the analysis of the possible influence of the claim amounts on the finite time ruin probabilities within the compound Poisson risk model. The problem is examined under several sets of conditions. This primarily covers the cases where the initial reserve and/or the time horizon are very small or large.

To begin with, we obtain a general comparison result for the stop-loss transform of non-ruin probabilities. Such a result gives a partial perspective on the comparison of the non-ruin probabilities themselves. We then bring some complements to the analysis made by De Vylder and Goovaerts (1984). These concern the special cases where the initial reserve is null or the time horizon is very small. We next establish an asymptotic comparison of ruin probabilities as the initial reserve becomes large. Our study is inspired by the approach of asymptotic ordering developed by Klüppelberg (1993). Finally, we derive a comparison result for the time-dependent Lundberg exponent. This enables us to discuss the situation where the initial reserve and the time horizon are both large.
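
For reference, the finite-time ruin probability and the convex order mentioned above are defined in the usual way:
\[
\psi(u,T)=\Pr\Bigl(\inf_{0\le t\le T}\bigl(u+ct-S_{t}\bigr)<0\Bigr),\qquad
X\le_{\mathrm{cx}}Y \iff \mathbb{E}[\phi(X)]\le\mathbb{E}[\phi(Y)]\ \text{for all convex }\phi,
\]
where $u$ is the initial reserve, $c$ the premium rate and $S_{t}$ the aggregate claims process of the compound Poisson model.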

References:
De Vylder, F. E., Goovaerts, M. (1984). Dangerous distributions and ruin probabilities in the classical risk model. ICA Sydney.
Klüppelberg, C. (1993). Asymptotic ordering of risks and ruin probabilities. Insurance: Mathematics and Economics, 12(3), 259-264.

The talk is based on a joint work with Claude Lefevre and Julien Trufin.


 

Poster Presentations


Poster presentation:

Marcin Bartkowiak (Poznan University of Economics and Business, Poland)

Fuzzy mortality models

People are living longer. The observed constant improvements in longevity bring new issues and challenges at various levels: social, political, economic and regulatory, to mention only a few. The large increases in life expectancy were unexpected and, as a result, have often been underestimated by actuaries and insurers. It has therefore become more and more important to find efficient models for mortality rates. One of the most influential approaches to modelling mortality rates is the stochastic mortality model proposed by Lee and Carter (1992). The Lee-Carter model has inspired numerous variants and extensions. Koissi and Shapiro (2006) formulated a fuzzy version of the Lee-Carter model, where the model coefficients are assumed to be fuzzy numbers with symmetric triangular membership functions; they applied the Diamond distance to obtain least-squares estimates of the model parameters. However, no effective optimization algorithm exists, because the structure of the optimization task does not allow the use of derivative-based solution algorithms. Szymanski and Rossa (2014) improved the fuzzy Lee-Carter model by applying the oriented fuzzy numbers introduced by Kosinski et al. (2003). Nevertheless, the Lee-Carter model is very simple and cannot explain all patterns in mortality rates. We therefore introduce a cohort effect into the fuzzy formulation of the mortality model.
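
For orientation, the (crisp) Lee-Carter model and a standard cohort extension (in the spirit of Renshaw and Haberman) read
\[
\ln m_{x,t}=a_{x}+b_{x}\,k_{t}+\varepsilon_{x,t},\qquad
\ln m_{x,t}=a_{x}+b^{(1)}_{x}\,k_{t}+b^{(2)}_{x}\,\gamma_{t-x}+\varepsilon_{x,t},
\]
where $m_{x,t}$ is the central death rate at age $x$ in year $t$ and $\gamma_{t-x}$ is the cohort effect; in the fuzzy formulations discussed above the coefficients become fuzzy numbers.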

This paper examines the efficiency of fuzzy mortality models in terms of both goodness-of-fit to historical population mortality rates and the quality of forecasts.


Poster presentation:

Lluís Bermúdez (University of Barcelona, Spain)

Modelling work disability days data for workers compensation insurance

Workers compensation insurance provides compensation for a portion of the income workers lose while they are unable to return to work, functioning in this case as a form of disability insurance. For work disability days data, the frequency distribution exhibits spikes at multiples of 5, 7 and 30 days, reflecting perhaps the different time scales used by doctors when deciding on the number of days of sick leave for workers. In this paper we present a regression model that appropriately fits data with many spikes at certain values. A new regression model, based on finite mixtures of multiple Poisson distributions of different multiplicities, is proposed to fit this kind of data. A numerical application with a real dataset concerning the length, measured in days, of inability to work after an accident is treated. The main finding is that the model provides a very good fit when working-week, calendar-week and month multiplicities are taken into account.
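
The paper's exact specification is not reproduced here; the sketch below (in Python, with purely illustrative weights and rates) only conveys the idea of a finite mixture in which some components place mass exclusively on multiples of 5, 7 or 30 days.

    import numpy as np
    from scipy.stats import poisson

    def mixture_pmf(y, weights, lambdas, multiplicities):
        """P(Y = y) for a finite mixture of 'multiple Poisson' components.

        Component j puts mass only on multiples of multiplicities[j]:
        if Y_j = m_j * N_j with N_j ~ Poisson(lambdas[j]), then
        P(Y_j = y) = Poisson(y / m_j; lambdas[j]) when m_j divides y, else 0.
        """
        p = 0.0
        for w, lam, m in zip(weights, lambdas, multiplicities):
            if y % m == 0:
                p += w * poisson.pmf(y // m, lam)
        return p

    # Illustrative mixture: a baseline daily component (multiplicity 1) plus
    # components concentrated on working weeks, calendar weeks and months.
    weights        = [0.4, 0.25, 0.2, 0.15]
    lambdas        = [6.0, 2.0, 1.5, 1.0]      # purely illustrative values
    multiplicities = [1, 5, 7, 30]

    pmf = [mixture_pmf(y, weights, lambdas, multiplicities) for y in range(0, 61)]
    print(np.round(pmf, 4))  # spikes appear at multiples of 5, 7 and 30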

The talk is based on a joint work with Dimitris Karlis and Miguel Santolino.


Poster presentation:

Patricia Carracedo (Universitat Politècnica de València, Spain)

Detecting spatio-temporal mortality clusters of European countries by sex and age

Mortality has decreased in European Union (EU) countries during the last century. Despite these similar trends, there are still considerable differences in the levels of mortality between Eastern and Western European countries. There are many descriptive studies that detect differences of mortality in Europe, but none of them quantifies the differences in mortality by age and sex and checks, in turn, their significance. Thus, the objective of this study is to quantify the dynamics of mortality in Europe and detect significant clusters of European countries with similar mortality, applying spatio-temporal methodology. In order to quantify mortality and carry out comparative studies between European countries, the most suitable measure given the available information, the Comparative Mortality Figure (CMF), was used. To detect significant clusters of European countries and to check their results, spatio-temporal methodology was used; it takes into account two factors: time and the neighbourhood relationships between countries.
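
One common definition of the Comparative Mortality Figure, which standardises a country's age-specific rates on a common standard population, is (the exact variant used in the study may differ)
\[
\mathrm{CMF}=\frac{\sum_{x}N^{s}_{x}\,m_{x}}{\sum_{x}N^{s}_{x}\,m^{s}_{x}},
\]
where $m_{x}$ are the age-specific death rates of the country under study, $m^{s}_{x}$ those of the standard population and $N^{s}_{x}$ the standard population's age structure.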

Results of this study quantify the differences in CMF between European countries by age and sex and confirm that they are significant. Two significant clusters are observed for people older than 64: a high-mortality cluster formed by Eastern European countries (Poland, Lithuania, Latvia, Estonia, Ukraine, Belarus, Slovakia and Hungary) and a low-mortality cluster composed of Western countries (the United Kingdom, Austria, Spain, Italy, France, Switzerland, Germany, Luxembourg, Portugal and Belgium). In contrast, for ages below or equal to 64 only the significant high-mortality cluster formed by Eastern European countries was observed. In addition, the space-time interaction between the 26 studied European countries during the period 1990-2012 was confirmed. For this reason, the countries that make up the different clusters vary depending on the year considered, in addition to age and sex.

The talk is based on a joint work with Ana Debon, Adina Iftimi and Francisco Montes.


Poster presentation:

Linus Fang-Shu Chan (Soochow University, Taiwan)

Reinsurance Arrangement and Reinsurance Counterparty Risk: Evidence from Taiwan Property-Liability Insurance Industry

Insurers use reinsurance to transfer the risks they undertake in respect of policies sold, but they inevitably expose themselves to counterparty risk in Taiwan, where the property-liability insurance market is thin. However, the extant literature does not seem to have explored the determinants of reinsurance arrangements and reinsurance counterparty risk, due to data limitations. This study analyzes reinsurance transaction information of property-liability insurance companies in Taiwan from 2005 to 2007 to determine whether firm characteristics affect reinsurance arrangements and the reinsurance counterparty risk they involve. First, the relationships between firm characteristics and the proportions of premium ceded to professional reinsurers, foreign reinsurers, the largest reinsurers, and the top five and top ten reinsurers, relative to total insurance premium, are analyzed. Furthermore, we analyze the relationship between firm characteristics and several proxies of reinsurance counterparty risk, such as reinsurance premium concentration, the average rating of counterparties and the regional concentration of counterparties. This study is expected to fill the gap in the literature on reinsurance arrangements and reinsurance counterparty risk.

The talk is based on a joint work with Jin-Lung Peng.


Poster presentation:

Chin-Chih Chen (Feng Chia University, Taiwan)

Pricing and hedging Deposit Insurance with Moral Hazard by Credit Default Swaps

Moral hazard may emerge as a result of deposit insurance schemes that guarantee all deposits, because bank managers and depositors bear no consequences for banks' risk-taking in the pursuit of higher yields. In this paper, we propose a method for modeling moral hazard with deposit rate spreads and quantify the impact of moral hazard on deposit insurance premiums. We provide a closed-form solution for deposit insurance premiums that incorporates moral hazard, early closure, capital forbearance, and a stochastic risk-free interest rate under a risk-based option pricing framework. Furthermore, we use credit default swaps to investigate a market-based method to estimate bank risk, and we present a hedging concept that deposit insurance corporations can use to diversify the risk of deposit insurance via credit derivatives.

The talk is based on a joint work with Yang-Che Wu.


Poster presentation:

Siu Kai Choy (Hang Seng Management College, Hong Kong S.A.R. (China))

Property Market Analysis Using Time-Frequency Decomposition

Real estate is one of the most complex but important investment tools in the financial market. Abrupt changes in housing prices usually influence consumer price inflation and have a substantial impact on the economy. Essentially, such changes represent important information, possibly driven by government policies and economic and financial events. Detecting such changes accurately and studying their characteristics would help investors to understand the correlations among government policies, economic events and financial implications. While existing abrupt change detection methodologies have been applied successfully to various financial time series, they may not perform well for signals in the presence of noise. To remedy this shortcoming and improve detection accuracy, we propose a hybrid approach that integrates the Hilbert-Huang transform with a newly developed multi-level rate-of-change detector for abrupt change detection. The proposed hybrid approach aims to simultaneously perform signal decomposition and reconstruction and capture abrupt changes in the noisy signal. Comparative experiments with existing approaches show that the proposed method achieves remarkable accuracy in abrupt change detection.
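
The Hilbert-Huang decomposition itself is not reproduced here; the Python sketch below only illustrates the second stage, a rate-of-change detector applied to an already denoised signal. The moving-average smoothing (a stand-in for the EMD-based reconstruction) and the MAD-based threshold rule are illustrative assumptions, not the authors' detector.

    import numpy as np

    def detect_abrupt_changes(signal, window=10, k=4.0):
        """Flag abrupt changes in a denoised signal via a thresholded rate of change.

        A point is flagged when the absolute first difference of the smoothed
        signal exceeds k times a robust scale estimate (median absolute deviation).
        """
        # crude denoising stand-in for the EMD-based reconstruction
        kernel = np.ones(window) / window
        smooth = np.convolve(signal, kernel, mode="same")

        rate = np.abs(np.diff(smooth))                     # rate of change
        mad = np.median(np.abs(rate - np.median(rate)))    # robust scale
        threshold = k * (1.4826 * mad + 1e-12)
        return np.where(rate > threshold)[0]

    # toy example: a noisy series with a jump at index 120
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0, 0.2, 250))
    prices[120:] += 5.0
    print(detect_abrupt_changes(prices))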

The talk is based on a joint work with Wing Yan Lee, Yan Wu and Tsz Fung Zel.


Poster presentation:

Michal Gerthofer (University of Economics, Prague, Czech Republic)

Maximum likelihood estimate of the count of IBNR claims based on truncated data

There are several methods available to estimate the expected number of claims that have already occurred but have not yet been reported (IBNR claims). Usually, these methods are based on aggregated data; one of the most popular triangle methods is the chain-ladder method. As detailed data are now commonly available, aggregating data prior to prediction is no longer necessary. An alternative statistical approach for predicting the number of IBNR claims is presented. The method is based on an estimate of the time lag between the occurrence and the reporting of a claim. Observations of this variable are naturally truncated at any point in time. Based on a model of the probability distribution of the time to reporting and on the observed number of reported claims, a maximum likelihood estimate of the number of unreported claims is derived. Predictions are compared with the chain-ladder predictions.
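
A minimal sketch of the idea in Python, assuming (purely for illustration) exponentially distributed reporting delays: the delay parameter is estimated by maximum likelihood from the right-truncated delays of reported claims, and the expected number of still-unreported claims is then obtained by inflating the reported counts by the estimated reporting probabilities.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # reported claims: occurrence times s_i and delays d_i, observed only if s_i + d_i <= t_eval
    t_eval = 10.0
    occ = np.array([1.2, 2.5, 3.1, 4.8, 6.0, 7.3, 8.1, 9.0, 9.5])
    delay = np.array([0.5, 1.8, 0.9, 2.2, 1.1, 0.7, 1.4, 0.3, 0.2])

    def neg_loglik(rate):
        """Truncated likelihood: f(d_i; rate) / F(t_eval - s_i; rate) for each reported claim."""
        trunc = t_eval - occ
        logf = np.log(rate) - rate * delay
        logF = np.log1p(-np.exp(-rate * trunc))
        return -np.sum(logf - logF)

    rate_hat = minimize_scalar(neg_loglik, bounds=(1e-4, 50.0), method="bounded").x

    # plug-in estimate of the unreported (IBNR) claim count:
    # a reported claim with reporting probability p_i = F(t_eval - s_i) represents
    # on average (1 - p_i) / p_i still-unreported claims from the same occurrence time
    p = 1.0 - np.exp(-rate_hat * (t_eval - occ))
    ibnr_hat = np.sum((1.0 - p) / p)
    print(rate_hat, ibnr_hat)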

The talk is based on a joint work with Pavel Zimmermann and Václav Sládek.


Poster presentation:

Lingqi Gu (University of Vienna, Austria)

Stability of utility maximization problem under transaction costs

We first study the static stability of the utility maximization problem in a market with proportional transaction costs when the stock price process is càdlàg: the primal and dual value functions as well as the optimizers are continuous with respect to the initial capital, the utility function and the physical probability measure. Then we consider the convergence of optimal dual processes and shadow price processes, assuming the continuity of the stock price process. Under strict positivity of the liquidation value process of an optimal trading strategy, all optimal dual processes which induce the unique dual optimizer are local martingales. Perturbing the initial investment as well as the investor's preferences, the sequence of optimal dual processes converges, in the sense of optional strong supermartingale convergence (Theorem 2.7 of Czichowsky and Schachermayer, 2016), to an optimal dual process in the original market. The limit process defines a shadow price in the original market.

The talk is based on a joint work with Yiqing Lin and Junjian Yang.


Poster presentation:

Xixuan Han (The University of Hong Kong, Hong Kong S.A.R. (China))

Index Options and Volatility Derivatives via A Gaussian Random Field Risk-Neutral Density Model

We propose a risk-neutral forward density model using Gaussian random fields to capture different aspects of market information from European options and volatility derivatives on a market index. The well-structured model is built in the framework of the Heath-Jarrow-Morton philosophy and the Musiela parametrization with a user-friendly arbitrage-free condition. It implies the popular geometric Brownian motion model for the spot price of the market index and can be intuitively visualized to give a better view of the market trend. In addition, we develop theorems showing how our model drives local volatility and variance swap rates. Hence volatility futures and options can be priced taking the forward density implied by European options as the initialization input, and our model can accordingly be calibrated to the market prices of these volatility derivatives. An efficient algorithm is developed for both simulation and pricing, and a comparative study is conducted between our model and existing models.

The talk is based on a joint work with Boyu Wei and Hailiang Yang.


Poster presentation:

Han Li (University of New South Wales, Australia)

Estimating Healthy Life Expectancy: A Province-by-province Study for China

With rapid economic growth and medical advances, longevity and health in China have been continuously improving during recent decades. However, even within one nation, the disparity in health across Chinese provinces is still large, and investigations are needed to better understand the size and trend of these differences. In this paper, we provide a province-by-province analysis of healthy life expectancy (HLE) at birth for the 33 provincial-level regions in China. Because information on prevalence estimates and mortality rates is not always available for each province at every age and time, it is difficult to calculate provincial HLE with the well-known Sullivan method. Therefore, we propose a multiple regression model with explanatory variables including life expectancy (LE) to calculate HLE for Chinese provinces. Data from the Global Burden of Disease study (2013) on LE and HLE at birth for 188 countries during 1990-2013 are used to calibrate the model. We test the robustness of our model on HLE data for Hong Kong SAR and Taiwan. Future projections of HLE at birth for Chinese provinces are also presented in the results section.
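
Schematically (the full covariate set used in the paper is not listed here), the regression takes the form
\[
\mathrm{HLE}_{i}=\beta_{0}+\beta_{1}\,\mathrm{LE}_{i}+\boldsymbol{\beta}_{2}^{\top}\mathbf{z}_{i}+\varepsilon_{i},
\]
where $\mathbf{z}_{i}$ collects further covariates; the coefficients are calibrated on the 188-country Global Burden of Disease panel and then applied to provincial life expectancies.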

The talk is based on a joint work with Katja Hanewald, Kevin Krahe and Shang Wu.


Poster presentation:

Hong Mao (Shanghai Second Polytechnic University, China)

Is Risk Taking or Moral Hazard Beneficial To the Insured and the Society?

In this work, we consider the question of whether, and under what circumstances, risk taking is beneficial to society. We also discuss the role of moral hazard, a special case of risk taking, namely the additional risk taking resulting from insurance protection. We establish models using stochastic optimal control theory, show that optimal solutions exist, and obtain optimal levels of risk taking (and moral hazard) from the perspectives of the insured and of society. We conclude with a discussion of insurance and moral hazard, and the relationship between them.

The talk is based on a joint work with James M. Carson and Krzysztof M. Ostaszewski.


Poster presentation:

Yang Shen (York University, Canada)

Mean-variance indifference pricing

In this paper, we propose a new theory of derivatives pricing, namely mean-variance indifference pricing, which synthesizes the idea of utility indifference pricing and Markowitz's mean-variance analysis. We develop the theory under continuous-time Markovian regime-switching models, with a focus on unhedgeable risk due to market incompleteness and regime switches. The pricing framework is not limited to the chosen underlying models and is worth extending to other models.

As the dynamic mean-variance optimization is essentially a time-inconsistent optimal control problem, Bellman's dynamic programming principle is not applicable. We resort to the notion of equilibrium in game theory and solve the problem via an extended regime-switching HJB equation. We find that the buyer's and seller's indifference prices are both given by nonlinear pricing operators, which are not only mathematically neat, but also have profound financial implications. In fact, the buyer's (resp. seller's) indifference price equals a linear price minus (resp. plus) correction terms accounting for the volatility of the derivative in the linear pricing framework and quantifying instantaneous fluctuations from the financial market and structural changes of macro-economic conditions. Indeed, the linear price of the derivative reduces to the risk-neutral price under certain conditions.

As an application, we compute mean-variance indifference prices of European call and put options. We also give a new version of put-call parity in our framework. Our ultimate goal is to apply the buyer's and the seller's indifference pricing formulas to calibrate model parameters from the bid-ask spread observed in the real market. In particular, the estimated risk aversion parameters of the representative buyer and seller can serve as good indicators for market sentiment.


Poster presentation:

Elena Vigna (Università di Torino and Collegio Carlo Alberto, Italy)

On time consistency for mean-variance portfolio selection

This paper addresses a comparison between different approaches to time inconsistency for the mean-variance portfolio selection problem. We define a suitable intertemporal preferences-driven reward and use it to compare the three possible approaches to time inconsistency for the mean-variance portfolio selection problem over $[t_0,T]$: the precommitment approach (Zhou and Li), the game-theoretical approach (Basak and Chabakauri; Björk and Murgoci), and the dynamic approach (Pedersen and Peskir). We find that the precommitment strategy beats the other strategies if the investor only cares about the viewpoint at time $t_0$ and is not concerned about being time-inconsistent in $(t_0,T)$; the Nash-equilibrium strategy dominates the dynamic strategy until a time point $t^*\in(t_0,T)$ and is dominated by the dynamic strategy from $t^*$ onwards.
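
For reference (this is the standard criterion in this literature, not a result of the paper), the conditional mean-variance objective underlying all three approaches is
\[
J(t,x;\pi)=\mathbb{E}_{t,x}\bigl[X^{\pi}_{T}\bigr]-\frac{\gamma}{2}\,\operatorname{Var}_{t,x}\bigl[X^{\pi}_{T}\bigr],
\]
and time inconsistency arises because the variance term does not satisfy the tower property, so a strategy that is optimal as seen from $t_0$ need not remain optimal at later dates.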


Poster presentation:

Fan Yang (University of Waterloo, Canada)

Testing the multivariate regular variation model

This paper aims at testing the multivariate regular variation (MRV) model. Let $(X,Y)$ be a random vector that follows an MRV model with regularly varying index $\alpha$. We consider the polar coordinate representation $(X,Y)=R\,(1-\Theta,\Theta)$. A necessary condition for the MRV model is that, given $\Theta\in A$, $R$ follows a heavy-tailed distribution with tail index $\alpha$ for any Borel set $A\subseteq[0,1]$. We develop a test for the MRV model based on this property. We construct a tail index estimator for $\alpha(A)$ for any given set $A$ and prove the joint asymptotic property of all such estimators via the tail empirical process. By comparing the estimates, we construct tests for the constancy of the tail indices across sets. This work can be regarded as an extension of the test of whether the tail index is constant for non-identically distributed observations in Einmahl et al. (2016).
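
A minimal sketch of the estimation step in Python (the standard Hill estimator applied to the radial parts whose angular component falls in a given set $A$; the joint asymptotics via the tail empirical process are beyond this sketch):

    import numpy as np

    def hill_alpha(r, k):
        """Hill estimator of the tail index alpha from the k largest radial observations."""
        r = np.sort(r)[::-1]                 # descending order statistics
        logs = np.log(r[:k]) - np.log(r[k])  # log-spacings relative to the (k+1)-th largest
        return 1.0 / np.mean(logs)

    def alpha_on_set(x, y, a_low, a_high, k):
        """Tail index of R = X + Y restricted to Theta = Y / (X + Y) in [a_low, a_high]."""
        r = x + y
        theta = y / r
        mask = (theta >= a_low) & (theta <= a_high)
        return hill_alpha(r[mask], k)

    # toy check with heavy-tailed margins (Pareto-like, tail index ~ 2)
    rng = np.random.default_rng(1)
    x = rng.pareto(2.0, 20000) + 1.0
    y = rng.pareto(2.0, 20000) + 1.0
    print(alpha_on_set(x, y, 0.0, 0.5, 200), alpha_on_set(x, y, 0.5, 1.0, 200))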

References
[1] Einmahl, J. H., de Haan, L., & Zhou, C. (2016). Statistics of heteroscedastic extremes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(1), 31-51.

The talk is based on a joint work with Chen Zhou.


Poster presentation:

Kwok Wai Yu (Hang Seng Management College, Hong Kong S.A.R. (China))

Dynamic Stock Market Analysis: Algorithm and Applications

The stock market has become increasingly dynamic. Jump point detection for the stock price process plays an essential role in helping policy-makers and investors to study changes in the market environment and to look ahead. In this paper, we propose a robust jump point detection methodology, which makes use of a time-frequency analytical tool, namely the empirical mode decomposition (EMD) algorithm, together with a novel derivative-based detector, to detect jump points in the time series of several stock price indices. The merits of our approach are threefold. First, the EMD adopts adaptive basis functions, which enhances the detection of jump points. Second, a high-level derivative detector, which accurately captures the rate of change of the signal, is proposed to improve detection performance for highly nonlinear and fluctuating signals. Third, our detection system is insensitive to the initial parameters and fully automated. Experimental results reveal that our proposed method outperforms existing jump point detection tools.

The talk is based on a joint work with Fei Lung Yuen, Siu Kai Choy, Tsz Fung Zel and Yan Wu.


 


Gold Sponsors

msg life Austria Ges.m.b.H.
Sparkassen Versicherung AG Vienna Insurance Group
Munich Re - Munich Reinsurance Company

Silver Sponsors

Vienna Insurance Group AG Wiener Versicherung Gruppe
Raiffeisen Bank International AG (RBI)

Organisers

FAM @ TU Wien - Vienna University of Technology
Actuarial Association of Austria (AVÖ - Aktuarvereinigung Österreichs)

IME-Journal

IME - Insurance: Mathematics and Economics