Optimal threshold for static 99R

Exposure Dependent Modeling of Percent of Ultimate Loss Development Curves

2013

This paper presents a loss development model in which exposure period dependence is fundamental to the structure of the model. The basic idea is that an exposure period, such as an accident year or policy year, gives rise to a particular distribution of accident date lags, where the accident date lag is the time elapsed from the start of the exposure period until the accident date. The paper shows how to derive the density of the accident date lag from a familiar parallelogram diagram. A fairly general theory of development is then presented and simplified under certain conditions to arrive at a total development random variable whose cumulative distribution is related to the usual percent of ultimate development curve. After presenting the theory, the paper turns to practical applications. Simulation is used to generate consistent patterns for different exposure periods. A convenient accident period development formula is derived and then used to fit and convert factors. The average dat...
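
The abstract does not reproduce the paper's development formula, but the basic relationship it builds on can be sketched: a percent-of-ultimate curve is the reciprocal of the cumulative development factor at each age. A minimal Python illustration with made-up age-to-age factors:

```python
import numpy as np

# Hypothetical age-to-age (link-ratio) development factors for ages 12-24, 24-36, ...
age_to_age = np.array([1.8, 1.3, 1.1, 1.05, 1.02])

# Cumulative development factor from each age to ultimate:
# the product of all remaining age-to-age factors.
cdf_to_ultimate = np.cumprod(age_to_age[::-1])[::-1]

# Percent-of-ultimate curve: the reciprocal of the cumulative factor.
pct_of_ultimate = 1.0 / cdf_to_ultimate

for age, pct in zip(range(12, 12 * (len(pct_of_ultimate) + 1), 12), pct_of_ultimate):
    print(f"age {age:3d} months: {pct:6.1%} of ultimate reported")
```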

CODATA Symposium on Risk Models and Applications, Kiev, Ukraine, Oct. 5, 2008

2008

This international symposium was held at Kiev National Technical University, organized by CODATA in connection with the 21st International CODATA Conference, which followed the symposium, and in cooperation with the GI TC 4.6 WG on Risk Management. The symposium homepage is http://www.codata-germany.org/RMA_2008. The symposium was dedicated to the data science and information system aspects of risk model structure, implementation, and application at a highly interdisciplinary level, and it gave a very good overview of risk models and applications from many different perspectives.

thresholdmodeling: A Python package for modeling excesses over a threshold using the Peak-Over-Threshold Method and the Generalized Pareto Distribution

Journal of Open Source Software

Extreme value analysis has emerged as one of the most important disciplines for the applied sciences when dealing with reduced datasets and when the main idea is to extrapolate the observations over a given time. By using a threshold model with an asymptotic characterization, it is possible to work with the Generalized Pareto Distribution (GPD) (Coles, 2001) and use it to model the stochastic behavior of a process at an unusual level, either a maximum or a minimum. For example, consider a large dataset of wind velocity in Florida, USA, during a certain period of time. It is possible to model this process and to quantify the probability of extreme events, for example hurricanes, which are maximum observations of wind velocity, in a time of interest using the return value tool.
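
As a rough sketch of the peaks-over-threshold workflow the abstract describes (using scipy directly rather than the thresholdmodeling package itself, and with synthetic data and an arbitrary threshold standing in for the wind-speed example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder data standing in for, e.g., daily wind-speed observations.
speeds = rng.gumbel(loc=20.0, scale=5.0, size=10_000)

threshold = 35.0                      # assumed threshold; real work would justify it
excesses = speeds[speeds > threshold] - threshold
rate = len(excesses) / len(speeds)    # probability that an observation exceeds u

# Fit a Generalized Pareto Distribution to the excesses (location fixed at 0).
shape, _, scale = stats.genpareto.fit(excesses, floc=0)

# m-observation return level (for shape != 0): the level exceeded on
# average once every m observations (Coles, 2001).
m = 365 * 50                          # e.g., the "50-year" level for daily data
return_level = threshold + (scale / shape) * ((m * rate) ** shape - 1)
print(f"GPD shape={shape:.3f}, scale={scale:.3f}, return level={return_level:.1f}")
```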

On the methodologies of threshold selection in the context of Peaks Over Threshold approach for market risk modelling

The Peaks Over Threshold (POT) model has emerged as a highly promising approach for estimating crucial market risk metrics such as Expected Shortfall and Value at Risk. Nonetheless, this technique presents a major challenge: there is no standard methodology for choosing the optimal threshold that separates the extreme values of an empirical distribution from the rest. As a result, several approaches for determining this threshold have been developed in recent years. In this context, this study conducts a comprehensive comparative analysis of various optimal threshold selection methodologies within the Peaks Over Threshold framework, as applied to the measurement of market risk. The primary objective is to examine the influence of these methodologies on market risk assessments and capital requirement calculations. With this goal, we analyze the results in terms of threshold return values, market risk measures, and daily capital requirements for eight different threshold selection methodologies. We conclude that, even though considerable discrepancies can be observed between methodologies in terms of threshold return values, these discrepancies do not translate into large deviations in market risk measurements and capital requirement estimates. Consequently, the choice of threshold selection methodology may have little substantive effect on the eventual outcomes.
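
The abstract does not enumerate the eight methodologies, but one classic threshold selection diagnostic, the mean-excess (mean residual life) plot, can be sketched as follows; the data and candidate quantiles are placeholders:

```python
import numpy as np

def mean_excess(data: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Mean excess e(u) = E[X - u | X > u] for each candidate threshold u.

    Under a GPD tail, e(u) is approximately linear in u, so the smallest u
    beyond which the plot looks linear is a common threshold choice.
    """
    return np.array([
        (data[data > u] - u).mean() if (data > u).any() else np.nan
        for u in thresholds
    ])

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=5_000)          # placeholder heavy-tailed returns
candidates = np.quantile(losses, np.linspace(0.80, 0.99, 20))
for u, e in zip(candidates, mean_excess(losses, candidates)):
    print(f"u={u:6.3f}  mean excess={e:6.3f}")
```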

Modelling and Forecasting Dynamic VaR Thresholds for Risk Management and Regulation

SSRN Electronic Journal, 2000

The paper presents methods of estimating Value-at-Risk (VaR) thresholds utilising two calibrated models and three conditional volatility or GARCH models. These are used to estimate and forecast the VaR thresholds of an equally weighted portfolio comprising the S&P500, CAC40, FTSE100, and a Swiss market index (SMI). On the basis of the number of (non-)violations of the Basel Accord thresholds, the best performing model is PS-GARCH, followed by VARMA-AGARCH, then Portfolio-GARCH and the RiskMetrics-EWMA models, both of which would attract a penalty of 0.5. The worst forecasts are obtained from the standard normal method based on historical variances.
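
The paper's PS-GARCH and VARMA-AGARCH specifications are beyond a short sketch, but the RiskMetrics-EWMA benchmark it ranks can be illustrated in a few lines; the returns, the decay factor lam=0.94, and the 99% level are assumptions of the sketch:

```python
import numpy as np
from scipy.stats import norm

def ewma_var_forecast(returns, lam=0.94, alpha=0.01):
    """One-step-ahead VaR threshold via the RiskMetrics EWMA recursion:
    sigma2_{t+1} = lam * sigma2_t + (1 - lam) * r_t**2."""
    sigma2 = returns.var()            # initialize at the sample variance
    for r in returns:
        sigma2 = lam * sigma2 + (1 - lam) * r**2
    return norm.ppf(alpha) * np.sqrt(sigma2)   # e.g., -2.326 * sigma for 99% VaR

rng = np.random.default_rng(1)
portfolio_returns = 0.01 * rng.standard_normal(1_000)  # placeholder daily returns
print(f"99% one-day VaR threshold: {ewma_var_forecast(portfolio_returns):.4f}")
```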

Three influential risk foundation papers from the 80s and 90s: Are they still state-of-the-art?

Reliability Engineering & System Safety, 2019

Three of the most influential scientific works in the risk field, at least in the engineering environment, are Stan Kaplan and John Garrick's paper from 1981 on risk quantification, George Apostolakis' paper on probability from 1990, and Elisabeth Paté-Cornell's paper on uncertainty levels in risk assessments from 1996. The present article reviews and discusses these works, the aim being to acknowledge their important contributions to risk science and provide insights on how these works have influenced and relate to the state-of-the-art of the risk science of today. It is questioned to what extent these papers still represent state-of-the-art. Recent documents by the Society for Risk Analysis are used as a reference for comparison, in addition to related publications in scientific journals.

Risk Analysis and Estimation of a Bimodal Heavy-Tailed Burr XII Model in Insurance Data: Exploring Multiple Methods and Applications

Mathematics

Actuarial risks can be analyzed using heavy-tailed distributions, which provide adequate risk assessment. Key risk indicators, such as value-at-risk, tail-value-at-risk (the conditional tail expectation), tail variance, tail mean-variance, and the mean excess loss function, are commonly used to evaluate risk exposure levels. In this study, we analyze actuarial risks using these five indicators, calculated using four different estimation methods: maximum likelihood, ordinary least squares, weighted least squares, and Cramér-von Mises. To achieve our main goal, we introduce and study a new distribution. Monte Carlo simulations are used to assess the performance of all estimation methods. We provide two real-life datasets with two applications to compare the classical methods and demonstrate the importance of the proposed model, evaluated via the maximum likelihood method. Finally, we evaluate and analyze actuarial risks using the abovementioned methods and five actuarial indicators based ...
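
The Burr XII model and the four estimators are not reproduced here, but empirical versions of the five indicators the abstract names can be sketched; the quantile level, the tail mean-variance weight (delta = 0.5), and the Pareto claims data are assumptions of the sketch:

```python
import numpy as np

def risk_indicators(losses: np.ndarray, q: float = 0.95) -> dict:
    """Empirical versions of the five risk indicators named in the abstract."""
    var_q = np.quantile(losses, q)              # value-at-risk at level q
    tail = losses[losses > var_q]               # losses beyond VaR
    tvar = tail.mean()                          # tail value-at-risk / CTE
    tv = tail.var()                             # tail variance
    tmv = tvar + 0.5 * tv                       # tail mean-variance, delta = 0.5
    mel = (tail - var_q).mean()                 # mean excess loss above VaR
    return {"VaR": var_q, "TVaR": tvar, "TV": tv, "TMV": tmv, "MEL": mel}

rng = np.random.default_rng(7)
# Placeholder heavy-tailed claims standing in for the insurance data.
claims = rng.pareto(2.5, size=20_000) + 1.0
print({k: round(v, 3) for k, v in risk_indicators(claims).items()})
```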

Application of Logistic Regression Models in Risk Management

International Journal of Innovations in Engineering Research and Technology, 2021

Logistic regression is a statistical technique for developing a prediction model for any event that is binary in nature (Ahmad et al., 2021). A binary event either occurs or does not occur, and its two outcomes are represented as 0 (non-occurrence) and 1 (occurrence). Logistic regression also applies when the dependent variable has more than two classes: it is binary when the dependent variable falls into two groups and multinomial when it falls into more than two. Predictive modeling is a technique that takes known outcomes and develops a model to predict later events and occurrences (Midi et al., 2010); it uses historical data to predict future events. Predictive modeling comes in different types, including ANOVA, logistic regression, decision trees, time series, neural networks, linear regression, and ridge regression. Selecting the right regression model is critical for saving time on a project; selecting an incorrect model may produce wrong predictions, a non-constant mean, and varying variances (Hosmer et al., 2013), whereas variances should be constant. Regression analysis predicts a continuous target variable from one or more independent variables, and it relies on naturally occurring variation rather than experimentally manipulated variation, which cannot produce correct results. A predictive churn model is used to explain customer churn, that is, a customer abandoning a product or service. Such a model provides quantifiable metrics and early warning to support retention efforts (Harrell, 2015). The probable monthly churn is the number of active users who churned divided by the total number of active user days, which gives the number of churns per user day. I hope this study will educate practitioners in estimating the independent variables and determining the risk factors. Logistic regression is helpful because it produces probabilities relating the independent variables to the outcome variable in multinomial models and/or binary events. Once the effect of two or more variables is established, it becomes easier to understand and organize a business analysis for decision-making.
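
As a minimal, hypothetical illustration of a logistic-regression churn model of the kind described above (the features, coefficients, and data are all synthetic; the paper names no specific dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5_000
# Placeholder features standing in for customer attributes.
tenure_months = rng.integers(1, 60, n)
monthly_spend = rng.gamma(2.0, 30.0, n)
support_calls = rng.poisson(1.5, n)
X = np.column_stack([tenure_months, monthly_spend, support_calls])

# Synthetic churn label: short tenure and many support calls raise churn odds.
logit = -1.0 - 0.05 * tenure_months + 0.4 * support_calls
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("churn probability for one customer:",
      round(model.predict_proba([[6, 45.0, 4]])[0, 1], 3))
```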

A review of discrete-time risk models

Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, 2009

In this paper, we present a review of results for discrete-time risk models, including the compound binomial risk model and some of its extensions. While most theoretical risk models use the concept of time continuity, the practical reality is discrete. For instance, recursive formulas for discrete-time models can be obtained without assuming a claim severity distribution and are readily programmable in practice. Hence the models, techniques used, and results reviewed here for discrete-time risk models are of independent scientific interest. Yet results for discrete-time risk models can, in addition, give a simpler understanding of their continuous-time analogues. For example, these results can serve as approximations or bounds for the corresponding results in continuous-time models. This paper will serve as a detailed reference for the study of discrete-time risk models.
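
The review's recursive formulas are not reproduced in the abstract, but the flavor of a discrete-time recursion can be sketched for the compound binomial model; the premium of 1 per period, the claim probability p, the severity distribution, and the ruin convention below are all assumptions of the sketch:

```python
from functools import lru_cache

# Compound binomial model sketch: premium of 1 per period, a claim occurs
# each period with probability p, and claim sizes are i.i.d. on {1, 2, ...}.
p = 0.3
claim_pmf = {1: 0.5, 2: 0.3, 3: 0.2}   # assumed severity distribution

@lru_cache(maxsize=None)
def survival(u: int, n: int) -> float:
    """Probability of no ruin within n periods, starting from integer surplus u.

    Each period the surplus first earns premium 1, then pays any claim;
    here ruin means the surplus falls below zero (conventions vary).
    """
    if u < 0:
        return 0.0
    if n == 0:
        return 1.0
    no_claim = (1 - p) * survival(u + 1, n - 1)
    with_claim = p * sum(fx * survival(u + 1 - x, n - 1)
                         for x, fx in claim_pmf.items())
    return no_claim + with_claim

for u0 in range(5):
    print(f"initial surplus {u0}: P(no ruin in 20 periods) = {survival(u0, 20):.4f}")
```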