Evdokia Xekalaki | Athens University of Economics and Business
Papers by Evdokia Xekalaki
Fibonacci Quarterly, May 1, 1987
Quality & Quantity, 39(5), 515-536, 2005
M. Perakis, P. Maravelakis, S. Psarakis, E. Xekalaki & J. Panaretos
Statistics & Probability Letters, 1986
Abstract: The paper considers the problem of selecting one of two not necessarily nested competing regression models based on comparative evaluations of their abilities in each of two different issues: The first pertains to viewing the problem as a "best-fitting" model ...
Quality & Quantity, 2005
In this paper, some new indices for ordinal data are introduced. These indices have been developed so as to measure the degree of concentration on the "small" or the "large" values of a variable whose level of measurement is ordinal. Their advantage over other approaches is that they ascribe unequal weights to each class of values. Although they constitute a useful tool in various fields of application, the focus here is on their use in sample surveys, and specifically in situations where one is interested in taking into account the "distance" of the responses from the "neutral" category in a given question. The properties of these indices are examined and methods for constructing confidence intervals for their actual values are discussed. The performance of these methods is evaluated through an extensive simulation study.
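To make the idea concrete, here is one simple, hypothetical index of this type: it weights the ordered categories unequally, so that mass in the "large" categories raises the index. The weighting scheme below is an illustrative choice only, not one of the specific indices introduced in the paper.

```python
def ordinal_concentration(counts):
    """Illustrative concentration index for an ordinal variable.

    Category i (i = 0, ..., k-1) receives weight i / (k - 1), so the
    index is 0 when all mass lies in the smallest category and 1 when
    it all lies in the largest. This weighting is a hypothetical
    example, not the paper's indices.
    """
    total = sum(counts)
    k = len(counts)
    return sum((i / (k - 1)) * (c / total) for i, c in enumerate(counts))
```

For instance, a sample split evenly between the extreme categories of a three-point scale yields an index of 0.5.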
Communications in Statistics - Theory and Methods, 1989
Technical Report no 14, Department of Statistics, Athens University of Economics and Business, 1994
Technical Report no 69, Department of Statistics, Athens University of Economics and Business, 1999
Most of the methods used in the ARCH literature for selecting the appropriate model are based on evaluating the ability of the models to describe the data. An alternative model selection approach is examined, based on evaluating the predictability of the models in terms of standardized prediction errors.
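The selection rule that runs through these abstracts (pick the model with the lowest sum of squared standardized one-step-ahead prediction errors) can be sketched in a few lines. This is a minimal illustration of the criterion as stated; the function names and the candidate-model representation (per-day predicted means and standard deviations) are hypothetical, not taken from the papers.

```python
import random

def pec_score(actual, pred_mean, pred_std):
    # Sum of squared standardized one-step-ahead prediction errors:
    # each error is divided by the model's own predicted standard deviation.
    return sum(((a - m) / s) ** 2
               for a, m, s in zip(actual, pred_mean, pred_std))

def select_by_pec(actual, candidates):
    # candidates: name -> (predicted means, predicted standard deviations).
    # The model with the lowest score is selected for forecasting.
    return min(candidates,
               key=lambda name: pec_score(actual, *candidates[name]))
```

For example, against a series generated around zero, a candidate whose one-step-ahead means are correctly centered obtains a lower score than a systematically biased one.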
Technical Report no 121, Department of Statistics, Athens University of Economics and Business, 2001
Technical Report no 128, Department of Statistics, Athens University of Economics and Business, 2001
This paper presents a modification of a method for constructing approximate lower confidence limits for the proportion of conformance of normally distributed processes, proposed by Wang and Lam (1996). The proposed method is very simple and its implementation does not require the use of special tables. In addition, as verified via simulation, it leads to confidence limits with coverage much closer to the nominal level than the method of Wang and Lam (1996).
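The quantity being bounded, the proportion of conformance of a normal process, has a closed form in terms of the normal CDF. The sketch below only computes this proportion for given process parameters and specification limits; it does not reproduce the paper's confidence-limit construction.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def proportion_conformance(mu, sigma, lsl, usl):
    # Probability that a Normal(mu, sigma) observation falls inside
    # the specification limits [lsl, usl].
    return norm_cdf((usl - mu) / sigma) - norm_cdf((lsl - mu) / sigma)
```

For a process centered between the limits, `proportion_conformance(0.0, 1.0, -1.96, 1.96)` is roughly 0.95.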
Technical Report no 131, Department of Statistics, Athens University of Economics and Business, 2001
The common way to measure the performance of a volatility prediction model is to assess its ability to predict future volatility. However, as volatility is unobservable, there is no natural metric for measuring the accuracy of any particular model. Noh et al. (1994) assessed the performance of a volatility prediction model by devising trading rules to trade options on a daily basis, using forecasts of option prices obtained by the Black & Scholes (BS) option pricing formula. (An option is a security that gives its owner the right, but not the obligation, to buy or sell an asset at a fixed price within a specified period of time, subject to certain conditions. The trading rule amounts to buying (selling) an option when its BS price forecast for tomorrow is higher (lower) than today's market settlement price.) In this paper, adopting Noh et al.'s (1994) idea, we assess the performance of a number of Autoregressive Conditional Heteroscedasticity (ARCH) models. For each trading day, the ARCH model selected on the basis of the prediction error criterion (PEC), introduced by Xekalaki et al. (2003) and suggested by Degiannakis and Xekalaki (1999) in the context of ARCH models, is used to forecast volatility. According to this criterion, the ARCH model with the lowest sum of squared standardized one-step-ahead prediction errors is selected for forecasting future volatility. A comparative study is made in order to examine which ARCH volatility estimation method yields the highest profits and whether there is any gain in using the PEC model selection algorithm for speculating with financial derivatives. Among the model selection algorithms considered, the PEC algorithm appears to achieve the highest rate of return, if only marginally.
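The BS formula and the daily trading rule described above can be sketched as follows. The call-pricing function is the standard textbook Black & Scholes formula; the `trade_signal` helper is a hypothetical rendering of the rule (buy when the forecast price exceeds today's settlement price), not code from the paper.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    # Black & Scholes price of a European call: spot s, strike k,
    # risk-free rate r, volatility sigma, time to maturity t (in years).
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def trade_signal(forecast_price, settlement_price):
    # Buy when tomorrow's forecast price is above today's settlement
    # price, sell when it is below.
    return "buy" if forecast_price > settlement_price else "sell"
```

The volatility forecast enters through `sigma`: a model that forecasts volatility well produces forecast option prices, and hence trade signals, closer to the subsequently realized ones.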
Technical Report no 133, Department of Statistics, Athens University of Economics and Business, 2001
Autoregressive Conditional Heteroscedasticity (ARCH) models have successfully been applied to predicting asset return volatility. Predicting volatility is of great importance in pricing financial derivatives, selecting portfolios, and measuring and managing investment risk more accurately. In this paper, a number of ARCH models are examined in the framework of a model selection method based on a prediction error criterion (PEC), and their ability to predict future volatility is examined. According to this method, the ARCH model with the lowest sum of squared standardized forecasting errors is selected for predicting future volatility. A number of evaluation criteria are used to examine the performance of a model in predicting future volatility, for forecasting horizons ranging from one day to one hundred days ahead. The results show that the PEC model selection procedure performs satisfactorily in selecting the model that generates "better" volatility predictions. It appears, therefore, that it can be regarded as a tool for guiding one's choice of the appropriate model for predicting future volatility, with applications in evaluating portfolios, managing financial risk, and creating speculative strategies with options.
Technical Report no 191, Department of Statistics, Athens University of Economics and Business, 2002
Degiannakis and Xekalaki (1999) compare the forecasting ability of Autoregressive Conditional Heteroscedastic (ARCH) models using the Correlated Gamma Ratio (CGR) distribution. According to the PEC model selection algorithm, the models with the lowest sum of squared standardized one-step-ahead prediction errors are the most appropriate for predicting future volatility. Based on Engle et al. (1993), an economic criterion to evaluate the PEC model selection algorithm is applied: the cumulative profits of the participants in an options market pricing one-day index straddle options based on their variance forecasts. An options market consisting of 104 traders is simulated. Each participant applies his/her own variance forecast algorithm to price a straddle on the Standard and Poor's 500 (S&P500) index for the next day. Traders who base their selection on the PEC model selection algorithm achieve the highest profits. Thus, the PEC selection method appears to be a tool for guiding one's choice of the appropriate model for estimating future volatility in pricing derivatives.
In statistical modeling contexts, the use of one-step-ahead prediction errors for testing hypotheses on the forecasting ability of an assumed model has been widely considered (see, e.g., Xekalaki et al. (2003, in Stochastic Musings, J. Panaretos (ed.), Lawrence Erlbaum) and Degiannakis and Xekalaki (2005, Journal of Applied Stochastic Models in Business and Industry)). Quite often, the testing procedure requires independence in a sequence of recursive standardized prediction errors, which cannot always be readily deduced, particularly in the case of econometric modeling. In this paper, the results of a series of Monte Carlo simulations reveal that independence can be assumed to hold. They are also indicative of a chi-square distribution for the sum of squared standardized one-step-ahead prediction errors. Some theoretical justification of these findings can be traced to Degiannakis and Xekalaki's results (2005, Journal of Applied Stochastic Models in Business and Industry).
A synopsis of probability models for over- and underdispersion is provided, looking at their origins, motivation, first main contributions, important milestones, and applications. As the field of accident studies has received much attention, and various theories have been developed for the interpretation of factors underlying an accident situation, most of the models will be presented in accident or actuarial contexts. Of course, with appropriate parameter interpretations, the results are adaptable to a great variety of other situations in fields ranging from economics, inventory control, and insurance through to demometry, biometry, psychometry, and web access modeling.
An overview of the evolution of probability models for overdispersion is given, looking at their origins, motivation, first main contributions, important milestones, and applications. A specific class of models, the Waring and generalized Waring models, will be a focal point. Their advantages relative to other classes of models, and how they can be adapted to handle multivariate data and temporally evolving data, will be highlighted.
In statistical modelling contexts, the use of one-step-ahead prediction errors for testing hypotheses on the forecasting ability of an assumed model has been widely considered. Quite often, the testing procedure requires independence in a sequence of recursive standardized prediction errors, which cannot always be readily deduced, particularly in the case of econometric modelling. In this paper, the results of a series of Monte Carlo simulations reveal that independence can be assumed to hold.
International Statistical Institute Bulletin, 48, 577-580, 1979
Proceedings of the 13th International Workshop on Statistical Modelling, New Orleans, Louisiana, USA, B. Marx & H. Friedl (eds.) 470-473, 1998
Proceedings of the 4th Hellenic European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 596-603, 1998
Proceedings of the 4th Hellenic European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 612-619, 1998
Proceedings of the 4th Hellenic European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 549-556, 1998
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 886-893, 2001
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 778-783, 2001
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 832-838, 2001
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 872-877, 2001
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, September 2001, Athens, Greece, E.A. Lypitakis (ed.), 790-795, 2001
Proceedings of the 5th Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Greece, E.A. Lypitakis (ed.), 747-754, 2001
Technical Report no 205, Department of Statistics, Athens University of Economics and Business, 2004
A Comparison of the Standardized Prediction Error Criterion with other ... Abstract: In this report, two important issues that arise in the evaluation of the standardized prediction error criterion (SPEC) model selection method are investigated in the context of a simulated options market. The first refers to the question of whether the performance of the SPEC algorithm is sensitive to the size of the sample used, and the second to that of how the SPEC algorithm compares with other model selection methods that measure the accuracy of ARCH models in forecasting the realized intra-day volatility.