Imre Kondor - Academia.edu
Papers by Imre Kondor
arXiv (Cornell University), Oct 11, 2012
Correlations and other collective phenomena are considered in a schematic model of pairwise interacting, competing and collaborating agents facing a binary choice and situated at the nodes of the complete graph and a two-dimensional regular lattice, respectively. The agents may be subjected to an idiosyncratic or common external influence, and also to some random noise. The system's stochastic dynamics is studied by numerical simulations. It displays the characteristics of punctuated equilibrium, multiple equilibria, sensitivity to small details, and path dependence. The dynamics is so slow that one can meaningfully speak of quasi-equilibrium states. Measuring the correlations between the agents' choices, we find that they are random both in sign and in absolute value, but their distribution is very broad when the interaction dominates both the noise and the external field. This means that random but strong correlations appear with large probability. In the two-dimensional model this also implies that the correlations fall off with distance very slowly on average: the system is essentially non-local, and small changes at one end may have a strong impact at the other. The strong, random correlations tend to organize a large fraction of the agents into strongly correlated clusters that act together. If we think of this model as a metaphor for social or economic agents or bank networks, the systemic-risk implications of this tendency are clear: any impact on even a single strongly correlated agent will not be confined to a small set but will spread, in an unforeseeable manner, to the whole system via the strong random correlations.
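To make the setup above concrete, here is a minimal simulation sketch in the spirit of the model described in this abstract: binary-choice agents on a two-dimensional periodic lattice with random ±J couplings, a common external field and thermal noise, updated by a heat-bath rule, with pairwise correlations measured from the recorded configurations. All parameter values, the update rule and the variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): binary-choice agents on an L x L
# periodic lattice with random +/-J couplings, a common field h and noise 1/beta.
rng = np.random.default_rng(0)
L, J, h, beta = 32, 1.0, 0.1, 2.0
n_sites = L * L

s = rng.choice([-1, 1], size=(L, L))          # agents' binary choices
Jh = J * rng.choice([-1, 1], size=(L, L))     # coupling of (i, j) to its right neighbour
Jv = J * rng.choice([-1, 1], size=(L, L))     # coupling of (i, j) to its lower neighbour

def local_field(i, j):
    """Influence of the four neighbours plus the common external field."""
    return (Jh[i, j] * s[i, (j + 1) % L] + Jh[i, (j - 1) % L] * s[i, (j - 1) % L]
            + Jv[i, j] * s[(i + 1) % L, j] + Jv[(i - 1) % L, j] * s[(i - 1) % L, j] + h)

snapshots = []
n_sweeps, burn_in = 200, 100
for sweep in range(n_sweeps):
    for _ in range(n_sites):                  # one sweep = n_sites single-agent updates
        i, j = rng.integers(L, size=2)
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field(i, j)))   # heat-bath rule
        s[i, j] = 1 if rng.random() < p_up else -1
    if sweep >= burn_in:
        snapshots.append(s.ravel().copy())

X = np.array(snapshots)                        # (samples, agents)
corr = np.corrcoef(X.T)                        # pairwise correlations between agents
print("percentiles of |correlation|:", np.nanpercentile(np.abs(corr), [50, 90, 99]))
```

When the interaction term dominates the field and the noise (large beta, small h), the distribution of |correlations| measured this way becomes broad, which is the qualitative effect the abstract refers to.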
RePEc: Research Papers in Economics, Nov 1, 2009
The optimization of large portfolios displays an inherent instability to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with expected shortfall as the risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification "pressure". This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework, enabling the investor to trade off between the two depending on the size of the available data set.
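As a rough numerical counterpart of the regularized problem described above, the following sketch minimizes the Rockafellar–Uryasev representation of Expected Shortfall plus an L2 penalty under a budget constraint. The simulated return history, the confidence level, the penalty strength and the use of a general-purpose solver are all illustrative assumptions; the paper itself works through the support-vector-regression connection rather than this direct formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: l2-regularized Expected Shortfall minimization with a
# budget constraint, written in the Rockafellar-Uryasev form.  Data and
# parameters (N, T, alpha, eta) are made up for demonstration.
rng = np.random.default_rng(1)
N, T, alpha, eta = 10, 250, 0.95, 0.1
R = rng.normal(0.0, 0.01, size=(T, N))           # simulated return history

def objective(x):
    w, t = x[:N], x[N]
    losses = -R @ w                               # portfolio losses over the sample
    es = t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)
    return es + eta * np.dot(w, w)                # ES plus l2 "diversification pressure"

budget = {"type": "eq", "fun": lambda x: np.sum(x[:N]) - 1.0}
x0 = np.concatenate([np.ones(N) / N, [0.0]])
res = minimize(objective, x0, constraints=[budget], method="SLSQP")
print("regularized ES-optimal weights:", np.round(res.x[:N], 3))
```

In practice the non-smooth max() term is usually handled by a linear-programming reformulation; the direct call to a general-purpose solver here is only meant to show how the regularizer and the budget constraint enter the objective.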
Journal of Statistical Mechanics: Theory and Experiment, Jan 4, 2019
The optimization of a large random portfolio under the expected shortfall risk measure with an ℓ2 regularizer is carried out by analytical calculation for the case of uncorrelated Gaussian returns. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number N of different assets in the portfolio is much less than the length T of the available time series, the regularizer plays a negligible role even if its strength η is large, while in the opposite limit, where the size of samples is comparable to, or even smaller than, the number of assets, the optimum is almost entirely determined by the regularizer. We construct the contour map of estimation error on the N/T versus η plane and find that for a given value of the estimation error the gain in N/T due to the regularizer can reach a factor of about four for a sufficiently strong regularizer.
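For orientation, the "estimation error" referred to here is conventionally measured by the true cost of the sample-based optimum relative to the true optimum. Schematically, in the simplest (unregularized, variance-based) benchmark case, which is not a result of this paper but helps to read the contour maps:

```latex
q_0^2 \;=\; \frac{\mathbf{w}^{*\top}\,\Sigma\,\mathbf{w}^{*}}
                 {\mathbf{w}_{\mathrm{opt}}^{\top}\,\Sigma\,\mathbf{w}_{\mathrm{opt}}},
\qquad
q_0\big|_{\text{unregularized variance}} \;\approx\; \frac{1}{\sqrt{1 - N/T}},
```

where w* is the weight vector optimized on a finite sample of length T, w_opt is the true optimum, and Σ is the true covariance matrix. The divergence as N/T → 1 (for ES, at a confidence-level-dependent critical ratio) is the blow-up that the ℓ2 regularizer discussed above removes.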
European Physical Journal B, 2019
The optimization of the variance of a portfolio of N independent but not identically distributed assets, supplemented by a budget constraint and an asymmetric ℓ1 regularizer, is carried out analytically by the replica method borrowed from the theory of disordered systems. The asymmetric regularizer allows us to penalize short and long positions differently, so the present treatment includes the no-short-selling-constrained portfolio optimization problem as a special case. Results are presented for the out-of-sample and in-sample estimators of the regularized variance, the relative estimation error, the density of the assets eliminated from the portfolio by the regularizer, and the distribution of the optimal portfolio weights. We have studied the dependence of these quantities on the ratio r of the portfolio's dimension N to the sample size T, and on the strength of the regularizer. We have checked the analytic results by numerical simulations and found general agreement. Regularization extends the interval where the optimization can be carried out and suppresses the large sample fluctuations, but the performance of ℓ1 regularization is rather disappointing: if the sample size is large relative to the dimension, i.e. r is small, the regularizer does not play any role, while for values of r where the regularizer starts to be felt, the estimation error is already so large as to make the whole optimization exercise pointless. We find that ℓ1 regularization can eliminate at most half the assets from the portfolio (by setting their weights to exactly zero); correspondingly, there is a critical ratio r = 2 beyond which the ℓ1-regularized variance cannot be optimized: the regularized variance becomes constant over the simplex. These facts do not seem to have been noticed in the literature.
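A small numerical illustration of the asymmetric ℓ1 setup described above, using the standard split of the weights into long and short parts so that the penalty stays smooth. The sample data, the two penalty strengths and the solver are assumptions made for the sketch, not choices from the paper (which solves the problem analytically by replicas).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: variance minimization with an asymmetric l1 penalty,
# implemented via the split w = u - v with u, v >= 0.  Data and the two penalty
# strengths (eta_long, eta_short) are illustrative.
rng = np.random.default_rng(2)
N, T = 20, 60
R = rng.normal(size=(T, N)) * rng.uniform(0.5, 2.0, size=N)   # independent, not identical
C = np.cov(R, rowvar=False)                                    # sample covariance
eta_long, eta_short = 0.01, 0.05                               # penalize shorts more heavily

def objective(x):
    u, v = x[:N], x[N:]
    w = u - v
    return w @ C @ w + eta_long * u.sum() + eta_short * v.sum()

budget = {"type": "eq", "fun": lambda x: np.sum(x[:N] - x[N:]) - 1.0}
bounds = [(0.0, None)] * (2 * N)
x0 = np.concatenate([np.ones(N) / N, np.zeros(N)])
res = minimize(objective, x0, bounds=bounds, constraints=[budget], method="SLSQP")
w = res.x[:N] - res.x[N:]
print("weights driven to (numerically) zero:", np.sum(np.abs(w) < 1e-4), "out of", N)
```

The point to observe is that a fraction of the weights is driven to (numerically) zero, i.e. the regularizer acts as an asset filter; this filtering is the mechanism behind the critical ratio r = 2 discussed above.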
WORLD SCIENTIFIC eBooks, Apr 1, 1992
The book is dedicated to the 60th birthday of Peter Szepfalusy, an outstanding physicist who contributed to several fields of modern statistical physics. The 43 contributions by renowned scientists deal with various subjects reflecting the broad range of research activities of Peter Szepfalusy. About half of the papers are devoted to phase transitions in many-body systems. Current papers in the fields of the quantum Hall effect, superconductivity and spin glasses are included. Chaotic dynamical systems are another main topic of the book. Five papers on …
RePEc: Research Papers in Economics, Apr 1, 2014
Investors who optimize their portfolios under any of the coherent risk measures are naturally led to regularized portfolio optimization when they take into account the impact their trades make on the market. We show here that the impact function determines which regularizer is used. We also show that any regularizer based on the norm Lp with p > 1 makes the sensitivity of coherent risk measures to estimation error disappear, while regularizers with p < 1 do not. The L1 norm represents a border case: its "soft" implementation does not remove the instability, but rather shifts its locus, whereas its "hard" implementation (equivalent to a ban on short selling) eliminates it. We demonstrate these effects on the important special case of Expected Shortfall (ES), which is on its way to becoming the next global regulatory market risk measure.
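Written schematically (with generic notation that is not necessarily the paper's), the regularized problem referred to above is

```latex
\min_{\mathbf{w}}\;\; \mathrm{ES}_{\alpha}(\mathbf{w}) \;+\; \eta\,\lVert \mathbf{w}\rVert_p^p
\qquad \text{subject to} \qquad \sum_{i=1}^{N} w_i = 1,
```

with the abstract's statement being that the estimation-error instability of the coherent risk measure is removed for p > 1, survives for p < 1, and that p = 1 is the border case whose "soft" and "hard" versions behave differently.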
arXiv (Cornell University), Feb 22, 2014
The problem of the estimation error of Expected Shortfall is analyzed, with a view to its introduction as a global regulatory risk measure.
European Physical Journal B, May 1, 2002
According to recent findings [1, 2], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [1], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [1, 2] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
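The size of the noise effect in the linearly constrained case can be checked with a few lines of simulation. The following sketch (with an identity matrix standing in for the "true" covariance and arbitrary N and T, both illustrative assumptions) compares the true risk of the sample-based minimum-variance portfolio to the true optimal risk.

```python
import numpy as np

# Illustrative sketch of the effect of measurement noise on minimum-variance
# portfolios: compare the true risk of the noisy (sample-based) optimum to the
# true optimum.  The "true" covariance and the sizes N, T are made up.
rng = np.random.default_rng(3)
N, T = 100, 500
Sigma = np.eye(N)                                   # stand-in for the true covariance
ones = np.ones(N)

def min_var_weights(C):
    """Minimum-variance weights under the budget constraint sum(w) = 1."""
    x = np.linalg.solve(C, ones)
    return x / (ones @ x)

R = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
C_sample = np.cov(R, rowvar=False)

w_noisy = min_var_weights(C_sample)
w_true = min_var_weights(Sigma)
ratio = (w_noisy @ Sigma @ w_noisy) / (w_true @ Sigma @ w_true)
print(f"true risk of noisy optimum / true optimal risk: {ratio:.3f}")
# For i.i.d. Gaussian data this ratio is roughly 1 / (1 - N/T).
```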
RePEc: Research Papers in Economics, Nov 1, 2001
Recent studies inspired by results from random matrix theory (
arXiv (Cornell University), Apr 15, 2014
Investors who optimize their portfolios under any of the coherent risk measures are naturally led to regularized portfolio optimization when they take into account the impact their trades make on the market. We show here that the impact function determines which regularizer is used. We also show that any regularizer based on the norm Lp with p > 1 makes the sensitivity of coherent risk measures to estimation error disappear, while regularizers with p < 1 do not. The L1 norm represents a border case: its "soft" implementation does not remove the instability, but rather shifts its locus, whereas its "hard" implementation (equivalent to a ban on short selling) eliminates it. We demonstrate these effects on the important special case of Expected Shortfall (ES), which is on its way to becoming the next global regulatory market risk measure.
RePEc: Research Papers in Economics, 2021
Social Science Research Network, 2015
The contour map of estimation error of Expected Shortfall (ES) is constructed. It allows one to quantitatively determine the sample size (the length of the time series) required by the optimization under ES of large institutional portfolios for a given size of the portfolio, at a given confidence level and a given estimation error. ES is on its way to becoming the new regulatory market risk measure [1]. Even though the primary purpose of ES will be to characterize the risk in an institution's existing portfolio, banks will have to optimize their investment and trading activities under the constraints of ES. This is analogous to the classical portfolio selection problem, with ES replacing variance as the cost function.
arXiv (Cornell University), Feb 26, 2016
The optimization of a large random portfolio under the Expected Shortfall risk measure with an ℓ2 regularizer is carried out by analytical calculation. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number N of different assets in the portfolio is much less than the length T of the available time series, the regularizer plays a negligible role even if its strength η is large, while in the opposite limit, where the size of samples is comparable to, or even smaller than the number of assets, the optimum is almost entirely determined by the regularizer. We construct the contour map of estimation error on the N/T vs. η plane and find that for a given value of the estimation error the gain in N/T due to the regularizer can reach a factor of about 4 for a sufficiently strong regularizer.
Journal of Statistical Mechanics: Theory and Experiment, Dec 12, 2017
A portfolio of independent, but not identically distributed, returns is optimized under the variance risk measure, in the high-dimensional limit where the number N of different assets in the portfolio and the sample size T are assumed large with their ratio r = N/T kept finite, with a ban on short positions. To the best of our knowledge, this is the first time such a constrained optimization has been carried out analytically, which is made possible by the application of methods borrowed from the theory of disordered systems. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e. the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value r = 2. This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of r = 1 to r = 2. At r = 2 the out-of-sample estimator for the portfolio variance stays finite while the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T < 2. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing N, becoming one in the large-N limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will fluctuate wildly from sample to sample. With some narrative license we may say that the regularizer takes care of the longitudinal fluctuations of the optimal weight vector, but does not eliminate the divergent transverse fluctuations.
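The claim that above r = N/T = 2 the in-sample variance of the short-selling-banned portfolio typically vanishes can be probed numerically. The sketch below solves the constrained sample problem with a general-purpose solver for illustrative choices of N and T on either side of the critical ratio; the sizes and the solver are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative check: minimum in-sample variance with a ban on short positions,
# for aspect ratios r = N/T below and above the critical value 2.
rng = np.random.default_rng(5)

def min_insample_variance(N, T):
    R = rng.normal(size=(T, N))                      # symmetric, i.i.d. returns
    C = (R.T @ R) / T                                # sample covariance (zero mean assumed)
    budget = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    res = minimize(lambda w: w @ C @ w, np.ones(N) / N,
                   bounds=[(0.0, None)] * N, constraints=[budget], method="SLSQP")
    return res.fun

for N, T in [(30, 60), (90, 30)]:                    # r = 0.5 and r = 3
    print(f"N/T = {N/T:.1f}: in-sample variance = {min_insample_variance(N, T):.2e}")
```

For the ratio above 2 the reported in-sample variance comes out essentially zero, illustrating the degenerate, sample-dependent "solutions" the abstract warns about.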
Physical Review E, Mar 24, 2006
The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email- and letter-based communication, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed while a few experience very long waiting times. In contrast, priority-blind execution is well approximated by uniform interevent statistics. We discuss two queueing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting times of the individual tasks follow a heavy-tailed distribution P(τ_w) ∼ τ_w^{−α} with α = 3/2. The second model imposes limitations on the queue length, resulting in a heavy-tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, surface-mail-based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extensions of the proposed queueing models and outline some future challenges in exploring the statistical mechanisms of human dynamics. These findings have important implications not only for our quantitative understanding of human activity patterns, but also for resource management and service allocation in both communications and retail.
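A minimal sketch of the second (length-limited) queueing model mentioned above: a short list of tasks with random priorities, the highest-priority one executed with probability p and a random one otherwise. The list length, p and the run length are illustrative choices; the heavy tail of the recorded waiting times is the qualitative feature the abstract describes.

```python
import numpy as np

# Illustrative sketch of a length-limited priority queue: Q tasks with random
# priorities; with probability p the highest-priority task is executed,
# otherwise a random one, and the executed task is replaced by a fresh one.
rng = np.random.default_rng(4)
Q, p, steps = 2, 0.99999, 200_000

priorities = rng.random(Q)
ages = np.zeros(Q, dtype=int)             # time each task has spent in the queue
waiting_times = []

for _ in range(steps):
    if rng.random() < p:
        k = int(np.argmax(priorities))    # execute the highest-priority task
    else:
        k = int(rng.integers(Q))          # occasionally execute a random task
    waiting_times.append(ages[k] + 1)
    priorities[k] = rng.random()          # replace with a fresh task
    ages += 1
    ages[k] = 0

w = np.array(waiting_times)
# crude check of the heavy tail: survival probability at a few thresholds
for tau in (1, 10, 100, 1000):
    print(f"P(wait >= {tau:4d}) = {np.mean(w >= tau):.4f}")
```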
arXiv (Cornell University), Oct 16, 2015
The contour maps of the error of historical and parametric estimates, respectively, for large random portfolios optimized under the risk measure Expected Shortfall (ES) are constructed. Similar maps for the sensitivity of the portfolio weights to small changes in the returns, as well as for the VaR of the ES-optimized portfolio, are also presented, along with results for the distribution of portfolio weights over the random samples and for the out-of-sample and in-sample estimates of ES. The contour maps allow one to quantitatively determine the sample size (the length of the time series) required by the optimization for a given number of different assets in the portfolio, at a given confidence level and a given level of relative estimation error. The necessary sample sizes invariably turn out to be unrealistically large for any reasonable choice of the number of assets and the confidence level. These results are obtained via analytical calculations based on methods borrowed from the statistical physics of random systems, supported by numerical simulations.
arXiv (Cornell University), Apr 23, 2010
We consider the problem of portfolio optimization in the presence of market impact, and derive optimal liquidation strategies. We discuss in detail the problem of finding the optimal portfolio under Expected Shortfall (ES) in the case of linear market impact. We show that, once market impact is taken into account, a regularized version of the usual optimization problem naturally emerges. We characterize the typical behavior of the optimal liquidation strategies, in the limit of large portfolio sizes, and show how the market impact removes the instability of ES in this context.
LSE Research Online Documents on Economics, 2019
The optimization of a large random portfolio under the Expected Shortfall risk measure with an ℓ2 regularizer is carried out by analytical calculation. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number of different assets in the portfolio is much less than the length of the available time series, the regularizer plays a negligible role, while in the opposite limit (which occurs much more frequently in practice), where the sample size is comparable to, or even smaller than, the number of different assets, the optimum is almost entirely determined by the regularizer. Our results clearly show that the transition region between these two extremes is relatively narrow, and it is only here that one can meaningfully speak of a trade-off between fluctuations and bias.
World Scientific lecture notes in physics, Nov 1, 1986
We study, near T_c, the stability of Parisi's solution for the long-range spin glass. In addition to the discrete, "longitudinal" spectrum found by Thouless, de Almeida, and Kosterlitz, we find "transverse" bands depending on one or two continuous parameters, and a host of zero modes occupying most of the parameter space. All eigenvalues are non-negative, proving that Parisi's solution is marginally stable.