Anna Chernobai - Academia.edu
Papers by Anna Chernobai
HSC Research Reports, 2005
In this paper, we present a procedure for consistent estimation of the severity and frequency distributions based on incomplete insurance data and demonstrate that ignoring the thresholds leads to a serious underestimation of the ruin probabilities. The event frequency is modelled with a non-homogeneous Poisson process with a sinusoidal intensity rate function. The choice of an adequate loss distribution is conducted via the in-sample goodness-of-fit procedures and forecasting, using classical and robust methodologies.
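As a rough illustration of the frequency component described in this abstract, the sketch below simulates a non-homogeneous Poisson process with a sinusoidal intensity by thinning. The functional form lambda(t) = a + b*sin(2*pi*t + phase) and all parameter values are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

def simulate_sinusoidal_nhpp(a, b, phase, t_max, seed=None):
    """Simulate arrival times of a non-homogeneous Poisson process with
    intensity lambda(t) = a + b*sin(2*pi*t + phase) on [0, t_max] via thinning.
    Requires a >= b >= 0 so that the intensity stays non-negative."""
    rng = np.random.default_rng(seed)
    lam_max = a + b                              # upper bound on the intensity
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate arrival from the bounding process
        if t > t_max:
            break
        lam_t = a + b * np.sin(2.0 * np.pi * t + phase)
        if rng.uniform() < lam_t / lam_max:      # accept with probability lambda(t)/lam_max
            events.append(t)
    return np.array(events)

# Illustrative run: a seasonal (annual) cycle in loss frequency over 10 years.
arrivals = simulate_sinusoidal_nhpp(a=30.0, b=10.0, phase=0.0, t_max=10.0, seed=1)
print(f"{arrivals.size} simulated loss arrivals")
```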
Social Science Research Network, 2009
We examine the incidence of operational losses among U.S. financial institutions using publicly reported loss data from 1980 to 2005. We show that most operational losses can be traced to a breakdown of internal control, and that firms suffering from these losses tend to be younger and more complex, and have higher credit risk, more antitakeover provisions, and chief executive officers (CEOs) with higher stock option holdings and bonuses relative to salary. These findings highlight the correlation between operational risk and credit risk, as well as the role of corporate governance and proper managerial incentives in mitigating operational risk.
Social Science Research Network, 2012
We investigate the disclosures of material weaknesses in internal control mandated for Japanese firms under the 2006 Financial Instruments and Exchange Law. We find that the presence of a material weakness is more likely for firms that are younger, have better growth prospects, have a volatile operating environment, are financially constrained, and have weak governance structures. We examine the role of Japan's main banks in this process and find that the likelihood of a material weakness is higher for firms with stronger links with their main banks. We also show that the financial health of the main banks themselves, proxied by the banks' BIS ratios and bad loan ratios, increases the likelihood of a material weakness in affiliated firms. This paper provides novel insights into the determinants of material weaknesses of Japanese firms since the passage of the law. Results from this study contribute to the literature on material weaknesses and relationship banking.
Social Science Research Network, 2017
We investigate information transfer effects of operational loss announcements to the announcing firm's blockholder. Based on an event study, we find that the firm-blockholder link tends to be weak for U.S. financial sector blockholders, with significant negative spillover effects occurring mainly for larger blockholders, for blockholders incorporated as depository institutions with a higher exposure to the respective loss-announcing firm, and for higher operational loss amounts. Our findings contribute to the understanding of the equity pricing process in the presence of operational risk, and they show that operational risk is not entirely idiosyncratic but may bear a considerable contagious element.
This paper develops a theoretical model that explains how long-term consumption goals influence households' choice of real assets in situations where these assets have no investment value. Using a suburban residential housing market as an example of such a market, we develop a general equilibrium model for the valuation of illiquid assets. We show that, in equilibrium, a clientele effect persists, with long-tenure agents overwhelmingly choosing higher-quality properties and short-tenure agents settling for lower-quality properties. Equilibrium values of our model variables related to buyers and sellers are simultaneously determined in a competitive Nash equilibrium. We also show that price-based and time-based liquidity measures may behave in a conflicting manner in equilibrium, which is a novel result in itself. This paper contributes to the understanding of a consumption-based clientele effect as well as different measures of liquidity.
Social Science Research Network, 2020
We study earnings per share (EPS) forecast revision and accuracy of banking analysts around operational risk event announcements in U.S. banks. We find that first announcements of operational risk events are more informative than their settlement announcements. Optimistic banking analysts revise their forecasts downward more aggressively around operational risk disclosures, thereby improving forecast accuracy. Career concerns of banking analysts cause an upward bias in forecast revision and deterioration in forecast accuracy only if the potential employer is a systemically important bank (SIB). We find consistent evidence linking competition among banking analysts with optimistic and inaccurate forecasts, which is consistent with analysts seeking to use inflated forecasts to curry favour and attract business to their brokerage house around the time of operational risk disclosures. Global settlement has no favourable impact on analyst forecast accuracy around operational risk event announcements. We find evidence supporting a materiality threshold of $10 million for the informativeness of operational risk event announcements in SIBs. Overall, our results shed light on optimism bias in banking analyst behaviour upon the arrival of unanticipated news.
Social Science Research Network, 2023
"We develop a theoretical model of illiquidity, in which illiquid assets are being traded by... more "We develop a theoretical model of illiquidity, in which illiquid assets are being traded by two agents -- buyers and sellers. Illiquidity is defined as the expected time it takes a seller to sell his asset at the optimal price. The theoretical model is developed in the context of transactions of residential housing properties that exemplify this type of assets. Unlike markets with perfectly liquid assets, in trading illiquid assets delaying the transaction creates a positive value to both sellers and buyers, that is induced by a non-zero probability that a better deal may come up. Then, because housing markets are decentralized, home buyers are willing to accept time-related search costs as the price of finding a better match. Selleris waiting, on the other hand, may be rewarded with a visit of the buyer who assigns the highest value to his house and is willing to pay the offered price. In our model, buyers are modeled as agents that are heterogeneous in their observed match with a house that they visit. At the same time, by their nature houses are heterogeneous and thus offer different levels of utility to different prospective buyers. A household buys a house only if the observed match is at least as high as the reservation match. During the homeownership, however, the match may get lost (for example, due to change in family size or job relocation), in which case the owner immediately puts the house up for sale, thus assuming simultaneously a role of a seller and a buyer. The objective of this paper is two-fold. The first goal is to develop a theoretical framework for the effects of competition among buyers n a consequence of a multiple-buyer arrival n on (a) equilibrium sales prices for the properties and (b) their levels of liquidity, namely time on the market. This Nash equilibrium is simultaneously determined for the two outcomes of this competitive behavior. As an illustration, if a house that is put up by seller for sale is inspected by two prospective buyers, then the probability of the house getting sold during this period is higher than in the case with only one buyer. Thus, time on the market should decline with a higher number of buyers. At the same time, the seller knows that each buyer visits two different houses each period. As a result, a buyeris match for a house exceeding his reservation match does not guarantee his buying this house. This occurs because there exists a possibility that the match observed for the second house is higher than for the first house. This in turn reduces the probability that the seller sells her house to a prospective buyer in this period, and so the time on the market is expected to increase. We hypothesize that the reduction in the probability of sale due to the possibility of a buyeris better match with a different house would be outweighed by a better chance of having her own house sold due to the presence of an additional prospective buyer. In equilibrium, in the case with two buyers this results in time on the market being shorter. In turn, the equilibrium sale price is expected to be lower with two prospective buyers visiting each house in each period. Furthermore, if the buyer arrival process is sufficiently dense, a higher number of buyers visiting same house each period is expected to result in a lower equilibrium price and time on the market. In the limit, we expect to see a perfectly liquid market. 
We seek to determine whether the relation between the arrival frequency and the level of liquidity is linear. The second objective is to generalize the model by extending it to a case with two classes of buyers and two classes of houses. Namely, in our model, buyers can vary in their expected tenure and can be short-term or long-term buyers and the two types of houses are (1) lower-quality houses that do not provide a match sufficiently high to appeal to a prospective buyer, and (2) higher-quality houses that do. The two types of houses should be priced differently. Then, in each period, each prospective buyer visits two houses n one house of each type n and picks the one with the highest net gain. The trade-off to be considered here can be intuitively summarized as follows: Even if a buyer observes a higher match with the first house than with the second, provided that the price exceeds substantially the price of the second house, the net gain of the first house is reduced, thus encouraging the buyeris preference to switch from the first house with a higher match to the second house with a lower match but a lower price. In our model, short-term buyersi per-period probability of buying a house is more sensitive to changes in sales prices than the corresponding probability of long-term buyers. In our model we also expect to establish the result that, in the presence of short-term and longterm buyers, there is a higher equilibrium relative proportion of short-term buyers choosing the lower-quality houses and a higher equilibrium relative proportion of…
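A minimal way to formalize the time-on-market intuition above, in our own illustrative notation rather than the paper's: suppose each of k prospective buyers who visits the house in a period independently draws a match that clears the seller's reservation level with probability q, and ignore for the moment the offsetting effect of each buyer also inspecting a competing house. Then

```latex
% q = probability that one buyer's observed match clears the reservation match
% k = number of prospective buyers visiting the house in a given period
\begin{aligned}
p_{\text{sale}}(k) &= 1 - (1 - q)^{k}, \\
\mathbb{E}[\text{time on market}] &= \frac{1}{p_{\text{sale}}(k)},
\end{aligned}
```

so, holding q fixed, a denser buyer arrival process mechanically shortens the expected time on the market; the equilibrium analysis described in the abstract weighs this against the reduction in q that arises when each buyer also visits a second house.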
Social Science Research Network, 2005
In this paper, we present a procedure for consistent estimation of the severity and frequency distributions based on incomplete insurance data and demonstrate that ignoring the thresholds leads to a serious underestimation of the ruin probabilities. The event frequency is modelled with a non-homogeneous Poisson process with a sinusoidal intensity rate function. The choice of an adequate loss distribution is conducted via the in-sample goodness-of-fit procedures and forecasting, using classical and robust methodologies.
QUT Business School, 2006
In this paper, we present a procedure for consistent estimation of the severity and frequency distributions based on incomplete insurance data and demonstrate that ignoring the thresholds leads to a serious underestimation of the ruin probabilities. The event frequency is modelled with a non-homogeneous Poisson process with a sinusoidal intensity rate function. The choice of an adequate loss distribution is conducted via the in-sample goodness-of-fit procedures and forecasting, using classical and robust methodologies.
Operational Risk Is Not Just "Other" Risks. Until very recently, it has been believed that banks are exposed to two main risks. In the order of importance, they are credit risk (counterparty failure) and market risk (loss due to changes in market indicators, such as equity prices, interest rates, and exchange rates). Operational risk has been regarded as a mere part of "other" risks. Operational risk is not a new concept for banks. Operational losses have been reflected in banks' balance sheets for many decades. They occur in the banking industry every day. Operational risk affects the soundness and operating efficiency of all banking activities and all business units. Most of the losses are relatively small in magnitude; the fact that these losses are frequent makes them predictable and often preventable. Examples of such operational losses include losses resulting from accidental accounting errors, minor credit card fraud, or equipment failures. Operational risk-related events that are often more severe in the magnitude of incurred loss include tax noncompliance, unauthorized trading activities, major internal fraudulent activities, business disruptions due to natural disasters, and vandalism. Until around the 1990s, the latter events were infrequent, and even if they did occur, banks were capable of sustaining the losses without major consequences. This is quite understandable because operations within the banking industry until roughly 20 years ago were subject to numerous restrictions, keeping trading volumes relatively modest and the diversity of operations limited. Therefore, the significance of operational risk (whose impact is positively correlated with income size and dispersion of business units) was perceived as minor, with limited effect on management's decision-making and capital allocation when compared to credit risk and market risk. However, serious changes in the global financial markets in the last 20 years or so have caused noticeable shifts in banks' risk profiles.
Operational IT failures have significant negative impacts on firms, but little is known about their origins. Building on accounting research linking adverse operational events to SOX-disclosed control weaknesses (CWs) over financial reporting, we study the origins of IT failures in relation to IT-CWs. We use a sample of 212 operational IT failures where the confidentiality, integrity, or availability of data assets and functional IT assets (hardware, networks, etc.) has been compromised. We find that IT failures are linked to a relatively small set of IT-CWs, where each IT failure type is linked to a distinctly different subset of IT-CWs. Moreover, IT failures that are more harmful to the firm are found to be associated with IT-CWs that are more severe in the sense that they are more difficult to remediate.
Computational Statistics, 2006
In this paper, we present a procedure for consistent estimation of the severity and frequency distributions based on incomplete insurance data and demonstrate that ignoring the thresholds leads to a serious underestimation of the ruin probabilities. The event frequency is modelled with a non-homogeneous Poisson process with a sinusoidal intensity rate function. The choice of an adequate loss distribution is conducted via the in-sample goodness-of-fit procedures and forecasting, using classical and robust methodologies.
We examine the microeconomic and macroeconomic determinants of operational losses in financial institutions. Using 24 years of U.S. public operational loss data from 1980 to 2003, we demonstrate that the firm-specific environment is a key determinant of operational risk; firm-specific characteristics such as size, leverage, volatility, book-to-market, profitability, and the number of employees are all highly significant in our models. In contrast, while there is some evidence that operational losses are more frequent and more severe during economic downturns, overall the macroeconomic environment appears less important. We further test the doubly-stochastic Poisson assumption with respect to the arrivals of operational losses, given the estimated arrival intensities. Despite the traditional view that operational risk is unsystematic, we find evidence of clustering of operational risk events at the industry level in excess of what is predicted by the stochastic frequency estimates. JEL Classification...
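The frequency modelling described here can be illustrated with a stylized Poisson regression of loss counts on firm-specific covariates. Everything below, including the simulated panel, the covariate names, and the use of statsmodels, is an illustrative assumption rather than the authors' data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-year panel: annual loss counts plus firm-specific covariates.
rng = np.random.default_rng(0)
n = 500
panel = pd.DataFrame({
    "size":           rng.normal(10, 2, n),        # e.g., log total assets
    "leverage":       rng.uniform(0.5, 0.95, n),
    "volatility":     rng.uniform(0.1, 0.6, n),
    "book_to_market": rng.uniform(0.2, 1.5, n),
})
# Simulated counts whose intensity rises with size and volatility.
lam = np.exp(-6 + 0.5 * panel["size"] + 2.0 * panel["volatility"])
panel["loss_count"] = rng.poisson(lam)

# Poisson regression of counts on the covariates.
X = sm.add_constant(panel[["size", "leverage", "volatility", "book_to_market"]])
model = sm.GLM(panel["loss_count"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```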
The Basel II Capital Accord of 2004 sets guidelines on operational risk capital requirements to be adopted by internationally active banks by around year-end 2007. Operational loss databases are subject to a minimum recording threshold of roughly $10,000 (internal) and $1 million (external), an aspect often overlooked by practitioners. We provide theoretical and empirical evidence that ignoring these thresholds leads to underestimation of the VaR and CVaR figures within the Loss Distribution Approach. We emphasize that four crucial components of a reliable operational loss actuarial model are: (1) a non-homogeneous Poisson process for the loss arrival process, (2) flexible loss severity distributions, (3) accounting for the incomplete data, and (4) robustness analysis of the model.
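As a sketch of the threshold effect this abstract warns about (simplified assumptions throughout: Poisson frequency, lognormal severity, invented parameter values, and no adjustment of the observed frequency for losses below the threshold), one can compare the VaR implied by a naive fit of the recorded exceedances with a fit that accounts for the left truncation:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
mu, sigma, lam, H = 9.0, 2.0, 50.0, 10_000.0   # "true" severity/frequency and recording threshold

# Complete loss history versus what a database with a $10,000 threshold records.
losses = rng.lognormal(mu, sigma, size=5_000)
observed = losses[losses >= H]

# Naive fit: treat the recorded losses as if they were a complete sample.
naive_mu, naive_sigma = np.mean(np.log(observed)), np.std(np.log(observed))

# Truncation-aware fit: maximize the left-truncated lognormal likelihood.
def neg_loglik(theta):
    m, s = theta[0], abs(theta[1])
    return -np.sum(stats.norm.logpdf(np.log(observed), m, s)
                   - stats.norm.logsf(np.log(H), m, s))

fit = optimize.minimize(neg_loglik, x0=[naive_mu, naive_sigma], method="Nelder-Mead")
trunc_mu, trunc_sigma = fit.x[0], abs(fit.x[1])

def annual_var(m, s, q=0.999, n_sims=20_000):
    """Monte Carlo quantile of the aggregate annual loss under a compound Poisson model."""
    totals = [rng.lognormal(m, s, rng.poisson(lam)).sum() for _ in range(n_sims)]
    return np.quantile(totals, q)

print("99.9% VaR, threshold ignored:", round(annual_var(naive_mu, naive_sigma)))
print("99.9% VaR, truncation-aware :", round(annual_var(trunc_mu, trunc_sigma)))
```

In this toy setup the naive fit understates the severity tail (its sigma estimate is too small), which is the direction of bias the abstract describes; a full treatment would also rescale the observed loss frequency by the estimated probability of exceeding the threshold.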
Risk Management, 2004
The Basel II Capital Accord requires banks to determine the capital charge to account for operational losses. A compound Poisson process with lognormal losses is suggested for this purpose. The paper examines the impact of possibly censored and/or truncated data on the estimation of loss distributions. A procedure for consistent estimation of the severity and frequency distributions based on incomplete data samples is presented. It is also demonstrated that ignoring the peculiarities of the available data samples leads to inaccurate Value-at-Risk estimates that govern the operational risk capital charge.
In this paper, we present a procedure for consistent estimation of the severity and frequency distributions based on incomplete insurance data and demonstrate that ignoring the thresholds leads to a serious underestimation of the ruin probabilities. The event frequency is modelled with a non-homogeneous Poisson process with a sinusoidal intensity rate function. The choice of an adequate loss distribution is conducted via the in-sample goodness-of-fit procedures and forecasting, using classical and robust methodologies.
Modelling catastrophe claims with left-truncated severity distributions. Anna Chernobai, Krzysztof Burnecki, Svetlozar Rachev, Stefan Trück and Rafał Weron. University of California Santa Barbara, CA 93106, USA; Hugo Steinhaus Center for Stochastic Methods, Institute of Mathematics, Wrocław University of Technology, 50-370 Wrocław, Poland; Institut für Statistik und Mathematische Wirtschaftstheorie, Universität Karlsruhe, D-76128 Karlsruhe, Germany.
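The in-sample goodness-of-fit step mentioned in this abstract can be sketched as follows: fit several candidate severity distributions to the exceedances above the known threshold and compare them by the Kolmogorov-Smirnov distance against the left-truncated (conditional) CDF. The sample, the threshold, and the candidate set are invented for illustration, and for brevity the fit uses scipy's unconditional MLE rather than the truncation-corrected likelihood from the previous sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical claim sample recorded only above a known threshold H.
rng = np.random.default_rng(7)
H = 1.0
claims = rng.lognormal(0.5, 1.2, size=3_000)
exceedances = claims[claims >= H]

def conditional_cdf(dist, params):
    """CDF of the severity conditional on exceeding H (left truncation at H)."""
    FH = dist.cdf(H, *params)
    return lambda x: (dist.cdf(x, *params) - FH) / (1.0 - FH)

candidates = {"lognormal": stats.lognorm, "weibull": stats.weibull_min, "pareto": stats.pareto}
for name, dist in candidates.items():
    params = dist.fit(exceedances, floc=0)                        # unconditional MLE (simplification)
    ks = stats.kstest(exceedances, conditional_cdf(dist, params)).statistic
    print(f"{name:10s} KS distance: {ks:.3f}")
```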
The European Journal of Finance
We study earnings per share (EPS) forecast revision and accuracy of banking analysts around operational risk event announcements in U.S. banks. We find that first announcements of operational risk events are more informative than their settlement announcements. Optimistic banking analysts revise their forecasts downward more aggressively around operational risk disclosures, thereby improving forecast accuracy. Career concerns of banking analysts cause an upward bias in forecast revision and deterioration in forecast accuracy only if the potential employer is a systemically important bank (SIB). We find consistent evidence linking competition among banking analysts with optimistic and inaccurate forecasts, which is consistent with analysts seeking to use inflated forecasts to curry favour and attract business to their brokerage house around the time of operational risk disclosures. Global settlement has no favourable impact on analyst forecast accuracy around operational risk event announcements. We find evidence supporting a materiality threshold of $10 million for the informativeness of operational risk event announcements in SIBs. Overall, our results shed light on optimism bias in banking analyst behaviour upon the arrival of unanticipated news.
MIS Quarterly
Following Goldstein et al. (2011), we classify operational IT failures into data- and function-related IT failures. Data-related IT failures are those that result in disclosure of confidential data assets to unauthorized parties, misuse of data assets, or destruction of data assets. Function-related IT failures are those that result in the loss of availability, or from the mis-operation, of functional IT assets responsible for the handling of data assets. Consistent with Benaroch et al. (2012), we define an operational IT failure as any threat to the integrity, confidentiality, or availability of data assets or IT assets (software, hardware, networks, users, system operators, etc.) responsible for the creation, storage, processing, transport, and safeguarding of data assets. Confidentiality events are violations of the assurance that data and IT assets are shared only among authorized persons, systems, or organizations. These events include intentional incidents, such as phishing and hacker attacks on sensitive data (e.g., trade secrets, customer data), misuse of access codes, emailing of confidential data by unauthorized internal personnel, and theft of proprietary source code; and unintentional data leakage incidents, such as loss of notebooks with sensitive data by an employee, erroneous posting of customer data on the firm's website, and vendor loss of data in transport. Integrity events are violations of the assurance that data and IT assets, including information flows, are authentic (i.e., genuine and trustworthy) and correct (i.e., preserved without corruption). Examples of integrity events include erroneous updates of customer accounts by buggy software, user mistyping of a social security number, execution of unauthorized trades by external hackers, transactions settled at incorrect prices because of an erroneous data feed, ATM network malfunction due to a software bug, executing an incorrect trade due to a trader's keystroke error, accidental or malicious deletion or modification of important data or programs by computer viruses or worms, and defacing of a company's website by hackers. Availability events are violations of the assurance that data and IT assets are delivered on a timely basis to those who need them. Examples include denial-of-service attacks and viruses that reproduce to overwhelm network bandwidth and email servers; unforeseen or accidental causes such as technical problems (e.g., hardware malfunctions, network outages, power failures, system crashes, and ISP problems that prevent a website from receiving customer e-orders); natural phenomena (e.g., floods and earthquakes); and human errors (e.g., operator errors).
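The classification described above can be summarized, purely as an illustrative data structure and not the authors' coding scheme, along two dimensions: the security property violated and whether the failure is data- or function-related.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ITFailureType(Enum):
    CONFIDENTIALITY = auto()   # data shared with unauthorized parties
    INTEGRITY = auto()         # data or processing corrupted / incorrect
    AVAILABILITY = auto()      # data or IT assets not delivered on time

@dataclass
class OperationalITFailure:
    description: str
    failure_type: ITFailureType
    data_related: bool         # True for data-related, False for function-related

# Hypothetical events paraphrased from the examples in the classification above.
events = [
    OperationalITFailure("laptop with customer data lost by an employee",
                         ITFailureType.CONFIDENTIALITY, data_related=True),
    OperationalITFailure("trades settled at wrong prices due to an erroneous data feed",
                         ITFailureType.INTEGRITY, data_related=True),
    OperationalITFailure("denial-of-service attack takes the order website offline",
                         ITFailureType.AVAILABILITY, data_related=False),
]
for e in events:
    print(e.failure_type.name, "-", e.description)
```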