Amos Golan | American University

Papers by Amos Golan

Research paper thumbnail of Modeling and Theories

Foundations of Info-Metrics

In this chapter I concentrate on model and theory building, including model-based hypotheses, based on limited information. I show that the info-metrics framework provides a coherent perspective that helps to identify the elements that are needed for building a logically sound model. The examples given in this chapter show how the info-metrics framework can guide the construction of both theories and models. I start the chapter by introducing conceptual building blocks and providing very simple toy examples. Then a more detailed example, taken from the social sciences, is introduced. A detailed discussion of the falsification and validation of models is also provided.

Research paper thumbnail of Using Wine Quality Differential in Grapes Pricing

A model of pricing grapes is developed based on the hedonic hypothesis that wine produced from those grapes is valued by consumers for its quality characteristics. The model attempts to establish a set of producers' pricing rules for quality characteristics in a cooperative framework. The model is then used to estimate the grape components affecting wine quality. Those characteristics are valued, and producer pricing for grapes is established.

Research paper thumbnail of Entropy Maximization

Foundations of Info-Metrics, 2017

In this chapter I develop the essential maximum entropy procedure, which is an inversion procedure for inferring an unknown probability distribution function from incomplete information. The formulation here is the root of info-metrics and is grounded in the motivations provided in Chapter 2 and the metrics defined in Chapter 3. Once the basic maximum entropy problem is defined and the solution is derived via the method of Lagrange multipliers, I derive and discuss its dual formulation. I then define and discuss the concept of conjugate variables, which is related to the Lagrange multipliers. Throughout, the mathematical derivations are supported by graphical illustrations and supplemented with heuristic arguments and with numerous examples in ideal settings.
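The procedure the abstract describes — maximizing entropy subject to moment constraints and solving via the dual of the Lagrangian — can be illustrated with a minimal sketch. This is not code from the chapter; it uses Jaynes' classic dice setup (faces 1–6, an assumed observed mean of 4.5) and minimizes the unconstrained dual, the log partition function minus the constraint term:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical example: maximum entropy distribution over die faces 1..6
# subject to a single first-moment constraint E[X] = 4.5.
x = np.arange(1, 7)
target_mean = 4.5

# Dual objective: log Z(lambda) - lambda * mu. Its minimizer lambda*
# yields the exponential (Gibbs) solution p_i ∝ exp(lambda* x_i).
def dual(lam):
    return np.log(np.sum(np.exp(lam * x))) - lam * target_mean

res = minimize_scalar(dual, bounds=(-10, 10), method="bounded")
p = np.exp(res.x * x)
p /= p.sum()  # normalized maximum entropy distribution
```

At the optimum the gradient of the dual is E_p[X] − 4.5 = 0, so the fitted distribution reproduces the moment exactly; because the target mean exceeds the uniform mean of 3.5, mass tilts toward the high faces.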

Research paper thumbnail of Estimating a Demand System with Choke Prices

We present a new, information-theoretic approach for estimating a system of many demand equations where the unobserved reservation or choke prices vary across consumers. We illustrate this method by estimating a nonlinear, almost ideal demand system (AIDS) for four types of meat using cross-sectional data from Mexico, where most households did not buy at least one type of meat during the survey week. The system of demand curves varies across demographic groups.

Research paper thumbnail of Efficiency, Sufficiency, and Optimality

Foundations of Info-Metrics

In this chapter I provide additional rationalization for using the info-metrics framework. This time the justifications are in terms of the statistical, mathematical, and information-theoretic properties of the formalism. Specifically, in this chapter I discuss optimality, statistical and computational efficiency, sufficiency, the concentration theorem, the conditional limit theorem, and the concept of information compression. These properties, together with the other properties and measures developed in earlier chapters, provide logical, mathematical, and statistical justifications for employing the info-metrics framework.

Research paper thumbnail of Estimating Coke and Pepsi's Price Advertising Strategies

A semi-parametric, information-based estimator is used to estimate strategies in prices and advertising for Coca-Cola and Pepsi-Cola. Separate strategies for each firm are estimated with and without restrictions from game theory. These information/entropy estimators are consistent and efficient. These estimates are used to test theories about the strategies of firms and to see how changes in incomes or factor prices affect these strategies.

Research paper thumbnail of On Information Conservation and Algorithmic Complexity

In this paper we investigate standard prefix-free Kolmogorov complexity in the context of Zellner's information conservation principle (ICP). We show that prefix-free Kolmogorov complexity K is not efficient in this sense. We introduce Information Conserving Algorithmic complexity K∗, defined on a kernel space of random strings. We prove that this version is efficient in a weak sense. We prove that universal Turing machines do not conserve information in a strong sense, but we conjecture the existence of at least one such machine U. Because K∗ conserves information, the prefix-free aspect of the program code can be ignored as an internal aspect of the representation. This leads to a variant of the universal distribution m∗ using a uniform density estimator ξU for the distribution of the random strings. This distribution is shown to be smoother than the standard Solomonoff distribution. Of course ξU is unknown, but since it 'absorbs' our uncertainty about the distribution m∗ uniforml...

Research paper thumbnail of Interval estimation: An information theoretic approach

Econometric Reviews, 2017

We develop here an alternative information-theoretic method of inference for problems where all of the observed information is in terms of intervals. We focus on the unconditional case where the observed information is in terms of the minimal and maximal value at each period. Given interval data, we infer the joint and marginal distributions of the interval variable and its range. Our inferential procedure is based on entropy maximization subject to multidimensional moment conditions and normalization, where the entropy is defined over discretized intervals. The discretization is based on theory or empirically observed quantities. The number of estimated parameters is independent of the discretization, so the level of discretization does not change the fundamental level of complexity of our model. As an example, we apply our method to study the weather pattern for Los Angeles and New York City across the last century.
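The core step — entropy maximization over a discretized grid subject to multidimensional moment conditions — can be sketched as follows. This is an illustrative toy, not the paper's code: the grid of (daily minimum, daily range) values, the discretization, and the two target moments are all assumed for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed discretized support: a 10x10 grid of (minimum, range) cells.
lo = np.repeat(np.arange(5, 15), 10).astype(float)   # discretized minima
rng = np.tile(np.arange(1, 11), 10).astype(float)    # discretized ranges
feats = np.stack([lo, rng])                          # moment features, shape (2, 100)
targets = np.array([9.0, 5.0])                       # assumed sample moments

# Unconstrained dual of the maximum entropy problem with two
# first-moment conditions: log Z(lambda) - lambda . mu.
def dual(lam):
    return np.log(np.exp(lam @ feats).sum()) - lam @ targets

res = minimize(dual, x0=np.zeros(2))
p = np.exp(res.x @ feats)
p /= p.sum()  # joint distribution over the (minimum, range) grid
```

Note that the number of estimated parameters (the two Lagrange multipliers) matches the number of moment conditions, not the number of grid cells — which is exactly the property the abstract emphasizes about the discretization.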

Research paper thumbnail of On the Construction of Prior Information – An Info-Metrics Approach

Advances in Econometrics, 2016

Although in principle prior information can significantly improve inference, incorporating incorrect prior information will bias the estimates of any inferential analysis. This fact deters many scientists from incorporating prior information into their inferential analyses. In the natural sciences, where experiments are more regularly conducted and can be combined with other relevant information, prior information is often used in inferential analysis, despite it being sometimes nontrivial to specify what that information is and how to quantify it. In the social sciences, however, prior information is often hard to come by and very hard to justify or validate. We review a number of ways to construct such information. This information emerges naturally, either from fundamental properties and characteristics of the systems studied or from logical reasoning about the problems being analyzed. Borrowing from concepts and philosophical reasoning used in the natural sciences, and within an info-metrics framework, we discuss three different, yet complementary, approaches for constructing prior information, with an application to the social sciences.

Research paper thumbnail of Effect of Universal TB Vaccination and Other Policy-Relevant Factors on the Probability of Patient Death from COVID-19

SSRN Electronic Journal, 2020

The possibility of recurring waves of the novel coronavirus that triggered the 2020 pandemic makes it critical to identify underlying policy-relevant factors that could be leveraged to decrease future COVID-19 death rates. We examined variation in a number of underlying, policy-relevant, country-level factors and COVID-19 death rates across countries. We found three such factors that significantly impact the survival probability of patients infected with COVID-19. In order of impact, these are universal TB (BCG) vaccination, air pollution deaths, and health-related expenditure. We quantify each probability change by age and sex. To deal with small sample size and high correlations, we use an information-theoretic inferential method that also allows us to introduce priors constructed from independent SARS data.

Research paper thumbnail of What May Visualization Processes Optimize?

In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical, and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can...

Research paper thumbnail of Image Reconstruction : An Information-Theoretic Approach

The objective of this project is to develop an appropriate econometric (statistical) method for analyzing, with minimal assumptions, small and possibly incomplete and ill-behaved data. In this paper we develop an efficient and easy-to-apply image reconstruction (estimation) method for analyzing such data. The resulting method extends (and builds on the foundations of) information-theoretic methods by further relaxing some of the underlying assumptions, uses minimal distributional assumptions, performs well (relative to current methods of estimation and image reconstruction), and efficiently uses all the available information (hard and soft data). Further, this method is computationally efficient.

Research paper thumbnail of Prior Information

Oxford Scholarship Online, 2017

In this chapter I introduce and quantify prior information and show how to incorporate it into the info-metrics framework. The priors developed arise from fundamental properties of the system, from logical reasoning, or from empirical observations. I start the chapter with the derivation of priors for discrete distributions, which can be handled via the grouping property, and a detailed derivation of surprisal analysis. Constructing priors for continuous distributions is more challenging. That problem is tackled via the method of transformation groups, which is related to the mathematical concept of group theory. That method works for both discrete and continuous functions. The last approaches I discuss are based on empirical information. The close relationship between priors, treatment effects, and score functions is discussed and demonstrated in the last section. Visual illustrations of the theory and numerous theoretical and applied examples are provided.

Research paper thumbnail of A Markov Model of Bank Failure Estimated Using an Information-Theoretic Approach

In this paper, we develop an early-warning bank failure model (EWM) designed specifically to capture the dynamic process underlying the transition from financially sound to closure. We model the transition process as a stationary Markov model and estimate the transition probabilities using a Generalized Maximum Entropy (GME) estimation technique. The GME estimation method is a member of the class of information-theoretic methods, is semi-parametric, and is better suited for estimating models in which the data are limited (e.g., few events and data availability problems), highly collinear, and measured with error – conditions that often exist with micro-level banking data. In addition, this method allows us to incorporate prior information and impose fewer distributional assumptions relative to conventional maximum likelihood (or full information maximum likelihood) methods. We report estimates of the transition probabilities for nine transition states for the population of national...

Research paper thumbnail of Information and Its Value

Advances in Info-Metrics, 2020

In this chapter, we are interested in understanding the nature of information and its value. We focus on information that is used for making decisions, including related activities such as constructing models, performing inferences, and making predictions. Our discussion is mostly qualitative, and it touches on certain aspects of information as related to the sender, receiver, and a possible observer. Although our emphasis is on shedding more light on the concept of information for making decisions, we are not concerned here with the exact details of the decision process, or information processing itself. In addition to discussing information, our expedition takes us through the traditional notions of utility, prices, and risk, all of which, under certain conditions, relate to the value of information. Our main conclusion is that the value of information (used in decision making) is relative and subjective. Since information is relative, it can have more than one value, say a value ...

Research paper thumbnail of Estimating Market Power and Strategies: Bibliography

Research paper thumbnail of Career Analyzer Planning Tool (CAPT)

The overall objective of this research is to analyze the impact of a Sailor's personal attributes and demographics, as well as the prevailing macroeconomic conditions and Navy policy, on a Sailor's career. In this study a transition probability for each Sailor is estimated. This model allows investigators to examine many different possible scenarios, such as promotion probability given an individual's acquisition of new skills or training, changes in geographic location, or economic downturns. The technique used is an Information Theoretic, Generalized Cross Entropy (IT-GCE) method.

Research paper thumbnail of Perloff, Jeffrey M., Larry S. Karp, and Amos Golan, Editors. Estimating Market Power and Strategies. Cambridge, UK: Cambridge University Press, 2007, 352 pp., $88.99

Research paper thumbnail of Interpreting Social Accounting Matrix (SAM) as an Information Channel

Entropy, 2020

Information theory, and the concept of an information channel, allows us to calculate the mutual information between the source (input) and the receiver (output), both represented by probability distributions over their possible states. In this paper, we use the theory behind the information channel to provide an enhanced interpretation of a Social Accounting Matrix (SAM), a square matrix whose columns and rows present the expenditure and receipt accounts of economic actors. Under our interpretation, the SAM's coefficients, which, conceptually, can be viewed as a Markov chain, can be interpreted as an information channel, allowing us to optimize the desired level of aggregation within the SAM. In addition, the developed information measures can accurately describe the evolution of a SAM over time. Interpreting the SAM matrix as an ergodic chain could show the effect of a shock on the economy after several periods or economic cycles. Under our new framework, finding the power limit of t...
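The channel view described above rests on a standard computation: normalize a nonnegative flow matrix into a joint distribution and measure the mutual information between its row and column marginals. A minimal sketch, using an assumed toy 2×2 flow matrix rather than any SAM from the paper:

```python
import numpy as np

# Treat a small nonnegative flow matrix as a joint distribution over
# (receipt account, expenditure account) and compute mutual information
# I(X;Y) = sum_ij p_ij * log(p_ij / (p_i. * p_.j)), in nats.
def mutual_information(flows):
    p = flows / flows.sum()             # joint distribution p(i, j)
    pi = p.sum(axis=1, keepdims=True)   # row (receipt) marginal
    pj = p.sum(axis=0, keepdims=True)   # column (expenditure) marginal
    indep = pi @ pj                     # product of marginals
    mask = p > 0                        # skip empty cells (0 log 0 = 0)
    return float((p[mask] * np.log(p[mask] / indep[mask])).sum())

# Hypothetical flows: heavy diagonal, so accounts are strongly dependent.
sam = np.array([[4.0, 1.0],
                [1.0, 4.0]])
print(mutual_information(sam))  # strictly positive: accounts are dependent
```

Mutual information is zero exactly when the joint factors into its marginals (a rank-one flow matrix), which is what makes it usable as a criterion for how much structure an aggregation of the SAM would discard.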

Research paper thumbnail of An information-theoretic approach for forecasting interval-valued S&P500 daily returns

International Journal of Forecasting, 2020
