Chi Tim Ng - Academia.edu

Papers by Chi Tim Ng

Information criterion of seriously over-fitting change-point models

It is shown that a general class of information criteria is able to rule out seriously over-fitting change-point models in which the number of change points is comparable to the sample size. Equivalently, it is not necessary to impose a pre-specified upper bound on the number of change points when searching for the optimal solution, as in Bardet, Kengne, and Wintenberger (2012). For time series with a finite but unknown number of change points, the model with a consistently estimated number of change points tends to be preferred to any other model (even a seriously over-fitting one) under such a class of information criteria. The results hold under the broad class of time series models introduced in Bardet and Wintenberger (2009), which includes ARMA-GARCH as a special case. Since an exhaustive search of all possible change-point models for the optimal information criterion value is computationally infeasible, it is common to impose certain restrictions on the search range. Applications of the information criterion to the restricted search for the optimal model are also discussed.
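
To make the mechanism concrete, here is a minimal sketch of evaluating an information criterion over candidate change-point configurations for a Gaussian mean-shift model; the segment cost, the log(n) penalty weight, and the candidate sets are illustrative assumptions, not the specific criterion class studied in the paper.

```python
import numpy as np

def segment_cost(x):
    # Gaussian log-likelihood cost of one segment (up to constants):
    # n * log(RSS / n), with the segment mean as the fitted value.
    n = len(x)
    rss = np.sum((x - x.mean()) ** 2)
    return n * np.log(max(rss / n, 1e-12))

def info_criterion(x, cps, penalty_weight=None):
    # cps: sorted interior change points; penalty grows with their number.
    n = len(x)
    if penalty_weight is None:
        penalty_weight = np.log(n)  # BIC-like choice (an assumption here)
    bounds = [0, *cps, n]
    cost = sum(segment_cost(x[a:b]) for a, b in zip(bounds, bounds[1:]))
    return cost + penalty_weight * len(cps)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
for cps in ([], [100], list(range(10, 200, 10))):  # the last one over-fits badly
    print(len(cps), "change points -> IC =", round(info_criterion(x, cps), 1))
```

A criterion of the kind described above should rank the one-change-point model best and, crucially, should still reject the 19-change-point model even though no upper bound on the number of change points was imposed.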

Change-point estimators with true identification property

The change-point problem is reformulated as a penalized likelihood estimation problem. A new non-... more The change-point problem is reformulated as a penalized likelihood estimation problem. A new non-convex penalty function is introduced to allow consistent estimation of the number of change points, and their locations and sizes. Penalized likelihood methods based on LASSO and SCAD penalties may not satisfy such a property. The asymptotic properties for the local solutions are established and numerical studies are conducted to highlight their performance. An application to copy number variation is discussed.
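
In such reformulations, changes are typically encoded as jumps in a piecewise-constant mean and penalized through successive differences; a generic version of the objective (the paper's specific non-convex penalty is not reproduced here) is

```latex
\min_{\theta_1,\dots,\theta_n}\;
\sum_{t=1}^{n} \left(y_t - \theta_t\right)^2
\;+\; \sum_{t=2}^{n} p_{\lambda}\!\left(\lvert \theta_t - \theta_{t-1}\rvert\right),
```

so that each nonzero difference \(\theta_t - \theta_{t-1}\) corresponds to an estimated change point, and the shape of \(p_{\lambda}\) determines whether the number of change points is estimated consistently.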

Regularized LRT for large scale covariance matrices: One sample problem

Journal of Statistical Planning and Inference

The main theme of this paper is a modification of the likelihood ratio test (LRT) for testing a high-dimensional covariance matrix. Recently, the correct asymptotic distribution of the LRT for the large-dimensional case (where p/n approaches a constant γ ∈ (0, 1]) has been derived; the corresponding procedure is called the corrected LRT. Despite the correction, the corrected LRT is a function of sample eigenvalues that suffer from redundant variability due to high dimensionality and, consequently, it still lacks full power in differentiating hypotheses on the covariance matrix. In this paper, motivated by the success of the linearly shrunken covariance matrix estimator (the shrinkage estimator, for short) in various applications, we propose a regularized LRT that uses the shrinkage estimator in place of the sample covariance matrix in defining the LRT. We compute the asymptotic distribution of the regularized LRT when the true covariance matrix is the identity matrix and when it is a spiked covariance matrix. The obtained asymptotic results have applications in testing various hypotheses on the covariance matrix. Here, we apply them to testing the identity of the true covariance matrix, a long-standing problem in the literature, and show that the regularized LRT outperforms the corrected LRT, its non-regularized counterpart. In addition, we compare the power of the regularized LRT to that of recent non-likelihood-based procedures.
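
A common form of the linear shrinkage estimator, and the way it would enter an LRT-type statistic for H₀: Σ = I, is sketched below; the identity target and the mixing weight λ follow the usual conventions rather than details taken from the paper:

```latex
\widehat{\Sigma}_{\lambda} \;=\; \lambda\, I_p + (1-\lambda)\, S_n,
\qquad
\mathrm{LRT}_{\lambda} \;=\; \operatorname{tr}\widehat{\Sigma}_{\lambda}
\;-\; \log\det\widehat{\Sigma}_{\lambda} \;-\; p,
```

where \(S_n\) is the sample covariance matrix; taking \(\lambda = 0\) recovers the usual statistic based on \(S_n\) alone, while \(\lambda > 0\) pulls the sample eigenvalues toward one and damps their excess variability.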

Shrinkage estimation of mean-variance portfolio

This paper studies the optimal expected gain/loss of a portfolio at a given risk level when the initial investment is zero and the number of stocks p grows with the sample size n. A new estimator of the optimal expected gain/loss of such a portfolio is proposed after examining the behavior of the sample mean vector and the sample covariance matrix through conditional expectations. It is found that the effect of the sample mean vector is additive and the effect of the sample covariance matrix is multiplicative, both of which over-predict the optimal expected gain/loss. By virtue of a shrinkage method, a new …
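
For orientation, if the zero-investment constraint is set aside, maximizing the expected gain \(w^{\top}\mu\) at risk level \(w^{\top}\Sigma w \le \sigma^2\) has the familiar closed form below; the paper's setting adds the constraint \(w^{\top}\mathbf{1} = 0\) and replaces \((\mu, \Sigma)\) by sample estimates, which is exactly where the additive and multiplicative biases arise:

```latex
\max_{w:\; w^{\top}\Sigma w \,\le\, \sigma^{2}} w^{\top}\mu
\;=\; \sigma \sqrt{\mu^{\top}\Sigma^{-1}\mu},
```

attained at \(w \propto \Sigma^{-1}\mu\) by the Cauchy-Schwarz inequality.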

Going beyond oracle property: Selection consistency and uniqueness of local solution of the generalized linear model

Recently, the selection consistency of penalized least squares estimators has received a great deal of attention. For penalized likelihood estimation with certain non-convex penalties, a search space can be constructed within which there exists a unique local minimizer that exhibits selection consistency in high-dimensional generalized linear models under certain conditions. In particular, we prove that the SCAD penalty of Fan and Li (2001) and a new modified version of the unbounded penalty of Lee and Oh (2014) can be employed to achieve such a property. These results hold even for non-sparse cases where the number of relevant covariates increases with the sample size. Simulation studies are provided to compare the performance of the SCAD penalty and the newly proposed penalty.
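
For reference, the SCAD penalty of Fan and Li (2001) mentioned above is, for t ≥ 0 and a > 2 (with a = 3.7 the usual default),

```latex
p_{\lambda}(t) =
\begin{cases}
\lambda t, & 0 \le t \le \lambda,\\[2pt]
\dfrac{2a\lambda t - t^{2} - \lambda^{2}}{2(a-1)}, & \lambda < t \le a\lambda,\\[6pt]
\dfrac{(a+1)\lambda^{2}}{2}, & t > a\lambda,
\end{cases}
```

which is linear near zero (inducing sparsity) and constant beyond \(a\lambda\) (leaving large coefficients unpenalized, the key to the oracle-type behavior discussed above).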

Comparison of non-nested models under a general measure of distance

As a supplement to the summary statistics of information criteria, the closeness of two or more competing non-nested models can be compared under a procedure that is more general than the one proposed in Vuong (1989): measures of closeness other than the Kullback-Leibler divergence are allowed. Large deviation theory is used to obtain a bound on the power of rejecting the null hypothesis that the two models are equally close to the true model. Such a bound can be expressed in terms of a constant γ ∈ [0, 1) that can be computed empirically without any knowledge of the data-generating mechanism. Additionally, based on the constant γ, procedures constructed from different measures of distance can be compared on their ability to establish a difference between two models.
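
As a point of comparison, the classical Vuong (1989) test that the paper generalizes reduces to a t-statistic on pointwise log-likelihood differences; a minimal sketch, with two arbitrary non-nested candidate families chosen purely for illustration, is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=500)   # true model: t(5), unknown to the analyst

# Pointwise log-likelihoods of two non-nested fitted candidates.
ll_f = stats.norm.logpdf(x, loc=x.mean(), scale=x.std(ddof=1))  # Gaussian fit
ll_g = stats.laplace.logpdf(x, *stats.laplace.fit(x))           # Laplace fit

d = ll_f - ll_g
z = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)  # ~ N(0,1) under equal closeness
print("z =", round(z, 3), " two-sided p =", round(2 * stats.norm.sf(abs(z)), 4))
```

The paper's procedure plays the same game with distances other than Kullback-Leibler, and the constant γ bounds how fast the power of such a test can grow.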

Stochastic integral convergence: A white noise calculus approach

Using long-memory time series as the main illustration, this paper shows that white noise calculus can be used to handle subtle issues of stochastic integral convergence that often arise in the asymptotic theory of time series. A main difficulty is that the limiting stochastic integral cannot, in general, be defined path-wise. As a result, the continuous mapping theorem cannot be applied directly to deduce the convergence of the stochastic integrals \int_0^1 H_n(s)\,dZ_n(s) to \int_0^1 H(s)\,dZ(s) from the convergence of (H_n, Z_n) to (H, Z) in distribution. White noise calculus, in particular the technique of the S-transform, allows one to establish such asymptotic results directly.
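
The S-transform referred to here maps a Brownian functional to an ordinary function of test functions, so that convergence can be checked transform-wise rather than path-wise; for a square-integrable functional F of Brownian motion B it takes the form below, quoted from the standard white-noise literature rather than from the paper itself:

```latex
(SF)(\xi) \;=\; \mathbb{E}\!\left[ F \exp\!\left( \int_0^1 \xi(s)\, dB_s \;-\; \tfrac{1}{2}\int_0^1 \xi(s)^2\, ds \right) \right],
```

i.e. the expectation of F paired with the Wick exponential of the test function ξ.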

A new integral representation of the coverage probability of a random convex hull

In this paper, the probability that a given point is covered by a random convex hull generated by independent and identically distributed random points in a plane is studied. It is shown that this probability can be expressed in terms of an integral that can be approximated numerically by function evaluations over grid points in a two-dimensional space. The new integral representation allows the probability to be computed efficiently. The computational burden under the proposed integral representation is compared with those of methods in the existing literature. The proposed method is illustrated through numerical examples where the random points are drawn from (i) the uniform distribution over a square and (ii) a bivariate normal distribution over the two-dimensional Euclidean space. Applications of the proposed method in statistics are discussed.
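
A brute-force Monte Carlo check of this coverage probability, useful for validating any faster integral-based computation, can be written in a few lines with scipy; the sample sizes below are arbitrary:

```python
import numpy as np
from scipy.spatial import Delaunay

def coverage_prob(point, n, sampler, n_rep=2000, rng=None):
    # Estimate P(point lies inside the convex hull of n i.i.d. points).
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(n_rep):
        pts = sampler(rng, n)
        # find_simplex(point) >= 0  <=>  the point lies inside the hull.
        if Delaunay(pts).find_simplex(point) >= 0:
            hits += 1
    return hits / n_rep

uniform_sq = lambda rng, n: rng.random((n, 2))           # case (i)
binormal   = lambda rng, n: rng.standard_normal((n, 2))  # case (ii)

print(coverage_prob(np.array([0.5, 0.5]), n=10, sampler=uniform_sq))
print(coverage_prob(np.array([0.0, 0.0]), n=10, sampler=binormal))
```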

Likelihood inferences for high dimensional factor analysis of time series with applications in finance

This paper investigates likelihood inferences for high-dimensional factor analysis of time series data. A matrix decomposition technique is developed to obtain expressions for the likelihood function and its derivatives. With such expressions, the traditional delta method, which relies heavily on the score function and Hessian matrix, can be extended to high-dimensional cases. Asymptotic theory, including consistency and asymptotic normality, is established. Moreover, fast computational algorithms are developed for estimation. Applications to high-dimensional stock price data and portfolio analysis are discussed. The technical proofs of the asymptotic results and the computer codes are available online.
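
The setting can be summarized by the standard factor decomposition (the notation below is the generic one, not necessarily the paper's):

```latex
x_t = \Lambda f_t + \varepsilon_t,
\qquad
\operatorname{Var}(x_t) = \Lambda \Lambda^{\top} + \Psi,
```

where \(x_t\) is the p-dimensional observation at time t, \(f_t\) the k-dimensional latent factor with k ≪ p, \(\Lambda\) the p × k loading matrix, and \(\Psi\) the diagonal covariance matrix of the idiosyncratic errors; likelihood inference targets \((\Lambda, \Psi)\) as p grows with the sample size.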

Modified SCAD penalty for constrained variable selection problems

Instead of using sample information only to perform variable selection, in this article we also take prior information (linear constraints on the regression coefficients) into account. A penalized likelihood estimation method is adopted. Under constraints, however, it is not guaranteed that information criteria like AIC and BIC are minimized at an oracle solution when the LASSO or SCAD penalty is used. To overcome this difficulty, a modified SCAD penalty is proposed. Definitions of the information criteria GCV, AIC, and BIC for constrained variable selection problems are also proposed. We show that if the tuning parameter is appropriately chosen, the proposed estimators enjoy the oracle properties and satisfy the linear constraints. Additionally, they are also robust to outliers if the linear model with M-estimation is used.
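
Schematically, the estimation problem being described is the following, with \(p_{\lambda}\) the modified SCAD penalty and \((A, b)\) encoding the known linear constraints; the notation is illustrative:

```latex
\widehat{\beta} \;=\; \arg\min_{\beta:\; A\beta = b}\;
\left\{ -\ell(\beta) + \sum_{j=1}^{p} p_{\lambda}\!\left(\lvert\beta_j\rvert\right) \right\},
```

the point of the modification being that the constrained minimizer can still coincide with the oracle solution.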

A fast algorithm to sample the number of vertexes and the area of the random convex hull on the unit square

Computational Statistics

We propose an algorithm to sample the area of the smallest convex hull containing n sample points uniformly distributed over the unit square. To do so, we introduce a new coordinate system for the positions of the vertexes and rewrite the joint distribution of the number of vertexes and their locations in the new coordinate system. The proposed algorithm is much faster than the existing procedure and has a computational complexity of order O(T), where T is the number of vertexes. Using the proposed algorithm, we numerically investigate the asymptotic behavior of functionals of the random convex hull. In addition, we apply it to finding pairs of stocks on the New York Stock Exchange whose returns are dependent on each other.
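
The naive baseline that such an algorithm improves upon generates all n points and computes the hull each time, costing O(n log n) per draw rather than O(T); a sketch of that baseline is:

```python
import numpy as np
from scipy.spatial import ConvexHull

def sample_hull_stats(n, n_rep=1000, rng=None):
    # Naive sampler: draw n uniform points on the unit square, take the hull.
    rng = rng or np.random.default_rng()
    areas = np.empty(n_rep)
    n_vertexes = np.empty(n_rep, dtype=int)
    for i in range(n_rep):
        hull = ConvexHull(rng.random((n, 2)))
        areas[i] = hull.volume          # in 2-D, .volume is the enclosed area
        n_vertexes[i] = len(hull.vertices)
    return areas, n_vertexes

areas, T = sample_hull_stats(n=100)
print("mean area:", areas.mean(), " mean #vertexes:", T.mean())
```

Since T grows only logarithmically with n for uniform points on a square, an O(T) sampler leaves this baseline far behind for large n.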

Model Comparison with Composite Likelihood

Bernoulli

Comparisons are made of the extent to which composite likelihood information criteria agree with their full likelihood counterparts when making decisions among the fits of different models, and some properties of the penalty term of the composite likelihood information criterion are obtained. Asymptotic theory is given for the case when a simpler model is nested within a bigger model, and the bigger model approaches the simpler model under a sequence of local alternatives. Composite likelihood can choose the bigger model more or less frequently, depending on the direction of the local alternatives; in the former case, composite likelihood has more "power" to choose the bigger model. The behavior of the information criteria is illustrated via theory and simulation examples for the Gaussian linear mixed-effects model.
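
Recall that a (pairwise) composite likelihood replaces the joint density by a product of low-dimensional margins, and the associated information criterion penalizes through the sandwich matrices; in the standard Varin-Vidoni form, quoted here as background rather than from the paper,

```latex
c\ell(\theta; y) \;=\; \sum_{i<j} \log f(y_i, y_j;\theta),
\qquad
\mathrm{CLIC} \;=\; -2\, c\ell(\widehat{\theta}) + 2\operatorname{tr}\!\left( \widehat{J}\, \widehat{H}^{-1} \right),
```

where H is the expected negative Hessian of the composite log-likelihood and J the variance of the composite score; the penalty tr(JH⁻¹) reduces to the usual parameter count when the composite likelihood is the full likelihood, which is why the two criteria can disagree under misspecified dependence.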

Testing stochastic orders in tails of contingency tables

Journal of Applied Statistics, 2011

Testing for the difference in the strength of bivariate association in two independent contingency tables is an important issue that finds applications in various disciplines. Currently, many of the commonly used tests are based on single-index measures of association. More specifically, one obtains single-index measurements of association from two tables and compares them based on asymptotic theory. Although they are usually easy to understand and use, often much of the information contained in the data is lost with single-index measures. Accordingly, they fail to fully capture the association in the data. To remedy this shortcoming, we introduce a new summary statistic measuring various types of association in a contingency table. Based on this new summary statistic, we propose a likelihood ratio test comparing the strength of association in two independent contingency tables. The proposed test examines the stochastic order between summary statistics. We derive its asymptotic null distribution and demonstrate that the least favorable distributions are chi-bar-square distributions. We numerically compare the power of the proposed test to that of the tests based on single-index measures. Finally, we provide two examples illustrating the new summary statistics and the related tests.
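
For readers unfamiliar with the term, a chi-bar-square distribution is a finite mixture of chi-square distributions that typically arises as the null distribution of likelihood ratio tests under inequality (order) constraints:

```latex
\Pr\!\left( \bar{\chi}^2 \ge c \right) \;=\; \sum_{i=0}^{k} w_i \,\Pr\!\left( \chi_i^2 \ge c \right),
\qquad w_i \ge 0,\;\; \sum_{i=0}^{k} w_i = 1,
```

where \(\chi_0^2\) denotes a point mass at zero and the weights depend on the geometry of the constraint cone.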

Fractional Volatility Models and Malliavin Calculus

Stochastic integrals driven by fractional Brownian motion and arbitrage: a tale of two integrals

Quantitative Finance, 2009

Recent research suggests that fractional Brownian motion can be used to model the long-range dependence structure of the stock market. Fractional Brownian motion is not a semi-martingale, however, and arbitrage opportunities do exist. Hu and Øksendal [Infin. Dimens. Anal., Quant. Probab. Relat. Top., 2003, 6, 1–32] and Elliott and van der Hoek [Math. Finan., 2003, 13, 301–330] propose the use of the white noise calculus approach to circumvent this difficulty. Under such a setting, they argue that arbitrage does not exist in the fractional market. To unravel this discrepancy, we examine the definition of self-financing strategies used by these authors. By refining their definitions, a new notion of continuously rebalanced self-financing strategies, which is compatible with simple buy-and-hold strategies, is given. Under this definition, arbitrage opportunities do exist in fractional markets.
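
The long-range dependence mentioned here is governed by the Hurst parameter H of fractional Brownian motion, whose covariance function is

```latex
\mathbb{E}\!\left[ B_H(t)\, B_H(s) \right] \;=\; \tfrac{1}{2}\left( t^{2H} + s^{2H} - \lvert t - s\rvert^{2H} \right),
\qquad t, s \ge 0.
```

For H = 1/2 this reduces to standard Brownian motion; for H > 1/2 the increments are positively correlated with long memory, the regime relevant for the stock-market modeling above, and precisely the regime in which the semi-martingale property fails.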

Statistical inference for FIGARCH and related models

Statistical inference for FIGARCH and related models. Chi Tim Ng. Dissertation Abstracts International 69:0101, 2007. Abstract not available.

Normality test for multivariate conditional heteroskedastic dynamic regression models

Economics Letters, 2011

In this paper, we study the Jarque–Bera test for the normality of the innovations of multivariate GARCH models. It is shown that the test is distribution-free and that its limiting null distribution is a chi-square distribution. Simulation results confirm the validity of the test.
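
The univariate Jarque–Bera statistic, from which the multivariate version is built, combines the sample skewness S and sample kurtosis K of the residuals:

```latex
\mathrm{JB} \;=\; n\left( \frac{S^{2}}{6} + \frac{(K-3)^{2}}{24} \right)
\;\xrightarrow{d}\; \chi^{2}_{2} \quad \text{under normality}.
```

The point of the paper is that this limiting chi-square behavior survives when the statistic is computed from estimated multivariate GARCH innovations, so no parameter-estimation correction is needed.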

A Note on the Asymptotic Inference for FIGARCH (p, d, q) Models

Parameter estimation for a FIGARCH(p, d, q) model is studied in this paper. By constructing a compact parameter space Θ satisfying the non-negativity constraints for the FIGARCH model, it is shown that existing results can be applied to establish the strong consistency and asymptotic normality of the quasi-maximum likelihood (QML) estimator of the FIGARCH model.
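
For context, the FIGARCH(p, d, q) model of Baillie, Bollerslev, and Mikkelsen (1996) is usually written as

```latex
\phi(L)\,(1-L)^{d}\,\varepsilon_t^{2} \;=\; \omega + \left[1 - \beta(L)\right]\nu_t,
\qquad \nu_t = \varepsilon_t^{2} - \sigma_t^{2},
```

where 0 < d < 1 is the fractional differencing parameter and φ(L), β(L) are lag polynomials whose orders give the p and q in the title; the non-negativity constraints defining Θ are what keep the implied conditional variance σ_t² positive.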

Fractional constant elasticity of variance model

Lecture Notes-Monograph Series, 2006

This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black–Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.
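
As background, the classical CEV model generalizes Black–Scholes by letting the volatility depend on the price level; in one common notation (not necessarily the paper's),

```latex
dS_t \;=\; \mu S_t\, dt + \sigma S_t^{\gamma}\, dW_t,
```

where γ = 1 recovers geometric Brownian motion and γ < 1 produces the leverage-type volatility skew. The fractional version described above drives the dynamics by fractional Brownian motion instead of W, interpreted in the white-noise-calculus sense, to capture long-range dependence.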

Statistical inference for non-stationary GARCH (p, q) models

Electronic Journal of Statistics, 2009

This paper studies the quasi-maximum likelihood estimator (QMLE) of non-stationary GARCH(p, q) models. By expressing GARCH models in matrix form, the log-likelihood function is written in terms of a product of random matrices. Oseledec's multiplicative ergodic theorem is then used to establish the asymptotic properties of the log-likelihood function, thereby showing the weak consistency and asymptotic normality of the QMLE for non-stationary GARCH(p, q) models.
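
The random-coefficient structure underlying this argument is easiest to see in the GARCH(1,1) case, written here in the standard Bougerol–Picard style as an illustration of the general scheme:

```latex
\varepsilon_t = \sigma_t z_t,
\qquad
\sigma_{t+1}^{2} \;=\; \omega + \left( \alpha z_t^{2} + \beta \right) \sigma_t^{2},
```

so that iterating expresses \(\sigma_{t+1}^{2}\) through products \(\prod_s (\alpha z_s^{2} + \beta)\); the sign of the top Lyapunov exponent \(\mathbb{E}\log(\alpha z^{2} + \beta)\) separates the stationary (< 0) from the non-stationary (≥ 0) regime, and for general GARCH(p, q) the scalar coefficient becomes a random matrix \(A_t\), which is where Oseledec's multiplicative ergodic theorem enters.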
