Testing for Change Points in Time Series
This article considers the CUSUM-based (cumulative sum) test for a change point in a time series. In the case of testing for a mean shift, the traditional Kolmogorov-Smirnov test statistic involves a consistent long-run variance estimator, which is needed to make the limiting null distribution free of nuisance parameters. The commonly used lag-window type long-run variance estimator requires choosing a bandwidth parameter, and this selection is a difficult task in practice. A bandwidth that is a fixed function of the sample size (e.g., n^(1/3), where n is the sample size) is not adaptive to the magnitude of the dependence in the series, whereas a data-dependent bandwidth can lead to nonmonotonic power, as shown in previous studies. In this article, we propose a self-normalization (SN) based Kolmogorov-Smirnov test, where the formation of the self-normalizer takes the change point alternative into account. The resulting test statistic is asymptotically distribution free and its power is monotonic. Furthermore, we extend the SN-based test to test for a change in other parameters associated with a time series, such as the marginal median, the autocorrelation at lag one, and the spectrum at certain frequency bands. The use of the SN idea thus allows a unified treatment and offers a new perspective on the large literature on change point detection in the time series setting. Monte Carlo simulations are conducted to compare the finite sample performance of the new SN-based test with that of the traditional Kolmogorov-Smirnov test. Illustrations using real data examples are presented. © 2010 American Statistical Association.
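To make the self-normalization idea concrete, the following is a minimal sketch of a CUSUM statistic for a mean shift in which each candidate break's contrast is divided by a self-normalizer built from demeaned partial sums on the two segments, so that no bandwidth for a long-run variance estimator has to be chosen. This is an illustrative form only, not the article's exact statistic; the function name `sn_cusum_stat` and the precise normalizer shown here are assumptions for illustration.

```python
import numpy as np

def sn_cusum_stat(x):
    """Illustrative self-normalized CUSUM statistic for a mean shift.

    For each candidate change point k, the CUSUM contrast is divided by a
    self-normalizer formed from squared recursive demeaned partial sums,
    computed separately on the pre- and post-break segments (taking the
    change point alternative into account). No bandwidth is required.
    Sketch only; see the article for the exact formulation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    best = 0.0
    for k in range(1, n):
        # CUSUM contrast for a mean shift after observation k
        d = (x[:k].sum() - k * x.mean()) / np.sqrt(n)
        # Self-normalizer: demeaned partial sums within each segment,
        # forward on the first segment and backward on the second
        left = np.cumsum(x[:k] - x[:k].mean())
        right = np.cumsum(x[k:][::-1] - x[k:].mean())
        v = (np.sum(left**2) + np.sum(right**2)) / n**2
        if v > 0:
            best = max(best, d**2 / v)
    return best
```

Because the normalizer is itself random and proportional to the same scale as the numerator, the limiting null distribution of the supremum statistic is free of the long-run variance, which is the key point of the SN approach.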