Prediction of time series by statistical learning: general losses and fast rates
Author(s)
Date Issued
2013-12-31
Date Available
2017-02-22T12:31:50Z
Abstract
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates, pointing to lower bounds when they are available. In particular, these results provide a generalization of the results of [29] on sparse regression estimation to an autoregressive setting.
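For context, the Gibbs estimator referred to in the abstract is, in the standard PAC-Bayesian construction, built from a prior π on the predictor set Θ, the empirical prediction risk r_n, and an inverse-temperature parameter λ > 0. The sketch below uses this conventional notation, which may differ from the notation actually used in the paper:

\[
\hat{\rho}_{\lambda}(\mathrm{d}\theta) \;\propto\; \exp\bigl(-\lambda\, r_n(\theta)\bigr)\, \pi(\mathrm{d}\theta),
\qquad
r_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_\theta(\text{past of } X_i),\, X_i\bigr),
\]

where ℓ is the loss (absolute, quantile, or quadratic in the results summarized above) and the forecast is obtained by drawing θ from \(\hat{\rho}_{\lambda}\) or by averaging the predictors f_θ under \(\hat{\rho}_{\lambda}\).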
Type of Material
Journal Article
Publisher
De Gruyter
Journal
Dependence Modelling
Volume
1
Start Page
65
End Page
93
Copyright (Published Version)
2013 the Authors
Language
English
Status of Item
Peer reviewed
This item is made available under a Creative Commons License
File(s)
Name
insight_publication.pdf
Size
1.55 MB
Format
Adobe PDF
Checksum (MD5)
7af161689ac46c5e78e5a91193303268