Online Regression Competitive with Changing Predictors

Steven Busuttil and Yuri Kalnishkan

(2007)

Steven Busuttil and Yuri Kalnishkan (2007) Online Regression Competitive with Changing Predictors.
In: Algorithmic Learning Theory. Springer, Berlin / Heidelberg, pp. 181-195.

Our Full Text Deposits

Full text access: Open

Full Text - 217.69 KB

Abstract

This paper deals with the problem of making predictions in the online mode of learning where the dependence of the outcome y_t on the signal x_t can change with time. The Aggregating Algorithm (AA) is a technique that optimally merges experts from a pool, so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. We apply the AA to the case where the experts are all the linear predictors that can change with time. KAARCh is the kernel version of the resulting algorithm. In the kernel case, the experts are all the decision rules in some reproducing kernel Hilbert space that can change over time. We show that KAARCh suffers a cumulative square loss that is almost as good as that of any expert that does not change very rapidly.
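To make the online protocol concrete, below is a minimal sketch (in Python with NumPy) of online regression using a kernel ridge-regression-style prediction gamma = k^T (aI + K)^{-1} y, the fixed-predictor building block that KAARCh generalises. This is not the authors' algorithm: the Gaussian kernel, parameter values, function names, and the slowly drifting target are illustrative assumptions; KAARCh's actual prediction rule, which competes with changing decision rules, is derived in the paper.

    import numpy as np

    def gaussian_kernel(x, z, sigma=1.0):
        # Illustrative RKHS kernel; any positive-definite kernel could be used.
        return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

    def kernel_predict(X, y, x_new, a=1.0, kernel=gaussian_kernel):
        # Kernel ridge-regression-style prediction: gamma = k^T (aI + K)^{-1} y,
        # where K is the kernel matrix of past signals and k holds kernel values
        # between past signals and x_new. A sketch of the fixed-predictor case;
        # KAARCh modifies this to compete with predictors that change over time.
        T = len(X)
        if T == 0:
            return 0.0  # no history yet: predict zero
        K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
        k = np.array([kernel(xi, x_new) for xi in X])
        return float(k @ np.linalg.solve(a * np.eye(T) + K, np.array(y)))

    # Online protocol: observe x_t, output a prediction, then learn y_t.
    rng = np.random.default_rng(0)
    X_hist, y_hist, loss = [], [], 0.0
    for t in range(50):
        x_t = rng.normal(size=2)
        w_t = np.array([1.0 + 0.01 * t, -0.5])  # slowly changing dependence
        y_t = float(w_t @ x_t)
        gamma_t = kernel_predict(X_hist, y_hist, x_t)
        loss += (gamma_t - y_t) ** 2  # cumulative square loss
        X_hist.append(x_t)
        y_hist.append(y_t)
    print(f"cumulative square loss after 50 rounds: {loss:.3f}")

Because the target coefficients w_t drift slowly, such a learner can track the dependence; KAARCh's bound quantifies this, charging the competitor for how much it changes over time.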

Information about this Version

This is a Draft version
This version's date is: 11/10/2007
This item is peer reviewed

Link to this Version

https://repository.royalholloway.ac.uk/items/f6085cc7-b0de-48cc-e424-e3af7e9e2e95/1/

Deposited on 30-Mar-2010 in Royal Holloway Research Online. Last modified on 06-Jan-2011.

Notes

(C) 2007 Springer-Verlag, whose permission to mount this version for private study and research is acknowledged. The repository version is the author's final draft.

 
