Generalised Entropy and Asymptotic Complexities of Languages

Yuri Kalnishkan, Vladimir Vovk and Michael V. Vyugin

(2007)

Yuri Kalnishkan, Vladimir Vovk and Michael V. Vyugin (2007) Generalised Entropy and Asymptotic Complexities of Languages
In: Learning Theory. Springer, Berlin / Heidelberg.

Our Full Text Deposits

Full text access: Open

Full Text - 194.53 KB

Abstract

In this paper the concept of asymptotic complexity of languages is introduced. This concept formalises the notion of learnability in a particular environment and generalises Lutz and Fortnow's concepts of predictability and dimension. Asymptotic complexities in different prediction environments are then compared by describing the set of all pairs of asymptotic complexities with respect to different environments. A geometric characterisation in terms of generalised entropies is obtained, and the results of Lutz and Fortnow are thereby generalised.
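As an aid to readers of this record, the following is a minimal sketch of the generalised entropy referred to in the abstract, assuming the standard game-theoretic definition used in references 4 and 5 below; the symbols \(\lambda\), \(\Gamma\), \(\Omega\) and \(p\) are introduced here for illustration only and are not taken from the deposited text.

% Generalised entropy of a prediction environment with loss function lambda:
% the minimal expected loss achievable by a single prediction.
% The notation (lambda, Gamma, Omega, p) is assumed for illustration.
\[
  H(p) \;=\; \inf_{\gamma \in \Gamma} \, \mathbb{E}_{\omega \sim p}\, \lambda(\omega, \gamma)
       \;=\; \inf_{\gamma \in \Gamma} \bigl( p\,\lambda(1,\gamma) + (1-p)\,\lambda(0,\gamma) \bigr),
\]
where \(\lambda : \Omega \times \Gamma \to [0, +\infty]\) is the loss function of the prediction environment, \(\Gamma\) is the prediction space, and, for binary outcomes \(\Omega = \{0,1\}\), \(p \in [0,1]\) is the probability of outcome 1.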

Information about this Version

This is a Submitted version
This version's date is: 12/06/2007
This item is peer reviewed

Link to this Version

https://repository.royalholloway.ac.uk/items/6cbd3167-5250-a7b0-7c1d-ea65ebfe31ff/1/

Item Type: Book Item
Title: Generalised Entropy and Asymptotic Complexities of Languages
Authors: Kalnishkan, Yuri; Vovk, Vladimir; Vyugin, Michael
Departments: Faculty of Science\Computer Science

Identifiers

DOI: 10.1007/978-3-540-72927-3_22
DOI: 10.1007/978-3-540-72927-3

Deposited by () on 30-Mar-2010 in Royal Holloway Research Online. Last modified on 04-Jan-2011.

Notes

(C) 2007 Springer-Verlag, whose permission to mount this version for private study and research is acknowledged. The repository version is the author's final draft.

References

1. Cesa-Bianchi, N., Lugosi, G.: Prediction, Learning, and Games. Cambridge University Press (2006)
2. Vovk, V., Watkins, C.J.H.C.: Universal portfolio selection. In: Proceedings of the 11th Annual Conference on Computational Learning Theory, ACM Press (1998) 12–23
3. Fortnow, L., Lutz, J.H.: Prediction and dimension. Journal of Computer and System Sciences 70(4) (2005) 570–589
4. Grünwald, P.D., Dawid, A.P.: Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics 32(4) (2004) 1367–1433
5. Kalnishkan, Y., Vovk, V., Vyugin, M.V.: Loss functions, complexities, and the Legendre transformation. Theoretical Computer Science 313(2) (2004) 195–207
6. Kalnishkan, Y., Vyugin, M.V.: The weak aggregating algorithm and weak mixability. In: Learning Theory, Proceedings of the 18th Annual Conference (COLT 2005). Volume 3559 of Lecture Notes in Computer Science, Springer (2005) 188–203
7. Eggleston, H.G.: Convexity. Cambridge University Press, Cambridge (1958)
8. Hoeffding, W.: Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 58(301) (March 1963) 13–30
