# Online Learning and Prediction With Eventually Almost Surely Guarantee

Date: 2019-06-27

Time: 2:00pm - 3:00pm

Location: Holmes Hall 485

Speaker: Changlong Wu, PhD candidate, advised by Dr. Narayana Prasad Santhanam

Abstract:

We introduce a new paradigm for sequentially predicting a property of a probabilistic model over an infinite horizon, where the model is unknown but belongs to a known model class. The prediction incurs a predefined {0,1}-loss at each time step, which depends both on the sample observed at the next time step and on the model itself. We say a class is eventually almost surely (e.a.s.)-predictable if there exists a universal prediction rule whose cumulative loss is finite with probability 1 for every model in the class. In this proposal we introduce a general framework for the e.a.s.-predictability setup. We study several concrete examples within this framework and prove necessary and sufficient conditions for the existence of consistent prediction rules. We then extend the framework in two directions: 1. instead of merely guaranteeing finite loss, under what conditions does there exist a stopping rule that tells us when we have stopped making errors? 2. instead of allowing the predictor to be arbitrary, what happens when the predictor's power is bounded (e.g., by bounded computational resources)? The second extension leads to a surprising connection with computational complexity theory.
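As a minimal illustrative sketch of the setup (a hypothetical example, not taken from the talk): suppose the model class is {Bernoulli(0.3), Bernoulli(0.7)} and the property to predict is whether the unknown bias exceeds 1/2, with a {0,1}-loss of 1 whenever the guess disagrees with the true model. By the strong law of large numbers, a predictor that guesses from the empirical mean is eventually always correct, so its cumulative loss is finite with probability 1, making this class e.a.s.-predictable.

```python
import random

# Toy simulation (assumed example): the predictor guesses, before each
# observation, whether the unknown Bernoulli bias p exceeds 1/2, based on the
# empirical mean of the samples seen so far.  It pays a {0,1}-loss whenever the
# guess disagrees with the true model.  The cumulative loss stays small and
# finite even as the horizon grows.

def cumulative_loss(true_p: float = 0.7, horizon: int = 5000, seed: int = 0) -> int:
    rng = random.Random(seed)
    ones = 0                                  # number of 1s observed so far
    loss = 0
    truth = true_p > 0.5                      # the property of the true model
    for t in range(1, horizon + 1):
        guess = ones / max(t - 1, 1) > 0.5    # predict from past samples only
        loss += guess != truth                # predefined {0,1}-loss
        ones += rng.random() < true_p         # observe the next sample
    return loss

if __name__ == "__main__":
    print(cumulative_loss())                  # a handful of early mistakes, then none
```

The simplification here is that the loss depends only on the guess and the true model; the general framework in the talk allows the loss to depend on the next observed sample as well.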
