Topic: Demystifying the efficiency of reinforcement learning: A statistical perspective
Time: 2:30 pm - 3:30 pm
Venue: Lady Shaw Building C3
Speaker: Dr. Gen Li
Reinforcement learning (RL) is frequently modeled as learning and decision making in a Markov decision process (MDP). A core objective of RL is to search for a policy, based on a collection of noisy data samples, that approximately maximizes expected cumulative rewards in an MDP, without direct access to a precise description of the underlying model. In contemporary applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, which exacerbates the challenge of collecting enough samples to learn the model. In this talk, we present three recent works that show how to break the sample size barrier in reinforcement learning. The first part demonstrates that a perturbed model-based RL approach is minimax optimal under a generative model, without suffering from the sample size barrier present in all past work. The second work shows that model-based offline reinforcement learning is minimax optimal without burn-in cost. Finally, we develop a minimax-optimal algorithm for multi-agent Markov games that breaks the curse of multi-agents and the long-horizon barrier at the same time. These results might shed light on the efficacy of these algorithms in more complicated scenarios.
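For readers unfamiliar with the formal setup, the objective sketched above can be written as follows. This is the standard discounted infinite-horizon formulation (one common choice; the works discussed also treat finite-horizon settings), where γ ∈ [0, 1) is the discount factor and r is the reward function:

```latex
% Standard MDP objective: find a policy \pi maximizing the expected
% cumulative discounted reward from each state s.
\[
  \pi^{\star} \in \arg\max_{\pi} V^{\pi}(s),
  \qquad
  V^{\pi}(s) := \mathbb{E}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)
    \;\middle|\; s_0 = s,\; a_t \sim \pi(\cdot \mid s_t)\right].
\]
```

The sample size barrier concerns how many transition samples are needed to estimate such a policy to a target accuracy when the state and action spaces are large.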
References: arXiv:2005.12900, arXiv:2204.05275, arXiv:2208.10458