Event

Informed Posterior Sampling Based Reinforcement Learning Algorithms

Friday, April 26, 2024, 10:30 to 11:30
McConnell Engineering Building Zames Seminar Room, MC 437, 3480 rue University, Montreal, QC, H3A 0E9, CA

Informal Systems Seminar (ISS), Centre for Intelligent Machines (CIM) and Groupe d'Etudes et de Recherche en Analyse des Decisions (GERAD)

Speaker: Dengwang Tang


**Note that this is a hybrid event.**
**This seminar will be projected at McConnell 437 at McGill University.**


Meeting ID: 845 1388 1004
Passcode: VISS

Abstract: In many traditional reinforcement learning (RL) settings, an agent learns to
control the system without incorporating any prior knowledge. However, such a
paradigm can be impractical since learning can be slow. In many engineering
applications, offline datasets are often available. To leverage the information provided
by the offline datasets with the power of online fine-tuning, we proposed the informed
posterior sampling based reinforcement learning (iPSRL) for both episodic and
continuing MDP learning problems. In this algorithm, the learning agent forms an
informed prior with the offline data along with the knowledge about the offline policy that
generated the data. This informed prior is then used to initiate the posterior sampling
procedure. Through a novel prior-dependent regret analysis of the posterior sampling
procedure, we showed that when the offline data is informative enough, the iPSRL
algorithm can significantly reduce the learning regret compared to the baselines (that do
not use offline data in the same way). Based on iPSRL, we then proposed the more
practical iRLSVI algorithm. Empirical results showed that iRLSVI can significantly
reduce regret compared to the baselines.
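The informed-prior idea described in the abstract can be illustrated with a toy posterior sampling loop. The sketch below is an illustrative simplification, not the speaker's actual iPSRL algorithm: it assumes a tabular finite-horizon MDP, Dirichlet posteriors over transitions, a simple Gaussian posterior over rewards, and offline data supplied as pseudo-counts that seed (inform) the prior. All function and variable names are this sketch's own.

```python
import numpy as np

def value_iteration(P, R, horizon):
    """Finite-horizon value iteration; returns a greedy policy for each step.

    P: (S, A, S) transition probabilities; R: (S, A) rewards.
    """
    S, A = R.shape
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for t in reversed(range(horizon)):
        Q = R + P @ V              # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def ipsrl(env_step, env_reset, S, A, horizon, episodes, offline_counts, rng):
    """Posterior sampling RL with a prior informed by offline pseudo-counts.

    offline_counts = (trans_counts[s, a, s'], reward_sums[s, a], visits[s, a]),
    e.g. statistics gathered from an offline dataset. With all-uniform counts
    this degenerates to ordinary (uninformed) PSRL.
    """
    trans, rew_sum, visits = (c.astype(float).copy() for c in offline_counts)
    total_reward = 0.0
    for _ in range(episodes):
        # Sample one MDP from the current posterior.
        P = np.array([[rng.dirichlet(trans[s, a] + 1e-6) for a in range(A)]
                      for s in range(S)])
        # Illustrative Gaussian reward posterior (unit-variance likelihood,
        # zero-mean prior) -- a simplification for the sketch.
        R = rng.normal(rew_sum / (visits + 1.0), 1.0 / np.sqrt(visits + 1.0))
        # Act greedily in the sampled MDP, then update posterior counts.
        policy = value_iteration(P, R, horizon)
        s = env_reset()
        for t in range(horizon):
            a = policy[t, s]
            s2, r = env_step(s, a)
            trans[s, a, s2] += 1.0
            rew_sum[s, a] += r
            visits[s, a] += 1.0
            total_reward += r
            s = s2
    return total_reward
```

The informed prior enters only through `offline_counts`: the more informative the offline data, the more concentrated the initial posterior, so fewer episodes are spent exploring, which is the intuition behind the regret reduction claimed in the abstract.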

Bio: Dengwang Tang is currently a postdoctoral researcher at University of Southern
California. He obtained his B.S.E. in Computer Engineering from the University of Michigan,
Ann Arbor in 2016. He earned his Ph.D. in Electrical and Computer Engineering (2021),
M.S. in Mathematics (2021), and M.S. in Electrical and Computer Engineering (2018) all
from the University of Michigan, Ann Arbor. Prior to joining USC, he was a postdoctoral
researcher at the University of California, Berkeley. His research interests include control
and learning algorithms in stochastic dynamic systems, multi-armed bandits, multi-agent
systems, queuing theory, and game theory.
