
Transfer Learning using low-dimensional Representations in Reinforcement Learning

Time: Tue 2020-09-22 10.00

Location: Room 304, Teknikringen 14, Stockholm (defense held in English)

Subject area: Computer Science

Doctoral student: Isac Arnekvist, Robotics, Perception and Learning (RPL)

Opponent: Docent Christos Dimitrakakis, Chalmers University of Technology

Supervisor: Professor Danica Kragic, Numerical Analysis and Computer Science (NADA), Robotics, Perception and Learning (RPL), Centre for Autonomous Systems (CAS); Johannes Andreas Stork, Robotics, Perception and Learning (RPL)



Behaviors in Reinforcement Learning (RL) are often learned tabula rasa, requiring many observations of and interactions with the environment. Outside of a simulator, in the real world, this often becomes infeasible due to the large number of interactions needed. This has motivated Transfer Learning for Reinforcement Learning, where learning is accelerated by reusing experience from previous, related tasks. In this thesis, I explore how to transfer from a simple single-object pushing policy to a wide array of non-prehensile rearrangement problems. I then explain how task differences can be modeled using a low-dimensional latent-variable representation, making adaptation to novel tasks efficient. Lastly, the dependence on accurate function approximation is sometimes problematic, especially in RL, where the statistics of the target variables are not known a priori. I present observations, along with explanations, showing that small target variances combined with momentum optimization of ReLU-activated neural network parameters lead to dying ReLUs.
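The idea of adapting only a low-dimensional task latent can be illustrated with a toy sketch. This is not the thesis's method, only a minimal hypothetical setup: a "pretrained" linear policy a = A s + z is kept frozen, and a novel task (here, an assumed target action offset g) is fit by gradient descent on the two-dimensional latent z alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained policy: a = A @ s + z.
# A stands in for a network trained on related tasks; only the
# low-dimensional task latent z is adapted for the novel task.
A = rng.normal(size=(2, 4)) * 0.1
states = rng.normal(size=(64, 4))   # states encountered in the new task
g = np.array([0.5, -0.3])           # novel task: desired action offset (assumed)

def task_loss(z):
    actions = states @ A.T + z
    return np.mean(np.sum((actions - g) ** 2, axis=1))

z = np.zeros(2)
lr = 0.1
for _ in range(200):
    actions = states @ A.T + z
    grad = 2.0 * np.mean(actions - g, axis=0)  # dL/dz
    z -= lr * grad

# Adapting two parameters suffices: z converges to g - mean(A s),
# recovering the new task without touching the policy weights A.
print(task_loss(z) < task_loss(np.zeros(2)))
```

The point of the sketch is the parameter count: adaptation touches 2 numbers instead of the full policy, which is what makes few-interaction adaptation plausible.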