Synergies between Policy Learning and Sampling-based Planning
Time: Tue 2024-01-30 15.00
Location: F3 (Flodis), Lindstedtsvägen 26 & 28
Video link: https://kth-se.zoom.us/j/63888939859
Language: English
Subject area: Computer Science
Doctoral student: Robert Gieselmann, Robotics, Perception and Learning (RPL)
Opponent: Associate Professor Edward Johns, Imperial College London, UK
Supervisor: Associate Professor Florian T. Pokorny, Robotics, Perception and Learning (RPL)
QC 20240108
Abstract
Recent advances in artificial intelligence and machine learning have significantly impacted the field of robotics and led to the interdisciplinary study of robot learning. These developments have the potential to revolutionize the automation of tasks across industries by reducing the reliance on human workers. However, fully autonomous, learning-based robotic systems remain largely confined to controlled environments. Ideally, we seek methods that enable the autonomous acquisition of robotic skills in any temporally extended setting, even with potentially complex sensor observations. Classical sampling-based planning algorithms used in robot motion planning compute feasible paths between robot states over long time horizons, even in geometrically complex environments. This thesis investigates how learning-based methods can be combined with these classical approaches to solve challenging problems in robot manipulation, e.g., the manipulation of deformable objects. The core idea is to leverage the best of both worlds: planning provides long-horizon control, while learning yields useful environment models from potentially high-dimensional and complex observation data. The presented frameworks build on recent machine learning techniques such as contrastive representation learning, generative modeling, and reinforcement learning. Finally, we outline the potential, challenges, and limitations of this type of approach and highlight future directions.
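To illustrate the classical side of this combination, the following is a minimal sketch of a sampling-based planner in the spirit of RRT (Rapidly-exploring Random Trees), one of the standard algorithms in this family. It is not taken from the thesis; the 2D point robot, the `is_free` collision predicate, and all parameter values are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal 2D RRT sketch: grow a tree of collision-free states from
    `start` until some node lands within `goal_tol` of `goal`.
    All parameters and the problem setup are illustrative assumptions."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}  # index of each node's parent in the tree
    for _ in range(max_iters):
        # Sample a random state, with a small bias toward the goal.
        if rng.random() < 0.1:
            sample = goal
        else:
            sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Find the nearest tree node and steer one step toward the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(step / d, 1.0)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not is_free(new):
            continue  # reject states in collision
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) <= goal_tol:
            # Reconstruct the path by walking parent links back to the root.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no path found within the iteration budget

# Example (hypothetical workspace): plan around a disc-shaped obstacle.
def free(p):
    return math.dist(p, (5.0, 5.0)) > 1.5

path = rrt((1.0, 1.0), (9.0, 9.0), free, bounds=((0.0, 10.0), (0.0, 10.0)))
```

The thesis builds on exactly this kind of planner, but replaces the hand-specified state space and collision predicate with representations and models obtained by learning from high-dimensional observations.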