
Transfer-Aware Kernels, Priors and Latent Spaces from Simulation to Real Robots

Time: Fri 2020-11-20, 14:00

Location: F3, Lindstedtsvägen 26, Stockholm

Language: English

Subject area: Computer Science

Doctoral student: Rika Antonova, Robotics, Perception and Learning (RPL), Centre for Autonomous Systems (CAS)

Opponent: Associate Professor Jens Kober, Delft University of Technology (TU Delft)

Supervisor: Danica Kragic, Robotics, Perception and Learning (RPL), Numerical Analysis and Computer Science (NADA), Centre for Autonomous Systems (CAS)


Abstract

Consider challenging sim-to-real cases that lack high-fidelity simulators and allow only 10-20 hardware trials. This work shows that even imprecise simulation can be beneficial if it is used to build transfer-aware representations.

First, the thesis introduces an informed kernel that embeds the space of simulated trajectories into a lower-dimensional space of latent paths. It uses a sequential variational autoencoder (sVAE) to handle large-scale training from simulated data, and its modular design enables quick adaptation when the kernel is used for Bayesian optimization (BO) on hardware. The thesis and the included publications demonstrate that this approach works across different areas of robotics: locomotion and manipulation. Furthermore, the thesis introduces a variant of BO that ensures recovery from negative transfer when the kernel is corrupted; an application to task-oriented grasping validates its performance on hardware.
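To make the kernel construction concrete, the following minimal Python sketch illustrates the core idea: the BO kernel compares two controller parameter vectors by the latent embeddings of the trajectories they produce in simulation, rather than by raw parameter distance. The names `simulate`, `encode`, and the squared-exponential form are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def rbf(u, v, lengthscale=1.0):
    """Squared-exponential kernel on flattened latent vectors."""
    d2 = np.sum((np.asarray(u) - np.asarray(v)) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def informed_kernel(x1, x2, simulate, encode, lengthscale=1.0):
    """Compare controller parameters x1, x2 by rolling each out in
    simulation, embedding the resulting trajectory with a pre-trained
    sVAE encoder, and measuring similarity between the latent paths.
    `simulate` and `encode` are assumed callables, used only to sketch
    the structure of a transfer-aware kernel."""
    z1 = encode(simulate(x1))  # latent path induced by controller x1
    z2 = encode(simulate(x2))  # latent path induced by controller x2
    return rbf(z1, z2, lengthscale)
```

Because the sVAE is trained offline on large-scale simulated data, only the cheap encode-and-compare step runs inside the hardware BO loop, which is what keeps adaptation within a 10-20 trial budget.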

For the case of parametric learning, simulators can serve as priors or regularizers. This work describes how to use simulation to regularize a VAE's decoder, binding the VAE's latent space to the posterior over simulator parameters. With that, training on a small number of real trajectories can quickly shift the posterior to reflect reality. The included publication demonstrates that this approach can also help reinforcement learning (RL) quickly overcome the sim-to-real gap on a hardware manipulation task.
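As a rough illustration of this regularization, consider a VAE training loss with an extra term tying the decoder to the simulator: decoding a latent code z should agree with the trajectory the simulator produces when z is interpreted as its parameters. This is a hedged sketch; the loss weights, the differentiable `simulate` call, and all names are assumptions, not the published training objective.

```python
import torch

def simulator_regularized_vae_loss(x_real, encoder, decoder, simulate,
                                   beta=1.0, gamma=1.0):
    """VAE loss with a decoder regularizer binding the latent space to
    simulator parameters: decode(z) should match the trajectory obtained
    by running the simulator with z interpreted as its parameters."""
    mu, logvar = encoder(x_real)                              # q(z | x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterize
    recon = decoder(z)
    rec = torch.mean((recon - x_real) ** 2)                   # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())  # KL to N(0, I)
    # Regularizer: decoded trajectory should agree with simulating at params z.
    # Assumes `simulate` is differentiable, or its output is used as a target.
    sim_reg = torch.mean((recon - simulate(z)) ** 2)
    return rec + beta * kl + gamma * sim_reg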

A longer-term vision is to shape latent spaces without mandating a particular simulation scenario. A first step is to learn general relations that hold on sequences of states from a set of related domains. This work introduces a unifying mathematical formulation for learning independent analytic relations. Relations are learned from source domains, then used to help structure the latent space when learning on target domains. This formulation enables a more general, flexible, and principled way of shaping the latent space: it formalizes the notion of learning independent relations without imposing restrictive simplifying assumptions or requiring domain-specific information. This work presents mathematical properties, concrete algorithms, and experimental validation of successful learning and transfer of latent relations.
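One way to picture this formulation: train several relation networks g_k to output approximately zero on state-sequence data (so each relation "holds"), while penalizing dependence between relations. In the sketch below, the gradient-orthogonality penalty is one plausible stand-in for independence, offered as an assumption rather than the thesis's actual criterion; safeguards against the trivial solution g_k ≡ 0 (e.g., gradient-norm constraints) are omitted for brevity.

```python
import torch

def relation_learning_loss(x, relation_nets, lam=1.0):
    """x: (batch, dim) features of state sequences from source domains.
    Each g in relation_nets maps (batch, dim) -> (batch, 1).
    The loss drives every g(x) toward zero (the relation holds on data)
    and penalizes pairwise alignment of input gradients, a simple proxy
    for independence between the learned relations."""
    x = x.detach().requires_grad_(True)
    outs = [g(x) for g in relation_nets]
    hold = sum(torch.mean(o ** 2) for o in outs)      # each relation: g(x) ~ 0
    grads = [torch.autograd.grad(o.sum(), x, create_graph=True)[0]
             for o in outs]                           # gradient of each g wrt x
    indep = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            dot = torch.sum(grads[i] * grads[j], dim=1)  # per-sample alignment
            indep = indep + torch.mean(dot ** 2)         # penalize aligned pairs
    return hold + lam * indep
```

Once learned on source domains, such relations can be reused as soft constraints on a target domain's latent space, which is the transfer step the abstract describes.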

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284138