
An honourable mention at EUROGRAPHICS for Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows

Published Jun 16, 2020

The WASP-funded research "Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows" from EECS TMH was awarded an honourable mention at the high-ranking computer graphics conference EUROGRAPHICS a few weeks ago. Only four of the 141 conference submissions received an award or an honourable mention.

Congratulations, Simon and Gustav!
How does it feel to receive this honourable mention, and what does it mean to you?

Simon: Thanks! It is always nice to get the extra recognition and attention, especially when it comes from a great conference such as Eurographics. Mainly, I think this means we are on a fruitful track with our probabilistic motion models and that we have a good team working on it.

What excites you the most about your area of research?

Simon: I really enjoy working with computer graphics and animation, and I'm fascinated by how emotionally attached we humans can become to non-living objects, provided they behave in a life-like way. Recent developments in machine learning offer exciting new possibilities for synthesising animation automatically.

Gustav: I love the meeting between the abstract and the concrete. I enjoy mathematics, but when working on generative applications such as animation or speech synthesis, I get to listen to the mathematics and see it move. And that's just fun!

In what way is your research important for society and what use or problem solving do you see for the future?

Simon & Gustav: Our research has applications in fields such as animation, virtual agents and social robots. We strive to make artificial humanoids more engaging and relatable, regardless of whether they are used for entertainment, education or assistance. Until now, the best results in these applications have come from highly customised solutions that can be used for only one thing, for example to animate walking but not talking. We believe our work shows that this is about to change. The new methods we have presented give good results no matter what the application is. Being able to use the same tool in all these different situations will really move these areas forward.

Authors: Simon Alexanderson, Gustav Henter, Taras Kucherenko, and Jonas Beskow.

Paper and associated materials (open access): diglib.eg.org/handle/10.1111/cgf13946

Contact: