
Artur Podobas new assistant professor in high-performance computing

NEW DIGITALISATION RESEARCHERS

Published Feb 20, 2023

In 2022, Artur Podobas was appointed assistant professor in high-performance computing at the Division of Software and Computer Systems, Department of Computer Science, EECS School. He specialises in hardware accelerators.

Artur Podobas, what are your research area, research interests, research methods, and application area?

Artur Podobas. Photo: KTH

I work on making future computer systems faster and greener (more power-efficient). More specifically, my work lies in the intersection between hardware, computer architecture, and high-performance computing, where I research how to create custom hardware accelerators for use in future high-performance computers. Such accelerators help to make important scientific calculations faster and significantly greener than traditional computer systems.

I work a lot with reconfigurable systems such as Field-Programmable Gate Arrays (FPGAs) and Coarse-Grained Reconfigurable Architectures (CGRAs), as well as with creating specialised brain-like architectures (called neuromorphic systems). I also work on improving the programmability aspects of said architectures by using (and designing) different High-Level Synthesis (HLS) tools, allowing non-experts to use these exciting emerging architectures with ease.
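To give a concrete, purely illustrative flavour of what raising the level of abstraction means here, the sketch below shows a trivial vector-scaling kernel written in C for a high-level synthesis flow, with a pipelining directive in the style of AMD/Xilinx Vitis HLS; the kernel and the pragma are assumptions for illustration, not code from any specific project described here.

/* Illustrative sketch only: a trivial vector-scaling kernel written in C
 * for a high-level synthesis flow (pragma syntax in the style of
 * AMD/Xilinx Vitis HLS). The HLS tool turns the loop into a pipelined
 * hardware datapath, so the programmer never writes RTL by hand. */
#define N 1024

void scale(const float in[N], float out[N], float factor) {
    for (int i = 0; i < N; i++) {
        #pragma HLS PIPELINE II=1   /* start a new iteration every clock cycle */
        out[i] = in[i] * factor;
    }
}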

What do you think are the large research challenges in your research area, and why?

Last year, the first Exascale machine was deployed (Frontier at Oak Ridge National Laboratory), capable of executing 10¹⁸ floating-point operations per second (FLOP/s). Now, we set our sights on the next computing frontier: Zettascale (10²¹ FLOP/s). How we reach Zettascale is still an entirely open question, and a notoriously hard one, since Moore's law (transistor scaling) is ending. I believe that by specialising the architecture to important applications using reconfigurable and neuromorphic systems, we can make better use of the silicon and thus become faster and, perhaps more importantly, far more power-efficient. These will be essential steps towards reaching Zettascale.

Granted, there are several large research challenges that need to be tackled at different levels of abstraction. For example, what should future (Zettascale-capable) reconfigurable and neuromorphic architectures look like, and how should they encapsulate emerging trends such as 3D die stacking? Another example is programmability: using reconfigurable systems is very difficult today, and there is a need to raise the level of abstraction in order to make these architectures more widespread and cater to a broader scientific audience. These are all questions that I work on in my research.

If you are looking for some research collaborator(s), what competence are you looking for?

I have been blessed with several fantastic collaborations in neuroscience/brain-like computing, reconfigurable computing, and high-performance computing, both nationally and internationally, so I would like to give a shout-out to them (you know who you are).

However, as a general rule, I tend to look for collaborators who have a particular application or use case that they need to accelerate. In short, if you have some exciting application that you think needs higher performance or could be made greener, send me an e-mail!

Can you tell us more about one of your research results and why you picked it?

There are many research results I am proud of, ranging from accelerating novel brain-inspired methods such as BCPNN (in a framework called StreamBrain; a collaboration with colleagues at CST, KTH), to large-scale (multi-FPGA) accelerators, to exciting surveys and investigations of emerging architectures to understand future opportunities (a collaboration with colleagues at RIKEN, Japan). For example, just this year, we submitted a paper on a custom Quantum Circuit (QC) accelerator that is several times faster than state-of-the-art simulators running on classical processors, and we are well underway with similar results in a (VR-sponsored) project on neuromorphic accelerators.

However, I would probably select a study we conducted when I was a postdoc at the Matsuoka Laboratory at TiTech, Japan. There, Dr. Hamid Reza Zohouri, Prof. Satoshi Matsuoka, and I created one of the highest-performing general stencil accelerators in the world. This was quite a feat in itself, since stencil methods are often used in scientific computations, and our accelerator was often many times faster than traditional systems (e.g., server-class processors or Xeon Phis) and up to an order of magnitude more power-efficient.

However, what made me decide to select this study is that it shows the tremendous depths that architecture developers need to dive into to create a high-performance accelerator for even the simplest of algorithms: it includes the design, implementation, performance modeling, and empirical evaluation of said accelerator. I would highly encourage anyone who is interested in reconfigurable systems to read that paper, whose results stand strong to this day! The papers are available online for the curious reader.
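For readers less familiar with stencil computations, the following minimal sketch in plain C shows the kind of kernel such accelerators target: a 5-point Jacobi-style stencil, where each output point is an average of its immediate neighbours. It is an illustrative CPU version only, not the FPGA design from the study discussed above.

/* Illustrative sketch only: a plain-C 5-point Jacobi-style stencil.
 * Each output point depends on a small, fixed neighbourhood of inputs,
 * which is the regular structure that stencil accelerators (such as the
 * FPGA design discussed above) exploit through deep pipelining. */
#include <stddef.h>

void jacobi_step(size_t nx, size_t ny, const float *in, float *out) {
    for (size_t y = 1; y + 1 < ny; y++) {
        for (size_t x = 1; x + 1 < nx; x++) {
            size_t i = y * nx + x;
            out[i] = 0.25f * (in[i - 1] + in[i + 1] + in[i - nx] + in[i + nx]);
        }
    }
}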

Lastly, what do you like about Sweden, Stockholm and KTH?

I was born in Sweden, but I would have to say that what I like most are the generous conditions for parental leave, which allow me to spend time with my beautiful daughter Mika and my fantastic wife, Linda. As for Stockholm, since I was born here, there are many nice things about it. One thing that I used to do a lot when I was younger (read: as a child) was to take walks with my parents or grandparents. In particular, there is a fantastic beach walk that starts from Mälarhöjden and goes all the way to Sätra, offering great nature, resting places, and awesome fishing spots (fishing being a hobby of mine). I recommend this walk to anyone visiting.

As for KTH, I really like and appreciate working with my colleagues and students both at KTH Kista and KTH Campus. Overall, I think I have a good working environment and good, encouraging bosses. I also teach a rather large course at KTH (code: IS1500), which I really enjoy (and I hope my students do, too!). Finally, I really like the freedom that comes with academic research, and that working with research here at KTH never gets monotonous or boring!

Artur Podobas profile
Podobas-Labs
Artur Podobas Google Scholar

New digitalisation researcher presentations

The KTH Digitalisation Platform regularly presents one new KTH faculty member in the digitalisation area.

Interview archive

Feel free to forward suggestions to the platform directors: digitalizationplatform@kth.se
