Parallel Programming Using Message Passing with MPI Online Course
Juan De Gracia Triviño, PDC
As part of the National Academic Infrastructure for Supercomputing in Sweden (NAISS) training programme, PDC collaborated with the High Performance Computing Center North (HPC2N) at Umeå University and the Centre for Scientific and Technical Computing (LUNARC) at Lund University to offer an online course at the end of last year on parallel programming using message passing with MPI. The course concluded with remarkable success, drawing keen interest from Swedish academia and industry. Designed for beginners with no previous experience in parallel computing, the course offered an engaging introduction to the world of message passing and distributed memory computing, both of which are cornerstones of high-performance computing (HPC) across a variety of computer architectures.
About forty people participated in the course, and they displayed a strong preference for Python (mpi4py) over traditional languages like C++ and Fortran, with the latter being the least popular amongst the participants. This trend underscores the growing appeal of Python in the parallel computing domain, reflecting its accessibility and the expanding ecosystem of scientific libraries.
The course explored the essentials of message passing, a programming model that has gained widespread adoption in massively parallel HPC. By covering topics from point-to-point communication to non-blocking and collective communication calls, the course equipped learners with the foundational knowledge and skills to navigate the complexities of current computer architectures. From multi-core desktops to the fastest HPC systems in the world, message passing is pivotal in harnessing the potential of several hundred thousand processing elements.
Theoretical lectures were complemented by live demonstrations and practical sessions, enriching the participants’ learning experience and deepening their understanding of the material. This hands-on approach ensured that by the end of the course, attendees were not only familiar with the key MPI calls but also capable of writing their own MPI programs at an intermediate level.
The successful conclusion of this course highlights the growing interest and importance of parallel computing skills in today’s technology-driven landscape. PDC was encouraged by the enthusiastic participation and the positive feedback received, and we look forward to offering further courses that meet the evolving needs of the global computing community.