This course will take place as an on-site and in-person event. It is not possible to attend online.
An introduction to the parallel programming of supercomputers is given. The focus is on the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. In addition, OpenMP, which is commonly used on shared-memory architectures, will be presented.
The first four days of the course consist of lectures and short exercises. A fifth day is devoted to demonstrating the use of MPI and OpenMP in a larger context. To this end, starting from a simple but representative serial algorithm, a parallel version will be designed and implemented using the techniques presented in the course.
Topics covered will include:
- Fundamentals of Parallel Computing
- MPI (basics, point-to-point communication, collective communication, derived data types, I/O, communicators, thread compliance)
- OpenMP (parallel construct, data sharing, loop work sharing, task work sharing)
Prerequisites: knowledge of Linux (e.g. make, a command-line editor, the Linux shell) and experience in one of the following programming languages: C/C++, Fortran, or Python.
This course is given in English.
14-18 August 2023, 09:00-16:30 each day
Chew Junxian, Ilya Zhukov, Jolanta Zjupa (JSC)