Introduction to Parallel Programming with MPI
Course Description
This course gives an introduction to the Message Passing Interface (MPI), the dominant programming paradigm for distributed-memory systems in High Performance Computing. The following topics are covered:
- Basic principles of distributed-memory computer architecture and the Message Passing Interface (MPI)
- Blocking and non-blocking point-to-point communication
- Blocking and non-blocking collective communication
- Derived data types
- Sub-communicators, inter-communicators
- Performance issues
Lectures are accompanied by hands-on exercises.
Learning Objectives
At the conclusion of the course, you will be able to:
- understand the principles of HPC cluster architecture and of distributed-memory parallel programming on such systems,
- employ the fundamental communication primitives of MPI,
- use derived data types to simplify complex communication requirements,
- employ communicators and sub-communicators,
- understand the most common performance issues with MPI programming and parallel programming in general,
- employ a tracing tool for simple MPI program analysis.
Course Structure
Certification
A digital certificate of attendance will be awarded to all participants who attend the majority of the course.
Prerequisites
Participants should meet the following requirements:
- Familiarity with one of the standard HPC programming languages (C, C++, or Fortran)
- Ability to handle the Linux command line via a remote connection (editing, compiling)
Upcoming Iterations and Additional Courses
You can find dates and registration links for this and other upcoming NHR@FAU courses at https://go-nhr.de/trainings.
