NHR@FAU Course Program in Spring 2024
We are happy to announce the NHR@FAU course program for Spring 2024.
- Fundamentals of Accelerated Computing with CUDA C/C++. Full-day online course, February 29.
The course covers fundamental tools and techniques for GPU-accelerated C/C++ applications with CUDA. The course and the hands-on exercises are part of the NVIDIA DLI program. (A minimal CUDA kernel sketch is shown below the course list.)
- Fundamentals of Accelerated Computing with CUDA Python. Full-day online course, March 14.
The course covers fundamental tools and techniques for GPU-accelerated Python applications with CUDA and Numba. The course and the hands-on exercises are part of the NVIDIA DLI program.
- Introduction to Parallel Programming with OpenMP, Part 1. Full-day online course, March 5.
OpenMP is a standard for parallelizing shared-memory C/C++ and Fortran applications. It is supported by major compilers and provides a simple, low-barrier entry into thread-based parallelization. This course with hands-on exercises gives an introduction to the basic workings of OpenMP and the constructs used for parallelizing applications. (A minimal OpenMP sketch is shown below the course list.)
- Introduction to Parallel Programming with OpenMP, Part 2. Full-day online course, March 12.
OpenMP is a standard for parallelizing shared-memory C/C++ and Fortran applications. It is supported by major compilers and provides a simple, low-barrier entry into thread-based parallelization. This course with hands-on exercises introduces advanced topics for parallelizing applications with OpenMP, including thread and memory locality, tasking, SIMD, and accelerator offloading.
- Performance Analysis on GPUs with NVIDIA Tools. Half-day online course, March 19.
This course introduces NVIDIA’s profiler as a tool to spot common performance bugs that arise when porting code to GPUs. Attendees will be able to follow along with the demos and conduct their own experiments on the NHR@FAU GPU cluster.
- Multi-GPU Programming with CUDA C++ Part 1: Accelerating CUDA C++ Applications with Multiple GPUs. Full-day online course, April 5.
This course covers techniques to accelerate single-GPU CUDA applications by overlapping computation with data transfers as well as by employing multiple GPUs within one compute node. The course and the hands-on exercises are part of the NVIDIA DLI program. (A minimal stream-overlap sketch is shown below the course list.)
- Multi-GPU Programming with CUDA C++ Part 2: Scaling CUDA C++ Applications to Multiple Nodes. Full-day online course, April 10.
This course covers techniques to scale existing CUDA applications to multiple GPU-enabled nodes. The course and the hands-on exercises are part of the NVIDIA DLI program.
- Introduction to Parallel Programming with MPI. Two-day online course, April 11/12.
This course gives an introduction to the Message Passing Interface (MPI), the dominant distributed-memory programming paradigm in High Performance Computing. (A minimal MPI sketch is shown below.)
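To give a flavor of the topics above, a few minimal sketches follow. All of them are illustrative assumptions only; function names, problem sizes, and build commands are not taken from the course material. First, the kind of CUDA C/C++ kernel the "Fundamentals of Accelerated Computing with CUDA C/C++" course builds up to, here a simple vector addition using unified memory, compiled with nvcc:

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each GPU thread adds one element of x and y (hypothetical example).
    __global__ void add(int n, const float *x, const float *y, float *z) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) z[i] = x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y, *z;
        // Unified (managed) memory keeps the sketch short; explicit copies work as well.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        cudaMallocManaged(&z, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        add<<<blocks, threads>>>(n, x, y, z);  // launch the kernel on the GPU
        cudaDeviceSynchronize();               // wait for the GPU to finish

        printf("z[0] = %f\n", z[0]);           // expected: 3.0
        cudaFree(x); cudaFree(y); cudaFree(z);
        return 0;
    }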
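Second, a minimal sketch of the directive-based loop parallelization introduced in the OpenMP Part 1 course; the loop and variable names are hypothetical, and the code can be built with any OpenMP-capable compiler (e.g., gcc -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        // Distribute loop iterations across threads; the reduction clause
        // combines the per-thread partial sums safely.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i) {
            sum += 1.0 / (i + 1);
        }

        printf("harmonic sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }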
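Third, a sketch of the overlap technique mentioned for the Multi-GPU Part 1 course: the input is processed in chunks, and each chunk's copy-in, kernel, and copy-out are issued into a separate CUDA stream so that transfers of one chunk can overlap with computation on another. The chunk count, kernel, and names are assumptions for illustration:

    #include <cuda_runtime.h>

    // Hypothetical kernel: double every element of a chunk.
    __global__ void scale(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 22, chunks = 4, chunk = n / chunks;
        float *h, *d;
        cudaMallocHost(&h, n * sizeof(float));  // pinned host memory enables async copies
        cudaMalloc(&d, n * sizeof(float));
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        cudaStream_t s[chunks];
        for (int c = 0; c < chunks; ++c) cudaStreamCreate(&s[c]);

        for (int c = 0; c < chunks; ++c) {
            float *hp = h + c * chunk, *dp = d + c * chunk;
            cudaMemcpyAsync(dp, hp, chunk * sizeof(float), cudaMemcpyHostToDevice, s[c]);
            scale<<<(chunk + 255) / 256, 256, 0, s[c]>>>(dp, chunk);
            cudaMemcpyAsync(hp, dp, chunk * sizeof(float), cudaMemcpyDeviceToHost, s[c]);
        }
        cudaDeviceSynchronize();  // wait for all streams to finish

        for (int c = 0; c < chunks; ++c) cudaStreamDestroy(s[c]);
        cudaFreeHost(h); cudaFree(d);
        return 0;
    }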
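Finally, a minimal sketch of the message-passing model covered in the MPI course; the reduction example and names are hypothetical. It can be built with an MPI compiler wrapper (e.g., mpicc) and launched with mpirun:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        // Each rank contributes its own number; MPI_Reduce sums them on rank 0.
        int value = rank, total = 0;
        MPI_Reduce(&value, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }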
Please click on the links for more information and registration. For a full overview, please have a look at: NHR@FAU Tutorials and Courses
Dr. Georg Hager
Head of Training & Support
Erlangen National High Performance Computing Center (NHR@FAU)
Training & Support Division
- Phone number: +49 9131 85-28973
- Email: georg.hager@fau.de