Lecture number for WS 2018/19: L.079.05733
Please note: Due to a very high number of participants, the lecture and exercises need to be moved to a larger room. Please check PANDA and PAUL for the latest information.
All materials (slides, exercises, programming exercises) and current information will be provided via the PANDA lecture management system, see PANDA page for this lecture.
The course comprises three components: lecture, theoretical exercises, and programming exercises, which will be held during the following time slots:
- Lecture: Mondays 11:15-12:45, lecture hall C1 on 29 October; see PANDA or PAUL for future dates
- Theoretical exercises: Wednesdays 11:15-12:00, lecture hall G (may change in the future)
- Programming exercises: Wednesdays 12:30-13:45, lecture hall G (may change in the future)
The first lecture will take place on 15 Oct 2018.
The exercises will begin on 24 Oct 2018.
Goals and Contents of the Lecture
The goal of this course is to teach the fundamentals of high-performance computing. The emphasis of the course is on programming. That is, we will discuss programming models, languages, and frameworks for efficiently using parallel computer systems. The lecture will be complemented by a considerable amount of practical programming exercises that allow the students to gain practical experience with programming, performance optimization, and debugging on parallel computer systems. To this end, the students will get access to the HPC clusters operated by the Paderborn Center for Parallel Computing (PC²).
The lecture and exercises will be partially based on the textbook Peter S. Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann Publishers, 2011. The book is available online within the Paderborn University network (use VPN or DFN-AAI for access from outside). The book also comprises a number of code excerpts from programs that illustrate the use of the parallel programming techniques introduced in the book. The source code for these examples is available here.
The following topics will be covered:
- Introduction to parallel computing and parallel computing systems
- Distributed memory programming with MPI
- Shared memory programming with pThreads
- Shared memory programming with OpenMP
- Single node performance optimization
Additionally, we may cover more advanced topics, such as load balancing and programming heterogeneous computing systems with OpenCL.