|Term ||WS 2022/23 + SS 2023 |
|Program ||Computer Science Master's, Computer Engineering Master's |
|Lecture number || |
|Status ||24 June 2022: Project Group Announced |
|Regular Meeting Hours ||TBD |
Goals and Contents
Over the last couple of years, FPGAs have established themselves as power-efficient accelerators in data centers and HPC. With the installation of 48 Xilinx Alveo U280 FPGA cards and 32 Bittware 520N cards with Intel Stratix 10 FPGAs in the new Noctua 2 HPC cluster, the Paderborn Center for Parallel Computing (PC2) hosts a unique infrastructure for parallel computation on FPGAs, which will be used in this project. For several applications, such as N-body simulations, shallow-water simulations on unstructured meshes, and communication-focused HPCC benchmarks, we have already investigated and demonstrated the particular scaling potential of Multi-FPGA applications. This project group can contribute to this still-novel field.
In this project, you will
- Get familiar, in a tutorial phase, with FPGA programming using Intel oneAPI, Xilinx Vitis, and possibly OpenCL.
- Learn about performance modeling in HPC in general and for FPGA acceleration in particular.
- Explore different communication modes for Multi-FPGA applications (MPI via host, MPI via direct FPGA network, DMA transfers within a node, streaming with direct point-to-point connections) and characterize them in terms of bandwidth, latency, and ease of use.
- Port one or two (depending on group size) HPC applications to a Multi-FPGA design, testing different communication variants or using the analytically determined best one.
- Relevant application domains can include:
- Numeric simulations on structured or unstructured meshes
- Dense or sparse linear algebra
- Parallel graph processing
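As a first taste of the performance-modeling and communication-characterization work described above, the different communication modes can be compared with a simple latency-bandwidth (alpha-beta) cost model, where a transfer of n bytes costs t = alpha + n / beta. The sketch below is purely illustrative: the mode names mirror the variants listed above, but all latency and bandwidth numbers are made-up placeholders, not measured values for Noctua 2.

```python
# Illustrative alpha-beta cost model for choosing a communication mode.
# All parameter values are hypothetical placeholders, NOT measurements.

MODES = {
    # mode: (latency in seconds, bandwidth in bytes/second)
    "mpi_via_host":       (10e-6,  12e9),  # extra hop through host memory
    "mpi_direct_network": (2e-6,   10e9),  # MPI over the direct FPGA network
    "dma_within_node":    (5e-6,   16e9),  # DMA transfers between cards in a node
    "streaming_p2p":      (0.5e-6,  8e9),  # direct point-to-point streaming links
}

def transfer_time(mode: str, nbytes: int) -> float:
    """Predicted transfer time t = alpha + nbytes / beta."""
    alpha, beta = MODES[mode]
    return alpha + nbytes / beta

def best_mode(nbytes: int) -> str:
    """Mode with the lowest predicted time for a given message size."""
    return min(MODES, key=lambda m: transfer_time(m, nbytes))

if __name__ == "__main__":
    for size in (64, 64 * 1024, 64 * 1024 * 1024):
        print(f"{size:>10} bytes -> {best_mode(size)}")
```

Even with placeholder numbers, the model shows the expected crossover: low-latency streaming wins for small messages, while high-bandwidth modes win for large transfers. In the project, the alpha and beta values would be replaced by measured bandwidth and latency figures from the characterization phase.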