
Journal Papers



2021

The HighPerMeshes framework for numerical algorithms on unstructured grids

S. Alhaddad, J. Förstner, S. Groth, D. Grünewald, Y. Grynko, F. Hannig, T. Kenter, F. Pfreundt, C. Plessl, M. Schotte, T. Steinke, J. Teich, M. Weiser, F. Wende, Concurrency and Computation: Practice and Experience (2021), e6616


The Strong Scaling Advantage of FPGAs in HPC for N-body Simulations

J. Menzel, C. Plessl, T. Kenter, ACM Transactions on Reconfigurable Technology and Systems (2021), 15(1), pp. 1-30

N-body methods are one of the essential algorithmic building blocks of high-performance and parallel computing. Previous research has shown promising performance for implementing n-body simulations with pairwise force calculations on FPGAs. However, to avoid challenges with accumulation and memory access patterns, the presented designs calculate each pair of forces twice, along with both force sums of the involved particles. They also require large problem instances with hundreds of thousands of particles to reach their respective peak performance, which limits their applicability in strong scaling scenarios. This work addresses both issues with a novel FPGA design that uses each calculated force twice and overlaps data transfers and computations in a way that allows peak performance to be reached even for small problem instances, outperforming previous single-precision results even in double precision and scaling linearly over multiple interconnected FPGAs. For a comparison across architectures, we provide an equally optimized CPU reference, which for large problems actually achieves higher peak performance per device. However, given the strong scaling advantages of the FPGA design, in parallel setups with a few thousand particles per device the FPGA platform achieves the highest performance and power efficiency.
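
To make the force-reuse idea concrete, the following minimal Python sketch evaluates each particle pair once and accumulates the resulting force into both particles' sums (Newton's third law). It is a plain O(N²) reference in units where G = 1 with an assumed softening parameter; it is not the FPGA dataflow design or the CPU reference from the paper.

```python
import numpy as np

def pairwise_forces(pos, mass, softening=1e-3):
    """Accumulate gravitational forces, evaluating each pair only once.

    Each computed pair force is applied to both involved particles,
    which is the force-reuse idea described above. Plain O(N^2)
    reference code, not the FPGA design from the paper.
    """
    n = pos.shape[0]
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            inv_r3 = (d @ d + softening ** 2) ** -1.5
            f_ij = mass[i] * mass[j] * inv_r3 * d   # force on i due to j
            forces[i] += f_ij
            forces[j] -= f_ij                       # reuse: equal and opposite force on j
    return forces
```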


2020

Accurate Sampling with Noisy Forces from Approximate Computing

V. Rengaraj, M. Lass, C. Plessl, T. Kühne, Computation (2020), 8(2), 39

In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that the high efficiency in terms of performance and low power consumption entails the massive usage of low-precision computing units. Here, based on the approximate computing paradigm, we present an algorithmic method to rigorously compensate for numerical inaccuracies due to low-accuracy arithmetic operations, while still obtaining exact expectation values using a properly modified Langevin-type equation.
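
The abstract does not spell out the modified equation of motion. As a hedged sketch of the general idea behind such Langevin-type compensation schemes (the symbols and the precise form below are assumptions, not taken from the paper), the noisy forces can be embedded in a Langevin equation whose friction coefficient is fixed by a fluctuation-dissipation relation:

```latex
% Illustrative Langevin-type equation with noisy forces (assumed notation)
\begin{align}
  \mathbf{F}^{\mathrm{N}}_I &= \mathbf{F}_I + \boldsymbol{\Xi}^{\mathrm{N}}_I
    && \text{exact force plus noise from low-precision arithmetic} \\
  M_I \ddot{\mathbf{R}}_I &= \mathbf{F}^{\mathrm{N}}_I - \gamma_{\mathrm{N}} M_I \dot{\mathbf{R}}_I
    && \text{Langevin-type equation of motion} \\
  \langle \boldsymbol{\Xi}^{\mathrm{N}}_I(0)\,\boldsymbol{\Xi}^{\mathrm{N}}_I(t) \rangle
    &= 2\,\gamma_{\mathrm{N}} M_I k_B T\,\delta(t)
    && \text{fluctuation-dissipation condition fixing } \gamma_{\mathrm{N}}
\end{align}
```

If the friction coefficient is chosen so that the total noise satisfies the last relation, the dynamics samples the correct canonical ensemble and expectation values remain exact despite the noisy forces.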


CP2K: An electronic structure and molecular dynamics software package - Quickstep: Efficient and accurate electronic structure calculations

T. Kühne, M. Iannuzzi, M.D. Ben, V.V. Rybkin, P. Seewald, F. Stein, T. Laino, R.Z. Khaliullin, O. Schütt, F. Schiffmann, D. Golze, J. Wilhelm, S. Chulkov, M.H. Bani-Hashemian, V. Weber, U. Borstnik, M. Taillefumier, A.S. Jakobovits, A. Lazzaro, H. Pabst, T. Müller, R. Schade, M. Guidon, S. Andermatt, N. Holmberg, G.K. Schenter, A. Hehn, A. Bussy, F. Belleflamme, G. Tabacchi, A. Glöß, M. Lass, I. Bethune, C.J. Mundy, C. Plessl, M. Watkins, J. VandeVondele, M. Krack, J. Hutter, The Journal of Chemical Physics (2020), 152(19), 194103

CP2K is an open source electronic structure and molecular dynamics software package to perform atomistic simulations of solid-state, liquid, molecular, and biological systems. It is especially aimed at massively parallel and linear-scaling electronic structure methods and state-of-the-art ab initio molecular dynamics simulations. Excellent performance for electronic structure calculations is achieved using novel algorithms implemented for modern high-performance computing systems. This review revisits the main capabilities of CP2K to perform efficient and accurate electronic structure simulations. The emphasis is put on density functional theory and multiple post–Hartree–Fock methods using the Gaussian and plane wave approach and its augmented all-electron extension.


2019

A General Algorithm to Calculate the Inverse Principal p-th Root of Symmetric Positive Definite Matrices

D. Richters, M. Lass, A. Walther, C. Plessl, T. Kühne, Communications in Computational Physics (2019), 25(2), pp. 564-585

We address the general mathematical problem of computing the inverse p-th root of a given matrix in an efficient way. A new method to construct iteration functions that allow calculating arbitrary p-th roots and their inverses of symmetric positive definite matrices is presented. We show that the order of convergence is at least quadratic and that adaptively adjusting a parameter q always leads to an even faster convergence. In this way, a better performance than with previously known iteration schemes is achieved. The efficiency of the iterative functions is demonstrated for various matrices with different densities, condition numbers and spectral radii.
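
The generalized iteration functions themselves are not reproduced in the abstract. As a point of reference for the kind of fixed-point scheme being generalized, the sketch below implements only the well-known Newton-Schulz-type iteration for the special case p = 2, i.e. the inverse square root of an SPD matrix; the function name and tolerances are illustrative, and the paper's scheme for arbitrary p with the acceleration parameter q is not shown.

```python
import numpy as np

def inv_sqrt_spd(A, tol=1e-12, max_iter=100):
    """Newton-Schulz-type iteration converging to A^{-1/2} for SPD A.

    Illustrates only the p = 2 special case; the paper derives iteration
    functions for arbitrary p together with an acceleration parameter q.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Scale the initial guess so that the eigenvalues of A @ X @ X lie in (0, 1].
    X = I / np.sqrt(np.linalg.norm(A, 2))
    for _ in range(max_iter):
        E = I - A @ X @ X                 # residual, contracts quadratically
        if np.linalg.norm(E) < tol:
            break
        X = 0.5 * X @ (2.0 * I + E)       # X_{k+1} = 1/2 * X_k (3I - A X_k^2)
    return X
```

For a well-conditioned SPD matrix, inv_sqrt_spd(A) returns X with X @ X close to np.linalg.inv(A); ill-conditioned matrices need more iterations, which is the kind of situation where faster-converging schemes matter.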


FPGAs im Rechenzentrum

M. Platzner, C. Plessl, Informatik Spektrum (2019)


Transparent Acceleration for Heterogeneous Platforms with Compilation to OpenCL

H. Riebler, G.F. Vaz, T. Kenter, C. Plessl, ACM Trans. Archit. Code Optim. (TACO) (2019), 16(2), pp. 14:1–14:26



2018

Sprint diagnostic with GPS and inertial sensor fusion

J.C. Mertens, A. Boschmann, M. Schmidt, C. Plessl, Sports Engineering (2018), 21(4), pp. 441-451



Using Approximate Computing for the Calculation of Inverse Matrix p-th Roots

M. Lass, T. Kühne, C. Plessl, Embedded Systems Letters (2018), 10(2), pp. 33-36

Approximate computing has shown to provide new ways to improve performance and power consumption of error-resilient applications. While many of these applications can be found in image processing, data classification or machine learning, we demonstrate its suitability to a problem from scientific computing. Utilizing the self-correcting behavior of iterative algorithms, we show that approximate computing can be applied to the calculation of inverse matrix p-th roots which are required in many applications in scientific computing. Results show great opportunities to reduce the computational effort and bandwidth required for the execution of the discussed algorithm, especially when targeting special accelerator hardware.
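
As a hedged sketch of that self-correcting behavior (assumptions: NumPy, float32 as the "approximate" number format, and the p = 2 Newton-Schulz iteration shown for the 2019 paper above; the actual work targets custom accelerator arithmetic):

```python
import numpy as np

def inv_sqrt_mixed_precision(A, coarse_iters=20, refine_iters=5):
    """Run most iterations in float32, then refine the result in float64.

    Because the iteration is self-correcting, errors introduced during the
    low-precision phase are contracted again by the final full-precision
    steps. Illustrative only; not the accelerator setup from the paper.
    """
    n = A.shape[0]
    A32 = A.astype(np.float32)
    I32 = np.eye(n, dtype=np.float32)
    scale = np.float32(np.linalg.norm(A32, 2))
    X = I32 / np.sqrt(scale)
    for _ in range(coarse_iters):                  # low-precision phase
        X = 0.5 * X @ (3.0 * I32 - A32 @ X @ X)
    X = X.astype(np.float64)                       # high-precision refinement
    A64, I64 = A.astype(np.float64), np.eye(n)
    for _ in range(refine_iters):
        X = 0.5 * X @ (3.0 * I64 - A64 @ X @ X)
    return X
```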


2017

Efficient Branch and Bound on FPGAs Using Work Stealing and Instance-Specific Designs

H. Riebler, M. Lass, R. Mittendorf, T. Löcke, C. Plessl, ACM Transactions on Reconfigurable Technology and Systems (TRETS) (2017), 10(3), pp. 24:1-24:23

Branch and bound (B&B) algorithms structure the search space as a tree and eliminate infeasible solutions early by pruning subtrees that cannot lead to a valid or optimal solution. Custom hardware designs significantly accelerate the execution of these algorithms. In this article, we demonstrate a high-performance B&B implementation on FPGAs. First, we identify general elements of B&B algorithms and describe their implementation as a finite state machine. Then, we introduce workers that autonomously cooperate using work stealing to allow parallel execution and full utilization of the target FPGA. Finally, we explore advantages of instance-specific designs that target a specific problem instance to improve performance. We evaluate our concepts by applying them to a branch and bound problem, the reconstruction of corrupted AES keys obtained from cold-boot attacks. The evaluation shows that our work stealing approach is scalable with the available resources and provides speedups proportional to the number of workers. Instance-specific designs allow us to achieve an overall speedup of 47× compared to the fastest implementation of AES key reconstruction so far. Finally, we demonstrate how instance-specific designs can be generated just-in-time such that the provided speedups outweigh the additional time required for design synthesis.
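
For readers unfamiliar with the structure being mapped to hardware, the following generic Python skeleton shows the B&B elements named above (branching, bounding, pruning against an incumbent). It is a plain software illustration with assumed callback names; it is neither the finite-state-machine formulation nor the work-stealing FPGA design, and it contains nothing AES-specific.

```python
import heapq
from itertools import count

def branch_and_bound(root, branch, lower_bound, cost, is_complete):
    """Best-first branch and bound with pruning against the incumbent.

    `branch` expands a partial solution into children, `lower_bound` gives
    an optimistic cost estimate used for pruning, and `cost`/`is_complete`
    evaluate finished solutions. In the FPGA design, subtrees would instead
    be distributed to autonomous workers via work stealing.
    """
    best, best_cost = None, float("inf")
    tie = count()                                   # heap tie-breaker
    frontier = [(lower_bound(root), next(tie), root)]
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= best_cost:
            continue                                # prune: cannot beat incumbent
        if is_complete(node):
            best, best_cost = node, cost(node)
            continue
        for child in branch(node):
            b = lower_bound(child)
            if b < best_cost:                       # keep only promising subtrees
                heapq.heappush(frontier, (b, next(tie), child))
    return best, best_cost
```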


High-Throughput and Low-Latency Network Communication with NetIO

J. Schumacher, C. Plessl, W. Vandelli, Journal of Physics: Conference Series (2017), 898, 082003



2016

Potential and Methods for Embedding Dynamic Offloading Decisions into Application Code

G.F. Vaz, H. Riebler, T. Kenter, C. Plessl, Computers and Electrical Engineering (2016), 55, pp. 91-111

A broad spectrum of applications can be accelerated by offloading computation intensive parts to reconfigurable hardware. However, to achieve speedups, the number of loop iterations (trip count) needs to be sufficiently large to amortize offloading overheads. Trip counts are frequently not known at compile time, but only at runtime just before entering a loop. Therefore, we propose to generate code for both the CPU and the coprocessor, and defer the offloading decision to the application runtime. We demonstrate how a toolflow, based on the LLVM compiler framework, can automatically embed dynamic offloading decisions into the application code. We perform in-depth static and dynamic analysis of popular benchmarks, which confirm the general potential of such an approach. We also propose to optimize the offloading process by decoupling the runtime decision from the loop execution (decision slack). The feasibility of our approach is demonstrated by a toolflow that automatically identifies suitable data-parallel loops and generates code for the FPGA coprocessor of a Convey HC-1. We evaluate the integrated toolflow with representative loops executed for different input data sizes.
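
A conceptual sketch of such an embedded runtime decision is given below, assuming a hypothetical SAXPY-style loop and an illustrative break-even trip count; in the actual toolflow, both code versions and the decision logic are generated automatically by the LLVM-based compiler, and the accelerated path runs on the Convey HC-1 coprocessor rather than on NumPy.

```python
import numpy as np

OFFLOAD_THRESHOLD = 50_000   # illustrative break-even trip count (assumption)

def saxpy_cpu(x, y, a):
    """Plain scalar loop: the code path kept for small trip counts."""
    for i in range(len(x)):
        y[i] += a * x[i]

def saxpy_offloaded(x, y, a):
    """Stand-in for the generated coprocessor kernel (here just NumPy)."""
    y += a * x

def saxpy(x, y, a):
    # The trip count is only known here, at runtime, just before the loop,
    # so the offloading decision is embedded in the application code rather
    # than being made at compile time.
    if len(x) >= OFFLOAD_THRESHOLD:
        saxpy_offloaded(x, y, a)
    else:
        saxpy_cpu(x, y, a)
```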


2015

Aktuelles Schlagwort: Approximate Computing

C. Plessl, M. Platzner, P.J. Schreier, Informatik Spektrum (2015)(5), pp. 396-399



Exploring Tradeoffs between Specialized Kernels and a Reusable Overlay in a Stereo-Matching Case Study

T. Kenter, H. Schmitz, C. Plessl, International Journal of Reconfigurable Computing (IJRC) (2015), 2015, 859425

FPGAs are known to permit huge gains in performance and efficiency for suitable applications but still require reduced design efforts and shorter development cycles for wider adoption. In this work, we compare the resulting performance of two design concepts that in different ways promise such increased productivity. As common starting point, we employ a kernel-centric design approach, where computational hotspots in an application are identified and individually accelerated on FPGA. By means of a complex stereo matching application, we evaluate two fundamentally different design philosophies and approaches for implementing the required kernels on FPGAs. In the first implementation approach, we designed individually specialized data flow kernels in a spatial programming language for a Maxeler FPGA platform; in the alternative design approach, we target a vector coprocessor with large vector lengths, which is implemented as a form of programmable overlay on the application FPGAs of a Convey HC-1. We assess both approaches in terms of overall system performance, raw kernel performance, and performance relative to invested resources. After compensating for the effects of the underlying hardware platforms, the specialized dataflow kernels on the Maxeler platform are around 3x faster than kernels executing on the Convey vector coprocessor. In our concrete scenario, due to trade-offs between reconfiguration overheads and exposed parallelism, the advantage of specialized dataflow kernels is reduced to around 2.5x.


FELIX: a High-Throughput Network Approach for Interfacing to Front End Electronics for ATLAS Upgrades

J. Anderson, A. Borga, H. Boterenbrood, H. Chen, K. Chen, G. Drake, D. Francis, B. Gorini, F. Lanni, G. Lehmann Miotto, L. Levinson, J. Narevicius, C. Plessl, A. Roich, S. Ryu, F. Schreuder, J. Schumacher, W. Vandelli, J. Vermeulen, J. Zhang, Journal of Physics: Conference Series (2015), 664, 082050

The ATLAS experiment at CERN is planning full deployment of a new unified optical link technology for connecting detector front end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 GBT (GigaBit Transceiver) links, with transfer rates up to 10.24 Gbps, will replace existing links used for readout, detector control and distribution of timing and trigger information. A new class of devices will be needed to interface many GBT links to the rest of the trigger, data-acquisition and detector control systems. In this paper FELIX (Front End LInk eXchange) is presented, a PC-based device to route data from and to multiple GBT links via a high-performance general purpose network capable of a total throughput up to O(20 Tbps). FELIX implies architectural changes to the ATLAS data acquisition system, such as the use of industry standard COTS components early in the DAQ chain. Additionally the design and implementation of a FELIX demonstration platform is presented and hardware and software aspects will be discussed.


Self-Aware and Self-Expressive Systems – Guest Editor's Introduction

J. Torresen, C. Plessl, X. Yao, IEEE Computer (2015), 48(7), pp. 18-20



2014

Accelerating Finite Difference Time Domain Simulations with Reconfigurable Dataflow Computers

H. Giefers, C. Plessl, J. Förstner, ACM SIGARCH Computer Architecture News (2014), 41(5), pp. 65-70



ReconOS - An Operating System Approach for Reconfigurable Computing

A. Agne, M. Happe, A. Keller, E. Lübbers, B. Plattner, M. Platzner, C. Plessl, IEEE Micro (2014), 34(1), pp. 60-71

The ReconOS operating system for reconfigurable computing offers a unified multi-threaded programming model and operating system services for threads executing in software and threads mapped to reconfigurable hardware. The operating system interface allows hardware threads to interact with software threads using well-known mechanisms such as semaphores, mutexes, condition variables, and message queues. By semantically integrating hardware accelerators into a standard operating system environment, ReconOS allows for rapid design space exploration, supports a structured application development process, and improves the portability of applications.
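
As a software-only analogy for this unified programming model (standard Python threads and a queue; this is not the ReconOS API), the sketch below shows a worker thread serviced through a message queue. In ReconOS, the worker role could equally be filled by a hardware thread mapped to the FPGA and accessed through the same operating-system mechanisms.

```python
import threading
import queue

requests, results = queue.Queue(), queue.Queue()

def worker():
    """Stand-in for a ReconOS thread; in ReconOS this role could be taken by
    a hardware thread on the FPGA using the same message-queue interface."""
    while True:
        item = requests.get()
        if item is None:                 # shutdown message
            break
        results.put(item * item)         # placeholder computation

t = threading.Thread(target=worker)
t.start()
for value in range(4):                   # the software thread communicates via
    requests.put(value)                  # the same well-known mechanisms
requests.put(None)
t.join()
print([results.get() for _ in range(4)])
```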


Self-awareness as a Model for Designing and Operating Heterogeneous Multicores

A. Agne, M. Happe, A. Lösch, C. Plessl, M. Platzner, ACM Transactions on Reconfigurable Technology and Systems (TRETS) (2014), 7(2), 13

Self-aware computing is a paradigm for structuring and simplifying the design and operation of computing systems that face unprecedented levels of system dynamics and thus require novel forms of adaptivity. The generality of the paradigm makes it applicable to many types of computing systems and, previously, researchers started to introduce concepts of self-awareness to multicore architectures. In our work we build on a recent reference architectural framework as a model for self-aware computing and instantiate it for an FPGA-based heterogeneous multicore running the ReconOS reconfigurable architecture and operating system. After presenting the model for self-aware computing and ReconOS, we demonstrate with a case study how a multicore application built on the principle of self-awareness, autonomously adapts to changes in the workload and system state. Our work shows that the reference architectural framework as a model for self-aware computing can be practically applied and allows us to structure and simplify the design process, which is essential for designing complex future computing systems.


Seven Recipes for Setting Your FPGA on Fire – A Cookbook on Heat Generators

A. Agne, H. Hangmann, M. Happe, M. Platzner, C. Plessl, Microprocessors and Microsystems (2014), 38(8, Part B), pp. 911-919

Due to the continuously shrinking device structures and increasing densities of FPGAs, thermal aspects have become the new focus for many research projects over the last years. Most researchers rely on temperature simulations to evaluate their novel thermal management techniques. However, these temperature simulations require a high computational effort if a detailed thermal model is used and their accuracies are often unclear. In contrast to simulations, the use of synthetic heat sources allows for experimental evaluation of temperature management methods. In this paper we investigate the creation of significant rises in temperature on modern FPGAs to enable future evaluation of thermal management techniques based on experiments. To that end, we have developed seven different heat-generating cores that use different subsets of FPGA resources. Our experimental results show that, according to external temperature probes connected to the FPGA’s heat sink, we can increase the temperature by an average of 81 °C. This corresponds to an average increase of 156.3 °C as measured by the built-in thermal diodes of our Virtex-5 FPGAs in less than 30 min by only utilizing about 21 percent of the slices.


2012

IMORC: An Infrastructure and Architecture Template for Implementing High-Performance Reconfigurable FPGA Accelerators

T. Schumacher, C. Plessl, M. Platzner, Microprocessors and Microsystems (2012), 36(2), pp. 110-126



On the Feasibility and Limitations of Just-In-Time Instruction Set Extension for FPGA-based Reconfigurable Processors

M. Grad, C. Plessl, Int. Journal of Reconfigurable Computing (IJRC) (2012)



2011

FPGA Acceleration of Communication-bound Streaming Applications: Architecture Modeling and a 3D Image Compositing Case Study

T. Schumacher, T. Süß, C. Plessl, M. Platzner, Int. Journal of Reconfigurable Computing (IJRC) (2011)



2005

System-level performance evaluation of reconfigurable processors

R. Enzler, C. Plessl, M. Platzner, Microprocessors and Microsystems (2005), 29(2-3), pp. 63-73

Reconfigurable architectures that tightly integrate a standard CPU core with a field-programmable hardware structure have recently been receiving increasing attention. The design of such hybrid reconfigurable processors involves a multitude of design decisions, and evaluating their impact on the overall system performance is a challenging task. In this paper, we first present a framework for the cycle-accurate performance evaluation of hybrid reconfigurable processors on the system level. Then, we discuss a reconfigurable processor for data-streaming applications, which attaches a coarse-grained reconfigurable unit to the coprocessor interface of a standard embedded CPU core. By means of a case study we evaluate the system-level impact of certain design features for the reconfigurable unit, such as multiple contexts, register replication, and hardware context scheduling. The results illustrate that a system-level evaluation framework is of paramount importance for studying the architectural trade-offs and optimizing design parameters for reconfigurable processors.


2003

Instance-Specific Accelerators for Minimum Covering

C. Plessl, M. Platzner, Journal of Supercomputing (2003), 26(2), pp. 109-129

This paper presents the acceleration of minimum-cost covering problems by instance-specific hardware. First, we formulate the minimum-cost covering problem and discuss a branch & bound algorithm to solve it. Then we describe instance-specific hardware architectures that implement branch & bound in 3-valued logic and use reduction techniques similar to those found in software solvers. We further present prototypical accelerator implementations and a corresponding design tool flow. Our experiments reveal significant raw speedups up to five orders of magnitude for a set of smaller unate covering problems. Provided that hardware compilation times can be reduced, we conclude that instance-specific acceleration of hard minimum-cost covering problems will lead to substantial overall speedups.


The Case for Reconfigurable Hardware in Wearable Computing

C. Plessl, R. Enzler, H. Walder, J. Beutel, M. Platzner, L. Thiele, G. Tröster, Personal and Ubiquitous Computing (2003), 7(5), pp. 299-308

Wearable computers are embedded into the mobile environment of their users. A design challenge for wearable systems is to combine the high performance required for tasks such as video decoding with the low energy consumption required to maximise battery runtimes and the flexibility demanded by the dynamics of the environment and the applications. In this paper, we demonstrate that reconfigurable hardware technology is able to answer this challenge. We present the concept and the prototype implementation of an autonomous wearable unit with reconfigurable modules (WURM). We discuss experiments that show the uses of reconfigurable hardware in WURM: ASICs-on-demand and adaptive interfaces. Finally, we present an experiment with an operating system layer for WURM.


2001

Server-Side-Techniken im Web – ein Überblick

C. Plessl, E. Wilde, iX (2001), pp. 88-93

