
Teaching

Seminar: Domain-specific Hardware Architectures for Deep Neural Networks (5LP / 4LP, 2SE)

This seminar will be given in English. Course number in SS 2020: L.079.08002

News

Due to the current situation with the coronavirus SARS-CoV-2 and the corresponding measures of Paderborn University, we will run this seminar fully in electronic form until further notice. The central platform for the seminar is this PANDA course. We will provide all further information via PANDA, in particular the seminar schedule, reading materials, and assignments. During the originally planned weekly seminar slots, i.e., Thursdays from 14:15-15:45 hours, we will hold PANDA chats to discuss open issues and answer questions. The first PANDA chat will take place on 09.04.2020 at 14:15 hours.

Content of the Seminar

There are two laws, actually observations, that drove processor design and implementation over the last decades: Dennard scaling and Moore's law. Dennard scaling states that "the power density for a given silicon area remains constant when you increase the number of transistors," and it ended around 2003/2004. Consequently, performance growth through increasing clock speeds was no longer economical, and for large chips it is not even possible to switch on all transistors at once ("dark silicon" effect) due to the limited energy budget. Processor architectures moved away from ever more aggressive exploitation of instruction-level parallelism toward other, more explicit forms of parallelism, such as data and thread-level parallelism, to turn the increasing number of transistors into more performance. More recently, however, Moore's law, the observation that "the number of transistors per chip doubles every 18 months" (originally every 12 months), has been slowing down as well, resulting in a slower increase in the number of available transistors.
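As a back-of-the-envelope illustration of the doubling rate mentioned above (a sketch, not part of the seminar material; the function name and parameters are made up for this example):

```python
# Exponential growth under Moore's law: transistor count doubles
# once per doubling period (commonly quoted as 18 months).
def growth_factor(years, doubling_period_months=18):
    """Factor by which the transistor count grows after `years` years."""
    doublings = years * 12 / doubling_period_months
    return 2 ** doublings

# After 1.5 years there has been exactly one doubling: factor 2.
# Over a decade, 10 * 12 / 18 ≈ 6.67 doublings, roughly a 100x increase.
print(growth_factor(1.5))   # 2.0
print(growth_factor(10))    # ~101.6
```

If the doubling period stretches out, as the paragraph above notes is now happening, the same decade yields far less growth: with a 36-month period, `growth_factor(10, 36)` is only about 10x.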

To still be able to increase performance and energy efficiency, there is currently strong interest in domain-specific architectures (DSA): programmable hardware architectures tailored to a specific application domain.

An important domain with a lot of commercial interest and pressing computation demands is deep neural networks (DNN). In this seminar, we will look at the main concepts behind DSAs and then study a number of DSAs for DNNs, analyzing which architectural techniques they employ and what level of programming support is available.


Lecturer

Prof. Dr. Marco Platzner

Computer Engineering

Phone: +49 5251 60-5250
Fax: +49 5251 60-4250
Office: O3.207
Office hours: by appointment
