AlexNet Deep Neural Network on a Many Core Platform

Filipe Borges
Master's Thesis
VLSI Computation Laboratory
Department of Electrical and Computer Engineering
University of California, Davis
Technical Report ECE-VCL-2019-1, VLSI Computation Laboratory, University of California, Davis, 2019.

Abstract:

Deep neural networks are used in many engineering applications such as autonomous driving, image recognition, and natural language processing. For real-time applications, low-latency performance (i.e., the time it takes for a result to be calculated) is critical. This thesis presents a many-core implementation of AlexNet that is complete except for Local Response Normalization and offers low latency and high throughput (i.e., a large number of images classified per second). Details of the underlying modular algorithms and architecture of the deep neural network layers, as well as alternative mappings of the convolution and max pooling layers, are also presented.
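For readers unfamiliar with the two layer types whose mappings the thesis explores, the sketch below shows a minimal, single-channel convolution followed by max pooling in plain NumPy. This is a generic textbook formulation for illustration only, not the thesis's many-core mapping; the function names and parameters are assumptions.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution over a single-channel image: the core
    operation of AlexNet's convolution layers."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            patch = image[r*stride:r*stride+kh, c*stride:c*stride+kw]
            out[r, c] = np.sum(patch * kernel)  # multiply-accumulate
    return out

def max_pool(fmap, size=2, stride=2):
    """Max pooling: keep the largest value in each window of the
    feature map, reducing its spatial resolution."""
    h, w = fmap.shape
    oh = (h - size) // stride + 1
    ow = (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = fmap[r*stride:r*stride+size,
                             c*stride:c*stride+size].max()
    return out
```

Because both operations decompose into many independent window computations, they are natural candidates for distribution across the cores of a many-core array.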

The many-core implementation is compared against several general-purpose processors, GPUs, and FPGAs. The key metrics by which the processing platforms are compared are throughput, energy efficiency, throughput per area, and energy-delay product, all measured with a batch size of 1 since this results in the lowest possible latency. To account for different fabrication technologies, throughput, energy, and area data for all platforms are scaled to 32 nm. In throughput per watt, the many-core implementation offers a 2×–104× improvement over general-purpose processors, 2.5×–21× over FPGAs, and 2×–21× over GPUs. Simultaneously, it provides a 6×–22× improvement in throughput per area versus general-purpose processors, 2×–13× over FPGAs, and 1.3×–3.4× over GPUs. The many-core implementation also has the lowest energy-delay product among all platforms, offering a 30×–2693× improvement over general-purpose processors, 4×–738× over FPGAs, and 8×–84× over GPUs.
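The comparison metrics above can be written down compactly. The sketch below shows the energy-delay product for batch size 1 and a first-order area scaling to a 32 nm node; these are standard textbook formulations and the thesis's exact scaling methodology may differ, so treat the numbers and function names as illustrative assumptions.

```python
def energy_delay_product(energy_per_image_j, latency_s):
    """EDP = energy x delay; lower is better. At batch size 1,
    the delay is simply the single-image latency."""
    return energy_per_image_j * latency_s

def scale_area_to_32nm(area_mm2, tech_nm):
    """First-order technology scaling: area shrinks roughly with
    the square of the feature-size ratio."""
    return area_mm2 * (32.0 / tech_nm) ** 2
```

For example, a hypothetical 100 mm² chip fabricated in 64 nm would scale to roughly 25 mm² at 32 nm under this first-order model.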

Thesis

Reference

Filipe Borges, "AlexNet Deep Neural Network on a Many Core Platform," Master's Thesis, Technical Report ECE-VCL-2019-1, VLSI Computation Laboratory, ECE Department, University of California, Davis, September 2019.

BibTeX entry

@mastersthesis{filipe:vcl:mastersthesis,
   author      = {Filipe Borges},
   title       = {AlexNet Deep Neural Network on a Many Core Platform},
   school      = {University of California, Davis},
   year        = 2019,
   address     = {Davis, CA, USA},
   month       = sep,
   note        = {\url{http://vcl.ece.ucdavis.edu/pubs/theses/2019-1.fborges/}},
}

VCL Lab | ECE Dept. | UC Davis