Hardware Architectures for Deep Learning. Materials, Circuits and Devices

  • ID: 4791667
  • Book
  • IET Books

This book presents and discusses innovative ideas in the design, modelling, implementation, and optimization of hardware platforms for neural networks.

The rapid growth of server, desktop, and embedded applications based on deep learning has renewed interest in neural networks, with applications including image and speech processing, data analytics, robotics, healthcare monitoring, and IoT solutions. Efficiently implementing neural networks to support complex deep learning-based applications is a major challenge for embedded and mobile computing platforms with limited computational and storage resources and a tight power budget. Even for cloud-scale systems, it is critical to select the right hardware configuration, based on the neural network complexity and system constraints, to increase power and performance efficiency.

Hardware Architectures for Deep Learning provides an overview of this new field, from principles to applications, for researchers, postgraduate students and engineers who work on learning-based services and hardware platforms.

- Section I: Neural Networks: Concepts and Models
- Chapter 1: An Introduction to Artificial Neural Networks
- Chapter 2: Hardware Acceleration for Recurrent Neural Networks
- Chapter 3: Feedforward Neural Networks on Massively Parallel Architectures

- Section II: Neural Networks and Approximate Data Representation
- Chapter 4: Stochastic-Binary Convolutional Neural Networks with Deterministic Bit-streams
- Chapter 5: Binary Neural Networks

- Section III: Neural Networks and Sparsity
- Chapter 6: Hardware and Software Techniques for Sparse Deep Neural Networks
- Chapter 7: Computation Reuse-aware Accelerator for Neural Networks

- Section IV: Convolutional Neural Networks for Embedded Systems
- Chapter 8: CNN-Agnostic Accelerator Design for Low Latency Inference on FPGAs
- Chapter 9: Iterative Convolutional Neural Network (ICNN): An iterative CNN solution for low power and real-time systems

- Section V: Analog Neural Network Implementation: Methods and Applications
- Chapter 10: Mixed-Signal Neuromorphic Platform Design for Streaming Bio-Medical Signal Processing
- Chapter 11: Inverter-Based Memristive Neuromorphic Circuit for Ultra-Low-Power IoT Smart Applications

Masoud Daneshtalab
Tenured Associate Professor, Mälardalen University (MDH), Sweden
Adjunct Professor, Tallinn University of Technology (TalTech), Estonia

Masoud Daneshtalab is a tenured associate professor at Mälardalen University (MDH) in Sweden, an adjunct professor at Tallinn University of Technology (TalTech) in Estonia, and a member of the board of directors of Euromicro. His research interests include interconnection networks, brain-like computing, and deep learning architectures. He has published over 300 refereed papers.

Mehdi Modarressi
Assistant Professor, Department of Electrical and Computer Engineering, University of Tehran, Iran

Mehdi Modarressi is an assistant professor at the Department of Electrical and Computer Engineering, University of Tehran, Iran. He is the founder and director of the Parallel and Network-based Processing research laboratory at the University of Tehran, where he leads several industrial and research projects on deep learning-based embedded system design and implementation.
