
Purdue-affiliated Startup FWDNXT Designing Low-power Hardware for Deep Learning

Image: The "Snowflake" mobile coprocessor for deep learning from Purdue-affiliated startup FWDNXT (via HPCWire).

A Purdue-affiliated startup called FWDNXT is designing low-power hardware for mobile deep learning. Eugenio Culurciello, an associate professor at Purdue, says that the "Snowflake" mobile coprocessor "is able to achieve a computational efficiency of more than 91 percent on entire convolutional neural networks..."

I'm not exactly sure what he means by 91% computational efficiency. One of the biggest problems with deep learning right now is the lack of agreed-upon benchmarks and means of measuring performance. The figure could mean 91% of the chip's theoretical peak in terms of single-precision FLOPs, or 91% of the theoretical peak achievable on this particular network architecture. Unfortunately, there just isn't any way to know from the announcement.
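For context, here is a minimal sketch of how such a figure is usually computed: measured throughput divided by some notion of theoretical peak throughput. All of the numbers below are hypothetical placeholders, not FWDNXT specifications; the point is simply that the result depends entirely on which "peak" you divide by.

```python
# Minimal sketch: one common reading of "computational efficiency" is
# achieved throughput divided by the chip's theoretical peak throughput.
# Every number here is a hypothetical placeholder, not an FWDNXT spec.

def computational_efficiency(achieved_gflops: float, peak_gflops: float) -> float:
    """Return utilization as a fraction of theoretical peak throughput."""
    return achieved_gflops / peak_gflops

# Hypothetical convolutional layer: known FLOP count and a measured runtime.
layer_flops = 2 * 3.0e9            # 3 billion multiply-accumulates, 2 FLOPs each
runtime_s = 0.05                   # hypothetical measured runtime in seconds
achieved_gflops = layer_flops / runtime_s / 1e9

peak_gflops = 132.0                # hypothetical peak GFLOP/s for the accelerator

print(f"efficiency: {computational_efficiency(achieved_gflops, peak_gflops):.1%}")
```

Swap in a different "peak" (single-precision FLOPs for the whole chip versus the best achievable rate on this specific network topology) and the same measurement yields a very different efficiency number, which is exactly why the 91% claim is hard to interpret on its own.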

The original article is linked below (via HPCWire).

https://www.hpcwire.com/off-the-wire/purdue-affiliated-startup-designing-hardware-software-deep-learning/
