[caption id="" align="alignleft" width="343"]
The "Snowflake" mobile coprocessor for Deep Learning from Purdue-affiliated startup FWDNXT (via HPCWire).[/caption]
A Purdue-affiliated startup called FWDNXT is trying to design low-power mobile deep learning hardware. Eugenio Culurciello, an associate professor at Purdue, says that the "Snowflake" mobile coprocessor "is able to achieve a computational efficiency of more than 91 percent on entire convolutional neural networks..."
I'm not exactly sure what he means by 91% computational efficiency. One of the biggest problems with deep learning right now is the lack of agreed-upon benchmarks and ways of measuring performance. The figure could mean 91% of the chip's theoretical peak in single-precision FLOPs, or 91% of the peak performance attainable on this particular network architecture. Unfortunately, there's no way to tell from the announcement alone.
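To see why the choice of denominator matters, here's a minimal sketch. All of the numbers below are hypothetical, invented purely to illustrate the ambiguity; none come from FWDNXT's announcement.

```python
# Illustration: the same measured throughput yields different "efficiency"
# figures depending on what peak it is divided by. All numbers are made up.

def efficiency(achieved_gflops: float, peak_gflops: float) -> float:
    """Fraction of a peak FLOP rate actually achieved."""
    return achieved_gflops / peak_gflops

chip_peak = 500.0     # hypothetical chip-wide single-precision peak (GFLOP/s)
network_peak = 440.0  # hypothetical best rate attainable on one specific CNN,
                      # after accounting for its layer shapes and memory traffic
measured = 420.0      # hypothetical measured throughput on that CNN (GFLOP/s)

print(f"vs. chip peak:    {efficiency(measured, chip_peak):.1%}")
print(f"vs. network peak: {efficiency(measured, network_peak):.1%}")
```

Against the chip's raw peak this hypothetical run is 84% efficient, but against the peak achievable on that particular network it is about 95% efficient. Without knowing which denominator a vendor used, a headline efficiency number is hard to interpret.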
The original article is posted below (via HPCWire).
https://www.hpcwire.com/off-the-wire/purdue-affiliated-startup-designing-hardware-software-deep-learning/