News

BrainChip to Release Biologically Inspired Neuromorphic System-on-a-Chip for AI Acceleration

September 12, 2018 by Chantelle Dubois

BrainChip has announced the Akida Neuromorphic System-on-Chip (NSoC), the first production-volume artificial intelligence accelerator utilizing Spiking Neural Networks (SNNs).

Image courtesy of BrainChip.

The US-based company, which also has offices in France and Australia, specializes in neuromorphic computing solutions for AI, taking inspiration from how biological neurons work and translating that behavior into digital logic. Its recently announced NSoC promises high performance at low power by implementing SNNs with basic CMOS logic gates, without requiring power-hungry GPUs.

The Akida NSoC can interface with a variety of sensors for processing digital, analog, audio, dynamic vision sensor, and pixel-based data. Once received by the NSoC, the data is converted into spikes, which are then processed by the chip’s neuron fabric, where the SNN model is hosted. The NSoC can also interface with co-processors using PCIe, USB 3.0, UART, CAN, and Ethernet.
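
BrainChip has not published the details of its spike-conversion step, but the general idea of turning sensor data into spikes can be illustrated with a simple rate-coding sketch in Python. The rate_encode function and its parameters below are purely illustrative assumptions, not part of any BrainChip API:

    import numpy as np

    def rate_encode(pixels, n_steps=100, max_rate=0.5, rng=None):
        # Convert normalized pixel intensities (0..1) into Poisson-like spike
        # trains: brighter pixels produce spikes on more of the time steps.
        rng = np.random.default_rng() if rng is None else rng
        probs = np.clip(np.asarray(pixels, dtype=float) * max_rate, 0.0, 1.0)
        # spikes[t, i] == 1 means input channel i emitted a spike at time step t.
        spikes = rng.random((n_steps,) + probs.shape) < probs
        return spikes.astype(np.uint8)

    # Example: encode a tiny four-pixel "image" into 100 time steps of spikes.
    image = np.array([0.0, 0.25, 0.5, 1.0])
    spike_train = rate_encode(image)
    print(spike_train.sum(axis=0))  # brighter pixels accumulate more spikes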

Image courtesy of BrainChip.

BrainChip is betting on the expectation that the AI acceleration market will be worth more than $60 billion by 2025, and that AI computing at the edge will be an increasingly sought-after application.

What Is a Spiking Neural Network?

The Spiking Neural Network is the backbone technology behind the Akida NSoC. SNNs mimic the behavior of biological neurons more closely than the more commonly used Convolutional Neural Networks (CNNs).

In the spiking neuron model, a neuron fires only if a certain potential (or state) is reached within its "membrane": a threshold must be crossed before the neuron reacts and propagates information to other neurons, which in turn react according to their own potential thresholds. Time is also taken into account by each neuron, with the membrane potential decaying over time.
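
The behavior described above corresponds to the classic leaky integrate-and-fire (LIF) neuron model. The sketch below is a generic illustration of that model, not BrainChip's implementation; the weight, threshold, and decay values are arbitrary:

    def lif_neuron(input_spikes, weight=0.4, threshold=1.0, decay=0.9):
        # Minimal leaky integrate-and-fire neuron: each incoming spike raises
        # the membrane potential, the potential leaks away a little every time
        # step, and the neuron fires (then resets) once the threshold is crossed.
        potential = 0.0
        output_spikes = []
        for spike in input_spikes:
            potential = potential * decay + weight * spike  # leak, then integrate
            if potential >= threshold:
                output_spikes.append(1)   # fire
                potential = 0.0           # reset after firing
            else:
                output_spikes.append(0)
        return output_spikes

    # A dense burst of input spikes drives the potential over threshold;
    # sparse input leaks away before the neuron ever fires.
    print(lif_neuron([1, 1, 1, 0, 0, 1, 0, 0, 0, 1]))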

Using SNNs, the Akida NSoC is expected to learn efficiently from minimal training data and to associate information in a way that more closely resembles the human brain.

SNNs were described by neural network expert Wolfgang Maass as the "third generation of neural networks" as far back as 1997. Although the model is not exactly new, only a handful of attempts have been made to implement it in hardware:

  • The analog Neurogrid, developed at Stanford University in 2009
  • The SpiNNaker machine at the University of Manchester, which uses ARM processors and massively parallel computing
  • The 5.4-billion-transistor TrueNorth processor from IBM in 2014

Applications Across Domains

Some examples of applications of the Akida NSoC include:

Vision Systems

The Akida NSoC is expected to be particularly adept at object classification, pairing well with pixel-based, LiDAR, or dynamic vision sensors for use in robotic/drone navigation, autonomous driving, or surveillance. 

Using an SNN modeled for object classification, the NSoC is reported to consume less than 1 watt while classifying images from the CIFAR-10 dataset at a rate of 1,400 images per second per watt.

Image courtesy of BrainChip.

Surveillance

The Akida NSoC’s ability to learn with minimal training data also makes it useful for surveillance and law enforcement. The BrainChip Studio software works with the NSoC to process images with resolutions as low as 24x24 pixels for face detection and classification. This is ideal in the field, where operators often do not have access to multiple images of a suspect’s face or only have low-resolution security footage to work with.

When the Akida NSoC is paired with the BrainChip accelerator, up to 600 frames can be processed simultaneously over 16 channels with 16 virtual cores, while consuming 15 watts of power.
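
Taken at face value, those figures work out to roughly 37 frames per channel and 40 frames per watt (an illustrative back-of-the-envelope calculation, not a metric supplied by BrainChip):

    # Derived from the stated 600 frames, 16 channels, and 15 watts.
    frames, channels, watts = 600, 16, 15
    print(frames / channels)  # 37.5 frames per channel
    print(frames / watts)     # 40.0 frames per watt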

Samples of the Akida NSoC are expected to be available in Q3 2019.