JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

Distributed Simulation of Spiking Neural Networks

Here’s an interesting paper: Power, Energy and Speed of Embedded and Server Multi-Cores applied to Distributed Simulation of Spiking Neural Networks: ARM in NVIDIA Tegra vs Intel Xeon quad-cores by Pier Stanislao Paolucci, Roberto Ammendola, Andrea Biagioni, Ottorino Frezza, Francesca Lo Cicero, Alessandro Lonardo, Michele Martinelli, Elena Pastorelli, Francesco Simula, and Piero Vicini.

From the Abstract:

This short note regards a comparison of instantaneous power, total energy consumption, execution time and energetic cost per synaptic event of a spiking neural network simulator (DPSNN-STDP) distributed on MPI processes when executed either on an embedded platform (based on a dual-socket quad-core ARM platform) or a server platform (INTEL-based quad-core dual-socket platform).

In summary, we observed that: 1- we spent 2.2 micro-Joule per simulated synaptic event on the “embedded platform”, approx. 4.4 times lower than what was spent by the “server platform”; 2- the instantaneous power consumption of the “embedded platform” was 14.4 times better than the “server” one; 3- the server platform is a factor 3.3 faster. The “embedded platform” is made of NVIDIA Jetson TK1 boards, interconnected by Ethernet, each mounting a Tegra K1 chip including a quad-core ARM Cortex-A15@2.3GHz. The “server platform” is based on nodes which are dual-socket, quad-core Intel Xeon CPUs (E5620@2.4GHz). The measures were obtained with the DPSNN-STDP simulator (Distributed Simulation of Polychronous Spiking Neural Network with synaptic Spike-Timing Dependent Plasticity) developed by INFN, that already proved its efficient scalability and execution speed-up on hundreds of similar “server” cores and MPI processes, applied to neural nets composed of several billions of synapses.

The paper describes the tasks involved and gives a more detailed account of energy consumption. To complete the simulation task, the server consumed about 2.3 kJ of total energy versus 528 J for the Jetson cluster. The server is faster, finishing in 3.3 times less time than the Jetson, but the Jetson consumes 4.4 times less total energy for the task, with instantaneous power consumption 14.4 times lower than a server node's.
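The quoted ratios are easy to sanity-check with a few lines of arithmetic. A minimal sketch, using only the figures reported above (2.3 kJ and 528 J total energy, and the 2.2 µJ-per-synaptic-event figure on the Jetson):

```python
# Figures as reported in the paper's abstract and energy discussion.
server_energy_j = 2300.0   # ~2.3 kJ total energy on the Xeon server
jetson_energy_j = 528.0    # total energy on the Jetson TK1 cluster
jetson_uj_per_event = 2.2  # micro-Joules per simulated synaptic event

# Ratio of total energies: matches the quoted ~4.4x advantage.
energy_ratio = server_energy_j / jetson_energy_j
print(f"Total-energy ratio (server/Jetson): {energy_ratio:.1f}x")

# Implied per-event cost on the server, derived from that ratio
# (not quoted directly in the abstract).
server_uj_per_event = jetson_uj_per_event * energy_ratio
print(f"Implied server cost per synaptic event: {server_uj_per_event:.1f} uJ")
```

The derived server per-event cost is an inference from the quoted ratio, not a number stated in the paper itself.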

An interesting read.


