
BIDMach – Deep Learning with GPUs

A great article on the NVIDIA Parallel Forall blog: BIDMach: Machine Learning at the Limit with GPUs, by John Canny of UC Berkeley.

Introduction:

Deep learning has made enormous leaps forward thanks to GPU hardware. But much Big Data analysis is still done with classical methods on sparse data. Tasks like click prediction, personalization, recommendation, search ranking, etc. still account for most of the revenue from commercial data analysis. The role of GPUs in that realm has been less clear. In the BIDMach project (part of the BID Data Project at UC Berkeley), we have been exploring general machine learning with GPUs. The results are remarkable: not only do we see order-of-magnitude speedups for most problems, but our system also outperforms today’s cluster computing systems running up to several hundred nodes on typical workloads. As well as the incentives to adopt GPU technology for deep learning tasks, there is now a strong incentive for organizations to migrate to GPUs for the remainder of their analytics workload.

The article includes a good discussion of ‘Roofline Design’ and its application to system architecture, along with some performance analysis and insight. There is also a link to the BID Data Project, which includes downloads and source code for BIDMach. BIDMach runs on 64-bit Windows, Mac OS, and Linux systems with NVIDIA GPUs.
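For context, the roofline model bounds a kernel’s attainable throughput by the lower of peak compute and memory bandwidth multiplied by arithmetic intensity (flops per byte of data moved). Here is a minimal sketch of that bound; the hardware numbers are illustrative assumptions, not figures from the article or from any particular GPU:

```python
# Minimal sketch of the roofline model bound.
# All hardware numbers below are hypothetical placeholders.

def roofline_gflops(peak_gflops, bandwidth_gb_s, intensity_flops_per_byte):
    """Attainable throughput (GFLOP/s) under the roofline model:
    capped by peak compute or by memory bandwidth times arithmetic
    intensity, whichever is lower."""
    return min(peak_gflops, bandwidth_gb_s * intensity_flops_per_byte)

# Example: a hypothetical GPU with 4000 GFLOP/s peak compute and
# 300 GB/s memory bandwidth. A sparse kernel at ~0.5 flops/byte is
# bandwidth-bound (150 GFLOP/s); a dense kernel at ~50 flops/byte
# hits the compute ceiling (4000 GFLOP/s).
for intensity in (0.5, 50.0):
    print(intensity, roofline_gflops(4000.0, 300.0, intensity))
```

This is why the article emphasizes roofline design: sparse analytics workloads tend to sit on the bandwidth-limited side of the roofline, so getting close to the memory-bandwidth ceiling, rather than peak flops, is what matters.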

Good stuff by people who know what they’re doing.

