A great article over at the NVIDIA Parallel FORALL blog: BIDMach: Machine Learning at the Limit with GPUs, by John Canny of UC Berkeley.
Introduction:
Deep learning has made enormous leaps forward thanks to GPU hardware. But much Big Data analysis is still done with classical methods on sparse data. Tasks like click prediction, personalization, recommendation, search ranking, etc. still account for most of the revenue from commercial data analysis. The role of GPUs in that realm has been less clear. In the BIDMach project (part of the BID Data Project at UC Berkeley), we have been exploring general machine learning with GPUs. The results are remarkable: not only do we see order-of-magnitude speedups for most problems, but our system also outperforms today’s cluster computing systems running up to several hundred nodes on typical workloads. As well as the incentives to adopt GPU technology for deep learning tasks, there is now a strong incentive for organizations to migrate to GPUs for the remainder of their analytics workload.
The article includes a good discussion of ‘Roofline Design’ and its application to system architecture, along with some performance analysis and insight. There is also a link to the BID Data Project page, which includes downloads and source code for BIDMach. BIDMach runs on 64-bit Windows, Mac OS, and Linux systems with NVIDIA GPUs.
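For readers unfamiliar with the idea, the roofline model bounds a kernel’s attainable throughput by the lesser of the machine’s peak compute rate and its memory bandwidth multiplied by the kernel’s arithmetic intensity (FLOPs per byte moved). Here is a minimal sketch in Python; the hardware numbers are illustrative assumptions of mine, not figures from the article:

# Minimal sketch of the roofline model: attainable throughput is the
# lesser of peak compute and (memory bandwidth x arithmetic intensity).
# All numbers below are illustrative assumptions, not measured values.

def roofline_gflops(peak_gflops, peak_bw_gbps, flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, peak_bw_gbps * flops_per_byte)

# A sparse kernel at ~0.25 FLOP/byte on a hypothetical 250 GB/s GPU is
# bandwidth-bound at ~62 GFLOP/s, far below the 4000 GFLOP/s compute peak.
print(roofline_gflops(peak_gflops=4000.0, peak_bw_gbps=250.0, flops_per_byte=0.25))

This is why the article focuses on memory-bound sparse workloads: for kernels with low arithmetic intensity, bandwidth, not raw FLOPs, sets the ceiling.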
Good stuff by people who know what they’re doing.
2 Responses
Where can I download BIDMach? I’ve tried the NVIDIA site and got nothing.
https://github.com/BIDData/BIDMach
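(For reference: the source is hosted on GitHub rather than on the NVIDIA site, so it can be fetched with git clone https://github.com/BIDData/BIDMach and built from there.)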