The research paper An Empirical Evaluation of Deep Learning on Highway Driving was recently published by researchers from Stanford University, Twitter, Texas Instruments and Baidu. One of the authors is Andrew Y. Ng, who gave one of the keynotes at GTC 2015.
Here’s the abstract:
Abstract—Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.
The paper explains in depth how the network was trained, the techniques used, and the challenges faced while building the system on an Infiniti Q50 platform.
The results are promising:
“The detection network used is capable of running at 44Hz using a desktop PC equipped with a GTX 780 Ti. When using a mobile GPU, such as the Tegra K1, we were capable of running the network at 2.5Hz, and would expect the system to run at 5Hz using the Nvidia PX1 chipset.”
Running on the current Jetson automotive dev kit, they were able to get 2.5Hz for detection. While they project 5Hz on the next-generation NVIDIA PX1, my guess is that they will see closer to 10Hz without too much work.
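For readers curious how numbers like 44Hz or 2.5Hz are typically arrived at, the sketch below shows one common way to measure detection throughput: warm up the network, time repeated forward passes, and divide frames processed by elapsed seconds. This is my own illustration, not code from the paper; `run_detection` is a hypothetical placeholder for the authors' CNN detection pipeline.

```python
import time

def run_detection(frame):
    # Hypothetical stand-in for the paper's lane/vehicle detection CNN.
    # In a real benchmark this would run the forward pass plus any
    # post-processing on a single camera frame.
    pass

def measure_hz(frames, warmup=10):
    """Time repeated detections and return average frames per second (Hz)."""
    # Warm-up passes let GPU clocks and caches settle before timing.
    for frame in frames[:warmup]:
        run_detection(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        run_detection(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

if __name__ == "__main__":
    dummy_frames = [None] * 110  # stand-in for captured video frames
    print(f"Throughput: {measure_hz(dummy_frames):.1f} Hz")
```

With a real network plugged in, the same harness would report the kind of platform-dependent throughput quoted above (desktop GPU vs. Tegra K1).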
A good read.