JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

Jetson Nano B01 – Dual Raspberry Pi Cameras

The new B01 revision of the NVIDIA Jetson Nano Development Kit adds another MIPI-CSI camera connector. Looky here:

Background

We previously wrote up how to use the Raspberry Pi Version 2.1 camera with the original Jetson Nano A02 kit. Note that several other manufacturers now offer compatible cameras, some with interchangeable lenses.

Note: The V1 Raspberry Pi Camera Module is not compatible with the default Jetson Nano Install. The driver for the imaging element is not included in the base kernel modules.

Installation

Installation of the camera is the same as on the earlier development kit and on the Raspberry Pi. You’ll need a couple of RPi cameras, the new Jetson Nano B01, and so on:

Jetson Nano B01: On the NVIDIA Store

RPi Camera: https://amzn.to/2HVQxIl

Adafruit 5 volt, 4 Amp power supply: https://amzn.to/2Uvd6uc

Installation is simple. On a camera connector, lift up the plastic camera cable retainer which holds the ribbon cable in place. Be gentle; the retainer is fragile. You should be able to pry it up with a finger/fingernail or a small screwdriver. Once it is loose, insert the camera ribbon cable with the contacts on the cable facing inwards towards the Nano module. Make sure that the ribbon cable seats all the way into the connector. The tape side of the cable should face towards the outer edge of the board. Then press down on the plastic tab to capture the ribbon cable, applying even pressure on both sides of the retainer. Some pics (natch):

Once you have the cameras installed, you can easily test them. The sensor_id parameter of the GStreamer nvarguscamerasrc element selects which camera is used:

$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink
and:
$ gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! nvoverlaysink

The examples in the CSI-Camera repository have been extended to support the extra parameter. Also, as shown in the video, we’ve added some instrumented examples so that we can get the elapsed time for executing blocks of code, and print the number of frames per second being both read from the camera and displayed on the screen.
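
If you would rather grab the camera from OpenCV than from gst-launch, the approach is the same: build a nvarguscamerasrc pipeline string with the desired sensor_id and hand it to cv2.VideoCapture. Here is a minimal sketch of that idea; the helper name, defaults, and exact pipeline in the CSI-Camera samples may differ:

# Minimal sketch: read one CSI camera into OpenCV through a GStreamer pipeline.
# The helper name and defaults here are illustrative, not the repository's exact code.
import cv2

def gstreamer_pipeline(sensor_id=0, width=1280, height=720, framerate=30, flip_method=0):
    return (
        "nvarguscamerasrc sensor_id=%d ! "
        "video/x-raw(memory:NVMM), width=%d, height=%d, framerate=%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
        % (sensor_id, width, height, framerate, flip_method)
    )

cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("CSI Camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()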

Software

On the JetsonHacksNano account on GitHub, there is a repository named CSI-Camera. It contains a couple of simple ‘read the camera and show it in a window’ code samples using OpenCV, one written in Python and the other in C++. You can check out the v3.1 tag to match the video, though later releases are bound to change things.

$ git clone https://github.com/JetsonHacksNano/CSI-Camera.git
$ cd CSI-Camera
$ git checkout v3.1

For the Python examples you may need to install numpy. You can install it using pip:

$ sudo apt-get install python3-pip

$ pip3 install cython

$ pip3 install numpy
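
With the dependencies in place, you can run the samples from inside the repository directory. Assuming the file names from the video (the basic Python sample is simple_camera.py), that looks like:

$ python3 simple_camera.py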

The third demo is more interesting. The ‘face_detect.py’ script runs a Haar Cascade classifier to detect faces on the camera stream. You can read the article Face Detection using Haar Cascades to learn the nitty gritty. The example in the CSI-Camera repository is a straightforward implementation from that article. This is one of the earlier examples of mainstream machine learning.
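
For reference, the heart of that approach looks roughly like the sketch below; it reuses the gstreamer_pipeline helper sketched earlier. The Haar Cascade file path shown is the usual OpenCV 4 install location and may be different on your system, and the repository’s face_detect.py handles the details a bit differently:

# Rough sketch of Haar Cascade face detection on the CSI camera stream.
# Assumes the gstreamer_pipeline() helper from the earlier sketch; the cascade
# path is the typical OpenCV 4 location and may differ on your install.
import cv2

face_cascade = cv2.CascadeClassifier(
    "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Face Detect", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()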

Mo’ Better Frame Rates

In the ‘instrumented’ folder of CSI-Camera, there are several different works in progress and tools for benchmarking performance. These are written in Python. One of these tools is a very simple-minded time profiler which allows you to examine the elapsed time for executing a block of code. It is written as a Python class in the file timecontext.py.
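
As a rough idea of what such a profiler can look like, here is a hedged sketch of a timing context manager. The class and attribute names are illustrative; see timecontext.py in the repository for the real thing:

# Illustrative timing context manager, similar in spirit to timecontext.py.
import time

class TimeContext:
    """Measure the wall-clock time spent inside a with-block."""
    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.monotonic() - self.start
        return False  # do not suppress exceptions

# Usage: wrap any block of code you want to time.
with TimeContext() as timer:
    total = sum(i * i for i in range(1_000_000))
print("Elapsed: %.3f seconds" % timer.elapsed)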

You can play with the samples, like we did in the video. Of note is the CSI_Camera class, which allows camera capture to happen in a separate thread. On a multi-core CPU like the one on the Jetson, that thread will typically run on a different core, provided another core is available. This is all handled transparently by the operating system.

Dealing with the camera hardware in a separate thread provides several advantages. Typically the main thread has a loop (usually referred to as the display loop) which gathers the frame from the camera, processes the frame, displays the frame in a window, and then yields for a short amount of time to the operating system so that other processes can execute. These processes include things like reading the keyboard, mouse movements, background tasks and so on.

The main thread has a limited amount of time for this loop; its period is a pretty close match to the maximum expected frame rate. By reading the camera in another thread, we get the benefit of reading camera frames in a timely manner, regardless of the main loop speed. This helps with obvious frame rate mismatches, say when you are using a camera that is providing 120 fps but the display loop can only show 30.

Reading through the code, you will see that the camera reader thread saves only the latest frame. When the display loop requests a frame, the CSI_Camera instance returns the most recent frame it has received from the camera; any frames that the main loop did not consume are simply discarded.
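
In outline, a threaded camera reader of this sort looks something like the sketch below. This is a simplified illustration rather than the CSI_Camera class itself; the real class also builds the GStreamer pipeline and handles start, stop and error cases more carefully:

# Simplified sketch of a threaded camera reader; not the repository's CSI_Camera class.
import threading
import cv2

class ThreadedCamera:
    def __init__(self, pipeline):
        self.capture = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Read as fast as the camera delivers frames, keeping only the newest one.
        while self.running:
            ret, frame = self.capture.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def read(self):
        # Return the most recent frame; frames the caller never asked for are dropped.
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join()
        self.capture.release()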

Pick Your Frame Size Wisely

Processing video data is all about picking the smallest amount of data that you can get away with and still receive consistent results. Let’s say we have a frame size of 1280×720. Remember that if we ‘halve’ the size to 640×360, we have reduced the number of pixels that we need to process to 1/4 of the full frame! In other words, it takes four 640×360 frames to make up a full 1280×720 frame. In many cases, this is a reasonable tradeoff. You will notice that many machine learning algorithms also use this trick, working on what we usually think of as thumbnail-size images to represent full images.
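
A quick check of that arithmetic:

# Halving each dimension quarters the number of pixels to process.
full = 1280 * 720   # 921,600 pixels
half = 640 * 360    # 230,400 pixels
print(full // half)  # 4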

The Raspberry Pi Camera provides several different frame sizes and frame rates. Pick what best fits your needs. Also, remember that nvarguscamerasrc supports setting the frame rate the camera driver returns; you may find this useful.
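
For example, to request 1280×720 at 60 frames per second (one of the modes the V2.1 camera module advertises), add a caps filter to the test pipeline; adjust the width, height and framerate to a mode your camera actually supports:

$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=60/1' ! nvoverlaysink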

Watch the video, play with the code!

Notes

Demonstration environment:

  • Jetson Nano Developer Kit – B01
  • L4T 32.3.1 (JetPack 4.3)
  • OpenCV 4.1.1
  • Raspberry Pi Camera Module V2.1

In the video, we also use a Logitech Webcam: https://amzn.to/2HTXkCq with the Cheese application.


4 Responses

  1. Dear Jim,

    I am working on making a mini robot and I haven’t solved the power supply problem yet. Please give me some solutions!
    My bot includes these electronic devices, with the recommended power for stable operation:

    – Nvidia Jetson Nano: 5V-4A.
    – Camera Intel Realsense D435(USB connected to Nano).
    – Arduino Mega 2560: 5-7V.
    – 2 stepper motors 28BYJ-48: 5-12V.
    – Driver control motors ULN2003: 5-12V(I’m using 12V for this driver).
    – IMU MPU9250(USB connected to Arduino).
    – SLAM method: RTAB-Map_ROS package.
    – Power supply: Lipo 11.1V-2200mAh(Buck converter XL4015 used to down voltage from 11.1V to 5V-5A for Jetson Nano).

    I plan to make this mini robot (small size) using only a LiPo power supply, specifically 11.1V-2200mAh. I tried to build this system, but it seems I can not solve the power supply yet; the Jetson Nano shuts down right away if I try running some tools (e.g. realsense-viewer), so I suppose the way I distributed power in my system went wrong. It’s more complicated than I thought. This is my very first time making a mobile robot, and I don’t have much experience with power management.
    So can anybody help me out with this problem? Would an 11.1V-2200mAh power supply be enough, or do I have to use more?
    Hope you can give me some advice! Any help would be greatly appreciated!

    Many thanks,
    Dat.

    1. I do not have a good answer for this question, as it can not be answered in a short space here. Typically you need to understand the power requirements of each component. You should also understand how each component uses power. In most cases where computer components share a power supply with a motor, people will place some type of filter in the circuit to isolate the EMF noise from the motor. This “noise” causes issues with the computer. You can look at the NVIDIA JetBot and Kaya robots to get some ideas on how to supply power. You can also ask on the official NVIDIA Jetson Nano forum, where a large number of developers and NVIDIA engineers share their experience. Good luck on your project!

  2. Hello sir,
    I want to connect 4 cameras at once to the Jetson Nano so that I can get a live video feed from all of them on a single screen. How can I connect 4 video cameras at once to the Jetson Nano? Please advise.

Disclaimer

Some links here are affiliate links. If you purchase through these links I will receive a small commission at no additional cost to you. As an Amazon Associate, I earn from qualifying purchases.
