In Practice: USB Cameras on Jetson

The new “In Practice” series takes an in-depth look at how to work with the Jetson using developer tools and external hardware. In this article, we are working with USB cameras. Looky here:

In a previous article, “In Depth: Cameras,” we discussed how digital cameras work. Here, we will go over how to integrate a USB camera with a Jetson Developer Kit.

Introduction

The NVIDIA Jetson Developer Kits support direct-connect camera integration using two main methods. The first is the MIPI Camera Serial Interface (CSI); the MIPI Alliance is the industry group that develops technical specifications for the mobile ecosystem. On Jetsons like the Nano, this might be a sensor module like the familiar Raspberry Pi V2 camera, based on the Sony IMX219 image sensor.

The second method is a camera that connects via a USB port, such as a webcam. In this article, we discuss USB cameras.

Jetson Camera Architecture

At its core, the Jetsons use the Linux kernel module Video4Linux version 2 (V4L2). V4L2 is part of Linux and is not unique to the Jetsons. V4L2 performs a wide variety of tasks; here we concentrate on video capture from cameras.

Here is the Jetson Camera Architecture Stack (from the NVIDIA Jetson documentation):

Camera Architecture Stack

CSI Input

As mentioned previously, there are two main ways to acquire video from cameras directly connected to the Jetson. The first, through the CSI ports, uses libargus, a Jetson-specific library that connects the camera sensor to the Tegra SoC Image Signal Processor (ISP). The ISP hardware performs a wide variety of image processing tasks, such as debayering the image, managing white balance, adjusting contrast, and so on. This is non-trivial, and requires special tools and expertise to produce high quality video. For more info: NVIDIA Jetson ISP Control Description from RidgeRun.
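For contrast with the USB path discussed below, here is a minimal, hedged sketch of how applications typically reach the CSI/libargus path: through the nvarguscamerasrc GStreamer element, here handed to OpenCV. The resolution and frame rate are assumptions; match them to your sensor's modes.

import cv2

# Sketch: open a CSI camera via nvarguscamerasrc, which uses libargus and
# the ISP under the hood. Assumes JetPack's OpenCV build with GStreamer
# support, and a sensor mode that supports 1280x720 at 30 fps.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)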

USB Input

The video from a USB camera takes a different route. The USB Video Class driver (uvcvideo) is a Linux kernel module. The uvcvideo module supports extension unit (XU) controls through mappings to V4L2 controls and through a driver-specific ioctl interface. Input/Output Controls (ioctls) are the interface to the V4L2 API. For more info: The Linux USB Video Class (UVC) driver

What this means is that a USB camera device can expose device-dependent controls without uvcvideo or V4L2 having to specifically understand them. The camera hands a code for a feature, along with a range of possible values, to uvcvideo. When the user specifies a feature value for the camera, uvcvideo checks the bounds of the argument and passes it to the camera along with the feature code. The camera then adjusts itself accordingly.
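In practice, user-space libraries hide the raw ioctl calls. As a hedged illustration (assuming OpenCV's V4L2 backend and a camera at device index 0), setting a property such as brightness travels exactly this route: the library issues a V4L2 control ioctl, uvcvideo bounds-checks the value, and the camera applies it.

import cv2

# Hedged sketch: query and set a UVC control through OpenCV's V4L2 backend.
# The valid brightness range is camera-dependent; 128 is just an example.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
print("brightness:", cap.get(cv2.CAP_PROP_BRIGHTNESS))
cap.set(cv2.CAP_PROP_BRIGHTNESS, 128)
cap.release()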

Because a USB camera does not interface directly with the ISP, there is a more direct path to user space. Here’s a diagram:

The term user space refers to the environment where user applications run and access the cameras.

Outside the USB Box

This is one path to get USB camera video into the Jetson. Not all camera manufacturers provide this type of implementation. Most “plug and play” webcams do, as do many mature “specialty” cameras. Here, let's define specialty cameras as cameras with special or multiple sensors. Many thermal cameras provide this capability, as do depth cameras like the Stereolabs ZED and Intel RealSense cameras.

However, some specialty cameras may expose only a limited feature set through V4L2, or may rely on a standalone SDK. After all, it is simpler for a manufacturer to ship a standalone SDK that reads directly from a given USB port than to go through the kernel interface. Users then interface with the camera through the SDK. We are not discussing those types of cameras here.

Camera Capabilities

Camera capabilities describe many aspects of how a camera operates, including pixel formats, available frame sizes, and frame rates (frames per second). You can also examine parameters that control camera operation, such as brightness, saturation, contrast, and so on.

There is a command line tool named v4l2-ctl which you can use to examine camera capabilities. As shown in the video, install it from the v4l-utils Debian package:

$ sudo apt install v4l-utils

Here are some useful commands:

# List attached devices
$ v4l2-ctl --list-devices
# List all info about a given device, where X is the device number
$ v4l2-ctl --all -d /dev/videoX
# For example:
$ v4l2-ctl --all -d /dev/video0
# List the camera's pixel formats, image sizes, and frame rates
$ v4l2-ctl --list-formats-ext -d /dev/videoX
# List the camera's controls (brightness, contrast, etc.) and their ranges
$ v4l2-ctl --list-ctrls -d /dev/videoX

There are many more commands; use v4l2-ctl --help to get more information.

The video demonstrates a GUI wrapper around v4l2-ctl. The code is available in the JetsonHacks GitHub repository camera-caps. Please read the installation directions in the README file.

Interfacing with Your Application

In the Jetson Camera Architecture, a V4L2 camera stream is available to an application in two different ways. The first is to use the V4L2 interface directly via ioctl calls, or through a library that has a V4L2 backend. The second is to use GStreamer, a media processing framework.

ioctl

The most popular library for interfacing with V4L2 cameras via ioctl on the Jetson is the GitHub repository dusty-nv/jetson-utils. Dustin Franklin from NVIDIA maintains C/C++ Linux utilities for cameras, codecs, GStreamer, CUDA, and OpenCV in the repository. Check out the ‘camera‘ folders for samples on interfacing with V4L2 cameras via ioctl.
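Recent releases of jetson-utils also ship Python bindings. Here is a hedged sketch of a capture-and-display loop, assuming the videoSource/videoOutput API (older releases used the jetson.utils module name):

from jetson_utils import videoSource, videoOutput

# Hedged sketch: stream a V4L2 USB camera to a desktop window.
camera = videoSource("/dev/video0")    # V4L2 USB camera
display = videoOutput("display://0")   # render to the desktop

while display.IsStreaming() and camera.IsStreaming():
    img = camera.Capture()
    if img is None:    # capture timed out; try again
        continue
    display.Render(img)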

GStreamer

GStreamer is a major part of the Jetson Camera Architecture. The GStreamer architecture is extensible. NVIDIA implements DeepStream elements which, when added to a GStreamer pipeline, provide deep learning analysis of a video stream. NVIDIA calls this Intelligent Video Analytics.

GStreamer includes tools that allow it to run as a standalone application. There are also libraries that allow GStreamer to be part of an application; it can be integrated in several ways, for example through OpenCV. The popular Python library NVIDIA-AI-IOT/jetcam on GitHub uses OpenCV to help manage the camera and display.
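As a hedged sketch of that OpenCV integration (assuming JetPack's OpenCV build, which includes the GStreamer backend, and a camera at /dev/video0 that supports 640x480 at 30 fps):

import cv2

# Hedged sketch: hand a GStreamer pipeline string to OpenCV. The caps are
# assumptions; use v4l2-ctl --list-formats-ext to see what your camera offers.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw, width=640, height=480, framerate=30/1 ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

The same pipeline, ended with a display sink instead of appsink, can be tested standalone with the gst-launch-1.0 tool before embedding it in an application.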

OpenCV

OpenCV is a popular framework for building computer vision applications. Many people use OpenCV to handle camera input and display the video in a window; this can be done in just a few lines of code. OpenCV is also flexible. In the default Jetson distribution, OpenCV can use either the V4L2 (ioctl) interface or a GStreamer interface. OpenCV can also use GTK+ or Qt for graphics display.
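As a hedged example of those “few lines of code” (device index 0 is an assumption; match it to your /dev/videoX node):

import cv2

# Hedged sketch: capture from a USB camera with the V4L2 backend and show
# the frames in a window until 'q' is pressed.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("USB Camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()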

The simple camera applications shown in the video utilize OpenCV. The examples are in the JetsonHacks GitHub repository USB-Camera: usb-camera-simple.py uses V4L2 camera input, while usb-camera-gst.py uses GStreamer for its input interface.

One note here: if you are using a different version of OpenCV than the default Jetson distribution, you will need to make sure that the appropriate libraries, such as V4L2 and GStreamer, are linked in.

Looking around Github, you should be able to find other Jetson camera helpers. It’s always fun to take a look around and see what others are doing!

USB Notes

As mentioned in the video, the USB bandwidth on the Jetson may not match your expectations. For example, the Jetson Nano and Xavier NX each have four USB 3 ports. However, the ports are connected to a single internal hub, which means they share the hub's bandwidth; you get less total throughput than four independent SuperSpeed ports would suggest.
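One hedged way to see what each device actually negotiated is to read the sysfs speed files (a standard Linux interface, not Jetson-specific):

import glob
import os

# Hedged sketch: print each USB device's negotiated link speed in Mb/s
# (e.g. 480 for high speed, 5000 for SuperSpeed).
for path in sorted(glob.glob("/sys/bus/usb/devices/*/speed")):
    device = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        print(device, f.read().strip(), "Mb/s")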

Also, in order to conserve power, the Jetson implements what is called ‘USB autosuspend‘. This powers down a USB port when it is not in use. Most USB cameras handle this correctly and do not let the USB port autosuspend. However, occasionally you may encounter a camera that does not.

You will usually encounter this behavior when everything appears to be working correctly for a little while, and then just stops. It’s a little baffling if you don’t know what’s going on.

Fortunately, you can turn off USB autosuspend. The procedure differs across OS releases and Jetson models; you should be able to search the Jetson forums for how to control this for your particular configuration.
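As one common approach (a hedged sketch; the exact mechanism varies across L4T releases, so verify it for your configuration), the usbcore module parameter can be set so newly attached devices never autosuspend:

# Hedged sketch: disable USB autosuspend globally. Run as root; affects
# devices attached after the change. Add the setting to the kernel command
# line or a udev rule to make it persistent.
with open("/sys/module/usbcore/parameters/autosuspend", "w") as f:
    f.write("-1")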

Conclusion

Hopefully this is enough information to get you started. Camera handling is one of the most important areas of Jetson development, and this background should get you asking the right questions for more advanced work.


12 Responses

  1. Hello, I really appreciate the content and guidance you provide, but there are a few points I have not been able to catch up on.
    I have an NVIDIA Jetson Xavier NX, which I bought about a year ago.
    Now I am trying to build some projects with it. I bought a few CSI cameras (Raspberry Pi Camera v2s) and a few PS3 Eye cameras after having seen your results in the past.
    For the CSI cameras, the ribbon cables are hard to manage, especially considering I am making a robot and a pair of cameras will be its eyes. I will use the cameras so it can understand the environment, make its calculations and measurements, and move accordingly.
    The PS3 Eye cameras are perfect with their relatively long USB cables, but the “patching the kernel” steps are not so clear. I could not use your resource since my Xavier has a different kernel version. Is there a way for you to provide a more general tutorial on this? Or could you please guide me? I need to use my cameras either with or without GStreamer, but definitely with OpenCV plus some more libraries.

      1. In fact this was not so hard, and I managed it by combining some info you have provided here and there. Your three blog entries since 2015, plus your script to pull the right kernel version, ended up producing a correctly patched kernel for the PS3 Eye. It is up and working. I will sum it up in my blog once I can find time.
        Appreciated.

  2. Hello, I’m new to the Jetson Nano and these videos have helped me a lot in getting up and running. In this tutorial I faced just one issue when using the GUI to display different video formats and sizes: when I choose the YUYV format everything runs perfectly, but when I choose MJPEG at any size the camera turns on, the video stream shows a black window, and then everything freezes.
    I’m using an A4Tech PK-910H FHD camera that supports both YUYV and MJPEG video formats. Could you please guide me on what could be the issue?

    1. From your description, it’s hard to tell about what command line the program is attempting to run. Can you use gst-launch on the command line to launch the camera using MJPEG?

  3. I’ve tried using gst-launch to launch the camera from the terminal; the camera turns on but the video stream window does not open. It works without any issue when I launch the camera using the YUYV format.

  4. Jim, Stereolabs has put more features in their latest SDKs and I wonder which Nvidia hardware is required to properly run the ZED cameras? I’m thinking of a robot with ROS2 and the latest ZED camera. It even integrates with GPS. Have you worked with this setup? Thanks for your great articles and videos!

    1. Thank you for the kind words. I haven’t worked with the new ZED camera SDK yet. If you are buying a new Jetson for your project, the choice is pretty simple, if a little pricey.
      There are a couple of offerings in the Jetson lineup. The Jetson Nano may be able to run the camera, but it is not a good solution going forward with ROS2. The Nano will stay on Ubuntu 18.04, which is not a happy camper with ROS2.
      The Jetson Orins are the way forward. It depends on how big your robot is, along with power and budget. Nasty engineering things. An 8GB Jetson Orin Nano Developer Kit would be the choice if you need light weight and have a smaller budget. If you have the room on the robot and the money, the 64GB AGX Orin Developer Kit will support just about any mobile robot in the ecosystem. More memory, more capabilities. The AGX Orin also provides all of the hardware accelerators, like the DLA and such.
      Third parties offer carrier boards with different capabilities, depending on your interface needs. There’s also the Orin NX SoC, which sits in the middle of the Orin lineup. It’s generally more capable than the Orin Nano, has more memory, and uses the same module footprint. However, it is not available in an NVIDIA Developer Kit at this time. Hope this helps, and thanks for reading!

  5. I am looking to use a Jetson Orin NX interfacing to a single 13 MP color camera streaming at 30 fps, so I can pipe UHD 4K2K 30 fps out of the HDMI 2.1 (or DisplayPort) port.

    I could bring this video stream into either the MIPI (via a GMSL Ser-Des) or USB 3 Gen 2 Port, as RAW 10 Bit Bayer or YUV422_8 pixel formats to stay within my cable bandwidth requirements.

    I want to debayer and do image processing on the debayered green channel to control local cropping for digital stabilization of an 8MP live video output image to HDMI. Then I want to store the live video and stream it out under H.265.

    My question is, to do all of this processing, which video port entry makes the most sense for me to use?

