An article over on the antmicro blog discusses how to use YUV cameras with the NVIDIA Jetson TK1. The cameras connect to the CSI lanes on the J3 header.
Antmicro is working on developing Linux drivers for the following YUV sensors/chips:
- OmniVision OV5640, 5Mpx sensor, 1080p@30fps
- Analog Devices ADV7280M, analog camera decoder with deinterlacer, PAL/NTSC@60fps
- ON Semiconductor (formerly Aptina) AP1302 ISP + AR1820 image sensor, 13Mpx, 1080p@30fps
- Toshiba TC358743 HDMI to CSI-2 bridge, 1080p@60fps
The article includes a brief tutorial on how to download and compile the camera drivers currently under development. Looks like an interesting development in the Jetson community!
7 Responses
Thanks for the link! A bit off-topic, but what is YUV used for in digital technology? I went on Wikipedia for a refresher and found that “Y’UV was invented when engineers wanted color television in a black-and-white infrastructure.”
Antmicro wrote that they used them for “our and our customers’ R&D purposes”. But I’m just curious what direction of R&D that could be.
Kind regards,
Andrew
Hi Andrew,
YUV in this context means the format of the digital stream delivered by the camera. Cameras/imagers deliver a wide variety of formats, and YUV (sometimes called YCbCr) simply defines the structure of that digital data. You’ll probably hear about Bayer and RGB in a similar context, and about conversion between the different formats in terms like “Bayer conversion to YUV 4:2:0”. The main idea is that you have an image, and the image is in a given format, whether that is YUV, RGB, Bayer and so on. It’s similar to photo files stored in a compressed format like JPEG or PNG: it’s basically the same image, just stored in a particular way. The format gives you the map for dealing with the image.
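To make the “map” idea concrete, here is a small illustrative Python sketch (my own example, not from the antmicro drivers) that converts an R’G’B’ pixel to Y’CbCr using the BT.601/JFIF coefficients and packs two neighbouring pixels into the four bytes of a YUV 4:2:2 (YUYV) stream:

```python
def clamp8(v):
    """Clamp a value to the 0-255 range of an 8-bit sample."""
    return max(0, min(255, int(round(v))))

def rgb_to_ycbcr(r, g, b):
    """One 8-bit R'G'B' pixel -> full-range Y'CbCr (BT.601 / JFIF coefficients)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return clamp8(y), clamp8(cb), clamp8(cr)

def pack_yuyv(left, right):
    """Pack two neighbouring pixels into the 4 bytes of a YUV 4:2:2 (YUYV) pair.
    Each pixel keeps its own luma; the chroma samples are shared (averaged here)."""
    (y0, cb0, cr0), (y1, cb1, cr1) = rgb_to_ycbcr(*left), rgb_to_ycbcr(*right)
    return bytes([y0, (cb0 + cb1) // 2, y1, (cr0 + cr1) // 2])

# Two adjacent pixels, pure red and pure green, become 4 bytes on the wire.
print(pack_yuyv((255, 0, 0), (0, 255, 0)).hex())
```

The same two pixels would take 6 bytes in packed RGB888 but only 4 in YUYV, which is where the bandwidth savings of chroma subsampling come from.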
In most cases, the type of imager inside the camera makes data packing simpler in a given format, so that’s what manufacturers tend to deliver ‘natively’.
I believe that the intent by antmicro is to deliver a hardware platform for building vision-enabled embedded systems and applications (which is the Development part of R&D). The ‘Research’ part is implementing CUDA code for image/video processing and exploring possible approaches to vision processing and other novel tasks.
Thank you Jim (I hope I’m not mistaken),
I thought that RGB was the dominant format in camera sensors (I just looked up the datasheet for the first camera in antmicro’s list, the OV5640, and its raw format is JPEG). But then you mentioned “Bayer conversion to YUV 4:2:0” and I started to think about the subsampling, and that seems to be the reason for the existence of non-RGB color spaces: “4:2:2 Y’CbCr scheme requires two-thirds the bandwidth of (4:4:4) R’G’B’.” (https://en.wikipedia.org/wiki/Chroma_subsampling). The same article explains that the visual artifacts are smallest for a reduction in color quality, as opposed to a reduction in the luminance channel.
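A quick back-of-the-envelope check of that two-thirds figure (just an illustration, assuming 8-bit samples and a 1920×1080 frame):

```python
# Average samples per pixel (Y' + Cb + Cr) for common 8-bit Y'CbCr schemes.
samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

width, height = 1920, 1080
for scheme, spp in samples_per_pixel.items():
    mb = width * height * spp / 1e6  # one byte per 8-bit sample
    print(f"{scheme}: {mb:4.1f} MB per frame ({spp / 3:.0%} of 4:4:4)")
```

So 4:2:2 lands at exactly two-thirds of the 4:4:4 size, and 4:2:0 (what most consumer video uses) at half.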
Thanks for the link and the answer, TIL why RGB is not ruling the world (yet)!
Andrew
Hi Andrew,
Yes, there are a lot of factors in determining the best color space, and many of them are related to the physical constraints of the imager and to bandwidth restrictions. A JPEG format may be used because it’s cheaper to put an encoder on board than to have a fatter pipe to transfer the bits. Or there’s a limitation on the actual bandwidth itself, like a USB 2.0 interface. It’s a whole discipline in and of itself.
I’m sure you’re familiar with the idea of a small imager vs. a large imager, i.e. the physical size of the light-sensing device itself. As the marketing race to more megapixels heated up, more pixel elements were packed onto each die, but the size of the actual light-sensing elements decreased. You’ll see that in DSLR-type cameras all the time, where devices with fewer actual pixels have significantly better picture quality because larger sensor elements can gather more light.
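To put rough numbers on the pixel-size point (illustrative sensor dimensions, not taken from any particular datasheet):

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometres, assuming square pixels covering the die."""
    area_um2 = (sensor_w_mm * 1e3) * (sensor_h_mm * 1e3)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# A 24 Mpx full-frame DSLR sensor vs. a 13 Mpx sensor in a small 1/3"-class package.
print(f"Full frame, 36 x 24 mm, 24 Mpx: ~{pixel_pitch_um(36, 24, 24):.1f} um pitch")
print(f"1/3\"-class, 4.8 x 3.6 mm, 13 Mpx: ~{pixel_pitch_um(4.8, 3.6, 13):.1f} um pitch")
```

Roughly 6 µm versus about 1.2 µm of pitch, so each of the larger photosites has on the order of 25 times the light-gathering area.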
Another thing to take into account is how fast the images have to be acquired. When you have a stream of 4K video running at 60fps, the requirements are obviously a lot different than at 1080p@30fps. So there are all sorts of tradeoffs the engineers make (especially on inexpensive cameras) to get the best image/performance possible at a given price point. That’s where you can imagine subsampling and luminance ‘cheats’ for better overall image quality or low-light performance. Typically imaging is easy when there is a bunch of light about; a lot of tradeoffs get made as things get darker to minimize the noise in the image or to deal with moiré effects on small, high-resolution imagers.
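To put numbers on that gap, here is a rough sketch of the uncompressed data rates (8-bit 4:2:2, illustrative only):

```python
def raw_rate_mb_s(width, height, fps, bytes_per_pixel=2):
    """Uncompressed data rate in MB/s; 2 bytes/pixel corresponds to 8-bit YUV 4:2:2."""
    return width * height * bytes_per_pixel * fps / 1e6

print(f"1080p @ 30 fps: ~{raw_rate_mb_s(1920, 1080, 30):4.0f} MB/s")
print(f"4K    @ 60 fps: ~{raw_rate_mb_s(3840, 2160, 60):4.0f} MB/s")
```

That is roughly 124 MB/s versus close to 1 GB/s of raw pixels, which is also why a USB 2.0 camera (practical throughput on the order of 35 MB/s) tends to ship JPEG/MJPEG rather than uncompressed frames.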
Lots of things to take into account. In the broad sense, the actual encoding of the image is just moving some bits around in the end, so the hardware guys don’t get real excited about that; they’re just happy to get the bits out in the first place.
Hi kangalow,
I am trying to integrate the ADV7280M with the Jetson TK1.
I use the driver from https://github.com/antmicro/linux-tk1.
But I always get a kernel panic when I start streaming.
Can you help me?
Below is the log:
[ 1018.375127] kernel BUG at /home/heaven/Work/TK1/Linux_for_Tegra_tk1/sources/yuv_kernel/linux-tk1/drivers/platform/tegra/hier_ictlr/hier_ictlr.c:54!
Hi heavenward,
Unfortunately I do not have that camera. You should contact Antmicro for help, plus file an issue report on their GitHub repository.
Hi, please help if you were able to get a stream on the Jetson CSI from the ADV7280M.