Using Jetson Docker Containers (we call them jetson-containers) is one of the best ways to leverage the many Jetson tools for development. Looky here:
One of the most challenging issues in developing on Jetson is the sheer number of libraries and intermingled dependencies. This is especially true when using the ever-advancing deep learning environments.
Generally, each JetPack release bundles together specific versions of CUDA and the related libraries, and everything built on top needs to match those versions. Because development is happening so fast in the machine learning world, it is difficult to match up the different JetPack elements for a given machine learning task. Each package seems to have its own little niggles to get working properly for each release.
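As a concrete example of that version matching, the Jetson Linux (L4T) release that ships with a given JetPack version is recorded in /etc/nv_tegra_release on the device, and container tags generally need to line up with it. A quick check, with a fallback for machines that aren't Jetsons:

```shell
# Print the Jetson Linux (L4T) release string; container tags such as r35.4.1
# need to match it. Falls back gracefully on machines that aren't Jetsons.
if [ -f /etc/nv_tegra_release ]; then
    release=$(head -n 1 /etc/nv_tegra_release)
else
    release="not a Jetson: /etc/nv_tegra_release not found"
fi
echo "$release"
```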
On top of that, the requirements and underlying substrate change so quickly it can be difficult to build a stable application. In a small development group this is hard. Not only do you have to build your own application, you have to keep the rest of the foundation in a stable state. It’s like building in a swamp.
The Jetson Zoo
The Jetson Zoo is one of the first places that Jetson developers should visit. There are links to Docker containers for dozens of different applications. If you’re a Python developer, links to wheels for the most popular machine learning libraries can have you up and running in a hurry. There are links to TensorFlow, PyTorch, ONNX, and MXNet just to name a few. And of course the popular “Hello AI World” Two Days to a Demo tutorial is linked there too.
The Zoo be where to get your start if you’re smart.
The real gold mine is jetson-containers from the dusty-nv account on GitHub. Owner Dustin Franklin is an NVIDIA engineer who has built dozens of different Docker images that span machine learning, data science, robotics and deep learning. Best of all, the source Dockerfiles are in the repository! That makes it much easier to build your own custom containers, and to learn how to create the correct environment if you are trying to load SDKs with lots of dependencies and quirks.
Over the last few years, the number of machine learning packages has been growing at a substantial rate. While some work with just a little tinkering, others require major work to get up and running. This is especially true in early release states, when people have little experience with them.
Because many of the new packages are built on top of other base packages, it can be a bit of a struggle getting everything to work together. And because the packages are new, they tend to go through frequent release cycles before becoming stable. This can be quite the challenge.
Add to this that JetPack is always adding new functionality to support even more innovation, and you can understand why it sometimes feels like you are stuck in a never-ending spiral. Fortunately, by building containers you can isolate your development task and decouple it from the environment of the machine that you are working on.
Here’s the thing. The jetson-containers repository holds packages that are known to work. There’s no guessing; a lot of the hard work has been done for you. Plus, the initial performance tuning is done too. It works, and performs well.
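A sketch of the basic workflow from the jetson-containers README; the l4t-pytorch package is just one example, and the autotag helper picks an image that matches your JetPack/L4T version. The commands are collected in a string here so you can look them over before running them on your Jetson:

```shell
# Basic jetson-containers workflow (names from the project README).
# Run these on the Jetson itself; install.sh sets up the helper scripts.
workflow='
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh
jetson-containers run $(autotag l4t-pytorch)
'
echo "$workflow"
```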
Containers are a feature of the Linux kernel. The operating system kernel is responsible for many low-level operating system tasks. These tasks include interfacing with hardware via drivers (modules), process management (such as launching programs and memory allocation), low-level interfacing with the file system, system calls and so on. In a Linux-based operating system, such as Ubuntu, the kernel represents about 10% of the code of the OS.
We think of the kernel as being in “kernel space” or system space. The rest of the system, including other parts of the operating system, runs in “user space,” also known as userland. You can think of a Container as the kernel launching a process in its own isolated slice of user space. It is (mostly) independent of the rest of user space – this is called sandboxing. Because containers are not “part” of the rest of the user space, they do not affect either user space happenings or other containers.
It is easy to see why containers are popular in the server community. You can have multiple servers running on one machine, each separate from one another. This results in better resource allocation in situations such as when several low volume servers are aggregated onto one physical machine.
Docker is the runtime engine on the Jetsons that manages containers. While there are several different Linux container managers, Docker is the one installed on the default Jetson image.
Docker Concepts and Commands
There are three major organizational concepts in Docker that you should know:
- Dockerfile
- Docker Image
- Docker Container
You can think of a Dockerfile as the source code that specifies the operating system, the system libraries, the application libraries, the application code and scripts, and how to arrange the environment for execution.
Using the docker build command, Docker creates a Docker Image. The Docker Image represents a template of a container process to launch. The image contains the binaries specified in the Dockerfile, as well as all of the rest of the accoutrements needed to actually run the specified application.
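Here is a minimal sketch of that flow, from Dockerfile to build command. The l4t-base tag is an example from NGC; match it to your own JetPack release:

```shell
# Write a minimal Dockerfile: start from an NVIDIA L4T base image (example
# tag; match it to your JetPack release) and layer a Python package on top.
cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/l4t-base:r35.4.1
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install numpy
CMD ["python3", "-c", "import numpy; print(numpy.__version__)"]
EOF

# Then build an image from it (run this part on the Jetson):
#   docker build -t my-numpy-image .
```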
A Docker Image is layered and immutable. You can build on top of a given image, using the image as a substrate. The Docker Image is a stand-alone entity; you can place it in a remote repository and then use docker pull to bring it to a local machine. NVIDIA’s NGC and Docker Hub are two examples of such repositories. This makes it easy for people to share their work.
Finally, using a docker run command you instantiate a process on the machine from the template description of the Docker Image. The result is called a Docker Container. This is probably the trickiest of the group for people new to the concept.
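For instance, instantiating a Container from an NGC image might look like this. The l4t-pytorch tag is an example; check NGC for one that matches your JetPack release. The command is echoed rather than executed so you can review it first:

```shell
# Construct a docker run invocation: --runtime nvidia exposes the GPU inside
# the container, -it gives an interactive terminal, and --rm removes the
# container when its main process exits.
image="nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3"   # example NGC tag
cmd="docker run -it --rm --runtime nvidia $image"
echo "$cmd"   # review, then run on the Jetson (after: docker pull $image)
```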
The Docker runtime engine is a daemon which runs as a service. It automagically runs in the background. When it launches a Docker Image, creating a Docker Container, the daemon attempts to keep the process running on the machine. Just detaching from a running container in a Terminal won’t kill the Container, and depending on its restart policy, even rebooting the machine may not. It will keep running, happily consuming memory and other resources. You have to make sure to stop the container when you don’t need it anymore. The jetson-containers run script will do this automatically for you, making it very useful for the uninitiated.
You can use docker image ls and docker container ls to list out the images and containers currently on the machine. These rascals can get pretty big, so keep an eye on them!
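A small housekeeping sketch along those lines, guarded so it exits cleanly where the Docker daemon isn't reachable:

```shell
# Audit Docker disk usage: list images and all containers (including stopped
# ones), with commented-out cleanup commands for when you spot leftovers.
if docker info >/dev/null 2>&1; then
    docker image ls                # images on this machine, with sizes
    docker container ls --all      # containers, including stopped ones
    # docker stop <name>           # stop a container you no longer need
    # docker system prune          # reclaim space from stopped containers and unused layers
    status="docker available"
else
    status="docker daemon not reachable"
fi
echo "$status"
```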
Fortunately jetson-containers is well documented. If you know some of the terms before jumping in, you won’t have much trouble getting started. In the video, we run some of the newest images available: Stable Diffusion and Llama2-GPT. They are running on an AGX Orin Dev Kit, as they require JetPack 5.X.
When you look through the Dockerfiles of these packages, you can see not only all of the code you have to build, but also all of the little workarounds you need. Fortunately there are enough experienced eyes, and the tireless Dusty, to work through the issues for you. It’s certainly a win for you!
Docker lends itself to server/micro-service types of applications. It’s not strong at building desktop GUIs. As most development has gone through a web interface in recent years, this might not be as big of a problem as it initially seems.
- The demo in the video is running on a 32GB NVIDIA AGX Orin Development Kit
- JetPack 5.1.2, L4T 35.4.1
- Stable Diffusion and Llama2 GPT