Developing on NVIDIA® Jetson™ for AI on the Edge

Install ROS on Jetson Nano

For robotics applications, many consider the Robot Operating System (ROS) the default, go-to solution. The version of ROS that runs on the NVIDIA Jetson Nano Developer Kit is ROS Melodic. Installing ROS on the Jetson Nano is simple. Looky here:


ROS was originally developed at Stanford University as a platform to integrate methods drawn from all areas of artificial intelligence, including machine learning, vision, navigation, planning, reasoning, and speech/natural language processing.

From 2008 until 2013, development on ROS was performed primarily at the robotics research company Willow Garage, which open-sourced the code. During that time, researchers at over 20 different institutions collaborated with Willow Garage and contributed to the code base. In 2013, ROS stewardship transitioned to the Open Source Robotics Foundation.

From the ROS website:

The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.

Why? Because creating truly robust, general-purpose robot software is hard. From the robot’s perspective, problems that seem trivial to humans often vary wildly between instances of tasks and environments. Dealing with these variations is so hard that no single individual, laboratory, or institution can hope to do it on their own.

Core Components

At the lowest level, ROS offers a message-passing interface that provides inter-process communication. Like most message-passing systems, ROS has a publish/subscribe mechanism along with request/response procedure calls. An important thing to remember about ROS, and one of the reasons that it is so powerful, is that you can run the system on a heterogeneous group of computers. This allows you to distribute tasks across different systems easily.
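Once ROS is installed, the publish/subscribe mechanism is easy to try from the command line. As a sketch (the topic name /chatter is arbitrary, and a roscore must already be running in another terminal):

$ rostopic pub /chatter std_msgs/String "data: 'hello'" -r 1
$ rostopic echo /chatter

Run the two commands in separate terminals; the second should print the string message once per second.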

For example, you may want the Jetson running as the main node, controlling other processors as control subsystems. A concrete example is to have the Jetson doing a high-level task like path planning while instructing microcontrollers to perform lower-level tasks, like controlling the motors that drive the robot to a goal.
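To sketch the distributed setup described above, each machine points at the same ROS master via environment variables. The IP addresses below are placeholders for your own network:

On the Jetson (which runs roscore):

$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.10

On a second computer running ROS nodes:

$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.20

With those set, nodes on either machine can publish and subscribe to the same topics.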

At a higher level, ROS provides facilities and tools for a Robot Description Language, diagnostics, pose estimation, localization, navigation and visualization.

You can read more about the Core Components here.


The installROS repository on the JetsonHacksNano account on GitHub contains convenience scripts to help us install ROS. The main script is a straightforward implementation of the install instructions taken from the ROS Wiki. The instructions install ROS Melodic on the Jetson Nano.

You can clone the repository onto the Jetson:

$ git clone https://github.com/JetsonHacksNano/installROS.git
$ cd installROS

The install script will install the prerequisites and ROS packages you specify. Usage:

Usage: ./  [[-p package] | [-h]]
 -p | --package <packagename>  ROS package to install
                               Multiple Usage allowed
                               The first package should be a base package. One of the following:

The default is ros-melodic-ros-base if you do not specify any packages. Typically, people will install ros-base if they are not running any desktop applications on the robot.

Example Usage:

$ ./ -p ros-melodic-desktop -p ros-melodic-rgbd-launch

This script installs a baseline ROS environment. There are several tasks:

  • Enables the universe, multiverse, and restricted repositories
  • Adds the ROS sources list
  • Sets the needed keys
  • Loads the specified ROS packages (defaults to ros-melodic-ros-base if none are specified)
  • Initializes rosdep
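For reference, the commands the script automates look something like the following, taken from the ROS Melodic install instructions (shown here as a sketch; the script itself is the authoritative version):

$ sudo apt-add-repository universe
$ sudo apt-add-repository multiverse
$ sudo apt-add-repository restricted
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
$ sudo apt update
$ sudo apt install ros-melodic-ros-base
$ sudo rosdep init
$ rosdep update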

You can edit the install script to add the ROS packages for your application. A second script builds a Catkin workspace:


$ ./ [optionalWorkspaceName]

where optionalWorkspaceName is the name and path of the workspace to be used. The default workspace name is catkin_ws. If a path is not specified, the default path is the current home directory. This script also sets up some ROS environment variables.
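The argument handling described above can be sketched in shell. This is a hypothetical reconstruction, not the script's actual code; the variable names are mine:

```shell
#!/bin/bash
# Hypothetical sketch: resolve the Catkin workspace directory from an
# optional first argument, defaulting the name to catkin_ws and the
# location to the home directory.
WS_NAME="${1:-catkin_ws}"          # default workspace name
case "$WS_NAME" in
  */*) WS_DIR="$WS_NAME" ;;        # a path was supplied; use it as given
  *)   WS_DIR="$HOME/$WS_NAME" ;;  # bare name: place it under $HOME
esac
echo "Workspace directory: $WS_DIR"
```

Called with no argument, this resolves to ~/catkin_ws, matching the defaults described above.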

The script sets placeholders for some ROS environment variables in the file ~/.bashrc

The file .bashrc is located in the home directory; the preceding period indicates that the file is “hidden”. The ROS variables that the script adds (they should be towards the bottom of the .bashrc file) are:

  • ROS_MASTER_URI
  • ROS_IP

The script sets ROS_MASTER_URI to the local host and lists the available network interfaces after the ROS_IP entry. You will need to configure these variables to match your robot's network configuration and desired network topology.


Notes

  • In the video, the Jetson Nano is freshly prepared with L4T 32.2.1 / JetPack 4.2.2.
  • In the video, the Jetson Nano is running from a micro-SD card.

10 Responses

  1. Do you think this script could be ported back to the TX2? I’m not sure of the ARM/GPU specific changes that you may have done to make this work on the Nvidia Jetson platform. I’m currently at Kinetic and really could use the upgrade.

  2. I am trying to understand the advantage of running ROS on a Jetson. Do any of the packages in ROS have algorithms optimized to take advantage of the GPUs on the various jetsons? Or are the ROS packages all just running on the CPU unless they are custom built and optimized for the Jetson?

    1. Not sure I understand the question. There are literally thousands of ROS packages, it is difficult to describe what they all do. Many developers run machine learning algorithms on the Jetson, and then interface with the results with ROS to control the robot. Some do this in a ROS node, others do it as a separate subsystem and then send ROS messages.

      1. I guess the question was too vague.

        My initial thought was how would a simple robot like the turtlebot3 benefit from replacing the RPI3 with a jetson?

        I think this leads to the question of which particular packages the turtlebot3 uses would benefit from having CUDA cores available?

        I think the biggest improvements will happen in visual perception and, as you say, deep learning.

        I guess I need to look at each package in my perception and planning stack individually and see what speed improvements can be gained by the available CUDA cores.

        1. Right. So typically someone would have a higher-level algorithm, such as machine learning inferencing, running, and then instruct the robot to take actions based on that information. For example, you may want to do reinforcement learning to follow a path or lane. This is how something like AWS DeepRacer or DIY Robocars works. A RPI3 doesn’t have the compute capacity for doing that on board. For example, DeepRacer round-trips to a server to do its processing, which is significantly slower than doing it on board.


