Developing on NVIDIA® Jetson™ for AI on the Edge

TensorFlow on NVIDIA Jetson TX1 Development Kit

Building TensorFlow on the NVIDIA Jetson TX1 is a little more complicated than some of the installations we have done in the past. Looky here:


TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX1 ships with TensorRT, NVIDIA's inference runtime. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models elsewhere, which are then transferred over and “run” on the Jetson.

However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. First, TensorFlow binaries aren’t generally available for ARM-based processors like the Tegra TX1. Second, actually compiling TensorFlow takes more system resources than are normally available on the Jetson TX1. Third, TensorFlow itself is rapidly changing (it’s only a year old), and the experience has been a little like building on quicksand.

In this article, we’ll go over the steps to build TensorFlow r0.11 on the Jetson TX1. The build takes about three hours.

Note: You may want to read through this article and then read the secret article: Install TensorFlow on TX1. Just a thought. But you didn’t hear it from me.


Note: Jan. 17, 2017 – Some issues have been addressed as the installation has changed over the last few weeks. Following the instructions in this article incorporates the changes. You can read about the changes here: Building TensorFlow Update

This article assumes that JetPack 2.3.1 is used to flash the Jetson TX1. Install:

  • L4T 24.2.1, an Ubuntu 16.04 64-bit variant (aarch64)
  • CUDA 8.0
  • cuDNN 5.1.5

Note that the library locations when installed by JetPack may not match a manual installation. TensorFlow will use CUDA and cuDNN in this build.

In order to get TensorFlow to compile on the Jetson TX1, a swap file is needed for virtual memory. Also, a good amount of disk space (> 5.5 GB) is needed to actually build the program. If you’re unfamiliar with how to set the Jetson TX1 up like that, see a previous article: Jetson TX1 Swap File and Development Preparation.
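As a quick way to confirm there is enough headroom before kicking off a multi-hour build, here is a small sketch using only the Python standard library. The 5.5 GB threshold comes from this article; the function name and defaults are my own.

```python
import shutil

# Minimum free disk space the article suggests for the build.
MIN_FREE_GB = 5.5

def enough_disk(path="/", min_free_gb=MIN_FREE_GB):
    """Return True if `path` has at least `min_free_gb` gigabytes free."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gb >= min_free_gb

if __name__ == "__main__":
    print("Enough disk for the build:", enough_disk())
```

This only checks disk space; swap setup itself is covered in the article linked above.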

There is a repository on the JetsonHacks account on GitHub named installTensorFlowTX1. Clone the repository and switch over to that directory.

$ git clone
$ cd installTensorFlowTX1

Next, tell the dynamic linker to use /usr/local/lib:

$ ./
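The script name did not survive here, but "tell the dynamic linker to use /usr/local/lib" is typically done along these lines. This is a sketch of the idea, not the repo script itself, and the conf file name is an arbitrary choice:

```shell
# Sketch: add /usr/local/lib to the dynamic linker's search path
# and refresh the linker cache. Requires root.
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local-lib.conf
sudo ldconfig
```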


There is a convenience script which will install the required prerequisites such as Java, along with Protobuf, grpc-java and Bazel. The script also patches the source files appropriately for ARM 64. Bazel and grpc-java each require a different version of Protobuf, so that is also taken care of in the script.

$ ./
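For reference, the apt-level side of the prerequisites usually amounts to something like the following. Treat the package list as an assumption about what a Bazel/TensorFlow build on Ubuntu 16.04 needs; the actual script also builds Protobuf, grpc-java and Bazel from source, which this sketch does not cover.

```shell
# Rough sketch of likely apt prerequisites for building TensorFlow
# r0.11 with Bazel on Ubuntu 16.04 (package names are an assumption).
sudo apt-get update
sudo apt-get install -y \
    openjdk-8-jdk zip unzip autoconf automake libtool curl \
    zlib1g-dev swig python-numpy python-dev python-pip python-wheel
```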

From the video, installation of the prerequisites takes a little over 30 minutes, though this will depend on your internet connection speed.

Building TensorFlow

First, clone the TensorFlow repository and patch it for ARM 64 operation:

$ ./

Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure script. Note that most of the library locations are set in this script. As stated before, the library locations are determined by the JetPack installation.

$ ./
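The sort of environment the script hands to TensorFlow's configure step looks roughly like this. The paths and values below are assumptions based on the JetPack 2.3.1 defaults listed above, not a copy of the repo script:

```shell
# Sketch of configure-time environment variables for a CUDA build.
# Adjust the paths if CUDA/cuDNN were installed manually.
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=8.0
export TF_CUDNN_VERSION=5
export CUDA_TOOLKIT_PATH=/usr/local/cuda-8.0
export CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
# The Tegra X1 GPU (Maxwell) is compute capability 5.3.
export TF_CUDA_COMPUTE_CAPABILITIES=5.3
```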

We’re now ready to build TensorFlow:

$ ./
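Under the hood, a TensorFlow build of this vintage is driven by Bazel. The invocation is roughly the following; the exact flags are an assumption, so check the repo script before relying on them:

```shell
# Roughly what the build script runs (an assumption; see the repo
# script for the exact flags). --local_resources caps the RAM (MB),
# CPU cores and I/O that Bazel will use, which matters on a 4 GB board.
bazel build -c opt --config=cuda \
    --local_resources 3072,4.0,1.0 \
    //tensorflow/tools/pip_package:build_pip_package
```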

This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:

$ ./

The wheel file will be placed in the $HOME directory as tensorflow-0.11.0-py2-none-any.whl.


Pip can be used to install the wheel file:

$ pip install $HOME/tensorflow-0.11.0-py2-none-any.whl
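Before running the larger MNIST example, a minimal import check can confirm the wheel installed cleanly. This sketch (the helper name is my own) reports the TensorFlow version if the import works, and the failure otherwise, instead of crashing:

```python
def check_tensorflow():
    """Return the installed TensorFlow version, or a message if absent."""
    try:
        import tensorflow as tf
        return "TensorFlow %s" % getattr(tf, "__version__", "unknown")
    except ImportError as exc:
        return "TensorFlow not importable: %s" % exc

if __name__ == "__main__":
    print(check_tensorflow())
```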


Then run a simple TensorFlow example for the initial sanity check:

$ cd $HOME/tensorflow
$ time python tensorflow/models/image/mnist/


So there you have it. Building TensorFlow is quite a demanding task, but hopefully some of these scripts make the job a little bit simpler.


36 Responses

  1. Hi Jim, thanks for this. Two days ago I tried to do it on my own following posts on Stack Overflow and GitHub, but completely ran out of space in the middle of the process. Had to perform some memory gymnastics to even boot the device properly 🙂

    Judging by the terminal output from that test at the end, TensorFlow is compiled with CUDA support? Also, is there any reason why this wouldn’t work with the 0.12 version of TF?

  2. Hi Marko,
    I found the whole TensorFlow build process rather trying. Looking at the GitHub and Stack Overflow threads, it looks like people far more determined and smarter than I am are having a bunch of issues building it on the TX1. I attempted to build everything in an entirely automated fashion, but ran into numerous roadblocks. Eventually I gave in and published what I had, admitting defeat. Hopefully others can follow along with my attempt. Even on a TX1 directly after flashing, with enough memory and disk space, I did NOT find that the solutions posted actually built 0.11 correctly.

    The script file sets all of the environment variables before calling the TF script named ‘configure’. configureTensorFlow sets TensorFlow to use CUDA 8.0 and cuDNN 5.1.15. The environment variables use the JetPack configuration, so if someone set their TX1 up manually they may have to adjust those environment variables in the script file, or just run the TensorFlow configure script directly.

    As for TensorFlow 0.12, the quick answer is that I haven’t tried it. Personally, I was just happy to get 0.11 to build consistently.

    Directions for getting the pre-built wheel file for 0.11:

  3. Dear Marko and kangalow,
    It took me about 40 hours in total to finish the build of TensorFlow.
    The scripts definitely gave a good reference.
    As you can see from the output of the example mentioned at the end of the article, TensorFlow is using the TX1 GPU.

    ubuntu@jetson:~/tensorflow$ time python tensorflow/models/image/mnist/
    I tensorflow/stream_executor/] successfully opened CUDA library locally
    I tensorflow/stream_executor/] successfully opened CUDA library locally
    I tensorflow/stream_executor/] successfully opened CUDA library locally
    I tensorflow/stream_executor/] successfully opened CUDA library locally
    I tensorflow/stream_executor/] successfully opened CUDA library locally
    Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    I tensorflow/stream_executor/cuda/] ARM has no NUMA node, hardcoding to return zero
    I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
    name: NVIDIA Tegra X1
    major: 5 minor: 3 memoryClockRate (GHz) 0.072
    pciBusID 0000:00:00.0
    Total memory: 3.90GiB
    Free memory: 1.40GiB
    I tensorflow/core/common_runtime/gpu/] DMA: 0
    I tensorflow/core/common_runtime/gpu/] 0: Y
    I tensorflow/core/common_runtime/gpu/] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
    Step 0 (epoch 0.00), 42.6 ms
    Minibatch loss: 12.054, learning rate: 0.010000
    Minibatch error: 90.6%
    Validation error: 84.6%
    Step 100 (epoch 0.12), 56.8 ms
    Minibatch loss: 3.283, learning rate: 0.010000
    Minibatch error: 6.2%
    Validation error: 7.1%
    Step 200 (epoch 0.23), 55.0 ms

    Because this was running so long, and maybe I was just lucky that it didn’t crash in some way, I uploaded the wheel file to Drive so one can just download it 😉

    best regards and thanks for your work,

  4. I’m having the following issue when running the tensorflow configure script:

    ERROR: /home/ubuntu/tensorflow/tensorflow/core/platform/default/build_config/BUILD:56:1: no such package ‘@jpeg_archive//’: Error downloading from to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive: Error downloading to /home/ubuntu/.cache/bazel/_bazel_ubuntu/ad1e09741bb4109fbc70ef8216b59ee2/external/jpeg_archive/jpegsrc.v9a.tar.gz: Connection reset and referenced by ‘//tensorflow/core/platform/default/build_config:platformlib’.
    ERROR: Evaluation of query “deps((//tensorflow/… union @bazel_tools//tools/jdk:toolchain))” failed: errors were encountered while computing transitive closure.

    Has anyone seen this error before?

  5. Hi, I’m trying to install tensorflow on TX1 as this post.
    However, every time I run ./
    a prompt is shown as below.
    Reversed (or previously applied) patch detected! Assume -R? [n]
    Did I do something wrong?

    1. When you run it, it patches files that are outside the tensorflow tree. If you run it again, those patches are already applied. Assuming that you do not wish to revert the patches, the default answer [n] is correct. Hope this helps.

  6. After successfully setting up the environment, TensorFlow has been building for 5-6 days. It seems to be stuck on the following step of building protobuf:

    INFO: From Compiling external/protobuf/src/google/protobuf/compiler/java/
    external/protobuf/src/google/protobuf/compiler/java/ warning: ‘bool google::protobuf::compiler::java::{anonymous}::EnumHasCustomOptions(const google::protobuf::EnumDescriptor*)’ defined but not used [-Wunused-function]
    bool EnumHasCustomOptions(const EnumDescriptor* descriptor) {

    I ran the before starting the build process, but it still seems to be taking longer than expected and than what is mentioned online.

    Anyone had the same behavior or have a recommendation on how to debug/speed up the process?

    1. What version of L4T are you using? I do know that the build process is very sensitive to the ./ step. If you don’t have to actually build it yourself, consider using the pre-built wheel file available through:

      The build should only take a few hours. Certainly if it gets stuck you should consider that there is something wrong, and restart the build. The steps to build protobuf should take less than 15 minutes; most of the build time is spent on TensorFlow itself.

  7. Hello, after using “./” it gets stuck at “external/protobuf/python/google/protobuf/pyext/ warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]”. More than 48 hours have passed so far. Any suggestions?

    1. To give you a sense of time, the whole build should take around 4 hours. The bulk of the time is spent compiling TensorFlow itself. Unless you have some reason to actually build TensorFlow yourself, consider that the easiest way to install is from a pre-built wheel file:

      There isn’t much information in the warning you listed; it’s difficult to tell where in the compilation process it occurs. You may want to read:

      and make sure that you have sufficient swap space available. Good luck!

  8. Thank you for all the info and scripts. I have been able to install TensorFlow on a TX1 with an SSD drive. I’m using the SSD drive as the root filesystem with a 10 GB swap file. It took several days of work, but there is plenty of feedback from the compiler and linker when something is wrong, and the proper file versions needed for the TX1 were among the files; some just had to be renamed. I’m running the tutorials that came with TensorFlow and getting the same results. Thanks again, it does work. I’m running a RealSense R200 camera, ROS and TensorRT, and now looking forward to training.

  9. Hi,
    Have you tried installing TensorFlow on the TX2 development kit? Is the process the same? How do you use the TensorFlow Python interface on the TX kit? Does it have a Python interpreter installed, like Anaconda?

    1. I haven’t tried it yet on the TX2, though I’ve heard people have had success installing v1.0. The TX2 nvcc compiler update fixes a bug that prevented v1.0 from building on the TX1. The Jetson dev kits are pretty much standard Ubuntu desktops, so you can use pretty much the standard TensorFlow dev environment. Thanks for reading!

  10. Hi,
    Following your video, I have successfully installed TensorFlow r0.11 on the Jetson TX1. Now I am trying to install TensorFlow r1.0, but it has not been successful following your video. Have you installed TensorFlow r1.0? Anything I should pay attention to?

    1. v1.0 does not currently build on the Jetson TX1 because of an issue with some instructions in the nvcc compiler. I’ve heard that some people have workarounds, but for me it’s not worth the trouble. The next release of JetPack should bring L4T R27.2, which fixes the issue. Thanks for reading!

  11. Hello, this article was very helpful.
    But when I was following this article, I noticed the URL of avro specified in workspace.bzl is invalid, so we should specify avro-1.8.1 in it.

    1. I do not know. Did they change the workspace.bzl file recently, or has avro changed? You can try avro-1.8.1 and see if it works. Thanks for reading!

  12. When I run it, I get the following output:

    I tensorflow/core/common_runtime/gpu/] DMA: 0
    I tensorflow/core/common_runtime/gpu/] 0: Y
    I tensorflow/core/common_runtime/gpu/] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
    [1.0, 1.0, 1.0, 1.0, 1.0]
    DESCRIPTION: Reinforcement Learning (DeepQ) Batch: 200 RandProb: 0.01 LR: 1e-05 DR: 0.25 Brain: [140, 120, 100, 80]/[1.0, 1.0, 1.0, 1.0]/[‘tanh’, ‘tanh’, ‘tanh’, ‘tanh’]
    Timeline Sample 1000
    I tensorflow/stream_executor/] Couldn’t open CUDA library LD_LIBRARY_PATH: /usr/local/cuda-8.0/lib64:
    F tensorflow/core/platform/default/gpu/] Check failed: f != nullptr could not find cuptiActivityRegisterCallbacksin libcupti DSO; dlerror: /home/ubuntu/py2/local/lib/python2.7/site-packages/tensorflow/python/ undefined symbol: cuptiActivityRegisterCallbacks


  13. Hi Kangalow!
    I can’t install TensorFlow on the Jetson TX1. When I run ./, it tells me it cannot find output/bazel. My TX1 JetPack is 3.0, but in your video it is JetPack 2.3.1. Is this the problem?

  14. Hi kangalow!
    When I run “./”, it tells me:
    Problem with java installation: couldn’t find/access rt.jar in /usr/jdk-8
    Problem with java installation: couldn’t find/access rt.jar in /usr/jdk-8
    Configuration finished
    What can I do about it?

    1. Hi kangalow!
      In your video, when you ran “$ ./”, a “configuring oracle-java8-installer” prompt popped up, but I don’t get it. Could you please tell me why? Thanks very much!

  15. I do not know why it doesn’t work as expected. You can try running the installPrerequisites shell file manually, line by line, and see if any issues arise.

  16. When I execute $ ./, it shows that the avro-1.8.0/cpp download failed:
    http: // -1.8.0.tar.gz is Not Found.
    When I checked, I found that avro has been updated to 1.8.2. How can I solve this problem?
    Thank you!

  17. Hi,

    After I ran the, everything went well until it tried to build Bazel. Here’s the output:

    gcc: error trying to exec ‘cc1plus’: execvp: No such file or directory
    Target //src:bazel failed to build
    INFO: Elapsed time: 20.973s, Critical Path: 0.33s
    cp: cannot stat ‘output/bazel’: No such file or directory

    Any thoughts?


