JetsonHacks

Developing on NVIDIA® Jetson™ for AI on the Edge

Install TensorFlow for Python – NVIDIA Jetson TX Dev Kits

Over the last few articles we have been building TensorFlow packages with Python support. The packages are now in a GitHub repository, so we can install TensorFlow without having to build it from source. Install files are available for both the Jetson TX1 and Jetson TX2. Looky here:

Background

In the earlier articles we went over the procedure for building TensorFlow on the Jetson TX1 and the Jetson TX2. The build takes a few hours on each platform. Worse, the Jetson TX1 needs a new kernel so that swap memory can be enabled for the build.

Fortunately, we were able to put together a new GitHub repository called installTensorFlowJetsonTX which contains all of the Python .whl (wheel) install files that were created during the process (and then some).

The repository contains two sets of wheel files. The files for the Jetson TX1 are in the TX1 directory; the files for the Jetson TX2 are in the TX2 directory. Each directory contains two wheel files, one for Python 2.7 and one for Python 3.5. For example, the file tensorflow-1.3.0-cp35-cp35m-linux_aarch64.whl in the TX1 folder is the TensorFlow 1.3 wheel file for Python 3.5. The term aarch64 indicates that the file is built for the Jetson's ARM64 architecture.
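
The cp27/cp35 tag in the file name should match the Python version on the Jetson, and the linux_aarch64 tag should match the machine architecture. A quick way to check (the exact version numbers will vary with your JetPack release):

$ python --version     # e.g. Python 2.7.x  -> use the cp27 wheel
$ python3 --version    # e.g. Python 3.5.x  -> use the cp35 wheel
$ uname -m             # aarch64 -> matches the linux_aarch64 tag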

Install TensorFlow

Install the wheel files with the Python pip tool. The first step is to install the appropriate version of python-pip. Next, download the appropriate wheel file from the repository. Then use pip with the corresponding wheel file to finish the installation.

For Python 2.7

$ sudo apt-get install -y python-pip python-dev
$ pip install tensorflow-wheel-file

For Python 3.5

$ sudo apt-get install -y python3-pip python3-dev
$ pip3 install tensorflow-wheel-file

Downloading the wheel file can be done in a couple of ways. The first way is to clone the repository. This has the side effect of downloading all of the wheel files (currently about 200MB). Perhaps a better way, as shown in the video, is to navigate in a web browser to the wheel file you wish to download on the GitHub site and click the Download button. The wheel file will be placed in your Downloads folder.
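
As a concrete example, if you used the Download button to grab the TensorFlow 1.3 / Python 3.5 wheel for the TX1 described above, the install looks like this (the path assumes the browser saved the file to ~/Downloads; adjust it to wherever the wheel actually landed):

$ cd ~/Downloads
$ pip3 install tensorflow-1.3.0-cp35-cp35m-linux_aarch64.whl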

Conclusion

This is a relatively easy way to install TensorFlow. TensorFlow is a very rich environment; the downside of this method is that you cannot specify the build options. Note that you will also have to install the TensorFlow models and support materials separately. The upside is that you don’t have to spend the build time just to get TensorFlow up and running.
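
If you want the TensorFlow models and tutorials, one straightforward way to get them is to clone the models repository from GitHub; which examples you actually need depends on your project:

$ git clone https://github.com/tensorflow/models.git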

Notes

There are wheel files for Python 2.7 and Python 3.5. The Jetson environment for both the TX1 and TX2:

  • L4T 28.1 (JetPack 3.1)
  • CUDA 8.0
  • cuDNN 6.0
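
To confirm that a Jetson matches this environment before installing, a few quick checks can help (nvcc lives in /usr/local/cuda/bin if it is not already on your PATH):

$ head -n 1 /etc/nv_tegra_release    # L4T release, R28.1 for JetPack 3.1
$ nvcc --version                     # CUDA toolkit version
$ dpkg -l | grep cudnn               # installed cuDNN packages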

TensorFlow

  • Version 1.3.0
  • Built with CUDA support
  • Leverages cuDNN
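
Once a wheel is installed, a quick sanity check from the command line confirms the version and that the GPU is visible (shown here for Python 3.5; use python/pip for 2.7):

$ python3 -c 'import tensorflow as tf; print(tf.__version__)'
$ python3 -c 'from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())'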

Repositories used for building the wheel files:

NVIDIA Jetson TX1: https://github.com/jetsonhacks/installTensorFlowTX1
NVIDIA Jetson TX2: https://github.com/jetsonhacks/installTensorFlowTX2

Note: The Jetson TX1 GPU architecture is 5.3 and the Jetson TX2 is 6.2; the builds reflect this.
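
If you rebuild from source using the repositories above, the GPU architecture is the CUDA compute capability supplied to TensorFlow's configure step, along these lines (the TF_CUDA_COMPUTE_CAPABILITIES variable is read by TensorFlow's configure script; check the script for your exact version):

$ TF_CUDA_COMPUTE_CAPABILITIES=5.3 ./configure    # Jetson TX1
$ TF_CUDA_COMPUTE_CAPABILITIES=6.2 ./configure    # Jetson TX2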


20 Responses

  1. Thank you for this post. It made it very easy to install TensorFlow.

    But is the Jetson able to train models? I have tried to run the MNIST examples from tensorflow.org. It can run the mnist_softmax.py sample, but when trying to run the more advanced mnist_deep.py, I’m running out of memory on the Jetson TX1.
    Do you know a way around this problem? Or is the Jetson not made for training?

      1. No. And no extra disk installed, just the on-board 16GB. I have seen you have a tutorial on how to enable swap by rebuilding the kernel.
        Is an external SSD a must for this device?

        1. I don’t know how to answer your question. I believe that the idea is that you have a model that you train on a desktop/cloud and run the trained model on the Jetson using something like TensorRT. Can you train on a Jetson? Depends on the model, the memory requirements, training time, and other factors.

          If you run out of memory, the only way that I know of to fix that is to somehow add more memory. Swap memory does that. I am not sure what your expectations are for the device.
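
          For what it’s worth, once you are running a kernel with swap support and have an external drive mounted, a swap file is typically set up with the standard Linux commands; the size and path here are only an example:

          $ sudo fallocate -l 8G /mnt/ssd/swapfile
          $ sudo chmod 600 /mnt/ssd/swapfile
          $ sudo mkswap /mnt/ssd/swapfile
          $ sudo swapon /mnt/ssd/swapfile
          $ swapon -s    # verify the swap space is active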

  2. Hello there. I’m having issues while installing TensorFlow 1.4 on my Jetson TX2. It says: “tensorflow_gpu-1.4.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.”
    It did work while installing TF1.3, but a script I’m trying to run needs TF1.4.
    Can somebody help me?
    Thanks!

    1. You are attempting to install a whl file built for a Linux x86 machine onto the ARM-based Jetson. That does not work; you will have to find a whl file for TensorFlow 1.4 for the Jetson, or build one yourself.
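
       A quick way to see the mismatch is to compare the wheel’s platform tag against the machine architecture the Jetson reports (an x86_64 tag will never match):

       $ uname -m
       aarch64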

  3. Fantastic!

    Your post meant that I could get my TX1 up and running with TensorFlow quickly. This also meant that I could start an on-line Deep Learning course, which required TensorFlow. I appreciate each of these. Thank you!

    I did need to “sudo pip install” the wheel file to avoid permission issues (I was logged in as ubuntu).

    L O V E Y O U R W O R K !

  4. Hi,
    I have a doubt: is it mandatory to install JetPack on the Jetson TX1 or TX2 device? Can’t we manually install cuDNN, CUDA, and the L4T driver packages directly on the Jetson?

    1. JetPack is not run directly on the Jetson. JetPack is an installer which runs on a host PC; it flashes the Jetson over USB (this installs the rootfs) and then installs the packages that you select (CUDA, cuDNN, VisionWorks and so on). The packages are installed from the host PC over a wired Ethernet connection. If you have any further doubts, please ask questions on the official NVIDIA Jetson forum, where a large number of developers and NVIDIA engineers share their experience. The forums are located here: https://devtalk.nvidia.com/default/board/139/jetson-embedded-systems/
      Thanks for reading!

        1. I don’t understand the question. The Jetson usually comes from the factory with an older version of the OS as a placeholder. NVIDIA recommends upgrading to the newest release, which fixes a large number of issues present in the golden master installed on devices straight from the factory. You can try to run it as received, but you are only delaying the inevitable flash. It is the nature of embedded development; you should expect to have to regenerate the system from scratch occasionally, depending on your development process.

          1. Thank you angalow. I tried installing cuDNN and CUDA with Flash OS deselected; the installation completed but the libraries are not compiled. I am using Ubuntu 16.04 LTS on my host machine and the JetPack 3.3 installer.

