
GStreamer Network Video Stream and Save to File

In a previous entry, we discussed how to preview webcams. In this article, we’ll discuss a Server which saves both video and audio to a file, and streams video over the network.

On the other side of the network, we’ll build a small Client to get the video, decode it and display it. The Jetson does hardware decoding of the H.264 video streams.

Here’s a demonstration:

In the demonstration, we grab H.264 encoded video from a Logitech c920 webcam and the camera audio. We place the video and audio into a local multimedia file, and send the video out as a stream over the network.

The demonstration shell files are here:

gExampleServer.sh

and

gExampleClient.sh

gExampleServer

Here’s the command line that you run on the Server’s Terminal.

gst-launch-1.0 -vvv -e \
mp4mux name=mux ! filesink location=gtest1.mp4 \
v4l2src device=/dev/video0 do-timestamp=true \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! tee name=tsplit \
! queue ! h264parse ! omxh264dec ! videoconvert ! videoscale \
! video/x-raw, width=1280, height=720 ! xvimagesink sync=false tsplit. \
! queue ! h264parse ! mux.video_0 tsplit. \
! queue ! h264parse ! mpegtsmux ! tcpserversink host=$IP_ADDRESS port=5000 \
pulsesrc device=alsa_input.usb-046d_HD_Pro_Webcam_C920_A116B66F-02-C920.analog-stereo do-timestamp=true \
! audio/x-raw ! queue ! audioconvert ! voaacenc ! queue ! mux.audio_0

Where $IP_ADDRESS is the host’s IP address.
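One convenient way to set it before launching the script (an assumption on our part: that the first address hostname reports is the one on the network you want to stream over) is:

IP_ADDRESS=$(hostname -I | awk '{print $1}')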

There are a couple of GStreamer elements which we use to facilitate the distribution of the video. The first, called a tee, is used to split the video pipeline and route it to different places: in our case, the screen preview, a local video file, and the network itself.
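If you want to see tee in isolation, here is a minimal sketch using a test source instead of the webcam; it splits one stream into a preview window and a throwaway sink:

gst-launch-1.0 videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! xvimagesink \
t. ! queue ! fakesink

Each branch gets its own queue so that one slow branch does not stall the other.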

The second element, called a mux (multiplexer), can sometimes be thought of as a container. The first mux, called mp4mux, is a .MP4 file container where we store the video and the audio being collected from the webcam. The video is encoded in H.264, the audio is encoded as AAC. The mp4mux has a place to store video (mp4mux.video_0) and a place where the audio goes (mp4mux.audio_0), and prepares it to go into a file. It’s more complicated than that, of course, but we’ll go easy here.
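A stripped-down sketch of the same idea, using test sources instead of the webcam (this assumes the software x264enc and voaacenc encoders are installed, which may not be true on every Jetson image):

# Mux a few seconds of H.264 test video and AAC test audio into an MP4 file
gst-launch-1.0 -e mp4mux name=mux ! filesink location=muxtest.mp4 \
videotestsrc num-buffers=300 ! x264enc ! h264parse ! queue ! mux.video_0 \
audiotestsrc num-buffers=300 ! audioconvert ! voaacenc ! queue ! mux.audio_0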

The second mux, called mpegtsmux, can be thought of as an element which combines media streams and prepares them as a Transport Stream (in this case MPEG) to be sent out over the network. We take the output of mpegtsmux and send it to a tcpserversink element, out over the network.
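A bare-bones sketch of just that network leg, again with a test source (x264enc is assumed to be available, and 192.168.1.100 is only a placeholder for your server’s address):

# Encode a test pattern, wrap it in an MPEG transport stream, serve it over TCP
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! h264parse \
! mpegtsmux ! tcpserversink host=192.168.1.100 port=5000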

As a side note, you’ll encounter the term Real Time Streaming Protocol (RTSP), a network control protocol used to set up and control media streams. Since we’re going to send our video stream out over TCP, we need to make sure that our video is “glued together” and arrives over the network in the proper order. That’s why we put it into an MPEG container (mpegtsmux) to board the network train. If we used UDP instead of TCP, this wouldn’t be an issue and we could use other GStreamer mechanisms, such as the udpsink and rtph264pay elements.
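For reference, a hedged sketch of what that UDP/RTP path might look like (not part of the demo scripts; 192.168.1.100 is a placeholder for the receiver’s address):

# Sender: packetize the H.264 stream as RTP and push it out over UDP
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! h264parse \
! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.100 port=5000

# Receiver (on the Jetson): depacketize, hardware-decode and display
gst-launch-1.0 udpsrc port=5000 \
caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink sync=false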

Another thing worth pointing out is that the preview size is smaller than the size of the video that is captured. This is defined by the second capability, video/x-raw, width=1280, height=720. For the demo, we basically wanted different sized windows on the server and the client to show that the client isn’t just a screen clone.
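If you would rather preview at the full 1920x1080 capture size, only that capability needs to change; the preview branch of the server pipeline would then read:

! video/x-raw, width=1920, height=1080 ! xvimagesink sync=false tsplit. \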

gExampleClient

In comparison to the Server, the Client is easy:

gst-launch-1.0 -vvv \
tcpclientsrc host=$IP_ADDRESS port=5000 \
! tsdemux \
! h264parse ! omxh264dec ! videoconvert \
! xvimagesink sync=false

The tcpclientsrc element is basically the match to the Server’s tcpserversink. tsdemux takes the combined stream and separates it back out into video (and audio, if available). After that we decode the video and put it up on the screen.
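Note that omxh264dec is the Jetson’s hardware decoder, so the Client above only runs as-is on a Jetson. On an ordinary Linux PC you could swap in a software decoder instead; a minimal sketch, assuming the gstreamer1.0-libav package (which provides avdec_h264) is installed:

gst-launch-1.0 -vvv \
tcpclientsrc host=$IP_ADDRESS port=5000 \
! tsdemux \
! h264parse ! avdec_h264 ! videoconvert \
! xvimagesink sync=false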

Video File Issues

I’ve had a lot of issues with the video files that are being saved. In this particular setup, there are ‘freezes’ about every couple of seconds. The audio still syncs, but there’s definitely a hitch in its giddy-up.


20 Responses

  1. Hi, thank you for the great work on the JetsonHacks series; it’s saving me a lot of time.

    I’m new to GStreamer, and I want to stream camera video from the TK1 to another PC. I have a question here:
    If I want to stream only video, how can I do so?
    My guess is to comment out the last line `$ASOURCE ! queue ! $AUDIO_ENC ! queue ! mux.audio_0` of the command in the bash file.
    However, I got an error when I executed it (output is shown below).
    What’s the problem with these error messages? I’ve done some searching but found no answer.
    The command in the previous post (webcam preview) worked for me.
    Note that my webcam is Logitech c310.
    Thanks

    My output:
    gst-launch-1.0 -vvv -e mp4mux name=mux ! filesink location=gtest1.mp4 v4l2src device=/dev/video0 ! video/x-h264, width=1280, height=720, framerate=30/1 ! tee name=tsplit ! queue ! h264parse ! omxh264dec ! videoconvert ! videoscale ! video/x-raw, width=1280, height=720 ! xvimagesink sync=false tsplit. ! queue ! h264parse ! mux.video_0 tsplit. ! queue ! h264parse ! mpegtsmux ! udpsink host=127.0.0.1 port=5000
    Setting pipeline to PAUSED …
    Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingPipeline is live and does not need PREROLL …
    Setting pipeline to PLAYING …
    New clock: GstSystemClock
    ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
    Additional debug info:
    gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    streaming task paused, reason not-negotiated (-4)
    EOS on shutdown enabled -- waiting for EOS after Error
    Waiting for EOS…
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse0
    /GstPipeline:pipeline0/GstMP4Mux:mux.GstPad:src: caps = video/quicktime, variant=(string)iso
    /GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/quicktime, variant=(string)iso
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse1: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse1
    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse2: No valid frames found before end of stream
    Additional debug info:
    gstbaseparse.c(1153): gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse2
    ERROR: from element /GstPipeline:pipeline0/MpegTsMux:mpegtsmux0: Could not create handler for stream
    Additional debug info:
    mpegtsmux.c(767): mpegtsmux_create_streams (): /GstPipeline:pipeline0/MpegTsMux:mpegtsmux0

  2. It’s been a long time since I have looked at this, but I believe that the split is being used to grab both the video and audio, so you should not need that. Unfortunately I’m way behind on a project due next week, so I can’t be of much more help.
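    For what it’s worth, a video-only sketch (untested here; it simply drops the audio branch from the server command) would look something like:

    gst-launch-1.0 -vvv -e \
    mp4mux name=mux ! filesink location=gtest1.mp4 \
    v4l2src device=/dev/video0 do-timestamp=true \
    ! video/x-h264, width=1920, height=1080, framerate=30/1 \
    ! tee name=tsplit \
    ! queue ! h264parse ! omxh264dec ! videoconvert ! videoscale \
    ! video/x-raw, width=1280, height=720 ! xvimagesink sync=false tsplit. \
    ! queue ! h264parse ! mux.video_0 tsplit. \
    ! queue ! h264parse ! mpegtsmux ! tcpserversink host=$IP_ADDRESS port=5000

    Note that this still asks v4l2src for video/x-h264, which the c920 can deliver; if the c310 only offers raw or MJPEG video, those caps will never negotiate, which would also explain the not-negotiated error above.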

    1. Thanks for your reply. I’ve managed to make it work, but the quality is not as good as yours. I’ll keep searching.

      1. Did you ever figure it out? If you still have your code, can you post it please? I’m trying to figure out the same thing!

  3. I am getting the below-mentioned error while testing the camera for capture.

    ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: No valid frames found before end of stream
    How do I solve it?

  4. I was very surprised when I found this page, but I couldn’t try it because I’m not a Jetson user. Is this command only for the NVIDIA Jetson TK1 board? I really want to stream video and audio with gstreamer-1.0. If it’s not impolite to ask, could you explain how to implement streaming video and audio?

  5. Hi JetsonHacks,

    I tried your pipeline with ‘nvcamerasrc’ as the video source and it works! I also modified the pipeline for video recording and for sending the stream over the network by TCP only, without the preview queue. So it goes like this:

    gst-launch-1.0 avimux name=mux \
    ! filesink location=/media/nvidia/SSDJetson/test.mp4 nvcamerasrc fpsRange="30.0 30.0" \
    ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc bitrate=14000000 control-rate=variable ! tee name=tsplit \
    ! queue ! h264parse ! mux.video_0 tsplit. \
    ! queue ! h264parse ! queue ! matroskamux \
    ! queue leaky=2 ! tcpserversink host=192.x.x.x port=7001

    Thanks to your guide, I managed to launch this on the command line. I’m used to transforming GStreamer pipelines into C/C++ code, but I’m still a beginner. However, this is the first time I’ve seen this form of pipeline, where it starts with the muxer and the “filesink” and “nvcamerasrc” elements appear next to each other in the same segment, `! filesink location=/media/nvidia/SSDJetson/test.mp4 nvcamerasrc fpsRange="30.0 30.0" !`. I’m not familiar with this form of pipeline at all, but I wanted to transform it into code.

    I have two problems: I don’t know how to create the “filesink” and “nvcamerasrc” elements in the same `gst_element_factory_make` call.
    Plus, I don’t know how to translate `mux.video_0 tsplit.` into code.

    Do you have any examples of this? So far, I haven’t managed to find any code examples related to this on the internet.

    Thanks in advance!
