This is the second progress video about the port of openFrameworks to the Jetson.
The demonstration sketch shows that most of the OpenGL issues have been sorted out. This is also the first demo to include a GLSL shader, which renders the background, so good progress is being made.
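The actual shader used in the demo isn't shown, but a full-screen background shader in openFrameworks is typically wired up along these lines (the background.vert/background.frag file names are placeholders):

```cpp
#include "ofMain.h"

// Minimal sketch of a shader-drawn background. The shader file names are
// placeholders; on the Jetson's OpenGL ES path the shader sources also
// need to be written for ES.
class ofApp : public ofBaseApp {
    ofShader background;
public:
    void setup() {
        // Loads bin/data/background.vert and bin/data/background.frag
        background.load("background");
    }
    void draw() {
        background.begin();
        // Feed the shader time and resolution so it can animate
        background.setUniform1f("u_time", ofGetElapsedTimef());
        background.setUniform2f("u_resolution", ofGetWidth(), ofGetHeight());
        // Cover the window; the fragment shader colors every pixel
        ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight());
        background.end();
        // ... the point cloud and GUI are drawn on top
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```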
Two different openFrameworks add-ons are being demonstrated:
ofxTimeline and ofxUI.
ofxTimeline is being used to control the virtual camera movement. The timeline runs a one-minute loop that controls the pan, zoom, and orbit of the camera. ofxUI provides the GUI element in the bottom left-hand corner, which displays the camera position as it updates.
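The demo's actual timeline layout isn't shown, but driving a camera from keyframed ofxTimeline curves looks roughly like this (the track names and the camera math are illustrative, not the demo's exact setup):

```cpp
#include "ofMain.h"
#include "ofxTimeline.h"

class ofApp : public ofBaseApp {
    ofxTimeline timeline;
    ofCamera cam;
public:
    void setup() {
        timeline.setup();
        timeline.setDurationInSeconds(60);      // the one-minute loop
        timeline.setLoopType(OF_LOOP_NORMAL);
        // Hypothetical track names; each curve holds keyframes
        timeline.addCurves("pan",   ofRange(-180, 180));
        timeline.addCurves("orbit", ofRange(0, 360));
        timeline.addCurves("zoom",  ofRange(200, 2000));
        timeline.play();
    }
    void update() {
        // Sample the keyframed curves at the current playhead position
        float pan   = timeline.getValue("pan");
        float orbit = timeline.getValue("orbit");
        float zoom  = timeline.getValue("zoom");
        // Place the camera on a circle around the scene and aim it
        cam.setPosition(zoom * sin(ofDegToRad(orbit)), 0,
                        zoom * cos(ofDegToRad(orbit)));
        cam.lookAt(ofVec3f(pan, 0, 0));
    }
    void draw() {
        cam.begin();
        // ... draw the point cloud here
        cam.end();
        timeline.draw();   // the timeline renders as its own GUI strip
    }
};
```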
The depth information from the Kinect is rendered as what is referred to as a point cloud. A 3D point mesh is constructed for each frame that is displayed, and the color of each point is looked up from the color (RGB) camera. There is no CUDA hardware acceleration of this process at this point; it is all done on one of the ARM cores.
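This is essentially the approach taken by the stock openFrameworks Kinect point-cloud example. A condensed sketch of the per-frame mesh rebuild (not the demo's exact code) looks like this:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
    ofxKinect kinect;
    ofMesh mesh;
public:
    void setup() {
        kinect.setRegistration(true);   // align depth to the RGB camera
        kinect.init();
        kinect.open();
    }
    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;
        // Rebuild the point mesh for every new frame
        mesh.clear();
        mesh.setMode(OF_PRIMITIVE_POINTS);
        const int step = 2;   // skip pixels to keep the CPU load down
        for (int y = 0; y < kinect.height; y += step) {
            for (int x = 0; x < kinect.width; x += step) {
                if (kinect.getDistanceAt(x, y) > 0) {
                    mesh.addVertex(kinect.getWorldCoordinateAt(x, y));
                    mesh.addColor(kinect.getColorAt(x, y));
                }
            }
        }
    }
    void draw() {
        ofBackground(0);
        glPointSize(2);
        ofPushMatrix();
        ofScale(1, -1, -1);        // flip into OpenGL's coordinate system
        ofTranslate(0, 0, -1000);  // push the cloud back into view
        mesh.draw();
        ofPopMatrix();
    }
};
```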
While there are still a few bugs hidden in the openFrameworks port, for the most part everything is running smoothly.
2 Responses
I’m struggling to obtain a depth point cloud from a Kinect v2 plugged into a Jetson board. It looks like you have nailed it. Can you please give me some advice as to where I should begin? Thanks to your other posts I’ve got libfreenect2 up and running.
Thank you in advance.
Hi Yoni,
The video above is from a version 1 Kinect. I haven’t tried to render a point cloud from a V2 yet, but I believe that people have been doing it with a ROS bridge into rviz. libfreenect2 should give you the depth bits and the RGB pixels, and it’s possible to get them registered with the newer versions of libfreenect2, but I haven’t tied the two together personally.
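As a starting point (untested here, with error handling omitted), the registration path in libfreenect2 looks roughly like this:

```cpp
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>

int main() {
    libfreenect2::Freenect2 freenect2;
    libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
    if (!dev) return 1;

    libfreenect2::SyncMultiFrameListener listener(
        libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
    dev->setColorFrameListener(&listener);
    dev->setIrAndDepthFrameListener(&listener);
    dev->start();

    // Registration maps depth pixels into the color camera's frame
    libfreenect2::Registration registration(dev->getIrCameraParams(),
                                            dev->getColorCameraParams());
    libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);

    libfreenect2::FrameMap frames;
    listener.waitForNewFrame(frames);
    libfreenect2::Frame *rgb   = frames[libfreenect2::Frame::Color];
    libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
    registration.apply(rgb, depth, &undistorted, &registered);

    // getPointXYZRGB() turns a registered pixel (row, column) into a
    // colored 3D point; loop over all pixels to build the full cloud
    float x, y, z, rgbf;
    registration.getPointXYZRGB(&undistorted, &registered, 212, 256,
                                x, y, z, rgbf);

    listener.release(frames);
    dev->stop();
    dev->close();
    return 0;
}
```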
You can also try looking through the Point Cloud Library (PCL) mailing lists to see if anyone has worked with libfreenect2.