orb:stereoscopy

The Octanis rover will be able to visualize and render its environment using a pair of cameras. These use the principle of stereo vision - the same one our eyes and brain use - to recover depth information about the environment.
The vision subsystem is being developed as a bachelor project under the supervision of the EPFL eSpace lab.

Stereoscopy: what is it?

Stereoscopy is a technique used to create an illusion of depth in a flat image and enable a three-dimensional effect. It relies on the stereo vision principle, whereby two images taken from slightly different angles are merged into a single one containing the depth information.
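For two horizontally aligned cameras, the depth of a point follows directly from its disparity, i.e. the horizontal shift of the point between the left and the right image. As a reminder (this is the standard pinhole stereo relation, not something specific to our setup), with focal length f in pixels, baseline B (distance between the cameras) and disparity d, the depth is approximately

  Z = f * B / d

so close objects produce a large disparity and distant objects a small one.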

About the libelas algorithm

Information I could retrieve about the libelas algorithm and elas_ROS


The algorithm needs to be fed with already undistorted and rectified input images, such that correspondences are restricted to the same line in both images. The images therefore have to be pre-processed before being handed to the algorithm. They must also be in .pgm format (greyscale images). This can be done with OpenCV or with MATLAB (so far, as I only worked with pairs of photos, I used MATLAB for simplicity).
The algorithm itself DOES NOT need any calibration parameters or information about the setup used to record the pictures. Such parameters are only needed for the pre-processing of the images (removing the distortion and rectifying) and the post-processing (retrieving a point cloud version of the disparity map).
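As an illustration of the pre-processing step, here is a minimal OpenCV (C++) sketch. The calibration file name and its key names are placeholders; the values must come from your own stereo calibration.

  #include <opencv2/opencv.hpp>

  int main() {
    // Intrinsics (K1/K2), distortion (D1/D2) and the rotation R and
    // translation T between the cameras, from a prior stereo calibration.
    cv::Mat K1, D1, K2, D2, R, T;
    cv::FileStorage fs("stereo_calib.yaml", cv::FileStorage::READ);
    fs["K1"] >> K1; fs["D1"] >> D1;
    fs["K2"] >> K2; fs["D2"] >> D2;
    fs["R"] >> R;   fs["T"] >> T;

    cv::Mat left  = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // Rectification transforms: after remapping, corresponding points lie
    // on the same image row in both images, as libelas requires.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

    cv::Mat m1x, m1y, m2x, m2y, leftRect, rightRect;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(), CV_32FC1, m1x, m1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, left.size(), CV_32FC1, m2x, m2y);
    cv::remap(left,  leftRect,  m1x, m1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, m2x, m2y, cv::INTER_LINEAR);

    // libelas expects greyscale .pgm input.
    cv::imwrite("left_rect.pgm",  leftRect);
    cv::imwrite("right_rect.pgm", rightRect);
    return 0;
  }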
Using the algorithm with its default “robotics” parameters (see elas.h, lines 87 to 142), as chosen in the main.cpp function (line 61), gives crooked images. It is necessary to set “add_corners = 1” to obtain non-crooked images.
A larger “ipol_gap_width” value fills the holes in the disparity map with an estimate based on the closest/best support point.
Other parameter values are still to be determined with the new cameras for the best results (see the sketch below for where they are set).

The main function shipped with the algorithm does not produce a mixed disparity: it gives two different disparity maps, one for the right and one for the left image. I used the code from the main.cpp of the Middlebury benchmark, which gives a single merged disparity map of the two pictures.
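For reference, a minimal sketch of calling libelas directly, modelled on the demo main.cpp shipped with the library (the concrete parameter values here are only examples):

  #include <cstdlib>
  #include "elas.h"
  #include "image.h"  // libelas' own .pgm loader

  int main() {
    // Rectified greyscale input pair (file names are placeholders).
    image<uchar>* I1 = loadPGM("left_rect.pgm");
    image<uchar>* I2 = loadPGM("right_rect.pgm");
    int32_t w = I1->width(), h = I1->height();
    const int32_t dims[3] = {w, h, w};  // width, height, bytes per line

    // Start from the default ROBOTICS preset, then override as noted above.
    Elas::parameters param(Elas::ROBOTICS);
    param.add_corners    = 1;   // avoids the "crooked" output
    param.ipol_gap_width = 32;  // example value; larger fills more holes

    // Output disparity maps for the left and the right image.
    float* D1 = (float*)malloc(w * h * sizeof(float));
    float* D2 = (float*)malloc(w * h * sizeof(float));

    Elas elas(param);
    elas.process(I1->data, I2->data, D1, D2, dims);
    // ... save D1/D2, e.g. as the demo main.cpp does ...

    free(D1); free(D2);
    delete I1; delete I2;
    return 0;
  }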

As the Olimex is running ROS Kinetic, we ran into some trouble getting the raw images from a driver. In ROS Indigo this is quickly achieved by installing the usb_cam package, but as we cannot use this, we had to resort to the trick explained below.

In order to get the two .yaml calibration files, we followed these links: http://wiki.ros.org/camera_calibration and http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration. We first need the image_raw topics to be published and a checkerboard with known dimensions. Then the work is straightforward. The calibration data is saved to /tmp/calibrationdata.tar.gz; uncompressing it yields a lot of .png images and an ost.yaml file, which is our calibration file. These files were renamed and put in the correct folder in Octanis 1 Mission.
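For the stereo case, the calibrator from the second link is started with something like the following; the checkerboard size and square dimensions must match your own target, and the topic names assume the stereo setup described below:

  rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 \
      left:=/stereo/left_cam/image_raw right:=/stereo/right_cam/image_raw \
      left_camera:=/stereo/left_cam right_camera:=/stereo/right_cam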

We get the raw images via the usb_cam node, which is launched from a launch file, for example. It can take several parameters, listed here:

  • video_device (string, default: “/dev/video0”)
  • image_width (integer, default: 640)
  • image_height (integer, default: 480)
  • pixel_format (string, default: “mjpeg”, possible: “yuyv”, “uyvy”)
  • io_method (string, default: “mmap”, possible: “read”, “userptr”)
  • camera_frame_id (string, default: “head_camera”)
  • framerate (integer, default: 30)
  • contrast (integer, default: 32, between 0-255)
  • brightness (integer, default: 32, between 0-255)
  • saturation (integer, default: 32, between 0-255)
  • sharpness (integer, default: 32, between 0-255)
  • autofocus (boolean, default: false)
  • camera_info_url (string, default: “”)
  • camera_name (string, default: “head_camera”)

usb_cam_mono_raw.launch

This is an example covering the mono (single-camera) process, coming from https://github.com/Octanis1/Octanis1-ROS/blob/master/catkin_ws_mission/src/usb_cam-develop/launch/usb_cam_mono_raw.launch.
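The file is not reproduced here; a minimal sketch of such a mono launch file could look as follows (the parameter values are examples, not necessarily those of the actual file):

  <launch>
    <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
      <param name="video_device" value="/dev/video0"/>
      <param name="image_width" value="640"/>
      <param name="image_height" value="480"/>
      <param name="pixel_format" value="yuyv"/>
      <param name="framerate" value="2"/>
    </node>
  </launch>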

As we can see, there is no calibration file used here; the node only grabs the raw images and publishes them. It will then publish the following topics:

  • usb_cam/camera_info
  • usb_cam/image_raw

usb_cam_stereo_raw.launch

In this launch file both cameras are running with their calibration files. They are posted in the stereo group, so it should produce the following topics (a sketch of the launch file follows the list):

  • /stereo/left_cam/image_raw
  • /stereo/left_cam/camera_info
  • /stereo/right_cam/image_raw
  • /stereo/right_cam/camera_info
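A minimal sketch of such a stereo launch file (device paths and calibration file locations are placeholders; the actual file is in the Octanis1-ROS repository):

  <launch>
    <group ns="stereo">
      <node name="left_cam" pkg="usb_cam" type="usb_cam_node">
        <param name="video_device" value="/dev/video0"/>
        <param name="camera_name" value="left_cam"/>
        <param name="camera_info_url" value="file:///path/to/left_cam.yaml"/>
      </node>
      <node name="right_cam" pkg="usb_cam" type="usb_cam_node">
        <param name="video_device" value="/dev/video1"/>
        <param name="camera_name" value="right_cam"/>
        <param name="camera_info_url" value="file:///path/to/right_cam.yaml"/>
      </node>
    </group>
  </launch>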

usb-cam_stereo.launch

This launch file uses mostly the same elements as usb_cam_stereo_raw.launch, but with the stereo_image_proc node in addition (a sketch of that extra node is given after the list). [Figure: the mix of nodes used and the commonly used names of the published topics.] As we can see, we will publish the following topics:

  • /stereo/left_cam/image_raw
  • /stereo/left_cam/camera_info
  • /stereo/right_cam/image_raw
  • /stereo/right_cam/camera_info
  • /stereo/left_cam/image_mono
  • /stereo/left_cam/image_rect
  • /stereo/left_cam/image_color
  • /stereo/left_cam/image_rect_color
  • /stereo/right_cam/image_mono
  • /stereo/right_cam/image_rect
  • /stereo/right_cam/image_color
  • /stereo/right_cam/image_rect_color
  • /disparity
  • /points2
  • /points
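The addition over the raw stereo launch file is essentially one stereo_image_proc node. A sketch of it follows; the remappings are an assumption, needed because stereo_image_proc expects its inputs under left/... and right/... rather than our left_cam/right_cam names:

  <node ns="stereo" name="stereo_image_proc" pkg="stereo_image_proc" type="stereo_image_proc">
    <remap from="left/image_raw"    to="left_cam/image_raw"/>
    <remap from="left/camera_info"  to="left_cam/camera_info"/>
    <remap from="right/image_raw"   to="right_cam/image_raw"/>
    <remap from="right/camera_info" to="right_cam/camera_info"/>
  </node>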

There are also a bunch of compressed image topics that we don't need. As far as has been tested, the disparity map is not generated with our usb_cam on ROS Kinetic, but it is on ROS Indigo.

Framerate

We chose a framerate of 2 images per second, which in reality means 4 images per second as we have 2 cameras. As a 640×480 image with .jpg compression takes roughly 15 kB of memory (ROS compression), we fill the storage at about 0.216 GB per hour (this calculation is very approximate but gives an idea).
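Spelled out, the estimate is:

  2 images/s × 2 cameras = 4 images/s
  4 images/s × 15 kB/image × 3600 s/h = 216,000 kB/h ≈ 0.216 GB/h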

Debug problems

As we have to use a “somewhat home-made” usb_cam package, when using the mjpeg image compression for the camera we get a bunch of debug warnings coming from an unknown library.

Hint

These debug problems may come from one of the following libraries:

  • linux/videodev2.h
  • libavcodec/avcodec.h
  • libswscale/swscale.h
  • libavutil/mem.h

Image Processing

For good stereo vision you need to process the images. To that end a disparity map (a map whose colour scale encodes the distance of the objects in the pictures) was used. For all this to be effective, a good calibration of the cameras is required. MATLAB offers a 'stereo calibration' toolbox for this: you upload pictures of a chessboard so that it can compute the parameters of both cameras. Using MATLAB to take the pictures guarantees that the images are raw and not processed in any way; it also lets you choose the resolution of the pictures. Calibrating with a different resolution than the one you use during the experiments will lead to wrong image processing. Looking at the disparity map, if the camera is well calibrated, you can enhance its quality by reinforcing the contrast of the pictures so that the spatial disposition is easier to detect.

The MATLAB script 'stereo_camera.m' just needs to be run; you only have to change the path to the pictures. The current calibration is for 640×480 resolution, so if you want a higher resolution you will have to calibrate again. The script produces a video whose frames contain the disparity maps and the (non-contrast-enhanced) pictures taken by the left camera.
