===== Vision by stereoscopy =====

//This page has moved to [[orb:stereoscopy|orb:stereoscopy]].//
The Octanis rover will be able to visualize and render its environment using a pair of cameras. It will use the principle of stereo vision, the same one the human brain and eyes use, to recover depth information about the environment. \\
This vision subsystem is developed as a bachelor project under the supervision of the EPFL eSpace lab.
=== What is stereoscopy? ===
Stereoscopy is a technique that creates an illusion of depth from flat images, enabling a three-dimensional effect. It relies on the stereo vision principle: two images taken from slightly different angles are merged into a single representation that contains the depth information.
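For a rectified camera pair, depth follows directly from disparity, the horizontal offset of a point between the left and right images: Z = f * B / d. A minimal sketch of this relation, where the focal length, baseline and disparity values are illustrative placeholders rather than the rover's actual calibration:

<code cpp>
#include <cstdio>

// Depth from disparity for a rectified stereo pair: Z = f * B / d.
// f: focal length in pixels, B: baseline (camera separation) in meters,
// d: disparity in pixels. The values used below are placeholders.
float depthFromDisparity(float focalPx, float baselineM, float disparityPx) {
    if (disparityPx <= 0.0f) return -1.0f;  // no valid correspondence
    return focalPx * baselineM / disparityPx;
}

int main() {
    // e.g. f = 700 px, B = 0.12 m, d = 35 px  ->  Z = 2.4 m
    std::printf("depth = %.2f m\n", depthFromDisparity(700.0f, 0.12f, 35.0f));
    return 0;
}
</code>

Nearby objects produce large disparities and distant ones small disparities, which is why a disparity map encodes depth.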
=== About the libelas algorithm ===
Information I could gather about the libelas algorithm and elas_ros:
Useful links:
  * developer's website for the algorithm: http://www.cvlibs.net/software/libelas/
  * paper explaining the algorithm: https://drive.google.com/file/d/0B9tpGqn0abMhS1BjR1AzcmZnU2c/view
  * ROS wiki page for elas_ros: http://wiki.ros.org/elas_ros
  * where to find the elas_ros sources: http://rosindex.github.io/p/elas_ros/code-google-p-cyphy-elas-ros/#fuerte-overview (it appears to target an older ROS release, Fuerte, but I could not verify this given my limited ROS experience)
  * Middlebury benchmark and its dataset: http://vision.middlebury.edu/stereo/data/
The algorithm must be fed with already undistorted and rectified input images, so that correspondences are restricted to the same row in both images. The images therefore have to be pre-processed before being handed to the algorithm, and they must be in the .pgm format (grayscale images). This can be done with OpenCV or with MATLAB (so far, having worked only with pairs of photos, I used MATLAB for simplicity).
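A minimal OpenCV sketch of that pre-processing step, assuming the calibration data lives in a hypothetical calibration.yml produced by an earlier cv::stereoCalibrate run (file names and structure are illustrative, not the project's actual setup):

<code cpp>
#include <opencv2/opencv.hpp>

int main() {
    // Load the raw stereo pair as grayscale (libelas expects single-channel input).
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Camera matrices, distortion coefficients and the rotation/translation
    // between the two cameras, read from a hypothetical calibration file.
    cv::Mat K1, D1, K2, D2, R, T;
    cv::FileStorage fs("calibration.yml", cv::FileStorage::READ);
    fs["K1"] >> K1; fs["D1"] >> D1; fs["K2"] >> K2; fs["D2"] >> D2;
    fs["R"] >> R;   fs["T"] >> T;

    // Compute the rectification transforms...
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

    // ...and remap both images so that epipolar lines become horizontal rows.
    cv::Mat m1x, m1y, m2x, m2y, leftRect, rightRect;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(),  CV_32FC1, m1x, m1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, right.size(), CV_32FC1, m2x, m2y);
    cv::remap(left,  leftRect,  m1x, m1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, m2x, m2y, cv::INTER_LINEAR);

    // Save as .pgm, the grayscale format the algorithm expects.
    cv::imwrite("left_rect.pgm",  leftRect);
    cv::imwrite("right_rect.pgm", rightRect);
    return 0;
}
</code>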
The algorithm itself DOES NOT need any calibration parameters or information about the setup used to record the pictures. Such parameters are only needed for pre-processing the images (removing the distortion and rectifying) and for post-processing (retrieving a point cloud version of the disparity map).
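For the post-processing side, here is a sketch of how a disparity map can be turned into a point cloud with OpenCV, reusing the Q matrix produced by cv::stereoRectify above (again with illustrative file names, not the project's actual code):

<code cpp>
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    // Disparity map (e.g. written out by libelas) and the 4x4 reprojection
    // matrix Q from cv::stereoRectify; both file names are placeholders.
    cv::Mat disparity = cv::imread("disparity.pgm", cv::IMREAD_GRAYSCALE);
    cv::Mat Q;
    cv::FileStorage fs("calibration.yml", cv::FileStorage::READ);
    fs["Q"] >> Q;

    // Each pixel (x, y) with disparity d is reprojected to an (X, Y, Z)
    // point in the left camera frame; invalid disparities are handled.
    cv::Mat disparityF, points3d;
    disparity.convertTo(disparityF, CV_32F);
    cv::reprojectImageTo3D(disparityF, points3d, Q, /*handleMissingValues=*/true);

    // points3d is a CV_32FC3 image; e.g. the point at the image center:
    cv::Vec3f p = points3d.at<cv::Vec3f>(disparity.rows / 2, disparity.cols / 2);
    std::printf("center point: X=%.2f Y=%.2f Z=%.2f\n", p[0], p[1], p[2]);
    return 0;
}
</code>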
Using the algorithm with its default "robotics" parameter preset (defined in elas.h, lines 87 to 142, and selected in main.cpp at line 61) gives crooked images. Setting "add_corners = 1" is necessary to obtain non-crooked images. \\
A larger "ipol_gap_width" value makes the interpolation fill holes in the disparity map with an estimate based on the closest/best support point. \\
The other parameter values remain to be determined with the new cameras for best results; a sketch of how these parameters are set is given below.
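A minimal sketch of driving libelas directly with those parameters, based on the public libelas API (the ipol_gap_width value and the image buffers are placeholders, not tuned values):

<code cpp>
#include <cstdint>
#include <vector>
#include "elas.h"  // from the libelas sources

void computeDisparity(uint8_t* leftData, uint8_t* rightData,
                      int32_t width, int32_t height) {
    // Start from the built-in ROBOTICS preset defined in elas.h...
    Elas::parameters param(Elas::ROBOTICS);
    // ...then override the values discussed above.
    param.add_corners    = 1;   // avoids the "crooked" output
    param.ipol_gap_width = 30;  // placeholder: fills larger disparity holes

    // Output buffers: libelas produces one disparity map per image.
    std::vector<float> dispLeft(width * height);
    std::vector<float> dispRight(width * height);

    // dims = {width, height, bytes per line}.
    int32_t dims[3] = {width, height, width};

    Elas elas(param);
    elas.process(leftData, rightData, dispLeft.data(), dispRight.data(), dims);
}
</code>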
The main function shipped with the algorithm does not produce a merged disparity: it outputs two separate disparity maps, one for the left image and one for the right. I instead used the main.cpp code from the Middlebury benchmark, which gives a single merged disparity map of the two pictures.
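To illustrate what merging the two maps can look like, here is a generic left-right consistency check (an assumption for illustration only, not necessarily what the Middlebury main.cpp does):

<code cpp>
#include <cmath>
#include <vector>

// Keep a left-image disparity only if the right map, looked up at the
// corresponding pixel, roughly agrees; average the two where they do.
std::vector<float> mergeDisparities(const std::vector<float>& dispLeft,
                                    const std::vector<float>& dispRight,
                                    int width, int height,
                                    float tolerance = 1.0f) {
    std::vector<float> merged(width * height, -1.0f);  // -1 marks invalid
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dL = dispLeft[y * width + x];
            int xr = x - static_cast<int>(std::lround(dL));
            if (dL < 0.0f || xr < 0 || xr >= width) continue;
            float dR = dispRight[y * width + xr];
            if (std::fabs(dL - dR) <= tolerance)
                merged[y * width + x] = 0.5f * (dL + dR);
        }
    }
    return merged;
}
</code>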