Sunday, March 24, 2013

Stereo Progress

Recently I've been spending time processing the data from our GoPro stereo cameras. I've written most of the algorithm/backend processing we need to reconstruct cistern geometry from stereo images, so most of the remaining work is processing the raw images captured in the cisterns into something called a disparity map. I'll explain more about disparity maps later.

Our GoPro cameras:


I began by working on a set of pictures that were taken above the water surface but were still part of one of the cisterns. I have processed a ton of photos now, and this is just one example.

Unfortunately, our GoPro cameras have a bit of a fish-eye lens, which can lead to some complications below the water surface. Last year, Tim and I corrected for this fish-eye effect by dropping a checkerboard pattern into the pool, taking pictures, and flattening the image until the checkerboard was aligned with the dominant axes on the screen. A few other steps were applied to each image before rectification. To the left is an original image from the right GoPro camera.
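For illustration, here's a minimal sketch of a simple two-coefficient radial correction in MATLAB. The function name and the k1/k2 values are placeholders of mine; our real correction comes out of the checkerboard procedure rather than hand-picked coefficients.

    function out = undistortRadial(img, k1, k2)
    % Sketch of a simple two-coefficient radial (fish-eye-style) correction.
    % img is assumed to be uint8; k1/k2 would come from calibration.
    [h, w, nc] = size(img);
    cx = w / 2;  cy = h / 2;          % assume distortion centered in the image
    [xu, yu] = meshgrid(1:w, 1:h);    % coordinates of each output pixel
    xn = (xu - cx) / w;               % normalized coordinates
    yn = (yu - cy) / h;
    r2 = xn.^2 + yn.^2;
    s = 1 + k1 * r2 + k2 * r2.^2;     % forward radial distortion model
    xs = xn .* s * w + cx;            % where each output pixel samples from
    ys = yn .* s * h + cy;
    out = zeros(h, w, nc, 'uint8');
    for c = 1:nc
        out(:, :, c) = uint8(interp2(double(img(:, :, c)), xs, ys, 'linear', 0));
    end
    end

With a negative k1, pixels near the edge sample from points closer to the image center, which spreads the compressed edges of a barrel-distorted image back out; interp2 resamples the original image at those corrected locations.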




After some basic preprocessing, we attempt to align the stereo images along their epipolar lines in order to aid in stereo matching. To do this, I am using a freely available MATLAB implementation.



Here is a composite image with the left image in red and the right image in cyan.
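Building a composite like this takes only a couple of lines; here's a minimal sketch, where leftImg and rightImg are placeholder names for the input pair:

    % Red/cyan composite: the left image drives the red channel,
    % the right image drives green and blue (which render as cyan).
    L = rgb2gray(leftImg);
    R = rgb2gray(rightImg);
    anaglyph = cat(3, L, R, R);
    imshow(anaglyph);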









To rectify the images, we search for 'significant features' that can be matched between the left and right images. Then we apply a transform to each image in order to line the matched features up with one another. To the right are some feature identification pics. Note that not every feature has a match in the other image; matching them requires some more processing.



Some SURF features and their weights, shown in green circles.

Once features are identified and refined, we match significant features between the left and right images. Notice how the features in each image are slightly offset from each other.
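Here's roughly what that looks like with the SURF functions in MATLAB's Computer Vision System Toolbox; this is a sketch with default parameters, not our exact settings, and it reuses the grayscale L and R from the composite sketch above:

    % Detect SURF features in each grayscale image.
    ptsL = detectSURFFeatures(L);
    ptsR = detectSURFFeatures(R);

    % Extract a descriptor at each detected point.
    [featL, validL] = extractFeatures(L, ptsL);
    [featR, validR] = extractFeatures(R, ptsR);

    % Match descriptors; not every feature finds a partner.
    pairs = matchFeatures(featL, featR);
    matchedL = validL(pairs(:, 1));
    matchedR = validR(pairs(:, 2));

    % Visualize the matches and their left/right offsets.
    showMatchedFeatures(L, R, matchedL, matchedR, 'montage');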




Once we have features matched, we can apply a 2D affine transform to align the images along their epipolar lines. This makes the disparity-mapping algorithm easier later on (now we only have to search for similar features to our left and right, rather than up and down!).
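We rely on the freely available implementation mentioned earlier for the real transform, but as a sketch of the idea, the toolbox can estimate an affine transform directly from the matched points above (RANSAC throws out the bad matches along the way):

    % Fit a 2D affine transform mapping right-image features onto
    % their left-image matches, rejecting outliers with RANSAC.
    [tform, inlierR, inlierL] = estimateGeometricTransform( ...
        matchedR, matchedL, 'affine');

    % Warp the right image into the left image's frame so that
    % matched features land on (nearly) the same rows.
    Rwarped = imwarp(R, tform, 'OutputView', imref2d(size(L)));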





Once the images are rectified, you can save the red and cyan channels as the new left and right inputs for the disparity-matching algorithm. When we run the disparity algorithm, we get something that looks like this. Red things are close, blue things are far!
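I won't claim this is the exact algorithm we run, but as an illustrative stand-in, the toolbox's disparity function produces a map like this from a rectified grayscale pair (the disparity range below is an assumed value that has to be tuned per scene):

    % Compute a disparity map from the aligned pair.
    dMap = disparity(L, Rwarped, 'DisparityRange', [0 64]);

    % Display with a color map: warm colors = large disparity = close.
    imshow(dMap, [0 64]);
    colormap(jet);
    colorbar;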



When you get a disparity map like this, you can do some pretty cool things. For example, you can now generate (x, y, z) points in 3D space: the x and y values come from the pixel coordinates in the original image, and the z value comes from the disparity encoded in the map's color! Here is a cool example of what I mean.
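Here's a minimal sketch of that back-projection, assuming an idealized pinhole camera; the focal length f and baseline B below are made-up placeholder values, and the real numbers come from calibration:

    f = 800;     % assumed focal length in pixels (placeholder)
    B = 0.10;    % assumed baseline between the cameras in meters (placeholder)

    [h, w] = size(dMap);
    [x, y] = meshgrid(1:w, 1:h);
    valid = dMap > 0;                  % skip pixels with no disparity

    % Depth from the standard stereo relation Z = f*B/d, then
    % back-project the pixel coordinates through the pinhole model.
    Z = (f * B) ./ double(dMap(valid));
    X = (x(valid) - w / 2) .* Z / f;
    Y = (y(valid) - h / 2) .* Z / f;

    % Color each 3D point with its original image intensity.
    scatter3(X, Y, Z, 1, double(L(valid)), 'filled');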



Today I was able to get all of this processing scripted, so now I can click two buttons and process all of the images from a deployment into disparity maps. These disparity maps will be used for some exciting things, including reintroducing medium and fine details into the cistern model that were originally omitted from the low-resolution SONAR data! This is just one half of my project, and I already have a good deal of the second half complete. Now it's time to combine the two and get some real-world results!

1 comment:

  1. Hi, which disparity-matching algorithm are you using? I have been having some trouble using the MATLAB example code with my project, and it is exceptionally slow. Your help would be much appreciated!
