Recently I’ve been working on a research project on omnidirectional vSLAM with a ground plane constraint, a quite challenging task in machine vision. First I had to build the omni-vision system entirely on my own, as I couldn’t find anywhere to buy such a thing. I’ve worked on stereo vision depth estimation before:

The stereo vision system above is quite hard to calibrate, so I bought a Bumblebee camera from the Point Grey company (the one in my hands):

It is very easy to use and the image quality is excellent. Best of all, it supports Linux. So this time I decided to use a Point Grey Firefly MV CMOS camera for my omni-vision system (you can see its label through the transparent connecting component):

I sent my design to another company to fabricate the hyperbolic mirror and the outer structure, and carefully installed the system on top of the P3-AT, making sure that no part of the robot itself falls within the omni-camera’s field of view:

It took me a whole day to finish everything:

The next step is to build the mathematical model of the system. The general idea of my method is to use a single passive omnidirectional camera to learn the ground appearance and find obstacle edge feature points. Then, under the plane constraint, each feature point is mapped to a coordinate in the global coordinate frame. The coordinate set generated by our system has a format similar to the data coming from a 2D laser range finder, so it can be imported into toolboxes like CARMEN to do simultaneous localization and mapping.
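To make the plane-constraint idea concrete, here is a minimal Python sketch of the two mapping steps: projecting a feature ray onto the ground plane, then binning the resulting points into a laser-scan-like range array. The camera height and the fact that the ray’s depression angle is already recovered from the mirror calibration are illustrative assumptions, not my actual calibration:

```python
import math

def ground_point(depression_angle, bearing, cam_height=0.8):
    """Project a ray onto the ground plane (the plane constraint).

    depression_angle: angle of the ray below horizontal, assumed already
    recovered from the mirror calibration. cam_height is a placeholder.
    Returns (x, y) in the robot frame, in metres.
    """
    d = cam_height / math.tan(depression_angle)  # horizontal range to the feature
    return d * math.cos(bearing), d * math.sin(bearing)

def points_to_scan(points, n_beams=360, max_range=8.0):
    """Bin obstacle points into a 2D-laser-style range array, so the
    output resembles what a SLAM toolbox expects from a range finder."""
    ranges = [max_range] * n_beams
    for x, y in points:
        r = math.hypot(x, y)
        if r >= max_range:
            continue
        bearing = math.atan2(y, x) % (2 * math.pi)
        i = int(bearing / (2 * math.pi) * n_beams) % n_beams
        ranges[i] = min(ranges[i], r)  # keep the nearest obstacle per beam
    return ranges
```

A real CARMEN log additionally carries timestamps and odometry pose; this only sketches the range-array part of the conversion.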

The key to this system is generating accurate obstacle edges from a single image. I tried many methods and finally wrote a program combining Canny edge detection, texture features, and adaptive learning to detect obstacle edges (the yellow squares are Haar feature points used to calculate distance):
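For readers curious about the edge stage, here is a simplified gradient-magnitude detector, a stand-in sketch for the Canny step only (my actual program also uses texture features and adaptive learning, and real Canny adds non-maximum suppression and hysteresis, none of which is reproduced here):

```python
import numpy as np

def edge_map(gray, thresh=0.25):
    """Simplified edge detector: Sobel gradients, normalised magnitude,
    fixed threshold. A sketch of the first stage of Canny, nothing more."""
    g = np.asarray(gray, dtype=float)
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])  # Sobel x
    ky = kx.T                                                     # Sobel y
    h, w = g.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(3):           # 3x3 'valid' filtering via shifted slices
        for j in range(3):
            patch = g[i:h - 2 + i, j:w - 2 + j]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()         # normalise so the threshold is scale-free
    return mag > thresh
```

In practice an OpenCV call would replace all of this; the point is only to show which quantity (gradient magnitude) the edge decision is made on.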

Extracted ground features:

Outdoor ground extraction experiment results:

My mentor recommended registering a patent for this algorithm. Now I’m considering that it could also be used for monocular SLAM with a camera whose field of view is about 60 degrees. I’m working hard on this and will upload the experimental results here soon.

[Nov.6 Updated] I strengthened the mechanical structure and added a Canon camera:

[Nov.12 Updated] More testing videos; I’ll soon release the rendered map generation results.

[Dec.6 Updated] Here’s the map of our laboratory using my SLAM algorithm:

Feature points obtained from a single frame in the omni-vision video stream:

Yesterday I went outside and tested the improved navigation algorithm; it’s very robust now.

Next, I’m going to work on my graduation thesis project, mainly focusing on stereo-based autonomous navigation. The depth recovered from the stereo camera is quite accurate within 25 meters, so I’m confident the system will be much better than the current monocular omnidirectional navigation system. My final goal is to place the P3-AT at the south gate of my university, give it a GPS coordinate, and let it guide itself to the north gate without global maps or GPS checkpoints. There’s a lot of work to do on stereo SLAM, path planning, traversable area segmentation, and robot dynamics control. Tomorrow I’m going to install the Bumblebee2 stereo camera.
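A range cutoff like 25 meters is consistent with the usual first-order stereo error model, where depth uncertainty grows quadratically with range. A quick sketch; the baseline, focal length, and half-pixel disparity error below are illustrative assumptions, not calibrated Bumblebee2 values:

```python
def stereo_depth_error(z, baseline_m, focal_px, disparity_err_px=0.5):
    """First-order stereo depth uncertainty: dz = z^2 / (f * b) * dd.

    z: depth in metres, baseline_m: stereo baseline in metres,
    focal_px: focal length in pixels, disparity_err_px: matching error.
    """
    return z ** 2 / (focal_px * baseline_m) * disparity_err_px
```

With these placeholder numbers the error at 10 m is tens of centimetres but quadruples at 20 m, which is why depth beyond a few tens of metres stops being useful for mapping.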

[Dec.7 Updated]