Category: Robots


The DARPA Robotics Challenge

The DARPA Robotics Challenge (DRC) is a prize competition funded by the Defense Advanced Research Projects Agency. Running from 2012 to 2015, it aims to develop semi-autonomous ground robots that can perform complex tasks in dangerous, degraded environments.

I’ve been working on the DARPA Robotics Challenge since December 2012 as part of the University of Delaware team, focusing on developing vision algorithms for Event 1, autonomous driving:

Our platform is called DRC-Hubo Beta, a modified version of the KAIST Hubo2 robot with hardware retrofits and software algorithms for autonomy. Our team consists of ten universities focusing on seven different events, including driving (UD), rough terrain walking (OSU), debris removal (GT), door opening (Swarthmore), ladder climbing (IU & Purdue), valve turning (WPI) and hose installation (Columbia):


This set of complex tasks requires seamless integration of vision, motion planning and hardware control. Therefore, Drexel University invites students and professors from the different institutions to join the ‘DRC Boot Camp’ and work together at the Philadelphia Armory for about six weeks over the summer.


My work mainly focuses on machine vision, including CPU/GPU-based stereo matching and point cloud generation, CAD/KinFu-based model building, RViz interface development and so on. In the last six months, many new software packages have been developed and pushed to the ROS repository, and many new hardware components and control methods have been designed and integrated into the project. Considering the influence of the past DARPA Grand Challenges, this event may become a turning point in the development of robotic technology (especially humanoids) over the next 5 to 10 years. The KAIST Hubo robot has been in development since 2003, and its better-known counterpart, the Honda ASIMO, represents decades of research on bipedal walking:

However, these robots emphasize mechanical design and manufacturing precision more than visual information processing and real-time closed-loop control/stabilization. It was not until recently that we saw the Boston Dynamics ATLAS (the Agile Anthropomorphic Robot), provided as Government Furnished Equipment to the Track B teams of the DARPA Robotics Challenge program:

This robot shows a strong combination of sensing and control, and it took Boston Dynamics less than a year to develop the prototype from PETMAN after receiving the $10.9 million contract from DARPA in August 2012. This is an example of how the DRC project speeds up robotic hardware development. Hopefully, with the creativity and exploration of the different teams and organizations throughout this project, robotics technology can really be pushed forward.

[Updated 12/31/13]

From the 2013 DARPA Robotics Challenge trials:

Carnegie Mellon University — CMU Highly Intelligent Mobile Platform (CHIMP):


MIT/Boston Dynamics — The Agile Anthropomorphic Robot (ATLAS):


NASA Johnson Space Center — Valkyrie (R5):


Boston Dynamics — Legged Squad Support Systems (LS3):


Google driverless car:


Videos:

Video summary of our team:

[Updated 3/7/15]

At the UNLV Howard R. Hughes College of Engineering with DRC-Hubo. Congratulations on qualifying for the DRC Finals!


[Updated 6/7/15]

The DRC Finals rules do not allow protective tether cables, which leads to:

Congratulations to DRC-Hubo@UNLV (whose vision system I worked on) for the 8th place finish at the DRC Finals! Congratulations to DRC-Hubo @ KAIST for taking 1st place and the $2M prize!


A New Era of Robotics is coming!


A full overview of the DRC:

Quadcopter UAV project

Recently I’ve been working on a quadcopter UAV project. The on-board electronics include a 3-axis gyro, GPS/INS, an AHRS, a 5.8 GHz FPV transmitter, a GoPro Hero camera and a small Gumstix computer.

After I finish assembling and tuning all the parts, I’ll test autonomous flight outdoors and upload the HD videos recorded by the GoPro Hero camera. Future experiments include vision-based auto-landing on moving vehicles, large-scale 3D terrain generation using SfM, and vision-based tracking of a specific ground vehicle. So please check my blog for the exciting videos to come!

[Updated 5/12/12]

Added the compass/IMU/GPS, central controller, 5.8 GHz FPV system and AHRS; ready to tune the controller next Wednesday.

Thanks to Nate at Hobby Hut for helping me tune the quadcopter. If you live in or near the tri-state area and have problems with RC gear, go to Hobby Hut in Eagleville, PA and ask for Nate; he’s very helpful.

[Updated 11/15/12]

Attended a meeting at Villanova University with local AUVSI chapter members. This was a very good chance to meet and talk with local people involved in UAV activities.

Thanks to Mr. Carl Bianchini for the presentation and to Mr. Steven Matthews for the opportunity. I will be giving a presentation on Jan. 17 or 24 at 7 pm in the CEER 210 Conference Room, Villanova University.

[Updated 6/1/13]

How are you guys doing? I was busy preparing for the PhD prelim this semester and working on the DARPA Robotics Challenge, so I haven’t had much time to test the quadcopter. Today we tested RTL (return to launch) successfully:

I will add more videos and pictures later.

[Updated 6/4/13]

Took a flight over the UD campus:


Rectification test at the UD football field:

Raw image:

Rectified:


Raw image:


Rectified:

Probably I’ll try to mount a second GoPro and do stereo matching plus visual-odometry-based large terrain reconstruction. This method is simple and works well, and then we can generate a 3D model of UD from the point cloud files.
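As a side note, here is a minimal sketch of the undistortion/rectification step applied to the GoPro frames, assuming a standard pinhole-plus-distortion calibration; the camera matrix and distortion coefficients below are placeholders, not my actual calibration values:

```python
import cv2
import numpy as np

# Placeholder intrinsics from a hypothetical chessboard calibration of the GoPro;
# substitute the real values obtained from cv2.calibrateCamera().
K = np.array([[870.0, 0.0, 960.0],
              [0.0, 870.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, -0.02])  # k1, k2, p1, p2, k3

img = cv2.imread("raw_frame.jpg")
h, w = img.shape[:2]

# Compute an optimal new camera matrix, then build the undistortion maps once
# and reuse them for every video frame.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, new_K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("rectified_frame.jpg", rectified)
```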

[Updated 6/5/13]

Took a flight at the Chesapeake Bay:

[Updated 6/8/13]

Tested in Winterthur:

Testing the UAV for scientific research purposes; the maximum altitude on the AHRS is set to 350 ft to stay under the FAA/AMA 400 ft limit.

[Updated 6/25/13]

Yesterday I went to the DARPA Robotics Challenge boot camp orientation at Drexel University (I’ll be working on the DRC at Drexel throughout this summer). I did a 360-degree spin at 400 ft over the Drexel campus and generated a fan panorama directly from the video input (click to see the full-resolution images):


I also turned this into a polar panorama:


This is actually not an easy task. As you can see from the video, the quadcopter has to tilt its body to fight the wind, and since I don’t have a gimbal, the camera view is not level. The rotation axis is also somewhere outside of the quadcopter itself, which makes this much harder than putting a camera perfectly level on a tripod and spinning it around its own Z axis. Therefore, feature matching is needed to recover the camera pose, and I can use that information to unwrap the images and blend them together. Later I will do stereo matching and reconstruction, so any experiments on UAV multi-view geometry and camera calibration at this point are helpful.
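To give a rough idea of that step, here is a simplified sketch of registering two overlapping frames with feature matching and a homography before blending; it assumes the scene is far enough away for a single homography to hold, and it is not the exact unwrapping/blending pipeline I use:

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_000.jpg")  # reference frame
img2 = cv2.imread("frame_010.jpg")  # overlapping frame from later in the spin

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the homography mapping frame 2 into frame 1 and warp it onto a
# wider canvas; a real panorama would then blend the seam.
H, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, 3.0)
h, w = img1.shape[:2]
canvas = cv2.warpPerspective(img2, H, (w * 2, h))
canvas[0:h, 0:w] = img1
cv2.imwrite("pair_mosaic.jpg", canvas)
```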

[Updated 8/4/13]

Initial results from my SfM and dense mesh reconstruction algorithm (all input images are from the YouTube video above):

Center City Philadelphia:


UPenn Franklin Field area:


As you can see, the algorithm only works well on nearby buildings. The disparity of far-away pixels is too small for the algorithm to compute depth. Another problem is that I’m only doing self-spins with the UAV, so the camera motion is almost pure rotation with very little translation (baseline), which makes it very hard to recover the 3D geometry of the feature points. Later I’ll fly the UAV around a specific target with a higher-resolution camera and see what happens.
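To make the baseline problem concrete, here is a hedged two-view sketch: the relative pose is recovered from matched features, the recovered translation is only known up to scale, and it is the tiny baseline of a self-spin that makes the triangulated depths blow up. The camera matrix and point arrays below are placeholders:

```python
import cv2
import numpy as np

# Placeholder intrinsics and matched pixel coordinates from two frames;
# in practice these come from calibration and feature matching.
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
pts1 = np.random.rand(100, 2).astype(np.float32) * [1280, 720]
pts2 = pts1 + np.float32([5.0, 0.0])  # stand-in for tracked feature motion

# Recover relative pose (R, t) from the essential matrix; t has unit norm,
# so the true baseline is unknown without an external scale (GPS, IMU, ...).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate: with a near-zero baseline the rays are almost parallel and the
# depth estimates become extremely noisy, which is the self-spin failure mode.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print("mean triangulated depth (scale-free units):", np.mean(pts3d[:, 2]))
```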

[Updated 8/25/13]

A Pennsylvania-based company has shown interest in my software. I will improve the algorithm and test it on their UAV. Once more accurate 3D mesh results are generated, we can prepare materials to apply for a US patent.

[Updated 10/1/14]

The parameters of the 3D reconstruction program are almost perfectly tuned. Here are some input sequences and the reconstruction results for the University of Delaware main campus:

Sample input sequence:


Output 3D mesh model:

Top-down view:


You can view the 3D model of UD on Verold or Sketchfab.

[Updated 10/2/15]
New model with improved texture mapping:

Open source packages used:

Bundler (structure from motion)
PMVS/CMVS (dense multi-view stereo)
Poisson Surface Reconstruction (see the sketch below)
Point cloud registration
Labeling of buildings on the point cloud:
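For the Poisson step in the list above, here is a minimal sketch using Open3D's implementation as a stand-in for the tool I actually ran; file names and parameters are illustrative only:

```python
import numpy as np
import open3d as o3d

# Load the dense point cloud produced by the SfM/MVS stage (file name is illustrative).
pcd = o3d.io.read_point_cloud("ud_campus_dense.ply")

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=20)

# Screened Poisson surface reconstruction; larger depth gives a finer mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Drop poorly supported (low-density) vertices, then save the mesh.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("ud_campus_mesh.ply", mesh)
```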

My Rovio fire extinguisher mod

This is my first project: modifying a Rovio into an automatic fire extinguisher. I originally posted it on RoboCommunity, the official forum of the WowWee company: http://www.robocommunity.com/forum/thread/15894/Rovio-fire-extinguisher

The thread on RoboCommunity gained wide media interest; the following websites also covered the project:

http://botropolis.com/2009/04/rovio-fire-extinguisher-mod/

http://www.slashgear.com/rovio-fire-extinguisher-mod-2842083/

http://www.makeclub.org/ideas/items/view/7579

http://www.engadget.com/2009/04/30/rovio-finds-new-purpose-in-life-with-fire-extinguisher-mod/

http://gizmodo.com/5231474/rovio-modded-to-fight-blazing-candles

Report in The New York Times, Nov. 4, 2010, page B10:

View full story:

http://www.nytimes.com/2010/11/04/technology/personaltech/04basics.html

Report on my university’s home page:

http://xjtunews.xjtu.edu.cn/xssh/2009-06/1245376865d23550.shtml

http://xjtunews.xjtu.edu.cn/xssh/resource/h000/h49/img200906191000192.jpg

I’ve done a lot of work on machine vision algorithms and microelectronics. The WowWee Rovio is a wonderful platform for experiments, and I modified its shell as well as its internal circuitry to turn it into a fully automatic, vision-guided fire-extinguisher robot. I made an electromagnetic valve and added it to the Rovio; the bottle on the right is filled with CF2ClBr at 4 atm:

Then I used AdaBoost + SVM to train the Rovio. I had worked on visual flame tracking before, when I was asked to develop surveillance software to track smoke and flames. I tested many different algorithms and finally applied the method used in face detection to get a robust result: Haar-like rectangular features computed from an integral image to describe flame appearance, with an SVM in place of the cascade classifier. The experimental results are quite good, even though Rovio’s camera is not that stable or reliable:
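Here is a minimal sketch of that idea, with a made-up window size, feature layout and training data; the real detector used many more AdaBoost-selected Haar features and far more samples:

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

WIN = 24  # detection window size (illustrative)

def haar_features(patch):
    """A few two-rectangle Haar-like features computed from the integral image."""
    ii = cv2.integral(patch)  # (WIN+1, WIN+1) summed-area table

    def rect_sum(x, y, w, h):
        return float(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

    half = WIN // 2
    return np.array([
        rect_sum(0, 0, WIN, half) - rect_sum(0, half, WIN, half),   # top vs bottom
        rect_sum(0, 0, half, WIN) - rect_sum(half, 0, half, WIN),   # left vs right
        rect_sum(half // 2, half // 2, half, half) - rect_sum(0, 0, WIN, WIN) / 4.0,
    ])

# Placeholder training data: grayscale 24x24 patches of flame / non-flame.
flame_patches = [np.random.randint(0, 255, (WIN, WIN), np.uint8) for _ in range(50)]
other_patches = [np.random.randint(0, 255, (WIN, WIN), np.uint8) for _ in range(50)]
X = np.array([haar_features(p) for p in flame_patches + other_patches])
y = np.array([1] * len(flame_patches) + [0] * len(other_patches))

clf = LinearSVC().fit(X, y)

# Classify a new patch from the Rovio camera stream.
patch = np.random.randint(0, 255, (WIN, WIN), np.uint8)  # stand-in for a real crop
print("flame" if clf.predict([haar_features(patch)])[0] == 1 else "no flame")
```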

And then… the Rovio became an automatic fire extinguisher! Rovio is just wonderful! The following are snapshots from the experiment:

Recently I’ve been doing a research project on omni-vSLAM with a ground plane constraint, a quite challenging task in machine vision. First I had to build an omni-vision system entirely on my own, as I couldn’t find anywhere to buy one. I had worked on stereo vision depth estimation before:

The stereo vision system above is quite hard to calibrate, so I bought a Bumblebee camera from the Point Grey company (shown in my hands):

It is very easy to use and the image quality is excellent. Best of all, it supports Linux. So this time I decided to use a Point Grey Firefly MV CMOS camera for my omni-vision system (you can see its label through the transparent connecting component):

I sent my design to another company to build the hyperbolic mirror and the outer structure, and carefully installed the system on top of the P3-AT, because I had to make sure that no part of the robot itself is within the view of the omni-camera:

It took me a whole day to finish everything:

The next step is to build the mathematical model of the system. The general idea of my method is to use a single passive omni-directional camera to learn the ground appearance and find the obstacle edge feature points. Then, under the ground plane constraint, each feature point is mapped to a coordinate in the global coordinate frame. The coordinate set generated by our system has a format similar to the data coming from a 2D laser range finder and can be imported into toolboxes like CARMEN to do simultaneous localization and mapping.
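A simplified sketch of that ground-plane mapping, assuming a perspective camera and four known point correspondences on the floor; the real system uses the catadioptric (hyperbolic mirror) model, but the idea of back-projecting an image point onto the ground plane is the same. All numbers below are placeholders:

```python
import cv2
import numpy as np

# Four image points (pixels) and their measured positions on the floor (meters),
# e.g. from markers laid out during calibration; values are placeholders.
img_pts = np.float32([[310, 420], [650, 415], [700, 600], [280, 610]])
ground_pts = np.float32([[1.0, 2.0], [-1.0, 2.0], [-0.5, 1.0], [0.5, 1.0]])

# Homography from the image plane to the ground plane (valid only for points
# that actually lie on the floor).
H, _ = cv2.findHomography(img_pts, ground_pts)

def pixel_to_ground(u, v):
    """Map a detected obstacle-edge pixel to robot-frame floor coordinates."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return p[0, 0]  # (x, y) in meters

# Each obstacle-edge feature point becomes a 2D range reading, similar in
# format to a laser scan, which can then be fed to a SLAM back end.
edge_pixels = [(400, 500), (420, 480), (455, 470)]  # stand-ins for detections
scan = np.array([pixel_to_ground(u, v) for u, v in edge_pixels])
ranges = np.hypot(scan[:, 0], scan[:, 1])
print(scan, ranges)
```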

The key to this system is to generate exact obstacle edges using only one image. I tried many methods and finally wrote the program using Canny edge detection, texture features and adaptive learning methods to detect obstacle edges (the yellow squares are Haar feature points used to calculate distance):
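A rough sketch of the appearance part, using a hue/saturation histogram of a seed floor region plus Canny edges instead of my exact texture and adaptive-learning features; coordinates and thresholds are placeholders:

```python
import cv2
import numpy as np

frame = cv2.imread("omni_frame.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Learn the ground appearance from a seed region assumed to be free floor
# (the area just in front of the robot); coordinates are placeholders.
seed = hsv[500:560, 280:440]
hist = cv2.calcHist([seed], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the histogram: high values mean ground-like pixels.
ground_prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)
ground_mask = cv2.threshold(ground_prob, 50, 255, cv2.THRESH_BINARY)[1]
ground_mask = cv2.morphologyEx(ground_mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

# Combine with Canny edges so mainly edges next to the ground region remain
# as obstacle-boundary candidates.
edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 80, 160)
boundary = cv2.bitwise_and(edges, cv2.dilate(ground_mask, np.ones((3, 3), np.uint8)))
ys, xs = np.nonzero(boundary)
print("candidate obstacle-edge pixels:", len(xs))
```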

Extracted ground features:

Outdoor ground extraction experiment results:

My mentor recommended registering a patent for this algorithm. I’m also considering that the method could be used for monocular SLAM with a camera that has a field of view of about 60 degrees. I’m working hard on this and I’ll upload the experimental results here soon.

[Nov.6 Updated] I strengthened the mechanical structure and added a Canon camera:

[Nov.12 Updated] More testing videos; I’ll soon release the rendered map generation results.

[Dec.6 Updated] Here’s the map of our laboratory generated by my SLAM algorithm:

Feature points obtained from a single frame in the omni-vision video stream:

Yesterday I went outside and tested the improved navigation algorithm; it’s very robust now.

Next, I’m going to work on my graduation thesis project, mainly focusing on stereo-based autonomous navigation. The depth recovered from the stereo camera is quite accurate within 25 meters, so I’m confident the system will be much better than the current monocular omni-directional navigation system. My final goal is to place the P3-AT at the south gate of my university, give it a GPS coordinate, and let it guide itself to the north gate without global maps or GPS checkpoints. There’s a lot of work to do on stereo SLAM, path planning, traversable area segmentation and robot dynamics control. Tomorrow I’m going to install the Bumblebee2 stereo camera.
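To sanity-check that 25-meter figure, the usual stereo range-error relation is dZ ≈ Z^2 * dd / (f * B). A quick calculation with assumed Bumblebee2-like numbers (roughly a 12 cm baseline; the focal length and disparity noise are guesses, not the real calibration):

```python
# Stereo range error: dZ = Z^2 * dd / (f * B)
f_px = 800.0      # focal length in pixels (assumed)
baseline = 0.12   # baseline in meters (roughly the Bumblebee2 spacing)
dd = 0.25         # sub-pixel disparity uncertainty (assumed)

for z in [5.0, 10.0, 25.0, 50.0]:
    dz = z ** 2 * dd / (f_px * baseline)
    print(f"Z = {z:4.0f} m  ->  depth error ~ {dz:5.2f} m ({100 * dz / z:.1f}%)")
```

Under these assumptions the error at 25 m is on the order of one to two meters, which is consistent with treating 25 m as the useful range.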

[Dec.7 Updated]

Self-made R2-D2 telepresence robot

Inspired by Rovio, I decided to build a low-cost telepresence robot that processes audio and visual information on an upper computing unit. I contacted some of my friends and classmates from different majors and formed a group to start the build.

First we designed the outward appearance, the inner circuits and the mechanical structure:

Then I used a CNC machine to build the chassis and installed three AC gearmotors; the output power and gear reduction ratio were carefully calculated:
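For reference, a back-of-the-envelope version of that sizing calculation; the robot mass, wheel radius, target acceleration and motor data below are assumptions, not the actual numbers we used:

```python
import math

# Assumed design targets and motor data (placeholders).
mass = 20.0           # robot mass, kg
wheel_radius = 0.08   # m
target_accel = 0.5    # m/s^2
top_speed = 1.0       # m/s
rolling_coeff = 0.02  # rolling resistance coefficient
motor_torque = 0.15   # continuous motor torque, N*m
motor_rpm = 3000.0    # rated motor speed

# Force and torque needed at the two drive wheels together.
force = mass * (target_accel + rolling_coeff * 9.81)
wheel_torque = force * wheel_radius / 2.0           # per drive wheel

gear_ratio = wheel_torque / motor_torque
wheel_rpm = top_speed / (2 * math.pi * wheel_radius) * 60.0
max_ratio = motor_rpm / wheel_rpm                   # ratio must not exceed this

print(f"needed torque per wheel: {wheel_torque:.2f} N*m")
print(f"minimum gear ratio: {gear_ratio:.1f}, maximum for top speed: {max_ratio:.1f}")
```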

We made the amplifier and optocoupler input drive circuits:

The bottle on the left is a micro fire-extinguisher unit; I want to install it in the robot so it can use its own IP camera to recognize a fire and extinguish it automatically.

I also added a laser sensor, a fischertechnik robotic arm, a microphone and two speakers, and finished the PIC controller and the wireless data transmission module:

Finally I finished the programs and visual navigation algorithms:

Test drive:

Fully automatic visual flame detection:

Obstacle avoidance:

Demo video here:


Introducing my new robot “Black Swan”

Recently I’ve been paying more attention to hardware building. I need a robot that is agile and fast, with a small turning radius and long endurance.

I modified a P3-DX robot base and added a stereo cam to my setup.

Inserting the controller into the base:

The main controller board:


Robot base and main power motherboard:

Upper structure:
Nearly finished:
Differential drive and a caster wheel; it looks like DARPA’s LAGR robot platform:
 

[Updated Feb.28, 2011]

I finished the program for real-time disparity map generation:
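A minimal sketch of that step with OpenCV's semi-global matcher, assuming the left/right images are already rectified; the matcher parameters are illustrative rather than the tuned values I used:

```python
import cv2

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching on rectified grayscale images.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=96,          # must be a multiple of 16
    blockSize=7,
    P1=8 * 7 * 7,
    P2=32 * 7 * 7,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# Disparities are returned as fixed-point values scaled by 16.
disparity = sgbm.compute(left, right).astype("float32") / 16.0

# Normalize for display.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```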

[Updated March 11, 2011]

Now the program can do real-time point cloud generation and camera pose estimation:
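And a sketch of turning a disparity map into a point cloud with the reprojection matrix Q; here Q is written out by hand from an assumed focal length, principal point and baseline instead of being taken from cv2.stereoRectify:

```python
import cv2
import numpy as np

# Assumed rectified-camera parameters (placeholders).
f = 800.0                # focal length, pixels
cx, cy = 320.0, 240.0    # principal point
baseline = 0.12          # meters

# Reprojection matrix mapping (u, v, d, 1) to homogeneous (X, Y, Z, W),
# so that Z = f * baseline / d for positive disparities.
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0, f],
                [0, 0, 1.0 / baseline, 0]])

disparity = np.load("disparity.npy")           # float32 disparity map, e.g. from the SGBM step
points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 array of 3D points
colors = cv2.imread("left_rect.png")

# Keep only pixels with a valid disparity.
mask = disparity > disparity.min()
cloud = np.hstack([points[mask], colors[mask]])
print("point cloud size:", cloud.shape)
```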

[Updated April 6, 2011]

Outdoor experiment result:

The Challenge Cup Competition is the most well-known and authoritative academic competition for university students in China. I attended the eleventh Challenge Cup competition at Beijing University of Aeronautics & Astronautics:

The photo above shows the main venue of the competition; there’s a tarmac not far away, where I found a Northrop P-61 Black Widow:

They held a grand opening ceremony:

Six ministers and two hundred scientists from different fields attended the opening ceremony:

Looking out of the window, the Bird’s Nest Stadium is just a few blocks away:

Late in the evening I walked there and took some photos:

The next day was show time. The following is a photo taken with my team members and our robot:

We were the only team representing Xi’an Jiaotong University in the ECE section. Here are two entries from the mechanical engineering section:

Patrolbot from Beijing Technology University:

VTOL demonstrator model from Beijing University of Aeronautics & Astronautics:

Finally, we won the national third prize. The three seals with the national emblem of China are from the Ministry of Education (MOE), the Ministry of Industry and Information Technology (MIIT) and the Beijing city government: