2.0 Vision for autonomous locomotion

Numerous scenarios exist where it is necessary or advantageous to classify terrain at a distance from a moving, forward-facing camera. For example, image-based sensors can be used to assess and predict terrain type for the control and navigation of autonomous vehicles or robots.

Upcoming terrain may be sloping, slippery or rough, or may present other characteristics that require a platform to change speed, direction or gait in order to ensure safe and smooth motion.

BVI researchers Pui Anantrasirichai, Jeremy Burn, Iain Gilchrist and David Bull have produced an integrated framework to help solve this problem. To address issues such as motion blur and perspective distortion, the team has developed robust texture features that improve the performance of a terrain classifier operating on monocular video captured from the viewpoint of human locomotion. The framework also takes account of gait: probabilities of path consistency are used to improve terrain-type estimation over successive frames. This research is particularly important for biped robots and for humans with visual impairments.
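To make the overall pipeline concrete, the sketch below illustrates the general pattern of a per-frame texture classifier whose outputs are fused over time using a path-consistency prior. It is a minimal illustration only, not the published method: the local binary pattern features, the probabilistic SVM, the terrain labels and the hand-tuned Markov transition matrix are all assumptions standing in for the team's robust texture features and gait-dependent consistency model.

```python
# Illustrative sketch only (assumed components, not the authors' method):
# LBP histograms stand in for the robust texture features, a probabilistic
# SVM for the terrain classifier, and a hand-tuned Markov transition prior
# for the path-consistency probabilities. Labels below are hypothetical.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

TERRAIN_CLASSES = ["hard_flat", "soft", "rough", "sloped"]  # hypothetical labels

def texture_feature(gray_patch, points=8, radius=1):
    """Compute a uniform-LBP histogram for one grayscale terrain patch."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_classifier(features, labels):
    """Fit a probabilistic classifier on labelled patch features."""
    clf = SVC(probability=True)  # probability outputs enable temporal fusion
    clf.fit(features, labels)
    return clf

# Path-consistency prior: terrain type rarely changes between consecutive
# frames along the walking path, so favour staying in the same class.
# This matrix is an assumed, hand-tuned placeholder (rows sum to 1).
TRANSITION = np.full((4, 4), 0.05) + np.eye(4) * 0.80

def smooth_predictions(frame_probs):
    """Forward pass of a simple Markov chain over per-frame class posteriors."""
    belief = np.asarray(frame_probs[0], dtype=float)
    smoothed = [belief]
    for probs in frame_probs[1:]:
        belief = probs * (TRANSITION.T @ belief)  # combine observation and prior
        belief /= belief.sum()
        smoothed.append(belief)
    return np.array(smoothed)
```

In this pattern, each frame's classifier posterior (`clf.predict_proba`) is passed through `smooth_predictions`, so an isolated misclassification on a blurred frame is outweighed by the consistent evidence from neighbouring frames along the path.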

Further details are available on the current research pages for both Terrain analysis for robotics and Bio-inspired visual framework for autonomous locomotion.