P Babu | Anna University


Papers by P Babu

A visual navigation system for autonomous land vehicles

IEEE Journal on Robotics and Automation, 1987

A modular system architecture has been developed to support visual navigation by an autonomous land vehicle. The system consists of vision modules performing image processing, three-dimensional shape recovery, and geometric reasoning, as well as modules for planning, navigating, and piloting. The system runs in two distinct modes, bootstrap and feedforward. The bootstrap mode requires analysis of entire images to find and model the objects of interest in the scene (e.g., roads). In the feedforward mode (while the vehicle is moving), attention is focused on small parts of the visual field, as determined by prior views of the scene, to continue to track and model the objects of interest. General navigational tasks are decomposed into three categories, all of which contribute to planning a vehicle path. They are called long-, intermediate-, and short-range navigation, reflecting the scale to which they apply. The system has been implemented as a set of concurrent communicating modules and used to drive a camera (carried by a robot arm) over a scale-model road network on a terrain board. A large subset of the system has been reimplemented on a VICOM image processor and has driven the DARPA Autonomous Land Vehicle (ALV) at Martin Marietta's test site in Denver, CO.
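The two-mode design described above (full-image bootstrap analysis, then region-focused feedforward tracking while the vehicle moves) can be illustrated with a minimal sketch. All names below (TrackedObject, detect_objects_full_frame, update_objects_in_regions) are hypothetical assumptions for illustration, not the paper's actual modules, which ran as concurrent communicating processes.

```python
# Minimal sketch of a bootstrap/feedforward mode switch, assuming a
# hypothetical per-frame loop. The paper's real system used concurrent
# vision, planning, navigating, and piloting modules; this only shows
# the attention-focusing idea from the abstract.

from dataclasses import dataclass


@dataclass
class TrackedObject:
    """A modeled scene object (e.g., a road) with a predicted image region."""
    label: str
    region: tuple  # (x, y, width, height) predicted from prior views


def detect_objects_full_frame(image):
    """Bootstrap mode: analyze the entire image to find objects of interest."""
    # Placeholder: a real system would run segmentation, 3-D shape
    # recovery, and geometric reasoning here.
    return [TrackedObject("road", (0, 0, len(image[0]), len(image)))]


def update_objects_in_regions(image, objects):
    """Feedforward mode: re-examine only small regions predicted from prior views."""
    for obj in objects:
        x, y, w, h = obj.region
        # Placeholder: crop image rows y..y+h, cols x..x+w and refine the model.
    return objects


def navigation_loop(frames):
    """Bootstrap on the first frame, then feedforward while 'moving'."""
    tracked = []
    for i, frame in enumerate(frames):
        if not tracked:  # no scene models yet: full-image bootstrap pass
            tracked = detect_objects_full_frame(frame)
        else:            # models exist: focus attention on predicted regions
            tracked = update_objects_in_regions(frame, tracked)
        print(f"frame {i}: tracking {[o.label for o in tracked]}")


if __name__ == "__main__":
    # Two tiny stand-in "images" (nested lists) just to exercise the loop.
    frames = [[[0] * 8 for _ in range(6)] for _ in range(2)]
    navigation_loop(frames)
```

The design choice the abstract highlights is that feedforward mode avoids re-analyzing whole images: prior views predict where each object should appear, so only those regions are processed on subsequent frames.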

New Microsoft Office Word Document (2)
