Argos: Princeton University's entry in the 2009 Intelligent Ground Vehicle Competition
Related papers
Kratos: Princeton University's entry in the 2008 Intelligent Ground Vehicle Competition
Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques, 2009
In this paper we present Kratos, an autonomous ground robot capable of static obstacle field navigation and lane following. A sole color stereo camera provides all environmental data. We detect obstacles by generating a 3D point cloud and then searching for nearby points of differing heights, and represent the results as a cost map of the environment. For lane detection we merge the output of a custom set of filters and iterate the RANSAC algorithm to fit parabolas to lane markings. Kratos' state estimation is built on a square root central difference Kalman filter, incorporating input from wheel odometry, a digital compass, and a GPS receiver. A 2D A* search plans the straightest optimal path between Kratos' position and a target waypoint, taking vehicle geometry into account. A novel C++ wrapper for Carnegie Mellon's IPC framework provides flexible communication between all services. Testing showed that obstacle detection and path planning were highly effective at generating safe paths through complicated obstacle fields, but that Kratos tended to brush obstacles due to the proportional law control algorithm cutting turns. In addition, the lane detection algorithm made significant errors when only a short stretch of a lane line was visible or when lighting conditions changed. Kratos ultimately earned first place in the Design category of the Intelligent Ground Vehicle Competition, and third place overall.
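The RANSAC-with-parabolas step described above can be sketched as follows. This is a minimal illustration only, not Kratos' actual implementation; the iteration count, inlier threshold, and function names are assumptions.

```python
import numpy as np

def ransac_parabola(points, iters=200, threshold=0.05, rng=None):
    """Fit y = a*x^2 + b*x + c to 2D points with RANSAC.

    Illustrative sketch of fitting a parabola to candidate lane-marking
    points; parameter values are assumptions, not the paper's.
    """
    rng = np.random.default_rng(rng)
    x, y = points[:, 0], points[:, 1]
    best_inliers, best_coeffs = np.zeros(len(points), dtype=bool), None
    for _ in range(iters):
        # Three random points determine a candidate parabola.
        sample = rng.choice(len(points), size=3, replace=False)
        coeffs = np.polyfit(x[sample], y[sample], deg=2)
        # Points whose vertical residual is small count as inliers.
        residuals = np.abs(np.polyval(coeffs, x) - y)
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_coeffs = inliers, coeffs
    if best_coeffs is not None and best_inliers.sum() >= 3:
        # Refit on the full consensus set for a cleaner estimate.
        best_coeffs = np.polyfit(x[best_inliers], y[best_inliers], deg=2)
    return best_coeffs, best_inliers
```

Iterating this routine and removing each consensus set lets one curve be fit per visible lane line, which is the structure the abstract describes.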
Argos: Princeton University's entry in the 2009 Intelligent Ground Vehicle Competition
Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, 2010
The Princeton IGVC team consists of members of Princeton Autonomous Vehicle Engineering (PAVE), Princeton University's undergraduate student-led robotics research group. Our team builds upon PAVE's experience in robotics competitions, including participation in the 2005 DARPA Grand Challenge [1], the 2007 DARPA Urban Challenge [9] and the 2008 Intelligent Ground Vehicle Competition (IGVC) [2]. Our team placed third overall and won rookie-of-the-year in the 2008 IGVC, placing 1st, 4th and 6th in the Design, Navigation and Autonomous challenges, respectively. Argos, our entry in the 2009 IGVC, is an all-new robot that incorporates improvements over its 2008 predecessor. We believe that Argos and the Princeton team will once again be competitive in the 2009 IGVC.
Unmanned Ground Vehicles (UGVs) date back to World War II. These fully autonomous or remote-controlled robots provide many services for military purposes: deploying UGVs on the battlefield keeps soldiers out of harm's way while the vehicles navigate to target points and track paths. Over time, researchers have been encouraged to apply UGVs in other domains such as industry, road services and urban environments. The autonomy of a UGV comes from its collective sensory resources and the manipulators used to perform specialized tasks. This paper presents the design of a fully autonomous vehicle called "E500", which was built to compete in the 22nd Intelligent Ground Vehicle Competition (IGVC), held at Oakland University, Rochester, Michigan in June 2014. The E500's body and chassis are custom made. Its power plant is based on two electric scooter motors driven through Pulse Width Modulation (PWM). It receives information from a camera, a few ultrasonic range-finding sensors and a global positioning system (GPS) receiver. The vehicle also incorporates vision and navigation systems, implemented to meet the design requirements of the IGVC. The E500's vision system acquires images through a Microsoft camera and processes them on an onboard laptop. The vehicle was able to extract the features of the road, detect the white lines and the positions of obstacles, and then determine the best path to avoid collisions. A navigation algorithm was developed to achieve accuracy of up to 10 cm using a Samsung mobile phone running Android. The algorithms were tested on a grass field with two white lines and randomly distributed obstacles.
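The PWM motor drive mentioned above amounts to mapping a commanded speed to a duty cycle. The sketch below is a generic illustration of that mapping, not the E500's controller; the maximum speed, PWM period, and function name are assumed values.

```python
def pwm_duty_cycle(speed, max_speed=5.0, period_us=20000):
    """Map a signed speed command (m/s) to a PWM drive signal.

    Returns (direction, duty, on_time_us): direction is +1/-1,
    duty is clamped to [0, 1], and on_time_us is the pulse width
    for the assumed PWM period. All constants are illustrative.
    """
    direction = 1 if speed >= 0 else -1
    duty = max(0.0, min(1.0, abs(speed) / max_speed))
    on_time_us = duty * period_us
    return direction, duty, on_time_us
```

A motor driver would then switch the H-bridge with this pulse width each period, with the direction bit selecting forward or reverse.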
Autonomous Ground Vehicle, 2014
The Autonomous Ground Vehicle (AGV) is a powerful, robust, well-designed and relatively inexpensive Unmanned Ground Vehicle (UGV). The goal of this project is to develop a vehicle autonomous enough to find its own path to a required destination while avoiding obstacles. Obstacle avoidance is one of the most critical factors in the design of autonomous vehicles such as mobile robots, and reliable obstacle avoidance is a major challenge in designing intelligent vehicles capable of autonomous travel on highways. Obstacle avoidance may be divided into two parts: obstacle detection and avoidance control. Numerous methods for obstacle avoidance have been proposed, and research in this area of robotics is extensive. Five sonar sensors, mounted at different angles, are used on the AGV. Likewise, there are many ways to supply coordinates to a robot; the AGV receives its final coordinates over a serial link, computes the angle delta between its heading and the final destination, and then moves in a straight line to reach it.
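The "angle delta, then drive straight" behavior described above reduces to computing the bearing to the goal and subtracting the current heading. The sketch below shows that computation under assumed conventions (x east, y north, heading in degrees counterclockwise from the x-axis); it is an illustration, not the AGV's code.

```python
import math

def heading_error(pos, heading_deg, goal):
    """Return the angle delta (degrees, in [-180, 180)) the robot
    must turn to face the goal before driving straight toward it.

    pos and goal are (x, y) tuples; conventions are assumptions.
    """
    # Bearing from current position to the goal.
    bearing = math.degrees(math.atan2(goal[1] - pos[1], goal[0] - pos[0]))
    # Wrap the difference into [-180, 180) so the turn is minimal.
    delta = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return delta
```

A positive delta means turn counterclockwise by that amount, then drive the straight-line distance to the destination.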
Black Knight: An autonomous vehicle for competition
Journal of Field Robotics, 2004
Black Knight, the University of Central Florida's vehicle in the 11th Intelligent Ground Vehicle Competition (IGVC), competed in 2003. Finishing 5th in the navigation challenge and 10th in the autonomous challenge in its first competition proved the vehicle to be a strong competitor. The vehicle has many interesting features that contribute to its success. Its 300 lb capacity allows for two onboard full-sized computers and two 12 V marine batteries that power the computers for up to 2 hours. The vision system is not a simple reactive system; rather, it classifies its view into objects and builds a map of the territory as it learns its features while traveling. Two transformations and location data from the GPS and other sensors associate locations in the image with locations in the map. The operations of the vehicle are modeled after the typical operations of a ship: programs perform the functions of the captain, the helm, the navigator, and the engineer, while an additional program fuses sensor data from the GPS, compass, and wheel encoders. Navigation uses an adapted two-dimensional approximate cell decomposition method that satisfies the nonholonomic constraints of the vehicle and allows it to find the shortest path to the goal while avoiding all obstacles.
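Cell decomposition reduces path planning to a shortest-path search over free cells of a grid. The sketch below uses plain breadth-first search over a 4-connected occupancy grid as a minimal stand-in; the real planner also enforces nonholonomic constraints, which this illustration omits.

```python
from collections import deque

def grid_shortest_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via BFS.

    grid[r][c] == 1 marks an obstacle cell; start and goal are
    (row, col) tuples. Returns the path as a list of cells, or
    None if the goal is unreachable. A simplified illustration.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk predecessor links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

With uniform cell costs, BFS already returns a shortest path; a weighted planner would swap the queue for a priority queue.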
The DARPA grand challenge - development of an autonomous vehicle
2004
The DARPA Grand Challenge (DGC) was an opportunity to test autonomous vehicles in a competitive setting. In addition to intelligent behaviour, the participating vehicles had to exhibit ruggedness and endurance to survive the fast ride over rough terrain ("win with the software, lose with the hardware"). The SciAutonics teams decided to use compact, agile vehicles with proven mechanical designs well suited to the desert environment: four-wheel drive ensures robust controllability even on slippery ground, and a roll cage protects the vehicle components from damage in case of a collision. The control system relies primarily on a differential GPS (Starfire) and a set of inertial sensors for navigating between the given set of waypoints, while a sensor suite combining infrared laser (LIDAR) and ultrasound sensing provides obstacle avoidance and path following. This paper presents the components of the vehicle and results from driving at the DGC.
Team AnnieWAY’s Autonomous System for the DARPA Urban Challenge 2007
Springer Tracts in Advanced Robotics, 2009
This paper reports on AnnieWAY, an autonomous vehicle capable of driving through urban scenarios that successfully entered the finals of the 2007 DARPA Urban Challenge competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. Environmental perception relies mainly on a recent laser scanner that delivers both range and reflectivity measurements: range measurements provide three-dimensional scene geometry, while reflectivity measurements allow for robust lane-marker detection. Mission and maneuver planning is conducted using a hierarchical state machine that generates behavior in accordance with California traffic laws. We conclude with a report of the results achieved during the competition.
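A behavior-generating state machine of the kind mentioned above can be pictured as a transition table over driving states. The toy sketch below is purely illustrative: the state names, conditions, and transitions are assumptions in the spirit of the abstract, not AnnieWAY's actual machine.

```python
def next_behavior(state, obstacle_ahead, at_intersection, has_right_of_way):
    """One step of a toy driving state machine.

    States and transition conditions are hypothetical examples of
    rule-conforming behavior generation (e.g. yielding at
    intersections), not the system described in the paper.
    """
    if state == "DRIVE":
        if at_intersection:
            return "STOP_AT_INTERSECTION"   # stop before entering
        if obstacle_ahead:
            return "PASS_OBSTACLE"          # initiate a passing maneuver
        return "DRIVE"
    if state == "STOP_AT_INTERSECTION":
        # Proceed only once the vehicle has the right of way.
        return "DRIVE" if has_right_of_way else "STOP_AT_INTERSECTION"
    if state == "PASS_OBSTACLE":
        # Return to normal driving once the lane is clear.
        return "DRIVE" if not obstacle_ahead else "PASS_OBSTACLE"
    return "DRIVE"
```

A hierarchical version nests such machines, with a mission-level machine selecting which maneuver-level machine is active.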