A New Video Rate Region Color Segmentation and Classification for Sony Legged RoboCup Application

Video rate color region segmentation for mobile robotic applications

2005

Color regions are an interesting image feature to extract for visual tasks in robotics, such as navigation and obstacle avoidance. However, although numerous methods are used for vision systems embedded on robots, few use this kind of segmentation, mainly because of its processing cost. In this paper, we propose a new real-time (i.e. video rate) color region segmentation, followed by a robust color classification and a merging of regions, dedicated to applications such as the RoboCup four-legged league or an industrial conveyor wheeled robot. We report the performance of this algorithm and compare it with other methods in terms of result quality and processing time. For better-quality results, the obtained speed-up is between 2 and 4; for same-quality results, it reaches 10. We also present the outlines of the Dynamic Vision System of the CLEOPATRE Project - for which this segmentation has been developed - and the Clear Box Methodology, which allowed us to create the new color region segmentation from the evaluation and understanding of other well-known segmentation methods.

Robust color segmentation for the RoboCup domain

Object recognition supported by user interaction for service robots, 2002

Color segmentation is crucial in robotic applications, such as RoboCup, where the relevant objects can be distinguished by their color. In these applications, real-time performance and robustness are primary concerns. We present a hybrid method for color segmentation based on seeded region growing (SRG) in which the initial seeds are provided by a conservative threshold color segmentation. The key to the robustness of our approach is to use multiple seeds to perform local blob growing, and then merge blobs that belong to the same region. We have implemented our technique on a team of Sony AIBO 4-legged robots, and have successfully tested it in the RoboCup 2001 competition.
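The hybrid approach above can be sketched in a few lines: threshold conservatively to get seeds, then grow each seed into a blob by absorbing similar neighbors. The tolerance value and grayscale simplification are illustrative assumptions, not details from the paper.

```python
from collections import deque

def grow_regions(image, seed_mask):
    """Seeded region growing on a small grayscale image.

    image: 2-D list of ints; seed_mask: 2-D list of bools marking
    conservatively thresholded seed pixels. 4-connected neighbors
    within TOLERANCE of the seed's value are absorbed. Returns a
    label map (0 = unlabeled).
    """
    TOLERANCE = 20  # assumed similarity threshold, not from the paper
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for sy in range(h):
        for sx in range(w):
            if not seed_mask[sy][sx] or labels[sy][sx]:
                continue
            ref = image[sy][sx]          # seed value defines the region
            queue = deque([(sy, sx)])
            labels[sy][sx] = next_label
            while queue:                 # breadth-first blob growth
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and abs(image[ny][nx] - ref) <= TOLERANCE):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```

A merge pass over adjacent same-color blobs, as the abstract describes, would then run on the resulting label map.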

Color-spaces and color segmentation for real-time object recognition in robotic applications

Abstract - Object recognition is an important part of a wide range of robotic applications. This paper explores the advantages and disadvantages of using different color spaces (different digital representations of color) and different methods of identifying colors in an image, with a view to object detection in robotic applications. The suppression of shadows through the manipulation of color information is also studied.
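One reason color-space choice matters for shadow suppression is that spaces like HSV separate chromaticity from intensity, so a shadow mostly lowers V while H stays stable. A minimal sketch, using Python's standard colorsys module; the hue ranges and saturation/value cutoffs are illustrative assumptions, not values from the paper:

```python
import colorsys

def classify_pixel_hsv(r, g, b):
    """Classify one 8-bit RGB pixel by hue after converting to HSV.

    Hue is stable under moderate shading, so a shadowed red object
    still lands in the "red" bin; very gray or very dark pixels are
    rejected because their hue is unreliable.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.2 or v < 0.15:
        return "achromatic"   # too gray or too dark to trust hue
    if h < 1 / 12 or h >= 11 / 12:
        return "red"
    if h < 1 / 4:
        return "yellow"
    if 1 / 2 <= h < 3 / 4:
        return "blue"
    return "other"
```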

Color Detection in Autonomous Robot-Camera

Journal of Physics: Conference Series, 2019

This research concerns color detection: the researchers propose and implement an algorithm to locate any desired color in any kind of scene. The algorithm is implemented on a robot that is capable of moving toward the desired color. The robot system operates in real time, since the algorithm works on the frames of a live video captured by the robot's simple camera. The algorithm partitions the image to find the location of the desired color, then considers all pixels in each partition. The system runs quickly because the algorithm involves no color-space transformation or other time-consuming image-processing technique. At the end of this study, the accuracy of the system is reported separately for right movements and left movements.
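The partition-and-count idea can be sketched as follows. The three-way vertical split, the exact-match color test, and the left/center/right steering labels are illustrative assumptions; the paper's partitioning scheme may differ.

```python
def locate_color(frame, target, cols=3):
    """Divide a frame into `cols` vertical partitions and report which
    partition holds the most pixels of the target color, so the robot
    can steer toward it. Pure pixel counting, no color-space
    transform, mirroring the abstract's emphasis on speed.

    frame: 2-D list of (r, g, b) tuples; target: an (r, g, b) tuple.
    """
    w = len(frame[0])
    counts = [0] * cols
    for row in frame:
        for x, px in enumerate(row):
            if px == target:
                counts[x * cols // w] += 1   # map column -> partition
    best = max(range(cols), key=counts.__getitem__)
    return ("left", "center", "right")[best] if cols == 3 else best
```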

Towards Eliminating Manual Color Calibration at RoboCup

Lecture Notes in Computer Science, 2006

Color calibration is a time-consuming, and therefore costly, requirement for most robot teams at RoboCup. This paper presents an approach for autonomous color learning on-board a mobile robot with limited computational and memory resources. It works without any labeled training data and trains autonomously from a color-coded map of its environment. The process is fully implemented, completely autonomous, and provides a high degree of segmentation accuracy. Most importantly, it dramatically reduces the time needed to train a color map in a new environment.

Techniques for obtaining robust, real-time, colour-based vision for robotics

RoboCup-99: Robot Soccer World Cup III, 2000

An early stage in image understanding using colour involves recognizing the colour of target objects by looking at individual pixels. However, even when, to the human eye, the colours in the image are distinct, it is a challenge for machine vision to reliably recognize the whole object from colour alone, due to variations in lighting and other environmental issues. In this paper, we investigate the use of decision trees as a basis for recognizing colour. We also investigate the use of colour space transforms as a way of eliminating variations due to lighting.
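A per-pixel decision tree of the kind investigated here reduces to walking threshold splits over the color channels. A minimal sketch; the tree below is hand-built for illustration, standing in for one induced from training pixels, and its channels, thresholds, and labels are assumptions, not the paper's learned tree:

```python
def classify(tree, pixel):
    """Walk a binary decision tree over an (r, g, b) pixel.

    An internal node is (channel_index, threshold, left, right):
    descend left when pixel[channel_index] <= threshold. A leaf is
    just a color-label string.
    """
    while isinstance(tree, tuple):
        channel, threshold, left, right = tree
        tree = left if pixel[channel] <= threshold else right
    return tree

# Illustrative tree: split first on the green channel, then on red.
DEMO_TREE = (1, 100,                      # green <= 100 ?
             (0, 100, "blue", "orange"),  #   yes: split on red
             (0, 100, "green", "yellow")) #   no:  split on red
```

Classification is a handful of comparisons per pixel, which is why tree-based labeling can keep up with video rate.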

Autonomous color learning on a mobile robot

2005

Abstract Color segmentation is a challenging subtask in computer vision. Most popular approaches are computationally expensive, involve an extensive off-line training phase and/or rely on a stationary camera. This paper presents an approach for color learning on-board a legged robot with limited computational and memory resources. A key defining feature of the approach is that it works without any labeled training data. Rather, it trains autonomously from a color-coded model of its environment.
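The key idea, labels supplied by the environment model rather than by hand, can be sketched with a nearest-mean color map. The robot knows from its map which object it is looking at, so the pixels it samples come pre-labeled. A nearest-mean model is an illustrative stand-in for the paper's learned color map, not its actual representation:

```python
def learn_color_means(samples):
    """Learn per-label mean colors from self-labeled pixels.

    samples: iterable of (label, (r, g, b)) pairs, where the label
    comes from knowing which map object the robot is facing, not
    from manual annotation. Returns {label: mean_color}.
    """
    sums, counts = {}, {}
    for label, (r, g, b) in samples:
        s = sums.setdefault(label, [0, 0, 0])
        s[0] += r; s[1] += g; s[2] += b
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(c / counts[lbl] for c in s) for lbl, s in sums.items()}

def classify_nearest(means, pixel):
    """Assign a pixel to the label with the closest learned mean."""
    return min(means,
               key=lambda lbl: sum((m - p) ** 2
                                   for m, p in zip(means[lbl], pixel)))
```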

A fast vision system for middle size robots in robocup

2002

A mobile robot should be able to analyze what it is seeing at video rate and decide accordingly. Fast and reliable analysis of image data is one of the key points in soccer-robot performance. In this paper we suggest a very fast method for object finding which uses the concept of perspective view. In our method, we introduce a set of jump points in perspective on which we search for objects. An object is estimated by a rectangle surrounding it. A vector-based calculation is introduced to find the distance and angle of a robot from objects in the field. In addition, we present a new color model which takes its components from different color models. The proposed method can detect all objects in each frame, along with their distance and angle, in one scan over the jump points in that frame. This process takes about 1/50 of a second. Our vision system uses a commercially available frame grabber and is implemented entirely in software. It has shown very good performance in RoboCup competitions.
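The jump-point idea exploits perspective: objects near the bottom of the image are close and large, so they can be found with coarse spacing too, while the spacing must stay fine enough that no object falls between points. A sketch of generating such a grid; the doubling schedule is an illustrative choice, not the paper's exact spacing:

```python
def make_jump_points(width, height, horizon=0):
    """Generate perspective-spaced scan points.

    Rows near the bottom of the image (close to the robot) are
    sampled densely; rows toward the horizon sparsely, since distant
    objects occupy fewer pixels. Only these points are inspected per
    frame, which keeps the single-scan search at video rate.
    """
    points = []
    step = 1
    y = height - 1
    while y >= horizon:
        points.extend((x, y) for x in range(0, width, step))
        y -= step
        step *= 2          # coarser sampling toward the horizon
    return points
```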

Improved Object Recognition - The RoboCup 4Legged League

Proceedings of the 4th International Conference on Intelligent Data Engineering and Automated Learning, 2003

The RoboCup competition has brought back to attention the classification of objects in a controlled-illumination environment. We present a very fast classifier to achieve image segmentation. Our methods are based on the machine learning literature, but adapted to robots equipped with low-cost image-capture equipment. We then present new fast methods for object recognition, based also on rapid methods for blob formation. We describe how to extract the skeleton of a polygon, and we use this for object recognition.
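Rapid blob formation of the kind mentioned above is commonly done with run-length encoding: each image row is compressed into runs of same-class pixels, and vertically overlapping runs are merged into blobs. The union-find merge below is an illustrative implementation choice, not necessarily the paper's:

```python
def form_blobs(mask):
    """Form blobs from a binary class mask via run-length encoding.

    Each row is compressed into runs of set pixels; runs that overlap
    runs in the previous row are merged with union-find. Returns a
    list of (ymin, xmin, ymax, xmax, pixel_count) blobs. Each run is
    touched once, which is what makes blob formation fast.
    """
    parent = []                      # union-find over run indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    runs, prev_row = [], []          # runs: (y, x0, x1)
    for y, row in enumerate(mask):
        cur, x, w = [], 0, len(row)
        while x < w:
            if row[x]:
                x0 = x
                while x < w and row[x]:
                    x += 1
                idx = len(runs)
                runs.append((y, x0, x - 1))
                parent.append(idx)
                for j in prev_row:   # merge with overlapping runs above
                    _, px0, px1 = runs[j]
                    if px0 <= x - 1 and px1 >= x0:
                        parent[find(idx)] = find(j)
                cur.append(idx)
            else:
                x += 1
        prev_row = cur

    blobs = {}
    for i, (y, x0, x1) in enumerate(runs):
        b = blobs.setdefault(find(i), [y, x0, y, x1, 0])
        b[0] = min(b[0], y); b[1] = min(b[1], x0)
        b[2] = max(b[2], y); b[3] = max(b[3], x1)
        b[4] += x1 - x0 + 1
    return [tuple(b) for b in blobs.values()]
```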