Computer Vision Based Robotic Arm Controlled Using Interactive GUI

Smart Robot Vision for a Pick and Place Robotic System

Maǧallaẗ al-handasaẗ wa-al-tiknūlūǧiyā, 2023

The movement of the 5-DOF robotic arm was controlled through a geometric approach to inverse kinematics analysis. The HSV color space, a series of filters (Median, Bilateral, and Gaussian), and the draw-contour method were implemented to discover the objects' colors, shapes, and centroids. The shapes and colors of the objects were discovered in different lighting conditions ranging from 5 to 7000 lux with an accuracy of 93.83%. The main contribution of this paper is an innovative algorithm that accurately detects and identifies the shape and color of objects under various light intensities and finds their locations so that they can be manipulated by a pick-and-place robotic arm. Workpieces of various shapes and colors are dispersed on the robot's work plane and manipulated according to their specifications. The proposed algorithm utilizes the HSV color model to distinguish between different object colors and shapes; the S channel is used to detect the shapes of objects. After that, a series of filters (Median, Bilateral, and Gaussian) is applied to reduce the noise of the segmented image so that the shapes and coordinates of the objects can be discovered reliably. The draw-contour method is used to discover the objects' shapes; once a shape is discovered, its centroid coordinates are calculated. After extensive testing on 354 images captured in various lighting conditions in the range of 5-7000 lux, an overall system performance of 93.83% was achieved, with an average execution time of 2.21 s. The result is a dependable, flexible automatic pick-and-place system that can correctly detect and identify objects based on their features.
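The abstract does not include code, but the pipeline it describes (HSV conversion, S-channel segmentation, Median/Bilateral/Gaussian filtering, contour extraction, and centroid computation) maps closely onto standard OpenCV calls. The following is a minimal Python/OpenCV sketch of such a pipeline, not the authors' implementation; the file name, kernel sizes, and area threshold are illustrative assumptions.

```python
import cv2

# Illustrative sketch of the described pipeline (not the authors' code).
img = cv2.imread("workpiece.jpg")  # assumed input image

# 1. Convert to HSV and take the saturation (S) channel for shape segmentation.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# 2. Noise reduction with the three filters named in the abstract.
s = cv2.medianBlur(s, 5)
s = cv2.bilateralFilter(s, 9, 75, 75)
s = cv2.GaussianBlur(s, (5, 5), 0)

# 3. Binarise and extract contours (the "draw-contour" step).
_, mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt) < 500:   # discard small noise blobs (assumed threshold)
        continue
    # 4. Centroid from image moments: cx = M10/M00, cy = M01/M00.
    m = cv2.moments(cnt)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    cv2.drawContours(img, [cnt], -1, (0, 255, 0), 2)
    cv2.circle(img, (cx, cy), 4, (0, 0, 255), -1)

cv2.imwrite("detected.jpg", img)
```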

Automation of Object Sorting System Using Pick & Place Robotic Arm & Image Processing

The paper presents a smart approach for real-time inspection and selection of objects in continuous flow. Image processing today attracts massive attention because it opens up broader applications in many high-technology fields. The real challenge is how to improve the existing sorting system in the modular processing system, which consists of four integrated stations (identification, processing, selection, and sorting), with a new image processing feature. The existing sorting method uses a set of inductive, capacitive, and optical sensors to differentiate object color. This paper presents a mechatronic color sorting solution based on image processing. The image processing procedure senses the objects in an image captured in real time by a webcam and then extracts color information from it; this information drives the pick-and-place mechanism. The project deals with an automated material handling system: it classifies the objects arriving on the conveyor by color and size, picking them up and placing them in their respective pre-programmed places, thereby eliminating monotonous human work while achieving accuracy and speed. The system involves sensors that sense the object's color and size and send a signal to the microcontroller. The microcontroller sends signals to the circuit that drives the various motors of the robotic arm to grip the object and place it in the specified location. Based on the detection, the robotic arm moves to the specified location, releases the object, and returns to its original position.
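As a rough illustration of the hand-off the abstract describes, a detected (color, size) class can be serialised from the vision PC to the microcontroller that drives the arm. This is only a sketch; the port name, baud rate, bin mapping, and "PICK" command format are assumptions, not details from the paper.

```python
import serial  # pyserial

# Hypothetical mapping from a detected (colour, size) class to a drop-off bin.
BIN_FOR = {("red", "large"): 1, ("red", "small"): 2,
           ("green", "large"): 3, ("green", "small"): 4}

def dispatch(colour: str, size: str, port: str = "/dev/ttyUSB0") -> None:
    """Send the target bin index to the arm controller over a serial link."""
    bin_id = BIN_FOR.get((colour, size))
    if bin_id is None:
        return  # unknown object: let it pass on the conveyor
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(f"PICK {bin_id}\n".encode())  # assumed command format

dispatch("red", "small")
```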

Real-time object detection and selection with the LabVIEW program

International journal of electrical and computer engineering systems

Nowadays, the demand for production is increasing due to the growing human population. For this reason, different developing technologies are used to meet the required production. Among these technologies, image processing techniques are used to save manpower and time and to minimize possible errors. In this study, image processing techniques were used to detect and select the colors and shapes of the objects arriving on the conveyor belt system. Real-time images were used in the study. In the implemented system, the selection process was carried out by using the LabVIEW program to define the colors and shapes of the objects. LabVIEW NI-IMAQdx was used to find the colors of the objects. To facilitate the definition of shapes, the images taken from the Vision Assistant module in the LabVIEW program were converted to HSL format, and shape definitions were made using different algorithms. After these processes were done, the servo motors in the conveyor belt system were...
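The study itself uses LabVIEW's Vision Assistant, but the shape-definition idea it describes can be sketched outside LabVIEW as well. Below is a hedged Python/OpenCV analogue that labels a contour as triangle, square, or circle by counting polygon-approximation vertices; the 4% tolerance and the three-class rule are assumptions for illustration, not the paper's algorithm.

```python
import cv2

def classify_shape(contour) -> str:
    """Rough shape label from the number of polygon-approximation vertices."""
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * peri, True)  # 4% tolerance (assumed)
    if len(approx) == 3:
        return "triangle"
    if len(approx) == 4:
        return "square"
    return "circle"  # many vertices: treat as round
```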

A computer controlled visual system for object classification

IFAC Proceedings Volumes, 2003

This paper introduces the design, implementation, and application of a computer-controlled camera system for the classification and assortment of various objects. The system, which consists of an RF-Color Camera, a PC, and a pneumatically actuated robot manipulator, is configured to operate in real-time, on-line applications. In the first stage of the work, the captured image (in RGB scale) viewed by the camera is converted and stored, using a frame grabber card and special software written in C++, as digital image data in the computer memory for a subsequent segmentation process to recognize the object. In the second stage, the objects are recognized by size or property via the camera, which is mounted over a horizontally moving belt driven by a servo stepper motor. The recognized objects are then sorted by a purpose-built, computer-controlled electro-pneumatic robot manipulator with four degrees of freedom and a suction gripper as the end-effector, which is controlled by a PLC to carry the objects along a predefined trajectory to designated locations. A special control algorithm, also written in C++, is used to control the servo stepper belt motor and the electro-pneumatic valves of the manipulator. Successful results are obtained both in sorting objects by size or property and in the synchronous operation of the robot arm with the image processing.
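The original segmentation and size classification were written in C++ against a frame grabber, which is not reproducible here. As a loose illustration of the "recognized in size" step only, the Python/OpenCV sketch below segments a belt image and bins connected components by pixel area; the file name, Otsu binarisation, and area threshold are assumptions, not the paper's method.

```python
import cv2

# Illustrative size classification; thresholds and file name are assumptions.
gray = cv2.cvtColor(cv2.imread("belt_frame.jpg"), cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):                       # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    size = "large" if area > 5000 else "small"   # assumed size cut-off
    cx, cy = centroids[i]
    print(f"object {i}: {size} ({area} px) at ({cx:.0f}, {cy:.0f})")
```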

ROBOTIC ARM WITH REAL-TIME IMAGE PROCESSING USING RASPBERRY PI, BOTH AUTOMATED AND MANUALLY.

This paper proposes a robotic arm with real-time image processing using a Raspberry Pi, which can either be automated or operated manually. The aim is a robot capable of detecting and placing a pre-specified object. The code for color detection is written in Python. For the hardware implementation, a Raspberry Pi running Raspbian, a Debian-based Linux OS, is used. The robotic arm detects the pre-specified objects and segregates them based on color (RGB). The program includes controlling the robotic arm, capturing and processing the object's image, identifying the RGB object, and using a local web page either to control the motors manually or to perform all tasks automatically through the Raspberry Pi.
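The paper states only that the color detection is written in Python; the authors' code is not given. A minimal sketch of RGB object detection on a Pi camera or USB webcam might look like the following, where the HSV ranges and the camera index are assumptions that would need tuning to the actual lighting.

```python
import cv2
import numpy as np

# Assumed HSV ranges for the three target colours; real values depend on lighting.
RANGES = {
    "red":   ((0, 120, 70),  (10, 255, 255)),
    "green": ((40, 70, 70),  (80, 255, 255)),
    "blue":  ((100, 150, 0), (140, 255, 255)),
}

def dominant_colour(frame) -> str:
    """Return the colour whose HSV mask covers the most pixels in the frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    counts = {name: cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))
              for name, (lo, hi) in RANGES.items()}
    return max(counts, key=counts.get)

cap = cv2.VideoCapture(0)          # Pi camera or USB webcam (assumed index)
ok, frame = cap.read()
if ok:
    print(dominant_colour(frame))
cap.release()
```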

A real-time automated sorting of robotic vision system based on the interactive design approach

International Journal on Interactive Design and Manufacturing (IJIDeM)

This research paper proposes a robotic vision system that distinguishes an object's color and its position coordinates, and then sorts the object (product) onto the correct branch conveyor belt according to its color in real time. The system was built around an HSV-model algorithm for sorting products by color. Furthermore, the system can distinguish the object's shape and then find its position in order to pick up the object and place it on the correct branch conveyor belt. The shape assumptions were based on shape properties, a centroid algorithm, and border extraction. Both the object detection and the contour coordinate extraction methods are implemented using a series of image processing techniques. The main goal is met by sorting an object from a group of objects according to its color feature. The robot movements (opening and closing the gripper, moving the arm up and down, and moving left and right) are controlled by a microcontroller that directs the object to the correct branch conveyor belt. When the color of the object is detected, the microcontroller initiates the robot's actions. The accuracy of the approach developed in this paper was found to be 92% for shape sorting and 97% for color sorting.

Vision Assisted Pick and Place Robotic Arm

This paper presents the design of a vision-assisted pick-and-place robotic arm. The main objective is to pick an object and place it from one location to another using a 2-DOF robotic arm. A USB camera is used as the vision sensor to measure the dimensions of the object to be picked. The image collected by the USB camera is transferred to LabVIEW, where image processing toolkits and modules process it. The processed dimensions of the object are transmitted via RS-232 serial communication to the LPC2129 microcontroller, which generates the appropriate PWM signals for the servomotors. The robotic arm is built with servomotors. Digital image processing algorithms are implemented to process the image captured by the USB camera and find the exact dimensions of the object, thereby assisting the robotic arm. NI-IMAQ machine-vision functions are implemented and the results are presented.
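The PWM step in this abstract follows the usual hobby-servo convention (roughly a 1 ms to 2 ms pulse over a 0° to 180° range at 50 Hz). That convention, and the numbers in the sketch below, are assumptions rather than values from the paper; they only illustrate the angle-to-pulse-width arithmetic the LPC2129 would perform.

```python
def servo_pulse_us(angle_deg: float,
                   min_us: float = 1000.0,
                   max_us: float = 2000.0) -> float:
    """Map a joint angle in [0, 180] degrees to a pulse width in microseconds."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / 180.0

# e.g. 90 degrees -> 1500 us, i.e. a 7.5 % duty cycle over a 20 ms (50 Hz) period.
print(servo_pulse_us(90))
```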

Automated Visual Inspection: Position Identification of Object for Industrial Robot Application based on Color and Shape

International Journal of Intelligent Systems and Applications, 2016

Inspection tasks have traditionally been carried out by humans. However, Automated Visual Inspection (AVI) has gradually become more popular than human inspection due to its advantages of high precision and short processing time. Therefore, this paper proposes a system that identifies an object's position for an industrial robot based on color and shape, where red, green, and blue colors and circle, square, and triangle shapes are recognizable. The proposed system can identify the object's position in three modes: based on the color, the shape, or both the color and shape of the desired objects. During image processing, the proposed system works in the RGB color space, and a winner-take-all approach classifies the color of the object by evaluating the pixel intensity values of the R, G, and B channels. Meanwhile, the shape and position of each object are determined from the compactness and the centroid of its region, respectively. Camera settings such as brightness, contrast, and exposure are another important factor that can affect the performance of the proposed system. Lastly, a Graphical User Interface was developed. The experimental results show that the developed system is highly efficient when applied to the selected database.
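The two classification ideas named here (winner-take-all over RGB channel intensities, and compactness for shape) can be sketched compactly. The snippet below is an illustration under assumptions: the compactness thresholds are chosen from the ideal values for a circle (1.0), square (about 0.79), and triangle (about 0.60), not taken from the paper.

```python
import cv2
import numpy as np

def classify_colour(region_bgr) -> str:
    """Winner-take-all: the channel with the highest mean intensity wins."""
    means = region_bgr.reshape(-1, 3).mean(axis=0)   # mean B, G, R
    return ("blue", "green", "red")[int(np.argmax(means))]

def classify_shape(contour) -> str:
    """Compactness = 4*pi*area / perimeter^2 (1.0 for a perfect circle)."""
    area = cv2.contourArea(contour)
    peri = cv2.arcLength(contour, True)
    compactness = 4 * np.pi * area / (peri * peri)
    if compactness > 0.85:          # thresholds are illustrative assumptions
        return "circle"
    if compactness > 0.70:
        return "square"
    return "triangle"
```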

Object Detection and Recognition System for Pick and Place Robot

Advances in Intelligent Systems and Computing, 2018

An object recognition system plays a vital role in controlling a robotic arm for applications such as picking and placing objects. This paper is directed towards the development of the image processing algorithm that is the main process of the pick-and-place robotic arm control system. In this paper, soft drink can objects such as "Shark", "Burn", "Sprite", and "100 Plus" are recognized. When the user specifies a soft drink can object, the system tries to recognize the object automatically. In the system, the target object region and the motion of the object are first detected using Template Matching (Normalized Cross-Correlation) based on the YCbCr color space. The detected image is segmented horizontally into five parts to extract color features. In the feature extraction step, mean color and Hue values are extracted from each segmented part. Then, an Adaptive Neural Fuzzy Inference System (ANFIS) is employed to recognize the target object based on the color features. After recognizing the user-specified object, the robotic arm picks it and places it in the target region. Experimental results show that the proposed method can efficiently identify and recognize soft drink can objects.
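Only the detection step (template matching by normalized cross-correlation on the luminance plane) is easy to sketch generically; the segmentation into five parts and the ANFIS recognizer are not reproduced here. In the Python/OpenCV sketch below, the file names and the 0.9 confidence threshold are assumptions, and OpenCV's YCrCb conversion is used as the stand-in for the paper's YCbCr space.

```python
import cv2

# Sketch of the detection step only; the ANFIS recognition stage is omitted.
scene = cv2.imread("scene.jpg")          # assumed input frame
template = cv2.imread("can_template.jpg")  # assumed reference can image

# Work on the luminance plane of the YCrCb-converted images.
scene_y = cv2.cvtColor(scene, cv2.COLOR_BGR2YCrCb)[:, :, 0]
templ_y = cv2.cvtColor(template, cv2.COLOR_BGR2YCrCb)[:, :, 0]

# Normalised cross-correlation template matching.
result = cv2.matchTemplate(scene_y, templ_y, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.9:                        # assumed confidence threshold
    th, tw = templ_y.shape
    x, y = max_loc
    cv2.rectangle(scene, (x, y), (x + tw, y + th), (0, 255, 0), 2)
    cv2.imwrite("match.jpg", scene)
```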