Integrating Computer Vision with a Robot Arm System
AIR FORCE INST OF TECH WRIGHT-PATTERSON AFB OH
The goal of this project is to interface an image processing system to a robot arm system. Computer vision is used to compute the location and orientation of a block in the work space so that the block may be grasped and manipulated by a robot arm. A detailed explanation of how information flows from the image processor to the robot arm system is given. This flow can be broken down into three steps. In the first step, a gradient map computed from a digitized image of the work space is transferred, one line at a time, from the Grinnell image processor to its host, a VAX-11/750. As each line of the gradient map is uploaded, the coordinates of pixels lying on the edges of the block (edge pixels) are stored in an array. Storing the coordinates of only the edge pixels results in significant image compression. The second step processes the edge-pixel coordinate array to extract relevant features: the locations of the corners of the block are all that is needed to compute the centroid and the orientation of the block in the work space. In step three, the centroid and roll-angle information is sent from the VAX to the robot arm system host, a DUAL microcomputer. One of the robot arms is selected, and appropriate commands are sent to that arm to grasp and manipulate the block.
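The per-line edge extraction of step one can be sketched as follows. This is a minimal illustration, not the report's implementation: the gradient map is assumed to be a list of rows of gradient magnitudes, and the fixed threshold is a hypothetical parameter.

```python
def extract_edge_pixels(gradient_map, threshold=128):
    """Scan a gradient map one line at a time, keeping only the
    coordinates of edge pixels (gradient magnitude above threshold).

    Storing (row, col) pairs instead of the full image is the source
    of the compression described in the abstract.
    """
    edge_pixels = []
    for row, line in enumerate(gradient_map):
        for col, magnitude in enumerate(line):
            if magnitude >= threshold:
                edge_pixels.append((row, col))
    return edge_pixels
```

For a block occupying a small fraction of the image, the edge-pixel array is far smaller than the full gradient map, which is what makes uploading line by line from the image processor to the host practical.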
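The feature extraction of step two, computing the centroid and roll angle from the block's corners, can be sketched under the assumption that the corners have already been found and are ordered around the perimeter; the report does not give this formulation explicitly.

```python
import math

def block_pose(corners):
    """Compute the centroid and roll angle of a block from its
    corner coordinates.

    `corners` is a list of (x, y) tuples ordered around the block's
    perimeter, e.g. as recovered from the edge-pixel array.
    """
    n = len(corners)
    # Centroid: mean of the corner coordinates.
    cx = sum(x for x, _ in corners) / n
    cy = sum(y for _, y in corners) / n
    # Roll angle: orientation of one edge of the block,
    # measured from the x-axis in degrees.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    roll_deg = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (cx, cy), roll_deg
```

The centroid and roll angle computed this way are exactly the two quantities the abstract says are sent to the DUAL microcomputer in step three.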
- Optical Detection and Detectors