Robot with a Stereo Vision System: English Source Text and Chinese Translation (3)

Date: 2019-05-25 14:55. Source: graduation thesis site.
b) the hardware setup of the cameras and their calibration, and c) the interconnection of the systems so that they can communicate in real time.

4.1. Hardware setup

The following components were used for the implementation of the vision system:
- Two 2 MP Basler A641FC cameras, each equipped with a 1/1.8" CCD colour image sensor with a maximum resolution of 1624x1234 pixels. The cameras support an IEEE 1394 FireWire connection and are fixed on a metal base at a distance of 110 mm from each other.
- Each camera is equipped with a Computar M0814-MP2 lens with an 8 mm focal length and manual iris and focus control.
- Laser diodes, equipped with multi-line projection heads, were used to project laser lines onto the points to be identified and measured. The diodes allow the focus of the beam to be adjusted manually, so that the intensity of the laser light at the point of interest can be optimized.
- A personal computer with an Intel Pentium 4 processor running at 3.20 GHz and 1 GB of RAM. In order to communicate with the cameras through the FireWire port, an NI PCI-8252 interface board with three ports was also installed.

4.2. Software development

In order to implement the required image analysis and stereo vision techniques, as well as to allow the interconnection of the systems, a Visual Basic application was developed. The software acts as a bridge and/or hosts the following modules:
- NI IMAQ, the software driver provided by National Instruments, which is compatible with the cameras that were used.
- The NI Vision Acquisition Software, whose main functionality is the acquisition of the images. It acts as middleware between the developed software and the IMAQ driver in order to transfer the images to the desktop computer.
- OpenCV v2.2.1 (Open Source Computer Vision), a library of programming functions for real-time computer vision.
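The stereo geometry described above (two cameras 110 mm apart, triangulating a laser-lit point) reduces, for a rectified image pair, to the classic relation Z = f·B/d. The following is a minimal sketch, not the authors' code: the baseline of 110 mm comes from the text, while the focal length in pixels and the principal point are hypothetical values that would in practice come from the calibration step.

```python
# Sketch of depth recovery from disparity for a rectified stereo pair.
# BASELINE_MM is stated in the paper; F_PX, cx, cy are assumed
# calibration results used only for illustration.

BASELINE_MM = 110.0   # distance between the two cameras (from the text)
F_PX = 1800.0         # focal length in pixels -- hypothetical calibration value


def depth_from_disparity(disparity_px: float) -> float:
    """Return the depth Z (mm) of a point given its disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return F_PX * BASELINE_MM / disparity_px


def point_from_pixel(u: float, v: float, disparity_px: float,
                     cx: float = 812.0, cy: float = 617.0):
    """Back-project a left-image pixel (u, v) to camera-frame X, Y, Z (mm).

    cx, cy default to the centre of the 1624x1234 image mentioned in the text.
    """
    z = depth_from_disparity(disparity_px)
    x = (u - cx) * z / F_PX
    y = (v - cy) * z / F_PX
    return x, y, z
```

For example, with these assumed parameters a disparity of 198 px corresponds to a depth of 1800 * 110 / 198 = 1000 mm.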
The routines of OpenCV were used in our application for the undistortion and rectification of the images, as well as for implementing the triangulation method [16].
- In addition, the OpenCV libraries were also used for calculating the calibration parameters. The method used is a variation of the one contained in the freely distributed Matlab calibration toolbox [18].

4.3. System integration

The vision system was deployed on a robotic cell that welds the parts of a passenger car door. The cell includes a Comau Smart NJ 130 robot carrying a medium-frequency spot welding gun. The robot is guided by its own C4G-type controller, and the door parts are joined using resistance spot welding. The base with the two cameras is mounted on the robot. In order for the different systems to be linked together, the communication architecture of Figure 5 was implemented. The camera was connected to the computer through the FireWire port, owing to the high-speed data transfer provided by this protocol. Next, the desktop PC was connected to the controller by means of the TCP/IP protocol. When the robot is ready to receive the corrected coordinates for the welding points, the desktop PC client sends them, in string form, to the specific port on which the server listens. In the last step, the proprietary connection between the robot and the controller (DeviceNET protocol) was maintained in order to send the final motion parameters to the robot. The numbers in Figure 5 indicate the sequence of events during the system's operation.
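The PC-to-controller link described above (a TCP/IP client sending corrected coordinates "in string form" to a port the controller's server listens on) can be sketched as follows. The paper does not specify the message format, host, or port, so the semicolon/comma serialization and the network addresses below are assumptions for illustration only.

```python
import socket


def format_coordinates(points):
    """Serialize welding-point coordinates into a single string.

    The paper only says the coordinates are sent 'in string form'; this
    'x,y,z;x,y,z' layout is a hypothetical format for illustration.
    """
    return ";".join("{:.2f},{:.2f},{:.2f}".format(x, y, z) for x, y, z in points)


def send_coordinates(points, host="192.168.0.10", port=5000):
    """Send corrected coordinates to the robot controller over TCP/IP.

    host and port are hypothetical placeholders, not values from the paper.
    """
    message = format_coordinates(points) + "\n"
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(message.encode("ascii"))
```

A call such as `format_coordinates([(1.0, 2.0, 3.0)])` yields `"1.00,2.00,3.00"`, which the controller-side server would then parse back into motion parameters.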
Fig. 5. Vision system integration with the NJ130 robot

5. Case Study

Within this case study, the vision system was tested on the welding of a door frame. A photograph of the door frame is shown in Figure 6a; the laser line used to identify the welding spot is visible on the top right part of the door frame. The operation of the cell can be summarized as follows:
- An operator loads the door parts on a table in the cell.
- The laser diodes are already positioned and illuminate the flanges near the area of each welding spot.
- The robot assumes a position where the area of interest is within both cameras' fields of view.
- The robot notifies the vision system PC, via the robot controller, that the cameras are in place for the image acquisition.
- The computer triggers the image capture over the FireWire port and receives the images.
- The developed software applies the image rectification and thresholding techniques. Figure 6a shows the acquired image, and Figure 6b presents the result of the threshold filter, where the laser signature is the most visible area in the image.

Fig. 6. Acquired image (a) and image after processing (b)

- The edge of the flange is identified by finding the breakpoint of the laser line. Subsequently, the OpenCV triangulation algorithm is applied, and the point coordinates with respect to the camera frame are calculated.
- The coordinates are transmitted to the robot controller, where they are translated with respect to the robot base.
- The received coordinates are used in combination with the pre-programmed ones in order to calculate the necessary offset.
- The robot performs the welding based on the corrected coordinates.

6. Results

The integrated system was evaluated with respect to the efficiency of the technologies used, especially in terms of the required time periods, since respecting the cycle time was the primary target. Table 1 summarizes the time required for each of the system functionalities.

Table 1. System evaluation

Action                                                      | Time (sec)
Acquisition of images and transfer to vision system PC      | 2.5
Application of the threshold filter                         | 0.5
Image rectification                                         | 0.5
Correspondence calculation and stereo triangulation         | 1
Transmission of coordinates to robot controller             | 0.5
Controller processing of coordinates (offset calculation)   | 1
Total                                                       | 6

The time required for the triangulation and for the transfer of the images from the cameras to the PC are the areas where improvement should be sought, since the controller's built-in routines are hard to modify. The image resolution results in large file sizes, which in turn require a long time to be transmitted over the FireWire connection. Nevertheless, a reduction in resolution would lead to a significant loss of accuracy. Therefore, the solution should be sought in algorithms capable of improving the triangulation process even on low-resolution images; techniques that achieve sub-pixel disparity are one such case. With respect to the accuracy of the system, the tests have indicated that, with such a setup, an accuracy in the range of 1 mm is achievable. Further investigation is required to evaluate the accuracy under large-volume production requirements.

7. Conclusions and Outlook

The integration of a hybrid vision system, using both passive vision principles and structured light, with an industrial welding robot was presented in this paper. The findings indicate that the performance of the system and the effectiveness of the integration are suitable for real-life applications, thanks to the easy installation and the fast processing of the tasks.
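The flange-edge identification step in the case study above (threshold the image, track the laser line, and locate its breakpoint) can be sketched as follows. This is an assumed reconstruction of the logic, not the authors' implementation: it takes a grayscale image as a list of rows, assumes the laser is the brightest feature in each row after thresholding, and flags the row where the line's column position jumps.

```python
# Sketch (assumed logic): per-row laser-line localisation followed by
# breakpoint detection. The jump threshold of 3 px is illustrative.

def laser_columns(image, threshold):
    """For each row, return the column of the brightest pixel if it
    exceeds `threshold` (i.e. the laser signature), else None."""
    cols = []
    for row in image:
        best = max(range(len(row)), key=lambda c: row[c])
        cols.append(best if row[best] >= threshold else None)
    return cols


def find_breakpoint(cols, jump=3):
    """Return the index of the first row where the laser column shifts
    by more than `jump` pixels relative to the previous laser row --
    the 'breakpoint' that marks the edge of the flange."""
    prev = None
    for i, c in enumerate(cols):
        if c is not None and prev is not None and abs(c - prev) > jump:
            return i
        if c is not None:
            prev = c
    return None
```

On a synthetic image whose laser line sits at column 10 for the first five rows and at column 20 afterwards, `find_breakpoint` reports the discontinuity at row 5; in the real system that row/column pair would then be fed to the triangulation step.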
The principle behind the operation of the vision system allows it to handle any welding operation performed on flanges, regardless of the joining technology. The integration with the C4G controller did not require any changes to the hardware (all communication was routed through the standard Ethernet network), thus making it an economically viable solution. The cost of the vision system components was also kept low (under 10k€) compared with existing solutions that may well exceed 100k€. Further work will need to focus on achieving even higher precision by means of a) more effective calibration techniques and b) the application of algorithms involving sub-pixel disparity. The latter foresees that the precision of the disparity between two images can be increased through interpolation of the values obtained by applying the SAD (sum of absolute differences) algorithm for matching between the two cameras' images [19].
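The sub-pixel disparity refinement referred to above is commonly realized by fitting a parabola through the SAD matching cost at the best integer disparity and its two neighbours. The sketch below shows this standard technique under assumed details (window size and cost layout are illustrative, not taken from [19]).

```python
# Sketch of SAD matching with parabolic sub-pixel refinement.
# Standard technique; the 5-pixel window (half_win=2) is an assumption.

def sad(left_row, right_row, x, d, half_win=2):
    """Sum of absolute differences between a window centred on left
    pixel x and the window centred on right pixel x - d."""
    return sum(abs(left_row[x + k] - right_row[x - d + k])
               for k in range(-half_win, half_win + 1))


def subpixel_disparity(costs, d_min):
    """Refine integer disparity d_min by fitting a parabola through the
    SAD costs at disparities (d_min - 1, d_min, d_min + 1)."""
    c_prev, c_best, c_next = costs
    denom = c_prev - 2.0 * c_best + c_next
    if denom == 0:           # flat cost curve: no refinement possible
        return float(d_min)
    return d_min + (c_prev - c_next) / (2.0 * denom)
```

For instance, neighbouring costs (5, 2, 4) around an integer minimum at d = 10 yield a refined disparity of 10 + (5 - 4) / (2 * (5 - 4 + 4)) = 10.1, a tenth-of-a-pixel correction that directly improves the triangulated depth without raising the image resolution.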