

Vision-Based Six-Degree-of-Freedom Robot: English Source Text and Chinese Translation (5)



Fig. 14 Installation of the experimental visual servoing system.

The image to be used as reference was captured beforehand. From this image the image primitives were obtained; they correspond to the centroids of the 8 lighting circles shown in the figure. The end effector, or more precisely the target, was displaced 30° away from that position around joint 3. The robot tries to reach the desired position through the control scheme shown in figure 8. Although the performed displacement corresponds to the simple situation of moving around a single joint, the followed trajectory is realized through the use of the other joints. The goal of the task performed by the robot is to place the target in a position that corresponds to the desired image. Therefore the robot manipulator follows a trajectory that minimizes the error between the current and reference images, (s − s_d). In the experiments carried out, two different controllers were used: proportional and predictive. As in the simulation case, the robot is controlled in velocity through the internal loop, which includes gravity compensation. This loop operates at a frequency of 1 kHz, while the external vision controller operates at a frequency of 12 Hz.

6.2 Experimental results of the vision control system using a proportional controller

In the experimental work it was concluded that very reasonable results were obtained with a proportional controller and that the integral or derivative terms had no influence on the system performance. From figure 15 it is possible to evaluate the image error in a 2D visual servoing architecture using a proportional controller. The good system convergence is notable, and one can observe that the error is approximately zero after about 20 seconds for all the image primitives.

Fig. 15 Image error for the 2D architecture using a PI controller.

6.3 Experimental results of the vision control system using a predictive controller

Based on the work presented above, the predictive control algorithm was implemented. A prediction horizon of H_p = 6 was used. The implementation of the predictive control algorithm was preceded by the identification of the ARIMAX model for each of the controlled joints. In the identification procedure a PRBS (pseudo-random binary signal) with a frequency of 100 Hz was injected into each joint. The identification algorithm based on the prediction error method [12] was used. From figure 16 it is possible to evaluate the image results obtained with the generalized predictive control algorithm. The good convergence is notable, and one can conclude that this system is faster than the one using a proportional controller. The error is approximately zero after about 15 seconds for all the image primitives.

Fig. 16 Image error for the 2D architecture using a GPC controller.
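The 2D control law discussed above drives the image error (s − s_d) toward zero by mapping it to a velocity command through the pseudo-inverse of an interaction matrix. The Python sketch below illustrates one proportional step of such a law for point features; the interaction-matrix form, gain value, depth estimates, and feature values are illustrative assumptions, not the paper's actual implementation, which commands joint velocities through the robot's internal 1 kHz loop.

```python
# Minimal sketch of one step of a proportional image-based (2D) visual
# servoing law for point features. Gains, depths and feature values are
# assumptions for illustration only.
import numpy as np

def interaction_matrix(points_xy, depths):
    """Stack the classical 2x6 interaction matrix of each normalized
    image point (x, y) observed at depth Z."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def proportional_ibvs_step(s, s_d, depths, lam=0.5):
    """One control step: camera twist v = -lam * pinv(L) @ (s - s_d)."""
    error = s - s_d                               # image error (s - s_d)
    L = interaction_matrix(s.reshape(-1, 2), depths)
    v_cam = -lam * np.linalg.pinv(L) @ error      # 6-DOF velocity command
    return v_cam, error

# Example with four illustrative point features (normalized coordinates).
s_d = np.array([-0.1, -0.1, 0.1, -0.1, 0.1, 0.1, -0.1, 0.1])   # reference
s = s_d + 0.05 * np.random.randn(s_d.size)                      # current
v, e = proportional_ibvs_step(s, s_d, depths=[1.0] * 4)
print("velocity command:", v, "\nimage error norm:", np.linalg.norm(e))
```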
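Section 6.3 precedes the predictive controller with an identification step: a PRBS is injected into each joint and a model is fitted with a prediction-error method [12]. The sketch below illustrates the idea on a simple ARX model estimated by least squares from synthetic data; the LFSR taps, model orders, and test system are assumptions, and the paper's actual ARIMAX structure is not reproduced.

```python
# Illustrative sketch: PRBS excitation and least-squares ARX identification.
# A 2nd-order ARX model and a synthetic plant are assumptions for the demo.
import numpy as np

def prbs(n_samples):
    """Pseudo-random binary sequence in {-1, +1} from a 7-bit LFSR."""
    state = [1, 0, 0, 1, 0, 1, 1]
    out = np.empty(n_samples)
    for k in range(n_samples):
        bit = state[-1] ^ state[-2]        # feedback taps
        state = [bit] + state[:-1]
        out[k] = 2.0 * state[-1] - 1.0
    return out

def identify_arx(u, y, na=2, nb=2):
    """Fit y[k] = -a1*y[k-1]-...-a_na*y[k-na] + b1*u[k-1]+...+b_nb*u[k-nb]
    by ordinary least squares (the prediction-error solution for ARX)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]          # (a, b) coefficient estimates

# Synthetic joint response: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1] + noise
u = prbs(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.01 * np.random.randn()
a_hat, b_hat = identify_arx(u, y)
print("estimated a:", a_hat, "estimated b:", b_hat)
```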
From figure 17 the joint velocity evolution for the cases of using a GPC and a PI controller can be observed. The joint velocities are lower than in the proportional controller case, as are the oscillations.

Fig. 17 Joint velocity using a 2D architecture with a GPC (a) and a PI (b) controller.

7. CONCLUSIONS

A vision control system applied to a six degrees of freedom robot was studied. A PUMA 560 model was built and tested with a PI controller using the parameters of the real system. A prediction error method was used to identify the robot dynamics and to implement a predictive control algorithm (MPC and GPC). The three different algorithms always converge to the desired position. In general, we can conclude that both predictive controllers perform well in visual servoing. The obtained results also show that the 2D algorithm associated with the studied controllers allows larger displacements to be controlled than those referred to in [4].

From the analysis of the r.m.s. error presented in Table 1 we can conclude that the GPC gives the better performance. Although the MPC results are better than those of the GPC for the translation in the xy plane, its global error is worse. The good performance of this approach is evident in the visual servoing trajectory. The identification procedure has a great influence on the results. The evaluation of the graphical trajectories and the computed errors finally allows the conclusion that the GPC vision control algorithm leads to the best performance. Finally, the experimental results obtained for the "eye-to-hand" case are in agreement with those expected from the theoretical algorithm development and the simulation results. The developed experimental system allows the simulation algorithms to be used with great versatility.

8. FUTURE WORK

In future work, other kinds of controllers, such as intelligent, neural and fuzzy controllers, will be used. Other algorithms to estimate the joint coordinates should be tested. These algorithms will be applied to the real robot in visual servoing path planning. Furthermore, other targets and other visual features should be tested.

ACKNOWLEDGEMENTS

This work is partially supported by the "Programa de Financiamento Plurianual de Unidades de I&D (POCTI), do Quadro Comunitário de Apoio III", by the FEDER program and by the FCT project POCTI/EME/39946/2001.

REFERENCES

[1] Camacho, E. F. and C. Bordons (1999). Model Predictive Control. Springer, Berlin.
[2] Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Generalized Predictive Control - Part I: The Basic Algorithm. Automatica, Vol. 23, No. 2, pp. 137-148.
[3] Lee, J. H. and B. Cooley (1997). Recent advances in model predictive control. In: Chemical Process Control, Vol. 93, No. 316, pp. 201-216. AIChE Symposium Series, American Institute of Chemical Engineers.
[4] Gangloff, J. (1999). Asservissements visuels rapides d'un robot manipulateur à six degrés de liberté. Thèse de Doctorat, Université Louis Pasteur.
[5] Corke, P. (1994). A Search for Consensus Among Model Parameters Reported for the Puma 560. In Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1608-1613, San Diego.
[6] Malis, E., F. Chaumette and S. Boudet (1999). 2 1/2 D Visual Servoing. IEEE Transactions on Robotics and Automation, 15 (2), 238-250.
[7] Chaumette, F. (1990). La relation vision-commande: théorie et application à des tâches robotiques. Thèse de Doctorat, Université de Rennes.
[8] Craig, J. (1988). Adaptive Control of Mechanical Manipulators. Addison-Wesley.
[9] Siciliano, B. and L. Sciavicco (2000). Modelling and Control of Robot Manipulators, 2nd edition. Springer-Verlag.
[10] Ferreira, P. and P. Caldas (2003). 3D and 2D Visual Servoing Architectures for a PUMA 560 Robot. In Proceedings of the 7th IFAC Symposium on Robot Control, September 1-3, pp. 193-198.
[11] Hashimoto, K. and T. Noritsugu (1998). Performance and Sensitivity in Visual Servoing. IEEE Int. Conf. on Robotics and Automation, Leuven, Belgium, pp. 2321-2326.
[12] Ljung, L. (1987). System Identification: Theory for the User. Prentice Hall, Englewood Cliffs, New Jersey.