Pushing for Simplicity for Vision and Grasping in Industrial Robotics
T. Sølund*^, A. B. Beck*, H. Aanæs^ & N. Krüger**
*Danish Technological Institute - Center for Robot Technology, Denmark
**Mærsk McKinney Møller Institute, University of Southern Denmark
^Department of Applied Mathematics and Computer Science, Technical University of Denmark

Introduction
In this work we present a framework that enables one-shot 3D model learning in an industrial-like setting. We show how a multi-view sensor setup is used to teach the robot new 3D CAD models of objects. This learning framework serves as input for both pose estimation and grasping algorithms, enabling an unskilled worker to teach the robot how to handle new objects in a short time. We plan to evaluate the system by showing that fairly noisy object models are good enough for the pose estimation and grasping tasks in a pick-and-place scenario.

Approach
In this work a multi-view scene setup is utilized to ensure 360-degree coverage of the object of interest. Figure 2 shows a sketch of the setup, containing three Bumblebee stereo cameras, three Kinect sensors and two Universal Robots UR5 manipulators. Figure 1 shows the scene taken with the three Bumblebee cameras. In stereo vision a lack of features can result in bad matching. To overcome this we added three projectors that illuminate the scene with a random dot pattern (a sketch of such a pattern generator follows the Grasp planning section).

Figure 1: Extracted 3D representation of the object using a multi-view stereo setup with additional texture added
Figure 2: Scene setup: three stereo pairs cover 360 degrees of the scene

Modelling steps (hedged code sketches of steps 1-3, 5-6 and 7-8 follow below):
1. Filter the merged point cloud to get the region of interest (ROI)
2. Remove the table using RANSAC
3. Find clusters using Euclidean clustering
4. Turn the object and repeat steps 1 to 3
5. Run Sample Consensus Initial Alignment (SAC-IA) to roughly register the two models
6. Run ICP for fine registration of the models
7. Remove noise using Moving Least Squares
8. Run Poisson reconstruction to create a solid model (.obj, .stl, etc.)

Pose Estimation
We use 3D descriptor-based pose estimation to compute the pose of the object, originally proposed by Anders Glent Buch. This work is unpublished at the moment. The pose estimation algorithm first computes the pose of the object from a single view in order to reason about object boundaries. In a second step we utilize all three views to refine the initial pose obtained from the single view, using ICP for fine registration. Figure 7 shows a scene taken with the Kinect; the initial pose is shown in red and the final pose in green.

Grasp planning
One of the main reasons for adding CAD models to a robot system is the ability to simulate actions in a virtual environment before executing the real action. When an unskilled worker has to configure a robot system for a new task, there are two major challenges: which object to handle, and how? In this work we quantify our learned model by answering a simple question: is a noisy, learned model good enough for stable analytic grasp planning? To do this we use RobWork and RobWorkSim to compute stable grasps using our model. Furthermore, we evaluate the results in the real world by executing the grasps and comparing the outcome with that of the real CAD model of the object.
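For illustration, the random dot pattern used to add texture for stereo matching can be generated in a few lines of OpenCV. This is a minimal sketch under assumed parameters (projector resolution, dot density); the poster does not specify how its pattern is produced.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
  // Assumed projector resolution; the actual hardware may differ.
  cv::Mat pattern(768, 1024, CV_8UC1);
  cv::randu(pattern, 0, 256);  // uniform random intensities
  // Keep roughly the brightest 10% of pixels as white dots on black,
  // giving the stereo matcher texture on otherwise featureless surfaces.
  cv::threshold(pattern, pattern, 230, 255, cv::THRESH_BINARY);
  cv::imwrite("dot_pattern.png", pattern);
  return 0;
}
```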
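The modelling pipeline maps naturally onto the Point Cloud Library (PCL), which provides SAC-IA, Moving Least Squares and Poisson reconstruction. Below is a minimal sketch of steps 1-3 (ROI filtering, table removal, clustering), assuming PCL and illustrative parameter values rather than the authors' settings.

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

std::vector<pcl::PointIndices> segmentObjects(const Cloud::Ptr& merged)
{
  // 1. Crop the merged cloud to the region of interest above the table.
  Cloud::Ptr roi(new Cloud);
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(merged);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.0f, 1.5f);  // metres; illustrative
  pass.filter(*roi);

  // 2. Fit the dominant plane (the table) with RANSAC and remove it.
  pcl::ModelCoefficients coeffs;
  pcl::PointIndices::Ptr plane(new pcl::PointIndices);
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm inlier band
  seg.setInputCloud(roi);
  seg.segment(*plane, coeffs);

  Cloud::Ptr objects(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(roi);
  extract.setIndices(plane);
  extract.setNegative(true);  // keep everything that is NOT the table
  extract.filter(*objects);

  // 3. Group the remaining points into object candidates.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);  // 2 cm
  ec.setMinClusterSize(500);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  return clusters;
}
```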
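Steps 5 and 6 register the two partial models: SAC-IA over local descriptors gives the rough alignment of Figure 3, and ICP refines it as in Figure 4; the same ICP refinement also polishes the single-view pose described in the Pose Estimation section. A hedged PCL sketch follows; the FPFH descriptor choice and the search radii are assumptions, since the poster does not name its descriptors.

```cpp
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>  // SampleConsensusInitialAlignment
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Features = pcl::PointCloud<pcl::FPFHSignature33>;

// Compute FPFH descriptors, the local features SAC-IA matches between views.
Features::Ptr computeFPFH(const Cloud::Ptr& cloud)
{
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setRadiusSearch(0.02);  // 2 cm support; illustrative
  ne.compute(*normals);

  Features::Ptr features(new Features);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setRadiusSearch(0.05);
  fpfh.compute(*features);
  return features;
}

// 5-6. Rough alignment with SAC-IA, then ICP for fine registration.
Eigen::Matrix4f registerViews(const Cloud::Ptr& source, const Cloud::Ptr& target)
{
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac;
  sac.setInputSource(source);
  sac.setSourceFeatures(computeFPFH(source));
  sac.setInputTarget(target);
  sac.setTargetFeatures(computeFPFH(target));
  Cloud rough;
  sac.align(rough);  // rough registration, as in Figure 3

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(rough.makeShared());
  icp.setInputTarget(target);
  Cloud fine;
  icp.align(fine);   // fine registration, as in Figure 4

  // Total source-to-target transform: ICP correction applied after SAC-IA.
  return icp.getFinalTransformation() * sac.getFinalTransformation();
}
```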
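Steps 7 and 8 turn the registered, noisy cloud into a solid model: Moving Least Squares smooths the surface and re-estimates normals, which Poisson reconstruction needs to produce a watertight mesh. Again a sketch with illustrative parameters, not the authors' configuration.

```cpp
#include <pcl/point_types.h>
#include <pcl/surface/mls.h>
#include <pcl/surface/poisson.h>
#include <pcl/io/obj_io.h>
#include <pcl/PolygonMesh.h>

// 7. Moving Least Squares smooths sensor noise and re-estimates normals.
pcl::PointCloud<pcl::PointNormal>::Ptr smooth(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& model)
{
  pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(new pcl::PointCloud<pcl::PointNormal>);
  pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
  mls.setInputCloud(model);
  mls.setSearchRadius(0.02);    // illustrative smoothing radius
  mls.setComputeNormals(true);  // Poisson needs oriented normals
  mls.process(*smoothed);
  return smoothed;
}

// 8. Poisson reconstruction turns the oriented cloud into a watertight mesh.
void reconstruct(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud)
{
  pcl::PolygonMesh mesh;
  pcl::Poisson<pcl::PointNormal> poisson;
  poisson.setDepth(8);  // octree depth; illustrative trade-off of detail vs. noise
  poisson.setInputCloud(cloud);
  poisson.reconstruct(mesh);
  pcl::io::saveOBJFile("model.obj", mesh);  // solid model, e.g. for RobWork
}
```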
Figure 3: Roughly aligned point clouds through Sample Consensus Initial Alignment (SAC-IA)
Figure 4: ICP registration of the two models
Figure 5: Reconstructed model in RobWork
Figure 6: Computed grasps in RobWork
Figure 7: Pose estimation result

Further work
1. Better approach to computing the initial alignment of the two point clouds
2. Compare pose estimation to ground truth using one and three cameras
3. Extend and include the grasp planning framework in this work
4. Execute grasps in the real world using a PG70 parallel gripper
5. Compare grasps using the learned model and the ground-truth model

Acknowledgement
The research leading to these results has been funded by the Danish Ministry of Science, Innovation and Higher Education under grant agreement #11-117524.
