Lifeng Liu - Sudbury MA, US Aaron S. Wallack - Natick MA, US Cyril C. Marrion - Acton MA, US
Assignee:
Cognex Corporation - Natick MA
International Classification:
G06K 9/36; G06K 9/00
US Classification:
382/285, 382/154
Abstract:
This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The 2D object features searched in the acquired non-perspective image, corresponding to trained object features for the non-perspective camera assembly, can be combined with the 2D object features searched in images from other camera assemblies (perspective or non-perspective), based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner, the speed and accuracy of the overall pose-determination process are improved. The non-perspective lens can be a telecentric lens.
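As a rough illustration of the feature-combination step described above (a minimal sketch, not Cognex's implementation): with a telecentric lens, each 2D image feature back-projects to a ray parallel to the camera's optical axis, and a 3D feature point can be recovered by intersecting rays from two camera assemblies. The camera placements below are assumptions for the example.

```python
# Sketch: triangulate a 3D feature from 2D features in two telecentric
# (orthographic) camera assemblies by intersecting their back-projected
# rays. Rays rarely intersect exactly, so we take the midpoint of the
# common perpendicular between the two lines p + t*d.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def closest_point(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two rays p + t*d."""
    r = sub(p2, p1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (e * c - b * f) / denom
    s = (b * e - a * f) / denom
    q1 = tuple(p + t * d for p, d in zip(p1, d1))
    q2 = tuple(p + s * d for p, d in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Camera A (telecentric, optical axis +Z) sees the feature at image (1, 2);
# camera B (telecentric, optical axis +X) sees it at image (2, 3).
pt = closest_point((1.0, 2.0, 0.0), (0.0, 0.0, 1.0),
                   (0.0, 2.0, 3.0), (1.0, 0.0, 0.0))
```

Repeating this over all corresponded features yields the set of 3D image features from which the object's 3D pose is estimated.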
System And Method For Three-Dimensional Alignment Of Objects Using Machine Vision
Cyril C. Marrion - Acton MA, US Nigel J. Foster - Natick MA, US Lifeng Liu - Arlington MA, US David Y. Li - West Roxbury MA, US Guruprasad Shivaram - Chestnut Hill MA, US Aaron S. Wallack - Natick MA, US Xiangyun Ye - Framingham MA, US
Assignee:
COGNEX CORPORATION - Natick MA
International Classification:
G06K 9/00
US Classification:
382154
Abstract:
This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
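One step in the pipeline above, merging each camera pair's 3D points into a single world-space point cloud via that pair's world-calibration transform, can be sketched as follows. This is a hedged illustration; the transform values and function names are assumptions, not the patent's API.

```python
# Sketch: each stereo pair produces 3D points in its own frame; the world
# calibration supplies a rigid transform (R, t) per pair. Applying it to
# every point and concatenating yields the unified 3D point cloud.

def apply_rigid(R, t, p):
    """Map a 3D point from a camera pair's frame into world coordinates."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def merge_point_cloud(pairs):
    """pairs: list of ((R, t), points) — one entry per stereo head pair."""
    cloud = []
    for (R, t), points in pairs:
        cloud.extend(apply_rigid(R, t, p) for p in points)
    return cloud

# Illustrative calibration: pair 0 is already in world coordinates;
# pair 1 is rotated 90 degrees about Z and offset 5 units along Z.
I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
Rz90 = ((0, -1, 0), (1, 0, 0), (0, 0, 1))
cloud = merge_point_cloud([
    ((I, (0, 0, 0)), [(1.0, 0.0, 0.0)]),
    ((Rz90, (0, 0, 5)), [(1.0, 0.0, 0.0)]),
])
```

The resulting cloud is what the HLGS extraction step (e.g., line-segment fitting) would then reduce before correspondence with the model.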
System And Method For Robust Calibration Between A Machine Vision System And A Robot
Aaron S. Wallack - Natick MA, US Lifeng Liu - Arlington MA, US Xiangyun Ye - Framingham MA, US
International Classification:
G06T 7/00
US Classification:
382/153, 901/14
Abstract:
A system and method for robustly calibrating a vision system and a robot is provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, so that a machine vision/robot control system can accurately identify the location of objects of interest within robot base coordinates.
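The payoff of such a calibration can be illustrated with a small sketch (the transform values and names here are hypothetical, not from the patent): once calibration yields a rigid transform from camera coordinates to robot base coordinates, an object located by the vision system can be reported directly in robot base coordinates with one matrix application.

```python
# Sketch: apply a calibrated camera-to-robot-base transform (homogeneous
# 4x4 matrix, here called base_T_cam) to an object position observed in
# the camera frame, yielding the position in robot base coordinates.

def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    out = [sum(T[i][j] * v[j] for j in range(4)) for i in range(4)]
    return tuple(out[:3])

# Hypothetical calibration result: camera rotated 180 degrees about X
# (looking down at the workspace) and offset from the robot base origin.
base_T_cam = (
    (1.0,  0.0,  0.0, 0.5),
    (0.0, -1.0,  0.0, 0.2),
    (0.0,  0.0, -1.0, 1.0),
    (0.0,  0.0,  0.0, 1.0),
)
obj_in_cam = (0.1, 0.0, 0.8)       # object position seen by the camera
obj_in_base = mat_vec(base_T_cam, obj_in_cam)
```

In a multi-camera setup, each camera gets its own base_T_cam from the calibration, so all observations land in the one robot base frame.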
System And Method For Training A Model In A Plurality Of Non-Perspective Cameras And Determining 3D Pose Of An Object At Runtime With The Same
Lifeng Liu - Sudbury MA, US Aaron S. Wallack - Natick MA, US
Assignee:
COGNEX CORPORATION - Natick MA
International Classification:
H04N 13/02
US Classification:
348/50, 348/E13.074, 348/E13.001
Abstract:
This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training time and at runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The object features searched in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed from at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected locations in those images. This approach can be used at training time, to generate the training model, and at runtime, operating on acquired images of runtime objects. The non-perspective cameras can employ telecentric lenses.
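The inter-image prediction step above can be sketched as follows. With non-perspective (orthographic/telecentric) imaging, the mapping from feature coordinates in the first image to their expected coordinates in a second image reduces to a 2D affine transform x' = A·x + b; in the patent this transform is computed from the cameras' intrinsics and extrinsics, but the matrix below is purely illustrative.

```python
# Sketch: predict where features found in a first telecentric image
# should appear in a second telecentric image via a 2D affine transform,
# so the second search can be restricted to small windows around the
# expected locations.

def predict_feature(A, b, pt):
    """Expected location in the second image of a first-image feature."""
    x, y = pt
    return (A[0][0] * x + A[0][1] * y + b[0],
            A[1][0] * x + A[1][1] * y + b[1])

A = ((0.0, -1.0), (1.0, 0.0))   # illustrative: 90-degree inter-view rotation
b = (10.0, 5.0)                 # illustrative: offset between image origins
expected = [predict_feature(A, b, p) for p in [(1.0, 0.0), (0.0, 2.0)]]
```

Searching only near each predicted location, then refining, is what makes the multi-camera training and runtime search faster than an unconstrained search of every image.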