Lifeng Liu

age ~52

from Sudbury, MA

Also known as:
  • Liseng Liu
  • Li Fengliu
  • Liu Lifeng
  • Liv Lifeng

Lifeng Liu Phones & Addresses

  • Sudbury, MA
  • Acton, MA
  • 19 Union St, Arlington, MA 02474 • (781) 646-1790
  • Boston, MA
  • Brookline, MA
  • 54 Patricia Rd, Sudbury, MA 01776

Work

  • Position:
    Professional/Technical

Education

  • Degree:
    High school graduate or higher

Resumes

Software Engineer At Cognex

  • Position:
    Software Engineer at Cognex
  • Location:
    Greater Boston Area
  • Industry:
    Computer Software
  • Work:
    Cognex, Software Engineer
    MDOL, Senior Software Engineer (2000 - 2002)
  • Education:
    Tsinghua University, 1987 - 1996

Engineer At Welocalize

  • Position:
    Engineer at Welocalize
  • Location:
    United States
  • Industry:
    Computer Software
  • Work:
    Welocalize, Engineer (since Jun 2010)
  • Education:
    Portland State University, 2008 - 2010
    Master, Engineering Management

Lifeng Liu

  • Location:
    United States

US Patents

  • System And Method For Finding Correspondence Between Cameras In A Three-Dimensional Vision System

  • US Patent:
    8600192, Dec 3, 2013
  • Filed:
    Dec 8, 2010
  • Appl. No.:
    12/962918
  • Inventors:
    Lifeng Liu - Sudbury MA,
    Aaron S. Wallack - Natick MA,
    Cyril C. Marrion - Acton MA
  • Assignee:
    Cognex Corporation - Natick MA
  • International Classification:
    G06K 9/36
    G06K 9/00
  • US Classification:
    382285, 382154
  • Abstract:
    This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.
  • System And Method For Three-Dimensional Alignment Of Objects Using Machine Vision

  • US Patent:
    2010016, Jul 1, 2010
  • Filed:
    Dec 29, 2008
  • Appl. No.:
    12/345130
  • Inventors:
    Cyril C. Marrion - Acton MA,
    Nigel J. Foster - Natick MA,
    Lifeng Liu - Arlington MA,
    David Y. Li - West Roxbury MA,
    Guruprasad Shivaram - Chestnut Hill MA,
    Aaron S. Wallack - Natick MA,
    Xiangyun Ye - Framingham MA
  • Assignee:
    COGNEX CORPORATION - Natick MA
  • International Classification:
    G06K 9/00
  • US Classification:
    382154
  • Abstract:
    This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
  • System And Method For Robust Calibration Between A Machine Vision System And A Robot

  • US Patent:
    2011028, Nov 17, 2011
  • Filed:
    May 14, 2010
  • Appl. No.:
    12/780119
  • Inventors:
    Aaron S. Wallack - Natick MA,
    Lifeng Liu - Arlington MA,
    Xiangyun Ye - Framingham MA
  • International Classification:
    G06T 7/00
  • US Classification:
    382153, 901 14
  • Abstract:
    A system and method for robustly calibrating a vision system and a robot is provided. The system and method enables a plurality of cameras to be calibrated into a robot base coordinate system to enable a machine vision/robot control system to accurately identify the location of objects of interest within robot base coordinates.
  • System And Method For Training A Model In A Plurality Of Non-Perspective Cameras And Determining 3D Pose Of An Object At Runtime With The Same

  • US Patent:
    2012014, Jun 14, 2012
  • Filed:
    Dec 8, 2010
  • Appl. No.:
    12/963007
  • Inventors:
    Lifeng Liu - Sudbury MA,
    Aaron S. Wallack - Natick MA
  • Assignee:
    COGNEX CORPORATION - Natick MA
  • International Classification:
    H04N 13/02
  • US Classification:
    348 50, 348E13074, 348E13001
  • Abstract:
    This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and at runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected locations in those images. This approach can be used in training, to generate the training model, and at runtime, operating on acquired images of runtime objects. The non-perspective cameras can employ telecentric lenses.
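The 3D-alignment abstract above describes a standard rectified-stereo pipeline: match a feature across a camera pair, then compute a 3D point from the match. The core triangulation step can be illustrated with a textbook depth-from-disparity sketch; this is a generic illustration under an idealized pinhole model, not the patented method, and the function name and parameters are hypothetical:

```python
def triangulate_rectified(pt_left, pt_right, f, baseline, cx, cy):
    """Recover a 3D point (in the left-camera frame) from one matched
    feature in a rectified stereo pair.

    pt_left, pt_right: (x, y) pixel coordinates of the same feature;
    f: focal length in pixels; baseline: camera separation in world units;
    (cx, cy): principal point in pixels.
    """
    # In a rectified pair the match lies on the same image row, so the
    # disparity is purely horizontal.
    disparity = pt_left[0] - pt_right[0]
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    z = f * baseline / disparity      # depth by similar triangles
    x = (pt_left[0] - cx) * z / f     # back-project through the pinhole model
    y = (pt_left[1] - cy) * z / f
    return (x, y, z)
```

Repeating this over every matched feature in every camera pair, then transforming the results into a common world frame via the calibration, yields the 3D point cloud the abstract refers to.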

Googleplus

Lifeng Liu

Plaxo

Lifeng Liu

Cognex

Myspace

Lifeng Liu

  • Locality:
    ARLINGTON, Massachusetts
  • Birthday:
    1929

Facebook

Lifeng Liu

  • Friends:
    Xiao Qiang, Russell Basil McKinnon, Carlyle Laurie, Athena Yu Rao, Can Zhai

liu lifeng

Liu Lifeng

  • Friends:
    Meizi Jiao, Xiangyu Jiao
