
Gierad P Laput

age ~38

from Pittsburgh, PA

Also known as:
  • Gieric S Laput
  • Gerad Laput
  • Geirad Laput

Gierad Laput Phones & Addresses

  • 550 Peebles St, Pittsburgh, PA 15221
  • Shadyside, PA
  • Ann Arbor, MI
  • 217 Canford Park, Canton, MI 48187 • (734) 404-5014
  • Warren, MI
  • Sterling Heights, MI
  • Madison Heights, MI

US Patents

  • Natural Language Image Spatial And Tonal Localization

  • US Patent:
    20140081625, Mar 20, 2014
  • Filed:
    Nov 21, 2012
  • Appl. No.:
    13/683416
  • Inventors:
    Walter W. Chang - San Jose CA, US
    Lubomira A. Dontcheva - San Francisco CA, US
    Gierad P. Laput - Canton MI, US
    Aseem O. Agarwala - Seattle WA, US
  • Assignee:
    Adobe Systems Incorporated - San Jose CA
  • International Classification:
    G06F 3/16
    G06F 17/27
  • US Classification:
    704/9, 704/275
  • Abstract:
    Natural language image spatial and tonal localization techniques are described. In one or more implementations, a natural language input is processed to determine spatial and tonal localization of one or more image editing operations specified by the natural language input. Performance is initiated of the one or more image editing operations on image data using the determined spatial and tonal localization.
  • Natural Language And User Interface Controls

  • US Patent:
    20140082500, Mar 20, 2014
  • Filed:
    Nov 21, 2012
  • Appl. No.:
    13/683341
  • Inventors:
    Walter W. Chang - San Jose CA, US
    Lubomira A. Dontcheva - San Francisco CA, US
    Gierad P. Laput - Canton MI, US
    Aseem O. Agarwala - Seattle WA, US
  • Assignee:
    Adobe Systems Incorporated - San Jose CA
  • International Classification:
    G06F 3/0484
    G06F 3/16
    G06F 3/048
  • US Classification:
    715/727, 715/764, 715/833
  • Abstract:
    Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.
  • Natural Language Image Tags

  • US Patent:
    20140078076, Mar 20, 2014
  • Filed:
    Nov 21, 2012
  • Appl. No.:
    13/683466
  • Inventors:
    Walter W. Chang - San Jose CA, US
    Lubomira A. Dontcheva - San Francisco CA, US
    Gierad P. Laput - Canton MI, US
    Aseem O. Agarwala - Seattle WA, US
  • Assignee:
    Adobe Systems Incorporated - San Jose CA
  • International Classification:
    G06F 3/16
  • US Classification:
    345/173
  • Abstract:
    Natural language image tags are described. In one or more implementations, at least a portion of an image displayed by a display device is defined based on a gesture. The gesture is identified from one or more touch inputs detected using touchscreen functionality of the display device. Text received in a natural language input is located and used to tag the portion of the image using one or more items of the text received in the natural language input.
  • Natural Language Image Editing

  • US Patent:
    20140078075, Mar 20, 2014
  • Filed:
    Nov 21, 2012
  • Appl. No.:
    13/683278
  • Inventors:
    Walter W. Chang - San Jose CA, US
    Lubomira A. Dontcheva - San Francisco CA, US
    Gierad P. Laput - Canton MI, US
    Aseem O. Agarwala - Seattle WA, US
  • Assignee:
    Adobe Systems Incorporated - San Jose CA
  • International Classification:
    G06F 3/0488
    G10L 15/26
  • US Classification:
    345/173, 345/156
  • Abstract:
    Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.
  • User Identification Using Headphones

  • US Patent:
    20220408173, Dec 22, 2022
  • Filed:
    Aug 22, 2022
  • Appl. No.:
    17/893158
  • Inventors:
    - Cupertino CA, US
    Gierad LAPUT - Pittsburgh PA, US
  • International Classification:
    H04R 1/10
    G06K 9/62
    G06F 21/32
    G06F 21/84
    G06F 3/01
    G06F 9/54
    G06F 3/16
  • Abstract:
    Systems and processes for user identification using headphones associated with a first device are provided. For example, first movement information corresponding to movement of a second electronic device is detected. Second movement information corresponding to movement of a third electronic device is detected. A similarity score is determined based on the first movement information and the second movement information. In accordance with a determination that the similarity score is above a threshold similarity score, a user is identified as an authorized user of the first electronic device and the second electronic device. Based on the identification, an output is provided to the second electronic device.
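The identification logic in this abstract can be sketched in a few lines: compare the motion traces of two devices, and treat the wearer as the authorized user when the traces move together. The patent does not specify the similarity metric or threshold; the cosine similarity and 0.9 cutoff below are illustrative assumptions only.

```python
import math

def similarity_score(a, b):
    """Cosine similarity between two equal-length movement traces.

    The metric is an assumption for illustration; the patent abstract
    only says a similarity score is computed from the two signals.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)

def is_authorized(device_motion, headphone_motion, threshold=0.9):
    # Per the abstract: a score above the threshold identifies the
    # wearer as an authorized user of both devices.
    return similarity_score(device_motion, headphone_motion) > threshold
```

Identical traces score 1.0 and pass the threshold; uncorrelated traces score near zero and fail.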
  • User Identification Using Headphones

  • US Patent:
    20220382843, Dec 1, 2022
  • Filed:
    Jul 5, 2022
  • Appl. No.:
    17/857947
  • Inventors:
    - Cupertino CA, US
    Gierad LAPUT - Pittsburgh PA, US
  • International Classification:
    G06F 21/32
    G06K 9/62
    G06F 3/01
    G06F 3/16
    H04R 1/08
    H04R 1/10
  • Abstract:
    Systems and processes for user identification using headphones associated with a first device are provided. For example, first movement information corresponding to movement of a second electronic device is detected. Second movement information corresponding to movement of a third electronic device is detected. A similarity score is determined based on the first movement information and the second movement information. In accordance with a determination that the similarity score is above a threshold similarity score, a user is identified as an authorized user of the first electronic device and the second electronic device. Based on the identification, an output is provided to the second electronic device.
  • Diagnosis And Monitoring Of Bruxism Using Earbud Motion Sensors

  • US Patent:
    20220313153, Oct 6, 2022
  • Filed:
    Mar 30, 2022
  • Appl. No.:
    17/709322
  • Inventors:
    - Cupertino CA, US
    Jun Gong - Austin TX, US
    Gierad Laput - Pittsburgh PA, US
    Mengying Fang - Pittsburgh PA, US
    Ke-Yu Chen - San Ramon CA, US
    Runchang Kang - Melrose MA, US
  • International Classification:
    A61B 5/00
    G06N 20/10
  • Abstract:
    Enclosed are embodiments for diagnosis and monitoring of bruxism using earbud motion sensors. In an embodiment, a method comprises: receiving, with at least one processor, a signal derived from a motion sensor in an earbud, wherein the signal is captured while the earbud is inserted in an ear of a user; segmenting, with the at least one processor, the signal into segments; extracting, with the at least one processor, features from the segments; classifying, with the at least one processor, the features; and determining, with the at least one processor, that orofacial activity is predicted based on the classifying.
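The method in this abstract is a classic segment → extract features → classify pipeline. A minimal sketch follows; the window size, features, and threshold rule are all stand-in assumptions (the IPC code G06N 20/10 suggests the real classifier is a kernel-method model such as an SVM, not the toy rule used here).

```python
def segment(signal, window=4, step=2):
    """Split a motion-sensor signal into overlapping fixed-size windows."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

def extract_features(seg):
    """Illustrative per-window features: mean and peak-to-peak amplitude.

    The actual feature set is not disclosed in the abstract.
    """
    mean = sum(seg) / len(seg)
    return (mean, max(seg) - min(seg))

def classify(features, amp_threshold=0.5):
    """Toy threshold 'classifier' flagging high-amplitude windows,
    standing in for the trained model the patent describes."""
    _, amplitude = features
    return amplitude > amp_threshold

def orofacial_activity_predicted(signal):
    # Orofacial activity is predicted if any window is classified positive.
    return any(classify(extract_features(s)) for s in segment(signal))
```

A flat (quiet) signal yields no positive windows, while an oscillating one (as grinding would produce in the earbud's motion sensor) does.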
  • System And Method For Capturing Cardiopulmonary Signals

  • US Patent:
    20230097790, Mar 30, 2023
  • Filed:
    Jul 25, 2022
  • Appl. No.:
    17/872970
  • Inventors:
    - Cupertino CA, US
    Gierad LAPUT - Pittsburgh PA, US
    Seyedeh Fereshteh SHAHMIRI - Atlanta GA, US
  • International Classification:
    A61B 7/00
    A61B 5/08
  • Abstract:
    A method is provided that includes receiving an accelerometer signal from an accelerometer in a headphone configured to be mounted in a user's ear canal and filtering the accelerometer signal to extract a cardiac signal. The method further includes detecting a plurality of peaks in the cardiac signal and determining a cardiac rate of the user based on the detected plurality of peaks.
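The rate computation described here (filter, find peaks, derive rate from peak spacing) can be sketched as follows. The peak detector and averaging scheme are assumptions; the abstract only states that peaks are detected in the filtered cardiac signal and a rate is derived from them.

```python
def detect_peaks(signal, threshold=0.5):
    """Indices of local maxima above a threshold in a filtered signal."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]

def cardiac_rate_bpm(signal, sample_rate_hz):
    """Heart rate from the mean peak-to-peak interval.

    Assumes the input has already been band-pass filtered to isolate
    the cardiac component, as the abstract describes.
    """
    peaks = detect_peaks(signal)
    if len(peaks) < 2:
        return None  # not enough peaks to estimate a rate
    intervals = [(b - a) / sample_rate_hz for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

With peaks one second apart, the estimate comes out to 60 beats per minute, as expected.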
