Chaojun Liu

age ~53

from Kirkland, WA

Also known as:
  • Yang Lu Chun
  • Yang Liu Chaojun
  • Chun I Liu
  • Chun Yang Lu
  • Jun Liu Chao
  • Chunyang Lu
  • Chaojung Lu
  • G Lu
  • N Liu
Phone and address:
12832 NE 141st Ct, Kirkland, WA 98034
(425) 296-4172

Chaojun Liu Phones & Addresses

  • 12832 NE 141st Ct, Kirkland, WA 98034 • (425) 296-4172
  • Bellevue, WA
  • San Jose, CA
  • 8935 160th Ave, Redmond, WA 98052 • (425) 885-6017
  • 2944 Moda Way, Hillsboro, OR 97124 • (503) 690-9011
  • Beaverton, OR
  • Salt Lake City, UT
  • Saratoga, CA
  • Kiona, WA

US Patents

  • Enhanced Automatic Speech Recognition Using Mapping Between Unsupervised And Supervised Speech Model Parameters Trained On Same Acoustic Training Data

  • US Patent:
    8306819, Nov 6, 2012
  • Filed:
    Mar 9, 2009
  • Appl. No.:
    12/400528
  • Inventors:
    Chaojun Liu - Kirkland WA, US
    Yifan Gong - Sammamish WA, US
  • Assignee:
    Microsoft Corporation - Redmond WA
  • International Classification:
    G10L 15/06
    G10L 15/00
  • US Classification:
    704/244, 704/234, 704/251
  • Abstract:
    Techniques for enhanced automatic speech recognition are described. An enhanced ASR system may be operative to generate an error correction function representing a mapping between a supervised set of parameters and an unsupervised training set of parameters generated using the same set of acoustic training data, and to apply the error correction function to an unsupervised testing set of parameters to form a corrected set of parameters used to perform speaker adaptation. Other embodiments are described and claimed.
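
    A minimal sketch of the mapping idea from this abstract, assuming a linear error correction function fit by least squares (the patent does not specify the functional form; the shapes and data below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)

        # Paired parameter sets estimated on the same acoustic training data:
        # rows are model parameters (e.g. per-Gaussian means), columns are dims.
        unsup_train = rng.normal(size=(500, 8))                  # unsupervised estimates
        sup_train = unsup_train @ rng.normal(size=(8, 8)) + 0.1  # supervised counterparts (toy)

        # Fit W minimizing ||unsup_train @ W - sup_train||^2; this linear map
        # plays the role of the error correction function (assumed linear).
        W, *_ = np.linalg.lstsq(unsup_train, sup_train, rcond=None)

        # Apply the learned correction to an unsupervised *testing* set of
        # parameters before they are used for speaker adaptation.
        unsup_test = rng.normal(size=(100, 8))
        corrected = unsup_test @ W
        print(corrected.shape)  # (100, 8)
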
  • Discriminative Training Of HMM Models Using Maximum Margin Estimation For Speech Recognition

  • US Patent:
    20070083373, Apr 12, 2007
  • Filed:
    Oct 11, 2005
  • Appl. No.:
    11/247854
  • Inventors:
    Chaojun Liu - San Jose CA, US
    David Kryze - Santa Barbara CA, US
    Luca Rigazio - Santa Barbara CA, US
  • Assignee:
    Matsushita Electric Industrial Co., Ltd. - Osaka
  • International Classification:
    G10L 15/14
  • US Classification:
    704/256.2
  • Abstract:
    An improved discriminative training method is provided for hidden Markov models. The method includes: defining a measure of separation margin for the data; identifying a subset of training utterances having utterances misrecognized by the models; defining a training criterion for the models based on maximizing the separation margin; formulating the training criterion as a constrained minimax optimization problem; and solving the constrained minimax optimization problem over the subset of training utterances, thereby discriminatively training the models.
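
    A toy sketch of the margin-based selection described above, with scalar utterance scores standing in for HMM log-likelihoods (the actual method optimizes the model parameters through a constrained minimax problem, which is only gestured at here):

        import numpy as np

        def separation_margin(correct_score, competitor_scores):
            """Margin of the correct transcription over the best competitor."""
            return correct_score - competitor_scores.max()

        rng = np.random.default_rng(1)
        correct = rng.normal(1.0, 1.0, size=20)           # scores of reference transcriptions
        competitors = rng.normal(0.0, 1.0, size=(20, 5))  # scores of competing hypotheses

        margins = np.array([separation_margin(c, k)
                            for c, k in zip(correct, competitors)])

        # Subset of training utterances that are misrecognized or nearly so;
        # the patent restricts the minimax optimization to such a support set.
        support = margins <= 0.5

        # Training criterion: maximize the smallest margin over the support set
        # (evaluated here; a trainer would take gradients w.r.t. the HMM
        # parameters that produce the scores).
        print("min margin on support set:", margins[support].min())
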
  • Discriminative Training For Speaker And Speech Verification

  • US Patent:
    20070143109, Jun 21, 2007
  • Filed:
    Dec 20, 2005
  • Appl. No.:
    11/312981
  • Inventors:
    Chaojun Liu - San Jose CA, US
    David Kryze - Santa Barbara CA, US
    Luca Rigazio - Santa Barbara CA, US
  • Assignee:
    Matsushita Electric Industrial Co., Ltd. - Osaka
  • International Classification:
    G10L 15/00
  • US Classification:
    704/239
  • Abstract:
    A method for discriminatively training acoustic models is provided for automated speaker verification (SV) and speech (or utterance) verification (UV) systems. The method includes: defining a likelihood ratio for a given speech segment, whose speaker identity (for SV system) or linguist identity (for UV system) is known, using a corresponding acoustic model, and an alternative acoustic model which represents all other speakers (in SV) or all other linguist identities (in UV); determining an average likelihood ratio score for the likelihood ratio scores over a set of training utterances (referred to as true data set) whose speaker identities (for SV) or linguist identities (for UV) are the same; determining an average likelihood ratio score for the likelihood ratio scores over a competing set of training utterances which excludes the speech data in the true data set (referred to as competing data set); and optimizing a difference between the average likelihood ratio score over the true data set and the average likelihood ratio score over the competing data set, thereby improving the acoustic model.
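
    A compact sketch of the training objective from this abstract, with invented toy scores standing in for the log-likelihoods of the claimed model and the alternative ("all other speakers") model:

        import numpy as np

        rng = np.random.default_rng(2)

        def llr(model_scores, alt_model_scores):
            """Log-likelihood ratio of the claimed model over the alternative."""
            return model_scores - alt_model_scores

        # True data set: utterances whose speaker (or linguist) identity
        # matches the model; competing data set: everything else.
        true_set = llr(rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100))
        competing_set = llr(rng.normal(0.2, 1.0, 100), rng.normal(0.0, 1.0, 100))

        # The criterion separates the two averages; training would adjust the
        # acoustic model parameters to widen this gap.
        print("separation objective:", true_set.mean() - competing_set.mean())
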
  • User Interaction For Content Based Storage And Retrieval

  • US Patent:
    20090064008, Mar 5, 2009
  • Filed:
    Aug 31, 2007
  • Appl. No.:
    11/848781
  • Inventors:
    Chaojun Liu - Kirkland WA, US
    Luca Rigazio - San Jose CA, US
    Peter Veprek - San Jose CA, US
    David Kryze - Campbell CA, US
    Steve Pearson - Felton CA, US
  • Assignee:
    Matsushita Electric Industrial Co., Ltd. - Osaka
  • International Classification:
    G06F 3/048
  • US Classification:
    715/764
  • Abstract:
    A graphic user interface system for use with a content based retrieval system includes an active display having display areas. For example, the display areas include a main area providing an overview of database contents by displaying representative samples of the database contents. The display areas also include one or more query areas into which one or more of the representative samples can be moved from the main area by a user employing gesture based interaction. A query formulation module employs the one or more representative samples moved into the query area to provide feedback to the content based retrieval system.
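
    A minimal sketch of the query-formulation module described above; the class and method names are invented, and the centroid query is one plausible formulation rather than the patent's:

        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class QueryArea:
            # Feature vectors of representative samples dropped into this
            # query area by gesture-based interaction.
            samples: list = field(default_factory=list)

            def drop(self, sample):
                self.samples.append(np.asarray(sample))

            def formulate_query(self):
                # One plausible formulation: the centroid of the dropped
                # samples, handed to the content based retrieval system.
                return np.mean(self.samples, axis=0)

        area = QueryArea()
        area.drop([1.0, 0.0])
        area.drop([0.0, 1.0])
        print(area.formulate_query())  # [0.5 0.5]
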
  • Model Training For Automatic Speech Recognition From Imperfect Transcription Data

  • US Patent:
    20100318355, Dec 16, 2010
  • Filed:
    Jun 10, 2009
  • Appl. No.:
    12/482142
  • Inventors:
    Jinyu Li - Bellevue WA, US
    Yifan Gong - Sammamish WA, US
    Chaojun Liu - Kirkland WA, US
    Kaisheng Yao - Bellevue WA, US
  • Assignee:
    Microsoft Corporation - Redmond WA
  • International Classification:
    G10L 15/06
  • US Classification:
    704/244, 704/E15.007
  • Abstract:
    Techniques and systems for training an acoustic model are described. In an embodiment, a technique for training an acoustic model includes dividing a corpus of training data that includes transcription errors into N parts, and on each part, decoding an utterance with an incremental acoustic model and an incremental language model to produce a decoded transcription. The technique may further include inserting silence between a pair of words into the decoded transcription and aligning an original transcription corresponding to the utterance with the decoded transcription according to time for each part. The technique may further include selecting a segment from the utterance having at least Q contiguous matching aligned words, and training the incremental acoustic model with the selected segment. The trained incremental acoustic model may then be used on a subsequent part of the training data. Other embodiments are described and claimed.
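
    A toy sketch of the segment-selection step, matching by word position rather than by time for brevity (Q, the data, and the alignment shortcut are illustrative assumptions):

        def matching_segments(original, decoded, q):
            """Yield (start, end) spans with at least q contiguous matching words."""
            run_start = None
            for i, (a, b) in enumerate(zip(original, decoded)):
                if a == b:
                    if run_start is None:
                        run_start = i
                else:
                    if run_start is not None and i - run_start >= q:
                        yield (run_start, i)
                    run_start = None
            if run_start is not None and len(original) - run_start >= q:
                yield (run_start, len(original))

        # A possibly erroneous original transcription vs. the decoded one;
        # only the agreeing stretches would be used to train the model.
        orig = "the cat sat on the mat today".split()
        dec = "the cat sat in the mat today".split()
        print(list(matching_segments(orig, dec, q=3)))  # [(0, 3), (4, 7)]
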
  • Modular Deep Learning Model

  • US Patent:
    20170256254, Sep 7, 2017
  • Filed:
    Jun 30, 2016
  • Appl. No.:
    15/199346
  • Inventors:
    - Redmond WA, US
    Chaojun Liu - Redmond WA, US
    Kshitiz Kumar - Redmond WA, US
    Kaustubh Prakash Kalgaonkar - Redmond WA, US
    Yifan Gong - Sammamish WA, US
  • International Classification:
    G10L 15/16
    G10L 15/06
    G10L 15/02
    G10L 15/183
    G10L 15/28
  • Abstract:
    The technology described herein uses a modular model to process speech. A deep learning based acoustic model comprises a stack of different types of neural network layers. The sub-modules of a deep learning based acoustic model can be used to represent distinct non-phonetic acoustic factors, such as accent origins (e.g. native, non-native), speech channels (e.g. mobile, bluetooth, desktop etc.), speech application scenario (e.g. voice search, short message dictation etc.), and speaker variation (e.g. individual speakers or clustered speakers), etc. The technology described herein uses certain sub-modules in a first context and a second group of sub-modules in a second context.
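
    A minimal sketch of the modular routing idea, with plain NumPy matrices standing in for the neural network layers and invented factor values:

        import numpy as np

        rng = np.random.default_rng(3)
        D = 16

        shared = rng.normal(size=(D, D))  # layers used in every context

        # One sub-module per value of each non-phonetic acoustic factor.
        accent = {"native": rng.normal(size=(D, D)),
                  "non-native": rng.normal(size=(D, D))}
        channel = {"mobile": rng.normal(size=(D, D)),
                   "bluetooth": rng.normal(size=(D, D)),
                   "desktop": rng.normal(size=(D, D))}

        def forward(x, accent_id, channel_id):
            # Route the features through the shared stack and the sub-modules
            # selected for the current context.
            h = np.tanh(x @ shared)
            h = np.tanh(h @ accent[accent_id])
            return np.tanh(h @ channel[channel_id])

        x = rng.normal(size=(1, D))
        print(forward(x, "non-native", "mobile").shape)  # (1, 16)
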
  • Confidence Features For Automated Speech Recognition Arbitration

  • US Patent:
    20170140759, May 18, 2017
  • Filed:
    Nov 13, 2015
  • Appl. No.:
    14/941058
  • Inventors:
    - Redmond WA, US
    Hosam Khalil - Redmond WA, US
    Yifan Gong - Sammamish WA, US
    Ziad Al-Bawab - San Jose CA, US
    Chaojun Liu - Kirkland WA, US
  • International Classification:
    G10L 15/32
    G10L 15/06
    G10L 15/18
    G10L 15/30
  • Abstract:
    The described technology provides arbitration between speech recognition results generated by different automatic speech recognition (ASR) engines, such as ASR engines trained according to different language or acoustic models. The system includes an arbitrator that selects between a first speech recognition result representing an acoustic utterance as transcribed by a first ASR engine and a second speech recognition result representing the acoustic utterance as transcribed by a second ASR engine. This selection is based on a set of confidence features that is initially used by the first ASR engine or the second ASR engine to generate the first and second speech recognition results.
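
    A sketch of the arbitration step, assuming a logistic combination of per-engine confidence features (the feature names and weights are invented for illustration):

        import math

        def arbitrate(result_a, result_b, weights, bias=0.0):
            """Return the hypothesis of whichever ASR engine the arbitrator
            favors; each result is (hypothesis, confidence_feature_dict)."""
            text_a, feats_a = result_a
            text_b, feats_b = result_b
            z = bias + sum(w * (feats_a[k] - feats_b[k]) for k, w in weights.items())
            p_a = 1.0 / (1.0 + math.exp(-z))
            return text_a if p_a >= 0.5 else text_b

        result_a = ("call mom", {"acoustic_conf": 0.91, "lm_conf": 0.80})
        result_b = ("call tom", {"acoustic_conf": 0.85, "lm_conf": 0.88})
        print(arbitrate(result_a, result_b,
                        weights={"acoustic_conf": 2.0, "lm_conf": 1.0}))  # call mom
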
  • Automatic Speech Recognition Confidence Classifier

  • US Patent:
    20170076725, Mar 16, 2017
  • Filed:
    Sep 11, 2015
  • Appl. No.:
    14/852083
  • Inventors:
    - Redmond WA, US
    Yifan Gong - Sammamish WA, US
    Chaojun Liu - Kirkland WA, US
  • International Classification:
    G10L 15/32
    G10L 15/10
    G10L 15/26
  • Abstract:
    The described technology provides normalization of speech recognition confidence classifier (CC) scores that maintains the accuracy of acceptance metrics. A speech recognition CC score quantitatively represents the correctness of decoded utterances in a defined range (e.g., [0,1]). An operating threshold is associated with a confidence classifier, such that utterance recognitions having scores exceeding the operating threshold are deemed acceptable. However, when a speech recognition engine, an acoustic model, and/or other parameters are updated by the platform, the correct-accept (CA) versus false-accept (FA) profile can change such that the application software's operating threshold is no longer valid or as accurate. Normalizing speech recognition CC scores to map to the same or better CA and/or FA profiles at the previously set operating thresholds allows preset operating thresholds to remain valid and accurate, even after a speech recognition engine, acoustic model, and/or other parameters are changed.
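
    A sketch of one way such a normalization could work: a monotonic piecewise-linear map that sends the updated engine's equivalent threshold back to the application's preset operating threshold (the mapping form is an assumption, not the patent's method):

        def normalize(score, new_equiv_threshold, preset_threshold):
            """Monotonic [0, 1] -> [0, 1] map sending new_equiv_threshold to
            preset_threshold, so preset application thresholds stay valid."""
            if score <= new_equiv_threshold:
                return score * preset_threshold / new_equiv_threshold
            return preset_threshold + (score - new_equiv_threshold) * (
                (1.0 - preset_threshold) / (1.0 - new_equiv_threshold))

        # After an engine update, a score of 0.55 gives the correct-accept /
        # false-accept profile the preset threshold 0.70 used to give.
        print(normalize(0.55, 0.55, 0.70))  # 0.7 (maps onto the preset threshold)
        print(normalize(0.90, 0.55, 0.70))  # ~0.93 (stays above the threshold)
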

YouTube

MKX5123 Group16 28042017

  • Duration:
    9m 27s

The Microstructure Exchange: Chaojun Wang

Chaojun Wang (Wharton) presents his paper "Information Chasing versus ...

  • Duration:
    1h 30m 40s

ChaoJun in 2010 Canada Morningside Music Bridge

Rachmaninov Piano Concerto No. 2, 3rd mov.

  • Duration:
    12m 49s

Kate Liu Andante Spianato and Grande Polonai...

Narodowy Instytut Fryderyka Chopina/ The Fryderyk Chopin Institute All...

  • Duration:
    15m 6s

Classical Chinese Landscape Painting Lesson 4...

This is a Zoom recording of the live streamed lesson with better sound...

  • Duration:
    2h 8m 22s

J.S. Bach, Prelude BWV 996 performed by Fangf...

Indiana University Jacobs School of Music Audio Engineering Society Of...

  • Duration:
    2m 2s
