Speech Models Generated Using Competitive Training, Asymmetric Training, And Data Boosting
Xiaodong He - Issaquah WA, US Jian Wu - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/06
US Classification:
704/243, 704/245
Abstract:
Speech models are trained using one or more of three different training systems. They include competitive training, which reduces the distance between a recognized result and the true result; data boosting, which divides and weights training data; and asymmetric training, which trains different model components differently.
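As a rough illustration of the competitive-training idea (driving down a distance between the recognized result and the true result), the sketch below computes a word-level edit distance between a recognizer hypothesis and its reference transcript; the function names and the choice of Levenshtein distance are illustrative assumptions, not the patent's actual formulation.

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between a hypothesis and a reference."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deletions
    for j in range(n + 1):
        d[0][j] = j  # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # delete
                          d[i][j - 1] + 1,       # insert
                          d[i - 1][j - 1] + cost)  # substitute
    return d[m][n]

def competitive_loss(hypothesis, reference):
    """A competitive-training criterion in this spirit would update model
    parameters to drive this recognized-vs-true distance toward zero."""
    return edit_distance(hypothesis.split(), reference.split())
```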
Adaptation Of Language Models And Context Free Grammar In Speech Recognition
Architecture is disclosed herewith for minimizing an empirical error rate by discriminative adaptation of a statistical language model in a dictation and/or dialog application. The architecture allows assignment of an improved weighting value to each term or phrase to reduce empirical error. Empirical errors are minimized, whether or not a user provides correction results, based on criteria for discriminatively adapting the user language model (LM)/context-free grammar (CFG) to the target. Moreover, algorithms are provided for the training and adaptation of LM/CFG parameters for criteria optimization.
Dong Yu - Kirkland WA, US Li Deng - Redmond WA, US Yifan Gong - Sammamish WA, US Jian Wu - Redmond WA, US Alejandro Acero - Bellevue WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/20
US Classification:
704/233
Abstract:
Described is noise reduction technology, generally for speech input, in which a noise-suppression-related gain value for a frame is determined based upon the noise level associated with that frame in addition to the signal-to-noise ratios (SNRs). In one implementation, a noise reduction mechanism is based upon minimum mean square error, Mel-frequency cepstra noise reduction technology. A high gain value (e.g., one) is set to accomplish little or no noise suppression when the noise level is below a low threshold, and a low gain value is set or computed to accomplish large noise suppression when the noise level is above a high threshold. A noise-power-dependent function, e.g., a log-linear interpolation, is used to compute the gain between the thresholds. Smoothing may be performed by modifying the gain value based upon a prior frame's gain value. Also described is learning of the parameters used in noise reduction via a step-adaptive discriminative learning algorithm.
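The noise-level-dependent gain rule described in the abstract can be sketched as follows; the specific thresholds, gain values, smoothing factor, and function names are illustrative assumptions, not the patent's actual parameters.

```python
import math

def noise_dependent_gain(noise_db, low_db=-60.0, high_db=-20.0,
                         high_gain=1.0, low_gain=0.1):
    """Illustrative gain rule: near-unity gain (little or no suppression)
    below a low noise threshold, a small gain (strong suppression) above a
    high threshold, and log-linear interpolation in between. All numeric
    defaults are made-up placeholders."""
    if noise_db <= low_db:
        return high_gain
    if noise_db >= high_db:
        return low_gain
    # Log-linear interpolation: log(gain) varies linearly with noise level.
    t = (noise_db - low_db) / (high_db - low_db)
    log_gain = (1 - t) * math.log(high_gain) + t * math.log(low_gain)
    return math.exp(log_gain)

def smooth_gain(gain, prev_gain, alpha=0.9):
    """Smooth the per-frame gain using the prior frame's gain value."""
    return alpha * prev_gain + (1 - alpha) * gain
```

For example, a frame whose noise level sits exactly midway between the thresholds receives the geometric mean of the two gain endpoints, which is the characteristic behavior of log-linear interpolation.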
Adapting A Compressed Model For Use In Speech Recognition
Jinyu Li - Redmond WA, US Li Deng - Redmond WA, US Dong Yu - Kirkland WA, US Jian Wu - Sammamish WA, US Yifan Gong - Sammamish WA, US Alejandro Acero - Bellevue WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/20
US Classification:
704/233, 704/226, 704/256
Abstract:
A speech recognition system includes a receiver component that receives a distorted speech utterance. The speech recognition system also includes an adaptor component that selectively adapts parameters of a compressed model used to recognize at least a portion of the distorted speech utterance, wherein the adaptor component selectively adapts the parameters of the compressed model based at least in part upon the received distorted speech utterance.
Methods For Modulating Mannose Content Of Recombinant Proteins
Jian Wu - Lynnwood WA, US Nicole Le - Camarillo CA, US Michael De La Cruz - Camarillo CA, US Gregory Flynn - Thousand Oaks CA, US
Assignee:
Amgen Inc. - Thousand Oaks CA
International Classification:
A61K 39/00 C07K 16/00 C12P 21/04
US Classification:
424/142.1, 530/388.15, 435/70.3
Abstract:
The present invention relates to methods of modulating (e.g., reducing) the mannose content, particularly the high-mannose content, of recombinant glycoproteins.
Speech Models Generated Using Competitive Training, Asymmetric Training, And Data Boosting
Xiaodong He - Issaquah WA, US Jian Wu - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/06 G10L 15/00
US Classification:
704/243, 704/255
Abstract:
Speech models are trained using one or more of three different training systems. They include competitive training, which reduces the distance between a recognized result and the true result; data boosting, which divides and weights training data; and asymmetric training, which trains different model components differently.
Work Experience:
Core-tech Inc, Newark, NJ; Oct 2011 to Sep 2012; Accounting Clerk
First Bagel, Inc, Union, NJ; Jun 2006 to May 2011; Bookkeeper (P/T)
Future Textiles International Trade, Inc, Jamesburg, NJ; Feb 2005 to May 2006; Assistant Project Management
Continental Auto Parts, LLC, Newark, NJ; Dec 2003 to Feb 2005; Jr. Accountant
Education:
State University of New York at Old Westbury Old Westbury, NY Dec 2002 Bachelor of Science in Accounting