Xiaodong He - Issaquah WA, US Jian Wu - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/06
US Classification:
704/243, 704/245
Abstract:
Speech models are trained using one or more of three different training systems: competitive training, which reduces the distance between a recognized result and the true result; data boosting, which divides and weights training data; and asymmetric training, which trains different model components differently.
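The competitive-training and data-boosting ideas in the abstract above can be sketched roughly as follows. This is an illustrative assumption, not the patented method; the names `competitive_loss` and `boost_weights` and all parameter values are hypothetical:

```python
def competitive_loss(score_true, score_recognized, margin=1.0):
    # Competitive training: penalize the margin by which an incorrect
    # recognized hypothesis scores close to (or above) the true result.
    return max(0.0, margin + score_recognized - score_true)

def boost_weights(weights, errors, beta=2.0):
    # Data boosting: divide the data by whether the model errs on it,
    # upweight the misrecognized utterances, and renormalize.
    raw = [w * (beta if err else 1.0) for w, err in zip(weights, errors)]
    total = sum(raw)
    return [r / total for r in raw]
```

Minimizing `competitive_loss` pushes the true result's score above the recognized competitor's; reweighting with `boost_weights` focuses subsequent training on the utterances the current model gets wrong.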
Adaptation Of Language Models And Context Free Grammar In Speech Recognition
Architecture is disclosed herein for minimizing an empirical error rate through discriminative adaptation of a statistical language model in a dictation and/or dialog application. The architecture allows assignment of an improved weighting value to each term or phrase to reduce empirical error. Empirical errors are minimized, whether or not a user provides correction results, based on criteria for discriminatively adapting the user language model (LM)/context-free grammar (CFG) to the target. Moreover, algorithms are provided for the training and adaptation of LM/CFG parameters under these optimization criteria.
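One simple form of the per-term discriminative weighting described above is a perceptron-style update; this is a minimal sketch under that assumption, not the patent's algorithm, and the name `adapt_term_weights` is hypothetical:

```python
def adapt_term_weights(weights, true_terms, recognized_terms, eta=0.1):
    # Raise the weight of each term in the correct (true) result and
    # lower the weight of each term in the erroneous recognized result,
    # reducing empirical error on this utterance.
    w = dict(weights)
    for term in true_terms:
        w[term] = w.get(term, 0.0) + eta
    for term in recognized_terms:
        w[term] = w.get(term, 0.0) - eta
    return w
```

Terms appearing in both results cancel, so only the terms that distinguish the true result from the misrecognition are reweighted.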
Dong Yu - Kirkland WA, US Li Deng - Redmond WA, US Yifan Gong - Sammamish WA, US Jian Wu - Redmond WA, US Alejandro Acero - Bellevue WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/20
US Classification:
704/233
Abstract:
Described is noise reduction technology, generally for speech input, in which a noise-suppression-related gain value for each frame is determined based upon the noise level associated with that frame in addition to the signal-to-noise ratios (SNRs). In one implementation, a noise reduction mechanism is based upon minimum mean square error, Mel-frequency cepstra noise reduction technology. A high gain value (e.g., one) is set to accomplish little or no noise suppression when the noise level is below a low threshold, and a low gain value is set or computed to accomplish large noise suppression above a high noise threshold. A noise-power-dependent function, e.g., a log-linear interpolation, is used to compute the gain between the thresholds. Smoothing may be performed by modifying the gain value based upon a prior frame's gain value. Also described is learning of the parameters used in noise reduction via a step-adaptive discriminative learning algorithm.
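The thresholded, log-linearly interpolated gain with prior-frame smoothing described above can be sketched as follows. The function name, thresholds, and smoothing factor are illustrative assumptions, not values from the patent:

```python
import math

def frame_gain(noise_db, low_db=20.0, high_db=60.0, g_min=0.1,
               prev_gain=None, alpha=0.5):
    # Gain of 1.0 (little or no suppression) below the low noise
    # threshold, g_min (large suppression) above the high threshold,
    # and a log-linear interpolation in between.
    if noise_db <= low_db:
        gain = 1.0
    elif noise_db >= high_db:
        gain = g_min
    else:
        t = (noise_db - low_db) / (high_db - low_db)
        gain = math.exp(t * math.log(g_min))  # log-linear: 1.0 -> g_min
    # Optional smoothing against the prior frame's gain value.
    if prev_gain is not None:
        gain = alpha * prev_gain + (1.0 - alpha) * gain
    return gain
```

At the midpoint between the thresholds the interpolated gain is the geometric mean of 1.0 and `g_min`, which is the defining property of log-linear interpolation.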
Adapting A Compressed Model For Use In Speech Recognition
Jinyu Li - Redmond WA, US Li Deng - Redmond WA, US Dong Yu - Kirkland WA, US Jian Wu - Sammamish WA, US Yifan Gong - Sammamish WA, US Alejandro Acero - Bellevue WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/20
US Classification:
704/233, 704/226, 704/256
Abstract:
A speech recognition system includes a receiver component that receives a distorted speech utterance. The speech recognition system also includes an adaptor component that selectively adapts parameters of a compressed model used to recognize at least a portion of the distorted speech utterance, wherein the adaptor component selectively adapts the parameters of the compressed model based at least in part upon the received distorted speech utterance.
Methods For Modulating Mannose Content Of Recombinant Proteins
Jian Wu - Lynnwood WA, US Nicole Le - Camarillo CA, US Michael De La Cruz - Camarillo CA, US Gregory Flynn - Thousand Oaks CA, US
Assignee:
Amgen Inc. - Thousand Oaks CA
International Classification:
A61K 39/00 C07K 16/00 C12P 21/04
US Classification:
424/142.1, 530/388.15, 435/70.3
Abstract:
The present invention relates to methods of modulating (e.g., reducing) the mannose content, particularly high-mannose content, of recombinant glycoproteins.
Speech Models Generated Using Competitive Training, Asymmetric Training, And Data Boosting
Xiaodong He - Issaquah WA, US Jian Wu - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G10L 15/06 G10L 15/00
US Classification:
704/243, 704/255
Abstract:
Speech models are trained using one or more of three different training systems: competitive training, which reduces the distance between a recognized result and the true result; data boosting, which divides and weights training data; and asymmetric training, which trains different model components differently.
Jian Wu - Seattle WA Hugh West - Seattle WA Terry M. Grant - Auburn WA
Assignee:
Weyerhaeuser Company - Federal Way WA
International Classification:
D21H 17/33 D21H 17/63 D21H 17/68 D21H 17/69 D21H 17/55
US Classification:
162/164.1
Abstract:
The invention relates to cellulose fluff pulp products that are debondable into fluff with markedly lower energy input, to a process for making the products, and to absorbent products using the fluff. Most of the pulp products show no reduction in liquid absorbency rate from that of untreated fiber and significantly higher rates than pulps treated with the usual debonding agents. The products are made by adhering fine non-cellulosic particles to the fiber surfaces using a retention aid. The fiber is preferably treated with the retention aid in an aqueous suspension for a sufficient time so that the retention aid is substantively bonded, with little or none left free in the water. The fine particulate additive is then added and becomes attached and uniformly distributed over the fiber surfaces, with very little particle agglomeration occurring. The fiber is usually not refined, or only very lightly refined, before sheeting. However, it may be significantly refined to produce a product having a very high surface area.