Inventors:
- Redmond WA, US
- Christopher Edward Frederick GEDDES - Kirkland WA, US
International Classification:
G10L 25/63; H04L 29/06; G10L 15/08; A63F 13/215
Abstract:
Techniques for adjusting user experiences for participants of a multiuser session by deploying vocal-characteristic models to analyze audio streams received in association with the participants. The vocal-characteristic models are used to identify emotional state indicators corresponding to certain vocal properties being exhibited by individual participants. Based on the identified emotional state indicators, probability scores are generated that indicate a likelihood that individual participants are experiencing a predefined emotional state. For example, a specific participant's voice may be continuously received and analyzed using a vocal-characteristic model designed to detect vocal properties that are consistent with a predefined emotional state. Probability scores may be generated based on how strongly the detected vocal properties correlate with the vocal-characteristic model. Responsive to the probability score that results from the vocal-characteristic model exceeding a threshold score, some remedial action may be performed with respect to the specific participant who is experiencing the predefined emotional state.
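The abstract describes a threshold-based monitoring flow: analyze audio frames with a vocal-characteristic model, derive emotional state indicators, combine them into a probability score, and trigger a remedial action when the score exceeds a threshold. The Python sketch below illustrates that flow only; it is not drawn from the filing, and every name (EmotionalStateIndicator, probability_score, monitor_participant, the toy model, and the scoring rule) is a hypothetical stand-in.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class EmotionalStateIndicator:
    """A vocal property detected in an audio frame (e.g., elevated pitch)."""
    name: str
    strength: float  # 0.0-1.0 correlation with the modeled vocal property


def probability_score(indicators: Sequence[EmotionalStateIndicator]) -> float:
    """Combine indicator strengths into a likelihood that the participant is
    experiencing the predefined emotional state (simple average as a stand-in)."""
    if not indicators:
        return 0.0
    return sum(i.strength for i in indicators) / len(indicators)


def monitor_participant(
    audio_frames: Sequence[bytes],
    vocal_model: Callable[[bytes], Sequence[EmotionalStateIndicator]],
    threshold: float,
    remedial_action: Callable[[], None],
) -> None:
    """Analyze a participant's audio stream frame by frame; when the
    probability score exceeds the threshold, perform the remedial action."""
    for frame in audio_frames:
        indicators = vocal_model(frame)        # emotional state indicators
        score = probability_score(indicators)  # likelihood of the state
        if score > threshold:
            remedial_action()                  # e.g., mute or warn the participant
            break


if __name__ == "__main__":
    # Toy model that flags every frame with a fixed indicator strength.
    def toy_model(frame: bytes) -> list[EmotionalStateIndicator]:
        return [EmotionalStateIndicator("elevated_pitch", 0.9)]

    monitor_participant(
        audio_frames=[b"frame-1", b"frame-2"],
        vocal_model=toy_model,
        threshold=0.8,
        remedial_action=lambda: print("Remedial action: muting participant"),
    )
```

In this sketch the remedial action is passed in as a callback, reflecting the abstract's open-ended "some remedial action"; an actual system could substitute muting, warning, or moderating the flagged participant.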