- Menlo Park CA, US; Chengxuan Bai - San Mateo CA, US; Jiemin Zhang - Sunnyvale CA, US; Vincent Charles Cheung - San Carlos CA, US; Andrew Pitcher Thompson - Tarrytown NY, US; Maheen Sohail - San Francisco CA, US; Tali Zvi - San Carlos CA, US
The present embodiments relate to automated memory creation and retrieval from moment content items. In some implementations, the automated memory creation and retrieval system can obtain moment content items from user-designated sources with a single user perspective or multiple user perspectives. The moment content items can be assigned tags and arranged in chronological order. The arranged moment content items can be clustered into memory content items based on clustering conditions. Once memory content items are created, they can be arranged into a memory hierarchy made up of connected nodes. In some implementations, the memory content items are also linked together based on similarity in various dimensions in a memory graph. The automated memory creation and retrieval system can receive search criteria for memories from a user interface and provide the user with memories from matched nodes in the memory hierarchy or linked memories in the memory graph.
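A minimal sketch of one way such time-and-tag clustering could work, assuming hypothetical Moment and Memory structures and invented thresholds (max_gap, min_shared_tags); the abstract does not specify the actual clustering conditions:

```python
from dataclasses import dataclass, field

@dataclass
class Moment:
    timestamp: float   # capture time, e.g. Unix seconds
    tags: set          # tags assigned to the moment content item

@dataclass
class Memory:
    moments: list = field(default_factory=list)

def cluster_moments(moments, max_gap=3600, min_shared_tags=1):
    """Group chronologically ordered moments into memory content items.

    Hypothetical clustering conditions: a moment joins the current memory
    when it is close in time to the previous moment and shares tags with it.
    """
    memories = []
    current = None
    for moment in sorted(moments, key=lambda m: m.timestamp):
        previous = current.moments[-1] if current else None
        if previous and (moment.timestamp - previous.timestamp <= max_gap
                         and len(moment.tags & previous.tags) >= min_shared_tags):
            current.moments.append(moment)
        else:
            current = Memory([moment])
            memories.append(current)
    return memories

# Example: moments captured months apart end up in separate memories.
memories = cluster_moments([
    Moment(1000.0, {"beach", "family"}),
    Moment(1500.0, {"beach"}),
    Moment(9_000_000.0, {"hiking"}),
])
```

A memory hierarchy or memory graph, as described above, could then be built by linking the resulting Memory objects along shared tags, time ranges, or other similarity dimensions.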
Modifying Capture Of Video Data By An Image Capture Device Based On Video Data Previously Captured By The Image Capture Device
Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a client device modifies captured video data based on users identified in the video data. For example, the client device changes parameters of the image capture device to more prominently display a user identified in the video data and may further change parameters of the image capture device based on gestures or movement of the user identified in the video data. The client device may apply multiple models to captured video data to modify the captured video data or subsequent capturing of video data by the image capture device.
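As an illustration of the parameter-adjustment idea, the hypothetical function below computes a crop region that frames an identified user more prominently; the bounding box is assumed to come from a separate identification model, and reframe_on_user and its margin parameter are invented names rather than details from the filing:

```python
def reframe_on_user(frame_width, frame_height, user_box, margin=0.2):
    """Given a detected user's bounding box (x, y, w, h) in a captured frame,
    compute a crop/zoom region that displays that user more prominently.
    The detection itself would come from a separate identification model."""
    x, y, w, h = user_box
    pad_w, pad_h = w * margin, h * margin
    left = max(0, x - pad_w)
    top = max(0, y - pad_h)
    right = min(frame_width, x + w + pad_w)
    bottom = min(frame_height, y + h + pad_h)
    return left, top, right - left, bottom - top
```

In a running system, the returned region would feed back into the image capture device's parameters (zoom, pan, crop) for subsequent frames, and could be recomputed as the user moves or gestures.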
Automated Conversation Content Items From Natural Language
A conversation augmentation system can automatically augment a conversation with content items based on natural language from the conversation. The conversation augmentation system can select content items to add to the conversation based on determined user “intents” generated using machine learning models. The conversation augmentation system can generate intents for natural language from various sources, such as video chats, audio conversations, textual conversations, virtual reality environments, etc. The conversation augmentation system can identify constraints for mapping the intents to content items or context signals for selecting appropriate content items. In various implementations, the conversation augmentation system can add selected content items to a storyline the conversation describes or can augment a platform in which an unstructured conversation is occurring.
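A rough, self-contained sketch of the intent-to-content mapping described above; the keyword rules and the names INTENT_RULES, detect_intent, and select_content are hypothetical stand-ins for the machine learning models and content stores the abstract refers to:

```python
# Hypothetical intent triggers; a production system would use trained
# machine learning models rather than keyword rules.
INTENT_RULES = {
    "vacation": "travel_photos",
    "dinner": "restaurant_suggestions",
    "birthday": "celebration_gifs",
}

def detect_intent(utterance):
    """Return the first hypothetical intent whose trigger word appears."""
    text = utterance.lower()
    for trigger, intent in INTENT_RULES.items():
        if trigger in text:
            return intent
    return None

def select_content(intent, content_items, context):
    """Filter content items matching the intent, then prefer items that
    match context signals such as the conversation participants."""
    matches = [item for item in content_items if item.get("intent") == intent]
    participants = set(context.get("participants", []))
    matches.sort(key=lambda item: len(participants & set(item.get("people", []))),
                 reverse=True)
    return matches
```

The selected items could then be inserted into the storyline or the platform hosting the unstructured conversation, as the abstract describes.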
Private Collaboration Spaces For Computing Systems
This disclosure describes a computing system that automatically detects users in visual proximity and adds the users to a private collaboration space enabling the users to share digital content. In one example, the computing system includes a video processing engine configured to detect, from first image data representative of a first physical environment that includes a second user, the second user, wherein the first image data is captured by an image capture system of a head-mounted display (HMD) worn by a first user. The computing system also includes a collaboration application configured to add, in response to detection of the second user, the second user to a set of users associated with a private collaboration space in which the set of users access shared digital content, wherein the set of users includes the first user.
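A toy sketch of the add-on-detection behavior, assuming a hypothetical PrivateCollaborationSpace class and an on_frame hook fed by the video processing engine's detections; none of these names come from the disclosure:

```python
class PrivateCollaborationSpace:
    """Minimal sketch: a set of users who can access shared digital content."""
    def __init__(self, owner):
        self.users = {owner}
        self.shared_content = []

    def add_user(self, user):
        self.users.add(user)

    def share(self, user, item):
        # Only members of the space may contribute shared content.
        if user in self.users:
            self.shared_content.append(item)

def on_frame(space, detected_users):
    """detected_users stands in for the output of the video processing engine
    that recognizes people in image data captured by the first user's HMD."""
    for user in detected_users:
        space.add_user(user)
```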
Modifying Capture Of Video Data By An Image Capture Device Based On Video Data Previously Captured By The Image Capture Device
Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a client device modifies captured video data based on users identified in the video data. For example, the client device changes parameters of the image capture device to more prominently display a user identified in the video data and may further change parameters of the image capture device based on gestures or movement of the user identified in the video data. The client device may apply multiple models to captured video data to modify the captured video data or subsequent capturing of video data by the image capture device.
Synchronizing Presentation Of Content Presented By Multiple Client Devices
- Menlo Park CA, US; Olivier Charles Gratry - Mill Valley CA, US; Vincent Charles Cheung - San Carlos CA, US; Connie Yeewei Ho - San Jose CA, US
International Classification:
H04N 21/43 H04N 21/24
Abstract:
Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system are associated with client devices that exchange information captured by their respective image capture devices. When exchanging information, presentation of content to users associated with different client devices may be initially synchronized across the client devices. To synchronize content presentation, a client device initiating presentation of the content transmits a request identifying the content and an initial time to other client devices. The initial time is greater than a maximum return time or latency in a network coupling the client devices and the online system, measured from the time when the request is transmitted. A client device determined to be out of synchronization with one or more other client devices receives a command to modify a rate at which the content is presented to reestablish synchronization.
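The timing and rate-correction logic lends itself to a short sketch; schedule_start, correction_rate, and the specific safety margin and catch-up window values below are assumptions for illustration, not details from the filing:

```python
import time

def schedule_start(max_round_trip_latency_s, safety_margin_s=0.5):
    """Pick an initial presentation time far enough in the future that every
    client can receive the request before playback is supposed to begin."""
    return time.time() + max_round_trip_latency_s + safety_margin_s

def correction_rate(local_position_s, reference_position_s, catch_up_window_s=5.0):
    """Playback-rate multiplier that slews a drifted client back into sync
    over catch_up_window_s seconds instead of jumping abruptly."""
    drift = reference_position_s - local_position_s
    return 1.0 + drift / catch_up_window_s
```

For example, a client found to be 0.5 s behind the reference position would briefly play at a 1.1x rate until the drift is absorbed.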
Modifying Capture Of Video Data By An Image Capture Device Based On Video Data Previously Captured By The Image Capture Device
- Menlo Park CA, US; Vincent Charles Cheung - San Carlos CA, US
International Classification:
H04N 5/232 H04N 5/262 G06T 7/90 G06F 17/30
Abstract:
Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a client device modifies captured video data based on users identified in the video data. For example, the client device changes parameters of the image capture device to more prominently display a user identified in the video data and may further change parameters of the image capture device based on gestures or movement of the user identified in the video data. The client device may apply multiple models to captured video data to modify the captured video data or subsequent capturing of video data by the image capture device.
Modifying Capture Of Video Data By An Image Capture Device Based On Video Data Previously Captured By The Image Capture Device
Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a client device modifies captured video data based on users identified in the video data. For example, the client device changes parameters of the image capture device to more prominently display a user identified in the video data and may further change parameters of the image capture device based on gestures or movement of the user identified in the video data. The client device may apply multiple models to captured video data to modify the captured video data or subsequent capturing of video data by the image capture device.
Facebook
Software Engineer
University of Toronto Sep 2003 - May 2011
Computer Vision and Machine Learning Research Assistant
Loupe Shape Collage Sep 2003 - May 2011
Founder and Chief Executive Officer
University of Toronto Sep 2004 - May 2008
Computational Biology Research Assistant
Vincentcheung.ca Sep 2004 - May 2008
President
Education:
University of Toronto 2003 - 2013
Doctorates, Doctor of Philosophy, Computer Engineering
University of Manitoba 1999 - 2003
Bachelors, Bachelor of Science, Computer Engineering
Shaftesbury High School
Skills:
Software Development, C++, Algorithms, Machine Learning, Java, PHP, Entrepreneurship, JavaScript, Computer Vision, SQL, MATLAB, CSS, HTML, Linux, Image Processing, Web Development, Software Engineering, SEO, MySQL, jQuery, AJAX, Computer Science, Public Speaking, Mobile Applications, Windows, Artificial Intelligence, Pattern Recognition