New York Methodist Hospital Park Slope Infusion Center, 343 4th Ave, Brooklyn, NY 11215, (718) 499-2169 (phone), (718) 499-3218 (fax)
New York Methodist Hospital Hematology Oncology, 501 6th St, Apt 3C, Brooklyn, NY 11215, (718) 780-5541 (phone), (718) 780-5545 (fax)
Education:
Medical School: University of Iowa Carver College of Medicine, Graduated: 1977
Procedures:
Bone Marrow Biopsy, Bone Marrow or Stem Cell Transplant
Conditions:
Sickle-Cell Disease, Anemia, Hemolytic Anemia, Hodgkin's Lymphoma, Iron Deficiency Anemia
Languages:
English, Spanish
Description:
Dr. Cook graduated from the University of Iowa Carver College of Medicine in 1977. He works in Brooklyn, NY and 1 other location and specializes in Hematology/Oncology. Dr. Cook is affiliated with New York Methodist Hospital.
Wind River Radiology PC, 1320 Bishop Randall Dr, Lander, WY 82520, (307) 335-6451 (phone), (307) 332-4276 (fax)
Education:
Medical School: Duke University School of Medicine, Graduated: 1988
Languages:
English
Description:
Dr. Cook graduated from the Duke University School of Medicine in 1988. She works in Lander, WY and specializes in Diagnostic Radiology and Radiology. Dr. Cook is affiliated with SageWest Health Care Lander.
Wikipedia References
Perry R. Cook
US Patents
System And Method For Communication Between Mobile Devices Using Digital/Acoustic Techniques
Abstract:
Techniques have been developed for transmitting and receiving information conveyed through the air from one portable device to another as a generally unperceivable coding within an otherwise recognizable acoustic signal. For example, in some embodiments in accordance with the present invention(s), information is acoustically communicated from a first handheld device toward a second by encoding the information in a signal that, when converted into acoustic energy at an acoustic transducer of the first handheld device, is characterized in that the acoustic energy is discernable to a human ear yet the encoding of the information therein is generally not perceivable by the human. The acoustic energy is transmitted from the acoustic transducer of the first handheld device toward the second handheld device across an air gap that constitutes substantially the entirety of the distance between the devices. Acoustic energy received at the second handheld device may then be processed using signal processing techniques tailored to detection of the particular information encodings employed.
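The abstract above describes hiding a generally imperceptible data coding inside an otherwise recognizable acoustic signal. As a rough illustration only, not the patented method, the sketch below mixes low-level near-ultrasonic FSK tone bursts into an audible carrier; the sample rate, symbol length, frequencies, and gain are assumed values chosen for the example.

# Minimal sketch (not the patented method): encode bytes as short near-ultrasonic
# FSK tone bursts mixed quietly into an audible carrier signal. All parameters
# (frequencies, symbol length, gains) are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44100
SYMBOL_SECONDS = 0.05                     # duration of each encoded bit
FREQ_ZERO, FREQ_ONE = 18000.0, 18500.0    # near the edge of typical hearing
DATA_GAIN = 0.02                          # keep the coding far below the carrier level

def encode_bits(data: bytes) -> np.ndarray:
    """Return an FSK waveform carrying `data`, one tone burst per bit."""
    n = int(SYMBOL_SECONDS * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    bursts = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            freq = FREQ_ONE if bit else FREQ_ZERO
            bursts.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(bursts)

def embed(carrier: np.ndarray, data: bytes) -> np.ndarray:
    """Mix the low-level data tones into an audible carrier of equal or greater length."""
    coded = DATA_GAIN * encode_bits(data)
    out = carrier.copy()
    out[: len(coded)] += coded
    return out

# Example: hide the ASCII bytes of "hi" inside one second of a 440 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
carrier = 0.5 * np.sin(2 * np.pi * 440.0 * t)
mixed = embed(carrier, b"hi")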
Pitch-Correction Of Vocal Performance In Accord With Score-Coded Harmonies
Perry R. Cook - Applegate OR, US; Ari Lazier - San Francisco CA, US; Tom Lieber - San Francisco CA, US; Turner E. Kirk - Mountain View CA, US
International Classification:
G10L 11/04
US Classification:
704/207, 704/E11.006
Abstract:
Despite many practical limitations imposed by mobile device platforms and application execution environments, vocal musical performances may be captured and continuously pitch-corrected for mixing and rendering with backing tracks in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured on mobile devices in the context of a karaoke-style presentation of lyrics in correspondence with audible renderings of a backing track. Such performances can be pitch-corrected in real-time at a portable computing device (such as a mobile phone, personal digital assistant, laptop computer, notebook computer, pad-type computer or netbook) in accord with pitch correction settings. In some cases, pitch correction settings include a score-coded melody and/or harmonies supplied with, or for association with, the lyrics and backing tracks. Harmony notes or chords may be coded as explicit targets or relative to the score-coded melody or even actual pitches sounded by a vocalist, if desired.
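As a loose illustration of the score-coded target idea in the abstract above, and not Smule's actual pitch-correction implementation, the sketch below snaps a detected vocal pitch to the nearest note in an assumed set of score-coded MIDI targets.

# Minimal sketch (illustrative only): snap a detected vocal pitch to the nearest
# score-coded target note. The MIDI target values are assumptions for the example.
import math

def hz_to_midi(f_hz: float) -> float:
    return 69.0 + 12.0 * math.log2(f_hz / 440.0)

def midi_to_hz(note: float) -> float:
    return 440.0 * 2.0 ** ((note - 69.0) / 12.0)

def correct_pitch(detected_hz: float, score_targets_midi: list[int]) -> float:
    """Return the frequency of the score-coded target nearest to the detected pitch."""
    detected_midi = hz_to_midi(detected_hz)
    nearest = min(score_targets_midi, key=lambda n: abs(n - detected_midi))
    return midi_to_hz(nearest)

# Example: a slightly flat A4 (435 Hz) corrected against targets G4, A4, C5.
print(correct_pitch(435.0, [67, 69, 72]))   # -> 440.0 (MIDI 69, A4)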
Coordinating And Mixing Vocals Captured From Geographically Distributed Performers
Perry R. Cook - Applegate OR, US; Ari Lazier - San Francisco CA, US; Tom Lieber - San Francisco CA, US; Turner E. Kirk - Mountain View CA, US
International Classification:
G10L 11/04
US Classification:
704/207, 704/E11.006
Abstract:
Despite many practical limitations imposed by mobile device platforms and application execution environments, vocal musical performances may be captured and continuously pitch-corrected for mixing and rendering with backing tracks in ways that create compelling user experiences. Based on the techniques described herein, even mere amateurs are encouraged to share with friends and family or to collaborate and contribute vocal performances as part of virtual “glee clubs.” In some implementations, these interactions are facilitated through social network- and/or eMail-mediated sharing of performances and invitations to join in a group performance. Using uploaded vocals captured at clients such as a mobile device, a content server (or service) can mediate such virtual glee clubs by manipulating and mixing the uploaded vocal performances of multiple contributing vocalists.
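To make the server-side mixing idea in the abstract concrete, here is a minimal sketch that sums several uploaded vocal takes with a backing track and normalizes the result. The gain values and the use of plain NumPy arrays are assumptions for illustration, not the content server's actual pipeline.

# Minimal sketch (illustrative only): mix uploaded vocal takes with a backing track.
import numpy as np

def mix_performance(backing: np.ndarray,
                    vocals: list[np.ndarray],
                    vocal_gain: float = 0.7,
                    backing_gain: float = 1.0) -> np.ndarray:
    """Sum backing + vocals (all mono, same sample rate), then normalize to avoid clipping."""
    length = max(len(backing), *(len(v) for v in vocals))
    mix = np.zeros(length)
    mix[: len(backing)] += backing_gain * backing
    for v in vocals:
        mix[: len(v)] += vocal_gain * v
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Example with synthetic signals standing in for uploaded takes.
backing = 0.3 * np.random.randn(44100)
takes = [0.1 * np.random.randn(44100) for _ in range(3)]
group_mix = mix_performance(backing, takes)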
Computational Techniques For Continuous Pitch Correction And Harmony Generation
Perry R. Cook - Applegate OR, US; Ari Lazier - San Francisco CA, US; Tom Lieber - San Francisco CA, US
International Classification:
G10L 11/04
US Classification:
704/207, 704/E11.006
Abstract:
Using signal processing techniques described herein, pitch detection and correction of a user's vocal performance can be performed continuously and in real-time with respect to the audible rendering of the backing track at the handheld or portable computing device. In some implementations, pitch detection builds on time-domain pitch correction techniques that employ average magnitude difference function (AMDF) or autocorrelation-based techniques together with zero-crossing and/or peak picking techniques to identify differences between pitch of a captured vocal signal and score-coded target pitches. Based on detected differences, pitch correction based on pitch synchronous overlap-add (PSOLA) and/or linear predictive coding (LPC) techniques allow captured vocals to be pitch shifted in real-time to “correct” notes in accord with pitch correction settings that code score-coded melody targets and harmonies.
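The AMDF approach mentioned in the abstract can be sketched briefly: pick the lag that minimizes the average magnitude difference of the frame against a shifted copy of itself, then convert that lag to a frequency. The frame size and search range below are assumed values, and the PSOLA/LPC correction stages are not shown.

# Minimal AMDF pitch-detection sketch (illustrative parameters, correction stages omitted).
import numpy as np

def amdf_pitch(frame: np.ndarray, sample_rate: int,
               f_min: float = 80.0, f_max: float = 1000.0) -> float:
    """Estimate the fundamental frequency of a mono frame via the average
    magnitude difference function: the lag minimizing mean |x[n] - x[n - lag]|."""
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    best_lag, best_val = lag_min, np.inf
    for lag in range(lag_min, min(lag_max, len(frame) - 1)):
        diff = np.mean(np.abs(frame[lag:] - frame[:-lag]))
        if diff < best_val:
            best_lag, best_val = lag, diff
    return sample_rate / best_lag

# Example: a 220 Hz sine should come back close to 220 Hz.
sr = 44100
t = np.arange(2048) / sr
print(round(amdf_pitch(np.sin(2 * np.pi * 220.0 * t), sr), 1))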
User-Generated Templates For Segmented Multimedia Performance
- San Francisco CA, US; Perry Raymond Cook - Jacksonville OR, US; David Adam Steinwedel - San Francisco CA, US
International Classification:
G11B 27/036; H04N 9/87
Abstract:
Disclosed herein are computer-implemented method, system, and computer-readable storage-medium embodiments for implementing user-generated templates for segmented multimedia performances. An embodiment includes at least one computer processor configured to transmit a first version of a content instance and corresponding metadata. The first version of the content instance may include a plurality of structural elements, with at least one structural element corresponding to at least part of the metadata. The first content instance may be transformed by a rendering engine triggered by the at least one computer processor.
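A rough data-model sketch of how structural elements might be tied to metadata in such a template follows; the class and field names are illustrative assumptions, not the claimed system.

# Purely illustrative data model for "structural elements" and their metadata.
from dataclasses import dataclass, field

@dataclass
class StructuralElement:
    name: str                 # e.g. "verse 1", "chorus"
    start_seconds: float
    end_seconds: float
    metadata: dict = field(default_factory=dict)   # part of the content metadata

@dataclass
class ContentTemplate:
    template_id: str
    elements: list[StructuralElement]

    def segment_order(self) -> list[str]:
        return [e.name for e in self.elements]

# Example: a simple user-generated template for a short duet excerpt.
template = ContentTemplate(
    template_id="duet-short",
    elements=[
        StructuralElement("chorus", 30.0, 45.0, {"singers": 2}),
        StructuralElement("verse 2", 45.0, 60.0, {"singers": 1}),
    ],
)
print(template.segment_order())   # ['chorus', 'verse 2']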
Audiovisual Capture And Sharing Framework With Coordinated, User-Selectable Audio And Video Effects Filters
- San Francisco CA, US; Perry R. COOK - Jacksonville OR, US; Mark T. GODFREY - Atlanta GA, US; Prerna GUPTA - Los Altos Hills CA, US; Nicholas M. KRUGE - San Francisco CA, US; Randal J. LEISTIKOW - Palo Alto CA, US; Alexander M.D. RAE - Atlanta GA, US; Ian S. SIMON - San Francisco CA, US
Abstract:
Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
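As a toy illustration of coordinated audio/video filter pairs, and not the product's actual filter set, the sketch below couples one audio effect and one video effect under a single user-selectable style.

# Illustrative sketch: a selected style applies one audio filter and one video
# filter together. The filter names and effects are assumptions.
import numpy as np

def audio_echo(samples: np.ndarray, delay: int = 4410, gain: float = 0.4) -> np.ndarray:
    """Add a single delayed, attenuated copy of the signal to itself."""
    out = samples.copy()
    out[delay:] += gain * samples[:-delay]
    return out

def video_sepia(frame: np.ndarray) -> np.ndarray:
    """Apply a simple sepia tint to an H x W x 3 float frame in [0, 1]."""
    tint = np.array([1.0, 0.85, 0.6])
    return np.clip(frame * tint, 0.0, 1.0)

# A "style" couples one audio effect with one video effect.
FILTER_PAIRS = {"vintage": (audio_echo, video_sepia)}

def apply_style(style: str, samples: np.ndarray, frame: np.ndarray):
    a_filter, v_filter = FILTER_PAIRS[style]
    return a_filter(samples), v_filter(frame)

# Example with synthetic audio and a random video frame.
styled_audio, styled_frame = apply_style(
    "vintage", np.random.randn(44100), np.random.rand(120, 160, 3))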
Coordinating And Mixing Audiovisual Content Captured From Geographically Distributed Performers
- San Francisco CA, US; Perry R. Cook - Jacksonville OR, US
International Classification:
H04N 5/04; G10L 21/013; G10L 13/033; G10H 1/36
Abstract:
Audiovisual performances, including vocal music, are captured and coordinated with those of other users in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects performance synchronized video of one or more of the contributors for visually prominent presentation. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists.
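The prominence-selection idea can be illustrated with a single assumed audio feature, frame RMS energy; the real system may rely on other computationally defined features, so this is a sketch only.

# Illustrative sketch: choose which contributor's video to feature in each
# window using frame RMS energy of that contributor's vocal audio.
import numpy as np

def prominent_vocalist(vocal_tracks: list[np.ndarray], window: int) -> list[int]:
    """For each window, return the index of the vocalist with the highest RMS energy."""
    length = min(len(v) for v in vocal_tracks)
    choices = []
    for start in range(0, length - window + 1, window):
        rms = [np.sqrt(np.mean(v[start:start + window] ** 2)) for v in vocal_tracks]
        choices.append(int(np.argmax(rms)))
    return choices

# Example: three synthetic vocal tracks, one-second selection windows at 44.1 kHz.
tracks = [np.random.randn(5 * 44100) * g for g in (0.2, 0.5, 0.3)]
print(prominent_vocalist(tracks, 44100))   # mostly index 1, the loudest track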
Template-Based Excerpting And Rendering Of Multimedia Performance
- San Francisco CA, US; Perry Raymond COOK - Jacksonville OR, US; David Adam STEINWEDEL - San Francisco CA, US; Ka Yee CHAN - Milpitas CA, US
International Classification:
G11B 27/036; H04N 9/87
Abstract:
Disclosed herein are computer-implemented method, system, and computer-readable storage-medium embodiments for implementing template-based excerpting and rendering of multimedia performances. An embodiment includes at least one computer processor configured to retrieve a first content instance and apply a template that transforms the first content instance. The first content instance may include a plurality of structural elements. The first content instance may be transformed by a rendering engine running on the at least one computer processor and/or transmitted to a content-playback device. An embodiment of transforming the first content instance includes trimming the content instance based on requirements provided by social media platforms.
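The platform-driven trimming step can be sketched as keeping whole structural segments until an assumed per-platform duration budget is exhausted; the platform names and limits below are invented for the example and are not taken from the patent or any platform's actual policy.

# Illustrative sketch: trim an excerpt to a platform's assumed duration limit.
PLATFORM_MAX_SECONDS = {"short_video_app": 60.0, "story_format": 15.0}

def trim_segments(segments: list[tuple[float, float]], platform: str) -> list[tuple[float, float]]:
    """Keep whole (start, end) segments, in order, until the platform's limit is reached."""
    budget = PLATFORM_MAX_SECONDS[platform]
    kept, used = [], 0.0
    for start, end in segments:
        duration = end - start
        if used + duration > budget:
            break
        kept.append((start, end))
        used += duration
    return kept

# Example: chorus + verse excerpts trimmed for a 15-second story format.
print(trim_segments([(30.0, 42.0), (42.0, 55.0)], "story_format"))  # [(30.0, 42.0)]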