A touch alphabet and communication system is provided. The communication system uses a predetermined set of touch gestures, such as fingertip touch patterns performable on keyless touch-sensitive surfaces, to express the user's desired communication. The touch-sensitive surface may be, for example, the touch screen display of a computer, tablet device, or cell phone, or a touch-sensitive pad. The finger touch patterns are based on a limited set of unique and ergonomically comfortable finger positions that may be performed in a limited area. The touch alphabet allows the user to communicate comprehensively without looking at the communication device, using just one hand or, in another implementation, two hands. Thus, a user can comfortably tap out an entire alphabet and related functions with one hand, without having to look at the user interface surface or hunt for individual keys.
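As a rough illustration of the chorded-entry idea described above, the sketch below maps simultaneous finger-touch patterns to characters. The finger labels, the particular pattern-to-letter assignments, and the helper names are hypothetical examples, not taken from the described system.

# Minimal sketch of a chord-style touch alphabet: each gesture is a set of
# finger positions touched at once, looked up in a pattern-to-character table.
# The finger labels and the mapping below are hypothetical examples only.

# Finger labels: "T" = thumb, "I" = index, "M" = middle, "R" = ring, "P" = pinky.
TOUCH_ALPHABET: dict[frozenset[str], str] = {
    frozenset({"I"}): "a",
    frozenset({"M"}): "e",
    frozenset({"I", "M"}): "n",
    frozenset({"T", "I"}): "t",
    frozenset({"T", "I", "M"}): " ",  # hypothetical word-separator chord
}


def decode_gesture(touched: set[str]) -> str | None:
    """Return the character for a simultaneous finger-touch pattern,
    or None if the pattern is not part of the alphabet."""
    return TOUCH_ALPHABET.get(frozenset(touched))


def decode_sequence(gestures: list[set[str]]) -> str:
    """Decode a sequence of touch gestures into text, skipping
    unrecognized patterns."""
    decoded = (decode_gesture(g) for g in gestures)
    return "".join(c for c in decoded if c is not None)


if __name__ == "__main__":
    # thumb+index, then index alone, then the separator chord -> "ta "
    print(repr(decode_sequence([{"T", "I"}, {"I"}, {"T", "I", "M"}])))

Because each pattern is a small, distinct finger combination rather than a spatial key location, the lookup works regardless of where on the surface the hand rests, which is what allows eyes-free, single-handed entry.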
An adaptive virtual keyboard is provided. In one implementation, a system senses fingertip contact on a sensing surface and generates a virtual keyboard on the sensing surface where a user's hand or hands are placed. The system automatically adjusts placement of the right-hand and left-hand parts of the virtual keyboard on the sensing surface in real time to follow drift of the user's hands out of expected ranges, and can distort the geometry of the virtual keyboard to accommodate characteristics of the user's fingertip typing style. The virtual keyboard is composed of approximately 6-20 touch zones, each touch zone representing one or more characters or functions. A disambiguator resolves the sequence of touch zones contacted by the user into the intended words, symbols, and control characters. The system can optionally display an image of the dynamically adapting virtual keyboard for visual targeting.
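The adaptive behavior and the disambiguation step can be illustrated roughly as follows: each contact is snapped to the nearest touch zone, the zone's center drifts toward recent contacts so the keyboard follows the hand, and a candidate word list is filtered by the touched zone sequence. The zone layout, drift rate, lexicon, and class and function names below are hypothetical illustrations under those assumptions, not the actual implementation.

# Minimal sketch of adaptive touch zones with a simple sequence disambiguator.
# Zone coordinates, the drift rate, and the tiny word list are made up for
# illustration.

from dataclasses import dataclass
import math


@dataclass
class Zone:
    label: str   # characters represented by this zone, e.g. "abc"
    x: float
    y: float

    def distance(self, px: float, py: float) -> float:
        return math.hypot(self.x - px, self.y - py)


class AdaptiveKeyboard:
    def __init__(self, zones: list[Zone], drift_rate: float = 0.2):
        self.zones = zones
        self.drift_rate = drift_rate  # how quickly zones follow the fingers

    def register_touch(self, px: float, py: float) -> Zone:
        """Assign a contact to its nearest zone and nudge that zone's
        center toward the contact point (real-time adaptation to hand drift)."""
        zone = min(self.zones, key=lambda z: z.distance(px, py))
        zone.x += self.drift_rate * (px - zone.x)
        zone.y += self.drift_rate * (py - zone.y)
        return zone


def disambiguate(zone_sequence: list[Zone], lexicon: list[str]) -> list[str]:
    """Return lexicon words consistent with the touched zone sequence:
    each character of a candidate word must belong to the matching zone."""
    def matches(word: str) -> bool:
        return len(word) == len(zone_sequence) and all(
            ch in zone.label for ch, zone in zip(word, zone_sequence)
        )
    return [w for w in lexicon if matches(w)]


if __name__ == "__main__":
    kb = AdaptiveKeyboard([Zone("abc", 0, 0), Zone("def", 40, 0), Zone("tuv", 80, 0)])
    touches = [(82, 3), (41, -2), (1, 5)]  # near the "tuv", "def", "abc" zones
    seq = [kb.register_touch(x, y) for x, y in touches]
    print(disambiguate(seq, ["tea", "tub", "cat"]))  # -> ['tea']

Because each of the roughly 6-20 zones stands for several characters, a per-zone exact hit is unnecessary; the disambiguation step over the zone sequence is what recovers the intended word, which is why the zones can be large and can move with the hand.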