In a paper published in the UIST ’12 Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Mayank Goel, Jacob Wobbrock, and Shwetak Patel of the University of Washington present GripSense, a system that infers how users hold and touch their mobile devices.
The abstract: “We introduce GripSense […] use of thumb or index finger, or use on a table. GripSense also senses the amount of pressure a user exerts on the touchscreen despite a lack of direct pressure sensors by observing diminished gyroscope readings when the vibration motor is “pulsed.” In a controlled study with 10 participants, GripSense accurately differentiated device usage on a table vs. in hand with 99.7% accuracy; when in hand, it inferred hand postures with 84.3% accuracy. In addition, GripSense distinguished three levels of pressure with 95.1% accuracy. A usability analysis of GripSense was conducted in three custom applications and showed that pressure input and hand-posture sensing can be useful in a number of scenarios.”
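The pressure-sensing trick described in the abstract can be sketched in a few lines: pressing harder on the screen damps the handset's vibration, which shows up as diminished gyroscope readings while the motor is pulsed. The function below is a minimal illustration of that idea, not the paper's implementation; the names, thresholds, and sample data are all hypothetical.

```python
# Hedged sketch of GripSense-style pressure inference.
# Assumption: thresholds and synthetic samples are illustrative only.
# Core idea from the paper: harder presses damp the vibration motor's
# motion, so gyroscope readings captured during a pulse vary less.

from statistics import pstdev

def classify_pressure(gyro_during_pulse, light_thresh=0.8, hard_thresh=0.4):
    """Map gyroscope angular-velocity samples (rad/s), captured while the
    vibration motor pulses, to one of three pressure levels.
    More pressure -> more damping -> smaller spread in the readings."""
    spread = pstdev(gyro_during_pulse)  # population std. dev. of the pulse window
    if spread >= light_thresh:
        return "light"
    if spread >= hard_thresh:
        return "medium"
    return "hard"

# Synthetic pulses: an undamped pulse swings widely; a firmly pressed
# device barely moves.
print(classify_pressure([1.2, -1.1, 1.3, -1.2]))  # wide swings -> "light"
print(classify_pressure([0.1, -0.1, 0.1, -0.1]))  # heavy damping -> "hard"
```

A real system would, as the paper suggests, calibrate these thresholds per device and per vibration pulse rather than hard-coding them.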
Some further insight: “A typical computer user is no longer confined to a desk in a relatively consistent and comfortable environment. The world’s typical computer user is now holding a mobile device smaller than his or her hand, is perhaps outdoors, perhaps in motion, and perhaps carrying more things than just a mobile device. A host of assumptions about a user’s environment and capabilities that were tenable in comfortable desktop environments no longer applies to mobile users. This dynamic state of a user’s environment can lead to situational impairments, which pose a significant challenge to effective interaction because our current mobile devices do not have much awareness of our environments or how those environments affect users’ abilities.

[Figure 1. (left) It is difficult for a user to perform interactions like pinch-to-zoom with one hand. (right) GripSense senses a user’s hand posture and infers pressure exerted on the screen to facilitate new interactions like zoom-in and zoom-out.]

One of the most significant contextual factors affecting mobile device use may be a user’s hand posture with which he or she manipulates a mobile device. Research has shown that hand postures including grip, one or two hands, hand pose, the number of fingers used, and so on significantly affect performance and usage of mobile devices. For example, the pointing performance of index fingers is significantly better than thumbs, as is pointing performance when using two hands versus one hand. Similarly, the performance of a user’s dominant hand is better than that of his or her non-dominant hand. Research has found distinct touch patterns for different hand postures while typing on on-screen keyboards. And yet our devices, for the most part, have no clue how they are being held or manipulated, and therefore cannot respond appropriately with adapted user interfaces better suited to different hand postures.
Researchers have explored various techniques to accommodate some of these interaction challenges, like the change in device orientation due to hand movement [2,15]. But despite prior explorations, there remains a need to develop new techniques for sensing the hand postures with which people use mobile devices in order to adapt to postural and grip changes during use.”
Read the full paper.
Thanks to Joshua Tucker for sharing this paper on Framer Community. You can watch his implementation of the hand-usage research in a nice prototype he made with FramerJS.