Paper: Mobile and Interactive Media Use by Young Children: The Good, the Bad, and the Unknown

In a recent paper published in Pediatrics, a group of researchers from Boston University Medical Center writes: “The use of interactive screen media such as smartphones and tablets by young children is increasing rapidly. However, research regarding the impact of this portable and instantly accessible source of screen time on learning, behavior, and family dynamics has lagged considerably behind its rate of adoption. Pediatric guidelines specifically regarding mobile device use by young children have not yet been formulated, other than recent suggestions that a limited amount of educational interactive media use may be acceptable for children aged <2 years. New guidance is needed because mobile media differs from television in its multiple modalities (eg, videos, games, educational apps), interactive capabilities, and near ubiquity in children’s lives. Recommendations for use by infants, toddlers, and preschool-aged children are especially crucial, because effects of screen time are potentially more pronounced in this group. The aim of this commentary is to review the existing literature, discuss future research directions, and suggest preliminary guidance for families.”

The full article is available to subscribers only. A quick review of this commentary is available here.

PhD Dissertation: Design with emotion: improving web search experience for older adults

In the abstract of his PhD dissertation, Tamirat Abegaz (Clemson University) writes: “Research indicates that older adults search for information all together about 15% less than younger adults prior to making decisions. Prior research findings associated such behavior mainly with age-related cognitive difficulties. However, recent studies indicate that emotion is linked to influence search decision quality. This research approaches questions about why older adults search less and how this search behavior could be improved. The research is motivated by the broader issues of older users’ search behavior, while focusing on the emotional usability of search engine user interfaces. Therefore, this research attempts to accomplish the following three objectives: a) to explore the usage of low level design elements as emotion manipulation tools b) to seamlessly integrate these design elements into currently existing search engine interfaces, and finally c) to evaluate the impact of emotional design elements on search performance and user satisfaction. To achieve these objectives, two usability studies were conducted. The aim of the first study was to explore emotion induction capabilities of colors, shapes, and combination of both. The study was required to determine if the proposed design elements have strong mood induction capabilities. The results demonstrated that low level design elements such as color and shape have high visceral effects that could be used as potentially viable alternatives to induce the emotional states of users without the users having knowledge of their presence. The purpose of the second study was to evaluate alternative search engine user interfaces, derived from this research, for search thoroughness and user preference. In general, search based performance variables showed that participants searched more thoroughly using interface types that integrate angular shape features. In addition, user preference variables also indicated that participants seemed to enjoy search tasks using search engine interfaces that used color/shape combinations. Overall, the results indicated that seamless integration of low level emotional design elements into currently existing search engine interfaces could potentially improve web search experience.”

Download the full dissertation.

Paper: Emotional learning selectively and retroactively strengthens memories for related events

A team of researchers may have found a connection between emotional events and memory processes: emotional events seem to work retroactively, enhancing the memory of information acquired before the emotional event itself. According to the abstract: “Neurobiological models of long-term memory propose a mechanism by which initially weak memories are strengthened through subsequent activation that engages common neural pathways minutes to hours later [1]. This synaptic tag-and-capture model has been hypothesized to explain how inconsequential information is selectively consolidated following salient experiences. Behavioural evidence for tag-and-capture is provided by rodent studies in which weak early memories are strengthened by future behavioural training. Whether a process of behavioural tagging occurs in humans to transform weak episodic memories into stable long-term memories is unknown. Here we show, in humans, that information is selectively consolidated if conceptually related information, putatively represented in a common neural substrate, is made salient through an emotional learning experience. Memory for neutral objects was selectively enhanced if other objects from the same category were paired with shock. Retroactive enhancements as a result of emotional learning were observed following a period of consolidation, but were not observed in an immediate memory test or for items strongly encoded before fear conditioning. These findings provide new evidence for a generalized retroactive memory enhancement, whereby inconsequential information can be retroactively credited as relevant, and therefore selectively remembered, if conceptually related information acquires salience in the future.”

Full article available on Nature.

Paper: Personalized Keystroke Dynamics for Self-Powered Human–Machine Interfacing

The title may sound a bit cryptic. Simply put, it summarises a project developed by a team of Chinese and American researchers at work on a self-powered human-computer interface: “The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human–machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.”
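
For intuition, here is a minimal Python sketch, not taken from the paper, of how classic keystroke-dynamics features (dwell time while a key is held, flight time between keys) might be extracted from the kind of timestamped key events the IKB records; the function names and the sample data are hypothetical.

```python
# Minimal sketch (not from the paper): extracting classic keystroke-dynamics
# features -- dwell time (key held down) and flight time (gap between keys) --
# from timestamped press/release events, the kind of signal the IKB captures.

def keystroke_features(events):
    """events: list of (timestamp_ms, key, 'down' or 'up'), in time order."""
    down_at = {}          # key -> press timestamp
    dwell_times = []      # how long each key was held
    flight_times = []     # gap between releasing one key and pressing the next
    last_release = None

    for t, key, action in events:
        if action == "down":
            down_at[key] = t
            if last_release is not None:
                flight_times.append(t - last_release)
        elif action == "up" and key in down_at:
            dwell_times.append(t - down_at.pop(key))
            last_release = t

    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"mean_dwell_ms": mean(dwell_times),
            "mean_flight_ms": mean(flight_times)}

# Example: the word "hi" typed with a distinct rhythm.
sample = [(0, "h", "down"), (90, "h", "up"),
          (160, "i", "down"), (240, "i", "up")]
print(keystroke_features(sample))   # {'mean_dwell_ms': 85.0, 'mean_flight_ms': 70.0}
```

Features like these, aggregated over many keystrokes (and, in the IKB's case, combined with typing force), are what make keystroke dynamics usable as a behavioral biometric.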

Read the full paper on ACS.

PhD Dissertation: The Fundamental Issues of Pen-Based Interaction with Tablet Devices

In the abstract of her PhD dissertation, Michelle Kathryn Annett (Dept. of Computing Science, University of Alberta) writes: “Although pens and paper are pervasive in the analog world, their digital counterparts, styli and tablets, have yet to achieve the same adoption or frequency of use. Digital styli should provide a natural, intuitive method to take notes, annotate, and sketch, but have yet to reach their full potential. There has been surprisingly little research focused on understanding why inking experiences differ so vastly between analog and digital media and amongst various styli themselves. To enrich our knowledge on the stylus experience, this thesis contributes a foundational understanding of the factors implicated in the varied experiences found within the stylus ecosystem today.

The thesis first reports on an exploratory study utilizing traditional pen and paper and tablets and styli that observed quantitative and behavioural data, in addition to preferential opinions, to understand current inking experiences. The exploration uncovered the significant impact latency, unintended touch, and stylus accuracy have on the user experience, whilst also determining the increasing importance of stylus and device aesthetics and stroke beautification. The observed behavioural adaptations and quantitative measurements dictated the direction of the research presented herein.

A systematic approach was then taken to gather a deeper understanding of device latency and stylus accuracy. A series of experiments garnered insight into latency and accuracy, examining the underlying elements that result in the lackluster experiences found today. The results underscored the importance of visual feedback, user expectations, and perceptual limitations on user performance and satisfaction. The proposed Latency Perception Model has provided a cohesive understanding of touch- and pen-based latency perception, and a solid foundation upon which future explorations of latency can occur.

The thesis also presents an in-depth exploration of unintended touch. The data collection and analysis underscored the importance of stylus information and the use of additional data sources for solving unintended touch. The behavioral observations reemphasized the importance of designing devices and interfaces that support natural, fluid interaction and suggested hardware and software advancements necessary in the future. The commentary on the interaction – rejection dichotomy should be of great value to developers of unintended touch solutions along with designers of next-generation interaction techniques and styli.
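
To illustrate the kind of problem the thesis analyses, here is a deliberately simple, hypothetical palm-rejection rule in Python. It combines stylus hover state with touch contact size, echoing the thesis's point about using stylus information and additional data sources, but it is not the rejection approach studied in the dissertation; the thresholds and data structures are assumptions.

```python
# Hypothetical sketch of a simple unintended-touch (palm) rejection rule.
# It is NOT the approach from the dissertation; it only illustrates combining
# stylus hover state with touch-contact properties to filter touches.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float
    contact_area_mm2: float   # reported size of the touch blob

@dataclass
class StylusState:
    hovering: bool            # pen detected in range above the digitizer
    x: float = 0.0
    y: float = 0.0

PALM_AREA_MM2 = 150.0         # blobs larger than this are treated as a palm
REJECT_RADIUS_MM = 60.0       # touches this close to a hovering pen are ignored

def accept_touch(touch: TouchEvent, stylus: StylusState) -> bool:
    """Return True if the touch should be passed on to the UI."""
    if touch.contact_area_mm2 > PALM_AREA_MM2:
        return False                       # palm-sized contact
    if stylus.hovering:
        dist = ((touch.x - stylus.x) ** 2 + (touch.y - stylus.y) ** 2) ** 0.5
        if dist < REJECT_RADIUS_MM:
            return False                   # finger or palm resting near the pen tip
    return True                            # likely an intentional touch

print(accept_touch(TouchEvent(10, 10, 40), StylusState(hovering=False)))              # True
print(accept_touch(TouchEvent(10, 10, 40), StylusState(hovering=True, x=20, y=20)))   # False
```

The interaction–rejection dichotomy the thesis discusses is visible even here: every threshold that rejects more palms also risks rejecting intentional touches made while the pen hovers.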

The thesis then concludes with a commentary on the areas of the stylus ecosystem that would benefit from increased attention and focus in the years to come and future technological advancements that could present interesting challenges in the future.”

Download “The Fundamental Issues of Pen-Based Interaction with Tablet Devices”.

MA Thesis: User Experienced Software Aging: Test Environment, Testing and Improvement Suggestions

In this MA thesis, Sayed Tenkanen (University of Tampere) presents a test environment for the automated analysis of user-experienced software aging on mobile devices.

Abstract: “Software aging is empirically observed in software systems in a variety of manifestations ranging from slower performance to various failures as reported by users. Unlike hardware aging, where in its lifetime hardware goes through wear and tear resulting in an increased rate of failure after certain stable use conditions, software aging is a result of software bugs. Such bugs are always present in the software but may not make themselves known unless a set of preconditions are met. When activated, software bugs may result in slower performance and contribute to user dissatisfaction.

However, the impact of software bugs on PCs and mobile phones is different, as their uses are different. A PC is often turned off or rebooted on average every seven days, but a mobile device may continue to be used without a reboot for much longer. The prolonged operation period of mobile devices thus opens up opportunities for software bugs to be activated more often compared to PCs. Therefore, software aging in mobile devices, a considerable challenge to the ultimate user experience, is the focus of this thesis. The study was done in three consecutive phases: firstly, a test environment was set up; secondly, a mobile device was tested as a human user would use it under ordinary-use circumstances; and finally, suggestions were made on future testing implementations. To this end, an LG Nexus 4 was set up in an automated test environment that simulates a regular user’s use conditions, executes a set of human user use cases, and gathers data on consumption of power as well as reaction and response times in the various interactions. The results showed that an operating system agnostic test environment can be constructed with a limited amount of equipment that is capable of simulating a regular user’s use cases as a user would interact with a mobile device, to measure user-experienced software aging.”
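
As a rough illustration of what such an automated aging test can look like, here is a hedged Python sketch that replays a scripted use case in a loop and logs response times and battery level over a long uptime. The device-driving helpers (launch_app, wait_until_idle, read_battery_level) are hypothetical placeholders, not the tooling used in the thesis.

```python
# A minimal, hypothetical sketch of an aging-test loop in the spirit of the
# thesis: replay the same user-like use case repeatedly, without rebooting the
# device, and log how response times and battery level evolve over time.
# The device-control callbacks are assumptions, not a real API.

import csv
import time

def run_use_case(launch_app, wait_until_idle, apps):
    """Run one pass of the scripted use case and return its duration in seconds."""
    start = time.monotonic()
    for app in apps:
        launch_app(app)        # e.g. open the browser, scroll a page, go home
        wait_until_idle()      # block until the UI settles again
    return time.monotonic() - start

def aging_test(launch_app, wait_until_idle, read_battery_level,
               apps, iterations=1000, log_path="aging_log.csv"):
    """Repeat the use case for a long stretch and log each pass to CSV."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["iteration", "elapsed_s", "response_time_s", "battery_pct"])
        t0 = time.monotonic()
        for i in range(iterations):
            duration = run_use_case(launch_app, wait_until_idle, apps)
            writer.writerow([i, round(time.monotonic() - t0, 1),
                             round(duration, 3), read_battery_level()])
```

A slow upward drift in response_time_s across iterations, with the workload held constant, is the user-visible signature of software aging the thesis sets out to measure.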

Download the MA Thesis.

GripSense: Using Built-In Sensors to Detect Hand Posture and Pressure on Commodity Mobile Phones

In a paper published in the UIST ’12 Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Mayank Goel, Jacob Wobbrock, and Shwetak Patel (University of Washington) present their system for inferring hand posture and touch pressure on mobile devices.

The abstract: “We introduce GripSense, a system that leverages mobile device touchscreens and their built-in inertial sensors and vibration motor to infer hand postures, including one- or two-handed interaction, use of thumb or index finger, or use on a table. GripSense also senses the amount of pressure a user exerts on the touchscreen despite a lack of direct pressure sensors by observing diminished gyroscope readings when the vibration motor is “pulsed.” In a controlled study with 10 participants, GripSense accurately differentiated device usage on a table vs. in hand with 99.7% accuracy; when in hand, it inferred hand postures with 84.3% accuracy. In addition, GripSense distinguished three levels of pressure with 95.1% accuracy. A usability analysis of GripSense was conducted in three custom applications and showed that pressure input and hand-posture sensing can be useful in a number of scenarios.”
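
The pressure-sensing trick is worth unpacking: GripSense pulses the vibration motor and reads how much a firm grip damps the resulting gyroscope signal. Below is a small illustrative Python sketch of that idea, with made-up thresholds and synthetic numbers; it is not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): estimate touch pressure
# from how much a firm press damps the gyroscope signal while the vibration
# motor is pulsed -- the core idea behind GripSense's pressure sensing.

def signal_energy(gyro_samples):
    """Mean squared angular velocity over a vibration-pulse window."""
    return sum(v * v for v in gyro_samples) / len(gyro_samples)

def estimate_pressure_level(gyro_during_pulse, baseline_energy):
    """Classify pressure into three coarse levels from gyroscope damping.

    baseline_energy: gyroscope energy during a pulse with no touch on screen.
    The thresholds here are invented for illustration; GripSense learns its own.
    """
    damping = 1.0 - signal_energy(gyro_during_pulse) / baseline_energy
    if damping < 0.2:
        return "light"
    elif damping < 0.5:
        return "medium"
    return "hard"

# Example with synthetic numbers: a firm press absorbs most of the vibration.
baseline = signal_energy([0.9, -1.1, 1.0, -0.95])     # free-standing pulse
pressed  = [0.3, -0.35, 0.32, -0.3]                   # heavily damped pulse
print(estimate_pressure_level(pressed, baseline))      # -> "hard"
```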

Some further insight: “A typical computer user is no longer confined to a desk in a relatively consistent and comfortable environment. The world’s typical computer user is now holding a mobile device smaller than his or her hand, is perhaps outdoors, perhaps in motion, and perhaps carrying more things than just a mobile device. A host of assumptions about a user’s environment and capabilities that were tenable in comfortable desktop environments no longer applies to mobile users. This dynamic state of a user’s environment can lead to situational impairments [28], which pose a significant challenge to effective interaction because our current mobile devices do not have much awareness of our environments or how those environments affect users’ abilities [33]. One of the most significant contextual factors affecting mobile device use may be a user’s hand posture with which he or she manipulates a mobile device. Research has shown that hand postures including grip, one or two hands, hand pose, the number of fingers used, and so on significantly affect performance and usage of mobile devices [34]. For example, the pointing performance of index fingers is significantly better than thumbs, as is pointing performance when using two hands versus one hand. Similarly, the performance of a user’s dominant hand is better than that of his or her non-dominant hand. Research has found distinct touch patterns for different hand postures while typing on on-screen keyboards [1]. And yet our devices, for the most part, have no clue how they are being held or manipulated, and therefore cannot respond appropriately with adapted user interfaces better suited to different hand postures. Researchers have explored various techniques to accommodate some of these interaction challenges, like the change in device orientation due to hand movement [2,15]. But despite prior explorations, there remains a need to develop new techniques for sensing the hand postures with which people use mobile devices in order to adapt to postural and grip changes during use.”

Figure 1 from the paper: (left) It is difficult for a user to perform interactions like pinch-to-zoom with one hand. (right) GripSense senses the user’s hand posture and infers pressure exerted on the screen to facilitate new interactions like zoom-in and zoom-out.

 

Read the full paper.

 

Thanks to Joshua Tucker for sharing this paper on Framer Community. You can see his implementation of the hand-posture research in a nice prototype he made with FramerJS.

The Drift Table: Designing for Ludic Engagement

As part of a current interest in (critical) making, and thanks to a hint from Gabriele Ferri, I stumbled upon this paper from CHI 2004. The work deals with the notion of ludic design, or design for ludic engagement, applied to the ideation of interactive devices: “The Drift Table is an electronic coffee table that displays slowly moving aerial photography controlled by the distribution of weight on its surface. It was designed to investigate our ideas about how technologies for the home could support ludic activities—that is, activities motivated by curiosity, exploration, and reflection rather than externally-defined tasks. The many design choices we made, for example to block or disguise utilitarian functionality, helped to articulate our emerging understanding of ludic design. Observations of the Drift Table being used in volunteers’ homes over several weeks gave greater insight into how playful exploration is practically achieved and the issues involved in designing for ludic engagement.”
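
To picture the interaction more concretely, here is a hedged Python sketch of one way the weight-to-panning mapping could work: four corner load-cell readings are turned into a slow pan velocity over the aerial imagery. This is an assumption about the mechanism, not the actual Drift Table implementation.

```python
# A hedged sketch (not the actual Drift Table implementation) of mapping four
# corner load-cell readings to a slow pan velocity over aerial imagery:
# more weight toward one edge drifts the view in that direction.

MAX_SPEED = 5.0   # pixels per second; the table is meant to move very slowly

def pan_velocity(front_left, front_right, back_left, back_right):
    """Return (vx, vy) from corner weights (e.g. in kilograms)."""
    total = front_left + front_right + back_left + back_right
    if total <= 0:
        return (0.0, 0.0)   # nothing on the table: the view stays put
    # Signed imbalance along each axis, normalised to [-1, 1].
    dx = ((front_right + back_right) - (front_left + back_left)) / total
    dy = ((front_left + front_right) - (back_left + back_right)) / total
    return (dx * MAX_SPEED, dy * MAX_SPEED)

# A coffee cup placed near the right edge drifts the view slowly rightwards.
print(pan_velocity(0.1, 0.4, 0.1, 0.4))   # -> (3.0, 0.0)
```

The deliberately low maximum speed is part of the ludic framing: the table invites slow, curious drifting rather than efficient navigation.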

Download the full paper.

Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users

In a recent paper published in Current Biology, titled “Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users”, Arko Ghosh (University of Zurich) and colleagues found evidence of the effects of touchscreen phone use on users’ brain processes. According to the summary: “Cortical activity allotted to the tactile receptors on fingertips conforms to skilful use of the hand [1–3]. For instance, in string instrument players, the somatosensory cortical activity in response to touch on the little fingertip is larger than that in control subjects [1]. Such plasticity of the fingertip sensory representation is not limited to extraordinary skills and occurs in monkeys trained to repetitively grasp and release a handle as well [4]. Touchscreen phones also require repetitive finger movements, but whether and how the cortex conforms to this is unknown. By using electroencephalography (EEG), we measured the cortical potentials in response to mechanical touch on the thumb, index, and middle fingertips of touchscreen phone users and nonusers (owning only old-technology mobile phones). Although the thumb interacted predominantly with the screen, the potentials associated with the three fingertips were enhanced in touchscreen users compared to nonusers. Within the touchscreen users, the cortical potentials from the thumb and index fingertips were directly proportional to the intensity of use quantified with built-in battery logs. Remarkably, the thumb tip was sensitive to the day-to-day fluctuations in phone use: the shorter the time elapsed from an episode of intense phone use, the larger the cortical potential associated with it. Our results suggest that repetitive movements on the smooth touchscreen reshaped sensory processing from the hand and that the thumb representation was updated daily depending on its use. We propose that cortical sensory processing in the contemporary brain is continuously shaped by the use of personal digital technology.”

Read the full paper.