Google Glass for Education: A Remote Mobile Usability Study of a Responsive Instructional Website

Patricia J. Stemmle (University of Hawai‘i at Mānoa) writes:

As wearable computing devices, ubiquitous mobile access, and advances in information and communications technology (ICT) become a global reality, the opportunities for innovation in distance learning expand exponentially. Educators face special challenges in designing effective instruction for delivery in online learning environments that are becoming increasingly mobile and many seek professional development resources to acquire the skills and expertise needed to adopt and integrate new technologies into their practices in impactful ways. With the release of the new Google Glass Explorer Edition (Glass), a head-mounted display, came a need to provide instruction for operating Glass with a focus on education. Google Glass in Education, a website of asynchronous, instructional modules (URL: eLearn.Glass), was created to instruct members of the Google+ Community—Google Glass in Education to impart the fundamentals of operating Google Glass, to record and stream live video, integrate augmented reality, and explore curated resources for educational use. The aim of this mobile usability study was to evaluate the website’s ease of use and effectiveness and to improve user satisfaction through iterative usability testing. Overall, data analysis revealed that participants did experience improved ease-of-use and increased satisfaction with the final revised instructional website.

Download the full paper.

[Paper] Evaluation of Smartphone Accessibility Interface Practices for Older Adults

From the abstract: “Smartphones can play a significant role in maintaining a decent quality of life for elderly people. A key factor in smartphone usage success among elderly people is the accessibility of the phone interface. Indeed, there is exponential growth of the elderly population that suffers from age-related disabilities. Accessibility problems should be in mind for developers. To address these issues in new smartphone devices, there is no proper set of guidelines available that focuses on this domain. So in this paper the work focuses on: (1) a set of guidelines to keep in mind in order to achieve accessibility in mobile interfaces for older people. This checklist is the result of a review study of the literature, standards and best practices in this area of knowledge; (2) using this accessibility checklist aimed at elderly people, a survey of three native mobile apps on the Android platform has been carried out. These apps aim to replace the default interface with another, more accessible one”.

Full paper available for (free) download.

Amira Ahmed, Aleeha Iftikhar, and Sarmad Sadik, “Evaluation of Smartphone Accessibility Interface Practices for Older Adults”, International Journal of Engineering Science Invention, vol. 4, issue 3, March 2015, pp. 24–30. ISSN (Online): 2319-6734, ISSN (Print): 2319-6726. www.ijesi.org

MSc Thesis: Eye tracking and game mechanics: An evaluation of unmodified tablet computers for use in vision screening software for children

Eye tracking and game mechanics: An evaluation of unmodified tablet computers for use in vision screening software for children, Carisa Chang, University of Wisconsin, 2014. From the abstract:

“Binocular vision issues are a primary cause of reading and writing difficulties among children, with as many as 25 percent of school-aged children needing vision therapy. Vision issues are a better predictor of academic success and quality of life than race or socioeconomic status. In spite of validated screening exams and vision therapy to address these issues, the vast majority of children are never examined, or if examined do not receive follow up care.

Barriers to access include financial limitations, availability of qualified eye care professionals and child engagement in screening and therapy. Current screening and therapy methodologies are not scalable, but computer programs and mobile applications already deployed in medical and behavioral fields offer insight to the impact that new screening and therapy technologies could provide. However, potential negative effects from the prolonged use of tablet screens must also be considered.

A study was conducted to evaluate the use of computer vision and machine learning algorithms to estimate user gaze when applied to video input from an unmodified tablet computer. Accurate gaze estimation would enable the development of tablet-based vision screening and therapy applications. Recommendations for future work are made based on the results of this study.”
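To give a concrete sense of what gaze estimation from camera input involves, the sketch below fits a least-squares mapping from an eye feature (a pupil offset in camera pixels, as a computer-vision pipeline might extract) to an on-screen coordinate. The calibration data and numbers are invented for illustration; they are not taken from the thesis.

```python
# Toy gaze-estimation sketch: map a pupil offset (camera pixels) to a
# screen x-coordinate with ordinary least squares. A real system would
# extract the pupil offset from video with computer vision; here the
# calibration pairs are synthetic.
def fit_line(xs, ys):
    # ordinary least squares fit for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# calibration: measured pupil offsets vs. known on-screen target positions
pupil_x = [-10.0, -5.0, 0.0, 5.0, 10.0]        # pupil offset (camera px)
screen_x = [0.0, 256.0, 512.0, 768.0, 1024.0]  # target x (screen px)

a, b = fit_line(pupil_x, screen_x)
gaze_estimate = a * 2.5 + b  # predicted screen x for a new pupil offset
```

In practice a tablet-based screener would calibrate per child (and per session), since head pose and camera angle shift the mapping.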

Download the full dissertation.

[Paper] Human-level control through deep reinforcement learning

Google DeepMind’s scientists have built software that can play video games as well as a human being, or even better. By exploiting the theory of reinforcement learning, Google’s software is able to improve its performance after hours of play. As stated in the abstract:

The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific  perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
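The temporal-difference update at the heart of the deep Q-network can be shown in miniature. The sketch below is a tabular Q-learning agent on a toy five-cell corridor, not the pixel-based deep network of the paper; the environment, constants and names are invented for illustration.

```python
import random

# Tabular Q-learning on a toy corridor: the agent starts at cell 0 and
# earns a reward of 1 only by reaching the rightmost cell.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
N_STATES = 5          # corridor cells 0..4; reward only at the right end
ACTIONS = (-1, +1)    # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection (random tie-breaking)
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: (q[(s, x)], rng.random()))
            s2, r, done = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r if done else r + GAMMA * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy (argmax over Q) walks straight to the rewarded cell. DQN replaces this lookup table with a convolutional network that estimates Q-values directly from raw screen pixels.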

Read the full article on Nature.

[Paper] Recommended for you. The Netflix Prize and the production of algorithmic culture

In this article, Blake Hallinan (PhD candidate at Indiana University) and Ted Striphas (Associate Professor and Director of Graduate Studies in the Department of Communication & Culture at Indiana University) address the impact of algorithms on cultural practices through a critical and cultural review of the Netflix Prize contest. From the abstract:

How does algorithmic information processing affect the meaning of the word culture, and, by extension, cultural practice? We address this question by focusing on the Netflix Prize (2006–2009), a contest offering US$1m to the first individual or team to boost the accuracy of the company’s existing movie recommendation system by 10%. Although billed as a technical challenge intended for engineers, we argue that the Netflix Prize was equally an effort to reinterpret the meaning of culture in ways that overlapped with, but also diverged in important respects from, the three dominant senses of the term assayed by Raymond Williams. Thus, this essay explores the conceptual and semantic work required to render algorithmic information processing systems legible as forms of cultural decision making. It also then represents an effort to add depth and dimension to the concept of “algorithmic culture.”

Read the full article on Sage Journal Online.

Narrative framing of consumer sentiment in online restaurant reviews

Dan Jurafsky (Stanford University), Victor Chahuneau, Bryan R. Routledge, and Noah A. Smith (all of Carnegie Mellon) published a study that used online restaurant reviews as a source of insight into people’s inner worlds. As reported in the abstract:

The vast increase in online expressions of consumer sentiment offers a powerful new tool for studying consumer attitudes. To explore the narratives that consumers use to frame positive and negative sentiment online, we computationally investigate linguistic structure in 900,000 online restaurant reviews. Negative reviews, especially in expensive restaurants, were more likely to use features previously associated with narratives of trauma: negative emotional vocabulary, a focus on the past actions of third person actors such as waiters, and increased use of references to “we” and “us”, suggesting that negative reviews function as a means of coping with service–related trauma. Positive reviews also employed framings contextualized by expense: inexpensive restaurant reviews use the language of addiction to frame the reviewer as craving fatty or starchy foods. Positive reviews of expensive restaurants were long narratives using long words emphasizing the reviewer’s linguistic capital and also focusing on sensory pleasure. Our results demonstrate that portraying the self, whether as well–educated, as a victim, or even as addicted to chocolate, is a key function of reviews and suggests the important role of online reviews in exploring social psychological variables.
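The kind of surface-feature counting such a study relies on can be sketched in a few lines: tally rates of first-person-plural pronouns and negative-emotion words per review. The tiny word lists below are illustrative stand-ins, not the lexicons the authors actually used.

```python
import re
from collections import Counter

# Minimal per-review feature extraction: rates of negative-emotion words
# and first-person-plural pronouns. Word lists are toy examples.
NEGATIVE_WORDS = {"terrible", "awful", "rude", "worst", "waited"}
WE_PRONOUNS = {"we", "us", "our"}

def review_features(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {
        "neg_rate": sum(counts[w] for w in NEGATIVE_WORDS) / total,
        "we_rate": sum(counts[w] for w in WE_PRONOUNS) / total,
    }

feats = review_features("We waited 10 min. before we even got her attention to order.")
```

Scaled to 900,000 reviews, rates like these can then be correlated with star ratings and restaurant price level, which is the shape of the analysis the abstract describes.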

On Stanford Online, Clifton B. Parker has put together some of the most interesting narrative patterns outlined by the researchers:

  • Positive reviews of expensive restaurants tended to use metaphors of sex and sensual pleasure, such as “orgasmic pastry” or “seductively seared foie gras.” And the words used in those reviews were longer and fancier.
  • Positive reviews of cheap restaurants and foods often employed metaphors of drugs or addiction – “these cupcakes are like crack.”
  • Negative reviews were frequently associated with the language of personal trauma and poor customer service: “We waited 10 min. before we even got her attention to order.”
  • Women were more likely than men to use drug metaphors to describe their attitudes toward food.
  • The foods most likely to be described using drug metaphors were pizza, burgers, sweets and sushi.

These patterns point to three main areas:

‘Sense of self’: […] “Bad reviews,” the authors said, “seem to be caused by bad customer service rather than just bad food or atmosphere. The bottom line is that it’s all about the personal interactions. When people are rude or mean to you, it goes straight to your sense of self.” The negative reviews function as a means of coping with service-related trauma, according to the study.

‘Sensuality’: […] “Positive reviews appeal, presumably light-heartedly, to the author as an addict suffering from cravings for junk foods, non-normative meals and other guilty pleasures. […] We talk about food as an addiction when we’re feeling guilty about what we’re eating.”

‘Data mining’: […] The fact that many negative reviews highlight “service-related traumas” may encourage restaurant managers to prioritize customer satisfaction if they are not doing so already. “When you write a review on the web you’re providing a window into your own psyche – and the vast amount of text on the web means that researchers have millions of pieces of data about people’s mindsets,” said Jurafsky.

Read the study on FirstMonday.org.

Ian Bogost: The cathedral of computing

According to Ian Bogost, the tendency to read our present through algorithms and software is driven by a misleading sense of devotion rather than by the “materiality” of the phenomenon. So we end up treating software as the foundation of today’s culture instead of as one of the abstractions available to understand it. At odds with Manovich, Bogost writes:

[…]

The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one. But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.

[…]

The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices. Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture.

[…]

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work. Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

Read the full article.

A new brain-scanning technology for rethinking human focus

A team of researchers at Princeton University has published an article in Nature Neuroscience presenting the results of a new brain-scanning technology aimed at explaining how human focus works. From the abstract:

Lapses of attention can have negative consequences, including accidents and lost productivity. Here we used closed-loop neurofeedback to improve sustained attention abilities and reduce the frequency of lapses. During a sustained attention task, the focus of attention was monitored in real time with multivariate pattern analysis of whole-brain neuroimaging data. When indicators of an attentional lapse were detected in the brain, we gave human participants feedback by making the task more difficult. Behavioral performance improved after one training session, relative to control participants who received feedback from other participants’ brains. This improvement was largest when feedback carried information from a frontoparietal attention network. A neural consequence of training was that the basal ganglia and ventral temporal cortex came to represent attentional states more distinctively. These findings suggest that attentional failures do not reflect an upper limit on cognitive potential and that attention can be trained with appropriate feedback about neural signals.

According to Taylor Beck (The Atlantic):

The scientists who invented this attention machine, led by professor Nick Turk-Browne, are calling it a “mind booster.” It could, they say, change the way we think about paying attention—and even introduce new ways of treating illnesses like depression. Here’s how the brain decoder works: You lie down in a functional magnetic resonance imaging (fMRI) machine—similar to the MRI machines used to diagnose diseases—which lets scientists track brain activity. Once you’re in the scanner, you watch a series of pictures and press a button when you see certain targets. The task is like a video game—the dullest video game in the world, really, which is the point. You see a face, overlaid atop an image of a landscape. Your job is to press a button if the face is female, as it is 90 percent of the time, but not if it’s male. And ignore the landscape. (There’s also a reverse task, in which you’re asked to judge whether the scene is outside or inside, and ignore the faces.)

[…]

Neuroscientists have been reading brain patterns with computer programs like this for just over a decade. Machine-learning algorithms, like the ones Google and Facebook use to recognize everything online, can hack the brain’s code, too: essentially software for reading brain scans […] What’s new and remarkable now is how fast neural decoding is happening. Machines today can harness brain activity to drive what a person sees in real time. “The idea that we could tell anything about a person’s thoughts from a single brain snapshot was such a rush,” Norman recalls of the early days, over a decade ago. “Certainly the kinds of decoding we are doing now can be done much faster.” Here is how Princeton’s current scanner sees a human brain: First, it divides a brain image into around 40,000 cubes, called voxels, or 3-D pixels. This basic unit of fMRI is a 3 millimeter by 3 millimeter cube of brain. So, the neural pattern representing any mental state—from how you feel when you smell your wife’s perfume to suicidal despair—is represented by this matrix. The same neural code for, say, Scarlett Johansson, will represent her in your memory, or as you talk to her on the phone, or in your dreams. The decoding approach, first pioneered in 2001 by the neuroscientist James Haxby and colleagues at Princeton, is known technically as “multi-voxel pattern analysis,” or MVPA. This “decoding” is distinct from the more common, less sophisticated form of fMRI analysis that gets a lot of attention in the media, the kind that shows what parts of the brain “light up” when a person does a task, relative to a control. “Though fMRI is not very cheap to use, there may be a certain advantage of neurofeedback training, compared to pure behavioral training,” suggests Kazuhisa Shibata, an assistant professor at Brown University, “if this work is shown to generalize to other tasks or domains.”
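The pattern-classification idea behind MVPA can be sketched with a toy nearest-centroid decoder. Everything below is synthetic and simplified: real MVPA operates on roughly 40,000 fMRI voxels, while here we invent 50 “voxels” and two states (“face”, “scene”) purely to illustrate how a state is read off a pattern.

```python
import random

# Toy MVPA decoder: classify a simulated "brain state" from a vector of
# voxel activities using a nearest-centroid rule. All data is synthetic.
N_VOXELS = 50
rng = random.Random(42)

# each state has a fixed underlying activity template
TEMPLATES = {
    "face":  [rng.gauss(0, 1) for _ in range(N_VOXELS)],
    "scene": [rng.gauss(0, 1) for _ in range(N_VOXELS)],
}

def simulate_pattern(state, noise=0.5):
    # one noisy "scan": the state's template plus per-voxel Gaussian noise
    return [v + rng.gauss(0, noise) for v in TEMPLATES[state]]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def train(n_per_class=20):
    # average many noisy scans per state to estimate each state's centroid
    return {s: centroid([simulate_pattern(s) for _ in range(n_per_class)])
            for s in TEMPLATES}

def decode(pattern, centroids):
    # assign the state whose centroid is nearest in squared distance
    def dist(c):
        return sum((p - c_i) ** 2 for p, c_i in zip(pattern, c))
    return min(centroids, key=lambda s: dist(centroids[s]))

centroids = train()
```

Closed-loop neurofeedback, in these terms, amounts to running such a decoder on each incoming scan and adjusting the task (for example, making it harder) when the decoded attentional state drifts.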

Read the original paper on Nature Neuroscience. Find out more on The Atlantic.