Better call Ross, the IBM Watson-powered attorney

Ross, the super-intelligent attorney, is an application built on top of IBM’s Watson cognitive system by Andrew Arruda, Shuai Wang, Pargles Dall’Oglio, Jimoh Ovbiagele, and Akash Venat – a group of students at the University of Toronto. Ross was created by feeding Watson a huge volume of public legal documents and using experts to calibrate the answers it provides. Thanks to Watson’s “cognitive” abilities, the more lawyers use Ross, the better it learns.

As you may read on the official website: “ROSS is a digital legal expert that helps you power through your legal research. You ask ROSS your questions in natural language such as: “ROSS, in Ontario, can courts pierce the corporate veil where a corporation has misappropriated funds?” ROSS then reads through the entire body of law and returns a cited answer and topical readings from case law, legislation and secondary sources to get you up-to-speed quickly. In addition, ROSS monitors the law around the clock to notify you of new court decisions that can affect your case.”

You can read more about ROSS on ITBusiness.ca

Paper: Wearable Devices as Facilitators, Not Drivers, of Health Behavior Change

In this online paper, Mitesh M. Patel, David A. Asch, and Kevin G. Volpp write about the relationship between wearable technologies and their ability to drive behavioural change. As they point out, although these devices are growing in popularity, the gap between recording information – a process also known as the “quantified self” – and changing behaviour is substantial, and little evidence suggests that they are bridging it. A brief extract:

“Several large technology companies including Apple, Google, and Samsung are entering the expanding market of population health with the introduction of wearable devices. This technology, worn in clothing or accessories, is part of a larger movement often referred to as the “quantified self.” The notion is that by recording and reporting information about behaviors such as physical activity or sleep patterns, these devices can educate and motivate individuals toward better habits and better health. The gap between recording information and changing behavior is substantial, however, and while these devices are increasing in popularity, little evidence suggests that they are bridging that gap.

Only 1% to 2% of individuals in the United States have used a wearable device, but annual sales are projected to increase to more than $50 billion by 2018. Some of these devices aim at individuals already motivated to change their health behaviors. Others are being considered by health care organizations, employers, insurers, and clinicians who see promise in using these devices to better engage less motivated individuals. Some of these devices may justify that promise, but less because of their technology and more because of the behavioral change strategies that can be designed around them.

Most health-related behaviors such as eating well and exercising regularly could lead to meaningful improvements in population health only if they are sustained. If wearable devices are to be part of the solution, they either need to create enduring new habits, turning external motivations into internal ones (which is difficult), or they need to sustain their external motivation (which is also difficult). This requirement of sustained behavior change is a major challenge, but many mobile health applications have not yet leveraged principles from theories of health behavior.

Feedback loops could be better designed around wearable devices to sustain engagement by using concepts from behavioral economics. Individuals are often motivated by the experience of past rewards and the prospect of future rewards. Lottery-based designs leverage the fact that individuals tend to assign undue weight to small probabilities and are more engaged by intermittent variable rewards than with constant reinforcement. Anticipated regret, an individual’s concern or anxiety over the reward he or she might not win, can have a significant effect on decision making. Feedback could be designed to use this concept by informing individuals what they would have won had they been adherent to the new behavior. Building new habits may be best facilitated by presenting frequent feedback with appropriate framing and by using a trigger that captures the individual’s attention at those moments when he or she is most likely to take action.”
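The lottery and anticipated-regret designs described above are easy to picture in code. Here is a minimal Python sketch – not from the paper – of a daily feedback message driven by a wearable’s step count; the step goal, prize values, and wording are invented for illustration.

```python
import random

STEP_GOAL = 7000               # illustrative daily step goal, not from the paper
PRIZES = [0, 0, 0, 5, 5, 50]   # mostly nothing, sometimes small, rarely large

def daily_feedback(steps: int) -> str:
    """Lottery-based feedback: a prize is drawn every day, but it is only
    paid out if the step goal was met; otherwise the message leans on
    anticipated regret by saying what the prize would have been."""
    prize = random.choice(PRIZES)
    if steps >= STEP_GOAL:
        if prize > 0:
            return f"You hit {steps} steps and won ${prize} in today's draw!"
        return f"You hit {steps} steps. No prize today, but you're in tomorrow's draw."
    return (f"Today's draw was worth ${prize}, but you only logged {steps} steps. "
            f"Reach {STEP_GOAL} tomorrow to claim the next one.")

# Example: feedback for a day with 5,200 recorded steps
print(daily_feedback(5200))
```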

Read the full article on JAMA – The Journal of the American Medical Association.

Picture of a human capillary network

Facepay: a new payment system based on face recognition

For those of you who think that Apple Pay is the future, food retailer 100% Genuine Imported Food Chain Stores has released a retail payment system based on face recognition. Currently available in Shanghai, the technology measures and records the capillary network data of customers’ faces and hands instead of tracking the distances between given facial points.

Read the full article on PSFK and Chinadaily.com.

A new prototype turns human tongues into ears

A team of three researchers from Colorado State University – John Williams, Leslie Stone-Roy, and JJ Moritz – is working on a prototype capable of turning tongues into ears (kind of). The device consists of a Bluetooth-enabled microphone earpiece and a smart retainer that fits on a person’s tongue:

“The two devices work in tandem to strengthen a partially deaf person’s ability to recognize words. […] The retainer/earpiece system works by reprogramming areas of the brain, helping them to interpret various sensations on the tongue as certain words. The process starts with the earpiece’s microphone, which takes in sounds and words from the surrounding environment. A processor converts these sounds into distinct, complex waveforms that represent individual words. The waveforms are then sent via Bluetooth to the retainer, where they are specially designed to stimulate the tongue. Utilizing an array of electrodes, the retainer excites a distinct pattern of somatic nerves (those related to touch) on the tongue, depending on which waveform it receives. The electrodes excite the nerves just enough to cause them to fire their own action potentials. […] According to Leslie Stone-Roy, one of the researchers on the team, the team chose the tongue because of its hypersensitive ability to discern between tactile sensations: “We’re able to discriminate between fine points that are just a short distance on the tongue […] The tongue is similar in that it has high acuity.” With lots of time and practice, the retainer helps to strengthen the brain’s ability to recognize certain words. For example, every time the microphone hears the word “ball,” the retainer excites the same pattern of nerves on the tongue. Over time, the brain learns to associate that specific tongue sensation with the word “ball,” making it easier to recognize the word in the future”.
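The property doing the work here is that each recognized word maps to one consistent, distinguishable stimulation pattern. Below is a minimal Python sketch of that idea; the electrode count, the hashing trick, and the print stand-in for the Bluetooth link are all assumptions for illustration, not the team’s actual waveform design.

```python
import hashlib

NUM_ELECTRODES = 32   # illustrative size of the retainer's electrode array

def word_to_electrode_pattern(word: str, active: int = 8) -> list[int]:
    """Map a recognized word to a stable subset of electrode indices.
    The hash only guarantees that the same word always yields the same
    pattern, which is the property the training relies on."""
    digest = hashlib.sha256(word.lower().encode()).digest()
    indices = sorted({digest[i] % NUM_ELECTRODES for i in range(active * 2)})
    return indices[:active]

def stimulate(word: str) -> None:
    """Stand-in for the Bluetooth link: print the pattern that would be
    excited on the tongue for this word."""
    print(f"'{word}' -> electrodes {word_to_electrode_pattern(word)}")

# The same word always produces the same tongue sensation, which is what
# lets the brain learn the association over time.
stimulate("ball")
stimulate("ball")
stimulate("call")
```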

Read the full article on Popular Science.

Look at me: a mobile game app to increase social skills in children with autism

Look at me is a recent Android app made by Samsung to help children with autism improve their interpersonal skills. The app is a game that relies on the camera (and game mechanics) to challenge and reward its young players: children earn points by recognising faces, emotions, and facial features in pictures displayed on the device. They also have to replicate facial emotions in front of the smartphone so the app can scan their faces and validate their performance.
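The game loop boils down to two checks per round: did the child name the emotion correctly, and did the camera validate their imitation of it? A minimal sketch follows, with a placeholder where Samsung’s on-device emotion recognizer would sit; the function names and scoring are invented for illustration.

```python
import random

EMOTIONS = ["happy", "sad", "angry", "surprised"]

def classify_emotion(photo) -> str:
    """Placeholder for the app's on-device emotion recognizer.
    Here it just guesses, so the game loop can run end to end."""
    return random.choice(EMOTIONS)

def play_round(photo, true_emotion: str, child_answer: str, selfie) -> int:
    """One simplified round: name the emotion in the picture, then
    mimic it in front of the camera. Each success earns a point."""
    points = 0
    if child_answer == true_emotion:
        points += 1                                  # correct recognition
    if classify_emotion(selfie) == true_emotion:
        points += 1                                  # imitation validated by the camera
    return points

score = play_round(photo=None, true_emotion="happy",
                   child_answer="happy", selfie=None)
print(f"Points this round: {score}")
```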

According to Samsung, this project – developed with the help of researchers from Seoul National University and Yonsei University – is first and foremost an example of the role of inclusive innovation and technology as a means to improve people’s quality of life. It is a vision connected to a broader, “new” area of HCI known as positive computing. We’ll cover this topic in a forthcoming post.

Happiness: experience versus memory

I stumbled upon this “old” TED Talk video while flipping through some pages on Flipboard. Daniel Kahneman talks about the correlation of happiness with our different selves and the way this correlation may influence our decision-making process. It turns out that in our pursuit of happiness, the goals and conditions to be satisfied vary according to the self we are looking at. Here, the stories our brain makes up for us seem to increase the weight of memory over experience.

Paper: Emotional learning selectively and retroactively strengthens memories for related events

A team of researchers might have found a connection between emotional events and memory processes. These emotional events seem to work retroactively, enhancing the memory of information acquired before the emotional event itself. According to the abstract: “Neurobiological models of long-term memory propose a mechanism by which initially weak memories are strengthened through subsequent activation that engages common neural pathways minutes to hours later. This synaptic tag-and-capture model has been hypothesized to explain how inconsequential information is selectively consolidated following salient experiences. Behavioural evidence for tag-and-capture is provided by rodent studies in which weak early memories are strengthened by future behavioural training. Whether a process of behavioural tagging occurs in humans to transform weak episodic memories into stable long-term memories is unknown. Here we show, in humans, that information is selectively consolidated if conceptually related information, putatively represented in a common neural substrate, is made salient through an emotional learning experience. Memory for neutral objects was selectively enhanced if other objects from the same category were paired with shock. Retroactive enhancements as a result of emotional learning were observed following a period of consolidation, but were not observed in an immediate memory test or for items strongly encoded before fear conditioning. These findings provide new evidence for a generalized retroactive memory enhancement, whereby inconsequential information can be retroactively credited as relevant, and therefore selectively remembered, if conceptually related information acquires salience in the future.”

Full article available on Nature.

Paper: Personalized Keystroke Dynamics for Self-Powered Human–Machine Interfacing

The title may sound a bit cryptic. Simply put, it summarises a project developed by a team of Chinese and American researchers at work on a self-powered human-computer interface: “The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human–machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.”
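The behavioural-biometric part of the paper rests on classic keystroke-dynamics features: how long each key is held (dwell time) and the gap between keys (flight time). Here is a minimal Python sketch of those features; the event format, sample data, and tolerance are invented for illustration, and the IKB additionally records typing force, which is omitted here.

```python
from statistics import mean

# Each event: (key, press_time_s, release_time_s), as an IKB-style keyboard
# might report them; the data below is invented for illustration.
events = [("p", 0.00, 0.09), ("a", 0.21, 0.29), ("s", 0.43, 0.55), ("s", 0.68, 0.77)]

def keystroke_features(evts):
    """Dwell time = how long a key is held; flight time = gap between
    releasing one key and pressing the next."""
    dwell = [release - press for _, press, release in evts]
    flight = [evts[i + 1][1] - evts[i][2] for i in range(len(evts) - 1)]
    return mean(dwell), mean(flight)

def matches_profile(evts, profile, tol=0.05):
    """Crude identity check: compare mean dwell/flight times against a
    stored per-user profile within a tolerance (in seconds)."""
    dwell, flight = keystroke_features(evts)
    return abs(dwell - profile["dwell"]) < tol and abs(flight - profile["flight"]) < tol

alice = {"dwell": 0.10, "flight": 0.13}   # hypothetical stored profile
print(matches_profile(events, alice))     # True for this sample data
```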

Read the full paper on ACS.

Cooper Hewitt Design Museum: a magic pen to create your personal collection

Last month the Cooper Hewitt, Smithsonian Design Museum reopened its doors after a three-year renovation. The process went beyond walls, halls, and objects to affect the very meaning (and mission) of the design museum itself. The result was quite interesting: the birth of a new organization “[something] between a media and a tech firm, […] a Thing That Puts Stuff on the Internet”, a new structure capable of linking objects that should live forever with people who want to interact with them in a new way (thanks to the superpowers granted by technology).

This radical philosophy led to the adoption of a “magic tool” (a pen) that allows visitors to save their favourite works in a personal repository that grows with each new visit – a database users can play with.

Next to every object on display at the Cooper Hewitt is a small pattern that looks like the origin point of the coordinate plane. When the pen touches it, the digital record of that object is added to the visitor’s personal museum collection. When they leave, they will have to return the pen, but information about and high-resolution photos of the object will be waiting for them.

[…]

But the real treats are in the museum’s interactives that draw from its collection. There’s an “immersion room,” which projects patterns from the museum’s expansive wallpaper archive on the wall. Visitors can also draw their own patterns in there too, which tessellate on the projected walls like the original historical decorations. There are also large, “social” touch-screen tables—think of giant iPads—that let people alone or in groups sort through and look at objects in the collection. These have special search and manipulation features: Someone can draw a shape on the table and see what items in the collection fit it. And the pen—the jewel of the museum’s collection-based interactives—will function as a pen on these touch surfaces. The pen is the exact kind of object that the museum hopes to deploy in the mansion, as it augments a smartphone without requiring one.

All three of these tools […] used an infrastructure […] that lets the museum plan for the near future, that lets it bridge digital and physical, that lets it Put Things on the Internet: the API.

What the API means, for someone who will never visit the museum, is that every object, every designer, every nation, every era, even every color has a stable URL on the Internet. No other museum does this with the same seriousness as the Cooper Hewitt. If you want to talk about Van Gogh’s Starry Night online, you have to link to the Wikipedia page. Wikipedia is the best permanent identifier of Starry Night-ness on the web. But if you want to talk about an Eames Chair, you can link to the Cooper Hewitt’s page for it.

The Cooper Hewitt isn’t the only museum in the world with an API. The Powerhouse has one, and many art museums have uploaded high-quality images of their collections. But the power of the Cooper Hewitt’s digital interface is unprecedented. There’s a command that asks for colors as defined by the Crayola crayon palette. Another asks if the snack bar is open. A third mimics the speech of one of the Labs members. It’s a fun piece of software, and it makes a point about the scope of the museum’s vision. If design is in everything, the API says, then the museum’s collection includes every facet of the museum itself.
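In practice, “a stable URL for every object” means a machine-readable record you can fetch. The sketch below shows what such a request might look like in Python; the endpoint, method name, and parameters are assumptions based on the article’s description, so check the museum’s developer documentation for the real interface.

```python
import requests

# Hypothetical call illustrating "every object has a stable URL / API record".
# URL, method name, and parameters are assumptions, not the documented API.
API_URL = "https://api.collection.cooperhewitt.org/rest/"

def get_object(object_id: str, access_token: str) -> dict:
    """Fetch the metadata record behind one of those stable object URLs."""
    response = requests.get(API_URL, params={
        "method": "cooperhewitt.objects.getInfo",   # assumed method name
        "access_token": access_token,
        "object_id": object_id,
    })
    response.raise_for_status()
    return response.json()

# Usage sketch (token and object id are placeholders):
# record = get_object("18704235", access_token="YOUR_TOKEN")
# print(record["object"]["title"])
```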

Read the full article on The Atlantic.

Skype Translator, Deep Learning, and the nuances of human communication

As John Pavlus points out, some of the newest and most interesting innovations technology has to offer invite more than a passing comparison with their sci-fi counterparts. But are they as good as their fictional ancestors? Skype (real-time) Translator takes advantage of Microsoft’s deep learning systems to approach the experience of the famous Star Trek Universal Translator (broadly speaking). Unfortunately, while the number of words it “learns” keeps growing – it can even filter out the “um”s and “ah”s – it misses some of the most characteristic nuances of human communication:

The limitations of Skype’s translation software are also revealing, since they show how difficult it is for even the smartest machine to mimic the subtleties of effective human conversation. Determining which meaning of a word is appropriate in different contexts can be vexing. “If software is translating between American and British English, and it recognizes the word ‘football,’ it also needs to know when to change it to ‘soccer’ and when to keep it as ‘football’ or ‘gridiron,’” says Christopher Manning, a professor of linguistics and computer science in Stanford University’s Natural Language Processing Group.

[…]

With practice I could probably learn Skype Translator’s “rhythm” in the same way, which could make the audio experience less distracting. Introducing an on-screen avatar for the “bot” might also help reinforce the metaphor of a third person on the call, perhaps making it easier for the two human speakers to modulate their conversation in a way that makes room for the software speaking on their behalf.

[…]

Dendi admits that Skype and Microsoft still don’t know yet what an ideal user experience for the software looks like. “When we watch these things in action on TV [as on Star Trek], it seems so obvious: you just speak and it comes out translated,” he says. “But when you start digging into the actual implementation and put it in people’s hands to use, there are so many little details that can make or break the experience.”
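Manning’s “football”/“soccer” example above is, at bottom, a context-sensitivity problem: the right target word depends on the words around it. Here is a toy Python sketch of the idea; the cue lists are invented, and real systems learn these associations statistically rather than from hand-written rules.

```python
# Toy illustration of the "football"/"soccer" problem: the correct target
# word depends on surrounding context, not on the word alone.
US_CUES = {"nfl", "touchdown", "quarterback", "super", "bowl"}
UK_CUES = {"premier", "league", "pitch", "goalkeeper", "fa"}

def translate_football(sentence: str, to_variant: str = "en-US") -> str:
    """Keep or swap 'football' based on crude context cues."""
    words = set(sentence.lower().split())
    if to_variant == "en-US" and words & UK_CUES:
        return sentence.replace("football", "soccer")
    return sentence

print(translate_football("The goalkeeper saved the football match for the home side."))
print(translate_football("College football season starts with a big quarterback battle."))
```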

Read the full article on MIT Technology Review.