Tag Archives: york

#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
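For rigid robots, the joint-sensor approach works because the body's pose follows directly from the joint angles via forward kinematics. A minimal sketch in Python, using a hypothetical two-link planar arm rather than any particular robot:

```python
import math

def fingertip_position(joint_angles, link_lengths):
    """Forward kinematics for a planar arm: accumulate each joint's
    rotation, then add each link's vector to reach the end effector."""
    x = y = 0.0
    total_angle = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# With both joints at 0 radians, a two-link arm points straight along x.
print(fingertip_position([0.0, 0.0], [1.0, 1.0]))  # (2.0, 0.0)
```

This is exactly what breaks down for soft robots: a continuously deforming limb has no finite list of joint angles to plug into a formula like this.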

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training, the robot’s neural network can associate the feedback from the sensors with the position of the finger detected by the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, translating them into the language of these soft sensors.
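The researchers' actual system trains a recurrent neural network on real sensor data; as a rough illustration of the idea, here is a minimal Python sketch in which a least-squares fit stands in for the neural network and synthetic linear sensors stand in for the soft sensors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 8 randomly placed sensors respond (linearly, for
# simplicity) to the finger's true 3-D position, plus a little noise.
true_mixing = rng.normal(size=(8, 3))          # unknown sensor geometry
positions = rng.uniform(-1, 1, size=(500, 3))  # motion-capture ground truth
readings = positions @ true_mixing.T + 0.01 * rng.normal(size=(500, 8))

# "Training": fit a least-squares map from sensor readings to position.
W, *_ = np.linalg.lstsq(readings, positions, rcond=None)

# After training, the motion-capture system can be discarded: the robot
# now estimates its own pose from sensor readings alone.
new_pos = np.array([[0.3, -0.5, 0.2]])
estimate = (new_pos @ true_mixing.T) @ W
print(np.allclose(estimate, new_pos, atol=0.05))  # True
```

The real problem is far harder because soft-sensor responses are nonlinear and history-dependent, which is precisely why the authors reach for a neural network rather than a linear fit.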

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Training a robot arm separately for every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then repeats a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
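The two-stage recipe (motor babbling, then planning against the learned self-model) can be caricatured in a few lines of Python. Everything here is a toy stand-in: one-dimensional dynamics instead of a real arm, a linear fit instead of deep learning, and random-shooting search instead of the authors' actual planner:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, action):        # the robot's true (unknown) dynamics
    return state + 0.5 * action

# Stage 1: babble -- execute random actions and record what happened.
states = rng.uniform(-1, 1, size=1000)
actions = rng.uniform(-1, 1, size=1000)
next_states = step(states, actions)

# Stage 2: fit a self-model (here, a linear fit to the state change).
gain = np.sum(actions * (next_states - states)) / np.sum(actions ** 2)

# Stage 3: plan with the self-model alone -- no task-specific training.
def plan(state, target, candidates=rng.uniform(-1, 1, size=256)):
    predicted = state + gain * candidates            # imagined outcomes
    return candidates[np.argmin(np.abs(predicted - target))]

action = plan(0.0, 0.4)
print(abs(step(0.0, action) - 0.4) < 0.05)  # True
```

The key property this preserves from the paper: once the self-model exists, new tasks only require searching over imagined outcomes, not gathering new experience.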

Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But before they can get out and shape the world, as these studies show, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com

Posted in Human Robots

#434616 What Games Are Humans Still Better at ...

Rapid advances in artificial intelligence (AI) are continually crossing items off the list of things humans do better than our computer compatriots.

AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI’s next frontier.

While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google’s DeepMind AI recently took on—and beat—several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making Of AlphaStar
DeepMind’s results provide a good starting point in a search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through supervised learning and reinforcement learning.

First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.
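That league-style loop (evaluate agents against each other, keep the strongest, spawn new variants) can be sketched as a toy in Python. This is a caricature: real AlphaStar agents are neural networks trained by reinforcement learning, not the scalar "strength" values used here:

```python
import random

random.seed(42)

def tournament(agents):
    """Score each agent by pairwise wins; higher strength wins more
    often (a crude stand-in for actual game skill)."""
    scores = {i: 0 for i in range(len(agents))}
    for i in range(len(agents)):
        for j in range(i + 1, len(agents)):
            p_i_wins = agents[i] / (agents[i] + agents[j])
            winner = i if random.random() < p_i_wins else j
            scores[winner] += 1
    return scores

agents = [random.uniform(0.1, 1.0) for _ in range(16)]
start_mean = sum(agents) / len(agents)

for generation in range(20):
    scores = tournament(agents)
    ranked = sorted(range(len(agents)), key=scores.get, reverse=True)
    survivors = [agents[i] for i in ranked[:8]]          # strongest stay on
    children = [s * random.uniform(0.95, 1.25) for s in survivors]
    agents = survivors + children                        # new iterations

print(sum(agents) / len(agents) > start_mean)  # True
```

Even in this toy, selection pressure plus variation steadily drives average skill upward, which is the essence of the league described above.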

One of the advantages of AI is that it can go through this kind of process at superspeed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days.
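Taking those figures at face value, the implied acceleration over real-time play is easy to work out:

```python
years_of_play = 200     # DeepMind's estimate of equivalent game time
wall_clock_days = 14    # actual training duration

speedup = years_of_play * 365.25 / wall_clock_days
print(round(speedup))  # 5218
```

Roughly a 5,000-fold speedup, achieved by running many games in parallel and faster than real time.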

Cheating or One Hand Behind the Back?
The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz “MaNa” Komincz managed to salvage some pride for humanity by winning the final game. Experts commenting on AlphaStar’s performance used words like “phenomenal” and “superhuman”—which was, to a degree, where things got a bit problematic.

AlphaStar proved particularly skilled at controlling and directing units in battle, known as micromanagement. One reason was that it viewed the whole game map at once—something a human player is not able to do—which made it seemingly able to control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar’s AI agent was limited to a more restricted camera view during the match “MaNa” won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player would be able to.
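A click cap like that is straightforward to enforce with a sliding window over recent action timestamps. A generic sketch follows; the limits DeepMind actually applied were more nuanced (e.g., matched to human actions-per-minute statistics), so the numbers here are purely illustrative:

```python
from collections import deque

class ActionLimiter:
    """Sliding-window cap: refuse an action once `max_apm` actions have
    already occurred within the trailing 60 seconds."""
    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.times = deque()

    def try_act(self, now):
        while self.times and now - self.times[0] >= 60.0:
            self.times.popleft()         # drop actions outside the window
        if len(self.times) >= self.max_apm:
            return False                 # over the cap: action refused
        self.times.append(now)
        return True

limiter = ActionLimiter(max_apm=3)
results = [limiter.try_act(t) for t in (0.0, 1.0, 2.0, 3.0, 61.0)]
print(results)  # [True, True, True, False, True]
```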

Where AIs Struggle
Games like StarCraft II and Dota 2 throw a lot of challenges at AIs. Complex game theory and strategy, operating with imperfect/incomplete information, undertaking multi-variable and long-term planning, real-time decision-making, navigating a large action space, and making a multitude of possible decisions at every point in time are just the tip of the iceberg. The AIs’ performance in both games was impressive, but also highlighted some of the areas where they could be said to struggle.

In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies. They seem to struggle with complexity over time and improvisation/adapting to quick changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer. We are also better at transferring skill from one area to another. In other words, experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI—yet.

Dwindling Superiority
While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. Similar things are happening to other types of games.

In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professionals. After 20 days of No Limit Texas Hold’em, Libratus was up by $1.7 million. Another likely candidate is the destroyer of family harmony at Christmas: Monopoly.

Poker involves bluffing, while Monopoly involves negotiation—skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved skilled negotiators, and developed negotiating strategies like pretending interest in one object while they were interested in another altogether—bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim Of the Game
While AI’s mastery of games might at first glance seem an odd area to focus research on, the belief is that the way AIs learn to master a game is transferable to other areas.

For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge:

“First and foremost, the mission at DeepMind is to build an artificial general intelligence. […] To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A 2017 survey of more than 350 AI researchers predicted AI could be a better driver than humans within ten years. By the middle of the century, the researchers expect, AI will be able to write a best-selling novel, and a few years later it will be better than humans at surgery. By the year 2060, they predict, AI may do everything better than us.

Whether you think this is a good or a bad thing, it’s worth noting that AI has an often overlooked ability to help us see things differently. When DeepMind’s AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities that are studying the human vs. AI games intensely.

More than anything, AI’s recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study at Princeton and a Go enthusiast, told the New York Times that:

”It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Image Credit: Roman Kosolapov / Shutterstock.com

Posted in Human Robots

#434611 This Week’s Awesome Stories From ...

AUTOMATION
The Rise of the Robot Reporter
Jaclyn Peiser | The New York Times
“In addition to covering company earnings for Bloomberg, robot reporters have been prolific producers of articles on minor league baseball for The Associated Press, high school football for The Washington Post and earthquakes for The Los Angeles Times.”

ROBOTICS
Penny-Sized Ionocraft Flies With No Moving Parts
Evan Ackerman | IEEE Spectrum
“Electrohydrodynamic (EHD) thrusters, sometimes called ion thrusters, use a high strength electric field to generate a plasma of ionized air. …Magical, right? No moving parts, completely silent, and it flies!”

ARTIFICIAL INTELLIGENCE
Making New Drugs With a Dose of Artificial Intelligence
Cade Metz | The New York Times
“…DeepMind won the [protein folding] competition by a sizable margin—it improved the prediction accuracy nearly twice as much as experts expected from the contest winner. DeepMind’s victory showed how the future of biochemical research will increasingly be driven by machines and the people who oversee those machines.”

COMPUTING
Nano-Switches Made Out of Graphene Could Make Our Devices Even Smaller
Emerging Technology From the arXiv | MIT Technology Review
“For the first time, physicists have built reliable, efficient graphene nanomachines that can be fabricated on silicon chips. They could lead to even greater miniaturization.”

BIOTECH
The Problem With Big DNA
Sarah Zhang | The Atlantic
“It took researchers days to search through thousands of genome sequences. Now it takes just a few seconds. …As sequencing becomes more common, the number of publicly available bacterial and viral genomes has doubled. At the rate this work is going, within a few years multiple millions of searchable pathogen genomes will be available—a library of DNA and disease, spread the world over.”

CRYPTOCURRENCY
Fire (and Lots of It): Berkeley Researcher on the Only Way to Fix Cryptocurrency
Dan Goodin | Ars Technica
“Weaver said, there’s no basis for the promises that cryptocurrencies’ decentralized structure and blockchain basis will fundamentally transform commerce or economics. That means the sky-high valuations spawned by those false promises are completely unjustified. …To support that conclusion, Weaver recited an oft-repeated list of supposed benefits of cryptocurrencies and explained why, after closer scrutiny, he believed them to be myths.”

Image Credit: Katya Havok / Shutterstock.com

Posted in Human Robots

#434544 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Beats Pros at Starcraft in Another Triumph for Bots
Tom Simonite | Wired
“DeepMind’s feat is the most complex yet in a long train of contests in which computers have beaten top humans at games. Checkers fell in 1994, chess in 1997, and DeepMind’s earlier bot AlphaGo became the first to beat a champion at the board game Go in 2016. The StarCraft bot is the most powerful AI game player yet; it may also be the least unexpected.”

GENETICS
Complete Axolotl Genome Could Pave the Way Toward Human Tissue Regeneration
George Dvorsky | Gizmodo
“Now that researchers have a near-complete axolotl genome—the new assembly still requires a bit of fine-tuning (more on that in a bit)—they, along with others, can now go about the work of identifying the genes responsible for axolotl tissue regeneration.”

FUTURE
We Analyzed 16,625 Papers to Figure Out Where AI Is Headed Next
Karen Hao | MIT Technology Review
“…though deep learning has singlehandedly thrust AI into the public eye, it represents just a small blip in the history of humanity’s quest to replicate our own intelligence. It’s been at the forefront of that effort for less than 10 years. When you zoom out on the whole history of the field, it’s easy to realize that it could soon be on its way out.”

COMPUTING
Apple’s Finger-Controller Patent Is a Glimpse at Mixed Reality’s Future
Mark Sullivan | Fast Company
“[Apple’s] engineers are now looking past the phone touchscreen toward mixed reality, where the company’s next great UX will very likely be built. A recent patent application gives some tantalizing clues as to how Apple’s people are thinking about aspects of that challenge.”

GOVERNANCE
How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out
Steve Lohr | The New York Times
“Regulation is coming. That’s a good thing. Rules of competition and behavior are the foundation of healthy, growing markets. That was the consensus of the policymakers at MIT. But they also agreed that artificial intelligence raises some fresh policy challenges.”

Image Credit: Victoria Shapiro / Shutterstock.com

Posted in Human Robots

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect the subtle neuromuscular signals produced when a user silently articulates words, enabling the device to respond to internal speech and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific in the use of technology (you may recall The New York Times viral article from 2016, Artificial Intelligence’s White Guy Problem). If you ask this population if there is a problem with today’s HMIs, most will say no, and this is because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen coined in The Innovator’s Dilemma as the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from below.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds they create are often ingenious, precisely because they are born not of preference but of necessity, which forces underserved users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort device captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.
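The core signal-processing step, mapping a high-resolution camera frame down to a coarse grid of stimulation intensities, can be sketched simply. The grid size and number of stimulation levels below are illustrative choices, not BrainPort's actual specifications:

```python
import numpy as np

def to_electrode_grid(frame, grid=(20, 20), levels=8):
    """Downsample a grayscale camera frame to a coarse electrode grid by
    block-averaging, then quantize brightness into stimulation levels."""
    h, w = frame.shape
    gh, gw = grid
    blocks = frame[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return np.round(blocks / 255 * (levels - 1)).astype(int)

frame = np.zeros((200, 200))
frame[80:120, 80:120] = 255        # a bright object in the center
grid = to_electrode_grid(frame)
print(grid.shape, grid.max())      # (20, 20) 7
```

The interesting part is not the downsampling but what the user's brain does with it: over time, the coarse tongue patterns are reinterpreted as spatial vision.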

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

Posted in Human Robots