Every year, for just a few days in a major city, a small team of roboticists gets to live the dream: ordering around their own personal robot butlers. In carefully constructed replicas of a restaurant scene or a domestic setting, these robots perform any number of simple algorithmic tasks. “Get the can of beans from the shelf. Greet the visitors to the museum. Help the humans with their shopping. Serve the customers at the restaurant.”
This is Robocup @ Home, the annual tournament where teams of roboticists put their autonomous service robots to the test in practical domestic applications. The tasks sound simple and mundane, but a closer look at the technology they require reveals that they’re anything but.
The Robot Butler Contest
Say you want a robot to fetch items in the supermarket. In a crowded, noisy environment, the robot must understand your commands, ask for clarification, and map out and navigate an unfamiliar environment, avoiding obstacles and people as it does so. Then it must recognize the product you requested, perhaps in a cluttered environment, perhaps in an unfamiliar orientation. It has to grasp that product appropriately—recall that there are entire multi-million-dollar competitions just dedicated to developing robots that can grasp a range of objects—and then return it to you.
It’s a job so simple that a child could do it—and so complex that teams of smart roboticists can spend weeks programming and engineering, and still end up struggling to complete simplified versions of this task. Of course, the child has the advantage of millions of years of evolutionary research and development, while the first robots that could even begin these tasks were only developed in the 1970s.
Even bearing this in mind, Robocup @ Home can feel like a place where futurist expectations come crashing into technologist reality. You dream of a smooth-voiced, sardonic JARVIS who’s already made your favorite dinner when you come home late from work; you end up shouting “remember the biscuits” at a baffled, ungainly droid in aisle five.
Caring for the Elderly
Famously, Japan is one of the most robo-enthusiastic nations in the world; it is the nation that stunned us all with ASIMO in 2000, and several studies have been conducted into the phenomenon. It’s no surprise, then, that humanoid robotics should be seriously considered as a solution to the crisis of the aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.
Toyota’s Human Support Robot (HSR-2) is a simple but programmable robot with a single arm; it can be remote-controlled to pick up objects and can monitor patients. HSR-2 has become the default robot for use in Robocup @ Home tournaments, at least in tasks that involve manipulating objects.
Alongside this, Toyota is working on exoskeletons to assist people in walking after strokes. It may surprise you to learn that nurses suffer back injuries more than any other occupation, at roughly three times the rate of construction workers, due to the day-to-day work of lifting patients. Toyota has a Care Assist robot/exoskeleton designed to fix precisely this problem by helping care workers with the heavy lifting.
The Home of the Future
The enthusiasm for domestic robotics is easy to understand and, in fact, many startups already sell robots marketed as domestic helpers in some form or another. In general, though, they skirt the immensely complicated task of building a fully capable humanoid robot—a task that even Google’s skunk-works department gave up on, at least until recently.
It’s plain to see why: far more research and development is needed before these domestic robots could be used reliably and at a reasonable price. Consumers with expectations inflated by years of science fiction saturation might find themselves frustrated as the robots fail to perform basic tasks.
Instead, domestic robotics efforts fall into one of two categories. There are robots specialized to perform a domestic task, like iRobot’s Roomba, which stuck to vacuuming and became the most successful domestic robot of all time by far.
The tasks need not necessarily be simple, either: the impressive but expensive automated kitchen uses the world’s most dexterous hands to cook meals, provided it can recognize the ingredients. Other robots focus on human-robot interaction, like Jibo, which essentially packages the abilities of a voice assistant like Siri, Cortana, or Alexa (answering simple questions, performing online tasks) in a friendly, dynamic robot exterior.
In this way, the future of domestic automation starts to look a lot more like the smart home than a single robotic butler or domestic servant. General robotics is difficult in the same way that general artificial intelligence is difficult; competing with humans, the great all-rounders, is a challenge. Getting superhuman performance at a more specific task, however, is feasible and won’t cost the earth.
Individual startups without the financial might of a Google or an Amazon can develop specialized robots, like Seven Dreamers’ laundry robot, and hope that one day it will form part of a network of autonomous robots that each have a role to play in the household.
The Smart Home has been a staple of futurist expectations for a long time, to the extent that movies featuring smart homes out of control are already a cliché. But critics of the smart home idea—and of the internet of things more generally—tend to focus on the idea that, more often than not, software just adds an additional layer of things that can break, in exchange for minimal added convenience. A toaster that can short-circuit is bad enough, but a toaster that can refuse to serve you toast because its firmware is updating is something else entirely.
That’s before you even get into the security vulnerabilities, which matter all the more when devices are installed in your home and capable of interacting with the people who live there. The idea of a smart watch that lets you keep an eye on your children might sound like something a security-conscious parent would like; a smart watch that can be hacked to track children, listen in on their surroundings, and even fool them into thinking a call is coming from their parents is the stuff of nightmares.
Key to many of these problems is the lack of standardization for security protocols, and even the products themselves. The idea of dozens of startups each developing a highly-specialized piece of robotics to perform a single domestic task sounds great in theory, until you realize the potential hazards and pitfalls of getting dozens of incompatible devices to work together on the same system.
It seems inevitable that there are yet more layers of domestic drudgery that can be automated away, decades after the first generation of time-saving domestic devices, like the dishwasher and vacuum cleaner, became mainstream. With projected market values in the billions and trillions of dollars, there is no shortage of industry interest in ironing out these kinks. But, for now at least, the answer to the question “Where’s my robot butler?” is that it’s gradually, painstakingly learning how to sort through groceries.
Image Credit: Nonchanon / Shutterstock.com
It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is because the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.
Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.
It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s network. What’s crucial is not simply the number of neurons in the brain (which runs into the billions), but the mind-boggling number of connections between them.
But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analog, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).
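To make that uniformity concrete, here is a minimal sketch in plain NumPy (not any particular framework, and the weights are made-up numbers) of a single layer of an artificial neural network: every artificial “neuron” performs exactly the same operation, a weighted sum passed through the same fixed nonlinearity.

```python
import numpy as np

def dense_layer(x, weights, biases):
    # Every "neuron" in this layer is identical: a weighted sum of the
    # inputs, plus a bias, passed through the same ReLU nonlinearity.
    # Biological neurons, by contrast, vary enormously in form and behavior.
    return np.maximum(0.0, weights @ x + biases)

# Three uniform neurons processing a two-dimensional input.
x = np.array([1.0, -2.0])
w = np.array([[0.5, 0.1],
              [0.2, -0.3],
              [-0.4, 0.6]])
b = np.zeros(3)
activations = dense_layer(x, w, b)
```

Stacking many such identical layers is most of the “architecture” a network has; the brain offers no such tidy repetition.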
This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).
Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.
The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.
Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.
In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.
Despite years of research, the human mind still remains somewhat opaque. This is because the analog synaptic connections between neurons are almost impossible to map onto the digital connections of an artificial neural network.
Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at scale and speed in the way AIs need to if they’re, say, translating speech to text, or processing a huge set of oncology reports.
Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.
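As a toy illustration of that decomposition (the `chunk` helper below is hypothetical, not drawn from any real speech or vision system), reducing rich input to small, uniform pieces might look like this:

```python
def chunk(text, size=4):
    # Split the input into fixed-size pieces; real systems decompose
    # audio into phonetic units or images into patches in the same spirit.
    return [text[i:i + size] for i in range(0, len(text), size)]

pieces = chunk("remember the biscuits")
```

Each piece is small and regular enough to be processed at scale, which is exactly the trade described above.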
Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.
But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed depends on the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.
Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.
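A small sketch of that point, using a hand-rolled one-dimensional k-means (illustrative only, not any production library): even this ‘unsupervised’ algorithm operates entirely inside parameters a human fixed before it ever saw the data.

```python
import random

def kmeans_1d(points, k=2, iters=10, seed=0):
    # k, the iteration budget, the distance metric (absolute difference),
    # and even the random seed are all chosen by the engineer up front;
    # the algorithm only "discovers" structure within those limits.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster (keep the old
        # center if a cluster ends up empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 10.0])
```

Change `k` and the same data yields a different ‘understanding’; the definition of the problem stays with the human.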
Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.
AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.
Algorithmic insensitivity has come into focus in recent years, with an increasing number of scandals around bias in AI systems. Of course, this bias is caused by the biases of those making the algorithms, but it underlines the point that algorithmic biases can only be identified by human intelligence.
Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes algorithms even more elegant and remarkable.
Image Credit: Liu zishan / Shutterstock.com