Tag Archives: grasping
#434260 The Most Surprising Tech Breakthroughs ...
Development across the entire information technology landscape certainly didn’t slow down this year. From CRISPR babies, to the rapid decline of the crypto markets, to a new robot on Mars, and discovery of subatomic particles that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.
As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.
I reached out to a few Singularity University speakers and faculty across the various technology domains we cover, asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:
“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”
I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly-traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space and the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.
For the rest, here’s what our experts had to say.
DIGITAL BIOLOGY
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time, but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly available data, and the targeting of a disease that we already know how to prevent and treat, and that the children were at low risk of contracting anyway.
I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”
For more, read Dr. Vora’s summary of this recent stunning news from China regarding CRISPR-editing human embryos.
QUANTUM COMPUTING
Andrew Fursman | Co-Founder/CEO 1Qbit, Faculty, Quantum Computing, Singularity University
“There were two last-minute holiday season surprise quantum computing funding and technology breakthroughs:
First, just before the government shutdown, a priority piece of legislation was passed that will provide $1.2 billion in funding for quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”
*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing.
ENERGY
Ramez Naam | Chair, Energy and Environmental Systems, Singularity University
“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, which is one of the least sunny countries on earth (it gets less sunlight than Canada), the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.
In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.
Here comes the sun.”
GLOBAL GRAND CHALLENGES
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.
What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around gun and school safety, global warming, and other environmental issues. We often talk about the harm technology can cause to young people, but on the flip side, it can be a very powerful tool for youth to start changing the world today, and something I hope we see more of in the future.”
BUSINESS STRATEGY
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
“Without a doubt the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into the boardroom and leadership meetings, and all the way down into the innovation and IT department’s agenda. You are hard-pressed to find a mid- to large-sized company today that is not experimenting or implementing AI in various aspects of its business.
On the slightly snarkier side of answering this question: The very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short, ferocious, and ended earlier than most would have anticipated, with a huge hangover for some. The good news—with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”
*Author note: snark is welcome and appreciated
ROBOTICS
Hod Lipson | Director, Creative Machines Lab, Columbia University
“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation remained elusive.
There seemed to be something almost magical in how we humans can physically manipulate the physical world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, the OpenAI group demonstrated that this hurdle may finally succumb to machine learning as well. Given 200 years’ worth of practice, machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”
MACHINE LEARNING
Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University
“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).
The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.
Earlier this year, two systems (one, called ELMo, is from the Allen Institute for AI, and the other, called ULMFiT, was developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state-of-the-art in nearly every NLP task that researchers study. This work was further developed by OpenAI, which in turn was greatly scaled up by Google Brain, who created a system called BERT which reached human-level performance on some of NLP’s toughest challenges.
Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”
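To make the idea of a “language model” concrete, here is a deliberately minimal sketch: a bigram model, which simply counts which word tends to follow which, can already “finish your sentence” on a toy corpus. (This is an illustration only; ELMo, ULMFiT, and BERT are neural networks trained on vastly more text, not word-count tables.)

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def finish_sentence(model, prompt, max_words=5):
    """Greedily append the most likely next word until none remains."""
    words = prompt.lower().split()
    for _ in range(max_words):
        following = model.get(words[-1])
        if not following:
            break
        words.append(following.most_common(1)[0][0])
    return " ".join(words)

corpus = [
    "the robot picks up the mug",
    "the robot picks up the shoe",
    "a human picks up the mug",
    "a dog picks up the mug",
]
model = train_bigram_model(corpus)
print(finish_sentence(model, "the robot"))  # "the robot picks up the mug"
```

Fine-tuning, in this analogy, would mean starting from counts learned on a huge general corpus and then nudging them with a small task-specific dataset, rather than learning from scratch.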
DIGITAL MANUFACTURING
Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University
“Most surprising to me was the extent and speed at which the industry finally opened up.
While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is being done with that data.
Another example of this openness is the seemingly endless stream of announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling the comprehensive digital workflows that drive agile and resilient manufacturing.
I’m more optimistic we’ll achieve that now than I was at the end of 2017.”
SCIENCE AND DISCOVERY
Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network
“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the “neptmoon,” a Neptune-scale moon circling a Jupiter-scale planet 8,000 light-years from us; the successful deployment of the Mars InSight Lander a month ago; and the tantalizing ANITA detection (what could be a new subatomic particle which would in turn blow the standard model wide open). The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”
ROBOTICS
Pablos Holman | Inventor, Hacker, Faculty, Singularity University
“Just five or ten years ago, if you’d asked any of us technologists, “What is harder for robots: eyes, or fingers?” we’d have all said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a linchpin that allows robots to go everywhere they haven’t been yet.”
BLOCKCHAIN
Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University
“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and technological development, even as the broader cryptocurrency markets have faced a winter. The focus now is on the rise of adoption and of applications that people actually want and need to use. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through WhatsApp. That’s surprisingly fast mainstream adoption of this new technology, and indicates how powerful it is.”
ARTIFICIAL INTELLIGENCE
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”
NEUROSCIENCE
Divya Chander | Chair, Neuroscience, Singularity University
“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that started expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two fetuses grown through IVF. Although this was ostensibly to provide genetic resilience against HIV infections, imagine what would happen if we started tinkering with neural circuitry and intelligence.”
Image Credit: Yurchanka Siarhei / Shutterstock.com
#433634 This Robotic Skin Makes Inanimate ...
In Goethe’s poem “The Sorcerer’s Apprentice,” made world-famous by its adaptation in Disney’s Fantasia, a lazy apprentice, left to fetch water, uses magic to bewitch a broom into performing his chores for him. Now, new research from Yale has opened up the possibility of being able to animate—and automate—household objects by fitting them with a robotic skin.
Yale’s Soft Robotics lab, the Faboratory, is led by Professor Rebecca Kramer-Bottiglio, and has long investigated the possibilities associated with new kinds of manufacturing. While the typical image of a robot is hard, cold steel and rigid movements, soft robotics aims to create something more flexible and versatile. After all, the human body is made up of soft, flexible surfaces, and the world is designed for us. Soft, deformable robots could change shape to adapt to different tasks.
When designing a robot, key components are the robot’s sensors, which allow it to perceive its environment, and its actuators, the electrical or pneumatic motors that allow the robot to move and interact with its environment.
Consider your hand, which has temperature and pressure sensors, but also muscles as actuators. The omni-skins, as the Science Robotics paper dubs them, combine sensors and actuators, embedding them into an elastic sheet. The robotic skins are moved by pneumatic actuators or by shape-memory alloy that can bounce back into shape. If this is then wrapped around a soft, deformable object, moving the skin with the actuators can allow the object to crawl along a surface.
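The crawling motion this produces can be sketched as a simple inchworm gait. The toy simulation below is my own illustrative model, not the Yale group’s code: grip the rear end and extend, grip the front end and contract, and the wrapped object inches forward with each actuation cycle.

```python
def inchworm_crawl(cycles, rest_length=1.0, stretch=0.5):
    """Simulate an inchworm gait on a 1D surface.

    Each cycle: anchor the rear and extend (the front advances),
    then anchor the front and contract (the rear catches up).
    Returns the (rear, front) positions after the given cycles.
    """
    rear, front = 0.0, rest_length
    for _ in range(cycles):
        front = rear + rest_length + stretch   # extend: front slides forward
        rear = front - rest_length             # contract: rear slides forward
    return rear, front

rear, front = inchworm_crawl(cycles=4)
print(rear, front)  # the body advances by `stretch` per cycle: 2.0 3.0
```

Swapping which end grips and which end slides is exactly what the embedded actuators do for the skin; the object being wrapped contributes only its deformability.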
The key to the design here is flexibility: rather than adding chips, sensors, and motors into every household object to turn them into individual automatons, the same skin can be used for many purposes. “We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” said Kramer-Bottiglio. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
The task is then to dream up applications for the omni-skins. Initially, you might imagine commanding a stuffed toy to fetch the remote control for you, or animating a sponge to wipe down kitchen surfaces—but this is just the beginning. The scientists attached the skins to a soft tube and camera, creating a worm-like robot that could compress itself and crawl into small spaces for rescue missions. The same skins could then be worn by a person to sense their posture. One could easily imagine this being adapted into a soft exoskeleton for medical or industrial purposes: for example, helping with rehabilitation after an accident or injury.
The initial motivating factor for creating the robots was an environment where space and weight are at a premium, and humans are forced to improvise with whatever’s at hand: outer space. Kramer-Bottiglio originally began the work after NASA put out a call for soft robotics systems for use by astronauts. Instead of wasting valuable rocket payload by sending up a heavy metal droid like ATLAS to fetch items or perform repairs, soft robotic skins with modular sensors could be adapted for a range of different uses spontaneously.
By reassembling components in the soft robotic skin, a crumpled ball of paper could provide the chassis for a robot that performs repairs on the spaceship, or explores the lunar surface. The dynamic compression provided by the robotic skin could be used for g-suits to protect astronauts when they rapidly accelerate or decelerate.
“One of the main things I considered was the importance of multi-functionality, especially for deep space exploration where the environment is unpredictable. The question is: How do you prepare for the unknown unknowns? … Given the design-on-the-fly nature of this approach, it’s unlikely that a robot created using robotic skins will perform any one task optimally,” Kramer-Bottiglio said. “However, the goal is not optimization, but rather diversity of applications.”
There are still problems to resolve. Many of the videos of the skins show that they still rely on an external power supply. Creating new, smaller batteries that can power wearable devices has been a focus of cutting-edge materials science research for some time. Much of the lab’s expertise is in creating flexible, stretchable electronics that can be deformed by the actuators without breaking the circuitry. In the future, the team hopes to streamline the production process; if the components could be 3D printed, then the skins could be created when needed.
In addition, robotic hardware that’s capable of performing an impressive range of precise motions is quite an advanced technology. The software to control those robots, and enable them to perform a variety of tasks, is quite another challenge. With soft robots, it can become even more complex to design that control software, because the body itself can change shape and deform as the robot moves. The same set of programmed motions, then, can produce different results depending on the environment.
“Let’s say I have a soft robot with four legs that crawls along the ground, and I make it walk up a hard slope,” Dr. David Howard, who works on robotics at CSIRO in Australia, explained to ABC.
“If I make that slope out of gravel and I give it the same control commands, the actual body is going to deform in a different way, and I’m not necessarily going to know what that is.”
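Howard’s point can be made concrete with a toy open-loop model (entirely illustrative, with made-up numbers): identical commanded strides produce different net displacement once the surface absorbs a different fraction of each step, and without feedback the controller has no way to know which outcome occurred.

```python
def open_loop_walk(step_commands, ground_efficiency):
    """Apply identical commanded step lengths; the surface determines
    how much of each commanded step becomes actual displacement."""
    return sum(cmd * ground_efficiency for cmd in step_commands)

commands = [0.10, 0.10, 0.10, 0.10]   # four identical 10 cm strides

hard_slope = open_loop_walk(commands, ground_efficiency=0.9)
gravel     = open_loop_walk(commands, ground_efficiency=0.6)

# Same commands, different results: 0.36 m versus 0.24 m traveled.
print(f"hard slope: {hard_slope:.2f} m, gravel: {gravel:.2f} m")
```

Closing the loop, i.e., sensing the actual deformation and correcting for it, is precisely what makes soft-robot control software so much harder than the hardware alone.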
Despite these and other challenges, research like that at the Faboratory still hopes to redefine how we think of robots and robotics. Instead of a robot that imitates a human and manipulates objects, the objects themselves will become programmable matter, capable of moving autonomously and carrying out a range of tasks. Futurists speculate about a world where most objects are automated to some degree and can assemble and repair themselves, or are even built entirely of tiny robots.
The tale of the Sorcerer’s Apprentice was first written in 1797, at the dawn of the industrial revolution, over a century before the word “robot” was even coined. Yet more and more roboticists aim to prove Arthur C. Clarke’s maxim: any sufficiently advanced technology is indistinguishable from magic.
Image Credit: Joran Booth, The Faboratory
#433506 MIT’s New Robot Taught Itself to Pick ...
Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms sat quietly grasping objects of various shapes and sizes. For hours on end, they taught themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.
Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.
This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.
The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.
This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775 million acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in its warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.
Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.
Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.
Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.
But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.
This is why new systems that can potentially train themselves over time or that allow robots to deal with situations they’ve never seen before without mountains of labeled data are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.
The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.
This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.
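The descriptor idea behind DON can be sketched as a simple nearest-neighbor lookup. The snippet below is a simplified, hypothetical illustration (not the CSAIL system, and with made-up two-dimensional descriptors): each pixel carries a descriptor vector, and a point such as “the mug handle” is re-found in a new view by locating the pixel whose descriptor is closest to the stored one.

```python
import math

def nearest_descriptor(query, descriptor_map):
    """Return the pixel whose descriptor is closest (Euclidean) to `query`.

    `descriptor_map` maps pixel coordinates to descriptor vectors, as a
    dense object net would produce for a single camera view.
    """
    return min(descriptor_map,
               key=lambda px: math.dist(descriptor_map[px], query))

# Descriptors computed for one view of a mug (toy 2-D vectors).
view_a = {(3, 7): [0.9, 0.1],   # handle
          (5, 5): [0.2, 0.8],   # rim
          (6, 2): [0.5, 0.5]}   # base

# The same mug seen on its side: pixel coordinates change,
# but the learned descriptors stay nearly the same.
view_b = {(10, 4): [0.88, 0.12],
          (8, 9):  [0.19, 0.81],
          (2, 2):  [0.52, 0.48]}

handle = view_a[(3, 7)]
print(nearest_descriptor(handle, view_b))  # (10, 4): the handle, re-found
```

Because the match is made point-by-point rather than on whole segmented objects, the same machinery supports instructions like “grasp the racquet by the handle.”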
Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.
As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.
This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.
Image Credit: Tom Buehler/CSAIL
#431689 Robotic Materials Will Distribute ...
The classical view of a robot as a mechanical body with a central “brain” that controls its behavior could soon be on its way out. The authors of a recent article in Science Robotics argue that future robots will have intelligence distributed throughout their bodies.
The concept, and the emerging discipline behind it, are variously referred to as “material robotics” or “robotic materials” and are essentially a synthesis of ideas from robotics and materials science. Proponents say advances in both fields are making it possible to create composite materials capable of combining sensing, actuation, computation, and communication and operating independently of a central processing unit.
Much of the inspiration for the field comes from nature, with practitioners pointing to the adaptive camouflage of the cuttlefish’s skin, the ability of bird wings to morph in response to different maneuvers, or the banyan tree’s ability to grow roots above ground to support new branches.
Adaptive camouflage and morphing wings have clear applications in the defense and aerospace sector, but the authors say similar principles could be used to create everything from smart tires able to calculate the traction needed for specific surfaces to grippers that can tailor their force to the kind of object they are grasping.
“Material robotics represents an acknowledgment that materials can absorb some of the challenges of acting and reacting to an uncertain world,” the authors write. “Embedding distributed sensors and actuators directly into the material of the robot’s body engages computational capabilities and offloads the rigid information and computational requirements from the central processing system.”
The idea of making materials more adaptive is not new, and there are already a host of “smart materials” that can respond to stimuli like heat, mechanical stress, or magnetic fields by doing things like producing a voltage or changing shape. These properties can be carefully tuned to create materials capable of a wide variety of functions such as movement, self-repair, or sensing.
The authors say synthesizing these kinds of smart materials, alongside other advanced materials like biocompatible conductors or biodegradable elastomers, is foundational to material robotics. But the approach also involves integration of many different capabilities in the same material, careful mechanical design to make the most of mechanical capabilities, and closing the loop between sensing and control within the materials themselves.
While there are stand-alone applications for such materials in the near term, like smart fabrics or robotic grippers, the long-term promise of the field is to distribute decision-making in future advanced robots. As they are imbued with ever more senses and capabilities, these machines will be required to shuttle huge amounts of control and feedback data to and fro, placing a strain on both their communication and computation abilities.
Materials that can process sensor data at the source and either autonomously react to it or filter the most relevant information to be passed on to the central processing unit could significantly ease this bottleneck. In a press release related to an earlier study, Nikolaus Correll, an assistant professor of computer science at the University of Colorado Boulder who is also an author of the current paper, pointed out that this is a tactic used by the human body.
“The human sensory system automatically filters out things like the feeling of clothing rubbing on the skin,” he said. “An artificial skin with possibly thousands of sensors could do the same thing, and only report to a central ‘brain’ if it touches something new.”
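Correll’s filtering tactic can be sketched in a few lines of code. This is a hedged illustration of the principle, not the team’s implementation: each sensor keeps a running baseline of what it usually feels and passes along only readings that deviate from it.

```python
class SkinSensorFilter:
    """Report a reading to the central 'brain' only when it deviates
    noticeably from this sensor's running baseline, so that constant
    stimuli (like clothing pressure) are tuned out."""

    def __init__(self, threshold=0.2, smoothing=0.9):
        self.baseline = None
        self.threshold = threshold
        self.smoothing = smoothing

    def update(self, reading):
        if self.baseline is None:
            self.baseline = reading          # first reading sets the norm
            return None
        novel = abs(reading - self.baseline) > self.threshold
        # Fold the reading into the baseline so that sustained contact
        # eventually becomes the new "normal".
        self.baseline = (self.smoothing * self.baseline
                         + (1 - self.smoothing) * reading)
        return reading if novel else None

sensor = SkinSensorFilter()
stream = [0.5, 0.5, 0.52, 1.4, 1.4, 0.5]   # steady pressure, then a new touch
events = [r for r in (sensor.update(x) for x in stream) if r is not None]
print(events)  # [1.4, 1.4] -- only the novel contacts are reported
```

Run across thousands of sensors, a filter like this is what lets an artificial skin send the brain a trickle of events instead of a flood of raw data.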
There are still considerable challenges to realizing this vision, though, the authors say, noting that so far the young field has only produced proof of concepts. The biggest challenge remains manufacturing robotic materials in a way that combines all these capabilities in a small enough package at an affordable cost.
Luckily, the authors note, the field can draw on convergent advances in both materials science, such as the development of new bulk materials with inherent multifunctionality, and robotics, such as the ever tighter integration of components.
And they predict that doing away with the prevailing dichotomy of “brain versus body” could lay the foundations for the emergence of “robots with brains in their bodies—the foundation of inexpensive and ubiquitous robots that will step into the real world.”
Image Credit: Anatomy Insider / Shutterstock.com
#431371 Amazon Is Quietly Building the Robots of ...
Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Arthur C. Clarke has a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? While true sci-fi robots won’t get here right away, the pieces are coming together, and the company best placed to develop them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developer of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions toward a general-purpose, multi-functional robot. They built an impressive device, PR2, and developed its operating system, ROS, which is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many of its workers moved into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve the difficult problems of full autonomy or of manipulating the robot’s environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, it just announced that profits are up by 30 percent, and yet the company is well known for its perpetual Day One philosophy, under which a great deal of those profits is reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations do, besides deep financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. It already employs 100,000 robots: these are of the “pragmatic, boring, useful” kind we’ve profiled, which move shelves around its warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate the simple warehouse environment.
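Part of what makes the warehouse such a good stepping stone is that, once the robot knows where it is, navigation reduces to path planning on a simple grid. As a purely illustrative sketch (not Amazon’s actual software, and far simpler than a real planner), a breadth-first search over an occupancy grid is enough to route a robot around shelves:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid.

    grid: list of equal-length strings; '#' marks a shelf, '.' is free floor.
    Returns the shortest list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # doubles as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# A toy warehouse layout: '#' cells are shelving units.
warehouse = [
    "....#....",
    "..#.#.#..",
    "..#...#..",
    ".........",
]
route = plan_path(warehouse, (0, 0), (2, 4))
```

Real systems layer localization, traffic management, and collision avoidance on top of this, but the grid-world core is what makes the warehouse a tractable first environment.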
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
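At its core, the arm-farm setup is supervised learning: each attempted grasp yields features (gripper pose, camera view) and a binary succeeded/failed label. The following toy sketch uses that same framing, but with a single made-up “alignment” feature and plain logistic regression instead of the deep networks actually used:

```python
import math
import random

def train_grasp_model(samples, epochs=200, lr=0.5):
    """Fit p(success) = sigmoid(w*x + b) by stochastic gradient descent.

    samples: list of (feature, label) pairs, label 1 = grasp succeeded.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            # Gradient of the log-loss with respect to w and b.
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5

# Synthetic data: the feature is how well the gripper is centered on
# the object (0 = poor, 1 = good); good alignment succeeds.
random.seed(0)
data = [(x, 1 if x > 0.5 else 0)
        for x in (random.random() for _ in range(200))]

w, b = train_grasp_model(data)
failures = sum(predict(w, b, x) != y for x, y in data)
failure_rate = failures / len(data)
```

On this clean synthetic data the learned failure rate is low; the real difficulty, and the reason warehouse picking remains unsolved, is that genuine grasp features are noisy, high-dimensional images of cluttered, deformable objects.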
Another area where humanoid robotics, and bipedal locomotion (walking) in particular, has been seriously suggested is the last-mile delivery problem, and Amazon has already shown a willingness to be creative in this department with its notorious drone delivery service. It’s all very well to have your self-driving car or van deliver packages to people’s doors, but who puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like CASSIE, developed at Oregon State, may one day be used to deliver parcels.
Again, no one stands to profit more than Amazon from cracking this technology. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through the website. Unlike Google, Amazon has an easy way to make a direct profit from IoT devices, profit that could fund further development.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment and react accordingly. The way humans learn about their environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras: an ideal stepping-stone task for training an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between today’s machines, which mindlessly assemble cars, and our sci-fi vision of robots that can intelligently serve us is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics, and with the unique stepping stones it has along the way, Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com