#431733 Why Humanoid Robots Are Still So Hard to ...
Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term robot in the Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote-controlled system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, allowing the user to control the robot’s body and teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a wheeled robot base with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage, like the PR2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and that didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when RoboCup, the international robotics competition, looked for a platform for its robot-butler competition, RoboCup@Home, it chose the HSR-2 for its ability to handle objects. The HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after they gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 doesn’t offer a great deal of autonomy, and by sacrificing autonomy, you drastically cut down the uses of the robot. Since it can’t act alone, you need a convincing scenario in which you need a teleoperated humanoid robot that’s less precise and vastly more expensive than just getting a person to do the same job. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will be used only in emergencies, if the robot gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, it is built to closely mimic a person, rather than a robot like ATLAS, which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots, or co-bots, developed by Rethink Robotics, whose many safety features, including force-sensitive feedback for the user, reduce the risk of the terrible PR that surrounds killer robots.
The emphasis instead is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of their simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.
We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than being programmed as an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way neural networks are testing the limits of what algorithms can learn from data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota
#431603 What We Can Learn From the Second Life ...
For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
The original smart glasses, Google Glass, went on sale in 2013 for $1,500, as prototypes for Google’s acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad or, after activating them by tilting the head back, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta-testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that the consumer version was dormant, and now roaring back to life as a commercial product, Google Glass got its relaunch under way in earnest in July of 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
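To make that pipeline concrete, here’s a minimal sketch of the capture-classify-relay loop in Python, built from off-the-shelf open-source pieces (OpenCV, torchvision, and the pyttsx3 speech engine). It’s a toy stand-in, assuming a webcam at index 0 and a generic pretrained classifier, not Google’s actual Lens stack:

```python
# A toy capture-classify-relay loop: webcam frame -> pretrained classifier
# -> spoken annotation. Illustrative only; not Google's Lens pipeline.
import cv2                       # pip install opencv-python
import torch
import pyttsx3                   # offline text-to-speech
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()   # generic pretrained classifier
preprocess = weights.transforms()                 # matching resize/normalize
labels = weights.meta["categories"]               # human-readable class names

cap = cv2.VideoCapture(0)                         # assumes a webcam at index 0
ok, frame = cap.read()
cap.release()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR; model wants RGB
    batch = preprocess(torch.from_numpy(rgb).permute(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    tts = pyttsx3.init()
    # Relay the annotation by voice, as a head-mounted display would.
    tts.say(f"I see {labels[idx.item()]}, confidence {conf.item():.0%}")
    tts.runAndWait()
```

A production system would add the cloud lookup, text recognition, and translation on top, but the skeleton of real-time annotation is just this loop run continuously.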
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
#431424 A ‘Google Maps’ for the Mouse Brain ...
Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped in trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, which forms the basis of our thoughts, memories, consciousness, and self.
What if there was a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shape and sizes of 300 neurons.
And that’s just the beginning.
“MouseLight’s new dataset is the largest of its kind,” says Dr. Wyatt Korff, director of project teams. “It’s going to change the textbook view of neurons.”
Brain Atlas
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
At only a fraction of the width of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar alcohol solution. This step “clears” the brain, transforming the beige-colored organ to translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal and ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to reduce imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the tissue with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
This setup increased imaging speed 16 to 48 times over conventional microscopy, writes team leader Dr. Jayaram Chandrashekar, who published a version of the method early last year in eLife.
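For readers who want the moving parts spelled out, the image-then-slice cycle amounts to a simple control loop. The sketch below is schematic, with hypothetical instrument-driver objects standing in for the real microscope, vibratome, and stage; it is not Janelia’s acquisition software:

```python
# Schematic of block-face imaging: image the exposed face of the brain,
# shave it off, repeat. The driver objects here are hypothetical stand-ins.
SLICE_THICKNESS_UM = 200   # thickness the vibrating razor removes per cycle
Z_STEP_UM = 5              # assumed spacing between imaged planes in a slab

def image_whole_brain(microscope, vibratome, stage, brain_depth_um):
    """Alternate two-photon imaging and slicing until the brain is done."""
    volume = []
    depth_done = 0
    while depth_done < brain_depth_um:
        # Image the exposed slab one thin optical plane at a time.
        for z in range(0, SLICE_THICKNESS_UM, Z_STEP_UM):
            volume.append(microscope.acquire_plane(depth_done + z))
        # Remove the imaged slab so deeper tissue is exposed for the next pass.
        vibratome.cut(SLICE_THICKNESS_UM)
        stage.raise_by(SLICE_THICKNESS_UM)
        depth_done += SLICE_THICKNESS_UM
    return volume   # the planes are later stitched into one 3D brain image
```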
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: each imaged brain takes up a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly less time than the week or so previously needed.
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projection, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections).
Images captured from two-photon microscopy show an axon and dendrites protruding from a neuron’s cell body (sphere in center). Image Credit: Janelia Research Campus, MouseLight project team
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints to their roles in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. These anatomical highways are just the foundation. Just like Google Maps, roads form only the critical first layer of the map. Street view, traffic information and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Image Credit: Janelia Research Campus, MouseLight project team
#431371 Amazon Is Quietly Building the Robots of ...
Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Arthur C. Clarke has a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? True sci-fi robots won’t get here right away, but the pieces are coming together, and the company best placed to assemble them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream: it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for?” and “Are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and programmed the operating system, ROS, that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, they just announced that their profits are up by 30 percent, and yet the company is well known for its constantly moving “Day One” philosophy, under which a great deal of those profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move shelves around in warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate the simple warehouse environment.
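At its simplest, that warehouse navigation problem is path-finding on a grid of aisles and shelves. Here’s a from-scratch toy example (the layout and conventions are invented for illustration; this is not Amazon’s software) that plans the shortest aisle route with breadth-first search:

```python
# Toy warehouse path-planner: 0 = free aisle cell, 1 = shelf. Breadth-first
# search returns a shortest route between two cells. Illustrative only.
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                          # walk parents back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in parents:
                parents[nxt] = cell
                frontier.append(nxt)
    return None                                   # goal unreachable

warehouse = [[0, 1, 0, 0],   # a made-up 4x4 slice of warehouse floor
             [0, 1, 0, 1],
             [0, 0, 0, 1],
             [1, 1, 0, 0]]
print(shortest_path(warehouse, (0, 0), (3, 3)))
```

Real fleets add localization uncertainty and traffic control for thousands of simultaneous robots, but the map-then-plan core is the same idea.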
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
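In code, that learning problem can be caricatured as grasp-outcome prediction: log many grasp attempts, then train a network to score, from what the camera sees plus a candidate grasp pose, how likely the grasp is to succeed. The sketch below uses placeholder data and made-up dimensions; it is not the actual Google model:

```python
# Toy grasp-success predictor trained on logged attempts. Placeholder data
# and dimensions throughout; a sketch of the idea, not Google's model.
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    def __init__(self, image_dim=512, pose_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),           # logit for P(grasp succeeds)
        )

    def forward(self, image_feats, grasp_pose):
        return self.net(torch.cat([image_feats, grasp_pose], dim=-1))

model = GraspSuccessNet()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for thousands of logged attempts: image features, tried pose,
# and whether the object was actually lifted (1) or dropped (0).
imgs, poses = torch.randn(64, 512), torch.randn(64, 4)
held = torch.randint(0, 2, (64, 1)).float()

for _ in range(100):                     # fit the success predictor
    opt.zero_grad()
    loss = loss_fn(model(imgs, poses), held)
    loss.backward()
    opt.step()

# At run time, the robot scores many candidate poses and executes the best.
```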
Another area where humanoid robotics—especially bipedal locomotion, or walking—has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with their notorious drone delivery service. But it’s all very well to have your self-driving car or van deliver packages to people’s doors; someone still has to put the package on the doorstep. It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like Cassie, developed by Oregon State, may one day be used to deliver parcels.
Again: no one more than Amazon stands to profit from cracking this technology. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between our sci-fi vision of robots that can intelligently serve us and machines that can only mindlessly assemble cars is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics—and with the unique stepping stones they have along the way—Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com