
#437635 Toyota Research Demonstrates ...

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which reflects the long-term nature of what TRI is trying to do: homes are both where we’ll need robots the most and where robots will be hardest to deploy. The unpredictable nature of homes, and the fact that homes tend to have squishy, fragile people in them, are robot-unfriendly characteristics. But as the population continues to age (an increasingly acute problem in Japan), home robots offer an enormous amount of potential for helping us maintain our independence.

Today, Toyota is showing off some of the research that it’s been working on recently, in the form of a virtual reality presentation in lieu of an in-person press event. For journalists, TRI pre-loaded the recording onto a VR headset, which was FedEx’ed to my house. You can watch the entire 40-minute presentation in 360 video on YouTube (or in VR if you have a headset of your own), but if you don’t watch the whole thing, you should at least check out the full-on GLaDOS (with arms) that TRI thinks belongs in your home.

The presentation features an introduction from Gill Pratt, who looks entirely too comfortable embedded inside one of TRI’s telepresence robots. The presentation covers a lot of territory, but the highlight is almost certainly the new hardware that TRI demonstrates.

Soft bubble gripper

Photo: TRI

This is a “soft bubble gripper,” under development at TRI’s Cambridge, Mass., branch. These passively compliant, air-filled grippers make it easier to grasp many different kinds of objects safely, but the nifty thing is that they’ve got cameras inside them watching a pattern of dots on the interior of the soft membrane.

When the outside of the bubble makes contact with an object, the bubble deforms, and the camera tracks the resulting deformation of the dot pattern to determine both the direction and magnitude of the applied forces. This is a concept that we’ve seen elsewhere before, but TRI’s implementation is a clever way of making an inherently safe end effector that can still perform all the sensing you need for relatively complex manipulation tasks.
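To make the sensing concrete, here is a minimal sketch of how forces might be recovered from tracked dot displacements. This is purely illustrative rather than TRI’s published method: the calibration constants, the spread-based proxy for normal force, and the assumption that dot correspondence is already solved are all ours.

```python
import numpy as np

# Illustrative dot-pattern force estimation; not TRI's actual algorithm.
# ref_dots: (N, 2) pixel positions of the membrane dots before contact.
# cur_dots: (N, 2) positions of the same dots after contact (the camera's
# tracker is assumed to have already matched dots between frames).

def estimate_contact(ref_dots, cur_dots, k_shear=0.05, k_normal=0.8):
    """Return a rough (shear_vector, normal_force) in arbitrary units."""
    disp = cur_dots - ref_dots           # per-dot displacement, pixels
    shear = k_shear * disp.mean(axis=0)  # net in-plane motion ~ shear force

    # Pressing on the bubble stretches the membrane, spreading the dots
    # apart; the change in mean distance from the centroid is a crude
    # proxy for normal force. k_shear and k_normal are made-up constants
    # that a real system would calibrate against a force/torque sensor.
    def spread(dots):
        return np.linalg.norm(dots - dots.mean(axis=0), axis=1).mean()

    normal = k_normal * max(spread(cur_dots) - spread(ref_dots), 0.0)
    return shear, normal
```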

The bubble gripper was presented at ICRA this year, and you can read the technical paper here.

Ceiling-mounted home robot

Photo: TRI

I don’t know whether robots dangling from the ceiling was somehow sinister pre-Portal, but it sure as heck is for me having played through that game a couple of times, and it’s since been reinforced by AUTO from WALL-E.

The reason that we generally see robots mounted on the floor or on tables or on mobile bases is that we’re bipeds, not bats: giving a robot access to a human-like workspace is easiest if you also give that robot a human-like position and orientation. And if you want to be able to reach stuff high up, you do what TRI did with its previous generation of kitchen manipulator, and just give it the ability to make itself super tall. But TRI is convinced that the ceiling is a good place to put our future home robots:

One innovative concept is a “gantry robot” that would descend from an overhead framework to perform tasks such as loading the dishwasher, wiping surfaces, and clearing clutter. By traveling on the ceiling, the robot avoids the problems of navigating household floor clutter and navigating cramped spaces. When not in use, the robot would tuck itself up out of the way. To further investigate this idea, the team has built a laboratory prototype robot that can do all the same tasks as a floor-based mobile robot but with the innovative overhead mobility system.

An obvious problem with the gantry robot is that you have to install all kinds of stuff in your ceiling for this to work, which makes it very impractical (if not totally impossible) to introduce a system like this into a home that wasn’t built specifically for it. If, however, you do build a home with a robot like this in mind, the animation below from TRI shows how it could be extra useful. Suddenly, stairs are a non-issue. Payload is presumably also a non-issue, since loads can be transferred to the ceiling. Batteries become unnecessary, so the whole robot can be much lighter, which in turn makes it safer. Sensors get a fantastic view, and obstacle avoidance becomes trivial.

Robots as “time machines”

Photo: TRI

TRI’s presentation covered more than what we’ve highlighted here—our focus has been on the hardware prototypes, but TRI had more to talk about, including learning through demonstration, scaling learning through simulation, and how TRI has been working with users to figure out what research directions should be explored. It’s all available right now on YouTube, and it’s well worth 40 minutes of your time.

“What we’re really focused on is this principal idea of amplifying, rather than replacing, human beings”
—Gill Pratt, TRI

It’s only been five years since Toyota announced the $1 billion investment that established TRI, and it feels like the progress that’s been made since then has been substantial. It’s not often that vision, resources, and long-term commitment come together like this, and TRI’s emphasis on making life better for people is one of the things that helps to keep us optimistic about the future of robotics.

“What we’re really focused on is this principal idea of amplifying, rather than replacing, human beings,” Gill Pratt told us. “And what it means to amplify a person, particularly as they’re aging—what we’re really trying to do is build a time machine. This may sound fanciful, and of course we can’t build a real time machine, but maybe we can build robotic assistants to make our lives as we age seem as if we are actually using a time machine.” He explains that this doesn’t mean building robots for convenience or to do our jobs for us. “It means building technology that enables us to continue to live and to work and to relate to each other as if we were younger,” he says. “And that’s really what our main goal is.”


#437630 How Toyota Research Envisions the Future ...

Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.

To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.

Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?

For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.

IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?

Photo: TRI

Toyota is exploring robots capable of manipulating dishes in a sink and a dishwasher, performing experiments and simulations to make sure that the robots can handle a wide range of conditions.

Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about it at the earliest possible stage with research as well. The hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about, is it possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.

The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.

A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.

“A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time.”
—Gill Pratt, TRI
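Pratt’s “thousands and now millions” of varied trials describes a standard robustness pattern, often called domain randomization: sample the physical parameters that might vary in the real world, run the controller in simulation under each sample, and keep the failures for analysis. The sketch below is our own toy version of that loop; the task, parameter ranges, and stand-in simulator are illustrative assumptions, not TRI’s pipeline.

```python
import random

def simulate_dish_grasp(friction, dish_mass, pose_noise):
    # Stand-in for a real physics simulation plus controller. Here a grasp
    # fails only when the dish is slippery, heavy, and badly localized at once.
    return not (friction < 0.35 and dish_mass > 1.0 and abs(pose_noise) > 0.015)

def run_trials(n=100_000, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(n):
        params = {
            "friction": rng.uniform(0.2, 1.2),   # wet vs. dry dishes
            "dish_mass": rng.uniform(0.1, 1.5),  # kg
            "pose_noise": rng.gauss(0.0, 0.01),  # meters of perception error
        }
        if not simulate_dish_grasp(**params):
            failures.append(params)              # keep the edge cases
    return failures

print(len(run_trials()), "failures to study")
```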

In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?

Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.

Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.

Photo: TRI

Toyota recently demonstrated a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.

I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?

Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what is that trade between changing the environment, and giving people robotic assistance and tools.

We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.

To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?

Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.


#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

“The smaller, the better” is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” At some point, scaling a supercomputer like Joule no longer produces better results for this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
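A toy strong-scaling model shows why such a peak exists: the per-core share of the computation shrinks as cores are added, while the cost of moving data between chips grows. The constants below are invented, chosen only so the optimum lands near Joule’s observed 16,384 cores; the shape of the curve is the point.

```python
def step_time(cores, work=1e6, comm=1.0):
    # work / cores: perfectly parallel compute, shrinks with more cores.
    # comm * sqrt(cores): boundary data shuttled between chips, grows as
    # the job spreads across more of the machine.
    return work / cores + comm * cores ** 0.5

best = min(range(256, 90001, 256), key=step_time)
print(best)  # about 16,000 cores; past this, communication dominates
```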

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient compared to a traditional supercomputer that needs to network a ton of traditional chips.
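The payoff is visible in the kind of kernel a fluid simulation actually runs. Explicit grid-based solvers update each cell from only its four neighbors, so if every core owns one patch of the grid, everything it needs sits in its own memory or one hop away. Here is an illustrative diffusion step, a stand-in for the real combustion physics, showing that four-neighbor access pattern:

```python
import numpy as np

def diffuse_step(u, alpha=0.1):
    # Each interior cell mixes with its north, south, east, and west
    # neighbors: the same four-neighbor pattern the WSE wires between cores.
    v = u.copy()
    v[1:-1, 1:-1] += alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return v

field = np.zeros((64, 64))
field[32, 32] = 1.0  # a hot spot diffusing outward
for _ in range(100):
    field = diffuse_step(field)
```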

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras


#437614 Video Friday: Poimo Is a Portable ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.

[ UCSD ]

Thanks Ioana!

Shark Robotics, French and European leader in Unmanned Ground Vehicles, is announcing today a disinfection add-on for Boston Dynamics Spot robot, designed to fight the COVID-19 pandemic. The Spot robot with Shark’s purpose-built disinfection payload can decontaminate up to 2,000 m² in 15 minutes, in any space that needs to be sanitized – such as hospitals, metro stations, offices, warehouses or facilities.

[ Shark Robotics ]

Here’s an update on the Poimo portable inflatable mobility project we wrote about a little while ago; while not strictly robotics, it seems like it holds some promise for rapidly developing different soft structures that robotics might find useful.

[ University of Tokyo ]

Thanks Ryuma!

Pretty cool that you can do useful force feedback teleop while video chatting through a “regular broadband Internet connection.” Although, what “regular” means to you is a bit subjective, right?

[ HEBI Robotics ]

Thanks Dave!

While NASA's Mars rover Perseverance travels through space toward the Red Planet, its nearly identical rover twin is hard at work on Earth. The vehicle system test bed (VSTB) rover named OPTIMISM is a full-scale engineering version of the Mars-bound rover. It is used to test hardware and software before the commands are sent up to the Perseverance rover.

[ NASA ]

Jacquard takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose — like being your favorite jacket, backpack or a pair of shoes that you love to wear.

Our ambition is simple: to make life easier. By staying connected to your digital world, your things can do so much more. Skip a song by brushing your sleeve. Take a picture by tapping on a shoulder strap. Get reminded about the phone you left behind with a blink of light or a haptic buzz on your cuff.

[ Google ATAP ]

Should you attend the IROS 2020 workshop on “Planetary Exploration Robots: Challenges and Opportunities”? Of course you should!

[ Workshop ]

Kuka makes a lot of these videos where I can’t help but think that if they put as much effort into programming the robot as they did into producing the video, the result would be much more impressive.

[ Kuka ]

The Colorado School of Mines is one of the first customers to buy a Spot robot from Boston Dynamics to help with robotics research. Watch as scientists take Spot into the school's mine for the first time.

[ HCR ] via [ CNET ]

A very interesting soft(ish) actuator from Ayato Kanada at Kyushu University's Control Engineering Lab.

A flexible ultrasonic motor (FUSM) generates linear motion as a novel soft actuator. The motor consists of a single metal cube stator with a hole and an elastic elongated coil spring inserted into the hole. When voltages are applied to piezoelectric plates on the stator, the coil spring moves back and forth as a linear slider. Because the FUSM uses friction drive as its operating principle, the most important parameter for optimizing its output is the preload between the stator and slider. The coil spring has a slightly larger diameter than the stator hole and generates the preload by expanding in a radial direction. The coil spring acts not only as a flexible slider but also as a resistive positional sensor: changes in the resistance between the stator and the coil spring end are converted to a voltage and used for position detection.
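That last sensing trick is easy to picture in code. The sketch below assumes a plain voltage-divider readout; the supply voltage, divider resistor, and ohms-per-millimeter calibration are invented for illustration, since the circuit values aren’t given in the description above.

```python
V_REF = 3.3        # supply voltage in volts (assumed)
R_FIXED = 1000.0   # fixed divider resistor in ohms (assumed)
OHMS_PER_MM = 2.5  # spring resistance per mm of travel (assumed calibration)

def position_mm(v_adc):
    # Invert the divider: v_adc = V_REF * r_spring / (r_spring + R_FIXED),
    # where r_spring is the resistance from the stator contact to the
    # spring's end, which grows with slider travel.
    r_spring = R_FIXED * v_adc / (V_REF - v_adc)
    return r_spring / OHMS_PER_MM

print(position_mm(1.0))  # ~174 mm under these made-up constants
```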

[ Control Engineering Lab ]

Thanks Ayato!

We show how to use the limbs of a quadruped robot to identify fine-grained soil, representative of Martian regolith.

[ Paper ] via [ ANYmal Research ]

PR2 is serving breakfast and cleaning up afterwards. It’s slow, but all you have to do is eat and leave.

That poor PR2 is a little more naked than it's probably comfortable with.

[ EASE ]

NVIDIA researchers present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped robot (the Unitree Laikago).

[ NVIDIA ]

What's interesting about this assembly task is that the robot is using its arm only for positioning, and doing the actual assembly with just fingers.

[ RC2L ]

In this electronics assembly application, Kawasaki's cobot duAro2 uses a tool changing station to tackle a multitude of tasks and assemble different CPU models.

Okay but can it apply thermal paste to a CPU in the right way? Personally, I find that impossible.

[ Kawasaki ]

You only need to watch this video long enough to appreciate the concept of putting a robot on a robot.

[ Impress ]

In this lecture, we’ll hear from the man behind one of the biggest robotics companies in the world, Boston Dynamics, whose robotic dog, Spot, has been used to encourage social distancing in Singapore and is now getting ready for FDA approval to be able to measure patients’ vital signs in hospitals.

[ Alan Turing Institute ]

Greg Kahn from UC Berkeley wrote in to share his recent dissertation talk on “Mobile Robot Learning.”

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, there are many challenges impeding mobile robots from autonomously learning to act in the real world, in particular: (1) sample efficiency: how to learn using a limited amount of data? (2) supervision: how to tell the robot what to do? and (3) safety: how to ensure the robot and environment are not damaged or destroyed during learning? In this talk, I will present deep reinforcement learning methods for addressing these real-world mobile robot learning challenges and show results which enable ground and aerial robots to navigate in complex indoor and outdoor environments.

[ UC Berkeley ]

Thanks Greg!

Leila Takayama from UC Santa Cruz (and previously Google X and Willow Garage) gives a talk entitled “Toward a more human-centered future of robotics.”

Robots are no longer only in outer space, in factory cages, or in our imaginations. We interact with robotic agents when withdrawing cash from bank ATMs, driving cars with adaptive cruise control, and tuning our smart home thermostats. In the moment of those interactions with robotic agents, we behave in ways that do not necessarily align with the rational belief that robots are just plain machines. Through a combination of controlled experiments and field studies, we use theories and concepts from the social sciences to explore ways that human and robotic agents come together, including how people interact with personal robots and how people interact through telepresence robots. Together, we will explore topics and raise questions about the psychology of human-robot interaction and how we could invent a future of a more human-centered robotics that we actually want to live in.

[ Leila Takayama ]

Roboticist and stand-up comedian Naomi Fitter from Oregon State University gives a talk on “Everything I Know about Telepresence.”

Telepresence robots hold promise to connect people by providing videoconferencing and navigation abilities in far-away environments. At the same time, the impacts of current commercial telepresence robots are not well understood, and circumstances of robot use including internet connection stability, odd personalizations, and interpersonal relationship between a robot operator and people co-located with the robot can overshadow the benefit of the robot itself. And although the idea of telepresence robots has been around for over two decades, available nonverbal expressive abilities through telepresence robots are limited, and suitable operator user interfaces for the robot (for example, controls that allow for the operator to hold a conversation and move the robot simultaneously) remain elusive. So where should we be using telepresence robots? Are there any pitfalls to watch out for? What do we know about potential robot expressivity and user interfaces? This talk will cover my attempts to address these questions and ways in which the robotics research community can build off of this work.

[ Talking Robotics ]


#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command-line interface that could be intimidating for new users. We did ask about why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
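For a flavor of the Arduino-level plumbing involved, here is a sketch of the general pattern: the phone handles perception and streams simple drive commands to the microcontroller in the body over USB serial. The port name, baud rate, and message format below are hypothetical, not OpenBot’s documented protocol.

```python
import serial  # pyserial

# Hypothetical phone-to-body link: one short ASCII line per
# differential-drive command keeps the firmware side trivial to parse.
link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

def drive(left_pwm, right_pwm):
    """Send wheel PWM values (0-255) to the robot body."""
    link.write(f"c{left_pwm},{right_pwm}\n".encode("ascii"))

drive(128, 128)  # roll forward at half throttle
```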

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost, community-focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.
