#439916 This Restaurant Robot Fries Your Food to ...
Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy’s main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.
This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves.
This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they’re consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy.
So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets.
These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot’s AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won’t be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area.
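The routing step described above is essentially a lookup from the recognized food to its dedicated basket and cook profile. A minimal sketch of that dispatch logic might look like the following; the names, cook times, and `FryStation` class are hypothetical illustrations, not Miso Robotics' actual software.

```python
from dataclasses import dataclass, field

# Hypothetical food-to-basket routing table: each food gets its own basket,
# so (for example) onion rings never share oil with fish sticks.
FOOD_BASKETS = {
    "onion_rings": "basket_1",
    "fish_sticks": "basket_2",
    "french_fries": "basket_3",
}

# Assumed cook times in seconds, purely for illustration.
COOK_TIMES_SEC = {
    "onion_rings": 150,
    "fish_sticks": 240,
    "french_fries": 180,
}

@dataclass
class FryStation:
    holding_area: list = field(default_factory=list)

    def process(self, detected_food: str) -> str:
        # Route the vision system's detection to its dedicated basket.
        basket = FOOD_BASKETS[detected_food]
        cook_time = COOK_TIMES_SEC[detected_food]
        # ...fry for cook_time seconds, then move to the hot-holding area.
        self.holding_area.append((detected_food, basket))
        return f"{detected_food}: cooked {cook_time}s in {basket}"

station = FryStation()
print(station.process("onion_rings"))
```

The point of the per-food basket mapping is cross-contamination: keeping the routing in a table also makes adding a new menu item a one-line change.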
Miso Robotics says the new robot’s throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour. So much fried food. Luckily, Americans can’t get enough fried food, in general and especially as the pandemic drags on. Even more importantly, the current labor shortages we’re seeing mean restaurant chains can’t hire enough people to cook fried food, making automated tools like Flippy not only helpful, but necessary.
“Since Flippy’s inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.”
At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of many technologies that were already on the horizon, with automation of repetitive tasks being high on the list. They were right, and we’ve been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can’t cook every day). Now if only there was a tech fix for inflation and housing shortages…
Seeing as how there’ve been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald’s announced a collaboration to create drive-through lanes run by AI.
So it may not be long before you can order a meal from one computer, have that meal cooked by another computer, then have it delivered to your home or waiting vehicle by a third—you guessed it—computer.
Image Credit: Miso Robotics
#439646 Elon Musk Has No Idea What He’s Doing ...
Yesterday, at the end of Tesla's AI Day, Elon Musk introduced a concept for “Tesla Bot,” a 125 lb, 5'8″ tall electromechanically actuated autonomous bipedal “general purpose” humanoid robot. By “concept,” I mean that Musk showed some illustrations and talked about his vision for the robot, which struck me as, let's say, somewhat naïve. Based on the content of a six-minute long presentation, it seems as though Musk believes that someone (Tesla, suddenly?) should just go make an autonomous humanoid robot already—like, the technology exists, so why not do it?
To be fair, Musk did go out and do more or less exactly that for electric cars and reusable rockets. But humanoid robots are much different, and much more complicated. With rockets, well, we already had rockets. And with electric cars, we already had cars, batteries, sensors, and the DARPA competitions to build on. I don't say this to minimize what Musk has done with SpaceX and Tesla, but rather to emphasize that humanoid robotics is a very different challenge.
Unlike rockets or cars, humanoid robots aren't an existing technology that just needs an ambitious vision plus a team of clever people plus sustained financial investment. With humanoid robotics, there are many more problems to solve, the problems are harder, and we're much farther away from practical solutions. Lots of very smart people have been actively working on these things for decades, and there's still a laundry list of fundamental breakthroughs in hardware and especially software that are probably necessary to make Musk's vision happen.
Are these fundamental breakthroughs impossible for Tesla? Not impossible, no. But from listening to what Elon Musk said, I don't think he has any idea what getting humanoid robots to do useful stuff actually involves. Let's talk about why.
Watch the presentation if you haven't yet, and then let's go through what Musk talks about.
Okay, here we go!
“Our cars are semi-sentient robots on wheels.”
I don't know what that even means. Semi-sentient? Sure, whatever, a cockroach is semi-sentient I guess, although the implicit suggestion that these robots are therefore somehow part of the way towards actual sentience is ridiculous. Besides, autonomous cars live in a highly constrained action space within a semi-constrained environment, and Tesla cars in particular have plenty of well-known issues with their autonomy.
“With the full self-driving computer, essentially the inference engine on the car (which we'll keep evolving, obviously) and Dojo, and all the neural nets recognizing the world, understanding how to navigate through the world, it kind of makes sense to put that onto a humanoid form.”
Yes, because that's totally how it works. Look, the neural networks in a Tesla (the car) are trained to recognize the world from a car's perspective. They look for things that cars need to understand, and they have absolutely no idea about anything else, which can cause all kinds of problems for them. Same with navigation: autonomous cars navigate through a world that consists of roads and road-related stuff. You can't just “put that” onto a humanoid robot and have any sort of expectation that it'll be useful, unless all you want it to do is walk down the middle of the street and obey traffic lights. Also, the suggestion here seems to be that “AI for general purpose robotics” can be solved by just throwing enough computing power at it, which as far as I'm aware is not even remotely how that works, especially with physical robots.
“[Tesla] is also quite good at sensors and batteries and actuators. So, we think we'll probably have a prototype sometime next year.”
It's plausible that by spending enough money, Tesla could construct a humanoid robot with batteries, actuators, and computers in a similar design to what Musk has described. Can Tesla do it by sometime next year like Musk says they can? Sure, why not. But the hard part is not building a robot, it's getting that robot to do useful stuff, and I think Musk is way out of his depth here. People without a lot of experience in robotics often seem to think that once you've built the robot, you've solved most of the problem, so they focus on mechanical things like actuators and what it'll look like and how much it can lift and whatever. But that's backwards, and the harder problems come after you've got a robot that's mechanically functional.
What the heck does “human-level hands” mean?
“It's intended to navigate through a world built for humans…”
This is one of the few good reasons to make a humanoid robot, and I'm not even sure that by itself, it's a good enough reason to do so. But in any case, the word “intended” is doing a lot of heavy lifting here. A world built for humans implies an almost infinite variety of different environments, full of all kinds of robot-unfriendly things, not to mention the safety aspects of an inherently unstable 125 lb robot.
I feel like I have a pretty good handle on the current state of the art in humanoid robotics, and if you visit this site regularly, you probably do too. Companies like Boston Dynamics and Agility Robotics have been working on robots that can navigate through human environments for literally decades, and it's still a super hard problem. I don't know why Musk thinks that he can suddenly do better.
The “human-level hands” that you see annotated in Musk's presentation above are a good example of why I think Musk doesn't really grasp how much work this robot is going to be. What does “human-level hands” even mean? If we're talking about five-fingered hands with human-equivalent sensing and dexterity, those do exist (sort of), although they're generally fragile and expensive. It would take an enormous engineering effort to make hands like that into something practical just from a hardware perspective, which is why nobody has bothered—most robots use much simpler, much more robust two or three finger grippers instead. Could Tesla solve this problem? I have no doubt that they could, given enough time and money. But they've also got every other part of the robot to deal with. And even if you can make the hardware robust enough to be useful, you've still got to come up with all of the software to make it work. Again, we're talking about huge problems within huge problems at a scale that it seems like Musk hasn't considered.
“…And eliminate dangerous, repetitive, and boring tasks.”
Great. This is what robots should be doing. But as Musk himself knows, it's easy to say that robots will eliminate dangerous, repetitive, and boring tasks, and much more difficult to actually get them to do it—not because the robots aren't capable, but because humans are far more capable. We set a very high bar for performance and versatility in ways that aren't always obvious, and even when they are obvious, robots may not be able to replicate them effectively.
[Musk makes jokes about robots going rogue.]
Uh, okay.
“Things I think that are hard about having a really useful humanoid robot are, can it navigate through the world without being explicitly trained, without explicit line-by-line instructions? Can you talk to it and say, 'please pick up that bolt and attach it to the car with that wrench?' 'Please go to the store and get me the following groceries?' That kind of thing.”
Robots can already navigate through the world without “explicit line-by-line instructions” when they have a pretty good idea of what “the world” consists of. If the world is “roads” or “my apartment” or “this specific shopping mall,” that's probably a 95%+ solved problem, keeping in mind that the last 5% gets ridiculously complicated. But if you start talking about “my apartment plus any nearby grocery store along with everything between my apartment and that grocery store,” that's a whole lot of not necessarily well structured or predictable space.
And part of that challenge is just physically moving through those spaces. Are there stairs? Heavy doors? Crosswalks? Lots of people? These are complicated enough environments for those small wheeled sidewalk delivery robots with humans in the loop, never mind a (hypothetical) fully autonomous bipedal humanoid that is also carrying objects. And going into a crowded grocery store and picking things up off of shelves and putting them into a basket or a cart that then has to be pushed safely? These are cutting edge unsolved robotics problems, and we've barely seen this kind of thing happen with industrial arms on wheeled bases, even in a research context. Heck, even “pick up that bolt” is not an easy thing for a robot to do right now, if it wasn't specifically designed for that task.
“This I think will be quite profound, because what is the economy—at the foundation, it is labor. So, what happens when there is no shortage of labor? This is why I think long term there will need to be universal basic income. But not right now, because this robot doesn't work.”
Economics is well beyond my area of expertise, but as Musk says, until the robot works, this is all moot.
“AI for General Purpose Robotics.” Sure.
It's possible, even likely, that Tesla will build some sort of Tesla Bot by sometime next year, as Musk says. I think that it won't look all that much like the concept images in this presentation. I think that it'll be able to stand up, and perhaps walk. Maybe withstand a shove or two and do some basic object recognition and grasping. And I think after that, progress will be slow. I don't think Tesla will catch up with Boston Dynamics or Agility Robotics. Maybe they'll end up with the equivalent of Asimo: a PR tool that can do impressive demos but is ultimately not all that useful.
Part of what bothers me so much about all this is how Musk's vision for the Tesla Bot implies that he's going to just casually leapfrog all of the roboticists who have been working towards useful humanoids for decades. Musk assumes that he will be able to wander into humanoid robot development and do what nobody else has yet been able to do: build a useful general purpose humanoid. I doubt Musk intended it this way, but I feel like he's backhandedly suggesting that the challenges with humanoids aren't actually that hard, and that if other people were cleverer, or worked harder, or threw more money at the problem, then we would have had general purpose humanoids already.
I think he's wrong. But if Tesla ends up investing time and money into solving some really hard robotics problems, perhaps they'll have some success that will help move the entire field forward. And I'd call that a win.
#439119 No Human Can Match This High-Speed ...
Today at ProMat, a company called Pickle Robots is announcing Dill, a robot that can unload boxes from the back of a trailer at places like ecommerce fulfillment warehouses at very high speeds. With a peak box unloading rate of 1800 boxes per hour and a payload of up to 25 kg, Dill can substantially outperform even an expert human, and it can keep going pretty much forever as long as you have it plugged into the wall.
Pickle Robots says that Dill’s approach to the box unloading task is unique in a couple of ways. First, it can handle messy trailers filled with a jumble of boxes of different shapes, colors, sizes, and weights. And second, from the get-go it’s intended to work under human supervision, relying on people to step in and handle edge cases.
Pickle’s “Dill” robot is based around a Kuka arm with up to 30 kg of payload. It uses two Intel L515s (Lidar-based RGB-D cameras) for box detection. The system is mounted on a wheeled base, and after getting positioned at the back of a trailer by a human operator, it’ll crawl forward by itself as it picks its way into the trailer. We’re told that the rate at which the robot can shift boxes averages 1600 per hour, with a peak speed closer to 1800 boxes per hour. A single human in top form can move about 800 boxes per hour, so Dill is very, very fast. In the video, you can see the robot slow down on some packages, and Pickle CEO Andrew Meyer says that’s because “we probably have a tenuous grasp on that package. As we continue to improve the gripper, we will be able to keep the speed up on more cycles.”
While the video shows Dill operating at speed autonomously, the company says it’s designed to function under human supervision. From the press release: “To maintain these speeds, Dill needs people to supervise the operation and lend an occasional helping hand, stepping in every so often to pick up any dropped packages and handle irregular items.” Typically, Meyer says, that means one person for every five robots, depending on the use case, though even a single robot still needs someone keeping an eye on it. The supervisor isn’t occupied with the task full-time and can do other work while the robot runs, but the longer a human takes to respond to the robot’s issues, the lower its effective speed. Typically, the company says, a human will need to help out the robot once every five minutes when it’s doing something particularly complex. But even in situations with lots of hard-to-handle boxes resulting in relatively low efficiency, Meyer says that users can expect speeds exceeding 1000 boxes per hour.
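That relationship between response time and effective speed lends itself to a simple back-of-envelope model: if the robot runs at peak rate between interventions and stalls while waiting for help, throughput scales with its uptime fraction. The function below is an illustrative sketch using assumed numbers, not figures from Pickle.

```python
def effective_rate(peak_boxes_per_hour: float,
                   minutes_between_interventions: float,
                   response_minutes: float) -> float:
    """Back-of-envelope throughput model: the robot runs at peak speed,
    then stalls for response_minutes each time it needs a human."""
    cycle = minutes_between_interventions + response_minutes
    uptime_fraction = minutes_between_interventions / cycle
    return peak_boxes_per_hour * uptime_fraction

# Assumed example: 1600 boxes/hour peak, help needed every 5 minutes,
# and a 1-minute human response time.
print(round(effective_rate(1600, 5, 1)))
```

Under these assumptions the robot spends 5 of every 6 minutes working, so a 1-minute response time already shaves off roughly a sixth of peak throughput, which is why a distracted supervisor directly costs boxes per hour.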
Photo: Pickle Robots
Pickle Robots’ gripper, which includes a high contact area suction system and a retractable plate to help the robot quickly flip boxes.
From Pickle Robots’ video, it’s fairly obvious that the comparison that Pickle wants you to make is to Boston Dynamics’ Stretch robot, which has a peak box moving rate of 800 boxes per hour. Yes, Pickle’s robot is twice as fast. But it’s also a unitasker, designed to unload boxes from trucks, and that’s it. Focusing on a very specific problem is a good approach for robots, because then you can design a robot that does an excellent job of solving that problem, which is what Pickle has done. Boston Dynamics has chosen a different route with Stretch, which is to build a robot that has the potential to do many other warehouse tasks, although not nearly as optimally.
The other big difference between Boston Dynamics and Pickle is, of course, that Boston Dynamics is focusing on autonomy. Meanwhile, Pickle, Meyer says in a press release, “resisted the fool’s errand of trying to create a system that could work entirely unsupervised.” Personally, I disagree that aiming for unsupervised operation is a fool’s errand. Requiring complete unsupervised autonomy from the start is generally not practical right now outside of highly structured environments, but many companies do have goals that include unsupervised operation while still acknowledging that occasionally their robots will need a human to step in and help. In fact, these companies are (generally) doing exactly what Pickle is doing in practice: deploying robots with the goal of fully unsupervised autonomy, while keeping humans available as they work toward that goal. The difference, perhaps, is philosophical: some companies see unsupervised operation as the future of robotics in these specific contexts, while Pickle does not. We asked Meyer why. He replied:
Some problems are hardware-related and not likely to yield an automated solution anytime soon. For example, the gripper is physically incapable of grasping some objects, like car tires, no matter what intelligence the robot has. A part might start to wear out, like a spring on the gripper, and the gripper can behave unpredictably. Things can be too heavy. A sensor might get knocked out of place, dust might get on the camera lens. Or an already damaged package falls apart when you pick it up, and dumps its contents on the ground.
Other problems can go away over time as the algorithms learn and the engineers innovate in small ways. For example, learning not to pick packages that will cause a bunch more to fall down, learning to approach boxes in the corner from the side, or—and this was a real issue in production for a couple days—learning to avoid picking directly on labels where they might peel off from suction.
Machine learning algorithms, on both the perception and action sides of the story, are critical ingredients for making any of this work. However, even with them your engineering team still has to do a lot of problem solving wherever the AI is struggling. At some point you run out of engineering resources to solve all these problems in the long tail. When we talk about problems that require AI algorithms as capable as people are, we mean ones where the target on the reliability curve (99.99999% in the case of self-driving, for example) is out of reach in this way. I think the big lesson from self-driving cars is that chasing that long tail of edge cases is really, really hard. We realized that in the loading dock, you can still deliver tremendous value to the customer even if you assume you can only handle 98% of the cases.
These long-tail problems are everywhere in robotics, but again, some people believe that levels of reliability that are usable for unsupervised operation (at least in some specific contexts) are more near-term achievable than others do. In Pickle’s case, emphasizing human supervision means that they may be able to deploy faster and more reliably and at lower cost and with higher performance—we’ll just have to see how long it takes for other companies to come through with robots that are able to do the same tasks without human supervision.
Photo: Pickle Robots
Pickle Robots is also working on other high-speed package sorting systems.
We asked Meyer how much Dill costs, and to our surprise, he gave us a candid answer: Depending on the configuration, the system can cost anywhere from $50-100k to deploy and about that same amount per year to operate. Meyer points out that you can’t really compare the robot to a human (or humans) simply on speed, since with the robot, you don’t have to worry about injuries or improper sorting of packages or training or turnover. While Pickle is currently working on several other configurations of robots for package handling, this particular truck unloading configuration will be shipping to customers next year.
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
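That stove-and-jelly-bean loop can be boiled down to a toy value-learning example: try actions, observe rewards, and nudge each action's estimated value toward what you experienced. The sketch below is a deliberately tiny illustration of the idea, not the deep reinforcement learning actually used to train Cassie; all names and numbers are made up.

```python
import random

# Two actions with fixed (hidden-to-the-agent) rewards: the "stove" hurts,
# politeness pays off.
ACTIONS = ["touch_stove", "say_please"]
REWARDS = {"touch_stove": -1.0, "say_please": +1.0}

q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate
random.seed(0)

for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the action's value estimate toward the observed reward.
    q[action] += alpha * (REWARDS[action] - q[action])

print(max(q, key=q.get))  # the learned preference
```

After a few hundred trials the agent's value estimates converge on "say please," having learned through trial and error exactly the lesson the analogy describes.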
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk this way. But skills learned in simulation don’t always translate to the real world.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
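The two-simulator workflow amounts to: train cheaply in an approximate simulator, then gate deployment on a higher-fidelity one whose behavior is slightly different, standing in for the sim-to-real gap. Below is a schematic, runnable sketch of that pipeline with trivial stand-in "simulators"; every function here is a placeholder for the actual MuJoCo/SimMechanics setup, which trains a neural-network controller rather than a single number.

```python
import random

random.seed(1)

def fast_sim_score(policy: float) -> float:
    # Stand-in for the cheap training simulator: best performance at 1.0.
    return -abs(policy - 1.0)

def high_fidelity_score(policy: float) -> float:
    # Stand-in for the validation simulator: its optimum is slightly
    # shifted, modeling the gap between the two environments.
    return -abs(policy - 1.05)

def train_in_fast_sim(policy: float, episodes: int = 200) -> float:
    # Trial-and-error in the cheap simulator: keep random perturbations
    # that improve the stand-in walking score.
    for _ in range(episodes):
        candidate = policy + random.uniform(-0.1, 0.1)
        if fast_sim_score(candidate) > fast_sim_score(policy):
            policy = candidate
    return policy

policy = train_in_fast_sim(0.0)
# Gate "graduation to Cassie" on passing the higher-fidelity test.
ready_for_hardware = high_fidelity_score(policy) > -0.2
print(round(policy, 2), ready_for_hardware)
```

The design point is the gate itself: the policy only moves to the next stage once it performs acceptably in the environment that more closely matches the hardware, which is what let Cassie walk with no further fine-tuning.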
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust: the researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers not involved with the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube