Tag Archives: help
#439916 This Restaurant Robot Fries Your Food to ...
Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy’s main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.
This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves.
This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they’re consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy.
So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets.
These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot’s AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won’t be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area.
Miso Robotics says the new robot’s throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour. So much fried food. Luckily, Americans can’t get enough fried food, in general and especially as the pandemic drags on. Even more importantly, the current labor shortages we’re seeing mean restaurant chains can’t hire enough people to cook fried food, making automated tools like Flippy not only helpful, but necessary.
“Since Flippy’s inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.”
At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of many technologies that were already on the horizon, with automation of repetitive tasks being high on the list. They were right, and we’ve been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can’t cook every day). Now if only there was a tech fix for inflation and housing shortages…
Seeing as how there’ve been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald’s announced a collaboration to create drive-through lanes run by AI.
So it may not be long before you can order a meal from one computer, have that meal cooked by another computer, then have it delivered to your home or waiting vehicle by a third—you guessed it—computer.
Image Credit: Miso Robotics
#439849 Boots Full of Nickels Help Mini Cheetah ...
As quadrupedal robots learn to do more and more dynamic tasks, they're likely to spend more and more time not on their feet. Not falling over, necessarily (although that's inevitable of course, because they're legged robots after all)—but just being in flight in one way or another. The most risky of flight phases would be a fall from a substantial height, because it's almost certain to break your very expensive robot and any payload it might have.
Falls being bad is not a problem unique to robots, and it's not surprising that quadrupeds in nature have already solved it. Or at least, it's already been solved by cats, which are able to reliably land on their feet to mitigate fall damage. To teach quadrupedal robots this trick, roboticists from the University of Notre Dame have been teaching a Mini Cheetah quadruped some mid-air self-righting skills, with the aid of boots full of nickels.
If this research looks a little bit familiar, it's because we recently covered some work from ETH Zurich that looked at using legs to reorient their SpaceBok quadruped in microgravity. This work with Mini Cheetah has to contend with Earth gravity, however, which puts some fairly severe time constraints on the whole reorientation thing, with the penalty for failure being a smashed-up robot rather than just a weird bounce. When we asked the ETH Zurich researchers what might improve the performance of SpaceBok, they told us that “heavy shoes would definitely help,” and it looks like the folks from Notre Dame had the same idea, which they were able to implement on Mini Cheetah.
Mini Cheetah's legs (like the legs of many robots) were specifically designed to be lightweight because they have to move quickly, and you want to minimize the mass that moves back and forth with every step to make the robot as efficient as possible. But for a robot to reorient itself in mid air, it's got to start swinging as much mass around as it can. Each of Mini Cheetah's legs has been modified with 3D printed boots, packed with two rolls of American nickels each, adding about 500g to each foot—enough to move the robot around like it needs to. The reason why nickel boots are important is because the only way that Mini Cheetah has of changing its orientation while falling is by flailing its legs around. When its legs move one way, its body will move the other way, and the heavier the legs are, the more force they can exert on the body.
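The physics at work here is conservation of angular momentum: in free fall, the robot's total angular momentum stays fixed, so swinging the legs one way rotates the body the other way, in proportion to the ratio of their inertias. A minimal sketch of that trade-off (the inertia values below are illustrative placeholders, not Mini Cheetah's actual numbers):

```python
# Zero-angular-momentum reorientation, in the spirit of the falling-cat
# maneuver described above. All inertia values are illustrative, not
# Mini Cheetah's real parameters.

I_BODY = 0.25   # body moment of inertia about the pitch axis (kg*m^2)
I_LEGS = 0.05   # combined leg inertia about the same axis (kg*m^2)

def body_rotation(leg_rotation_deg, i_body=I_BODY, i_legs=I_LEGS):
    """With zero net angular momentum, swinging the legs through some
    angle rotates the body the opposite way, scaled by the inertia ratio."""
    return -leg_rotation_deg * i_legs / i_body

# Heavier feet raise the leg inertia, so the same leg swing buys more
# body rotation. This is what the nickel boots are for.
light = body_rotation(180, i_legs=0.05)   # modest body rotation
heavy = body_rotation(180, i_legs=0.10)   # twice the body rotation
```

The sign flip in `body_rotation` captures the "legs move one way, body moves the other" behavior described above; doubling the foot mass roughly doubles the usable body rotation per swing.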
As with everything robotics, getting the hardware to do what you want it to do is only half the battle. Or sometimes much, much less than half the battle. The challenge with Mini Cheetah flipping itself over is that it has a very, very small amount of time to figure out how to do it properly. It has to detect that it's falling, figure out what orientation it's in, make a plan of how to get itself feet down, and then execute on that plan successfully. The robot doesn't have enough time to put a whole heck of a lot of thought into things as it starts to plummet, so the technique that the researchers came up with to enable it to do what it needs to do is called a “reflex” approach. Vince Kurtz, first author on the paper describing this technique, explains how it works:
While trajectory optimization algorithms keep getting better and better, they still aren't quite fast enough to find a solution from scratch in the fraction of a second between when the robot detects a fall and when it needs to start a recovery motion. We got around this by dropping the robot a bunch of times in simulation, where we can take as much time as we need to find a solution, and training a neural network to imitate the trajectory optimizer. The trained neural network maps initial orientations to trajectories that land the robot on its feet. We call this the “reflex” approach, since the neural network has basically learned an automatic response that can be executed when the robot detects that it's falling.

This technique works quite well, but there are a few constraints, most of which wouldn't seem so bad if we weren't comparing quadrupedal robots to quadrupedal animals. Cats are just, like, super competent at what they do, says Kurtz, and being able to mimic their ability to rapidly twist themselves into a favorable landing configuration from any starting orientation is just going to be really hard for a robot to pull off:
The more I do robotics research the more I appreciate how amazing nature is, and this project is a great example of that. Cats can do a full 180° rotation when dropped from about shoulder height. Our robot ran up against torque limits when rotating 90° from about 10ft off the ground. Using the full 3D motion would be a big improvement (rotating sideways should be easier because the robot's moment of inertia is smaller in that direction), though I'd be surprised if that alone got us to cat-level performance.
The biggest challenge that I see in going from 2D to 3D is self-collisions. Keeping the robot from hitting itself seems like it should be simple, but self-collisions turn out to impose rather nasty non-convex constraints that make it numerically difficult (though not impossible) for trajectory optimization algorithms to find high-quality solutions.

Lastly, we asked Kurtz to talk a bit about whether it's worth exploring flexible actuated spines for quadrupedal robots. We know that such spines offer many advantages (a distant relative of Mini Cheetah had one, for example), but that they're also quite complex. So is it worth it?
This is an interesting question. Certainly in the case of the falling cat problem a flexible spine would help, both in terms of having a naturally flexible mass distribution and in terms of controller design, since we might be able to directly imitate the “bend-and-twist” motion of cats. Similarly, a flexible spine might help for tasks with large flight phases, like the jumping in space problems discussed in the ETH paper.
With that being said, mid-air reorientation is not the primary task of most quadruped robots, and it's not obvious to me that a flexible spine would help much for walking, running, or scrambling over uneven terrain. Also, existing hardware platforms with rigid backs like the Mini Cheetah are quite capable, and I think we still haven't unlocked the full potential of these robots. Control algorithms are still the primary limiting factor for today's legged robots, and adding a flexible spine would probably make for even more difficult control problems.

Mini Cheetah, the Falling Cat: A Case Study in Machine Learning and Trajectory Optimization for Robot Acrobatics, by Vince Kurtz, He Li, Patrick M. Wensing, and Hai Lin from the University of Notre Dame, is available on arXiv.
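The "reflex" recipe Kurtz describes (run a slow trajectory optimizer offline on many sampled orientations, then train a fast approximator to imitate it) can be sketched in miniature. Everything below is a toy: a made-up function plays the role of the expensive optimizer, and plain least squares stands in for the neural network.

```python
import numpy as np

# Toy version of the "reflex" pipeline: offline, query a (notionally slow)
# trajectory optimizer on many sampled initial orientations, then fit a
# fast model mapping orientation -> trajectory for instant use at runtime.

rng = np.random.default_rng(0)

def slow_optimizer(theta):
    """Stand-in for an expensive solve: return a 3-knot joint trajectory
    that 'rights' the robot from pitch angle theta (made-up dynamics)."""
    return np.array([theta, 0.5 * theta, 0.0])  # decay toward upright

# Offline: sample orientations and record the optimizer's answers.
thetas = rng.uniform(-np.pi, np.pi, 200)
trajs = np.stack([slow_optimizer(t) for t in thetas])

# Fit the fast "reflex": features [1, theta], one linear model per knot.
X = np.stack([np.ones_like(thetas), thetas], axis=1)
W, *_ = np.linalg.lstsq(X, trajs, rcond=None)

def reflex(theta):
    """Online: evaluate the cheap fitted model instead of re-solving."""
    return np.array([1.0, theta]) @ W
```

The real system replaces the linear fit with a neural network and the stand-in function with genuine trajectory optimization, but the division of labor is the same: all the expensive thinking happens before the robot ever leaves the ground.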
#439628 How a Simple Crystal Could Help Pave the ...
Vaccine and drug development, artificial intelligence, transport and logistics, climate science—these are all areas that stand to be transformed by the development of a full-scale quantum computer. And there has been explosive growth in quantum computing investment over the past decade.
Yet current quantum processors are relatively small in scale, with fewer than 100 qubits—the basic building blocks of a quantum computer. Bits are the smallest unit of information in computing, and the term qubit stems from “quantum bit.”
While early quantum processors have been crucial for demonstrating the potential of quantum computing, realizing globally significant applications will likely require processors with upwards of a million qubits.
Our new research tackles a core problem at the heart of scaling up quantum computers: how do we go from controlling just a few qubits, to controlling millions? In research published today in Science Advances, we reveal a new technology that may offer a solution.
What Exactly Is a Quantum Computer?
Quantum computers use qubits to hold and process quantum information. Unlike the bits of information in classical computers, qubits make use of the quantum properties of nature, known as “superposition” and “entanglement,” to perform some calculations much faster than their classical counterparts.
Unlike a classical bit, which is represented by either 0 or 1, a qubit can exist in two states (that is, 0 and 1) at the same time. This is what we refer to as a superposition state.
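To make the compass picture concrete, a qubit can be written as a pair of amplitudes over the basis states 0 and 1, and a standard one-qubit operation (the Hadamard gate) turns a definite 0 into an equal superposition. This is a generic textbook illustration, not tied to any particular hardware:

```python
import math

# A qubit as a state vector: two amplitudes over the basis states |0> and
# |1>. Measurement probabilities are the squared magnitudes.

ket0 = [1.0, 0.0]   # definitely 0, like a classical bit
ket1 = [0.0, 1.0]   # definitely 1

def hadamard(state):
    """Rotate a qubit into (or back out of) an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Chance of measuring 0 and 1, respectively."""
    return [abs(amp) ** 2 for amp in state]

plus = hadamard(ket0)        # equal superposition: 0 AND 1 at once
p0, p1 = probabilities(plus) # each outcome roughly 50/50
```

Applying `hadamard` a second time returns the qubit to a definite 0, which is the sense in which superposition is a reversible rotation of the state, not classical randomness.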
Demonstrations by Google and others have shown even current, early-stage quantum computers can outperform the most powerful supercomputers on the planet for a highly specialized (albeit not particularly useful) task—reaching a milestone we call quantum supremacy.
Google’s quantum computer, built from superconducting electrical circuits, had just 53 qubits and was cooled to a temperature close to -273℃ in a high-tech refrigerator. This extreme temperature is needed to remove heat, which can introduce errors to the fragile qubits. While such demonstrations are important, the challenge now is to build quantum processors with many more qubits.
Major efforts are underway at UNSW Sydney to make quantum computers from the same material used in everyday computer chips: silicon. A conventional silicon chip is thumbnail-sized and packs in several billion bits, so the prospect of using this technology to build a quantum computer is compelling.
The Control Problem
In silicon quantum processors, information is stored in individual electrons, which are trapped beneath small electrodes at the chip’s surface. Specifically, the qubit is coded into the electron’s spin. It can be pictured as a small compass inside the electron. The needle of the compass can point north or south, which represents the 0 and 1 states.
To set a qubit in a superposition state (both 0 and 1), an operation that occurs in all quantum computations, a control signal must be directed to the desired qubit. For qubits in silicon, this control signal is in the form of a microwave field, much like the ones used to carry phone calls over a 5G network. The microwaves interact with the electron and cause its spin (compass needle) to rotate.
Currently, each qubit requires its own microwave control field. It is delivered to the quantum chip through a cable running from room temperature down to the bottom of the refrigerator at close to -273 degrees Celsius. Each cable brings heat with it, which must be removed before it reaches the quantum processor.
At around 50 qubits, which is state-of-the-art today, this is difficult but manageable. Current refrigerator technology can cope with the cable heat load. However, it represents a huge hurdle if we’re to use systems with a million qubits or more.
The Solution Is ‘Global’ Control
An elegant solution to the challenge of how to deliver control signals to millions of spin qubits was proposed in the late 1990s. The idea of “global control” was simple: broadcast a single microwave control field across the entire quantum processor.
Voltage pulses can be applied locally to qubit electrodes to make the individual qubits interact with the global field (and produce superposition states).
It’s much easier to generate such voltage pulses on-chip than it is to generate multiple microwave fields. The solution requires only a single control cable and removes obtrusive on-chip microwave control circuitry.
For more than two decades, global control in quantum computers remained just an idea. Researchers could not devise a suitable technology that could be integrated with a quantum chip and generate microwave fields at suitably low powers.
In our work we show that a component known as a dielectric resonator could finally allow this. The dielectric resonator is a small, transparent crystal which traps microwaves for a short period of time.
The trapping of microwaves, a phenomenon known as resonance, allows them to interact with the spin qubits longer and greatly reduces the power of microwaves needed to generate the control field. This was vital to operating the technology inside the refrigerator.
In our experiment, we used the dielectric resonator to generate a control field over an area that could contain up to four million qubits. The quantum chip used in this demonstration was a device with two qubits. We were able to show the microwaves produced by the crystal could flip the spin state of each one.
The Path to a Full-Scale Quantum Computer
There is still work to be done before this technology is up to the task of controlling a million qubits. For our study, we managed to flip the state of the qubits, but not yet produce arbitrary superposition states.
Experiments are ongoing to demonstrate this critical capability. We’ll also need to further study the impact of the dielectric resonator on other aspects of the quantum processor.
That said, we believe these engineering challenges will ultimately be surmountable—clearing one of the greatest hurdles to realizing a large-scale spin-based quantum computer.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Serwan Asaad/UNSW, Author provided
#439110 Robotic Exoskeletons Could One Day Walk ...
Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.
Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.
One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to switch the way you want to move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.
Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes—for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.
Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
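The core operation such convolutional networks stack is the convolution itself: slide a small kernel across the image and score how strongly each patch matches it. The toy sketch below uses synthetic 4x4 "images" and a hand-picked edge kernel (a trained network would instead learn thousands of kernels from data like ExoNet's) to show why a stair-like pattern excites an edge detector while flat ground does not:

```python
import numpy as np

# One convolution pass: slide a small kernel over the image and record
# how strongly each patch matches. Convolutional networks stack many
# learned kernels like this to recognize stairs, doors, and terrain.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny synthetic scenes: alternating bright "treads" vs. flat ground.
stairs = np.array([[0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1.]])
flat = np.zeros((4, 4))

# A horizontal-edge kernel: fires on dark-to-bright transitions.
edge = np.array([[-1, -1],
                 [ 1,  1.]])

stair_response = np.abs(conv2d(stairs, edge)).sum()  # large
flat_response = np.abs(conv2d(flat, edge)).sum()     # zero
```

A classifier like ExoNet's learns which combinations of such filter responses distinguish, say, an upcoming staircase from level pavement, instead of relying on a human to hand-pick the kernels.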
According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.
In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”
In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software with low computational and memory storage requirements, which are important for onboard, real-time operations on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movements.
Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”
#439105 This Robot Taught Itself to Walk in a ...
Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.
This likely isn’t the first robot video you’ve seen, nor the most polished.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of iRobot is just around the corner.
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
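That stove-and-jelly-bean loop is, in miniature, the classic Q-learning algorithm: try actions, score each one by the reward that follows, and gradually prefer whichever has the better running estimate. A toy sketch (purely illustrative, nothing like the scale of the Berkeley setup):

```python
import random

# Minimal Q-learning over the stove-and-jelly-bean example: the agent
# starts with no opinion, tries actions, and updates its value estimates
# toward the rewards it actually experiences.

rewards = {"touch_stove": -1.0, "say_please": +1.0}
Q = {action: 0.0 for action in rewards}   # learned action values
alpha, epsilon = 0.5, 0.2                 # learning rate, exploration rate

random.seed(0)
for _ in range(100):
    if random.random() < epsilon:         # occasionally explore at random
        action = random.choice(list(Q))
    else:                                 # otherwise exploit the best guess
        action = max(Q, key=Q.get)
    # Nudge the estimate toward the observed reward.
    Q[action] += alpha * (rewards[action] - Q[action])

best = max(Q, key=Q.get)   # the agent learns to ask politely
```

Real locomotion training replaces two actions and a fixed reward table with continuous joint commands and a reward shaped around staying upright and moving forward, but the learn-from-consequences loop is the same.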
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But going from simulation to the real world doesn’t always translate.
Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
Once the algorithm was good enough, it graduated to Cassie.
And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have been hard at work applying machine learning to robotics.
Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.
And in the meantime, Boston Dynamics bots are testing the commercial waters.
Still, robotics researchers, who were not part of the Berkeley team, think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”
The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.
Image Credit: University of California Berkeley Hybrid Robotics via YouTube