#439916 This Restaurant Robot Fries Your Food to ...

Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy’s main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.

This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves.

This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they’re consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy.

So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets.

These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot’s AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won’t be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area.
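
To make that workflow concrete, here is a minimal sketch of how a vision-driven fry-station dispatch loop might be structured. Miso Robotics hasn’t published Flippy’s software, so the food labels, cook times, confidence cutoff, and robot interface below are purely hypothetical:

```python
# Hypothetical sketch of a Flippy-2-style fry-station dispatch loop.
# Miso Robotics has not published its software; every name, label, and
# timing here is an illustrative assumption, not the company's code.

from dataclasses import dataclass

# One dedicated basket and cook profile per food type, so onion rings
# never share a basket with fish sticks.
COOK_PROFILES = {
    "onion_rings":  {"basket_id": 1, "cook_seconds": 150},
    "fish_sticks":  {"basket_id": 2, "cook_seconds": 240},
    "french_fries": {"basket_id": 3, "cook_seconds": 180},
}

@dataclass
class BinItem:
    label: str         # food type reported by the vision model
    confidence: float  # classifier confidence, 0.0 to 1.0

def dispatch(item: BinItem, robot) -> None:
    """Route one detected item through frying to the hot-holding area."""
    profile = COOK_PROFILES.get(item.label)
    if profile is None or item.confidence < 0.9:
        robot.flag_for_human(item)  # fall back rather than guess
        return
    robot.load_basket(profile["basket_id"], item)
    robot.fry(profile["basket_id"], profile["cook_seconds"])
    robot.move_to_holding(profile["basket_id"])
```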

Miso Robotics says the new robot’s throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour (implying the original Flippy managed roughly 46). So much fried food. Luckily, Americans can’t get enough fried food, especially as the pandemic drags on. Even more importantly, current labor shortages mean restaurant chains can’t hire enough people to cook it, making automated tools like Flippy not only helpful, but necessary.

“Since Flippy’s inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.”

At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of technologies that were already on the horizon, with automation of repetitive tasks high on the list. They were right, and we’ve been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can’t cook every day). Now if only there were a tech fix for inflation and housing shortages…

Seeing as how there’ve been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald’s announced a collaboration to create drive-through lanes run by AI.

So it may not be long before you can order a meal from one computer, have that meal cooked by another computer, then have it delivered to your home or waiting vehicle by a third—you guessed it—computer.

Image Credit: Miso Robotics


#439842 AI-Powered Brain Implant Eases Severe ...

Sarah hadn’t laughed in five years.

At 36 years old, the avid home cook has struggled with depression since early childhood. She tried the whole range of antidepressant medications and therapy for decades. Nothing worked. One night, five years ago, driving home from work, she had one thought in her mind: this is it. I’m done.

Luckily she made it home safe. And soon she was offered an intriguing new possibility to tackle her symptoms—a little chip, implanted into her brain, that captures the unique neural signals encoding her depression. Once the implant detects those signals, it zaps them away with a brief electrical jolt, like adding noise to an enemy’s digital transmissions to scramble their original message. When that message triggers depression, hijacking neural communications is exactly what we want to do.

Flash forward several years, and Sarah has her depression under control for the first time in her life. Her suicidal thoughts evaporated. After quitting her tech job due to her condition, she’s now back on her feet, enrolled in data analytics classes and taking care of her elderly mother. “For the first time,” she said, “I’m finally laughing.”

Sarah’s recovery is just one case. But it signifies a new era for the technology underlying her stunning improvement. It’s one of the first cases in which a personalized “brain pacemaker” can stealthily tap into, decipher, and alter a person’s mood and introspection based on their own unique electrical brain signatures. And while those implants have achieved stunning medical miracles in other areas—such as allowing people with paralysis to walk again—Sarah’s recovery is some of the strongest evidence yet that a computer chip, in a brain, powered by AI, can fundamentally alter our perception of life. It’s the closest to reading and repairing a troubled mind that we’ve ever gotten.

“We haven’t been able to do this kind of personalized therapy previously in psychiatry,” said study lead Dr. Katherine Scangos at UCSF. “This success in itself is an incredible advancement in our knowledge of the brain function that underlies mental illness.”

Brain Pacemaker
The key to Sarah’s recovery is a brain-machine interface.

Roughly the size of a matchbox, the implant sits inside the brain, silently listening to and decoding its electrical signals. Using those signals, it’s possible to control other parts of the brain or body. Brain implants have given people with lower body paralysis the ability to walk again. They’ve allowed amputees to control robotic hands with just a thought. They’ve opened up a world of sensations, integrating feedback from cyborg-like artificial limbs that transmit signals directly into the brain.

But Sarah’s implant is different.

Sensation and movement are generally controlled by relatively well-defined circuits in the outermost layer of the brain: the cortex. Emotion and mood are also products of our brain’s electrical signals, but they tend to stem from deeper neural networks hidden at the center of the brain. One way to tap into those circuits is called deep brain stimulation (DBS), a method pioneered in the ’80s that’s been used to treat severe Parkinson’s disease and epilepsy, particularly in cases that don’t respond to medication.

Sarah’s neural implant takes this route: it listens in on the chatter between neurons deep within the brain to decode mood.

But where is mood in the brain? One particular problem, the authors explained, is that unlike movement, there is no “depression brain region.” Rather, emotions are regulated by intricate, intertwining networks across multiple brain regions. Adding to that complexity is the fact that we’re all neural snowflakes—each of us has uniquely personalized brain network connections.

In other words, zapping my circuit to reduce depression might not work for you. DBS, for example, has previously been studied for treating depression. But despite decades of research, it’s not federally approved due to inconsistent results. The culprit? The electrical stimulation patterns used in those trials were constant and engineered to be one-size-fits-all. Have you ever tried buying socks or PJs at a department store, seen the tag that says “one size,” and found they don’t fit? Yeah. DBS has brought about remarkable improvements for some people with depression—ill-fitting socks are better than none in a pinch. But with increasingly sophisticated neuroengineering methods, we can do better.

The solution? Let’s make altering your brain more personal.

Unconscious Reprieve
That’s the route Sarah’s doctors, including study lead Dr. Katherine Scangos and UCSF neurosurgeon Dr. Edward Chang, took in the new study.

The first step in detecting depression-related activity in the brain was to be able to listen in. The team implanted 10 electrodes in Sarah’s brain, targeting multiple regions encoding emotion-related circuits. They then recorded electrical signals from these regions over the course of 10 days, while Sarah journaled about how she felt each day—happy or low. In the background, the team peeked into her brain activity patterns, a symphony of electrical signals in multiple frequencies, like overlapping waves on the ocean.
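
The paper’s decoding pipeline isn’t detailed in this article, but a common first step in this kind of mapping is to measure the power of each frequency band in each electrode channel. Here is a generic sketch of that step, assuming a hypothetical sampling rate and band edges; it is illustrative, not the team’s actual code:

```python
# Generic band-power extraction for one electrode channel. This is an
# illustrative sketch, NOT the study's pipeline; the 512 Hz sampling
# rate and the band edges are assumptions made for the example.
import numpy as np
from scipy.signal import welch

FS = 512  # assumed sampling rate, Hz

BANDS = {
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta":  (12, 30),
    "gamma": (30, 70),
}

def band_powers(channel: np.ndarray) -> dict:
    """Average spectral power of one channel in each frequency band."""
    freqs, psd = welch(channel, fs=FS, nperseg=FS * 2)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers
```

Pairing features like these with daily mood reports is what lets software pinpoint which signals track a patient’s symptoms.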

One particular brain wave emerged. It stemmed from the amygdala, a region normally involved in fear, lust, and other powerful emotions. Software-based mapping pinpointed the node as a powerful guide to Sarah’s mental state.

In contrast, another area tucked deep inside the brain, the ventral capsule/ventral striatum (VC/VS), emerged as a place to stimulate with little bouts of electricity to disrupt patterns leading to feelings of depression.

The team next implanted an FDA-approved neural pacemaker into the right hemisphere of Sarah’s brain, with two sensing leads to capture activity from the amygdala and two stimulating wires to zap the VC/VS. The implant, previously used in epilepsy treatments, continuously senses neural activity. It’s both off-the-shelf and programmable, in that the authors could instruct it to detect “pre-specified patterns of activation” related to Sarah’s depressive episodes, and deliver short bursts of electrical stimulation only then. Just randomly stimulating the amygdala could “actually cause more stress and more depression symptoms,” said Dr. Chang in a press conference.
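
In outline, that closed-loop behavior looks something like the sketch below. The device’s real detection algorithm is proprietary, so the threshold, burst length, polling rate, and the two sensing/stimulation functions are all invented for illustration:

```python
# Sketch of the closed-loop "detect, then stimulate" logic. The real
# device's detection algorithm is proprietary; the threshold, burst
# length, and the two callables below are invented for illustration.
import time

BIOMARKER_THRESHOLD = 1.5  # assumed trigger level, arbitrary units
BURST_MS = 6000            # assumed stimulation burst length
REFRACTORY_S = 60          # assumed lockout so bursts don't pile up

def control_loop(sense_amygdala, stimulate_vcvs):
    """Fire a short VC/VS burst only when the depression marker appears."""
    while True:
        marker = sense_amygdala()  # e.g., band power from the sensing leads
        if marker > BIOMARKER_THRESHOLD:
            stimulate_vcvs(duration_ms=BURST_MS)
            time.sleep(REFRACTORY_S)  # wait before re-arming
        time.sleep(0.1)  # poll at roughly 10 Hz, also an assumption
```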

Brain surgery wasn’t easy. But to Sarah, drilling several holes into her brain was less difficult than the emotional pain of her depression. Every day during the trial, she waved a figure-eight-shaped wand over her head, which wirelessly captured 90 seconds of her brain’s electrical activity, while she reported on her mental health.

When the stimulator turned on (even when she wasn’t aware it was on), “a joyous feeling just washed over me,” she said.

A New Neurological Future
For now, the results are just for one person. But if repeated—and Sarah could be a unique case—they suggest we’re finally at the point where we can tap into each unique person’s emotional mindset and fundamentally alter their perception of life.

And with that comes intense responsibility. Sarah’s neural “imprint” of her depression is tailored to her. It might be completely different for someone else. It’s something for future studies to dig into. But what’s clear is that it’s possible to regulate a person’s emotions with an AI-powered brain implant. And if other neurological disorders can be decoded in a similar way, we could use brain pacemakers to treat some of our toughest mental foes.

“God, the color differentiation is gorgeous,” said Sarah as her implant turned on. “I feel alert. I feel present.”

Image Credit: Sarah in her community garden, photo by John Lok/UCSF 2021


#439714 Exosuit That Helps With the Heavy ...

New advances in robotics can help push the limits of the human body to make us faster or stronger. But now researchers from the Biorobotics Laboratory at Seoul National University (SNU) have designed an exosuit that corrects body posture. Their recent paper describes the Movement Reshaping (MR) Exosuit, which, rather than augmenting any part of the human body, couples the motion of one joint to lock or unlock the motion of another joint. It works passively, without any motors or batteries.
For instance, when attempting to lift a heavy object off the floor, most of us stoop from the waist, which is an injury-inviting posture. The SNU device hinders the stooping posture and helps correct it to a (safer) squatting one. “We call our methodology ‘body-powered variable impedance,’” says Kyu-Jin Cho, a biorobotics engineer and one of the authors, “[as] we can change the impedance of a joint by moving another.”
Most lift-assist devices—such as Karl Zelik's HeroWear—are designed to reduce the wearer's fatigue by providing extra power and minimizing interference in their volitional movements, says co-author Jooeun Ahn. “On the other hand, our MR Exosuit is focusing on reshaping the wearer's lifting motion into a safe squatting form, as well as providing extra assistive force.”

[Video: the movement reshaping exosuit enabling safe lifting]

The MR suit has been designed to mitigate injuries for workers in factories and warehouses who undertake repetitive lifting work. “Many lift-related injuries are caused not only by muscle fatigue but also by improper lifting posture,” adds Keewon Kim, a rehabilitation medicine specialist at SNU College of Medicine, who also contributed to the study. Stooping is easier than squatting, and humans tend to choose the more comfortable strategy. “Because the deleterious effects of such comfortable but unsafe motion develop slowly, people do not perceive the risk in time, as in the case of disk degeneration.”
The researchers designed a mechanism to lock the hip flexion when a person tries to stoop and unlock it when they try to squat. “We connected the top of the back to the foot with a unique tendon structure consisting of vertical brake cables and a horizontal rubber band,” explains graduate researcher and first author of the study Sung-Sik Yoon. “When the hip is flexed while the knee is not flexed, the hip flexion torque is delivered to the foot through the brake cable, causing strong resistance to the movement. However, if the knees are widened laterally for squatting, the angle of the tendons changes, and the hip flexion torque is switched to be supported by the rubber band.”
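
Based purely on that description, here is a toy model of the switching behavior: hip flexion meets stiff resistance when the knees point forward (the brake-cable path) and soft resistance when they are widened laterally (the rubber-band path). The stiffness values and switching angle are made up for illustration, not taken from the paper:

```python
# Toy model of the exosuit's "body-powered variable impedance," built
# only from the description above; the stiffness values and switching
# angle are made-up numbers, not the paper's parameters.
import math

CABLE_STIFFNESS = 200.0  # N·m/rad along the stiff brake-cable path (assumed)
BAND_STIFFNESS = 10.0    # N·m/rad along the soft rubber-band path (assumed)
KNEE_SWITCH_DEG = 30.0   # lateral knee widening that re-routes the load (assumed)

def hip_resistance(hip_flexion_deg: float, knee_widening_deg: float) -> float:
    """Resistive torque at the hip, in N·m, for a given posture."""
    hip_rad = math.radians(hip_flexion_deg)
    if knee_widening_deg < KNEE_SWITCH_DEG:
        # Knees pointing forward: stooping loads the stiff cable.
        return CABLE_STIFFNESS * hip_rad
    # Knees widened for a squat: the load shifts to the rubber band.
    return BAND_STIFFNESS * hip_rad

print(hip_resistance(60, 5))   # stooping: ~209 N·m of resistance
print(hip_resistance(60, 45))  # squatting: ~10 N·m
```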

The device was tested on ten human participants, all first-time users of the suit. Nine out of ten changed their motion pattern closer to the squatting form while wearing the exosuit. This, says Ahn, amounts to a 35% improvement in the participants’ average postural index. The researchers also measured a 5.3% reduction in the participants’ average metabolic energy consumption. “We are now working on improving the MR Exosuit in order to test it in a real manual working place,” Ahn adds. “We are going to start a field test soon.”
The researchers plan to commercialize the device next year, but there are still some kinks to work out. While the effectiveness of the suit has been verified in their paper, the long-term effects of wearing it have not. “In the future, we plan to conduct a longitudinal experiment in various fields that require lift posture training, such as industrial settings, gyms, and rehabilitation centers,” says Cho.

They are also planning a follow-up study to expand the principle of body-powered variable impedance to sports applications. “Many sports that utilize the whole body, such as golf, swimming, and running, require proper movement training to improve safety and performance,” Cho continues. “As in this study, we will develop sportswear for motion training suitable for various sports activities using soft materials such as cables and rubber bands.”
This study shows that artificial tendons whose structure is different from that of humans can effectively assist humans by reshaping the motor pattern, says Ahn. The current version of the exosuit can also be used to prevent inappropriate lifting motions of patients with poor spinal conditions. He and his colleagues expect that their design will lead to changes in future research on wearable robotics: “We demonstrated that wearable devices do not have to mimic the original anatomical structure of humans.”


#439691 Researchers develop bionic arm that ...

Cleveland Clinic researchers have engineered a first-of-its-kind bionic arm for patients with upper-limb amputations that allows wearers to think, behave, and function like a person without an amputation, according to new findings published in Science Robotics.


#439646 Elon Musk Has No Idea What He’s Doing ...

Yesterday, at the end of Tesla’s AI Day, Elon Musk introduced a concept for “Tesla Bot,” a 125 lb, 5'8″ tall electromechanically actuated autonomous bipedal “general purpose” humanoid robot. By “concept,” I mean that Musk showed some illustrations and talked about his vision for the robot, which struck me as, let’s say, somewhat naïve. Based on the content of a six-minute-long presentation, it seems as though Musk believes that someone (Tesla, suddenly?) should just go make an autonomous humanoid robot already—like, the technology exists, so why not do it?

To be fair, Musk did go out and do more or less exactly that for electric cars and reusable rockets. But humanoid robots are much different, and much more complicated. With rockets, well, we already had rockets. And with electric cars, we already had cars, batteries, sensors, and the DARPA competitions to build on. I don’t say this to minimize what Musk has done with SpaceX and Tesla, but rather to emphasize that humanoid robotics is a very different challenge.

Unlike rockets or cars, humanoid robots aren’t an existing technology that just needs an ambitious vision, a team of clever people, and sustained financial investment. With humanoid robotics, there are many more problems to solve, the problems are harder, and we’re much farther away from practical solutions. Lots of very smart people have been actively working on these things for decades, and there’s still a laundry list of fundamental breakthroughs in hardware and especially software that are probably necessary to make Musk’s vision happen.

Are these fundamental breakthroughs impossible for Tesla? Not impossible, no. But from listening to what Elon Musk said at AI Day, I don’t think he has any idea what getting humanoid robots to do useful stuff actually involves. Let’s talk about why.

Watch the presentation if you haven't yet, and then let's go through what Musk talks about.

Okay, here we go!
“Our cars are semi-sentient robots on wheels.”

I don't know what that even means. Semi-sentient? Sure, whatever, a cockroach is semi-sentient I guess, although the implicit suggestion that these robots are therefore somehow part of the way towards actual sentience is ridiculous. Besides, autonomous cars live in a highly constrained action space within a semi-constrained environment, and Tesla cars in particular have plenty of well-known issues with their autonomy.

“With the full self-driving computer, essentially the inference engine on the car (which we'll keep evolving, obviously) and Dojo, and all the neural nets recognizing the world, understanding how to navigate through the world, it kind of makes sense to put that onto a humanoid form.”
Yes, because that's totally how it works. Look, the neural networks in a Tesla (the car) are trained to recognize the world from a car's perspective. They look for things that cars need to understand, and they have absolutely no idea about anything else, which can cause all kinds of problems for them. Same with navigation: autonomous cars navigate through a world that consists of roads and road-related stuff. You can't just “put that” onto a humanoid robot and have any sort of expectation that it'll be useful, unless all you want it to do is walk down the middle of the street and obey traffic lights. Also, the suggestion here seems to be that “AI for general purpose robotics” can be solved by just throwing enough computing power at it, which as far as I'm aware is not even remotely how that works, especially with physical robots.

“[Tesla] is also quite good at sensors and batteries and actuators. So, we think we'll probably have a prototype sometime next year.”
It's plausible that by spending enough money, Tesla could construct a humanoid robot with batteries, actuators, and computers in a similar design to what Musk has described. Can Tesla do it by sometime next year like Musk says they can? Sure, why not. But the hard part is not building a robot, it's getting that robot to do useful stuff, and I think Musk is way out of his depth here. People without a lot of experience in robotics often seem to think that once you've built the robot, you've solved most of the problem, so they focus on mechanical things like actuators and what it'll look like and how much it can lift and whatever. But that's backwards, and the harder problems come after you've got a robot that's mechanically functional.

What the heck does “human-level hands” mean?

“It's intended to navigate through a world built for humans…”
This is one of the few good reasons to make a humanoid robot, and I’m not even sure that, by itself, it’s a good enough reason to do so. But in any case, the word “intended” is doing a lot of heavy lifting here. The implications of a world built for humans include an almost infinite variety of different environments, full of all kinds of robot-unfriendly things, not to mention the safety aspects of an inherently unstable 125 lb robot.

I feel like I have a pretty good handle on the current state of the art in humanoid robotics, and if you visit this site regularly, you probably do too. Companies like Boston Dynamics and Agility Robotics have been working on robots that can navigate through human environments for literally decades, and it's still a super hard problem. I don't know why Musk thinks that he can suddenly do better.


The “human-level hands” that you see annotated in Musk's presentation above are a good example of why I think Musk doesn't really grasp how much work this robot is going to be. What does “human-level hands” even mean? If we're talking about five-fingered hands with human-equivalent sensing and dexterity, those do exist (sort of), although they're generally fragile and expensive. It would take an enormous engineering effort to make hands like that into something practical just from a hardware perspective, which is why nobody has bothered—most robots use much simpler, much more robust two or three finger grippers instead. Could Tesla solve this problem? I have no doubt that they could, given enough time and money. But they've also got every other part of the robot to deal with. And even if you can make the hardware robust enough to be useful, you've still got to come up with all of the software to make it work. Again, we're talking about huge problems within huge problems at a scale that it seems like Musk hasn't considered.

“…And eliminate dangerous, repetitive, and boring tasks.”

Great. This is what robots should be doing. But as Musk himself knows, it's easy to say that robots will eliminate dangerous, repetitive, and boring tasks, and much more difficult to actually get them to do it—not because the robots aren't capable, but because humans are far more capable. We set a very high bar for performance and versatility in ways that aren't always obvious, and even when they are obvious, robots may not be able to replicate them effectively.

[Musk makes jokes about robots going rogue.]

Uh, okay.

“Things I think that are hard about having a really useful humanoid robot are, can it navigate through the world without being explicitly trained, without explicit line-by-line instructions? Can you talk to it and say, 'please pick up that bolt and attach it to the car with that wrench?' 'Please go to the store and get me the following groceries?' That kind of thing.”
Robots can already navigate through the world without “explicit line-by-line instructions” when they have a pretty good idea of what “the world” consists of. If the world is “roads” or “my apartment” or “this specific shopping mall,” that's probably a 95%+ solved problem, keeping in mind that the last 5% gets ridiculously complicated. But if you start talking about “my apartment plus any nearby grocery store along with everything between my apartment and that grocery store,” that's a whole lot of not necessarily well structured or predictable space.

And part of that challenge is just physically moving through those spaces. Are there stairs? Heavy doors? Crosswalks? Lots of people? These are complicated enough environments for those small wheeled sidewalk delivery robots with humans in the loop, never mind a (hypothetical) fully autonomous bipedal humanoid that is also carrying objects. And going into a crowded grocery store and picking things up off of shelves and putting them into a basket or a cart that then has to be pushed safely? These are cutting edge unsolved robotics problems, and we've barely seen this kind of thing happen with industrial arms on wheeled bases, even in a research context. Heck, even “pick up that bolt” is not an easy thing for a robot to do right now, if it wasn't specifically designed for that task.

“This I think will be quite profound, because what is the economy—at the foundation, it is labor. So, what happens when there is no shortage of labor? This is why I think long term there will need to be universal basic income. But not right now, because this robot doesn't work.”

Economics is well beyond my area of expertise, but as Musk says, until the robot works, this is all moot.

“AI for General Purpose Robotics.” Sure.

It’s possible, even likely, that Tesla will build some sort of Tesla Bot by sometime next year, as Musk says. I think that it won’t look all that much like the concept images in this presentation. I think that it’ll be able to stand up, and perhaps walk. Maybe withstand a shove or two and do some basic object recognition and grasping. And I think after that, progress will be slow. I don’t think Tesla will catch up with Boston Dynamics or Agility Robotics. Maybe they’ll end up with the equivalent of Asimo: a PR tool that can do impressive demos but is ultimately not all that useful.

Part of what bothers me so much about all this is how Musk's vision for the Tesla Bot implies that he's going to just casually leapfrog all of the roboticists who have been working towards useful humanoids for decades. Musk assumes that he will be able to wander into humanoid robot development and do what nobody else has yet been able to do: build a useful general purpose humanoid. I doubt Musk intended it this way, but I feel like he's backhandedly suggesting that the challenges with humanoids aren't actually that hard, and that if other people were cleverer, or worked harder, or threw more money at the problem, then we would have had general purpose humanoids already.
I think he’s wrong. But if Tesla ends up investing time and money into solving some really hard robotics problems, perhaps they’ll have some success that will help move the entire field forward. And I’d call that a win.
