Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439934 New Spiking Neuromorphic Chip Could ...

When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do so without frying the brain from excessive energy consumption.

In other words, the brain is an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.

But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly they connect to one another, dubbed “synaptic weight.” What’s missing—yet fundamental to our brain’s inner workings—is timing.

This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.

“Compared to the abstract neural networks used in deep learning, the more biological archetypes…still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.

In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component into neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to one that better encapsulates time. Think videos, biosignals, or brain-to-computer speech.

To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.

Let’s Talk Spikes

At the root of the new algorithm is a fundamental principle in brain computing: spikes.

Let’s take a look at a highly abstracted neuron. It’s like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like vesicles filled with chemicals, which in turn trigger an electrical response on the receiving end.

Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.
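
To make that threshold rule concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the standard textbook abstraction of this spike-on-threshold behavior. It is not the specific circuit model implemented on BrainScaleS-2, and the parameter values are arbitrary placeholders.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates incoming current, and emits a spike only when it crosses threshold."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v) + i_in * dt  # leak, then integrate
        if v >= v_thresh:            # weak input never gets here: a built-in noise filter
            spikes.append(step * dt)
            v = v_rest               # reset after firing
    return spikes

print(lif_spike_times(np.full(100, 20.0)))  # weak drive: no spikes at all
print(lif_spike_times(np.full(100, 80.0)))  # strong drive: regular spikes
```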

But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse Code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.

So why not adopt the same strategy for neuromorphic computers?

A Spartan Brain-Like Chip

Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, but comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
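
As a toy illustration (not the authors’ implementation), a time-to-first-spike encoder simply maps stronger inputs to earlier spikes, so a single latency per neuron stands in for an entire spike train:

```python
import numpy as np

def ttfs_encode(values, t_max=1.0):
    """Time-to-first-spike code: the stronger the input (scaled 0..1),
    the earlier the neuron fires; a zero input never fires at all."""
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return np.where(values > 0, t_max * (1.0 - values), np.inf)

# A bright pixel fires early, a dim one fires late, a black one stays silent --
# one number per neuron instead of a whole spike train.
print(ttfs_encode([0.9, 0.2, 0.0]))   # -> [0.1  0.8  inf]
```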

The team next encoded the algorithm onto a neuromorphic chip—the BrainScaleS-2, which roughly emulates simple “neurons” inside its structure, but runs over 1,000 times faster than our biological brains. The platform has over 500 physical artificial neurons, each capable of receiving 256 inputs through configurable synapses, where biological neurons swap, process, and store information.

The setup is a hybrid. “Learning” is achieved on a chip that implements the time-dependent algorithm. However, any updates to the neural circuit—that is, how strongly one neuron connects to another—are achieved through an external workstation, something dubbed “in-the-loop training.”
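
Conceptually, the split looks like the sketch below: the (here simulated) chip produces spike latencies in the forward pass, while the host workstation computes the weight updates and writes them back. This is schematic only; the `simulated_chip_forward` stand-in, the loss, and all parameters are hypothetical and do not reflect the real BrainScaleS-2 software stack or the paper’s learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_chip_forward(weights, x):
    """Stand-in for the analog hardware: returns a surrogate spike latency per
    output unit, where more input drive means an earlier (smaller) latency."""
    return 1.0 / (1.0 + np.exp(weights @ x))

def in_the_loop_training(data, n_in, n_out, epochs=200, lr=0.5):
    """Hardware-in-the-loop sketch: forward passes 'on chip', updates on the host."""
    weights = rng.normal(0.0, 0.1, size=(n_out, n_in))
    for _ in range(epochs):
        for x, target_latency in data:
            latency = simulated_chip_forward(weights, x)       # measured on hardware
            # Host-side gradient of 0.5*(latency - target)^2 with respect to the weights.
            grad = np.outer(-(latency - target_latency) * latency * (1.0 - latency), x)
            weights -= lr * grad                               # written back to the chip
    return weights

# Toy task: drive output 0 to spike early (0.1) and output 1 to spike late (0.9).
data = [(np.array([1.0, 0.0, 1.0]), np.array([0.1, 0.9]))]
w = in_the_loop_training(data, n_in=3, n_out=2)
print(simulated_chip_forward(w, data[0][0]))   # latencies move toward [0.1, 0.9]
```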

In a first test, the algorithm was challenged with the “Yin-Yang” task, which requires the algorithm to parse different areas in the traditional Eastern symbol. The algorithm excelled, with an average of 95 percent accuracy.

The team next challenged the setup with a classic deep learning task—MNIST, a dataset of handwritten numbers that revolutionized computer vision. The algorithm excelled again, with nearly 97 percent accuracy. Even more impressive, the BrainScaleS-2 system took less than one second to classify 10,000 test samples, with extremely low relative energy consumption.

Putting these results into context, the team next compared BrainScaleS-2’s performance—armed with the new algorithm—to commercial and other neuromorphic platforms. Take SpiNNaker, a massive, parallel distributed architecture that also mimics neural computing and spikes. The new algorithm was over 100 times faster at image recognition while consuming just a fraction of the power SpiNNaker consumes. Similar results were seen with TrueNorth, IBM’s pioneering neuromorphic chip.

What Next?

The brain’s two most valuable computing features—energy efficiency and parallel processing—are now heavily inspiring the next generation of computer chips. The goal? Build machines that are as flexible and adaptive as our own brains while using just a fraction of the energy required for our current silicon-based chips.

Yet compared to deep learning, which relies on artificial neural networks, biologically plausible ones have languished. Part of this, explained Frenkel, is the difficulty of “updating” these circuits through learning. However, with BrainScaleS-2 and a touch of timing data, it’s now possible.

At the same time, having an “external” arbitrator for updating synaptic connections gives the whole system some time to breathe. Neuromorphic hardware, similar to the messiness of our brain computation, is littered with mismatches and errors. With the chip and an external arbitrator, the whole system can learn to adapt to this variability, and eventually compensate for—or even exploit—its quirks for faster and more flexible learning.

For Frenkel, the algorithm’s power lies in its sparseness. The brain, she explained, is powered by sparse codes that “could explain the fast reaction times…such as for visual processing.” Rather than activating entire brain regions, only a few neural networks are needed—like whizzing down empty highways instead of getting stuck in rush hour traffic.

Despite its power, the algorithm still has hiccups. It struggles with interpreting static data, although it excels with time sequences—for example, speech or biosignals. But to Frenkel, it’s the start of a new framework: important information can be encoded with a flexible but simple metric, and generalized to enrich brain- and AI-based data processing with a fraction of the traditional energy costs.

“[It]…may be an important stepping-stone for spiking neuromorphic hardware to finally demonstrate a competitive advantage over conventional neural network approaches,” she said.

Image Credit: Classifying data points in the Yin-Yang dataset, by Göltz and Kriener et al. (Heidelberg / Bern)


#439929 GITAI’s Autonomous Robot Arm Finds ...

Late last year, Japanese robotics startup GITAI sent their S1 robotic arm up to the International Space Station as part of a commercial airlock extension module to test out some useful space-based autonomy. Everything moves pretty slowly on the ISS, so it wasn't until last month that NASA astronauts installed the S1 arm and GITAI was able to put the system through its paces—or rather, sit in comfy chairs on Earth and watch the arm do most of its tasks by itself, because that's the dream, right?

The good news is that everything went well, and the arm did everything GITAI was hoping it would do. So what's next for commercial autonomous robotics in space? GITAI's CEO tells us what they're working on.

In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).

One of the advantages of working in space is that it's a highly structured environment. Microgravity can be somewhat unpredictable, but you have a very good idea of the characteristics of objects (and even of lighting) because everything that's up there is excessively well defined. So, stuff like using a two-finger gripper for relatively high precision tasks is totally possible, because the variation that the system has to deal with is low. Of course, things can always go wrong, so GITAI also tested teleop procedures from Houston to make sure that having humans in the loop was also an effective way of completing tasks.

Since full autonomy is vastly more difficult than almost full autonomy, occasional teleop is probably going to be critical for space robots of all kinds. We spoke with GITAI CEO Sho Nakanose to learn more about their approach.

IEEE Spectrum: What do you think is the right amount of autonomy for robots working inside of the ISS?

Sho Nakanose: We believe that a combination of 95% autonomous control and 5% remote judgment and remote operation is the most efficient way to work. In this ISS demonstration, all the work was performed with 99% autonomous control and 1% remote decision making. However, in actual operations on the ISS, irregular tasks will occur that cannot be handled by autonomous control, and we believe that such irregular tasks should be handled by remote control from the ground, so we believe that the final ratio of about 5% remote judgment and remote control will be the most efficient.

GITAI will apply the general-purpose autonomous space robotics technology, know-how, and experience acquired through this tech demo to develop extra-vehicular robotics (EVR) that can execute docking, repair, and maintenance tasks for On-Orbit Servicing (OOS) or conduct various activities for lunar exploration and lunar base construction. -Sho Nakanose

I'm sure you did many tests with the system on the ground before sending it to the ISS. How was operating the robot on the ISS different from the testing you had done on Earth?

The biggest difference between experiments on the ground and on the ISS is the microgravity environment, but it was not that difficult to cope with. However, experiments on the ISS, which is an unknown environment that we have never been to before, are subject to a variety of unexpected situations that were extremely difficult to deal with, for example an unexpected communication breakdown occurred due to a failed thruster firing experiment on the Russian module. However, we were able to solve all the problems because the development team had carefully prepared for the irregularities in advance.

It looked like the robot was performing many tasks using equipment designed for humans. Do you think it would be better to design things like screws and control panels to make them easier for robots to see and operate?

Yes, I think so. Unlike the ISS that was built in the past, it is expected that humans and robots will cooperate to work together in the lunar orbiting space station Gateway and the lunar base that will be built in the future. Therefore, it is necessary to devise and implement an interface that is easy to use for both humans and robots. In 2019, GITAI received an order from JAXA to develop guidelines for an interface that is easy for both humans and robots to use on the ISS and Gateway.

What are you working on next?

We are planning to conduct an on-orbit extra-vehicular demonstration in 2023 and a lunar demonstration in 2025. We are also working on space robot development projects for several customers for which we have already received orders.


#439920 Video Friday: Your Robot Dog

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.
I don't know how much this little quadruped from DeepRobotics costs, but the video makes it look scarily close to a consumer product.

Jueying Lite2 is an intelligent quadruped robot independently developed by DeepRobotics. Based on advanced control algorithms, it has multiple motion modes such as walking, sliding, jumping, running, and back somersault. It has freely superimposed intelligent modules, capable of autonomous positioning and navigation, real-time obstacle avoidance, and visual recognition. It has a user-oriented design concept, with new functions such as voice interaction, sound source positioning, and safety and collision avoidance, giving users a better interactive experience and safety assurance.
[ DeepRobotics ]
We hope that this video can assist the community in explaining what ROS is, who uses it, and why it is important to those unfamiliar with ROS.
https://vimeo.com/639235111/9aa251fdb6
[ ROS.org ]

Boston Dynamics should know better than to post new videos on Fridays (as opposed to Thursday nights, when I put this post together every week), but if you missed this last week, here you go.

Robot choreography by Boston Dynamics and Monica Thomas.
[ Boston Dynamics ]
DeKonBot 2: for when you want things cleaned really, really, really slowly.

[ Fraunhofer ]
Who needs Digit when Cassie is still hard at work!

[ Michigan Robotics ]
I am not making any sort of joke about sausage handling.

[ Soft Robotics ]
A squad of mini rovers traversed the simulated lunar soils of NASA Glenn's SLOPE (Simulated Lunar Operations) lab recently. The shoebox-sized rovers were tested to see if they could navigate the conditions of hard-to-reach places such as craters and caves on the Moon.
[ NASA Glenn ]
This little cyclocopter is cute, but I'm more excited for the teaser at the end of the video.

[ TAMU ]
Fourteen years ago, a team of engineering experts and Virginia Tech students competed in the 2007 DARPA Urban Challenge and propelled Torc to success. We look forward to many more milestones as we work to commercialize autonomous trucks.
[ Torc ]
Blarg not more of this…

Show me the robot prepping those eggs and doing the plating, please.
[ Moley Robotics ]
ETH Zurich's unique non-profit project continues! From 25 to 27 October 2024, the third edition of the CYBATHLON will take place in a global format. To the original six disciplines, two more are added: a race using smart visual assistive technologies and a race using assistive robots. As a platform, CYBATHLON challenges teams from around the world to develop everyday assistive technologies for, and in collaboration with, people with disabilities.
[ Cybathlon ]
Will drone deliveries be a practical part of our future? We visit the test facilities of Wing to check out how their engineers and aircraft designers have developed a drone and drone fleet control system that is actually in operation today in parts of the world.
[ Tested ]
In our third Self-Driven Women event, Waymo engineering leads Allison Thackston, Shilpa Gulati, and Congcong Li talk about some of the toughest and most interesting problems in ML and robotics and how they enable building a scalable autonomous driving tech stack. They also discuss their respective career journeys, and answer live questions from the virtual audience.
[ Waymo ]
The Robotics and Automation Society Student Activities Committee (RAS SAC) is proud to present “Transition to a Career in Academia,” a panel with robotics thought leaders. This panel is intended for robotics students and engineers interested in learning more about careers in academia after earning their degree. The panel will be moderated by RAS SAC Co-Chair, Marwa ElDinwiny.
[ IEEE RAS ]
This week's CMU RI Seminar is from Siddharth Srivastava at Arizona State, on The Unusual Effectiveness of Abstractions for Assistive AI.

[ CMU RI ]


#439916 This Restaurant Robot Fries Your Food to ...

Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy’s main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.

This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves.

This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they’re consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy.

So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets.

These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot’s AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won’t be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area.
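
As a purely illustrative sketch (Miso Robotics has not published its control software, so the food names, basket numbers, and cook times below are made up), the dispatch logic described above amounts to routing each vision-detected item to its own dedicated basket with a food-specific cook time:

```python
# Hypothetical routing table: each food gets its own basket and cook time.
COOK_PLAN = {
    "french_fries": {"basket": 1, "seconds": 180},
    "onion_rings":  {"basket": 2, "seconds": 150},
    "fish_sticks":  {"basket": 3, "seconds": 240},
}

def handle_bin_item(detected_food: str) -> str:
    """Route a recognized item to its dedicated fry basket, then to hot holding."""
    plan = COOK_PLAN.get(detected_food)
    if plan is None:
        return f"{detected_food}: not recognized, flag for a human operator"
    return (f"{detected_food}: fry in basket {plan['basket']} for "
            f"{plan['seconds']} s, then move to the hot-holding area")

print(handle_bin_item("onion_rings"))
print(handle_bin_item("mystery_item"))
```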

Miso Robotics says the new robot’s throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour. So much fried food. Luckily, Americans can’t get enough fried food, in general and especially as the pandemic drags on. Even more importantly, the current labor shortages we’re seeing mean restaurant chains can’t hire enough people to cook fried food, making automated tools like Flippy not only helpful, but necessary.

“Since Flippy’s inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.”

At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of many technologies that were already on the horizon, with automation of repetitive tasks being high on the list. They were right, and we’ve been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can’t cook every day). Now if only there was a tech fix for inflation and housing shortages…

Seeing as how there’ve been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald’s announced a collaboration to create drive-through lanes run by AI.

So it may not be long before you can order a meal from one computer, have that meal cooked by another computer, then have it delivered to your home or waiting vehicle by a third—you guessed it—computer.

Image Credit: Miso Robotics


#439913 A system to control robotic arms based ...

For people with motor impairments or physical disabilities, completing daily tasks and house chores can be incredibly challenging. Recent advancements in robotics, such as brain-controlled robotic limbs, have the potential to significantly improve their quality of life.
