Tag Archives: science

#435597 Water Jet Powered Drone Takes Off With ...

At ICRA 2015, the Aerial Robotics Lab at Imperial College London presented a concept for a multimodal flying swimming robot called AquaMAV. The really difficult thing about a flying and swimming robot isn’t so much the transition from the first to the second, since you can manage that even if your robot is completely dead (thanks to gravity), but rather the other way: going from water to air, ideally in a stable and repeatable way. The AquaMAV concept solved this by basically just applying as much concentrated power as possible to the problem, using a jet thruster to hurl the robot out of the water with quite a bit of velocity to spare.

In a paper appearing in Science Robotics this week, the roboticists behind AquaMAV present a fully operational robot that uses a water-reactive solid fuel to generate the explosive gas that powers the robot into the air.

The 2015 version of AquaMAV, which was mostly just some very vintage-looking computer renderings and a little bit of hardware, used a small cylinder of CO2 to power its water jet thruster. This worked pretty well, but the mass and complexity of the storage and release mechanism for the compressed gas weren’t all that practical for a flying robot designed for long-term autonomy. It’s a familiar challenge, especially for pneumatically powered soft robots—how do you efficiently generate gas on demand, especially if you need a lot of pressure all at once?

An explosion propels the drone out of the water
There’s one obvious way of generating large amounts of pressurized gas all at once, and that’s explosions. We’ve seen robots use explosive thrust for mobility before, at a variety of scales, and it’s very effective as long as you can both properly harness the explosion and generate the fuel with a minimum of fuss, and this latest version of AquaMAV manages to do both:

The water jet coming out the back of this robot aircraft is being propelled by a gas explosion. The gas comes from the reaction between a little bit of calcium carbide powder stored inside the robot, and water. Water is mixed with the powder one drop at a time, producing acetylene gas, which gets piped into a combustion chamber along with air and water. When ignited, the acetylene air mixture explodes, forcing the water out of the combustion chamber and providing up to 51 N of thrust, which is enough to launch the 160-gram robot 26 meters up and over the water at 11 m/s. It takes just 50 mg of calcium carbide (mixed with 3 drops of water) to generate enough acetylene for each explosion, and both air and water are of course readily available. With 0.2 g of calcium carbide powder on board, the robot has enough fuel for multiple jumps, and the jump is powerful enough that the robot can get airborne even under fairly aggressive sea conditions.
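Those numbers imply a remarkably violent launch. Here's a quick back-of-the-envelope check using only the figures quoted above (a minimal sketch that assumes the 51 N peak thrust is held constant during the jetting phase, which is a simplification):

```python
# Back-of-the-envelope check of the AquaMAV launch figures quoted above.
# Assumes constant peak thrust while jetting vertically, which is a simplification.

GRAVITY = 9.81          # m/s^2
thrust = 51.0           # N, peak thrust reported for the water jet
mass = 0.160            # kg, robot mass (160 grams)
launch_speed = 11.0     # m/s, reported speed leaving the water

weight = mass * GRAVITY
thrust_to_weight = thrust / weight
net_acceleration = (thrust - weight) / mass           # m/s^2 while the jet fires
time_to_launch_speed = launch_speed / net_acceleration

print(f"Thrust-to-weight ratio: {thrust_to_weight:.1f}")         # ~32x
print(f"Net acceleration:       {net_acceleration:.0f} m/s^2")   # ~310 m/s^2, about 30 g
print(f"Time to reach 11 m/s:   {time_to_launch_speed * 1000:.0f} ms")
```

In other words, the robot briefly pulls on the order of 30 g and hits its 11 m/s exit speed in a few tens of milliseconds, which helps explain why the launch still works in fairly aggressive sea conditions.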

Image: Science Robotics

The robot can transition from a floating state to an airborne jetting phase and back to floating (A). A 3D model render of the underside of the robot (B) shows the electronics capsule. The capsule contains the fuel tank (C), where calcium carbide reacts with air and water to propel the vehicle.

Next step: getting the robot to fly autonomously
Providing adequate thrust is just one problem that needs to be solved when attempting to conquer the water-air transition with a fixed-wing robot. The overall design of the robot itself is a challenge as well, because the optimal design and balance for the robot is quite different in each phase of operation, as the paper describes:

For the vehicle to fly in a stable manner during the jetting phase, the center of mass must be a significant distance in front of the center of pressure of the vehicle. However, to maintain a stable floating position on the water surface and the desired angle during jetting, the center of mass must be located behind the center of buoyancy. For the gliding phase, a fine balance between the center of mass and the center of pressure must be struck to achieve static longitudinal flight stability passively. During gliding, the center of mass should be slightly forward from the wing’s center of pressure.
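To see how those three requirements pull against each other, here's a minimal sketch that checks a candidate layout against each condition. All positions and thresholds below are hypothetical placeholders, not values from the paper:

```python
# Checks a candidate layout against the three stability conditions quoted above.
# All positions are measured in meters aft of the nose; every number here is a
# hypothetical placeholder, not a value from the AquaMAV paper.

def check_layout(com, cop, cob, significant=0.03, slight=0.02):
    """com: center of mass, cop: center of pressure, cob: center of buoyancy."""
    return {
        # Jetting: center of mass a significant distance in front of the center of pressure.
        "jetting": cop - com >= significant,
        # Floating: center of mass behind the center of buoyancy.
        "floating": com > cob,
        # Gliding: center of mass only slightly forward of the wing's center of pressure.
        "gliding": 0.0 < cop - com <= slight,
    }

# A layout tuned for jetting fails the gliding condition, and vice versa,
# which is exactly the balancing act the authors describe.
print(check_layout(com=0.10, cop=0.14, cob=0.09))    # jetting-friendly
print(check_layout(com=0.12, cop=0.135, cob=0.09))   # gliding-friendly
```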

The current version is mostly optimized for the jetting phase of flight, and doesn’t have any active flight control surfaces yet, but the researchers are optimistic that if they added some they’d have no problem getting the robot to fly autonomously. It’s just a glider at the moment, but a low-power propeller is the obvious step after that, and to get really fancy, a switchable gearbox could enable efficient movement on water as well as in the air. Long-term, the idea is that robots like these would be useful for tasks like autonomous water sampling over large areas, but I’d personally be satisfied with a remote controlled version that I could take to the beach.

“Consecutive aquatic jump-gliding with water-reactive fuel,” by R. Zufferey, A. Ortega Ancel, A. Farinha, R. Siddall, S. F. Armanini, M. Nasr, R. V. Brahmal, G. Kennedy, and M. Kovac from Imperial College London, is published in the current issue of Science Robotics.

Posted in Human Robots

#435593 AI at the Speed of Light

Neural networks shine for solving tough problems such as facial and voice recognition, but conventional electronic versions are limited in speed and hungry for power. In theory, optics could beat digital electronic computers at the matrix calculations used in neural networks. However, optics have been limited by their inability to perform some of the complex calculations that have required electronics. Now new experiments show that all-optical neural networks can tackle those problems.

The key attraction of neural networks is their massive interconnections among processors, comparable to the complex interconnections among neurons in the brain. This lets them perform many operations simultaneously, like the human brain does when looking at faces or listening to speech, making them more efficient for facial and voice recognition than traditional electronic computers that execute one instruction at a time.

Today's electronic neural networks have reached eight million neurons, but their future use in artificial intelligence may be limited by their high power usage and limited parallelism in connections. Optical connections through lenses are inherently parallel. The lens in your eye simultaneously focuses light from across your field of view onto the retina in the back of your eye, where an array of light-detecting nerve cells detects the light. Each cell then relays the signal it receives to neurons in the brain that process the visual signals to show us an image.

Glass lenses process optical signals by focusing light, which performs a complex mathematical operation called a Fourier transform that preserves the information in the original scene but rearranges it completely. One use of Fourier transforms is converting time variations in signal intensity into a plot of the frequencies present in the signal. The military used this trick in the 1950s to convert raw radar return signals recorded by an aircraft in flight into a three-dimensional image of the landscape viewed by the plane. Today that conversion is done electronically, but the vacuum-tube computers of the 1950s were not up to the task.
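Today that time-to-frequency conversion takes a couple of lines in any numerical library. Here's a minimal sketch using NumPy; the sample rate and the two test tones are arbitrary:

```python
import numpy as np

# Build one second of a signal containing two arbitrary tones (50 Hz and 120 Hz).
sample_rate = 1000                          # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The Fourier transform converts the time-domain samples into a frequency spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# The two strongest frequency components recover the tones we put in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))   # -> [50.0, 120.0]
```

A lens does the equivalent two-dimensional spatial transform all at once, in parallel, simply by focusing the light.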

Development of neural networks for artificial intelligence started with electronics, but their AI applications have been limited by their slow processing and need for extensive computing resources. Some researchers have developed hybrid neural networks, in which optics perform simple linear operations, but electronics perform more complex nonlinear calculations. Now two groups have demonstrated simple all-optical neural networks that do all processing with light.

In May, Wolfram Pernice of the Institute of Physics at the University of Münster in Germany and colleagues reported testing an all-optical “neuron” in which signals switch a phase-change material between amorphous and crystalline states, an effect that has been used for optical data storage. They demonstrated nonlinear processing, and produced output pulses like those from biological neurons. They then produced an integrated photonic circuit that incorporated four optical neurons operating at different wavelengths, each of which connected to 15 optical synapses. The photonic circuit contained more than 140 components and could recognize simple optical patterns. The group wrote that their device is scalable, and that the technology promises “access to the high speed and high bandwidth inherent to optical systems, thus enabling the direct processing of optical telecommunication and visual data.”

Now a group at the Hong Kong University of Science and Technology reports in Optica that they have made an all-optical neural network based on a different process, electromagnetically induced transparency, in which incident light affects how atoms shift between quantum-mechanical energy levels. The process is nonlinear and can be triggered by very weak light signals, says Shengwang Du, a physics professor and coauthor of the paper.

In their demonstration, they illuminated rubidium-85 atoms cooled by lasers to about 10 microkelvin (10 microdegrees above absolute zero). Although the technique may seem unusually complex, Du said the system was the most accessible one in the lab that could produce the desired effects. “As a pure quantum atomic system [it] is ideal for this proof-of-principle experiment,” he says.

Next, they plan to scale up the demonstration using a hot atomic vapor cell, which is less expensive, does not require time-consuming preparation of cold atoms, and can be integrated with photonic chips. Du says the major challenges are reducing the cost of the nonlinear processing medium and increasing the scale of the all-optical neural network for more complex tasks.

“Their demonstration seems valid,” says Volker Sorger, an electrical engineer at George Washington University in Washington who was not involved in either demonstration. He says the all-optical approach is attractive because it offers very high parallelism, but the update rate is limited to about 100 hertz because of the liquid crystals used in their test, and he is not completely convinced their approach can be scaled error-free.

Posted in Human Robots

#435591 Video Friday: This Robotic Thread Could ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Eight engineering students from ETH Zurich are working on a year-long focus project to develop a multimodal robot called Dipper, which can fly, swim, dive underwater, and manage that difficult air-water transition:

The robot uses one motor to selectively drive either a propeller or a marine screw depending on whether it’s in flight or not. We’re told that getting the robot to autonomously do the water to air transition is still a work in progress, but that within a few weeks things should be much smoother.

[ Dipper ]

Thanks Simon!

Giving a jellyfish a hug without stressing them out is exactly as hard as you think, but Harvard’s robot will make sure that all jellyfish get the emotional (and physical) support that they need.

The gripper’s six “fingers” are composed of thin, flat strips of silicone with a hollow channel inside bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic “palm” and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. Each finger exerts an extremely low amount of pressure — about 0.0455 kPa, or less than one-tenth of the pressure of a human’s eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.

The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.

[ Harvard ]

MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain. In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.

[ MIT ]

See NASA’s next Mars rover quite literally coming together inside a clean room at the Jet Propulsion Laboratory. This behind-the-scenes look at what goes into building and preparing a rover for Mars, including extensive tests in simulated space environments, was captured from March to July 2019. The rover is expected to launch to the Red Planet in summer 2020 and touch down in February 2021.

The Mars 2020 rover doesn’t have a name yet, but you can give it one! As long as you’re not too old! Which you probably are!

[ Mars 2020 ]

I desperately wish that we could watch this next video at normal speed, not just slowed down, but it’s quite impressive anyway.

Here’s one more video from the Namiki Lab showing some high speed tracking with a pair of very enthusiastic robotic cameras:

[ Namiki Lab ]

Normally, tedious modeling of mechanics, electronics, and information science is required to understand how insects’ or robots’ moving parts coordinate smoothly to take them places. But in a new study, biomechanics researchers at the Georgia Institute of Technology boiled down the sprints of cockroaches to handy principles and equations they then used to make a test robot amble about better.

[ Georgia Tech ]

More magical obstacle-dodging footage from Skydio’s still secret new drone.

We’ve been hard at work extending the capabilities of our upcoming drone, giving you ways to get the control you want without the stress of crashing. The result is you can fly in ways, and get shots, that would simply be impossible any other way. How about flying through obstacles at full speed, backwards?

[ Skydio ]

This is a cute demo with Misty:

[ Misty Robotics ]

We’ve seen pieces of hardware like this before, but always made out of hard materials—a soft version is certainly something new.

Utilizing vacuum power and soft material actuators, we have developed a soft reconfigurable surface (SRS) with multi-modal control and performance capabilities. The SRS is comprised of a square grid array of linear vacuum-powered soft pneumatic actuators (linear V-SPAs), built into plug-and-play modules which enable the arrangement, consolidation, and control of many DoF.

[ RRL ]

The EksoVest is not really a robot, but it’ll make you a cyborg! With super strength!

“This is NOT intended to give you super strength but instead give you super endurance and reduce fatigue so that you have more energy and less soreness at the end of your shift.”

Drat!

[ EksoVest ]

We have created a solution for parents, grandparents, and their children who are living separated. This is an amazing tool to stay connected from a distance through the intimacy that comes through interactive play with a child. For parents who travel for work, deployed military, and families spread across the country, the Cushybot One is much more than a toy; it is the opportunity for maintaining a deep connection with your young child from a distance.

Hmm.

I think the concept here is great, but it’s going to be a serious challenge to successfully commercialize.

[ Indiegogo ]

What happens when you equip RVR with a parachute and send it off a cliff? Watch this episode of RVR Launchpad to find out – then go Behind the Build to see how we (eventually) accomplished this high-flying feat.

[ Sphero ]

These omnidirectional crawler robots aren’t new, but that doesn’t keep them from being fun to watch.

[ NEDO ] via [ Impress ]

We’ll finish up the week with a couple of past ICRA and IROS keynote talks—one by Gill Pratt on The Reliability Challenges of Autonomous Driving, and the other by Peter Hart on Making Shakey.

[ IEEE RAS ]

Posted in Human Robots

#435583 Soft Self-Healing Materials for Robots ...

If there’s one thing we know about robots, it’s that they break. They break, like, literally all the time. The software breaks. The hardware breaks. The bits that you think could never, ever, ever possibly break end up breaking just when you need them not to break the most, and then you have to try to explain what happened to your advisor who’s been standing there watching your robot fail and then stay up all night fixing the thing that seriously was not supposed to break.

While most of this is just a fundamental characteristic of robots that can’t be helped, the European Commission is funding a project called SHERO (Self HEaling soft RObotics) to try and solve at least some of those physical robot breaking problems through the use of structural materials that can autonomously heal themselves over and over again.

SHERO is a three-year, €3 million collaboration between Vrije Universiteit Brussel, University of Cambridge, École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI-Paris), and Swiss Federal Laboratories for Materials Science and Technology (Empa). As the name SHERO suggests, the goal of the project is to develop soft materials that can completely recover from the kinds of damage that robots are likely to suffer in day-to-day operations, as well as the occasional more extreme accident.

Most materials, especially soft materials, are fixable somehow, whether it’s with super glue or duct tape. But fixing things involves a human first identifying when they’re broken, and then performing a potentially skill-, labor-, time-, and money-intensive task. SHERO’s soft materials will, eventually, make this entire process autonomous, allowing robots to self-identify damage and initiate healing on their own.

Photos: SHERO Project

The damaged robot finger [top] can operate normally after healing itself.

How the self-healing material works
What these self-healing materials can do is really pretty amazing. The researchers are actually developing two different types—the first one heals itself when there’s an application of heat, either internally or externally, which gives some control over when and how the healing process starts. For example, if the robot is handling stuff that’s dirty, you’d want to get it cleaned up before healing it so that dirt doesn’t become embedded in the material. This could mean that the robot either takes itself to a heating station, or it could activate some kind of embedded heating mechanism to be more self-sufficient.
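To make that workflow concrete, here's a purely hypothetical sketch of the decision logic a robot could run. SHERO hasn't published any control interface, so every function name and the healing temperature below are made up:

```python
# Hypothetical sketch of the heat-activated healing workflow described above.
# None of these functions exist in the SHERO project; they stand in for whatever
# damage sensing, cleaning, and heating hardware a particular robot might carry.

HEALING_TEMP_C = 80.0   # placeholder healing temperature, not a SHERO figure

def heal_if_damaged(robot):
    if not robot.detect_damage():          # e.g. from embedded conductive sensors
        return
    if robot.surface_contaminated():       # dirt would get sealed into the wound
        robot.clean_damaged_area()
    if robot.has_embedded_heater():
        robot.heat_damaged_area(HEALING_TEMP_C)   # self-sufficient option
    else:
        robot.go_to_heating_station()             # rely on an external heat source
    robot.wait_until_healed()
```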

The second kind of self-healing material is autonomous, in that it will heal itself at room temperature without any additional input, and is probably more suitable for relatively minor scrapes and cracks. Here are some numbers about how well the healing works:

Autonomous self-healing polymers do not require heat. They can heal damage at room temperature. Developing soft robotic systems from autonomous self-healing polymers excludes the need of additional heating devices… The healing however takes some time. The healing efficiency after 3 days, 7 days and 14 days is respectively 62 percent, 91 percent and 97 percent.

This material was used to develop a healable soft pneumatic hand. Relevant large cuts can be healed entirely without the need of external heat stimulus. Depending on the size of the damage and even more on the location of damage, the healing takes only seconds or up to a week. Damage on locations on the actuator that are subjected to very small stresses during actuation was healed instantaneously. Larger damages, like cutting the actuator completely in half, took 7 days to heal. But even this severe damage could be healed completely without the need of any external stimulus.

Applications of self-healing robots
Both of these materials can be mixed together, and their mechanical properties can be customized so that the structure that they’re a part of can be tuned to move in different ways. The researchers also plan on introducing flexible conductive sensors into the material, which will help sense damage as well as providing position feedback for control systems. A lot of development will happen over the next few years, and for more details, we spoke with Bram Vanderborght at Vrije Universiteit in Brussels.

IEEE Spectrum: How easy or difficult or expensive is it to produce these materials? Will they add significant cost to robotic grippers?

Bram Vanderborght: They are definitely more expensive materials, but it’s also a matter of size of production. At the moment, we’ve made a few kilograms of the material (enough to make several demonstrators), and the price already dropped significantly from when we ordered 100 grams of the material in the first phase of the project. So probably the cost of the gripper will be higher [than a regular gripper], but you won’t need to replace the gripper as often as other grippers that need to be replaced due to wear, so it can be an advantage.

Moreover due to the method of 3D printing the material, the surface is smoother and airtight (so no post-processing is required to make it airtight). Also, the smooth surface is better to avoid contamination for food handling, for example.

In commercial or industrial applications, gradual fatigue seems to be a more common issue than more abrupt trauma like cuts. How well does the self-healing work to improve durability over long periods of time?

We did not test for gradual fatigue over very long times. But both macroscopic and microscopic damage can be healed. So hopefully it can provide an answer here as well.

Image: SHERO Project

After developing a self-healing robot gripper, the researchers plan to use similar materials to build parts that can be used as the skeleton of robots, allowing them to repair themselves on a regular basis.

How much does the self-healing capability restrict the material properties? What are the limits for softness or hardness or smoothness or other characteristics of the material?

Typically the mechanical properties of networked polymers are much better than thermoplastics. Our material is a networked polymer but in which the crosslinks are reversible. We can change quite a lot of parameters in the design of the materials. So we can develop very stiff (fracture strain at 1.24 percent) and very elastic materials (fracture strain at 450 percent). The big advantage that our material has is we can mix it to have intermediate properties. Moreover, at the interface of the materials with different mechanical properties, we have the same chemical bonds, so the interface is perfect. Other materials may need to be glued together, which creates local stresses and a weak spot.

When the material heals itself, is it less structurally sound in that spot? Can it heal damage that happens to the same spot over and over again?

In theory we can heal it an infinite number of times. When the wound is not perfectly aligned, of course, that spot will become weaker. Also, temperatures that are too high lead to irreversible bonds, and impurities lead to weak spots.

Besides grippers and skins, what other potential robotics applications would this technology be useful for?

Most of the self-healing materials available now are used for coatings. What we are developing are structural components, so the mechanical properties of the material need to be good for such applications. So maybe part of the skeleton of the robot can be developed with such materials to make it lighter, since it can be designed for regular repair. And for exceptional loads, it breaks and can be repaired like our human body.

[ SHERO Project ]

Posted in Human Robots

#435474 Watch China’s New Hybrid AI Chip Power ...

When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking his man.

No kidding.

The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”

Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.

However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.

As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.

For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.

But even these chips are limited. Because computation is tethered to hardware architecture, most chips implement just one specific type of brain-inspired network, the spiking neural network (SNN). Neuromorphic chips are without doubt highly efficient, with dynamics similar to those of biological networks, but they don’t play nicely with deep learning and other software-based AI.

Brain-AI Hybrid Core
Shi’s new Tianjic chip brought the two incompatibilities together onto a single piece of brainy hardware.

The first step was to bridge the deep learning and SNN divide. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks transform multidimensional data—image pixels, for example—into a single, continuous stream of multi-bit 0s and 1s. In contrast, neurons in SNNs activate using something called “binary spikes” that code for specific activation events in time.

Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to our neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signals don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work pretty differently than deep learning.
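A toy example makes the difference concrete: an artificial neural network neuron emits a graded value at every step, while a spiking neuron stays silent until its accumulated input crosses a threshold. This is a minimal sketch, not how Tianjic implements either model:

```python
# Toy contrast between an ANN-style neuron and a leaky integrate-and-fire
# spiking neuron. Real SNN models are more elaborate; this is illustrative only.

def ann_neuron(x, weight=0.8):
    """Continuous-valued activation (ReLU), emitted every step."""
    return max(0.0, weight * x)

def spiking_neuron(inputs, weight=0.8, threshold=1.0, leak=0.9):
    """Binary spikes: fire only when the leaky membrane potential crosses threshold."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = leak * potential + weight * x
        if potential >= threshold:
            spikes.append(1)      # spike event
            potential = 0.0       # reset after firing
        else:
            spikes.append(0)      # little blips in the signal don't count
    return spikes

inputs = [0.2, 0.9, 0.1, 0.8, 0.7, 0.0]
print([round(ann_neuron(x), 2) for x in inputs])  # -> [0.16, 0.72, 0.08, 0.64, 0.56, 0.0]
print(spiking_neuron(inputs))                     # -> [0, 0, 0, 1, 0, 0]
```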

Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.

In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the outgoing branches of the neurons. In contrast, the neuron body, where signals integrate, was left reconfigurable for each type of computation, as were the input branches. Each building block was combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
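In spirit, each FCore behaves something like the sketch below, where the synapses and dendrite summation are shared and only the neuron-body stage changes with the configured mode. This is a loose illustration of the idea, not the paper's actual circuit:

```python
# Loose illustration of a core that shares synapses and dendrites between paradigms
# and swaps only the neuron-body stage. This is not Tianjic's actual design.

class HybridCore:
    def __init__(self, weights, mode="ann", threshold=1.0):
        self.weights = weights      # shared synapses, used by both paradigms
        self.mode = mode            # "ann" or "snn": reconfigurable neuron body
        self.threshold = threshold
        self.potential = 0.0        # only meaningful in SNN mode

    def dendrite(self, inputs):
        # Shared weighted sum over the incoming branches.
        return sum(w * x for w, x in zip(self.weights, inputs))

    def step(self, inputs):
        summed = self.dendrite(inputs)
        if self.mode == "ann":
            return max(0.0, summed)              # continuous ReLU output
        self.potential += summed                 # SNN: integrate...
        if self.potential >= self.threshold:     # ...and fire a binary spike
            self.potential = 0.0
            return 1
        return 0

core = HybridCore(weights=[0.5, -0.2, 0.3], mode="snn")
print([core.step([1.0, 0.5, 0.2]) for _ in range(4)])  # -> [0, 0, 1, 0]
```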

The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.

Although these stats are great, real-life performance is even better as a demo. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep neural net CNN, whereas voice commands and balance data were recognized using an SNN. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules—for example, controlling the handlebar to turn left.
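Conceptually, the fusion stage could look something like the sketch below. This is a hypothetical illustration of the dataflow, not the published neural state machine; the field names and thresholds are invented:

```python
# Hypothetical sketch of how per-modality outputs could be fused into a riding
# decision. Model outputs, field names, and thresholds are made up for illustration.

def decide(vision, balance, voice):
    """vision/balance are dicts from the CNN and SNN sub-networks; voice is a command string."""
    if vision.get("obstacle_ahead"):
        return {"steer": "avoid", "speed_delta": -0.5}       # dodge the roadblock
    if voice == "speed up":
        return {"steer": "straight", "speed_delta": +0.5}
    if voice in ("left", "straight"):
        return {"steer": voice, "speed_delta": 0.0}
    if abs(balance["lean_angle"]) > 5.0:                      # degrees; counter-steer to stay upright
        return {"steer": "counter_lean", "speed_delta": 0.0}
    return {"steer": "follow_target", "speed_delta": 0.0}     # default: roll after the human

print(decide(vision={"obstacle_ahead": False},
             balance={"lean_angle": 1.2},
             voice="speed up"))
# -> {'steer': 'straight', 'speed_delta': 0.5}
```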

Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.

General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.

However, they cautioned, when comparing Tianjic with state-of-the-art chips designed for a single problem toe-to-toe on that particular problem, Tianjic falls behind. But building these jack-of-all-trades hybrid chips is definitely worth the effort. Compared to today’s limited AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.

Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.

*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.

Image Credit: Alexander Ryabintsev / Shutterstock.com

Posted in Human Robots