
#437624 AI-Powered Drone Learns Extreme ...

Quadrotors are among the most agile and dynamic machines ever created. In the hands of a skilled human pilot, they can pull off some astonishing sequences of maneuvers. And while autonomous flying robots have been getting better at flying dynamically in real-world environments, they still haven’t demonstrated the same level of agility as manually piloted ones.

Now researchers from the Robotics and Perception Group at the University of Zurich and ETH Zurich, in collaboration with Intel, have developed a neural network training method that “enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation.” Extreme.

There are two notable things here: First, the quadrotor can do these extreme acrobatics outdoors without any kind of external camera or motion-tracking system to help it out (all sensing and computing is onboard). Second, all of the AI training is done in simulation, without the need for an additional simulation-to-real-world (what researchers call “sim-to-real”) transfer step. Usually, a sim-to-real transfer step means putting your quadrotor into one of those aforementioned external tracking systems, so that it doesn’t completely bork itself while trying to reconcile the differences between the simulated world and the real world, where, as the researchers wrote in a paper describing their system, “even tiny mistakes can result in catastrophic outcomes.”

To enable “zero-shot” sim-to-real transfer, the neural net training in simulation uses an expert controller that knows exactly what’s going on to teach a “student controller” that has much less perfect knowledge. That is, the simulated sensory input that the student ends up using as it learns to follow the expert has been abstracted to present the kind of imperfect, imprecise data it’s going to encounter in the real world. This can involve things like abstracting away the image part of the simulation until you’d have no way of telling the difference between abstracted simulation and abstracted reality, which is what allows the system to make that sim-to-real leap.
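
For a sense of what that expert-to-student setup looks like in code, here is a minimal, hypothetical sketch: a privileged “expert” that sees the exact simulator state supervises a student network that only gets abstracted, noisier observations. The network sizes, dimensions, and helper functions below are our own illustrative stand-ins, not the researchers’ actual implementation.

```python
# Minimal sketch of privileged expert/student training, as described above.
# All names (expert_action, abstract_observation, etc.) are hypothetical
# stand-ins, not the authors' actual code.
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    """Maps abstracted (imperfect) observations to control commands."""
    def __init__(self, obs_dim=32, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim),
        )
    def forward(self, obs):
        return self.net(obs)

def expert_action(full_state):
    # Stand-in for the privileged expert: it sees the exact simulator state,
    # so it can compute near-perfect commands (here just a dummy linear law).
    return -0.1 * full_state[..., :4]

def abstract_observation(full_state):
    # Stand-in for the abstraction layer: degrade the perfect state into the
    # kind of noisy, reduced input the real sensors would provide.
    noisy = full_state + 0.05 * torch.randn_like(full_state)
    return noisy[..., :32]

student = StudentPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    full_state = torch.randn(128, 40)        # batch of simulated states
    target = expert_action(full_state)       # what the expert would do
    pred = student(abstract_observation(full_state))
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the student only ever sees the degraded observations, the policy it learns doesn’t depend on anything the real sensors can’t provide, which is what makes the zero-shot transfer possible.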

The simulation environment that the researchers used was Gazebo, slightly modified to better simulate quadrotor physics. Meanwhile, over in reality, a custom 1.5-kilogram quadrotor with a 4:1 thrust-to-weight ratio performed the physical experiments, using only an Nvidia Jetson TX2 computing board and an Intel RealSense T265, a dual fisheye camera module optimized for V-SLAM. To challenge the learning system, the drone was trained to perform three acrobatic maneuvers plus a combo of all of them:

Image: University of Zurich/ETH Zurich/Intel

Reference trajectories for acrobatic maneuvers. Top row, from left: Power Loop, Barrel Roll, and Matty Flip. Bottom row: Combo.

All of these maneuvers require high accelerations of up to 3 g’s and careful control, and the Matty Flip is particularly challenging, at least for humans, because the whole thing is done while the drone is flying backwards. Still, after just a few hours of training in simulation, the drone was totally real-world competent at these tricks, and could even extrapolate a little bit to perform maneuvers that it was not explicitly trained on, like doing multiple loops in a row. Where humans still have the advantage over drones (as you might expect, since we’re talking about robots) is in quickly reacting to novel or unexpected situations. And when you’re doing this sort of thing outdoors, novel and unexpected situations are everywhere, from a gust of wind to a jealous bird.

For more details, we spoke with Antonio Loquercio from the University of Zurich’s Robotics and Perception Group.

IEEE Spectrum: Can you explain how the abstraction layer interfaces with the simulated sensors to enable effective sim-to-real transfer?

Antonio Loquercio: The abstraction layer applies a specific function to the raw sensor information. Exactly the same function is applied to the real and simulated sensors. The result of the function, which is “abstracted sensor measurements,” makes simulated and real observations of the same scene similar. For example, suppose we have a sequence of simulated and real images. We can very easily tell apart the real from the simulated ones given the difference in rendering. But if we apply the abstraction function of “feature tracks,” which are point correspondences in time, it becomes very difficult to tell which are the simulated and real feature tracks, since point correspondences are independent of the rendering. This applies to humans as well as to neural networks: Training policies on raw images gives low sim-to-real transfer (since images are too different between domains), while training on the abstracted images has high transfer abilities.
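
To make the “feature tracks” idea concrete, here is a small illustrative sketch using OpenCV’s standard corner detector and Lucas-Kanade tracker. The researchers’ actual front end may differ, but the point is the same: applying one function to both simulated and real frames yields observations that look alike.

```python
# Illustrative "feature track" abstraction using standard OpenCV tools;
# not the authors' actual front end.
import cv2
import numpy as np

def feature_tracks(prev_gray, curr_gray, max_corners=60):
    """Return (prev_pts, curr_pts) correspondences between two gray frames."""
    prev_pts = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=max_corners, qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None)
    good = status.flatten() == 1
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)

# The same function can be applied to a rendered simulator frame and to a
# real camera frame; the resulting point correspondences look alike, which
# is what makes the abstraction transferable.
```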

How useful is visual input from a camera like the Intel RealSense T265 for state estimation during such aggressive maneuvers? Would using an event camera substantially improve state estimation?

Our end-to-end controller does not require a state estimation module. It shares, however, some components with traditional state estimation pipelines, specifically the feature extractor and the inertial measurement unit (IMU) pre-processing and integration function. The inputs to the neural network are feature tracks and integrated IMU measurements. When looking at images with few features (for example, when the camera points at the sky), the neural net will mainly rely on the IMU. When more features are available, the network uses them to correct the drift accumulated from the IMU. Overall, we noticed that for very short maneuvers IMU measurements were sufficient for the task. However, for longer ones, visual information was necessary to successfully address the IMU drift and complete the maneuver. Indeed, visual information reduces the odds of a crash by up to 30 percent in the longest maneuvers. We definitely think that event cameras can improve the current approach even further, since they could provide valuable visual information at high speed.
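
And here, roughly, is what the IMU side of that input involves, again as an illustrative sketch rather than the team’s code: gyro and accelerometer samples are integrated forward to carry orientation and velocity information between camera frames. Real pipelines use proper preintegration on the rotation group; this is a simplified Euler version with made-up variable names.

```python
# Simplified IMU integration for illustration only.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def integrate_imu(R, v, gyro, accel, dt):
    """One Euler step: R is a 3x3 rotation matrix, v a world-frame velocity."""
    # Update orientation with the small-angle rotation from the gyro.
    wx, wy, wz = gyro * dt
    dR = np.array([[1.0, -wz,  wy],
                   [ wz, 1.0, -wx],
                   [-wy,  wx, 1.0]])
    R = R @ dR
    # Rotate body-frame specific force into the world frame, add gravity.
    v = v + (R @ accel + GRAVITY) * dt
    return R, v
```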

“The Matty Flip is probably one of the maneuvers that our approach can do very well … It is super challenging for humans, since they don’t see where they’re going and have problems in estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.”
—Antonio Loquercio, University of Zurich

You describe being able to train on “maneuvers that stretch the abilities of even expert human pilots.” What are some examples of acrobatics that your drones might be able to do that most human pilots would not be capable of?

The Matty Flip is probably one of the maneuvers that our approach can do very well, but that human pilots find very challenging. It basically entails doing a high-speed power loop while always looking backward. It is super challenging for humans, since they don’t see where they’re going and have problems in estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.

What are the limits to the performance of this system?

At the moment the main limitation is the maneuver duration. We never trained a controller that could perform maneuvers longer than 20 seconds. In the future, we plan to address this limitation and train general controllers which can fly in that agile way for significantly longer with relatively small drift. In this way, we could start being competitive against human pilots in drone racing competitions.

Can you talk about how the techniques developed here could be applied beyond drone acrobatics?

The current approach allows us to do acrobatics and agile flight in free space. We are now working to perform agile flight in cluttered environments, which requires a higher degree of understanding of the surroundings than this project did. Drone acrobatics is of course only an example application. We selected it because it is a stress test of controller performance. However, several other applications that require fast and agile flight can benefit from our approach. Examples are delivery (we want our Amazon packages ever faster, don’t we?), search and rescue, or inspection. Going faster allows us to cover more space in less time, saving battery costs. Indeed, for an autonomous drone, agile flight has battery consumption very similar to slow hovering.

“Deep Drone Acrobatics,” by Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, and Davide Scaramuzza from the Robotics and Perception Group at the University of Zurich and ETH Zurich, and Intel’s Intelligent Systems Lab, was presented at RSS 2020.


#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.
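
To give a flavor of what such a simulation actually computes, here is the smallest possible example of the pattern involved: every time step, every grid cell is updated from its neighbors. Real combustion and CFD codes solve coupled 3D equations at vastly larger scale, but this toy 1D diffusion step (with made-up parameters) shows the structure, and it is exactly this neighbor-to-neighbor traffic that puts pressure on communication between chips.

```python
# Toy example: one explicit finite-difference scheme for 1D diffusion.
# Parameters are invented for illustration; real CFD codes are far larger.
import numpy as np

nx, dx, dt, nu = 200, 0.01, 1e-5, 0.1   # grid size, spacing, time step, viscosity
u = np.exp(-((np.arange(nx) * dx - 1.0) ** 2) / 0.01)  # initial bump

for _ in range(1000):
    # Each cell's update depends only on its immediate neighbors.
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * nu * lap

print("peak value after diffusion:", u.max())
```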

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” At a certain point, scaling a supercomputer like Joule no longer produces better results on this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
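
Why does throwing more cores at the problem stop helping? A toy strong-scaling model makes the trade-off visible: per-core compute time shrinks as cores are added, but the cost of communicating between chips grows. The constants below are invented for illustration; they are not measurements of Joule or the CS-1.

```python
# Toy strong-scaling model: compute time falls as 1/n, communication grows
# with n, so total time per step has a minimum at a finite core count.
def time_per_step(n_cores, compute=1.0, comm_per_core=1e-5):
    return compute / n_cores + comm_per_core * n_cores ** 0.5

best = min(range(1, 200_000, 1000), key=time_per_step)
print(f"Fastest step time at roughly {best} cores")
```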

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient compared to a traditional supercomputer that needs to network a ton of traditional chips.

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras


#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest-profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask why OpenBot isn’t ROS-based, to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
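
For a feel of the level of plumbing involved, here is a hypothetical sketch of the kind of low-level link such a robot relies on: a host (a PC standing in for the phone) sends differential-drive throttle commands to an Arduino-class board over USB serial. The message format and port name are invented for illustration and are not OpenBot’s actual protocol.

```python
# Hypothetical differential-drive command link over USB serial.
# Message format and port are illustrative assumptions, not OpenBot's API.
import time
import serial  # pyserial

def drive(link, left_pwm, right_pwm):
    """Send one throttle command; values roughly in [-255, 255]."""
    link.write(f"c{left_pwm},{right_pwm}\n".encode("ascii"))

with serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1) as link:
    drive(link, 150, 150)   # forward
    time.sleep(1.0)
    drive(link, 150, -150)  # spin in place
    time.sleep(0.5)
    drive(link, 0, 0)       # stop
```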

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost community focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.


#437608 Video Friday: Agility Robotics Raises ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Digit is now in full commercial production and we’re excited to announce a $20M funding round co-led by DCVC and Playground Global!

Digits for everyone!

[ Agility Robotics ]

A flexible rover that can both travel long distances and rappel down hard-to-reach areas of scientific interest has undergone a field test in the Mojave Desert in California to showcase its versatility. Composed of two Axel robots, DuAxel is designed to explore crater walls, pits, scarps, vents and other extreme terrain on the moon, Mars and beyond.

This technology demonstration developed at NASA’s Jet Propulsion Laboratory in Southern California showcases the robot’s ability to split in two and send one of its halves — a two-wheeled Axel robot — over an otherwise inaccessible slope, using a tether as support and to supply power.

The rappelling Axel can then autonomously seek out areas to study, safely overcome slopes and rocky obstacles, and then return to dock with its other half before driving to another destination. Although the rover doesn’t yet have a mission, key technologies are being developed that might, one day, help us explore the rocky planets and moons throughout the solar system.

[ JPL ]

A rectangular robot as tiny as a few human hairs can travel throughout a colon by doing back flips, Purdue University engineers have demonstrated in live animal models. Why the back flips? Because the goal is to use these robots to transport drugs in humans, whose colons and other organs have rough terrain. Side flips work, too. Why a back-flipping robot to transport drugs? Getting a drug directly to its target site could remove side effects, such as hair loss or stomach bleeding, that the drug may otherwise cause by interacting with other organs along the way.

[ Purdue ]

This video shows the latest results in the whole-body locomotion control of the humanoid robot iCub achieved by the Dynamic Interaction Control line at IIT-Istituto Italiano di Tecnologia in Genova (Italy). In particular, the iCub now keeps its balance while walking and receiving pushes from an external user. The implemented control algorithms also ensure that the robot remains compliant during locomotion and human-robot interaction, a fundamental property for lowering the possibility of harming humans who share the robot’s surrounding environment.

This is super impressive, considering that iCub was only able to crawl and was still tethered not too long ago. Also, it seems to be blinking properly now, so it doesn’t look like it’s always sleepy.

[ IIT ]

This video shows a set of new tests we performed on Bolt. We conducted tests on five different scenarios: 1) walking forward/backward, 2) uneven surface, 3) soft surface, 4) push recovery, and 5) slippage recovery. Thanks to our feedback control based on Model Predictive Control, the robot can perform walking in the presence of all these uncertainties. We will open-source all the code in the near future.

[ ODRI ]

The title of this video is “Can you throw your robot into a lake?” The title of this video should be, “Can you throw your robot into a lake and drive it out again?”

[ Norlab ]

AeroVironment Successfully Completes Sunglider Solar HAPS Stratospheric Test Flight, Surpassing 60,000 Feet Altitude and Demonstrating Broadband Mobile Connectivity.

[ AeroVironment ]

We present CoVR, a novel robotic interface providing strong kinesthetic feedback (100 N) in a room-scale VR arena. It consists of a physical column mounted on a 2D Cartesian ceiling robot (XY displacements) with the capacity of (1) resisting body-scaled user actions such as pushing or leaning; (2) acting on the users by pulling or transporting them; as well as (3) carrying multiple potentially heavy objects (up to 80 kg) that users can freely manipulate or make interact with each other.

[ DeepAI ]

In a new video, personnel from Swiss energy supply company Kraftwerke Oberhasli AG (KWO) explain how they were able to keep employees out of harm’s way by using Flyability’s Elios 2 to collect visual data while building a new dam.

[ Flyability ]

Enjoy our Ascento robot fail compilation! With every failure we experience, we learn more and we can improve our robot for its next iteration, which will come soon… Stay tuned for more!

FYI, posting a robot fail video will pretty much guarantee you a spot in Video Friday!

[ Ascento ]

Humans are remarkably good at using chopsticks. Guinness World Records recorded a person using chopsticks to pick up 65 M&Ms in just a minute. We aim to collect demonstrations from humans and to teach robots to use chopsticks.

[ UW Personal Robotics Lab ]

A surprising amount of personality from these Yaskawa assembly robots.

[ Yaskawa ]

This paper presents the system design, modeling, and control of the Aerial Robotic Chain Manipulator. This new robot design offers the potential to exert strong forces and moments to the environment, carry and lift significant payloads, and simultaneously navigate through narrow corridors. The presented experimental studies include a valve rotation task, a pick-and-release task, and the verification of load oscillation suppression to demonstrate the stability and performance of the system.

[ ARL ]

Whether animals or plants, whether in the water, on land or in the air, nature provides the model for many technical innovations and inventions. This is summed up in the term bionics, which is a combination of the words ‘biology’ and ‘electronics’. At Festo, learning from nature has a long history, as our Bionic Learning Network is based on using nature as the source for future technologies like robots, assistance systems or drive solutions.

[ Festo ]

Dogs! Selfies! Thousands of LEGO bricks! This video has it all.

[ LEGO ]

An IROS workshop talk on “Cassie and Mini Cheetah Autonomy” by Maani Ghaffari and Jessy Grizzle from the University of Michigan.

[ Michigan Robotics ]

David Schaefer’s Cozmo robots are back with this mind-blowing dance-off!

What you just saw represents hundreds of hours of work, David tells us: “I wrote over 10,000 lines of code to create the dance performance as I had to translate the beats per minute of the song into motor rotations in order to get the right precision needed to make the moves look sharp. The most challenging move was the SpongeBob SquareDance as any misstep would send the Cozmos crashing into each other. LOL! Fortunately for me, Cozmo robots are pretty resilient.”

[ Life with Cozmo ]

Thanks David!

This week’s GRASP on Robotics seminar is by Sangbae Kim from MIT, on “Robots with Physical Intelligence.”

While industrial robots are effective in repetitive, precise kinematic tasks in factories, the design and control of these robots are not suited for physically interactive performance that humans do easily. These tasks require ‘physical intelligence’ through complex dynamic interactions with environments whereas conventional robots are designed primarily for position control. In order to develop a robot with ‘physical intelligence’, we first need a new type of machines that allow dynamic interactions. This talk will discuss how the new design paradigm allows dynamic interactive tasks. As an embodiment of such a robot design paradigm, the latest version of the MIT Cheetah robots and force-feedback teleoperation arms will be presented.

[ GRASP ]

This week’s CMU RI Seminar is by Kevin Lynch from Northwestern, on “Robotics and Biosystems.”

Research at the Center for Robotics and Biosystems at Northwestern University encompasses bio-inspiration, neuromechanics, human-machine systems, and swarm robotics, among other topics. In this talk I will give an overview of some of our recent work on in-hand manipulation, robot locomotion on yielding ground, and human-robot systems.

[ CMU RI ]


#437603 Throwable Robot Car Always Lands on Four ...

Throwable or droppable robots seem like a great idea for a bunch of applications, including exploration and search and rescue. But such robots do come with some constraints—namely, if you’re going to throw or drop a robot, you should be prepared for that robot to not land the way you want it to land. While we’ve seen some creative approaches to this problem, or more straightforward self-righting devices, usually you’re in for significant trade-offs in complexity, mobility, and mass.

What would be ideal is a robot that can be relied upon to just always land the right way up. A robotic cat, of sorts. And while we’ve seen this with a tail, for wheeled vehicles, it turns out that a tail isn’t necessary: All it takes is some wheel spin.

The reason that AGRO (Agile Ground RObot), developed at the U.S. Military Academy at West Point, can do this is because each of its wheels is both independently driven and steerable. The wheels are essentially reaction wheels, which are a pretty common way to generate forces on all kinds of different robots, but typically you see such reaction wheels kludged onto these robots as sort of an afterthought—using the existing wheels of a wheeled robot is a more elegant way to do it.

Four steerable wheels with in-hub motors provide control in all three axes (yaw, pitch, and roll). You’ll notice that when the robot is tossed, the wheels all toe inwards (or outwards, I guess) by 45 degrees, positioning them orthogonal to the body of the robot. The front left and rear right wheels are spun together, as are the front right and rear left wheels. When one pair of wheels spins in the same direction, the body of the robot twists in the opposite way along an axis between those wheels, in a combination of pitch and roll. By combining different twisting torques from both pairs of wheels, pitch and roll along each axis can be adjusted independently. When the same pair of wheels spin in directions opposite to each other, the robot yaws, although yaw can also be derived by adjusting the ratio between pitch authority and roll authority. And lastly, if you want to sacrifice pitch control for more roll control (or vice versa) the wheel toe-in angle can be changed. Put all this together, and you get an enormous amount of mid-air control over your robot.
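
To see how those wheel-pair torques might combine, here is a back-of-the-envelope mixer based purely on the description above: pitch and roll commands are rotated 45 degrees into the two diagonal-pair axes, and a differential spin within each pair adds yaw. The signs, scaling, and function names are our own assumptions for illustration, not the West Point team’s controller.

```python
# Illustrative wheel-spin mixer for a toed-in, diagonal-pair layout.
# All conventions here are assumptions, not the authors' implementation.
import numpy as np

def mix_wheel_accels(pitch_cmd, roll_cmd, yaw_cmd):
    """Spin accelerations for (front-left, rear-right, front-right, rear-left)."""
    # Rotate body pitch/roll commands by 45 degrees into the two pair axes.
    pair_a = (pitch_cmd + roll_cmd) / np.sqrt(2)   # FL + RR pair
    pair_b = (pitch_cmd - roll_cmd) / np.sqrt(2)   # FR + RL pair
    # Common-mode spin in a pair twists the body; differential spin yaws it.
    fl = pair_a + yaw_cmd
    rr = pair_a - yaw_cmd
    fr = pair_b + yaw_cmd
    rl = pair_b - yaw_cmd
    return np.array([fl, rr, fr, rl])

print(mix_wheel_accels(pitch_cmd=1.0, roll_cmd=0.0, yaw_cmd=0.2))
```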

Image: Robotics Research Center/West Point

The AGRO robot features four steerable wheels with in-hub motors, which provide control in all three axes (yaw, pitch, and roll).

According to a paper that the West Point group will present at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the overall objective here is for the robot to reach a state of zero pitch or roll by the time the robot impacts with the ground, to distribute the impact as much as possible. AGRO doesn’t yet have a suspension to make falling actually safe, so in the short term, it lands on a foam pad, but the mid-air adjustments it’s currently able to make result in a 20 percent reduction of impact force and a 100 percent reduction in being sideways or upside-down.

The toss that you see in the video isn’t the most aggressive, but lead author Daniel J. Gonzalez tells us that AGRO can do much better, theoretically stabilizing from an initial condition of 22.5 degrees pitch and 22.5 degrees roll in a mere 250 milliseconds, with room for improvement beyond that through optimizing the angles of individual wheels in real time. The limiting factor is really the amount of time that AGRO has between the point at which it’s released and the point at which it hits the ground, since more time in the air gives the robot more time to change its orientation.
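
A quick sanity check shows why airtime is the crunch: ignoring drag, the correction window is just the free-fall time from the release height, and a drop of only about 0.3 meters already yields the roughly 250 milliseconds cited above.

```python
# Free-fall time as a proxy for available mid-air correction time (drag ignored).
import math

def fall_time(height_m, g=9.81):
    return math.sqrt(2 * height_m / g)

for h in (0.3, 1.0, 2.0):
    print(f"{h:.1f} m drop -> {fall_time(h) * 1000:.0f} ms of airtime")
```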

Given enough height, the current generation of AGRO can recover from any initial orientation as long as it’s spinning at 66 rpm or less. And the only reason this is a limitation at all is because of the maximum rotation speed of the in-wheel hub motors, which can be boosted by increasing the battery voltage, as Gonzalez and his colleagues, Mark C. Lesak, Andres H. Rodriguez, Joseph A. Cymerman, and Christopher M. Korpela from the Robotics Research Center at West Point, describe in the IROS paper, “Dynamics and Aerial Attitude Control for Rapid Emergency Deployment of the Agile Ground Robot AGRO.”

Image: Robotics Research Center/West Point

AGRO 2 will include a new hybrid wheel-leg and non-pneumatic tire design that will allow it to hop up stairs and curbs.

While these particular experiments focus on a robot that’s being thrown, the concept is potentially effective (and useful) on any wheeled robot that’s likely to find itself in mid-air. You can imagine it improving the performance of robots doing all sorts of stunts, from driving off ramps or ledges to being dropped out of aircraft. And as it turns out, being able to self-stabilize during an airdrop is an important skill that some Humvees could use to keep themselves from getting tangled in their own parachute lines and avoid mishaps.

Before they move on to Humvees, though, the researchers are working on the next version of AGRO named AGRO 2. AGRO 2 will include a new hybrid wheel-leg and non-pneumatic tire design that will allow it to hop up stairs and curbs, which sounds like a lot of fun to us.
