Tag Archives: sensor

#437624 AI-Powered Drone Learns Extreme ...

Quadrotors are among the most agile and dynamic machines ever created. In the hands of a skilled human pilot, they can perform some astonishing sequences of maneuvers. And while autonomous flying robots have been getting better at flying dynamically in real-world environments, they still haven’t demonstrated the same level of agility as manually piloted ones.

Now researchers from the Robotics and Perception Group at the University of Zurich and ETH Zurich, in collaboration with Intel, have developed a neural network training method that “enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation.” Extreme.

There are two notable things here: First, the quadrotor can do these extreme acrobatics outdoors without any kind of external camera or motion-tracking system to help it out (all sensing and computing is onboard). Second, all of the AI training is done in simulation, without the need for an additional simulation-to-real-world (what researchers call “sim-to-real”) transfer step. Usually, a sim-to-real transfer step means putting your quadrotor into one of those aforementioned external tracking systems, so that it doesn’t completely bork itself while trying to reconcile the differences between the simulated world and the real world, where, as the researchers wrote in a paper describing their system, “even tiny mistakes can result in catastrophic outcomes.”

To enable “zero-shot” sim-to-real transfer, the neural net training in simulation uses an expert controller that knows exactly what’s going on to teach a “student controller” that has much less perfect knowledge. That is, the simulated sensory input that the student ends up using as it learns to follow the expert has been abstracted to present the kind of imperfect, imprecise data it’s going to encounter in the real world. This can involve things like abstracting away the image part of the simulation until you’d have no way of telling the difference between abstracted simulation and abstracted reality, which is what allows the system to make that sim-to-real leap.
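To make the expert-student idea concrete, here is a minimal, self-contained Python sketch of privileged imitation learning on a toy one-dimensional system: an expert that sees the exact simulator state supervises a student that only sees an abstracted, noisy observation, and the student is rolled out on its own actions, DAgger-style. The dynamics, "networks," and gains are stand-ins, not the authors' actual training pipeline.

```python
# Toy sketch of privileged "teacher -> student" training for zero-shot transfer.
# Everything here (dynamics, policies, gains) is illustrative, not the actual
# Deep Drone Acrobatics code.

import numpy as np

rng = np.random.default_rng(0)

def expert_action(true_state, reference):
    # Privileged expert: exact state -> near-perfect tracking command.
    return np.clip(2.0 * (reference - true_state), -1.0, 1.0)

def abstract_observation(true_state):
    # Abstraction-layer stand-in: the student never sees the exact state,
    # only an imperfect measurement (think feature tracks + integrated IMU).
    return true_state + rng.normal(scale=0.05, size=true_state.shape)

# Toy linear "student policy": action = W @ obs + b, trained by gradient steps.
W = np.zeros((1, 1))
b = np.zeros(1)
lr = 0.05

state = np.zeros(1)
reference = np.ones(1)
for _ in range(2000):
    target = expert_action(state, reference)      # what the expert would do
    obs = abstract_observation(state)             # what the student actually sees
    pred = W @ obs + b
    err = pred - target                           # supervised imitation error
    W -= lr * np.outer(err, obs)
    b -= lr * err
    # Roll out the *student's* own action (DAgger-style) with toy dynamics.
    state = state + 0.1 * (W @ abstract_observation(state) + b)

print("final state (should approach the reference of 1.0):", state)
```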

The simulation environment that the researchers used was Gazebo, slightly modified to better simulate quadrotor physics. Meanwhile, over in reality, a custom 1.5-kilogram quadrotor with a 4:1 thrust-to-weight ratio performed the physical experiments, using only an Nvidia Jetson TX2 computing board and an Intel RealSense T265, a dual fisheye camera module optimized for V-SLAM. To challenge the learning system, it was trained to perform three acrobatic maneuvers plus a combo of all of them:

Image: University of Zurich/ETH Zurich/Intel

Reference trajectories for acrobatic maneuvers. Top row, from left: Power Loop, Barrel Roll, and Matty Flip. Bottom row: Combo.

All of these maneuvers require high accelerations of up to 3 g’s and careful control, and the Matty Flip is particularly challenging, at least for humans, because the whole thing is done while the drone is flying backwards. Still, after just a few hours of training in simulation, the drone was totally real-world competent at these tricks, and could even extrapolate a little bit to perform maneuvers that it was not explicitly trained on, like doing multiple loops in a row. Where humans still have the advantage over drones (as you might expect, since we’re talking about robots) is in quickly reacting to novel or unexpected situations. And when you’re doing this sort of thing outdoors, novel and unexpected situations are everywhere, from a gust of wind to a jealous bird.

For more details, we spoke with Antonio Loquercio from the University of Zurich’s Robotics and Perception Group.

IEEE Spectrum: Can you explain how the abstraction layer interfaces with the simulated sensors to enable effective sim-to-real transfer?

Antonio Loquercio: The abstraction layer applies a specific function to the raw sensor information. Exactly the same function is applied to the real and simulated sensors. The result of the function, which is “abstracted sensor measurements,” makes simulated and real observations of the same scene similar. For example, suppose we have a sequence of simulated and real images. We can very easily tell apart the real from the simulated ones given the difference in rendering. But if we apply the abstraction function of “feature tracks,” which are point correspondences in time, it becomes very difficult to tell which are the simulated and which are the real feature tracks, since point correspondences are independent of the rendering. This applies to humans as well as to neural networks: Training policies on raw images gives low sim-to-real transfer (since images are too different between domains), while training on the abstracted images has high transfer abilities.
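As a rough illustration of what a “feature tracks” abstraction can look like in practice, here is a short Python sketch using OpenCV’s standard Shi-Tomasi detector and Lucas-Kanade tracker. The paper’s own feature front end may differ; the function below is just an assumed interface.

```python
# Sketch of the "feature tracks" abstraction: point correspondences over time,
# computed the same way on simulated and real grayscale images.

import cv2
import numpy as np

def feature_tracks(prev_gray, curr_gray, max_corners=50):
    """Return an (N, 4) array of [x_prev, y_prev, x_curr, y_curr] tracks."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=10)
    if prev_pts is None:
        return np.empty((0, 4), dtype=np.float32)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    p0 = prev_pts[good].reshape(-1, 2)
    p1 = curr_pts[good].reshape(-1, 2)
    return np.hstack([p0, p1]).astype(np.float32)

# Whether prev_gray/curr_gray come from Gazebo renders or a real fisheye camera,
# the resulting tracks look statistically alike -- which is what makes this
# representation transferable.
```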

How useful is visual input from a camera like the Intel RealSense T265 for state estimation during such aggressive maneuvers? Would using an event camera substantially improve state estimation?

Our end-to-end controller does not require a state estimation module. It shares, however, some components with traditional state estimation pipelines, specifically the feature extractor and the inertial measurement unit (IMU) pre-processing and integration function. The inputs of the neural network are feature tracks and integrated IMU measurements. When looking at images with few features (for example when the camera points to the sky), the neural net will mainly rely on the IMU. When more features are available, the network uses them to correct the accumulated drift from the IMU. Overall, we noticed that for very short maneuvers IMU measurements were sufficient for the task. However, for longer ones, visual information was necessary to successfully address the IMU drift and complete the maneuver. Indeed, visual information reduces the odds of a crash by up to 30 percent in the longest maneuvers. We definitely think that event cameras can improve the current approach even more, since they could provide valuable visual information during high-speed flight.
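To see why the IMU alone is only good enough for short maneuvers, here is a toy one-dimensional dead-reckoning sketch in Python showing how a small accelerometer bias, integrated twice, grows into meters of position drift over a 20-second maneuver. The numbers are illustrative, and this is not the paper’s actual IMU pre-integration scheme.

```python
# Why IMU-only estimation drifts: naive dead reckoning integrates accelerometer
# bias and noise twice, so position error grows with maneuver duration.

import numpy as np

rng = np.random.default_rng(1)
dt, duration = 0.002, 20.0            # 500 Hz IMU, 20 s maneuver
steps = int(duration / dt)

true_accel = np.zeros(steps)          # hovering: true acceleration is zero
bias, noise_std = 0.02, 0.05          # assumed accelerometer bias + noise (m/s^2)
meas_accel = true_accel + bias + rng.normal(scale=noise_std, size=steps)

vel = pos = 0.0
for a in meas_accel:
    vel += a * dt                     # first integration: velocity
    pos += vel * dt                   # second integration: position

print(f"position drift after {duration:.0f} s of pure IMU integration: {pos:.2f} m")
# Over a few seconds the drift stays small; over tens of seconds it reaches
# meters, which is why the network leans on feature tracks when they exist.
```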

“The Matty Flip is probably one of the maneuvers that our approach can do very well … It is super challenging for humans, since they don’t see where they’re going and have problems in estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.”
—Antonio Loquercio, University of Zurich

You describe being able to train on “maneuvers that stretch the abilities of even expert human pilots.” What are some examples of acrobatics that your drones might be able to do that most human pilots would not be capable of?

The Matty Flip is probably one of the maneuvers that our approach can do very well but that human pilots find very challenging. It basically entails doing a high-speed power loop while always looking backward. It is super challenging for humans, since they don’t see where they’re going and have problems in estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.

What are the limits to the performance of this system?

At the moment the main limitation is the maneuver duration. We never trained a controller that could perform maneuvers longer than 20 seconds. In the future, we plan to address this limitation and train general controllers which can fly in that agile way for significantly longer with relatively small drift. In this way, we could start being competitive against human pilots in drone racing competitions.

Can you talk about how the techniques developed here could be applied beyond drone acrobatics?

The current approach allows us to do acrobatics and agile flight in free space. We are now working to perform agile flight in cluttered environments, which requires a higher degree of understanding of the surroundings than this project did. Drone acrobatics is of course only an example application. We selected it because it serves as a stress test of controller performance. However, several other applications that require fast and agile flight can benefit from our approach. Examples are delivery (we want our Amazon packages delivered ever faster, don’t we?), search and rescue, or inspection. Going faster allows us to cover more space in less time, saving battery costs. Indeed, for an autonomous drone, agile flight has battery consumption very similar to that of slow hovering.

“Deep Drone Acrobatics,” by Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, and Davide Scaramuzza from the Robotics and Perception Group at the University of Zurich and ETH Zurich, and Intel’s Intelligent Systems Lab, was presented at RSS 2020.


#437616 Innovative YUJIN 3D LiDAR, Now Shipping!

Yujin Robot recently launched a new 3D LiDAR for indoor service robots, AGVs/AMRs, and smart factories. The YRL3 series is a line of precise laser sensors that scan both vertically and horizontally to detect environments or objects. Designed for indoor applications, the YRL3 series uses an innovative 3D scanning LiDAR to deliver a 270° (horizontal) x 90° (vertical) dynamic field of view from a single channel. The fundamental principle is direct ToF (time of flight), measuring distances to the surroundings. The YRL3 collects useful data including ranges, angles, intensities, and Cartesian coordinates (x, y, z). The vertical scanning angle can be adjusted in real time, and the sensor is supported by a powerful software package for autonomous driving devices.
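For a sense of what those Cartesian coordinates involve, here is a short Python sketch converting a single range/azimuth/elevation return into x, y, z. The angle conventions are assumptions for illustration and may not match the YRL3 driver’s actual output frame.

```python
# Sketch of converting a 3D LiDAR return (range, horizontal angle, vertical angle)
# into Cartesian x, y, z coordinates.

import math

def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """range in meters, azimuth measured in the horizontal plane,
    elevation measured up from that plane (assumed conventions)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# Example: a point 2 m away, 45 degrees to the left, 10 degrees above horizontal.
print(polar_to_cartesian(2.0, 45.0, 10.0))
```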

“In recent years, our product lineup expanded to include models for the Fourth Industrial Revolution,” shares the marketing team of Yujin Robot. These models are Kobuki, the ROS reference research robot platform used by robotics research labs around the world; the Yujin LiDAR range-finding scanning sensor for LiDAR-based autonomous driving; and the AMS (Autonomous Mobility Solution) for customized autonomous driving. The company continues to push the boundaries of robotics and artificial intelligence, developing game-changing autonomous solutions that give companies around the world an edge over the competition.

Photo: Yujin

YUJIN 3D LiDAR, Now Shipping! Indoor 3D LiDAR for AGVs/AMRs, Service Robots, and Factories


#437614 Video Friday: Poimo Is a Portable ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.

[ UCSD ]

Thanks Ioana!

Shark Robotics, French and European leader in Unmanned Ground Vehicles, is announcing today a disinfection add-on for Boston Dynamics Spot robot, designed to fight the COVID-19 pandemic. The Spot robot with Shark’s purpose-built disinfection payload can decontaminate up to 2,000 m2 in 15 minutes, in any space that needs to be sanitized – such as hospitals, metro stations, offices, warehouses or facilities.

[ Shark Robotics ]

Here’s an update on the Poimo portable inflatable mobility project we wrote about a little while ago; while not strictly robotics, it seems like it holds some promise for rapidly developing different soft structures that robotics might find useful.

[ University of Tokyo ]

Thanks Ryuma!

Pretty cool that you can do useful force feedback teleop while video chatting through a “regular broadband Internet connection.” Although, what “regular” means to you is a bit subjective, right?

[ HEBI Robotics ]

Thanks Dave!

While NASA's Mars rover Perseverance travels through space toward the Red Planet, its nearly identical rover twin is hard at work on Earth. The vehicle system test bed (VSTB) rover named OPTIMISM is a full-scale engineering version of the Mars-bound rover. It is used to test hardware and software before the commands are sent up to the Perseverance rover.

[ NASA ]

Jacquard takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose — like being your favorite jacket, backpack or a pair of shoes that you love to wear.

Our ambition is simple: to make life easier. By staying connected to your digital world, your things can do so much more. Skip a song by brushing your sleeve. Take a picture by tapping on a shoulder strap. Get reminded about the phone you left behind with a blink of light or a haptic buzz on your cuff.

[ Google ATAP ]

Should you attend the IROS 2020 workshop on “Planetary Exploration Robots: Challenges and Opportunities”? Of course you should!

[ Workshop ]

Kuka makes a lot of these videos where I can’t help but think that if they put as much effort into programming the robot as they did into producing the video, the result would be much more impressive.

[ Kuka ]

The Colorado School of Mines is one of the first customers to buy a Spot robot from Boston Dynamics to help with robotics research. Watch as scientists take Spot into the school's mine for the first time.

[ HCR ] via [ CNET ]

A very interesting soft(ish) actuator from Ayato Kanada at Kyushu University's Control Engineering Lab.

A flexible ultrasonic motor (FUSM) generates linear motion as a novel soft actuator. This motor consists of a single metal cube stator with a hole and an elastic elongated coil spring inserted into the hole. When voltages are applied to piezoelectric plates on the stator, the coil spring moves back and forth as a linear slider. Because the FUSM uses friction drive as its operating principle, the most important parameter for optimizing its output is the preload between the stator and slider. The coil spring has a slightly larger diameter than the stator hole and generates the preload by expanding in the radial direction. The coil spring acts not only as a flexible slider but also as a resistive position sensor: Changes in the resistance between the stator and the end of the coil spring are converted to a voltage and used for position detection.
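As a rough sketch of that resistive position sensing, the Python snippet below treats the coil spring as one leg of a voltage divider and maps a measured voltage back to slider position. The component values and the assumption of linearity are illustrative, not measured parameters of the actual motor.

```python
# Sketch: the coil spring acts like a potentiometer, so the measured voltage
# maps back to slider position. All values here are illustrative assumptions.

def position_from_voltage(v_meas, v_supply=3.3, r_fixed=10e3,
                          r_per_mm=100.0, travel_mm=50.0):
    """Voltage divider: v_meas = v_supply * r_spring / (r_fixed + r_spring).
    Solve for r_spring, then convert resistance to millimeters of travel."""
    r_spring = r_fixed * v_meas / (v_supply - v_meas)
    pos_mm = r_spring / r_per_mm
    return min(max(pos_mm, 0.0), travel_mm)   # clamp to the physical stroke

# Example: a 1.0 V reading with the assumed values above.
print(f"{position_from_voltage(1.0):.1f} mm")
```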

[ Control Engineering Lab ]

Thanks Ayato!

We show how to use the limbs of a quadruped robot to identify fine-grained soil, representative for Martian regolith.

[ Paper ] via [ ANYmal Research ]

PR2 is serving breakfast and cleaning up afterwards. It’s slow, but all you have to do is eat and leave.

That poor PR2 is a little more naked than it's probably comfortable with.

[ EASE ]

NVIDIA researchers present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped robot (the Unitree Laikago).

[ NVIDIA ]

What's interesting about this assembly task is that the robot is using its arm only for positioning, and doing the actual assembly with just fingers.

[ RC2L ]

In this electronics assembly application, Kawasaki's cobot duAro2 uses a tool changing station to tackle a multitude of tasks and assemble different CPU models.

Okay but can it apply thermal paste to a CPU in the right way? Personally, I find that impossible.

[ Kawasaki ]

You only need to watch this video long enough to appreciate the concept of putting a robot on a robot.

[ Impress ]

In this lecture, we’ll hear from the man behind one of the biggest robotics companies in the world, Boston Dynamics, whose robotic dog, Spot, has been used to encourage social distancing in Singapore and is now getting ready for FDA approval to be able to measure patients’ vital signs in hospitals.

[ Alan Turing Institute ]

Greg Kahn from UC Berkeley wrote in to share his recent dissertation talk on “Mobile Robot Learning.”

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, there are many challenges impeding mobile robots from autonomously learning to act in the real world, in particular (1) sample efficiency: how to learn using a limited amount of data; (2) supervision: how to tell the robot what to do; and (3) safety: how to ensure the robot and environment are not damaged or destroyed during learning. In this talk, I will present deep reinforcement learning methods for addressing these real-world mobile robot learning challenges and show results which enable ground and aerial robots to navigate in complex indoor and outdoor environments.

[ UC Berkeley ]

Thanks Greg!

Leila Takayama from UC Santa Cruz (and previously Google X and Willow Garage) gives a talk entitled “Toward a more human-centered future of robotics.”

Robots are no longer only in outer space, in factory cages, or in our imaginations. We interact with robotic agents when withdrawing cash from bank ATMs, driving cars with adaptive cruise control, and tuning our smart home thermostats. In the moment of those interactions with robotic agents, we behave in ways that do not necessarily align with the rational belief that robots are just plain machines. Through a combination of controlled experiments and field studies, we use theories and concepts from the social sciences to explore ways that human and robotic agents come together, including how people interact with personal robots and how people interact through telepresence robots. Together, we will explore topics and raise questions about the psychology of human-robot interaction and how we could invent a future of a more human-centered robotics that we actually want to live in.

[ Leila Takayama ]

Roboticist and stand-up comedian Naomi Fitter from Oregon State University gives a talk on “Everything I Know about Telepresence.”

Telepresence robots hold promise to connect people by providing videoconferencing and navigation abilities in far-away environments. At the same time, the impacts of current commercial telepresence robots are not well understood, and circumstances of robot use including internet connection stability, odd personalizations, and interpersonal relationship between a robot operator and people co-located with the robot can overshadow the benefit of the robot itself. And although the idea of telepresence robots has been around for over two decades, available nonverbal expressive abilities through telepresence robots are limited, and suitable operator user interfaces for the robot (for example, controls that allow for the operator to hold a conversation and move the robot simultaneously) remain elusive. So where should we be using telepresence robots? Are there any pitfalls to watch out for? What do we know about potential robot expressivity and user interfaces? This talk will cover my attempts to address these questions and ways in which the robotics research community can build off of this work.

[ Talking Robotics ]


#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command-line interface that could be intimidating for new users. We did ask about why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
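For a flavor of how a behavior like person following can bottom out in fairly simple control logic, here is a generic Python sketch that steers to center a detected bounding box and drives forward until the box fills enough of the frame. This is not OpenBot’s actual code; the detection interface, command format, and gains are all assumptions.

```python
# Generic person-following sketch: center the detected bounding box and drive
# until the person appears "close enough." Not OpenBot's implementation.

def follow_command(box, frame_width, frame_height,
                   k_steer=1.0, k_throttle=0.8, target_fill=0.35):
    """box = (x_min, y_min, x_max, y_max) in pixels; returns (left, right) in [-1, 1]."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0
    steer = k_steer * (cx / frame_width - 0.5) * 2.0        # -1 (left) .. +1 (right)

    # Throttle falls to zero as the box grows toward the target fraction of the frame.
    fill = ((x_max - x_min) * (y_max - y_min)) / (frame_width * frame_height)
    throttle = k_throttle * max(0.0, target_fill - fill) / target_fill

    left = max(-1.0, min(1.0, throttle + steer))
    right = max(-1.0, min(1.0, throttle - steer))
    return left, right

# Example: person detected slightly right of center, still far away.
print(follow_command((400, 100, 520, 400), 640, 480))
```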

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost community focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.


#437600 Brain-Inspired Robot Controller Uses ...

Robots operating in the real world are starting to find themselves constrained by the amount of computing power they have available. Computers are certainly getting faster and more efficient, but they’re not keeping up with the potential of robotic systems, which have access to better sensors and more data, which in turn makes decision making more complex. A relatively new kind of computing device called a memristor could potentially help robotics smash through this barrier, through a combination of lower complexity, lower cost, and higher speed.

In a paper published today in Science Robotics, a team of researchers from the University of Southern California in Los Angeles and the Air Force Research Laboratory in Rome, N.Y., demonstrate a simple self-balancing robot that uses memristors to form a highly effective analog control system, inspired by the functional structure of the human brain.

First, we should go over just what the heck a memristor is. As the name suggests, it’s a type of memory that is resistance-based. That is, the resistance of a memristor can be programmed, and the memristor remembers that resistance even after it’s powered off (the resistance depends on the magnitude of the voltage applied to the memristor’s two terminals and the length of time that voltage has been applied). Memristors are potentially the ideal hybrid between RAM and flash memory, offering high speed, high density, non-volatile storage. So that’s cool, but what we’re most interested in as far as robot control systems go is that memristors store resistance, making them analog devices rather than digital ones.
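For intuition on that “programmable resistance,” here is a small Python sketch of the classic linear ion-drift memristor model popularized by HP Labs: an internal state driven by the charge that has flowed through the device sets its resistance and persists when the drive is removed. The parameter values are textbook-style illustrations, not the devices used in this paper.

```python
# Linear ion-drift memristor model: resistance depends on an internal state x
# that drifts with the current through the device and is retained at power-off.

import numpy as np

R_on, R_off = 100.0, 16_000.0   # fully doped / undoped resistance, ohms (assumed)
k = 1.0e4                       # mu_v * R_on / D**2, roughly 1/coulomb (assumed)
dt = 1e-4
x = 0.1                         # internal state in [0, 1]

def memristance(x):
    return R_on * x + R_off * (1.0 - x)

def step(x, v, dt):
    i = v / memristance(x)      # current through the device
    x = x + k * i * dt          # state drifts with the charge that has flowed
    return float(np.clip(x, 0.0, 1.0)), i

# Apply +1 V for 0.5 s: the resistance drops and stays low afterward.
for _ in range(int(0.5 / dt)):
    x, _ = step(x, 1.0, dt)
print(f"programmed resistance: {memristance(x):.0f} ohms (state x = {x:.2f})")

# With no applied voltage, the state (and thus the stored resistance) is retained.
x_after, _ = step(x, 0.0, dt)
print("state after an unpowered step:", round(x_after, 2))
```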

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers created a completely analog Kalman filter which, coupled to a second memristor, functioned as a PD controller.

Nowadays, the word “analog” sounds like a bad thing, but robots are stuck in an analog world, and any physical interactions they have with the world (mediated through sensors) are fundamentally analog in nature. The challenge is that an analog signal is often “messy”—full of noise and non-linearities—and as such, the usual approach now is to get it converted to a digital signal and then processed to get anything useful out of it. This is fine, but it’s also not particularly fast or efficient. Where memristors come in is that they’re inherently analog, and in addition to storing data, they can also act as tiny analog computers, which is pretty wild.

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers, led by Wei Wu, an associate professor of electrical engineering at USC, created a completely analog and completely physical Kalman filter to remove noise from the sensor signal. In addition, they used a second memristor to turn that sensor data into a proportional-derivative (PD) controller. Next, they put those two components together to build an analog system that can do a bunch of the work required to keep an inverted pendulum robot upright far more efficiently than a traditional system. The difference in performance is readily apparent:

The shaking you see in the traditionally-controlled robot on the bottom comes from the non-linearity of the dynamic system, which changes faster than the on-board controller can keep up with. The memristors substantially reduce the cycle time, so the robot can balance much more smoothly. Specifically, cycle time is reduced from 3,034 microseconds to just 6 microseconds.
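The memristor circuit does this filtering and control directly in the analog domain; for reference, here is what the familiar digital equivalent looks like in Python: a one-state Kalman filter fusing a gyro rate with an accelerometer tilt angle, feeding a PD law. The gains and noise values are illustrative assumptions, not numbers from the paper.

```python
# Digital equivalent of the sensor-fusion-plus-PD loop for a tilt-balancing robot.
# Gains and noise parameters are illustrative, not taken from the paper.

import math

class TiltKalman:
    def __init__(self, q=0.01, r=0.1):
        self.angle, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r           # process and measurement noise

    def update(self, gyro_rate, accel_angle, dt):
        # Predict with the gyro (fast but drifting)...
        self.angle += gyro_rate * dt
        self.p += self.q
        # ...then correct with the accelerometer (noisy but drift-free).
        k = self.p / (self.p + self.r)
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

def pd_control(angle, rate, kp=40.0, kd=2.0):
    # Proportional-derivative law: push back against tilt and tilt rate.
    return -(kp * angle + kd * rate)

# Example update at 1 kHz with a small measured tilt.
kf = TiltKalman()
angle = kf.update(gyro_rate=0.2, accel_angle=math.radians(3.0), dt=0.001)
print("estimated angle (rad):", round(angle, 4),
      " motor command:", round(pd_control(angle, 0.2), 2))
```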

Of course, there’s more going on here, like motor drivers and a digital computer to talk to them, so this robot is really a hybrid system. But guess what? As the researchers point out, so are we!

The human brain consists of the cerebrum, the cerebellum, and the brainstem. The cerebrum is a major part of the brain in charge of vision, hearing, and thinking, whereas the cerebellum plays an important role in motion control. Through this cooperation of the cerebrum and the cerebellum, the human brain can conduct multiple tasks simultaneously with extremely low power consumption. Inspired by this, we developed a hybrid analog-digital computation platform, in which the digital component runs the high-level algorithm, whereas the analog component is responsible for sensor fusion and motion control.

By offloading a bunch of computation onto the memristors, the higher brain functions of the robot have more breathing room. Overall, you reduce power, space, and cost, while substantially improving performance. This has only become possible relatively recently due to memristor advances and availability, and the researchers expect that memristor-based hybrid computing will soon be able to “improve the robustness and the performance of mobile robotic systems with higher” degrees of freedom.

“A memristor-based hybrid analog-digital computing platform for mobile robotics,” by Buyun Chen, Hao Yang, Boxiang Song, Deming Meng, Xiaodong Yan, Yuanrui Li, Yunxiang Wang, Pan Hu, Tse-Hsien Ou, Mark Barnell, Qing Wu, Han Wang, and Wei Wu, from USC Viterbi and AFRL, was published in Science Robotics.
