How Intel’s OpenBot Wants to Make ...
You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”
This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.
Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.
One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.
As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough computing power and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he pointed to the existence of a large maker community with broad access to 3D printing, as well as open source software that makes development easier.
And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.
Image: OpenBot
Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create 2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots designed for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.
If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask why OpenBot isn’t ROS-based, to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
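To make that division of labor concrete, here is a minimal sketch of the kind of firmware a phone-as-brain robot body could run, with the phone handling all of the perception and planning while the microcontroller just turns simple serial commands into motor power. To be clear, the command format, baud rate, and pin assignments below are invented for illustration; OpenBot’s actual firmware and protocol differ.

```cpp
// Hypothetical firmware for a phone-driven robot body (not
// OpenBot's actual code). The phone does vision, planning, and
// networking; the microcontroller only parses commands of the
// invented form "c<left>,<right>\n" (e.g. "c128,96") from USB
// serial and sets PWM duty cycles on two motor drivers.

const int PIN_LEFT_PWM  = 5;  // assumed left motor driver input
const int PIN_RIGHT_PWM = 6;  // assumed right motor driver input

void setup() {
  Serial.begin(115200);       // assumed USB serial link to the phone
  pinMode(PIN_LEFT_PWM, OUTPUT);
  pinMode(PIN_RIGHT_PWM, OUTPUT);
}

void loop() {
  if (Serial.available() > 0 && Serial.read() == 'c') {
    int left = Serial.parseInt();      // first PWM value, 0-255
    if (Serial.read() == ',') {
      int right = Serial.parseInt();   // second PWM value, 0-255
      analogWrite(PIN_LEFT_PWM,  constrain(left, 0, 255));
      analogWrite(PIN_RIGHT_PWM, constrain(right, 0, 255));
    }
  }
}
```

The appeal of this split is that everything computationally hard lives on the phone, so the robot body really can stay at Arduino project level.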
Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be risky to rely too heavily on community contributions, especially when something like this is first starting out.
The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part of building a low-cost, community-focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.
Dart-Shooting Drone Attacks Trees for ...
We all know how robots are great at going to places where you can’t (or shouldn’t) send a human. We also know how robots are great at doing repetitive tasks. These characteristics have the potential to make robots ideal for setting up wireless sensor networks in hazardous environments—that is, they could deploy a whole bunch of self-contained sensor nodes that create a network that can monitor a very large area for a very long time.
When it comes to using drones to set up sensor networks, you’ve generally got two options: a drone that simply drops sensors on the ground (easy, but inaccurate, and it limits where sensors can end up), or a drone with some sort of manipulator that sticks sensors in specific places (accurate, but complicated and risky). A third option, under development by researchers at Imperial College London’s Aerial Robotics Lab, provides the accuracy of direct contact with the safety and ease of passive dropping by instead using the drone as a launching platform for laser-aimed, sensor-equipped darts.
These darts (which the researchers refer to as aerodynamically stabilized, spine-equipped sensor pods) can embed themselves in relatively soft targets from up to 4 meters away, with an accuracy of about 10 centimeters, after being fired from a spring-loaded launcher. That’s not quite as accurate as a drone with a manipulator, but it’s pretty good, and it lets the drone maintain a safe distance from the surface it’s adding a sensor to. Obviously, the spine is only going to work on things like wood, but the researchers point out that there are plenty of other attachment mechanisms that could be used, including magnets, adhesives, chemical bonding, or microspines.
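For a feel for the launch energetics, here is a back-of-envelope calculation; the spring constant, compression, and dart mass below are assumed for illustration, not taken from the paper:

```latex
% Spring potential energy converts to dart kinetic energy:
\tfrac{1}{2} k x^2 = \tfrac{1}{2} m v^2
\quad\Longrightarrow\quad
v = x \sqrt{k/m}
% Assumed example: k = 1200 N/m, compression x = 0.05 m, mass m = 0.03 kg:
v = 0.05 \sqrt{1200 / 0.03} = 10\ \text{m/s}
```

At that (assumed) 10 meters per second, a dart takes about 0.4 seconds to cover 4 meters, over which gravity drops it by nearly 0.8 meters (half of g times the flight time squared), so the roughly 10-centimeter accuracy is as much about repeatable, calibrated aiming as about raw launch speed.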
Indoor tests using magnets showed the system to be quite reliable, but at close range (within a meter of the target) the darts sometimes bounced off rather than sticking. From between 1 and 4 meters away, the darts stuck between 90 and 100 percent of the time. Initial outdoor tests were also successful, although the system was under manual control. The researchers say that “regular and safe operations should be carried out autonomously,” which, yeah, you’d just have to deal with all of the extra sensing and hardware required to autonomously fly beneath the canopy of a forest. That’s happening next, as the researchers plan to add “vision state estimation and positioning, as well as a depth sensor” to avoid some trees and fire sensors into others.
And if all of that goes well, they’ll consider trying to get each drone to carry multiple darts. Look out, trees: You’re about to be pierced for science.
“Unmanned Aerial Sensor Placement for Cluttered Environments,” by André Farinha, Raphael Zufferey, Peter Zheng, Sophie F. Armanini, and Mirko Kovac from Imperial College London, was published in IEEE Robotics and Automation Letters.