
#437628 Video Friday: An In-Depth Look at Mesmer ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Bear Robotics, a robotics and artificial intelligence company, and SoftBank Robotics Group, a leading robotics manufacturer and solutions provider, have collaborated to bring a new robot named Servi to the food service and hospitality field.

[ Bear Robotics ]

A literal in-depth look at Engineered Arts’ Mesmer android.

[ Engineered Arts ]

Is your robot running ROS? Is it connected to the Internet? Are you actually in control of it right now? Are you sure?

I appreciate how the researchers admitted to finding two of their own robots as part of the scan, a Baxter and a drone.
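For a rough sense of how such a scan can work (this is an illustration, not the Brown researchers’ actual tooling), a ROS 1 master exposes an XML-RPC endpoint on TCP port 11311 by default, so simply checking whether that port answers on a host is enough to flag a potentially exposed robot:

# Minimal sketch: check whether a host answers on TCP port 11311, the default
# port for a ROS 1 master's XML-RPC API. An open, unauthenticated master means
# anyone who can reach it can list and publish to the robot's topics.
import socket

def ros_master_reachable(host, port=11311, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address; only scan hosts you are authorized to test.
    print(ros_master_reachable("192.0.2.10"))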

[ Brown ]

Smile Robotics describes this as “(possibly) world’s first full-autonomous clear-up-the-table robot.”

We’re not qualified to make a judgement on the world firstness, but personally I hate clearing tables, so this robot has my vote.

Smile Robotics founder and CEO Takashi Ogura, along with chief engineer Mitsutaka Kabasawa and engineer Kazuya Kobayashi, are former Google roboticists. Ogura also worked at SCHAFT. Smile says its robot uses ROS and is controlled by a framework written mainly in Rust, adding: “We are hiring Rustacean Roboticists!”

[ Smile Robotics ]

We’re not entirely sure why, but Panasonic has released plans for an Internet of Things system for hamsters.

We devised a recipe for a “small animal healthcare device” that can measure the weight and activity of small animals, the temperature and humidity of the breeding environment, and manage their health. This healthcare device visualizes the health status and breeding environment of small animals and manages their health to promote early detection of diseases. While imagining the scene where a healthcare device is actually used for an important small animal that we treat with affection, we hope to help overcome the current difficult situation through manufacturing.

[ Panasonic ] via [ RobotStart ]

Researchers at Yale have developed a robotic fabric, a breakthrough that could lead to such innovations as adaptive clothing, self-deploying shelters, or lightweight shape-changing machinery.

The researchers focused on processing functional materials into fiber-form so they could be integrated into fabrics while retaining their advantageous properties. For example, they made variable stiffness fibers out of an epoxy embedded with particles of Field’s metal, an alloy that liquefies at relatively low temperatures. When cool, the particles are solid metal and make the material stiffer; when warm, the particles melt into liquid and make the material softer.

[ Yale ]

In collaboration with Armasuisse and SBB, RSL demonstrated the use of a teleoperated Menzi Muck M545 to clean up a rock slide in Central Switzerland. The machine can be operated from a teleoperation platform with visual and motion feedback. The walking excavator features an active chassis that can adapt to uneven terrain.

[ ETHZ RSL ]

An international team of JKU researchers is continuing to develop their vision for robots made out of soft materials. A new article in the journal “Communications Materials” demonstrates just how these kinds of soft machines react using weak magnetic fields to move very quickly. A triangle-shaped robot can roll itself in air at high speed and walk forward when exposed to an alternating in-plane square wave magnetic field (3.5 mT, 1.5 Hz). The diameter of the robot is 18 mm with a thickness of 80 µm. A six-arm robot can grab, transport, and release non-magnetic objects such as a polyurethane foam cube controlled by a permanent magnet.

Okay but tell me more about that cute sheep.

[ JKU ]

Interbotix has this “research level robotic crawler,” which both looks mean and runs ROS, a dangerous combination.

And here’s how it all came together:

[ Interbotix ]

I guess if you call them “loitering missile systems” rather than “drones that blow things up” people are less likely to get upset?

[ AeroVironment ]

In this video, we show a planner for a master dual-arm robot to manipulate tethered tools with an assistant dual-arm robot’s help. The assistant robot provides assistance to the master robot by manipulating the tool cable and avoiding collisions. The provided assistance allows the master robot to perform tool placements on the robot workspace table to regrasp the tool, which would typically fail since the tool cable tension may change the tool positions. It also allows the master robot to perform tool handovers, which would normally cause entanglements or collisions with the cable and the environment without the assistance.

[ Harada Lab ]

This video shows a flexible and robust robotic system for autonomous drawing on 3D surfaces. The system takes 2D drawing strokes and a 3D target surface (mesh or point clouds) as input. It maps the 2D strokes onto the 3D surface and generates a robot motion to draw the mapped strokes using visual recognition, grasp pose reasoning, and motion planning.
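The full pipeline involves recognition, grasp reasoning, and planning, but the core mapping step can be pictured as projecting each 2D stroke point onto the nearest point of the target surface. Here is a toy sketch of that projection against a point cloud (illustrative only, not the Harada Lab code):

# Toy sketch of mapping 2D stroke points onto a 3D point cloud by nearest-
# neighbor projection in the drawing plane. This conveys the general idea only;
# the actual system also handles surface normals, grasping, and motion planning.
import numpy as np

def map_strokes_to_surface(strokes_2d, cloud):
    """strokes_2d: (N, 2) points in the drawing plane (x, y).
    cloud: (M, 3) target surface points.
    Returns an (N, 3) array of surface points closest to each stroke point in x-y."""
    mapped = []
    for sx, sy in strokes_2d:
        # Distance measured in the projection plane only; the surface supplies depth (z).
        d2 = (cloud[:, 0] - sx) ** 2 + (cloud[:, 1] - sy) ** 2
        mapped.append(cloud[np.argmin(d2)])
    return np.array(mapped)

# Example: a straight stroke mapped onto a tilted planar patch (z = 0.5 * x).
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
surface = np.column_stack([xs.ravel(), ys.ravel(), 0.5 * xs.ravel()])
stroke = np.column_stack([np.linspace(0.1, 0.9, 20), np.full(20, 0.5)])
print(map_strokes_to_surface(stroke, surface)[:3])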

[ Harada Lab ]

Weekly mobility test. This time the Warthog takes on a fallen tree. Will it cross it? The answer is in the video!

And the answer is: kinda?

[ NORLAB ]

One of the advantages of walking machines is their ability to apply forces in all directions and of various magnitudes to the environment. Many of the multi-legged robots are equipped with point contact feet as these simplify the design and control of the robot. The iStruct project focuses on the development of a foot that allows extensive contact with the environment.

[ DFKI ]

An urgent medical transport was simulated in NASA’s second Systems Integration and Operationalization (SIO) demonstration Sept. 28 with partner Bell Textron Inc. Bell used the remotely-piloted APT 70 to conduct a flight representing an urgent medical transport mission. It is envisioned in the future that an operational APT 70 could provide rapid medical transport for blood, organs, and perishable medical supplies (payload up to 70 pounds). The APT 70 is estimated to move three times as fast as ground transportation.

Always a little suspicious when the video just shows the drone flying, and sitting on the ground, but not that tricky transition between those two states.

[ NASA ]

A Lockheed Martin Robotics Seminar on “Socially Assistive Mobile Robots,” by Yi Guo from Stevens Institute of Technology.

The use of autonomous mobile robots in human environments is on the rise. Assistive robots have been seen in real-world environments, such as robot guides in airports, robot police in public parks, and patrolling robots in supermarkets. In this talk, I will first present current research activities conducted in the Robotics and Automation Laboratory at Stevens. I’ll then focus on robot-assisted pedestrian regulation, where pedestrian flows are regulated and optimized through passive human-robot interaction.

[ UMD ]

This week’s CMU RI Seminar is by CMU’s Zachary Manchester, on “The World’s Tiniest Space Program.”

The aerospace industry has experienced a dramatic shift over the last decade: Flying a spacecraft has gone from something only national governments and large defense contractors could afford to something a small startup can accomplish on a shoestring budget. A virtuous cycle has developed where lower costs have led to more launches and the growth of new markets for space-based data. However, many barriers remain. This talk will focus on driving these trends to their ultimate limit by harnessing advances in electronics, planning, and control to build spacecraft that cost less than a new smartphone and can be deployed in large numbers.

[ CMU RI ]


#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Mueller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Mueller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask about why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Mueller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
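To give a sense of the “Arduino project level” hardware side, a phone or host talking to a low-cost motor controller often comes down to a few bytes over USB serial. The sketch below is a hypothetical illustration of that kind of link; the message format, port name, and function are made up here and are not OpenBot’s actual protocol or API:

# Hypothetical illustration only -- NOT OpenBot's actual protocol or API.
# It shows the general shape of a host-to-microcontroller serial link at the
# "Arduino project" level of complexity described above.
import serial  # pyserial; install with "pip install pyserial"

def send_wheel_command(port, left, right):
    """Send normalized wheel speeds in [-1, 1] as a newline-terminated ASCII line."""
    left = max(-1.0, min(1.0, left))
    right = max(-1.0, min(1.0, right))
    port.write(f"c{left:.2f},{right:.2f}\n".encode("ascii"))

if __name__ == "__main__":
    # Device path and baud rate are assumptions; adjust for your own setup.
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as mcu:
        send_wheel_command(mcu, 0.5, 0.5)   # drive forward at half speed
        send_wheel_command(mcu, 0.0, 0.0)   # stop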

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Mueller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost community focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.


#437598 Video Friday: Sarcos Is Developing a New ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft unfurled its robotic arm Oct. 20, 2020, and in a first for the agency, briefly touched an asteroid to collect dust and pebbles from the surface for delivery to Earth in 2023.

[ NASA ]

New from David Zarrouk’s lab at BGU is AmphiSTAR, which Zarrouk describes as “a kind of a ground-water drone inspired by the cockroaches (sprawling) and by the Basilisk lizard (running over water). The robot hovers due to the collision of its propellers with the water (hydrodynamics not aerodynamics). The robot can crawl and swim at high and low speeds and smoothly transition between the two. It can reach 3.5 m/s on ground and 1.5 m/s in water.”

AmphiSTAR will be presented at IROS, starting next week!

[ BGU ]

This is unfortunately not a great video of a video that was taken at a SoftBank Hawks baseball game in Japan last week, but it’s showing an Atlas robot doing an honestly kind of impressive dance routine to support the team.

The humanoid robot ATLAS makes an emergency appearance with the robot cheering squad, joining remotely from the United States!!!
Please enjoy the footage on Hawks Vision ♪ #sbhawks #Pepper #spot pic.twitter.com/6aTYn8GGli
— Fukuoka SoftBank Hawks (Official) (@HAWKS_official)
October 16, 2020

Editor’s Note: The tweet embed above is not working for some reason—see the video here.

[ SoftBank Hawks ]

Thanks Thomas!

Sarcos is working on a new robot, which looks to be the torso of their powered exoskeleton with the human relocated somewhere else.

[ Sarcos ]

The biggest holiday of the year, International Sloth Day, was on Tuesday! To celebrate, here’s Slothbot!

[ NSF ]

This is one of those simple-seeming tasks that are really difficult for robots.

I love self-resetting training environments.

[ MIT CSAIL ]

The Chiel lab collaborates with engineers at the Center for Biologically Inspired Robotics Research at Case Western Reserve University to design novel worm-like robots that have potential applications in search-and-rescue missions, endoscopic medicine, or other scenarios requiring navigation through narrow spaces.

[ Case Western ]

ANYbotics partnered with Losinger Marazzi to explore ANYmal’s potential for patrolling construction sites to identify and report safety issues. With such a complex environment, only a robot designed to navigate difficult terrain can bring digitalization to such a physically demanding industry.

[ ANYbotics ]

Happy 2018 Halloween from Clearpath Robotics!

[ Clearpath ]

Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods tackled this radical illumination variance issue by proposing camera control or high dynamic range (HDR) image fusion. Despite these efforts, we have found that the vision-based approaches still suffer from overcoming darkness. This paper presents real-time image synthesizing from a carefully controlled seed low dynamic range (LDR) image, to enable visual simultaneous localization and mapping (SLAM) in an extremely dark environment (less than 10 lux).
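The paper’s method is more involved, but the underlying idea of synthesizing differently exposed frames from a single dark seed image can be sketched with a simple gain-and-gamma transform (an illustration of the concept, not the KAIST implementation):

# Illustrative sketch: synthesize brighter "exposures" from a single dark seed
# LDR image with a gain + gamma transform, so a SLAM front end has features to
# track. The actual KAIST method is more sophisticated than this.
import numpy as np

def synthesize_exposure(seed, gain, gamma):
    """seed: uint8 image (H, W) or (H, W, 3). Returns a uint8 image."""
    img = seed.astype(np.float32) / 255.0
    out = np.clip(gain * img, 0.0, 1.0) ** gamma   # gamma < 1 lifts the shadows
    return (out * 255.0).astype(np.uint8)

# Example: generate a bracket of progressively brighter frames from a dark frame.
dark = (np.random.rand(480, 640) * 20).astype(np.uint8)  # stand-in for a <10 lux image
bracket = [synthesize_exposure(dark, gain=g, gamma=0.6) for g in (2.0, 4.0, 8.0)]
print([b.mean() for b in bracket])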

[ KAIST ]

What can MoveIt do? Who knows! Let's find out!

[ MoveIt ]

Thanks Dave!

Here we pick a cube from a starting point, manipulate it within the hand, and then put it back. To explore the capabilities of the hand, no sensors were used in this demonstration. The RBO Hand 3 uses soft pneumatic actuators made of silicone. The softness imparts considerable robustness against variations in object pose and size. This lets us design manipulation funnels that work reliably without needing sensor feedback. We take advantage of this reliability to chain these funnels into more complex multi-step manipulation plans.

[ TU Berlin ]

If this was a real solar array, King Louie would have totally cleaned it. Mostly.

[ BYU ]

Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, were demonstrated to have low efficiency, due to the lack of optimality consideration, conservative motion plans and low decision frequencies. In this paper, we propose FUEL, a hierarchical framework that can support Fast UAV ExpLoration in complex unknown environments.

[ HKUST ]

Countless precise repetitions? This is the perfect task for a robot, thought researchers at the University of Liverpool in the Department of Chemistry, and without further ado they developed an automation solution that can carry out and monitor research tasks, making autonomous decisions about what to do next.

[ Kuka ]

This video shows a demonstration of central results of the SecondHands project. In the context of maintenance and repair tasks in warehouse environments, the collaborative humanoid robot ARMAR-6 demonstrates a number of cognitive and sensorimotor abilities such as 1) recognition of the need for help based on speech, force, haptics, and visual scene and action interpretation, 2) collaborative bimanual manipulation of large objects, 3) compliant mobile manipulation, 4) grasping known and unknown objects and tools, 5) human-robot interaction (object and tool handover), 6) natural dialog, and 7) force predictive control.

[ SecondHands ]

In celebration of Ada Lovelace Day, Silicon Valley Robotics hosted a panel of Women in Robotics.

[ Robohub ]

As part of the upcoming virtual IROS conference, HEBI robotics is putting together a tutorial on robotics actuation. While I’m sure HEBI would like you to take a long look at their own actuators, we’ve been assured that no matter what kind of actuators you use, this tutorial will still be informative and useful.

[ YouTube ] via [ HEBI Robotics ]

Thanks Dave!

This week’s UMD Lockheed Martin Robotics Seminar comes from Julie Shah at MIT, on “Enhancing Human Capability with Intelligent Machine Teammates.”

Every team has top performers: people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways.

[ UMD ]

Matthew Piccoli gives a talk to the UPenn GRASP Lab on “Trading Complexities: Smart Motors and Dumb Vehicles.”

We will discuss my research journey through Penn making the world's smallest, simplest flying vehicles, and in parallel making the most complex brushless motors. What do they have in common? We'll touch on why the quadrotor went from an obscure type of helicopter to the current ubiquitous drone. Finally, we'll get into my life after Penn and what tools I'm creating to further drone and robot designs of the future.

[ UPenn ]


#437564 How We Won the DARPA SubT Challenge: ...

This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.​

“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.

Team BARCS joins the SubT Virtual Track
The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018, and at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has provided the opportunity for experts in fields other than robotics to apply novel approaches to robotics problems. This is precisely what makes the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!

After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.

Building a team of virtual robots
A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!

Image: Michigan Tech Research Institute

The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.

One of the frequent questions we receive about the Virtual Track is if it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to service the game narrative and play experience, not require novel research in AI and autonomy. The purpose of simulations, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to those real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.

Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.

Full autonomy for DARPA-hard scenarios
The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain.

The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and joint search strategy for our team. For example, instead of sharing the full SLAM map of the environment, each agent only shares a simplified graphical representation of the space, along with data about frontiers it has not yet explored, and is able to merge this information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without having full knowledge of the detailed 3D map.
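In spirit, the shared representation looks something like the toy sketch below (an illustration of the idea, not Team BARCS’s actual data structures): a small graph of landmark positions plus frontier markers, merged by taking the union of nodes and edges and dropping any frontier that some agent has since explored. It also assumes the agents can agree on node identities, which is itself part of the hard problem.

# Toy sketch of a simplified shared map: landmark/frontier nodes plus edges,
# merged by union. Illustrative of the idea described above only.
from dataclasses import dataclass, field

@dataclass
class SharedGraph:
    nodes: dict = field(default_factory=dict)    # node_id -> (x, y, z) position
    edges: set = field(default_factory=set)      # undirected edges as sorted id pairs
    frontiers: set = field(default_factory=set)  # node_ids no agent has explored yet

    def add_node(self, node_id, position, frontier=False):
        # Assumes agents assign the same node_id to the same landmark (the hard part).
        self.nodes[node_id] = position
        if frontier:
            self.frontiers.add(node_id)
        else:
            self.frontiers.discard(node_id)      # visiting a node clears its frontier flag

    def add_edge(self, a, b):
        self.edges.add(tuple(sorted((a, b))))

    def merge(self, other):
        """Union of nodes and edges; a frontier survives only if no agent explored it."""
        explored = ({n for n in self.nodes if n not in self.frontiers} |
                    {n for n in other.nodes if n not in other.frontiers})
        self.nodes.update(other.nodes)
        self.edges |= other.edges
        self.frontiers = (self.frontiers | other.frontiers) - explored

# Example: agent B has explored what agent A still considers a frontier.
a = SharedGraph()
a.add_node("junction_1", (0, 0, 0))
a.add_node("frontier_A", (5, 0, 0), frontier=True)
a.add_edge("junction_1", "frontier_A")
b = SharedGraph()
b.add_node("frontier_A", (5, 0, 0))
a.merge(b)
print(a.frontiers)   # empty: the shared frontier was resolved by agent B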

The Virtual Track requires that the robotic agents be fully autonomous. With no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to identify and manage faults and failures anywhere in the control chain.

Since the objective of the SubT program is to advance the state-of-the-art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are sufficiently rich as to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassible obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we not only had to change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also had to change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on the ramp.

In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
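The classic way to picture this goal-generation step is frontier detection: any known-free region bordering unexplored space is a candidate goal. The sketch below shows that textbook version on a toy occupancy grid; the approach described above works on a richer interior-space analysis of the SLAM data rather than a simple grid:

# Classic frontier detection on a 2D occupancy grid: a free cell bordering
# unknown space is a candidate exploration goal. Illustrative only.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) indices of free cells with at least one unknown 4-neighbor."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

grid = np.full((5, 7), UNKNOWN)
grid[2, :4] = FREE           # an explored corridor
grid[1, 1] = OCCUPIED        # a wall cell
print(frontier_cells(grid))  # cells where the corridor meets unknown space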

Sending our robots into the virtual unknown
The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The challenges of the worlds used in the Circuit events are also hand-designed, so features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we at least are submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “we did our best to prepare you for this world, little bots. Go do good.”

Image: Michigan Tech Research Institute

The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble.

The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second place finish, behind Team Coordinated Robotics.

The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes and implementations of checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream from SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method to add a check to see if the new map position has jumped a far distance from the prior map position, and if so, we threw that data out.
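That fix amounts to a simple gating filter on registered positions. Here is a minimal sketch of the idea (the threshold is an assumed value, and this is an illustration rather than the team’s actual code):

# Minimal sketch of the jump check described above: reject a newly registered
# position if it is implausibly far from the previous one.
import math

MAX_JUMP_M = 5.0  # assumed threshold; tune for platform speed and update rate

def accept_position(prev_xyz, new_xyz, max_jump=MAX_JUMP_M):
    """Return True if the new SLAM position is within max_jump of the previous one."""
    if prev_xyz is None:              # first observation, nothing to compare against
        return True
    return math.dist(prev_xyz, new_xyz) <= max_jump

# Usage: drop observations that fail the gate instead of feeding them to goal
# generation, which otherwise chases positions registered "inside" walls.
print(accept_position((0.0, 0.0, 0.0), (1.2, 0.3, 0.0)))   # True
print(accept_position((0.0, 0.0, 0.0), (80.0, 0.0, 0.0)))  # False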

Image: Michigan Tech Research Institute

In open spaces like the rooms in the Urban Circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.

Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel Circuit to achieve that. In the Tunnel Circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban Circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.

The results are in!
The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place!

Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
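A weighted frontier scorer of the kind described might look like the toy sketch below; the feature names and weights are hypothetical stand-ins, not the team’s implementation:

# Toy sketch of weighted frontier selection: score each frontier by a linear
# combination of features whose weights depend on the detected environment type.
# Feature names and weight values here are hypothetical.
ENV_WEIGHTS = {
    "urban": {"distance": -1.0, "opening_size": 0.2, "is_small_passage": 4.0},
    "cave":  {"distance": -1.0, "opening_size": 1.5, "is_small_passage": -1.0},
}

def best_frontier(frontiers, env_type):
    """frontiers: list of dicts with the feature keys above. Returns the best-scoring one."""
    weights = ENV_WEIGHTS[env_type]
    return max(frontiers, key=lambda f: sum(weights[k] * f[k] for k in weights))

candidates = [
    {"id": "hallway", "distance": 4.0, "opening_size": 1.0, "is_small_passage": 1},
    {"id": "atrium",  "distance": 2.0, "opening_size": 6.0, "is_small_passage": 0},
]
print(best_frontier(candidates, "urban")["id"])  # favors the small hallway
print(best_frontier(candidates, "cave")["id"])   # favors the large opening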

Next for our team: Cave Circuit
Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.

As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but only have a battery life of 20 minutes. The UGVs by contrast have 90 minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down risk on making any artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should a UAV take on this role, or is it better to have them explore deeper into the environment and instead report their artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options takes time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.

Image: Michigan Tech Research Institute

Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021.

Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives.

The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond.

Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.

Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems.

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.


#437303 The Deck Is Not Rigged: Poker and the ...

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.

“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.

Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.

While the first program, Claudico, was summarily beaten by human poker players—“one broke-ass robot,” an observer called it—Libratus has triumphed in a series of one-on-one, or heads-up, matches against some of the best online players in the United States.

Libratus relies on three main modules. The first involves a basic blueprint strategy for the whole game, allowing it to reach a much faster equilibrium than its predecessor. It includes an algorithm called Monte Carlo Counterfactual Regret Minimization, which evaluates all future actions to figure out which one would cause the least amount of regret. Regret, of course, is a human emotion. Regret for a computer simply means realizing that an action that wasn’t chosen would have yielded a better outcome than one that was. “Intuitively, regret represents how much the AI regrets having not chosen that action in the past,” says Sandholm. The higher the regret, the higher the chance of choosing that action next time.

It’s a useful way of thinking—but one that is incredibly difficult for the human mind to implement. We are notoriously bad at anticipating our future emotions. How much will we regret doing something? How much will we regret not doing something else? For us, it’s an emotionally laden calculus, and we typically fail to apply it in quite the right way. For a computer, it’s all about the computation of values. What does it regret not doing the most, the thing that would have yielded the highest possible expected value?
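At its core, the update Sandholm describes is regret matching: each action is chosen with probability proportional to its accumulated positive regret. A bare-bones sketch for a single decision point (illustrative only, not Libratus itself):

# Bare-bones regret matching for a single decision point: play each action with
# probability proportional to its accumulated positive regret. Illustrative of
# the idea described above, not Libratus's implementation.

def regret_matching_strategy(cumulative_regret):
    """cumulative_regret: dict action -> summed regret. Returns action -> probability."""
    positive = {a: max(r, 0.0) for a, r in cumulative_regret.items()}
    total = sum(positive.values())
    if total == 0.0:                      # no positive regret yet: play uniformly
        n = len(cumulative_regret)
        return {a: 1.0 / n for a in cumulative_regret}
    return {a: r / total for a, r in positive.items()}

def update_regret(cumulative_regret, action_values, chosen_value):
    """Regret for each action = how much better it would have done than what we got."""
    for a, v in action_values.items():
        cumulative_regret[a] += v - chosen_value

regret = {"fold": 0.0, "call": 0.0, "raise": 0.0}
# Suppose one iteration estimates a counterfactual value for each action:
update_regret(regret, {"fold": 0.0, "call": 1.0, "raise": 3.0}, chosen_value=1.0)
print(regret_matching_strategy(regret))   # "raise" now gets the highest probability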

The second module is a sub-game solver that takes into account the mistakes the opponent has made so far and accounts for every hand she could possibly have. And finally, there is a self-improver. This is the area where data and machine learning come into play. It’s dangerous to try to exploit your opponent—it opens you up to the risk that you’ll get exploited right back, especially if you’re a computer program and your opponent is human. So instead of attempting to do that, the self-improver lets the opponent’s actions inform the areas where the program should focus. “That lets the opponent’s actions tell us where [they] think they’ve found holes in our strategy,” Sandholm explained. This allows the algorithm to develop a blueprint strategy to patch those holes.

It’s a very human-like adaptation, if you think about it. I’m not going to try to outmaneuver you head on. Instead, I’m going to see how you’re trying to outmaneuver me and respond accordingly. Sun-Tzu would surely approve. Watch how you’re perceived, not how you perceive yourself—because in the end, you’re playing against those who are doing the perceiving, and their opinion, right or not, is the only one that matters when you craft your strategy. Overnight, the algorithm patches up its overall approach according to the resulting analysis.

There’s one final thing Libratus is able to do: play in situations with unknown probabilities. There’s a concept in game theory known as the trembling hand: There are branches of the game tree that, under an optimal strategy, one should theoretically never get to; but with some probability, your all-too-human opponent’s hand trembles, they take a wrong action, and you’re suddenly in a totally unmapped part of the game. Before, that would spell disaster for the computer: An unmapped part of the tree means the program no longer knows how to respond. Now, there’s a contingency plan.

Of course, no algorithm is perfect. When Libratus is playing poker, it’s essentially working in a zero-sum environment. It wins, the opponent loses. The opponent wins, it loses. But while some real-life interactions really are zero-sum—cyber warfare comes to mind—many others are not nearly as straightforward: My win does not necessarily mean your loss. The pie is not fixed, and our interactions may be more positive-sum than not.

What’s more, real-life applications have to contend with something that a poker algorithm does not: the weights that are assigned to different elements of a decision. In poker, this is a simple value-maximizing process. But what is value in the human realm? Sandholm had to contend with this before, when he helped craft the world’s first kidney exchange. Do you want to be more efficient, giving the maximum number of kidneys as quickly as possible—or more fair, which may come at a cost to efficiency? Do you want as many lives as possible saved—or do some take priority at the cost of reaching more? Is there a preference for the length of the wait until a transplant? Do kids get preference? And on and on. It’s essential, Sandholm says, to separate means and the ends. To figure out the ends, a human has to decide what the goal is.

“The world will ultimately become a lot safer with the help of algorithms like Libratus,” Sandholm told me. I wasn’t sure what he meant. The last thing that most people would do is call poker, with its competition, its winners and losers, its quest to gain the maximum edge over your opponent, a haven of safety.

“Logic is good, and the AI is much better at strategic reasoning than humans can ever be,” he explained. “It’s taking out irrationality, emotionality. And it’s fairer. If you have an AI on your side, it can lift non-experts to the level of experts. Naïve negotiators will suddenly have a better weapon. We can start to close off the digital divide.”

It was an optimistic note to end on—a zero-sum, competitive game yielding a more ultimately fair and rational world.

I wanted to learn more, to see if it was really possible that mathematics and algorithms could ultimately be the future of more human, more psychological interactions. And so, later that day, I accompanied Nick Nystrom, the chief scientist of the Pittsburgh Supercomputing Center—the place that runs all of Sandholm’s poker-AI programs—to the actual processing center that makes undertakings like Libratus possible.

A half-hour drive found us in a parking lot by a large glass building. I’d expected something more futuristic, not the same square, corporate glass boxes I’ve seen countless times before. The inside, however, was more promising. First the security checkpoint. Then the ride in the elevator — down, not up, to roughly three stories below ground, where we found ourselves in a maze of corridors with card readers at every juncture to make sure you don’t slip through undetected. A red-lit panel formed the final barrier, leading to a small sliver of space between two sets of doors. I could hear a loud hum coming from the far side.

“Let me tell you what you’re going to see before we walk in,” Nystrom told me. “Once we get inside, it will be too loud to hear.”

I was about to witness the heart of the supercomputing center: 27 large containers, in neat rows, each housing multiple processors with speeds and abilities too great for my mind to wrap around. Inside, the temperature is by turns arctic and tropic, so-called “cold” rows alternating with “hot”—fans operate around the clock to cool the processors as they churn through millions of giga, mega, tera, peta and other ever-increasing scales of data bytes. In the cool rows, robotic-looking lights blink green and blue in orderly progression. In the hot rows, a jumble of multicolored wires crisscrosses in tangled skeins.

In the corners stood machines that had outlived their heyday. There was Sherlock, an old Cray model, that warmed my heart. There was a sad nameless computer, whose anonymity was partially compensated for by the Warhol soup cans adorning its cage (an homage to Warhol’s Pittsburghian origins).

And where does Libratus live, I asked? Which of these computers is Bridges, the computer that runs the AI Sandholm and I had been discussing?

Bridges, it turned out, isn’t a single computer. It’s a system with processing power beyond comprehension. It takes over two and a half petabytes to run Libratus. A single petabyte is a million gigabytes: You could watch over 13 years of HD video, store 10 billion photos, catalog the contents of the entire Library of Congress word for word. That’s a whole lot of computing power. And that’s only to succeed at heads-up poker, in limited circumstances.
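Those equivalences are easy to sanity-check. Assuming roughly 8 gigabytes per hour of HD video (my assumption, not a figure from the article), a single petabyte works out to about 14 years of continuous viewing:

# Quick sanity check of the petabyte comparison above. The HD bitrate is an
# assumption (roughly 8 GB per hour, about 18 Mbps), not a figure from the text.
PETABYTE_GB = 1_000_000          # 1 PB = one million gigabytes
GB_PER_HOUR_HD = 8               # assumed high-quality HD bitrate

hours = PETABYTE_GB / GB_PER_HOUR_HD
years = hours / (24 * 365)
print(f"{hours:,.0f} hours = about {years:.1f} years of continuous HD video")
# -> 125,000 hours, about 14.3 years, consistent with "over 13 years" per petabyte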

Yet despite the breathtaking computing power at its disposal, Libratus is still severely limited. Yes, it beat its opponents where Claudico failed. But the poker professionals weren’t allowed to use many of the tools of their trade, including the opponent analysis software that they depend on in actual online games. And humans tire. Libratus can churn for a two-week marathon, where the human mind falters.

But there’s still much it can’t do: play more opponents, play live, or win every time. There’s more humanity in poker than Libratus has yet conquered. “There’s this belief that it’s all about statistics and correlations. And we actually don’t believe that,” Nystrom explained as we left Bridges behind. “Once in a while correlations are good, but in general, they can also be really misleading.”

Two years later, the Sandholm lab will produce Pluribus. Pluribus will be able to play against five players—and will run on a single computer. Much of the human edge will have evaporated in a short, very short time. The algorithms have improved, as have the computers. AI, it seems, has gained by leaps and bounds.

So does that mean that, ultimately, the algorithmic can indeed beat out the human, that computation can untangle the web of human interaction by discerning “the little tactics of deception, of asking yourself what is the other man going to think I mean to do,” as von Neumann put it?

Long before I’d spoken to Sandholm, I’d met Kevin Slavin, a polymath of sorts whose past careers have included founding a game design company and an interactive art space and launching the Playful Systems group at MIT’s Media Lab. Slavin has a decidedly different view from the creators of Pluribus. “On the one hand, [von Neumann] was a genius,” Slavin reflects. “But the presumptuousness of it.”

Slavin is firmly on the side of the gambler, who recognizes uncertainty for what it is and thus is able to take calculated risks when necessary, all the while tempering confidence in the outcome. The most you can do is put yourself in the path of luck—but to think you can guess with certainty the actual outcome is a presumptuousness the true poker player forgoes. For Slavin, the wonder of computers is “That they can generate this fabulous, complex randomness.” His opinion of the algorithmic assaults on chance? “This is their moment,” he said. “But it’s the exact opposite of what’s really beautiful about a computer, which is that it can do something that’s actually unpredictable. That, to me, is the magic.”

Will they actually succeed in making the unpredictable predictable, though? That’s what I want to know. Because everything I’ve seen tells me that absolute success is impossible. The deck is not rigged.

“It’s an unbelievable amount of work to get there. What do you get at the end? Let’s say they’re successful. Then we live in a world where there’s no God, agency, or luck,” Slavin responded.

“I don’t want to live there,” he added. “I just don’t want to live there.”

Luckily, it seems that for now, he won’t have to. There are more things in life than are yet written in the algorithms. We have no reliable lie detection software—whether in the face, the skin, or the brain. In a recent test of bluffing in poker, computer face recognition failed miserably. We can get at discomfort, but we can’t get at the reasons for that discomfort: lying, fatigue, stress—they all look much the same. And humans, of course, can also mimic stress where none exists, complicating the picture even further.

Pluribus may turn out to be powerful, but von Neumann’s challenge still stands: The true nature of games, the most human of the human, remains to be conquered.

This article was originally published on Undark. Read the original article.

Image Credit: José Pablo Iglesias / Unsplash
