Tag Archives: cyber
#439010 Video Friday: Nanotube-Powered Insect ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.
Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumble bee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.
[ MIT ]
National Robotics Week is April 3-11, 2021!
[ NRW ]
This is in a motion capture environment, but still, super impressive!
[ Paper ]
Thanks Fan!
Why wait for Boston Dynamics to add an arm to your Spot if you can just do it yourself?
[ ETHZ ]
This video shows the deep-sea free swimming of a soft robot in the South China Sea. The soft robot was grasped by a robotic arm on the ‘HAIMA’ ROV and carried to the bottom of the South China Sea (depth of 3,224 m). After release, the soft robot was actuated with an on-board AC voltage of 8 kV at 1 Hz and demonstrated free swimming locomotion with its flapping fins.
Um, did they bring it back?
[ Nature ]
Quadruped Yuki Mini is a 12-DOF robot equipped with a Raspberry Pi that runs ROS. Also, BUNNIES!
[ Lingkang Zhang ]
Thanks Lingkang!
Deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. The vswarm package enables decentralized vision-based control of drone swarms without relying on inter-agent communication or visual fiducial markers. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions.
[ Vswarm ]
A conventionally adopted method for operating a waiter robot is based on static position control, where pre-defined goal positions are marked on a map. However, this solution is not optimal in a dynamic setting, such as a coffee shop or an outdoor catering event, because the customers often change their positions. We explore an alternative human-robot interface design where a human operator communicates the identity of the customer to the robot instead. Inspired by how [a] human communicates, we propose a framework for communicating a visual goal to the robot through interactive two-way communications.
[ Paper ]
Thanks Poramate!
In this video, LOLA reacts to undetected ground height changes, including a drop and leg-in-hole experiment. Further tests show the robustness to vertical disturbances using a seesaw. The robot is technically blind, not using any camera-based or prior information on the terrain.
[ TUM ]
RaiSim is a cross-platform multi-body physics engine for robotics and AI. It fully supports Linux, Mac OS, and Windows.
[ RaiSim ]
Thanks Fan!
The next generation of LoCoBot is here. The LoCoBot is a ROS research rover for mapping, navigation, and (optional) manipulation that enables researchers, educators, and students alike to focus on high-level code development instead of hardware and lower-level code. Development on the LoCoBot is simplified with open source software, full ROS mapping and navigation packages, and a modular open source Python API that allows users to move the platform, as well as the (optional) manipulator, in as few as 10 lines of code.
[ Trossen ]
MIT Media Lab Research Specialist Dr. Kate Darling looks at how robots are portrayed in popular film and TV shows.
Kate's book, The New Breed: What Our History with Animals Reveals about Our Future with Robots, can be pre-ordered now and comes out next month.
[ Kate Darling ]
The current autonomous mobility systems for planetary exploration are wheeled rovers, limited to flat, gently-sloping terrains and agglomerate regolith. These vehicles cannot tolerate instability and operate within a low-risk envelope (i.e., low-incline driving to avoid toppling). Here, we present ‘Mars Dogs’ (MD), four-legged robotic dogs, the next evolution of extreme planetary exploration.
[ Team CoSTAR ]
In 2020, first-year PhD students at the MIT Media Lab were tasked with a special project—to reimagine the Lab and write sci-fi stories about the MIT Media Lab in the year 2050. “But, we are researchers. We don't only write fiction, we also do science! So, we did what scientists do! We used a secret time machine under the MIT dome to go to the year 2050 and see what’s going on there! Luckily, the Media Lab still exists and we met someone…really cool!” Enjoy this interview of Cyber Joe, AI Mentor for MIT Media Lab Students of 2050.
[ MIT ]
In this talk, we will give an overview of the diverse research we do at CSIRO’s Robotics and Autonomous Systems Group and delve into some specific technologies we have developed, including SLAM and legged robotics. We will also give insights into CSIRO’s participation in the current DARPA Subterranean Challenge, where we are deploying a fleet of heterogeneous robots into GPS-denied unknown underground environments.
[ GRASP Seminar ]
Marco Hutter (ETH) and Hae-Won Park (KAIST) talk about “Robotics Inspired by Nature.”
[ Swiss-Korean Science Club ]
Thanks Fan!
In this keynote, Guy Hoffman, Assistant Professor and Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University, discusses “The Social Uncanny of Robotic Companions.”
[ Designerly HRI ]
#438006 Smellicopter Drone Uses Live Moth ...
Research into robotic sensing has, understandably I guess, been very human-centric. Most of us navigate and experience the world visually and in 3D, so robots tend to get covered with things like cameras and lidar. Touch is important to us, as is sound, so robots are getting pretty good with understanding tactile and auditory information, too. Smell, though? In most cases, smell doesn’t convey nearly as much information for us, so while it hasn’t exactly been ignored in robotics, it certainly isn’t the sensing modality of choice in most cases.
Part of the problem with smell sensing is that we just don’t have a good way of doing it, from a technical perspective. This has been a challenge for a long time, and it’s why we bribe or trick animals like dogs, rats, and vultures into being our sensing systems for airborne chemicals. If only they’d do exactly what we wanted them to do all the time, this would be fine, but they don’t, so it’s not.
Until we get better at making chemical sensors, leveraging biology is the best we can do, and what would be ideal would be some sort of robot-animal hybrid cyborg thing. We’ve seen some attempts at remote controlled insects, but as it turns out, you can simplify things if you don’t use the entire insect, but instead just find a way to use its sensing system. Enter the Smellicopter.
There’s honestly not too much to say about the drone itself. It’s an open-source drone project called Crazyflie 2.0, with some additional off-the-shelf sensors for obstacle avoidance and stabilization. The interesting bits are a couple of passive fins that keep the drone pointed into the wind, and then the sensor, called an electroantennogram.
Image: UW
The drone’s sensor, called an electroantennogram, consists of a “single excised antenna” from a Manduca sexta hawkmoth and a custom signal processing circuit.
To make one of these sensors, you just, uh, “harvest” an antenna from a live hawkmoth. Obligingly, the moth antenna is hollow, meaning that you can stick electrodes up it. Whenever the olfactory neurons in the antenna (which is still technically alive even though it’s not attached to the moth anymore) encounter an odor that they’re looking for, they produce an electrical signal that the electrodes pick up. Plug the other ends of the electrodes into a voltage amplifier and filter, run it through an analog-to-digital converter, and you’ve got a chemical sensor that weighs just 1.5 grams and consumes only 2.7 mW of power. It’s significantly more sensitive than a conventional metal-oxide odor sensor, in a much smaller and more efficient form factor, making it ideal for drones.
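In software terms, the downstream half of that chain is just filtering and thresholding a digitized voltage trace. Here’s a minimal sketch of the idea; the sample rate, filter cutoff, and threshold below are invented values for illustration, not the ones used on the Smellicopter:

```python
import numpy as np
from scipy.signal import butter, lfilter

def detect_odor_hits(eag_samples, fs=1000.0, cutoff_hz=10.0, threshold=0.05):
    """Low-pass filter a digitized EAG trace and threshold it.

    eag_samples: 1-D array of amplified antenna voltages from the ADC.
    fs is the sample rate in Hz; cutoff_hz and threshold are illustrative.
    """
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")  # 2nd-order Butterworth
    smoothed = lfilter(b, a, eag_samples)
    return smoothed > threshold  # True wherever an odor response is seen
```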
To localize an odor, the Smellicopter uses a simple bioinspired approach called crosswind casting, which involves moving laterally left and right and then forward when an odor is detected. Here’s how it works:
The vehicle takes off to a height of 40 cm and then hovers for ten seconds to allow it time to orient upwind. The smellicopter starts casting left and right crosswind. When a volatile chemical is detected, the smellicopter will surge 25 cm upwind, and then resume casting. As long as the wind direction is fairly consistent, this strategy will bring the insect or robot increasingly closer to a singular source with each surge.
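The behavior is simple enough to capture in a few lines of code. Here’s a toy sketch of the cast-and-surge loop, assuming a hypothetical drone interface with move_left/move_right/move_upwind commands and an odor_detected() callback; none of these names come from the actual Smellicopter software:

```python
CAST_STEP_M = 0.25   # crosswind zigzag step (illustrative value)
SURGE_STEP_M = 0.25  # 25 cm upwind surge, per the description above

def cast_and_surge(drone, odor_detected, steps=100):
    """Toy cast-and-surge loop for odor source localization."""
    direction = 1  # 1 = cast right, -1 = cast left
    for _ in range(steps):
        if odor_detected():
            drone.move_upwind(SURGE_STEP_M)  # odor hit: surge toward the source
        else:
            # no odor: cast crosswind, alternating sides each step
            if direction > 0:
                drone.move_right(CAST_STEP_M)
            else:
                drone.move_left(CAST_STEP_M)
            direction = -direction
```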
Since odors are airborne, they need a bit of a breeze to spread very far, and the Smellicopter won’t be able to detect them unless it’s downwind of the source. But that’s just how odors work: even if you’re right next to the source, if the wind is blowing from you towards the source rather than the other way around, you might not catch a whiff of it.
There are a few other constraints to keep in mind with this sensor as well. First, rather than detecting something useful (like explosives), it’s going to detect the smells of pretty flowers, because moths like pretty flowers. Second, the antenna will literally go dead on you within a couple hours, since it only functions while its tissues are alive and metaphorically kicking. Interestingly, it may be possible to use CRISPR-based genetic modification to breed moths with antennae that do respond to useful smells, which would be a neat trick, and we asked the researchers—Melanie Anderson, a doctoral student of mechanical engineering at the University of Washington, in Seattle; Thomas Daniel, a UW professor of biology; and Sawyer Fuller, a UW assistant professor of mechanical engineering—about this, along with some other burning questions, via email.
IEEE Spectrum, asking the important questions first: So who came up with “Smellicopter”?
Melanie Anderson: Tom Daniel coined the term “Smellicopter.” Another runner-up was “OdorRotor”!
In general, how much better are moths at odor localization than robots?
Melanie Anderson: Moths are excellent at odor detection and odor localization and need to be in order to find mates and food. Their antennae are much more sensitive and specialized than any portable man-made odor sensor. We can't ask the moths how exactly they search for odors so well, but being able to have the odor sensitivity of a moth on a flying platform is a big step in that direction.
Tom Daniel: Our best estimate is that they outperform robotic sensing by at least three orders of magnitude.
How does the localization behavior of the Smellicopter compare to that of a real moth?
Anderson: The cast-and-surge odor search strategy is a simplified version of what we believe the moth (and many other odor searching animals) are doing. It is a reactive strategy that relies on the knowledge that if you detect odor, you can assume that the source is somewhere up-wind of you. When you detect odor, you simply move upwind, and when you lose the odor signal you cast in a cross-wind direction until you regain the signal.
Can you elaborate on the potential for CRISPR to be able to engineer moths for the detection of specific chemicals?
Anderson: CRISPR is already currently being used to modify the odor detection pathways in moth species. It is one of our future efforts to specifically use this to make the antennae sensitive to other chemicals of interest, such as the chemical scent of explosives.
Sawyer Fuller: We think that one of the strengths of using a moth's antenna, in addition to its speed, is that it may provide a path to both high chemical specificity as well as high sensitivity. By expressing a preponderance of only one or a few chemosensors, we are anticipating that a moth antenna will give a strong response only to that chemical. There are several efforts underway in other research groups to make such specific, sensitive chemical detectors. Chemical sensing is an area where biology exceeds man-made systems in terms of efficiency, small size, and sensitivity. So that's why we think that the approach of trying to leverage biological machinery that already exists has some merit.
You mention that the antennae’s lifespan can be extended for a few days with ice—how feasible do you think this technology is outside of a research context?
Anderson: The antennae can be stored in tiny vials in a standard refrigerator or just with an ice pack to extend their life to about a week. Additionally, the process for attaching the antenna to the electrical circuit is a teachable skill. It is definitely feasible outside of a research context.
Considering the trajectory that sensor development is on, how long do you think that this biological sensor system will outperform conventional alternatives?
Anderson: It's hard to speak toward what will happen in the future, but currently, the moth antenna still stands out among any commercially-available portable sensors.
There have been some experiments with cybernetic insects; what are the advantages and disadvantages of your approach, as opposed to (say) putting some sort of tracking system on a live moth?
Daniel: I was part of a cyber insect team a number of years ago. The challenge of such research is that the animal has natural reactions to attempts to steer or control it.
Anderson: While moths are better at odor tracking than robots currently, the advantage of the drone platform is that we have control over it. We can tell it to constrain the search to a certain area, and return after it finishes searching.
What can you tell us about the health, happiness, and overall welfare of the moths in your experiments?
Anderson: The moths are cold anesthetized before the antennae are removed. They are then frozen so that they can be used for teaching purposes or in other research efforts.
What are you working on next?
Daniel: The four big efforts are (1) CRISPR modification, (2) experiments aimed at improving the longevity of the antennal preparation, (3) improved measurements of antennal electrical responses to odors combined with machine learning to see if we can classify different odors, and (4) flight in outdoor environments.
Fuller: The moth's antenna sensor gives us a new ability to sense with a much shorter latency than was previously possible with similarly-sized sensors (e.g. semiconductor sensors). What exactly a robot agent should do to best take advantage of this is an open question. In particular, I think the speed may help it to zero in on plume sources in complex environments much more quickly. Think of places like indoor settings with flow down hallways that splits out at doorways, and in industrial settings festooned with pipes and equipment. We know that it is possible to search out and find odors in such scenarios, as anybody who has had to contend with an outbreak of fruit flies can attest. It is also known that these animals respond very quickly to sudden changes in odor that is present in such turbulent, patchy plumes. Since it is hard to reduce such plumes to a simple model, we think that machine learning may provide insights into how to best take advantage of the improved temporal plume information we now have available.
Tom Daniel also points out that the relative simplicity of this project (now that the UW researchers have it all figured out, that is) means that even high school students could potentially get involved in it, even if it’s on a ground robot rather than a drone. All the details are in the paper that was just published in Bioinspiration & Biomimetics.
#437910 Virtual Attacks Are Prevalent, Be ...
Virtual attacks have evolved from personal problems to global emergencies, but artificial intelligence and the cloud can protect us. As all parts of the cyber world (personal laptops, business computers, smartphones, digital assistants, TVs, other devices) become more connected, they also make us more vulnerable to cyber-attacks, not only in personal circumstances but in business too. …
#437564 How We Won the DARPA SubT Challenge: ...
This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.
“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.
Team BARCS joins the SubT Virtual Track
The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018; at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has given experts in fields other than robotics the opportunity to apply novel approaches to robotics problems. This is precisely what made the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!
After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.
Building a team of virtual robots
A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!
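As a toy illustration of how that cost budget constrains team design (the configuration names and cost values below are invented, not DARPA’s actual pricing), one could enumerate affordable team compositions like this:

```python
from itertools import combinations_with_replacement

# Hypothetical per-configuration costs; the real numbers come from DARPA's rules.
COSTS = {"ugv_lidar": 3, "ugv_camera": 1, "uav": 2}

def affordable_teams(budget, max_size=6):
    """Enumerate robot-team compositions whose total cost fits the budget."""
    teams = []
    for size in range(1, max_size + 1):
        for combo in combinations_with_replacement(COSTS, size):
            if sum(COSTS[c] for c in combo) <= budget:
                teams.append(combo)
    return teams

print(affordable_teams(budget=4))  # e.g. ('ugv_camera', 'ugv_camera', 'uav'), ...
```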
Image: Michigan Tech Research Institute
The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.
One of the frequent questions we receive about the Virtual Track is whether it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to serve the game narrative and play experience, not to require novel research in AI and autonomy. The purpose of simulation, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.
Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.
Full autonomy for DARPA-hard scenarios
The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain.
The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and our joint search strategy. For example, instead of sharing the full SLAM map of the environment, each agent shares only a simplified graph representation of the space, along with data about frontiers it has not yet explored, and is able to merge its information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without having full knowledge of the detailed 3D map.
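A rough sketch of what this kind of lightweight graph merging could look like is below. The data layout, the 1-meter node-matching radius, and the function name are illustrative assumptions, not Team BARCS’s implementation:

```python
import math

def merge_graphs(g1, g2, radius=1.0):
    """Fuse two simplified topological maps, matching nodes within `radius` meters.

    Each graph is {"nodes": {id: (x, y)}, "edges": {(id, id)}, "frontiers": {(x, y)}}.
    Node ids are assumed to be robot-namespaced (e.g. "r2/7") to avoid collisions.
    """
    merged_nodes = dict(g1["nodes"])
    alias = {}  # maps g2 node ids onto the merged graph's node ids
    for nid, (x, y) in g2["nodes"].items():
        match = next((mid for mid, (mx, my) in merged_nodes.items()
                      if math.hypot(x - mx, y - my) < radius), None)
        if match is None:
            merged_nodes[nid] = (x, y)  # a genuinely new place
            alias[nid] = nid
        else:
            alias[nid] = match          # same place seen by both robots
    edges = set(g1["edges"]) | {(alias[a], alias[b]) for a, b in g2["edges"]}
    frontiers = set(g1["frontiers"]) | set(g2["frontiers"])
    return {"nodes": merged_nodes, "edges": edges, "frontiers": frontiers}
```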
Since the objective of the SubT program is to advance the state-of-the-art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are sufficiently rich to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassible obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we had to not only change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on the ramp.
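One common way to make that distinction is to threshold the local slope of a terrain height grid: a ramp shows up as a gentle gradient, a wall as a near-vertical step. The sketch below illustrates the idea with an invented slope limit; it is not the team’s actual costmap code:

```python
import numpy as np

def slope_costmap(height_grid, cell_m=0.1, max_slope_deg=20.0):
    """Classify grid cells as traversable (True) or lethal (False) by slope.

    height_grid: 2-D array of terrain heights in meters; cell_m is the cell
    size. The 20-degree limit is an illustrative threshold.
    """
    dz_dy, dz_dx = np.gradient(height_grid, cell_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg
```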
In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
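The standard primitive behind this kind of goal generation is frontier detection: free cells that border unknown space become candidate exploration goals, and they recede as the map grows. A minimal occupancy-grid version, with our own cell encoding, might look like this:

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # illustrative cell encoding

def find_frontiers(grid):
    """Return (row, col) cells that are free and adjacent to unknown space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 3x3 neighborhood, clipped at the grid edges
            window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (window == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers
```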
Sending our robots into the virtual unknown
The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The challenges of the worlds used in the Circuit events are also hand-designed, so features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we at least are submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “we did our best to prepare you for this world, little bots. Go do good.”
Image: Michigan Tech Research Institute
The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble.
The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second-place finish, behind Team Coordinated Robotics.
The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes, and to implementing checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream of SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method to check whether the new map position has jumped a long distance from the prior map position, and if so, to throw that data out.
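The check itself can be as simple as a distance gate on consecutive map positions. This sketch uses an invented 5-meter threshold, since the team’s actual value isn’t given:

```python
import math

MAX_JUMP_M = 5.0  # illustrative threshold

def accept_slam_update(prev_xy, new_xy, max_jump=MAX_JUMP_M):
    """Reject observations that teleport across the map, as can happen when
    lagged IMU or clock data registers a scan at a default far-away location."""
    if prev_xy is None:  # first fix: nothing to compare against
        return True
    return math.hypot(new_xy[0] - prev_xy[0], new_xy[1] - prev_xy[1]) <= max_jump
```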
Image: Michigan Tech Research Institute
In open spaces like the rooms in the Urban circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.
Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel circuit to achieve that. In the Tunnel circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.
The results are in!
The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place!
Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
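A weighted frontier-selection scheme of that kind could be as simple as a linear score over frontier features, with the weight set swapped out per detected environment. The feature names and weight values below are purely illustrative, not the team’s planned design:

```python
import math

def score_frontier(frontier, robot_xy, weights):
    """Score a frontier node; the highest-scoring node is visited first."""
    dist = math.hypot(frontier["x"] - robot_xy[0], frontier["y"] - robot_xy[1])
    return (weights["openness"] * frontier["opening_width_m"]
            - weights["distance"] * dist)

# Invented weight sets: Urban favors small openings (hallways), Cave large ones.
URBAN_WEIGHTS = {"openness": -1.0, "distance": 0.2}
CAVE_WEIGHTS = {"openness": 1.0, "distance": 0.2}
```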
Next for our team: Cave Circuit
Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.
As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but they have a battery life of only 20 minutes. The UGVs, by contrast, have a 90-minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down the risk of making no artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should a UAV take on this role, or is it better to have it explore deeper into the environment and instead report its artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options take time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.
Image: Michigan Tech Research Institute
Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021.
Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives.
The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond.
Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.
Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems.
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
#437303 The Deck Is Not Rigged: Poker and the ...
Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.
“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.
Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.
While the first program, Claudico, was summarily beaten by human poker players—“one broke-ass robot,” an observer called it—Libratus has triumphed in a series of one-on-one, or heads-up, matches against some of the best online players in the United States.
Libratus relies on three main modules. The first involves a basic blueprint strategy for the whole game, allowing it to reach a much faster equilibrium than its predecessor. It includes an algorithm called Monte Carlo Counterfactual Regret Minimization, which evaluates all future actions to figure out which one would cause the least amount of regret. Regret, of course, is a human emotion. Regret for a computer simply means realizing that an action that wasn’t chosen would have yielded a better outcome than one that was. “Intuitively, regret represents how much the AI regrets having not chosen that action in the past,” says Sandholm. The higher the regret, the higher the chance of choosing that action next time.
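That update rule, known as regret matching, is the compact core of counterfactual regret minimization: play each action with probability proportional to its accumulated positive regret. This toy sketch (with made-up regret values) shows the general building block, not Libratus’s actual code:

```python
import random

def regret_matching(cumulative_regret):
    """Convert cumulative regrets into a mixed strategy: actions are chosen
    in proportion to their positive regret; with no positive regret anywhere,
    play uniformly at random."""
    positive = {a: max(r, 0.0) for a, r in cumulative_regret.items()}
    total = sum(positive.values())
    if total == 0:
        n = len(cumulative_regret)
        return {a: 1.0 / n for a in cumulative_regret}
    return {a: r / total for a, r in positive.items()}

strategy = regret_matching({"fold": -2.0, "call": 3.0, "raise": 1.0})
action = random.choices(list(strategy), weights=list(strategy.values()))[0]
```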
It’s a useful way of thinking—but one that is incredibly difficult for the human mind to implement. We are notoriously bad at anticipating our future emotions. How much will we regret doing something? How much will we regret not doing something else? For us, it’s an emotionally laden calculus, and we typically fail to apply it in quite the right way. For a computer, it’s all about the computation of values. What does it regret not doing the most, the thing that would have yielded the highest possible expected value?
The second module is a sub-game solver that takes into account the mistakes the opponent has made so far and accounts for every hand she could possibly have. And finally, there is a self-improver. This is the area where data and machine learning come into play. It’s dangerous to try to exploit your opponent—it opens you up to the risk that you’ll get exploited right back, especially if you’re a computer program and your opponent is human. So instead of attempting to do that, the self-improver lets the opponent’s actions inform the areas where the program should focus. “That lets the opponent’s actions tell us where [they] think they’ve found holes in our strategy,” Sandholm explained. This allows the algorithm to develop a blueprint strategy to patch those holes.
It’s a very human-like adaptation, if you think about it. I’m not going to try to outmaneuver you head on. Instead, I’m going to see how you’re trying to outmaneuver me and respond accordingly. Sun-Tzu would surely approve. Watch how you’re perceived, not how you perceive yourself—because in the end, you’re playing against those who are doing the perceiving, and their opinion, right or not, is the only one that matters when you craft your strategy. Overnight, the algorithm patches up its overall approach according to the resulting analysis.
There’s one final thing Libratus is able to do: play in situations with unknown probabilities. There’s a concept in game theory known as the trembling hand: There are branches of the game tree that, under an optimal strategy, one should theoretically never get to; but with some probability, your all-too-human opponent’s hand trembles, they take a wrong action, and you’re suddenly in a totally unmapped part of the game. Before, that would spell disaster for the computer: An unmapped part of the tree means the program no longer knows how to respond. Now, there’s a contingency plan.
Of course, no algorithm is perfect. When Libratus is playing poker, it’s essentially working in a zero-sum environment. It wins, the opponent loses. The opponent wins, it loses. But while some real-life interactions really are zero-sum—cyber warfare comes to mind—many others are not nearly as straightforward: My win does not necessarily mean your loss. The pie is not fixed, and our interactions may be more positive-sum than not.
What’s more, real-life applications have to contend with something that a poker algorithm does not: the weights that are assigned to different elements of a decision. In poker, this is a simple value-maximizing process. But what is value in the human realm? Sandholm had to contend with this before, when he helped craft the world’s first kidney exchange. Do you want to be more efficient, giving the maximum number of kidneys as quickly as possible—or more fair, which may come at a cost to efficiency? Do you want as many lives as possible saved—or do some take priority at the cost of reaching more? Is there a preference for the length of the wait until a transplant? Do kids get preference? And on and on. It’s essential, Sandholm says, to separate means and the ends. To figure out the ends, a human has to decide what the goal is.
“The world will ultimately become a lot safer with the help of algorithms like Libratus,” Sandholm told me. I wasn’t sure what he meant. The last thing that most people would do is call poker, with its competition, its winners and losers, its quest to gain the maximum edge over your opponent, a haven of safety.
“Logic is good, and the AI is much better at strategic reasoning than humans can ever be,” he explained. “It’s taking out irrationality, emotionality. And it’s fairer. If you have an AI on your side, it can lift non-experts to the level of experts. Naïve negotiators will suddenly have a better weapon. We can start to close off the digital divide.”
It was an optimistic note to end on—a zero-sum, competitive game yielding a more ultimately fair and rational world.
I wanted to learn more, to see if it was really possible that mathematics and algorithms could ultimately be the future of more human, more psychological interactions. And so, later that day, I accompanied Nick Nystrom, the chief scientist of the Pittsburgh Supercomputing Center—the place that runs all of Sandholm’s poker-AI programs—to the actual processing center that makes undertakings like Libratus possible.
A half-hour drive found us in a parking lot by a large glass building. I’d expected something more futuristic, not another of the square, corporate glass buildings I’ve seen countless times before. The inside, however, was more promising. First the security checkpoint. Then the ride in the elevator—down, not up—to roughly three stories below ground, where we found ourselves in a maze of corridors with card readers at every juncture to make sure you don’t slip through undetected. A red-lit panel formed the final barrier, leading to a small sliver of space between two sets of doors. I could hear a loud hum coming from the far side.
“Let me tell you what you’re going to see before we walk in,” Nystrom told me. “Once we get inside, it will be too loud to hear.”
I was about to witness the heart of the supercomputing center: 27 large containers, in neat rows, each housing multiple processors with speeds and abilities too great for my mind to wrap around. Inside, the temperature is by turns arctic and tropic, so-called “cold” rows alternating with “hot”—fans operate around the clock to cool the processors as they churn through giga-, tera-, peta-, and other ever-increasing scales of bytes. In the cool rows, robotic-looking lights blink green and blue in orderly progression. In the hot rows, a jumble of multicolored wires crisscrosses in tangled skeins.
In the corners stood machines that had outlived their heyday. There was Sherlock, an old Cray model, that warmed my heart. There was a sad nameless computer, whose anonymity was partially compensated for by the Warhol soup cans adorning its cage (an homage to Warhol’s Pittsburghian origins).
And where does Libratus live, I asked? Which of these computers is Bridges, the computer that runs the AI Sandholm and I had been discussing?
Bridges, it turned out, isn’t a single computer. It’s a system with processing power beyond comprehension. It takes over two and a half petabytes to run Libratus. A single petabyte is a million gigabytes: You could watch over 13 years of HD video, store 10 billion photos, catalog the contents of the entire Library of Congress word for word. That’s a whole lot of computing power. And that’s only to succeed at heads-up poker, in limited circumstances.
Yet despite the breathtaking computing power at its disposal, Libratus is still severely limited. Yes, it beat its opponents where Claudico failed. But the poker professionals weren’t allowed to use many of the tools of their trade, including the opponent analysis software that they depend on in actual online games. And humans tire. Libratus can churn for a two-week marathon, where the human mind falters.
But there’s still much it can’t do: play more opponents, play live, or win every time. There’s more humanity in poker than Libratus has yet conquered. “There’s this belief that it’s all about statistics and correlations. And we actually don’t believe that,” Nystrom explained as we left Bridges behind. “Once in a while correlations are good, but in general, they can also be really misleading.”
Two years later, the Sandholm lab will produce Pluribus. Pluribus will be able to play against five players—and will run on a single computer. Much of the human edge will have evaporated in a short, very short time. The algorithms have improved, as have the computers. AI, it seems, has gained by leaps and bounds.
So does that mean that, ultimately, the algorithmic can indeed beat out the human, that computation can untangle the web of human interaction by discerning “the little tactics of deception, of asking yourself what is the other man going to think I mean to do,” as von Neumann put it?
Long before I’d spoken to Sandholm, I’d met Kevin Slavin, a polymath of sorts whose past careers have included founding a game design company and an interactive art space and launching the Playful Systems group at MIT’s Media Lab. Slavin has a decidedly different view from the creators of Pluribus. “On the one hand, [von Neumann] was a genius,” Slavin reflects. “But the presumptuousness of it.”
Slavin is firmly on the side of the gambler, who recognizes uncertainty for what it is and thus is able to take calculated risks when necessary, all the while tempering confidence in the outcome. The most you can do is put yourself in the path of luck—but to think you can guess with certainty the actual outcome is a presumptuousness the true poker player forgoes. For Slavin, the wonder of computers is “that they can generate this fabulous, complex randomness.” His opinion of the algorithmic assaults on chance? “This is their moment,” he said. “But it’s the exact opposite of what’s really beautiful about a computer, which is that it can do something that’s actually unpredictable. That, to me, is the magic.”
Will they actually succeed in making the unpredictable predictable, though? That’s what I want to know. Because everything I’ve seen tells me that absolute success is impossible. The deck is not rigged.
“It’s an unbelievable amount of work to get there. What do you get at the end? Let’s say they’re successful. Then we live in a world where there’s no God, agency, or luck,” Slavin responded.
“I don’t want to live there,” he added. “I just don’t want to live there.”
Luckily, it seems that for now, he won’t have to. There are more things in life than are yet written in the algorithms. We have no reliable lie detection software—whether in the face, the skin, or the brain. In a recent test of bluffing in poker, computer face recognition failed miserably. We can get at discomfort, but we can’t get at the reasons for that discomfort: lying, fatigue, stress—they all look much the same. And humans, of course, can also mimic stress where none exists, complicating the picture even further.
Pluribus may turn out to be powerful, but von Neumann’s challenge still stands: The true nature of games, the most human of the human, remains to be conquered.
This article was originally published on Undark. Read the original article.
Image Credit: José Pablo Iglesias / Unsplash