Tag Archives: mit

#436155 This MIT Robot Wants to Use Your ...

MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.

The robot is called Little HERMES, and it’s currently just a pair of little legs, about a third the size of an average adult. It can step and jump in place or walk a short distance while supported by a gantry. While that in itself is not very impressive, the researchers say their approach could help bring capable disaster robots closer to reality. They explain that, despite recent advances, building fully autonomous robots with motor and decision-making skills comparable to those of humans remains a challenge. That’s where a more advanced teleoperation system could help.

The researchers, João Ramos, now an assistant professor at the University of Illinois at Urbana-Champaign, and Sangbae Kim, director of MIT’s Biomimetic Robotics Lab, describe the project in this week’s issue of Science Robotics. In the paper, they argue that existing teleoperation systems often can’t effectively match the operator’s motions to those of the robot. In addition, conventional systems provide no physical feedback to the human teleoperator about what the robot is doing. Their new approach addresses these two limitations, and to see how it would work in practice, they built Little HERMES.

Image: Science Robotics

The main components of MIT’s bipedal robot Little HERMES: (A) Custom actuators designed to withstand impact and produce high torque. (B) Lightweight limbs with low inertia and fast leg swing. (C) Impact-robust and lightweight foot sensors with a three-axis contact force sensor. (D) Ruggedized IMU to estimate the robot’s torso posture, angular rate, and linear acceleration. (E) Real-time computer sbRIO 9606 from National Instruments for robot control. (F) Two three-cell lithium-polymer batteries in series. (G) Rigid and lightweight frame to minimize the robot’s mass.

Earlier this year, the MIT researchers wrote an in-depth article for IEEE Spectrum about the project, which covers both Little HERMES and its big brother, HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System). In that article, they describe the two main components of the system:

[…] We are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.

You could say we’re putting a human brain inside the machine.

Image: Science Robotics

The human-machine interface built by the MIT researchers for controlling Little HERMES differs from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. The researchers call it the balance-feedback interface, or BFI. The main modules of the BFI include: (A) Custom interface attachments for the torso and feet, designed to capture human motion data at high speed (1 kHz). (B) Two underactuated modules to track the position and orientation of the torso and apply forces to the operator. (C) Each actuation module has three DoFs, one of which is a push/pull rod actuated by a DC brushless motor. (D) A series of linkages with passive joints, connected to the operator’s feet, that track their spatial translation. (E) Real-time controller cRIO 9082 from National Instruments to close the BFI control loop. (F) Force plate to estimate the operator’s center of pressure and measure the shear and normal components of the operator’s net contact force.
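
The caption mentions estimating the operator’s center of pressure from that force plate, and the arithmetic behind it is standard force-plate math rather than anything from the paper itself. Here’s a minimal sketch of our own, assuming the plate reports its net force and moment about the plate origin:

```python
def center_of_pressure(force, moment):
    """Estimate the center of pressure (CoP) on a force plate.

    force  = (Fx, Fy, Fz): net contact force in N, in the plate frame
    moment = (Mx, My, Mz): net moment in N*m about the plate origin

    With the plate surface at z = 0, the standard relations are:
        CoP_x = -My / Fz,    CoP_y = Mx / Fz
    """
    _, _, Fz = force
    Mx, My, _ = moment
    if abs(Fz) < 1e-6:        # no meaningful vertical load: no contact
        return None
    return (-My / Fz, Mx / Fz)

# An operator leaning slightly forward (+x) and to the left (+y):
print(center_of_pressure((5.0, -3.0, 700.0), (14.0, -35.0, 0.2)))
# -> (0.05, 0.02): CoP 5 cm forward and 2 cm left of the plate center
```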

Here’s more footage of the experiments, showing Little HERMES stepping and jumping in place, walking a few steps forward and backward, and balancing. Watch until the end to see a compilation of unsuccessful stepping experiments. Poor Little HERMES!

In the new Science Robotics paper, the MIT researchers explain how they solved one of the key challenges in making their teleoperation system effective:

The challenge of this strategy lies in properly mapping human body motion to the machine while simultaneously informing the operator how closely the robot is reproducing the movement. Therefore, we propose a solution for this bilateral feedback policy to control a bipedal robot to take steps, jump, and walk in synchrony with a human operator. Such dynamic synchronization was achieved by (i) scaling the core components of human locomotion data to robot proportions in real time and (ii) applying feedback forces to the operator that are proportional to the relative velocity between human and robot.
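
To make those two ingredients concrete, here’s a minimal sketch of our own (not the authors’ code): the one-third scale factor roughly matches Little HERMES’ proportions, while the feedback gain K_F is purely illustrative.

```python
# Illustrative numbers only: SCALE roughly matches Little HERMES'
# one-third human proportions; the gain K_F is a made-up placeholder.
SCALE = 1.0 / 3.0   # human-to-robot length scale
K_F = 80.0          # operator feedback gain, N*s/m (hypothetical)

def scale_to_robot(human_pos, human_vel):
    """(i) Scale core human locomotion data to robot proportions."""
    return SCALE * human_pos, SCALE * human_vel

def feedback_force(human_vel, robot_vel):
    """(ii) Force on the operator proportional to the human-robot
    relative velocity (robot velocity mapped back to human scale),
    so the operator literally feels the robot lag or overshoot."""
    return -K_F * (human_vel - robot_vel / SCALE)

# One control tick: the operator's torso moves forward at 0.30 m/s,
# but the robot only manages the scaled equivalent of 0.20 m/s.
print(scale_to_robot(0.15, 0.30))           # robot setpoints: (0.05, 0.1)
print(feedback_force(0.30, 0.20 * SCALE))   # -8.0 N: a backward tug
```

The sign convention is the point: a lagging robot pulls the operator backward, triggering exactly the stabilizing reflex the quote above describes, which the BFI then captures and sends back to the robot.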

Little HERMES is now taking its first steps, quite literally, but the researchers say they hope to use robotic legs with a similar design as part of a more advanced humanoid. One possibility they’ve envisioned is a fast-moving quadruped robot that could run through various kinds of terrain and then transform into a bipedal robot that would use its hands to perform dexterous manipulations. This could involve merging some of the robots the MIT researchers have built in their lab, possibly creating hybrids between Cheetah and HERMES, or Mini Cheetah and Little HERMES. We can’t wait to see what the resulting robots will look like.

[ Science Robotics ]


#436123 A Path Towards Reasonable Autonomous ...

Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.

Autonomous Weapon Systems: A Roadmapping Exercise
Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity’s relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm. Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapon systems [1]. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.

The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.

International debate thus far has predominantly centered around whether or not states should adopt a preemptive, legally binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it, if states were to adopt it. Other authors of this document have argued that an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm. Options for international action are not binary, however, and there is a range of policy options that states should consider between adopting a comprehensive treaty and doing nothing.

The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have significant beneficial impact, then what elements could it contain? The exercise whose results are presented below was not to identify the recommendations that each author would prefer individually (the authors hold a broad spectrum of views), but instead to identify those components of a roadmap that the authors are all willing to entertain [2]. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons [3].

Summary of Issues Surrounding Autonomous Weapons

There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:

The potential for beneficial uses of AI and autonomy that could improve precision and reliability in the use of force and reduce non-combatant harm.
Uncertainty about the path of future technology and the likelihood of autonomous weapons being used in compliance with the laws of war, or international humanitarian law (IHL), in different settings and on various timelines.
A desire for some degree of human involvement in the use of force. This has been expressed repeatedly, in different ways, in UN discussions on lethal autonomous weapon systems.
Particular risks surrounding lethal autonomous weapons specifically targeting personnel as opposed to vehicles or materiel.
Risks regarding international stability.
Risk of proliferation to terrorists, criminals, or rogue states.
Risk that autonomous systems that have been verified to be acceptable can be made unacceptable through software changes.
The potential for autonomous weapons to be used as scalable weapons enabling a small number of individuals to inflict very large-scale casualties at low cost, either intentionally or accidentally.

Summary of Components

A time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems [4]. Such a moratorium could include exceptions for certain classes of weapons.
Define guiding principles for human involvement in the use of force.
Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states.
Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons.

Component 1:

States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems. Anti-personnel lethal autonomous weapon systems are defined as weapons systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:

Fixed-point defensive systems with human supervisory control to defend human-occupied bases or installations
Limited, proportional, automated counter-fire systems that return fire in order to provide immediate, local defense of humans
Time-limited pursuit deterrent munitions or systems
Autonomous weapon systems above a specified explosive weight limit that target hand-held weapons, such as rifles, machine guns, anti-tank weapons, or man-portable air defense systems, provided there is adequate protection for non-combatants and IHL compliance is ensured [5]

The moratorium would not apply to:

Anti-vehicle or anti-materiel weapons
Non-lethal anti-personnel weapons
Research on ways of improving autonomous weapon technology to reduce non-combatant harm in future anti-personnel lethal autonomous weapon systems
Weapons that find, track, and engage specific individuals whom a human has decided should be engaged within a limited predetermined period of time and geographic region

Motivation:

This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:

ensure that, prior to deployment, anti-personnel lethal autonomous weapons can be used in ways that match or outperform humans in their compliance with IHL (other conditions may also need to be met before deployment is acceptable);
lay the groundwork for a potentially legally binding diplomatic instrument; and
decrease the geopolitical pressure on countries to deploy anti-personnel lethal autonomous weapons before they are reliable and well-understood.

Compliance Verification:

As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:

Developing an industry cooperation regime analogous to that mandated under the Chemical Weapons Convention, whereby manufacturers must know their customers and report suspicious purchases of significant quantities of items such as fixed-wing drones, quadcopters, and other weaponizable robots.
Encouraging states to declare inventories of autonomous weapons for the purposes of transparency and confidence-building.
Facilitating scientific exchanges and military-to-military contacts to increase trust, transparency, and mutual understanding on topics such as compliance verification and safe operation of autonomous systems.
Designing control systems to require operator identity authentication and unalterable records of operation, enabling post-hoc compliance checks when there is plausible evidence of non-compliant autonomous weapon attacks (a toy sketch of such a record appears after this list).
Relating the quantity of weapons to corresponding capacities for human-in-the-loop operation of those weapons.
Designing weapons with air-gapped firing authorization circuits that are connected to the remote human operator but not to the on-board automated control system.
More generally, avoiding weapon designs that enable conversion from compliant to non-compliant categories or missions solely by software updates.
Designing weapons with formal proofs of relevant properties—e.g., the property that the weapon is unable to initiate an attack without human authorization. Proofs can, in principle, be provided using cryptographic techniques that allow the proofs to be checked by a third party without revealing any details of the underlying software.
Facilitating access to (non-classified) AI resources (software, data, methods for ensuring safe operation) for all states that remain in compliance and participate in transparency activities.
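
To illustrate what “operator identity authentication and unalterable records of operation” could mean in software, here’s a toy sketch of ours (not from the paper): an append-only, hash-chained log in which each entry is authenticated with a per-operator key. A real system would use hardware-backed keys and signatures that a third party can verify.

```python
import hashlib, hmac, json, time

# Hypothetical per-operator secret issued by the controlling authority.
OPERATOR_KEY = b"example-key-for-operator-X"

def append_entry(log, action):
    """Append an authenticated entry that commits to the chain head."""
    entry = {
        "time": time.time(),
        "action": action,
        "prev": log[-1]["digest"] if log else "genesis",
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    # Keyed MAC = operator authentication + tamper evidence.
    entry["digest"] = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    log.append(entry)

def verify_chain(log):
    """Recompute every MAC; an edited or reordered entry breaks the chain."""
    prev = "genesis"
    for e in log:
        body = {k: e[k] for k in ("time", "action", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
        if e["prev"] != prev or not hmac.compare_digest(e["digest"], expected):
            return False
        prev = e["digest"]
    return True

log = []
append_entry(log, "operator X authorized engagement of target class Y")
append_entry(log, "mission aborted by operator X")
print(verify_chain(log))   # True; altering any field makes this False
```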

Component 2:

Define and universalize guiding principles for human involvement in the use of force.

Humans, not machines, are legal and moral agents in military operations.
It is a human responsibility to ensure that any attack, including one involving autonomous weapons, complies with the laws of war.
Humans responsible for initiating an attack must have sufficient understanding of the weapons, the targets, the environment and the context for use to determine whether that particular attack is lawful.
The attack must be bounded in space, time, target class, and means of attack in order for the determination about the lawfulness of that attack to be meaningful.
Militaries must invest in training, education, doctrine, policies, system design, and human-machine interfaces to ensure that humans remain responsible for attacks.

Component 3:

Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.

Specific potential measures include:

Developing safe rules for autonomous system behavior when in proximity to adversarial forces to avoid unintentional escalation or signaling. Examples include:

No-first-fire policy, so that autonomous weapons do not initiate hostilities without explicit human authorization.
A human must always be responsible for providing the mission for an autonomous system.
Taking steps to clearly distinguish exercises, patrols, reconnaissance, or other peacetime military operations from attacks in order to limit the possibility of reactions from adversary autonomous systems, such as autonomous air or coastal defenses.

Developing resilient communications links to ensure recallability of autonomous systems. Additionally, militaries should refrain from jamming others’ ability to recall their autonomous systems in order to afford the possibility of human correction in the event of unauthorized behavior.

Component 4:

Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:

Targeted multilateral controls to prevent large-scale sale and transfer of weaponizable robots and related military-specific components for illicit use.
Employing measures to render weaponizable robots less harmful (e.g., geofencing, sketched below; hard-wired kill switches; onboard control systems largely implemented in unalterable, non-reprogrammable hardware such as application-specific integrated circuits).
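
As a toy illustration of the geofencing item above (our sketch, with made-up coordinates; per the same bullet, a real implementation would live in unalterable hardware rather than in mutable software):

```python
from math import asin, cos, radians, sin, sqrt

FENCE_CENTER = (48.8566, 2.3522)   # hypothetical approved zone (lat, lon)
FENCE_RADIUS_M = 500.0             # hypothetical radius in meters

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def arming_permitted(position):
    """Refuse to arm outside the operator-approved geofence."""
    return haversine_m(position, FENCE_CENTER) <= FENCE_RADIUS_M

print(arming_permitted((48.8570, 2.3530)))  # True: ~70 m inside the fence
print(arming_permitted((48.9000, 2.3522)))  # False: ~4.8 km away
```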

Component 5:

Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons, including:

Strategies to promote human moral engagement in decisions about the use of force
Risk assessment for autonomous weapon systems, including the potential for large-scale effects, geopolitical destabilization, accidental escalation, increased instability due to uncertainty about the relative military balance of power, and lowering thresholds to initiating conflict and for violence within conflict
Methodologies for ensuring the reliability and security of autonomous weapon systems
New techniques for verification, validation, explainability, characterization of failure conditions, and behavioral specifications.

About the Authors (in alphabetical order)

Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech.

Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT.

Stuart Russell is a professor of computer science and engineering at UC Berkeley.

Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford.

Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS).

Bart Selman is a professor of computer science at Cornell.

Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.

The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.

[1] Autonomous weapon system (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

[2] There is no implication that some authors would not personally support stronger recommendations.

[3] For ease of use, this working paper will frequently shorten “autonomous weapon system” to “autonomous weapon.” The terms should be treated as synonymous, with the understanding that “weapon” refers to the entire system: sensor, decision-making element, and munition.

[4] Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.

[5] The authors are not unanimous about this item because of concerns about the ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using an explosive weight limit as a mechanism for delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight.


#436114 Video Friday: Transferring Human Motion ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

We are very sad to say that MIT professor emeritus Woodie Flowers has passed away. Flowers will be remembered for (among many other things, like co-founding FIRST) the MIT 2.007 course that he began teaching in the mid-1970s, famous for its student competitions.

These competitions got a bunch of well-deserved publicity over the years; here’s one from 1985:

And the 2.007 competitions are still going strong—this year’s theme was Moonshot, and you can watch a replay of the event here.

[ MIT ]

Looks like Aibo is getting wireless integration with Hitachi appliances, which turns out to be pretty cute:

What is this magical box where you push a button and 60 seconds later fluffy pancakes come out?!

[ Aibo ]

LiftTiles are a “modular and reconfigurable room-scale shape display” that can turn your floor and walls into on-demand structures.

[ LiftTiles ]

Ben Katz, a grad student in MIT’s Biomimetics Robotics Lab, has been working on these beautiful desktop-sized Furuta pendulums:

That’s a crowdfunding project I’d pay way too much for.

[ Ben Katz ]

A clever bit of cable manipulation from MIT, using GelSight tactile sensors.

[ Paper ]

A useful display of industrial autonomy on ANYmal from the Oxford Robotics Institute.

This video is of a demonstration for the ORCA Robotics Hub showing the ANYbotics ANYmal robot carrying out industrial inspection using autonomy software from Oxford Robotics Institute.

[ ORCA Hub ] via [ DRS ]

Thanks Maurice!

Meet Katie Hamilton, a software engineer at NASA’s Ames Research Center, who got into robotics because she wanted to help people with daily life. Katie writes code for robots like Astrobee, which assist astronauts with routine tasks on the International Space Station.

[ NASA Astrobee ]

Transferring human motion to a mobile robotic manipulator and ensuring safe physical human-robot interaction are crucial steps towards automating complex manipulation tasks in human-shared environments. In this work we present a robot whole-body teleoperation framework for human motion transfer. We validate our approach through several experiments using the TIAGo robot, showing this could be an easy way for a non-expert to teach a rough manipulation skill to an assistive robot.

[ Paper ]

This is pretty cool looking for an autonomous boat, but we’ll see if they can build a real one by 2020 since at the moment it’s just an average rendering.

[ ProMare ]

I had no idea that asparagus grows like this. But it sure does make it easy for a robot to harvest.

[ Inaho ]

Skip to 2:30 in this Pepper unboxing video to hear the noise it makes when tickled.

[ HIT Lab NZ ]

In this interview, Jean Paul Laumond discusses his move from mathematics to robotics and his career contributions to the field, especially in regard to motion planning and anthropomorphic motion. Describing his involvement at CNRS and in other robotics projects, such as HILARE, he comments on the difference in perspective between the robotics approach and a mathematical one.

[ IEEE RAS History ]

Here are a couple of videos from the CMU Robotics Institute archives, showing some of the work that took place over the last few decades.

[ CMU RI ]

In this episode of the Artificial Intelligence Podcast, Lex Fridman speaks with David Ferrucci from IBM about Watson and (you guessed it) artificial intelligence.

David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy! He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence Podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is by Pieter Abbeel from UC Berkeley, on “Deep Learning for Robotics.”

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what otherwise often ends up being time-consuming, task-specific programming. This talk will describe recent progress in deep reinforcement learning (robots learning through their own trial and error), in apprenticeship learning (robots learning from observing people), and in meta-learning for action (robots learning to learn). This work has led to new robotic capabilities in manipulation, locomotion, and flight, with the same approach underlying advances in each of these domains.

[ CMU RI ]


#436079 Video Friday: This Humanoid Robot Will ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:

Yes I’d very much like one please. Full paper at the link below.

[ Paper ] via [ MIT ]

A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!

[ Roboy ]

The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!

I feel bad, that Roomba was probably just hungry 🙁

[ iRobot ]

We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.

[ Voliro ]

Thanks Mina!

Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.

It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.

[ Harada Lab ]

A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.

[ ABB ]

Measuring glacial movements in situ is a challenging but necessary task for modeling glaciers and predicting their future evolution. However, installing GPS stations on ice can be dangerous and expensive, if not impossible, in the presence of large crevasses. In this project, the ASL is developing UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of the first tests, performed at Gorner glacier, Switzerland, in July 2019.

[ EPFL ]

Turns out Tertills actually do a pretty great job fighting weeds.

Plus, they leave all those cute lil’ Tertill tracks.

[ Franklin Robotics ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.
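
The InvEKF (invariant extended Kalman filter) itself is well beyond a Video Friday blurb, but here’s a minimal sketch of ours of the strapdown dead-reckoning step that such a filter propagates and then corrects with leg-kinematics and contact measurements; all numbers are illustrative:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    """Cross-product matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate(R, v, p, gyro, accel, dt):
    """One first-order strapdown step from body-frame IMU readings.

    R: 3x3 body-to-world rotation, v: world velocity, p: world position.
    An InvEKF would add covariance propagation and measurement updates;
    only the dead-reckoning mean dynamics are shown here.
    """
    a_world = R @ accel + GRAVITY                 # rotate accel, remove gravity
    R_next = R @ (np.eye(3) + skew(gyro) * dt)    # small-angle rotation update
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt ** 2
    return R_next, v_next, p_next

# A 5 ms IMU tick: robot level and accelerating gently forward.
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
R, v, p = propagate(R, v, p, gyro=np.zeros(3),
                    accel=np.array([0.2, 0.0, 9.81]), dt=0.005)
print(v, p)   # tiny forward velocity and displacement
```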

[ GTSAM ] via [ University of Michigan ]

UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.

[ UBTECH ]

ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.

[ ABB ]

In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.

[ Misty Robotics ]

My guess is that this was not one of iRobot’s testing environments for the Roomba.

You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.

[ How Farms Work ]

Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.

[ Panama Flying Labs ]

Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.

[ SubT ]

Special thanks to PBS, Mark Knobil, Joe Seamans, and Stan Brandorff, and to the many others who produced this program in 1991.

It features Reid Simmons (and his 1-year-old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.

[ CMU RI ]

As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences and society, and explores what this could mean for the future.

[ Sonar+D ]


#435822 The Internet Is Coming to the Rest of ...

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate among each other and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano, and both displayed an exquisite sense of musical timing and melody.

Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”

“It blew me away,” he says.

Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”

Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.

“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”

So, for the past six years, the architects of the Dolittlesque initiative embarked on two small pilot projects, one for dolphins and one for chimpanzees.

At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.

Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.

Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.

Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system: images are projected through an underwater viewing window onto a glass panel, and infrared sensors register the dolphins’ touches, allowing the animals to play specially designed apps, including one dubbed Whack-a-Fish.

Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.

The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.

“We can use data analysis and technology tools to give non-humans a lot more choice and control.”
—Peter Gabriel

The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.

“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.

Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”

Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
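
Raskin and Selvitelle haven’t published their pipeline here, but a standard building block of unsupervised machine translation is aligning two embedding spaces with an orthogonal map; given even a rough correspondence between the spaces, the refinement step has a closed-form answer (the orthogonal Procrustes problem). A toy sketch under those assumptions:

```python
import numpy as np

# Toy stand-ins: X holds embeddings of animal vocalizations, Y holds
# embeddings of human words, and the rows are assumed to correspond
# (in practice that correspondence is induced, e.g., adversarially).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
true_W = np.linalg.qr(rng.normal(size=(64, 64)))[0]   # hidden rotation
Y = X @ true_W

# Orthogonal Procrustes: W = U @ Vt minimizes ||X @ W - Y|| over all
# orthogonal matrices, where U, S, Vt = svd(X.T @ Y).
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(W, true_W))   # True on this toy data
# With the spaces aligned, nearest-neighbor search links a source
# vocalization to its closest counterpart in the target space.
```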

Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.

“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.

A challenge with this sort of AI software remains verification and validation. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say if a cybernetic translation of a sperm whale’s clicks is accurate or not?

One could back-translate an English expression into sperm whale-ese and then into English again. But with the great apes, there might be a better option.

According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.

Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha did.

“If I can have a chat with a cow, maybe I can have more compassion for it.”
—Jeremy Coller

Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata that had lived for six years in the Congolese rainforests before her capture.

Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes grew up fluent in both bonobo and human.

Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.

Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.

And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”

“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”

That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.

At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)

Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.

“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”

An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”
