Tag Archives: intelligent
#435822 The Internet Is Coming to the Rest of ...
People surf it. Spiders crawl it. Gophers navigate it.
Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.
Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.
And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.
Oh, and musician Peter Gabriel is involved.
“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”
The workshop was a long time in the making.
Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano, and both displayed an exquisite sense of musical timing and melody.
Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”
“It blew me away,” he says.
Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”
Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.
“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”
So, over the past six years, the architects of the Dolittlesque initiative have pursued two small pilot projects, one for dolphins and one for chimpanzees.
At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.
Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.
Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.
Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system: images are projected through an underwater viewing window onto a glass panel, and infrared sensors register where the dolphins touch, allowing them to play specially designed apps, including one dubbed Whack-a-Fish.
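The underlying touch detection is simple enough to sketch. Below is an illustrative toy in Python (not the lab's actual code, and the calibration homography H is assumed): threshold an infrared frame for the bright spot a dolphin's rostrum makes against the glass, take the blob's centroid, and project it into app coordinates.

```python
import numpy as np

# Illustrative sketch of the optical-touch idea, not the published D-Pad
# code. H is a placeholder camera-to-screen homography that would come
# from a one-time calibration of the viewing window against the display.
H = np.eye(3)

def detect_touch(ir_frame: np.ndarray, threshold: float = 200.0):
    """Return the app-space (x, y) of the brightest contact, or None."""
    ys, xs = np.nonzero(ir_frame > threshold)   # pixels lit by IR contact
    if xs.size == 0:
        return None                             # no touch this frame
    cam = np.array([xs.mean(), ys.mean(), 1.0]) # blob centroid, homogeneous
    sx, sy, w = H @ cam                         # project onto the screen plane
    return (sx / w, sy / w)

frame = np.zeros((480, 640))
frame[100:110, 200:210] = 255.0                 # synthetic touch blob
print(detect_touch(frame))                      # (204.5, 104.5) with H = I
```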
Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.
The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.
The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.
“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.
Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”
Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
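Those unsupervised-translation methods work, roughly, by learning an embedding space for each corpus and then finding a rotation that lines the two spaces up, so that nearest neighbors across spaces become candidate translations. Here is a minimal Python sketch of that alignment step, with random vectors standing in for learned call embeddings and a known rotation playing the hidden correspondence; real pipelines of the kind the Earth Species Project draws on bootstrap the map without any paired examples.

```python
import numpy as np

# Toy illustration of embedding-space alignment, the core trick behind
# unsupervised translation. Rows stand in for embeddings of human words
# and animal vocalization units, learned separately; here the "animal"
# space is secretly a rotation of the "human" one so we can check the
# recovered map.
rng = np.random.default_rng(0)
human = rng.normal(size=(50, 16))
true_rotation, _ = np.linalg.qr(rng.normal(size=(16, 16)))
animal = human @ true_rotation + rng.normal(scale=0.01, size=(50, 16))

# Orthogonal Procrustes: W = argmin ||human @ W - animal|| over rotations.
u, _, vt = np.linalg.svd(human.T @ animal)
W = u @ vt

# Map one human embedding into "animal space"; its nearest neighbor there
# is the candidate translation.
query = human[7] @ W
nearest = int(np.argmin(np.linalg.norm(animal - query, axis=1)))
print(nearest)  # 7: the aligned spaces agree on the correspondence
```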
Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.
“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.
One challenge with this sort of AI software is verification and validation. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say whether a cybernetic translation of a sperm whale’s clicks is accurate?
One could back-translate an English expression into sperm whale-ese and then into English again, as sketched below.
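As a sketch of what that round trip looks like, here is a toy in Python in which a trivially invertible encoding stands in for the two learned models. The caveat in the comments is the important part: a perfect score only proves the two models invert each other, not that either captures what the whale meant.

```python
from difflib import SequenceMatcher

# Toy stand-ins for a hypothetical English -> whale -> English round trip.
# A reversible bit-string encoding plays "whale-ese" so the sketch runs;
# a real system would use two learned translation models here.
def to_whale(text: str) -> str:
    return " ".join(format(ord(c), "08b").replace("0", ".").replace("1", "!")
                    for c in text)

def to_english(clicks: str) -> str:
    return "".join(chr(int(c.replace(".", "0").replace("!", "1"), 2))
                   for c in clicks.split())

def round_trip_score(sentence: str) -> float:
    # 1.0 means the sentence survived the round trip verbatim. Note that
    # this only shows the models invert each other, not that the
    # intermediate clicks mean anything to an actual whale.
    recovered = to_english(to_whale(sentence))
    return SequenceMatcher(None, sentence, recovered).ratio()

print(round_trip_score("The water is warm today"))  # 1.0 for this toy pair
```

But with the great apes, there might be a better option.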
According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.
Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha were.
Those bonobos were raised effectively from birth by both Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata, who had lived for six years in the Congolese rainforests before her capture.
Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and given limited exposure to their natural cultures or languages—those apes grew up fluent in both bonobo and human.
Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.
Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.
And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”
“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”
That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.
At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)
Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.
“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”
An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”
#435750 Video Friday: Amazon CEO Jeff Bezos ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events):
RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
Let us know if you have suggestions for next week, and enjoy today’s videos.
Last week at the re:MARS conference, Amazon CEO and aspiring supervillain Jeff Bezos tried out this pair of dexterous robotic hands, which he described as “weirdly natural” to operate. The system combines Shadow Robot’s anthropomorphic robot hands with SynTouch’s biomimetic tactile sensors and HaptX’s haptic feedback gloves.
After playing with the robot, Bezos let out his trademark evil laugh.
[ Shadow Robot ]
The RoboMaster S1 is DJI’s advanced new educational robot that opens the door to limitless learning and entertainment. Develop programming skills, get familiar with AI technology, and enjoy thrilling FPV driving with games and competition. From young learners to tech enthusiasts, get ready to discover endless possibilities with the RoboMaster S1.
[ DJI ]
It’s very impressive to see DLR’s humanoid robot Toro dynamically balancing, even while being handed heavy objects, pushing things, and using multi-contact techniques to kick a fire extinguisher for some reason.
The paper is in RA-L, and you can find it at the link below.
[ RA-L ] via [ DLR ]
Thanks Maximo!
Is it just me, or does the Suzumori Endo Robotics Laboratory’s Super Dragon arm somehow just keep getting longer?
Suzumori Endo Lab, Tokyo Tech developed a 10 m-long articulated manipulator for investigation inside the primary containment vessel of the Fukushima Daiichi Nuclear Power Plants. We employed a coupled tendon-driven mechanism and a gravity compensation mechanism using synthetic fiber ropes to design a lightweight and slender articulated manipulator. This work was published in IEEE Robotics and Automation Letters and Transactions of the JSME.
[ Suzumori Endo Lab ]
From what I can make out thanks to Google Translate, this cute little robot duck (developed by Nissan) helps minimize weeds in rice fields by stirring up the water.
[ Nippon.com ]
Confidence in your robot is when you can just casually throw it off of a balcony 15 meters up.
[ SUTD ]
You had me at “we’re going to completely submerge this apple in chocolate syrup.”
[ Soft Robotics Inc ]
In the mid-2020s, the European Space Agency is planning on sending a robotic sample return mission to the Moon. It’s called Heracles, after the noted snake-strangler of Greek mythology.
[ ESA ]
Rethink Robotics is still around, they’re just much more German than before. And Sawyer is still hard at work stealing jobs from humans.
[ Rethink Robotics ]
The reason to watch this new video of the Ghost Robotics Vision 60 quadruped is for the 3 seconds’ worth of barrel roll about 40 seconds in.
[ Ghost Robotics ]
This is a relatively low-altitude drop for Squishy Robotics’ tensegrity scout, but it’s still cool to watch a robot that’s resilient enough to fall and just not worry about it.
[ Squishy Robotics ]
We control here the Apptronik DRACO bipedal robot for unsupported dynamic locomotion. DRACO consists of a 10 DoF lower body with liquid cooled viscoelastic actuators to reduce weight, increase payload, and achieve fast dynamic walking. Control and walking algorithms are designed by UT HCRL Laboratory.
I think all robot videos should be required to start with two “oops” clips followed by a “for real now” clip.
[ Apptronik ]
SAKE’s EZGripper manages to pick up a wrench, and also pick up a raspberry without turning it into instajam.
[ SAKE Robotics ]
And now: the robotic long-tongued piggy, courtesy Sony Toio.
[ Toio ]
In this video the ornithopter developed inside the ERC Advanced Grant GRIFFIN project performs its first flight. This project aims to develop a flapping wing system with manipulation and human interaction capabilities.
A flapping-wing system with manipulation and human interaction capabilities, you say? I would like to subscribe to your newsletter.
[ GRVC ]
KITECH’s robotic hands and arms can manipulate, among other things, five boxes of Elmos. I’m not sure about the conversion of Elmos to Snuffleupaguses, although it turns out that one Snuffleupagus is exactly 1,000 pounds.
[ Ji-Hun Bae ]
The Australian Centre for Field Robotics (ACFR) has been working on agricultural robots for almost a decade, and this video sums up a bunch of the stuff that they’ve been doing, even if it’s more amusing than practical at times.
[ ACFR ]
ROS 2 is great for multi-robot coordination, like when you need your bubble level to stay really, really level.
[ Acutronic Robotics ]
We don’t hear iRobot CEO Colin Angle give a lot of talks, so this recent one (from Amazon’s re:MARS conference) is definitely worth a listen, especially considering how much innovation we’ve seen from iRobot recently.
Colin Angle, founder and CEO of iRobot, will unveil a series of breakthrough innovations in home robots from iRobot. For the first time on stage, he will discuss and demonstrate what it takes to build a truly intelligent system of robots that work together to accomplish more within the home – and enable that home, and the devices within it, to work together as one.
[ iRobot ]
In the latest episode of Robots in Depth, Per speaks with Federico Pecora from the Center for Applied Autonomous Sensor Systems at Örebro University in Sweden.
Federico talks about working on AI and service robotics. In this area he has worked on planning, especially focusing on why a particular goal is the one that the robot should work on. To make robots as useful and user friendly as possible, he works on inferring the goal from the robot’s environment so that the user does not have to tell the robot everything.
Federico has also worked with AI robotics planning in industry to optimize results. Managing the relative importance of tasks is another challenging area there. In this context, he works on automating not only a single robot for its goal, but an entire fleet of robots for their collective goal. We get to hear about how these techniques are being used in warehouse operations, in mines and in agriculture.
[ Robots in Depth ]
#435722 Stochastic Robots Use Randomness to ...
The idea behind swarm robots is to replace discrete, expensive, breakable uni-tasking components with a whole bunch of much simpler, cheaper, and replaceable robots that can work together to do the same sorts of tasks. Unfortunately, all of those swarm robots end up needing their own computing and communications and stuff if you want to get them to do what you want them to do.
A different approach to swarm robotics is to use a swarm of much cheaper robots that are far less intelligent. In fact, they may not have to be intelligent at all, if you can rely on their physical characteristics to drive them instead. These swarms are “stochastic,” meaning that their motions are randomly determined, but if you’re clever and careful, you can still get them to do specific things.
Georgia Tech has developed some little swarm robots called “smarticles” that can’t really do much at all on their own, but once you put them together into a jumble, their randomness can actually accomplish something.
Honestly, calling these particle robots “smart” might be giving them a bit too much credit, because they’re actually kind of dumb and strictly speaking not capable of all that much on their own. A single smarticle weighs 35 grams, and consists of some little 3D-printed flappy bits attached to servos, plus an Arduino Pro Mini, a battery, and a light or sound sensor. When its little flappy bits are activated, each smarticle can move slightly, but a single one mostly just moves around in a square and then will gradually drift in a mostly random direction over time.
It gets more interesting when you throw a whole bunch of smarticles into a constrained area. A small collection of five or 10 smarticles constrained together forms a “supersmarticle,” but besides being in close proximity to one another, the smarticles within the supersmarticle aren’t communicating or anything like that. As far as each smarticle is concerned, they’re independent, but weirdly, a bumble of them can work together without working together.
“These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics,” said Dan Goldman, a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology.
The researchers noticed that if one small robot stopped moving, perhaps because its battery died, the group of smarticles would begin moving in the direction of that stalled robot. Graduate student Ross Warkentin learned he could control the movement by adding photo sensors to the robots that halt the arm flapping when a strong beam of light hits one of them.
“If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light,” Goldman said. “That allowed steering of the ensemble in a very rudimentary, stochastic way.”
It turns out that it’s possible to model this behavior, and control a supersmarticle with enough fidelity to steer it through a maze. And while these particular smarticles aren’t all that small, strictly speaking, the idea is to develop techniques that will work when robots are scaled way, way down, to the point where you can’t physically fit useful computing in there at all.
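You can get a feel for that rectified drift with a toy model. The sketch below is a cartoon of the mechanism rather than the published model: the ring takes a random kick each step from its active members, and kicks pointing away from the stalled robot are damped by an assumed drag factor, standing in for the asymmetric friction a dead smarticle introduces.

```python
import numpy as np

# Cartoon of supersmarticle steering, not the model from the paper. The
# ring gets a random kick each step from its active members; kicks that
# point away from the stalled ("dark") smarticle are damped, a stand-in
# for the extra drag that robot contributes. The asymmetry rectifies the
# noise into net drift toward the stalled robot.
rng = np.random.default_rng(1)
ring = np.zeros(2)                   # supersmarticle position
stalled_dir = np.array([1.0, 0.0])   # direction of the inactive robot
DRAG = 0.5                           # assumed attenuation of "away" kicks

for _ in range(5000):
    kick = 0.01 * rng.normal(size=2)  # random jostling from active robots
    if kick @ stalled_dir < 0:        # heading away from the stalled robot
        kick *= DRAG                  # meets more resistance
    ring += kick

print(ring)  # x comes out strongly positive: drift toward the stalled robot
```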
The researchers are also working on some other concepts:
The Georgia Tech researchers envision stochastic robot swarms that don’t have a perfectly defined shape or delineation but are capable of self-propulsion, relying on the ensemble-level behaviors that lead to collective locomotion. In such a robot, the researchers say, groups of largely generic agents may be able to achieve complex goals, as observed in biological collectives.
Er, yeah. I’m…not sure I really want there to be a bipedal humanoid robot built out of a bunch of tiny robots. Like, that seems creepy somehow, you know? I’m totally okay with slugs, but let’s not get crazy.
“A robot made of robots: Emergent transport and control of a smarticle ensemble,” by William Savoie, Thomas A. Berrueta, Zachary Jackson, Ana Pervan, Ross Warkentin, Shengkai Li, Todd D. Murphey, Kurt Wiesenfeld, and Daniel I. Goldman from the Georgia Institute of Technology, appears in the current issue of Science Robotics.