Tag Archives: sensors
#436042 Video Friday: Caltech’s Drone With ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Caltech has been making progress on LEONARDO (LEg ON Aerial Robotic DrOne), their leggy thruster-powered humanoid-thing. It can now balance and walk, which is quite impressive to see.
We’ll circle back again when they’ve got it jumping and floating around.
[ Caltech ]
Turn the subtitles on to learn how robots became experts at slicing bubbly, melty, delicious cheese.
These robots learned the traditional Swiss raclette technique from demonstration. The Robot Learning & Interaction group at the Idiap Research Institute has developed an imitation learning technique that allows a robot to acquire new skills by considering both position and force information, with automatic adaptation to new situations (a rough sketch of the idea appears below). The range of applications is wide, including industrial robots, service robots, and assistive robots.
[ Idiap ]
Thanks Sylvain!
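A recurring idea in this line of imitation-learning work is that the variability across demonstrations tells the robot where to track stiffly and where to stay compliant, for both position and force. Here's a minimal, hypothetical Python sketch of that general idea; the toy data, gains, and names are all illustrative assumptions, not the Idiap group's code:

```python
# Sketch: variance-weighted tracking of demonstrated position + force.
# Everything here (toy demos, gains) is illustrative, not Idiap's code.
import numpy as np

T, n_demos = 200, 5
rng = np.random.default_rng(0)

# Fake demonstrations: each is (T, 2) -> [position (m), contact force (N)].
demos = np.stack([
    np.column_stack([
        np.linspace(0.0, 0.3, T) + 0.01 * rng.standard_normal(T),
        5.0 * np.sin(np.linspace(0, np.pi, T)) + rng.standard_normal(T),
    ])
    for _ in range(n_demos)
])

mu = demos.mean(axis=0)          # (T, 2) mean trajectory across demos
var = demos.var(axis=0) + 1e-6   # (T, 2) per-timestep variance

# Variance-weighted stiffness: where the demos agree, track stiffly; where
# they vary, stay compliant. This is what lets the reproduction adapt to
# new situations instead of replaying one rigid trajectory.
k_max = 500.0
stiffness = k_max * var[:, 0].min() / var[:, 0]   # (T,)

def control(t, x, f_measured):
    """Corrective command at step t from position x and sensed force."""
    pos_term = stiffness[t] * (mu[t, 0] - x)    # pull toward demonstrated position
    force_term = 0.2 * (mu[t, 1] - f_measured)  # nudge toward demonstrated force
    return pos_term + force_term

print(control(50, 0.05, 3.0))
```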
Some amazing news this week from Skydio, with the announcement of their better-in-every-single-way Skydio 2 autonomous drone. Read our full article for details, but here’s a getting-started video that gives you an overview of what the drone can do.
The first batch sold out in 36 hours, but you can put down a $100 deposit to reserve the $999 drone for 2020 delivery.
[ Skydio ]
UBTECH is introducing a couple of new robot kits for the holidays: ChampBot and FireBot.
$130 each, available on October 20.
[ UBTECH ]
NASA’s InSight lander on Mars is trying to use its robotic arm to get the mission’s heat flow probe, or mole, digging again. InSight team engineer Ashitey Trebi-Ollennu, based at NASA’s Jet Propulsion Laboratory in Pasadena, California, explains what has been attempted and the game plan for the coming weeks. The next tactic they’ll try will be “pinning” the mole against the hole it’s in.
[ NASA ]
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions.
[ Ryo Suzuki ]
Robot abuse!
The Vision 60 legged robot manages unstructured terrain without vision or force sensors in its legs, using only high-transparency actuators and 2 kHz algorithmic stability control: 4 limbs and 12 motors, driven by nothing but a velocity command.
[ Ghost Robotics ]
We asked real people to bring in real products they needed picked for their application. In MINUTES, we assembled the right tool.
This is a cool idea, but for a real challenge they should try it outside a supermarket. Or a pet store.
[ Soft Robotics ]
Good water quality is important to humans and to nature. In a country with as much water as the Netherlands has, ensuring water quality is a very labour-intensive undertaking. To address this issue, researchers from TU Delft have developed a ‘pelican drone’: a drone capable of taking water samples quickly, in combination with a measuring instrument that immediately analyses the water quality. The drone was tested this week at the new Marker Wadden nature area ‘Living Lab’.
[ MAVLab ]
In an international collaboration led by scientists in Switzerland, three amputees merge with their bionic prosthetic legs as they climb over various obstacles without having to look. The amputees report using and feeling their bionic leg as part of their own body, thanks to sensory feedback from the prosthetic leg that is delivered to nerves in the stump.
[ EPFL ]
It’s a little hard to see, but this is one way of testing out asteroid imaging spacecraft without actually going into space: a fake asteroid and a 2D microgravity simulator.
[ Caltech ]
Drones can help filmmakers pull off the kinds of shots that would otherwise be impossible.
[ DJI ]
Two long interviews this week from Lex Fridman’s AI Podcast, and both of them are worth watching: Gary Marcus, and Peter Norvig.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Tucker Hermans at the University of Utah, on “Improving Multi-fingered Robot Manipulation by Unifying Learning and Planning.”
Multi-fingered hands offer autonomous robots increased dexterity, versatility, and stability over simple two-fingered grippers. Naturally, this increased ability comes with increased complexity in planning and executing manipulation actions. As such, I propose combining model-based planning with learned components to improve over purely data-driven or purely model-based approaches to manipulation. This talk examines multi-fingered autonomous manipulation when the robot has only partial knowledge of the object of interest. I will first present results on planning multi-fingered grasps for novel objects using a learned neural network. I will then present our approach to planning in-hand manipulation tasks when dynamic properties of objects are not known. I will conclude with a discussion of our ongoing and future research to further unify these two approaches.
[ CMU RI ]
#435828 Video Friday: Boston Dynamics’ ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:
[ Boston Dynamics ]
Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery at 8 p.m. ET (check your local listings).
[ BattleBots ]
Thanks Trey!
Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.
As of Friday afternoon, the current bid is just over $100,000 with a week to go.
[ MegaBots ]
Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.
[ Michigan Engineering ]
Michael Burke from the University of Edinburgh writes:
We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two media. It works pretty well!
[ Paper ] via [ Robust Autonomy and Decisions Group ]
Thanks Michael!
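The control idea Burke describes is worth spelling out: if a classifier outputs P(peel) from sensed torque, then steering so the prediction stays pinned at 0.5 keeps the knife where the classifier is maximally uncertain, which is exactly the peel/pulp boundary. Here's a toy Python sketch of that loop, with made-up models and gains rather than the paper's actual code:

```python
# Toy sketch of maximum-uncertainty boundary following (not the paper's code).
import numpy as np

def p_peel(torque):
    """Logistic model of P(knife in peel | joint torque); toy parameters."""
    return 1.0 / (1.0 + np.exp(-8.0 * (torque - 1.5)))

def torque_at(offset):
    """Fake plant: torque rises as the knife moves outward into the peel."""
    return 1.5 + 2.0 * offset + 0.05 * np.random.randn()

offset, gain = 0.2, 0.1   # lateral offset from the nominal demonstrated cut
for step in range(100):
    p = p_peel(torque_at(offset))
    offset -= gain * (p - 0.5)   # drive P(peel) toward 0.5, i.e. the boundary

print(f"converged offset ~ {offset:.3f}")
```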
Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!
[ EMYS ]
We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN arm will be able to change between multiple ’end effector’ tools such as grippers and inspection sensors – those are in development so they’re not shown in this video.
[ Tethers Unlimited ]
UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is providing healthcare services to children at the Praktijk Intraverte multidisciplinary institution in the Netherlands.
The institution is using Alpha Mini in behavioral counseling for children. Alpha Mini moves, talks to the children, and offers games and activities that stimulate and engage them, helping and motivating them to become more at ease in social situations.
[ UBTECH ]
Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at. (A stripped-down sketch of the planning loop appears below.)
Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
[ PDDM ]
Thanks Vikash!
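For a sense of how online planning with a learned dynamics model works, here is a stripped-down random-shooting MPC loop in Python. This is only a sketch: PDDM itself uses an ensemble of neural-network dynamics models and a filtered, reward-weighted action-sampling scheme, and a toy double integrator stands in for the learned model here.

```python
# Illustrative random-shooting MPC; a toy model replaces the learned one.
import numpy as np

rng = np.random.default_rng(1)
H, K = 10, 256                  # planning horizon, sampled action sequences
goal = np.array([1.0, 0.0])     # reach position 1.0 with zero velocity

def dynamics(state, action):
    """Stand-in for the learned model f(s, a) -> s'; toy double integrator."""
    pos, vel = state
    vel = vel + 0.1 * action
    return np.array([pos + 0.1 * vel, vel])

def reward(state):
    return -np.sum((state - goal) ** 2)

def plan(state):
    """Simulate K random action sequences, return the best first action."""
    actions = rng.uniform(-1.0, 1.0, size=(K, H))
    returns = np.zeros(K)
    for k in range(K):
        s = state.copy()
        for t in range(H):
            s = dynamics(s, actions[k, t])
            returns[k] += reward(s)
    return actions[np.argmax(returns), 0]

state = np.array([0.0, 0.0])
for step in range(50):          # replan at every control step
    state = dynamics(state, plan(state))
print("final state:", state)
```

Replanning at every step is what makes this model-predictive: errors in the learned model get corrected by fresh state feedback instead of compounding over one long open-loop plan.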
CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.
A paper on this has been submitted to IROS 2019.
[ CMU ]
The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.
[ Autonomous Robots Lab ]
More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.
[ YouTube ]
Whatever you think of military drones, we can all agree that they look cool.
[ Boeing ]
I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.
[ EPFL LASA ]
This video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by a 3D prior map built in an earlier mapping run.
[ CMU ]
The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.
[ Stanford ASL ]
In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.
[ Misty Robotics ]
This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”
The previous generation of NASA missions to the outer solar system discovered saltwater oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged skylight entrances to lava tube caves on the Moon more than 100 m in diameter, and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest-value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space, where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.
[ CMU ]
Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”
[ UC Berkeley ]
#435822 The Internet Is Coming to the Rest of ...
People surf it. Spiders crawl it. Gophers navigate it.
Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.
Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.
And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.
Oh, and musician Peter Gabriel is involved.
“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”
The workshop was a long time in the making.
Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano before, and both displayed an exquisite sense of musical timing and melody.
Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”
“It blew me away,” he says.
Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”
Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.
“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”
So, over the past six years, the architects of the Dolittlesque initiative have pursued two small pilot projects, one for dolphins and one for chimpanzees.
At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.
Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.
Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.
Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system in which images are projected through an underwater viewing window onto a glass panel, with infrared sensors registering where the dolphins touch, allowing them to play specially designed apps, including one dubbed Whack-a-Fish.
Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.
The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.
The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.
“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.
Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”
Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.
“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.
Verification and validation remain a challenge for this sort of AI software. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say whether a cybernetic translation of a sperm whale’s clicks is accurate or not?
One could back-translate an English expression into sperm whale-ese and then into English again, checking how much of the original meaning survives the round trip (sketched below). But with the great apes, there might be a better option.
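First, that round-trip check in sketch form, with placeholder translator functions standing in for models that do not yet exist:

```python
# Hedged sketch of round-trip (back-translation) validation. The
# translators below are pure placeholders; no such models exist yet.
from difflib import SequenceMatcher

def to_whale(english: str) -> str:
    """Placeholder for a learned English -> sperm-whale translator."""
    return " ".join(f"<click:{w}>" for w in english.split())

def to_english(whale: str) -> str:
    """Placeholder for the inverse model."""
    return " ".join(t[len("<click:"):-1] for t in whale.split())

def round_trip_score(sentence: str) -> float:
    """1.0 means the sentence survived the round trip unchanged."""
    recovered = to_english(to_whale(sentence))
    return SequenceMatcher(None, sentence, recovered).ratio()

print(round_trip_score("the whales are feeding near the surface"))
```

In practice the score would come from a semantic-similarity model rather than string matching, since a good translation can survive the round trip without being word-for-word identical.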
According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.
Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha did.
Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata that had lived for six years in the Congolese rainforests before her capture.
Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes thus grew up fluent in both bonobo and human.
Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.
Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.
And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”
“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”
That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.
At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)
Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.
“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”
An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”