Tag Archives: engineering

#435676 Intel’s Neuromorphic System Hits 8 ...

At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips—codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed to do deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8 million neurons, but that’s just a step toward a 768-chip, 100-million-neuron system that the company plans for the end of 2019.

Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.

“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark…we find significant gains in this architecture,” he says.

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

Photo: Tim Herman/Intel Corporation

One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips.

Finding algorithms that run well on an 8-million-neuron system and optimizing those algorithms in software is a considerable effort, he says. Still, the payoff could be huge. Neural networks that are more brain-like, such as those that run on Loihi, could be immune to some of artificial intelligence’s—for lack of a better word—dumbness.

For example, today’s neural networks suffer from something called catastrophic forgetting. If you tried to teach a trained neural network to recognize something new—a new road sign, say—by simply exposing the network to the new input, it would disrupt the network so badly that it would become terrible at recognizing anything. To avoid this, you have to completely retrain the network from the ground up. (DARPA’s Lifelong Learning Machines, or L2M, program is dedicated to solving this problem.)

(Here’s my favorite analogy: Say you coached a basketball team, and you raised the net by 30 centimeters while nobody was looking. The players would miss a bunch at first, but they’d figure things out quickly. If those players were like today’s neural networks, you’d have to pull them off the court and teach them the entire game over again—dribbling, passing, everything.)
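
The failure is easy to reproduce even on a tiny conventional network. Here’s a minimal numpy sketch (a toy illustration, unrelated to Loihi or DARPA’s L2M codebases): a small classifier masters task A, is then fine-tuned only on task B, and its task-A accuracy collapses toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(centers, n=500):
    """Two-class 2D Gaussian blobs around the given class centers."""
    X = np.vstack([rng.normal(c, 0.4, size=(n, 2)) for c in centers])
    y = np.repeat([0, 1], n)
    return X, y

def train(X, y, params, epochs=200, lr=0.5):
    W1, b1, W2, b2 = params
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                       # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)[:, 0]))     # sigmoid output
        g = (p - y) / len(y)                           # dLoss/dlogit for cross-entropy
        dW2 = h.T @ g[:, None]                         # backprop, output layer
        db2 = g.sum()
        dh = g[:, None] * W2[:, 0] * (1 - h**2)        # backprop through tanh
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return params

def accuracy(X, y, params):
    W1, b1, W2, b2 = params
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)[:, 0]))
    return ((p > 0.5) == y).mean()

params = [rng.normal(0, 0.5, (2, 16)), np.zeros(16),
          rng.normal(0, 0.5, (16, 1)), np.zeros(1)]

XA, yA = make_task([(-2, 0), (2, 0)])    # task A: classes split along x
XB, yB = make_task([(0, -2), (0, 2)])    # task B: classes split along y

train(XA, yA, params)
print("task A accuracy after learning A:", accuracy(XA, yA, params))
train(XB, yB, params)                    # naive fine-tuning on task B only
print("task A accuracy after learning B:", accuracy(XA, yA, params))
```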

Loihi can run networks that might be immune to catastrophic forgetting, meaning it learns a bit more like a human. In fact, there’s evidence, from a research collaboration with Thomas Cleland’s group at Cornell University, that Loihi can achieve what’s called one-shot learning: learning a new feature after being exposed to it only once. The Cornell group showed this by abstracting a model of the olfactory system so that it would run on Loihi. When exposed to a new virtual scent, the system not only didn’t catastrophically forget everything else it had smelled, it learned to recognize the new scent from that single exposure.
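
For intuition, the simplest non-spiking scheme with those two properties (one exposure suffices, and new classes never disturb old ones) is a nearest-prototype classifier. The toy sketch below is emphatically not the Cornell olfactory model; the feature vectors and scent names are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
prototypes = {}                  # scent name -> single stored feature vector

def learn_once(name, sample):
    prototypes[name] = sample    # one exposure, no retraining of anything else

def classify(sample):
    return min(prototypes, key=lambda n: np.linalg.norm(prototypes[n] - sample))

coffee = rng.normal(0.0, 1.0, 32)    # invented 32-dim "scent" features
smoke = rng.normal(2.0, 1.0, 32)
learn_once("coffee", coffee)
learn_once("smoke", smoke)           # adding a scent never disturbs the others
print(classify(coffee + 0.1 * rng.normal(size=32)))   # prints "coffee"
```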

Loihi might also be able to run feature-extraction algorithms that are immune to the kinds of adversarial attacks that befuddle today’s image recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans. (Disturbingly, humans are not completely immune to such attacks.)
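
Sparse coding itself is easy to state in conventional terms: find a code z minimizing ||x - Dz||^2 + lam * ||z||_1 for a fixed dictionary D. The sketch below solves that LASSO problem with plain ISTA on a CPU; it illustrates the algorithm family, not Intel’s spiking implementation, and the dictionary here is random rather than learned.

```python
import numpy as np

def ista(x, D, lam=0.1, steps=500):
    """Iterative shrinkage-thresholding for min ||x - Dz||^2 / 2 + lam * ||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z - D.T @ (D @ z - x) / L      # gradient step on the data term
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(256)
z_true[rng.choice(256, 5, replace=False)] = 1.0
x = D @ z_true + 0.01 * rng.normal(size=64)

z = ista(x, D)
print("nonzero coefficients recovered:", np.count_nonzero(np.abs(z) > 0.05))
```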

Photo: Tim Herman/Intel Corporation

A close-up shot of Loihi, Intel’s neuromorphic research chip. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of 64 of these Loihi chips.

Researchers have also been using Loihi to improve real-time control for robotic systems. For example, last week at the Telluride Neuromorphic Cognition Engineering Workshop—an event Davies called “summer camp for neuromorphics nerds”—researchers were hard at work using a Loihi-based system to control a foosball table. “It strikes people as crazy,” he says. “But it’s a nice illustration of neuromorphic technology. It’s fast, requires quick response, quick planning, and anticipation. These are what neuromorphic chips are good at.”


#435662 Video Friday: This 3D-Printed ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

We’re used to seeing bristle bots about the size of a toothbrush head (which is not a coincidence), but Georgia Tech has downsized them, with some interesting benefits.

Researchers have created a new type of tiny 3D-printed robot that moves by harnessing vibration from piezoelectric actuators, ultrasound sources or even tiny speakers. Swarms of these “micro-bristle-bots” might work together to sense environmental changes, move materials – or perhaps one day repair injuries inside the human body.

The prototype robots respond to different vibration frequencies depending on their configurations, allowing researchers to control individual bots by adjusting the vibration. Approximately two millimeters long – about the size of the world’s smallest ant – the bots can cover four times their own length in a second despite the physical limitations of their small size.
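
One way to picture that frequency-selective control: treat each bot as a damped resonator with its own natural frequency, so a shared vibration source can address one bot at a time by driving at its resonance. The numbers below are made up for illustration; the real response curves are in Georgia Tech’s paper.

```python
import numpy as np

def response(f_drive, f_nat, damping=0.05):
    """Steady-state amplitude ratio of a damped harmonic oscillator."""
    r = f_drive / f_nat
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * damping * r) ** 2)

# Invented natural frequencies (Hz), one per bot configuration.
bots = {"bot_A": 10_000.0, "bot_B": 14_000.0, "bot_C": 19_000.0}

for f_drive in bots.values():
    amps = {name: response(f_drive, f_nat) for name, f_nat in bots.items()}
    mover = max(amps, key=amps.get)
    print(f"drive at {f_drive:.0f} Hz -> {mover} responds most strongly")
```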

“We are working to make the technology robust, and we have a lot of potential applications in mind,” said Azadeh Ansari, an assistant professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “We are working at the intersection of mechanics, electronics, biology and physics. It’s a very rich area and there’s a lot of room for multidisciplinary concepts.”

[ Georgia Tech ]

Most consumer drones are “multi-copters,” meaning that they have a series of rotors or propellers that allow them to hover like helicopters. But having rotors severely limits their energy efficiency, which means that they can’t easily carry heavy payloads or fly for long periods of time. To get the best of both worlds, drone designers have tried to develop “hybrid” fixed-wing drones that can fly as efficiently as airplanes, while still taking off and landing vertically like multi-copters.

These drones are extremely hard to control because of the complexity of dealing with their flight dynamics, but a team from MIT CSAIL aims to make the customization process easier, with a new system that allows users to design drones of different sizes and shapes that can nimbly switch between hovering and gliding – all by using a single controller.

In future work, the team plans to further increase the drone’s maneuverability by improving its design. The model doesn’t yet fully take into account complex aerodynamic effects between the propeller’s airflow and the wings. Lastly, their method trained the copter with “yaw velocity” set at zero, which means it cannot currently perform sharp turns.

[ Paper ] via [ MIT ]

We’re not quite at the point where we can 3D print entire robots, but UCSD is getting us closer.

The UC San Diego researchers’ insight was twofold. They turned to a commercially available printer for the job (the Stratasys Objet350 Connex3, a workhorse in many robotics labs). In addition, they realized that one of the materials the printer uses is made of carbon particles that conduct electricity when connected to a power source. So the roboticists used the black resin to manufacture complex sensors embedded within robotic parts made of clear polymer. They designed and manufactured several prototypes, including a gripper.

When stretched, the sensors failed at approximately the same strain as human skin. But the polymers the 3D printer uses are not designed to conduct electricity, so their performance is not optimal. The 3D printed robots also require a lot of post-processing before they can be functional, including careful washing to remove impurities and then drying.

However, researchers remain optimistic that in the future, materials will improve and make 3D printed robots equipped with embedded sensors much easier to manufacture.
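
For a rough sense of how such printed piezoresistive sensors get read out, here’s generic voltage-divider math; this is an illustration, not UCSD’s readout circuit, and the supply and reference values are invented.

```python
# Hypothetical values: a 5 V supply and a 10 kilo-ohm reference resistor
# in series with the printed sensor.
V_SUPPLY = 5.0
R_REF = 10_000.0

def sensor_resistance(v_measured):
    """Invert the divider equation v = V * R_s / (R_s + R_ref) for R_s."""
    return R_REF * v_measured / (V_SUPPLY - v_measured)

for v in (1.0, 2.5, 4.0):   # example measured voltages
    print(f"{v:.1f} V -> {sensor_resistance(v):,.0f} ohm")
```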

[ UCSD ]

Congrats to Team Homer from the University of Koblenz-Landau, who won the RoboCup@Home world championship in Sydney!

[ Team Homer ]

When you’ve got a robot with both wheels and legs, motion planning is complicated. IIT has developed a new planner for CENTAURO that takes advantage of the different ways that the robot is able to get past obstacles.

[ Centauro ]

Thanks Dimitrios!

If you constrain a problem tightly enough, you can solve it even with a relatively simple robot. Here’s an example of an experimental breakfast robot named “Loraine” that can cook eggs, bacon, and potatoes using what looks to be zero sensing at all, just moving to different positions and actuating its gripper.

There’s likely to be enough human work required in the prep here to make the value that the robot adds questionable at best, but it’s a good example of how you can make a relatively complex task robot-compatible as long as you set it up in just the right way.

[ Connected Robotics ] via [ RobotStart ]

It’s been a while since we’ve seen a ball bot, and I’m not sure that I’ve ever seen one with a manipulator on it.

[ ETH Zurich RSL ]

Soft Robotics’ new mini fingers are able to pick up taco shells without shattering them, which as far as I can tell is 100 percent impossible for humans to do.

[ Soft Robotics ]

Yes, Starship’s wheeled robots can climb curbs, and indeed they have a pretty neat way of doing it.

[ Starship ]

Last year we posted a long interview with Christoph Bartneck about his research into robots and racism, and here’s a nice video summary of the work.

[ Christoph Bartneck ]

Canada’s contribution to the Lunar Gateway will be a smart robotic system that includes a next-generation robotic arm known as Canadarm3, as well as equipment and specialized tools. Using cutting-edge software and advances in artificial intelligence, this highly autonomous system will be able to maintain, repair and inspect the Gateway, capture visiting vehicles, relocate Gateway modules, help astronauts during spacewalks, and enable science both in lunar orbit and on the surface of the Moon.

[ CSA ]

An interesting demo of how Misty can integrate sound localization with other services.

[ Misty Robotics ]

The third and final period of the H2020 AEROARMS project has brought the final developments in industrial inspection and maintenance tasks, such as crawler retrieval and deployment (DLR) and industrial validation in settings like a refinery and a cement factory.

[ Aeroarms ]

The Guardian S remote visual inspection and surveillance robot navigates a disaster training site to demonstrate its advanced maneuverability, long-range wireless communications and extended run times.

[ Sarcos ]

This appears to be a cake frosting robot and I wish I had like 3 more hours of this to share:

Also here is a robot that picks fried chicken using a curiously successful technique:

[ Kazumichi Moriyama ]

This isn’t strictly robots, but professor Hiroshi Ishii, associate director of the MIT Media Lab, gave a fascinating SIGCHI Lifetime Achievement Talk that’s absolutely worth your time.

[ Tangible Media Group ]


#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?), I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch walking assistance to the elderly and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real-world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate, and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
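
To make the iterative residual tuning loop concrete, here’s a toy sketch of its control flow. It is not the TuneNet implementation: the trained network that maps an observation pair to a parameter update is replaced with a hand-rolled Gauss-Newton estimate, and the one-parameter “simulator” is invented.

```python
import numpy as np

def simulate(theta):
    """Invented one-parameter 'simulator' returning a 2-number observation."""
    return np.array([theta, 0.8 * theta**2])

def residual_update(obs_target, theta, eps=1e-4):
    # TuneNet would predict the update with a trained network; here we use
    # a Gauss-Newton step from the simulator's local sensitivity instead.
    jac = (simulate(theta + eps) - simulate(theta - eps)) / (2 * eps)
    return jac @ (obs_target - simulate(theta)) / (jac @ jac)

theta_true, theta = 0.9, 0.3          # hidden target vs. initial guess
obs_target = simulate(theta_true)     # a single observation of the target

for i in range(8):                    # iterative residual tuning
    theta += residual_update(obs_target, theta)
    print(f"iteration {i}: theta = {theta:.4f}")
```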

[ Paper ]

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]


#435628 Soft Exosuit Makes Walking and Running ...

Researchers at Harvard’s Wyss Institute have been testing a flexible, lightweight exosuit that can improve your metabolic efficiency by 4 to 10 percent while walking and running. This is very important because, according to a press release from Harvard, the suit can help you be faster and more efficient, whether you’re “walking at a leisurely pace” or “running for your life.” Great!

Making humans better at running for their lives is something that we don’t put nearly enough research effort into, I think. The problem may not come up very often, but when it does, it’s super important (because, bears). So, sign me up for anything that we can do to make our desperate flights faster or more efficient—especially if it’s a lightweight, wearable exosuit that’s soft, flexible, and comfortable to wear.

This is the same sort of exosuit that was part of a DARPA program that we wrote about a few years ago, which was designed to make it easier for soldiers to carry heavy loads for long distances.

Photos: Wyss Institute at Harvard University

The system uses two waist-mounted electrical motors connected with cables to thigh straps that run down around your butt. The motors pull on the cables at the same time that your muscles actuate, helping them out and reducing the amount of work that your muscles put in without decreasing the amount of force they exert on your legs. The entire suit (batteries included) weighs 5 kilograms (11 pounds).

In order for the cables to actuate at the right time, the suit tracks your gait with two inertial measurement units (IMUs) on the thighs and one on the waist, and then adjusts its actuation profile accordingly. It works well, too, with measurable increases in performance:

We show that a portable exosuit that assists hip extension can reduce the metabolic rate of treadmill walking at 1.5 meters per second by 9.3 percent and that of running at 2.5 meters per second by 4.0 percent compared with locomotion without the exosuit. These reduction magnitudes are comparable to the effects of taking off 7.4 and 5.7 kilograms during walking and running, respectively, and are in a range that has shown meaningful athletic performance changes.

By increasing your efficiency, you can think of the suit as being able to make you walk or run faster, or farther, or carry a heavier load, all while spending the same amount of energy (or less), which could be just enough to outrun the bear that’s chasing you. Plus, it doesn’t appear to be uncomfortable to wear, and doesn’t require the user to do anything differently, which means that (unlike most robotics things) it’s maybe actually somewhat practical for real-world use—whether you’re indoors or outdoors, or walking or running, or being chased by a bear or not.
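
For a sense of how the IMU-based timing described above can work in principle, here’s a generic gait-segmentation sketch; it is not the Wyss controller, and every signal parameter in it is made up.

```python
import numpy as np

dt = 0.005                                 # assume a 200 Hz IMU
t = np.arange(0, 3, dt)
stride_hz = 0.9                            # roughly 0.9 strides per second
rng = np.random.default_rng(2)
gyro = 2.0 * np.cos(2 * np.pi * stride_hz * t)   # fake thigh pitch rate (rad/s)
gyro += 0.05 * rng.normal(size=t.size)           # sensor noise

# Smooth before event detection to avoid noise-induced chatter.
smooth = np.convolve(gyro, np.ones(9) / 9, mode="same")

# Peak thigh flexion: angular rate crosses from positive (flexing) to negative.
events = np.flatnonzero((smooth[:-1] > 0) & (smooth[1:] <= 0))
for i in events:
    print(f"t = {t[i]:.2f} s: peak thigh flexion, start extension assist")
```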

Sadly, I have no idea when you might be able to buy one of these things. But the researchers are looking for ways to make the suit even easier to use, while also reducing the weight and making the efficiency increase more pronounced. Harvard’s Conor Walsh says they’re “excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors.” As a weekend warrior who is not entirely sure whether he can outrun a bear, I’m excited for this.

Reducing the metabolic rate of walking and running with a versatile, portable exosuit, by Jinsoo Kim, Giuk Lee, Roman Heimgartner, Dheepak Arumukhom Revi, Nikos Karavas, Danielle Nathanson, Ignacio Galiana, Asa Eckert-Erdheim, Patrick Murphy, David Perry, Nicolas Menard, Dabin Kim Choe, Philippe Malcolm, and Conor J. Walsh from the Wyss Institute for Biologically Inspired Engineering at Harvard University, appears in the current issue of Science.


#435619 Video Friday: Watch This Robot Dog ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Team PLUTO (University of Pennsylvania, Ghost Robotics, and Exyn Technologies) put together this video giving us a robot’s-eye-view (or whatever they happen to be using for eyes) of the DARPA Subterranean Challenge tunnel circuits.

[ PLUTO ]

Zhifeng Huang has been improving his jet-stepping humanoid robot, which features new hardware and the ability to take larger and more complex steps.

This video reports the latest progress of an ongoing project that uses a ducted-fan propulsion system to improve a humanoid robot’s ability to step over large ditches. The robot’s swing foot can land not only forward but also to the side. While keeping quasi-static balance, the robot was able to step over a ditch 450 mm wide (up to 97 percent of the robot’s leg length) in 3D stepping.
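
The quasi-static balance the description mentions reduces to a geometric test: the ground projection of the robot’s center of mass must stay inside the support polygon. A minimal sketch, with made-up foot geometry rather than anything from Huang’s controller:

```python
import numpy as np

def in_support_polygon(com_xy, vertices):
    """Point-in-convex-polygon test via consistent cross-product signs."""
    v = np.asarray(vertices, dtype=float)
    edges = np.roll(v, -1, axis=0) - v          # edge vectors around the polygon
    to_com = np.asarray(com_xy) - v             # vertex-to-CoM vectors
    cross = edges[:, 0] * to_com[:, 1] - edges[:, 1] * to_com[:, 0]
    return bool(np.all(cross >= 0) or np.all(cross <= 0))

# Stance-foot outline in meters while the swing foot crosses the ditch.
stance_foot = [(0.0, 0.0), (0.25, 0.0), (0.25, 0.1), (0.0, 0.1)]
print(in_support_polygon((0.12, 0.05), stance_foot))   # True: balanced
print(in_support_polygon((0.40, 0.05), stance_foot))   # False: tipping forward
```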

[ Paper ]

Thanks Zhifeng!

These underactuated hands from Matei Ciocarlie’s lab at Columbia are magically able to reconfigure themselves to grasp different object types with just one or two motors.

[ Paper ] via [ ROAM Lab ]

This is one reason we should pursue not “autonomous cars” but “fully autonomous cars” that never require humans to take over. We can’t be trusted.

During our early days as the Google self-driving car project, we invited some employees to test our vehicles on their commutes and weekend trips. What we were testing at the time was similar to the highway driver assist features that are now available on cars today, where the car takes over the boring parts of the driving, but if something outside its ability occurs, the driver has to take over immediately.

What we saw was that our testers put too much trust in that technology. They were doing things like texting, applying makeup, and even falling asleep that made it clear they would not be ready to take over driving if the vehicle asked them to. This is why we believe that nothing short of full autonomy will do.

[ Waymo ]

Buddy is a DIY and fetchingly minimalist social robot (of sorts) that will be coming to Kickstarter this month.

We have created a new Arduino kit. His name is Buddy. He is a DIY social robot to serve as a replacement for Jibo, Cozmo, or any of the other bots that are no longer available. Fully 3D printed and supported, he adds much more to our series of Arduino STEM robotics kits.

Buddy is able to look around and map his surroundings and react to changes within them. He can be surprised and he will always have a unique reaction to changes. The kit can be built very easily in less than an hour. It is even robust enough to take the abuse that kids can give it in a classroom.

[ Littlebots ]

The android Mindar, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom. Developed at a cost of almost $1 million (¥106 million) in a joint project between the Zen temple and robotics professor Hiroshi Ishiguro, the robot teaches about compassion and the dangers of desire, anger and ego.

[ Japan Times ]

I’m not sure whether it’s the sound or what, but this thing scares me for some reason.

[ BIRL ]

This gripper uses magnets as a sort of adjustable spring for dynamic stiffness control, which seems pretty clever.

[ Buffalo ]

What a package of medicine sees while being flown by drone from a hospital to a remote clinic in the Dominican Republic. The drone flew 11 km horizontally and 800 meters vertically, and I can’t even imagine what it would take to make that drive.

[ WeRobotics ]

My first ride in a fully autonomous car was at Stanford in 2009. I vividly remember getting in the back seat of a descendant of Junior, and watching the steering wheel turn by itself as the car executed a perfect parking maneuver. Ten years later, it’s still fun to watch other people have that experience.

[ Waymo ]

Flirtey, the pioneer of the commercial drone delivery industry, has unveiled the much-anticipated first video of its next-generation delivery drone, the Flirtey Eagle. The aircraft designer and manufacturer also unveiled the Flirtey Portal, a sophisticated take off and landing platform that enables scalable store-to-door operations; and an autonomous software platform that enables drones to deliver safely to homes.

[ Flirtey ]

EPFL scientists are developing new approaches for improved control of robotic hands – in particular for amputees – that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof-of-concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects.

[ EPFL ]

This video is a few years old, but we’ll take any excuse to watch the majestic sage-grouse be majestic in all their majesticness.

[ UC Davis ]

I like the idea of a game of soccer (or football, to you weirdos in the rest of the world) where the ball has a mind of its own.

[ Sphero ]

Looks like the whole delivery glider idea is really taking off! Or, you know, not taking off.

Weird that they didn’t show the landing, because it sure looked like it was going to plow into the side of the hill at full speed.

[ Yates ] via [ sUAS News ]

This video is from a 2018 paper, but it’s not like we ever get tired of seeing quadrupeds do stuff, right?

[ MIT ]

Founder and Head of Product, Ian Bernstein, and Head of Engineering, Morgan Bell, have been involved in the Misty project for years and they have learned a thing or two about building robots. Hear how and why Misty evolved into a robot development platform, learn what some of the earliest prototypes did (and why they didn’t work for what we envision), and take a deep dive into the technology decisions that form the Misty II platform.

[ Misty Robotics ]

Lex Fridman interviews Vijay Kumar on the Artificial Intelligence Podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is from Ross Knepper at Cornell, on Formalizing Teamwork in Human-Robot Interaction.

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

[ CMU RI ]

In this week’s episode of Robots in Depth, Per speaks with Julien Bourgeois about Claytronics, a project from Carnegie Mellon and Intel to develop “programmable matter.”

Julien started out as a computer scientist. He was always interested in robotics privately but then had the opportunity to get into micro robots when his lab was merged into the FEMTO-ST Institute. He later worked with Seth Copen Goldstein at Carnegie Mellon on the Claytronics project.

Julien shows an enlarged mock-up of the small robots that make up programmable matter, catoms, and speaks about how they are designed. Currently he is working on a unit that is one centimeter in diameter, and he shows us the very small CPU that goes into that model.

[ Robots in Depth ]
