#437864 Video Friday: Jet-Powered Flying ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 1-15, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

ICRA 2020, the world’s best, biggest, longest virtual robotics conference ever, kicked off last Sunday with an all-star panel on a critical topic: “COVID-19: How Can Roboticists Help?”

Watch other ICRA keynotes on IEEE.tv.

We’re getting closer! Well, kinda. iRonCub, the jet-powered flying humanoid, is still a simulation for now, but not only are the simulations getting better—the researchers have begun testing real jet engines!

This video shows the latest results on Aerial Humanoid Robotics obtained by the Dynamic Interaction Control Lab at the Italian Institute of Technology. The simulation covers both the robot and the jet dynamics; the jet model uses the results obtained in the paper “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics,” published in IEEE Robotics and Automation Letters.

This video presents the paper “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics,” published in IEEE Robotics and Automation Letters (Volume 5, Issue 2, April 2020), pages 2070–2077. Preprint at https://arxiv.org/pdf/1909.13296.pdf.

[ IIT ]

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

[ MIT CSAIL ]

UBTECH’s anti-epidemic solutions greatly relieve the workload of front-line medical staff and cut the consumption of personal protective equipment (PPE).

[ UBTECH ]

We demonstrate a method to assess the concrete deterioration in sewers by performing a tactile inspection motion with a sensorized foot of a legged robot.

[ THING ] via [ ANYmal Research ]

Get a closer look at the Virtual competition of the Urban Circuit and how teams can use the simulated environments to better prepare for the physical courses of the Subterranean Challenge.

[ SubT ]

Roboticists at the University of California San Diego have developed flexible feet that can help robots walk up to 40 percent faster on uneven terrain, such as pebbles and wood chips. The work has applications for search-and-rescue missions as well as space exploration.

[ UCSD ]

Thanks Ioana!

Tsuki is a ROS-enabled, highly dynamic quadruped robot developed by Lingkang Zhang.

And as far as we know, Lingkang is still chasing it.

[ Quadruped Tsuki ]

Thanks Lingkang!

Watch this.

This video shows an impressive demo of YuMi’s superior precision, using precise servo gripper fingers and a vacuum suction tool to pick up extremely small parts inside a mechanical watch. The video is not of a final application used in production; it is a demo of how such an application could be implemented.

[ ABB ]

Meet Presso, the “5-minute dry cleaning robot.” Can you really call this a robot? We’re not sure. The company says it uses “soft robotics to hold the garment correctly, then clean, sanitize, press and dry under 5 minutes.” The machine was initially designed for use in the hospitality industry, but after adding a disinfectant function for COVID-19, it is now being used on movie and TV sets.

[ Presso ]

The next Mars rover launches next month (!), and here’s a look at some of the instruments on board.

[ JPL ]

Embodied lead engineer Peter Teel describes why the company chose to build Moxie’s computing system from scratch and what makes it so unique.

[ Embodied ]

I did not know that this is where Pepper’s e-stop is. Nice design!

[ SoftBank Robotics ]

The state of the art in swarm robotics lacks systems capable of absolute decentralization and is hence unable to mimic complex biological swarms consisting of simple units. Our research interconnects the fields of swarm robotics and computer vision, and introduces a novel use of the vision-based method UVDAR for mutual localization in swarm systems, allowing for the absolute decentralization found in biological swarms. The developed methodology allows us to deploy real-world aerial swarming systems with robots directly localizing each other instead of communicating their states via a communication network, which is a typical bottleneck of current state-of-the-art systems.

[ CVUT ]
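To see why vision-based mutual localization removes that bottleneck, here’s a minimal per-robot control loop: each robot estimates its neighbors’ relative positions from its own camera (where a UVDAR-style detector would plug in) and computes a flocking command from those estimates alone, with no radio link anywhere. Everything below, names and gains included, is an illustrative sketch rather than the CVUT code.

```python
import numpy as np

def detect_neighbors(camera_frame):
    """Placeholder for UVDAR-style perception: estimate relative neighbor
    positions from blinking UV markers seen in this robot's own camera.
    No communication network is involved."""
    raise NotImplementedError  # supplied by the onboard vision stack

def swarm_velocity_command(rel_positions, k_cohesion=0.5, k_separation=1.5, d_safe=2.0):
    """Boids-style command computed only from locally observed neighbors.
    Assumes at least one neighbor is currently visible."""
    rel = np.asarray(rel_positions)           # shape (N, 3), robot-centric
    cohesion = k_cohesion * rel.mean(axis=0)  # drift toward the local centroid
    separation = np.zeros(3)
    for p in rel:                             # push away from close neighbors
        d = np.linalg.norm(p)
        if d < d_safe:
            separation -= k_separation * (d_safe - d) * p / d
    return cohesion + separation
```

Because every term is computed from the robot’s own observations, no robot depends on a shared network, which is the decentralization claim being made above.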

I’m almost positive I could not do this task.

It’s easy to pick up objects using YuMi’s integrated vacuum functionality. YuMi also supports ABB’s Conveyor Tracking and PickMaster 3 functionality, enabling it to track a moving conveyor and pick up objects using vision, which is perfect for consumer products handling applications.

[ ABB ]
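In general terms, conveyor tracking means the vision system detects an object once and the controller then propagates its pose along the belt using encoder feedback, so the robot can meet the part downstream. Here’s a schematic sketch of that prediction step; the names are invented for illustration, and ABB’s actual PickMaster/RAPID interfaces are not shown.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x_at_detection: float  # position along the belt when the camera saw it (m)
    t_detection: float     # timestamp of that detection (s)

def predicted_position(obj: TrackedObject, t_now: float, belt_speed: float) -> float:
    """Propagate the detected position forward using encoder-derived belt speed."""
    return obj.x_at_detection + belt_speed * (t_now - obj.t_detection)

def can_pick(obj: TrackedObject, t_now: float, belt_speed: float,
             window_start: float, window_end: float) -> bool:
    """Only commit to a pick while the object is inside the robot's work window."""
    return window_start <= predicted_position(obj, t_now, belt_speed) <= window_end
```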

Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulty performing safety gestures on the road or even forget them, given their lack of cycling experience, road distractions, and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods of reminding children about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses; the second combines vibration on the handlebar with ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children riding a mid-size tricycle augmented with a set of sensors to recognize the children’s behavior in real time. We found that both systems are successful in reminding children about safety gestures, and each has its unique advantages and disadvantages.

[ Paper ]

Nathan Sam and Robert “Red” Jensen fabricate and fly a Prandtl-M aircraft at NASA’s Armstrong Flight Research Center in California. The aircraft is the second of three prototypes of varying sizes to provide scientists with options to fly sensors in the Martian atmosphere to collect weather and landing site information for future human exploration of Mars.

[ NASA ]

This is clever: In order to minimize time spent labeling datasets, you can use radar to identify other vehicles, not because the radar can actually recognize other vehicles, but because the radar can recognize other stuff that’s big and moving, which turns out to be almost as good.

[ ICRA Paper ]
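In sketch form, the idea is cross-modal auto-labeling: radar returns that look big (high radar cross-section) and are moving get treated as vehicle labels for training a camera- or lidar-based detector, with no human annotation. The thresholds and field names below are illustrative guesses, not values from the paper.

```python
def radar_pseudo_labels(radar_returns, min_rcs=5.0, min_speed=1.0):
    """Keep returns that look vehicle-sized and are in motion; clutter,
    pedestrians, and static infrastructure are dropped. The survivors
    become noisy-but-free training labels."""
    labels = []
    for r in radar_returns:
        if r["rcs"] >= min_rcs and abs(r["radial_speed"]) >= min_speed:
            labels.append({"range": r["range"],
                           "azimuth": r["azimuth"],
                           "class": "vehicle"})
    return labels
```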

Happy 10th birthday to the Natural Robotics Lab at the University of Sheffield.

[ NRL ]


#437859 We Can Do Better Than Human-Like Hands ...

One strategy for designing robots that are capable in anthropomorphic environments is to make the robots themselves as anthropomorphic as possible. It makes sense—for example, there are stairs all over the place because humans have legs, and legs are good at stairs, so if we give robots legs like humans, they’ll be good at stairs too, right? We also see this tendency when it comes to robotic grippers, because robots need to grip things that have been optimized for human hands.

Despite some amazing robotic hands inspired by the biology of our own human hands, there are also opportunities for creativity in gripper designs that do things human hands are not physically capable of. At ICRA 2020, researchers from Stanford University presented a paper on the design of a robotic hand that has fingers made of actuated rollers, allowing it to manipulate objects in ways that would tie your fingers into knots.

While it’s got a couple fingers, this prototype “roller grasper” hand tosses anthropomorphic design out the window in favor of unique methods of in-hand manipulation. The roller grasper does share some features with other grippers designed for in-hand manipulation using active surfaces (like conveyor belts embedded in fingers), but what’s new and exciting here is that those articulated active roller fingertips (or whatever non-anthropomorphic name you want to give them) provide active surfaces that are steerable. This means that the hand can grasp objects and rotate them without having to resort to complex sequences of finger repositioning, which is how humans do it.

Photo: Stanford University

Tasks like picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), are a breeze thanks to the fingertip rollers.

Each of the hand’s fingers has three actuated degrees of freedom, which allow several different ways in which objects can be grasped and manipulated. Tasks like picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), are a breeze thanks to the fingertip rollers. The motion of an object in this gripper isn’t quite holonomic, meaning that it can’t arbitrarily reorient things without sometimes going through intermediate steps. And it’s also not compliant in the way that many other grippers are, which limits some types of grasps. This particular design probably won’t replace every gripper out there, but it’s particularly skilled at some specific kinds of manipulation in a way that makes it unique.

We should be clear that it’s not the intent of this paper (or of this article!) to belittle five-fingered robotic hands—the point is that there are lots of things that you can do with totally different hand designs, and just because humans use one kind of hand doesn’t mean that robots need to do the same if they want to match (or exceed) some specific human capabilities. If we could make robotic hands with five fingers that had all of the actuation and sensing and control that our own hands do, that would be amazing, but it’s probably decades away. In the meantime, there are plenty of different designs to explore.

And speaking of exploring different designs, these same folks are already at work on version two of their hand, which replaces the fingertip rollers with fingertip balls:

For more on this new version of the hand (among other things), we spoke with lead author Shenli Yuan via email. And the ICRA page is here if you have questions of your own.

IEEE Spectrum: Human hands are often seen as the standard for manipulation. When adding degrees of freedom that human hands don’t have (as in your work) can make robotic hands more capable than ours in many ways, do you think we should still think of human hands as something to try and emulate?

Shenli Yuan: Yes, definitely. Not only because human hands have great manipulation capability, but because we’re constantly surrounded by objects that were designed and built specifically to be manipulated by the human hand. Anthropomorphic robot hands are still worth investigating, and still have a long way to go before they truly match the dexterity of a human hand. The design we came up with is an exploration of what unique capabilities may be achieved if we are not bound by the constraints of anthropomorphism, and what a biologically impossible mechanism may achieve in robotic manipulation. In addition, for lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much. The design constraints for robotics and biology have points in common (like mechanical wear and finite tendon stiffness) but also major differences (like continuous rotation for robots, and fewer heat dissipation problems for humans).

“For lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much.”
—Shenli Yuan, Stanford University

What are some manipulation capabilities of human hands that are the most difficult to replicate with your system?

There are a few things that come to mind. It cannot perform a power grasp (using the whole hand for grasping, as opposed to a pinch grasp that uses only the fingertips), which is something that can be easily done by human hands. It cannot move or rotate objects instantaneously in arbitrary directions or about arbitrary axes, though the human hand is somewhat limited in this respect as well. It also cannot perform gaiting. That being said, these limitations exist largely because this grasper only has 9 degrees of freedom, as opposed to the human hand, which has more than 20. We don’t think of this grasper as a replacement for anthropomorphic hands, but rather as a way to provide unique capabilities without all of the complexity associated with a highly actuated, humanlike hand.

What’s the most surprising or impressive thing that your hand is able to do?

The most impressive feature is that it can rotate objects continuously, which is typically difficult or inefficient for humanlike robot hands. Something really surprising was that we put most of our energy into the design and analysis of the grasper, and the control strategy we implemented for demonstrations is very simple. This simple control strategy works surprisingly well with very little tuning or trial-and-error.

With this many degrees of freedom, how complicated is it to get the hand to do what you want it to do?

The number of degrees of freedom is actually not what makes controlling it difficult. Most of the difficulties we encountered were actually due to the rolling contact between the rollers and the object during manipulation. The rolling behavior can be viewed as constantly breaking and re-establishing contacts between the rollers and objects; this very dynamic behavior introduces uncertainties in controlling our grasper. Specifically, it was difficult to estimate the velocity of each contact point with the object, which changes based on object and finger position, object shape (especially curvature), and slip/no slip.
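To make the contact-velocity problem concrete: for a rigid roller, the material point currently at the contact moves with v = v_center + ω × (p_contact − p_center), and under pure rolling this must match the object’s velocity at the same point; any mismatch is slip. A sketch in our own notation (not code from the paper):

```python
import numpy as np

def contact_point_velocity(v_roller_center, omega_roller, p_contact, p_roller_center):
    """Velocity of the roller's material point at the contact:
    v = v_center + omega x (p_contact - p_center)."""
    r = np.asarray(p_contact) - np.asarray(p_roller_center)
    return np.asarray(v_roller_center) + np.cross(omega_roller, r)
```

Every input here shifts as the grasp evolves: the contact location depends on the object’s curvature, and slip breaks the rolling constraint, which is exactly the estimation difficulty Yuan describes.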

What more can you tell us about Roller Grasper V2?

Roller Grasper V2 has spherical rollers, while V1 has cylindrical rollers. We realized that cylindrical rollers are very good at manipulating objects when the rollers and the object form line contacts, but they can be unstable when the grasp geometry doesn’t allow for a line contact between each roller and the grasped object. Spherical rollers solve that problem by allowing predictable points of contact regardless of how a surface is oriented.

The parallelogram mechanism of Roller Grasper V1 puts the pivot axis slightly offset from the center of the roller, which made our control and analysis more challenging. The kinematics of Roller Grasper V2 are simpler: the base joint intersects with the finger, which intersects with the pivot joint, and the pivot joint intersects with the roller joint. Its symmetrical design and simpler kinematics make our control and analysis a lot more straightforward. Roller Grasper V2 also has a larger pivot range of 180 degrees, while V1 is limited to 90 degrees.

In terms of control, we implemented more sophisticated control strategies (including a hand-crafted control strategy and an imitation learning based strategy) for the grasper to perform autonomous in-hand manipulation.
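Because V2’s three joint axes intersect, each roller’s orientation is just a composition of three rotations, which is a big part of why the analysis gets simpler. A minimal sketch, with axis conventions assumed for illustration rather than taken from the paper:

```python
import numpy as np

def rot(axis, theta):
    """Rotation about a unit axis via Rodrigues' formula."""
    a = np.asarray(axis, dtype=float)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def roller_orientation(q_base, q_pivot, q_roller):
    """Three intersecting revolute joints in series: base, pivot, roller spin.
    Axis choices here are illustrative, not the paper's conventions."""
    return rot([0, 0, 1], q_base) @ rot([0, 1, 0], q_pivot) @ rot([1, 0, 0], q_roller)
```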

“Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation,” by Shenli Yuan, Austin D. Epps, Jerome B. Nowak, and J. Kenneth Salisbury from Stanford University is being presented at ICRA 2020.



#437851 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA into platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.

Since then, we’ve been following closely as Spot has gone from a research platform to a product, and today, Boston Dynamics is announcing the final step in that process: commercial availability. You can now order a Spot Explorer Kit from the Boston Dynamics online store for US $74,500 (plus tax), shipping included, with delivery in 6 to 8 weeks. FINALLY!

Over the past 10 months or so, Boston Dynamics has leased Spot robots to carefully selected companies, research groups, and even a few individuals as part of its early adopter program—that’s where all of the clips in the video below came from. While there are over 100 Spots out in the world right now, getting one of them has required convincing Boston Dynamics up front that you knew more or less exactly what you wanted to do and how you wanted to do it. If you’re a big construction company or the Jet Propulsion Laboratory or Adam Savage, that’s all well and good, but for other folks who think that a Spot could be useful for them somehow and want to give it a shot, this new availability provides an opportunity with fewer strings attached to do some experimentation with the robot.

There’s a lot of cool stuff going on in that video, but we were told that the one thing that really stood out to the folks at Boston Dynamics was a 2-second clip that you can see on the left-hand side of the screen from 0:19 to 0:21. In it, Spot is somehow managing to walk across a spider web of rebar without getting tripped up, at faster than human speed. This isn’t something that Spot was specifically programmed to do, and in fact the Spot User Guide specifically identifies “rebar mesh” as an unsafe operating environment. But the robot just handles it, and that’s a big part of what makes Spot so useful—its ability to deal with (almost) whatever you can throw at it.

Before you get too excited, Boston Dynamics is fairly explicit that the current license for the robot is intended for commercial use, and the company specifically doesn’t want people to be just using it at home for fun. We know this because we asked (of course we asked), and they told us “we specifically don’t want people to just be using it at home for fun.” Drat. You can still buy one as an individual, but you have to promise that you’ll follow the terms of use and user guidelines, and it sounds like using a robot in your house might be the second-fastest way to invalidate your warranty:

SPOT IS AN AMAZING ROBOT, BUT IS NOT CERTIFIED SAFE FOR IN-HOME USE OR INTENDED FOR USE NEAR CHILDREN OR OTHERS WHO MAY NOT APPRECIATE THE HAZARDS ASSOCIATED WITH ITS OPERATION.

Not being able to get Spot to play with your kids may be disappointing, but for those of you with the sort of kids who are also students, the good news is that Boston Dynamics has carved out a niche for academic institutions, which can buy Spot at a discounted price. And if you want to buy a whole pack of Spots, there’s a bulk discount for Enterprise users as well.

What do you get for $74,500? All this!

Spot robot
Spot battery (2x)
Spot charger
Tablet controller and charger
Robot case for storage and transportation
FREE SHIPPING!

Photo: Boston Dynamics

The basic package includes the robot, two batteries, charger, a tablet controller, and a storage case.

You can view detailed specs here.

So is $75k a lot of money for a robot like Spot, or not all that much? We don’t have many useful points of comparison, partially because it’s not clear to what extent other pre-commercial quadrupedal robots (like ANYmal or Aliengo) share capabilities and features with Spot. For more perspective on Spot’s price tag, we spoke to Michael Perry, vice president of business development at Boston Dynamics.

IEEE Spectrum: Why is Spot so affordable?

Michael Perry: The main goal of selling the robot at this stage is to try to get it into the hands of as many application developers as possible, so that we can learn from the community what the biggest driver of value is for Spot. As a platform, unlocking the value of an ecosystem is our core focus right now.

Spectrum: Why is Spot so expensive?

Perry: Expensive is relative, but compared to the initial prototypes of Spot, we’ve been able to drop down the cost pretty significantly. One key thing has been designing it for robustness—we’ve put hundreds and hundreds of hours on the robot to make sure that it’s able to be successful when it falls, or when it has an electrostatic discharge. We’ve made sure that it’s able to perceive a wide variety of environments that are difficult for traditional vision-based sensors to handle. A lot of that engineering is baked into the core product so that you don’t have to worry about the mobility or robotic side of the equation, you can just focus on application development.

Photos: Boston Dynamics

Accessories for Spot include [clockwise from top left]: Spot GXP with additional ports for payload integration; Spot CAM with panorama camera and advanced comms; Spot CAM+ with pan-tilt-zoom camera for inspections; Spot EAP with lidar to enhance autonomy on large sites; Spot EAP+ with Spot CAM camera plus lidar; and Spot CORE for additional processing power.

The $75k that you’ll pay for the Spot Explorer Kit, it’s important to note, is just the base price for the robot. As with other things that fall into this price range (like a luxury car), there are all kinds of fun ways to drive that cost up with accessories, although for Spot, some of those accessories will be necessary for many (if not most) applications. For example, a couple of expansion ports to make it easier to install your own payloads on Spot will run you $1,275. An additional battery is $4,620. And if you want to really get some work done, the Enhanced Autonomy Package (with 360 cameras, lights, better comms, and a Velodyne VLP-16) will set you back an additional $34,570. If you were hoping for an arm, you’ll have to wait until the end of the year.
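For the curious, here’s the quick arithmetic on a fully kitted-out Explorer configuration using the list prices above (the arm, not yet available, excluded):

```python
spot_config = {
    "Spot Explorer Kit": 74_500,
    "expansion ports": 1_275,
    "extra battery": 4_620,
    "Enhanced Autonomy Package": 34_570,
}
print(f"total: ${sum(spot_config.values()):,}")  # total: $114,965
```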

Each Spot also includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff” or “I tried to take my robot swimming.” For that sort of thing (user error) to be covered, you’ll need to upgrade to the $12,000 Spot CARE premium service plan to cover your robot for a year as long as you don’t subject it to willful abuse, which both of those examples I just gave probably qualify as.

While we’re on the subject of robot abuse, Boston Dynamics has very sensibly devoted a substantial amount of the Spot User Guide to help new users understand how they should not be using their robot, in order to “lessen the risk of serious injury, death, or robot and other property damage.” According to the guide, some things that could cause Spot to fall include holes, cliffs, slippery surfaces (like ice and wet grass), and cords. Spot’s sensors also get confused by “transparent, mirrored, or very bright obstacles,” and the guide specifically says Spot “may crash into glass doors and windows.” Also this: “Spot cannot predict trajectories of moving objects. Do not operate Spot around moving objects such as vehicles, children, or pets.”

We should emphasize that this is all totally reasonable, and while there are certainly a lot of things to be aware of, it’s frankly astonishing that these are the only things that Boston Dynamics explicitly warns users against. Obviously, not every potentially unsafe situation or thing is described above, but the point is that Boston Dynamics is willing to say to new users, “here’s your robot, go do stuff with it” without feeling the need to hold their hand the entire time.

There’s one more thing to be aware of before you decide to buy a Spot, which is the following:

“All orders will be subject to Boston Dynamics’ Terms and Conditions of Sale which require the beneficial use of its robots.”

Specifically, this appears to mean that you aren’t allowed to (or supposed to) use the robot in a way that could hurt living things, or “as a weapon, or to enable any weapon.” The conditions of sale also prohibit using the robot for “any illegal or ultra-hazardous purpose,” and there’s some stuff in there about it not being cool to use Spot for “nuclear, chemical, or biological weapons proliferation, or development of missile technology,” which seems weirdly specific.

“Once you make a technology more broadly available, the story of it starts slipping out of your hands. Our hope is that ahead of time we’re able to clearly articulate the beneficial uses of the robot in environments where we think the robot has a high potential to reduce the risk to people, rather than potentially causing harm.”
—Michael Perry, Boston Dynamics

I’m very glad that Boston Dynamics is being so upfront about requiring that Spot be used beneficially. However, it does put the company in a somewhat challenging position now that these robots are being sold. Boston Dynamics can (and will) perform some amount of due diligence before shipping a Spot, but ultimately, once the robots are in someone else’s hands, there’s only so much that BD can do.

Spectrum: Why is beneficial use important to Boston Dynamics?

Perry: One of the key things that we’ve highlighted many times in our license and terms of use is that we don’t want to see the robot being used in any way that inflicts physical harm on people or animals. There are philosophical reasons for that—I think all of us don’t want to see our technology used in a way that would hurt people. But also from a business perspective, robots are really terrible at conveying intention. In order for the robot to be helpful long-term, it has to be trusted as a piece of technology. So rather than looking at a robot and wondering, “is this something that could potentially hurt me,” we want people to think “this is a robot that’s here to help me.” To the extent that people associate Boston Dynamics with cutting edge robots, we think that this is an important stance for the rollout of our first commercial product. If we find out that somebody’s violated our terms of use, their warranty is invalidated, we won’t repair their product, and we have a licensing timeout that would prevent them from accessing their robot after that timeout has expired. It’s a remediation path, but we do think that it’s important to at least provide that as something that helps enforce our position on use of our technology.

It’s very important to keep all of this in context: Spot is a tool. It’s got some autonomy and the appearance of agency, but it’s still just doing what people tell it to do, even if those things might be unsafe. If you read through the user guide, it’s clear how much of an effort Boston Dynamics is making to try to convey the importance of safety to Spot users—and ultimately, barring some unforeseen and catastrophic software or hardware issues, safety is about the users, rather than Boston Dynamics or Spot itself. I bring this up because as we start seeing more and more Spots doing things without Boston Dynamics watching over them quite so closely, accidents are likely inevitable. Spot might step on someone’s foot. It might knock someone over. If Spot was perfectly safe, it wouldn’t be useful, and we have to acknowledge that its impressive capabilities come with some risks, too.

Photo: Boston Dynamics

Each Spot includes a year’s worth of software updates and a warranty, although the standard warranty just covers “defects related to materials and workmanship” not “I drove my robot off a cliff.”

Now that Spot is on the market for real, we’re excited to see who steps up and orders one. Depending on who the potential customer is, Spot could either seem like an impossibly sophisticated piece of technology that they’d never be able to use, or a magical way of solving all of their problems overnight. In reality, it’s of course neither of those things. For the former (folks with an idea but without a lot of robotics knowledge or experience), Spot does a lot out of the box, but BD is happy to talk with people and facilitate connections with partners who might be able to integrate specific software and hardware to get Spot to do a unique task. And for the latter (who may also be folks with an idea but without a lot of robotics knowledge or experience), BD’s Perry offers a reminder that Spot is not Rosie the Robot, and says he’d be equally happy to talk about what the technology is actually capable of doing.

Looking forward a bit, we asked Perry whether Spot’s capabilities mean that customers are starting to think beyond using robots to simply replace humans, and are instead looking at them as a way of enabling a completely different way of getting things done.

Spectrum: Do customers interested in Spot tend to think of it as a way of replacing humans at a specific task, or as a system that can do things that humans aren’t able to do?

Perry: There are what I imagine as three levels of people understanding the robot applications. Right now, we’re at level one, where you take a person out of this dangerous, dull job, and put a robot in. That’s the entry point. The second level is, using the robot, can we increase the production of that task? For example, take site documentation on a construction site—right now, people do 360 image capture of a site maybe once a week, and they might do a laser scan of the site once per project. At the second level, the question is, what if you were able to get that data collection every day, or multiple times a day? What kinds of benefits would that add to your process? To continue the construction example, the third level would be, how could we completely redesign this space now that we know that this type of automation is available? To take one example, there are some things that we cannot physically build because it’s too unsafe for people to be a part of that process, but if you were to apply robotics to that process, then you could potentially open up a huge envelope of design that has been inaccessible to people.

To order a Spot of your very own, visit shop.bostondynamics.com.

A version of this post appears in the August 2020 print issue as “$74,500 Will Fetch You a Spot.”


#437828 How Roboticists (and Robots) Have Been ...

A few weeks ago, we asked folks on Twitter, Facebook, and LinkedIn to share photos and videos showing how they’ve been adapting to the closures of research labs, classrooms, and businesses by taking their robots home with them to continue their work as best they can. We got dozens of responses (more than we could possibly include in just one post!), but here are 15 that we thought were particularly creative or amusing.

And if any of these pictures and videos inspire you to share your own story, please email us (automaton@ieee.org) with a picture or video and a brief description about how you and your robot from work have been making things happen in your home instead.

Kurt Leucht (NASA Kennedy Space Center)

“During these strange and trying times of the current global pandemic, everyone seems to be trying their best to distance themselves from others while still getting their daily work accomplished. Many people also have the double duty of little ones that need to be managed in the midst of their teleworking duties. This photo series gives you just a glimpse into my new life of teleworking from home, mixed in with the tasks of trying to handle my little ones too. I hope you enjoy it.”

Photo: Kurt Leucht

“I heard a commotion from the next room. I ran into the kitchen to find this.”

Photo: Kurt Leucht

“This is the Swarmies most favorite bedtime story. Not sure why. Seems like an odd choice to me.”

Peter Schaldenbrand (Carnegie Mellon University)

“I’ve been working on a reinforcement learning model that converts an image into a series of brush stroke instructions. I was going to test the model with a beautiful, expensive robot arm, but due to the COVID-19 pandemic, I have not been able to access the laboratory where it resides. I have now been using a lower end robot arm to test the painting model in my bedroom. I have sacrificed machine accuracy/precision for the convenience of getting to watch the arm paint from my bed in the shadow of my clothing rack!”

Photos: Peter Schaldenbrand

Colin Angle (iRobot)

iRobot CEO Colin Angle has been hunkered down in the “iRobot North Shore home command center,” which is probably the cleanest command center ever thanks to his army of Roombas: Beastie, Beauty, Rosie, Roswell, and Bilbo.

Photo: Colin Angle

Vivian Chu (Diligent Robotics)

From Diligent Robotics CEO Andrea Thomaz: “This is how a roboticist works from home! Diligent CTO, Vivian Chu, mans the e-stop while her engineering team runs Moxi experiments remotely from across town and even across the country!”

Video: Diligent Robotics

Raffaello Bonghi (rnext.it)

Raffaello’s robot, Panther, looks perfectly happy to be playing soccer in his living room.

Photo: Raffaello Bonghi

Kod*lab (University of Pennsylvania)

“Another Friday Nuts n Bolts Meeting on Zoom…”

Image: Kodlab

Robin Jonsson (robot choreographer)

“I’ve been doing a school project in which students make up dance moves and then send me a video with all of them. I then teach the moves to my robot, Alex, film Alex dancing, and send the videos back to them. This became a great success, and more schools will join. The kids got really into watching the robot perform their moves and became really interested in robots. They want to meet Alex the robot live, which will likely happen in the fall.”

Photo: Robin Jonsson

Gabrielle Conard (mechanical engineering undergrad at Lafayette College)

“While the pandemic might have forced college campuses to close and the community to keep their distance from each other, it did not put a stop to learning and research. Working from their respective homes, junior Gabrielle Conard and mechanical engineering professor Alexander Brown from Lafayette College investigated methods of incorporating active compliance in a low-cost quadruped robot. They are continuing to work remotely on this project through Lafayette’s summer research program.”

Image: Gabrielle Conard

Taylor Veltrop (SoftBank Robotics)

“After a few weeks of isolation in the corona/covid quarantine lock down we started dancing with our robots. Mathieu’s 6th birthday was coming up, and it all just came together.”

Video: Taylor Veltrop

Ross Kessler (Exyn Technologies)

“Quarantine, Day 8: the humans have accepted me as one of their own. I’ve blended seamlessly into their #socialdistancing routines. Even made a furry friend”

Photo: Ross Kessler

Yeah, something a bit sinister is definitely going on at Exyn…

Video: Exyn Technologies

Michael Sobrepera (University of Pennsylvania GRASP Lab)

Predictably, Michael’s cat is more interested in the bag that the robot came in than the robot itself (see if you can spot the cat below). Michael tells us that “the robot is designed to help with tele-rehabilitation, focused on kids with CP, so it has been taken to hospitals for demos [hence the cool bag]. It also travels for outreach events and the like. Lately, I’ve been exploring telepresence for COVID.”

Photo: Michael Sobrepera

Jan Kędzierski (EMYS)

“In China, a lot of people cannot speak English, even the youngest generation of parents. Thanks to Emys, kids stayed in touch with the English language in their homes even when they couldn’t attend schools and extra English classes. They had a lot of fun with their native-English-speaking friend, available and ready to play every day.”

Image: Jan Kędzierski

Simon Whitmell (Quanser)

“Simon, a Quanser R&D engineer, is working on low-overhead image processing and line following for the QBot 2e mobile ground robot, with some added challenges due to extra traffic. LEGO engineering by his son, Charles.”

Photo: Simon Whitmell

Robot Design & Experimentation Course (Carnegie Mellon University)

Aaron Johnson’s bioinspired robot design course at CMU had to go full remote, which was a challenge when the course is kind of all about designing and building a robot as part of a team. “I expected some of the teams to drastically alter their project (e.g. go all simulation),” Aaron told us, “but none of them did. We managed to keep all of the projects more or less as planned. We accomplished this by drop-shipping parts to students, buying some simple tools (soldering irons, etc.), and having me 3D print parts and mail them.” Each team even managed to put together their final videos from their remote locations; we’ve posted one below, but the entire playlist is here.

Video: Xianyi Cheng

Karen Tatarian (SoftBank Robotics)

Karen, who’s both a researcher at Softbank and a PhD student at Sorbonne University, wrote an entire essay about what an average day is like when you’re quarantined with Pepper.

Photo: Karen Tatarian

A Quarantined Day With Pepper, by Karen Tatarian

It is quite common for me to lose my phone somewhere inside my apartment. But it is not that common for me to turn around and ask my robot if it has seen it. So when I found myself doing that, I laughed and it dawned on me that I treated my robot as my quarantine companion (despite the fact that it could not provide me with the answer I needed).

It was probably around day 40 of a completely isolated quarantine here in France when that happened. A little background about me: I am a robotics researcher at SoftBank Robotics Europe and a PhD student at Sorbonne University as part of the EU-funded Marie-Curie project ANIMATAS. And here is a little sneak peek into a quarantined day with a robot.

During this confinement, I had read somewhere that the best way to deal with it is to maintain a routine. So every morning, I wake up, prepare my coffee, and turn on my robot Pepper. I start my day with a daily meeting with the team and get to work. My research is on the synthesis of multi-modal socially intelligent human-robot interaction, so my work varies between programming the robot, analyzing collected data, and reading papers and drafting one. When I am working, I often catch myself glancing at Pepper, who would be staring back at me in its animated ways. Truthfully, I enjoy that; it makes me feel less alone, as if I have a colleague with me.

Once work is done, I call my friends and family members. I sometimes use a telepresence application on Pepper that a few colleagues and I developed back in December. How does it differ from your typical phone/laptop applications? One word, really: embodiment. Telepresence, especially during these times, makes the experience for both sides a bit more realistic and intimate and, well, present.

While I can turn off the robot now that my work hours are done, I do keep it on because I enjoy its presence. Basic awareness is a default feature on Pepper that allows it to detect a human and follow them with its gaze and rotating base. So whether I am cooking or working out, I always have my robot watching over my shoulder and being a good companion. I also have my email and messages synced on the robot, so I get an enjoyable notification from Pepper. I found that to be a pretty cool way to be notified without it interrupting whatever you are doing on your laptop or phone. Finally, once the day is over, it’s time for both of us to get some rest.

After 60 days of total confinement, alone and away from those I love, and with a pandemic right at my door, I am glad I had the company of my robot. I hope one day a greater audience can share my experience. And I really really hope one day Pepper will be able to find my phone for me, but until then, stay on the lookout for some cool features! But I am curious to know, if you had a robot at home, what application would you have developed on it?
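For readers tempted by Karen’s closing question: the “basic awareness” behavior she describes is exposed through Pepper’s NAOqi SDK, so switching it on might look roughly like the sketch below. The hostname is a placeholder, and the method names should be verified against your robot’s NAOqi version.

```python
import qi

session = qi.Session()
session.connect("tcp://pepper.local:9559")   # placeholder address

awareness = session.service("ALBasicAwareness")
awareness.setTrackingMode("BodyRotation")    # follow the person with gaze and base rotation
awareness.setEngagementMode("FullyEngaged")  # stay focused on the tracked person
awareness.startAwareness()                   # Pepper now watches over your shoulder
```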

Again, our sincere thanks to everyone who shared these little snapshots of their lives with us, and we’re hoping to be able to share more soon.


#437820 In-Shoe Sensors and Mobile Robots Keep ...


Researchers at Stevens Institute of Technology are leveraging some of the newest mechanical and robotic technologies to help some of our oldest populations stay healthy, active, and independent.

Yi Guo, professor of electrical and computer engineering and director of the Robotics and Automation Laboratory, and Damiano Zanotto, assistant professor of mechanical engineering and director of the Wearable Robotic Systems Laboratory, are collaborating with Ashley Lytle, assistant professor in Stevens’ College of Arts and Letters, and Ashwini K. Rao of Columbia University Medical Center, to combine an assistive mobile robot companion with wearable in-shoe sensors in a system designed to help elderly individuals maintain the balance and motion they need to thrive.

“Balance and motion can be significant issues for this population, and if elderly people fall and experience an injury, they are less likely to stay fit and exercise,” Guo said. “As a consequence, their level of fitness and performance decreases. Our mobile robot companion can help decrease the chances of falling and contribute to a healthy lifestyle by keeping their walking function at a good level.”

The mobile robots are designed to lead walking sessions and, using the in-shoe sensors, monitor the user’s gait, flag issues, and adjust the exercise speed and pace. The initiative is part of a four-year National Science Foundation research project.

“For the first time, we’re integrating our wearable sensing technology with an autonomous mobile robot,” said Zanotto, who worked with elderly people at Columbia University Medical Center for three years before coming to Stevens in 2016. “It’s exciting to be combining these different areas of expertise to leverage the strong points of wearable sensing technology, such as accurately capturing human movement, with the advantages of mobile robotics, such as much larger computational powers.”

The team is developing algorithms that fuse real-time data from smart, unobtrusive, in-shoe sensors and advanced on-board sensors to inform the robot’s navigation protocols and control the way the robot interacts with elderly individuals. It’s a promising way to assist seniors in safely doing walking exercises and maintaining their quality of life.
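As a rough illustration of the control idea (not the team’s actual algorithm), the core loop might compare gait features estimated from the in-shoe sensors against the user’s baseline and modulate the robot’s lead speed accordingly. Every name, gain, and limit below is invented:

```python
def adjust_lead_speed(current_speed, cadence, baseline_cadence,
                      gain=0.3, v_min=0.3, v_max=1.2):
    """Slow the robot when the user's cadence drops below their baseline,
    and speed it up as the user recovers; clamp to a safe range (m/s)."""
    error = (cadence - baseline_cadence) / baseline_cadence
    return max(v_min, min(v_max, current_speed * (1.0 + gain * error)))
```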

Bringing the benefits of the lab to life

Guo and Zanotto are working with Lytle, an expert in social and health psychology, to implement a social connectivity capability and make the bi-directional interaction between human and robot even more intuitive, engaging, and meaningful for seniors.

“Especially during COVID, it’s important for elderly people living on their own to connect socially with family and friends,” Zanotto said, “and the robot companion will also offer teleconferencing tools to provide that interaction in an intuitive and transparent way.”

“We want to use the robot for social connectedness, perhaps integrating it with a conversation agent such as Alexa,” Guo added. “The goal is to make it a companion robot that can sense, for example, that you are cooking, or you’re in the living room, and help with things you would do there.”

It’s a powerful example of how abstract concepts can have meaningful real-life benefits.

“As engineers, we tend to work in the lab, trying to optimize our algorithms and devices and technologies,” Zanotto noted, “but at the end of the day, what we do has limited value unless it has impact on real life. It’s fascinating to see how the devices and technologies we’re developing in the lab can be applied to make a difference for real people.”

Maintaining balance in a global pandemic

Although COVID-19 has delayed the planned testing at a senior center in New York City, it has not stopped the team’s progress.

“Although we can’t test on elderly populations yet, our students are still testing in the lab,” Guo said. “This summer and fall, for the first time, the students validated the system’s real-time ability to monitor and assess the dynamic margin of stability during walking—in other words, to evaluate whether the person following the robot is walking normally or has a risk of falling. They’re also designing parameters for the robot to give early warnings and feedback that help the human subjects correct posture and gait issues while walking.”

Those warnings would be literally underfoot, as the in-shoe sensors would pulse like a vibrating cell phone to deliver immediate directional information to the subject.

“We’re not the first to use this vibrotactile stimuli technology, but this application is new,” Zanotto said.
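For the technically curious, the “dynamic margin of stability” mentioned above is a standard biomechanics quantity: the distance between the edge of the base of support and Hof’s extrapolated center of mass, XcoM = x + v/ω0, where ω0 = sqrt(g/l). A minimal sketch of the computation (the sensor processing that produces these inputs is the hard part, and is omitted):

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """Hof-style margin of stability along one direction, in meters.
    Positive: the extrapolated center of mass stays inside the base of
    support. Near zero or negative: elevated fall risk, the kind of
    event that would trigger a directional vibrotactile warning."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom
```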

So far, the team has published papers in top robotics publication venues including IEEE Transactions on Neural Systems and Rehabilitation Engineering and the 2020 IEEE International Conference on Robotics and Automation (ICRA). It’s a big step toward realizing the synergies of bringing the technical expertise of engineers to bear on the clinical focus on biometrics—and the real lives of seniors everywhere.
