Tag Archives: head

#438012 Video Friday: These Robots Have Made 1 ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
Let us know if you have suggestions for next week, and enjoy today's videos.

We're proud to announce Starship Delivery Robots have now completed 1,000,000 autonomous deliveries around the world. We were unsure where the one millionth delivery was going to take place, as there are around 15-20 service areas open globally, all with robots doing deliveries every minute. In the end it took place in Bowling Green, Ohio, to a student named Annika Keeton, a freshman studying pre-health biology at BGSU. Annika is now part of Starship’s history!

[ Starship ]

I adore this little DIY walking robot. With modular feet and little dials that let you easily adjust the walking parameters, it's an affordable kit that's way more nuanced than most.

It's called Bakiwi, and it costs €95. A squee cover made from feathers or fur is an extra €17. Here's a more serious look at what it can do:

[ Bakiwi ]

Thanks Oswald!

Savva Morozov, an AeroAstro junior, works on autonomous navigation for the MIT mini cheetah robot and reflects on the value of a crowded Infinite Corridor.

[ MIT ]

The world's most advanced haptic feedback gloves just got a huge upgrade! HaptX Gloves DK2 achieves a level of realism that other haptic devices can't match. Whether you’re training your workforce, designing a new product, or controlling robots from a distance, HaptX Gloves make it feel real.

They're the only gloves with true-contact haptics, with patented technology that displaces your skin the same way a real object would, and 133 points of tactile feedback per hand for full palm and fingertip coverage. HaptX Gloves DK2 feature the industry's most powerful force feedback, ~2X the strength of other force feedback gloves. They're also the most accurate motion tracking gloves, with 30 tracked degrees of freedom, sub-millimeter precision, no perceivable latency, and no occlusion.

[ HaptX ]

Yardroid is an outdoor robot “guided by computer vision and artificial intelligence” that seems like it can do almost everything.

That's a lot of autonomous capabilities, but so far we've only seen the video. So, best not to get too excited until we know more about how it works.

[ Yardroid ]

Thanks Dan!

Since, as far as we know, Pepper can't spread COVID, it had a busy year.

I somehow missed seeing that chimpanzee magic show, but here it is:

[ Simon Pierro ] via [ SoftBank Robotics ]

In spite of the pandemic, Professor Hod Lipson’s Robotics Studio persevered and even thrived—learning to work on global teams, to develop protocols for sharing blueprints and code, and to test, evaluate, and refine their designs remotely. Equipped with a 3D printer and a kit of electronics prototyping equipment, our students engineered bipedal robots that were conceptualized, fabricated, programmed, and endlessly iterated around the globe in bedrooms, kitchens, backyards, and any other makeshift laboratory you can imagine.

[ Hod Lipson ]

Thanks Fan!

We all know how much quadrupeds love ice!

[ Ghost Robotics ]

We took the opportunity of the last storm to put the Warthog in the snow of Université Laval. Enjoy!

[ Norlab ]

They've got a long way to go, but autonomous indoor firefighting drones seem like a fantastic idea.

[ CTU ]

Individual manipulators are limited by their vertical total load capacity. This places a fundamental limit on the weight of loads that a single manipulator can move. Cooperative manipulation with two arms has the potential to increase the net weight capacity of the overall system. However, it is critical that proper load sharing takes place between the two arms. In this work, we outline a method that utilizes mechanical intelligence in the form of a whiffletree.

And your word of the day is whiffletree, which is “a mechanism to distribute force evenly through linkages.”
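
If you're curious about the statics behind that definition, here is a minimal sketch (our own illustration, not the DART Lab code): hang the shared load from a pivot on a bar that connects the two arms, and the moment balance about that pivot sets how much of the load each arm carries.

```python
# Minimal statics sketch of a whiffletree (illustrative only, not DART Lab code):
# a bar connects the two manipulators, and the shared load hangs from a pivot
# located somewhere along that bar. Balancing moments about the pivot shows how
# the pivot position divides the load between the two arms.

def whiffletree_shares(load_n: float, a: float, b: float) -> tuple[float, float]:
    """Return the force (N) carried by each arm when the load hangs from a
    pivot that sits `a` meters from arm 1's attachment and `b` meters from
    arm 2's. Moment balance gives F1*a = F2*b with F1 + F2 = load."""
    f1 = load_n * b / (a + b)
    f2 = load_n * a / (a + b)
    return f1, f2

# Centered pivot: an even 50/50 split between the two arms.
print(whiffletree_shares(200.0, 0.25, 0.25))  # (100.0, 100.0)
# Pivot shifted toward arm 1: that arm takes proportionally more of the load.
print(whiffletree_shares(200.0, 0.10, 0.30))  # (150.0, 50.0)
```

Because the bar enforces that balance mechanically, the load stays properly shared even if the two arms drift slightly out of sync, which is exactly the kind of "mechanical intelligence" the video is about.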

[ DART Lab ]

Thanks Raymond!

Some highlights of robotic projects at FZI in 2020, all using ROS.

[ FZI ]

Thanks Fan!

iRobot CEO Colin Angle threatens my job by sharing some cool robots.

[ iRobot ]

A fascinating new talk from Henry Evans on robotic caregivers.

[ HRL ]

The ANA Avatar XPRIZE semifinals selection submission for Team AVATRINA. The setting is a mock clinic, with the patient sitting in a wheelchair and a nurse having completed an initial intake. The avatar enters the room, controlled by an operator (the doctor). A rolling tray table with medical supplies (stethoscope, pulse oximeter, digital thermometer, oxygen mask, oxygen tube) is by the patient’s side. The demo shows head tracking, stereo vision, fine manipulation, bimanual manipulation, safe impedance control, and navigation.

[ Team AVATRINA ]

This five-year-old talk from Mikell Taylor, who wrote for us a while back and is now at Amazon Robotics, is titled “Nobody Cares About Your Robot.” For better or worse, it really doesn't sound like it was written five years ago.

Robotics for the consumer market – Mikell Taylor from Scott Handsaker on Vimeo.

[ Mikell Taylor ]

Fall River Community Media presents this wonderful guy talking about his love of antique robot toys.

If you enjoy this kind of slow media, Fall River also has weekly Hot Dogs Cool Cats adoption profiles that are super relaxing to watch.

[ YouTube ]

Posted in Human Robots

#437957 Meet Assembloids, Mini Human Brains With ...

It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.

But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob was assembled from three lab-grown chunks of human tissue: a mini-brain, mini-spinal cord, and mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.

The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.

In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar is creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids: free-floating brain circuits that combine brain tissue with an external output.

The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.

It’s the nexus toward more dexterous brain-machine interfaces, and a model to understand when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.

From Organoids to Assembloids
The miniature snippet of the human nervous system has been a long time in the making.

It all started in 2014, when Dr. Madeleine Lancaster, then a post-doc at Stanford, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Revolutionarily different from standard cell cultures, which grind up brain tissue to reconstruct as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.

With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than a pea—could harbor the potential to develop a sense of awareness if further matured and with external input and output.

Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.

Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.

Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.

In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue engineering crème-de-la-crème “assembloids,” a portmanteau between “assemble” and “organoids.”

“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”

And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?

Assembloids, Assemble
The new study is the fruition of that idea.

The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state, called iPSCs. These cells have long been touted as a breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.

Using two separate cocktails, the team then generated mini-brains and mini-spinal cords using these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, under the microscope using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they’re capable of transmitting information.

“We made the parts,” said Pasca, “but they knew how to put themselves together.”

Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly-shaped spherical balls.

Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.

“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”

When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.

Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.

The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those inputs? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.

To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.

Image Credit: christitzeimaging.com / Shutterstock.com

Posted in Human Robots

#437929 These Were Our Favorite Tech Stories ...

This time last year we were commemorating the end of a decade and looking ahead to the next one. Enter the year that felt like a decade all by itself: 2020. News written in January, the before-times, feels hopelessly out of touch with all that came after. Stories published in the early days of the pandemic are, for the most part, similarly naive.

The year’s news cycle was swift and brutal, ping-ponging from pandemic to extreme social and political tension, whipsawing economies, and natural disasters. Hope. Despair. Loneliness. Grief. Grit. More hope. Another lockdown. It’s been a hell of a year.

Though 2020 was dominated by big, hairy societal change, science and technology took significant steps forward. Researchers singularly focused on the pandemic and collaborated on solutions to a degree never before seen. New technologies converged to deliver vaccines in record time. The dark side of tech, from biased algorithms to the threat of omnipresent surveillance and corporate control of artificial intelligence, continued to rear its head.

Meanwhile, AI showed uncanny command of language, joined Reddit threads, and made inroads into some of science’s grandest challenges. Mars rockets flew for the first time, and a private company delivered astronauts to the International Space Station. Deprived of night life, concerts, and festivals, millions traveled to virtual worlds instead. Anonymous jet packs flew over LA. Mysterious monoliths appeared and disappeared worldwide.

It was all, you know, very 2020. For this year’s (in-no-way-all-encompassing) list of fascinating stories in tech and science, we tried to select those that weren’t totally dated by the news, but rose above it in some way. So, without further ado: This year’s picks.

How Science Beat the Virus
Ed Yong | The Atlantic
“Much like famous initiatives such as the Manhattan Project and the Apollo program, epidemics focus the energies of large groups of scientists. …But ‘nothing in history was even close to the level of pivoting that’s happening right now,’ Madhukar Pai of McGill University told me. … No other disease has been scrutinized so intensely, by so much combined intellect, in so brief a time.”

‘It Will Change Everything’: DeepMind’s AI Makes Gigantic Leap in Solving Protein Structures
Ewen Callaway | Nature
“In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods—yet—say scientists, but the AI will make it possible to study living things in new ways.”

OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

The Dark Side of Big Tech’s Funding for AI Research
Tom Simonite | Wired
“Timnit Gebru’s exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. ‘That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.’”

We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

The Secretive Company That Might End Privacy as We Know It
Kashmir Hill | The New York Times
“Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”

Wrongfully Accused by an Algorithm
Kashmir Hill | The New York Times
“Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.”

Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.
Will Douglas Heaven | MIT Technology Review
“A number of studies have shown that these tools perpetuate systemic racism, and yet we still know very little about how they work, who is using them, and for what purpose. All of this needs to change before a proper reckoning can take place. Luckily, the tide may be turning.”

The Panopticon Is Already Here
Ross Andersen | The Atlantic
“Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Jinping] also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.”

The Case For Cities That Aren’t Dystopian Surveillance States
Cory Doctorow | The Guardian
“Imagine a human-centered smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. …What it doesn’t know is anything about individuals in the city.”

The Modern World Has Finally Become Too Complex for Any of Us to Understand
Tim Maughan | OneZero
“One of the dominant themes of the last few years is that nothing makes sense. …I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of it to technologies and autonomous actors that no one totally comprehends it all.”

The Conscience of Silicon Valley
Zach Baron | GQ
“What I really hoped to do, I said, was to talk about the future and how to live in it. This year feels like a crossroads; I do not need to explain what I mean by this. …I want to destroy my computer, through which I now work and ‘have drinks’ and stare at blurry simulations of my parents sometimes; I want to kneel down and pray to it like a god. I want someone—I want Jaron Lanier—to tell me where we’re going, and whether it’s going to be okay when we get there. Lanier just nodded. All right, then.”

Yes to Tech Optimism. And Pessimism.
Shira Ovide | The New York Times
“Technology is not something that exists in a bubble; it is a phenomenon that changes how we live or how our world works in ways that help and hurt. That calls for more humility and bridges across the optimism-pessimism divide from people who make technology, those of us who write about it, government officials and the public. We need to think on the bright side. And we need to consider the horribles.”

How Afrofuturism Can Help the World Mend
C. Brandon Ogbunu | Wired
“…[W. E. B. DuBois’] ‘The Comet’ helped lay the foundation for a paradigm known as Afrofuturism. A century later, as a comet carrying disease and social unrest has upended the world, Afrofuturism may be more relevant than ever. Its vision can help guide us out of the rubble, and help us to consider universes of better alternatives.”

Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

Can Genetic Engineering Bring Back the American Chestnut?
Gabriel Popkin | The New York Times Magazine
“The geneticists’ research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we’ve already assumed: engineers of everything, including nature.”

At the Limits of Thought
David C. Krakauer | Aeon
“A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory, and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers.”

Is the Internet Conscious? If It Were, How Would We Know?
Meghan O’Gieblyn | Wired
“Does the internet behave like a creature with an internal life? Does it manifest the fruits of consciousness? There are certainly moments when it seems to. Google can anticipate what you’re going to type before you fully articulate it to yourself. Facebook ads can intuit that a woman is pregnant before she tells her family and friends. It is easy, in such moments, to conclude that you’re in the presence of another mind—though given the human tendency to anthropomorphize, we should be wary of quick conclusions.”

The Internet Is an Amnesia Machine
Simon Pitt | OneZero
“There was a time when I didn’t know what a Baby Yoda was. Then there was a time I couldn’t go online without reading about Baby Yoda. And now, Baby Yoda is a distant, shrugging memory. Soon there will be a generation of people who missed the whole thing and for whom Baby Yoda is as meaningless as it was for me a year ago.”

Digital Pregnancy Tests Are Almost as Powerful as the Original IBM PC
Tom Warren | The Verge
“Each test, which costs less than $5, includes a processor, RAM, a button cell battery, and a tiny LCD screen to display the result. …Foone speculates that this device is ‘probably faster at number crunching and basic I/O than the CPU used in the original IBM PC.’ IBM’s original PC was based on Intel’s 8088 microprocessor, an 8-bit chip that operated at 5Mhz. The difference here is that this is a pregnancy test you pee on and then throw away.”

The Party Goes on in Massive Online Worlds
Cecilia D’Anastasio | Wired
“We’re more stand-outside types than the types to cast a flashy glamour spell and chat up the nearest cat girl. But, hey, it’s Final Fantasy XIV online, and where my body sat in New York, the epicenter of America’s Covid-19 outbreak, there certainly weren’t any parties.”

The Facebook Groups Where People Pretend the Pandemic Isn’t Happening
Kaitlyn Tiffany | The Atlantic
“Losing track of a friend in a packed bar or screaming to be heard over a live band is not something that’s happening much in the real world at the moment, but it happens all the time in the 2,100-person Facebook group ‘a group where we all pretend we’re in the same venue.’ So does losing shoes and Juul pods, and shouting matches over which bands are the saddest, and therefore the greatest.”

Did You Fly a Jetpack Over Los Angeles This Weekend? Because the FBI Is Looking for You
Tom McKay | Gizmodo
“Did you fly a jetpack over Los Angeles at approximately 3,000 feet on Sunday? Some kind of tiny helicopter? Maybe a lawn chair with balloons tied to it? If the answer to any of the above questions is ‘yes,’ you should probably lay low for a while (by which I mean cool it on the single-occupant flying machine). That’s because passing airline pilots spotted you, and now it’s this whole thing with the FBI and the Federal Aviation Administration, both of which are investigating.”

Image Credit: Thomas Kinto / Unsplash

Posted in Human Robots

#437857 Video Friday: Robotic Third Hand Helps ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 1-15, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

We are seeing some exciting advances in the development of supernumerary robotic limbs. But one thing about this technology remains a major challenge: How do you control the extra limb if your own hands are busy—say, if you’re carrying a package? MIT researchers at Professor Harry Asada’s lab have an idea. They are using subtle finger movements in sensorized gloves to control the supernumerary limb. The results are promising, and they’ve demonstrated a waist-mounted arm with a qb SoftHand that can help you with doors, elevators, and even handshakes.

[ Paper ]

ROBOPANDA

Fluid actuated soft robots, or fluidic elastomer actuators, have shown great potential in robotic applications where large compliance and safe interaction are dominant concerns. They have been widely studied in wearable robotics, prosthetics, and rehabilitations in recent years. However, such soft robots and actuators are tethered to a bulky pump and controlled by various valves, limiting their applications to a small confined space. In this study, we report a new and effective approach to fluidic power actuation that is untethered, easy to design, fabricate, control, and allows various modes of actuation. In the proposed approach, a sealed elastic tube filled with fluid (gas or liquid) is segmented by adaptors. When twisting a segment, two major effects could be observed: (1) the twisted segment exhibits a contraction force and (2) other segments inflate or deform according to their constraint patterns.

[ Paper ]

And now: “Magnetic cilia carpets.”

[ ETH Zurich ]

To adhere to government recommendations while maintaining requirements for social distancing during the COVID-19 pandemic, Yaskawa Motoman is now utilizing an HC10DT collaborative robot to take individual employee temperatures. Named “Covie,” the robotic solution and its software were designed and fabricated through a combined effort by Yaskawa Motoman’s Technology Advancement Team (TAT) and Product Solutions Group (PSG), as well as a group of robotics students from the University of Dayton.

They should have programmed it to nod if your temperature was normal, and smack you upside the head while yelling “GO HOME” if it wasn't.

[ Yaskawa ]

Driving slowly on pre-defined routes, ZMP’s RakuRo autonomous vehicle helps people with mobility challenges enjoy cherry blossoms in Japan.

RakuRo costs about US $1,000 per month to rent, but ZMP suggests that facilities or groups of ~10 people could get together and share one, which makes the cost much more reasonable.

[ ZMP ]

Jessy Grizzle from the Dynamic Legged Locomotion Lab at the University of Michigan writes:

Our lab closed on March 20, 2020 under the State of Michigan’s “Stay Home, Stay Safe” order. For a 24-hour period, it seemed that our labs would be “sanitized” during our absence. Since we had no idea what that meant, we decided that Cassie Blue needed to “Stay Home, Stay Safe” as well. We loaded up a very expensive robot and took her off campus. On May 26, we were allowed to re-open our laboratory. After thoroughly cleaning the lab, disinfecting tools and surfaces, developing and getting approval for new safe operation procedures, we then re-organized our work areas to respect social distancing requirements and brought Cassie back to the laboratory.

During the roughly two months we were working remotely, the lab’s members got a lot done. Papers were written, dissertation proposals were composed, and plans for a new course, ROB 101, Computational Linear Algebra, were developed with colleagues. In addition, one of us (Yukai Gong) found the lockdown to his liking! He needed the long period of quiet to work through some new ideas for how to control 3D bipedal robots.

[ Michigan Robotics ]

Thanks Jessy and Bruce!

You can tell that this video of how Pepper has been useful during COVID-19 is not focused on the United States, since it refers to the pandemic in the past tense.

[ SoftBank Robotics ]

NASA’s water-seeking robotic Moon rover just booked a ride to the Moon’s South Pole. Astrobotic of Pittsburgh, Pennsylvania, has been selected to deliver the Volatiles Investigating Polar Exploration Rover, or VIPER, to the Moon in 2023.

[ NASA ]

This could be the most impressive robotic gripper demo I have ever seen.

[ Soft Robotics ]

Whiz, an autonomous vacuum sweeper, innovates the cleaning industry by automating tedious tasks for your team. Easy to train, easy to use, Whiz works with your staff to deliver a high-quality clean while increasing efficiency and productivity.

[ SoftBank Robotics ]

About 40 seconds into this video, a robot briefly chases a goose.

[ Ghost Robotics ]

SwarmRail is a new concept for rail-guided omnidirectional mobile robot systems. It aims for a highly flexible production process in the factory of the future by opening up the available work space from above. This means that transport and manipulation tasks can be carried out by floor- and ceiling-bound robot systems. The special feature of the system is the combination of omnidirectionally mobile units with a grid-shaped rail network, which is characterized by passive crossings and a continuous gap between the running surfaces of the rails. Through this gap, a manipulator operating below the rail can be connected to a mobile unit traveling on the rail.

[ DLR RMC ]

RightHand Robotics (RHR), a leader in providing robotic piece-picking solutions, is partnered with PALTAC Corporation, Japan’s largest wholesaler of consumer packaged goods. The collaboration introduces RightHand’s newest piece-picking solution to the Japanese market, with multiple workstations installed in PALTAC’s newest facility, RDC Saitama, which opened in 2019 in Sugito, Saitama Prefecture, Japan.

[ RightHand Robotics ]

From ICRA 2020, a debate on the “Future of Robotics Research,” addressing such issues as “robotics research is over-reliant on benchmark datasets and simulation” and “robots designed for personal or household use have failed because of fundamental misunderstandings of Human-Robot Interaction (HRI).”

[ Robotics Debates ]

MassRobotics has a series of interviews where robotics celebrities are interviewed by high school students. The students are perhaps a little awkward (remember being in high school?), but it's honest and the questions are interesting. The first two interviews are with Laurie Leshin, who worked on space robots at NASA and is now President of Worcester Polytechnic Institute, and Colin Angle, founder and CEO of iRobot.

[ MassRobotics ]

Thanks Andrew!

In this episode of the Voices from DARPA podcast, Dr. Timothy Chung, a program manager since 2016 in the agency’s Tactical Technology Office, delves into his robotics and autonomous technology programs – the Subterranean (SubT) Challenge and OFFensive Swarm-Enabled Tactics (OFFSET). From robot soccer to live-fly experimentation programs involving dozens of unmanned aircraft systems (UASs), he explains how he aims to assist humans heading into unknown environments via advances in collaborative autonomy and robotics.

[ DARPA ]

Posted in Human Robots

#437783 Ex-Googler’s Startup Comes Out of ...

Over the last 10 years, the PR2 has helped roboticists make an enormous amount of progress in mobile manipulation over a relatively short time. I mean, it’s been a decade already, but still—robots are hard, and giving a bunch of smart people access to a capable platform where they didn’t have to worry about hardware and could instead focus on doing interesting and useful things helped to establish a precedent for robotics research going forward.

Unfortunately, not everyone can afford an enormous US $400,000 robot, and even if they could, PR2s are getting very close to the end of their lives. There are other mobile manipulators out there taking the place of the PR2, but so far, size and cost have largely restricted them to research labs. Lots of good research is being done, but it’s getting to the point where folks want to take the next step: making mobile manipulators real-world useful.

Today, a company called Hello Robot is announcing a new mobile manipulator called the Stretch RE1. With offices in the San Francisco Bay Area and in Atlanta, Ga., Hello Robot is led by Aaron Edsinger and Charlie Kemp, and by combining decades of experience in industry and academia they’ve managed to come up with a robot that’s small, lightweight, capable, and affordable, all at the same time. For now, it’s a research platform, but eventually, its creators hope that it will be able to come into our homes and take care of us when we need it to.

A fresh look at mobile manipulators
To understand the concept behind Stretch, it’s worth taking a brief look back at what Edsinger and Kemp have been up to for the past 10 years. Edsinger co-founded Meka Robotics in 2007, which built expensive, high performance humanoid arms, torsos, and heads for the research market. Meka was notable for being the first robotics company (as far as we know) to sell robot arms that used series elastic actuators, and the company worked extensively with Georgia Tech researchers. In 2011, Edsinger was one of the co-founders of Redwood Robotics (along with folks from SRI and Willow Garage), which was going to develop some kind of secret and amazing new robot arm before Google swallowed it in late 2013. At the same time, Google also acquired Meka and a bunch of other robotics companies, and Edsinger ended up at Google as one of the directors of its robotics program, until he left to co-found Hello Robot in 2017.

Meanwhile, since 2007 Kemp has been a robotics professor at Georgia Tech, where he runs the Healthcare Robotics Lab. Kemp’s lab was one of the 11 PR2 beta sites, giving him early experience with a ginormous mobile manipulator. Much of the research that Kemp has spent the last decade on involves robots providing assistance to untrained users, often through direct physical contact, and frequently either in their own homes or in a home environment. We should mention that the Georgia Tech PR2 is still going, most recently doing some clever material classification work in a paper for IROS later this year.

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says that, although Stretch is currently a research platform, he hopes to see the robot deployed in home environments, adding that the “impact we want to have is through robots that are helpful to people in society.”

So with all that in mind, where’d Hello Robot come from? As it turns out, both Edsinger and Kemp were in Rodney Brooks’ group at MIT, so it’s perhaps not surprising that they share some of the same philosophies about what robots should be and what they should be used for. After collaborating on a variety of projects over the years, in 2017 Edsinger was thinking about his next step after Google when Kemp stopped by to show off some video of a new robot prototype that he’d been working on—the prototype for Stretch. “As soon as I saw it, I knew that was exactly the kind of thing I wanted to be working on,” Edsinger told us. “I’d become frustrated with the complexity of the robots being built to do manipulation in home environments and around people, and it solved a lot of problems in an elegant way.”

For Kemp, Stretch is an attempt to get everything he’s been teaching his robots out of his lab at Georgia Tech and into the world where it can actually be helpful to people. “Right from the beginning, we were trying to take our robots out to real homes and interact with real people,” says Kemp. Georgia Tech’s PR2, for example, worked extensively with Henry and Jane Evans, helping Henry (a quadriplegic) regain some of the bodily autonomy he had lost. With the assistance of the PR2, Henry was able to keep himself comfortable for hours without needing a human caregiver to be constantly with him. “I felt like I was making a commitment in some ways to some of the people I was working with,” Kemp told us. “But 10 years later, I was like, where are these things? I found that incredibly frustrating. Stretch is an effort to try to push things forward.”

A robot you can put in the backseat of a car
One way to put Stretch in context is to think of it almost as a reaction to the kitchen sink philosophy of the PR2. Where the PR2 was designed to be all the robot anyone could ever need (plus plenty of robot that nobody really needed) embodied in a piece of hardware that weighs 225 kilograms and cost nearly half a million dollars, Stretch is completely focused on being just the robot that is actually necessary in a form factor that’s both much smaller and affordable. The entire robot weighs a mere 23 kg in a footprint that’s just a 34 cm square. As you can see from the video, it’s small enough (and safe enough) that it can be moved by a child. The cost? At $17,950 apiece—or a bit less if you buy a bunch at once—Stretch costs a fraction of what other mobile manipulators sell for.

It might not seem like size or weight should be that big of an issue, but it very much is, explains Maya Cakmak, a robotics professor at the University of Washington, in Seattle. Cakmak worked with PR2 and Henry Evans when she was at Willow Garage, and currently has access to both a PR2 and a Fetch research robot. “When I think about my long term research vision, I want to deploy service robots in real homes,” Cakmak told us. Unfortunately, it’s the robots themselves that have been preventing her from doing this—both the Fetch and the PR2 are large enough that moving them anywhere requires a truck and a lift, which also limits the homes they can be used in. “For me, I felt immediately that Stretch is very different, and it makes a lot of sense,” she says. “It’s safe and lightweight, you can probably put it in the backseat of a car.” For Cakmak, Stretch’s size is the difference between being able to easily take a robot to the places she wants to do research in, and not. And cost is a factor as well, since a cheaper robot means more access for her students. “I got my refurbished PR2 for $180,000,” Cakmak says. “For that, with Stretch I could have 10!”

“I felt immediately that Stretch is very different. It’s safe and lightweight, you can probably put it in the backseat of a car. I got my refurbished PR2 for $180,000. For that, with Stretch I could have 10!”
—Maya Cakmak, University of Washington

Of course, a portable robot doesn’t do you any good if the robot itself isn’t sophisticated enough to do what you need it to do. Stretch is certainly a compromise in functionality in the interest of small size and low cost, but it’s a compromise that’s been carefully thought out, based on the experience that Edsinger has building robots and the experience that Kemp has operating robots in homes. For example, most mobile manipulators are essentially multi-degrees-of-freedom arms on mobile bases. Stretch instead leverages its wheeled base to move its arm in the horizontal plane, which (most of the time) works just as well as an extra DoF or two on the arm while saving substantially on weight and cost. Similarly, Stretch relies almost entirely on one sensor, an Intel RealSense D435i on a pan-tilt head that gives it a huge range of motion. The RealSense serves as a navigation camera, manipulation camera, a 3D mapping system, and more. It’s not going to be quite as good for a task that might involve fine manipulation, but most of the time it’s totally workable and you’re saving on cost and complexity.
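
To make that design choice concrete, here is a rough planar-kinematics sketch (our own toy model with assumed geometry and made-up parameter names, not Hello Robot's software): the wheeled base supplies position and heading in the plane, the lift supplies height, and the arm supplies a single sideways extension, which together cover a 3D workspace without extra arm joints.

```python
import numpy as np

# Toy forward kinematics for a Stretch-like robot (assumed geometry, for
# illustration only): a differential-drive base provides planar position and
# heading, a vertical lift provides height, and a telescoping arm extends out
# to one side of the base.

def gripper_position(base_xy, base_heading_rad, lift_m, extension_m,
                     mast_offset_m=0.1):
    """World-frame gripper position. `mast_offset_m` (distance from the base
    center to the mast along the heading) and the extend-to-the-left
    convention are assumptions made up for this sketch."""
    heading = np.array([np.cos(base_heading_rad), np.sin(base_heading_rad)])
    side = np.array([-heading[1], heading[0]])   # perpendicular to the heading
    planar = np.asarray(base_xy) + mast_offset_m * heading + extension_m * side
    return np.array([planar[0], planar[1], lift_m])

# Reaching a countertop point about 0.4 m to the robot's left at 0.9 m height,
# with the base at (1.0, 2.0) facing along +y:
print(gripper_position((1.0, 2.0), np.pi / 2, lift_m=0.9, extension_m=0.4))
```

The point of the sketch is simply that base heading plus one extension sweeps the same horizontal plane that extra arm joints otherwise would, which is why the base can stand in for those joints.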

Stretch has been relentlessly optimized to be the absolutely minimum robot to do mobile manipulation in a home or workplace environment. In practice, this meant figuring out exactly what it was absolutely necessary for Stretch to be able to do. With an emphasis on manipulation, that meant defining the workspace of the robot, or what areas it’s able to usefully reach. “That was one thing we really had to push hard on,” says Edsinger. “Reachability.” He explains that reachability and a small mobile base tend not to go together, because robot arms (which tend to weigh a lot) can cause a small base to tip, especially if they’re moving while holding a payload. At the same time, Stretch needed to be able to access both countertops and the floor, while being able to reach out far enough to hand people things without having to be right next to them. To come up with something that could meet all those requirements, Edsinger and Kemp set out to reinvent the robot arm.
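
Here is the kind of back-of-the-envelope check that "reachability versus tipping" boils down to, with made-up numbers rather than Stretch's actual mass properties: the robot stays upright as long as the base's restoring moment about the wheel edge on the reaching side exceeds the overturning moment of the extended arm and its payload.

```python
# Static tip-over check about the wheel contact edge on the reaching side
# (illustrative numbers only; this ignores the dynamic loads that arm motion
# adds, which is exactly why moving while holding a payload is the hard case).

G = 9.81  # gravitational acceleration, m/s^2

def stays_upright(base_mass_kg, base_com_to_edge_m,
                  arm_mass_kg, arm_com_overhang_m,
                  payload_mass_kg, payload_overhang_m):
    """True if the restoring moment of the base about the support edge exceeds
    the overturning moment of the arm and payload. Overhangs are horizontal
    distances measured outward past the support edge."""
    restoring = base_mass_kg * G * base_com_to_edge_m
    overturning = (arm_mass_kg * G * arm_com_overhang_m
                   + payload_mass_kg * G * payload_overhang_m)
    return overturning < restoring

# Assumed numbers: a 20 kg base with its center of mass 0.15 m inside the wheel
# edge, a 2 kg arm whose center of mass overhangs the edge by 0.2 m, and a
# 1.5 kg payload held 0.5 m out.
print(stays_upright(20.0, 0.15, 2.0, 0.2, 1.5, 0.5))  # True, with margin to spare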

Stretch’s key innovation: a stretchable arm
The design they came up with is rather ingenious in its simplicity and how well it works. Edsinger explains that the arm consists of five telescoping links: one fixed and four moving. They are constructed of custom carbon fiber, and are driven by a single motor, which is attached to the robot’s vertical pole. The strong, lightweight structure allows the arm to extend over half a meter and hold up to 1.5 kg. Although the company has a patent pending for the design, Edsinger declined to say whether the links are driven by a belt, cables, or gears. “We don’t want to disclose too much of the secret sauce [with regard to] the drive mechanism.” He added that the arm was “one of the most significant engineering challenges on the robot in terms of getting the desired reach, compactness, precision, smoothness, force sensitivity, and low cost to all happily coexist.”

Photo: Hello Robot

Stretch’s arm consists of five telescoping links constructed of custom carbon fiber and driven by a single motor, which is attached to the robot’s vertical pole, minimizing weight and inertia. The arm has a reach of over half a meter and can hold up to 1.5 kg.

Another interesting feature of Stretch is its interface with the world—its gripper. There are countless different gripper designs out there, each and every one of which is the best at gripping some particular subset of things. But making a generalized gripper for all of the stuff that you’d find in a home is exceptionally difficult. Ideally, you’d want some sort of massive experimental test program where thousands and thousands of people test out different gripper designs in their homes for long periods of time and then tell you which ones work best. Obviously, that’s impractical for a robotics startup, but Kemp realized that someone else was already running the study for him: Amazon.

“I had this idea that there are these assistive grabbers that people with disabilities use to grasp objects in the real world,” he told us. Kemp went on Amazon’s website and looked at the top 10 grabbers and the reviews from thousands of users. He then bought a bunch of different ones and started testing them. “This one [Stretch’s gripper], I almost didn’t order it, it was such a weird looking thing,” he says. “But it had great reviews on Amazon, and oh my gosh, it just blew away the other grabbers. And I was like, that’s it. It just works.”

Stretch’s teleoperated and autonomous capabilities
As with any robot intended to be useful outside of a structured environment, hardware is only part of the story, and arguably not even the most important part. In order for Stretch to be able to operate out from under the supervision of a skilled roboticist, it has to be either easy to control, or autonomous. Ideally, it’s both, and that’s what Hello Robot is working towards, although things didn’t start out that way, Kemp explains. “From a minimalist standpoint, we began with the notion that this would be a teleoperated robot. But in the end, you just don’t get the real power of the robot that way, because you’re tied to a person doing stuff. As much as we fought it, autonomy really is a big part of the future for this kind of system.”

Here’s a look at some of Stretch’s teleoperated capabilities. We’re told that Stretch is very easy to get going right out of the box, although this teleoperation video from Hello Robot looks like it’s got a skilled and experienced user in the loop:

For such a low-cost platform, the autonomy (even at this early stage) is particularly impressive:

Since it’s not entirely clear from the video exactly what’s autonomous, here’s a brief summary of a few of the more complex behaviors that Kemp sent us (a rough sketch of the grasp and handoff geometry follows the list):

Object grasping: Stretch uses its 3D camera to find the nearest flat surface using a virtual overhead view. It then segments significant blobs on top of the surface. It selects the largest blob in this virtual overhead view and fits an ellipse to it. It then generates a grasp plan that makes use of the center of the ellipse and the major and minor axes. Once it has a plan, Stretch orients its gripper, moves to the pre-grasp pose, moves to the grasp pose, closes its gripper based on the estimated object width, lifts up, and retracts.
Mapping, navigating, and reaching to a 3D point: These demonstrations all use FUNMAP (Fast Unified Navigation, Manipulation and Planning). It’s all novel custom Python code. Even a single head scan performed by panning the 3D camera around can result in a very nice 3D representation of Stretch’s surroundings that includes the nearby floor. This is surprisingly unusual for robots, which often have their cameras too low to see many interesting things in a human environment. While mapping, Stretch selects where to scan next in a non-trivial way that considers factors such as the quality of previous observations, expected new observations, and navigation distance. The plan that Stretch uses to reach the target 3D point has been optimized for navigation and manipulation. For example, it finds a final robot pose that provides a large manipulation workspace for Stretch, which must consider nearby obstacles, including obstacles on the ground.
Object handover: This is a simple demonstration of object handovers. Stretch performs Cartesian motions to move its gripper to a body-relative position using a good motion heuristic, which is to extend the arm as the last step. These simple motions work well due to the design of Stretch. It still surprises me how well it moves the object to comfortable places near my body, and how unobtrusive it is. The goal point is specified relative to a 3D frame attached to the person’s mouth estimated using deep learning models (shown in the RViz visualization video). Specifically, Stretch targets handoff at a 3D point that is 20 cm below the estimated position of the mouth and 25 cm away along the direction of reaching.
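
For readers who want the geometry spelled out, here is a hedged sketch of two of the steps above in plain NumPy. It is our own approximation, not Hello Robot's FUNMAP code: the PCA-based ellipse fit and the function names are ours, and only the 20 cm / 25 cm handoff offsets come from Kemp's description.

```python
import numpy as np

# Rough approximations of the grasp-geometry and handoff-target steps described
# above (not Hello Robot's FUNMAP code; the PCA ellipse fit and the function
# names are our own illustration).

def grasp_from_overhead_blob(points_xy):
    """points_xy: (N, 2) overhead-view coordinates (meters) of the largest blob
    on the surface. Returns (center, yaw, aperture): the ellipse center, a
    gripper yaw aligned with the blob's minor axis, and an aperture estimated
    from the spread along that axis."""
    points_xy = np.asarray(points_xy)
    center = points_xy.mean(axis=0)
    cov = np.cov((points_xy - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    minor_dir = eigvecs[:, 0]                  # direction of least spread
    aperture = 4.0 * np.sqrt(eigvals[0])       # roughly a 2-sigma width
    yaw = np.arctan2(minor_dir[1], minor_dir[0])
    return center, yaw, aperture

def handover_target(mouth_xyz, reach_dir_xy):
    """Handoff point 20 cm below the estimated mouth position and 25 cm away
    along the (unit-normalized) horizontal reaching direction."""
    d = np.asarray(reach_dir_xy, dtype=float)
    d /= np.linalg.norm(d)
    return np.asarray(mouth_xyz) + np.array([0.25 * d[0], 0.25 * d[1], -0.20])

# Example: a roughly 0.30 m x 0.06 m blob lying at 30 degrees on a countertop.
rng = np.random.default_rng(0)
raw = rng.uniform([-0.15, -0.03], [0.15, 0.03], size=(500, 2))
rot = np.deg2rad(30.0)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
print(grasp_from_overhead_blob(raw @ R.T + np.array([0.6, 0.1])))
print(handover_target([0.5, 0.0, 1.2], [1.0, 0.0]))
```

The real system obviously does more than this (closing the gripper based on the estimated object width, accounting for obstacles, and so on), but the ellipse-and-offset geometry is the core of what is described above.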

Many of these autonomous capabilities come directly from Kemp’s lab, and the demo code is available for anyone to use. (Hello Robot says all of Stretch’s software is open source.)

Photo: Hello Robot

Hello Robot co-founder and CEO Aaron Edsinger says Stretch is designed to work with people in homes and workplaces and can be teleoperated to do a variety of tasks, including picking up toys, removing laundry from a dryer, and playing games with kids.

As of right now, Stretch is very much a research platform. You’re going to see it in research labs doing research things, and hopefully in homes and commercial spaces as well, but still under the supervision of professional roboticists. As you may have guessed, though, Hello Robot’s vision is a bit broader than that. “The impact we want to have is through robots that are helpful to people in society,” Edsinger says. “We think primarily in the home context, but it could be in healthcare, or in other places. But we really want to have our robots be impactful, and useful. To us, useful is exciting.” Adds Kemp: “I have a personal bias, but we’d really like this technology to benefit older adults and caregivers. Rather than creating a specialized assistive device, we want to eventually create an inexpensive consumer device for everyone that does lots of things.”

Neither Edsinger nor Kemp would say much more on this for now, and they were very explicit about why—they’re being deliberately cautious about raising expectations, having seen what’s happened to some other robotics companies over the past few years. Without VC funding (Hello Robot is currently bootstrapping itself into existence), Stretch is being sold entirely on its own merits. So far, it seems to be working. Stretch robots are already in a half dozen research labs, and we expect that with today’s announcement, we’ll start seeing them much more frequently.

This article appears in the October 2020 print issue as “A Robot That Keeps It Simple.”

Posted in Human Robots