
#438754 TALOS Humanoid Robot in Scotland

Video of TALOS arriving at the University of Edinburgh, being unpacked, and activated.

Posted in Human Robots

#439153 OTTO Motors’ Biggest AMR Gets ...

Over the last few weeks, we’ve posted several articles about the next generation of warehouse manipulation robots designed to handle the non-stop stream of boxes that provides the foundation for modern ecommerce. But once these robots take boxes out of the back of a trailer or off of a pallet, there are yet more robots ready to autonomously continue the flow through a warehouse or distribution center. One of the beefiest of these autonomous mobile robots is the OTTO 1500, which is called the OTTO 1500 because (you guessed it) it can handle 1500 kg of cargo. Plus another 400 kg of cargo, for a total of 1900 kg of cargo. Yeah, I don’t get it either. Anyway, it’s undergone a major update, which is a good excuse for us to ask OTTO CTO Ryan Gariepy some questions about it.

The earlier version, also named OTTO 1500, has logged over a million hours of real-world operation, which is impressive. Even more impressive is moving that much stuff that quickly without being a huge safety hazard in warehouse environments full of unpredictable humans. That might become less of a problem over time, though, as other robots take over some of the tasks that humans have been doing. OTTO Motors and Clearpath Robotics have an ongoing partnership with Boston Dynamics, and we fully expect to see these AMRs hauling boxes for Stretch in the near future.

For a bit more, we spoke with OTTO CTO Ryan Gariepy via email.

IEEE Spectrum: What are the major differences between today’s OTTO 1500 and the one introduced six years ago, and why did you decide to make those changes?

Ryan Gariepy: Six years isn’t a long shelf life for an industrial product, but it’s a lifetime in the software world. We took the original OTTO 1500, stripped it down to the chassis and drivetrain, and rebuilt it with more modern components (embedded controller, state-of-the-art sensors, next-generation lithium batteries, and more). But the biggest difference is in how we’ve integrated our autonomous software and our industrial safety systems. Our systems are safe throughout the entirety of the vehicle dynamics envelope, from straight-line motion to aggressive turning at speed in tight spaces. It corners at 2 m/s and has 60% more throughput. No “simple rectangular” footprints here! On top of this, the entire customization, development, and validation process is done in a way which respects that our integration partners need to be able to take advantage of these capabilities themselves without needing to become experts in vehicle dynamics.

As for “why now,” we’ve always known that an ecosystem of new sensors and controllers was going to emerge as the world caught on to the potential of heavy-load AMRs. We wanted to give the industry some time to settle out—making sure we had reliable and low-cost 3D sensors, for example, or industrial-grade fanless computers which can still mount a reasonable GPU, or modular battery systems built in view of new certification requirements. And, possibly most importantly, partners who see the promise of the market enough to accommodate our feedback in their product roadmaps.

How has the reception differed between the original introduction of the OTTO 1500 and the new version?

That’s like asking the difference between the public reception to the introduction of the first iPod in 2001 and the first iPhone in 2007. When we introduced our first AMR, very few people had even heard of them, let alone purchased one. We spent a great deal of time educating the market on the basic functionality of an AMR: what it is and how it works kind of stuff. Today’s buyers are far more sophisticated and experienced, and they approach automation from a more strategic perspective. What was once a tactical purchase to plug a hole is now part of a larger automation initiative. And while the next generation of AMRs closely resembles the original models from the outside, the software functionality and integration capabilities are night and day.

What’s the most valuable lesson you’ve learned?

We knew that our customers needed incredible uptime: 24/7, 365 days a year, for 10 years is the typical expectation. Some of our competitors have AMRs working in facilities where they can go offline for a few minutes or a few hours without any significant repercussions to the workflow. That’s not the case with our customers, where any stoppage at any point means everything shuts down. And, of course, Murphy’s law all but guarantees that it shuts down at 4:00 a.m. on a Saturday, Japan Standard Time. So the humbling lesson wasn’t learning that our customers wanted maintenance service levels with virtually no downtime; it was the degree of difficulty in building out a service organization as rapidly as we rolled out customer deployments. Every customer in a new geography needed a local service infrastructure as well. Finally, service doesn’t mean anything without spare parts availability, which brings with it customs and shipping challenges. And, of course, as a Canadian company, we had to build all of that international service and logistics infrastructure right from the beginning. Fortunately, the groundwork we’d laid with Clearpath Robotics served as a good foundation for this.

How were you able to develop a new product with COVID restrictions in place?

We knew we couldn’t ship an entire OTTO 1500 to the home of every engineer who needed to work on one, so we came up with the next best thing. We call it a ‘wall-bot’: it’s basically a deconstructed 1500 that our engineers can roll into their garage. We were pleasantly surprised by how effective this was, though it might be the heaviest dev kit in the robot world.

Also, don’t forget that much of robotics is software-driven. Our software development life cycle had already had a strong focus on Gazebo-based simulation for years, since it was never feasible to give every in-office developer a multi-ton loaded robot to play with, and we already had a redundant VPN setup for the office. Finally, we’ve been a remote-work-friendly culture ever since we started adopting telepresence robots and default-on videoconferencing in the pre-OTTO days. In retrospect, the largest area of improvement for us going forward is how quickly we can get people good home office setups in the middle of a pandemic.

Posted in Human Robots

#439142 Scientists Grew Human Cells in Monkey ...

Few things in science freak people out more than human-animal hybrids. Named chimeras, after the mythical Greek creature that’s an amalgam of different beasts, these part-human, part-animal embryos have come onto the scene to transform our understanding of what makes us “human.”

If theoretically grown to term, chimeras would be an endless resource for replacement human organs. They’re a window into the very early stages of human development, allowing scientists to probe the mystery of the first dozen days after sperm-meets-egg. They could help map out how our brains build their early architecture, potentially solving the age-old question of why our neural networks are so powerful—and how their wiring could go wrong.

The trouble with all of this? The embryos are part human. The idea of human hearts or livers growing inside an animal may be icky, but tolerable, to some. Human neurons crafting a brain inside a hybrid embryo—potentially leading to consciousness—is a horror scenario. For years, scientists have flirted with ethical boundaries by mixing human cells with those of rats and pigs, which are relatively far from us in evolutionary terms, to reduce the chance of a mentally “humanized” chimera.

This week, scientists crossed a line.

In a study led by Dr. Juan Carlos Izpisua Belmonte, a prominent stem cell biologist at the Salk Institute for Biological Studies, the team reported the first vetted case of a human-monkey hybrid embryo.

Reflexive shudder aside, the study is a technological tour-de-force. The scientists were able to watch the hybrid embryo develop for 20 days outside the womb, far longer than any previous attempts. Putting the timeline into context, it’s about 20 percent of a monkey’s gestation period.

Although only 3 out of over 100 attempts survived past that point, the viable embryos contained a shockingly high amount of human cells—about one-third of the entire cell population. If able to further develop, those human contributions could, in theory, substantially form the biological architecture of the body, and perhaps the mind, of a human-monkey fetus.

I can’t stress this enough: the technology isn’t there yet to bring Planet of the Apes to life. Strict regulations also prohibit growing chimera embryos past the first few weeks. It’s telling that Izpisua Belmonte collaborated with labs in China, where ethical regulations are far less strict than in the US.

But the line’s been crossed, and there’s no going back. Here’s what they did, why they did it, and reasons to justify—or limit—similar tests going forward.

What They Did
The way the team made the human-monkey embryo is similar to previous attempts at half-human chimeras.

Here’s how it goes. They used de-programmed, or “reverted,” human stem cells, called induced pluripotent stem cells (iPSCs). These cells often start from skin cells and are chemically treated to revert to the stem cell stage, regaining the superpower to grow into almost any type of cell: heart, lung, brain…you get the idea. The next step is preparing the monkey component: a fertilized and healthy monkey egg that develops for six days in a Petri dish. By this point, the embryo is ready for implantation into the uterus, which kicks off the whole development process.

This is where the chimera jab comes in. Using a tiny needle, the team injected each embryo with 25 human cells, and babied them for another day. “Until recently the experiment would have ended there,” wrote Drs. Hank Greely and Nita Farahany, two prominent bioethicists who wrote an accompanying expert take, but were not involved in the study.

But the team took it way further. Using a biological trick, the embryos attached to the Petri dish as they would to a womb. The human cells survived after the artificial “implantation,” and—surprisingly—tended to physically group together, away from monkey cells.

The weird segregation led the team to further explore why human cells don’t play nice with those of another species. Using a big data approach, the team scouted how genes in human cells talked to their monkey hosts. What’s surprising, the team said, is that adding human cells into the monkey embryos fundamentally changed both. Rather than each behaving as they would have in their normal environment, the two species of cells influenced each other, even when physically separated. The human cells, for example, tweaked the biochemical messengers that monkey cells—and the “goop” surrounding those cells—use to talk to one another.

In other words, in contrast to oil and water, human and monkey cells seemed to communicate and change the other’s biology without needing too much outside whisking. Human iPSCs began to behave more like monkey cells, whereas monkey embryos became slightly more human.

Ok, But Why?
The main reason the team went for a monkey hybrid, rather than the “safer” pig or rat alternative, is our similarity to monkeys. As the authors argue, being genetically “closer” in evolutionary terms makes it easier to form chimeras. In turn, the resulting embryos also make it possible to study early human development and build human tissues and organs for replacement.

“Historically, the generation of human-animal chimeras has suffered from low efficiency,” said Izpisua Belmonte. “Generation of a chimera between human and non-human primate, a species more closely related to humans along the evolutionary timeline than all previously used species, will allow us to gain better insight into whether there are evolutionarily imposed barriers to chimera generation and if there are any means by which we can overcome them.”

A Controversial Future
That argument isn’t convincing to some.

In terms of organ replacement, monkeys are very expensive (and cognitively advanced) donors compared to pigs, the latter of which have been the primary research host for growing human organs. While difficult to genetically engineer to fit human needs, pigs are more socially acceptable as organ “donors”—many of us don’t bat an eye at eating ham or bacon—whereas the concept of extracting humanoid tissue from monkeys is extremely uncomfortable.

A human-monkey hybrid could be especially helpful for studying neurodevelopment, but that directly butts heads with the “human cells in animal brains” problem. Even when such an embryo is not brought to term, it’s hard to imagine anyone who’s ready to study the brain of a potentially viable animal fetus with human cells wired into its neural networks.

There’s also the “sledgehammer” aspect of the study that makes scientists cringe. “Direct transplantation of cells into particular regions, or organs [of an animal], allows researchers to predict where and how the cells might integrate,” said Greely and Farahany. This means they might be able to predict if the injected human cells end up in a “boring” area, like the gallbladder, or a more “sensitive” area, like the brain. But with the current technique, we’re unsure where the human cells could eventually migrate to and grow.

Yet despite the ick factor, human-monkey embryos circumvent the ethical quandaries around using aborted tissue for research. These hybrid embryos may present the closest models to early human development that we can get without dipping into the abortion debate.

In their commentary, Greely and Farahany laid out four main aspects to consider before moving ahead with the controversial field. First and foremost is animal welfare, which is “especially true for non-human primates,” as they’re mentally close to us. There’s also the need for consent from the human donors whose cells form the basis of the injected iPSCs, as some may be uncomfortable with the endeavor itself. Like organ donors, people need to be fully informed.

Third and fourth, public discourse is absolutely needed, as people may strongly disapprove of the idea of mixing human tissue or organs with animals. For now, the human-monkey embryos have a short life. But as technology gets better, and based on previous similar experiments with other chimeras, the next step in this venture is to transplant the embryo into a living animal host’s uterus, which could nurture it to grow further.

For now, that’s a red line for human-monkey embryos, and the technology isn’t there yet. But if the surprise of CRISPR babies has taught us anything, it’s that as a society we need to discourage, yet prepare for, a lone wolf who’s willing to step over the line—that is, bringing a part-human, part-animal embryo to term.

“We must begin to think about that possibility,” said Greely and Farahany. With the study, we know that “those future experiments are now at least plausible.”

Image Credit: A human-monkey chimera embryo, photo by Weizhi Ji, Kunming University of Science and Technology

Posted in Human Robots

#438745 Social robot from India

The Indian humanoid “SHALU” can speak 9 Indian and 38 foreign languages, recognize faces, and identify people and objects!

Posted in Human Robots

#439110 Robotic Exoskeletons Could One Day Walk ...

Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.

Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.

One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change how you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.

Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.

Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.

Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
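To make that pipeline concrete, here is a minimal sketch in PyTorch of how a pretrained convolutional network might be fine-tuned to classify walking environments. The class labels, dataset path, and the choice of a MobileNetV2 backbone are illustrative assumptions, not the actual ExoNet setup:

    # Hypothetical sketch: fine-tune a pretrained CNN to classify
    # walking environments. Labels and paths are illustrative, not
    # ExoNet's actual classes or directory layout.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    CLASSES = ["level_ground", "stairs_up", "stairs_down", "doorway"]  # assumed

    # Standard ImageNet-style preprocessing for a pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Expects images sorted into one folder per class.
    dataset = datasets.ImageFolder("walking_envs/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    # Lightweight backbone; replace the final layer for our class count.
    model = models.mobilenet_v2(weights="IMAGENET1K_V1")
    model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

A lightweight backbone is the natural choice here, since the classifier ultimately has to run in real time on exoskeleton-mounted hardware.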

According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance.

In similar work, researchers in North Carolina had volunteers walk through a variety of indoor and outdoor settings with cameras either mounted on their eyeglasses or strapped to their knees, capturing the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”

In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software while keeping its computational and memory requirements low, which is important for onboard, real-time operation on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movement.

Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
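To illustrate that last step, here is a minimal sketch of how a classifier’s output could be turned into a locomotion-mode command. The mode names, confidence threshold, and override hook are hypothetical stand-ins, not the researchers’ actual controller:

    # Hypothetical sketch: map classifier output to a locomotion-mode
    # command. Mode names, threshold, and override are illustrative.
    import torch

    MODES = ["level_ground", "stairs_up", "stairs_down", "doorway"]
    CONFIDENCE_THRESHOLD = 0.9  # assumed: switch only on confident predictions

    def select_mode(logits: torch.Tensor, current_mode: str,
                    user_override: str | None = None) -> str:
        """Choose the next locomotion mode from classifier logits."""
        if user_override is not None:
            return user_override  # the user can always take back control
        probs = torch.softmax(logits, dim=-1)
        confidence, idx = probs.max(dim=-1)
        if confidence.item() < CONFIDENCE_THRESHOLD:
            return current_mode  # uncertain prediction: hold the current mode
        return MODES[idx.item()]

Putting the override check first mirrors the safety principle Laschowski describes next.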

However, Laschowski adds, “User safety is of the utmost importance, especially considering that we’re working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities. “The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”

Posted in Human Robots