
#435822 The Internet Is Coming to the Rest of ...

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano, and both displayed an exquisite sense of musical timing and melody.

Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”

“It blew me away,” he says.

Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”

Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.

“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”

So, over the past six years, the architects of the Dolittle-esque initiative have pursued two small pilot projects, one for dolphins and one for chimpanzees.

At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.

Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.

Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.

Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system in which images are projected through an underwater viewing window onto a glass panel and infrared sensors register the dolphins’ touches, allowing the animals to play specially designed apps, including one dubbed Whack-a-Fish.

Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.

The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.

“We can use data analysis and technology tools to give non-humans a lot more choice and control.”
—Peter Gabriel

The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.

“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.

Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”

Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
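
The core trick borrowed from unsupervised machine translation is to embed each species’ vocalizations in a vector space and then align the spaces without any paired examples. Here is a minimal sketch of one standard alignment step, an orthogonal Procrustes fit of the kind used in unsupervised word-translation work; the random “vocalization embeddings” are placeholders, and nothing here is the Earth Species Project’s actual pipeline:

```python
import numpy as np

def align_embeddings(source, target):
    """Learn an orthogonal map W that best aligns source embeddings to
    target embeddings (the Procrustes solution used in unsupervised
    word-translation methods)."""
    # source, target: (n, d) arrays of anchor embeddings
    m = target.T @ source
    u, _, vt = np.linalg.svd(m)
    return u @ vt          # optimal rotation/reflection

# Hypothetical example: these vectors would come from models trained
# separately on whale codas and on human speech.
rng = np.random.default_rng(0)
whale_vecs = rng.normal(size=(100, 64))
human_vecs = rng.normal(size=(100, 64))
W = align_embeddings(whale_vecs, human_vecs)
mapped = whale_vecs @ W.T  # whale embeddings expressed in the human space
```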

Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.

“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.

A challenge with this sort of AI software remains verification and validation. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say if a cybernetic translation of a sperm whale’s clicks is accurate or not?

One could back-translate an English expression into sperm whale-ese and then into English again. But with the great apes, there might be a better option.
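
That round-trip idea is easy to state as a check: translate out of English, translate back, and compare. A minimal sketch, in which the two translator functions are hypothetical stand-ins for learned models and the score is a simple token overlap rather than a validated metric:

```python
def round_trip_score(sentence, to_whale, to_english):
    """Back-translation consistency check: English -> whale-ese -> English,
    scored by token overlap with the original sentence."""
    recovered = to_english(to_whale(sentence))
    orig_tokens = set(sentence.lower().split())
    back_tokens = set(recovered.lower().split())
    return len(orig_tokens & back_tokens) / len(orig_tokens) if orig_tokens else 0.0

# Placeholder translators; real ones would be learned models.
to_whale = lambda s: s
to_english = lambda s: s
print(round_trip_score("the pod is feeding near the surface", to_whale, to_english))
```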

According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.

Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha did.

“If I can have a chat with a cow, maybe I can have more compassion for it.”
—Jeremy Coller

Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata that had lived for six years in the Congolese rainforests before her capture.

Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes thus grew up fluent in both bonobo and human.

Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.

Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.

And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”

“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”

That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.

At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)

Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.

“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”

An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”


#435818 Swappable Flying Batteries Keep Drones ...

Battery power is a limiting factor for robots everywhere, but it’s particularly problematic for drones, which have to make an awkward tradeoff between the amount of battery they carry, the amount of other, more useful stuff they carry, and how long they can spend in the air. Consumer drones seem to have settled on devoting about a third of their overall mass to battery, resulting in flight times of 20 to 25 minutes at best before you have to bring the drone back for a battery swap. And if whatever the drone was supposed to be doing depended on it staying in the air, then you’re pretty much out of luck.

When much larger aircraft have this problem, and in particular military aircraft which sometimes need to stay on-station for long periods of time, the solution is mid-air refueling—why send an aircraft all the way back to its fuel source when you can instead bring the fuel source to the aircraft? It’s easier to do this with liquid fuel than it is with batteries, of course, but researchers at UC Berkeley have come up with a clever solution: You just give the batteries wings. Or, in this case, rotors.

The big quadrotor, which weighs 820 grams, is carrying its own 2.2 Ah lithium-polymer battery that by itself gives it a flight time of about 12 minutes. Each little quadrotor weighs 320 g, including its own 0.8 Ah battery plus a 1.5 Ah battery as cargo. The little ones can’t keep themselves aloft for all that long, but that’s okay, because as flying batteries their only job is to go from ground to the big quadrotor and back again.
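
A rough back-of-the-envelope check of those numbers: ideal hover power grows with mass to the 1.5 power, so a 1.5 Ah battery powering the heavier docked pair lasts noticeably less than a naive capacity ratio would suggest. The sketch below uses the article’s figures; the mass-to-power scaling is a textbook approximation, not the paper’s model:

```python
def hover_minutes(capacity_ah, baseline_capacity_ah=2.2, baseline_minutes=12,
                  baseline_mass_g=820, total_mass_g=820):
    """Rough hover-time estimate: ideal hover power scales with mass^1.5
    (actuator-disk theory), and current draw is assumed proportional to power."""
    baseline_current = baseline_capacity_ah / (baseline_minutes / 60)  # ~11 A
    current = baseline_current * (total_mass_g / baseline_mass_g) ** 1.5
    return 60 * capacity_ah / current

# Main quadrotor alone, using the article's numbers:
print(hover_minutes(2.2))                          # ~12 minutes
# Main quad plus a docked 320 g flying battery, running off its 1.5 Ah cargo cell:
print(hover_minutes(1.5, total_mass_g=820 + 320))  # ~5 minutes; the article reports ~6
```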

Photo: UC Berkeley

The flying batteries land on a tray mounted atop the main drone and align their legs with electrical contacts.

How the flying batteries work
As each flying battery approaches the main quadrotor, the smaller quadrotor takes a position about 30 centimeters above a passive docking tray mounted on top of the bigger drone. It then slowly descends to about 3 cm above the tray, waits for its alignment to be just right, and then drops, landing on the tray, which helps align its legs with the electrical contacts. As soon as a connection is made, the main quadrotor is able to power itself completely from the smaller drone’s battery payload. Each flying battery can power the main quadrotor for about 6 minutes, and then it flies off and a new flying battery takes its place. If everything goes well, the main quadrotor only uses its primary battery during the undocking and docking phases, and in testing, this boosted its flight time from 12 minutes to nearly an hour.
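
The docking sequence described above is essentially a small state machine. A minimal sketch of that logic follows; the state names and thresholds are illustrative, not the values used in the paper:

```python
from enum import Enum, auto

class DockState(Enum):
    APPROACH = auto()   # fly to roughly 30 cm above the docking tray
    HOVER = auto()      # descend to roughly 3 cm and wait for alignment
    DROP = auto()       # cut altitude and land on the tray
    DOCKED = auto()     # electrical contact made; main quad draws cargo power

def step(state, height_above_tray_m, lateral_error_m, contact):
    """Advance the docking state machine one control tick.
    Thresholds below are illustrative placeholders."""
    if state is DockState.APPROACH and height_above_tray_m <= 0.30:
        return DockState.HOVER
    if state is DockState.HOVER and height_above_tray_m <= 0.03 and lateral_error_m < 0.01:
        return DockState.DROP
    if state is DockState.DROP and contact:
        return DockState.DOCKED
    return state

print(step(DockState.HOVER, 0.02, 0.005, contact=False))  # -> DockState.DROP
```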

All of this happens in a motion-capture environment, which is a big constraint, and getting this precision(ish) docking maneuver to work outside, or when the primary drone is moving, is something the researchers would like to figure out. There are potential applications in situations where continuous monitoring by a drone is important—you could argue that swapping between two identical drones might be a simpler way of achieving that, but it also requires two (presumably fancy) drones rather than just one plus a bunch of relatively simple and inexpensive flying batteries.

“Flying Batteries: In-flight Battery Switching to Increase Multirotor Flight Time,” by Karan P. Jain and Mark W. Mueller from the High Performance Robotics Lab at UC Berkeley, is available on arXiv.


#435806 Boston Dynamics’ Spot Robot Dog ...

Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics robots, Spot is a YouTube sensation whose videos amass millions of views.

Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.

“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”

Boston Dynamics has always been a secretive company, but last month, in preparation for launching Spot (formerly SpotMini), it allowed our photographers into its headquarters in Waltham, Mass., for a special shoot. In that session, we captured Spot and also Atlas—the company’s highly dynamic humanoid—in action, walking, climbing, and jumping.

You can see Spot’s photo interactives on our Robots Guide. (The Atlas interactives will appear in coming weeks.)

Gif: Bob O’Connor/Robots.ieee.org

And if you’re in the market for a robot dog, here’s everything we know about Boston Dynamics’ plans for Spot.

Who can buy a Spot?
If you’re interested in one, you should go to Boston Dynamics’ website and take a look at the information the company requires from potential buyers. Again, the focus is on businesses. Boston Dynamics says it wants to get Spots out to initial customers that “either have a compelling use case or a development team that we believe can do something really interesting with the robot,” says VP of business development Michael Perry. “Just because of the scarcity of the robots that we have, we’re going to have to be selective about which partners we start working together with.”

What can Spot do?
As you’ve probably seen in the YouTube videos, Spot can walk, trot, avoid obstacles, climb stairs, and much more. The robot’s hardware is almost completely custom, with powerful compute boards for control and five sensor modules located on every side of Spot’s body, allowing it to survey the space around itself from any direction. The legs are powered by 12 custom motors with gear reductions, giving the robot a top speed of 1.6 meters per second. The robot can operate for 90 minutes on a charge. In addition to the basic configuration, you can integrate up to 14 kilograms of extra hardware via a payload interface. Among the payload packages Boston Dynamics plans to offer are a 6-degrees-of-freedom arm, a version of which can be seen in some of the YouTube videos, and a ring of cameras called SpotCam that could be used to create Street View–type images inside buildings.

Image: Boston Dynamics

How do you control Spot?
Learning to drive the robot using its gaming-style controller “takes 15 seconds,” says CEO Marc Raibert. He explains that while teleoperating Spot, you may not realize that the robot is doing a lot of the work. “You don’t really see what that is like until you’re operating the joystick and you go over a box and you don’t have to do anything,” he says. “You’re practically just thinking about what you want to do and the robot takes care of everything.” The control methods have evolved significantly since the company’s first quadruped robots, machines like BigDog and LS3. “The control in those days was much more monolithic, and now we have what we call a sequential composition controller,” Raibert says, “which lets the system have control of the dynamics in a much broader variety of situations.” That means that every time one of Spot’s feet touches or doesn’t touch the ground, this different state of the body affects the basic physical behavior of the robot, and the controller adjusts accordingly. “Our controller is designed to understand what that state is and have different controls depending upon the case,” he says.
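
Raibert’s “sequential composition controller” amounts to choosing a different control law for each contact configuration of the feet. The toy dispatcher below is meant only to illustrate that structure; the modes and gains are invented, not Boston Dynamics’ controller:

```python
import itertools

def controller_for_contacts(contacts):
    """Pick a control mode based on which feet are on the ground.
    'contacts' is a tuple of four booleans (one per leg); gains are placeholders."""
    n_down = sum(contacts)
    if n_down == 4:
        return {"mode": "stance", "kp": 120.0, "kd": 8.0}
    if n_down == 2:
        return {"mode": "trot_support", "kp": 180.0, "kd": 12.0}
    if n_down == 0:
        return {"mode": "flight", "kp": 40.0, "kd": 4.0}
    return {"mode": "transition", "kp": 90.0, "kd": 6.0}

# Every possible contact state maps to some controller, which is the point
# of composing controllers rather than using one monolithic law.
for contacts in itertools.product([False, True], repeat=4):
    assert controller_for_contacts(contacts)["mode"] in {
        "stance", "trot_support", "flight", "transition"}
```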

How much does Spot cost?
Boston Dynamics would not give us specific details about pricing, saying only that potential customers should contact them for a quote and that there is going to be a leasing option. It’s understandable: As with any expensive and complex product, prices can vary on a case by case basis and depend on factors such as configuration, availability, level of support, and so forth. When we pressed the company for at least an approximate base price, Perry answered: “Our general guidance is that the total cost of the early adopter program lease will be less than the price of a car—but how nice a car will depend on the number of Spots leased and how long the customer will be leasing the robot.”

Can Spot do mapping and SLAM out of the box?
The robot’s perception system includes cameras and 3D sensors (there is no lidar), used to avoid obstacles and sense the terrain so it can climb stairs and walk over rubble. It’s also used to create 3D maps. According to Boston Dynamics, the first software release will offer just teleoperation. But a second release, to be available in the next few weeks, will enable more autonomous behaviors. For example, it will be able to do mapping and autonomous navigation—similar to what the company demonstrated in a video last year, showing how you can drive the robot through an environment, create a 3D point cloud of that environment, and then set waypoints within the map, defining a mission for Spot to go out and execute. For customers that have their own autonomy stack and are interested in using it on Spot, Boston Dynamics made the system “as plug and play as possible in terms of how third-party software integrates into Spot’s system,” Perry says. This is done mainly via an API.

How does Spot’s API work?
Boston Dynamics built an API so that customers can create application-level products with Spot without having to deal with low-level control processes. “Rather than going and building joint-level kinematic access to the robot,” Perry explains, “we created a high-level API and SDK that allows people who are used to Web app development or development of missions for drones to use that same scope, and they’ll be able to build applications for Spot.”
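
To make the “Web-app-level” framing concrete, mission code against such a high-level SDK might look roughly like the sketch below. The class and method names are hypothetical, invented for illustration, and do not correspond to the actual Spot API:

```python
# Hypothetical client code against a high-level robot SDK; the names are
# invented for illustration and are not the real Spot SDK.
class RobotClient:
    def __init__(self, hostname, credentials):
        self.hostname = hostname
        self.credentials = credentials

    def stand(self):
        print("robot standing")

    def navigate_to(self, waypoint_id):
        print(f"navigating to waypoint {waypoint_id}")

    def capture_image(self, camera):
        print(f"capturing image from {camera} camera")
        return b""  # placeholder image bytes

def run_inspection_mission(client, waypoints):
    """Application-level mission: walk a list of waypoints and photograph each.
    Note there is no joint-level control anywhere in this code."""
    client.stand()
    for wp in waypoints:
        client.navigate_to(wp)
        client.capture_image(camera="front")

run_inspection_mission(RobotClient("spot.local", credentials=None),
                       ["dock", "pump-3", "valve-7"])
```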

What applications should we see first?
Boston Dynamics envisions Spot as a platform: a versatile mobile robot that companies can use to build applications based on their needs. What types of applications? The company says the best way to find out is to put Spot in the hands of as many users as possible and let them develop the applications. Some possibilities include performing remote data collection and light manipulation in construction sites; monitoring sensors and infrastructure at oil and gas sites; and carrying out dangerous missions such as bomb disposal and hazmat inspections. There are also other promising areas such as security, package delivery, and even entertainment. “We have some initial guesses about which markets could benefit most from this technology, and we’ve been engaging with customers doing proof-of-concept trials,” Perry says. “But at the end of the day, that value story is really going to be determined by people going out and exploring and pushing the limits of the robot.”

Photo: Bob O'Connor

How many Spots have been produced?
Last June, Boston Dynamics said it was planning to build about a hundred Spots by the end of the year, eventually ramping up production to a thousand units per year by the middle of this year. The company admits that it is not quite there yet. It has built close to a hundred beta units, which it has used to test and refine the final design. This version is now being mass manufactured, but the company is still “in the early tens of robots,” Perry says.

How did Boston Dynamics test Spot?
The company has tested the robots during proof-of-concept trials with customers, and at least one customer is already using Spot to survey construction sites. The company has also done reliability tests at its facility in Waltham, Mass. “We drive around, not quite day and night, but hundreds of miles a week, so that we can collect reliability data and find bugs,” Raibert says.

What about competitors?
In recent years, there’s been a proliferation of quadruped robots that will compete in the same space as Spot. The most prominent of these is ANYmal, from ANYbotics, a Swiss company that spun out of ETH Zurich. Other quadrupeds include Vision from Ghost Robotics, used by one of the teams in the DARPA Subterranean Challenge; and Laikago and Aliengo from Unitree Robotics, a Chinese startup. Raibert views the competition as a positive thing. “We’re excited to see all these companies out there helping validate the space,” he says. “I think we’re more in competition with finding the right need [that robots can satisfy] than we are with the other people building the robots at this point.”

Why is Boston Dynamics selling Spot now?
Boston Dynamics has long been an R&D-centric firm, with most of its early funding coming from military programs, but it says commercializing robots has always been a goal. Productizing its machines probably accelerated when the company was acquired by Google’s parent company, Alphabet, which had an ambitious (and now apparently very dead) robotics program. The commercial focus likely continued after Alphabet sold Boston Dynamics to SoftBank, whose famed CEO, Masayoshi Son, is known for his love of robots—and profits.

Which should I buy, Spot or Aibo?
Don’t laugh. We’ve gotten emails from individuals interested in purchasing a Spot for personal use after seeing our stories on the robot. Alas, Spot is not a bigger, fancier Aibo pet robot. It’s an expensive, industrial-grade machine that requires development and maintenance. If you’re, say, Jeff Bezos, you could probably convince Boston Dynamics to sell you one; otherwise, the company will prioritize businesses.

What’s next for Boston Dynamics?
On the commercial side of things, other than Spot, Boston Dynamics is interested in the logistics space. Earlier this year it announced the acquisition of Kinema Systems, a startup that had developed vision sensors and deep-learning software to enable industrial robot arms to locate and move boxes. There’s also Handle, the mobile robot on whegs (wheels + legs), that can pick up and move packages. Boston Dynamics is hiring both in Waltham, Mass., and Mountain View, Calif., where Kinema was located.

Okay, can I watch a cool video now?
During our visit to Boston Dynamics’ headquarters last month, we saw Atlas and Spot performing some cool new tricks that we unfortunately are not allowed to tell you about. We hope that, although the company is putting a lot of energy and resources into its commercial programs, Boston Dynamics will still find plenty of time to improve its robots, build new ones, and of course, keep making videos. [Update: The company has just released a new Spot video, which we’ve embedded at the top of the post.] [Update 2: We should have known. Boston Dynamics sure knows how to create buzz for itself: It has just released a second video, this time of Atlas doing some of those tricks we saw during our visit and couldn’t tell you about. Enjoy!]

[ Boston Dynamics ]


#435804 New AI Systems Are Here to Personalize ...

The narratives about automation and its impact on jobs go from urgent to hopeful and everything in between. Regardless of where you land, it’s hard to argue against the idea that technologies like AI and robotics will change our economy and the nature of work in the coming years.

A recent World Economic Forum report noted that some estimates show automation could displace 75 million jobs by 2022, while at the same time creating 133 million new roles. While these estimates predict a net positive for the number of new jobs in the coming decade, displaced workers will need to learn new skills to adapt to the changes. If employees can’t be retrained quickly for jobs in the changing economy, society is likely to face some degree of turmoil.

According to Bryan Talebi, CEO and founder of AI education startup Ahura AI, the same technologies erasing and creating jobs can help workers bridge the gap between the two.

Ahura is developing a product to capture biometric data from adult learners who are using computers to complete online education programs. The goal is to feed this data to an AI system that can modify and adapt their program to optimize for the most effective teaching method.

While the prospect of a computer recording and scrutinizing a learner’s behavioral data will surely generate unease in a society growing more aware of, and uncomfortable with, digital surveillance, some people may look past such discomfort if they experience improved learning outcomes. Users of the system would, in theory, have their own personalized instruction shaped specifically for their unique learning style.

And according to Talebi, their systems are showing some promise.

“Based on our early tests, our technology allows people to learn three to five times faster than traditional education,” Talebi told me.

Currently, Ahura’s system uses the video camera and microphone that come standard on the laptops, tablets, and mobile devices most students are using for their learning programs.

With the computer’s camera, Ahura can capture facial movements and micro-expressions, measure eye movements, and track a fidget score (a measure of how much a student moves while learning). The microphone tracks voice sentiment, and the AI leverages natural language processing to review the learner’s word usage.

From this collection of data Ahura can, according to Talebi, identify the optimal way to deliver content to each individual.

For some users that might mean a video tutorial is the best style of learning, while others may benefit more from some form of experiential or text-based delivery.
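
As a purely illustrative sketch of that kind of mapping, here is how engagement signals might be turned into a delivery format. The feature names and thresholds are invented, not Ahura’s model:

```python
def choose_format(features):
    """Map engagement signals to a content format.
    The keys (attention, fidget_score, sentiment) and thresholds are
    illustrative placeholders, not Ahura's actual model."""
    attention = features.get("attention", 0.5)    # 0..1, from gaze and expression
    fidget = features.get("fidget_score", 0.5)    # 0..1, higher means more movement
    sentiment = features.get("sentiment", 0.0)    # -1..1, from voice analysis

    if attention < 0.3 or fidget > 0.7:
        return "interactive_exercise"   # try to re-engage a distracted learner
    if sentiment < -0.3:
        return "video_tutorial"         # switch modality when frustration shows
    return "text"                       # stick with the lower-bandwidth format

print(choose_format({"attention": 0.2, "fidget_score": 0.8, "sentiment": -0.1}))
```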

“The goal is to alter the format of the content in real time to optimize for attention and retention of the information,” said Talebi. One of Ahura’s main goals is to reduce the frequency with which students switch from their learning program to distractions like social media.

“We can now predict with a 60 percent confidence interval ten seconds before someone switches over to Facebook or Instagram. There’s a lot of work to do to get that up to a 95 percent level, so I don’t want to overstate things, but that’s a promising indication that we can work to cut down on the amount of context-switching by our students,” Talebi said.
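
In machine-learning terms, predicting an imminent switch to social media from a window of recent engagement data is a short-horizon binary classification problem. A minimal sketch with scikit-learn, using synthetic features in place of Ahura’s real biometric signals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for windows of engagement features (e.g., attention,
# fidget, sentiment), labeled 1 if the student switched to social media
# within the next 10 seconds.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
p_switch = model.predict_proba(X[:5])[:, 1]   # probability of an imminent switch
print(np.round(p_switch, 2))
```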

Talebi repeatedly mentioned his ambition to leverage the same design principles used by Facebook, Twitter, and others to increase the time users spend on those platforms, but instead use them to design more compelling and even addictive education programs that can compete for attention with social media.

But the notion that Ahura’s system could one day be used to create compelling or even addictive education runs up against a set of justified fears about data privacy. Anxiety over the potential misuse of user data for social manipulation is already widespread.

“Of course there is a real danger, especially because we are collecting so much data about our users which is specifically connected to how they consume content. And because we are looking so closely at the ways people interact with content, it’s incredibly important that this technology never be used for propaganda or to sell things to people,” Talebi tried to assure me.

Unsurprisingly (and worryingly), using this AI system to sell products to people is exactly where some investors’ ambitions immediately turn once they learn about the company’s capabilities, according to Talebi. During our discussion Talebi regularly cited the now infamous example of Cambridge Analytica, the political consulting firm hired by the Trump campaign to run a psychographically targeted persuasion campaign on the US population during the most recent presidential election.

“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.

Talebi will surely need to take real action on such a claim, but he says the company is in the process of identifying a structure for an ethics review board—one that carries significant influence, with voting authority similar to that of the executive team and the regular board.

“Our goal is to build an ethics review board that has teeth, is diverse in both gender and background but also in thought and belief structures. The idea is to have our ethics review panel ensure we’re building things ethically,” he said.

Data privacy appears to be an important issue for Talebi, who occasionally referenced a major competitor in the space based in China. According to a recent article from MIT Tech Review outlining the astonishing growth of AI-powered education platforms in China, data privacy concerns may be less severe there than in the West.

Ahura is currently developing upgrades to an early alpha-stage prototype, but is already capturing data from students from at least one Ivy League school and a variety of other places. Their next step is to roll out a working beta version to over 200,000 users as part of a partnership with an unnamed corporate client who will be measuring the platform’s efficacy against a control group.

Going forward, Ahura hopes to add to its suite of biometric data capture by including things like pupil dilation and facial flushing, heart rate, sleep patterns, or whatever else may give their system an edge in improving learning outcomes.

As information technologies increasingly automate work, it’s likely we’ll also see rapid changes to our labor systems. It’s also looking increasingly likely that those same technologies will be used to improve our ability to give people the right skills when they need them. It may be one way to address the challenges automation is sure to bring.

Image Credit: Gerd Altmann / Pixabay


#435793 Tiny Robots Carry Stem Cells Through a ...

Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper published today in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.

Under a rotating magnetic field, the microrobots moved with rolling and corkscrew-style locomotion. The researchers, led by Hongsoo Choi and his team at the Daegu Gyeongbuk Institute of Science & Technology (DGIST), in South Korea, also demonstrated their bot’s moves in slices of mouse brain, in blood vessels isolated from rat brains, and in a multi-organ-on-a-chip.

The invention provides an alternative way to deliver stem cells, which are increasingly important in medicine. Such cells can be coaxed into becoming nearly any kind of cell, making them great candidates for treating neurodegenerative disorders such as Alzheimer’s.

But delivering stem cells typically requires an injection with a needle, which lowers the survival rate of the stem cells, and limits their reach in the body. Microrobots, however, have the potential to deliver stem cells to precise, hard-to-reach areas, with less damage to surrounding tissue, and better survival rates, says Jin-young Kim, a principal investigator at DGIST-ETH Microrobotics Research Center, and an author on the paper.

The virtues of microrobots have inspired several research groups to propose and test different designs in simple conditions, such as microfluidic channels and other static environments. A group out of Hong Kong last year described a burr-shaped bot that carried cells through live, transparent zebrafish.

The new research presents a magnetically-actuated microrobot that successfully carried stem cells through a live mouse. In additional experiments, the cells, which had differentiated into brain cells such as astrocytes, oligodendrocytes, and neurons, transferred to microtissues on the multi-organ-on-a-chip. Taken together, the proof-of-concept experiments demonstrate the potential for microrobots to be used in human stem cell therapy, says Kim.

The team fabricated the robots with 3D laser lithography, and designed them in two shapes: spherical and helical. Using a rotating magnetic field, the scientists navigated the spherical bots with a rolling motion and the helical bots with a corkscrew motion. These styles of locomotion proved more efficient than locomotion driven by a simple pulling force, and were more suitable for use in biological fluids, the scientists reported.
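
Both gaits come from a magnetic field vector that spins at a chosen frequency about a chosen axis, with the robot’s magnetization following the field: a sphere rolls along the rotation axis, a helix corkscrews along it. A minimal sketch of how such a field command can be generated; the coil geometry and calibration a real system needs are omitted, and the numbers are placeholders:

```python
import numpy as np

def rotating_field(t, axis, magnitude_mt=10.0, freq_hz=5.0):
    """Magnetic field vector (in mT) rotating at freq_hz about 'axis'.
    Magnitude and frequency here are illustrative values."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # Build two unit vectors spanning the plane perpendicular to the axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    phase = 2 * np.pi * freq_hz * t
    return magnitude_mt * (np.cos(phase) * u + np.sin(phase) * v)

# Field samples over one rotation period about the z axis:
for t in np.linspace(0, 0.2, 5):
    print(np.round(rotating_field(t, axis=[0, 0, 1]), 2))
```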

The big challenge in navigating microbots in a live animal (or human body) is being able to see them in real time. Imaging with fMRI doesn’t work, because the magnetic fields interfere with the system. “To precisely control microbots in vivo, it is important to actually see them as they move,” the authors wrote in their paper.

That wasn’t possible during experiments in a live mouse, so the researchers had to check the location of the microrobots before and after the experiments using an optical tomography system called IVIS. They also had to resort to using a pulling force with a permanent magnet to navigate the microrobots inside the mouse, due to the limitations of the IVIS system.

Kim says he and his colleagues are developing imaging systems that will enable them to view in real time the locomotion of their microrobots in live animals.
