Tag Archives: boston

#435172 DARPA’s New Project Is Investing ...

When Elon Musk and DARPA both hop aboard the cyborg hype train, you know brain-machine interfaces (BMIs) are about to achieve the impossible.

BMIs, already the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches”, and “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.

With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.

Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically-enhanced neurons, and infrared beams for their next-gen BMIs.

It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.

When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.

Here’s what you need to know.

Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces to both read and write into the brain.

Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.

In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.

“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.

The only technologies that will be considered are those with a viable path toward eventual use in healthy human subjects.

“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”

The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, and transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery, heavy machinery, or a user who sits very still—conditions unlikely in the real world.

The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.

Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.

The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.

Other projects in N3 are just as ambitious.

The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.

Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.

The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.

The nonprofit Battelle team gets even fancier with their “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice-versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.

The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.

Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.

DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally-intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.
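
To make the signal-processing side of that challenge concrete, here is a minimal Python sketch (purely illustrative, not any N3 team’s pipeline) that band-pass filters 16 simulated channels and picks out the one carrying a weak rhythm. The sampling rate, frequency band, noise level, and the idea that a single site carries the “intent” signal are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1_000            # assumed sampling rate, Hz
N_SITES = 16          # DARPA's target number of interface sites
LAG_BUDGET_S = 0.050  # the 50 ms latency ceiling mentioned above

# Simulate 16 noisy channels; only site 3 carries a weak 20 Hz "intent" rhythm.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / FS)
recording = rng.normal(scale=2.0, size=(N_SITES, t.size))
recording[3] += 0.8 * np.sin(2 * np.pi * 20 * t)

# Cheap, causal band-pass filter (8-30 Hz): the kind of per-channel step a
# real-time decoder would need to fit inside the latency budget.
sos = butter(4, [8, 30], btype="bandpass", fs=FS, output="sos")
filtered = sosfilt(sos, recording, axis=-1)

# Crude localization: the site with the most band power "wins."
band_power = (filtered ** 2).mean(axis=-1)
print("strongest site:", int(np.argmax(band_power)),
      "| per-update latency budget:", LAG_BUDGET_S, "s")
```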

In the long run, four years and an average of $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to argue that wireless, non-invasive BMIs won’t also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.

Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.

“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.

The question is, now that we know what’s in store, how should the rest of us prepare?

Image Credit: With permission from DARPA N3 project.

Posted in Human Robots

#435119 Are These Robots Better Than You at ...

Robot technology is evolving at breakneck speed. SoftBank’s Pepper is found in companies across the globe and is rapidly improving its conversation skills. Telepresence robots open up new opportunities for remote working, while Boston Dynamics’ Handle robot could soon (literally) take a load off human colleagues in warehouses.

But warehouses and offices aren’t the only places where robots are lining up next to humans.

Toyota’s Cue 3 robot recently showed off its basketball skills, putting up better numbers than the NBA’s most accurate three-point shooter, the Golden State Warriors’ Steph Curry.

Cue 3 is still some way from being ready to take on Curry, or even amateur basketball players, in a real game. However, it is the latest member of a growing cast of robots challenging human dominance in sports.

As these robots continue to develop, they not only exemplify the speed of exponential technology development, but also how those technologies are improving human capabilities.

Meet the Contestants
The list of robots in sports is surprisingly long and diverse. There are robot skiers, tumblers, soccer players, sumos, and even robot game jockeys. Introductions to a few of them are in order.

Robot: Forpheus
Sport: Table tennis
Intro: Looks like something out of War of the Worlds equipped with a ping pong bat instead of a death ray.
Ability level: Capable of counteracting spin shots and good enough to beat many beginners.

Robot: Sumo bot
Sport: Sumo wrestling
Intro: Hyper-fast, hyper-aggressive. Think the robot equivalent of an angry wasp on six cans of Red Bull crossed with a very small tank.
Ability level: Flies around the ring way faster than any human sumo. Tends to drive straight out of the ring at times.

Robot: Cue 3
Sport: Basketball
Intro: Stands at an imposing 6 feet 10 inches, so pretty much built for the NBA. Looks a bit like something that belongs in a video game.
Ability level: A 62.5 percent three-point shooting percentage, better than Steph Curry’s, but less mobile than Charles Barkley in his current form.

Robot: RoboCup robots
Sport: Soccer
Intro: The future of soccer. If everything goes to plan, a team of robots will take on the Lionel Messis and Cristiano Ronaldos of 2050 and beat them in a full 11 vs. 11 game.
Ability level: Currently plays soccer more like the six-year-olds I used to coach than Lionel Messi.

The Limiting Factor
The skill level of all the robots above is impressive, and they are doing things that no human contestant can. The sumo bots’ inhuman speed is self-evident. Forpheus’ ability to track the ball with two cameras while simultaneously tracking its opponent with two other cameras requires a look at the spec sheet, but is similarly beyond human capability. While Cue 3 can’t move, it makes shots from the mid-court logo look easy.

Robots are performing at a level that was confined to the realm of science fiction at the start of the millennium. The speed of development indicates that in the near future, my national team soccer coach would likely call up a robot instead of me (he must have lost my number since he hasn’t done so yet. It’s the only logical explanation), and he’d be right to do so.

It is also worth considering that many current sports robots have a humanoid form, which limits their ability. If engineers were to optimize robot design to outperform humans in specific categories, many world champions would likely already be metallic.

Swimming is perhaps one of the most obvious. Even Michael Phelps would struggle to keep up with a torpedo-shaped robot, and if you beefed up a sumo bot to human size, human sumo wrestlers might impress you most by sprinting out of the ring at something close to Usain Bolt’s 100-meter pace.

In other areas, the playing field for humans and robots is rapidly leveling. One likely candidate for the first head-to-head competitions is racing, where self-driving cars from the Roborace League could perhaps soon be ready to race the likes of Lewis Hamilton.

Tech Pushing Humans
Perhaps one of the biggest reasons why it may still take some time for robots to surpass us is that they, along with other exponential technologies, are already making us better at sports.

In Japan, elite volleyball players use a robot to practice their attacks. Some American football players also practice against robot opponents and hone their skills using VR.

On the sidelines, AI is being used to analyze and improve athletes’ performance, and we may soon see the first AI coaches, not to mention referees.

We may even compete in games dreamt up by our electronic cousins. SpeedGate, a new game created by an AI that studied 400 different sports, is a prime example of how quickly that is becoming a possibility.

However, we will likely still need to make the final call on what constitutes a good game. The AI that created SpeedGate reportedly also suggested less suitable pastimes, like underwater parkour and a game that featured exploding frisbees. Both of these could be fun…but only if you’re as sturdy as a robot.

Image Credit: RoboCup Standard Platform League 2018, ©The Robocup Federation. Published with permission of reproduction granted by the RoboCup Federation.

Posted in Human Robots

#435023 Inflatable Robot Astronauts and How to ...

The typical cultural image of a robot—as a steel, chrome, humanoid bucket of bolts—is often far from the reality of cutting-edge robotics research. There are difficulties, both social and technological, in realizing the image of a robot from science fiction—let alone one that can actually help around the house. Often, the great expense of producing a humanoid robot that performs dozens of tasks quite badly is simply less appropriate than producing some other design optimized for a specific situation.

A team of scientists from Brigham Young University has received funding from NASA to investigate an inflatable robot called, improbably, King Louie. The robot was developed by Pneubotics, who have a long track record in the world of soft robotics.

In space, weight is at a premium. The world watched in awe and amusement when Commander Chris Hadfield sang “Space Oddity” from the International Space Station—but launching that guitar into space likely cost around $100,000. A good price for launching payload into outer space is on the order of $10,000 per pound ($22,000/kg).

For that price, it would cost a cool $1.7 million to launch Boston Dynamics’ famous ATLAS robot to the International Space Station, and its bulk would be inconvenient in the cramped living quarters available. By contrast, an inflatable robot like King Louie is substantially lighter and can simply be deflated and folded away when not in use. The robot can be manufactured from cheap, lightweight, and flexible materials, and minor damage is easy to repair.
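
The arithmetic behind those numbers is easy to reproduce. The sketch below uses the roughly $22,000-per-kilogram rate quoted above; the payload masses are loose assumptions for illustration, not official specifications.

```python
# Back-of-the-envelope launch costs at roughly $22,000 per kilogram.
COST_PER_KG = 22_000

payloads_kg = {
    "acoustic guitar (with case)": 4.5,  # assumed mass
    "ATLAS humanoid robot": 80.0,        # assumed mass, near its published weight
    "deflated inflatable robot": 5.0,    # assumed mass: fabric, valves, tubing
}

for name, mass in payloads_kg.items():
    print(f"{name:<30} ~${mass * COST_PER_KG:,.0f} to orbit")
```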

Inflatable Robots Under Pressure
The concept of inflatable robots is not new: indeed, earlier prototypes of King Louie were exhibited back in 2013 at Google I/O’s After Hours, flailing away at each other in a boxing ring. Sparks might fly in fights between traditional robots, but the aim here was to demonstrate that the robots are passively safe: the soft, inflatable figures won’t accidentally smash delicate items when moving around.

Health and safety regulations form part of the reason why robots don’t work alongside humans more often, but soft robots would be far safer to use in healthcare or around children (whose first instinct, according to BYU’s promotional video, is either to hug or punch King Louie). It’s also much harder to have nightmarish fantasies about robotic domination with these friendlier softbots: Terminator would’ve been a much shorter franchise if Skynet’s droids were inflatable.

Robotic exoskeletons are increasingly used for physical rehabilitation therapies, as well as for industrial purposes. As countries like Japan seek to care for their aging populations with robots and alleviate the burden on nurses, who suffer from some of the highest rates of back injuries of any profession, soft robots will become increasingly attractive for use in healthcare.

Precision and Proprioception
The main issue is one of control. Rigid, metallic robots may be more expensive and more dangerous, but the simple fact of their rigidity makes it easier to map out and control the precise motions of each of the robot’s limbs, digits, and actuators. Individual motors attached to these rigid robots can allow for a great many degrees of freedom—individual directions in which parts of the robot can move—and precision control.

For example, ATLAS has 28 degrees of freedom, while Shadow’s dexterous robot hand alone has 20. This is much harder to do with an inflatable robot, for precisely the same reasons that make it safer. Without hard and rigid bones, other methods of control must be used.

In the case of King Louie, the robot is made up of many expandable air chambers. An air compressor changes the pressure levels in these air chambers, allowing them to expand and contract. This harks back to some of the earliest pneumatic automata. Pairs of chambers act antagonistically, like muscles, such that when one chamber “tenses,” another relaxes—allowing King Louie to have, for example, four degrees of freedom in each of its arms.
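
A minimal sketch of that antagonistic idea, with invented constants rather than Pneubotics’ or BYU’s actual control values: treat the pressure difference across a chamber pair as the command that moves a joint, and the shared baseline pressure as what sets its stiffness.

```python
def chamber_pressures(target_angle_deg, baseline_kpa=60.0,
                      gain_kpa_per_deg=1.5, p_min=10.0, p_max=200.0):
    """Return (flexor, extensor) pressures in kPa for one antagonistic pair.

    All constants here are illustrative assumptions, not measured values.
    """
    delta = gain_kpa_per_deg * target_angle_deg          # difference drives motion
    flexor = min(max(baseline_kpa + delta / 2, p_min), p_max)
    extensor = min(max(baseline_kpa - delta / 2, p_min), p_max)
    return flexor, extensor

for angle in (-30, 0, 30, 60):
    f, e = chamber_pressures(angle)
    print(f"target {angle:>4} deg -> flexor {f:6.1f} kPa, extensor {e:6.1f} kPa")
```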

The robot is also surprisingly strong. Professor Killpack, who works at BYU on the project, estimates that its payload is comparable to other humanoid robots on the market, like Rethink Robotics’ Baxter (RIP).

Proprioception, that sixth sense that allows us to map out and control our own bodies and muscles in fine detail, is being enhanced for a wider range of soft, flexible robots with the use of machine learning algorithms connected to input from a whole host of sensors on the robot’s body.

Part of the reason this is so complicated with soft, flexible robots is that the shape and “map” of the robot’s body can change; that’s the whole point. But this means that every time King Louie is inflated, its body is a slightly different shape; when it becomes deformed, for example due to picking up objects, the shape changes again. The complex ways in which the fabric can twist and bend are far more difficult to model and sense than the behavior of King Louie’s rigid metal counterparts. When you’re looking for precision, seemingly small changes can be the difference between successfully holding an object or dropping it.

Learning to Move
Researchers at BYU are therefore spending a great deal of time on how to control the soft-bot enough to make it comparably useful. One method involves the commercial tracking technology used in the Vive VR system: by moving the game controller, which provides constant feedback to the robot’s arm, you can control its position. Since the tracking software provides an estimate of the robot’s joint angles and continues to provide feedback until the arm is correctly aligned, this type of feedback method is likely to work regardless of small changes to the robot’s shape.
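
At its core this is a plain feedback loop: estimate the arm’s pose from the tracker, compare it with the target, and keep issuing corrections until the error is small. The toy sketch below assumes a four-joint arm that responds perfectly to each correction, which the real soft arm certainly does not.

```python
import numpy as np

target = np.array([0.00, 0.00, 0.50, 0.20])     # desired joint angles (rad), assumed
estimate = np.array([0.10, -0.25, 0.40, 0.00])  # tracker's current estimate, assumed
gain = 0.5                                      # proportional gain

for step in range(50):
    error = target - estimate
    if np.linalg.norm(error) < 1e-3:
        print(f"aligned after {step} corrections")
        break
    correction = gain * error         # stands in for a pressure/valve adjustment
    estimate = estimate + correction  # pretend the arm and tracker respond exactly
```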

The other technologies the researchers are looking into for their softbot include arrays of flexible, tactile sensors to place on the softbot’s skin, and minimizing the complex cross-talk between these arrays to get coherent information about the robot’s environment. As with some of the new proprioception research, the project is looking into neural networks as a means of modeling the complicated dynamics—the motion and response to forces—of the softbot. This method relies on large amounts of observational data, mapping how the robot is inflated and how it moves, rather than explicitly understanding and solving the equations that govern its motion—which hopefully means the methods can work even as the robot changes.
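
Here is a deliberately tiny sketch of that data-driven approach, with made-up dynamics and a small hand-rolled network rather than anything resembling the BYU models: generate observations of (current angle, pressure command) mapped to the next angle, then fit a one-hidden-layer network to them instead of deriving the equations of motion.

```python
import numpy as np

# Invented "true" dynamics, used only to generate observation data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))            # columns: [angle, pressure command]
y = 0.8 * X[:, 0] + 0.3 * np.tanh(2.0 * X[:, 1])  # unknown-to-us next angle
y = (y + rng.normal(scale=0.01, size=y.shape)).reshape(-1, 1)

# One hidden layer, trained by plain batch gradient descent on squared error.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("mean absolute prediction error:", float(np.mean(np.abs(pred - y))))
```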

There’s still a long way to go before soft and inflatable robots can be controlled sufficiently well to perform all the tasks they might be used for. Ultimately, no one robotic design is likely to be perfect for any situation.

Nevertheless, research like this gives us hope that one day, inflatable robots could be useful tools, or even companions, at which point the advertising slogans write themselves: Don’t let them down, and they won’t let you down!

Image Credit: Brigham Young University.

Posted in Human Robots

#434843 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team With Back-to-Back Victories
Nick Statt | The Verge
“…[OpenAI cofounder and CEO, Sam Altman] tells me there probably does not exist a video game out there right now that a system like OpenAI Five can’t eventually master at a level beyond human capability. For the broader AI industry, mastering video games may soon become passé, simple table stakes required to prove your system can learn fast and act in a way required to tackle tougher, real-world tasks with more meaningful benefits.”

ROBOTICS
Boston Dynamics Debuts the Production Version of SpotMini
Brian Heater, Catherine Shu | TechCrunch
“SpotMini is the first commercial robot Boston Dynamics is set to release, but as we learned earlier, it certainly won’t be the last. The company is looking to its wheeled Handle robot in an effort to push into the logistics space. It’s a super-hot category for robotics right now. Notably, Amazon recently acquired Colorado-based start up Canvas to add to its own arm of fulfillment center robots.”

NEUROSCIENCE
Scientists Restore Some Brain Cell Functions in Pigs Four Hours After Death
Joel Achenbach | The Washington Post
“The ethicists say this research can blur the line between life and death, and could complicate the protocols for organ donation, which rely on a clear determination of when a person is dead and beyond resuscitation.”

BIOTECH
How Scientists 3D Printed a Tiny Heart From Human Cells
Yasmin Saplakoglu | Live Science
“Though the heart is much smaller than a human’s (it’s only the size of a rabbit’s), and there’s still a long way to go until it functions like a normal heart, the proof-of-concept experiment could eventually lead to personalized organs or tissues that could be used in the human body…”

SPACE
The Next Clash of Silicon Valley Titans Will Take Place in Space
Luke Dormehl | Digital Trends
“With bold plans that call for thousands of new satellites being put into orbit and astronomical costs, it’s going to be fascinating to observe the next phase of the tech platform battle being fought not on our desktops or mobile devices in our pockets, but outside of Earth’s atmosphere.”

FUTURE HISTORY
The Images That Could Help Rebuild Notre-Dame Cathedral
Alexis C. Madrigal | The Atlantic
“…in 2010, [Andrew] Tallon, an art professor at Vassar, took a Leica ScanStation C10 to Notre-Dame and, with the assistance of Columbia’s Paul Blaer, began to painstakingly scan every piece of the structure, inside and out. …Over five days, they positioned the scanner again and again—50 times in all—to create an unmatched record of the reality of one of the world’s most awe-inspiring buildings, represented as a series of points in space.”

AUGMENTED REALITY
Mapping Our World in 3D Will Let Us Paint Streets With Augmented Reality
Charlotte Jee | MIT Technology Review
“Scape wants to use its location services to become the underlying infrastructure upon which driverless cars, robotics, and augmented-reality services sit. ‘Our end goal is a one-to-one map of the world covering everything,’ says Miller. ‘Our ambition is to be as invisible as GPS is today.’”

Image Credit: VAlex / Shutterstock.com

Posted in Human Robots

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots have seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
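
The “random noise” trick is usually called domain randomization: every simulated training episode draws slightly different physics, so the learned policy cannot lean on any single, perfectly calibrated world. The sketch below is a generic illustration with invented parameter names and ranges, not OpenAI’s actual configuration.

```python
import random

def randomized_sim_params():
    """Sample a fresh set of (invented) physics parameters for one episode."""
    return {
        "object_mass_kg": random.uniform(0.03, 0.30),
        "finger_friction": random.uniform(0.5, 1.5),
        "actuation_delay_ms": random.uniform(0.0, 40.0),
        "camera_offset_px": random.gauss(0.0, 3.0),
    }

for episode in range(3):
    params = randomized_sim_params()
    # run_training_episode(policy, params)  # placeholder for the actual RL step
    print(f"episode {episode}: {params}")
```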

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).
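
A highly simplified sketch of the keypoint idea, inspired by but not reproducing the MIT approach: if a detector returns a handful of 3D keypoints on an object, a grasp can be defined relative to those points alone, with no full 3D model required. The coordinates and the top-down approach direction below are invented for illustration.

```python
import numpy as np

# Hypothetical detected keypoints on a mug: two handle points and the rim center.
keypoints = np.array([
    [0.42, 0.10, 0.05],
    [0.44, 0.14, 0.05],
    [0.40, 0.12, 0.12],
])

grasp_point = keypoints[:2].mean(axis=0)   # pinch midway between the handle points
close_axis = keypoints[1] - keypoints[0]   # gripper jaws close along this axis
close_axis /= np.linalg.norm(close_axis)
approach = np.array([0.0, 0.0, -1.0])      # assumed simple top-down approach

print("grasp point:", grasp_point)
print("close along:", np.round(close_axis, 3), "approach:", approach)
```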

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

Posted in Human Robots