
#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempt to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.
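The mechanics of the experiment are simple to sketch: buffer every incoming frame and release it only after a fixed number of newer frames have arrived. This is an illustrative sketch of a delay buffer, not the rig Umeå actually built:

```python
from collections import deque

DELAY_FRAMES = 90  # roughly 3 seconds at 30 frames per second

def delayed_feed(frames, delay=DELAY_FRAMES):
    """Yield each frame only after `delay` newer frames have arrived."""
    buffer = deque()
    for frame in frames:
        buffer.append(frame)
        if len(buffer) > delay:
            yield buffer.popleft()

# The wearer at frame N sees what the camera captured at frame N - 90.
```

Everything the volunteer perceives is real, just stale, which is why even walking becomes slapstick.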

Over interplanetary distances, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
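Those figures follow directly from dividing distance by the speed of light. A quick check, using round approximate values for the closest and farthest Earth–Mars separations (not ephemeris data):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_delay_minutes(distance_km):
    """One-way light travel time in minutes over a given distance."""
    return distance_km / C_KM_PER_S / 60

closest_km = 54.6e6    # approximate closest approach
farthest_km = 401.0e6  # approximate maximum separation

print(round(one_way_delay_minutes(closest_km), 1))   # ~3.0 minutes
print(round(one_way_delay_minutes(farthest_km), 1))  # ~22.3 minutes
```

And that is one way: a question-and-answer exchange doubles it.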

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
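The idea can be sketched as a queue of named task modules, each of which expands into primitive motions the robot executes locally, with no round-trip to Earth per step. This is an illustrative data structure only; SE4 has not published its internal format, so every name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TaskModule:
    """One logical unit of work, e.g. 'stack block A on block B'."""
    name: str
    primitives: list = field(default_factory=list)  # ordered micro-movements

def execute_plan(plan, act):
    """Run every module in order, applying `act` to each primitive locally."""
    log = []
    for module in plan:
        for primitive in module.primitives:
            act(primitive)  # executed on-site, no Earth round-trip per step
            log.append((module.name, primitive))
    return log  # a record that can be reported back to Earth in one batch

plan = [
    TaskModule("pick_block", ["open_gripper", "move_to_block", "close_gripper"]),
    TaskModule("place_block", ["move_to_target", "open_gripper", "retract"]),
]
history = execute_plan(plan, act=lambda p: None)
```

The point of the structure is that the whole plan travels once, and only the outcome travels back.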
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.

Posted in Human Robots

#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink, among other players, is actively pursuing these bi-directional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects, and between soft and hard ones, while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves that transmit signals far faster than our biological ones, the flexible e-skin shoots electrical data 1,000 times quicker than human nerves.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients hoping to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms have often been paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; while in full contact, the signal is much lower. The resulting electrical strength curve resembles a “U.”
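The difference between the two schemes can be sketched numerically: the proportional scheme tracks grip force, while the biomimetic one spikes at contact onset and release and settles to a low level during steady contact, tracing the “U.” The functions and gain constants below are schematic illustrations, not the published stimulation model:

```python
def standard_feedback(force, gain=1.0):
    # Stimulation proportional to contact force
    return gain * force

def biomimetic_feedback(force, prev_force, onset_gain=5.0, sustain_gain=0.2):
    # Strong burst on any change in contact (touch or release),
    # weak sustained signal while the grip is steady
    transient = onset_gain * abs(force - prev_force)
    sustained = sustain_gain * force
    return max(transient, sustained)

# A simple grip profile: touch, hold, release
forces = [0.0, 1.0, 1.0, 1.0, 0.0]
biomimetic = [biomimetic_feedback(f, p) for p, f in zip(forces, forces[1:])]
# Spikes at touch and release, low while holding: [5.0, 0.2, 0.2, 5.0]
```

Under the proportional scheme the same grip would produce a flat plateau instead of the two spikes.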

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicon, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
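The decoding side can be sketched as signature matching: each sensor stamps its events with a unique pulse pattern, all patterns share one line, and the receiver recovers which sensors fired by checking each known signature against the merged stream. This toy version uses sets of pulse timestamps in place of real analog pulses; the scheme, not this code, is what the ACES paper describes:

```python
# Each sensor is assigned a unique set of pulse offsets (its "signature").
SIGNATURES = {
    "sensor_A": {0, 3, 7},
    "sensor_B": {1, 4, 8},
    "sensor_C": {2, 5, 9},
}

def fired_sensors(received_pulses, signatures):
    """Identify which sensors fired from one merged pulse stream."""
    return sorted(
        name for name, sig in signatures.items()
        if sig <= received_pulses  # all of this sensor's pulses are present
    )

# Sensors A and C fire at the same time; their pulses merge on one line.
line = SIGNATURES["sensor_A"] | SIGNATURES["sensor_C"]
print(fired_sensors(line, SIGNATURES))  # ['sensor_A', 'sensor_C']
```

Because decoding happens at the receiver, adding sensors means adding signatures, not wires.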

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system still functions even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup could easily scale up. Although the team only tested the idea with 240 sensors, theoretically the system should work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.


#435313 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Microsoft Invests $1 Billion in OpenAI to Pursue Holy Grail of Artificial Intelligence
James Vincent | The Verge
“‘The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,’ said [OpenAI cofounder] Sam Altman. ‘Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.’”

ROBOTICS
UPS Wants to Go Full-Scale With Its Drone Deliveries
Eric Adams | Wired
“If UPS gets its way, it’ll be known for vehicles other than its famous brown vans. The delivery giant is working to become the first commercial entity authorized by the Federal Aviation Administration to use autonomous delivery drones without any of the current restrictions that have governed the aerial testing it has done to date.”

SYNTHETIC BIOLOGY
Scientists Can Finally Build Feedback Circuits in Cells
Megan Molteni | Wired
“Network a few LOCKR-bound molecules together, and you’ve got a circuit that can control a cell’s functions the same way a PID computer program automatically adjusts the pitch of a plane. With the right key, you can make cells glow or blow themselves apart. You can send things to the cell’s trash heap or zoom them to another cellular zip code.”

ENERGY
Carbon Nanotubes Could Increase Solar Efficiency to 80 Percent
David Grossman | Popular Mechanics
“Obviously, that sort of efficiency rating is unheard of in the world of solar panels. But even though a proof of concept is a long way from being used in the real world, any further developments in the nanotubes could bolster solar panels in ways we haven’t seen yet.”

FUTURE
What Technology Is Most Likely to Become Obsolete During Your Lifetime?
Daniel Kolitz | Gizmodo
“Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. …And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.”

NEUROSCIENCE
The Human Brain Project Hasn’t Lived Up to Its Promise
Ed Yong | The Atlantic
“The HBP, then, is in a very odd position, criticized for being simultaneously too grandiose and too narrow. None of the skeptics I spoke with was dismissing the idea of simulating parts of the brain, but all of them felt that such efforts should be driven by actual research questions. …Countless such projects could have been funded with the money channeled into the HBP, which explains much of the furor around the project.”

Image Credit: Aron Van de Pol / Unsplash


#435159 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Can Now Beat Us at Multiplayer Games Too
Cade Metz | The New York Times
“DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.”

ROBOTICS
Tiny Robots Carry Stem Cells Through a Mouse
Emily Waltz | IEEE Spectrum
“Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper, published [May 29] in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.” [Video shows microbots navigating a microfluidic chip. MRI could not be used to image the mouse as the bots navigate magnetically.]

COMPUTING
How a Quantum Computer Could Break 2048-Bit RSA Encryption in 8 Hours
Emerging Technology From the arXiv | MIT Technology Review
“[Two researchers] have found a more efficient way for quantum computers to perform the code-breaking calculations, reducing the resources they require by orders of magnitude. Consequently, these machines are significantly closer to reality than anyone suspected.” [The arXiv is a pre-print server for research that has not yet been peer reviewed.]

AUTOMATION
Lyft Has Completed 55,000 Self Driving Rides in Las Vegas
Christine Fisher | Engadget
“One year ago, Lyft launched its self-driving ride service in Las Vegas. Today, the company announced its 30-vehicle fleet has made 55,000 trips. That makes it the largest commercial program of its kind in the US.”

TRANSPORTATION
Flying Car Startup Alaka’i Bets Hydrogen Can Outdo Batteries
Eric Adams | Wired
“Alaka’i says the final product will be able to fly for up to four hours and cover 400 miles on a single load of fuel, which can be replenished in 10 minutes at a hydrogen fueling station. It has built a functional, full-scale prototype that will make its first flight ‘imminently,’ a spokesperson says.”

ETHICS
The World Economic Forum Wants to Develop Global Rules for AI
Will Knight | MIT Technology Review
“This week, AI experts, politicians, and CEOs will gather to ask an important question: Can the United States, China, or anyone else agree on how artificial intelligence should be used and controlled?”

SPACE
Building a Rocket in a Garage to Take on SpaceX and Blue Origin
Jackson Ryan | CNET
“While billionaire entrepreneurs like SpaceX’s Elon Musk and Blue Origin’s Jeff Bezos push the boundaries of human spaceflight and exploration, a legion of smaller private startups around the world have been developing their own rocket technology to launch lighter payloads into orbit.”

Image Credit: Kevin Crosby / Unsplash


#435145 How Big Companies Can Simultaneously Run ...

We live in the age of entrepreneurs. New startups seem to appear out of nowhere and challenge not only established companies, but entire industries. Where startup unicorns were once mythical creatures, they now seem abundant, increasing not only in number but also in the speed with which they reach the minimum one-billion-dollar valuation that confers the status.

But no matter how well things go for innovative startups, how many new success stories we hear, and how much space they take up in the media, the story that they are the best or only source of innovation isn’t entirely accurate.

Established organizations, or legacy organizations, can be incredibly innovative too. And while innovation is much more difficult in established organizations than in startups, because their systems are far more complex, nobody is better positioned to succeed at innovation than established organizations.

Unlike startups, established organizations have all the resources. They have money, customers, data, suppliers, partners, and infrastructure, which put them in a far better position to transform new ideas into concrete, value-creating, successful offerings than startups.

However, for established organizations, becoming an innovation champion in these times of rapid change requires new rules of engagement.

Many organizations commit the mistake of engaging in innovation as if it were a homogeneous thing that should be approached in the same way every time, regardless of its purpose. In my book, Transforming Legacy Organizations, I argue that innovation in established organizations must actually be divided into three different tracks: optimizing, augmenting, and mutating innovation.

All three are important, and to complicate matters further, organizations must execute all three types of innovation at the same time.

Optimizing Innovation
The first track is optimizing innovation. This type of innovation is the majority of what legacy organizations already do today. It is, metaphorically speaking, the extra blade on the razor. A razor manufacturer might launch a new razor that has not just three, but four blades, to ensure an even better, closer, and more comfortable shave. Then one or two years later, they say they are now launching a razor that has not only four, but five blades for an even better, closer, and more comfortable shave. That is optimizing innovation.

Adding extra blades on the razor is where the established player reigns.

No startup with so much as a modicum of sense would even try to beat the established company at this type of innovation. And this continuous optimization, on both the operational and customer-facing sides, is important in the short term: it pays the rent. But it’s far from enough. There are limits to how many blades a razor needs, and optimizing innovation only improves upon the past.

Augmenting Innovation
Established players must also go beyond optimization and prepare for the future through augmenting innovation.

The digital transformation projects that many organizations are initiating can be characterized as augmenting innovation. In the first instance, it is about upgrading core offerings and processes from analog to digital. Or, if you’re born digital, you’ve probably had to augment the core to become mobile-first. Perhaps you have even entered the next augmentation phase, which involves implementing artificial intelligence. Becoming AI-first, like the Amazons, Microsofts, Baidus, and Googles of the world, requires great technological advancements. And it’s difficult. But technology may, in fact, be a minor part of the task.

The biggest challenge for augmenting innovation is probably culture.

Only legacy organizations that manage to transform their cultures from status quo cultures—cultures with a preference for things as they are—into cultures full of incremental innovators can thrive in constant change.

To create a strong innovation culture, an organization needs to thoroughly understand its immune systems. These are the mechanisms that protect the organization and operate around the clock to keep it healthy and stable, just as the body’s immune system operates to keep the body healthy and stable. But in a rapidly changing world, many of these defense mechanisms are no longer appropriate and risk weakening organizations’ innovation power.

When talking about organizational immune systems, there is a clear tendency to simply point to the individual immune system, people’s unwillingness to change.

But this is too simplistic.

Of course, there is human resistance to change, but the organizational immune system, consisting of a company’s key performance indicators (KPIs), rewards systems, legacy IT infrastructure and processes, and investor and shareholder demands, is far more important. So is the organization’s societal immune system, such as legislative barriers, legacy customers and providers, and economic climate.

Luckily, there are many culture hacks that organizations can apply to strengthen their innovation cultures by upgrading their physical and digital workspaces, transforming their top-down work processes into decentralized, agile ones, and empowering their employees.

Mutating Innovation
Upgrading your core and preparing for the future by augmenting innovation is crucial if you want success in the medium term. But to win in the long run and be as or more successful 20 to 30 years from now, you need to invent the future, and challenge your core, through mutating innovation.

This requires involving radical innovators who have a bold focus on experimenting with that which is not currently understood and for which a business case cannot be prepared.

Here you must also physically move such initiatives away from the core organization when you initiate and run them. This is sometimes called “innovation on the edges,” because the initiatives would not stand a chance within the core: they challenge what currently exists, which is precisely what the majority of the organization’s employees are working to optimize or augment.

Forward-looking organizations experiment to mutate their core through “X divisions,” sometimes called skunk works or innovation labs.

Lowe’s Innovation Labs, for instance, worked with startups to build in-store robot assistants and zero-gravity 3D printers to explore the future. Mutating innovation might include pursuing partnerships across all imaginable domains or establishing brand new companies, rather than traditional business units, as we see automakers such as Toyota now doing to build software for autonomous vehicles. Companies might also engage in radical open innovation by sponsoring others’ ingenuity. Japan’s top airline ANA is exploring a future of travel that does not involve flying people from point A to point B via the ANA Avatar XPRIZE competition.

Increasing technological opportunities challenge the core of any organization but also create unprecedented potential. No matter what product, service, or experience you create, you can’t rest on your laurels. You have to bring yourself to a position where you have a clear strategy for optimizing, augmenting, and mutating your core and thus transforming your organization.

It’s not an easy job. But, hey, if it were easy, everyone would be doing it. Those who make it, on the other hand, will be the innovation champions of the future.

Image Credit: rock-the-stock / Shutterstock.com

