Tag Archives: beyond

#435494 Driverless Electric Trucks Are Coming, ...

Self-driving and electric cars just don’t stop making headlines lately. Amazon invested in self-driving startup Aurora earlier this year. Waymo, Daimler, and GM, along with startups like Zoox, have all launched or are planning to launch driverless taxis, many of them all-electric. People are even yanking driverless cars from their natural habitat—roads—to try to teach them to navigate forests and deserts.

The future of driving, it would appear, is upon us.

But an equally important class of vehicle often gets left out of the conversation: trucks. Their relevance to our day-to-day lives may not be as visible as that of cars, but their impact is more profound than most of us realize.

Two recent developments in trucking point to a future of self-driving, electric semis hauling goods across the country, and likely doing so more quickly, cheaply, and safely than trucks do today.

Self-Driving in Texas
Last week, Kodiak Robotics announced it’s beginning its first commercial deliveries using self-driving trucks on a route from Dallas to Houston. The two cities sit about 240 miles apart, connected primarily by Interstate 45. Kodiak is aiming to expand its reach far beyond the heart of Texas (if Dallas and Houston can be considered the heart, that is) to the state’s most far-flung cities, including El Paso to the west and Laredo to the south.

If self-driving trucks are going to be constrained to staying within state lines (and given that the laws regulating them differ by state, they will be for the foreseeable future), Texas is a pretty ideal option. It’s huge (thousands of miles of highway run both east-west and north-south), it’s warm (better than cold for driverless tech components like sensors), its proximity to Mexico means constant movement of both raw materials and manufactured goods (basically, you can’t have too many trucks in Texas), and most crucially, it’s lax on laws (driverless vehicles have been permitted there since 2017).

Spoiler, though—the trucks won’t be fully unmanned. They’ll have safety drivers to guide them onto and off of the highway, and to be there in case of any unexpected glitches.

California Goes (Even More) Electric
According to some top executives in the rideshare industry, automation is just one key component of the future of driving. Another is electricity replacing gas, and it’s not just carmakers that are plugging into the trend.

This week, Daimler Trucks North America announced completion of its first electric semis for customers Penske and NFI, to be used in the companies’ southern California operations. Scheduled to start operating later this month, the trucks will essentially be guinea pigs for testing integration of electric trucks into large-scale fleets; intel gleaned from the trucks’ performance will impact the design of later models.

Design-wise, the trucks aren’t much different from any other semi you’ve seen lumbering down the highway recently. Their range is about 250 miles—not bad if you think about how much more weight a semi is pulling than a passenger sedan—and they’ve been dubbed eCascadia, an electrified version of Freightliner’s heavy-duty Cascadia truck.

Batteries have a long way to go before they can store enough energy to make electric trucks truly viable (not to mention setting up a national charging infrastructure), but Daimler’s announcement is an important step towards an electrically-driven future.

Keep on Truckin’
Obviously, it’s more exciting to think about hailing one of those cute little Waymo cars with no steering wheel to shuttle you across town than it is to think about that 12-pack of toilet paper you ordered on Amazon cruising down the highway in a semi while the safety driver takes a snooze. But pushing driverless and electric tech in the trucking industry makes sense for a few big reasons.

Trucks mostly run long routes on interstate highways—with no pedestrians, stoplights, or other city-street obstacles to contend with, highway driving is much easier to automate. What glitches there are to be smoothed out may as well be smoothed out with cargo on board rather than people. And though you wouldn’t know it amid the frantic shouts of ‘a robot could take your job!’, the US is actually in the midst of a massive shortage of truck drivers—60,000 short as of earlier this year, to be exact.

As Todd Spencer, president of the Owner-Operator Independent Drivers Association, put it, “Trucking is an absolutely essential, critical industry to the nation, to everybody in it.” Alas, trucks get far less love than cars, but come on—probably 90 percent of the things you ate, bought, or used today were at some point moved by a truck.

Adding driverless and electric tech into that equation, then, should yield positive outcomes on all sides, whether we’re talking about cheaper 12-packs of toilet paper, fewer traffic fatalities due to human error, a less-strained labor force, a stronger economy… or something pretty cool to see as you cruise down the highway in your (driverless, electric, futuristic) car.

Image Credit: Vitpho / Shutterstock.com

Posted in Human Robots

#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink, among other players, is actively pursuing these bi-directional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects, and between soft and hard ones, while blindfolded and wearing headphones.

In the other, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, their engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves, the flexible e-skin transmits electrical data roughly 1,000 times faster than human nerves.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients hoping to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms were often paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; while in full contact, the signal is much lower. Plotted over the course of a grasp, the stimulation strength resembles a “U.”
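The contrast between the two encodings can be sketched in a few lines of code. This is an illustrative toy, not the study’s actual stimulation model; the gains and the force trace are invented numbers chosen only to produce the “U”-shaped profile described above:

```python
def linear_encoding(forces, gain=1.0):
    """Standard scheme: stimulation amplitude proportional to contact force."""
    return [gain * f for f in forces]

def biomimetic_encoding(forces, onset_gain=5.0, sustain_gain=0.2):
    """Biomimetic scheme: strong bursts whenever contact is made or broken,
    weak sustained output while the grip is steady -- a 'U' shape."""
    out, prev = [], 0.0
    for f in forces:
        transient = abs(f - prev) * onset_gain  # spike at contact on/offset
        sustained = f * sustain_gain            # low steady-state signal
        out.append(transient + sustained)
        prev = f
    return out

# A simple grasp: touch, hold, release.
forces = [0, 0, 1, 1, 1, 1, 0, 0]
print(biomimetic_encoding(forces))  # high at touch and release, low in between
```

The linear encoder peaks while the grip is firmest; the biomimetic one peaks at the moments of contact change, which is the pattern the Utah team found the brain accepts far more readily.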

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicone, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from the distributed sensors go independently to a single receiver for further processing, massively increasing efficiency: the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system still functions even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup can easily scale up. Although the team only tested the idea with 240 sensors, the system should theoretically work with up to 10,000.
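The “many sensors, one receiver, unique pulse signatures” idea can be illustrated with a toy model. Real ACES sensors emit asynchronous analog pulse trains; this sketch substitutes orthogonal digital codes (Walsh–Hadamard rows) so the decoding step is easy to follow:

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two);
    its rows are mutually orthogonal +/-1 codes."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

N_SENSORS = 16
signatures = hadamard(N_SENSORS)  # one unique code per sensor

def transmit(active):
    """Every active sensor fires onto one shared line; the receiver sees
    only the superposition (sum) of their pulse signatures."""
    return [sum(signatures[s][i] for s in active) for i in range(N_SENSORS)]

def decode(line):
    """The artificial ganglion correlates the shared line against each
    known signature to recover exactly which sensors fired."""
    return [s for s in range(N_SENSORS)
            if sum(a * b for a, b in zip(line, signatures[s])) > 0]

print(decode(transmit([2, 7, 11])))  # -> [2, 7, 11]
```

Because the codes are orthogonal, simultaneous events never collide at the receiver, which is what lets one line serve hundreds of sensors and keeps the system decoding correctly even when some sensors drop out.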

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.


#435260 How Tech Can Help Curb Emissions by ...

Trees are a low-tech, high-efficiency way to offset much of humankind’s negative impact on the climate. What’s even better, we have plenty of room for a lot more of them.

A new study conducted by researchers at Switzerland’s ETH-Zürich, published in Science, details how Earth could support almost an additional billion hectares of trees without the new forests pushing into existing urban or agricultural areas. Once the trees grow to maturity, they could store more than 200 billion metric tons of carbon.

Great news indeed, but it still leaves us with some huge unanswered questions. Where and how are we going to plant all the new trees? What kind of trees should we plant? How can we ensure that the new forests become a boon for people in those areas?

Answers to all of the above likely involve technology.

Math + Trees = Challenges
The ETH-Zürich research team combined Google Earth mapping software with a database of nearly 80,000 existing forests to create a predictive model for optimal planting locations. In total, 0.9 billion hectares of new, continuous forest could be planted. Once mature, the 500 billion new trees in these forests would be capable of storing about two-thirds of the carbon we have emitted since the industrial revolution.

Other researchers have noted that the study may overestimate how efficient trees are at storing carbon, as well as underestimate how much carbon humans have emitted over time. However, all seem to agree that new forests would offset much of our cumulative carbon emissions—still an impressive feat as the target of keeping global warming this century at under 1.5 degrees Celsius becomes harder and harder to reach.

Recently, there was a story about a Brazilian couple who replanted trees in the valley where they live, planting about 2.7 million trees over two decades. Back-of-the-napkin math shows that works out to roughly 370 trees a day; at that rate, one couple would need about 3.7 million years to plant 500 billion trees, and even a million people planting at the same pace would need nearly four years of uninterrupted work. While an over-simplification, the point is that planting trees by hand is not realistic. Current technologies are also not likely to be able to meet the challenge, especially in remote locations.
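The back-of-the-napkin arithmetic is easy to reproduce (only the 2.7 million trees, the 20 years, and the 500 billion target come from the article; the rest is derived):

```python
trees_planted = 2_700_000
years = 20
per_day = trees_planted / (years * 365)
print(round(per_day))  # -> 370 trees a day

target = 500_000_000_000  # the 500 billion trees from the ETH study
years_alone = target / per_day / 365
print(f"{years_alone / 1e6:.1f} million years")  # -> 3.7 million years

# Even a million people planting at the couple's pace would need
# years of uninterrupted work:
people = 1_000_000
print(f"{target / (per_day * people) / 365:.1f} years")  # -> 3.7 years
```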

Tree-Bombing Drones
Technology can speed up the planting process, including a new generation of drones that take tree planting to the skies. Drone planting generally involves dropping biodegradable seed pods at a designated area. The pods dissolve over time, and the tree seeds grow in the earth below. DroneSeed is one example; its 55-pound drones can plant up to 800 seeds an hour. Another startup, Biocarbon Engineering, has used various techniques, including drones, to plant 38 different species of trees across three continents.

Drone planting has distinct advantages when it comes to planting in hard-to-access areas—one example is mangrove forests, which are disappearing rapidly, increasing the risk of floods and storm surges.

Challenges include increasing the range and speed of drone planting and, perhaps most importantly, the success rate, since automated seeding from altitude offers less control over the depth at which saplings take root. However, drones are already showing impressive sapling survival rates.

AI, Sensors, and Eye-In-the-Sky
Planting the trees is the first step in a long road toward an actual forest. Companies are leveraging artificial intelligence and satellite imagery in a multitude of ways to increase protection and understanding of forested areas.

20tree.ai, a Portugal-based startup, uses AI to analyze satellite imagery and monitor the state of entire forests at a fraction of the cost of manual monitoring. The approach can lead to faster identification of threats like pest infestation and a better understanding of the state of forests.

AI can also play a pivotal role in protecting existing forest areas by predicting where deforestation is likely to occur.

Closer to the ground—and sometimes in it—new networks of sensors can provide detailed information about the state and needs of trees. One such project is Trace, where individual trees are equipped with a TreeTalker, an internet of things-based device that can provide real-time monitoring of the tree’s functions and well-being. The information can be used to, among other things, optimize the use of available resources, such as providing the exact amount of water a tree needs.

Budding Technologies Are Controversial
Trees are in many ways flora’s marathon runners—slow-growing and sturdy, but still susceptible to sickness and pests. Many deforested areas are likely not as rich in nutrients as they once were, which could slow down reforestation. Much of the positive impact that new trees could have on carbon levels in the atmosphere is likely decades away.

Bioengineering, for example through CRISPR, could provide solutions, making trees more resistant and faster-growing. Such technologies are being explored in relation to Ghana’s at-risk cocoa trees. Other exponential technologies could also hold much future potential—for instance micro-robots to assist the dwindling number of bees with pollination.

These technologies remain mired in controversy, and perhaps rightfully so. Bioengineering’s massive potential is for many offset by the inherent risks of engineered plants out-competing existing flora or growing beyond our control. Micro-robots for pollination may solve a problem, but don’t do much to address the root cause: that we seem to be disrupting and destroying integral parts of natural cycles.

Tech Not The Whole Answer
So, is it realistic to plant 500 billion new trees? The short answer would be that yes, it’s possible—with the help of technology.

However, there are many unanswered challenges. For example, many of the areas identified by the ETH-Zürich research team are not readily available for reforestation. Some are currently reserved for grazing, others are owned by private entities, and still others are located in remote areas or regions prone to political instability, beyond the reach of most replanting efforts.

If we do wish to plant 500 billion trees to offset some of the negative impact we have had on the planet, we may well need to combine the best of exponential technology with reforestation, along with a shift toward other forms of agriculture.

Such an approach might also help address a major issue: that few of the proposed new forests will likely succeed without ensuring that people living in and around the areas where reforestation takes place become involved, and can reap rewards from turning arable land into forests.

Image Credit: Lillac/Shutterstock.com


#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. In the meantime, humans continue to be superior when it comes to abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She is able to draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition will serve as a stimulant for engaging with critical questions about what kind of future we ought to create via such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by a deep learning neural network trained on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”
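As a drastically simplified stand-in for what Devlin describes (PoemPortraits itself uses a deep neural network, and the tiny corpus below is invented), a word-level Markov chain shows the core idea: build transition statistics from the training text, then emit original sequences in its style instead of copying lines verbatim:

```python
import random
from collections import defaultdict

corpus = ("the stars adorn the silent night and the night "
          "adorns the secret stars").split()

# Gather word-transition statistics from the training text.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

def generate(seed_word, length=8, rng=random.Random(42)):
    """Walk the transition table to produce a new phrase in the
    corpus's style."""
    word, line = seed_word, [seed_word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        line.append(word)
    return " ".join(line)

print(generate("the"))  # an original phrase built only from learned transitions
```

A neural model replaces the lookup table with a learned statistical function over far longer contexts, but the generate-from-statistics principle is the same.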

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s a collaborative effort: all of the generated lines of poetry are combined into a constantly growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration between people from around the world and algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition at HG Commentary showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was presented with 100,000 photos of Western art from over five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew from this historical knowledge and the mandate to create something new to create an artwork without human intervention.

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods, and the scores it generates can be hard to distinguish from those written by talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-generated score and select the temp track to base the composition process on. Another method involves using preset algorithms to compose music in pre-defined styles, including everything from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of the creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines can perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used our technological scaffolding to push ourselves beyond our biological limitations. We use the telescope to extend our line of sight, planes to fly, and smartphones to connect with others. Our machines are not always working against us, but rather working as an extension of our minds. Similarly, we could use our machines to expand on our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.


#435167 A Closer Look at the Robots Helping Us ...

Buck Rogers had Twiki. Luke Skywalker palled around with C-3PO and R2-D2. And astronauts aboard the International Space Station (ISS) now have their own robotic companions in space—Astrobee.

A pair of the cube-shaped robots were launched to the ISS during an April re-supply mission and are currently being commissioned for use on the space station. The free-flying space robots, dubbed Bumble and Honey, are the latest generation of robotic machines to join the human crew on the ISS.

Exploration of the solar system and beyond will require autonomous machines that can assist humans with numerous tasks—or go where we cannot. NASA has said repeatedly that robots will be instrumental in future space missions to the moon, Mars, and even to the icy moon Europa.

The Astrobee robots will specifically test robotic capabilities in zero gravity, replacing the SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellite) robots that have been on the ISS for more than a decade to test various technologies ranging from communications to navigation.

The 18-sided SPHERES robots, each about the size of a volleyball or an oversized Dungeons and Dragons die, use CO2-based cold-gas thrusters for movement and a series of ultrasonic beacons for orientation. The Astrobee robots, by contrast, propel themselves autonomously around the interior of the ISS using electric fans and navigate with six cameras.

The modular design of the Astrobee robots means they are highly plug-and-play, capable of being reconfigured with different hardware modules. The robots’ software is also open-source, encouraging scientists and programmers to develop and test new algorithms and features.

And, yes, the Astrobee robots will be busy as bees once they are fully commissioned this fall, with experiments planned to begin next year. Scientists hope to learn more about how robots can assist space crews and perform caretaking duties on spacecraft.

Robots Working Together
The Astrobee robots are expected to be joined by a familiar “face” on the ISS later this year—the humanoid robot Robonaut.

Robonaut, also known as R2, was the first US-built robot on the ISS. It joined the crew back in 2011 without legs, which were added in 2014. However, the installation never entirely worked, as R2 experienced power failures that eventually led to its return to Earth last year to fix the problem. If all goes as planned, the space station’s first humanoid robot will return to the ISS to lend a hand to the astronauts and the new robotic arrivals.

In particular, NASA is interested in how the two different robotic platforms can complement each other, with an eye toward outfitting the agency’s proposed lunar orbital space station with various robots that can supplement a human crew.

“We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations,” Astrobee technical lead Trey Smith of the NASA Intelligent Robotics Group told IEEE Spectrum. “And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.”

While the focus on R2 has been to test its capabilities in zero gravity and to use it for mundane or dangerous tasks in space, the technology enabling the humanoid robot has proven to be equally useful on Earth.

For example, R2 has amazing dexterity for a robot, with sensors, actuators, and tendons comparable to the nerves, muscles, and tendons in a human hand. Based on that design, engineers are working on a robotic glove that can help factory workers, for instance, do their jobs better while reducing the risk of repetitive injuries. R2 has also inspired development of a robotic exoskeleton for both astronauts in space and paraplegics on Earth.

Working Hard on Soft Robotics
While innovative and technologically sophisticated, Astrobee and Robonaut are typical robots in that neither one would do well in a limbo contest. In other words, most robots are limited in their flexibility and agility based on current hardware and materials.

A subfield of robotics known as soft robotics involves developing robots with highly pliant materials that mimic biological organisms in how they move. Scientists at NASA’s Langley Research Center are investigating how soft robots could help with future space exploration.

Specifically, the researchers are looking at a series of properties to understand how actuators—components responsible for moving a robotic part, such as Robonaut’s hand—can be built and used in space.

The team first 3D prints a mold and then pours a flexible material like silicone into it. Air bladders or chambers inside the resulting actuator expand and contract to produce movement, powered by nothing but air.

Some of the first applications of soft robotics sound more tool-like than R2-D2-like. For example, two soft robots could connect to produce a temporary shelter for astronauts on the moon or serve as an impromptu wind shield during one of Mars’ infamous dust storms.

The idea is to use soft robots in situations that are “dangerous, dirty, or dull,” according to Jack Fitzpatrick, a NASA intern working on the soft robotics project at Langley.

Working on Mars
Of course, space robots aren’t only designed to assist humans. In many instances, they are the only option to explore even relatively close celestial bodies like Mars. Four American-made robotic rovers have been used to investigate the fourth planet from the sun since 1997.

Opportunity is perhaps the most famous, covering about 25 miles of terrain across Mars over 15 years. A dust storm knocked it out of commission last year, with NASA officially ending the mission in February.

However, the biggest and baddest of the Mars rovers, Curiosity, is still crawling across the Martian surface, sending back valuable data since 2012. The car-size robot carries 17 cameras, a laser to vaporize rocks for study, and a drill to collect samples. It is on the hunt for signs of biological life.

The next year or two could see a virtual traffic jam of robots to Mars. NASA’s Mars 2020 Rover is next in line to visit the Red Planet, sporting scientific gadgets like an X-ray fluorescence spectrometer for chemical analyses and ground-penetrating radar to see below the Martian surface.

This diagram shows the instrument payload for the Mars 2020 mission. Image Credit: NASA.
Meanwhile, the Europeans have teamed with the Russians on a rover called Rosalind Franklin, named after a famed British chemist, that will drill down into the Martian ground for evidence of past or present life as soon as 2021.

The Chinese are also preparing to begin searching for life on Mars using robots as soon as next year, as part of the country’s Mars Global Remote Sensing Orbiter and Small Rover program. The mission is scheduled to be the first in a series of launches that would culminate with bringing samples back from Mars to Earth.

Perhaps no utterance in the universe of science fiction is more famous than “to boldly go where no one has gone before.” The fact remains, however, that human exploration of the solar system and beyond will only be possible with robots of different sizes, shapes, and sophistication.

Image Credit: NASA.
