Tag Archives: robotic arm
#434827 AI and Robotics Are Transforming ...
During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.
In this blog, I’ll explore how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief: how we can prevent disasters in the first place, and how we can get help to victims during that first ‘golden hour,’ when immediate relief can save lives.
Here are the three areas of greatest impact:
AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply
Let’s dive in!
Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.
Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.
Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.
Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.
With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.
This is where AI comes in: our mining mechanism.
In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?
Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.
Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.
Partnering with the cities of Los Angeles and San Francisco, as well as numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. By building robust models of each system, One Concern’s AI platform can then monitor the site-specific impacts not only of climate change but of each individual natural disaster, from sweeping thermal shifts to seismic movement.
This data, combined with data on city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.
Within just four years of its founding, One Concern can now make damage predictions with an 85 percent accuracy rate in under 15 minutes.
And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.
Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.
One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”
Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.
Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.
One of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable for crowdsourcing witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.
As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.
Imagining the Future…
Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.
As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
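To make the idea concrete, here is a toy sketch of how a broadcast might propagate across such a mesh: each device relays to its neighbors until every reachable node has a copy. The topology, node names, and flooding scheme are invented for illustration, not drawn from any particular mesh protocol.

```python
from collections import deque

def flood(topology: dict, source: str) -> set:
    """Breadth-first relay: return the set of nodes a broadcast reaches."""
    reached, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in topology[node]:
            if neighbor not in reached:  # each device relays a message only once
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# Phones A through E form a mesh; only A has a backhaul connection,
# yet an alert originating at A still reaches E via B and C.
mesh = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
        "D": ["B"], "E": ["C"]}
print(flood(mesh, "A"))  # {'A', 'B', 'C', 'D', 'E'}
```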
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
Imagine a scenario in which armed attacks break out across disjointed urban districts. Each cluster of eyewitnesses and at-risk civilians broadcasts an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.
Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly layered data informs first responders and AI platforms of relief needs with remarkable accuracy and specificity.
By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
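None of this infrastructure exists end-to-end today, but the logic of such a pipeline is simple enough to sketch. Everything below (the report fields, the verification threshold, the dispatch step) is hypothetical, meant only to illustrate the flow just described.

```python
from dataclasses import dataclass

@dataclass
class Report:
    lat: float           # geo-tagged coordinates from the crowdsourced report
    lon: float
    need: str            # e.g. "water", "medical", "rescue"
    sensor_score: float  # agreement with independent sensor data, 0..1
    urgency: float       # model-estimated urgency, 0..1

def verified(report: Report, threshold: float = 0.7) -> bool:
    """Cross-reference crowdsourced intel against sensor data."""
    return report.sensor_score >= threshold

def triage(reports: list) -> list:
    """Keep verified reports and order them by estimated urgency."""
    return sorted((r for r in reports if verified(r)),
                  key=lambda r: r.urgency, reverse=True)

def dispatch(reports: list, approve) -> None:
    """Queue size-appropriate drones for each approved report."""
    for r in triage(reports):
        if approve(r):  # human-in-the-loop approval, as in the scenario above
            print(f"Dispatching drone with {r.need} to ({r.lat}, {r.lon})")

# Example: the verified, urgent medical request is dispatched; the
# unverified water request is filtered out pending confirmation.
dispatch([Report(36.2, 36.1, "medical", 0.9, 0.95),
          Report(36.3, 36.0, "water", 0.4, 0.80)], approve=lambda r: True)
```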
This brings us to a second critical convergence: robots and drones.
While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.
Let’s explore a few of the most disruptive examples to reach the testing phase.
First up…
Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.
Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.
Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.
Initially designed to perform a spectrum of inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even adding a gripping function with enhanced dexterity.
Cheetah III and future versions are aimed at saving lives in almost any environment.
And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns at the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.
Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.
Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.
Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.
As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before a disaster has the chance to spread.
But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.
After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.
But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.
In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.
But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.
Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.
Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.
Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.
But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.
Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.
Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.
As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
And as shown during Hurricane Harvey in Houston, drones can provide a wealth of predictive intel on everything from future flooding to damage estimates.
Among multiple others, a team led by Dr. Robin Murphy, a Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small-scale quadcopters to military-grade unmanned planes. These missions were critical not only for monitoring levee infrastructure but also for identifying those left behind by human rescue teams.
But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.
Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.
Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.
But drone technology is even beginning to transcend the limited scale of medical supplies and food.
Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedentedly long distances.
Built from plywood, Logistic’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster zone mapping.
As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens up no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.
Concluding Thoughts
Perhaps one of the most consequential applications of converging technologies is their transformation of disaster relief methods.
While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.
And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.
Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Arcansel / Shutterstock.com
#434643 Sensors and Machine Learning Are Giving ...
According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.
This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.
Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.
Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.
Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
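For the rigid case, the math really is that straightforward. Here is a minimal sketch (not from any particular robot’s codebase) of how joint-angle readings propagate into an end-effector position for a planar arm; the soft-robot problem is precisely that no such chain of discrete joints exists.

```python
import math

def end_effector(joint_angles: list, link_lengths: list) -> tuple:
    """Planar forward kinematics: accumulate each joint's angle reading
    and walk down the links to find where the tip of the arm sits."""
    x = y = 0.0
    theta = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                 # joint encoders report relative angles
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Two-link arm, both joints bent 45 degrees, links 0.3 m each.
print(end_effector([math.pi / 4, math.pi / 4], [0.3, 0.3]))
```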
New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.
The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training, the robot’s neural network can associate the feedback from the sensors with the position of the finger detected in the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes its soft body can take, translating them into the language of these soft sensors.
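A minimal sketch of that training loop might look like the following, with synthetic stand-in data. The sensor count, network size, and use of scikit-learn are my assumptions for illustration, not the authors’ actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in data: readings from 12 randomly-placed soft sensors, and the
# matching fingertip positions (x, y, z) reported by motion capture.
sensor_readings = rng.normal(size=(2000, 12))
mocap_positions = rng.normal(size=(2000, 3))

# Supervised regression: sensor signals in, mocap-labeled positions out.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(sensor_readings, mocap_positions)

# After training, the motion-capture rig can be discarded: position is
# estimated from the soft sensors alone.
estimated_position = model.predict(sensor_readings[:1])
```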
“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”
The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.
In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.
Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.
Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.
They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then performs a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in its first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
Once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
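In spirit, the approach reduces to three steps: babble, fit a forward model, then search that model for commands that reach a goal. Here is a hedged sketch under that reading; the robot interface, command dimensionality, and brute-force planner are all hypothetical simplifications of the paper’s method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def babble(robot, n_trajectories=1000):
    """Motor babbling: execute random commands, record observed outcomes."""
    commands = np.random.uniform(-1, 1, size=(n_trajectories, 4))
    outcomes = np.array([robot.execute(c) for c in commands])  # hypothetical API
    return commands, outcomes

def fit_self_model(commands, outcomes):
    """Learn command -> outcome with no physics or shape knowledge given."""
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000)
    model.fit(commands, outcomes)
    return model

def plan(model, target, n_candidates=10_000):
    """Pick the command whose predicted outcome lies closest to the target."""
    candidates = np.random.uniform(-1, 1, size=(n_candidates, 4))
    errors = np.linalg.norm(model.predict(candidates) - target, axis=1)
    return candidates[np.argmin(errors)]
```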
Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.
But before they can get out and shape the world, as these studies show, they will need to understand themselves.
Image Credit: jumbojan / Shutterstock.com
#434569 From Parkour to Surgery, Here Are the ...
The robot revolution may not be here quite yet, but our mechanical cousins have made some serious strides. And now some of the leading experts in the field have provided a rundown of what they see as the 10 most exciting recent developments.
Compiled by the editors of the journal Science Robotics, the list includes some of the most impressive original research and innovative commercial products to make a splash in 2018, as well as a couple from 2017 that really changed the game.
1. Boston Dynamics’ Atlas doing parkour
It seems like barely a few months go by without Boston Dynamics rewriting the book on what a robot can and can’t do. Last year they really outdid themselves when they got their Atlas humanoid robot to do parkour, leaping over logs and jumping between wooden crates.
Atlas’s creators have admitted that the videos we see are cherry-picked from multiple attempts, many of which don’t go so well. But they say they’re meant to be inspirational and aspirational rather than an accurate picture of where robotics is today. And combined with the company’s dog-like Spot robot, they are certainly pushing boundaries.
2. Intuitive Surgical’s da Vinci SP platform
Robotic surgery isn’t new, but the technology is improving rapidly. Market leader Intuitive’s da Vinci surgical robot was first cleared by the FDA in 2000, but since then it’s come a long way, with the company now producing three separate systems.
The latest addition is the da Vinci SP (single port) system, which is able to insert three instruments into the body through a single 2.5cm cannula (tube), bringing a whole new meaning to minimally invasive surgery. The system was granted FDA clearance for urological procedures last year, and the company has now started shipping the new system to customers.
3. Soft robot that navigates through growth
Roboticists have long borrowed principles from the animal kingdom, but a new robot design that mimics the way plant tendrils and fungal mycelia move by growing at the tip has really broken the mold on robot navigation.
The editors point out that this is the perfect example of bio-inspired design; the researchers didn’t simply copy nature, they took a general principle and expanded on it. The tube-like robot unfolds from the front as pneumatic pressure is applied, but unlike a plant, it can grow at the speed of an animal walking and can navigate using visual feedback from a camera.
4. 3D printed liquid crystal elastomers for soft robotics
Soft robotics is one of the fastest-growing sub-disciplines in the field, but powering these devices without rigid motors or pumps is an ongoing challenge. A variety of shape-shifting materials have been proposed as potential artificial muscles, including liquid crystal elastomeric actuators.
Harvard engineers have now demonstrated that these materials can be 3D printed using a special ink that allows the designer to easily program in all kinds of unusual shape-shifting abilities. What’s more, their technique produces actuators capable of lifting significantly more weight than previous approaches.
5. Muscle-mimetic, self-healing, and hydraulically amplified actuators
In another effort to find a way to power soft robots, last year researchers at the University of Colorado Boulder designed a series of super low-cost artificial muscles that can lift 200 times their own weight and even heal themselves.
The devices rely on pouches filled with a liquid that makes them contract with the force and speed of mammalian skeletal muscles when a voltage is applied. The most promising for robotics applications is the so-called Peano-HASEL, which features multiple rectangular pouches connected in series that contract linearly, just like real muscle.
6. Self-assembled nanoscale robot from DNA
While you may think of robots as hulking metallic machines, a substantial number of scientists are working on making nanoscale robots out of DNA. And last year German researchers built the first remote-controlled DNA robotic arm.
They created a length of tightly-bound DNA molecules to act as the arm and attached it to a DNA base plate via a flexible joint. Because DNA carries a charge, they were able to get the arm to swivel around like the hand of a clock by applying a voltage and switch direction by reversing that voltage. The hope is that this arm could eventually be used to build materials piece by piece at the nanoscale.
7. DelFly nimble bioinspired robotic flapper
Robotics doesn’t only borrow from biology—sometimes it gives back to it, too. And a new flapping-winged robot designed by Dutch engineers that mimics the humble fruit fly has done just that, by revealing how the animals that inspired it carry out predator-dodging maneuvers.
The lab has been building flapping robots for years, but this time they ditched the airplane-like tail used to control previous incarnations. Instead, they used insect-inspired adjustments to the motions of its twin pairs of flapping wings to hover, pitch, and roll with the agility of a fruit fly. That has provided a useful platform for investigating insect flight dynamics, as well as more practical applications.
8. Soft exosuit wearable robot
Exoskeletons could prevent workplace injuries, help people walk again, and even boost soldiers’ endurance. Strapping on bulky equipment isn’t ideal, though, so researchers at Harvard are working on a soft exoskeleton that combines specially-designed textiles, sensors, and lightweight actuators.
And last year the team made an important breakthrough by combining their novel exoskeleton with a machine-learning algorithm that automatically tunes the device to the user’s particular walking style. Using physiological data, it is able to adjust when and where the device needs to deliver a boost to the user’s natural movements to improve walking efficiency.
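As a rough illustration of what such tuning involves, consider searching over two assistance-timing parameters to minimize a measured effort score. This toy random search merely stands in for the team’s far more sophisticated human-in-the-loop optimization, and measure_metabolic_cost is a hypothetical placeholder for the physiological sensing.

```python
import random

def tune_assistance(measure_metabolic_cost, n_trials=20):
    """Search assistance timing (as % of gait cycle) for the lowest cost."""
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        onset = random.uniform(0, 40)      # % of gait cycle when boost starts
        peak = random.uniform(onset, 60)   # % of gait cycle at maximum force
        cost = measure_metabolic_cost(onset, peak)  # one walking bout per trial
        if cost < best_cost:
            best_params, best_cost = (onset, peak), cost
    return best_params
```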
9. Universal Robots (UR) e-Series Cobots
Robots in factories are nothing new. The enormous mechanical arms you see in car factories normally have to be kept in cages to prevent them from accidentally crushing people. In recent years there’s been growing interest in “co-bots,” collaborative robots designed to work side-by-side with their human colleagues and even learn from them.
Earlier this year saw the demise of Rethink Robotics, the pioneer of the approach. But the simple single-arm devices made by Danish firm Universal Robots are becoming ubiquitous in workshops and warehouses around the world, accounting for about half of global co-bot sales. Last year the company released its latest e-Series, with enhanced safety features and force/torque sensing.
10. Sony’s aibo
After a nearly 20-year hiatus, Sony’s robotic dog aibo is back, and it’s had some serious upgrades. As well as a revamp to its appearance, the new robotic pet takes advantage of advances in AI, with improved environmental and command awareness and the ability to develop a unique character based on interactions with its owner.
The editors note that this new context awareness marks the device out as a significant evolution in social robots, which many hope could aid in childhood learning or provide companionship for the elderly.
Image Credit: DelFly Nimble / CC BY-SA 4.0
#433728 AI Is Kicking Space Exploration into ...
Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.
“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.
Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.
The Earth Observer 1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize analysis of and response to natural occurrences like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.
Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of objects captured at resolutions too low for a human to make out. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.
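Classifiers of this kind are conceptually simple. Below is a toy analogue using a decision tree on synthetic image-derived features; the feature set, data, and labeling rule are invented for illustration and are not SKICAT’s actual pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Invented per-object features: brightness, ellipticity, radius, peak/area ratio.
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # toy rule: 0 = star, 1 = galaxy

# Train on 800 objects, test on the remaining 200.
clf = DecisionTreeClassifier(max_depth=5).fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```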
AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of the available data. ESA’s ENVISAT produced around 400 terabytes of new data every year—but even that will be dwarfed by the Square Kilometre Array, which will produce roughly as much data in a single day as currently exists on the internet.
AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.
An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.
“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.
AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.
“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.
First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter’s moons have long fascinated scientists, especially Europa, which could house a subsurface ocean buried beneath an ice crust approximately 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.
While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit of around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilo mirror.
The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Examples from the Mars rover project show that communication between a rover and Earth can take around 20 minutes each way because of the vast distance. A Europa mission would face much longer communication times.
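The delay itself is just distance divided by the speed of light. A quick back-of-the-envelope check (the distances below are rough figures, not mission values):

```python
# One-way light-time for the distances mentioned above. Mars sits roughly
# 55-400 million km from Earth; Jupiter (and Europa) roughly 590-970 million km.
C_KM_PER_S = 299_792.458  # speed of light

def one_way_delay_minutes(distance_km: float) -> float:
    return distance_km / C_KM_PER_S / 60

print(one_way_delay_minutes(360e6))  # Mars near its farthest: ~20 minutes
print(one_way_delay_minutes(780e6))  # Europa, mid-range: ~43 minutes
```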
Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.
Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation damage mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of storage sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but that’s actually the ‘brain’ that will be onboard the next rover.
Private Companies Taking Off
Private companies are helping to push those limitations. CB Insights charts 57 startups in the space-space, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.
David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.
“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.
Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.
Autonomous crafts are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carry out the actual planting.
As is often the case with exponential technologies, there is a great potential for synergies and convergence. For example with AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that by using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.
One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.
“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”
Image Credit: Taily / Shutterstock.com
#433545 Six Degrees of Torque-controlled ...
ALMA (“Articulated Locomotion and Manipulation”) is a quadrupedal robotic framework that gives a torque-controlled, six-degree-of-freedom robotic arm dynamic mobility while the robot is doing something else. Possible motions include walking, trotting, pacing, and torso-posturing, performed simultaneously with other complex tasks, as well as …