Tag Archives: planning

#433728 AI Is Kicking Space Exploration into ...

Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.

“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.

Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.

The Earth Observer 1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize the analysis of and response to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.

Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of low-resolution objects that would have been beyond human analysts. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.

AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of available data. ESA’s ENVISAT produces around 400 terabytes of new data every year, yet even that will be dwarfed by the Square Kilometre Array, which is expected to produce as much data in a single day as currently exists on the entire internet.

AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.

An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.

Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.

“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.

AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.

“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.

First Mars, Then Moons
Mars is likely far from the final destination for AI in space. Jupiter’s moons have long fascinated scientists, especially Europa, which may harbor a subsurface ocean beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.

While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilogram mirror.

The distances between Earth and Europa, or Earth and the James Webb telescope, mean delays in communication. That, in turn, makes it imperative for the spacecraft to be able to make their own decisions. Experience from the Mars rovers shows that one-way communication between a rover and Earth can take around 20 minutes because of the vast distance. A Europa mission would see even longer communication times.
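Those delays follow directly from the speed of light. A minimal sketch; the distances used are illustrative round figures, not mission data:

```python
# One-way light-travel time to a distant spacecraft.
# Distances below are illustrative round figures, not mission data.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way signal delay in minutes."""
    return distance_km / C_KM_PER_S / 60.0

# Mars at a large Earth-Mars separation of ~360 million km
print(round(one_way_delay_minutes(360e6), 1))  # 20.0 minutes
# Jupiter/Europa near closest approach, ~630 million km
print(round(one_way_delay_minutes(630e6), 1))  # 35.0 minutes
```

At those timescales a round-trip question-and-answer with ground control takes the better part of an hour, which is why onboard autonomy matters.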

Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.

Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of storage sounds a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but it’s actually the ‘brain’ that will be onboard the next rover.

Private Companies Taking Off
Private companies are helping to push those limitations. CB Insights charts 57 startups in the space sector, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.

David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.

“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.

Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.

Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.

As is often the case with exponential technologies, there is a great potential for synergies and convergence. For example with AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that by using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.

One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.

“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”

Image Credit: Taily / Shutterstock.com

Posted in Human Robots

#433486 This AI Predicts Obesity ...

A research team at the University of Washington has trained an artificial intelligence system to spot obesity—all the way from space. The system used a convolutional neural network (CNN) to analyze 150,000 satellite images and look for correlations between the physical makeup of a neighborhood and the prevalence of obesity.

The team’s results, presented in JAMA Network Open, showed that features of a given neighborhood could explain close to two-thirds (64.8 percent) of the variance in obesity. Researchers found that analyzing satellite data could help increase understanding of the link between peoples’ environment and obesity prevalence. The next step would be to make corresponding structural changes in the way neighborhoods are built to encourage physical activity and better health.

Training AI to Spot Obesity
Convolutional neural networks (CNNs) are particularly adept at image analysis, object recognition, and identifying spatial hierarchies in large datasets.

Prior to analyzing 150,000 high-resolution satellite images of Bellevue, Seattle, Tacoma, Los Angeles, Memphis, and San Antonio, the researchers trained the CNN on 1.2 million images from the ImageNet database. The categorizations were correlated with obesity prevalence estimates for the six urban areas from census tracts gathered by the 500 Cities project.
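As a rough sketch of that correlation step, the toy below fits a linear model from stand-in features to a synthetic prevalence value and reports variance explained as R², the same kind of figure as the 64.8 percent above. Everything here is simulated: the random matrix stands in for real CNN-derived features, and plain least squares stands in for the study's actual regression model.

```python
import numpy as np

# Toy version of the study's pipeline: neighborhood features are regressed
# against obesity prevalence, and explanatory power is reported as R^2.
# The features are random stand-ins, NOT real CNN activations, and plain
# least squares stands in for the paper's regression model.
rng = np.random.default_rng(0)

n_tracts, n_features = 500, 20
X = rng.normal(size=(n_tracts, n_features))          # stand-in features
true_w = rng.normal(size=n_features)
prevalence = X @ true_w + rng.normal(scale=2.0, size=n_tracts)

w, *_ = np.linalg.lstsq(X, prevalence, rcond=None)   # fit linear model
pred = X @ w

ss_res = np.sum((prevalence - pred) ** 2)            # residual variance
ss_tot = np.sum((prevalence - prevalence.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"variance explained (R^2): {r2:.2f}")  # value depends on the toy setup
```

The exact number is meaningless here; the point is that "features explain X percent of the variance" is an R²-style statement about a fitted model, not a causal claim.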

The system was able to identify certain features that increased the likelihood of obesity in a given area, including tightly packed houses, proximity to roadways, and a lack of greenery.

Visualization of features identified by the convolutional neural network (CNN) model. The images on the left column are satellite images taken from Google Static Maps API (application programming interface). Images in the middle and right columns are activation maps taken from the second convolutional layer of VGG-CNN-F network after forward pass of the respective satellite images through the network. From Google Static Maps API, DigitalGlobe, US Geological Survey (accessed July 2017). Credit: JAMA Network Open
Your Surroundings Are Key
In their discussion of the findings, the researchers stressed that there are limitations to the conclusions that can be drawn from the AI’s results. For example, socio-economic factors like income likely play a major role for obesity prevalence in a given geographic area.

However, the study concluded that the AI-powered analysis showed specific man-made neighborhood features consistently correlating with obesity prevalence, rather than merely tracking socioeconomic status.

The system’s success rates varied between studied cities, with Memphis being the highest (73.3 percent) and Seattle being the lowest (55.8 percent).

AI Takes To the Sky
Around a third of the US population is categorized as obese. Obesity is linked to a number of health-related issues, and the AI-generated results could potentially help improve city planning and better target campaigns to limit obesity.

The study is one of the latest of a growing list that uses AI to analyze images and extrapolate insights.

A team at Stanford University has used a CNN to predict poverty via satellite imagery, assisting governments and NGOs to better target their efforts. A combination of the public Automatic Identification System for shipping, satellite imagery, and Google’s AI has proven able to identify illegal fishing activity. Researchers have even been able to use AI and Google Street View to predict what party a given city will vote for, based on what cars are parked on the streets.

In each case, the AI systems have been able to look at volumes of data about our world and surroundings that are beyond the capabilities of humans and extrapolate new insights. If one were to moralize about the good and bad sides of AI (new opportunities vs. potential job losses, for example) it could seem that it comes down to what we ask AI systems to look at—and what questions we ask of them.

Image Credit: Ocean Biology Processing Group at NASA’s Goddard Space Flight Center

Posted in Human Robots

#432456 This Planned Solar Farm in Saudi Arabia ...

Right now it only exists on paper, in the form of a memorandum of understanding. But if constructed, the newly-announced solar photovoltaic project in Saudi Arabia would break an astonishing array of records. It’s larger than any solar project currently planned by a factor of 100. When completed, nominally in 2030, it would have a capacity of an astonishing 200 gigawatts (GW). The project is backed by Softbank Group and Saudi Arabia’s new crown prince, Mohammed Bin Salman, and was announced in New York on March 27.

The Tengger Desert Solar Park in China, affectionately known as the “Great Wall of Solar,” is the world’s largest operating solar farm, with a capacity of 1.5 GW. Larger farms are under construction, including the Westlands Solar Park, which plans to finish with 2.7 GW of capacity. But even the projects still in the planning phase are dwarfed by the Saudi one: its first two early-stage solar parks alone will have a capacity of 7.2 GW, and the plan has them generating electricity as early as next year.

It makes more sense to compare it to slightly larger projects, like nations—or even planets. Saudi Arabia’s current electricity generation capacity is 77 GW; this project would almost triple it. The total solar photovoltaic capacity currently installed worldwide is 303 GW. In other words, this single solar farm would account for installed capacity similar to the entire world’s in 2015, and over a thousand times more than we had in 2000.

That’s exponential growth for you, folks.

Of course, practically doubling the world’s solar capacity doesn’t come cheap; the nominal budget estimate is around $200 billion (though next to the roughly $20 billion being spent for around half a gigawatt of fusion, it may not seem so bad). But the project would help solve a number of pressing problems for Saudi Arabia.

For a start, solar power works well in the desert. The irradiance is high, you have plenty of empty space, and peak demand is driven by air conditioning in the cities and so corresponds with peak supply. Even if oil companies might seem blasé about the global supply of oil running out, individual countries are aware that their own reserves won’t last forever, and they don’t want to miss the energy transition. The country’s Vision 2030 project aims to diversify its heavily oil-dependent economy by that year. If they can construct solar farms on this scale, alongside the $80 billion the government plans to spend on a fleet of nuclear reactors, it seems logical to export that power to other countries in the region, especially given the amount of energy storage that would be required otherwise.

We’ve already discussed a large-scale project to build solar panels in the desert then export the electricity: the DESERTEC initiative in the Sahara. Although DESERTEC planned a range of different demonstration plants on scales of around 500 MW, its ultimate ambition was to “provide 20 percent of Europe’s electricity by 2050.” It seems that this project is similar in scale to what they were planning. Weaning ourselves off fossil fuels is going to be incredibly difficult. Only large-scale nuclear, wind, or solar can really supply the world’s energy needs if consumption is anything like what it is today; in all likelihood, we’ll need a combination of all three.

To make a sizeable contribution to that effort, the renewable projects have to be truly epic in scale. The planned 2 GW solar park at Bulli Creek in Australia would cover 5 square kilometers, so it’s not unreasonable to suggest that, across many farms, this project could cover around 500 square kilometers—around the size of Chicago.
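That area estimate is straightforward scaling from the Bulli Creek figures, assuming the same power density carries over between sites:

```python
# Back-of-envelope scaling from the figures quoted in the text:
# Bulli Creek is ~2 GW over ~5 km^2, which gives an implied power
# density; 200 GW at that density gives the quoted total area.
density_gw_per_km2 = 2 / 5            # 0.4 GW per km^2
area_km2 = 200 / density_gw_per_km2
print(round(area_km2))                # 500 km^2, roughly the area of Chicago
```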

It will come as no surprise that Softbank is involved in this project. Its founder, Masayoshi Son, is well-known for large-scale “visionary” investments, as suggested by the name of his $100 billion VC fund, the Softbank Vision Fund, and the focus of its investments: tech companies like Uber, NVIDIA, and ARM, and startups across fields like VR, agritech, IoT, and AI.

Of course, Softbank is also the company that bought infamous robot-makers Boston Dynamics from Google when their not-at-all-sinister “Project Replicant” was sidelined. Softbank is famous in Japan in part due to their mascot, Pepper, which is probably the most widespread humanoid robot on the planet. Suffice it to say that Softbank is keen to be a part of any technological development, and they’re not afraid of projects that are truly vast in scope.

Since the Fukushima disaster in 2011 led Japan to turn away from nuclear power, Son has also been focused on green electricity, floating the idea of an Asia Super Grid. Similar to DESERTEC, it aims to get around the main issues with renewable energy (the land use and the intermittency of supply) with a vast super-grid that would connect Mongolia, India, Japan, China, Russia, and South Korea with high-voltage DC power cables. “Since this is such a grandiose project, many people told me it is crazy,” Son said. “They said it is impossible both economically and politically.” The first stage of the project, a demonstration wind farm of 50 megawatts in Mongolia, began operating in October of last year.

Given that Saudi Arabia put up $45 billion of the Vision Fund, it’s also not surprising to see the location of the project; Softbank reportedly had plans to invest $25 billion of the Vision Fund in Saudi Arabia, and $1 billion will be spent on the first solar farms there. Prince Mohammed Bin Salman, 32, who recently consolidated power, is looking to be seen on the global stage as a modernizer. He was effusive about the project. “It’s a huge step in human history,” he said. “It’s bold, risky, and we hope we succeed doing that.”

It is the risk that will keep renewable energy enthusiasts concerned.

Every visionary plan contains the potential for immense disappointment. As yet, the Asian Super Grid and the Saudi power plan are more or less at the conceptual stage. The fact that a memorandum of understanding exists between the Saudi government and Softbank is no guarantee that it will ever be built. Some analysts in the industry are a little skeptical.

“It’s an unprecedented construction effort; it’s an unprecedented financing effort,” said Benjamin Attia, a global solar analyst for Green Tech Media Research. “But there are so many questions, so few details, and a lot of headwinds, like grid instability, the availability of commercial debt, construction, and logistics challenges.”

We have already seen with the DESERTEC initiative that these vast-scale renewable energy projects can fail, despite immense enthusiasm. They are not easy to accomplish. But in a world without fossil fuels, they will be required. This project could be a flagship example for how to run a country on renewable energy—or another example of grand designs and good intentions. We’ll have to wait to find out which.

Image Credit: Love Silhouette / Shutterstock.com

Posted in Human Robots

#432331 $10 million XPRIZE Aims for Robot ...

Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize, sponsored by Japanese airline ANA, gives contestants little guidance on how to solve the challenge beyond saying their solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics and virtual reality to high-bandwidth communications and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help release an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software is doing a lot more than just high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough to avoid lag or interruptions. Fortunately, 5G is launching this year, with speeds of up to 10 gigabits per second and very low latency, so this part of the problem may well be solved by 2021.

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE

Posted in Human Robots

#432009 How Swarm Intelligence Is Making Simple ...

As a group, simple creatures following simple rules can display a surprising amount of complexity, efficiency, and even creativity. Known as swarm intelligence, this trait is found throughout nature, but researchers have recently begun using it to transform various fields such as robotics, data mining, medicine, and blockchains.

Ants, for example, can only perform a limited range of functions, but an ant colony can build bridges, create superhighways of food and information, wage war, and enslave other ant species—all of which are beyond the comprehension of any single ant. Likewise, schools of fish, flocks of birds, beehives, and other species exhibit behavior indicative of planning by a higher intelligence that doesn’t actually exist.

This happens through a process called stigmergy: simply put, a small change by one group member causes other members to behave differently, leading to a new pattern of behavior.

When an ant finds a food source, it marks the path with pheromones. This attracts other ants to that path, leads them to the food source, and prompts them to mark the same path with more pheromones. Over time, the most efficient route will become the superhighway, as the faster and easier a path is, the more ants will reach the food and the more pheromones will be on the path. Thus, it looks as if a more intelligent being chose the best path, but it emerged from the tiny, simple changes made by individuals.

So what does this mean for humans? Well, a lot. In the past few decades, researchers have developed numerous algorithms and metaheuristics, such as ant colony optimization and particle swarm optimization, and they are rapidly being adopted.
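The pheromone feedback described above is the intuition behind ant colony optimization. Below is a minimal, deterministic sketch with two routes to a food source, treating ant traffic as fractional flows rather than individual ants; the route lengths and constants are made up for illustration.

```python
# Deterministic toy of the pheromone feedback loop: two routes to food,
# lengths 1 and 2. Traffic is split in proportion to pheromone; shorter
# round trips deposit more pheromone per unit of traffic (deposit is
# proportional to 1/length), and pheromone evaporates each step.
lengths = [1.0, 2.0]
pheromone = [1.0, 1.0]
EVAPORATION = 0.1

for _ in range(100):
    total = sum(pheromone)
    shares = [p / total for p in pheromone]              # traffic split
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]
    for i, share in enumerate(shares):
        pheromone[i] += share / lengths[i]               # deposit

share_short = pheromone[0] / sum(pheromone)
print(f"pheromone share on the short route: {share_short:.2f}")  # approaches 1.0
```

Raising EVAPORATION makes the colony "forget" faster and adapt to change; lowering it locks in early choices. Real ant colony optimization applies this loop to paths through a full graph.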

Swarm Robotics
A swarm of robots would work on the same principles as an ant colony: each member has a simple set of rules to follow, leading to self-organization and self-sufficiency.

For example, researchers at Georgia Robotics and InTelligent Systems (GRITS) created a small swarm of simple robots that can spell and play piano. The robots cannot communicate, but based solely on the position of surrounding robots, they are able to use their specially-created algorithm to determine the optimal path to complete their task.

This is also immensely useful for drone swarms.

Last February, Ehang, an aviation company out of China, created a swarm of a thousand drones that not only lit the sky with colorful, intricate displays, but demonstrated the ability to improvise and troubleshoot errors entirely autonomously.

Further, just recently, the University of Cambridge and Koc University unveiled their idea for what they call the Energy Neutral Internet of Drones. Amazingly, drones in this swarm would take the initiative to share information or energy with other drones that missed a communication or are running low on energy.

Militaries all over the world are utilizing this as well.

Last year, the US Department of Defense announced it had successfully tested a swarm of miniature drones that could carry out complex missions cheaper and more efficiently. They claimed, “The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing.”

Some experts estimate at least 30 nations are actively developing drone swarms—and even submersible drones—for military missions, including intelligence gathering, missile defense, precision missile strikes, and enhanced communication.

NASA also plans on deploying swarms of tiny spacecraft for space exploration, and the medical community is looking into using swarms of nanobots for precision delivery of drugs, microsurgery, targeting toxins, and biological sensors.

What If Humans Are the Ants?
The strength of any blockchain comes from the size and diversity of the community supporting it. Cryptocurrencies like Bitcoin, Ethereum, and Litecoin are driven by the people using, investing in, and, most importantly, mining them so their blockchains can function. Without an active community, or swarm, their blockchains wither away.

When viewed from a great height, a blockchain performs eerily like an ant colony in that it will naturally find the most efficient way to move vast amounts of information.

Miners compete with each other to perform the complex calculations necessary to add another block, for which the winner is rewarded with the blockchain’s native currency and agreed-upon fees. Of course, the miner with the more powerful computers is more likely to win the reward, thereby empowering the winner’s ability to mine and receive even more rewards. Over time, fewer and fewer miners are going to exist, as the winners are able to more efficiently shoulder more of the workload, in much the same way that ants build superhighways.
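A toy proof-of-work search makes that competition concrete. This models only the brute-force race, not a real blockchain: actual networks hash a structured block header (Bitcoin uses double SHA-256), and the block data and difficulty here are made up.

```python
import hashlib

# Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce)
# falls below a target. Smaller target = harder puzzle = more trials.
def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block: alice pays bob 5")
print("winning nonce:", nonce)
# A miner that can try more nonces per second wins more often, which is
# the feedback loop that concentrates mining power over time.
```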

Further, a company called Unanimous AI has developed algorithms that allow humans to collectively make predictions. So far, the AI algorithms and their human participants have made some astoundingly accurate predictions, such as the first four winning horses of the Kentucky Derby, the Oscar winners, the Stanley Cup winners, and others. The more people involved in the swarm, the greater their predictive power will be.

To be clear, this is not a prediction based on group consensus. Rather, the swarm of humans uses software to input their opinions in real time, thus making micro-changes to the rest of the swarm and the inputs of other members.

Studies show that swarm intelligence consistently outperforms individuals and crowds working without the algorithms. While this is only the tip of the iceberg, some have suggested swarm intelligence can revolutionize how doctors diagnose a patient or how products are marketed to consumers. It might even be an essential step in truly creating AI.

While swarm intelligence is an essential part of many species’ success, it’s only a matter of time before humans harness its effectiveness as well.

Image Credit: Nature Bird Photography / Shutterstock.com

Posted in Human Robots