Tag Archives: media
#435769 The Ultimate Optimization Problem: How ...
Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
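To make the computer-science sense of the term concrete, here is a toy objective function in Python. The scenario and weights are invented for illustration, not drawn from Joppa's program: a fixed land budget is split between farming and conservation, and a simple search finds the allocation that maximizes combined utility.

```python
# Toy objective function: split a fixed land budget between farmland
# and conserved nature, then search for the utility-maximizing split.
# The square-root payoffs (diminishing returns) are illustrative only.

def utility(farm_fraction):
    """Combined utility of devoting `farm_fraction` of the land to farming."""
    food = farm_fraction ** 0.5            # food production, diminishing returns
    nature = (1.0 - farm_fraction) ** 0.5  # conservation value, diminishing returns
    return food + nature

# Coarse grid search over allocations from 0% to 100% farmland.
best_value, best_fraction = max(
    (utility(f / 100), f / 100) for f in range(101)
)
print(best_fraction)  # with symmetric weights, an even split is optimal
```

Real planetary-scale versions would involve vastly more variables and constraints, but the shape of the problem is the same: define what to maximize, then let the machine search.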
Photo: Microsoft
Lucas Joppa
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.
Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.
Joppa says such summit discussions should aim for a truly holistic solution.
We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.
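Joppa's point about local versus global optimization can be sketched numerically. In this made-up example, two regions each greedily pick the land use that maximizes their own payoff, but a shared interaction cost (both farming at once) means the greedy outcome is worse than a coordinated one:

```python
from itertools import product

# Hypothetical payoffs: each region picks "farm" or "forest".
# A shared-cost interaction term means the joint value is not
# simply the sum of independent payoffs.
local_value = {"farm": 10, "forest": 7}

def joint_value(a, b):
    penalty = 8 if a == b == "farm" else 0  # both farming strains shared resources
    return local_value[a] + local_value[b] - penalty

# Local optimization: each region independently picks its best option.
greedy_choice = max(local_value, key=local_value.get)
greedy_total = joint_value(greedy_choice, greedy_choice)

# Global optimization: search over all joint choices.
global_total = max(joint_value(a, b) for a, b in product(local_value, repeat=2))

print(greedy_total, global_total)  # greedy: 12, coordinated: 17
```

Each region's locally rational choice leaves value on the table; only a search over joint outcomes finds the better mixed allocation.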
There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.
The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges (climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience).
Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.
Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.
For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?
On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.
We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.
AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.
We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.
By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.
Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.
It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.
Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.
#435640 Video Friday: This Wearable Robotic Tail ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:
Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.
The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.
[ Lakshmi Nair ]
Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.
This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.
[ IIT ]
Thanks Victor!
You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!
The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.
[ Paper ] via [ Gizmodo ]
The noises in this video are fantastic.
[ ESA ]
Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.
[ MIT CSAIL ]
Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…
[ Skydio ]
The only thing more fun than watching robots is watching people react to robots.
[ SEER ]
There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.
[ Stanford ]
#autonomousicecreamtricycle
In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:
Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?) I need one.
[ Roboy ]
By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st century robotic device that can provide light-touch walking assistance to the aged and others with impaired mobility.
The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.
[ ROAR Lab ]
During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.
[ DARPA ]
I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.
[ Ghost Robotics ]
If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.
[ AP ]
As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.
The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
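The iterative residual idea can be sketched in a few lines. This is not the paper's implementation: the learned network is replaced here by a crude fixed-gain residual estimator, and the "simulator" is a one-parameter toy, but the loop structure (estimate a parameter difference from a single observation, apply it, repeat) mirrors the technique described above.

```python
# Minimal sketch of iterative residual tuning, TuneNet-style.
# Everything here is illustrative; the real system uses a trained
# network to map observations to parameter deltas.

def simulate(restitution):
    """Toy simulator: bounce height of a dropped ball, as a fraction
    of the drop height, given a coefficient of restitution."""
    return restitution ** 2

def estimate_delta(observed, simulated):
    """Stand-in for the learned model: a fixed-gain correction from
    the observation residual to a parameter update."""
    return 0.5 * (observed - simulated)

target_restitution = 0.8
observation = simulate(target_restitution)  # one observation of the target system

theta = 0.5  # initial guess for the simulator's parameter
for _ in range(20):  # iterative refinement toward the target
    theta += estimate_delta(observation, simulate(theta))

print(round(theta, 3))  # converges toward the true value, 0.8
```

Near the true parameter the update is a contraction, so the loop converges quickly; the paper's contribution is learning that residual-to-delta mapping so it works from minimal simulation and a single real observation.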
[ Paper ]
Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.
[ GITAI ]
Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.
[ MSL ]
Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:
And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:
All of the ROSCon FR talks are available on Vimeo.
[ ROSCon FR ]
#435632 DARPA Subterranean Challenge: Tunnel ...
The Tunnel Circuit of the DARPA Subterranean Challenge starts later this week at the NIOSH research mine just outside of Pittsburgh, Pennsylvania. From 15-22 August, 11 teams will send robots into a mine that they've never seen before, with the goal of making maps and locating items. All DARPA SubT events involve tunnels of one sort or another, but in this case, the “Tunnel Circuit” refers to mines as opposed to urban underground areas or natural caves. This month’s challenge is the first of three discrete events leading up to a huge final event in August of 2021.
While the Tunnel Circuit competition will be closed to the public, and media are only allowed access for a single day (which we'll be at, of course), DARPA has provided a substantial amount of information about what teams will be able to expect. We also have details from the SubT Integration Exercise, called STIX, which was a completely closed event that took place back in April. STIX was aimed at giving some teams (and DARPA) a chance to practice in a real tunnel environment.
For more general background on SubT, here are some articles to get you all caught up:
SubT: The Next DARPA Challenge for Robotics
Q&A with DARPA Program Manager Tim Chung
Meet The First Nine Teams
It makes sense to take a closer look at what happened at April's STIX exercise, because it is (probably) very similar to what teams will experience in the upcoming Tunnel Circuit. STIX took place at Edgar Experimental Mine in Colorado, and while no two mines are the same (and many are very, very different), there are enough similarities for STIX to have been a valuable experience for teams. Here's an overview video of the exercise from DARPA:
DARPA has also put together a much more detailed walkthrough of the STIX mine exercise, which gives you a sense of just how vast, complicated, and (frankly) challenging for robots the mine environment is:
So, that's the kind of thing that teams had to deal with back in April. Since the event was an exercise, rather than a competition, DARPA didn't really keep score, and wouldn't comment on the performance of individual teams. We've been trolling YouTube for STIX footage, though, to get a sense of how things went, and we found a few interesting videos.
Here's a nice overview from Team CERBERUS, which used drones plus an ANYmal quadruped:
Team CTU-CRAS also used drones, along with a tracked robot:
Team Robotika was brave enough to post video of a “fatal failure” experienced by its wheeled robot; the poor little bot gets rescued at about 7:00 in case you get worried:
So that was STIX. But what about the Tunnel Circuit competition this week? Here's a course preview video from DARPA:
It sort of looks like the NIOSH mine might be a bit less dusty than the Edgar mine was, but it could also be wetter and muddier. It’s hard to tell, because we’re just getting a few snapshots of what’s probably an enormous area with kilometers of tunnels that the robots will have to explore. But DARPA has promised “constrained passages, sharp turns, large drops/climbs, inclines, steps, ladders, and mud, sand, and/or water.” Combine that with the serious challenge to communications imposed by the mine itself, and robots will have to be both physically capable, and almost entirely autonomous. Which is, of course, exactly what DARPA is looking to test with this challenge.
Lastly, we had a chance to catch up with Tim Chung, Program Manager for the Subterranean Challenge at DARPA, and ask him a few brief questions about STIX and what we have to look forward to this week.
IEEE Spectrum: How did STIX go?
Tim Chung: It was a lot of fun! I think it gave a lot of the teams a great opportunity to really get a taste of what these types of real world environments look like, and also what DARPA has in store for them in the SubT Challenge. STIX I saw as an experiment—a learning experience for all the teams involved (as well as the DARPA team) so that we can continue our calibration.
What do you think teams took away from STIX, and what do you think DARPA took away from STIX?
I think the thing that teams took away was that, when DARPA hosts a challenge, we have very audacious visions for what the art of the possible is. And that's what we want—in my mind, the purpose of a DARPA Grand Challenge is to provide that inspiration of, ‘Holy cow, someone thinks we can do this!’ So I do think the teams walked away with a better understanding of what DARPA's vision is for the capabilities we're seeking in the SubT Challenge, and hopefully walked away with a better understanding of the technical, physical, even maybe mental challenges of doing this in the wild—which will all roll back into how they think about the problem, and how they develop their systems.
This was a collaborative exercise, so the DARPA field team was out there interacting with the other engineers, figuring out what their strengths and weaknesses and needs might be, and even understanding how to handle the robots themselves. That will help [strengthen] connections between these university teams and DARPA going forward. Across the board, I think that collaborative spirit is something we really wish to encourage, and something that the DARPA folks were able to take away.
What do we have to look forward to during the Tunnel Circuit?
The vision here is that the Tunnel Circuit is representative of one of the three subterranean subdomains, along with urban and cave. Characteristics of all of these three subdomains will be mashed together in an epic final course, so that teams will have to face hints of tunnel once again in that final event.
Without giving too much away, the NIOSH mine will be similar to the Edgar mine in that it's a human-made environment that supports mining operations and research. But of course, every site is different, and these differences, I think, will provide good opportunities for the teams to shine.
Again, we'll be visiting the NIOSH mine in Pennsylvania during the Tunnel Circuit and will post as much as we can from there. But if you’re an actual participant in the Subterranean Challenge, please tweet me @BotJunkie so that I can follow and help share live updates.
[ DARPA Subterranean Challenge ]