Tag Archives: google

#439783 This Google-Funded Project Is Tracking ...

It’s crunch time on climate change. The IPCC’s latest report told the world just how bad it is, and…it’s bad. Companies, NGOs, and governments are scrambling for fixes, both short-term and long-term, from banning sale of combustion-engine vehicles to pouring money into hydrogen to building direct air capture plants. And one initiative, launched last week, is taking an “if you can name it, you can tame it” approach by creating an independent database that measures and tracks emissions all over the world.

Climate TRACE, which stands for Tracking Real-time Atmospheric Carbon Emissions, is a collaboration between nonprofits, tech companies, and universities, including CarbonPlan, Earthrise Alliance, Johns Hopkins Applied Physics Laboratory, former US Vice President Al Gore, and others. The organization started thanks to a grant from Google, which funded an effort to measure power plant emissions using satellites. A team of fellows from Google helped build algorithms to monitor the power plants (the Google.org Fellowship was created in 2019 to let Google employees do pro bono technical work for grant recipients).

Climate TRACE uses data from satellites and other remote sensing technologies to “see” emissions. Artificial intelligence algorithms combine this data with verifiable emissions measurements to produce estimates of the total emissions coming from various sources.
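In spirit, this is a supervised learning problem: train a model on sites where emissions are independently verified, then apply it to satellite-derived signals for sites that lack trustworthy reporting. The sketch below illustrates only that general idea, with invented feature names, synthetic data, and an arbitrary model choice; it is not Climate TRACE's actual pipeline.

```python
# Minimal sketch of the idea behind satellite-based emissions estimates (not Climate TRACE's code):
# fit a model where emissions are independently verified, then estimate where they aren't.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical satellite-derived features per power plant:
# [thermal plume intensity, capacity factor proxy, NO2 column density]
X_verified = rng.random((500, 3))
# Synthetic "verified" emissions for those plants (stand-in for ground-truth measurements).
y_verified = 2.0 * X_verified[:, 0] + 1.5 * X_verified[:, 2] + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X_verified, y_verified)

# Estimate emissions for plants with no self-reported data.
X_unmonitored = rng.random((10, 3))
estimates = model.predict(X_unmonitored)
print(estimates)
```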

These sources are divided into ten sectors—like power, manufacturing, transportation, and agriculture—each with multiple subsectors (e.g., two subsectors of agriculture are rice cultivation and manure management). The total carbon emitted from January 2015 to December 2020, by the project’s estimation, was 303.96 billion tons. The biggest offender? Electricity generation. It’s no wonder, then, that states, companies, and countries are rushing to make (occasionally unrealistic) carbon-neutral pledges, and that the renewable energy industry is booming.

The founders of the initiative hope that, by increasing transparency, the database will increase accountability, thereby spurring action. Younger consumers care about climate change, and are likely to push companies and brands to do something about it.

The BBC reported that in a recent survey led by the UK’s Bath University, almost 60 percent of respondents said they were “very worried” or “extremely worried” about climate change, while more than 45 percent said feelings about the climate affected their daily lives. The survey received responses from 10,000 people aged 16 to 25 and found that young people in the global south are the most concerned about climate change, while in the northern hemisphere those most worried are in Portugal, which has grappled with severe wildfires. Many of the survey respondents, independent of location, reportedly feel that “humanity is doomed.”

Once this demographic reaches working age, they’ll be able to throw their weight around, and it seems likely they’ll do so in a way that puts the planet and its future at center stage. For all its sanctimoniousness, “naming and shaming” of emitters not doing their part may end up being both necessary and helpful.

Until now, Climate TRACE’s website points out, emissions inventories have been largely self-reported (I mean, what’s even the point?), and they’ve used outdated information and opaque measurement methods. Besides being independent, which is huge in itself, TRACE is using 59 trillion bytes of data from more than 300 satellites, more than 11,100 sensors, and other sources of emissions information.

“We’ve established a shared, open monitoring system capable of detecting essentially all forms of humanity’s greenhouse gas emissions,” said Gavin McCormick, executive director of coalition convening member WattTime. “This is a transformative step forward that puts timely information at the fingertips of all those who seek to drive significant emissions reductions on our path to net zero.”

Given the scale of the project, the parties involved, and how quickly it has all come together—the grant from Google was in May 2019—it seems Climate TRACE is well-positioned to make a difference.

Image Credit: NASA Continue reading

Posted in Human Robots

#439437 Google parent launches new ...

Google's parent Alphabet unveiled a new “moonshot” project to develop software for robotics which could be used in a wide range of industries. Continue reading

Posted in Human Robots

#439280 Google and Harvard Unveil the Largest ...

Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.

The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)

To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched back into a 3D digital representation of the region. Machine learning algorithms segmented individual cells and classified synapses, axons, dendrites, and other structures, and humans checked their work. (The team posted a pre-print paper about the map on bioRxiv.)
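One small piece of that process, slice-to-slice alignment, can be illustrated with standard image-registration tools. The sketch below uses scikit-image's phase cross-correlation on synthetic data; the actual alignment of 225 million electron microscope images is vastly more involved, and nothing here is the teams' own code.

```python
# Toy sketch of registering consecutive EM slices so cells line up across the stack.
# The data is synthetic; only the library calls (scikit-image, scipy) are real.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = rng.random((256, 256))                                            # slice n
moving = nd_shift(reference, (3.0, -2.0)) + rng.normal(0, 0.01, (256, 256))   # slice n+1, offset

# Estimate the offset between slices and undo it before stacking into a 3D volume.
offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
aligned = nd_shift(moving, offset)

volume = np.stack([reference, aligned])   # aligned slices become a 3D digital volume
print("estimated offset:", offset)
```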

Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. The new map, beyond the notable fact that it targets the human brain, includes tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.

By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.

All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing is yet years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.

How to Map a Brain
The study of the brain’s cellular circuitry is known as connectomics.

Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to the human genome. And like the human genome, at first, it seemed an impossible feat.

The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.

Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.

What’s changed? In short, automation.

These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves down each slice of tissue a few nanometers at a time. After one layer is vaporized, an electron microscope images the newly exposed layer. The imaged layer is then shaved away by the ion beam and the next one imaged, until all that’s left of the slice of tissue is a nanometer-resolution digital copy. It’s a far cry from the days of Kodachrome.

But maybe the most dramatic improvement is what happens after scientists complete that pile of images.

Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something that was impossible until the last decade. They line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model. Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be refined. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)
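To give a feel for what the tracing step means computationally, here is a deliberately crude stand-in: threshold a synthetic volume and group connected voxels into candidate segments. Real pipelines use learned segmentation models rather than simple thresholding, and the data here is random noise; the sketch only shows the shape of the problem (an aligned 3D volume in, per-cell labels out) that humans then proofread.

```python
# Crude stand-in for cell tracing: connected-component labeling of a synthetic 3D volume.
import numpy as np
from skimage.measure import label

rng = np.random.default_rng(2)
volume = rng.random((64, 64, 64))          # stand-in for an aligned EM volume

cell_mask = volume > 0.995                 # pretend high intensity marks cell interiors
segments, num_segments = label(cell_mask, return_num=True)

print(f"{num_segments} candidate segments found")  # humans would proofread these
```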

“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”

But Why…?
In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.

But connectomics has not been without controversy over the years.

Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.

“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.

Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.

“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.

Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.

Are We There Yet?
Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.

That’s why automation and algorithms have been so crucial to the field.

Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.

Other groups are working on other animals, like octopuses, arguing that comparing how different forms of intelligence are wired up may prove particularly rich ground for discovery.

The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.

The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.

Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.

“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”

Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders. Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and powerful artificial intelligence.

There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.

Image Credit: Google / Harvard Continue reading

Posted in Human Robots

#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
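For readers who want to see that loop in code, here is a minimal tabular Q-learning sketch on a toy one-dimensional world: one end "burns" and the other holds a jelly bean. It is a textbook illustration of the trial-and-error update rule, not DeepMind's or Berkeley's software, and every detail of the toy world is invented.

```python
# Minimal tabular Q-learning on a toy 1D world (illustration only).
import numpy as np

n_states, n_actions = 5, 2          # positions 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    if nxt == 0:
        return nxt, -1.0, True      # the stove: get burned
    if nxt == n_states - 1:
        return nxt, +1.0, True      # the jelly bean
    return nxt, 0.0, False

for _ in range(2000):
    state, done = 2, False
    while not done:
        action = np.random.randint(n_actions) if np.random.rand() < epsilon else int(Q[state].argmax())
        nxt, reward, done = step(state, action)
        # Core update: nudge the value of (state, action) toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() * (not done) - Q[state, action])
        state = nxt

print(Q)   # the learned values favor moving right, toward the jelly bean
```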

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But going from simulation to the real world doesn’t always translate.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
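Stripped to its essentials, the two-simulator recipe looks something like the sketch below: learn a policy in a cheap training environment, then check it in a second environment with deliberately mismatched physics before it ever touches hardware. The toy environment and the random-search "training" here are invented stand-ins for MuJoCo, SimMechanics, and the actual reinforcement learning algorithm.

```python
# Sketch of the two-simulator idea: train in one environment, validate in a mismatched one.
import numpy as np

class ToyBalance:
    """A 1D toy: an unstable scalar state the policy must keep near zero; `gain` plays the role of the physics."""
    def __init__(self, gain):
        self.gain = gain
    def rollout(self, policy, steps=200):
        x, total = 1.0, 0.0
        for _ in range(steps):
            x = self.gain * x + policy * x          # policy is a single feedback gain
            total += -abs(x)                        # reward: stay close to zero
        return total

train_env = ToyBalance(gain=1.05)       # stand-in for the fast training simulator
validate_env = ToyBalance(gain=1.08)    # stand-in for the higher-fidelity validation simulator

# "Train": crude random search over the feedback gain in the training environment.
candidates = np.linspace(-2.0, 0.0, 201)
best = max(candidates, key=lambda p: train_env.rollout(p))

# "Validate": only a policy that also survives the mismatched simulator would graduate to hardware.
print("best policy:", best)
print("train score:", train_env.rollout(best))
print("validation score:", validate_env.rollout(best))
```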

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube Continue reading

Posted in Human Robots

#439073 There’s a ‘New’ Nirvana Song Out, ...

One of the primary capabilities separating human intelligence from artificial intelligence is our ability to be creative—to use nothing but the world around us, our experiences, and our brains to create art. At present, AI needs to be extensively trained on human-made works of art in order to produce new work, so we’ve still got a leg up. That said, neural networks like OpenAI’s GPT-3 and Russian designer Nikolay Ironov have been able to create content indistinguishable from human-made work.

Now there’s another example of AI artistry that’s hard to tell apart from the real thing, and it’s sure to excite 90s alternative rock fans the world over: a brand-new, never-heard-before Nirvana song. Or, more accurately, a song written by a neural network that was trained on Nirvana’s music.

The song is called “Drowned in the Sun,” and it does have a pretty Nirvana-esque ring to it. The neural network that wrote it is Magenta, which was launched by Google in 2016 with the goal of training machines to create art—or as the tool’s website puts it, exploring the role of machine learning as a tool in the creative process. Magenta was built using TensorFlow, Google’s massive open-source software library focused on deep learning applications.

The song was written as part of an album called Lost Tapes of the 27 Club, a project carried out by a Toronto-based organization called Over the Bridge focused on mental health in the music industry.

Here’s how a computer was able to write a song in the unique style of a deceased musician. Twenty to thirty tracks were fed into Magenta’s neural network in the form of MIDI files. MIDI stands for Musical Instrument Digital Interface, and the format contains the details of a song written in code that represents musical parameters like pitch and tempo. Components of each song, like vocal melody or rhythm guitar, were fed in one at a time.

The neural network found patterns in these different components, and got enough of a handle on them that when given a few notes to start from, it could use those patterns to predict what would come next; in this case, chords and melodies that sound like they could’ve been written by Kurt Cobain.
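A heavily simplified way to see the "predict what comes next" idea is a first-order Markov model over MIDI pitches: count which note tends to follow which, then continue a short seed. Magenta's models are deep neural networks and far more capable; the training melodies below are invented placeholders, not Nirvana data.

```python
# Toy next-note prediction: a first-order Markov chain over MIDI pitch numbers.
from collections import defaultdict
import random

random.seed(0)

# Pretend these are melody pitch sequences extracted from MIDI files.
training_melodies = [
    [64, 64, 67, 69, 67, 64, 62, 60],
    [60, 62, 64, 67, 64, 62, 60, 60],
    [67, 69, 67, 64, 62, 64, 60, 62],
]

# Count pitch-to-pitch transitions found in the training melodies.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def continue_melody(seed, length=8):
    melody = list(seed)
    for _ in range(length):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(random.choice(options))   # sample what tends to come next
    return melody

print(continue_melody([60, 62]))   # given a few notes, predict what could follow
```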

To be clear, Magenta didn’t spit out a ready-to-go song complete with lyrics. The AI wrote the music, but a different neural network wrote the lyrics (using essentially the same process as Magenta), and the team then sifted through “pages and pages” of output to find lyrics that fit the melodies Magenta created.

Eric Hogan, a singer for a Nirvana tribute band who the Over the Bridge team hired to sing “Drowned in the Sun,” felt that the lyrics were spot-on. “The song is saying, ‘I’m a weirdo, but I like it,’” he said. “That is total Kurt Cobain right there. The sentiment is exactly what he would have said.”

Cobain isn’t the only musician the Lost Tapes project tried to emulate; songs in the styles of Jimi Hendrix, Jim Morrison, and Amy Winehouse were also included. What all these artists have in common is that they died at the age of 27.

The project is meant to raise awareness around mental health, particularly among music industry professionals. It’s not hard to think of great artists of all persuasions—musicians, painters, writers, actors—whose lives are cut short due to severe depression and other mental health issues for which it can be hard to get help. These issues are sometimes romanticized, as suffering does tend to create art that’s meaningful, relatable, and timeless. But according to the Lost Tapes website, suicide attempts among music industry workers are more than double that of the general population.

How many more hit songs would these artists have written if they were still alive? We’ll never know, but hopefully Lost Tapes of the 27 Club and projects like it will raise awareness of mental health issues, both in the music industry and in general, and help people in need find the right resources. Because no matter how good computers eventually get at creating music, writing, or other art, as Lost Tapes’ website pointedly says, “Even AI will never replace the real thing.”

Image Credit: Edward Xu on Unsplash Continue reading

Posted in Human Robots