Tag Archives: creatures
#435822 The Internet Is Coming to the Rest of ...
People surf it. Spiders crawl it. Gophers navigate it.
Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.
Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.
And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.
Oh, and musician Peter Gabriel is involved.
“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”
The workshop was a long time in the making.
Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano, and both displayed an exquisite sense of musical timing and melody.
Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”
“It blew me away,” he says.
Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”
Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.
“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”
So over the past six years, the architects of the Dolittlesque initiative have pursued two small pilot projects, one for dolphins and one for chimpanzees.
At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.
Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.
Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.
Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system: images are projected through an underwater viewing window onto a glass panel, and infrared sensors track where the dolphins touch, allowing the animals to play specially designed apps, including one dubbed Whack-a-Fish.
Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.
The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.
“We can use data analysis and technology tools to give non-humans a lot more choice and control.”
—Peter Gabriel
The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.
“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.
Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”
Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
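The Earth Species Project has not published its exact pipeline, but the core idea borrowed from unsupervised machine translation is to embed each species' vocalizations in a vector space and then learn a mapping that aligns the two spaces. Here is a toy sketch assuming, for simplicity, a handful of known anchor pairs and the classic orthogonal-Procrustes solution; truly unsupervised alignment dispenses with the anchors via adversarial training. All data below is invented:

```python
import numpy as np

def align_embeddings(X, Y):
    """Find the orthogonal matrix W minimizing ||X @ W - Y||_F
    (the Procrustes solution), mapping space X into space Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Hypothetical data: 50 anchor vocalizations embedded in two
# 300-dimensional spaces (say, human speech vs. whale codas).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 300))  # source-species embeddings
Y = rng.standard_normal((50, 300))  # target-species embeddings

W = align_embeddings(X, Y)
mapped = X @ W  # source embeddings expressed in the target space

# The nearest neighbor in Y is the candidate "translation" of each X.
nearest = np.argmax(mapped @ Y.T, axis=1)
```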
Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.
“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.
One remaining challenge with this sort of AI software is verification and validation. Machine-learning algorithms are normally benchmarked against human experts, but who is to say whether a cybernetic translation of a sperm whale's clicks is accurate?
One could back-translate an English expression into sperm whale-ese and then into English again. But with the great apes, there might be a better option.
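The back-translation idea is simple to state in code. A minimal sketch, in which the two translate functions are hypothetical stand-ins for models that do not yet exist and the overlap score is deliberately crude:

```python
def round_trip_score(sentence, to_whale, to_english):
    """Score a translator pair by how well a sentence survives the
    round trip English -> whale -> English. Both translate functions
    are hypothetical placeholders for trained models."""
    recovered = to_english(to_whale(sentence))
    a = set(sentence.lower().split())
    b = set(recovered.lower().split())
    return len(a & b) / len(a | b)  # Jaccard word overlap, 0..1
```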
According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.
Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha did.
“If I can have a chat with a cow, maybe I can have more compassion for it.”
—Jeremy Coller
Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and by a bonobo mother named Matata, who had lived for six years in the Congolese rainforest before her capture.
Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes thus grew up fluent in both bonobo and human.
Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.
Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.
And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”
“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”
That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.
At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)
Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.
“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”
An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”
#435769 The Ultimate Optimization Problem: How ...
Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the quantity you are trying to maximize or minimize for optimal results.)
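For readers who haven’t met the term, here is a minimal example in Python, minimizing a toy objective with SciPy. The function and numbers are purely illustrative and have nothing to do with AI for Earth itself:

```python
from scipy.optimize import minimize_scalar

# Objective function: some cost as a function of a single decision
# variable x (say, fertilizer applied). We want the x minimizing it.
def objective(x):
    return (x - 3.0) ** 2 + 2.0  # minimized at x = 3

result = minimize_scalar(objective)
print(result.x)  # ~3.0, the optimal decision under this objective
```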
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have already risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.
Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.
Joppa says such summit discussions should aim for a truly holistic solution.
We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.
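Joppa’s point about local versus global optimization is easy to demonstrate with a toy model. In this invented sketch, two regions share one water budget; optimized separately, their combined demand is infeasible, while joint optimization finds the best feasible allocation. All functions and numbers are made up:

```python
import itertools

WATER_BUDGET = 6  # units of water shared by two regions

def yield_a(w):  # region A's crop yield vs. water used
    return 4 * w - w ** 2 / 2

def yield_b(w):  # region B benefits less per unit of water
    return 2 * w - w ** 2 / 4

# Local optimization: each region maximizes alone, ignoring the budget.
best_a = max(range(WATER_BUDGET + 1), key=yield_a)  # wants 4 units
best_b = max(range(WATER_BUDGET + 1), key=yield_b)  # wants 4 units
# Combined demand is 8 units: infeasible under the shared budget of 6.

# Global optimization: maximize joint yield subject to the budget.
best_joint = max(
    ((a, b) for a, b in itertools.product(range(WATER_BUDGET + 1), repeat=2)
     if a + b <= WATER_BUDGET),
    key=lambda ab: yield_a(ab[0]) + yield_b(ab[1]),
)  # (3, 3): neither region's local optimum, but the best overall
```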
There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.
The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges (climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience).
Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.
Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.
For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?
On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.
We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.
AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.
We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.
By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.
Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.
It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.
Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.
#434673 The World’s Most Valuable AI ...
It recognizes our faces. It knows the videos we might like. And it can even, perhaps, recommend the best course of action to take to maximize our personal health.
Artificial intelligence and its subset of disciplines—such as machine learning, natural language processing, and computer vision—are seemingly becoming integrated into our daily lives whether we like it or not. What was once sci-fi is now ubiquitous research and development in company and university labs around the world.
Similarly, the startups working on many of these AI technologies have seen their proverbial stock rise. More than 30 of these companies are now valued at over a billion dollars, according to data research firm CB Insights, which itself employs algorithms to provide insights into the tech business world.
Private companies with a billion-dollar valuation were so uncommon not that long ago that they were dubbed unicorns. Now there are 325 of these once-rare creatures, with a combined valuation north of a trillion dollars, as CB Insights maintains a running count of this exclusive Unicorn Club.
The subset of AI startups accounts for about 10 percent of the total membership, having grown from zero to 32 in just four years. An unprecedented 17 AI startups broke the billion-dollar barrier in 2018, which was also a record year for venture capital invested in private US AI companies, at $9.3 billion, CB Insights reported.
What exactly is all this money funding?
AI Keeps an Eye Out for You
Let’s start with the bad news first.
Facial recognition is probably one of the most ubiquitous applications of AI today. It’s actually a decades-old technology often credited to a man named Woodrow Bledsoe, who used an instrument called a RAND tablet that could semi-autonomously match faces from a database. That was in the 1960s.
Today, most of us are familiar with facial recognition as a way to unlock our smartphones. But the technology has gained notoriety as a surveillance tool of law enforcement, particularly in China.
It’s no secret that the facial recognition algorithms developed by several of the AI unicorns from China—SenseTime, CloudWalk, and Face++ (also known as Megvii)—are used to monitor the country’s 1.3 billion citizens. Police there are even equipped with AI-powered eyeglasses for such purposes.
A fourth billion-dollar Chinese startup, Yitu Technologies, also produces a platform for facial recognition in the security realm, and develops AI systems in healthcare on top of that. For example, its CARE.AI™ Intelligent 4D Imaging System for Chest CT can reputedly identify in real time a variety of lesions for the possible early detection of cancer.
The AI Doctor Is In
As Peter Diamandis recently noted, AI is rapidly augmenting healthcare and longevity. He mentioned another AI unicorn from China in this regard—iCarbonX, which plans to use machines to develop personalized health plans for every individual.
A couple of AI unicorns on the hardware side of healthcare are OrCam Technologies and Butterfly. The former, an Israeli company, has developed a wearable device for the vision impaired called MyEye that attaches to one’s eyeglasses. The device can identify people and products, as well as read text, conveying the information through discreet audio.
Butterfly Network, out of Connecticut, has completely upended the healthcare market with a handheld ultrasound machine that works with a smartphone.
“Orcam and Butterfly are amazing examples of how machine learning can be integrated into solutions that provide a step-function improvement over state of the art in ultra-competitive markets,” noted Andrew Byrnes, investment director at Comet Labs, a venture capital firm focused on AI and robotics, in an email exchange with Singularity Hub.
AI in the Driver’s Seat
Comet Labs’ portfolio includes two AI unicorns, Megvii and Pony.ai.
The latter is one of three billion-dollar startups developing the AI technology behind self-driving cars, with the other two being Momenta.ai and Zoox.
Founded in 2016 near San Francisco (with another headquarters in China), Pony.ai debuted its latest self-driving system, called PonyAlpha, last year. The platform uses multiple sensors (LiDAR, cameras, and radar) to navigate its environment, but its “sensor fusion technology” makes things simple by choosing the most reliable sensor data for any given driving scenario.
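Pony.ai hasn’t published how PonyAlpha actually scores its sensors, but the general pattern of confidence-based sensor selection is simple to sketch. Everything below, from the per-scenario weights to the readings, is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str        # "lidar", "camera", or "radar"
    distance_m: float  # estimated distance to the obstacle ahead
    confidence: float  # sensor's self-reported reliability, 0..1

def fuse(readings, scenario):
    """Pick the most reliable reading for the current scenario.
    The per-scenario weights here are made up for illustration."""
    weights = {
        ("rain", "lidar"): 0.6, ("rain", "camera"): 0.4, ("rain", "radar"): 1.0,
        ("clear", "lidar"): 1.0, ("clear", "camera"): 0.9, ("clear", "radar"): 0.7,
    }
    return max(readings, key=lambda r: r.confidence * weights[(scenario, r.sensor)])

readings = [Reading("lidar", 42.1, 0.7), Reading("camera", 40.8, 0.9),
            Reading("radar", 41.5, 0.8)]
best = fuse(readings, "rain")  # radar wins when rain degrades the others
```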
Zoox is another San Francisco area startup founded a couple of years earlier. In late 2018, it got the green light from the state of California to be the first autonomous vehicle company to transport a passenger as part of a pilot program. Meanwhile, China-based Momenta.ai is testing level four autonomy for its self-driving system. Autonomous driving levels are ranked zero to five, with level five being equal to a human behind the wheel.
The hype around autonomous driving is currently in overdrive, and Byrnes thinks regulatory roadblocks will keep most self-driving cars in idle for the foreseeable future. The exception, he said, is China, which is adopting a “systems” approach to autonomy for passenger transport.
“If [autonomous mobility] solves bigger problems like traffic that can elicit government backing, then that has the potential to go big fast,” he said. “This is why we believe Pony.ai will be a winner in the space.”
AI in the Back Office
An AI-powered technology that perhaps only fans of the cult classic Office Space might appreciate has suddenly taken the business world by storm—robotic process automation (RPA).
RPA companies take the mundane back office work, such as filling out invoices or processing insurance claims, and turn it over to bots. The intelligent part comes into play because these bots can tackle unstructured data, such as text in an email or even video and pictures, in order to accomplish an increasing variety of tasks.
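In miniature, the “bot” side of RPA is programmatic extraction plus an action. A toy sketch that pulls an invoice number and amount out of unstructured email text with regular expressions; real RPA platforms layer machine learning on top for messier inputs, and the patterns and sample email below are invented:

```python
import re

EMAIL = """Hi team, please process invoice INV-20831
for $1,249.50 by Friday. Thanks!"""

def extract_invoice(text):
    # Find an invoice ID and a dollar amount anywhere in the text.
    number = re.search(r"INV-\d+", text)
    amount = re.search(r"\$([\d,]+\.\d{2})", text)
    return {
        "invoice": number.group(0) if number else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
    }

print(extract_invoice(EMAIL))  # {'invoice': 'INV-20831', 'amount': 1249.5}
```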
Both Automation Anywhere and UiPath are older companies, founded in 2003 and 2005, respectively. However, since just 2017, they have raised nearly a combined $1 billion in disclosed capital.
Cybersecurity Embraces AI
Cybersecurity is another industry where AI is driving investment into startups. Sporting imposing names like CrowdStrike, Darktrace, and Tanium, these cybersecurity companies employ different machine-learning techniques to protect computers and other IT assets beyond the latest software update or virus scan.
Darktrace, for instance, takes its inspiration from the human immune system. Its algorithms can purportedly “learn” the unique pattern of each device and user on a network, detecting emerging problems before things spin out of control.
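Darktrace’s algorithms are proprietary, but the immune-system analogy maps onto a familiar pattern: learn a per-device baseline, then flag deviations from it. A bare-bones sketch using a z-score over observed traffic volume, with the metric, warm-up length, and threshold all chosen arbitrarily:

```python
import statistics

class DeviceBaseline:
    """Learn one device's normal traffic volume, then flag outliers."""

    def __init__(self, warmup=30, threshold=3.0):
        self.history = []
        self.warmup = warmup        # samples needed before judging
        self.threshold = threshold  # z-score cutoff

    def observe(self, bytes_per_min):
        if len(self.history) >= self.warmup:
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            z = (bytes_per_min - mean) / stdev
            if abs(z) > self.threshold:
                # Don't add outliers to the baseline we learn from.
                return f"anomaly (z = {z:.1f})"
        self.history.append(bytes_per_min)
        return "ok"
```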
All three companies are used by major corporations and governments around the world. CrowdStrike itself made headlines a few years ago when it linked the hacking of the Democratic National Committee email servers to the Russian government.
Looking Forward
I could go on, and introduce you to the world’s most valuable startup, a Chinese company called Bytedance that is valued at $75 billion for news curation and an app to create 15-second viral videos. But that’s probably not where VC firms like Comet Labs are generally putting their money.
Byrnes sees real value in startups that are taking “data-driven approaches to problems specific to unique industries.” Take the example of Chicago-based unicorn Uptake Technologies, which analyzes incoming data from machines, from wind turbines to tractors, to predict problems before they occur with the machinery. A not-yet unicorn called PingThings in the Comet Labs portfolio does similar predictive analytics for the energy utilities sector.
“One question we like asking is, ‘What does the state of the art look like in your industry in three to five years?’” Byrnes said. “We ask that a lot, then we go out and find the technology-focused teams building those things.”
Image Credit: Andrey Suslov / Shutterstock.com