Tag Archives: think

#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours, and you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data: “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
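To make that concrete, here’s a minimal sketch of the supervised setup in Python. Everything here is hypothetical and purely for illustration: the feature names, values, and labels stand in for what a real system would extract from face video, and this is not Affectiva’s pipeline.

```python
from sklearn.ensemble import RandomForestClassifier

# Toy labeled data: each row describes one face via already-extracted
# features [mouth_curve, brow_furrow, tears]; each label is the emotion
# a human annotator assigned. (Hypothetical values, for illustration.)
X = [
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.0],
    [0.1, 0.9, 0.0],
    [0.2, 0.8, 0.1],
    [0.1, 0.2, 0.9],
    [0.0, 0.1, 0.8],
]
y = ["happy", "happy", "angry", "angry", "sad", "sad"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.85, 0.15, 0.0]]))  # most likely ['happy']
```

Real systems swap the hand-picked features for deep networks that learn their own features from millions of video frames, but the basic recipe of labeled examples in, emotion predictions out, is the same.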

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions, under the guise of preventing accidents caused by the mental state of distraction or the physical state of drowsiness, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay

Posted in Human Robots

#437269 DeepMind’s Newest AI Programs Itself ...

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.

Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue’s day, has faded into the background.

Key to deep learning’s success is the fact the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.

Now, Alphabet’s DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world’s top computer scientists (and take them years to write).

In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch.

Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games—a different, more complicated task—at a level that was, at times, competitive with human-designed algorithms and achieving superhuman levels of play in 14 games.

DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.

Pavlov’s Digital Dog
First, a little background.

The three main deep learning approaches are supervised, unsupervised, and reinforcement learning.

The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it’d take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?

While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience—weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and trip to the dentist.

In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection—this is the value function—of which direction will maximize the total points, or rewards, it can earn.

Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
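In code, a hand-built value function can be as bare-bones as a table of expected scores that gets nudged toward what actually happened after every move. Here’s a minimal sketch of that idea (tabular Q-learning on made-up toy states), not an actual Atari agent:

```python
import numpy as np

def choose_action(q, state, epsilon=0.1):
    """Mostly follow the value function's projection; occasionally explore."""
    if np.random.rand() < epsilon:
        return np.random.randint(q.shape[1])
    return int(np.argmax(q[state]))

def update_value(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Nudge the estimate toward the reward plus discounted future value."""
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])

q = np.zeros((10, 2))    # 10 toy states, 2 actions (left/right)
a = choose_action(q, state=0)
update_value(q, state=0, action=a, reward=1.0, next_state=1)
```

The `update_value` rule above is exactly the kind of hand-written component that, as described next, DeepMind’s new system learns to discover on its own.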

Learning to Learn (Very Meta)
So, a key to deep reinforcement learning is developing a good value function. And that’s difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions—which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.

LPG trained in a number of toy environments. Most of these were “gridworlds”—literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.

Only in LPG’s case, it had no value function to guide that learning.

Instead, LPG has what DeepMind calls a “meta-learner.” You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both “what to predict,” thereby forming its version of a value function, and “how to learn from it,” applying its newly discovered value function to each decision it makes in the future.
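Here’s a drastically simplified illustration of that learning-to-learn structure (a toy, not DeepMind’s LPG, which learns its rules by gradient descent rather than the random search used here). The outer loop searches over the parameters of an update rule; an inner agent plays a two-armed bandit using whatever rule it’s handed; and candidate rules are judged by the reward the agent ultimately earns:

```python
import numpy as np

rng = np.random.default_rng(0)
ARM_PROBS = [0.3, 0.7]                     # hidden reward probability per arm

def run_agent(alpha, optimism, steps=500):
    """Inner loop: an agent whose update rule is parameterized from outside."""
    q = np.full(2, optimism)               # initial value estimates
    total = 0.0
    for _ in range(steps):
        a = int(np.argmax(q))              # act greedily on current estimates
        r = float(rng.random() < ARM_PROBS[a])
        q[a] += alpha * (r - q[a])         # the candidate update rule
        total += r
    return total

# Outer loop: search over update-rule parameters, keep the best rule.
best_rule, best_return = None, -np.inf
for _ in range(50):
    alpha, optimism = rng.uniform(0.01, 0.5), rng.uniform(0.0, 2.0)
    ret = run_agent(alpha, optimism)
    if ret > best_return:
        best_rule, best_return = (alpha, optimism), ret

print("discovered rule (alpha, optimism):", best_rule)
```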

Prior work in the area has had some success, but according to DeepMind, LPG is the first algorithm to discover reinforcement learning rules from scratch and to generalize beyond training. The latter was particularly surprising because Atari games are so different from the simple worlds LPG trained in—that is, it had never seen anything like an Atari game.

Time to Hand Over the Reins? Not Just Yet
LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in its training environments and even in some Atari games, suggesting it isn’t strictly worse, just that it specializes in certain environments.

This is where there’s room for improvement and more research.

The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.

At the least, though, they say further automation of algorithm discovery—that is, algorithms learning to learn—will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.

Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.

Image Credit: Mike Szczepanski / Unsplash

Posted in Human Robots

#437267 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s New Language Generator GPT-3 Is Shockingly Good—and Completely Mindless
Will Douglas Heaven | MIT Technology Review
“‘Playing with GPT-3 feels like seeing the future,’ Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI.”

ROBOTICS
The Star of This $70 Million Sci-Fi Film Is a Robot
Sarah Bahr | The New York Times
“Erica was created by Hiroshi Ishiguro, a roboticist at Osaka University in Japan, to be ‘the most beautiful woman in the world’—he modeled her after images of Miss Universe pageant finalists—and the most humanlike robot in existence. But she’s more than just a pretty face: Though ‘b’ is still in preproduction, when she makes her debut, producers believe it will be the first time a film has relied on a fully autonomous artificially intelligent actor.”

VIRTUAL REALITY
My Glitchy, Glorious Day at a Conference for Virtual Beings
Emma Grey Ellis | Wired
“Spectators spent much of the time debating who was real and who was fake. …[Lars Buttler’s] eyes seemed awake and alive in a way that the faces of the other participants in the Zoom call—a venture capitalist, a tech founder, and an activist, all of them puppeted by artificial intelligence—were not. ‘Pretty sure Lars is human,’ a (real-person) spectator typed in the in-meeting chat room. ‘I’m starting to think Lars is AI,’ wrote another.”

FUTURE OF FOOD
KFC Is Working With a Russian 3D Bioprinting Firm to Try to Make Lab-Produced Chicken Nuggets
Kim Lyons | The Verge
“The chicken restaurant chain will work with Russian company 3D Bioprinting Solutions to develop bioprinting technology that will ‘print’ chicken meat, using chicken cells and plant material. KFC plans to provide the bioprinting firm with ingredients like breading and spices ‘to achieve the signature KFC taste’ and will seek to replicate the taste and texture of genuine chicken.”

BIOTECH
A CRISPR Cow Is Born. It’s Definitely a Boy
Megan Molteni | Wired
“After nearly five years of research, at least half a million dollars, dozens of failed pregnancies, and countless scientific setbacks, Van Eenennaam’s pioneering attempt to create a line of Crispr’d cattle tailored to the needs of the beef industry all came down to this one calf. Who, as luck seemed sure to have it, was about to enter the world in the middle of a global pandemic.”

GOVERNANCE
Is the Pandemic Finally the Moment for a Universal Basic Income?
Brooks Rainwater and Clay Dillow | Fast Company
“Since February, governments around the globe—including in the US—have intervened in their citizens’ individual financial lives, distributing direct cash payments to backstop workers sidelined by the COVID-19 pandemic. Some are considering keeping such direct assistance in place indefinitely, or at least until the economic shocks subside.”

SCIENCE
How Gödel’s Proof Works
Natalie Wolchover | Wired
“In 1931, the Austrian logician Kurt Gödel pulled off arguably one of the most stunning intellectual achievements in history. Mathematicians of the era sought a solid foundation for mathematics: a set of basic mathematical facts, or axioms, that was both consistent—never leading to contradictions—and complete, serving as the building blocks of all mathematical truths. But Gödel’s shocking incompleteness theorems, published when he was just 25, crushed that dream.”

Image Credit: Pierre Châtel-Innocenti / Unsplash

Posted in Human Robots

#437261 How AI Will Make Drug Discovery ...

If you had to guess how long it takes for a drug to go from an idea to your pharmacy, what would you guess? Three years? Five years? How about the cost? $30 million? $100 million?

Well, here’s the sobering truth: 90 percent of all drug possibilities fail. The few that do succeed take an average of 10 years to reach the market and cost anywhere from $2.5 billion to $12 billion to get there.

But what if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

Welcome to the future of AI and low-cost, ultra-fast, and personalized drug discovery. Let’s dive in.

GANs & Drugs
Around 2012, computer scientist-turned-biophysicist Alex Zhavoronkov started to notice that artificial intelligence was getting increasingly good at image, voice, and text recognition. He knew that all three tasks shared a critical commonality. In each, massive datasets were available, making it easy to train up an AI.

Similar datasets were also available in pharmacology. So, back in 2014, Zhavoronkov started wondering if he could use these datasets and AI to significantly speed up the drug discovery process. He’d heard about a new technique in artificial intelligence known as generative adversarial networks (or GANs). By pitting two neural nets against one another (adversarial), the system can start with minimal instructions and produce novel outcomes (generative). At the time, researchers had been using GANs to do things like design new objects or create one-of-a-kind, fake human faces, but Zhavoronkov wanted to apply them to pharmacology.
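The adversarial setup is easiest to see in miniature. Below is a toy GAN on one-dimensional numbers (purely illustrative: real molecule-generating models work on chemical representations like graphs or SMILES strings, and this is not Insilico’s system):

```python
import torch
import torch.nn as nn

# Generator learns to mimic samples from a "real" distribution;
# the discriminator learns to tell real samples from generated ones.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))             # generator's current attempts

    # Train the discriminator: label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator: make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should cluster near 3.0
```

Swap the one-dimensional numbers for molecular representations and condition the generator on desired drug properties, and you have the rough shape of the idea Zhavoronkov pursued.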

He figured GANs would allow researchers to verbally describe drug attributes: “The compound should inhibit protein X at concentration Y with minimal side effects in humans,” and then the AI could construct the molecule from scratch. To turn his idea into reality, Zhavoronkov set up Insilico Medicine on the campus of Johns Hopkins University in Baltimore, Maryland, and rolled up his sleeves.

Instead of beginning their process in some exotic locale, Insilico’s “drug discovery engine” sifts millions of data samples to determine the signature biological characteristics of specific diseases. The engine then identifies the most promising treatment targets and—using GANs—generates molecules (that is, baby drugs) perfectly suited for them. “The result is an explosion in potential drug targets and a much more efficient testing process,” says Zhavoronkov. “AI allows us to do with fifty people what a typical drug company does with five thousand.”

The results have turned what was once a decade-long war into a month-long skirmish.

In late 2018, for example, Insilico was generating novel molecules in less than 46 days, and this included not just the initial discovery, but also the synthesis of the drug and its experimental validation in computer simulations.

Right now, they’re using the system to hunt down new drugs for cancer, aging, fibrosis, Parkinson’s, Alzheimer’s, ALS, diabetes, and many others. The first drug to result from this work, a treatment for hair loss, is slated to start Phase I trials by the end of 2020.

They’re also in the early stages of using AI to predict the outcomes of clinical trials in advance of the trial. If successful, this technique will enable researchers to strip a bundle of time and money out of the traditional testing process.

Protein Folding
Beyond inventing new drugs, AI is also being used by other scientists to identify new drug targets—that is, the places in the body where drugs bind, another key part of the drug discovery process.

Between 1980 and 2006, despite an annual investment of $30 billion, researchers only managed to find about five new drug targets a year. The trouble is complexity. Most potential drug targets are proteins, and a protein’s structure—meaning the way its linear chain of amino acids folds into a 3D shape—determines its function.

But a protein with merely a hundred amino acids (a rather small protein) can produce a googol-cubed worth of potential shapes—that’s a one followed by three hundred zeroes. This is also why protein-folding has long been considered an intractably hard problem for even the most powerful of supercomputers.
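One way to get a feel for that number, under a back-of-the-envelope assumption (not a derivation from the article): if each of the 100 amino acids could independently adopt on the order of 1,000 conformations, the possibilities multiply out to exactly that scale.

```python
# Back-of-the-envelope: ~1,000 conformations per residue, 100 residues.
conformations = 1000 ** 100
print(len(str(conformations)) - 1)  # 300 -> a 1 followed by 300 zeros
```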

Back in 1994, to monitor computers’ progress in protein-folding, a biennial competition was created. Until 2018, success was fairly rare. But then DeepMind turned its neural networks loose on the problem. Its team created an AI that mines enormous datasets to determine the most likely distances between pairs of a protein’s amino acids and the angles of their chemical bonds—aka, the basics of protein-folding. They called it AlphaFold.
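A cartoon of that prediction task might look like the sketch below: embed each amino acid, pair every residue with every other, and regress a distance for each pair. To be clear, this is a toy under those assumptions (nothing like AlphaFold’s actual architecture), and it’s untrained; a real system would fit these weights against thousands of experimentally solved structures.

```python
import torch
import torch.nn as nn

L_RES, D = 100, 16                    # residues in the protein, embedding size
embed = nn.Embedding(20, D)           # 20 standard amino acid types
head = nn.Sequential(nn.Linear(2 * D, 32), nn.ReLU(), nn.Linear(32, 1))

seq = torch.randint(0, 20, (L_RES,))  # a random "protein" for illustration
h = embed(seq)                        # (L, D) per-residue features
pair = torch.cat([h.unsqueeze(1).expand(L_RES, L_RES, D),
                  h.unsqueeze(0).expand(L_RES, L_RES, D)], dim=-1)
dist_pred = head(pair).squeeze(-1)    # (L, L) predicted residue-residue distances
```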

On AlphaFold’s first foray into the competition, contestant AIs were given 43 protein-folding problems to solve. AlphaFold got 25 right. The second-place team managed a meager three. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Drug Delivery
Another theater of war for improved drugs is the realm of drug delivery. Even here, converging exponential technologies are paving the way for advances with massive implications for both human health and industry.

One key contender is CRISPR, the fast-advancing gene-editing technology that stands to revolutionize synthetic biology and treatment of genetically linked diseases. And researchers have now demonstrated how this tool can be applied to create materials that shape-shift on command. Think: materials that dissolve instantaneously when faced with a programmed stimulus, releasing a specified drug at a highly targeted location.

Yet another potential boon for targeted drug delivery is nanotechnology, whereby medical nanorobots have now been used to fight cancer. In a recent review of medical micro- and nanorobotics, lead authors (from the University of Texas at Austin and University of California, San Diego) found numerous successful tests of in vivo operation of medical micro- and nanorobots.

Drugs From the Future
Covid-19 is uniting the global scientific community with its urgency, prompting scientists to cast aside nation-specific territorialism, research secrecy, and academic publishing politics in favor of expedited therapeutic and vaccine development efforts. And in the wake of rapid acceleration across healthcare technologies, Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.

Riding the convergence of massive datasets, skyrocketing computational power, quantum computing, cognitive surplus capabilities, and remarkable innovations in AI, we are not far from a world in which personalized drugs, delivered directly to specified targets, will graduate from science fiction to the standard of care.

Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”

How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: andreas160578 from Pixabay

Posted in Human Robots

#437236 Why We Need Mass Automation to ...

The scale of goods moving around the planet at any moment is staggering. Raw materials are dug up in one country, spun into parts and pieces in another, and assembled into products in a third. Crossing oceans and continents, they find their way to a local store or direct to your door.

Magically, a roll of toilet paper, power tool, or tube of toothpaste is there just when you need it.

Even more staggering is that this whole system, the global supply chain, works so well that it’s effectively invisible most of the time. Until now, that is. The pandemic has thrown a floodlight on the inner workings of this modern wonder—and it’s exposed massive vulnerabilities.

The e-commerce supply chain is an instructive example. As the world went into lockdown, and everything non-essential went online, demand for digital fulfillment skyrocketed.

Even under “normal” conditions, most e-commerce warehouses were struggling to meet demand. But Covid-19 has further strained the ability to cope with shifting supply, an unprecedented tidal wave of orders, and labor shortages. Local stores are running out of key products. Online grocers and e-commerce platforms are suspending some home deliveries, restricting online purchases of certain items, and limiting new customers. The whole system is being severely tested.

Why? Despite an abundance of 21st century technology, we’re stuck in the 20th century.

Today’s supply chain consists of fleets of ships, trucks, warehouses, and importantly, people scattered around the world. While there are some notable instances of advanced automation, the overwhelming majority of work is still manual, resembling a sort of human-powered bucket brigade, with people wandering around warehouses or standing alongside conveyor belts. Each package of diapers or bottle of detergent ordered by an online customer might be touched dozens of times by warehouse workers before finding its way into a box delivered to a home.

The pandemic has proven the critical need for innovation, driven by increased demand, concerns about the health and safety of workers, and the need for traceability and safety of products and services.

At the 2020 World Economic Forum, there was much discussion about the ongoing societal transformation in which humans and machines work in tandem, automating and augmenting the way we get things done. At the time, pre-pandemic, debate trended toward skepticism and fear of job losses, with some even questioning the ethics and need for these technologies.

Now, we see things differently. To make the global supply chain more resilient to shocks like Covid-19, we must look to technology.

Perfecting the Global Supply Chain: The Massive ‘Matter Router’
Technology has faced and overcome similar challenges in the past.

World War II, for example, drove innovation in techniques for rapid production of many products on a large scale, including penicillin. We went from the availability of one dose of the drug in 1941, to four million sterile packages of the drug every month four years later.

Similarly, today’s companies, big and small, are looking to automation, robotics, and AI to meet the pandemic head on. These technologies are crucial to scaling the infrastructure that will fulfill most of the world’s e-commerce and food distribution needs.

You can think of this new infrastructure as a rapidly evolving “matter router” that will employ increasingly complex robotic systems to move products more freely and efficiently.

Robots powered by specialized AI software, for example, are already learning to adapt to changes in the environment, using the most recent advances in industrial robotics and machine learning. When customers suddenly need to order dramatically new items, these robots don’t need to stop or be reprogrammed. They can perform new tasks by learning from experience using low-cost camera systems and deep learning for visual and image recognition.
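As a rough illustration of the vision side (a generic classifier sketch in PyTorch, not any particular vendor’s system), the models involved can be small enough to retrain quickly on newly labeled camera frames as inventory changes:

```python
import torch
import torch.nn as nn

# A small image classifier: camera frames in, item category out.
# Retraining on labeled frames of new products lets the same network
# handle items it has never seen before.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),   # 10 item categories, 64x64 RGB input
)

frame = torch.randn(1, 3, 64, 64)  # one (random) camera frame for illustration
scores = model(frame)              # per-category scores
print(scores.argmax(dim=1))        # predicted item category
```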

These more flexible robots can work around the clock, helping make facilities less sensitive to sudden changes in workforce and customer demand and strengthening the supply chain.

Today, e-commerce is roughly 12% of retail sales in the US and is expected to rise well beyond 25% within the decade, fueled by changes in buying habits. However, analysts have begun to consider whether the current crisis might cause permanent jumps in those numbers, as it has in the past (for instance with the SARS epidemic in China in 2003). Whatever happens, the larger supply chain will benefit from greater, more flexible automation, especially during global crises.

We must create what Hamza Mudassir of the University of Cambridge calls a “resilient ecosystem that links multiple buyers with multiple vendors, across a mesh of supply chains.” This ecosystem must be backed by robust, efficient, and scalable automation that uses robotics, autonomous vehicles, and the Internet of Things to help track the flow of goods through the supply chain.

The good news? We can accomplish this with technologies we have today.

Image Credit: Guillaume Bolduc / Unsplash

Posted in Human Robots