#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data: “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
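
To make that concrete, here’s a minimal sketch of the training setup in Python. This is emphatically not Affectiva’s pipeline; the feature vectors, labels, and model below are placeholders for whatever a real system extracts from face video and human annotation.

```python
# Minimal sketch: training an emotion classifier from labeled examples.
# The random features stand in for whatever a real system extracts from
# face images or audio (landmark distances, pitch, energy, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 32))                      # placeholder facial features
labels = rng.choice(["happy", "sad", "angry"], size=1000)   # placeholder annotations

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)   # "smile = happy," "tears = sad," ...
print("held-out accuracy:", clf.score(X_test, y_test))
```

In a real system, the random feature matrix would be replaced by facial landmarks or audio features extracted per frame, and the labels would come from human annotators.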

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay

#437269 DeepMind’s Newest AI Programs Itself ...

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.

Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue’s day, has faded into the background.

Key to deep learning’s success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.

Now, Alphabet’s DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world’s top computer scientists (and take them years to write).

In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch.

Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games—a different, more complicated task—at a level that was, at times, competitive with human-designed algorithms and achieving superhuman levels of play in 14 games.

DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.

Pavlov’s Digital Dog
First, a little background.

The three main approaches to deep learning are supervised, unsupervised, and reinforcement learning.

The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it’d take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?

While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience—weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and trip to the dentist.

In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection—this is the value function—of which direction will maximize the total points, or rewards, it can earn.

Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
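
To make the value function idea concrete, here’s a toy sketch of the classic tabular Q-learning update, the kind of hand-designed rule this research builds on (not DeepMind’s new algorithm). The agent keeps a table of estimated future reward for every state-action pair and nudges each estimate toward what it actually observes:

```python
# Toy sketch of a tabular value function (a Q-table) and its update rule.
# This is the textbook Q-learning update, not DeepMind's learned rule.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate: how fast estimates move toward new evidence
GAMMA = 0.99   # discount factor: how much future rewards count
ACTIONS = ["left", "right"]

q_table = defaultdict(float)   # maps (state, action) -> estimated future reward

def choose_action(state, epsilon=0.1):
    """Pick the action the value function rates highest, exploring occasionally."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Nudge the estimate toward the observed reward plus the best guess about what follows."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])
```

Atari-playing agents replace the table with a neural network, but the logic of projecting future reward and correcting that projection move by move is the same in spirit.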

Learning to Learn (Very Meta)
So, a key to deep reinforcement learning is developing a good value function. And that’s difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions—which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.

LPG trained in a number of toy environments. Most of these were “gridworlds”—literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
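
The paper’s exact environments aren’t reproduced here, but the general shape of such a gridworld is simple enough to sketch. The grid size, object placement, and reward values below are invented for illustration:

```python
# Illustrative gridworld in the spirit of the training environments described
# above. Sizes, object positions, and reward values are made up for the sketch.
class GridWorld:
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, size=5):
        self.size = size
        self.agent = (0, 0)
        # objects: position -> reward (positive) or punishment (negative)
        self.objects = {(2, 2): 1.0, (3, 1): -1.0, (4, 4): 5.0}

    def step(self, move):
        dr, dc = self.MOVES[move]
        r = min(max(self.agent[0] + dr, 0), self.size - 1)
        c = min(max(self.agent[1] + dc, 0), self.size - 1)
        self.agent = (r, c)
        reward = self.objects.pop(self.agent, 0.0)   # collect an object if present
        done = not self.objects                      # episode ends when the grid is cleared
        return self.agent, reward, done

env = GridWorld()
state, reward, done = env.step("down")   # agent moves from (0, 0) to (1, 0)
```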

Only in LPG’s case, it had no value function to guide that learning.

Instead, LPG has what DeepMind calls a “meta-learner.” You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both “what to predict,” thereby forming its version of a value function, and “how to learn from it,” applying its newly discovered value function to each decision it makes in the future.
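
Structurally, that’s a two-loop setup: an inner loop trains agents with whatever update rule the meta-learner currently proposes, and an outer loop adjusts that rule based on how the agents perform. The toy below exists only to make the two loops visible; the single scalar “rule parameter” and made-up score stand in for LPG’s far richer learned update rule and meta-gradient.

```python
# Structural sketch of "learning to learn": an outer loop tunes an update rule,
# an inner loop trains fresh agents with it and reports back. The objective is
# invented purely to make the two loops visible; it is not LPG's actual math.
import random

def sample_environment():
    # Stand-in for drawing one of many training environments.
    return random.uniform(0.0, 1.0)

def inner_loop_score(rule_param, env):
    # Stand-in for "train an agent with this rule in this environment
    # and measure the reward it collects." Higher is better.
    return -(rule_param - env) ** 2

rule_param = 0.0            # the meta-learned quantity (LPG learns a whole update rule)
meta_lr, eps = 0.1, 1e-3

for meta_step in range(500):
    env = sample_environment()
    # Crude finite-difference stand-in for a meta-gradient: how does agent
    # performance change as the rule changes?
    grad = (inner_loop_score(rule_param + eps, env)
            - inner_loop_score(rule_param - eps, env)) / (2 * eps)
    rule_param += meta_lr * grad

print("meta-learned rule parameter:", round(rule_param, 3))
```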

Prior work in the area has had some success, but according to DeepMind, LPG is the first algorithm to discover reinforcement learning rules from scratch and to generalize beyond training. The latter was particularly surprising because Atari games are so different from the simple worlds LPG trained in—that is, it had never seen anything like an Atari game.

Time to Hand Over the Reins? Not Just Yet
LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in training and even some Atari games, which suggests it isn’t strictly worse, just that it specializes in some environments.

This is where there’s room for improvement and more research.

The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.

At the least, though, they say further automation of algorithm discovery—that is, algorithms learning to learn—will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.

Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.

Image Credit: Mike Szczepanski / Unsplash

#437261 How AI Will Make Drug Discovery ...

If you had to guess how long it takes for a drug to go from an idea to your pharmacy, what would you guess? Three years? Five years? How about the cost? $30 million? $100 million?

Well, here’s the sobering truth: 90 percent of all drug candidates fail. The few that do succeed take an average of 10 years to reach the market and cost anywhere from $2.5 billion to $12 billion to get there.

But what if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

Welcome to the future of AI and low-cost, ultra-fast, and personalized drug discovery. Let’s dive in.

GANs & Drugs
Around 2012, computer scientist-turned-biophysicist Alex Zhavoronkov started to notice that artificial intelligence was getting increasingly good at image, voice, and text recognition. He knew that all three tasks shared a critical commonality. In each, massive datasets were available, making it easy to train up an AI.

Similar datasets were also available in pharmacology. So, back in 2014, Zhavoronkov started wondering if he could use these datasets and AI to significantly speed up the drug discovery process. He’d heard about a new technique in artificial intelligence known as generative adversarial networks (or GANs). By pitting two neural nets against one another (adversarial), the system can start with minimal instructions and produce novel outcomes (generative). At the time, researchers had been using GANs to do things like design new objects or create one-of-a-kind, fake human faces, but Zhavoronkov wanted to apply them to pharmacology.
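
Here’s the adversarial mechanic in miniature, as a generic sketch rather than Insilico’s engine: a generator proposes candidates, a discriminator judges them against real examples, and the two are trained against each other. The random 16-dimensional vectors below are stand-ins for molecular representations.

```python
# Minimal GAN sketch: a generator learns to produce vectors the discriminator
# can't tell apart from "real" ones. This shows the adversarial training loop
# only; it is not a molecule-generation system.
import torch
import torch.nn as nn

DIM, NOISE = 16, 8
gen = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DIM))
disc = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(256, DIM) + 2.0   # placeholder "real molecules"

for step in range(200):
    real = real_data[torch.randint(0, 256, (64,))]
    fake = gen(torch.randn(64, NOISE))

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator call its output real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In a drug discovery setting, the “real” examples would be encodings of known molecules with the desired properties, and the generator’s outputs would be decoded back into candidate structures for testing.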

He figured GANs would allow researchers to verbally describe drug attributes: “The compound should inhibit protein X at concentration Y with minimal side effects in humans,” and then the AI could construct the molecule from scratch. To turn his idea into reality, Zhavoronkov set up Insilico Medicine on the campus of Johns Hopkins University in Baltimore, Maryland, and rolled up his sleeves.

Instead of beginning the hunt for compounds in some exotic locale, Insilico’s “drug discovery engine” sifts millions of data samples to determine the signature biological characteristics of specific diseases. The engine then identifies the most promising treatment targets and—using GANs—generates molecules (that is, baby drugs) perfectly suited for them. “The result is an explosion in potential drug targets and a much more efficient testing process,” says Zhavoronkov. “AI allows us to do with fifty people what a typical drug company does with five thousand.”

The results have turned what was once a decade-long war into a month-long skirmish.

In late 2018, for example, Insilico was generating novel molecules in fewer than 46 days, and this included not just the initial discovery, but also the synthesis of the drug and its experimental validation in computer simulations.

Right now, they’re using the system to hunt down new drugs for cancer, aging, fibrosis, Parkinson’s, Alzheimer’s, ALS, diabetes, and many others. The first drug to result from this work, a treatment for hair loss, is slated to start Phase I trials by the end of 2020.

They’re also in the early stages of using AI to predict the outcomes of clinical trials in advance of the trial. If successful, this technique will enable researchers to strip a bundle of time and money out of the traditional testing process.

Protein Folding
Beyond inventing new drugs, AI is also being used by other scientists to identify new drug targets—that is, the places in the body where a drug binds, another key part of the drug discovery process.

Between 1980 and 2006, despite an annual investment of $30 billion, researchers only managed to find about five new drug targets a year. The trouble is complexity. Most potential drug targets are proteins, and a protein’s structure—meaning the way a 2D sequence of amino acids folds into a 3D protein—determines its function.

But a protein with merely a hundred amino acids (a rather small protein) can produce a googol-cubed worth of potential shapes—that’s a one followed by three hundred zeroes. This is also why protein-folding has long been considered an intractably hard problem for even the most powerful of supercomputers.
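
One way to see where a number of that size can come from, as a back-of-the-envelope count in the spirit of Levinthal’s paradox (the per-residue figure is chosen purely to reproduce the article’s estimate, not measured):

```python
# Back-of-the-envelope count behind "one followed by three hundred zeroes."
# Assuming, purely for illustration, ~1,000 possible conformations per amino
# acid, the options multiply across a 100-residue chain.
conformations_per_residue = 1_000
residues = 100
total = conformations_per_residue ** residues   # 1000**100 == 10**300
print(len(str(total)) - 1)                      # prints 300 (zeros after the leading 1)
```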

Back in 1994, to monitor supercomputers’ progress in protein-folding, a biennial competition was created. Until 2018, success was fairly rare. But then the creators of DeepMind turned their neural networks loose on the problem. They created an AI that mines enormous datasets to determine the most likely distances between pairs of a protein’s amino acids and the angles of their chemical bonds—aka, the basics of protein-folding. They called it AlphaFold.
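
To give a flavor of the “predict pairwise distances” framing (a toy only: AlphaFold’s inputs, features, and architecture are vastly more sophisticated, and the training data below is synthetic), a regressor can map features of a residue pair, such as the two residue types plus how far apart they sit in the sequence, to an estimated distance.

```python
# Toy illustration of predicting a distance for a pair of residues from simple
# pair features. The training data is synthetic and the "geometry" is made up;
# this only sketches the regression framing, not AlphaFold.
import numpy as np
from sklearn.neural_network import MLPRegressor

AMINO_ACIDS = 20
rng = np.random.default_rng(1)

def pair_features(res_a, res_b, seq_separation):
    # One-hot encode the two residue types, plus the (scaled) sequence separation.
    feats = np.zeros(2 * AMINO_ACIDS + 1)
    feats[res_a] = 1.0
    feats[AMINO_ACIDS + res_b] = 1.0
    feats[-1] = seq_separation / 100.0
    return feats

# Synthetic training set: distances loosely grow with sequence separation.
X, y = [], []
for _ in range(2000):
    a, b = rng.integers(0, AMINO_ACIDS, size=2)
    sep = int(rng.integers(1, 100))
    X.append(pair_features(a, b, sep))
    y.append(3.8 * sep ** 0.5 + rng.normal(0.0, 1.0))   # invented geometry

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(model.predict([pair_features(3, 7, seq_separation=40)]))   # estimated distance
```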

On AlphaFold’s first foray into the competition, contestants were given 43 protein-folding problems to solve. AlphaFold got 25 right. The second-place team managed a meager three. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Drug Delivery
Another theater of war for improved drugs is the realm of drug delivery. Even here, converging exponential technologies are paving the way for massive implications in both human health and industry shifts.

One key contender is CRISPR, the fast-advancing gene-editing technology that stands to revolutionize synthetic biology and treatment of genetically linked diseases. And researchers have now demonstrated how this tool can be applied to create materials that shape-shift on command. Think: materials that dissolve instantaneously when faced with a programmed stimulus, releasing a specified drug at a highly targeted location.

Yet another potential boon for targeted drug delivery is nanotechnology; medical nanorobots have now been used to fight cancer. In a recent review of medical micro- and nanorobotics, lead authors (from the University of Texas at Austin and University of California, San Diego) found numerous successful tests of in vivo operation of medical micro- and nanorobots.

Drugs From the Future
Covid-19 is uniting the global scientific community with its urgency, prompting scientists to cast aside nation-specific territorialism, research secrecy, and academic publishing politics in favor of expedited therapeutic and vaccine development efforts. And in the wake of rapid acceleration across healthcare technologies, Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.

Riding the convergence of massive datasets, skyrocketing computational power, quantum computing, cognitive surplus capabilities, and remarkable innovations in AI, we are not far from a world in which personalized drugs, delivered directly to specified targets, will graduate from science fiction to the standard of care.

Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”

How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: andreas160578 from Pixabay

#437258 This Startup Is 3D Printing Custom ...

Around 1.9 million people in the US are currently living with limb loss. The trauma of losing a limb is just the beginning of what amputees have to face, with the sky-high cost of prosthetics making their circumstance that much more challenging.

Prosthetics can run over $50,000 for a complex limb (like an arm or a leg) and aren’t always covered by insurance. As if shelling out that sum one time wasn’t costly enough, kids’ prosthetics need to be replaced as they outgrow them, meaning the total expense can reach hundreds of thousands of dollars.

A startup called Unlimited Tomorrow is trying to change this, and using cutting-edge technology to do so. Based in Rhinebeck, New York, a town about two hours north of New York City, the company was founded by 23-year-old Easton LaChappelle. He’d been teaching himself the basics of robotics and building prosthetics since grade school (his 8th grade science fair project was a robotic arm) and launched his company in 2014.

After six years of research and development, the company launched its TrueLimb product last month, describing it as an affordable, next-generation prosthetic arm using a custom remote-fitting process where the user never has to leave home.

The technologies used for TrueLimb’s customization and manufacturing are pretty impressive, in that they both cut costs and make the user’s experience a lot less stressful.

For starters, the entire purchase, sizing, and customization process for the prosthetic can be done remotely. Here’s how it works. First, prospective users fill out an eligibility form and give information about their residual limb. If they’re a qualified candidate for a prosthetic, Unlimited Tomorrow sends them a 3D scanner, which they use to scan their residual limb.

The company uses the scans to design a set of test sockets (the component that connects the residual limb to the prosthetic), which are mailed to the user. The company schedules a video meeting with the user for them to try on and discuss the different sockets, with the goal of finding the one that’s most comfortable; new sockets can be made based on the information collected during the video consultation. The user selects their skin tone from a swatch with 450 options, then Unlimited Tomorrow 3D prints and assembles the custom prosthetic and tests it before shipping it out.

“We print the socket, forearm, palm, and all the fingers out of durable nylon material in full color,” LaChappelle told Singularity Hub in an email. “The only components that aren’t 3D printed are the actuators, tendons, electronics, batteries, sensors, and the nuts and bolts. We are an extreme example of final use 3D printing.”

Unlimited Tomorrow’s website lists TrueLimb’s cost as “as low as $7,995.” When you consider the customization and capabilities of the prosthetic, this is incredibly low. According to LaChappelle, the company created a muscle sensor that picks up muscle movement at a higher resolution than the industry-standard electromyography sensors. The sensors read signals from nerves in the residual limb that control motions like bending a finger. This means that when a user thinks about bending a finger, the nerve fires and the prosthetic’s sensors can detect the signal and translate it into the action.
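
As a rough illustration of that sensing-to-motion chain (not Unlimited Tomorrow’s firmware; the sensor scaling, threshold, and actuator interface below are assumptions made for the sketch), a control loop might smooth the incoming muscle signal and trigger a grip when it crosses a threshold:

```python
# Rough sketch of a muscle-signal-to-motion loop: read a sensor value, smooth
# it, and close a finger when activation crosses a threshold. The normalization,
# threshold, and actuator API are hypothetical placeholders.
from collections import deque

WINDOW = 50        # samples in the smoothing window
THRESHOLD = 0.6    # normalized activation needed to trigger a grip

class StubActuator:
    """Placeholder for the prosthetic's motor controller (hypothetical API)."""
    def close_finger(self, name):
        print(f"closing {name} finger")
    def relax_finger(self, name):
        print(f"relaxing {name} finger")

recent = deque(maxlen=WINDOW)

def on_sensor_sample(raw_value, actuator):
    """Handle one muscle-sensor reading, assumed normalized to the range 0..1."""
    recent.append(abs(raw_value))
    envelope = sum(recent) / len(recent)   # simple moving-average envelope
    if envelope > THRESHOLD:
        actuator.close_finger("index")
    else:
        actuator.relax_finger("index")

# Simulated burst of muscle activity followed by relaxation.
actuator = StubActuator()
for sample in [0.1] * 10 + [0.9] * 60 + [0.1] * 60:
    on_sensor_sample(sample, actuator)
```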

“Working with children using our device, I’ve witnessed a physical moment where the brain ‘clicks’ and starts moving the hand rather than focusing on moving the muscles,” LaChappelle said.

The cost savings come both from the direct-to-consumer model and the fact that Unlimited Tomorrow doesn’t use any outside suppliers. “We create every piece of our product,” LaChappelle said. “We don’t rely on another prosthetic manufacturer to make expensive sensors or electronics. By going direct to consumer, we cut out all the middlemen that usually drive costs up.” Similar devices on the market can cost up to $100,000.

Unlimited Tomorrow is primarily focused on making prosthetics for kids; when they outgrow their first TrueLimb, they send it back, and the company upcycles the expensive quality components and integrates them into a new customized device.

Unlimited Tomorrow isn’t the first to use 3D printing for prosthetics. Florida-based Limbitless Solutions does so too, and industry experts believe the technology is the future of artificial limbs.

“I am constantly blown away by this tech,” LaChappelle said. “We look at technology as the means to augment the human body and empower people.”

Image Credit: Unlimited Tomorrow

#437230 How Drones and Aerial Vehicles Could ...

Drones, personal flying vehicles, and air taxis may be part of our everyday life in the very near future. Drones and air taxis will create new means of mobility and transport routes. Drones will be used for surveillance and delivery, and in the construction sector as it moves towards automation.

The introduction of these aerial craft into cities will require the built environment to change dramatically. Drones and other new aerial vehicles will require landing pads, charging points, and drone ports. They could usher in new styles of building, and lead to more sustainable design.

My research explores the impact of aerial vehicles on urban design, mapping out possible future trajectories.

An Aerial Age
Already, civilian drones can vary widely in size and complexity. They can carry a range of items, from high-resolution cameras, delivery mechanisms, and thermal imaging technology to speakers and scanners. In the public sector, drones are used in disaster response and by the fire service to tackle fires that could endanger firefighters.

During the coronavirus pandemic, drones have been used by the police to enforce lockdown. Drones normally used in agriculture have sprayed disinfectant over cities. In the UK, drone delivery trials are taking place to carry medical items to the Isle of Wight.

Alongside drones, our future cities could also be populated by vertical takeoff and landing craft (VTOL), used as private vehicles and air taxis.

These vehicles are familiar to sci-fi fans. The late Syd Mead’s illustrations of the Spinner VTOL craft in the film Blade Runner captured the popular imagination, and the screens for the Spinners in Blade Runner 2049 created by Territory Studio provided a careful design fiction of the experience of piloting these types of vehicle.

Now, though, these flying vehicles are reality. A number of companies are developing eVTOL craft with electric multi-rotor jets, and a whole new motorsport is being established around them.

These aircraft have the potential to change our cities. However, they need to be tested extensively in urban airspace. A study conducted by Airbus found that public concerns about VTOL use focused on the safety of those on the ground and noise emissions.

New Cities
The widespread adoption of drones and VTOL will lead to new architecture and infrastructure. Existing buildings will require adaptations: landing pads, solar photovoltaic panels for energy efficiency, charging points for delivery drones, and landscaping to mitigate noise emissions.

A number of companies are already trialing drone delivery services. Existing buildings will need to be adapted to accommodate these new networks, and new design principles will have to be implemented in future ones.

The architect Saúl Ajuria Fernández has developed a design for a delivery drone port hub. This drone port acts like a beehive where drones recharge and collect parcels for distribution. Architectural firm Humphreys & Partners’ Pier 2, a design for a modular apartment building of the future, includes a cantilevered drone port for delivery services.

The Norman Foster Foundation has designed a drone port for delivery of medical supplies and other items for rural communities in Rwanda. The structure is also intended to function as a space for the public to congregate, as well as to receive training in robotics.

Drones may also help the urban environment become more sustainable. Researchers at the University of Stuttgart have developed a re-configurable architectural roof canopy system deployed by drones. By adjusting to follow the direction of the sun, the canopy provides shade and reduces reliance on ventilation systems.

Demand for air taxis and personal flying vehicles will develop where failures in other transport systems take place. The Airbus research found that, of the cities surveyed, the highest demand for VTOLs was in Los Angeles and Mexico City, urban areas famous for traffic pollution. To accommodate these aerial vehicles, urban space will need to transform to include landing pads, airport-like infrastructure, and recharge points.

Furthermore, this whole logistics system in lower airspace (below 500 feet), or what I term “hover space,” will need an urban traffic management system. One great example of how this hover space could work can be seen in Drone Aviary, a speculative project from design studio Superflux in which drones with different functions move around an urban area as a network, following different paths at varying heights.

We are at a critical period in urban history, faced by climatic breakdown and pandemic. Drones and aerial vehicles can be part of a profound rethink of the urban environment.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA
