Cars Will Soon Be Able to Sense and ...
Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.
Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.
Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.
What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?
Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.
Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.
Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data: “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
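To make that concrete, here’s a minimal sketch of the supervised-learning idea, assuming face images have already been reduced to numeric feature vectors; the data is synthetic, and this is an illustration of the general approach, not Affectiva’s actual pipeline.

```python
# A minimal sketch of supervised emotion classification, not any vendor's
# actual pipeline. Assumes faces have already been reduced to numeric
# feature vectors (e.g., distances between facial landmarks).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 600 faces, 20 features each, each labeled
# with one of three emotions.
X = rng.normal(size=(600, 20))
y = rng.integers(0, 3, size=600)  # 0 = happy, 1 = sad, 2 = angry

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~chance on random data
```

A production system would swap the random features for learned representations and train on millions of labeled examples, but the train-on-labels, predict-on-new-faces loop is the same.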
Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.
But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).
Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?
Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.
And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.
Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.
A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.
Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.
Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.
Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.
Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.
In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.
Image Credit: Free-Photos from Pixabay
DeepMind’s Newest AI Programs Itself ...
When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.
Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.
Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue’s day, has faded into the background.
Key to deep learning’s success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.
Now, Alphabet’s DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world’s top computer scientists (and take them years to write).
In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch.
Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games—a different, more complicated task—at a level that was, at times, competitive with human-designed algorithms and achieving superhuman levels of play in 14 games.
DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.
Pavlov’s Digital Dog
First, a little background.
Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.
The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it’d take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?
While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience—weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and trip to the dentist.
In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection—this is the value function—of which direction will maximize the total points, or rewards, it can earn.
Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
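As a toy illustration of that projection step, a value function can be thought of as a mapping from (state, action) pairs to expected future points; the numbers below are invented, and real deep RL systems approximate this mapping with a neural network rather than a lookup table.

```python
# A toy illustration of a value function choosing between paddle actions.
# The values here are invented for demonstration; a real system learns
# them from experience and approximates them with a neural network.
from collections import defaultdict

# Estimated future reward (return) for each (state, action) pair.
q_values = defaultdict(float)
q_values[("ball_left", "move_left")] = 4.2   # likely to hit the ball
q_values[("ball_left", "move_right")] = 0.3  # likely to miss

def choose_action(state, actions=("move_left", "move_right")):
    """Pick the action the value function predicts will earn the most points."""
    return max(actions, key=lambda a: q_values[(state, a)])

print(choose_action("ball_left"))  # -> move_left
```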
Learning to Learn (Very Meta)
So, a key to deep reinforcement learning is developing a good value function. And that’s difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions—which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.
LPG trained in a number of toy environments. Most of these were “gridworlds”—literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
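For a concrete sense of what such an environment looks like, here’s a minimal gridworld sketch with a hypothetical reward scheme (one object worth a point, another worth a penalty); the environments DeepMind actually used were more varied.

```python
# A minimal gridworld sketch: an agent moves square to square on a 2D
# grid and earns points or punishments when it lands on objects. A
# simplified stand-in for the training environments described above.
class GridWorld:
    def __init__(self, size=5):
        self.size = size
        self.agent = (0, 0)
        # Objects at fixed squares: +1 reward or -1 punishment.
        self.objects = {(2, 3): 1.0, (4, 1): -1.0}

    def step(self, move):  # move is a (dx, dy) tuple
        x = min(max(self.agent[0] + move[0], 0), self.size - 1)
        y = min(max(self.agent[1] + move[1], 0), self.size - 1)
        self.agent = (x, y)
        return self.objects.pop(self.agent, 0.0)  # reward, consumed on pickup

env = GridWorld()
total = sum(env.step(m) for m in [(1, 0), (1, 0), (0, 1), (0, 1), (0, 1)])
print(total)  # 1.0: the agent reaches the rewarding object at (2, 3)
```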
Only in LPG’s case, it had no value function to guide that learning.
Instead, LPG has what DeepMind calls a “meta-learner.” You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both “what to predict,” thereby forming its version of a value function, and “how to learn from it,” applying its newly discovered value function to each decision it makes in the future.
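In rough outline, that nested structure might look like the highly simplified sketch below, which hill-climbs on the update rule an inner agent uses to learn. It illustrates the “algorithm within an algorithm” idea only; it is not DeepMind’s actual LPG implementation, which meta-learns a far richer update rule with gradients across many environments.

```python
# A highly simplified sketch of "learning to learn": an outer meta-learner
# searches over the update rule an inner agent uses, keeping whatever rule
# makes the inner agent learn best. Not DeepMind's actual LPG code.
import random

random.seed(0)

def inner_learning_error(lr, steps=50):
    """Run an inner agent that nudges a prediction toward noisy rewards
    using learning rate `lr`; return its total squared prediction error."""
    prediction, error = 0.0, 0.0
    for _ in range(steps):
        reward = random.gauss(1.0, 0.1)  # stand-in environment feedback
        error += (reward - prediction) ** 2
        prediction += lr * (reward - prediction)  # the "discovered" rule
    return error

lr = 0.05  # the meta-parameter defining the inner update rule
for _ in range(200):  # outer loop: hill-climb on the rule itself
    candidate = min(max(lr + random.uniform(-0.05, 0.05), 0.01), 1.0)
    if inner_learning_error(candidate) < inner_learning_error(lr):
        lr = candidate  # keep rules that help the inner agent learn faster

print(f"meta-learned learning rate: {lr:.2f}")
```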
Prior work in the area has had some success, but according to DeepMind, LPG is the first algorithm to discover reinforcement learning rules from scratch and to generalize beyond training. The latter was particularly surprising because Atari games are so different from the simple worlds LPG trained in—that is, it had never seen anything like an Atari game.
Time to Hand Over the Reins? Not Just Yet
LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in its training environments and even in some Atari games, which suggests it isn’t strictly worse, just that it specializes in certain environments.
This is where there’s room for improvement and more research.
The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.
At the least, though, they say further automation of algorithm discovery—that is, algorithms learning to learn—will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.
Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.
Image Credit: Mike Szczepanski / Unsplash
This Startup Is 3D Printing Custom ...
Around 1.9 million people in the US are currently living with limb loss. The trauma of losing a limb is just the beginning of what amputees have to face, with the sky-high cost of prosthetics making their circumstance that much more challenging.
Prosthetics can run over $50,000 for a complex limb (like an arm or a leg) and aren’t always covered by insurance. As if shelling out that sum one time wasn’t costly enough, kids’ prosthetics need to be replaced as they outgrow them, meaning the total expense can reach hundreds of thousands of dollars.
A startup called Unlimited Tomorrow is trying to change this, and using cutting-edge technology to do so. Based in Rhinebeck, New York, a town about two hours north of New York City, the company was founded by 23-year-old Easton LaChappelle. He’d been teaching himself the basics of robotics and building prosthetics since grade school (his 8th grade science fair project was a robotic arm) and launched his company in 2014.
After six years of research and development, the company launched its TrueLimb product last month, describing it as an affordable, next-generation prosthetic arm using a custom remote-fitting process where the user never has to leave home.
The technologies used for TrueLimb’s customization and manufacturing are pretty impressive, in that they both cut costs and make the user’s experience a lot less stressful.
For starters, the entire purchase, sizing, and customization process for the prosthetic can be done remotely. Here’s how it works. First, prospective users fill out an eligibility form and give information about their residual limb. If they’re a qualified candidate for a prosthetic, Unlimited Tomorrow sends them a 3D scanner, which they use to scan their residual limb.
The company uses the scans to design a set of test sockets (the component that connects the residual limb to the prosthetic), which are mailed to the user. The company schedules a video meeting with the user for them to try on and discuss the different sockets, with the goal of finding the one that’s most comfortable; new sockets can be made based on the information collected during the video consultation. The user selects their skin tone from a swatch with 450 options, then Unlimited Tomorrow 3D prints and assembles the custom prosthetic and tests it before shipping it out.
“We print the socket, forearm, palm, and all the fingers out of durable nylon material in full color,” LaChappelle told Singularity Hub in an email. “The only components that aren’t 3D printed are the actuators, tendons, electronics, batteries, sensors, and the nuts and bolts. We are an extreme example of final use 3D printing.”
Unlimited Tomorrow’s website lists TrueLimb’s cost as “as low as $7,995.” When you consider the customization and capabilities of the prosthetic, this is incredibly low. According to LaChappelle, the company created a muscle sensor that picks up muscle movement at a higher resolution than the industry-standard electromyography sensors. The sensors read signals from nerves in the residual limb that are used to control motions like bending a finger. This means that when a user thinks about bending a finger, the nerve fires and the prosthetic’s sensors detect the signal and translate it into the action.
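Conceptually, that last step is a signal-processing problem: smooth a noisy sensor reading and act when it crosses an activation threshold. The sketch below is purely hypothetical, with made-up readings and threshold, since Unlimited Tomorrow’s actual sensor pipeline isn’t public.

```python
# A purely hypothetical sketch of turning a muscle-sensor reading into a
# finger command: smooth the noisy signal, then trigger when it crosses
# a threshold. Unlimited Tomorrow's real pipeline is not public.
def smooth(samples, window=3):
    """Moving average to suppress sensor noise."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

THRESHOLD = 0.6  # activation level that counts as "bend the finger"

readings = [0.1, 0.15, 0.1, 0.5, 0.7, 0.8, 0.75, 0.2, 0.1]  # fake signal
for level in smooth(readings):
    if level > THRESHOLD:
        print("bend finger")  # in hardware: drive the finger's actuator
        break
```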
“Working with children using our device, I’ve witnessed a physical moment where the brain ‘clicks’ and starts moving the hand rather than focusing on moving the muscles,” LaChappelle said.
The cost savings come both from the direct-to-consumer model and the fact that Unlimited Tomorrow doesn’t use any outside suppliers. “We create every piece of our product,” LaChappelle said. “We don’t rely on another prosthetic manufacturer to make expensive sensors or electronics. By going direct to consumer, we cut out all the middlemen that usually drive costs up.” Similar devices on the market can cost up to $100,000.
Unlimited Tomorrow is primarily focused on making prosthetics for kids; when they outgrow their first TrueLimb, they send it back, and the company upcycles the expensive, high-quality components, integrating them into a new customized device.
Unlimited Tomorrow isn’t the first to use 3D printing for prosthetics. Florida-based Limbitless Solutions does so too, and industry experts believe the technology is the future of artificial limbs.
“I am constantly blown away by this tech,” LaChappelle said. “We look at technology as the means to augment the human body and empower people.”
Image Credit: Unlimited Tomorrow
China and AI: What the World Can Learn ...
China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.
The move has led—at least in the West—to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.
The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.
Nesta has explored the broad spectrum of AI activity in China—the good, the bad, and the unexpected.
The Good
China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.
Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.
Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.
The Bad
But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.
Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.
The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.
In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.
China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.
The Unexpected
Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.
But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.
Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.
China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand—and learn from—the nuances of what’s really happening.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Dominik Vanyi on Unsplash