Tag Archives: media
#432880 Google’s Duplex Raises the Question: ...
By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.
Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots, which were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would accept simple commands like “Yes” or “No,” replacing button presses with a limited ability to recognize a few words. Using them was often far more frustrating than navigating a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
[Embedded audio: Google Duplex scheduling a hair salon appointment]
[Embedded audio: Google Duplex calling a restaurant]
Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.
You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.
Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. SoundHound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).
The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands of things they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as glorified menus, gimmicks, or simply bizarre conversation partners.
Yet Google, which surely remembers from Glass the risk of debuting technology prematurely, especially technology that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to carry out computing tasks, schedule meetings, book hotels, or purchase items.
It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.
Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and its warnings about deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.
A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.
Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.
“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”
From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.
In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.
Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.
Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.
As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?
Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.
Image Credit: Besjunior / Shutterstock.com
#432431 Why Slowing Down Can Actually Help Us ...
Leah Weiss believes that when we pay attention to how we do our work—our thoughts and feelings about what we do and why we do it—we can tap into a much deeper reservoir of courage, creativity, meaning, and resilience.
As a researcher, educator, and author, Weiss teaches a course called “Leading with Compassion and Mindfulness” at the Stanford Graduate School of Business, one of the most competitive MBA programs in the world, and runs programs at HopeLab.
Weiss is the author of the new book How We Work: Live Your Purpose, Reclaim Your Sanity, and Embrace the Daily Grind, endorsed by the Dalai Lama, among others. I caught up with Leah to learn more about how the practice of mindfulness can deepen our individual and collective purpose and passion.
Lisa Kay Solomon: We’re hearing a lot about mindfulness these days. What is mindfulness and why is it so important to bring into our work? Can you share some of the basic tenets of the practice?
Leah Weiss, PhD: Mindfulness is, in its most literal sense, “the attention to inattention.” It’s as simple as noticing when you’re not paying attention and then re-focusing. It is prioritizing what is happening right now over internal and external noise.
The ability to work well with difficult coworkers, handle constructive feedback and criticism, regulate emotions at work—all of these things can come from regular mindfulness practice.
Some additional benefits of mindfulness are a greater sense of compassion (both self-compassion and compassion for others) and a way to seek and find purpose in even mundane things (and especially at work). From the business standpoint, mindfulness at work leads to increased productivity and creativity, mostly because when we are focused on one task at a time (as opposed to multitasking), we produce better results.
We spend more time with our co-workers than we do with our families; if our work relationships are negative, we suffer both mentally and physically. Even worse, we take all of those negative feelings home with us at the end of the work day. The antidote to this prescription for unhappiness is to have clear, strong purpose (one third of people do not have purpose at work and this is a major problem in the modern workplace!). We can use mental training to grow as people and as employees.
LKS: What are some recommendations you would make to busy leaders who are working around the clock to change the world?
LW: I think the most important thing is to remember to tend to our relationship with ourselves while trying to change the world. If we’re beating up on ourselves all the time we’ll be depleted.
People passionate about improving the world can get into habits of believing self-care isn’t important. We demand a lot of ourselves. It’s okay to fail, to mess up, to make mistakes—what’s important is how we learn from those mistakes and what we tell ourselves about those instances. What is the “internal script” playing in your own head? Is it positive, supportive, and understanding? It should be. If it isn’t, you can work on it. And the changes you make won’t just improve your quality of life, they’ll make you more resilient, better able to weather life’s inevitable setbacks.
A close second recommendation is to always consider where everyone in an organization fits and help everyone (including yourself) find purpose. When you know what your own purpose is and show others theirs, you can motivate a team and help everyone on it take pride in their work. To get at this, make sure to ask people on your team what really lights them up. What sucks their energy and depletes them? If we know our own answers to these questions and relate them to the people we work with, we can create more engaged organizations.
LKS: Can you envision a future where technology and mindfulness can work together?
LW: Technology and mindfulness are already starting to work together. Some artificial intelligence companies are considering things like mindfulness and compassion when building robots, and there are numerous apps that target spreading mindfulness meditations in a widely-accessible way.
LKS: Looking ahead at our future generations who seem more attached to their devices than ever, what advice do you have for them?
LW: It’s unrealistic to say “stop using your device so much,” so instead, my suggestion is to make time for doing things like scrolling social media and make the same amount of time for putting your phone down and watching a movie or talking to a friend. No matter what it is that you are doing, make sure you have meta-awareness or clarity about what you’re paying attention to. Be clear about where your attention is and recognize that you can be a steward of attention. Technology can support us in this or pull us away from this; it depends on how we use it.
Image Credit: frankie’s / Shutterstock.com
#432249 New Malicious AI Report Outlines Biggest ...
Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Consider the synthetic footage of Barack Obama that can be generated from his speeches, or the Lyrebird voice impersonations: today, or in the very near future, you could create a forgery that might be indistinguishable from the real thing. What would that do to politics?
Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to respond to genuine threats, is already undermined by a lack of agreement on the facts. Once we can’t believe the evidence of our own senses, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.
How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.
The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?
The point is, just as you don’t need human-level, general AI or humanoid robotics to build systems that disrupt the world of work, you don’t need general intelligence to threaten security and wreak havoc on society. AI researcher Andrew Ng says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” Yet there are clearly risks that arise even from the simple algorithms we have today.
The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, whose co-authors include researchers from the Future of Humanity Institute and the Centre for the Study of Existential Risk, among other organizations. They limit their focus to the technologies of the next five years.
Some of the concerns the report explores are enhancements to familiar threats.
Automated hacking can get better and smarter, with algorithms adapting to changing security protocols. “Phishing emails,” which scam people by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating people they know, such as family members, but it is labor intensive. If AI algorithms make every phishing scam as sharply targeted as spear phishing, a lot more people are going to get scammed.
Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.
These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A particularly unnerving case is the adversarial example. Machine learning algorithms are often used for image recognition, but it’s possible, if you know a little about how the algorithm is structured, to construct exactly the right noise to add to an image and fool the machine. The two images can be almost completely indistinguishable to the human eye, yet by adding that cleverly calculated noise, an attacker can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out adversarial examples on stickers.
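To make the mechanics concrete, here is a minimal sketch of one standard way to construct such noise, the fast gradient sign method (FGSM), written in PyTorch. The pretrained ResNet-18 classifier, the “panda.jpg” input file, and the epsilon value are illustrative assumptions rather than details from the OpenAI work or the report, and the classifier’s usual input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Hypothetical pretrained classifier; any differentiable image model would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # ImageNet normalization omitted to keep the sketch short
])

# "panda.jpg" is a placeholder input image.
image = preprocess(Image.open("panda.jpg")).unsqueeze(0)
image.requires_grad_(True)

# The model's original prediction.
logits = model(image)
label = logits.argmax(dim=1)

# The gradient of the loss with respect to the input pixels tells the attacker
# which direction to nudge each pixel to make the model more wrong.
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.007  # perturbation small enough to be invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)

print("original class:", label.item(), "perturbed class:", new_label.item())
```

The unsettling part is how cheap this is: a single gradient computation yields a perturbation far too small for a human to notice, yet often enough to flip the model’s prediction.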
Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.
As OpenAI freely admits, working out whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”
There are ways to defend against these attacks.
Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
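For illustration, here is a hedged sketch of what a single adversarial training step might look like in PyTorch, reusing the FGSM idea from the earlier example; the model, optimizer, and epsilon are placeholders, and this is a generic example rather than any specific defense discussed in the report.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Return an FGSM-perturbed copy of a batch of images (pixel range [0, 1])."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, images, labels):
    """One optimizer step on a mix of clean and adversarially perturbed inputs."""
    adv_images = fgsm_perturb(model, images, labels)
    optimizer.zero_grad()  # clear gradients left over from crafting adv_images
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because every step now requires an extra forward and backward pass just to craft the perturbed batch, training cost roughly doubles, which is part of why this defense is expensive and drags defenders into the arms race described above.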
Just look at the Meltdown and Spectre vulnerabilities, which lurked undetected in processors for more than 20 years and could enable hackers to steal personal information. Ultimately, the more blind faith we put in algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And as China dreams of using AI to predict crimes and enhance its police force, the potential for unjust arrests can only increase.
This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.
As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.
Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.
Image Credit: lolloj / Shutterstock.com