Microsoft’s New Deepfake Detector Puts ...
The upcoming US presidential election seems set to be something of a mess—to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn’t shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn’t be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines.
So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn’t pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent. While there’s not a ton tech can do to make people feel safe at crowded polling stations or up the Postal Service’s budget, tech can help with disinformation, and Microsoft is trying to do so.
On Tuesday the company released two new tools designed to combat disinformation, described in a blog post by VP of Customer Security and Trust Tom Burt and Chief Scientific Officer Eric Horvitz.
The first is Microsoft Video Authenticator, which is made to detect deepfakes. In case you’re not familiar with this wicked byproduct of AI progress, “deepfakes” refers to audio or visual files made using artificial intelligence that can manipulate people’s voices or likenesses to make it look like they said things they didn’t. Editing a video to string together words and form a sentence someone didn’t say doesn’t count as a deepfake; though there’s manipulation involved, you don’t need a neural network and you’re not generating any original content or footage.
The Authenticator analyzes videos or images and tells users the percentage chance that they’ve been artificially manipulated. For videos, the tool can even analyze individual frames in real time.
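Microsoft hasn’t published the Authenticator’s internals, but the workflow it describes (decode a video, score each frame, surface a confidence value) is easy to picture in code. Here’s a minimal sketch; the `detector` object and its `predict` method are invented stand-ins, not Microsoft’s API.

```python
# Hypothetical sketch of frame-by-frame deepfake scoring.
# The detector and its predict() method are invented for
# illustration; Microsoft has not published the tool's internals.
import cv2  # OpenCV, for video decoding

def score_video(path, detector):
    """Yield a (frame_index, manipulation_probability) pair per frame."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # A real detector would look for blending boundaries and
        # subtle fading/greyscale artifacts around swapped faces.
        probability = detector.predict(frame)  # float in [0, 1]
        yield index, probability
        index += 1
    capture.release()
```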
Deepfake videos are made by feeding hundreds of hours of video of someone into a neural network, “teaching” the network the minutiae of the person’s voice, pronunciation, mannerisms, gestures, etc. It’s like when you do an imitation of your annoying coworker from accounting, complete with mimicking the way he makes every sentence sound like a question and his eyes widen when he talks about complex spreadsheets. You’ve spent hours—no, months—in his presence and have his personality quirks down pat. An AI algorithm that produces deepfakes needs to learn those same quirks, and more, about whoever the creator’s target is.
Given enough real information and examples, the algorithm can then generate its own fake footage, with deepfake creators using computer graphics and manually tweaking the output to make it as realistic as possible.
The scariest part? To make a deepfake, you don’t need a fancy computer or even a ton of knowledge about software. There are open-source programs people can access for free online, and as far as finding video footage of famous people—well, we’ve got YouTube to thank for how easy that is.
Microsoft’s Video Authenticator can detect the blending boundary of a deepfake and subtle fading or greyscale elements that the human eye may not be able to see.
In the blog post, Burt and Horvitz point out that as time goes by, deepfakes are only going to get better and harder to detect; after all, they’re generated by neural networks that continuously learn and improve.
Microsoft’s counter-tactic is to come in from the opposite angle, that is, being able to confirm beyond doubt that a video, image, or piece of news is real (I mean, can McDonald’s fries cure baldness? Did a seal slap a kayaker in the face with an octopus? Never has it been so imperative that the world know the truth).
A tool built into Microsoft Azure, the company’s cloud computing service, lets content producers add digital hashes and certificates to their content, and a reader (which can be used as a browser extension) checks the certificates and matches the hashes to indicate the content is authentic.
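Microsoft hasn’t detailed the exact scheme, but the hash-and-certificate idea itself is standard cryptographic fare. Here’s a minimal sketch of the round trip, assuming a shared signing key as a stand-in for the real system’s certificates and public-key signatures:

```python
# Minimal sketch of hash-and-verify content authentication.
# Real systems use certificates and public-key signatures; an
# HMAC with a shared key stands in to keep this self-contained.
import hashlib
import hmac

SECRET = b"producer-signing-key"  # stand-in for a private key

def publish(content: bytes):
    """Producer side: hash the content and 'sign' the hash."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return digest, signature

def verify(content: bytes, digest: str, signature: str) -> bool:
    """Reader side: recompute the hash and check both values."""
    expected = hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()
    untampered = hashlib.sha256(content).hexdigest() == digest
    authentic = hmac.compare_digest(signature, expected)
    return untampered and authentic

video = b"...raw video bytes..."
d, s = publish(video)
assert verify(video, d, s)             # intact content passes
assert not verify(video + b"x", d, s)  # a single changed byte fails
```

Flip one byte of the file and the hash no longer matches, which is exactly the mismatch the browser-extension reader is built to flag.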
Finally, Microsoft also launched an interactive “Spot the Deepfake” quiz it developed in collaboration with the University of Washington’s Center for an Informed Public, deepfake detection company Sensity, and USA Today. The quiz is intended to help people “learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact of synthetic media on democracy.”
The impact Microsoft’s new tools will have remains to be seen—but hey, we’re glad they’re trying. And they’re not alone; Facebook, Twitter, and YouTube have all taken steps to ban and remove deepfakes from their sites. The AI Foundation’s Reality Defender uses synthetic media detection algorithms to identify fake content. There’s even a coalition of big tech companies teaming up to try to fight election interference.
One thing is for sure: between a global pandemic, widespread protests and riots, mass unemployment, a hobbled economy, and the disinformation that’s remained rife through it all, we’re going to need all the help we can get to make it through not just the election, but the rest of the conga-line-of-catastrophes year that is 2020.
Image Credit: Darius Bashar on Unsplash
Algorithms Workers Can’t See Are ...
“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in 2001: A Space Odyssey has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.
In the movies, when a machine decides to be the boss (or humans let it), things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.
Algorithms—sets of instructions to solve a problem or complete a task—now drive everything from browser search results to better medical care.
They are helping design buildings. They are speeding up trading on financial markets, making and losing fortunes in microseconds. They are calculating the most efficient routes for delivery drivers.
In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance, and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”
Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called “algorithmic management.” It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.
At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It’s a black box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.
Algorithms at Work
Here are a few examples of algorithms already at work.
At Amazon’s fulfillment center in south-east Melbourne, they set the pace for “pickers,” who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a “not quite walking, not quite running” speed.
Or how about AI determining your success in a job interview? More than 700 companies have trialed such technology. US developer HireVue says its software speeds up the hiring process by 90 percent by having applicants answer identical questions and then scoring them according to language, tone, and facial expressions.
Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges and probation and parole officers to rate a person’s risk of re-offending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45 percent of the time, compared with 23 percent for white subjects.
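ProPublica’s headline numbers are false positive rates: among people who did not go on to re-offend, how often was each group flagged as high risk? The check itself takes only a few lines of code (the data below is a toy stand-in, not the real COMPAS records):

```python
# Sketch of the fairness check behind the ProPublica finding:
# among people who did NOT re-offend, how often did the tool
# label each group "high risk"? Data here is illustrative only.
def false_positive_rate(predictions, outcomes):
    """predictions: 1 = flagged high risk; outcomes: 1 = re-offended."""
    flags_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Toy records per group (prediction, outcome), not real case data:
group_a = [(1, 0), (1, 1), (0, 0), (1, 0), (0, 0), (0, 0), (1, 0), (0, 0)]
group_b = [(0, 0), (1, 1), (0, 0), (0, 0), (1, 0), (0, 0), (0, 0), (0, 0)]

for name, records in (("group A", group_a), ("group B", group_b)):
    preds, outs = zip(*records)
    print(name, false_positive_rate(preds, outs))
```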
How Gig Workers Cope
Algorithms do what their code tells them to do. The problem is that this code is rarely made public. This makes algorithms difficult to scrutinize, or even understand.
Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo, and other platforms could not exist without algorithms allocating, monitoring, evaluating, and rewarding work.
Over the past year Uber Eats’ bicycle couriers and drivers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs and incomes.
Riders can’t be 100 percent sure it was all down to the algorithm. But that’s part of the problem. The fact that those who depend on the algorithm don’t know one way or the other has a powerful influence on them.
This is a key result from our interviews with 58 food-delivery couriers. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.
In response, they developed a range of strategies (often based on guesswork) to “win” more jobs, such as accepting gigs as quickly as possible and waiting in “magic” locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.
The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.
Our data also confirmed others’ findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there’s no one to complain to anyway. When Uber Eats bicycle couriers asked why their incomes had plummeted, for example, responses from the company advised them “we have no manual control over how many deliveries you receive.”
Broader Lessons
When algorithmic management operates as a “black box,” one of the consequences is that it can become an indirect control mechanism. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilize a reliable and scalable workforce while avoiding employer responsibilities.
“The absence of concrete evidence about how the algorithms operate”, the Victorian government’s inquiry into the “on-demand” workforce notes in its report, “makes it hard for a driver or rider to complain if they feel disadvantaged by one.”
The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.”
But it is precisely the fact that it is hard to confirm that’s the problem. How can we even start to identify, let alone resolve, issues like algorithmic management?
Fair conduct standards to ensure transparency and accountability are a start. One example is the Fair Work initiative, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions, and regulators to develop global principles for work in the platform economy. This includes “fair management,” which focuses on how transparent the results and outcomes of algorithms are for workers.
Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: PickPik
DeepMind’s Newest AI Programs Itself ...
When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.
Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.
Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue’s day, has faded into the background.
Key to deep learning’s success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.
Now, Alphabet’s DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world’s top computer scientists (and take them years to write).
In a paper recently published on the pre-print server arXiv, a database for research papers that haven’t been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function—a critical programming rule in deep reinforcement learning—from scratch.
Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games—a different, more complicated task—at a level that was, at times, competitive with human-designed algorithms, achieving superhuman levels of play in 14 games.
DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.
Pavlov’s Digital Dog
First, a little background.
Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.
The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it’d take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?
While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience—weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and trip to the dentist.
In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection—this is the value function—of which direction will maximize the total points, or rewards, it can earn.
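In textbook terms, that projection is an action-value function: estimate the future reward of each candidate action and act on the largest. A minimal tabular sketch of the idea (a toy stand-in, not DeepMind’s code) looks like this:

```python
# Toy action-value ("Q") table for a Breakout-like choice: given
# the current state, estimate the total future reward of moving
# the paddle LEFT vs RIGHT, and act on the larger estimate.
import random

ACTIONS = ("LEFT", "RIGHT")
q_values = {}  # (state, action) -> estimated future reward

def choose_action(state, epsilon=0.1):
    if random.random() < epsilon:  # explore occasionally
        return random.choice(ACTIONS)
    # Exploit: pick the action with the highest estimated value.
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-learning step: nudge the estimate toward what we saw."""
    best_next = max(q_values.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_values.get((state, action), 0.0)
    q_values[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```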
Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
Learning to Learn (Very Meta)
So, a key to deep reinforcement learning is developing a good value function. And that’s difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions—which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.
LPG trained in a number of toy environments. Most of these were “gridworlds”—literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
Only in LPG’s case, it had no value function to guide that learning.
Instead, LPG has what DeepMind calls a “meta-learner.” You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both “what to predict,” thereby forming its version of a value function, and “how to learn from it,” applying its newly discovered value function to each decision it makes in the future.
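A heavily simplified way to picture that two-level structure: an inner loop where an agent learns using some update rule, and an outer loop that adjusts the rule itself based on how much reward whole lifetimes of learning produce. In the sketch below, a single step-size parameter stands in for the neural network that parameterizes LPG’s actual update rule:

```python
# Heavily simplified sketch of the structure LPG automates. The
# real meta-learner discovers an entire update rule; here a single
# step-size parameter stands in for it, tuned by random search.
import random

ARMS = (0.3, 0.7)  # true mean rewards of a toy two-armed bandit

def lifetime(step_size, pulls=500):
    """Inner loop: an agent learns with the given rule and returns
    the total reward that rule earned over its whole lifetime."""
    estimates = [0.0, 0.0]
    total = 0.0
    for _ in range(pulls):
        if random.random() > 0.1:                    # mostly greedy
            arm = estimates.index(max(estimates))
        else:                                        # explore a bit
            arm = random.randrange(2)
        reward = random.gauss(ARMS[arm], 0.1)
        estimates[arm] += step_size * (reward - estimates[arm])
        total += reward
    return total

def meta_learn(steps=30):
    """Outer loop: tweak the update rule itself, keeping whichever
    variant earns more reward across entire agent lifetimes."""
    rule = 0.5
    for _ in range(steps):
        candidate = min(1.0, max(0.01, rule + random.gauss(0.0, 0.1)))
        if lifetime(candidate) > lifetime(rule):
            rule = candidate
    return rule

print(meta_learn())
```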
Prior work in the area has had some success, but according to DeepMind, LPG is the first algorithm to discover reinforcement learning rules from scratch and to generalize beyond training. The latter was particularly surprising because Atari games are so different from the simple worlds LPG trained in—that is, it had never seen anything like an Atari game.
Time to Hand Over the Reins? Not Just Yet
LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in training and even some Atari games, which suggests it isn’t strictly worse, just that it specializes in some environments.
This is where there’s room for improvement and more research.
The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.
At the least, though, they say further automation of algorithm discovery—that is, algorithms learning to learn—will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.
Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.
Image credit: Mike Szczepanski / Unsplash
How AI Will Make Drug Discovery ...
If you had to guess how long it takes for a drug to go from an idea to your pharmacy, what would you guess? Three years? Five years? How about the cost? $30 million? $100 million?
Well, here’s the sobering truth: 90 percent of all drug possibilities fail. The few that do succeed take an average of 10 years to reach the market and cost anywhere from $2.5 billion to $12 billion to get there.
But what if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.
Welcome to the future of AI and low-cost, ultra-fast, and personalized drug discovery. Let’s dive in.
GANs & Drugs
Around 2012, computer scientist-turned-biophysicist Alex Zhavoronkov started to notice that artificial intelligence was getting increasingly good at image, voice, and text recognition. He knew that all three tasks shared a critical commonality. In each, massive datasets were available, making it easy to train up an AI.
Similar datasets were also available in pharmacology. So, back in 2014, Zhavoronkov started wondering if he could use these datasets and AI to significantly speed up the drug discovery process. He’d heard about a new technique in artificial intelligence known as generative adversarial networks (or GANs). By pitting two neural nets against one another (adversarial), the system can start with minimal instructions and produce novel outcomes (generative). At the time, researchers had been using GANs to do things like design new objects or create one-of-a-kind, fake human faces, but Zhavoronkov wanted to apply them to pharmacology.
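The adversarial setup is easy to see in code. Here’s a skeletal GAN training loop in PyTorch; the toy layer sizes and the random “real” data are placeholders, and this is the generic technique, not Insilico’s architecture:

```python
# Minimal GAN skeleton: a generator proposes samples, a
# discriminator judges them, and each trains against the other.
# Sizes and data are toy values, not a molecule-generation model.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 32, 64
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)          # stand-in for real data
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator: learn to label real as 1 and fake as 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to fool the discriminator into saying 1.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```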
He figured GANs would allow researchers to verbally describe drug attributes: “The compound should inhibit protein X at concentration Y with minimal side effects in humans,” and then the AI could construct the molecule from scratch. To turn his idea into reality, Zhavoronkov set up Insilico Medicine on the campus of Johns Hopkins University in Baltimore, Maryland, and rolled up his sleeves.
Instead of beginning their process in some exotic locale, Insilico’s “drug discovery engine” sifts millions of data samples to determine the signature biological characteristics of specific diseases. The engine then identifies the most promising treatment targets and—using GANs—generates molecules (that is, baby drugs) perfectly suited for them. “The result is an explosion in potential drug targets and a much more efficient testing process,” says Zhavoronkov. “AI allows us to do with fifty people what a typical drug company does with five thousand.”
The results have turned what was once a decade-long war into a month-long skirmish.
In late 2018, for example, Insilico was generating novel molecules in fewer than 46 days, and this included not just the initial discovery, but also the synthesis of the drug and its experimental validation in computer simulations.
Right now, they’re using the system to hunt down new drugs for cancer, aging, fibrosis, Parkinson’s, Alzheimer’s, ALS, diabetes, and many others. The first drug to result from this work, a treatment for hair loss, is slated to start Phase I trials by the end of 2020.
They’re also in the early stages of using AI to predict the outcomes of clinical trials in advance of the trial. If successful, this technique will enable researchers to strip a bundle of time and money out of the traditional testing process.
Protein Folding
Beyond inventing new drugs, AI is also being used by other scientists to identify new drug targets—that is, the place in the body to which a drug binds, another key part of the drug discovery process.
Between 1980 and 2006, despite an annual investment of $30 billion, researchers only managed to find about five new drug targets a year. The trouble is complexity. Most potential drug targets are proteins, and a protein’s structure—meaning the way its linear chain of amino acids folds into a 3D shape—determines its function.
But a protein with merely a hundred amino acids (a rather small protein) can produce a googol-cubed worth of potential shapes—that’s a one followed by three hundred zeroes. This is also why protein-folding has long been considered an intractably hard problem for even the most powerful of supercomputers.
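The arithmetic behind that number is straightforward, which is exactly what makes it daunting: give each amino acid roughly a thousand plausible local conformations and multiply across a 100-residue chain.

```python
# Where "a one followed by three hundred zeroes" comes from:
# roughly 1,000 plausible local conformations per amino acid,
# multiplied across a 100-residue chain.
conformations_per_residue = 1_000  # illustrative estimate
residues = 100
total = conformations_per_residue ** residues

print(len(str(total)) - 1)  # -> 300, i.e. total == 10**300
```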
Back in 1994, to monitor supercomputers’ progress in protein-folding, a biennial competition was created. Until 2018, success was fairly rare. But then DeepMind turned its neural networks loose on the problem. The company created an AI that mines enormous datasets to determine the most likely distances between pairs of a protein’s amino acids and the angles of the chemical bonds that connect them—aka, the basics of protein-folding. They called it AlphaFold.
On AlphaFold’s first foray into the competition, contestant AIs were given 43 protein-folding problems to solve. AlphaFold got 25 right. The second-place team managed a meager three. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.
Drug Delivery
Another theater of war for improved drugs is the realm of drug delivery. Even here, converging exponential technologies are paving the way for massive implications in both human health and industry shifts.
One key contender is CRISPR, the fast-advancing gene-editing technology that stands to revolutionize synthetic biology and treatment of genetically linked diseases. And researchers have now demonstrated how this tool can be applied to create materials that shape-shift on command. Think: materials that dissolve instantaneously when faced with a programmed stimulus, releasing a specified drug at a highly targeted location.
Yet another potential boon for targeted drug delivery is nanotechnology: medical nanorobots have already been used to fight cancer. In a recent review of medical micro- and nanorobotics, lead authors (from the University of Texas at Austin and the University of California, San Diego) found numerous successful tests of in vivo operation of medical micro- and nanorobots.
Drugs From the Future
Covid-19 is uniting the global scientific community with its urgency, prompting scientists to cast aside nation-specific territorialism, research secrecy, and academic publishing politics in favor of expedited therapeutic and vaccine development efforts. And in the wake of rapid acceleration across healthcare technologies, Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.
Riding the convergence of massive datasets, skyrocketing computational power, quantum computing, cognitive surplus capabilities, and remarkable innovations in AI, we are not far from a world in which personalized drugs, delivered directly to specified targets, will graduate from science fiction to the standard of care.
Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”
How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2021 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: andreas160578 from Pixabay