
#435528 The Time for AI Is Now. Here’s Why

You hear a lot these days about the sheer transformative power of AI.

There’s pure intelligence: DeepMind’s algorithms readily beat humans at Go and StarCraft, and DeepStack triumphs over humans at no-limit hold’em poker. Often, these silicon brains generate gameplay strategies that don’t resemble anything from a human mind.

There’s astonishing speed: algorithms now rival or surpass human specialists in diagnosing breast cancer, eye disease, and other ailments visible in medical imaging, essentially collapsing decades of expert training down to a few months.

Although AI’s silent touch is mainly felt today in the technological, financial, and health sectors, its impact across industries is rapidly spreading. At the Singularity University Global Summit in San Francisco this week, Neil Jacobstein, Chair of AI and Robotics, painted a picture of a better AI-powered future for humanity that is already here.

Thanks to cloud-based cognitive platforms, sophisticated AI tools like deep learning are no longer relegated to academic labs. For startups looking to tackle humanity’s grand challenges, the tools to efficiently integrate AI into their missions are readily available. The progress of AI is massively accelerating—to the point you need help from AI to track its progress, joked Jacobstein.

Now is the time to consider how AI can impact your industry, and in the process, begin to envision a beneficial relationship with our machine coworkers. As Jacobstein stressed in his talk, the future of a brain-machine mindmeld is a collaborative intelligence that augments our own. “AI is reinventing the way we invent,” he said.

AI’s Rapid Revolution
Machine learning and other AI-based methods may seem academic and abstruse. But Jacobstein pointed out that there are already plenty of real-world AI application frameworks.

Their secret? Rather than coding from scratch, smaller companies—with big visions—are tapping into cloud-based machine learning services such as Google Cloud (home of TensorFlow), Microsoft’s Azure, or Amazon’s AWS to kick off their AI journey. These platforms act as all-in-one solutions that not only clean and organize data, but also offer built-in security and drag-and-drop interfaces that let anyone experiment with complicated machine learning algorithms.

Google Cloud’s Anthos, for instance, lets anyone migrate applications and data from other providers—such as IBM Watson or AWS—so users can leverage different computing platforms and algorithms to transform data into insights and solutions.

Rather than coding from scratch, it’s already possible to hop onto a platform and play around with it, said Jacobstein. That’s key: this democratization of AI is how anyone can begin exploring solutions to problems we didn’t even know we had, or those long thought improbable.

The acceleration is only continuing. Much of AI’s mind-bending pace is thanks to a massive infusion of funding. Microsoft recently injected $1 billion into OpenAI, the research lab co-founded by Elon Musk that aims to engineer socially responsible artificial general intelligence (AGI).

The other revolution is in hardware, and Google, IBM, and NVIDIA—among others—are racing to manufacture computing chips tailored to machine learning.

Democratizing AI is like the birth of the printing press. Mechanical printing allowed anyone to become an author; today, an iPhone lets anyone film a movie masterpiece.

However, this diffusion of AI into the fabric of our lives means tech explorers need to bring skepticism to their AI solutions, tempering them with a dose of empathy, nuance, and humanity.

A Path Towards Ethical AI
The democratization of AI is a double-edged sword: as more people wield the technology’s power in real-world applications, problems embedded in deep learning threaten to disrupt those very judgment calls.

Much of the press on the dangers of AI focuses on superintelligence—AI that’s more adept at learning than humans—taking over the world, said Jacobstein. But the near-term threat, and a far more insidious one, is humans misusing the technology.

Deepfakes, for example, allow AI rookies to paste one person’s head on a different body or put words into a person’s mouth. As the panel said, it pays to think of AI as a cybersecurity problem, one with currently shaky accountability and complexity, and one that struggles with diversity and bias.

Take bias. Thanks to progress in natural language processing, Google Translate works nearly perfectly today, so much so that many consider the translation problem solved. Not true, the panel said. One famous example is how the algorithm translates gender-neutral terms like “doctor” into “he” and “nurse” into “she.”

These biases reflect our own, and it’s not just a data problem. To truly engineer objective AI systems, ones stripped of our society’s biases, we need to ask who is developing these systems, and consult those who will be impacted by the products. In addition to gender, racial bias is also rampant. For example, one recent report found that a supposedly objective crime-predicting system was trained on falsified data, resulting in outputs that further perpetuate corrupt police practices. Another study from Google just this month found that their hate speech detector more often labeled innocuous tweets from African-Americans as “obscene” compared to tweets from people of other ethnicities.
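To make the mechanics of this concrete, here is a deliberately tiny toy model (invented data and scoring, not any production system or either study’s method) showing how uneven labels in training data propagate: a word that merely marks a dialect becomes associated with the “toxic” label simply because annotators applied that label unevenly.

```python
# Toy illustration of label bias propagating into a model's predictions.
# The data and scoring rule are invented; this is not any real system.
from collections import Counter

def train(examples):
    """examples: list of (text, label). Returns per-label word counts."""
    counts = {"ok": Counter(), "toxic": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score_toxic(counts, text):
    """Crude score: fraction of words seen more often under 'toxic'."""
    words = text.lower().split()
    hits = sum(counts["toxic"][w] > counts["ok"][w] for w in words)
    return hits / max(len(words), 1)

# Biased annotations: harmless dialect phrases were labeled toxic.
data = [
    ("hello my friend", "ok"),
    ("good morning all", "ok"),
    ("yo fam wassup", "toxic"),      # innocuous, but mislabeled
    ("yo fam good vibes", "toxic"),  # innocuous, but mislabeled
]
model = train(data)
# An innocuous greeting now scores as likely toxic purely from dialect words.
print(score_toxic(model, "yo fam hello"))
```

The model has learned nothing about toxicity; it has memorized the annotators’ bias, which is why who labels the data matters as much as the algorithm.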

We often think of building AI as purely an engineering job, the panelists agreed. But similar to gene drives, germ-line genome editing, and other transformative—but dangerous—tools, AI needs to grow under the consultation of policymakers and other stakeholders. It pays to start young: educating newer generations on AI biases will mold malleable minds early, alerting them to the problem of bias and potentially mitigating risks.

As panelist Tess Posner from AI4ALL said, AI is rocket fuel for ambition. If young minds set out using the tools of AI to tackle their chosen problems, while fully aware of its inherent weaknesses, we can begin to build an AI-embedded future that is widely accessible and inclusive.

The bottom line: people who will be impacted by AI need to be in the room at the conception of an AI solution. People will be displaced by the new technology, and ethical AI has to consider how to mitigate human suffering during the transition. Just because AI looks like “magic fairy dust” doesn’t mean that you’re home free, the panelists said. You, the sentient human, bear the burden of being responsible for how you decide to approach the technology.

The time for AI is now. Let’s make it ethical.

Image Credit: GrAI / Shutterstock.com

Posted in Human Robots

#435520 These Are the Meta-Trends Shaping the ...

Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.

The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.

At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.

Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.

Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.

Take energy. In most of the world, though its costs are decreasing, it’s still a fairly precious commodity; we turn off our lights and our air conditioners when we don’t need them (ideally, both to save money and to avoid wastefulness). But the cost of solar energy has plummeted, battery storage capacity is improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for lowest cost per kilowatt hour.

“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.

Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”

Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.

Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.

Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advance purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live; if you have a great idea you can get it funded by people from all over the world,” Diamandis said.

All this is making a difference; the number of unicorns—privately held startups valued at over $1 billion—currently stands at an astounding 360.

One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.

Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to develop global satellite internet constellations, whether by launching 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere like Google’s Project Loon.

“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”

And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections go up to 500 billion devices and 100 trillion sensors. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses or bike sharing. It’s all getting smart and connected.

Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.

“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.

Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.

We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Openwater, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.

Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.

“Our bodies never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”

Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.

“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”

Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, they have more access to information, and that information moves faster than ever before.

“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”

Image Credit: spainter_vfx / Shutterstock.com


#435505 This Week’s Awesome Stories From ...

AUGMENTED REALITY
This Is the Computer You’ll Wear on Your Face in 10 Years
Mark Sullivan | Fast Company
“[Snap’s new Spectacles 3] foreshadow a device that many of us may wear as our primary personal computing device in about 10 years. Based on what I’ve learned by talking AR with technologists in companies big and small, here is what such a device might look like and do.”

ROBOTICS
These Robo-Shorts Are the Precursor to a True Robotic Exoskeleton
Devin Coldewey | TechCrunch
“The whole idea, then, is to leave behind the idea of an exosuit as a big mechanical thing for heavy industry or work, and bring in the idea that one could help an elderly person stand up from a chair, or someone recovering from an accident walk farther without fatigue.”

ENVIRONMENT
Artificial Tree Promises to Suck Up as Much Air Pollution as a Small Forest
Luke Dormehl | Digital Trends
“The company has developed an artificial tree that it claims is capable of sucking up the equivalent amount of air pollution as 368 living trees. That’s not only a saving on growing time, but also on the space needed to accommodate them.”

FUTURE
The Anthropocene Is a Joke
Peter Brannen | The Atlantic
“Unless we fast learn how to endure on this planet, and on a scale far beyond anything we’ve yet proved ourselves capable of, the detritus of civilization will be quickly devoured by the maw of deep time.”

ARTIFICIAL INTELLIGENCE
DeepMind’s Losses and the Future of Artificial Intelligence
Gary Marcus | Wired
“Still, the rising magnitude of DeepMind’s losses is worth considering: $154 million in 2016, $341 million in 2017, $572 million in 2018. In my view, there are three central questions: Is DeepMind on the right track scientifically? Are investments of this magnitude sound from Alphabet’s perspective? And how will the losses affect AI in general?”

Image Credit: Tithi Luadthong / Shutterstock.com


#435308 Brain-Machine Interfaces Are Getting ...

Elon Musk grabbed a lot of attention with his July 16 announcement that his company Neuralink plans to implant electrodes into the brains of people with paralysis by next year. The company’s first goal is to create assistive technology to help people who can’t move or are unable to communicate.

If you haven’t been paying attention, brain-machine interfaces (BMIs) that allow people to control robotic arms with their thoughts might sound like science fiction. But science and engineering efforts have already turned it into reality.

For over a decade, scientists and physicians in a few research labs around the world have been implanting devices into the brains of people who have lost the ability to control their arms or hands. In our own research group at the University of Pittsburgh, we’ve enabled people with paralyzed arms and hands to control robotic arms that allow them to grasp and move objects with relative ease. They can even experience touch-like sensations from their own hand when the robot grasps objects.

At its core, a BMI is pretty straightforward. In your brain, microscopic cells called neurons are sending signals back and forth to each other all the time. Everything you think, do, and feel as you interact with the world around you is the result of the activity of these 80 billion or so neurons.

If you implant a tiny wire very close to one of these neurons, you can record the electrical activity it generates and send it to a computer. Record enough of these signals from the right area of the brain and it becomes possible to control computers, robots, or anything else you might want, simply by thinking about moving. But doing this comes with tremendous technical challenges, especially if you want to record from hundreds or thousands of neurons.
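The decoding step described above can be sketched as a linear decoder that maps per-neuron firing rates to an intended 2D cursor velocity. This is a heavily simplified illustration with invented weights; real systems record hundreds of channels and fit far more sophisticated, continually recalibrated models.

```python
# Minimal linear decoder sketch: each recorded neuron contributes to the
# x- and y-velocity of a cursor in proportion to its firing rate.
# Weights are made-up numbers standing in for a calibration session.

def decode_velocity(firing_rates, weights):
    """Map firing rates (spikes/sec) to an intended (vx, vy) velocity."""
    vx = sum(rate * wx for rate, (wx, _) in zip(firing_rates, weights))
    vy = sum(rate * wy for rate, (_, wy) in zip(firing_rates, weights))
    return vx, vy

# Hypothetical per-neuron weight pairs (contribution to vx and vy).
weights = [(0.5, -0.1), (-0.2, 0.4), (0.1, 0.3)]

rates = [20.0, 10.0, 5.0]  # firing rates from three recorded neurons
vx, vy = decode_velocity(rates, weights)
print(vx, vy)  # a velocity command for a cursor or robotic arm
```

Scale the same idea up to thousands of electrodes and richer models, and you have the control signal behind thought-driven robotic arms.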

What Neuralink Is Bringing to the Table
Elon Musk founded Neuralink in 2017, aiming to address these challenges and raise the bar for implanted neural interfaces.

Perhaps the most impressive aspect of Neuralink’s system is the breadth and depth of their approach. Building a BMI is inherently interdisciplinary, requiring expertise in electrode design and microfabrication, implantable materials, surgical methods, electronics, packaging, neuroscience, algorithms, medicine, regulatory issues, and more. Neuralink has created a team that spans most, if not all, of these areas.

With all of this expertise, Neuralink is undoubtedly moving the field forward, and improving their technology rapidly. Individually, many of the components of their system represent significant progress along predictable paths. For example, their electrodes, which they call threads, are very small and flexible; many researchers have tried to harness those properties to minimize the chance that the brain’s immune response will reject the electrodes after insertion. Neuralink has also developed high-performance miniature electronics, another focus area for labs working on BMIs.

Often overlooked in academic settings, however, is how an entire system would be efficiently implanted in a brain.

Neuralink’s BMI requires brain surgery. This is because implanted electrodes that are in intimate contact with neurons will always outperform non-invasive electrodes where neurons are far away from the electrodes sitting outside the skull. So, a critical question becomes how to minimize the surgical challenges around getting the device into a brain.

Maybe the most impressive aspect of Neuralink’s announcement was that they created a 3,000-electrode neural interface where electrodes could be implanted at a rate of between 30 and 200 per minute. Each thread of electrodes is implanted by a sophisticated surgical robot that essentially acts like a sewing machine. This all happens while specifically avoiding blood vessels that blanket the surface of the brain. The robotics and imaging that enable this feat, tightly integrated with the device itself, are striking.
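Those insertion rates imply a rough bound on how long implanting the full array would take (back-of-envelope arithmetic only, ignoring surgical setup and overhead):

```python
# Implant time implied by the quoted figures: 3,000 electrodes at the
# slow and fast ends of the 30-200 electrodes-per-minute range.
electrodes = 3000
for rate in (30, 200):  # electrodes inserted per minute
    minutes = electrodes / rate
    print(f"{minutes:.0f} minutes at {rate} electrodes/min")
```

That range, from well under two hours down to about fifteen minutes of insertion time, is what makes the sewing-machine approach clinically interesting.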

Neuralink has thought through the challenge of developing a clinically viable BMI from beginning to end in a way that few groups have done, though they acknowledge that many challenges remain as they work towards getting this technology into human patients in the clinic.

Figuring Out What More Electrodes Gets You
The quest for implantable devices with thousands of electrodes is not only the domain of private companies. DARPA, the NIH BRAIN Initiative, and international consortiums are working on neurotechnologies for recording and stimulating in the brain with goals of tens of thousands of electrodes. But what might scientists do with the information from 1,000, 3,000, or maybe even 100,000 neurons?

At some level, devices with more electrodes might not actually be necessary to have a meaningful impact in people’s lives. Effective control of computers for access and communication, of robotic limbs to grasp and move objects, and even of paralyzed muscles is already happening—in people. And it has been for a number of years.

Since the 1990s, the Utah Array, which has just 100 electrodes and is manufactured by Blackrock Microsystems, has been a critical device in neuroscience and clinical research. This electrode array is FDA-cleared for temporary neural recording. Several research groups, including our own, have implanted Utah Arrays in people, with devices that lasted multiple years.

Currently, the biggest constraints are related to connectors, electronics, and system-level engineering, not the implanted electrode itself—although increasing the electrodes’ lifespan to more than five years would represent a significant advance. As those technical capabilities improve, it might turn out that the ability to accurately control computers and robots is limited more by scientists’ understanding of what the neurons are saying—that is, the neural code—than by the number of electrodes on the device.

Even the most capable implanted system, and maybe the most capable devices researchers can reasonably imagine, might fall short of the goal of actually augmenting skilled human performance. Nevertheless, Neuralink’s goal of creating better BMIs has the potential to improve the lives of people who can’t move or are unable to communicate. Right now, Musk’s vision of using BMIs to meld physical brains and intelligence with artificial ones is no more than a dream.

So, what does the future look like for Neuralink and other groups creating implantable BMIs? Devices with more electrodes that last longer and are connected to smaller and more powerful wireless electronics are essential. Better devices themselves, however, are insufficient. Continued public and private investment in companies and academic research labs, as well as innovative ways for these groups to work together to share technologies and data, will be necessary to truly advance scientists’ understanding of the brain and deliver on the promise of BMIs to improve peoples’ lives.

While researchers need to keep the future societal implications of advanced neurotechnologies in mind—there’s an essential role for ethicists and regulation—BMIs could be truly transformative as they help more people overcome limitations caused by injury or disease in the brain and body.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: UPMC/Pitt Health Sciences / CC BY-NC-ND


#435224 Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

[Embedded Instagram post: “Imagine this…” (2019), in which a deepfake Mark Zuckerberg “reveals the truth” about Facebook; created with VDR technology by CannyAI and shared by artist Bill Posters (@bill_posters_uk) on June 7, 2019.]

Scientists are scrambling for ways to combat deepfakes, while others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News
While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the fake news that some have come to use as a knee-jerk label for fact-based reporting that is less than flattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
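A minimal sketch of how cues like these (hyperbole vocabulary, punctuation, sentence length) can be turned into numeric features for a classifier. The word list and feature choices here are invented for illustration and are not the U-M team’s actual feature set.

```python
# Extract simple linguistic cues from a text as numeric features a
# fake-news classifier could consume. Illustrative only.
import re

# A tiny, made-up hyperbole lexicon; real lexicons are far larger.
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def linguistic_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "hyperbole_count": sum(w in HYPERBOLE for w in words),
        "exclamations": text.count("!"),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

feats = linguistic_features("Shocking! The results are overwhelming. Experts stunned!")
print(feats)
```

Features like these would then be fed, alongside many others, into a standard classifier trained on labeled real and fake articles.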

AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2, built by OpenAI, a nonprofit research lab co-founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI was so concerned the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.

No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing different detectors using elements of deep learning and natural language processing, among other techniques. He explained that the Logically models analyze information based on a three-pronged approach.

Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
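A hedged sketch of how per-signal scores from a three-pronged approach like this might be combined into a single risk estimate. The weights and review thresholds are invented for illustration; Logically’s actual models are not public.

```python
# Combine publisher, network, and content signals into one risk score.
# Weights and thresholds are made-up numbers for illustration.

def combine_signals(publisher_score, network_score, content_score,
                    weights=(0.3, 0.3, 0.4)):
    """Each score is in [0, 1], where 1 = strong sign of misinformation.
    Returns an overall risk score and whether to route to a human reviewer."""
    risk = (weights[0] * publisher_score
            + weights[1] * network_score
            + weights[2] * content_score)
    needs_human_review = 0.4 <= risk <= 0.8  # uncertain band goes to a person
    return risk, needs_human_review

risk, review = combine_signals(0.9, 0.7, 0.6)
print(round(risk, 2), review)
```

Routing the uncertain middle band to people mirrors the point Williams makes: the algorithms flag, but a human layer stays in the pipeline.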

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com
