Tag Archives: actor

#437769 Q&A: Facebook’s CTO Is at War With ...

Photo: Patricia de Melo Moreira/AFP/Getty Images

Facebook chief technology officer Mike Schroepfer leads the company’s AI and integrity efforts.

Facebook’s challenge is huge. Billions of pieces of content—short and long posts, images, and combinations of the two—are uploaded to the site daily from around the world. And any tiny piece of that—any phrase, image, or video—could contain so-called bad content.

In its early days, Facebook relied on simple computer filters to identify potentially problematic posts by their words, such as those containing profanity. These automatically filtered posts, as well as posts flagged by users as offensive, went to humans for adjudication.

In 2015, Facebook started using artificial intelligence to cull images that contained nudity, illegal goods, and other prohibited content; those images identified as possibly problematic were sent to humans for further review.

By 2016, more offensive photos were reported by Facebook’s AI systems than by Facebook users (and that is still the case).

In 2018, Facebook CEO Mark Zuckerberg made a bold proclamation: He predicted that within five or ten years, Facebook’s AI would not only look for profanity, nudity, and other obvious violations of Facebook’s policies. The tools would also be able to spot bullying, hate speech, and other misuse of the platform, and put an immediate end to them.

Today, automated systems using algorithms developed with AI scan every piece of content between the time when a user completes a post and when it is visible to others on the site—just fractions of a second. In most cases, a violation of Facebook’s standards is clear, and the AI system automatically blocks the post. In other cases, the post goes to human reviewers for a final decision, a workforce that includes 15,000 content reviewers and another 20,000 employees focused on safety and security, operating out of more than 20 facilities around the world.

In the first quarter of this year, Facebook removed or took other action (like appending a warning label) on more than 9.6 million posts involving hate speech, 8.6 million involving child nudity or exploitation, almost 8 million posts involving the sale of drugs, 2.3 million posts involving bullying and harassment, and tens of millions of posts violating other Facebook rules.

Right now, Facebook has more than 1,000 engineers working on further developing and implementing what the company calls “integrity” tools. Using these systems to screen every post that goes up on Facebook, and doing so in milliseconds, is sucking up computing resources. Facebook chief technology officer Mike Schroepfer, who is heading up Facebook’s AI and integrity efforts, spoke with IEEE Spectrum about the team’s progress on building an AI system that detects bad content.

Since that discussion, Facebook’s policies around hate speech have come under increasing scrutiny, with particular attention on divisive posts by political figures. A group of major advertisers in June announced that they would stop advertising on the platform while reviewing the situation, and civil rights groups are putting pressure on others to follow suit until Facebook makes policy changes related to hate speech and groups that promote hate, misinformation, and conspiracies.

Facebook CEO Mark Zuckerberg responded with news that Facebook will widen the category of what it considers hateful content in ads. Now the company prohibits claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others. The policy change also aims to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt. Finally, Zuckerberg announced that the company will label some problematic posts by politicians and government officials as content that violates Facebook’s policies.

However, civil rights groups say that’s not enough. And an independent audit released in July also said that Facebook needs to go much further in addressing civil rights concerns and disinformation.

Schroepfer indicated that Facebook’s AI systems are designed to quickly adapt to changes in policy. “I don’t expect considerable technical changes are needed to adjust,” he told Spectrum.

This interview has been edited and condensed for clarity.

IEEE Spectrum: What are the stakes of content moderation? Is this an existential threat to Facebook? And is it critical that you deal well with the issue of election interference this year?

Schroepfer: It’s probably existential; it’s certainly massive. We are devoting a tremendous amount of our attention to it.

The idea that anyone could meddle in an election is deeply disturbing and offensive to all of us here, just as people and citizens of democracies. We don’t want to see that happen anywhere, and certainly not on our watch. So whether it’s important to the company or not, it’s important to us as people. And I feel a similar way on the content-moderation side.

There are not a lot of easy choices here. The only way to prevent people, with certainty, from posting bad things is to not let them post anything. We can take away all voice and just say, “Sorry, the Internet’s too dangerous. No one can use it.” That will certainly get rid of all hate speech online. But I don’t want to end up in that world. And there are variants of that world that various governments are trying to implement, where they get to decide what’s true or not, and you as a person don’t. I don’t want to get there either.

My hope is that we can build a set of tools that make it practical for us to do a good enough job, so that everyone is still excited about the idea that anyone can share what they want, and so that Facebook is a safe and reasonable place for people to operate in.

Spectrum: You joined Facebook in 2008, before AI was part of the company’s toolbox. When did that change? When did you begin to think that AI tools would be useful to Facebook?

Schroepfer: Ten years ago, AI wasn’t commercially practical; the technology just didn’t work very well. In 2012, there was one of those moments that a lot of people point to as the beginning of the current revolution in deep learning and AI. A computer-vision model—a neural network—was trained using what we call supervised training, and it turned out to be better than all the existing models.

Spectrum: How is that training done, and how did computer-vision models come to Facebook?

Image: Facebook

Just Broccoli? Facebook’s image analysis algorithms can tell the difference between marijuana [left] and tempura broccoli [right] better than some humans.

Schroepfer: Say I take a bunch of photos and I have people look at them. If they see a photo of a cat, they put a text label that says cat; if it’s one of a dog, the text label says dog. If you build a big enough data set and feed that to the neural net, it learns how to tell the difference between cats and dogs.

Prior to 2012, it didn’t work very well. And then in 2012, there was this moment where it seemed like, “Oh wow, this technique might work.” And a few years later we were deploying that form of technology to help us detect problematic imagery.
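Schroepfer's cat/dog example can be sketched in a few lines. This is a hypothetical illustration, not Facebook's system: a nearest-centroid classifier over made-up feature vectors stands in for the neural network, but the supervised-learning shape is the same, labeled examples in, a predictor out.

```python
# Minimal sketch of supervised learning: labeled examples in, predictions out.
# A nearest-centroid classifier stands in for the neural network described
# above; the two-number "feature vectors" are toy stand-ins for image pixels.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "photos": two invented features per image (say, ear shape and snout length).
labeled = [([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
           ([0.2, 0.9], "dog"), ([0.1, 0.8], "dog")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # → cat
```

The point of the sketch: everything the model knows comes from the human-supplied labels, which is exactly the bottleneck Schroepfer returns to later when discussing self-supervision.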

Spectrum: Do your AI systems work equally well on all types of prohibited content?

Schroepfer: Nudity was technically easiest. I don’t need to understand language or culture to understand that this is either a naked human or not. Violence is a much more nuanced problem, so it was harder technically to get it right. And with hate speech, not only do you have to understand the language, it may be very contextual, even tied to recent events. A week before the Christchurch shooting [New Zealand, 2019], saying “I wish you were in the mosque” probably doesn’t mean anything. A week after, that might be a terrible thing to say.

Spectrum: How much progress have you made on hate speech?

Schroepfer: AI, in the first quarter of 2020, proactively detected 88.8 percent of the hate-speech content we removed, up from 80.2 percent in the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate-speech policies.

Image: Facebook

Off Label: Sometimes image analysis isn’t enough to determine whether a picture posted violates the company’s policies. In considering these candy-colored vials of marijuana, for example, the algorithms can look at any accompanying text and, if necessary, comments on the post.

Spectrum: It sounds like you’ve expanded beyond tools that analyze images and are also using AI tools that analyze text.

Schroepfer: AI started off as very siloed. People worked on language, people worked on computer vision, people worked on video. We’ve put these things together—in production, not just as research—into multimodal classifiers.

[Schroepfer shows a photo of a pan of Rice Krispies treats, with text referring to it as a “potent batch”] This is a case in which you have an image, and then you have the text on the post. This looks like Rice Krispies. On its own, this image is fine. You put the text together with it in a bigger model; that can then understand what’s going on. That didn’t work five years ago.
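The "potent batch" example captures why joint modeling beats separate classifiers. A toy sketch makes the interaction concrete; all tags, word lists, and rules here are invented for illustration and are not Facebook's actual policy logic:

```python
# Toy sketch of a multimodal classifier: neither the image nor the text is a
# violation on its own, but the joint model flags their combination.

DRUG_SLANG = {"potent", "batch", "dank"}  # hypothetical word list

def violates(image_tags, text):
    """Return True if the (image, text) pair violates the toy drug-sales policy."""
    words = set(text.lower().split())
    pills = "pills" in image_tags      # unimodal signal: clearly drug-like image
    food = "food" in image_tags        # innocuous on its own
    slang = bool(words & DRUG_SLANG)   # ambiguous on its own
    # Multimodal interaction: a food photo plus drug slang reads as a disguised sale.
    return pills or (food and slang)

print(violates(["food"], "a potent batch"))  # → True  (the Rice Krispies case)
print(violates(["food"], "holiday treats"))  # → False (image alone is fine)
print(violates([], "a potent batch"))        # → False (text alone is ambiguous)
```

A real multimodal model learns this interaction from data inside one network rather than from hand-written rules, but the behavior it needs to capture is the same.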

Spectrum: Today, every post that goes up on Facebook is immediately checked by automated systems. Can you explain that process?

Image: Facebook

Bigger Picture: Identifying hate speech is often a matter of context. Either the text or the photo in this post isn’t hateful standing alone, but putting them together tells a different story.

Schroepfer: You upload an image and you write some text underneath it, and the systems look at both the image and the text to try to see which, if any, policies it violates. Those decisions are based on our Community Standards. It will also look at other signals on the posts, like the comments people make.

It happens relatively instantly, though there may be times things happen after the fact. Maybe you uploaded a post that had misinformation in it, and at the time you uploaded it, we didn’t know it was misinformation. The next day we fact-check something and scan again; we may find your post and take it down. As we learn new things, we’re going to go back through and look for violations of what we now know to be a problem. Or, as people comment on your post, we might update our understanding of it. If people are saying, “That’s terrible,” or “That’s mean,” or “That looks fake,” those comments may be an interesting signal.

Spectrum: How is Facebook applying its AI tools to the problem of election interference?

Schroepfer: I would split election interference into two categories. There are times when you’re going after the content, and there are times you’re going after the behavior or the authenticity of the person.

On content, if you’re sharing misinformation, saying, “It’s super Wednesday, not super Tuesday, come vote on Wednesday,” that’s a problem whether you’re an American sitting in California or a foreign actor.

Other times, people create a series of Facebook pages pretending they’re Americans, but they’re really a foreign entity. That is a problem on its own, even if all the content they’re sharing completely meets our Community Standards. The problem there is that you have a foreign government running an information operation.

There, you need different tools. What you’re trying to do is put pieces together, to say, “Wait a second. All of these pages—Martians for Justice, Moonlings for Justice, and Venusians for Justice—are all run by an administrator with an IP address that’s outside the United States.” So they’re all connected, even though they’re pretending to not be connected. That’s a very different problem than me sitting in my office in Menlo Park [Calif.] sharing misinformation.

I’m not going to go into lots of technical detail, because this is an area of adversarial nature. The fundamental problem you’re trying to solve is that there’s one entity coordinating the activity of a bunch of things that look like they’re not all one thing. So this is a series of Instagram accounts, or a series of Facebook pages, or a series of WhatsApp accounts, and they’re pretending to be totally different things. We’re looking for signals that these things are related in some way. And we’re looking through the graph [what Facebook calls its map of relationships between users] to understand the properties of this network.

Spectrum: What cutting-edge AI tools and methods have you been working on lately?

Schroepfer: Supervised learning, with humans setting up the instruction process for the AI systems, is amazingly effective. But it has a very obvious flaw: the speed at which you can develop these things is limited by how fast you can curate the data sets. If you’re dealing in a problem domain where things change rapidly, you have to rebuild a new data set and retrain the whole thing.

Self-supervision is inspired by the way people learn, by the way kids explore the world around them. To get computers to do it themselves, we take a bunch of raw data and build a way for the computer to construct its own tests. For language, you scan a bunch of Web pages, and the computer builds a test where it takes a sentence, eliminates one of the words, and figures out how to predict what word belongs there. And because it created the test, it actually knows the answer. I can use as much raw text as I can find and store because it’s processing everything itself and doesn’t require us to sit down and build the information set. In the last two years there has been a revolution in language understanding as a result of AI self-supervised learning.
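The fill-in-the-blank setup Schroepfer describes can be sketched with a simple bigram counter standing in for the neural model. The key point survives the simplification: the training labels are generated from the raw text itself, with no human curation.

```python
# Sketch of the self-supervised setup described above: the computer builds its
# own test by hiding a word from a sentence, so the "label" is free: it is the
# hidden word itself. A bigram counter stands in for the neural model.

from collections import Counter, defaultdict

def build_examples(corpus):
    """Turn raw sentences into (context, hidden_word) pairs with no human labeling."""
    examples = []
    for sentence in corpus:
        words = sentence.lower().split()
        for i in range(1, len(words)):
            examples.append((words[i - 1], words[i]))  # previous word → hidden word
    return examples

def train(examples):
    model = defaultdict(Counter)
    for prev_word, hidden in examples:
        model[prev_word][hidden] += 1
    return model

def predict(model, prev_word):
    """Fill in the blank after prev_word with the most frequently observed word."""
    return model[prev_word].most_common(1)[0][0]

corpus = ["the sky is blue", "the sky is clear", "the grass is green"]
model = train(build_examples(corpus))
print(predict(model, "sky"))  # → is
```

Because no one has to label anything, the training set can be as large as the raw text you can store, which is the scaling advantage Schroepfer credits for the recent revolution in language understanding.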

Spectrum: What else are you excited about?

Schroepfer: What we’ve been working on over the last few years is multilingual understanding. Usually, when I’m trying to figure out, say, whether something is hate speech or not, I have to go through the whole process of training the model in every language. I have to do that once for every language. When you make a post, the first thing we have to figure out is what language your post is in. “Ah, that’s Spanish. So send it to the Spanish hate-speech model.”

We’ve started to build a multilingual model—one box where you can feed in text in 40 different languages and it determines whether it’s hate speech or not. This is way more effective and easier to deploy.

To geek out for a second, just the idea that you can build a model that understands a concept in multiple languages at once is crazy cool. And it not only works for hate speech, it works for a variety of things.

When we started working on this multilingual model years ago, it performed worse than every single individual model. Now, it not only works as well as the English model, but when you get to the languages where you don’t have enough data, it’s so much better. This rapid progress is very exciting.
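The "one box, many languages" idea can be illustrated with a toy sketch. Real systems learn a shared representation space (for example, multilingual embeddings); the hand-built lookup table below is a stand-in for that learned space, and every word and concept in it is invented for illustration.

```python
# Toy illustration of a multilingual classifier: words from different languages
# map into a shared concept space, and a single classifier operates on concepts
# rather than on any one language's surface forms.

SHARED_SPACE = {  # stand-in for a learned multilingual embedding space
    "hate": "HATE", "odio": "HATE", "haine": "HATE",
    "love": "LOVE", "amor": "LOVE", "amour": "LOVE",
}

BANNED_CONCEPTS = {"HATE"}

def classify(text):
    """One model for every language: no language detection, no per-language model."""
    concepts = {SHARED_SPACE.get(word.lower()) for word in text.split()}
    return "flag" if concepts & BANNED_CONCEPTS else "ok"

print(classify("pure odio"))  # → flag (Spanish input, no Spanish-specific model)
print(classify("amour fou"))  # → ok
```

This also hints at why low-resource languages benefit: a concept learned mostly from English data is immediately available to every language that maps into the same space.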

Spectrum: How do you move new AI tools from your research labs into operational use?

Schroepfer: Engineers trying to make the next breakthrough will often say, “Cool, I’ve got a new thing and it achieved state-of-the-art results on machine translation.” And we say, “Great. How long does it take to run in production?” They say, “Well, it takes 10 seconds for every sentence to run on a CPU.” And we say, “It’ll eat our whole data center if we deploy that.” So we take that state-of-the-art model and we make it 10, 100, or 1,000 times more efficient, maybe at the cost of a little bit of accuracy. So it’s not as good as the state-of-the-art version, but it’s something we can actually put into our data centers and run in production.
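Schroepfer doesn't name specific techniques, but weight quantization is one standard way to buy this kind of efficiency, so the sketch below is an assumption about the general approach rather than a description of Facebook's pipeline. It shows the trade directly: 8-bit integers instead of 32-bit floats, at the cost of a small rounding error.

```python
# Sketch of one standard accuracy-for-efficiency trade: quantizing model
# weights from 32-bit floats to 8-bit integers shrinks memory roughly 4x
# (and speeds up inference on supporting hardware) at the cost of rounding.

def quantize(weights):
    """Map floats to int8 range [-127, 127] using a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    return [q * scale for q in q_weights]

weights = [0.31, -1.27, 0.004, 0.89]
q, scale = quantize(weights)
restored = dequantize(q, scale)
error = max(abs(a - b) for a, b in zip(weights, restored))
print(q)      # four small integers: a quarter of the float32 storage
print(error)  # small but nonzero: the accuracy traded for efficiency
```

Distillation (training a small model to mimic a large one) and pruning are other common routes to the 10x-1,000x reductions he describes.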

Spectrum: What’s the role of the humans in the loop? Is it true that Facebook currently employs 35,000 moderators?

Schroepfer: Yes. Right now our goal is not to reduce that. Our goal is to do a better job catching bad content. People often think that the end state will be a fully automated system. I don’t see that world coming anytime soon.

As automated systems get more sophisticated, they take more and more of the grunt work away, freeing up the humans to work on the really gnarly stuff where you have to spend an hour researching.

We also use AI to give our human moderators power tools. Say I spot this new meme that is telling everyone to vote on Wednesday rather than Tuesday. I have a tool in front of me that says, “Find variants of that throughout the system. Find every photo with the same text, find every video that mentions this thing and kill it in one shot.” Rather than, I found this one picture, but then a bunch of other people upload that misinformation in different forms.
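One way such a "find every variant" tool can work (an assumption here; Schroepfer doesn't give implementation details) is fingerprinting: normalize a known bad post's text and hash it, so re-uploads that differ only in case, punctuation, or spacing match the same fingerprint. Real systems also fingerprint images and video perceptually; this text-only version is a simplification.

```python
# Sketch of near-duplicate matching by fingerprint: normalize away trivial
# edits, then hash, so variants of a known bad post collide with its entry
# in a blocklist.

import hashlib
import re

def fingerprint(text):
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower())  # drop case/punctuation
    normalized = " ".join(normalized.split())             # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()

BLOCKED = {fingerprint("Vote on Wednesday, not Tuesday!")}  # invented example

def is_variant(post_text):
    return fingerprint(post_text) in BLOCKED

print(is_variant("VOTE ON WEDNESDAY... not Tuesday"))  # → True
print(is_variant("Vote on Tuesday"))                   # → False
```

Exact-hash matching only catches trivial edits; catching paraphrases or altered images requires fuzzier similarity search over learned embeddings, which is the harder AI problem the interview alludes to.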

Another important aspect of AI is that anything I can do to prevent a person from having to look at terrible things is time well spent. Whether it’s a person employed by us as a moderator or a user of our services, looking at these things is a terrible experience. If I can build systems that take the worst of the worst, the really graphic violence, and deal with that in an automated fashion, that’s worth a lot to me.

Posted in Human Robots

#437267 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s New Language Generator GPT-3 Is Shockingly Good—and Completely Mindless
Will Douglas Heaven | MIT Technology Review
“‘Playing with GPT-3 feels like seeing the future,’ Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI.”

ROBOTICS
The Star of This $70 Million Sci-Fi Film Is a Robot
Sarah Bahr | The New York Times
“Erica was created by Hiroshi Ishiguro, a roboticist at Osaka University in Japan, to be ‘the most beautiful woman in the world’—he modeled her after images of Miss Universe pageant finalists—and the most humanlike robot in existence. But she’s more than just a pretty face: Though ‘b’ is still in preproduction, when she makes her debut, producers believe it will be the first time a film has relied on a fully autonomous artificially intelligent actor.”

VIRTUAL REALITY
My Glitchy, Glorious Day at a Conference for Virtual Beings
Emma Grey Ellis | Wired
“Spectators spent much of the time debating who was real and who was fake. …[Lars Buttler’s] eyes seemed awake and alive in a way that the faces of the other participants in the Zoom call—a venture capitalist, a tech founder, and an activist, all of them puppeted by artificial intelligence—were not. ‘Pretty sure Lars is human,’ a (real-person) spectator typed in the in-meeting chat room. ‘I’m starting to think Lars is AI,’ wrote another.”

FUTURE OF FOOD
KFC Is Working With a Russian 3D Bioprinting Firm to Try to Make Lab-Produced Chicken Nuggets
Kim Lyons | The Verge
“The chicken restaurant chain will work with Russian company 3D Bioprinting Solutions to develop bioprinting technology that will ‘print’ chicken meat, using chicken cells and plant material. KFC plans to provide the bioprinting firm with ingredients like breading and spices ‘to achieve the signature KFC taste’ and will seek to replicate the taste and texture of genuine chicken.”

BIOTECH
A CRISPR Cow Is Born. It’s Definitely a Boy
Megan Molteni | Wired
“After nearly five years of research, at least half a million dollars, dozens of failed pregnancies, and countless scientific setbacks, Van Eenennaam’s pioneering attempt to create a line of Crispr’d cattle tailored to the needs of the beef industry all came down to this one calf. Who, as luck seemed sure to have it, was about to enter the world in the middle of a global pandemic.”

GOVERNANCE
Is the Pandemic Finally the Moment for a Universal Basic Income?
Brooks Rainwater and Clay Dillow | Fast Company
“Since February, governments around the globe—including in the US—have intervened in their citizens’ individual financial lives, distributing direct cash payments to backstop workers sidelined by the COVID-19 pandemic. Some are considering keeping such direct assistance in place indefinitely, or at least until the economic shocks subside.”

SCIENCE
How Gödel’s Proof Works
Natalie Wolchover | Wired
“In 1931, the Austrian logician Kurt Gödel pulled off arguably one of the most stunning intellectual achievements in history. Mathematicians of the era sought a solid foundation for mathematics: a set of basic mathematical facts, or axioms, that was both consistent—never leading to contradictions—and complete, serving as the building blocks of all mathematical truths. But Gödel’s shocking incompleteness theorems, published when he was just 25, crushed that dream.”

Image credit: Pierre Châtel-Innocenti / Unsplash


#434781 What Would It Mean for AI to Become ...

As artificial intelligence systems take on more tasks and solve more problems, it’s hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that “By 2029, computers will have emotional intelligence and be convincing as people.”

We don’t know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they’re conscious, does that mean they actually are?

Perhaps a more relevant question at this juncture is: what is consciousness, and how do we replicate it if we don’t understand it?

In a panel discussion at South By Southwest titled “How AI Will Design the Human Future,” experts from academia and industry discussed these questions and more.

Wait, What Is AI?
Most of AI’s recent feats—diagnosing illnesses, participating in debate, writing realistic text—involve machine learning, which uses statistics to find patterns in large datasets, then uses those patterns to make predictions. However, “AI” has been used to refer to everything from basic software automation and algorithms to advanced machine learning and deep learning.

“The term ‘artificial intelligence’ is thrown around constantly and often incorrectly,” said Jennifer Strong, a reporter at the Wall Street Journal and host of the podcast “The Future of Everything.” Indeed, one study found that 40 percent of European companies that claim to be working on or using AI don’t actually use it at all.

Dr. Peter Stone, associate chair of computer science at UT Austin, was the study panel chair on the 2016 One Hundred Year Study on Artificial Intelligence (or AI100) report. Based out of Stanford University, AI100 is studying and anticipating how AI will impact our work, our cities, and our lives.

“One of the first things we had to do was define AI,” Stone said. They defined it as a collection of different technologies inspired by the human brain to be able to perceive their surrounding environment and figure out what actions to take given these inputs.

Modeling on the Unknown
Here’s the crazy thing about that definition (and about AI itself): we’re essentially trying to re-create the abilities of the human brain without having anything close to a thorough understanding of how the human brain works.

“We’re starting to pair our brains with computers, but brains don’t understand computers and computers don’t understand brains,” Stone said. Dr. Heather Berlin, cognitive neuroscientist and professor of psychiatry at the Icahn School of Medicine at Mount Sinai, agreed. “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions,” she said.

This isn’t to say we’re not making progress; there have been significant neuroscience breakthroughs in recent years. “This has been the stuff of science fiction for a long time, but now there’s active work being done in this area,” said Amir Husain, CEO and founder of Austin-based AI company Spark Cognition.

Advances in brain-machine interfaces show just how much more we understand the brain now than we did even a few years ago. Neural implants are being used to restore communication or movement capabilities in people who’ve been impaired by injury or illness. Scientists have been able to transfer signals from the brain to prosthetic limbs and stimulate specific circuits in the brain to treat conditions like Parkinson’s, PTSD, and depression.

But much of the brain’s inner workings remain a deep, dark mystery—one that will have to be unraveled further if we’re ever to get from narrow AI, the task-specific systems where the technology stands today, to artificial general intelligence: systems that possess the same intelligence level and learning capabilities as humans.

The biggest question that arises here, and one that’s become a popular theme across stories and films, is if machines achieve human-level general intelligence, does that also mean they’d be conscious?

Wait, What Is Consciousness?
As valuable as the knowledge we’ve accumulated about the brain is, it seems like nothing more than a collection of disparate facts when we try to put it all together to understand consciousness.

“If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you?” Berlin asked. “These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”

Is consciousness a measurable phenomenon, though? Rather than progressing by degrees or moving through some gray area, isn’t it pretty black and white—a being is either conscious or it isn’t?

This may be an outmoded way of thinking, according to Berlin. “It used to be that only philosophers could study consciousness, but now we can study it from a scientific perspective,” she said. “We can measure changes in neural pathways. It’s subjective, but depends on reportability.”

She described three levels of consciousness: pure subjective experience (“Look, the sky is blue”), awareness of one’s own subjective experience (“Oh, it’s me that’s seeing the blue sky”), and relating one subjective experience to another (“The blue sky reminds me of a blue ocean”).

“These subjective states exist all the way down the animal kingdom. As humans we have a sense of self that gives us another depth to that experience, but it’s not necessary for pure sensation,” Berlin said.

Husain took this definition a few steps farther. “It’s this self-awareness, this idea that I exist separate from everything else and that I can model myself,” he said. “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.”

Most of the decisions we make involve envisioning different outcomes, thinking about how each outcome would affect us, and choosing which outcome we’d most prefer.

“Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model,” Husain said. “With that view, I as an AI practitioner don’t see a problem implementing that type of consciousness.”

Moving Forward Cautiously (But Not Too Cautiously)
To be clear, we’re nowhere near machines achieving artificial general intelligence or consciousness, and whether a “conscious machine” is possible—not to mention necessary or desirable—is still very much up for debate.

As machine intelligence continues to advance, though, we’ll need to walk the line between progress and risk management carefully.

Improving the transparency and explainability of AI systems is one crucial goal AI developers and researchers are zeroing in on. Especially in applications that could mean the difference between life and death, AI shouldn’t advance without people being able to trace how it’s making decisions and reaching conclusions.

Medicine is a prime example. “There are already advances that could save lives, but they’re not being used because they’re not trusted by doctors and nurses,” said Stone. “We need to make sure there’s transparency.” Demanding too much transparency would also be a mistake, though, because it will hinder the development of systems that could at best save lives and at worst improve efficiency and free up doctors to have more face time with patients.

Similarly, self-driving cars have great potential to reduce traffic fatalities. But even though human drivers cause thousands of deadly crashes every day, we’re terrified by the idea of self-driving cars that are anything less than perfect. “If we only accept autonomous cars when there’s zero probability of an accident, then we will never accept them,” Stone said. “Yet we give 16-year-olds the chance to take a road test with no idea what’s going on in their brains.”

This brings us back to the fact that, in building tech modeled after the human brain—which has evolved over millions of years—we’re working towards an end whose means we don’t fully comprehend, be it something as basic as choosing when to brake or accelerate or something as complex as measuring consciousness.

“We shouldn’t charge ahead and do things just because we can,” Stone said. “The technology can be very powerful, which is exciting, but we have to consider its implications.”

Image Credit: agsandrew / Shutterstock.com


#433400 A Model for the Future of Education, and ...

As kids worldwide head back to school, I’d like to share my thoughts on the future of education.

Bottom line, how we educate our kids needs to radically change given the massive potential of exponential tech (e.g. artificial intelligence and virtual reality).

Without question, the number one driver for education is inspiration. As such, if you have a kid age 8–18, you’ll want to get your hands on an incredibly inspirational novel written by my dear friend Ray Kurzweil called Danielle: Chronicles of a Superheroine.

Danielle offers boys and girls a role model of a young woman who uses smart technologies and super-intelligence to partner with her friends to solve some of the world’s greatest challenges. It’s perfect to inspire anyone to pursue their moonshot.

Without further ado, let’s dive into the future of educating kids, and a summary of my white paper thoughts….

Just last year, edtech (education technology) investments hit a record high of 9.5 billion USD—up 30 percent from the year before.

Already valued at over half a billion USD, the AI in education market is set to surpass 6 billion USD by 2024.

And we’re now seeing countless new players enter the classroom, from a Soul Machines AI teacher specializing in energy use and sustainability to smart “lab schools” with personalized curricula.

As my two boys enter 1st grade, I continue asking myself, given the fact that most elementary schools haven’t changed in many decades (perhaps a century), what do I want my kids to learn? How do I think about elementary school during an exponential era?

This post covers five subjects related to elementary school education:

Five Issues with Today’s Elementary Schools
Five Guiding Principles for Future Education
An Elementary School Curriculum for the Future
Exponential Technologies in our Classroom
Mindsets for the 21st Century

Excuse the length of this post, but if you have kids, the details might be meaningful. If you don’t, then next week’s post will return to normal length and another fun subject.

Also, if you’d like to see my detailed education “white paper,” you can view or download it here.

Let’s dive in…

Five Issues With Today’s Elementary Schools
There are probably lots of issues with today’s traditional elementary schools, but I’ll just choose a few that bother me most.

Grading: In the traditional education system, you start at an “A,” and every time you get something wrong, your score gets lower and lower. At best it’s demotivating, and at worst it has nothing to do with the world you occupy as an adult. In the gaming world (e.g. Angry Birds), it’s just the opposite. You start with zero and every time you come up with something right, your score gets higher and higher.
Sage on the Stage: Most classrooms have a teacher up in front of class lecturing to a classroom of students, half of whom are bored and half of whom are lost. The one-teacher-fits-all model comes from an era of scarcity where great teachers and schools were rare.
Relevance: When I think back to elementary and secondary school, I realize how much of what I learned was never actually useful later in life, and how many of my critical lessons for success I had to pick up on my own (I don’t know about you, but I haven’t ever actually had to factor a polynomial in my adult life).
Imagination, Coloring inside the Lines: Probably of greatest concern to me is the factory-worker, industrial-era origin of today’s schools. Programs are so structured around rote memorization that they squash the originality out of most children. I’m reminded that “the day before something is truly a breakthrough, it’s a crazy idea.” Where do we pursue crazy ideas in our schools? Where do we foster imagination?
Boring: If learning in school is a chore, boring, or emotionless, then the most important driver of human learning, passion, is disengaged. Having our children memorize facts and figures, sit passively in class, and take mundane standardized tests completely defeats the purpose.

An average of 7,200 students drop out of high school each day, totaling 1.3 million each year. This means only 69 percent of students who start high school finish four years later. And over 50 percent of these high school dropouts name boredom as the number one reason they left.

Five Guiding Principles for Future Education
I imagine a relatively near-term future in which robotics and artificial intelligence will allow any of us, from ages 8 to 108, to easily and quickly find answers, create products, or accomplish tasks, all simply by expressing our desires.

From ‘mind to manufactured in moments.’ In short, we’ll be able to do and create almost whatever we want.

In this future, what attributes will be most critical for our children to learn to become successful in their adult lives? What’s most important for educating our children today?

For me it’s about passion, curiosity, imagination, critical thinking, and grit.

Passion: You’d be amazed at how many people don’t have a mission in life… a calling… something to jolt them out of bed every morning. The most valuable resource for humanity is the persistent and passionate human mind, so creating a future of passionate kids is vitally important. For my 7-year-old boys, I want to support them in finding their passion or purpose… something that is uniquely theirs, in the same way that the Apollo program and Star Trek drove my early love for all things space, and that passion drove me to learn and do.
Curiosity: Curiosity is innate in kids, yet something most adults lose over the course of their lives. Why? In a world of Google, robots, and AI, raising a kid who constantly asks questions and runs “what if” experiments can be extremely valuable. In an age of machine learning, massive data, and a trillion sensors, it is the quality of your questions that will matter most.
Imagination: Entrepreneurs and visionaries imagine the world (and the future) they want to live in, and then they create it. Kids happen to be some of the most imaginative humans around… it’s critical that they know how important and liberating imagination can be.
Critical Thinking: In a world flooded with often-conflicting ideas, baseless claims, misleading headlines, negative news, and misinformation, learning the skill of critical thinking helps find the signal in the noise. This principle is perhaps the most difficult to teach kids.
Grit/Persistence: Grit is defined as “passion and perseverance in pursuit of long-term goals,” and it has recently been widely acknowledged as one of the most important predictors of and contributors to success.

Teaching your kids not to give up, to keep trying, and to keep trying new ideas for something that they are truly passionate about achieving is extremely critical. Much of my personal success has come from such stubbornness. I joke that both XPRIZE and the Zero Gravity Corporation were “overnight successes after 10 years of hard work.”

So given those five basic principles, what would an elementary school curriculum look like? Let’s take a look…

An Elementary School Curriculum for the Future
Over the last 30 years, I’ve had the pleasure of starting two universities, International Space University (1987) and Singularity University (2007). My favorite part of co-founding both institutions was designing and implementing the curriculum. Along those lines, the following is my first shot at the type of curriculum I’d love my own boys to be learning.

I’d love your thoughts; I’ll be looking for them here: https://www.surveymonkey.com/r/DDRWZ8R

For the purpose of illustration, I’ll speak about ‘courses’ or ‘modules,’ but in reality these are just elements that would ultimately be woven together throughout the course of K-6 education.

Module 1: Storytelling/Communications

When I think about the skill that has served me best in life, it’s been my ability to present my ideas in the most compelling fashion possible, to get others on board, and to rally support for an idea’s birth and growth. In my adult life, as an entrepreneur and a CEO, it’s been my ability to communicate clearly and tell compelling stories that has allowed me to create the future. I don’t think this lesson can start too early in life. So imagine a module, year after year, in which our kids learn the art and practice of formulating and pitching their ideas: the best of oration and storytelling. Perhaps children in this class would watch TED presentations, or maybe they’d put together their own TEDx for kids. Ultimately, it’s about practice, getting comfortable putting yourself and your ideas out there, and overcoming any fear of public speaking.

Module 2: Passions

A modern school should help our children find and explore their passion(s). Passion is the greatest gift of self-discovery. It is a source of interest and excitement, and is unique to each child.

The key to finding passion is exposure: allowing kids to experience as many adventures, careers, and passionate adults as possible. Historically, this was limited by geography and cost, and implemented by having local moms and dads present in class to talk about their careers. “Hi, I’m Alan, Billy’s dad, and I’m an accountant. Accountants are people who…”

But in a world of YouTube and virtual reality, the ability for our children to explore 500 different possible careers or passions during their K-6 education becomes not only possible but compelling. I imagine a module where children share their newest passion each month, sharing videos (or VR experiences) and explaining what they love and what they’ve learned.

Module 3: Curiosity & Experimentation

Einstein famously said, “I have no special talent. I am only passionately curious.” Curiosity is innate in children, and many times lost later in life. Arguably, curiosity is responsible for all major scientific and technological advances; it’s the desire of an individual to know the truth.

Coupled with curiosity is the process of experimentation and discovery. The process of asking questions, creating and testing a hypothesis, and repeated experimentation until the truth is found. As I’ve studied the most successful entrepreneurs and entrepreneurial companies, from Google and Amazon to Uber, their success is significantly due to their relentless use of experimentation to define their products and services.

Here I imagine a module which instills in children the importance of curiosity and gives them permission to say, “I don’t know, let’s find out.”

Further, a monthly module that teaches children how to design and execute valid and meaningful experiments. Imagine children who learn the skill of asking a question, proposing a hypothesis, designing an experiment, gathering the data, and then reaching a conclusion.
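As a sketch of what that full loop — question, hypothesis, data, conclusion — could look like in code, here is a minimal permutation test. It asks whether an observed difference between two groups could plausibly be chance; all names and numbers below are invented for illustration:

```python
import random
import statistics

def permutation_test(group_a, group_b, trials=10_000, seed=0):
    """Estimate how often a difference in means this large would
    appear by chance if the two group labels were interchangeable."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n = len(group_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / trials

# Hypothesis: plants watered daily (A) grow taller than plants watered weekly (B).
heights_a = [12.1, 13.4, 11.8, 14.0, 12.9]
heights_b = [10.2, 9.8, 11.1, 10.5, 9.9]
p = permutation_test(heights_a, heights_b)
print(p)  # a small p-value means the difference is unlikely to be chance
```

The same shuffle-and-compare logic works for any classroom experiment that produces two groups of measurements, which is why it is a good first statistical tool: no formulas to memorize, just the experiment replayed many times under the assumption that nothing is going on.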

Module 4: Persistence/Grit

Doing anything big, bold, and significant in life is hard work. You can’t just give up when the going gets rough. The mindset of persistence, of grit, is a learned behavior I believe can be taught at an early age, especially when it’s tied to pursuing a child’s passion.

I imagine a curriculum that, each week, studies the career of a great entrepreneur and highlights their story of persistence. It would highlight the individuals and companies that stuck with it, iterated, and ultimately succeeded.

Further, I imagine a module that combines persistence and experimentation in gameplay, such as that found in Dean Kamen’s FIRST LEGO league, where 4th graders (and up) research a real-world problem such as food safety, recycling, energy, and so on, and are challenged to develop a solution. They also must design, build, and program a robot using LEGO MINDSTORMS®, then compete on a tabletop playing field.

Module 5: Technology Exposure

In a world of rapidly accelerating technology, understanding how technologies work, what they do, and their potential for benefiting society is, in my humble opinion, critical to a child’s future. Technology and coding (more on this below) are the new “lingua franca” of tomorrow.

In this module, I imagine teaching (age appropriate) kids through play and demonstration. Giving them an overview of exponential technologies such as computation, sensors, networks, artificial intelligence, digital manufacturing, genetic engineering, augmented/virtual reality, and robotics, to name a few. This module is not about making a child an expert in any technology, it’s more about giving them the language of these new tools, and conceptually an overview of how they might use such a technology in the future. The goal here is to get them excited, give them demonstrations that make the concepts stick, and then to let their imaginations run.

Module 6: Empathy

Empathy, defined as “the ability to understand and share the feelings of another,” has been recognized as one of the most critical skills for our children today. And while much has been written about empathy, and good practices exist for instilling it at home and in school, today’s new tools can accelerate it.

Virtual reality isn’t just about video games anymore. Artists, activists, and journalists now see the technology’s potential to be an empathy engine, one that can shine spotlights on everything from the Ebola epidemic to what it’s like to live in Gaza. And Jeremy Bailenson has been at the vanguard of investigating VR’s power for good.

For more than a decade, Bailenson’s lab at Stanford has been studying how VR can make us better people. Through the power of VR, volunteers at the lab have felt what it is like to be Superman (to see if it makes them more helpful), a cow (to reduce meat consumption), and even a coral (to learn about ocean acidification).

Silly as they might seem, these sorts of VR scenarios could be more effective than the traditional public service ad at changing behavior. Afterwards, participants waste less paper. They save more money for retirement. They’re nicer to the people around them. And this could have consequences for how we teach and train everyone from cliquey teenagers to high court judges.

Module 7: Ethics/Moral Dilemmas

Related to empathy, and equally important, is the goal of infusing kids with a moral compass. Over a year ago, I toured a special school created by Elon Musk (the Ad Astra school) for his five boys (age 9 to 14). One element that is persistent in that small school of under 40 kids is the conversation about ethics and morals, a conversation manifested by debating real-world scenarios that our kids may one day face.

Here’s an example of the sort of gameplay/roleplay I heard about at Ad Astra that might be implemented in a module on morals and ethics. Imagine a small town on a lake, in which the majority of the town is employed by a single factory. But that factory has been polluting the lake and killing all the life in it. What do you do? Shutting down the factory would mean that everyone loses their jobs; keeping it open means the lake and everything living in it dies. This kind of regular and routine conversation/gameplay allows the children to see the world in a critically important fashion.

Module 8: The 3R Basics (Reading, wRiting & aRithmetic)

There’s no question that young children entering kindergarten need the basics of reading, writing, and math. The only question is what’s the best way for them to get it? We all grew up in the classic mode of a teacher at the chalkboard, books, and homework at night. But I would argue that such teaching approaches are long outdated, now replaced by apps, gameplay, and the concept of the flipped classroom.

Pioneered by high school teachers Jonathan Bergmann and Aaron Sams in 2007, the flipped classroom reverses the sequence of events from that of the traditional classroom.

Students view lecture materials, usually in the form of video lectures, as homework prior to coming to class. In-class time is reserved for activities such as interactive discussions or collaborative work, all performed under the guidance of the teacher.

The benefits are clear:

Students can consume lectures at their own pace, viewing the video again and again until they get the concept, or fast-forwarding if the information is obvious.
The teacher is present while students apply new knowledge. Moving homework into class time gives teachers insight into which concepts, if any, their students are struggling with, and helps them adjust the class accordingly.
The flipped classroom produces tangible results: 71 percent of teachers who flipped their classes noticed improved grades, and 80 percent reported improved student attitudes as a result.

Module 9: Creative Expression & Improvisation

Every single one of us is creative. It’s human nature to be creative… the thing is that we each might have different ways of expressing our creativity.

We must encourage kids to discover and to develop their creative outlets early. In this module, imagine showing kids the many different ways creativity is expressed, from art to engineering to music to math, and then guiding them as they choose the area (or areas) they are most interested in. Critically, teachers (or parents) can then develop unique lessons for each child based on their interests, thanks to open education resources like YouTube and the Khan Academy. If my child is interested in painting and robots, a teacher or AI could scour the web and put together a custom lesson set from videos/articles where the best painters and roboticists in the world share their skills.

Adapting to change is critical for success, especially in our constantly changing world today. Improvisation is a skill that can be learned, and we need to be teaching it early.

In most collegiate “improv” classes, the core of great improvisation is the “Yes, and…” mindset. When acting out a scene, one actor might introduce a new character or idea, completely changing the context of the scene. It’s critical that the other actors in the scene say “Yes, and…”: accept the new reality, then add something new of their own.

Imagine playing similar role-play games in elementary schools, where a teacher gives the students a scene/context and constantly changes variables, forcing them to adapt and play.

Module 10: Coding

Computer science opens more doors for students than any other discipline in today’s world. Learning even the basics will help students in virtually any career, from architecture to zoology.

Coding is an important tool for computer science, in the way that arithmetic is a tool for doing mathematics and words are a tool for English. Coding creates software, but computer science is a broad field encompassing deep concepts that go well beyond coding.

Every 21st century student should also have a chance to learn about algorithms, how to make an app, or how the internet works. Computational thinking allows preschoolers to grasp concepts like algorithms, recursion and heuristics. Even if they don’t understand the terms, they’ll learn the basic concepts.
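To make those concepts concrete, here is the kind of tiny, playful sketch a classroom exercise might build on — recursion as nesting dolls, and the familiar “guess the number” game, which is really binary search. The examples and names are my own, not drawn from any particular curriculum:

```python
def count_dolls(doll):
    """A nesting doll either holds nothing (None) or holds a smaller doll.
    Counting them is naturally recursive: this doll, plus all the dolls inside."""
    if doll is None:
        return 0
    return 1 + count_dolls(doll["inside"])

def guess_the_number(secret, low=1, high=100):
    """'Guess the number' is binary search: each guess halves the range,
    so any number from 1 to 100 falls in at most 7 guesses."""
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2  # always guess the middle of what's left
        if mid == secret:
            return guesses
        if mid < secret:
            low = mid + 1        # "higher!"
        else:
            high = mid - 1       # "lower!"

# Three nested dolls -> the recursion unwinds to 3.
dolls = {"inside": {"inside": {"inside": None}}}
print(count_dolls(dolls))       # 3
print(guess_the_number(42))     # never more than 7
```

The point of exercises like these isn’t the syntax; it’s that a child who has played “guess the number” already understands the halving idea, and the code just names what they were doing.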

There are more than 500,000 open jobs in computing right now, representing the number one source of new wages in the US, and these jobs are projected to grow at twice the rate of all other jobs.

Coding is fun! Beyond the practical reasons for learning how to code, there’s the fact that creating a game or animation can be really fun for kids.

Module 11: Entrepreneurship & Sales

At its core, entrepreneurship is about identifying a problem (an opportunity), developing a vision on how to solve it, and working with a team to turn that vision into reality. I mentioned Elon’s school, Ad Astra: here, again, entrepreneurship is a core discipline where students create and actually sell products and services to each other and the school community.

You could recreate this basic exercise with a group of kids in lots of fun ways to teach them the basic lessons of entrepreneurship.

Related to entrepreneurship is sales. In my opinion, we need to be teaching sales to every child at an early age. Being able to “sell” an idea (again related to storytelling) has been a critical skill in my career, and it is a competency that many people simply never learned.

The lemonade stand has been a classic, though somewhat meager, lesson in sales from past generations, where a child sits on a street corner and tries to sell homemade lemonade for $0.50 to people passing by. I’d suggest we step the game up and take a more active approach in gamifying sales, and maybe having the classroom create a Kickstarter, Indiegogo or GoFundMe campaign. The experience of creating a product or service and successfully selling it will create an indelible memory and give students the tools to change the world.

Module 12: Language

A little over a year ago, I spent a week in China meeting with parents whose focus on kids’ education is extraordinary. One of the areas I found fascinating is how some of the most advanced parents are teaching their kids new languages: through games. On the tablet, the kids are allowed to play games, but only in French. A child’s desire to win fully engages them and drives their learning rapidly.

Beyond games, there’s virtual reality. We know that full immersion is what it takes to become fluent (at least later in life). A semester abroad in France or Italy, and you’ve got a great handle on the language and the culture. But what about for an eight-year-old?

Imagine a module where for an hour each day, the children spend their time walking around Italy in a VR world, hanging out with AI-driven game characters who teach them, engage them, and share the culture and the language in the most personalized and compelling fashion possible.

Exponential Technologies for Our Classrooms
If you’ve attended Abundance 360 or Singularity University, or followed my blogs, you’ll probably agree with me that the way our children will learn is going to fundamentally transform over the next decade.

Here’s an overview of the top five technologies that will reshape the future of education:

Tech 1: Virtual Reality (VR) can make learning truly immersive. It’s often claimed that we remember 20 percent of what we hear, 30 percent of what we see, and up to 90 percent of what we do or simulate — and virtual reality delivers that last, most powerful mode of learning. VR enables students to simulate flying through the bloodstream while learning about the different cells they encounter, or to travel to Mars to inspect the surface for life.

To make this a reality, Google Cardboard just launched its Pioneer Expeditions product. Under this program, thousands of schools around the world have gotten a kit containing everything a teacher needs to take his or her class on a virtual trip. While data on VR use in K-12 schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies (including zSpace, Alchemy VR and Immersive VR Education) solely dedicated to providing schools with packaged education curriculum and content.

Add to VR a related technology called augmented reality (AR), and experiential education really comes alive. Imagine wearing an AR headset that is able to superimpose educational lessons on top of real-world experiences. Interested in botany? As you walk through a garden, the AR headset superimposes the name and details of every plant you see.

Tech 2: 3D Printing is allowing students to bring their ideas to life. Never mind the computer on every desktop (or a tablet for every student); that’s a given. In the near future, teachers and students will want or have a 3D printer on the desk to help them learn core science, technology, engineering and mathematics (STEM) principles. Bre Pettis, of MakerBot Industries, in a grand but practical vision, sees a 3D printer on every school desk in America. “Imagine if you had a 3D printer instead of a LEGO set when you were a kid; what would life be like now?” asks Mr. Pettis. You could print your own mini-figures and your own blocks, and you could iterate on new designs as quickly as your imagination would allow. MakerBots are now in over 5,000 K-12 schools across the US.

Taking this one step further, you could imagine having a 3D file for most entries in Wikipedia, allowing you to print out and study an object you can only read about or visualize in VR.

Tech 3: Sensors & Networks. An explosion of sensors and networks is going to connect everyone at gigabit speeds, making access to rich video available at all times. At the same time, sensors continue to shrink in size and power consumption, becoming embedded in everything. One benefit will be the connection of sensor data with machine learning and AI (below), such that a child’s drifting attention or confusion can be easily measured and communicated. The information could then be re-presented through an alternate modality or at a different speed.

Tech 4: Machine Learning is making learning adaptive and personalized. No two students are identical—they have different modes of learning (by reading, seeing, hearing, doing), come from different educational backgrounds, and have different intellectual capabilities and attention spans. Advances in machine learning and the surging adaptive learning movement are seeking to solve this problem. Companies like Knewton and Dreambox have over 15 million students on their respective adaptive learning platforms. Soon, every education application will be adaptive, learning how to personalize the lesson for a specific student. There will be adaptive quizzing apps, flashcard apps, textbook apps, simulation apps and many more.
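As an illustration of the core idea — not of any vendor’s actual algorithm — an adaptive system only needs two pieces: a per-skill mastery estimate that updates with each answer, and a rule for choosing what to ask next. A deliberately minimal sketch (skill names and the update rate are invented):

```python
class AdaptiveQuiz:
    """Toy adaptive-learning loop: track a running mastery estimate per skill
    and always quiz the skill the student currently seems weakest on.
    Commercial platforms like Knewton use far richer statistical models."""

    def __init__(self, skills, rate=0.3):
        self.mastery = {s: 0.5 for s in skills}  # start maximally uncertain
        self.rate = rate  # how quickly estimates move toward new evidence

    def next_skill(self):
        # Ask about the skill with the lowest current estimate.
        return min(self.mastery, key=self.mastery.get)

    def record(self, skill, correct):
        # Exponential moving average toward 1.0 (correct) or 0.0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[skill] += self.rate * (target - self.mastery[skill])

quiz = AdaptiveQuiz(["fractions", "decimals", "geometry"])
quiz.record("fractions", True)    # estimate rises to 0.65
quiz.record("decimals", False)    # estimate falls to 0.35
print(quiz.next_skill())          # the weakest skill gets the next question
```

Everything a real platform adds — forgetting curves, question difficulty, confidence intervals — refines this same loop of estimate, select, and update.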

Tech 5: Artificial Intelligence or “An AI Teaching Companion.” Neal Stephenson’s book The Diamond Age presents a fascinating piece of educational technology called “A Young Lady’s Illustrated Primer.”

As described by Beat Schwendimann, “The primer is an interactive book that can answer a learner’s questions (spoken in natural language), teach through allegories that incorporate elements of the learner’s environment, and presents contextual just-in-time information.

“The primer includes sensors that monitor the learner’s actions and provide feedback. The learner is in a cognitive apprenticeship with the book: The primer models a certain skill (through allegorical fairy tale characters), which the learner then imitates in real life.

“The primer follows a learning progression with increasingly more complex tasks. The educational goals of the primer are humanist: To support the learner to become a strong and independently thinking person.”

The primer, an individualized AI teaching companion, is the result of technological convergence and is beautifully described by YouTuber CGP Grey in his video Digital Aristotle: Thoughts on the Future of Education.

Your AI companion will have unlimited access to information in the cloud and will deliver it at the optimal speed to each student in an engaging, fun way. This AI will demonetize and democratize education, be available to everyone for free (just like Google), and offer the best education equally to the wealthiest and poorest children on the planet.

This AI companion is not a tutor who spouts facts, figures and answers, but a player on the side of the student, there to help him or her learn, and in so doing, learn how to learn better. The AI is always alert, watching for signs of frustration and boredom that may precede quitting, for signs of curiosity or interest that tend to indicate active exploration, and for signs of enjoyment and mastery, which might indicate a successful learning experience.

Ultimately, we’re heading towards a vastly more educated world. We are truly living during the most exciting time to be alive.

Mindsets for the 21st Century
Finally, it’s important for me to discuss mindsets. How we think about the future colors how we learn and what we do. I’ve written extensively about the importance of an abundance and exponential mindset for entrepreneurs and CEOs. I also think that attention to mindset in our elementary schools, when a child is shaping the mental “operating system” for the rest of their life, is even more important.

As such, I would recommend that a school adopt a set of principles that teach and promote a number of mindsets in the fabric of their programs.

Many “mindsets” are important to promote. Here are a couple to consider:

Nurturing Optimism & An Abundance Mindset:
We live in a competitive world, and kids experience a significant amount of pressure to perform. When they fall short, they feel deflated. We all fail at times; that’s part of life. If we want to raise “can-do” kids who can work through failure and come out stronger for it, it’s wise to nurture optimism. Optimistic kids are more willing to take healthy risks, are better problem-solvers, and experience positive relationships. You can nurture optimism in your school by starting each day by focusing on gratitude (what each child is grateful for), or a “positive focus” in which each student takes 30 seconds to talk about what they are most excited about, or what recent event was positively impactful to them. (NOTE: I start every meeting inside my Strike Force team with a positive focus.)

Finally, helping students understand (through data and graphs) that the world is in fact getting better (see my first book, Abundance: The Future Is Better Than You Think) will help them counter the continuous stream of negative news in our media.

When kids feel confident in their abilities and excited about the world, they are willing to work harder and be more creative.

Tolerance for Failure:
Tolerating failure is a difficult lesson to learn and a difficult lesson to teach. But it is critically important to succeeding in life.

Astro Teller, who runs Google’s innovation branch “X,” talks a lot about encouraging failure. At X, they regularly try to “kill” their ideas. If they succeed in killing an idea, and thus “failing,” they save lots of time, money and resources. The ideas they can’t kill survive and develop into billion-dollar businesses. The key is that each time an idea is killed, Astro literally rewards the team with cash bonuses. Their failure is celebrated, and they become heroes.

This should be reproduced in the classroom: kids should try to be critical of their best ideas (learn critical thinking), then they should be celebrated for ‘successfully failing,’ perhaps with cake, balloons, confetti, and lots of Silly String.

Join Me & Get Involved!
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: sakkarin sapu / Shutterstock.com

Posted in Human Robots

#433288 The New AI Tech Turning Heads in Video ...

A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”

An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.

The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.

In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
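A toy sketch can make that interplay concrete. Below, the “data” is one-dimensional, the discriminator is a single logistic unit, and the generator learns only an offset — a deliberately tiny stand-in for the deep networks the researchers actually use, but the same alternating generate/judge loop:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D GAN: "real" samples cluster around 4.0; the generator learns a
# single offset `theta` so its fakes drift toward the real distribution.
rng = random.Random(0)
w, b = 0.1, 0.0     # discriminator: D(x) = sigmoid(w*x + b)
theta = 0.0         # generator:     G(z) = z + theta
lr = 0.05

for step in range(2000):
    real = 4.0 + rng.uniform(-0.5, 0.5)
    fake = rng.uniform(-0.5, 0.5) + theta

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (the non-saturating GAN loss),
    # i.e. nudge theta in whatever direction fools the discriminator more.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # drifts toward the real-data mean of ~4.0
```

The essential point survives the simplification: neither player is given the answer. The generator only ever sees the discriminator’s verdicts, yet its output converges toward the real data — which is exactly why scaled-up versions can produce heads and faces that fool the human eye.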

The researchers say the technique can be used by the film industry for a variety of purposes, from editing actors’ facial expressions to match dubbed voices to repositioning an actor’s head in post-production. According to the researchers, AI can not only produce highly realistic results, but also deliver them much faster than the manual processes used today. You can read the full paper on their work here.

“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”

AI Tech Different Than So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased their work using algorithms that inserted audio recordings from a person in one instance into a separate video of the same person in a different context.

In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, doing a pretty credible job of lip syncing.

A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video of a talking head in real time (in this case, actor and former California governor Arnold Schwarzenegger). Their Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.

A less sophisticated method of swapping faces using a machine learning software dubbed FakeApp emerged earlier this year. Predictably, the tech—requiring numerous photos of the source actor in order to train the neural network—was used for more juvenile pursuits, such as injecting a person’s face onto a porn star.

The application gave rise to the term "deepfakes," which is now used almost universally to describe any AI-manipulated video, much to the chagrin of some of the researchers involved in more legitimate uses.

Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work, intended for benign uses such as film production or correcting gaze and head position for more natural video teleconferencing, could be put to nefarious purposes. Fake news is the most obvious concern.

“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.

Toward that end, the research team is training the same kind of adversarial neural networks to spot video forgeries. They also strongly recommend that developers clearly watermark videos that are edited, by AI or otherwise, and clearly note which parts of the scene were modified.
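The labeling recommendation could be as simple as shipping machine-readable provenance metadata alongside each clip. The sketch below uses a hypothetical schema of my own (the paper does not specify one) to record which frames and which region of the scene were resynthesized:

```python
import json

def make_edit_manifest(clip_id, edits):
    """Build a machine-readable record of AI edits applied to a clip.

    `edits` is a list of dicts, each naming the affected frame range,
    the edited region (a bounding box), and the kind of modification.
    """
    return json.dumps({
        "clip_id": clip_id,
        "ai_edited": bool(edits),
        "edits": edits,
    }, sort_keys=True)

manifest = make_edit_manifest("interview_0042", [
    {"frames": [120, 360], "region": [200, 80, 340, 220],
     "modification": "head pose and facial expression resynthesized"},
])
print(json.loads(manifest)["ai_edited"])  # True
```

A player or platform could then surface this flag to viewers, or refuse to strip it on re-upload.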

To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.

Matthew Turek, the DARPA official in charge of the program, told MIT Technology Review that so far the program has "discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations." In one reported example, researchers targeted the eyes, which rarely blink in "deepfakes" like those created by FakeApp because the AI is trained on still pictures. That method would seem less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to match the entire facial and head motion between the source and target actors almost flawlessly.
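One published way to operationalize the blink cue is the eye aspect ratio (EAR) computed over the six standard eye landmarks: the ratio collapses when the eye closes, so a clip with an implausibly low blink count becomes suspect. A toy sketch, assuming per-frame landmarks are already available (real pipelines obtain them from a face-landmark detector):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR over 6 eye landmarks: mean vertical gap over horizontal span."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # upper/lower lid, inner pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # upper/lower lid, outer pair
    h = np.linalg.norm(eye[0] - eye[3])   # eye corner to eye corner
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2):
    """Count open-to-closed transitions along a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# Synthetic per-frame EAR trace: mostly open (~0.3) with two brief closures.
trace = [0.31, 0.30, 0.08, 0.29, 0.32, 0.30, 0.07, 0.06, 0.31]
print(count_blinks(trace))  # 2
```

A forensic tool would compare the resulting blink rate against the roughly 15-20 blinks per minute typical of real footage; a near-zero count over a long clip is the red flag.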

“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”

Image Credit: Tancha / Shutterstock.com