Tag Archives: soul

#439070 Are Digital Humans the Next Step in ...

In the fictional worlds of film and TV, artificial intelligence has been depicted as so advanced that it is indistinguishable from humans. But what if we’re actually getting closer to a world where AI is capable of thinking and feeling?

Tech company UneeQ is embarking on that journey with its “digital humans.” These avatars act as visual interfaces for customer service chatbots, virtual assistants, and other applications. UneeQ’s digital humans appear lifelike not only in terms of language and tone of voice, but also because of facial movements: raised eyebrows, a tilt of the head, a smile, even a wink. They transform a transaction into an interaction: creepy yet astonishing, human, but not quite.

What lies beneath UneeQ’s digital humans? Their 3D faces are modeled on actual human features. Speech recognition enables the avatar to understand what a person is saying, and natural language processing is used to craft a response. Before the avatar utters a word, specific emotions and facial expressions are encoded within the response.
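UneeQ's actual stack is proprietary, but the pipeline described above (speech in, a language-crafted reply, emotion cues attached before the avatar speaks) can be caricatured in a few lines. Every function and field name below is a hypothetical placeholder, not UneeQ's real API:

```python
# Toy sketch of a digital-human response pipeline. Each stage stands in
# for a component described in the article; all of them are stubs.

def transcribe(audio: bytes) -> str:
    """Speech recognition: audio in, text out (stub)."""
    return "what are your opening hours"

def generate_reply(user_text: str) -> str:
    """Natural language processing: craft a text response (stub)."""
    return "We are open from 9 to 5, Monday to Friday."

def annotate_emotion(reply: str) -> dict:
    """Encode emotion and facial-expression cues before the avatar speaks."""
    return {
        "text": reply,
        "emotion": "friendly",
        "expressions": ["raised_eyebrows", "smile"],
    }

def respond(audio: bytes) -> dict:
    """Full pipeline: hear, understand, reply with expression cues."""
    user_text = transcribe(audio)
    reply = generate_reply(user_text)
    return annotate_emotion(reply)

result = respond(b"...")
print(result["emotion"])  # friendly
```

The point of the sketch is the ordering: the expression cues are attached to the response before a word is uttered, which is what lets the avatar raise an eyebrow or smile in sync with its speech.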

UneeQ may be part of a larger trend towards humanizing computing. ObEN’s digital avatars serve as virtual identities for celebrities, influencers, gaming characters, and other entities in the media and entertainment industry. Meanwhile, Soul Machines is taking a more biological approach, with a “digital brain” that simulates aspects of the human brain to modulate the emotions “felt” and “expressed” by its “digital people.” Amelia is employing a similar methodology in building its “digital employees.” It emulates parts of the brain involved with memory to respond to queries and, with each interaction, learns to deliver more engaging and personalized experiences.

Shiwali Mohan, an AI systems scientist at the Palo Alto Research Center, is skeptical of these digital beings. “They’re humanlike in their looks and the way they sound, but that in itself is not being human,” she says. “Being human is also how you think, how you approach problems, and how you break them down; and that takes a lot of algorithmic design. Designing for human-level intelligence is a different endeavor than designing graphics that behave like humans. If you think about the problems we’re trying to design these avatars for, we might not need something that looks like a human—it may not even be the right solution path.”

And even if these avatars appear near-human, they still evoke an uncanny valley feeling. “If something looks like a human, we have high expectations of them, but they might behave differently in ways that humans just instinctively know how other humans react. These differences give rise to the uncanny valley feeling,” says Mohan.

Yet the demand is there, with Amelia seeing high adoption of its digital employees across the financial, health care, and retail sectors. “We find that banks and insurance companies, which are so risk-averse, are leading the adoption of such disruptive technologies because they understand that the risk of non-adoption is much greater than the risk of early adoption,” says Chetan Dube, Amelia’s CEO. “Unless they innovate their business models and make them much more efficient digitally, they might be left behind.” Dube adds that the COVID-19 pandemic has accelerated adoption of digital employees in health care and retail as well.

Amelia, Soul Machines, and UneeQ are taking their digital beings a step further, enabling organizations to create avatars themselves using low-code or no-code platforms: Digital Employee Builder for Amelia, Creator for UneeQ, and Digital DNA Studio for Soul Machines. Unreal Engine, a game engine developed by Epic Games, is doing the same with MetaHuman Creator, a tool that allows anyone to create photorealistic digital humans. “The biggest motivation for Digital Employee Builder is to democratize AI,” Dube says.

Mohan is cautious about this approach. “AI has problems with bias creeping in from data sets and into the way it speaks. The AI community is still trying to figure out how to measure and counter that bias,” she says. “[Companies] have to have an AI expert on board that can recommend the right things to build for.”

Despite being wary of the technology, Mohan supports the purpose behind these virtual beings and is optimistic about where they’re headed. “We do need these tools that support humans in different kinds of things. I think the vision is the pro, and I’m behind that vision,” she says. “As we develop more sophisticated AI technology, we would then have to implement novel ways of interacting with that technology. Hopefully, all of that is designed to support humans in their goals.”

Posted in Human Robots

#438785 Video Friday: A Blimp For Your Cat

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

Shiny robotic cat toy blimp!

I am pretty sure this is Google Translate getting things wrong, but the About page mentions that the blimp will “take you to your destination after appearing in the death of God.”

[ NTT DoCoMo ] via [ RobotStart ]

If you have yet to see this real-time video of Perseverance landing on Mars, drop everything and watch it.

During the press conference, someone commented that this is the first time anyone on the team who designed and built this system has ever seen it in operation, since it could only be tested at the component scale on Earth. This landing system has blown my mind since Curiosity.

Here's a better look at where Percy ended up:

[ NASA ]

The fact that Digit can just walk up and down wet, slippery, muddy hills without breaking a sweat is (still) astonishing.

[ Agility Robotics ]

SkyMul wants drones to take over the task of tying rebar, which looks like just the sort of thing we'd rather robots be doing so that we don't have to:

The tech certainly looks promising, and SkyMul says that they're looking for some additional support to bring things to the pilot stage.

[ SkyMul ]

Thanks Eohan!

Flatcat is a pet-like, playful robot that reacts to touch. Flatcat feels everything exactly: Cuddle with it, romp around with it, or just watch it do weird things of its own accord. We are sure that flatcat will amaze you, like us, and caress your soul.

I don't totally understand it, but I want it anyway.

[ Flatcat ]

Thanks Oswald!

This is how I would have a romantic dinner date if I couldn't get together in person. Herman the UR3 and an OptiTrack system let me remotely make a romantic meal!

[ Dave's Armoury ]

Here, we propose a novel design of deformable propellers inspired by dragonfly wings. The structure of these propellers includes a flexible segment similar to the nodus on a dragonfly wing. This flexible segment can bend, twist and even fold upon collision, absorbing force upon impact and protecting the propeller from damage.

[ Paper ]

Thanks Van!

In the 1970s, the CIA created the world's first miniaturized unmanned aerial vehicle, or UAV, which was intended to be a clandestine listening device. The Insectothopter was never deployed operationally, but was still revolutionary for its time.

It may never have been deployed (not that they'll admit to, anyway), but it was definitely operational and could fly controllably.

[ CIA ]

Research labs are starting to get Digits, which means we're going to get a much better idea of what its limitations are.

[ Ohio State ]

This video shows the latest achievements for LOLA walking on undetected uneven terrain. The robot is technically blind, not using any camera-based or prior information on the terrain.

[ TUM ]

We define “robotic contact juggling” to be the purposeful control of the motion of a three-dimensional smooth object as it rolls freely on a motion-controlled robot manipulator, or “hand.” While specific examples of robotic contact juggling have been studied before, in this paper we provide the first general formulation and solution method for the case of an arbitrary smooth object in single-point rolling contact on an arbitrary smooth hand.

[ Paper ]

Thanks Fan!

A couple of new cobots from ABB, designed to work safely around humans.

[ ABB ]

Thanks Fan!

It's worth watching at least a little bit of Adam Savage testing Spot's new arm, because we get to see Spot try, fail, and eventually succeed at an autonomous door-opening behavior at the 10 minute mark.

[ Tested ]

SVR discusses diversity with guest speakers Dr. Michelle Johnson from the GRASP Lab at UPenn; Dr. Ariel Anders from Women in Robotics and first technical hire at Robust.ai; Alka Roy from The Responsible Innovation Project; and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics. The discussion here is moderated by Dr. Ken Goldberg—artist, roboticist and Director of the CITRIS People and Robots Lab—and Andra Keay from Silicon Valley Robotics.

[ SVR ]

RAS presents a Soft Robotics Debate on Bioinspired vs. Biohybrid Design.

In this debate, we will bring together experts in Bioinspiration and Biohybrid design to discuss the necessary steps to make more competent soft robots. We will try to answer whether bioinspired research should focus more on developing new bioinspired material and structures or on the integration of living and artificial structures in biohybrid designs.

[ RAS SoRo ]

IFRR presents a Colloquium on Human Robot Interaction.

Across many application domains, robots are expected to work in human environments, side by side with people. The users will vary substantially in background, training, physical and cognitive abilities, and readiness to adopt technology. Robotic products are expected to not only be intuitive, easy to use, and responsive to the needs and states of their users, but they must also be designed with these differences in mind, making human-robot interaction (HRI) a key area of research.

[ IFRR ]

Vijay Kumar, Nemirovsky Family Dean and Professor at Penn Engineering, gives an introduction to ENIAC day and David Patterson, Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley, speaks about the legacy of the ENIAC and its impact on computer architecture today. This video is comprised of lectures one and two of nine total lectures in the ENIAC Day series.

There are more interesting ENIAC videos at the link below, but we'll highlight this particular one, about the women of the ENIAC, also known as the First Programmers.

[ ENIAC Day ]


#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.

For most of us who don’t, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google’s Now, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, it’s just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the likelihood of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store after a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Google Duplex. The first call made a reservation at a restaurant; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research of over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. Already it’s been integrated in call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
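Beyond Verbal's actual models are proprietary, but the routing rule described above reduces to a simple branch once a classifier has labeled the caller. The classifier stub and all labels below are invented for illustration:

```python
# Toy stand-in for the shopper-routing rule described above.
# The voice-analysis classifier is the hard part; here it is a stub
# keyed on an invented "novelty_words" marker count.

def classify_shopper(voice_markers: dict) -> str:
    """Pretend classifier: returns 'early_adopter' or 'conservative'."""
    if voice_markers.get("novelty_words", 0) > 3:
        return "early_adopter"
    return "conservative"

def suggest(voice_markers: dict) -> str:
    """Route the sales pitch based on the inferred shopper profile."""
    if classify_shopper(voice_markers) == "early_adopter":
        return "pitch the latest and greatest"
    return "pitch the tried-and-true"

print(suggest({"novelty_words": 5}))  # pitch the latest and greatest
```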

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making them one of many helping to pioneer the field of emotionally intelligent computing.

With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 over the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay


#436190 What Is the Uncanny Valley?

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This zone of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori

The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.
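Mori's graph is qualitative, but its nonmonotonic shape is easy to caricature in code. The breakpoints and slopes below are invented purely to reproduce the dip; only the shape (rise, plunge, recovery) reflects the figure:

```python
# Toy, made-up function with the qualitative shape of Mori's curve:
# affinity rises with human likeness, plunges into the valley for
# almost-human robots, then recovers toward a healthy human.

def affinity(likeness: float) -> float:
    """likeness in [0, 1]; returns an arbitrary affinity score."""
    if likeness < 0.7:                        # industrial to humanoid robot
        return likeness
    if likeness < 0.9:                        # the uncanny valley
        return 0.7 - 4.0 * (likeness - 0.7)
    return -0.1 + 11.0 * (likeness - 0.9)     # climbing out toward human

for x in (0.3, 0.7, 0.8, 0.9, 1.0):
    print(x, round(affinity(x), 2))
```

Running this shows affinity rising to 0.7, crashing to −0.1 around 90 percent human likeness, and recovering to 1.0 at full likeness, which is the "valley" in Mori's picture.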

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

​3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. Mori never proposed the uncanny valley as a scientific concept, yet it has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”


#433400 A Model for the Future of Education, and ...

As kids worldwide head back to school, I’d like to share my thoughts on the future of education.

Bottom line, how we educate our kids needs to radically change given the massive potential of exponential tech (e.g. artificial intelligence and virtual reality).

Without question, the number one driver for education is inspiration. As such, if you have a kid age 8–18, you’ll want to get your hands on an incredibly inspirational novel written by my dear friend Ray Kurzweil called Danielle: Chronicles of a Superheroine.

Danielle offers boys and girls a role model of a young woman who uses smart technologies and super-intelligence to partner with her friends to solve some of the world’s greatest challenges. It’s perfect to inspire anyone to pursue their moonshot.

Without further ado, let’s dive into the future of educating kids, and a summary of my white paper thoughts….

Just last year, edtech (education technology) investments surpassed a record high of 9.5 billion USD—up 30 percent from the year before.

Already valued at over half a billion USD, the AI in education market is set to surpass 6 billion USD by 2024.

And we’re now seeing countless new players enter the classroom, from a Soul Machines AI teacher specializing in energy use and sustainability to smart “lab schools” with personalized curricula.

As my two boys enter 1st grade, I continue asking myself: given that most elementary schools haven’t changed in many decades (perhaps a century), what do I want my kids to learn? How do I think about elementary school during an exponential era?

This post covers five subjects related to elementary school education:

Five Issues with Today’s Elementary Schools
Five Guiding Principles for Future Education
An Elementary School Curriculum for the Future
Exponential Technologies in our Classroom
Mindsets for the 21st Century

Excuse the length of this post, but if you have kids, the details might be meaningful. If you don’t, then next week’s post will return to normal length and another fun subject.

Also, if you’d like to see my detailed education “white paper,” you can view or download it here.

Let’s dive in…

Five Issues With Today’s Elementary Schools
There are probably lots of issues with today’s traditional elementary schools, but I’ll just choose a few that bother me most.

Grading: In the traditional education system, you start at an “A,” and every time you get something wrong, your score gets lower and lower. At best it’s demotivating, and at worst it has nothing to do with the world you occupy as an adult. In the gaming world (e.g. Angry Birds), it’s just the opposite. You start with zero and every time you come up with something right, your score gets higher and higher.
Sage on the Stage: Most classrooms have a teacher up in front of class lecturing to a classroom of students, half of whom are bored and half of whom are lost. The one-teacher-fits-all model comes from an era of scarcity where great teachers and schools were rare.
Relevance: When I think back to elementary and secondary school, I realize how much of what I learned was never actually useful later in life, and how many of my critical lessons for success I had to pick up on my own (I don’t know about you, but I haven’t ever actually had to factor a polynomial in my adult life).
Imagination, Coloring inside the Lines: Probably of greatest concern to me is the factory-worker, industrial-era origin of today’s schools. Programs are so structured around rote memorization that they squash the originality out of most children. I’m reminded that “the day before something is truly a breakthrough, it’s a crazy idea.” Where do we pursue crazy ideas in our schools? Where do we foster imagination?
Boring: If learning in school is a chore, boring, or emotionless, then the most important driver of human learning, passion, is disengaged. Having our children memorize facts and figures, sit passively in class, and take mundane standardized tests completely defeats the purpose.

An average of 7,200 students drop out of high school each day, totaling 1.3 million each year. This means only 69 percent of students who start high school finish four years later. And over 50 percent of these high school dropouts name boredom as the number one reason they left.
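As a sanity check, the daily and annual figures above are mutually consistent if you assume a roughly 180-day US school year (the 180-day figure is my assumption, not stated in the statistics):

```python
dropouts_per_day = 7_200
school_days_per_year = 180  # assumption: a typical US school year

annual_dropouts = dropouts_per_day * school_days_per_year
print(f"{annual_dropouts:,}")  # 1,296,000 -- roughly the 1.3 million cited
```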

Five Guiding Principles for Future Education
I imagine a relatively near-term future in which robotics and artificial intelligence will allow any of us, from ages 8 to 108, to easily and quickly find answers, create products, or accomplish tasks, all simply by expressing our desires.

From ‘mind to manufactured in moments.’ In short, we’ll be able to do and create almost whatever we want.

In this future, what attributes will be most critical for our children to learn to become successful in their adult lives? What’s most important for educating our children today?

For me it’s about passion, curiosity, imagination, critical thinking, and grit.

Passion: You’d be amazed at how many people don’t have a mission in life… A calling… something to jolt them out of bed every morning. The most valuable resource for humanity is the persistent and passionate human mind, so creating a future of passionate kids is so very important. For my 7-year-old boys, I want to support them in finding their passion or purpose… something that is uniquely theirs, in the same way that the Apollo program and Star Trek drove my early love for all things space, and that passion drove me to learn and do.
Curiosity: Curiosity is innate in kids, yet something most adults lose over the course of their lives. Why? In a world of Google, robots, and AI, raising a kid who is constantly asking questions and running “what if” experiments can be extremely valuable. In an age of machine learning, massive data, and a trillion sensors, it is the quality of your questions that will be most important.
Imagination: Entrepreneurs and visionaries imagine the world (and the future) they want to live in, and then they create it. Kids happen to be some of the most imaginative humans around… it’s critical that they know how important and liberating imagination can be.
Critical Thinking: In a world flooded with often-conflicting ideas, baseless claims, misleading headlines, negative news, and misinformation, learning the skill of critical thinking helps find the signal in the noise. This principle is perhaps the most difficult to teach kids.
Grit/Persistence: Grit is defined as “passion and perseverance in pursuit of long-term goals,” and it has recently been widely acknowledged as one of the most important predictors of and contributors to success.

Teaching your kids not to give up, to keep trying, and to keep trying new ideas for something that they are truly passionate about achieving is extremely critical. Much of my personal success has come from such stubbornness. I joke that both XPRIZE and the Zero Gravity Corporation were “overnight successes after 10 years of hard work.”

So given those five basic principles, what would an elementary school curriculum look like? Let’s take a look…

An Elementary School Curriculum for the Future
Over the last 30 years, I’ve had the pleasure of starting two universities, International Space University (1987) and Singularity University (2007). My favorite part of co-founding both institutions was designing and implementing the curriculum. Along those lines, the following is my first shot at the type of curriculum I’d love my own boys to be learning.

I’d love your thoughts, I’ll be looking for them here: https://www.surveymonkey.com/r/DDRWZ8R

For the purpose of illustration, I’ll speak about ‘courses’ or ‘modules,’ but in reality these are just elements that would ultimately be woven together throughout the course of K-6 education.

Module 1: Storytelling/Communications

When I think about the skill that has served me best in life, it’s been my ability to present my ideas in the most compelling fashion possible, to get others on board, and to support the birth and growth of an idea in an innovative direction. In my adult life, as an entrepreneur and a CEO, it’s been my ability to communicate clearly and tell compelling stories that has allowed me to create the future. I don’t think this lesson can start too early in life. So imagine a module, year after year, where our kids learn the art and practice of formulating and pitching their ideas. The best of oration and storytelling. Perhaps children in this class would watch TED presentations, or maybe they’d put together their own TEDx for kids. Ultimately, it’s about practice and getting comfortable with putting yourself and your ideas out there and overcoming any fears of public speaking.

Module 2: Passions

A modern school should help our children find and explore their passion(s). Passion is the greatest gift of self-discovery. It is a source of interest and excitement, and is unique to each child.

The key to finding passion is exposure: allowing kids to experience as many adventures, careers, and passionate adults as possible. Historically, this was limited by the realities of geography and cost, and typically meant having local moms and dads present in class about their careers. “Hi, I’m Alan, Billy’s dad, and I’m an accountant. Accountants are people who…”

But in a world of YouTube and virtual reality, the ability for our children to explore 500 different possible careers or passions during their K-6 education becomes not only possible but compelling. I imagine a module where children share their newest passion each month, sharing videos (or VR experiences) and explaining what they love and what they’ve learned.

Module 3: Curiosity & Experimentation

Einstein famously said, “I have no special talent. I am only passionately curious.” Curiosity is innate in children, and often lost later in life. Arguably, curiosity is responsible for all major scientific and technological advances: it’s the desire of an individual to know the truth.

Coupled with curiosity is the process of experimentation and discovery: asking questions, creating and testing a hypothesis, and repeating experiments until the truth is found. As I’ve studied the most successful entrepreneurs and entrepreneurial companies, from Google and Amazon to Uber, their success is significantly due to their relentless use of experimentation to define their products and services.

Here I imagine a module which instills in children the importance of curiosity and gives them permission to say, “I don’t know, let’s find out.”

Further, I imagine a monthly module that teaches children how to design and execute valid and meaningful experiments. Imagine children who learn the skill of asking a question, proposing a hypothesis, designing an experiment, gathering the data, and then reaching a conclusion.
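That question-hypothesis-experiment-conclusion loop can even be shown to kids in a few lines of code. Here is a minimal sketch (the coin and its fairness threshold are illustrative choices, not part of any curriculum):

```python
import random

# Question: is this coin fair?
# Hypothesis: it lands heads about half the time.
random.seed(0)            # fixed seed so the "experiment" is repeatable
p_heads = 0.5             # assumption: the simulated coin really is fair

# Experiment: flip the coin 1,000 times and gather the data.
flips = ["H" if random.random() < p_heads else "T" for _ in range(1000)]
observed = flips.count("H") / len(flips)

# Conclusion: compare the observation with the hypothesis.
print(f"Observed heads rate: {observed:.2f}")
if abs(observed - 0.5) < 0.05:
    print("Consistent with a fair coin")
else:
    print("Worth investigating further")
```

The point isn’t the statistics; it’s that each step of the scientific method maps onto a line a child can see and change.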

Module 4: Persistence/Grit

Doing anything big, bold, and significant in life is hard work. You can’t just give up when the going gets rough. The mindset of persistence, of grit, is a learned behavior I believe can be taught at an early age, especially when it’s tied to pursuing a child’s passion.

I imagine a curriculum that, each week, studies the career of a great entrepreneur and highlights their story of persistence. It would highlight the individuals and companies that stuck with it, iterated, and ultimately succeeded.

Further, I imagine a module that combines persistence and experimentation in gameplay, such as that found in Dean Kamen’s FIRST LEGO league, where 4th graders (and up) research a real-world problem such as food safety, recycling, energy, and so on, and are challenged to develop a solution. They also must design, build, and program a robot using LEGO MINDSTORMS®, then compete on a tabletop playing field.

Module 5: Technology Exposure

In a world of rapidly accelerating technology, understanding how technologies work, what they do, and their potential for benefiting society is, in my humble opinion, critical to a child’s future. Technology and coding (more on this below) are the new “lingua franca” of tomorrow.

In this module, I imagine teaching kids, in an age-appropriate way, through play and demonstration: giving them an overview of exponential technologies such as computation, sensors, networks, artificial intelligence, digital manufacturing, genetic engineering, augmented/virtual reality, and robotics, to name a few. This module is not about making a child an expert in any technology; it’s about giving them the language of these new tools and a conceptual overview of how they might use such a technology in the future. The goal here is to get them excited, give them demonstrations that make the concepts stick, and then let their imaginations run.

Module 6: Empathy

Empathy, defined as “the ability to understand and share the feelings of another,” has been recognized as one of the most critical skills for our children today. And while much has been written about instilling empathy at home and in school, today’s new tools can accelerate the process.

Virtual reality isn’t just about video games anymore. Artists, activists, and journalists now see the technology’s potential to be an empathy engine, one that can shine spotlights on everything from the Ebola epidemic to what it’s like to live in Gaza. And Jeremy Bailenson has been at the vanguard of investigating VR’s power for good.

For more than a decade, Bailenson’s lab at Stanford has been studying how VR can make us better people. Through the power of VR, volunteers at the lab have felt what it is like to be Superman (to see if it makes them more helpful), a cow (to reduce meat consumption), and even a coral (to learn about ocean acidification).

Silly as they might seem, these sorts of VR scenarios could be more effective than the traditional public service ad at changing behavior. Afterwards, participants waste less paper. They save more money for retirement. They’re nicer to the people around them. And this could have consequences for how we teach and train everyone from cliquey teenagers to high court judges.

Module 7: Ethics/Moral Dilemmas

Related to empathy, and equally important, is the goal of infusing kids with a moral compass. Over a year ago, I toured a special school created by Elon Musk (the Ad Astra school) for his five boys (age 9 to 14). One element that is persistent in that small school of under 40 kids is the conversation about ethics and morals, a conversation manifested by debating real-world scenarios that our kids may one day face.

Here’s an example of the sort of gameplay/roleplay that I heard about at Ad Astra that might be implemented in a module on morals and ethics. Imagine a small town on a lake, in which the majority of the town is employed by a single factory. But that factory has been polluting the lake and killing all the life in it. What do you do? Shutting down the factory means everyone loses their jobs. On the other hand, keeping the factory open means the lake and everything living in it is destroyed. This kind of regular and routine conversation/gameplay allows the children to see the world in a critically important fashion.

Module 8: The 3R Basics (Reading, wRiting & aRithmetic)

There’s no question that young children entering kindergarten need the basics of reading, writing, and math. The only question is what’s the best way for them to get it? We all grew up in the classic mode of a teacher at the chalkboard, books, and homework at night. But I would argue that such teaching approaches are long outdated, now replaced with apps, gameplay, and the concept of the flipped classroom.

Pioneered by high school teachers Jonathan Bergmann and Aaron Sams in 2007, the flipped classroom reverses the sequence of events from that of the traditional classroom.

Students view lecture materials, usually in the form of video lectures, as homework prior to coming to class. In-class time is reserved for activities such as interactive discussions or collaborative work, all performed under the guidance of the teacher.

The benefits are clear:

Students can consume lectures at their own pace, viewing the video again and again until they get the concept, or fast-forwarding if the information is obvious.
The teacher is present while students apply new knowledge. Moving homework into class time gives teachers insight into which concepts, if any, their students are struggling with, and helps them adjust the class accordingly.
The flipped classroom produces tangible results: 71 percent of teachers who flipped their classes noticed improved grades, and 80 percent reported improved student attitudes as a result.

Module 9: Creative Expression & Improvisation

Every single one of us is creative. It’s human nature to be creative… the thing is that we each might have different ways of expressing our creativity.

We must encourage kids to discover and to develop their creative outlets early. In this module, imagine showing kids the many different ways creativity is expressed, from art to engineering to music to math, and then guiding them as they choose the area (or areas) they are most interested in. Critically, teachers (or parents) can then develop unique lessons for each child based on their interests, thanks to open education resources like YouTube and the Khan Academy. If my child is interested in painting and robots, a teacher or AI could scour the web and put together a custom lesson set from videos/articles where the best painters and roboticists in the world share their skills.

Adapting to change is critical for success, especially in our constantly changing world today. Improvisation is a skill that can be learned, and we need to be teaching it early.

In most collegiate “improv” classes, the core of great improvisation is the “Yes, and…” mindset. When acting out a scene, one actor might introduce a new character or idea, completely changing the context of the scene. It’s critical that the other actors in the scene say “Yes, and…,” accepting the new reality and then adding something new of their own.

Imagine playing similar role-play games in elementary schools, where a teacher gives the students a scene/context and constantly changes variables, forcing them to adapt and play.

Module 10: Coding

Computer science opens more doors for students than any other discipline in today’s world. Learning even the basics will help students in virtually any career, from architecture to zoology.

Coding is an important tool for computer science, in the way that arithmetic is a tool for doing mathematics and words are a tool for English. Coding creates software, but computer science is a broad field encompassing deep concepts that go well beyond coding.

Every 21st century student should also have a chance to learn about algorithms, how to make an app, or how the internet works. Computational thinking allows preschoolers to grasp concepts like algorithms, recursion and heuristics. Even if they don’t understand the terms, they’ll learn the basic concepts.

There are more than 500,000 open jobs in computing right now, representing the number one source of new wages in the US, and these jobs are projected to grow at twice the rate of all other jobs.

Coding is fun! Beyond the practical reasons for learning how to code, there’s the fact that creating a game or animation can be really fun for kids.
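As a concrete taste of the “recursion” concept mentioned above, a first lesson might look something like this sketch in Python (the example itself is mine, purely illustrative):

```python
def countdown(n):
    """Say each number, then let a smaller countdown finish the job."""
    if n == 0:            # base case: nothing left to count
        print("Liftoff!")
    else:
        print(n)
        countdown(n - 1)  # recursion: the same idea, one step smaller

countdown(3)  # prints 3, 2, 1, Liftoff!
```

Even without the vocabulary, a child running this sees the essence of recursion: a big problem solved by handing a slightly smaller version of itself to the same few lines of code.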

Module 11: Entrepreneurship & Sales

At its core, entrepreneurship is about identifying a problem (an opportunity), developing a vision on how to solve it, and working with a team to turn that vision into reality. I mentioned Elon’s school, Ad Astra: here, again, entrepreneurship is a core discipline where students create and actually sell products and services to each other and the school community.

You could recreate this basic exercise with a group of kids in lots of fun ways to teach them the basic lessons of entrepreneurship.

Related to entrepreneurship is sales. In my opinion, we need to be teaching sales to every child at an early age. Being able to “sell” an idea (again related to storytelling) has been a critical skill in my career, and it is a competency that many people simply never learned.

The lemonade stand has been a classic, though somewhat meager, lesson in sales from past generations, where a child sits on a street corner and tries to sell homemade lemonade for $0.50 to people passing by. I’d suggest we step the game up and take a more active approach in gamifying sales, and maybe having the classroom create a Kickstarter, Indiegogo or GoFundMe campaign. The experience of creating a product or service and successfully selling it will create an indelible memory and give students the tools to change the world.

Module 12: Language

A little over a year ago, I spent a week in China meeting with parents whose focus on kids’ education is extraordinary. One of the areas I found fascinating is how some of the most advanced parents are teaching their kids new languages: through games. On the tablet, the kids are allowed to play games, but only in French. A child’s desire to win fully engages them and drives their learning rapidly.

Beyond games, there’s virtual reality. We know that full immersion is what it takes to become fluent (at least later in life). A semester abroad in France or Italy, and you’ve got a great handle on the language and the culture. But what about for an eight-year-old?

Imagine a module where for an hour each day, the children spend their time walking around Italy in a VR world, hanging out with AI-driven game characters who teach them, engage them, and share the culture and the language in the most personalized and compelling fashion possible.

Exponential Technologies for Our Classrooms
If you’ve attended Abundance 360 or Singularity University, or followed my blogs, you’ll probably agree with me that the way our children will learn is going to fundamentally transform over the next decade.

Here’s an overview of the top five technologies that will reshape the future of education:

Tech 1: Virtual Reality (VR) can make learning truly immersive. Research has shown that we remember 20 percent of what we hear, 30 percent of what we see, and up to 90 percent of what we do or simulate. Virtual reality excels at that last scenario. VR enables students to simulate flying through the bloodstream while learning about the different cells they encounter, or to travel to Mars to inspect the surface for life.

To make this a reality, Google Cardboard just launched its Pioneer Expeditions product. Under this program, thousands of schools around the world have gotten a kit containing everything a teacher needs to take his or her class on a virtual trip. While data on VR use in K-12 schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies (including zSpace, Alchemy VR and Immersive VR Education) solely dedicated to providing schools with packaged education curriculum and content.

Add to VR a related technology called augmented reality (AR), and experiential education really comes alive. Imagine wearing an AR headset that is able to superimpose educational lessons on top of real-world experiences. Interested in botany? As you walk through a garden, the AR headset superimposes the name and details of every plant you see.

Tech 2: 3D Printing is allowing students to bring their ideas to life. Never mind the computer on every desktop (or a tablet for every student); that’s a given. In the near future, teachers and students will want or have a 3D printer on the desk to help them learn core science, technology, engineering, and mathematics (STEM) principles. Bre Pettis of MakerBot Industries, in a grand but practical vision, sees a 3D printer on every school desk in America. “Imagine if you had a 3D printer instead of a LEGO set when you were a kid; what would life be like now?” asks Mr. Pettis. You could print your own mini-figures and your own blocks, and you could iterate on new designs as quickly as your imagination would allow. MakerBots are now in over 5,000 K-12 schools across the US.

Taking this one step further, you could imagine having a 3D file for most entries in Wikipedia, allowing you to print out and study an object you can only read about or visualize in VR.

Tech 3: Sensors & Networks. An explosion of sensors and networks is going to connect everyone at gigabit speeds, making access to rich video available at all times. At the same time, sensors continue to shrink in size and power consumption, becoming embedded in everything. One benefit will be the connection of sensor data with machine learning and AI (below), such that a child’s drifting attention, or confusion, can be easily measured and communicated. The result would be presenting the information through an alternate modality or at a different speed.

Tech 4: Machine Learning is making learning adaptive and personalized. No two students are identical—they have different modes of learning (by reading, seeing, hearing, doing), come from different educational backgrounds, and have different intellectual capabilities and attention spans. Advances in machine learning and the surging adaptive learning movement are seeking to solve this problem. Companies like Knewton and Dreambox have over 15 million students on their respective adaptive learning platforms. Soon, every education application will be adaptive, learning how to personalize the lesson for a specific student. There will be adaptive quizzing apps, flashcard apps, textbook apps, simulation apps and many more.
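Knewton’s and Dreambox’s actual algorithms are proprietary and far more sophisticated; as a toy illustration of the adaptive idea only, a tutor might raise or lower question difficulty based on a student’s most recent answers (the rule and its parameters below are my invention):

```python
def next_difficulty(current, recent_results, step=1):
    """Toy adaptive rule: raise difficulty after a streak of correct
    answers, lower it after a streak of misses, otherwise hold steady."""
    if all(recent_results):         # student is cruising: make it harder
        return current + step
    if not any(recent_results):     # student is stuck: make it easier
        return max(1, current - step)
    return current                  # mixed results: stay at this level

print(next_difficulty(5, [True, True, True]))     # 6
print(next_difficulty(5, [False, False, False]))  # 4
print(next_difficulty(5, [True, False, True]))    # 5
```

Real platforms replace this crude streak rule with statistical models of each learner, but the core loop is the same: observe responses, update an estimate of the student, and pick the next item accordingly.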

Tech 5: Artificial Intelligence or “An AI Teaching Companion.” Neal Stephenson’s book The Diamond Age presents a fascinating piece of educational technology called “A Young Lady’s Illustrated Primer.”

As described by Beat Schwendimann, “The primer is an interactive book that can answer a learner’s questions (spoken in natural language), teach through allegories that incorporate elements of the learner’s environment, and presents contextual just-in-time information.

“The primer includes sensors that monitor the learner’s actions and provide feedback. The learner is in a cognitive apprenticeship with the book: The primer models a certain skill (through allegorical fairy tale characters), which the learner then imitates in real life.

“The primer follows a learning progression with increasingly more complex tasks. The educational goals of the primer are humanist: To support the learner to become a strong and independently thinking person.”

The primer, an individualized AI teaching companion, is the result of technological convergence, and is beautifully described by YouTuber CGP Grey in his video Digital Aristotle: Thoughts on the Future of Education.

Your AI companion will have unlimited access to information on the cloud and will deliver it at the optimal speed to each student in an engaging, fun way. This AI will demonetize and democratize education, be available to everyone for free (just like Google), and offer the best education to the wealthiest and poorest children on the planet equally.

This AI companion is not a tutor who spouts facts, figures and answers, but a player on the side of the student, there to help him or her learn, and in so doing, learn how to learn better. The AI is always alert, watching for signs of frustration and boredom that may precede quitting, for signs of curiosity or interest that tend to indicate active exploration, and for signs of enjoyment and mastery, which might indicate a successful learning experience.

Ultimately, we’re heading towards a vastly more educated world. We are truly living during the most exciting time to be alive.

Mindsets for the 21st Century
Finally, it’s important for me to discuss mindsets. How we think about the future colors how we learn and what we do. I’ve written extensively about the importance of an abundance and exponential mindset for entrepreneurs and CEOs. I also think that attention to mindset in our elementary schools, when a child is shaping the mental “operating system” for the rest of their life, is even more important.

As such, I would recommend that a school adopt a set of principles that teach and promote a number of mindsets in the fabric of their programs.

Many “mindsets” are important to promote. Here are a couple to consider:

Nurturing Optimism & An Abundance Mindset:
We live in a competitive world, and kids experience a significant amount of pressure to perform. When they fall short, they feel deflated. We all fail at times; that’s part of life. If we want to raise “can-do” kids who can work through failure and come out stronger for it, it’s wise to nurture optimism. Optimistic kids are more willing to take healthy risks, are better problem-solvers, and experience positive relationships. You can nurture optimism in your school by starting each day by focusing on gratitude (what each child is grateful for), or a “positive focus” in which each student takes 30 seconds to talk about what they are most excited about, or what recent event was positively impactful to them. (NOTE: I start every meeting inside my Strike Force team with a positive focus.)

Finally, helping students understand (through data and graphs) that the world is in fact getting better (see my first book, Abundance: The Future Is Better Than You Think) will help them counter the continuous stream of negative news in our media.

When kids feel confident in their abilities and excited about the world, they are willing to work harder and be more creative.

Tolerance for Failure:
Tolerating failure is a difficult lesson to learn and a difficult lesson to teach. But it is critically important to succeeding in life.

Astro Teller, who runs Google’s innovation branch “X,” talks a lot about encouraging failure. At X, they regularly try to “kill” their ideas. If they are successful in killing an idea, and thus “failing,” they save lots of time, money, and resources. The ideas they can’t kill survive and develop into billion-dollar businesses. The key is that each time an idea is killed, Astro rewards the team, literally, with cash bonuses. Their failure is celebrated, and they become heroes.

This should be reproduced in the classroom: kids should try to be critical of their best ideas (learn critical thinking), then they should be celebrated for ‘successfully failing,’ perhaps with cake, balloons, confetti, and lots of Silly String.

Join Me & Get Involved!
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: sakkarin sapu / Shutterstock.com
