#432051 What Roboticists Are Learning From Early ...
You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.
Hanson Robotics, in other words, with their proprietary brand of life-like humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.
The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.
A quick combinatorial glance at the 30+ motors suggests that there are millions of possible combinations; researchers identify 62 that they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the more simplistic chat engine the robots use, although it’s the latter that inflates viewers’ expectations with a few pre-programmed questions in an interview.
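A quick sanity check on that claim (the binary simplification below is my own, not a figure from Hanson Robotics): even if each of 30 motors could only be fully on or fully off, the count of distinct face states would already exceed a billion.

```python
# Back-of-the-envelope estimate: treat each of 30 face motors as a
# simple on/off actuator. Distinct combinations = 2^30.
MOTORS = 30
combinations = 2 ** MOTORS
print(combinations)  # 1073741824, i.e. over a billion
```

Real motors have continuous positions, so the true space is far larger; the point is only that “millions” is, if anything, conservative.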
Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.
Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of concern and revulsion people react with when things appear to be creepily human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.
The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).
The idea behind this realm of psychological horror is fairly simple, cognitively speaking.
We know how to categorize things that are unambiguously human or non-human. This is true even if they’re designed to interact with us. Consider the popularity of Aibo, Jibo, or even some robots that don’t try to resemble humans. Something that resembles a human, but isn’t quite right, is bound to evoke a fear response in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.
You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper “Upending the Uncanny Valley,” he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.
Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.
The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.
Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.
Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.
Ishiguro has taken this imitation of human forms to extreme levels. Not only did he videotape and log the physical movements people make, but some of his robots are based on replicas of real people; the Repliee series began with a ‘replicant’ of his daughter, which involved making a rubber replica—a silicone cast—of her entire body. Later experiments focused on creating Geminoid, a replica of Ishiguro himself.
As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.
We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.
As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if you want to avoid the valley.
Image Credit: Anton Gvozdikov / Shutterstock.com
#431592 Reactive Content Will Get to Know You ...
The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their sitters. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.
For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback from sensors that track breathing, heartbeat, and eye movement. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
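A minimal sketch of that first example, assuming a hypothetical sensor that reports heart rate in beats per minute (the function name and the mapping are my own illustration, not Future Lighthouse’s implementation):

```python
# Hypothetical sketch: convert a live heart-rate reading (beats per
# minute) into a pulse interval that an effect engine could use to make
# lights or soundscape elements throb in time with the user's heartbeat.

def pulse_interval_seconds(heart_rate_bpm: float) -> float:
    """Seconds between environmental pulses, synced to the heartbeat."""
    if heart_rate_bpm <= 0:
        raise ValueError("heart rate must be positive")
    return 60.0 / heart_rate_bpm

# A resting heart rate of 60 bpm pulses once per second;
# 120 bpm pulses twice per second.
print(pulse_interval_seconds(60))   # 1.0
print(pulse_interval_seconds(120))  # 0.5
```

In a real pipeline this value would be resampled continuously as new sensor readings arrive, so the environment tracks the viewer rather than a snapshot.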
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
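One way such an “arousal parameter” loop might look, sketched under the assumption of a 0-to-1 arousal estimate derived from biometric feedback (the names and the adjustment rule are illustrative, not Future Lighthouse’s actual algorithm):

```python
# Hypothetical control loop: the viewer picks a target "fear" level,
# and the experience's scare intensity is nudged up or down based on an
# arousal estimate inferred from biometric sensors. All values 0..1.

def adjust_intensity(intensity: float, target: float, measured: float,
                     gain: float = 0.1) -> float:
    """Nudge scare intensity so measured arousal tracks the target.

    If the viewer is calmer than the target, intensity rises; if they
    are more aroused than the target, it falls.
    """
    intensity += gain * (target - measured)
    return min(1.0, max(0.0, intensity))  # clamp to the valid range

# Viewer wants a moderately scary experience (0.6), but biometrics
# suggest low arousal (0.2): intensity is raised slightly.
print(adjust_intensity(0.5, target=0.6, measured=0.2))
```

The small `gain` keeps adjustments gradual, so the modulation stays below the viewer’s threshold of notice rather than lurching between levels.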
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines and multi-narratives, an approach that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 70s, which enjoyed great success in its literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button”, “vote on X, Y, Z”) the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com
#431412 3 Dangerous Ideas From Ray Kurzweil
Recently, I interviewed my friend Ray Kurzweil at the Googleplex for a 90-minute webinar on disruptive and dangerous ideas, a prelude to my fireside chat with Ray at Abundance 360 this January.
Ray is my friend and cofounder and chancellor of Singularity University. He is also an XPRIZE trustee, a director of engineering at Google, and one of the best predictors of our exponential future.
It’s my pleasure to share with you three compelling ideas that came from our conversation.
1. The nation-state will soon be irrelevant.
Historically, we humans don’t like change. We like waking up in the morning and knowing that the world is the same as the night before.
That’s one reason why government institutions exist: to stabilize society.
But how will this change in 20 or 30 years? What role will stabilizing institutions play in a world of continuous, accelerating change?
“Institutions stick around, but they change their role in our lives,” Ray explained. “They already have. The nation-state is not as profound as it was. Religion used to direct every aspect of your life, minute to minute. It’s still important in some ways, but it’s much less important, much less pervasive. [It] plays a much smaller role in most people’s lives than it did, and the same is true for governments.”
Ray continues: “We are fantastically interconnected already. Nation-states are not islands anymore. So we’re already much more of a global community. The generation growing up today really feels like world citizens much more than ever before, because they’re talking to people all over the world, and it’s not a novelty.”
I’ve previously shared my belief that national borders have become extremely porous, with ideas, people, capital, and technology rapidly flowing between nations. In decades past, your cultural identity was tied to your birthplace. In the decades ahead, your identity is more a function of many other external factors. If you love space, you’ll be connected with fellow space-cadets around the globe more than you’ll be tied to someone born next door.
2. We’ll hit longevity escape velocity before we realize we’ve hit it.
Ray and I share a passion for extending the healthy human lifespan.
I frequently discuss Ray’s concept of “longevity escape velocity”—the point at which, for every year that you’re alive, science is able to extend your life for more than a year.
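A toy numeric model makes the threshold concrete (the simplification is mine, not Ray’s math): if science adds a fixed number of years of remaining life expectancy per calendar year, expectancy shrinks when that gain is below one year and grows without bound once it exceeds one.

```python
# Toy model of "longevity escape velocity": each calendar year, one
# year of life is spent, while science adds `annual_gain` years of
# remaining life expectancy.

def remaining_years(initial: float, annual_gain: float, years: int) -> float:
    """Remaining life expectancy after `years` calendar years."""
    remaining = initial
    for _ in range(years):
        remaining += annual_gain - 1  # one year passes, science adds gain
    return remaining

# Below escape velocity (gain < 1 yr/yr), expectancy runs down;
# above it (gain > 1 yr/yr), it grows instead.
print(remaining_years(30, 0.5, 10))  # 25.0
print(remaining_years(30, 1.5, 10))  # 35.0
```

The crossover at a gain of exactly one year per year is the “escape velocity” point Ray describes.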
Scientists are continually extending the human lifespan, helping us cure heart disease, cancer, and eventually, neurodegenerative disease. This will keep accelerating as technology improves.
During my discussion with Ray, I asked him when he expects we’ll reach “escape velocity.”
His answer? “I predict it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”
“At that point, biotechnology is going to have taken over medicine,” Ray added. “The next decade is going to be a profound revolution.”
From there, Ray predicts that nanorobots will “basically finish the job of the immune system,” with the ability to seek and destroy cancerous cells and repair damaged organs.
As we head into this sci-fi-like future, your most important job for the next 15 years is to stay alive. “Wear your seatbelt until we get the self-driving cars going,” Ray jokes.
The implications for society will be profound. While the scarcity-minded in government will react by saying, “Social Security will be destroyed,” the more abundance-minded will realize that extending a person’s productive earning lifespan from 65 to 75 or 85 years old would be a massive boon to GDP.
3. Technology will help us define and actualize human freedoms.
The third dangerous idea from my conversation with Ray is about how technology will enhance our humanity, not detract from it.
You may have heard critics complain that technology is making us less human and increasingly disconnected.
Ray and I share a slightly different viewpoint: that technology enables us to tap into the very essence of what it means to be human.
“I don’t think humans even have to be biological,” explained Ray. “I think humans are the species that changes who we are.”
Ray argues that this began when humans developed the earliest technologies—fire and stone tools. These tools gave people new capabilities and became extensions of our physical bodies.
At its base level, technology is the means by which we change our environment and change ourselves. This will continue, even as the technologies themselves evolve.
“People say, ‘Well, do I really want to become part machine?’ You’re not even going to notice it,” Ray says, “because it’s going to be a sensible thing to do at each point.”
Today, we take medicine to fight disease and maintain good health and would likely consider it irresponsible if someone refused to take a proven, life-saving medicine.
In the future, this will still happen—except the medicine might contain nanobots that target disease, or might also improve your memory so you can recall things more easily.
And because this new medicine works so well for so many, public perception will change. Eventually, it will become the norm… as ubiquitous as penicillin and ibuprofen are today.
In this way, ingesting nanorobots, uploading your brain to the cloud, and using devices like smart contact lenses can help humans become, well, better at being human.
Ray sums it up: “We are the species that changes who we are to become smarter and more profound, more beautiful, more creative, more musical, funnier, sexier.”
Speaking of sexuality and beauty, Ray also sees technology expanding these concepts. “In virtual reality, you can be someone else. Right now, actually changing your gender in real reality is a pretty significant, profound process, but you could do it in virtual reality much more easily and you can be someone else. A couple could become each other and discover their relationship from the other’s perspective.”
In the 2030s, when Ray predicts sensor-laden nanorobots will be able to go inside the nervous system, virtual or augmented reality will become exceptionally realistic, enabling us to “be someone else and have other kinds of experiences.”
Why Dangerous Ideas Matter
Why is it so important to discuss dangerous ideas?
I often say that the day before something is a breakthrough, it’s a crazy idea.
By consuming and considering a steady diet of “crazy ideas,” you train yourself to think bigger and bolder, a critical requirement for making impact.
As humans, we are linear and scarcity-minded.
As entrepreneurs, we must think exponentially and abundantly.
At the end of the day, the formula for a true breakthrough is “having a crazy idea” you believe in, plus the passion to pursue that idea against all naysayers and obstacles.
Image Credit: Tithi Luadthong / Shutterstock.com