Tag Archives: humanoids
#429641 Researchers adapt a DIY robotics kit to ...
Elementary and secondary school students who later want to become scientists and engineers often get hands-on inspiration by using off-the-shelf kits to build and program robots. But so far it's been difficult to create robotics projects that foster interest in the "wet" sciences – biology, chemistry and medicine – so called because experiments in these fields often involve fluids. Continue reading →
#429640 People afraid of robots much more likely ...
"Technophobes"—people who fear robots, artificial intelligence and new technology that they don't understand—are much more likely to be afraid of losing their jobs to technology and to suffer anxiety-related mental health issues, a Baylor University study found. Continue reading →
#429638 AI Investors Rack Up Massive Returns in ...
An international team of researchers showed that artificial intelligence can make a killing on the stock market, and some real-world hedge funds are already trying it.
#429637 The Body Is the Missing Link for Truly ...
It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.
I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we're nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.
Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions — such as whether your average cat is as big as a horse, or likely to chase a mouse.
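To make that approach concrete (this sketch is not from the original article), here is roughly what such symbolic encoding looks like in Python, using an invented toy knowledge base of 'is a' facts and a query helper that chains through them:

```python
# A toy symbolic knowledge base: facts of the form (entity, "is_a", category).
# Purely illustrative; the entities and the query helper are made up.
facts = {
    ("cat", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("mouse", "is_a", "mammal"),
}

def is_a(entity, category):
    """True if 'entity is_a category' follows from the facts, chaining
    through intermediate categories (cat -> mammal -> animal)."""
    if (entity, "is_a", category) in facts:
        return True
    return any(is_a(mid, category) for (e, _, mid) in facts if e == entity)

print(is_a("cat", "animal"))  # True: derived by chaining two encoded facts
print(is_a("cat", "mouse"))   # False: no chain of facts supports it
```

The brittleness described next shows up immediately in a sketch like this: the system 'knows' only what has been explicitly encoded, and an ambiguous or missing fact simply stops the chain.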
This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.
In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
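As a purely illustrative sketch (not part of the original piece), the same 'cat' problem looks very different in the machine-learning style: no rules are written down, and the pattern is inferred from labelled examples. The tiny four-'pixel' images and labels below are invented for illustration:

```python
# Bottom-up learning: a classifier extracts the pattern from labelled examples
# rather than from hand-coded rules. The data here is invented and trivially small.
from sklearn.linear_model import LogisticRegression

# Each "image" is four pixel intensities; label 1 = "cat", 0 = "not cat".
X_train = [
    [0.9, 0.8, 0.1, 0.2],  # cat-like
    [0.8, 0.9, 0.2, 0.1],  # cat-like
    [0.1, 0.2, 0.9, 0.8],  # not cat
    [0.2, 0.1, 0.8, 0.9],  # not cat
]
y_train = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # statistical pattern extraction, no explicit rules

print(model.predict([[0.85, 0.75, 0.15, 0.25]]))  # likely [1]: resembles the "cat" examples
```

Real systems do the same thing at vastly larger scale, with millions of images and millions of learned parameters.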
Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet — all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.
The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43 per cent of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.
Now, it’s a bit of a leap to go from smart, self-organising cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data — so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
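For readers unfamiliar with the idea, a rough sketch of inductive transfer as it is commonly practised today (the model choice, layer sizes and class count below are placeholders, not anything from the article) is to reuse a network pretrained on one large task as a frozen feature extractor for a new one:

```python
# Sketch of inductive transfer ("transfer learning"): features learned on one
# large task (ImageNet) bootstrap a new task from far fewer examples.
# The 10-class head and the choice of MobileNetV2 are arbitrary placeholders.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the previously learned "knowledge" fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_task_images, new_task_labels)  # only the new head is trained
```

Useful as this is, reusing learned features is still a long way from the rich, body-grounded world model a person brings to a new problem.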
On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence — and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Patroclus by Jacques-Louis David (1780) via Wikipedia Continue reading →
#429636 The Power of VR as an ‘Empathy ...
Virtual reality has a long history in science fiction. From The Lawnmower Man to The Matrix, the idea of VR has inspired artists and gamers alike. But it's only very recently that the technology has moved out of the lab and into people's homes.
Since the 2012 Oculus Kickstarter, VR has become a driving passion for technophiles and developers around the world. In 2016, the first consumer devices became mainstream, and now the only questions seem to be how quickly it will improve, who will adopt it, and which applications will prove the most revolutionary.
Barry Pousman is one of the field’s leading innovators and a big believer in VR’s transformative potential. Pousman began working in the VR field as a founding member of the United Nations' VR initiative and has served as an advisor to some of the industry’s heavyweights, including Oculus, HTC Vive, and IDEO Labs.
Pousman co-directed, co-produced, and shot the now-famous VR film Clouds Over Sidra, and his work has been screened at the World Economic Forum, the UN General Assembly, and the Sundance Film Festival. His company, Variable Labs, is building an immersive VR learning platform for businesses, with a special focus on corporate training.
I recently caught up with Pousman to get his take on VR’s recent past and its exciting future. In his corporate office in Oakland, California, we discussed the power of VR as an “empathy machine,” its dramatic impact on donations to aid Syrian refugees, and how his home office is already pretty close to Star Trek’s Holodeck.
I know that empathy is a big focus for Variable Labs. Could you say more about how you see immersive experiences helping people to become more empathic? What is the connection between VR and empathy?
What attracted me to the medium of VR in the first place is how incredible VR experiences can be and how much remains unknown within the field.
Although all artistic mediums can evoke empathy, VR is unlike traditional mediums (writing, theater, film). VR's sheer form factor and the isolating experience it engenders inspire focus like no other medium before it. And when we marry that with the user experience of seeing and hearing the world from another human's perspective, you get what Chris Milk calls "the empathy machine."
At Variable Labs, our end-goal is not to foster more empathy in the world, but instead to create measurable and positive behavior change for our audiences using commercial technology. We are engaging in efficacy research for our learning platform to see if and how users internalize and implement the lessons in their own lives.
You co-directed, co-produced and shot the United Nations VR documentary, Clouds Over Sidra. For those who are unfamiliar, could you say something about the film? What was it like making the film? What was the advantage of using VR? And what was the overall impact for the UN?
The 360 film Clouds Over Sidra lets audiences spend a day in a refugee camp, seen through the eyes of a young Syrian girl. It was first filmed as an experiment with the United Nations and the VR company Within, but has since become a model for live-action 360 documentary and documentary journalism.
For me personally, the film was difficult to shoot because of the challenging environment at the camp. Not that it was particularly violent or unclean, but rather that the refugees there were so similar to my own friends and family at home. They were young professionals, doctors, and middle-class children, living as refugees with almost no opportunities to shape their own futures.
"Clouds Over Sidra is now being used by UNICEF street fundraisers and reporting a 100% increase in donations in cities across the world."
Throughout my career of making impact media, I’ve understood how important it is to get these types of stories out and into the hands of people that can really make a difference. And in measuring the actions taken by the audience of this film, it’s clear that it has had a dramatic effect on people.
When Clouds Over Sidra was screened at the last minute during a Syrian refugee donor conference, organizers were able to raise $3.8 billion, far surpassing the expected $2.3 billion for the 24-hour event. In fact, the film is now being used by UNICEF street fundraisers, who are reporting a 100% increase in donations in cities across the world.
We’ve seen a kind of rise and fall of VR over the last forty years or so. In the 1980s and 1990s, there was a lot of excitement about VR linked to books like Neuromancer (1984) and movies like Brainstorm (1983), The Lawnmower Man (1992), and of course, the Matrix trilogy (1999-2003). In your view, has VR now finally come of age?
Has VR come of age? Well-funded organizations such as NASA and the DoD have been using virtual reality for simulated learning since the late 1970s. And similar to the computing industry—which began in the DoD and then moved into consumer and personal computing—VR hardware is now finally hitting the consumer market.
This means that instead of spending millions of dollars on VR hardware, anyone can purchase something very similar for only a few hundred dollars.
Steven Spielberg's upcoming film, Ready Player One, will raise eyebrows and grow interest in and appetite for personal immersive tech. And as these themes continue to grow in mainstream media, consumers and publishers will become increasingly inspired to explore new VR formats and entirely new use cases.
Personally, I’m excited about further exploring the idea of convergent media, bridging the gap between linear storytelling and audience agency. For example, Pete Middleton's Notes on Blindness pushes the envelope in this way by involving the audience in the action. And Gabo Arora's upcoming room-scale piece, The Last Goodbye, is another example that uses "activity required" storytelling.
But in my view, VR won’t truly come of age until we can integrate artificial intelligence. Then the virtual worlds and characters will be able to respond dynamically to audience input and we can deliver more seamless human-computer interactions.
There is now a plethora of VR platforms for the mass market: Oculus, HTC Vive, Samsung Gear VR, Google Daydream and more. With the costs of the technology declining and computing capacity accelerating, where do you see VR having the most impact over the next 10-15 years?
For impact from VR, the far-and-away winner will be education.
Research from Stanford’s Virtual Human Interaction Lab, the MIT Media Lab, USC’s Institute for Creative Technologies, and many other top-tier institutions has demonstrated the efficacy of VR for learning and development. In fact, a new study from researchers in China showed remarkable improvement for high school students using VR to learn both theoretical concepts and practical skills.
"Immersive education will permeate all sectors from medicine to transportation to agriculture."
And immersive education (VR, AR and MR) will permeate all sectors from medicine to transportation to agriculture. E-commerce is going to see a huge shift as well. Amazon and Google will no doubt be creating VR shopping experiences very soon if they haven’t started already. In addition to this, autonomous cars are a perfect fit for VR and AR. Self-driving cars will create an entirely new living room for families with both individual and group VR and AR experiences for learning and entertainment.
VR breaks the square frame of traditional narratives. What does VR mean to art and storytelling?
Seeing well-made, well-thought-out VR is one of the most satisfying experiences one can have.
I look at the incredible work of Oculus Story Studio, and it’s obvious they’ve tapped into a whole new way of looking at story development for VR. And Within continues to break new ground in art and storytelling by adding new technologies while maintaining nuanced storylines, most recently through voice input in their latest work, Life of Us.
One of the best places to discover this sort of content is through Kaleidoscope, a traveling VR festival and collective of VR and AR artists, animators, filmmakers, and engineers.
There looks to be a pretty wide array of applications for VR including military training, education, gaming, advertising, entertainment, etc. What kind of projects are you currently working on?
We are excited about the enterprise training space. Imagine on your first day of work you get handed a nice VR headset instead of a stack of books and papers.
We used to think of the platform we’re building as the “Netflix of Learning” but we’ve now started exploring a Virtual Campus model. So imagine on that first day of work, you can (virtually) sit down with your new CEO in their office, meet other employees, speak with your HR manager, and fill out your new-hire forms from inside the headset using the controller.
For now, VR is limited to headsets, or head-mounted displays (HMDs). What new interfacing systems could we see in the future? When will we get the Star Trek holodeck?
There will be two form factors of VR/AR as we move forward: glasses for mobile use and rooms for higher-fidelity experiences. I just installed an HTC Vive in my home office, and it feels pretty close to the Holodeck already! The empty room turns into an art gallery, a paintball field, a deep-sea dive, and a public speaking simulator. And what we get to take out of it is an expanded viewpoint, a raised consciousness, memories and the occasional screenshot. This is just the beginning, and it’s going to change how we learn and play in profound ways.
Image Credit: Shutterstock Continue reading →