Tag Archives: exist

#431899 Darker Still: Black Mirror’s New ...

The key difference between science fiction and fantasy is that science fiction is grounded in scientific plausibility, while fantasy is not. That grounding is what makes Black Mirror both an entertaining and terrifying work of science fiction. Created by Charlie Brooker, the anthology series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.
While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.
The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.
Emerging Neurotechnology
Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.
Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. The stories are set in a world where people can upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing.
How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.
With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.
Ethical Questions
As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.
For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves whether our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daly, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, to which he escapes to torture them, while their real-world counterparts remain oblivious.
Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.
Towards the end of USS Callister, the hostages of Daly’s virtual world attempt to escape through suicide, by committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that signifies a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?
Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.
With “no privacy for him, and no agency for her,” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?
These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.
The Downside of Dystopian Sci-Fi
Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?
A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.
While we can paint negative visions for the future, though, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.
While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.
Image Credit: Billion Photos / Shutterstock.com


#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. Such a system, loosely modeled on the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case, a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
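The intuition behind evolving hyperparameters rather than hand-tuning them can be sketched in a few lines. Everything below is a hypothetical toy: the search space, the mutation scheme, and the fitness function (which stands in for actually training a network and measuring validation accuracy) are invented for illustration and are not taken from MENNDL.

```python
import random

random.seed(0)

# Hypothetical search space: a few hyperparameters per candidate network.
SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(candidate):
    # Change one hyperparameter at random.
    child = dict(candidate)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def fitness(candidate):
    # Stand-in for "train the network and measure validation accuracy".
    # Here we simply reward an arbitrary sweet spot: 2 layers, 64 units.
    return -abs(candidate["layers"] - 2) - abs(candidate["units"] - 64) / 64

def evolve(generations=30, pop_size=20, keep=5):
    pop = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:keep]  # survival of the fittest configurations
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - keep)]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

A real system evaluates each candidate by training it on GPUs, which is why the thousands of GPUs on a machine like Titan matter: the candidates can be scored in parallel.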
A Virtual Data Scientist
That’s not dissimilar to the approach of H2O.ai, a Silicon Valley startup that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
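The article doesn’t detail how Driverless AI measures a feature’s contribution, but one standard way to get that kind of feedback is permutation importance: shuffle one feature’s values and see how much the model’s accuracy drops. Below is a toy sketch with a made-up churn dataset, where “zip_code” is informative and “favorite_color” is noise; the data and the trivial model are invented for illustration.

```python
import random

random.seed(42)

# Toy dataset: each row is (features, label). The churn label follows
# zip_code 90 percent of the time; favorite_color is pure noise.
rows = []
for _ in range(500):
    zip_code = random.choice([0, 1])
    color = random.choice([0, 1])
    churn = zip_code if random.random() < 0.9 else 1 - zip_code
    rows.append(({"zip_code": zip_code, "favorite_color": color}, churn))

def model(features):
    # A trivial "trained model": predict churn from zip_code alone.
    return features["zip_code"]

def accuracy(dataset):
    return sum(model(f) == y for f, y in dataset) / len(dataset)

def permutation_importance(dataset, feature):
    # Shuffle one feature's column and measure the drop in accuracy.
    shuffled = [f[feature] for f, _ in dataset]
    random.shuffle(shuffled)
    permuted = [({**f, feature: v}, y)
                for (f, y), v in zip(dataset, shuffled)]
    return accuracy(dataset) - accuracy(permuted)

for feat in ("zip_code", "favorite_color"):
    print(feat, round(permutation_importance(rows, feat), 3))
```

Shuffling zip_code destroys most of the model’s accuracy, while shuffling favorite_color changes nothing, which is exactly the kind of signal a user would want when prioritizing features.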
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com


#431690 Oxford Study Says Alien Life Would ...

The alternative universe known as science fiction has given our culture a menagerie of alien species. From overstuffed teddy bears like Ewoks and Wookiees to terrifying nightmares such as Alien and Predator, our collective imagination of what form alien life from another world may take has been irrevocably imprinted by Hollywood.
It might all be possible, or all these bug-eyed critters might turn out to be just B-movie versions of how real extraterrestrials will appear if and when they finally make the evening news.
One thing is certain: aliens from another world will be shaped by the same evolutionary force at work here on Earth—natural selection. That’s the conclusion of a team of scientists from the University of Oxford in a study published this month in the International Journal of Astrobiology.
A complex alien that comprises a hierarchy of entities, where each lower-level collection of entities has aligned evolutionary interests. Image Credit: Helen S. Cooper/University of Oxford.
The researchers suggest that evolutionary theory—famously put forth by Charles Darwin in his seminal book On the Origin of Species 158 years ago this month—can be used to make some predictions about alien species. In particular, the team argues that extraterrestrials will undergo natural selection, because that is the only process by which organisms can adapt to their environment.
“Adaptation is what defines life,” lead author Samuel Levin tells Singularity Hub.
While it’s likely that NASA or some SpaceX-like private venture will eventually kick over a few space rocks and discover microbial life in the not-too-distant future, the sorts of aliens Levin and his colleagues are interested in describing are more complex. That’s because natural selection is at work.
A quick evolutionary theory 101 refresher: Natural selection is the process by which certain traits are favored over others in a given population. For example, take a group of brown and green beetles. It just so happens that birds prefer foraging on green beetles, allowing more brown beetles to survive and reproduce than the more delectable green ones. Eventually, if these population pressures persist, brown beetles will become the dominant type. Brown wins, green loses.
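The beetle story can be run as a toy simulation; the predation rates, population size, and offspring count below are made up purely for illustration.

```python
import random

random.seed(1)
PREDATION = {"green": 0.6, "brown": 0.2}  # birds eat green beetles more often

def generation(population):
    """One round: predation, then each survivor leaves two offspring."""
    survivors = [b for b in population if random.random() > PREDATION[b]]
    offspring = [b for b in survivors for _ in range(2)]
    # Keep the habitat's carrying capacity at 1,000 beetles.
    return random.sample(offspring, min(1000, len(offspring)))

population = ["green"] * 500 + ["brown"] * 500
for _ in range(10):
    population = generation(population)

counts = {color: population.count(color) for color in ("green", "brown")}
print(counts)
```

After ten generations brown beetles dominate the population, exactly the “brown wins, green loses” outcome described above.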
And just as human beings are the result of millions of years of adaptations—eyes and thumbs, for example—aliens will similarly be constructed from parts that were once free living but through time came together to work as one organism.
“Life has so many intricate parts, so much complexity, for that to happen (randomly),” Levin explains. “It’s too complex and too many things working together in a purposeful way for that to happen by chance, as how certain molecules come about. Instead you need a process for making it, and natural selection is that process.”
Just don’t expect ET to show up as a bipedal humanoid with a large head and almond-shaped eyes, Levin says.
“They can be built from entirely different chemicals and so visually, superficially, unfamiliar,” he explains. “They will have passed through the same evolutionary history as us. To me, that’s way cooler and more exciting than them having two legs.”
Need for Data
Seth Shostak, a lead astronomer at the SETI Institute and host of the organization’s Big Picture Science radio show, wrote that while the argument is interesting, it doesn’t answer the question of ET’s appearance.
Shostak argues that a more productive approach would invoke convergent evolution, where similar environments lead to similar adaptations, at least assuming a range of Earth-like conditions such as liquid oceans and thick atmospheres. For example, an alien species that evolved in a liquid environment would evolve a streamlined body to move through water.
“Happenstance and the specifics of the environment will produce variations on an alien species’ planet as it has on ours, and there’s really no way to predict these,” Shostak concludes. “Alas, an accurate cosmic bestiary cannot be written by the invocation of biological mechanisms alone. We need data. That requires more than simply thinking about alien life. We need to actually discover it.”
Search Is On
The search is on. On one hand, the task seems easy enough: There are at least 100 billion planets in the Milky Way alone, and at least 20 percent of those are likely to be capable of producing a biosphere. Even if the evolution of life is exceedingly rare—take a conservative estimate of 0.001 percent, or 200,000 planets, as proposed by the Oxford paper—you have to like the odds.
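A quick sanity check of that arithmetic (the fractions are the figures quoted above, not new data):

```python
milky_way_planets = 100e9      # at least 100 billion planets
habitable_fraction = 0.20      # at least 20 percent could host a biosphere
life_fraction = 0.001 / 100    # the paper's conservative 0.001 percent

planets_with_life = milky_way_planets * habitable_fraction * life_fraction
print(round(planets_with_life))  # -> 200000
```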
Of course, it’s not that easy by a billion light years.
Planet hunters can’t even agree on what signatures of life they should focus on. The idea is that where there’s smoke there’s fire. In the case of an alien world home to biological life, astrobiologists are searching for the presence of “biosignature gases,” vapors that could only be produced by alien life.
As Quanta Magazine reported, scientists do this by measuring a planet’s atmosphere against starlight. Gases in the atmosphere absorb certain frequencies of starlight, offering a clue as to what is brewing around a particular planet.
The presence of oxygen would seem to be a biological no-brainer, but there are instances where a planet can produce a false positive, meaning non-biological processes are responsible for the exoplanet’s oxygen. Scientists like Sara Seager, an astrophysicist at MIT, have argued there are plenty of examples of other types of gases produced by organisms right here on Earth that could also produce the smoking gun, er, planet.
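The “gases absorb certain frequencies” idea can be illustrated with a toy transmission spectrum. The band centers, widths, and depths below are invented placeholders, not real molecular line data.

```python
# Toy transmission spectrum: fraction of starlight blocked at each wavelength.
# (band center in microns, extra absorption depth) -- invented values.
BANDS = {"O2": (0.76, 0.02), "CH4": (1.66, 0.015)}

def transit_depth(wavelength_um, base_depth=0.01, band_width=0.05):
    """Fraction of the star's light the planet blocks at this wavelength."""
    depth = base_depth  # the opaque planet disk blocks this much everywhere
    for center, extra in BANDS.values():
        if abs(wavelength_um - center) < band_width:
            depth += extra  # the atmosphere absorbs extra light in-band
    return depth

# A gas is "detected" if the star dims more at its band than elsewhere.
flagged = [gas for gas, (center, _) in BANDS.items()
           if transit_depth(center) > transit_depth(0.50)]
print(flagged)  # -> ['O2', 'CH4']
```

As the article goes on to note, the hard part in practice is that a dip alone doesn’t prove biology; abiotic processes can carve out some of the same bands.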

Life as We Know It
Indeed, the existence of Earth-bound extremophiles—organisms that defy conventional wisdom about where life can exist, such as in the vacuum of space—offers another clue as to what kind of aliens we might eventually meet.
Lynn Rothschild, an astrobiologist and synthetic biologist in the Earth Science Division at NASA’s Ames Research Center in Silicon Valley, takes extremophiles as a baseline and then supersizes them through synthetic biology.
For example, say a bacterium is capable of surviving at 120 degrees Celsius. Rothschild’s lab might tweak the organism’s DNA to see if it could metabolize at 150 degrees. The idea, as she explains, is to expand the envelope for life without ever getting into a rocket ship.

While researchers may not always agree on the “where” and “how” and “what” of the search for extraterrestrial life, most do share one belief: Alien life must be out there.
“It would shock me if there weren’t [extraterrestrials],” Levin says. “There are few things that would shock me more than to find out there aren’t any aliens…If I had to bet on it, I would bet on the side of there being lots and lots of aliens out there.”
Image Credit: NASA



#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are at the more mundane end of the spectrum for an organization that apparently once looked into jetpacks and teleportation.
Google began selling Google Glass, the original smart glasses, in 2013 for $1,500 as a prototype for its acolytes, around 8,000 early adopters. Users could control the glasses with a touchpad or, after activating them by tilting the head back, with voice commands. Audio relay, as with several wearable products, is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there's the fact that people wearing AR tech often look very silly, whether because we're simply not used to the sight yet or for more permanent reasons. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn't great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google's website: "Thanks for exploring with us." Reminding Glass users that they had always been referred to as "explorers"—beta testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that Google Glass was dormant, and now roaring back to life and commercially available, the Google Glass relaunch got under way in earnest in July of 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
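The pipeline described above—classify what the camera sees, look up extra information, relay it back to the user—can be sketched in a few lines. This is a minimal illustrative sketch, not Google's actual Lens or Glass API; every function and name below is hypothetical, with the neural-network classifier and cloud lookup stubbed out so the flow is runnable.

```python
from dataclasses import dataclass

# Hypothetical annotation record; all names here are illustrative,
# not part of any real Google Lens or Glass API.
@dataclass
class Annotation:
    label: str   # what the classifier thinks the camera is seeing
    detail: str  # extra information looked up for that label

def classify_frame(frame: bytes) -> str:
    """Stand-in for a neural-network image classifier.

    A real system would run a model on-device or in the cloud;
    here it is stubbed so the pipeline can execute."""
    return "street_sign"

def lookup(label: str) -> str:
    """Stand-in for a cloud database keyed by the recognized label."""
    knowledge = {"street_sign": "Translated text: 'One way'"}
    return knowledge.get(label, "No additional information")

def annotate(frame: bytes) -> Annotation:
    """Camera frame in, spoken-ready annotation out."""
    label = classify_frame(frame)
    return Annotation(label=label, detail=lookup(label))

if __name__ == "__main__":
    result = annotate(b"\x00fake-jpeg-bytes")
    # A voice assistant would relay result.detail to the user,
    # e.g. via bone conduction on a pair of smart glasses.
    print(f"{result.label}: {result.detail}")
```

The design point is the loop itself: real-time annotation only becomes a killer application once the classification step is fast and accurate enough, which is exactly the capability that wasn't ready when Glass first launched.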
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
