Can AI Tell the Difference Between a ...
Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.
Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can also be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car can be fooled by a few stickers, it might not be so fun for the passengers.
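To make the idea concrete, here is a minimal sketch of the classic “fast gradient sign method” (FGSM) for constructing an adversarial example with PyTorch and a pretrained network. This is a generic illustration, not the specific attacks described above; the input image file and the perturbation size epsilon are placeholder assumptions.

```python
# Minimal FGSM sketch (assumes PyTorch + torchvision; image path and epsilon are placeholders)
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg19(pretrained=True).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

# Load an image in [0, 1] pixel space and track gradients with respect to it
img = to_tensor(Image.open("bus.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(normalize(img))
label = logits.argmax(dim=1)  # the model's original (clean) prediction

# One gradient step that increases the loss for the original label
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.007  # per-pixel perturbation budget; small enough to be imperceptible
adversarial = (img + epsilon * img.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    new_label = model(normalize(adversarial)).argmax(dim=1)
print(label.item(), "->", new_label.item())  # frequently a different class
```

Because the perturbation follows only the sign of the loss gradient, each pixel moves by at most epsilon, which is why the change can remain invisible to a human while still flipping the network’s prediction.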
These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. Inspecting a trained network itself mainly yields a mass of statistical weights that associate certain groups of points with certain objects, and these weights can be very difficult to interpret.
Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.
They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.
In the first experiment, a neural network was trained to sort images into one of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, and only the outline of the object remained. On the original images, the trained neural net recognized these objects easily, assigning more than 90% probability to the correct classification; on the silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as only the 209th most likely classification by the neural network, even though the overall shapes were an exact match.
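As a rough illustration of this kind of test, here is a minimal sketch of how one might rank the correct class among a pretrained VGG-19’s 1,000 ImageNet predictions for a silhouette image. This is not the authors’ code; the file name and class index are placeholder assumptions.

```python
# Sketch: where does the correct class rank among VGG-19's 1,000 predictions
# for a silhouette? (pretrained torchvision weights; file name and class
# index are placeholders, not the study's data)
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg19(pretrained=True).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("silhouette.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(img).softmax(dim=1).squeeze(0)

correct_class = 294  # placeholder ImageNet index ("brown bear")
rank = int((probs > probs[correct_class]).sum()) + 1
print(f"correct class ranked #{rank} of 1000, p = {probs[correct_class]:.4f}")
```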
A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.
If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.
The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
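To make the scrambling manipulation concrete, here is a minimal sketch of one way to produce such images: cut the picture into a grid of tiles and shuffle them, destroying global shape while largely preserving local texture. The grid size and file names are illustrative assumptions, not the study’s exact procedure.

```python
# Sketch: destroy global shape while preserving local texture by shuffling
# grid tiles (grid size and file names are illustrative)
import random
from PIL import Image

def scramble(image, grid=4):
    w, h = image.size
    tw, th = w // grid, h // grid
    tiles = [image.crop((x * tw, y * th, (x + 1) * tw, (y + 1) * th))
             for y in range(grid) for x in range(grid)]
    random.shuffle(tiles)
    out = Image.new(image.mode, (tw * grid, th * grid))
    for i, tile in enumerate(tiles):
        out.paste(tile, ((i % grid) * tw, (i // grid) * th))
    return out

scramble(Image.open("bear.jpg")).save("bear_scrambled.jpg")
```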
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”
Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.
Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare that activity to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from our own.
But, as yet, it takes something closer to experimental psychology to probe how neural networks and artificial intelligence algorithms perceive the world. The tests employed here resemble how scientists might study the senses of an animal or the developing brain of a young child more than how they would analyze a piece of software.
By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the forest for the trees.
Image Credit: Irvan Pratama / Shutterstock.com
The Most Surprising Tech Breakthroughs ...
Development across the entire information technology landscape certainly didn’t slow down this year. From CRISPR babies to the rapid decline of the crypto markets, a new robot on Mars, and the possible detection of a subatomic particle that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.
As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.
I reached out to a few Singularity University speakers and faculty across the various technology domains we cover asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:
“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”
I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space: the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.
For the rest, here’s what our experts had to say.
DIGITAL BIOLOGY
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time, but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly available data, and the targeting of a disease that we already know how to prevent and treat, and that the children were at low risk of anyway.
I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”
For more, read Dr. Vora’s summary of this recent stunning news from China regarding CRISPR-editing human embryos.
QUANTUM COMPUTING
Andrew Fursman | Co-Founder/CEO 1Qbit, Faculty, Quantum Computing, Singularity University
“There were two last-minute, holiday-season surprises in quantum computing, one in funding and one in technology:
First, right before the government shutdown, Congress passed priority legislation that will provide $1.2 billion for quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”
*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing.
ENERGY
Ramez Naam | Chair, Energy and Environmental Systems, Singularity University
“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, one of the least sunny countries on earth (it gets less sunlight than Canada), the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.
In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.
Here comes the sun.”
GLOBAL GRAND CHALLENGES
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.
What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around gun and school safety, global warming, and the environment. We often talk about the harm technology can cause to young people, but on the flip side, it can be a very powerful tool for youth to start changing the world today, something I hope we see more of in the future.”
BUSINESS STRATEGY
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
“Without a doubt, the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into boardrooms and leadership meetings, and all the way down to the agendas of innovation and IT departments. You are hard-pressed to find a mid- to large-sized company today that is not experimenting with or implementing AI in various aspects of its business.
On the slightly snarkier side of answering this question: the very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short, ferocious, and ended earlier than most would have anticipated, with a huge hangover for some. The good news: with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”
*Author note: snark is welcome and appreciated
ROBOTICS
Hod Lipson | Director, Creative Machines Lab, Columbia University
“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation remained elusive.
There seemed to be something almost magical in how we humans physically manipulate the world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, the Berkeley OpenAI group demonstrated that this hurdle may finally succumb to machine learning as well. Given 200 years’ worth of practice, machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”
MACHINE LEARNING
Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University
“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).
The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.
Earlier this year, two systems (one, called ELMo, from the Allen Institute for AI, and the other, called ULMFiT, developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state of the art in nearly every NLP task that researchers study. This work was further developed by OpenAI, whose approach was in turn greatly scaled up by Google Brain, which created a system called BERT that reached human-level performance on some of NLP’s toughest challenges.
Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”
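As a rough illustration of the “finish your sentence” capability Howard describes, here is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 standing in for the language models mentioned above; the prompt and generation settings are illustrative assumptions.

```python
# Sketch: a language model "finishing your sentence"
# (Hugging Face transformers with GPT-2 as an illustrative stand-in;
#  prompt and settings are placeholders)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Scarcely a day goes by without another headline about"
result = generator(prompt, max_length=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

The fine-tuning recipe follows the same pattern at a high level: pretrain a language model on general text, then adapt it to a target corpus and task.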
DIGITAL MANUFACTURING
Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University
“Most surprising to me was the extent and speed at which the industry finally opened up.
While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is being done with that data.
Another example of this openness is the seemingly endless announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s announcement with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling the comprehensive digital workflows that drive agile and resilient manufacturing.
I’m more optimistic we’ll achieve that now than I was at the end of 2017.”
SCIENCE AND DISCOVERY
Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network
“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the ‘neptmoon,’ a Neptune-scale moon circling a Jupiter-scale planet 8,000 light-years from us; the successful deployment of the Mars InSight lander a month ago; and the tantalizing ANITA detection (what could be a new subatomic particle, which would in turn blow the standard model wide open). The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”
ROBOTICS
Pablos Holman | Inventor, Hacker, Faculty, Singularity University
“Just five or ten years ago, if you’d asked any of us technologists, ‘What is harder for robots: eyes or fingers?’ we’d have all said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a linchpin that allows robots to go everywhere they haven’t been yet.”
BLOCKCHAIN
Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University
“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and technological development, even as the broader cryptocurrency markets have faced a winter. It’s now about seeing the rise of adoption and of applications that people want and need to use. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through WhatsApp. That’s surprisingly fast mainstream adoption of this new technology, and indicates how powerful it is.”
ARTIFICIAL INTELLIGENCE
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”
NEUROSCIENCE
Divya Chander | Chair, Neuroscience, Singularity University
“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that began expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two human embryos created through IVF. Although this was ostensibly to provide genetic resilience against HIV infection, imagine what would happen if we started tinkering with neural circuitry and intelligence.”
Image Credit: Yurchanka Siarhei / Shutterstock.com
The Surprising Parallels Between ...
The human mind can be a confusing and overwhelming place. Despite incredible leaps in human progress, many of us still struggle to make our peace with our thoughts. The roots of this are complex and multifaceted. To find explanations for the global mental health epidemic, one can tap into neuroscience, psychology, evolutionary biology, or simply observe the meaningless systems that dominate our modern-day world.
This is not only the context of our reality but also that of the critically acclaimed Netflix series Maniac. Part psychological dark comedy, part science fiction, Maniac is a retro-futuristic, hallucinatory trip filled with hidden symbols. Directed by Cary Joji Fukunaga, the series tells the story of two strangers who decide to participate in the final stage of a “groundbreaking” pharmaceutical trial—one that combines novel pharmaceuticals with artificial intelligence and promises to make their emotional pain go away.
Naturally, things don’t go according to plan.
From exams used for testing defense mechanisms to techniques such as cognitive behavioral therapy, the narrative is infused with genuine psychological science. As perplexing as the series may be to some viewers, many of the tools depicted actually have a strong grounding in current technological advancements.
Catalysts for Alleviating Suffering
In the therapy of Maniac, participants undergo a three-day trial wherein they ingest three pills and appear to connect their consciousness to a superintelligent AI. Each participant is hurled into the traumatic experiences imprinted in their subconscious and forced to cope with them in a series of hallucinatory and dream-like experiences.
Perhaps the most recognizable parallel that can be drawn is with the latest advancements in psychedelic therapy. Psychedelics are a class of drugs that alter the experience of consciousness, and often cause radical changes in perception and cognitive processes.
Through a process known as transient hypofrontality, the executive “over-thinking” parts of our brains get a rest, and deeper areas become more active. This experience, combined with the breakdown of the ego, is often correlated with feelings of timelessness, peacefulness, presence, unity, and above all, transcendence.
Despite being non-addictive and extremely difficult to overdose on, psychedelics were looked down on by regulators for decades, and many continue to dismiss them as “party drugs.” But in the last few years, all of this began to change.
Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD, after several phases of successful trials. Similar research has found that psilocybin (the psychoactive compound in magic mushrooms), combined with therapy, is far more effective at treating depression and anxiety than traditional forms of treatment. Today, there is a growing body of research indicating that psychedelics such as LSD, MDMA, and psilocybin are not only effective catalysts for alleviating suffering and enhancing the human condition, but potentially the most effective tools out there.
It’s important to realize that these substances are not solutions on their own, but rather catalysts for more effective therapy. They can be groundbreaking, but only in the right context and setting.
Brain-Machine Interfaces
In Maniac, the medication-assisted therapy is guided by what appears to be a super-intelligent form of artificial intelligence called the GRTA, nicknamed Gertie. Gertie, who is a “guide” in machine form, accesses the minds of the participants through what appears to be a futuristic brain-scanning technology and curates customized hallucinatory experiences with the goal of accelerating the healing process.
Such a powerful form of brain-scanning technology is not unheard of. Current scanning technology already allows us to decipher dreams and even connect three human brains, and it is improving at an exponential pace. Though nowhere near as advanced as Gertie (we have a long way to go before we get to that kind of general AI), we are also seeing early signs of AI therapy bots: chatbots that listen, think, and communicate with users the way a therapist would.
The parallels between current advancements in mental health therapy and the methods in Maniac can be startling, and are a testament to how science fiction and the arts can be used to explore the existential implications of technology.
Not Necessarily a Dystopia
While there are many ingenious similarities between the technology in Maniac and the state of mental health therapy, it’s important to recognize the stark differences. Like many other blockbuster science fiction productions, Maniac tells a fundamentally dystopian tale.
The series tells the story of the 73rd iteration of a controversial drug trial, one that has experienced many failures and even left various participants brain-dead. The scientists appear to be evil, secretive, and driven by their own superficial agendas and deep, unresolved emotional issues.
In contrast, clinicians and researchers are not only required to file an “investigational new drug application” with the FDA (and get approval) but also update the agency with safety and progress reports throughout the trial.
Furthermore, many of today’s researchers are driven by a strong desire to contribute to the well-being and progress of our species, and the results of decades of research by organizations like MAPS have been exceptionally promising and aligned with positive values. While Maniac is entertaining and thought-provoking, viewers must not forget the positive potential of such advancements in mental health therapy.
Science, technology, and psychology aside, Maniac is a deep commentary on the human condition and the often disorienting states that pain us all. Within any human lifetime, suffering is inevitable. It is the disproportionate, debilitating, and unjust levels of suffering that we ought to tackle as a society. Ultimately, Maniac explores whether advancements in science and technology can help us live not a life devoid of suffering, but one where it is balanced with fulfillment.
Image Credit: xpixel / Shutterstock.com