Tag Archives: applications
#435541 This Giant AI Chip Is the Size of an ...
People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.
Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.
On Monday Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have been focused on making their products ever-smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.
The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.
There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.
Building all 400,000 cores into the same chip should get around that bottleneck, but there are reasons it hasn't been done before, and Cerebras had to come up with some clever hacks to overcome those obstacles.
Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a wafer of silicon. The wafers are inches across, so multiple chips are built onto them at once and then split up afterwards. But at 8.5 inches across, the WSE uses the entire wafer for a single chip.
The problem is that while for standard chip-making processes any imperfections in manufacturing will at most lead to a few processors out of several hundred having to be ditched, for Cerebras it would mean scrapping the entire wafer. To get around this the company built in redundant circuits so that even if there are a few defects, the chip can route around them.
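The idea of routing around defects can be pictured with a toy model: treat the cores as nodes on a 2D mesh, treat defective cores as holes, and search for a detour. The sketch below is purely illustrative (Cerebras has not published its routing scheme); it uses a simple breadth-first search over a small hypothetical grid:

```python
from collections import deque

def route_around_defects(grid_size, defects, src, dst):
    """Breadth-first search for a path between two cores on a 2D mesh,
    treating defective cores as holes the route must avoid."""
    defects = set(defects)
    if src in defects or dst in defects:
        return None
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == dst:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if (0 <= nr < grid_size and 0 <= nc < grid_size
                    and nxt not in seen and nxt not in defects):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no working route exists

# A 4x4 mesh with two dead cores; traffic still finds a way across.
path = route_around_defects(4, [(1, 1), (2, 1)], (0, 0), (3, 3))
```

The point of the redundancy is exactly this: as long as enough spare cores and links survive manufacturing, the fabric can always find some path, so a handful of defects no longer condemns the whole wafer.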
The other big issue with a giant chip is the enormous amount of heat the processors can kick off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connections and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.
There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show they are reducing training time from months to minutes.
We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.
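The batching point is worth unpacking. On a GPU, large batches keep the hardware busy, but nothing in the underlying math requires them. A toy least-squares example (hypothetical data and learning rates, chosen only for illustration) converges whether each update uses one sample or a GPU-friendly sixty-four:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))            # toy dataset
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

def batched_sgd(X, y, batch_size, lr=0.01, epochs=100):
    """Plain least-squares SGD. The batch size trades per-step cost for
    update frequency -- a knob tuned to hardware throughput, not a
    requirement of the learning theory itself."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(0, len(X), batch_size):
            xb, yb = X[i:i + batch_size], y[i:i + batch_size]
            grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
            w -= lr * grad
    return w

w_single = batched_sgd(X, y, batch_size=1)    # one sample per update
w_batched = batched_sgd(X, y, batch_size=64)  # GPU-friendly batches
```

Both runs recover essentially the same weights; batching is a throughput optimization, which is why hardware that removes the penalty for small batches could open up different training regimes.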
Whether that turns out to be the case or not, the WSE might be the first indication of an innovative new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers that out-of-the-box thinking would be needed to square the slowing of Moore’s Law with skyrocketing demand for computing power.
It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.
These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.
This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.
#435436 Undeclared Wars in Cyberspace Are ...
The US is at war. That’s probably not exactly news, as the country has been engaged in one type of conflict or another for most of its history. The last time we officially declared war was after Japan bombed Pearl Harbor in December 1941.
Our biggest undeclared war today is not being fought by drones in the mountains of Afghanistan or even through the less-lethal barrage of threats over the nuclear programs in North Korea and Iran. In this particular war, it is the US that is under attack and on the defensive.
This is cyberwarfare.
The definition of what constitutes a cyber attack is a broad one, according to Greg White, executive director of the Center for Infrastructure Assurance and Security (CIAS) at The University of Texas at San Antonio (UTSA).
At the level of nation-state attacks, cyberwarfare could involve “attacking systems during peacetime—such as our power grid or election systems—or it could be during war time in which case the attacks may be designed to cause destruction, damage, deception, or death,” he told Singularity Hub.
For the US, the Pearl Harbor of cyberwarfare occurred during 2016 with the Russian interference in the presidential election. However, according to White, an Air Force veteran who has been involved in computer and network security since 1986, the history of cyber war can be traced back much further, to at least the first Gulf War of the early 1990s.
“We started experimenting with cyber attacks during the first Gulf War, so this has been going on a long time,” he said. “Espionage was the prime reason before that. After the war, the possibility of expanding the types of targets utilized expanded somewhat. What is really interesting is the use of social media and things like websites for [psychological operation] purposes during a conflict.”
The 2008 conflict between Russia and the Republic of Georgia is often cited as a cyberwarfare case study due to the large scale and overt nature of the cyber attacks. Russian hackers managed to bring down more than 50 news, government, and financial websites through denial-of-service attacks. In addition, about 35 percent of Georgia’s internet networks suffered decreased functionality during the attacks, coinciding with the Russian invasion of South Ossetia.
The cyberwar also offers lessons for today on Russia’s approach to cyberspace as a tool for “holistic psychological manipulation and information warfare,” according to a 2018 report called Understanding Cyberwarfare from the Modern War Institute at West Point.
US Fights Back
News in recent years has highlighted how Russian hackers have attacked various US government entities and critical infrastructure such as energy and manufacturing. In particular, a shadowy group known as Unit 26165 within the country’s military intelligence directorate is believed to be behind the 2016 US election interference campaign.
However, the US hasn’t been standing idly by. Since at least 2012, the US has put reconnaissance probes into the control systems of the Russian electric grid, The New York Times reported. More recently, we learned that the US military has gone on the offensive, putting “crippling malware” inside the Russian power grid as US Cyber Command flexes its online muscles thanks to new authority granted to it last year.
“Access to the power grid that is obtained now could be used to shut something important down in the future when we are in a war,” White noted. “Espionage is part of the whole program. It is important to remember that cyber has just provided a new domain in which to conduct the types of activities we have been doing in the real world for years.”
The US is also beginning to pour more money into cybersecurity. The 2020 fiscal budget calls for spending $17.4 billion throughout the government on cyber-related activities, with the Department of Defense (DoD) alone earmarked for $9.6 billion.
Despite the growing emphasis on cybersecurity in the US and around the world, the demand for skilled security professionals is well outpacing the supply, with a projected shortfall of nearly three million open or unfilled positions according to the non-profit IT security organization (ISC)².
UTSA is rare among US educational institutions in that security courses and research are conducted across three different colleges, according to White. About 10 percent of the school’s 30,000-plus students are enrolled in a cyber-related program, he added, and UTSA is one of only 21 schools that have received the Cyber Operations Center of Excellence designation from the National Security Agency.
“This track in the computer science program is specifically designed to prepare students for the type of jobs they might be involved in if they went to work for the DoD,” White said.
However, White is extremely doubtful there will ever be enough cyber security professionals to meet demand. “I’ve been preaching that we’ve got to worry about cybersecurity in the workforce, not just the cybersecurity workforce, not just cybersecurity professionals. Everybody has a responsibility for cybersecurity.”
Artificial Intelligence in Cybersecurity
Indeed, humans are often seen as the weak link in cybersecurity. That point was driven home at a cybersecurity roundtable discussion during this year’s Brainstorm Tech conference in Aspen, Colorado.
Participant Dorian Daley, general counsel at Oracle, said insider threats are at the top of the list when it comes to cybersecurity. “Sadly, I think some of the biggest challenges are people, and I mean that in a number of ways. A lot of the breaches really come from insiders. So the more that you can automate things and you can eliminate human malicious conduct, the better.”
White noted that automation is already the norm in cybersecurity. “Humans can’t react as fast as systems can launch attacks, so we need to rely on automated defenses as well,” he said. “This doesn’t mean that humans are not in the loop, but much of what is done these days is ‘scripted’.”
The use of artificial intelligence, machine learning, and other advanced automation techniques has been part of the cybersecurity conversation for quite some time, according to White. One example is pattern analysis that looks for specific behaviors indicating an attack is underway.
“What we are seeing quite a bit of today falls under the heading of big data and data analytics,” he explained.
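A minimal sketch of what such pattern analysis can look like, assuming a hypothetical request log and a crude per-source volume baseline (real systems model far richer behavioral features than raw counts):

```python
import statistics
from collections import Counter

def flag_anomalous_sources(request_log, threshold_sigma=3.0):
    """Toy behavioral analysis: count requests per source IP and flag
    any source whose volume sits more than `threshold_sigma` standard
    deviations above the mean across all sources."""
    counts = Counter(ip for ip, _path in request_log)
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return [ip for ip, n in counts.items()
            if (n - mean) / stdev > threshold_sigma]

# Nineteen quiet hosts and one chattering one.
log = [(f"10.0.0.{i}", "/login") for i in range(1, 20)]
log += [("10.0.0.99", "/login")] * 200
suspects = flag_anomalous_sources(log)
```

Here the noisy host stands out statistically and gets flagged; the "big data" point is that the same idea, applied across billions of events and many behavioral dimensions, is what automated defenses actually run.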
But there are signs that AI is going off-script when it comes to cyber attacks. In the hands of threat groups, AI applications could lead to an increase in the number of cyberattacks, wrote Michelle Cantos, a strategic intelligence analyst at cybersecurity firm FireEye.
“Current AI technology used by businesses to analyze consumer behavior and find new customer bases can be appropriated to help attackers find better targets,” she said. “Adversaries can use AI to analyze datasets and generate recommendations for high-value targets they think the adversary should hit.”
In fact, security researchers have already demonstrated how a machine learning system could be used for malicious purposes. The Social Network Automated Phishing with Reconnaissance system, or SNAP_R, generated more than four times as many spear-phishing tweets on Twitter as a human, and was just as successful at targeting victims in order to steal sensitive information.
Cyber war is upon us. And like the current war on terrorism, there are many battlefields from which the enemy can attack and then disappear. While total victory is highly unlikely in the traditional sense, innovations through AI and other technologies can help keep the lights on against the next cyber attack.
Image Credit: pinkeyes / Shutterstock.com
#435196 Avatar Love? New ‘Black Mirror’ ...
This week, the widely anticipated fifth season of the dystopian series Black Mirror was released on Netflix. The storylines this season are less focused on far-out scenarios and more closely aligned with current issues. With only three episodes, this season raises more questions than it answers, often leaving audiences bewildered.
The episode Smithereens explores our society’s crippling addiction to social media platforms and the monopoly they hold over our data. In Rachel, Jack and Ashley Too, we see the disruptive impact of technologies on the music and entertainment industry, and the price of fame for artists in the digital world. Like most Black Mirror episodes, these explore the sometimes disturbing implications of tech advancements on humanity.
But once again, in the midst of all the doom and gloom, the creators of the series leave us with a glimmer of hope. Aligned with Pride month, the episode Striking Vipers explores the impact of virtual reality on love, relationships, and sexual fluidity.
*This review contains a few spoilers.*
Striking Vipers
The first episode of the season, Striking Vipers may be one of the most thought-provoking episodes in Black Mirror history. Reminiscent of previous episodes San Junipero and Hang the DJ, the writers explore the potential for technology to transform human intimacy.
The episode tells the story of two old friends, Danny and Karl, whose friendship is reignited in an unconventional way. Karl unexpectedly appears at Danny’s 38th birthday and reintroduces him to the VR version of a game they used to play years before. In the game Striking Vipers X, each of the players is represented by an avatar of their choice in an uncanny digital reality. Following old tradition, Karl chooses to become the female fighter, Roxanne, and Danny takes on the role of the male fighter, Lance. The state-of-the-art VR headsets appear to use an advanced form of brain-machine interface to allow each player to be fully immersed in the virtual world, emulating all physical sensations.
To their surprise (and confusion), Danny and Karl find themselves transitioning from fist-fighting to kissing. Over the course of many games, they continue to explore a sexual and romantic relationship in the virtual world, leaving them confused and distant in the real world. The virtual and physical realities begin to blur, and so do the identities of the players with their avatars. Danny, who is married (in a heterosexual relationship) and is a father, begins to carry guilt and confusion in the real world. They both wonder if there would be any spark between them in real life.
The brain-machine interface (BMI) depicted in the episode is still science fiction, but that hasn’t stopped innovators from pushing the technology forward. Experts today are designing more intricate BMI systems while programming better algorithms to interpret the neural signals they capture. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate with one another purely through brainwaves.
The convergence of BMIs with virtual reality and artificial intelligence could make the experience of such immersive digital realities possible. Virtual reality hardware, too, is rapidly falling in cost while rising in quality.
The narrative provides meaningful commentary on another tech area—gaming. It highlights video games not necessarily as addictive distractions, but rather as a platform for connecting with others in a deeper way. This is already very relevant. Video games like Final Fantasy are often a tool for meaningful digital connections for their players.
The Implications of Virtual Reality on Love and Relationships
The narrative of Striking Vipers raises many novel questions about the implications of immersive technologies on relationships: could the virtual world allow us a safe space to explore suppressed desires? Can virtual avatars make it easier for us to show affection to those we care about? Can a sexual or romantic encounter in the digital world be considered infidelity?
Above all, the episode explores the therapeutic possibilities of such technologies. While many fears about virtual reality had been raised in previous seasons of Black Mirror, this episode was focused on its potential. This includes the potential of immersive technology to be a source of liberation, meaningful connections, and self-exploration, as well as a tool for realizing our true identities and desires.
Once again, this is aligned with emerging trends in VR. We are seeing the rise of social VR applications and platforms that allow you to hang out with your friends and family as avatars in the virtual space. The technology is allowing animated films, such as Coco VR, to become increasingly social and interactive experiences. Considering that meaningful social interaction can alleviate depression and anxiety, such applications could contribute to well-being.
Techno-philosopher and National Geographic host Jason Silva points out that immersive media technologies can be “engines of empathy.” VR allows us to enter virtual spaces that mimic someone else’s state of mind, allowing us to empathize with the way they view the world. Silva said, “Imagine the intimacy that becomes possible when people meet and they say, ‘Hey, do you want to come visit my world? Do you want to see what it’s like to be inside my head?’”
What is most fascinating about Striking Vipers is that it explores how we may redefine love with virtual reality; we are introduced to love between virtual avatars. While this kind of love may seem confusing to audiences, it may be one of the complex implications of virtual reality on human relationships.
In many ways, the title Black Mirror couldn’t be more appropriate, as each episode serves as a mirror to the most disturbing aspects of our psyches as they get amplified through technology. However, what we see in uplifting and thought-provoking plots like Striking Vipers, San Junipero, and Hang The DJ is that technology could also amplify the most positive aspects of our humanity. This includes our powerful capacity to love.
Image Credit: Arsgera / Shutterstock.com