Tag Archives: ai
#438882 Robotics in the entertainment industry
Mesmer Entertainment Robotics demonstrate some of their humanoid animatronics, as well as their humanoid robot, Owen.
#439192 Too Perilous For AI? EU Proposes ...
As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.
The economic stakes are high: the Commission predicts European public and private investment in AI will reach €20 billion a year this decade, and that was before the up to €134 billion earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission presumes will also fund AI. Add to that investments in AI made outside the EU that target EU residents, since these rules will apply to any use of AI in the EU, not just to uses by EU-based companies or governments.
Things aren’t going to change overnight: the EU’s AI rules proposal is the result of three years of work by bureaucrats, industry experts, and public consultations and must go through the European Parliament—which requested it—before it can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes.
The proposal defines four tiers for AI-related activity and differing levels of oversight for each. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with specific exceptions granted by national laws and subject to stricter logging and human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e. facial recognition. The proposal also bans subliminal behavior modification and social scoring applications, and it suggests fines of up to 6 percent of commercial violators’ global annual revenue.
The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and account for human oversight. The EU would create a central registry of such systems under the proposed rules and require approval before deployment.
Limited-risk activities, such as the use of chatbots or deepfakes on a website, will face less oversight but will require a warning label so users can opt in or out. Finally, there is a tier for applications judged to present minimal risk.
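To make the structure concrete, here is a toy sketch that maps the example applications above to their tier and the obligations each tier carries. The tier names and examples come from the proposal as described in this article, but the data structure and function are invented for illustration and are not the legal text.

```python
# Toy sketch only: tier names and example applications come from the article
# above; the enum, mapping, and function are invented for illustration and
# are not part of the proposal's legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright in public spaces, narrow national exceptions"
    HIGH = "registry approval, quality datasets, traceability, human oversight"
    LIMITED = "warning label so users can opt in or out"
    MINIMAL = "no new obligations under the proposal"

EXAMPLE_TIERS = {
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "subliminal behavior modification": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "job recruiting": RiskTier.HIGH,
    "credit checks": RiskTier.HIGH,
    "justice system": RiskTier.HIGH,
    "chatbots": RiskTier.LIMITED,
    "deepfakes": RiskTier.LIMITED,
}

def obligations(application: str) -> str:
    """Return the tier and obligations for a known example; default to minimal risk."""
    tier = EXAMPLE_TIERS.get(application, RiskTier.MINIMAL)
    return f"{application}: {tier.name} risk ({tier.value})"

print(obligations("credit checks"))
print(obligations("video game AI"))  # hypothetical example, not named in the article
```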
As often happens when governments propose dense new rulebooks (this one is 108 pages), the initial reactions from industry and civil society groups seem to be more about the existence and reach of industry oversight than the specific content of the rules. One tech-funded think tank told the Wall Street Journal that it could become “infeasible to build AI in Europe.” In turn, privacy-focused civil society groups such as European Digital Rights (EDRi) said in a statement that the “regulation allows too wide a scope for self-regulation by companies.”
“I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,” says Lisa-Maria Neudert, AI governance researcher at the University of Oxford, England, and the Weizenbaum Institut in Berlin, Germany. A 2019 Lloyds Register Foundation poll found that the global public is about evenly split between fear and excitement about AI.
“I can imagine it might help if you have an experienced large legal team,” to help with compliance, Neudert says, and it may be “a difficult balance to strike” between rules that remain startup-friendly and succeed in reining in mega-corporations.
AI researchers Mona Sloane and Andrea Renda write in VentureBeat that the rules are weaker on monitoring of how AI plays out after approval and launch, neglecting “a crucial feature of AI-related risk: that it is pervasive, and it is emergent, often evolving in unpredictable ways after it has been developed and deployed.”
Europe has already been learning from the impact its sweeping 2018 General Data Protection Regulation (GDPR) had on global tech and privacy. Yes, some outside websites still serve Europeans a page telling them the website owners can’t be bothered to comply with GDPR, so Europeans can’t see any content. But most have found a way to adapt in order to reach this unified market of 448 million people.
“I don’t think we should generalize [from GDPR to the proposed AI rules], but it’s fair to assume that such a big piece of legislation will have effects beyond the EU,” Neudert says. It will be easier for legislators in other places to follow a template than to replicate the EU’s heavy investment in research, community engagement, and rule-writing.
While tech companies and their industry groups may grumble about the need to comply with the incipient AI rules, Register columnist Rupert Goodwin suggests they’d be better off focusing on forming the industry groups that will shape the implementation and enforcement of the rules in the future: “You may already be in one of the industry organizations for AI ethics or assessment; if not, then consider them the seeds from which influence will grow.”
#439168 The World’s Biggest AI Chip Now Comes ...
The world’s biggest AI chip just doubled its specs—without adding an inch.
The Cerebras Systems Wafer Scale Engine is about the size of a big dinner plate. All that surface area enables a lot more of everything, from processors to memory. The first WSE chip, released in 2019, had an incredible 1.2 trillion transistors and 400,000 processing cores. Its successor doubles everything, except its physical size.
The WSE-2 crams in 2.6 trillion transistors and 850,000 cores on the same dinner plate. Its on-chip memory has increased from 18 gigabytes to 40 gigabytes, and the rate it shuttles information to and from said memory has gone from 9 petabytes per second to 20 petabytes per second.
It’s a beast any way you slice it.
The WSE-2 is manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), and it was a jump from TSMC’s 16-nanometer chipmaking process to its 7-nanometer process—skipping the 10-nanometer node—that enabled most of the WSE-2’s gains.
This required changes to the physical design of the chip, but Cerebras says they also made improvements to each core above and beyond what was needed to make the new process work. The updated mega-chip should be a lot faster and more efficient.
Why Make Giant Computer Chips?
While graphics processing units (GPUs) still reign supreme in artificial intelligence, they weren’t made for AI in particular. Rather, GPUs were first developed and used for graphics-heavy applications like gaming.
They’ve done amazing things for AI and supercomputing, but in the last several years, specialized chips made for AI have been on the rise.
Cerebras is one of the contenders, alongside other up-and-comers like Graphcore and SambaNova and more familiar names like Intel and NVIDIA.
The company likes to compare the WSE-2 to a top AI processor (NVIDIA’s A100) to underscore just how different it is from the competition. The A100 has about two percent of the number of transistors (54.2 billion) occupying a little under two percent of the surface area. It’s much smaller, but the A100’s might is more fully realized when hundreds or thousands of chips are linked together in a larger system.
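Those figures are easy to sanity-check. The short sketch below uses only the numbers quoted in this article (not vendor benchmarks) to confirm the roughly 2x generational jump and the roughly two percent transistor share.

```python
# Back-of-envelope check using the figures quoted in this article.
wse1_transistors, wse2_transistors = 1.2e12, 2.6e12
wse1_cores, wse2_cores = 400_000, 850_000
a100_transistors = 54.2e9

print(f"WSE-2 / WSE-1 transistors: {wse2_transistors / wse1_transistors:.2f}x")  # roughly 2.2x
print(f"WSE-2 / WSE-1 cores:       {wse2_cores / wse1_cores:.2f}x")              # roughly 2.1x
print(f"A100 / WSE-2 transistors:  {a100_transistors / wse2_transistors:.1%}")   # about 2 percent
```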
In contrast, the WSE-2 reduces the cost and complexity of linking all those chips together by jamming as much processing and memory as possible onto a single wafer of silicon. At the same time, removing the need to move data between lots of chips spread out on multiple server racks dramatically increases speed and efficiency.
The chip’s design gives its small, speedy cores their own dedicated memory and facilitates quick communication between cores. And Cerebras’s compiling software works with machine learning models using standard frameworks—like PyTorch and TensorFlow—to make distributing tasks among the chip’s cores fairly painless.
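To be concrete about what users actually hand over: it’s an ordinary framework model. The sketch below is a standard PyTorch definition of the kind such compiler software ingests; the Cerebras-specific compilation and core-placement step is vendor tooling and isn’t shown here, so treat this as an illustration of the workflow rather than of Cerebras’s own API.

```python
# A minimal, ordinary PyTorch model of the kind handed to an AI compiler stack.
# Nothing here is Cerebras-specific; the vendor's compile/placement step is omitted.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
dummy = torch.randn(32, 784)   # a batch of 32 flattened 28x28 inputs
logits = model(dummy)
print(logits.shape)            # torch.Size([32, 10])
```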
The approach is called wafer-scale computing because the chip is the size of a standard silicon wafer from which many chips are normally cut. Wafer-scale computing has been on the radar for years, but Cerebras is the first to make a commercially viable chip.
The chip comes packaged in a computer system called the CS-2. The system includes cooling and power supply and fits in about a third of a standard server rack.
After the startup announced the chip in 2019, it began working with a growing list of customers. Cerebras counts GlaxoSmithKline, Lawrence Livermore National Lab, and Argonne National Laboratory (among others) as customers, alongside unnamed partners in pharmaceuticals, biotech, manufacturing, and the military. Many applications have been in AI, but not all. Last year, the company said the National Energy Technology Laboratory (NETL) used the chip to outpace a supercomputer in a simulation of fluid dynamics.
Will Wafer-Scale Go Big?
Whether wafer-scale computing catches on remains to be seen.
Cerebras says their chip significantly speeds up machine learning tasks, and testimony from early customers—some of which claim pretty big gains—supports this. But there aren’t yet independent head-to-head comparisons. Neither Cerebras nor most other AI hardware startups, for example, took part in a recent MLPerf benchmark test of AI systems. (The top systems nearly all used NVIDIA GPUs to accelerate their algorithms.)
According to IEEE Spectrum, Cerebras says they’d rather let interested buyers test the system on their own specific neural networks than sell them on a more general and potentially less applicable benchmark. This isn’t an uncommon approach. AI analyst Karl Freund said, “Everybody runs their own models that they developed for their own business. That’s the only thing that matters to buyers.”
It’s also worth noting the WSE can only handle tasks small enough to fit on its chip. The company says most suitable problems it’s encountered can fit, and the WSE-2 delivers even more space. Still, the size of machine learning algorithms is growing rapidly, which is perhaps why Cerebras is keen to note that two or even three CS-2s can fit into a server cabinet.
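As a rough illustration of that constraint, the back-of-envelope sketch below checks which model sizes could even fit their weights into the WSE-2’s 40 gigabytes of on-chip memory, assuming 2-byte (fp16) weights and ignoring activations, gradients, and optimizer state entirely. The model sizes are illustrative, not a claim about any particular workload.

```python
# Rough sketch: weight memory only, fp16 (2 bytes per parameter); activations,
# gradients, and optimizer state are ignored. Model sizes are illustrative.
ON_CHIP_MEMORY_BYTES = 40e9
BYTES_PER_PARAM = 2

for params in (1.5e9, 20e9, 175e9):
    weight_bytes = params * BYTES_PER_PARAM
    fits = "fits" if weight_bytes <= ON_CHIP_MEMORY_BYTES else "does not fit"
    print(f"{params / 1e9:>6.1f}B params -> {weight_bytes / 1e9:5.0f} GB of weights ({fits})")
```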
Ultimately, the WSE-2 doesn’t make sense for smaller tasks for which one or a few GPUs will do the trick. At the moment the chip is being used in large, compute-heavy projects in science and research. Current applications include cancer research and drug discovery, gravitational wave detection, and fusion simulation. Cerebras CEO and cofounder Andrew Feldman says it may also be made available to customers with shorter-term, less intensive needs on the cloud.
The market for the chip is niche, but Feldman told HPC Wire it’s bigger than he anticipated in 2015, and it’s still growing as new approaches to AI are continually popping up. “The market is moving unbelievably quickly,” he said.
The increasing competition between AI chips is worth watching. There may end up being several fit-to-purpose approaches or one that rises to the top.
For the moment, at least, it appears there’s some appetite for a generous helping of giant computer chips.
Image Credit: Cerebras
#439164 Advancing AI With a Supercomputer: A ...
Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.
How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.
Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.
Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.
The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.
Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.
The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.
It’s not a new idea, but so far getting our best electronic and optical hardware to gel has proven incredibly tough. The team thinks they’ve found a potential workaround, dropping the temperature of the system to negative 450 degrees Fahrenheit.
While that might seem to only complicate matters, it actually opens up a host of new hardware possibilities. There are a bunch of high-performance electronic and optical components that only work at these frigid temperatures, like superconducting electronics, single-photon detectors, and silicon LEDs.
The researchers propose using these components to build artificial neurons that operate more like their biological cousins than conventional computer components, firing off electrical impulses, or spikes, rather than shuttling numbers around.
Each neuron has thousands of artificial synapses made from single photon detectors, which pick up optical messages from other neurons. These incoming signals are combined and processed by superconducting circuits, and once they cross a certain threshold a silicon LED is activated, sending an optical impulse to all downstream neurons.
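That behavior can be sketched in software as a simple leaky integrate-and-fire neuron: weighted photon counts accumulate until a threshold is crossed, at which point the neuron “fires” (standing in for the silicon LED flash) and resets. The toy Python below is an analogue of the description above, not NIST’s hardware design; the weights, threshold, and leak values are arbitrary.

```python
# Toy software analogue of the optoelectronic neuron described above.
# Weighted "photon" events accumulate; crossing the threshold emits a spike
# (standing in for the silicon LED) and resets. All values are illustrative.
import random

class ToySpikingNeuron:
    def __init__(self, n_synapses, threshold=5.0, leak=0.9):
        self.weights = [random.uniform(0.1, 1.0) for _ in range(n_synapses)]
        self.threshold = threshold
        self.leak = leak          # fraction of accumulated signal retained per step
        self.potential = 0.0

    def step(self, photon_counts):
        """photon_counts[i] = photons detected at synapse i this time step."""
        self.potential *= self.leak
        self.potential += sum(w * c for w, c in zip(self.weights, photon_counts))
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike sent to downstream neurons
        return 0

neuron = ToySpikingNeuron(n_synapses=8)
for t in range(10):
    incoming = [random.randint(0, 2) for _ in range(8)]
    print(t, neuron.step(incoming))
```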
The researchers envisage combining millions of these neurons on 300-millimeter silicon wafers and then stacking the wafers to create a highly interconnected network that mimics the architecture of the brain, with short-range connections dealt with by optical waveguides on each chip and long-range ones dealt with by fiber optic cables.
They acknowledge that the need to cryogenically cool the entire device is a challenge. But they say the improved power efficiency of their design should cancel out the cost of this cooling, and a system on the scale of the human brain should require no more power or space than a modern supercomputer. They also point out that there is significant R&D going into cryogenically cooled quantum computers, which they could likely piggyback on.
Some of the basic components of the system have already been experimentally demonstrated by the researchers, though they admit there’s still a long way to go to put all the pieces together. While many of these components are compatible with standard electronics fabrication, finding ways to manufacture them cheaply and integrate them will be a mammoth task.
Perhaps more important is the question of what kind of software the machine would run. It’s designed to implement “spiking neural networks” similar to those found in the brain, but our understanding of biological neural networks is still rudimentary, and our ability to mimic them is even worse. While both scientists and tech companies have been experimenting with the approach, it is still far less capable than deep learning.
Given the enormous engineering challenge involved in building a device of this scale, it may be a while before this blueprint makes it off the drawing board. But the proposal is an intriguing new chapter in the hunt for artificial general intelligence.
Image Credit: InspiredImages from Pixabay
#437386 Scary A.I. more intelligent than you
GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like text. Its output is of such high quality that it can be very difficult to distinguish from text written by a human. Many scientists, researchers and engineers (including Stephen …
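GPT-3 itself is only accessible through OpenAI’s API, but the same style of text generation can be tried with its openly available (and far smaller, noticeably weaker) predecessor GPT-2 via Hugging Face’s transformers library. The sketch below illustrates the idea rather than GPT-3’s actual quality.

```python
# Illustration only: uses the openly available GPT-2, not GPT-3 itself.
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Artificial intelligence will change the way we"
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])
```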