#435520 These Are the Meta-Trends Shaping the ...
Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.
The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.
At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.
Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.
Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.
Take energy. In most of the world, though its costs are decreasing, it’s still a fairly precious commodity; we turn off our lights and air conditioners when we don’t need them (ideally both to save money and to avoid waste). But the cost of solar energy has plummeted, battery storage capacity keeps improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for the lowest cost per kilowatt hour.
“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.
Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”
Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.
Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.
Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advanced purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live, if you have a great idea you can get it funded by people from all over the world,” Diamandis said.
All this is making a difference; the number of unicorns—privately-held startups valued at over $1 billion—currently stands at an astounding 360.
One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.
Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to develop global satellite internet constellations, whether by launching 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere like Google’s Project Loon.
“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”
And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections rise to 500 billion devices and 100 trillion sensors. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses and bike sharing. It’s all getting smart and connected.
Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.
“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.
Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.
We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Open Water, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.
Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.
“Our bodies were never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”
Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.
“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”
Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, and they have more access to information, and that information moves faster, than ever before.
“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”
Image Credit: spainter_vfx / Shutterstock.com
#435474 Watch China’s New Hybrid AI Chip Power ...
When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking his man.
No kidding.
The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”
Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.
The study shows that China is nipping at the heels of Google, Facebook, NVIDIA, and the other tech behemoths investing in new AI chip designs—hell, with billions in government investment, it may already have a head start. A sweeping AI plan from 2017 aims to catch up with the US in AI technology and applications by 2020. By 2030, China aims to be the global leader—and a champion of building general AI that matches humans in intellectual competence.
The country’s ambition is reflected in the team’s parting words.
“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.
A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.
However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.
As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.
For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional Von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.
But even these chips are limited. Because computation is tethered to hardware architecture, most chips implement just one specific type of brain-inspired network, called spiking neural networks (SNNs). Neuromorphic chips are undoubtedly highly efficient setups with dynamics similar to biological networks. But they don’t play nicely with deep learning and other software-based AI.
Brain-AI Hybrid Core
Shi’s new Tianjic chip brings these two incompatible approaches together on a single piece of brainy hardware.
First was to bridge the divide between deep learning and SNNs. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks represent multidimensional data—image pixels, for example—as continuous, multi-bit values. In contrast, neurons in SNNs activate using “binary spikes”: all-or-nothing events that code information by their timing.
Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to our biological neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signal don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work pretty differently from deep learning.
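The contrast is easier to see in code. Below is a toy sketch (illustrative neuron models only, not the actual Tianjic circuits): a deep-learning-style neuron emits one continuous value per input, while a leaky integrate-and-fire spiking neuron emits a binary spike train over time, firing only when its accumulated potential crosses a threshold.

```python
# Toy contrast: continuous artificial neuron vs. binary spiking neuron.

def artificial_neuron(inputs, weights):
    """Deep-learning style: weighted sum passed through ReLU,
    yielding a single continuous, multi-bit value."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, total)  # ReLU activation

def spiking_neuron(input_current, threshold=1.0, leak=0.9):
    """SNN style (leaky integrate-and-fire): the membrane potential
    integrates input over time; the output at each time step is a
    binary spike (1) only when the potential crosses the threshold."""
    potential, spikes = 0.0, []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire a spike...
            potential = 0.0    # ...and reset
        else:
            spikes.append(0)   # sub-threshold blips don't count
    return spikes

print(artificial_neuron([0.5, 0.8], [0.6, 0.4]))  # one continuous value
print(spiking_neuron([0.4] * 5))                  # a binary spike train
```

Feed both the same steady input and the difference jumps out: the artificial neuron reports a graded number immediately, while the spiking neuron stays silent until enough charge accumulates, then fires and resets.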
Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.
In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the branches that collect incoming signals. In contrast, the neuron body, where signals integrate, was left reconfigurable for each type of computation, as were the input branches. Each building block was combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
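The idea of a shared front end with a mode-dependent neuron body can be sketched in software. This is a loose analogy with hypothetical names, not the real FCore microarchitecture: the synaptic weighting is common to both modes, while the output stage is reconfigured to produce either a continuous activation or a time-integrated spike.

```python
# Hypothetical sketch of a dual-mode core: shared synapses, a "soma"
# (integration/output stage) reconfigured per computation mode.

def fcore_step(inputs, weights, mode, potential=0.0, threshold=1.0):
    """One update of a unified core. Returns (output, new_potential)."""
    drive = sum(i * w for i, w in zip(inputs, weights))  # shared synapses
    if mode == "ann":
        # Deep-learning mode: stateless, continuous ReLU output.
        return max(0.0, drive), potential
    elif mode == "snn":
        # Spiking mode: stateful integration, binary output.
        potential += drive
        if potential >= threshold:
            return 1, 0.0          # spike and reset
        return 0, potential        # stay silent, keep charge
    raise ValueError(f"unknown mode: {mode}")

out, _ = fcore_step([1.0, 0.5], [0.4, 0.2], "ann")
print(out)  # continuous activation
out, pot = fcore_step([1.0, 0.5], [0.4, 0.2], "snn", potential=0.7)
print(out, pot)  # binary spike, reset potential
```

The point of the analogy is the reuse: the same weighted-sum machinery serves both paradigms, and only the back half of the "neuron" changes with the mode flag.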
The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural network (CNNs) often used in machine vision.
Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.
Although these stats are impressive, a real-life demo makes the case even better. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep convolutional neural network, whereas voice commands and balance data were processed by SNNs. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules—for example, controlling the handlebar to turn left.
Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.
General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.
However, they cautioned that when Tianjic goes toe-to-toe with a state-of-the-art chip designed for a single problem, on that particular problem Tianjic falls behind. Even so, building these jack-of-all-trades hybrid chips is worth the effort. Compared to today’s narrow AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.
Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.
*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.
Image Credit: Alexander Ryabintsev / Shutterstock.com
#435436 Undeclared Wars in Cyberspace Are ...
The US is at war. That’s probably not exactly news, as the country has been engaged in one type of conflict or another for most of its history. The last time we officially declared war was after Japan bombed Pearl Harbor in December 1941.
Our biggest undeclared war today is not being fought by drones in the mountains of Afghanistan or even through the less-lethal barrage of threats over the nuclear programs in North Korea and Iran. In this particular war, it is the US that is under attack and on the defensive.
This is cyberwarfare.
The definition of what constitutes a cyber attack is a broad one, according to Greg White, executive director of the Center for Infrastructure Assurance and Security (CIAS) at The University of Texas at San Antonio (UTSA).
At the level of nation-state attacks, cyberwarfare could involve “attacking systems during peacetime—such as our power grid or election systems—or it could be during war time in which case the attacks may be designed to cause destruction, damage, deception, or death,” he told Singularity Hub.
For the US, the Pearl Harbor of cyberwarfare occurred during 2016 with the Russian interference in the presidential election. However, according to White, an Air Force veteran who has been involved in computer and network security since 1986, the history of cyber war can be traced back much further, to at least the first Gulf War of the early 1990s.
“We started experimenting with cyber attacks during the first Gulf War, so this has been going on a long time,” he said. “Espionage was the prime reason before that. After the war, the possibility of expanding the types of targets utilized expanded somewhat. What is really interesting is the use of social media and things like websites for [psychological operation] purposes during a conflict.”
The 2008 conflict between Russia and the Republic of Georgia is often cited as a cyberwarfare case study due to the large scale and overt nature of the cyber attacks. Russian hackers managed to bring down more than 50 news, government, and financial websites through denial-of-service attacks. In addition, about 35 percent of Georgia’s internet networks suffered decreased functionality during the attacks, coinciding with the Russian invasion of South Ossetia.
The cyberwar also offers lessons for today on Russia’s approach to cyberspace as a tool for “holistic psychological manipulation and information warfare,” according to a 2018 report called Understanding Cyberwarfare from the Modern War Institute at West Point.
US Fights Back
News in recent years has highlighted how Russian hackers have attacked various US government entities and critical infrastructure such as energy and manufacturing. In particular, a shadowy group known as Unit 26165 within the country’s military intelligence directorate is believed to be behind the 2016 US election interference campaign.
However, the US hasn’t been standing idly by. Since at least 2012, the US has put reconnaissance probes into the control systems of the Russian electric grid, The New York Times reported. More recently, we learned that the US military has gone on the offensive, putting “crippling malware” inside the Russian power grid as the U.S. Cyber Command flexes its online muscles thanks to new authority granted to it last year.
“Access to the power grid that is obtained now could be used to shut something important down in the future when we are in a war,” White noted. “Espionage is part of the whole program. It is important to remember that cyber has just provided a new domain in which to conduct the types of activities we have been doing in the real world for years.”
The US is also beginning to pour more money into cybersecurity. The 2020 fiscal budget calls for spending $17.4 billion throughout the government on cyber-related activities, with the Department of Defense (DoD) alone earmarked for $9.6 billion.
Despite the growing emphasis on cybersecurity in the US and around the world, the demand for skilled security professionals is well outpacing the supply, with a projected shortfall of nearly three million open or unfilled positions according to the non-profit IT security organization (ISC)².
UTSA is rare among US educational institutions in that security courses and research are being conducted across three different colleges, according to White. About 10 percent of the school’s 30,000-plus students are enrolled in a cyber-related program, he added, and UTSA is one of only 21 schools that have received the Cyber Operations Center of Excellence designation from the National Security Agency.
“This track in the computer science program is specifically designed to prepare students for the type of jobs they might be involved in if they went to work for the DoD,” White said.
However, White is extremely doubtful there will ever be enough cyber security professionals to meet demand. “I’ve been preaching that we’ve got to worry about cybersecurity in the workforce, not just the cybersecurity workforce, not just cybersecurity professionals. Everybody has a responsibility for cybersecurity.”
Artificial Intelligence in Cybersecurity
Indeed, humans are often seen as the weak link in cybersecurity. That point was driven home at a cybersecurity roundtable discussion during this year’s Brainstorm Tech conference in Aspen, Colorado.
Participant Dorian Daley, general counsel at Oracle, said insider threats are at the top of the list when it comes to cybersecurity. “Sadly, I think some of the biggest challenges are people, and I mean that in a number of ways. A lot of the breaches really come from insiders. So the more that you can automate things and you can eliminate human malicious conduct, the better.”
White noted that automation is already the norm in cybersecurity. “Humans can’t react as fast as systems can launch attacks, so we need to rely on automated defenses as well,” he said. “This doesn’t mean that humans are not in the loop, but much of what is done these days is ‘scripted’.”
The use of artificial intelligence, machine learning, and other advanced automation techniques has been part of the cybersecurity conversation for quite some time, according to White. Pattern analysis, for example, looks for specific behaviors that might indicate an attack is underway.
“What we are seeing quite a bit of today falls under the heading of big data and data analytics,” he explained.
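At its simplest, the kind of data analytics White describes means flagging statistical outliers in event streams. A minimal sketch (hypothetical data and thresholds, far cruder than production tooling) might flag hours whose failed-login counts deviate sharply from the historical baseline:

```python
import statistics

def flag_anomalies(event_counts, z_threshold=3.0):
    """Return the indices of time buckets whose event count deviates
    more than z_threshold standard deviations from the mean --
    a bare-bones version of behavioral pattern analysis."""
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > z_threshold]

# Hourly failed-login counts; hour 5 shows a burst worth investigating.
hourly_failures = [12, 9, 11, 10, 13, 97, 12, 10]
print(flag_anomalies(hourly_failures, z_threshold=2.0))
```

Real systems layer far richer features and learned models on top, but the scripted core is the same: define normal behavior from data, then surface deviations for a human (or an automated response) to act on.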
But there are signs that AI is going off-script when it comes to cyber attacks. In the hands of threat groups, AI applications could lead to an increase in the number of cyberattacks, wrote Michelle Cantos, a strategic intelligence analyst at cybersecurity firm FireEye.
“Current AI technology used by businesses to analyze consumer behavior and find new customer bases can be appropriated to help attackers find better targets,” she said. “Adversaries can use AI to analyze datasets and generate recommendations for high-value targets they think the adversary should hit.”
In fact, security researchers have already demonstrated how a machine learning system could be used for malicious purposes. The Social Network Automated Phishing with Reconnaissance system, or SNAP_R, generated more than four times as many spear-phishing tweets on Twitter as a human—and was just as successful at targeting victims in order to steal sensitive information.
Cyber war is upon us. And like the current war on terrorism, there are many battlefields from which the enemy can attack and then disappear. While total victory is highly unlikely in the traditional sense, innovations through AI and other technologies can help keep the lights on against the next cyber attack.
Image Credit: pinkeyes / Shutterstock.com