MIT’s Tiny New Brain Chip Aims for AI ...
The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.
That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.
The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.
In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.
“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”
A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
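To make that contrast concrete, here is a minimal sketch (purely illustrative, not a model of MIT’s actual device) of the idea behind memristor crossbars: with a network’s weights stored as conductances, applying input voltages produces output currents that are already the weighted sums a neural network needs, with no shuttling between separate memory and processor.

```python
import numpy as np

# Illustrative sketch of in-memory analog computation with a memristor
# crossbar (hypothetical values, not MIT's design). A weight matrix is
# stored as conductances G; input voltages V applied to the rows produce
# column currents I = G.T @ V -- a matrix-vector multiply computed where
# the data lives, via Ohm's law (I = G*V) and Kirchhoff's current law.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-3, size=(4, 3))  # conductances in siemens, one per synapse
V = np.array([0.2, 0.0, 0.5, 0.1])        # input voltages on the rows, in volts

I = G.T @ V  # each column current is one "neuron's" weighted sum, read in a single step
print(I)
```

In real hardware, each of those conductances is a physical memristor, which is exactly where the reliability and scaling problems mentioned above come in.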
To overcome these challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale the number of memristors on the chip without sacrificing functionality.
While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other, even smaller devices.
“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”
Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.
Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.
Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and were heavily used early on in the current generation of machine learning. In the years since, Google, NVIDIA, and others have developed even more specialized chips that cater more specifically to machine learning.
The gains from such specialized chips are already being felt.
In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In one example, they found the cost to train an image recognition algorithm (ResNet-50) fell from around $1,000 in 2017 to roughly $10 in 2019. The fall in the cost of actually running such an algorithm was even more dramatic: it took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.
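A quick back-of-the-envelope check shows just how steep those declines are (the dollar figures are ARK’s, as cited above; the Moore’s Law comparison assumes its usual reading of costs halving roughly every two years):

```python
# Implied constant annual rates of decline from ARK's figures.

def annual_decline(cost_start, cost_end, years):
    """Rate r such that cost_end = cost_start * (1 - r) ** years."""
    return 1 - (cost_end / cost_start) ** (1 / years)

train = annual_decline(1000, 10, 2)     # training ResNet-50, 2017 -> 2019
infer = annual_decline(10000, 0.03, 2)  # classifying a billion images, 2017 -> 2019

print(f"training cost fell ~{train:.0%} per year")   # ~90% per year
print(f"inference cost fell ~{infer:.1%} per year")  # ~99.8% per year

# Moore's Law pace, read as cost per unit of compute halving every ~2 years:
moore = 1 - 0.5 ** (1 / 2)
print(f"Moore's Law pace: ~{moore:.0%} per year")    # ~29% per year
```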
Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.
As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.
Image Credit: Peng Lin
Why Your 5G Phone Connection Could Mean ...
Will getting full bars on your 5G connection mean getting caught out by sudden weather changes?
The question may strike you as hypothetical, nonsensical even, but it is at the core of ongoing disputes between meteorologists and telecommunications companies. Everyone else, including you and me, is caught in the middle, wanting both 5G’s faster connection speeds and precise information about our increasingly unpredictable weather. So why can’t we have both?
Perhaps we can, but because of the way 5G networks function, it may take some special technology—specifically, artificial intelligence.
The Bandwidth Worries
Around the world, the first 5G networks are already being rolled out. The networks use a variety of frequencies to transmit data to and from devices at speeds up to 100 times faster than existing 4G networks.
One of the bands in use runs from 24.25 to 24.45 gigahertz (GHz). In a recent FCC auction, telecommunications companies paid a combined $2 billion for the 5G usage rights for this spectrum in the US.
However, meteorologists are concerned that transmissions near the lower end of that range can interfere with their ability to accurately measure water vapor in the atmosphere. Wired reported that acting chief of the National Oceanic and Atmospheric Administration (NOAA), Neil Jacobs, told the US House Subcommittee on the Environment that 5G interference could substantially cut the amount of weather data satellites can gather. As a result, forecast accuracy could drop by as much as 30 percent.
Among the consequences could be less time to prepare for hurricanes, and it may become harder to predict storms’ paths. Due to the interconnectedness of weather patterns, measurement issues in one location can affect other areas too. Lack of accurate atmospheric data from the US could, for example, lead to less accurate forecasts for weather patterns over Europe.
The Numbers Game
Water vapor emits a faint signal at 23.8 GHz. Weather satellites measure these signals, and the data is used to gauge atmospheric humidity levels. Meteorologists have expressed concern that 5G transmissions in the same range could disturb those readings, since it would be nigh on impossible to tell whether a given signal comes from water vapor or from an errant 5G transmitter.
Furthermore, 5G disturbances in other frequency bands could make forecasting even more difficult. Rain and snow emit frequencies around 36-37 GHz, the 50.2-50.4 GHz band is used to measure atmospheric temperatures, and the 86-92 GHz band to monitor clouds and ice. All of the above are under consideration for international 5G use. Some have warned that the wider consequences could set weather forecasting back to the 1980s.
Telecommunications companies and industry groups have argued back, saying that weather sensors aren’t as susceptible to interference as meteorologists fear, and that 5G devices and signals will produce much less interference with weather forecasts than organizations like NOAA predict. Since very little scientific research has been carried out to examine either party’s claims, we seem stuck in a ‘wait and see’ situation.
To offset some of the possible effects, the two groups have tried to reach a consensus on a noise buffer between the 5G transmissions and water-vapor signals. It could be likened to limiting the noise from busy roads or loud sound systems to avoid bothering neighboring buildings.
The World Meteorological Organization has pushed for a -55 decibel-watt (dBW) buffer. In Europe, regulators have settled on a -42 dBW buffer for 5G base stations. For comparison, the US Federal Communications Commission has advocated for a -20 dBW buffer, which would, in practice, allow more than 150 times as much noise as the European proposal.
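The “more than 150 times” figure falls out of the logarithmic decibel scale; converting each proposed buffer to linear power makes the gap explicit (an illustrative calculation, not part of either proposal):

```python
# Decibel-watts are logarithmic: power_watts = 10 ** (dBW / 10).

def dbw_to_watts(dbw):
    return 10 ** (dbw / 10)

wmo, europe, fcc = -55, -42, -20  # proposed noise buffers in dBW

print(f"FCC vs. Europe: {dbw_to_watts(fcc) / dbw_to_watts(europe):.0f}x the allowed noise")  # ~158x
print(f"FCC vs. WMO:    {dbw_to_watts(fcc) / dbw_to_watts(wmo):.0f}x the allowed noise")     # ~3162x
```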
How AI Could Help
Much of the conversation about 5G’s possible influence on future weather predictions is centered around mobile phones. However, the phones are far from the only systems that will be receiving and transmitting signals on 5G. Self-driving cars and the Internet of Things are two other technologies that could soon be heavily reliant on faster wireless signals.
Densely populated areas are likely to be the biggest emitters of 5G signals, which has led to suggestions that water-vapor data be gathered only over the oceans.
Another option is to develop artificial intelligence (AI) approaches to clean or process weather data. AI is already playing an increasing role in weather forecasting. In 2016, for example, IBM bought The Weather Company for $2 billion, aiming to combine the two companies’ models and data in IBM’s Watson to create more accurate forecasts and to predict how weather changes affect business revenues. Monsanto has also been investing in AI for forecasting, in this case to provide agriculture-related weather predictions.
Smartphones may also provide a piece of the weather forecasting puzzle. Studies have shown that data from thousands of smartphones can help increase the accuracy of storm predictions, as well as estimates of storms’ force.
“Weather stations cost a lot of money,” Cliff Mass, an atmospheric scientist at the University of Washington in Seattle, told Inside Science, adding, “If there are already 20 million smartphones, you might as well take advantage of the observation system that’s already in place.”
Smartphones may not be the solution when it comes to finding new ways of gathering the atmospheric data on water vapor that 5G could disrupt. But the example goes to show that some technologies open new doors even as others shut them.
Image Credit: Free-Photos from Pixabay