Tag Archives: training
#431343 How Technology Is Driving Us Toward Peak ...
At some point in the future—and in some ways we are already seeing this—the amount of physical stuff moving around the world will peak and begin to decline. By “stuff,” I am referring to liquid fuels, coal, containers on ships, food, raw materials, products, etc.
New technologies are moving us toward “production-at-the-point-of-consumption” of energy, food, and products with reduced reliance on a global supply chain.
The trade of physical stuff has been central to globalization as we’ve known it. So, this declining movement of stuff may signal we are approaching “peak globalization.”
To be clear, even as the movement of stuff may slow, if not decline, the movement of people, information, data, and ideas around the world is growing exponentially and is likely to continue doing so for the foreseeable future.
Peak globalization may provide a pathway to preserving the best of globalization and global interconnectedness, enhancing economic and environmental sustainability, and empowering individuals and communities to strengthen democracy.
At the same time, some of the most troublesome aspects of globalization may be eased, including massive financial transfers to energy producers and loss of jobs to manufacturing platforms like China. This shift could bring relief to the “losers” of globalization and ease populist, nationalist political pressures that are roiling the developed countries.
That is quite a claim, I realize. But let me explain the vision.
New Technologies and Businesses: Digital, Democratized, Decentralized
The key factors moving us toward peak globalization and making it economically viable are new technologies and innovative businesses and business models allowing for “production-at-the-point-of-consumption” of energy, food, and products.
Exponential technologies are enabling these trends by sharply reducing the “cost of entry” for creating businesses. Driven by Moore’s Law, powerful technologies have become available to almost anyone, anywhere.
Beginning with the microchip, which has had a 100-billion-fold improvement in 40 years—10,000 times faster and 10 million times cheaper—the marginal cost of producing almost everything that can be digitized has fallen toward zero.
A hard copy of a book, for example, will always entail the cost of materials, printing, shipping, etc., even if the marginal cost falls as more copies are produced. But the marginal cost of a second digital copy, such as an e-book, streaming video, or song, is nearly zero, as it is simply a digital file sent over the Internet, the world’s largest copy machine.* Books are just one product; literally hundreds of thousands of dollars’ worth of once-physical, separate products are now jammed into our devices at little to no cost.
A smartphone alone provides half the human population access to artificial intelligence—from Siri, search, and translation to cloud computing—along with geolocation, free global video calls, digital photography with free uploads to social networks, free access to global knowledge, a million apps for a huge variety of purposes, and many other capabilities that were unavailable to most people only a few years ago.
As powerful as dematerialization and demonetization are for private individuals, they’re having an even stronger effect on businesses. A small team can access expensive, advanced tools that were previously available only to the biggest organizations. Foundational digital platforms, such as the internet and GPS, and the platforms built on top of them by the likes of Google, Apple, Amazon, and others provide the connectivity and services democratizing business tools and driving the next generation of startups.
“As these trends gain steam in coming decades, they’ll bleed into and fundamentally transform global supply chains.”
An AI startup, for example, doesn’t need its own server farm to train its software and provide service to customers. The team can rent computing power from Amazon Web Services. This platform model enables small teams to do big things on the cheap. And it isn’t just in software. Similar trends are happening in hardware too. Makers can 3D print or mill industrial grade prototypes of physical stuff in a garage or local maker space and send or sell designs to anyone with a laptop and 3D printer via online platforms.
These are early examples of trends that are likely to gain steam in coming decades, and as they do, they’ll bleed into and fundamentally transform global supply chains.
The old model is a series of large, connected bits of centralized infrastructure. It makes sense to mine, farm, or manufacture in bulk when the conditions, resources, machines, and expertise to do so exist in particular places and are specialized and expensive. The new model, however, enables smaller-scale production that is local and decentralized.
To see this more clearly, let’s take a look at the technological trends at work in the three biggest contributors to the global trade of physical stuff—products, energy, and food.
Products
3D printing (additive manufacturing) allows for distributed manufacturing near the point of consumption, eliminating or reducing supply chains and factory production lines.
This is possible because product designs are no longer made manifest in assembly line parts like molds or specialized mechanical tools. Rather, designs are digital and can be called up at will to guide printers. Every time a 3D printer prints, it can print a different item, so no assembly line needs to be set up for every different product. 3D printers can also print an entire finished product in one piece or reduce the number of parts of larger products, such as engines. This further lessens the need for assembly.
Because each item can be customized and printed on demand, there is no cost benefit from scaling production. No inventories. No shipping items across oceans. No carbon emissions transporting not only the final product but also all the parts in that product shipped from suppliers to manufacturer. Moreover, 3D printing builds items layer by layer with almost no waste, unlike “subtractive manufacturing” in which an item is carved out of a piece of metal, and much or even most of the material can be waste.
Finally, 3D printing is also highly scalable, from inexpensive 3D printers (several hundred dollars) for home and school use to increasingly capable and expensive printers for industrial production. There are also 3D printers being developed for printing buildings, including houses and office buildings, and other infrastructure.
3D printing of finished products is only now getting underway, and there are still challenges to overcome, such as speed, quality, and range of materials. But as methods and materials advance, the technology will likely spread into more manufactured goods.
Ultimately, 3D printing will be a general purpose technology that involves many different types of printers and materials—such as plastics, metals, and even human cells—to produce a huge range of items, from human tissue and potentially human organs to household items and a range of industrial items for planes, trains, and automobiles.
Energy
Renewable energy production is located at or relatively near the point of consumption.
Although electricity generated by solar, wind, geothermal, and other renewable sources can of course be transmitted over longer distances, it is mostly generated and consumed locally or regionally. It is not transported around the world in tankers, ships, and pipelines like petroleum, coal, and natural gas.
Moreover, the fuel itself is free—forever. There is no global price on sun or wind. The people relying on solar and wind power need not worry about price volatility and potential disruption of fuel supplies as a result of political, market, or natural causes.
Renewables have their problems, of course, including intermittency and storage, and currently they work best as complements to other sources, especially natural gas power plants, which, unlike coal plants, can be turned on and off and modulated like a gas stove, and which emit roughly half the carbon of coal.
Within the next few decades, it is likely the intermittency and storage problems will be solved or greatly mitigated. In addition, unlike coal and natural gas power plants, solar is scalable, from solar panels on individual homes, or even cars and other devices, to large-scale solar farms. Solar can be connected with microgrids and even allow for autonomous electricity generation by homes, commercial buildings, and communities.
It may be several decades before fossil fuel power plants can be phased out, but the cost of renewables has been falling exponentially and, in places, is already competitive with coal and gas. Solar especially is expected to continue to increase in efficiency and decline in cost.
Given these trends in cost and efficiency, renewables should become decisively cheaper over time: if solar’s fuel is free while coal and gas fuel must be purchased continually, at some point the former becomes cheaper than the latter. Renewables are already cheaper if externalities such as carbon emissions and the environmental degradation involved in obtaining and transporting fuel are included.
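The crossover logic is simple enough to sketch as a back-of-the-envelope calculation. The capital and fuel figures below are purely illustrative, not real market prices:

```python
# Back-of-the-envelope break-even between a free-fuel plant (solar) and a
# fuel-burning plant (gas). All dollar figures are illustrative only.

def cumulative_cost(capex, fuel_cost_per_year, years):
    """Total spend after a given number of years (discounting ignored)."""
    return capex + fuel_cost_per_year * years

def breakeven_year(capex_a, fuel_a, capex_b, fuel_b, horizon=60):
    """First year plant A's cumulative cost drops to or below plant B's."""
    for year in range(1, horizon + 1):
        if cumulative_cost(capex_a, fuel_a, year) <= cumulative_cost(capex_b, fuel_b, year):
            return year
    return None

# Hypothetical figures per kW of capacity: solar costs more up front,
# but gas keeps paying for fuel every year.
solar_capex, solar_fuel = 1500, 0
gas_capex, gas_fuel = 900, 60

print(breakeven_year(solar_capex, solar_fuel, gas_capex, gas_fuel))  # 10
```

With these made-up numbers, solar’s total outlay matches gas’s after ten years and is cheaper ever after; pricing in carbon on the gas fuel term only pulls the crossover earlier.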
Food
Food can be increasingly produced near the point of consumption with vertical farms and eventually with printed food and even printed or cultured meat.
These sources bring production of food very near the consumer, so transportation costs, which can be a significant portion of the cost of food to consumers, are greatly reduced. The use of land and water is reduced by 95% or more, and energy use is cut by nearly 50%. In addition, fertilizers and pesticides are not required, and crops can be grown 365 days a year, whatever the weather, and in more climates and latitudes than is possible today.
While it may not be practical to grow grains, corn, and other such crops in vertical farms, many vegetables and fruits can flourish in such facilities. In addition, cultured or printed meat is being developed—the big challenge is scaling up and reducing cost—that is based on cells from real animals without slaughtering the animals themselves.
There are currently some 70 billion animals being raised for food around the world, and livestock alone accounts for about 15% of global emissions. Moreover, livestock places huge demands on land, water, and energy. Like vertical farms, cultured or printed meat could be produced with no more land use than a brewery and with far less water and energy.
A More Democratic Economy Goes Bottom Up
This is a very brief introduction to the technologies that can bring “production-at-the-point-of-consumption” of products, energy, and food to cities and regions.
What does this future look like? Here’s a simplified example.
Imagine a universal manufacturing facility with hundreds of 3D printers printing tens of thousands of different products on demand for the local community—rather than assembly lines in China making tens of thousands of copies of the same product, which must then be shipped all over the world because no local market can absorb them all.
Nearby, a vertical farm and cultured meat facility produce much of tomorrow night’s dinner. These facilities would be powered by local or regional wind and solar. Depending on need and quality, some infrastructure and machinery, like solar panels and 3D printers, would live in these facilities and some in homes and businesses.
The facilities could be owned by a large global corporation—but still locally produce goods—or they could be franchised or even owned and operated independently by the local population. Upkeep and management at each would provide jobs for nearby communities. Eventually, not only would global trade of parts and products diminish, but even required supplies of raw materials and feedstock would decline, since there would be less waste in production and many materials would be recycled once acquired.
“Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.”
This model suggests a shift toward a “bottom up” economy that is more democratic, locally controlled, and likely to generate more local jobs.
The global trends in democratization of technology make the vision technologically plausible. Much of this technology already exists and is improving and scaling while exponentially decreasing in cost to become available to almost anyone, anywhere.
This includes not only access to key technologies, but also to education through digital platforms available globally. Online courses are available for free, ranging from advanced physics, math, and engineering to skills training in 3D printing, solar installations, and building vertical farms. Social media platforms can enable local and global collaboration and sharing of knowledge and best practices.
These new communities of producers can be the foundation for new forms of democratic governance as they recognize and “capitalize” on the reality that control of the means of production can translate to political power. More jobs and local control could weaken populist, anti-globalization political forces as people recognize they could benefit from the positive aspects of globalization and international cooperation and connectedness while diminishing the impact of globalization’s downsides.
There are powerful vested interests that stand to lose in such a global structural shift. But this vision builds on trends that are already underway and are gaining momentum. Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.
This article was originally posted on Open Democracy (CC BY-NC 4.0). The version above was edited with the author for length and includes additions. Read the original article on Open Democracy.
* See Jeremy Rifkin, The Zero Marginal Cost Society, (New York: Palgrave Macmillan, 2014), Part II, pp. 69-154.
Image Credit: Sergey Nivens / Shutterstock.com
#431238 AI Is Easy to Fool—Why That Needs to ...
Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would look for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly predictive of a particular object.
“An attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human.”
What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
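A minimal sketch of that query-only loop, using an invented eight-“pixel” image and a hidden linear scoring rule standing in for a trained network. This greedy probe is far cruder than the published attacks, and its changes are not imperceptible, but it shows how an attacker can flip a label while only ever calling the model:

```python
# A toy black-box probe in the spirit of the attack described above. The
# attacker can only query the model for a score and greedily nudges
# individual "pixels" until the label flips. The eight-pixel "image" and
# the hidden scoring rule are invented stand-ins for a real network.

def model_score(image):
    # Hidden decision rule the attacker never inspects.
    weights = [0.9, -0.4, 0.1, 0.7, -0.2, 0.3, -0.8, 0.5]
    return sum(w * p for w, p in zip(weights, image))

def classify(image):
    return "stop_sign" if model_score(image) > 0 else "other"

def black_box_attack(image, step=0.1, max_rounds=200):
    """Flip the label using only query access, one small nudge per round."""
    start_label = classify(image)
    adv = list(image)
    for _ in range(max_rounds):
        best = None
        # Probe every pixel in both directions; keep whichever nudge
        # pushes the score furthest toward the opposite class.
        for i in range(len(adv)):
            for delta in (step, -step):
                trial = list(adv)
                trial[i] += delta
                if best is None or model_score(trial) < model_score(best):
                    best = trial
        adv = best
        if classify(adv) != start_label:
            return adv
    return adv

original = [1.0, 0.2, 0.5, 0.8, 0.1, 0.4, 0.3, 0.6]
adversarial = black_box_attack(original)
print(classify(original), classify(adversarial))  # stop_sign other
```

Unlike the real attacks, this greedy walk concentrates its edits in whichever pixel moves the score fastest rather than spreading an imperceptible perturbation across the whole image, but the principle—learning what the model attends to purely from its answers—is the same.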
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.
“Bypassing cybersecurity defenses is one of the more worrying and probable near-term applications for this approach.”
Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
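A toy version of that evolutionary loop can make the idea concrete. Here a substring-matching “detector” stands in for the antivirus score and a length check stands in for “the payload still works”; the search is a simple mutate-and-select hill climb, not the firm’s actual system:

```python
import random

# Toy malware-evasion loop: mutate a payload until a scoring "detector"
# passes it while a functionality check still holds. The signature-based
# detector and the byte payload are invented stand-ins, not real malware.

random.seed(0)

SIGNATURE = b"EVIL"

def detector_score(payload):
    # Pretend AV score: how much of the signature appears, in order.
    score, i = 0, 0
    for byte in payload:
        if i < len(SIGNATURE) and byte == SIGNATURE[i]:
            score, i = score + 1, i + 1
    return score  # 4 = confident detection, 0 = looks clean

def still_functional(payload):
    # Stand-in for a sandbox check that the payload still runs.
    return len(payload) == len(b"EVILCODE")

def evolve(payload, generations=200, population=20):
    best = bytearray(payload)
    for _ in range(generations):
        if detector_score(best) == 0 and still_functional(best):
            break
        # Mutate: each child is the current best with one random byte flipped.
        offspring = []
        for _ in range(population):
            child = bytearray(best)
            child[random.randrange(len(child))] ^= 0xFF
            offspring.append(child)
        # Select: adopt the best-scoring child if it is no worse.
        challenger = min(offspring, key=detector_score)
        if detector_score(challenger) <= detector_score(best):
            best = challenger
    return bytes(best)

evaded = evolve(b"EVILCODE")
print(detector_score(evaded))  # far below the original score of 4
```

The real attack used the same feedback structure—score the candidate, keep the variants that score lower while preserving function—against a vastly more complex scoring model.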
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com
#431189 Researchers Develop New Tech to Predict ...
It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says Professor Francis L Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Blood Diamond
Martin is the principal investigator on a project that has developed a technique to analyze blood samples to diagnose Alzheimer’s disease and distinguish between other forms of dementia.
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
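The sensitivity and specificity Martin refers to—the fraction of true cases the test flags and the fraction of non-cases it clears—can be computed from any validation run. A minimal sketch, with invented labels rather than the study’s data:

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate)
# from a validation run. Labels and predictions below are invented.

def sensitivity_specificity(y_true, y_pred, positive="AD"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = ["AD", "AD", "AD", "AD", "healthy", "healthy", "healthy", "healthy"]
y_pred = ["AD", "AD", "AD", "healthy", "healthy", "healthy", "AD", "healthy"]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```

A diagnostic test trades these two numbers off against each other; reporting both, as Martin’s group does, is what makes the validation meaningful.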
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Crystal Ball
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brain of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve the research in Alzheimer’s disease by ensuring those patients with the highest probability to develop dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One startup called Darmiyan out of San Francisco claims its proprietary software can pick up signals before the onset of Alzheimer’s disease by up to 15 years.
Darmiyan didn’t respond to a request for comment for this article. Venture Beat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to Venture Beat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told Venture Beat.
Conclusions
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 have been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If interested in seeing this medical research progress, you can help by signing up on the Brain Health Registry to improve the quality of clinical trials.
Image Credit: rudall30 / Shutterstock.com
#431175 Servosila introduces Mobile Robots ...
Servosila has introduced a new member of its family of “Engineer” robots: a UGV called “Radio Engineer.” This new variant of the well-known backpack-transportable robot features a Software Defined Radio (SDR) payload module integrated into the robotic vehicle.
“Several of our key customers had asked us to enable Electronic Warfare (EW) or Cognitive Radio applications in our robots,” says a spokesman for the company. “By integrating a Software Defined Radio (SDR) module into our robotic platforms, we cater to both requirements. Radio spectrum analysis, radio signal detection, jamming, and radio relay are important features for EOD robots such as ours. Servosila continues to serve its customers by pushing the boundaries of what their Servosila robots can do. Our partners in the research world and academia will also greatly benefit from the new functionality, which gives them more means of achieving their research goals.”
Photo Credit: Servosila – www.servosila.com
Coupling a programmable mobile robot with a software-defined radio creates a powerful platform for developing innovative applications that mix mobility and artificial intelligence with modern radio technologies. The new robotic radio applications include localized frequency hopping pattern analysis, OFDM waveform recognition, outdoor signal triangulation, cognitive mesh networking, automatic area search for radio emitters, passive or active mobile robotic radars, mobile base stations, mobile radio scanners, and many others.
The robot’s rotating head, with mounts for external antennae, acts as a pan-and-tilt device, enabling various scanning and tracking applications. The neck of the robotic head is equipped with a pair of highly accurate Servosila-made servos with a pointing precision of 3.0 angular minutes, meaning the robot can point its antennae with unprecedented accuracy.
Researchers and academia can benefit from the platform’s support for GnuRadio, an open source software framework for developing SDR applications. An on-board Intel i7 computer capable of executing OpenCL code is internally connected to the SDR payload module. This makes it possible to execute most existing GnuRadio applications directly on the robot’s on-board computer. Other sensors of the robot, such as a GPS sensor, an IMU, or a thermal vision camera, contribute to sensor fusion algorithms.
Since Servosila “Engineer” mobile robots are primarily designed for outdoor use, the SDR module is fully enclosed in the hardened body of the robot, which protects it from dust, rain, snow, and impacts with obstacles while the robot is on the move. The robot and its SDR payload module are both powered by an on-board battery, making the entire robotic radio platform independent of external power supplies.
Servosila plans to start shipping the SDR-equipped robots to international customers in October 2017.
Web: https://www.servosila.com
YouTube: https://www.youtube.com/user/servosila/videos
About the Company
Servosila is a robotics technology company that designs, produces and markets a range of mobile robots, robotic arms, servo drives, harmonic reduction gears, robotic control systems as well as software packages that make the robots intelligent. Servosila provides consulting, training and operations support services to various customers around the world. The company markets its products and services directly or through a network of partners who provide tailored and localized services that meet specific procurement, support or operational needs.
Press Release above is by: Servosila
The post Servosila introduces Mobile Robots equipped with Software Defined Radio (SDR) payloads appeared first on Roboticmagazine.
#431165 Intel Jumps Into Brain-Like Computing ...
The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain's network of neurons and the connections between them, called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing's proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips serve as both processor and memory, removing the need to shuttle data back and forth between separate units, as traditional chips do. Each neuron also only needs to be powered while it's firing.
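The event-driven idea behind that efficiency can be sketched in a few lines. Below is a toy leaky integrate-and-fire neuron (a standard textbook model, not Loihi's actual circuit): its state is updated only when an input spike arrives, and it stays idle, consuming nothing, between events.

```python
import math

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron, updated only when events arrive."""
    def __init__(self, threshold=1.0, tau=20.0):
        self.v = 0.0             # membrane potential (the neuron's own "memory")
        self.threshold = threshold
        self.tau = tau           # leak time constant, in ms
        self.last_t = 0.0

    def receive(self, t, weight):
        # Apply the leak for the idle interval since the last event...
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        # ...then integrate the incoming spike.
        self.v += weight
        if self.v >= self.threshold:  # fire and reset
            self.v = 0.0
            return True               # output spike propagates onward
        return False

# Feed one weak input spike every 5 ms; the neuron fires only occasionally.
n = LIFNeuron()
spikes = [t for t in range(0, 100, 5) if n.receive(t, 0.3)]
```

Because computation happens only at spike times and the state lives inside the neuron, there is no continuous clocked data movement between a separate processor and memory, which is the efficiency argument in miniature.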
At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally, deep learning works by training a neural network on giant datasets to create a model that is then applied to new data. The Loihi chip will combine training and inference on the same chip, allowing it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately re-trained.
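The difference between that train-then-deploy workflow and on-the-fly learning can be shown with a deliberately simple online learner (a perceptron sketch for illustration, not Intel's algorithm): prediction and weight updates interleave on every sample, so the model tracks a target even when the underlying concept drifts mid-stream.

```python
# Illustrative online learner: inference and training interleave on every
# sample, so the model adapts when the labels flip partway through the stream.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def update(w, x, y, lr=0.1):
    if predict(w, x) != y:  # learn only from mistakes
        return [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

w = [0.0, 0.0]
stream = [([1, 0], 1), ([0, 1], -1)] * 20   # concept A
stream += [([1, 0], -1), ([0, 1], 1)] * 20  # labels flip: concept B
errors = 0
for x, y in stream:
    errors += predict(w, x) != y  # inference...
    w = update(w, x, y)           # ...and training, on the same sample
```

The learner makes a handful of mistakes right after the flip, then recovers without any offline re-training pass, which is the behavior Intel is claiming at the hardware level.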
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes summarizes the hurdles facing neuromorphic computing well in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi won’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel