Tag Archives: smartphone
#431958 The Next Generation of Cameras Might See ...
You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.
The latest camera research is shifting away from increasing the number of megapixels and toward fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.
This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like seeing through fog, inside the human body, and even behind walls.
Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.
To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
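To make the principle concrete, here is a minimal numerical sketch of single-pixel imaging, assuming random binary illumination patterns and a noiseless detector; the scene size, pattern set, and solver are illustrative choices rather than a description of any particular research system.

```python
import numpy as np

rng = np.random.default_rng(0)

side = 16                       # scene resolution: 16 x 16 pixels
n_pixels = side * side
scene = rng.random(n_pixels)    # stand-in for the real, unknown scene

# One random binary illumination pattern per measurement.
patterns = rng.integers(0, 2, size=(n_pixels, n_pixels)).astype(float)

# The single pixel records one number per pattern: the total reflected
# light, i.e. the pattern-weighted sum over the whole scene.
measurements = patterns @ scene

# Reconstruction: solve the linear system  patterns @ image = measurements.
# With fewer patterns than pixels, compressive-sensing solvers are used instead.
reconstruction, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)

print(np.allclose(reconstruction, scene))   # True: the scene is recovered
```

In practice the interesting regime uses far fewer patterns than pixels, trading measurement time for heavier computation during reconstruction.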
Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.
These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.
It is even possible to capture images using light particles that have never interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same in darkness as in light?
Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene, of which traditional techniques collect only a small part.
This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
The next generation of cameras will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.
Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.
All of these techniques rely on combining images with models that explain how light travels through or around different substances.
Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.
Single photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos with incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.
Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.
This article was originally published on The Conversation. Read the original article.
Image Credit: Sylvia Adams / Shutterstock.com
#431603 What We Can Learn From the Second Life ...
For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
Google Glass, the original smart glasses, went on sale in 2013 for $1,500 as a prototype for the company’s acolytes, around 8,000 early adopters. Users could control the glasses with a touchpad or, after activating them by tilting the head back, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, regardless of whether it’s because we’re not used to them, or if it’s a more permanent feature, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with the ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta-testing a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years Google Glass was dormant, the device is now roaring back to life and commercially available; the relaunch got underway in earnest in July 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
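As a rough illustration of the kind of image recognition such a feature depends on, here is a short sketch using a publicly available pretrained network from the torchvision library (API as of recent versions) to label a photo. The file name is hypothetical, and a production system like Lens layers cloud lookups, text recognition, and translation on top of this basic classification step.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)   # small pretrained classifier
model.eval()

preprocess = weights.transforms()          # resize, crop, normalize
image = Image.open("street_sign.jpg")      # hypothetical input photo
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][top_class.item()]
print(f"{label}: {top_prob.item():.1%}")   # label that a voice assistant could read out
```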
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com
#431343 How Technology Is Driving Us Toward Peak ...
At some point in the future—and in some ways we are already seeing this—the amount of physical stuff moving around the world will peak and begin to decline. By “stuff,” I am referring to liquid fuels, coal, containers on ships, food, raw materials, products, etc.
New technologies are moving us toward “production-at-the-point-of-consumption” of energy, food, and products with reduced reliance on a global supply chain.
The trade of physical stuff has been central to globalization as we’ve known it. So, this declining movement of stuff may signal we are approaching “peak globalization.”
To be clear, even as the movement of stuff may slow, if not decline, the movement of people, information, data, and ideas around the world is growing exponentially and is likely to continue doing so for the foreseeable future.
Peak globalization may provide a pathway to preserving the best of globalization and global interconnectedness, enhancing economic and environmental sustainability, and empowering individuals and communities to strengthen democracy.
At the same time, some of the most troublesome aspects of globalization may be eased, including massive financial transfers to energy producers and loss of jobs to manufacturing platforms like China. This shift could bring relief to the “losers” of globalization and ease populist, nationalist political pressures that are roiling the developed countries.
That is quite a claim, I realize. But let me explain the vision.
New Technologies and Businesses: Digital, Democratized, Decentralized
The key factors moving us toward peak globalization and making it economically viable are new technologies and innovative businesses and business models allowing for “production-at-the-point-of-consumption” of energy, food, and products.
Exponential technologies are enabling these trends by sharply reducing the “cost of entry” for creating businesses. Driven by Moore’s Law, powerful technologies have become available to almost anyone, anywhere.
Beginning with the microchip, which has had a 100-billion-fold improvement in 40 years—10,000 times faster and 10 million times cheaper—the marginal cost of producing almost everything that can be digitized has fallen toward zero.
A hard copy of a book, for example, will always entail the cost of materials, printing, shipping, etc., even if the marginal cost falls as more copies are produced. But the marginal cost of a second digital copy, such as an e-book, streaming video, or song, is nearly zero as it is simply a digital file sent over the Internet, the world’s largest copy machine.* Books are one product, but there are literally hundreds of thousands of dollars in once-physical, separate products jammed into our devices at little to no cost.
A smartphone alone provides half the human population with access to artificial intelligence—from Siri, search, and translation to cloud computing—as well as geolocation, free global video calls, digital photography and free uploads to social network sites, free access to global knowledge, a million apps for a huge variety of purposes, and many other capabilities that were unavailable to most people only a few years ago.
As powerful as dematerialization and demonetization are for private individuals, they’re having a stronger effect on businesses. A small team can access expensive, advanced tools that before were only available to the biggest organizations. Foundational digital platforms, such as the internet and GPS, and the platforms built on top of them by the likes of Google, Apple, Amazon, and others provide the connectivity and services democratizing business tools and driving the next generation of new startups.
“As these trends gain steam in coming decades, they’ll bleed into and fundamentally transform global supply chains.”
An AI startup, for example, doesn’t need its own server farm to train its software and provide service to customers. The team can rent computing power from Amazon Web Services. This platform model enables small teams to do big things on the cheap. And it isn’t just in software. Similar trends are happening in hardware too. Makers can 3D print or mill industrial grade prototypes of physical stuff in a garage or local maker space and send or sell designs to anyone with a laptop and 3D printer via online platforms.
These are early examples of trends that are likely to gain steam in coming decades, and as they do, they’ll bleed into and fundamentally transform global supply chains.
The old model is a series of large, connected bits of centralized infrastructure. It makes sense to mine, farm, or manufacture in bulk when the conditions, resources, machines, and expertise to do so exist in particular places and are specialized and expensive. The new model, however, enables smaller-scale production that is local and decentralized.
To see this more clearly, let’s take a look at the technological trends at work in the three biggest contributors to the global trade of physical stuff—products, energy, and food.
Products
3D printing (additive manufacturing) allows for distributed manufacturing near the point of consumption, eliminating or reducing supply chains and factory production lines.
This is possible because product designs are no longer made manifest in assembly line parts like molds or specialized mechanical tools. Rather, designs are digital and can be called up at will to guide printers. Every time a 3D printer prints, it can print a different item, so no assembly line needs to be set up for every different product. 3D printers can also print an entire finished product in one piece or reduce the number of parts of larger products, such as engines. This further lessens the need for assembly.
Because each item can be customized and printed on demand, there is no cost benefit from scaling production. No inventories. No shipping items across oceans. No carbon emissions transporting not only the final product but also all the parts in that product shipped from suppliers to manufacturer. Moreover, 3D printing builds items layer by layer with almost no waste, unlike “subtractive manufacturing” in which an item is carved out of a piece of metal, and much or even most of the material can be waste.
Finally, 3D printing is also highly scalable, from inexpensive 3D printers (several hundred dollars) for home and school use to increasingly capable and expensive printers for industrial production. There are also 3D printers being developed for printing buildings, including houses and office buildings, and other infrastructure.
The technology for printing finished products is only now getting underway, and there are still challenges to overcome, such as speed, quality, and range of materials. But as methods and materials advance, 3D printing will likely creep into more manufactured goods.
Ultimately, 3D printing will be a general purpose technology that involves many different types of printers and materials—such as plastics, metals, and even human cells—to produce a huge range of items, from human tissue and potentially human organs to household items and a range of industrial items for planes, trains, and automobiles.
Energy
Renewable energy production is located at or relatively near the source of consumption.
Although electricity generated by solar, wind, geothermal, and other renewable sources can of course be transmitted over longer distances, it is mostly generated and consumed locally or regionally. It is not transported around the world in tankers, ships, and pipelines like petroleum, coal, and natural gas.
Moreover, the fuel itself is free—forever. There is no global price on sun or wind. The people relying on solar and wind power need not worry about price volatility and potential disruption of fuel supplies as a result of political, market, or natural causes.
Renewables have their problems, of course, including intermittency and storage, and currently they work best as a complement to other sources, especially natural gas power plants, which, unlike coal plants, can be turned on or off and modulated like a gas stove, and which produce half the carbon emissions of coal.
Within the next few decades, it is likely the intermittency and storage problems will be solved or greatly mitigated. In addition, unlike coal and natural gas power plants, solar is scalable, from solar panels on individual homes or even cars and other devices, to large-scale solar farms. Solar can be connected with microgrids and even allow for autonomous electricity generation by homes, commercial buildings, and communities.
It may be several decades before fossil fuel power plants can be phased out, but the cost of renewables has been falling exponentially and, in places, renewables are beginning to compete with coal and gas. Solar especially is expected to continue to increase in efficiency and decline in cost.
Given these trends in cost and efficiency, renewables should become clearly cheaper over time: if solar’s fuel is free while coal and gas must be continually purchased, at some point the former becomes cheaper than the latter. Renewables are already cheaper if externalities such as carbon emissions and the environmental degradation involved in obtaining and transporting the fuel are included.
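The “free fuel” argument can be made concrete with a rough levelized-cost-of-electricity (LCOE) comparison. The sketch below uses purely illustrative capital, operating, and fuel cost assumptions (not real market figures) to show how a plant with zero fuel cost can undercut one that must keep buying fuel over its lifetime.

```python
def lcoe(capex_per_kw, fixed_om_per_kw_yr, fuel_per_mwh, capacity_factor,
         lifetime_years=25, discount_rate=0.06):
    """Levelized cost in $/MWh: discounted lifetime costs / discounted energy."""
    mwh_per_kw_yr = 8760 * capacity_factor / 1000          # energy per kW per year
    disc = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
    energy = sum(mwh_per_kw_yr * d for d in disc)
    costs = capex_per_kw + sum(
        (fixed_om_per_kw_yr + fuel_per_mwh * mwh_per_kw_yr) * d for d in disc
    )
    return costs / energy

# Illustrative assumptions only: solar pays upfront, gas keeps paying for fuel.
solar = lcoe(capex_per_kw=1000, fixed_om_per_kw_yr=15, fuel_per_mwh=0,
             capacity_factor=0.25)
gas = lcoe(capex_per_kw=900, fixed_om_per_kw_yr=25, fuel_per_mwh=30,
           capacity_factor=0.55)
print(f"solar ~ ${solar:.0f}/MWh, gas ~ ${gas:.0f}/MWh")
```

Change the fuel price or discount rate and the crossover point moves, which is exactly why falling solar costs matter so much to this argument.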
Food
Food can be increasingly produced near the point of consumption with vertical farms and eventually with printed food and even printed or cultured meat.
These sources bring production of food very near the consumer, so transportation costs, which can be a significant portion of the cost of food to consumers, are greatly reduced. The use of land and water is reduced by 95% or more, and energy use is cut by nearly 50%. In addition, fertilizers and pesticides are not required, and crops can be grown 365 days a year, whatever the weather, and in more climates and latitudes than is possible today.
While it may not be practical to grow grains, corn, and other such crops in vertical farms, many vegetables and fruits can flourish in such facilities. In addition, cultured or printed meat is being developed—the big challenge is scaling up and reducing cost—that is based on cells from real animals without slaughtering the animals themselves.
There are currently some 70 billion animals being raised for food around the world, and livestock alone accounts for about 15% of global emissions. Moreover, livestock places huge demands on land, water, and energy. Like vertical farms, cultured or printed meat could be produced with no more land use than a brewery and with far less water and energy.
A More Democratic Economy Goes Bottom Up
This is a very brief introduction to the technologies that can bring “production-at-the-point-of-consumption” of products, energy, and food to cities and regions.
What does this future look like? Here’s a simplified example.
Imagine a universal manufacturing facility with hundreds of 3D printers printing tens of thousands of different products on demand for the local community—rather than assembly lines in China making tens of thousands of the same product that have to be shipped all over the world since no local market can absorb all of the same product.
Nearby, a vertical farm and cultured meat facility produce much of tomorrow night’s dinner. These facilities would be powered by local or regional wind and solar. Depending on need and quality, some infrastructure and machinery, like solar panels and 3D printers, would live in these facilities and some in homes and businesses.
The facilities could be owned by a large global corporation—but still locally produce goods—or they could be franchised or even owned and operated independently by the local population. Upkeep and management at each would provide jobs for nearby communities. Eventually, not only would global trade of parts and products diminish, but even the required supplies of raw materials and feedstock would decline, since there would be less waste in production and many materials would be recycled once acquired.
“Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.”
This model suggests a shift toward a “bottom up” economy that is more democratic, locally controlled, and likely to generate more local jobs.
The global trends in democratization of technology make the vision technologically plausible. Much of this technology already exists and is improving and scaling while exponentially decreasing in cost to become available to almost anyone, anywhere.
This includes not only access to key technologies, but also to education through digital platforms available globally. Online courses are available for free, ranging from advanced physics, math, and engineering to skills training in 3D printing, solar installations, and building vertical farms. Social media platforms can enable local and global collaboration and sharing of knowledge and best practices.
These new communities of producers can be the foundation for new forms of democratic governance as they recognize and “capitalize” on the reality that control of the means of production can translate to political power. More jobs and local control could weaken populist, anti-globalization political forces as people recognize they could benefit from the positive aspects of globalization and international cooperation and connectedness while diminishing the impact of globalization’s downsides.
There are powerful vested interests that stand to lose in such a global structural shift. But this vision builds on trends that are already underway and are gaining momentum. Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.
This article was originally posted on Open Democracy (CC BY-NC 4.0). The version above was edited with the author for length and includes additions. Read the original article on Open Democracy.
* See Jeremy Rifkin, The Zero Marginal Cost Society, (New York: Palgrave Macmillan, 2014), Part II, pp. 69-154.
Image Credit: Sergey Nivens / Shutterstock.com
#431238 AI Is Easy to Fool—Why That Needs to ...
Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would look for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly accurate at predicting a particular object.
“An attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human.”
What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
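For readers who want to see the mechanics, here is a minimal sketch of one well-known white-box method, the fast gradient sign method (FGSM), written with PyTorch. It is a generic illustration of how a small, carefully chosen perturbation is computed, not the specific technique used in any of the studies described here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, true_labels, epsilon=0.01):
    """Return a slightly perturbed copy of `images` that the model is more
    likely to misclassify, while staying visually almost identical."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), true_labels)   # how "right" the model is
    loss.backward()
    # Nudge every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (assuming `classifier`, a batch of images, and their labels exist):
# adv_batch = fgsm_attack(classifier, batch, labels)
```

The key point is that the perturbation is bounded by epsilon per pixel, which is why the altered image looks unchanged to a person while the classifier’s output can flip.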
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images, when photographed on a smartphone, successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.
“Bypassing cybersecurity defenses is one of the more worrying and probable near-term applications for this approach.”
Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
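The firm did not publish its code, but the general shape of such a query-based evasion loop, which also underlies the earlier black-box image attack, looks roughly like the sketch below, simplified here to a single-candidate hill climb rather than a full genetic algorithm with a population and crossover. The score, validity, and mutation functions are placeholders for the attacker’s own tooling, not real anti-malware APIs.

```python
def evade(candidate, score_fn, is_valid_fn, mutate_fn,
          threshold=0.5, max_iters=1000):
    """Generic black-box evasion loop: mutate a candidate, query the
    detector's score, and keep only changes that lower the score while
    passing a functionality check."""
    best, best_score = candidate, score_fn(candidate)
    for _ in range(max_iters):
        mutant = mutate_fn(best)           # small random change
        if not is_valid_fn(mutant):        # the candidate must keep working
            continue
        score = score_fn(mutant)           # query the detector (black box)
        if score < best_score:             # keep improvements only
            best, best_score = mutant, score
        if best_score < threshold:         # slipped past the detector
            break
    return best, best_score
```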
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
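As a toy illustration of the poisoning idea, the sketch below flips a fraction of training labels before fitting a simple classifier and then measures how often the poisoned model disagrees with a cleanly trained one. scikit-learn is used purely as a stand-in for whatever retraining pipeline a defender runs; the data and flip rate are made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # clean ground-truth labels

poisoned = y.copy()
flip = rng.choice(len(y), size=50, replace=False)    # attacker flips 10% of labels
poisoned[flip] = 1 - poisoned[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X, y)
poisoned_model = LogisticRegression(max_iter=1000).fit(X, poisoned)

# The poisoned model now disagrees with the clean one on some inputs,
# which is exactly the kind of drift an attacker hopes to induce.
disagreement = (clean_model.predict(X) != poisoned_model.predict(X)).mean()
print(f"decision disagreement after poisoning: {disagreement:.1%}")
```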
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com