The Next Generation of Cameras Might See ...
You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.
The latest camera research is shifting away from increasing the number of megapixels and toward fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing, where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.
This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense anymore. Instead we will use light detectors that only a few years ago we would never have considered useful for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.
Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.
To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
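To make the principle concrete, here is a toy simulation in Python. It is a sketch under simplifying assumptions, not a description of any real device: it uses perfectly orthogonal ±1 Hadamard illumination patterns and a noiseless detector, whereas a real projector cannot emit negative light and instead projects pairs of complementary bright/dark patterns and subtracts the two readings.

```python
# Toy single-pixel camera: recover an N x N scene from a sequence of
# one-number measurements, one per illumination pattern.
import numpy as np
from scipy.linalg import hadamard

N = 16                               # scene resolution (N x N pixels)
scene = np.zeros((N, N))
scene[4:12, 6:10] = 1.0              # a bright rectangle to image

# Each row of a Hadamard matrix, reshaped to N x N, is one +1/-1 pattern.
H = hadamard(N * N)                  # (N^2, N^2); rows are orthogonal

# The single pixel records the total reflected light for each pattern,
# i.e. the inner product of that pattern with the scene.
measurements = H @ scene.ravel()

# With orthogonal patterns, summing each pattern weighted by its
# measurement recovers the scene exactly (up to a 1/N^2 scale factor).
reconstruction = (H.T @ measurements / (N * N)).reshape(N, N)
assert np.allclose(reconstruction, scene)
```

In practice, compressed-sensing techniques let such cameras reconstruct a usable image from far fewer patterns than pixels, which is part of what makes the approach attractive at wavelengths where detectors are scarce.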
Clearly the disadvantage of taking a photo this way is that you have to send out lots of illumination spots or patterns to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.
These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.
It is even possible to capture images from light particles that have never interacted with the object we want to photograph. This takes advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same in darkness as in light?
Multi-Sensor Imaging
Single-pixel imaging is one of the simplest innovations in upcoming camera technology and, on the face of it, still relies on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene, of which traditional techniques collect only a small part.
This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera, which collects information about light intensity and direction on the same sensor to produce images that can be refocused after they have been taken.
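That after-the-fact refocusing can be sketched with the textbook “shift-and-add” algorithm: each directional view the sensor records is shifted in proportion to its offset from the central view and then averaged, so objects at a chosen depth line up sharply while everything else blurs. The sketch below is illustrative only; the array shapes and integer shifts are simplifying assumptions, not Lytro’s actual pipeline.

```python
# Textbook "shift-and-add" synthetic refocusing, illustrative only.
import numpy as np

def refocus(views, alpha):
    """Refocus a light field after capture.

    views: (U, V, H, W) array of sub-aperture images, i.e. the slightly
           different views a light-field sensor records in one exposure.
    alpha: refocus parameter; each view is shifted in proportion to its
           offset from the central view, then all views are averaged.
    """
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            # Integer shifts via np.roll keep the sketch short; a real
            # pipeline would interpolate sub-pixel shifts.
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: refocus a random 5 x 5 grid of 64 x 64 views at two depths.
views = np.random.rand(5, 5, 64, 64)
near, far = refocus(views, alpha=1.0), refocus(views, alpha=-1.0)
```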
The next-generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on 16 individual sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.
Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.
All of these techniques rely on combining images with models that explain how light travels through or around different substances.
Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.
Single-photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.
Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.
This article was originally published on The Conversation. Read the original article.
Image Credit: Sylvia Adams / Shutterstock.com
This Self-Driving AI Is Learning to ...
I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software uses data from the camera sensors to feed information to its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
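As a rough illustration of what such a hierarchy might look like, here is a schematic per-frame loop in Python. The structure and every name in it are guesses based on the article’s description, not AImotive’s actual aiDrive code.

```python
# A schematic of the four concurrent activities the article names
# (recognition, location, motion, control) arranged as a per-frame loop.
# Purely illustrative; not AImotive's implementation.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)  # output of recognition
    lane_offset: float = 0.0                       # output of location, metres

def recognize(frame):
    """Detect and classify objects in a camera frame (stub)."""
    return []

def locate(frame):
    """Estimate the car's offset from the lane centre (stub)."""
    return 0.0

def plan_motion(world):
    """Pick a target speed and steering angle from the fused world model."""
    speed = 29.0 if not world.obstacles else 20.0  # m/s; 29 m/s is ~65 mph
    steer = -0.1 * world.lane_offset               # steer back toward centre
    return {"speed": speed, "steer": steer}

def control(command):
    """Translate the plan into throttle, brake, and steering commands (stub)."""
    pass

def drive_one_frame(frame):
    world = WorldModel(obstacles=recognize(frame), lane_offset=locate(frame))
    control(plan_motion(world))
```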
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics, and are not terribly efficient given how power-hungry they are. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different than other animals since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
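Here is a hedged sketch of what that fallback logic might look like in code. The thresholds, names, and data structures are hypothetical illustrations of the idea Kishonti describes, not AImotive’s implementation.

```python
# Steer from the lane markings when the cameras see them clearly;
# otherwise borrow the path of surrounding traffic. All names and
# thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedCar:
    lateral_offset: float  # metres left (-) or right (+) of our camera axis

def lateral_target(left_mark: Optional[float], right_mark: Optional[float],
                   confidence: float, nearby: list) -> float:
    """Return the lateral position (metres) the controller should steer to."""
    if confidence > 0.8 and left_mark is not None and right_mark is not None:
        # Normal case: aim for the midpoint between the two lane markings.
        return 0.5 * (left_mark + right_mark)
    if nearby:
        # Degraded case (shadows, glare): other cars remain visible and are
        # presumably lane-keeping, so average their lateral positions.
        return sum(car.lateral_offset for car in nearby) / len(nearby)
    # No usable cue: hold the current position and hand back to the driver.
    return 0.0

# Example: markings washed out by shadow, two visible cars slightly left.
print(lateral_target(None, None, 0.2, [TrackedCar(-0.3), TrackedCar(-0.1)]))
```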
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com
How Technology Is Driving Us Toward Peak ...
At some point in the future—and in some ways we are already seeing this—the amount of physical stuff moving around the world will peak and begin to decline. By “stuff,” I am referring to liquid fuels, coal, containers on ships, food, raw materials, products, etc.
New technologies are moving us toward “production-at-the-point-of-consumption” of energy, food, and products with reduced reliance on a global supply chain.
The trade of physical stuff has been central to globalization as we’ve known it. So, this declining movement of stuff may signal we are approaching “peak globalization.”
To be clear, even as the movement of stuff may slow, if not decline, the movement of people, information, data, and ideas around the world is growing exponentially and is likely to continue doing so for the foreseeable future.
Peak globalization may provide a pathway to preserving the best of globalization and global interconnectedness, enhancing economic and environmental sustainability, and empowering individuals and communities to strengthen democracy.
At the same time, some of the most troublesome aspects of globalization may be eased, including massive financial transfers to energy producers and loss of jobs to manufacturing platforms like China. This shift could bring relief to the “losers” of globalization and ease populist, nationalist political pressures that are roiling the developed countries.
That is quite a claim, I realize. But let me explain the vision.
New Technologies and Businesses: Digital, Democratized, Decentralized
The key factors moving us toward peak globalization and making it economically viable are new technologies and innovative businesses and business models allowing for “production-at-the-point-of-consumption” of energy, food, and products.
Exponential technologies are enabling these trends by sharply reducing the “cost of entry” for creating businesses. Driven by Moore’s Law, powerful technologies have become available to almost anyone, anywhere.
Beginning with the microchip, which in 40 years has become roughly 10,000 times faster and 10 million times cheaper (a combined 100-billion-fold improvement, since 10,000 × 10 million = 100 billion), the marginal cost of producing almost everything that can be digitized has fallen toward zero.
A hard copy of a book, for example, will always entail the cost of materials, printing, shipping, etc., even if the marginal cost falls as more copies are produced. But the marginal cost of a second digital copy, such as an e-book, streaming video, or song, is nearly zero, as it is simply a digital file sent over the Internet, the world’s largest copy machine.* Books are one product, but hundreds of thousands of dollars’ worth of once-physical, separate products are jammed into our devices at little to no cost.
A smartphone alone provides half the human population with access to artificial intelligence (from Siri, search, and translation to cloud computing), geolocation, free global video calls, digital photography with free uploads to social networks, free access to global knowledge, a million apps for a huge variety of purposes, and many other capabilities that were unavailable to most people only a few years ago.
As powerful as dematerialization and demonetization are for private individuals, they’re having a stronger effect on businesses. A small team can access expensive, advanced tools that before were only available to the biggest organizations. Foundational digital platforms, such as the internet and GPS, and the platforms built on top of them by the likes of Google, Apple, Amazon, and others provide the connectivity and services democratizing business tools and driving the next generation of new startups.
An AI startup, for example, doesn’t need its own server farm to train its software and provide service to customers. The team can rent computing power from Amazon Web Services. This platform model enables small teams to do big things on the cheap. And it isn’t just in software. Similar trends are happening in hardware too. Makers can 3D print or mill industrial-grade prototypes of physical stuff in a garage or local makerspace and send or sell designs to anyone with a laptop and 3D printer via online platforms.
These are early examples of trends that are likely to gain steam in coming decades, and as they do, they’ll bleed into and fundamentally transform global supply chains.
The old model is a series of large, connected bits of centralized infrastructure. It makes sense to mine, farm, or manufacture in bulk when the conditions, resources, machines, and expertise to do so exist in particular places and are specialized and expensive. The new model, however, enables smaller-scale production that is local and decentralized.
To see this more clearly, let’s take a look at the technological trends at work in the three biggest contributors to the global trade of physical stuff—products, energy, and food.
Products
3D printing (additive manufacturing) allows for distributed manufacturing near the point of consumption, eliminating or reducing supply chains and factory production lines.
This is possible because product designs are no longer made manifest in assembly line parts like molds or specialized mechanical tools. Rather, designs are digital and can be called up at will to guide printers. Every time a 3D printer prints, it can print a different item, so no assembly line needs to be set up for every different product. 3D printers can also print an entire finished product in one piece or reduce the number of parts of larger products, such as engines. This further lessens the need for assembly.
Because each item can be customized and printed on demand, there is no cost benefit from scaling production. No inventories. No shipping items across oceans. No carbon emissions transporting not only the final product but also all the parts in that product shipped from suppliers to manufacturer. Moreover, 3D printing builds items layer by layer with almost no waste, unlike “subtractive manufacturing” in which an item is carved out of a piece of metal, and much or even most of the material can be waste.
Finally, 3D printing is also highly scalable, from inexpensive 3D printers (several hundred dollars) for home and school use to increasingly capable and expensive printers for industrial production. There are also 3D printers being developed for printing buildings, including houses and office buildings, and other infrastructure.
3D printing of finished products is only now getting underway, and there are still challenges to overcome, such as speed, quality, and range of materials. But as methods and materials advance, the technology will likely creep into more and more manufactured goods.
Ultimately, 3D printing will be a general purpose technology that involves many different types of printers and materials—such as plastics, metals, and even human cells—to produce a huge range of items, from human tissue and potentially human organs to household items and a range of industrial items for planes, trains, and automobiles.
Energy
Renewable energy production is located at or relatively near the source of consumption.
Although electricity generated by solar, wind, geothermal, and other renewable sources can of course be transmitted over longer distances, it is mostly generated and consumed locally or regionally. It is not transported around the world in tankers, ships, and pipelines like petroleum, coal, and natural gas.
Moreover, the fuel itself is free—forever. There is no global price on sun or wind. The people relying on solar and wind power need not worry about price volatility and potential disruption of fuel supplies as a result of political, market, or natural causes.
Renewables have their problems, of course, including intermittency and storage, and currently they work best as a complement to other sources, especially natural gas power plants, which, unlike coal plants, can be turned on or off and modulated like a gas stove, and which produce half the carbon emissions of coal.
Within the next decade or so, it is likely the intermittency and storage problems will be solved or greatly mitigated. In addition, unlike coal and natural gas power plants, solar is scalable, from solar panels on individual homes, or even on cars and other devices, to large-scale solar farms. Solar can be connected with microgrids and even allow for autonomous electricity generation by homes, commercial buildings, and communities.
It may be several decades before fossil fuel power plants can be phased out, but the cost of renewables has been falling exponentially, and in places they are beginning to compete with coal and gas. Solar especially is expected to continue to increase in efficiency and decline in cost.
Given these trends in cost and efficiency, renewables should become decisively cheaper over time: if the fuel is free for solar but has to be continually purchased for coal and gas, at some point the former becomes cheaper than the latter. Renewables are already cheaper if externalities such as carbon emissions and the environmental degradation involved in obtaining and transporting the fuel are included.
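That crossover logic can be made concrete with back-of-envelope arithmetic. Every number in this sketch is invented purely for illustration, not drawn from the article or real market data.

```python
# Cumulative-cost comparison behind the "free fuel wins eventually"
# argument. All figures below are invented for illustration only.
solar_capex = 1200   # $/kW up front; fuel cost is zero thereafter
coal_capex = 800     # $/kW up front
coal_fuel = 60       # $/kW per year, paid for as long as the plant runs

for year in range(1, 31):
    solar_total = solar_capex                    # flat after construction
    coal_total = coal_capex + coal_fuel * year   # grows every year
    if solar_total <= coal_total:
        print(f"Cumulative solar cost drops below coal in year {year}.")
        break
```

However the specific numbers shake out, a flat cost curve eventually undercuts a rising one, which is exactly the point the paragraph above is making.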
Food
Food can be increasingly produced near the point of consumption with vertical farms and eventually with printed food and even printed or cultured meat.
These sources bring production of food very near the consumer, so transportation costs, which can be a significant portion of the cost of food to consumers, are greatly reduced. In vertical farms, the use of land and water is reduced by 95% or more, and energy use is cut by nearly 50%. In addition, fertilizers and pesticides are not required, and crops can be grown 365 days a year whatever the weather, and in more climates and latitudes than is possible today.
While it may not be practical to grow grains, corn, and other such crops in vertical farms, many vegetables and fruits can flourish in such facilities. In addition, cultured or printed meat, based on cells from real animals without slaughtering the animals themselves, is being developed; the big challenge is scaling up and reducing cost.
There are currently some 70 billion animals being raised for food around the world, and livestock alone accounts for about 15% of global emissions. Moreover, livestock places huge demands on land, water, and energy. Like vertical farms, cultured or printed meat could be produced with no more land use than a brewery and with far less water and energy.
A More Democratic Economy Goes Bottom Up
This is a very brief introduction to the technologies that can bring “production-at-the-point-of-consumption” of products, energy, and food to cities and regions.
What does this future look like? Here’s a simplified example.
Imagine a universal manufacturing facility with hundreds of 3D printers printing tens of thousands of different products on demand for the local community—rather than assembly lines in China making tens of thousands of the same product that have to be shipped all over the world since no local market can absorb all of the same product.
Nearby, a vertical farm and cultured meat facility produce much of tomorrow night’s dinner. These facilities would be powered by local or regional wind and solar. Depending on need and quality, some infrastructure and machinery, like solar panels and 3D printers, would live in these facilities and some in homes and businesses.
The facilities could be owned by a large global corporation that still produces goods locally, or they could be franchised, or even owned and operated independently by the local population. Upkeep and management at each would provide jobs for nearby communities. Eventually, not only would global trade of parts and products diminish, but even the required supplies of raw materials and feedstock would decline, since there would be less waste in production and many materials would be recycled once acquired.
This model suggests a shift toward a “bottom up” economy that is more democratic, locally controlled, and likely to generate more local jobs.
The global trends in democratization of technology make the vision technologically plausible. Much of this technology already exists and is improving and scaling while exponentially decreasing in cost to become available to almost anyone, anywhere.
This includes not only access to key technologies, but also to education through digital platforms available globally. Online courses are available for free, ranging from advanced physics, math, and engineering to skills training in 3D printing, solar installations, and building vertical farms. Social media platforms can enable local and global collaboration and sharing of knowledge and best practices.
These new communities of producers can be the foundation for new forms of democratic governance as they recognize and “capitalize” on the reality that control of the means of production can translate to political power. More jobs and local control could weaken populist, anti-globalization political forces as people recognize they could benefit from the positive aspects of globalization and international cooperation and connectedness while diminishing the impact of globalization’s downsides.
There are powerful vested interests that stand to lose in such a global structural shift. But this vision builds on trends that are already underway and are gaining momentum. Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.
This article was originally posted on Open Democracy (CC BY-NC 4.0). The version above was edited with the author for length and includes additions. Read the original article on Open Democracy.
* See Jeremy Rifkin, The Zero Marginal Cost Society (New York: Palgrave Macmillan, 2014), Part II, pp. 69–154.
Image Credit: Sergey Nivens / Shutterstock.com