Robotics has come a long way in the past few years. Robots can now fetch items from specific spots in massive warehouses, swim through the ocean to study marine life, and lift 200 times their own weight. They can even perform synchronized dance routines.
But the really big question is—can robots put together an Ikea chair?
A team of engineers from Nanyang Technological University in Singapore decided to find out, detailing their work in a paper published last week in the journal Science Robotics. The team took industrial robot arms and equipped them with parallel grippers, force-detecting sensors, and 3D cameras, and wrote software enabling the souped-up bots to tackle chair assembly. The robots’ starting point was a set of chair parts randomly scattered within reach.
As impressive as the above-mentioned robotic capabilities are, it’s worth noting that they’re mostly limited to a single skill. Putting together furniture, on the other hand, requires using and precisely coordinating multiple skills, including force control, visual localization, hand-eye coordination, and the patience to read each step of the manual without rushing through it and messing everything up.
Indeed, Ikea furniture, while meant to be simple and user-friendly, has left even the best of us scratching our heads and holding a spare oddly-shaped piece of wood as we stare at the desk or bed frame we just put together—or, for the less even-tempered among us, throwing said piece of wood across the room.
It’s a good thing robots don’t have tempers, because it took a few tries for the bots to get the chair assembly right.
Practice makes perfect, though (or in this case, rewriting code makes perfect), and these bots didn’t give up so easily. They had to hone three different skills: identifying which part was which among the scattered, differently-shaped pieces of wood, coordinating their movements to put those pieces in the right place, and knowing how much force to use in various steps of the process (i.e., more force is needed to connect two pieces than to pick up one piece).
A few tries later, the bots were able to assemble the chair from start to finish in about nine minutes.
On the whole, nicely done. But before we applaud the robots’ success too loudly, it’s important to note that they didn’t autonomously assemble the chair. Rather, each step of the process was planned and coded by engineers, down to the millimeter.
However, the team believes this closely-guided chair assembly was just a first step, and they see a not-so-distant future where combining artificial intelligence with advanced robotic capabilities could produce smart bots that would learn to assemble furniture and do other complex tasks on their own.
Future applications mentioned in the paper include electronics and aircraft manufacturing, logistics, and other high-mix, low-volume sectors.
Image Credit: Francisco Suárez-Ruiz and Quang-Cuong Pham/Nanyang Technological University
Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, efficient and reasonably-priced mechanical transportation. Ford later came to dominate the industry thanks to the development of the moving assembly line.
Today, the sector is poised for another breakthrough with the advent of cars that drive themselves. But unlike the original wave of automobile innovation, the race for supremacy in autonomous vehicles is concentrated among a few corporate giants. So who is set to dominate this time?
I’ve analyzed six companies we think are leading the race to build the first truly driverless car. Three of these—General Motors, Ford, and Volkswagen—come from the existing car industry and need to integrate self-driving technology into their existing fleet of mass-produced vehicles. The other three—Tesla, Uber, and Waymo (owned, like Google, by parent company Alphabet)—are newcomers from the digital technology world of Silicon Valley and have to build a mass manufacturing capability.
While it’s impossible to know all the developments at any given time, we have tracked investments, strategic partnerships, and official press releases to learn more about what’s happening behind the scenes. The car industry typically rates self-driving technology on a scale from Level 0 (no automation) to Level 5 (full automation). We’ve assessed where each company is now and estimated how far they are from reaching the top level. Here’s how we think each player is performing.
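Since the rest of this comparison leans on that scale, here is a compact reference for the six levels. The short labels paraphrase the SAE J3016 level names; the wording is a summary, not the official standard text:

```python
# Paraphrased summary of the SAE J3016 driving-automation levels
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: one function (e.g. cruise control) is assisted",
    2: "Partial automation: combined functions (e.g. steering plus speed); driver monitors",
    3: "Conditional automation: the car drives, but a human must be ready to take over",
    4: "High automation: fully self-driving, but only in certain conditions",
    5: "Full automation: self-driving anywhere, with no human needed",
}

# Example: the level the Audi A8 is said to reach
print(SAE_LEVELS[3])
```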
Volkswagen has invested in taxi-hailing app Gett and partnered with chip-maker Nvidia to develop an artificial intelligence co-pilot for its cars. In 2018, the VW Group is set to release the Audi A8, the first production vehicle that reaches Level 3 on the scale, “conditional driving automation.” This means the car’s computer will handle all driving functions, but a human has to be ready to take over if necessary.
Ford already sells cars with a Level 2 autopilot, “partial driving automation.” This means one or more aspects of driving are controlled by a computer based on information about the environment, for example combined cruise control and lane centering. Alongside other investments, the company has put $1 billion into Argo AI, an artificial intelligence company for self-driving vehicles. Following a trial to test pizza delivery using autonomous vehicles, Ford is now testing Level 4 cars on public roads. These feature “high automation,” where the car can drive entirely on its own except in certain conditions, such as when the road surface is poor or the weather is bad.
GM also sells vehicles with Level 2 automation but, after buying Silicon Valley startup Cruise Automation in 2016, now plans to launch the first mass-production-ready Level 5 vehicle that drives completely on its own by 2019. The Cruise AV will have no steering wheel or pedals that would allow a human to take over, and it will be part of a large fleet of driverless taxis the company plans to operate in big cities. But crucially, the company hasn’t yet secured permission to test the car on public roads.
Waymo Level 5 testing. Image Credit: Waymo
Founded as a special project in 2009, Waymo separated from Google (though they’re both owned by the same parent firm, Alphabet) in 2016. Though it has never made, sold, or operated a car on a commercial basis, Waymo has created test vehicles that have clocked more than 4 million miles without human drivers as of November 2017. Waymo tested its Level 5 car, “Firefly,” between 2015 and 2017 but then decided to focus on hardware that could be installed in other manufacturers’ vehicles, starting with the Chrysler Pacifica.
The taxi-hailing app maker Uber has been testing autonomous cars on the streets of Pittsburgh since 2016, always with an employee behind the wheel ready to take over in case of a malfunction. After buying the self-driving truck company Otto in 2016 for a reported $680 million, Uber is now expanding its AI capabilities and plans to test NVIDIA’s latest chips in Otto’s vehicles. It has also partnered with Volvo to create a self-driving fleet of cars and with Toyota to co-create a ride-sharing autonomous vehicle.
The first major car manufacturer to come from Silicon Valley, Tesla was also the first to introduce Level 2 autopilot back in 2015. The following year, it announced that all new Teslas would have the hardware for full autonomy, meaning once the software is finished it can be deployed on existing cars with an instant upgrade. Some experts have challenged this approach, arguing that the company has merely added surround cameras to its production cars that aren’t as capable as the laser-based sensing systems that most other carmakers are using.
But the company has collected data from hundreds of thousands of cars, driving millions of miles across all terrains. So, we shouldn’t dismiss the firm’s founder, Elon Musk, when he claims a Level 4 Tesla will drive from LA to New York without any human interference within the first half of 2018.
Who’s leading the race? Image Credit: IMD
At the moment, the disruptors like Tesla, Waymo, and Uber seem to have the upper hand. While the traditional automakers are focusing on bringing Level 3 and 4 partial automation to market, the new companies are leapfrogging them by moving more directly towards Level 5 full automation. Waymo may have the least experience of dealing with consumers in this sector, but it has already clocked up a huge amount of time testing some of the most advanced technology on public roads.
The incumbent carmakers are also focused on the difficult process of integrating new technology and business models into their existing manufacturing operations by buying up small companies. The challengers, on the other hand, are easily partnering with other big players including manufacturers to get the scale and expertise they need more quickly.
Tesla is building its own manufacturing capability but also collecting vast amounts of critical data that will enable it to more easily upgrade its cars when they are ready for full automation. In particular, Waymo’s experience, technology capability, and ability to secure solid partnerships put it at the head of the pack.
This article was originally published on The Conversation. Read the original article.
Image Credit: Waymo
You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.
The latest camera research is shifting away from increasing the number of megapixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.
This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead we will use light detectors that, only a few years ago, we would never have considered useful for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.
Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.
To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
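The spot-by-spot scheme can be sketched in a few lines of code. This is an illustrative toy, not any research group’s actual setup: the scene, the raster-scan patterns, and the noise-free detector are all assumptions. The key point it demonstrates is that the single detector only ever records one number per illumination pattern (the total reflected light), and illuminating one spot at a time makes each of those numbers directly one pixel of the reconstructed image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 scene (reflectance values) that we want to photograph
scene = rng.random((8, 8))

def single_pixel_measure(scene, pattern):
    # The lone detector cannot resolve position: it just sums
    # all light reflected from the illuminated spots.
    return np.sum(scene * pattern)

# Raster scan: project one bright spot at a time and record
# the detector reading for each spot.
h, w = scene.shape
image = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        pattern = np.zeros((h, w))
        pattern[i, j] = 1.0  # illuminate a single spot
        image[i, j] = single_pixel_measure(scene, pattern)

# With spot illumination, each measurement IS one pixel,
# so the reconstruction matches the scene exactly.
assert np.allclose(image, scene)
```

In practice, structured patterns (such as Hadamard patterns) are used instead of single spots, with the image recovered by solving an inverse problem, but the measurement principle is the same.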
Clearly, the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.
These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.
It is even possible to capture images from light particles that have never even interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and, on the face of it, still relies on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene, of which traditional techniques collect only a small part.
This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.
Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.
All of these techniques rely on combining images with models that explain how light travels through or around different substances.
Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.
Single-photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.
Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.
This article was originally published on The Conversation. Read the original article.
Image Credit: Sylvia Adams / Shutterstock.com