Tag Archives: robot
#432878 Chinese Port Goes Full Robot With ...
By the end of 2018, something will be very different about the harbor area in the northern Chinese city of Caofeidian. If you were to visit, you would see whirring cranes and tractors shuttling containers to and fro, but not a single human worker in sight.
Caofeidian is set to become the world’s first fully autonomous harbor by the end of the year. The US-Chinese startup TuSimple, a specialist in developing self-driving trucks, will replace human-driven terminal tractor-trucks with 20 self-driving models. A separate company handles crane automation, and a central control system will coordinate the movements of both.
According to Robert Brown, Director of Public Affairs at TuSimple, the project could quickly transform into a much wider trend. “The potential for automating systems in harbors and ports is staggering when considering the number of deep-water and inland ports around the world. At the same time, the closed, controlled nature of a port environment makes it a perfect proving ground for autonomous truck technology,” he said.
Going Global
The autonomous cranes and trucks have a big task ahead of them. Caofeidian currently processes around 300,000 TEU containers a year. Even if you were dealing with Lego bricks, that number of units would get you a decent-sized cathedral or a 22-foot-long aircraft carrier. For any maritime fans—or people who enjoy the moving of heavy objects—TEU stands for twenty-foot equivalent unit. It is the industry standard for containers. A TEU equals an 8-foot (2.43 meter) wide, 8.5-foot (2.59 meter) high, and 20-foot (6.06 meter) long container.
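For a sense of scale, the quoted dimensions and throughput make for a quick back-of-envelope calculation. This is a sketch only; every input number comes from the figures above.

```python
# Back-of-envelope arithmetic using the TEU dimensions and annual
# throughput quoted in the article.
TEU_WIDTH_M, TEU_HEIGHT_M, TEU_LENGTH_M = 2.43, 2.59, 6.06

volume_per_teu = TEU_WIDTH_M * TEU_HEIGHT_M * TEU_LENGTH_M  # ~38.1 cubic meters
annual_teu = 300_000  # Caofeidian's stated annual container count
total_volume_m3 = volume_per_teu * annual_teu

print(f"One TEU holds about {volume_per_teu:.1f} m^3")
print(f"Annual throughput is about {total_volume_m3 / 1e6:.1f} million m^3")
```

That works out to roughly 11 million cubic meters of container volume moving through Caofeidian each year.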
While impressive, the Caofeidian number pales in comparison with the biggest global ports like Shanghai, Singapore, Busan, or Rotterdam. For example, 2017 saw more than 40 million TEU moved through Shanghai port facilities.
Self-driving container vehicles have been trialed elsewhere, including in Yangshan, close to Shanghai, and Rotterdam. Qingdao New Qianwan Container Terminal in China recently laid claim to being the first fully automated terminal in Asia.
The potential for efficiencies has many ports interested in automation. Qingdao said its systems allow the terminal to operate in complete darkness and have reduced labor costs by 70 percent while increasing efficiency by 30 percent. In some cases, the number of workers needed to unload a cargo ship has gone from 60 to 9.
TuSimple says it is in negotiations with several other ports and also sees potential in related logistics-heavy fields.
Stable Testing Ground
For autonomous vehicles, ports seem like a perfect testing ground. They are confined, access-restricted areas with few if any pedestrians, and operating speeds are limited. That predictability makes them unlike, say, city streets.
Robert Brown describes it as an ideal setting for the first application of TuSimple’s technology. The company, which is backed by chipmaker Nvidia, among others, has been retrofitting existing vehicles from Shaanxi Automobile Group with sensors and self-driving technology.
At the same time, it is running open-road tests in Arizona and China of its Class 8 (heavy-duty) trucks with Level 4 autonomy.
The Camera Approach
Dozens of autonomous truck startups are reported to have launched in China over the past two years. In other countries the situation is much the same, as the race for the future of goods transportation heats up. Startup companies like Embark, Einride, Starsky Robotics, and Drive.ai are just a few of the names in the space. They face competition from the likes of Tesla, Daimler, VW, and Uber’s Otto subsidiary; in March, Waymo announced that it, too, was getting into the truck race.
Compared to many of its competitors, TuSimple’s autonomous driving system is based on a different approach. Instead of laser-based LIDAR (light detection and ranging), TuSimple primarily uses cameras to gather data about its surroundings. Currently, the company uses ten cameras, including forward-facing, backward-facing, and wide-lens. Together, they produce the 360-degree “God View” of the vehicle’s surroundings, which is interpreted by the onboard autonomous driving systems.
Each camera gathers information at 30 frames a second, with millimeter-wave radar used as a secondary sensor. In total, the vehicles generate what Robert Brown describes with a laugh as “almost too much” data about their surroundings, and the system can accurately locate and identify objects at ranges beyond 300 meters. This includes objects that have given LIDAR problems, such as black vehicles.
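A rough estimate shows why Brown might call it “almost too much” data. The camera count and frame rate come from the article; the frame resolution and bytes per pixel below are illustrative assumptions, not figures from TuSimple.

```python
# Rough data-rate estimate for the camera suite described above.
# Camera count and frame rate are from the article; resolution and
# bytes per pixel are assumptions made purely for illustration.
NUM_CAMERAS = 10            # from the article
FPS = 30                    # frames per second per camera, from the article
WIDTH, HEIGHT = 1920, 1080  # assumed frame resolution
BYTES_PER_PIXEL = 3         # assumed uncompressed RGB

frames_per_second = NUM_CAMERAS * FPS  # 300 frames captured every second
bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
raw_rate_gb_per_s = frames_per_second * bytes_per_frame / 1e9

print(f"{frames_per_second} frames/s, roughly {raw_rate_gb_per_s:.1f} GB/s raw")
```

Even under these modest assumptions, the raw pixel stream approaches two gigabytes per second, which the onboard systems must reduce to a handful of tracked objects in real time.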
Another advantage is price. Companies are often loath to reveal exact figures, but Tesla has gone as far as to say that the ‘expected’ price of its autonomous truck will start at $150,000. While unconfirmed, TuSimple’s retrofitted, camera-based solution is thought to cost around $20,000.
Image Credit: chinahbzyg / Shutterstock.com
#432685 Inside TickTock’s Consumer Robot ...
Ryan Hickman, who co-founded the Cloud Robotics group at Google and was an early part of the Toyota Research Institute Product team, describes how his startup tried to make consumer home robots work
#432681 Misty Robotics Builds on Developer ...
The Misty II personal robot is designed to do whatever you can program it to do, and more
#432671 Stuff 3.0: The Era of Programmable ...
It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.
In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?
This is the dream of those who work on “programmable matter.”
In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 is single-celled organisms like bacteria; here, hardware is indistinguishable from software. The behavior of the bacteria is encoded into its DNA; it cannot learn new things.
Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.
Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.
Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.
Such are the dreams, anyway.
But when engineering and manufacturing individual gadgets is already such a complex process, you can imagine that making stuff able to turn into many different items is far more complicated still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”
Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.
For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.
One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. If the pieces are too big, the rearranged matter becomes lumpy, making it hard to render fine details or simulate a range of textures; that would rule out applications like tools for fine manipulation. On the other hand, if the pieces are too small, different problems can arise.
Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.
In the Brussels configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, while additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system currently comprises only ten units, it is a proof of concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.
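The brain-unit pattern can be sketched as a toy coordinator: one object makes the decisions while modules attach and detach at will. The `BrainUnit` class and its methods below are hypothetical illustrations, not the actual control software from the research.

```python
# Toy sketch of centralized "brain unit" control over a modular robot swarm.
class BrainUnit:
    def __init__(self):
        self.members = set()  # ids of modules currently attached

    def attach(self, unit_id: str) -> None:
        """A module autonomously joins the structure."""
        self.members.add(unit_id)

    def detach(self, unit_id: str) -> None:
        """A module leaves; the brain simply forgets it."""
        self.members.discard(unit_id)

    def command_shape(self, shape: str) -> str:
        # A real system would compute per-unit motions; here we just report.
        return f"{len(self.members)} units reconfiguring into: {shape}"

brain = BrainUnit()
for i in range(10):  # a ten-unit proof of concept, as in the text
    brain.attach(f"module-{i}")
print(brain.command_shape("bridge"))  # 10 units reconfiguring into: bridge
```

The design choice worth noting is that all decision-making lives in one place, so adding or removing modules changes the body without changing the controller.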
You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).
Speaking of robotics, the ideal form for a robot has been a subject of much debate. In fact, one of the major recent robotics competitions, DARPA’s Robotics Challenge, was won by a robot that could adapt: it beat Boston Dynamics’ famous ATLAS humanoid with the simple addition of wheels that allowed it to drive as well as walk.
Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.
Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?
We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.
Image Credit: PeterVrabel / Shutterstock.com
#432657 Video Friday: Cassie on Fire, ...
Your weekly selection of awesome robot videos