
#432878 Chinese Port Goes Full Robot With ...

By the end of 2018, something will be very different about the harbor area in the northern Chinese city of Caofeidian. If you were to visit then, the whirring cranes and driverless tractors hauling containers to and fro would be the only things in sight.

Caofeidian is set to become the world’s first fully autonomous harbor by the end of the year. The US-Chinese startup TuSimple, a specialist in developing self-driving trucks, will replace human-driven terminal tractor-trucks with 20 self-driving models. A separate company handles crane automation, and a central control system will coordinate the movements of both.

According to Robert Brown, Director of Public Affairs at TuSimple, the project could quickly transform into a much wider trend. “The potential for automating systems in harbors and ports is staggering when considering the number of deep-water and inland ports around the world. At the same time, the closed, controlled nature of a port environment makes it a perfect proving ground for autonomous truck technology,” he said.

Going Global
The autonomous cranes and trucks have a big task ahead of them. Caofeidian currently processes around 300,000 TEU containers a year. Even if you were dealing with Lego bricks, that many units would get you a decent-sized cathedral or a 22-foot-long aircraft carrier. For any maritime fans—or people who enjoy the moving of heavy objects—TEU stands for twenty-foot equivalent unit, the industry-standard measure for containers. One TEU is a container 8 feet (2.43 meters) wide, 8.5 feet (2.59 meters) high, and 20 feet (6.06 meters) long.
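
For a rough sense of the physical scale involved, here is a quick back-of-the-envelope calculation (a sketch only: it assumes every container is exactly one TEU, though in practice many are 40-foot units):

```python
# Back-of-the-envelope: total container volume moved through Caofeidian
# per year, assuming every container is exactly one TEU.
width_m, height_m, length_m = 2.43, 2.59, 6.06  # standard TEU dimensions
teu_volume_m3 = width_m * height_m * length_m   # ~38.1 cubic meters per TEU

annual_teu = 300_000
total_volume_m3 = annual_teu * teu_volume_m3
print(f"{total_volume_m3:,.0f} cubic meters per year")  # ~11.4 million m^3
```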

While impressive, the Caofeidian number pales in comparison with the biggest global ports like Shanghai, Singapore, Busan, or Rotterdam. For example, 2017 saw more than 40 million TEU moved through Shanghai port facilities.

Self-driving container vehicles have been trialed elsewhere, including in Yangshan, close to Shanghai, and in Rotterdam. Qingdao New Qianwan Container Terminal in China recently laid claim to being the first fully automated terminal in Asia.

The potential for efficiencies has many ports interested in automation. Qingdao said its systems allow the terminal to operate in complete darkness and have reduced labor costs by 70 percent while increasing efficiency by 30 percent. In some cases, the number of workers needed to unload a cargo ship has gone from 60 to 9.

TuSimple says it is in negotiations with several other ports and also sees potential in related logistics-heavy fields.

Stable Testing Ground
For autonomous vehicles, ports seem like a perfect testing ground. They are restricted, confined areas with few or no pedestrians, where operating speeds are limited. That predictability makes the environment very different from, say, city driving.

Robert Brown describes it as an ideal setting for the first application of TuSimple’s technology. The company, which is backed by chipmaker Nvidia among others, has been retrofitting existing vehicles from Shaanxi Automobile Group with its sensors and technology.

At the same time, it is running open road tests in Arizona and China of its Class 8 Level 4 autonomous trucks.

The Camera Approach
Dozens of autonomous truck startups are reported to have launched in China over the past two years. In other countries the situation is much the same, as the race for the future of goods transportation heats up. Startups like Embark, Einride, Starsky Robotics, and Drive.ai are just a few of the names in the space. They face competition from the likes of Tesla, Daimler, VW, and Uber’s Otto subsidiary; in March, Waymo announced that it too was getting into the truck race.

Compared to many of its competitors, TuSimple’s autonomous driving system takes a different approach. Instead of laser-based LIDAR, TuSimple primarily uses cameras to gather data about its surroundings. Currently, the company uses ten cameras, including forward-facing, backward-facing, and wide-lens units. Together, they produce the 360-degree “God View” of the vehicle’s surroundings, which is interpreted by the onboard autonomous driving system.

Each camera gathers information at 30 frames a second. Millimeter-wave radar is used as a secondary sensor. In total, each vehicle generates what Robert Brown describes with a laugh as “almost too much” data about its surroundings, and the system can locate and identify objects at distances beyond 300 meters. That includes objects that have given LIDAR trouble, such as black vehicles.
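
To get a feel for what “almost too much” data means, here is a rough estimate of the raw camera throughput. The camera count and frame rate come from the article; the resolution and color depth are illustrative assumptions, since TuSimple hasn’t published its sensor specs:

```python
# Rough estimate of raw (uncompressed) data rate for a 10-camera rig.
# Resolution and bit depth are assumptions for illustration only.
num_cameras = 10
fps = 30                     # frames per second per camera (from the article)
width, height = 1920, 1080   # assumed full-HD sensors
bytes_per_pixel = 3          # assumed 24-bit RGB

bytes_per_second = num_cameras * fps * width * height * bytes_per_pixel
print(f"{bytes_per_second / 1e9:.2f} GB/s raw")  # ~1.87 GB/s before compression
```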

Another advantage is price. Companies are often loath to reveal exact amounts, but Tesla has gone as far as to say that the ‘expected’ price of its autonomous truck will start at $150,000. While unconfirmed, TuSimple’s retrofitted, camera-based solution is thought to cost around $20,000.

Image Credit: chinahbzyg / Shutterstock.com


#432563 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Pedro Domingos on the Arms Race in Artificial Intelligence
Christoph Scheuermann and Bernhard Zand | Spiegel Online
“AI lowers the cost of knowledge by orders of magnitude. One good, effective machine learning system can do the work of a million people, whether it’s for commercial purposes or for cyberespionage. Imagine a country that produces a thousand times more knowledge than another. This is the challenge we are facing.”

BIOTECHNOLOGY
Gene Therapy Could Free Some People From a Lifetime of Blood Transfusions
Emily Mullin | MIT Technology Review
“A one-time, experimental treatment for an inherited blood disorder has shown dramatic results in a small study. …[Lead author Alexis Thompson] says the effect on patients has been remarkable. ‘They have been tied to this ongoing medical therapy that is burdensome and expensive for their whole lives,’ she says. ‘Gene therapy has allowed people to have aspirations and really pursue them.’ ”

ENVIRONMENT
The Revolutionary Giant Ocean Cleanup Machine Is About to Set Sail
Adele Peters | Fast Company
“By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch—around 40,000 metric tons—within five years.”

ROBOTICS
Autonomous Boats Will Be on the Market Sooner Than Self-Driving Cars
Tracey Lindeman | Motherboard
“Some unmanned watercraft…may be at sea commercially before 2020. That’s partly because automating all ships could generate a ridiculous amount of revenue. According to the United Nations, 90 percent of the world’s trade is carried by sea and 10.3 billion tons of products were shipped in 2016.”

DIGITAL CULTURE
Style Is an Algorithm
Kyle Chayka | Racked
“Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.”

COMPUTING
How Apple Will Use AR to Reinvent the Human-Computer Interface
Tim Bajarin | Fast Company
“It’s in Apple’s DNA to continually deliver the ‘next’ major advancement to the personal computing experience. Its innovation in man-machine interfaces started with the Mac and then extended to the iPod, the iPhone, the iPad, and most recently, the Apple Watch. Now, get ready for the next chapter, as Apple tackles augmented reality, in a way that could fundamentally transform the human-computer interface.”

SCIENCE
Advanced Microscope Shows Cells at Work in Incredible Detail
Steve Dent | Engadget
“For the first time, scientists have peered into living cells and created videos showing how they function with unprecedented 3D detail. Using a special microscope and new lighting techniques, a team from Harvard and the Howard Hughes Medical Institute captured zebrafish immune cell interactions with unheard-of 3D detail and resolution.”

Image Credit: dubassy / Shutterstock.com


#432549 Your Next Pilot Could Be Drone Software

Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.

But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.

In fact, on many flights, autopilot systems already control the plane for basically all of the flight. And software handles the most harrowing landings—when there is no visibility and the pilot can’t see anything to even know where he or she is. But human pilots are still on hand as backups.

A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have—ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.

Drones That Fly Themselves
Drones come in many forms, from tiny quad-rotor copter toys to missile-firing winged planes, or even 7-ton aircraft that can stay aloft for 34 hours at a stretch.

When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.

Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee—or control the camera to get the best view.

University researchers, businesses, and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.

[Video: Could humans control these 1,218 drones all together?]

Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.

Importance of Pilot Experience
Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.

On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience—if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.
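
The arithmetic behind that claim is straightforward. A minimal sketch, with an illustrative fleet size and utilization (neither figure is from the article):

```python
# How quickly a shared software pilot accumulates flight experience
# compared to a single human. Fleet size and daily utilization are
# illustrative assumptions, not figures from the article.
fleet_size = 1_000           # aircraft all running the same software pilot
hours_per_day = 8            # assumed average daily flight time per aircraft

software_hours_per_day = fleet_size * hours_per_day  # 8,000 hours/day
human_hours_per_year = 900   # a busy airline pilot, near the FAA annual limit

ratio = software_hours_per_day / human_hours_per_year
print(f"~{ratio:.1f} human-years of flying experience gained per day")
```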

As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots, and other applications, I don’t lightly suggest handing over the controls. But giving software pilots more control would maximize computers’ advantages over humans in training, testing, and reliability.

Training and Testing Software Pilots
Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions, and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.

The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.

These systems would also need to be thoroughly tested—in both real-life situations and in simulations—to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue, or other human impairments that can create problems or cause errors even in common situations.

Rapid Response and Adaptation
Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.

In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened—that the plane had flown through a flock of birds, which had damaged both engines—and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.

Aircraft damage can pose another particularly difficult challenge for human pilots: it can change how the controls affect the plane’s flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft’s steering and use its sensors to quickly evaluate the effects of those movements—essentially learning how to fly all over again with a damaged plane.
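
One way to picture that relearning process is a probe-and-adapt loop: apply a small control input, measure the aircraft’s actual response, and update an internal estimate of how effective the controls are. The sketch below is a heavily simplified, single-axis toy model of the idea, not any real flight-control system:

```python
# Toy model of adapting to damage: the controller keeps re-estimating how
# strongly its input actually moves the aircraft, so it can compensate when
# damage suddenly changes that relationship mid-flight.
def adapt_gain(estimate, command, observed_response, learning_rate=0.5):
    """Update the estimated control effectiveness from one probe."""
    error = observed_response - estimate * command
    return estimate + learning_rate * error * command

true_effectiveness = 1.0  # healthy aircraft
estimate = 1.0
for step in range(200):
    if step == 100:
        true_effectiveness = 0.4  # sudden damage: controls lose authority
    command = 0.5                 # small, constant probe input
    response = true_effectiveness * command
    estimate = adapt_gain(estimate, command, response)

print(f"estimated effectiveness: {estimate:.2f}")  # converges to ~0.40
```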

Boosting Public Confidence
The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds, or thousands more hours of flight experience than any human pilot.

Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.

The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.

Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)

One way to help regular passengers become comfortable with software pilots—while also helping to both train and test the systems—could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.

This article was originally published on The Conversation. Read the original article.

Image Credit: Skycolors / Shutterstock.com


#432482 This Week’s Awesome Stories From ...

CYBERNETICS
A Brain-Boosting Prosthesis Moves From Rats to Humans
Robbie Gonzalez | WIRED
“Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.”

ARTIFICIAL INTELLIGENCE
Here’s How the US Needs to Prepare for the Age of Artificial Intelligence
Will Knight | MIT Technology Review
“The Trump administration has abandoned this vision and has no intention of devising its own AI plan, say those working there. They say there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes… That looks like a huge mistake. If it essentially ignores such a technological transformation, the US might never make the most of an opportunity to reboot its economy and kick-start both wage growth and job creation. Failure to plan could also cause the birthplace of AI to lose ground to international rivals.”

BIOMIMICRY
Underwater GPS Inspired by Shrimp Eyes
Jeremy Hsu | IEEE Spectrum
“A few years ago, U.S. and Australian researchers developed a special camera inspired by the eyes of mantis shrimp that can see the polarization patterns of light waves, which resemble those in a rope being waved up and down. That means the bio-inspired camera can detect how light polarization patterns change once the light enters the water and gets deflected or scattered.”

POLITICS & TECHNOLOGY
‘The Business of War’: Google Employees Protest Work for the Pentagon
Scott Shane and Daisuke Wakabayashi | The New York Times
“Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. ‘We believe that Google should not be in the business of war,’ says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ‘ever build warfare technology.’ (Read the text of the letter.)”

CYBERNETICS
MIT’s New Headset Reads the ‘Words in Your Head’
Brian Heater | TechCrunch
“A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation. AlterEgo is a head-mounted—or, more properly, jaw-mounted—device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading ‘words in your head.’”

Image Credit: christitzeimaging.com / Shutterstock.com


#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
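
Archimedes’ principle makes the buoyancy trick easy to sketch: expanding the air compartment increases the volume of water the robot displaces, and with it the upward buoyant force. The numbers below are illustrative guesses, not SoFi’s published specifications:

```python
# Toy buoyancy model: net vertical force on the robot as its air
# compartment expands or contracts. All numbers are illustrative.
RHO_WATER = 1025.0  # seawater density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def net_buoyant_force(mass_kg, hull_volume_m3, air_volume_m3):
    """Positive result: the robot rises; negative: it sinks."""
    displaced_m3 = hull_volume_m3 + air_volume_m3
    return RHO_WATER * G * displaced_m3 - mass_kg * G

mass = 1.6      # kg, assumed robot mass
hull = 0.0015   # m^3, fixed hull volume (assumed)
for air in (0.0, 0.00006, 0.00012):  # compartment compressed -> expanded
    force = net_buoyant_force(mass, hull, air)
    print(f"air volume {air * 1e6:5.0f} cm^3 -> net force {force:+.2f} N")
```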

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
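
The pumping pattern itself is easy to describe in code: the two chambers are driven in antiphase, so one fills as the other empties, bending the tail back and forth, and raising the drive frequency makes the robot swim faster. A minimal sketch (the frequency and amplitude are made-up illustration values, not SoFi’s parameters):

```python
# Toy model of SoFi-style tail actuation: water is pumped into one chamber
# while the other drains, producing a sinusoidal side-to-side tail bend.
import math

def chamber_fill(t, freq_hz=1.4, amplitude=1.0):
    """Return fill levels (0..1) of the two tail chambers at time t."""
    bend = amplitude * math.sin(2 * math.pi * freq_hz * t)
    left = 0.5 + 0.5 * bend   # one chamber fills...
    right = 0.5 - 0.5 * bend  # ...as the other empties (antiphase)
    return left, right

for i in range(5):
    t = i * 0.125  # sample every 125 ms
    left, right = chamber_fill(t)
    print(f"t={t:.3f}s  left={left:.2f}  right={right:.2f}")
```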

Besides looking neat, it’s important that SoFi look lifelike so it can blend in with marine life rather than scaring real fish away, letting it get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL
