Tag Archives: 2016

#435757 Robotic Animal Agility

An off-shore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.

ANYmal, a medium-sized dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed about its precise path, location and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment, and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of mandatory fire extinguishers. As the electronic buzz of its motors resumes, it carries on working tirelessly.

After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.

Off-shore oil platforms: the first test fields and probably the first real-world application of ANYmal. ©ANYbotics

This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, water- and dust-proof (IP-67). A kevlar belly protects its main body, carrying its powerful brain, batteries, network device, power management system and navigational systems.

ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and copes easily with unfamiliar obstacles (it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.

On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl upon itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that has driven its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry load, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”

The source of inspiration for the team behind the project, the Robotic Systems Lab of ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off” says Fankhauser. “However, it remains our longer-term objective.”

The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid and clumsy – more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump and climb. It was with this robot that the team entered the ARGOS challenge in 2014, a full-scale competition launched by the oil group Total. The idea was to present a robot capable of inspecting an off-shore drilling station autonomously.

Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadrupedal robot. They didn’t win, but the multiple field tests grew ever more convincing. Especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed and include their own custom control electronics. They can regulate joint torque, position and impedance directly. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three joints on each leg.
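
ANYbotics hasn’t published its control law, but a classic joint impedance controller, the kind these actuators enable, fits in a few lines. A minimal sketch, with every name and gain illustrative rather than ANYbotics’ actual implementation:

```python
# Minimal sketch of a joint impedance control law of the kind
# series-elastic actuators enable. All names and gains are illustrative;
# ANYbotics' actual ANYdrive controller is not public.

def impedance_torque(q, qd, q_des, qd_des, k=60.0, d=2.0, tau_ff=0.0):
    """Commanded joint torque from an impedance (stiffness + damping) law.

    q, qd         -- measured joint position (rad) and velocity (rad/s)
    q_des, qd_des -- desired position and velocity
    k, d          -- virtual stiffness (Nm/rad) and damping (Nm*s/rad)
    tau_ff        -- optional feed-forward torque, e.g. gravity compensation
    """
    return k * (q_des - q) + d * (qd_des - qd) + tau_ff

# A low k makes the leg compliant, softening impacts on rough ground; a
# high k makes it track the gait trajectory stiffly. Commanding torque
# directly is what lets the joints "give" like tendons and muscles.
```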

The ARGOS experience confirmed the relevance of the chosen means of locomotion. “Our robot is lighter, takes up less space on site and it is less noisy” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sold not only its robot, but also its revolutionary joints, called ANYdrive.

Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.

Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.

ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics

Through field demonstrations and testing, ANYbotics can gather masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product” says Fankhauser. In due time, the startup will be ready to deliver a commercial product that truly caters to companies’ needs.

Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections – with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass and even taking soil samples. In the longer term, it could also be used for search and rescue operations. The robot can already be switched to “remote control” mode at any time and is easily tele-operated. It is also capable of live audio and video transmission.

The transition from prototype to marketed product will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security and user handling, and integrating the system with customers’ data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system that triggered the whole business is now only one of ANYbotics’ many considerations.

Designed for extreme environments, ANYmal is untroubled by smoke and can walk through snow, rubble or water. ©ANYbotics

The startup is not on its own. It has sold ANYmal robots to a dozen major universities that use them to develop their know-how in robotics. The startup has also founded ANYmal Research, a community whose members include the Toyota Research Institute, the German Aerospace Center and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations and documentation. Sharing has boosted both software and hardware development (built on ROS, the open-source Robot Operating System), in particular payload variations that provide expandability and scalability. For instance, one of the universities uses a robotic arm that enables ANYmal to grasp or handle objects and open doors.

Among possible applications, ANYbotics mentions entertainment. It is not only about playing in more films or TV series, but rather about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”

Videos of these events show a fascinated, and sometimes slightly fearful, audience whenever ANYmal gets too close. Is it fear of the “bad robot”? “This fear exists indeed and we are happy to be able to use ANYmal also to promote public awareness towards robotics and robots.” Reminiscent of a young dog, ANYmal is well suited to the purpose.

However, Péter Fankhauser tempers the vision of humans and sophisticated robots living side by side. “In the coming years, robots will continue to work in the background, like they have for a long time in factories. Then, they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots, such as ANYmal, share our everyday lives!”

At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive manufacturing company, used robots to demonstrate last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell and depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.

*X-Files, season 11, episode 7, aired in February 2018

#435674 MIT Future of Work Report: We ...

Robots aren’t going to take everyone’s jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT’s Task Force on the Work of the Future.

The supergroup of MIT academics was set up by MIT President Rafael Reif in early 2018 to investigate how emerging technologies will impact employment and devise strategies to steer developments in a positive direction. And the headline finding from their first publication is that it’s not the quantity of jobs we should be worried about, but the quality.

Widespread press reports of a looming “employment apocalypse” brought on by AI and automation are probably wide of the mark, according to the authors. Shrinking workforces as developed countries age, and persistent limitations in what machines can do, mean we’re unlikely to face a shortage of jobs.

But while unemployment is historically low, recent decades have seen a polarization of the workforce: the number of both high- and low-skilled jobs has grown at the expense of middle-skilled ones, driving growing income inequality and depriving the non-college-educated of viable careers.

This is at least partly attributable to the growth of digital technology and automation, the report notes, which are rendering obsolete many middle-skilled jobs based around routine work like assembly lines and administrative support.

That leaves workers to either pursue high-skilled jobs that require deep knowledge and creativity, or settle for low-paid jobs that rely on skills—like manual dexterity or interpersonal communication—that are still beyond machines, but generic to most humans and therefore not valued by employers. And the growth of emerging technology like AI and robotics is only likely to exacerbate the problem.

This isn’t the first report to note this trend. The World Bank’s 2016 World Development Report noted how technology is causing a “hollowing out” of labor markets. But the MIT report goes further in saying that the cause isn’t simply technology, but the institutions and policies we’ve built around it.

The motivation for introducing new technology is broadly assumed to be to increase productivity, but the authors note a rarely acknowledged fact: “Not all innovations that raise productivity displace workers, and not all innovations that displace workers substantially raise productivity.”

Examples of the former include computer-aided design software that makes engineers and architects more productive, while examples of the latter include self-service checkouts and automated customer support that replace human workers, often at the cost of a worse customer experience.

While the report notes that companies have increasingly adopted the language of technology augmenting labor, in reality this has only really benefited high-skilled workers. For lower-skilled jobs the motivation is primarily labor cost savings, which highlights the other major force shaping technology’s impact on employment: shareholder capitalism.

The authors note that up until the 1980s, increasing productivity resulted in wage growth across the economic spectrum, but since then average wage growth has failed to keep pace and gains have dramatically skewed towards the top earners.

The report shies away from directly linking this trend to the birth of Reaganomics (something others have been happy to do), but it notes that American veneration of the shareholder as the primary stakeholder in a business and tax policies that incentivize investment in capital rather than labor have exacerbated the negative impacts technology can have on employment.

That means the current focus on re-skilling workers to thrive in the new economy is a necessary, but not sufficient, solution to the disruptive impact technology is having on work, the authors say.

Alongside significant investment in education, fiscal policies need to be re-balanced away from subsidizing investment in physical capital and towards boosting investment in human capital, the authors write, and workers need to have a greater say in corporate decision-making.

The authors point to other developed economies where productivity growth, income growth, and equality haven’t become so disconnected thanks to investments in worker skills, social safety nets, and incentives to invest in human capital. Whether such a radical reshaping of US economic policy is achievable in today’s political climate remains to be seen, but the authors conclude with a call to arms.

“The failure of the US labor market to deliver broadly shared prosperity despite rising productivity is not an inevitable byproduct of current technologies or free markets,” they write. “We can and should do better.”

Image Credit: Simon Abrams / Unsplash

#435601 New Double 3 Robot Makes Telepresence ...

Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.

Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.

It’s those new depth sensors that really make Double 3 special. Each D430 module uses a pair of stereo cameras and a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. The Double 3 robot uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
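
Double Robotics hasn’t said exactly how those dots are computed, but the general recipe for turning a depth frame into a drivable-area grid can be sketched as below. The intrinsics, camera height and thresholds are invented, this assumes a level, forward-facing camera, and the real system surely does more filtering:

```python
import numpy as np

# Rough sketch of turning one depth frame into a 2D drivable-area grid.
# Intrinsics, camera height and thresholds are hypothetical; this is not
# Double Robotics' actual algorithm.
FX, FY, CX, CY = 610.0, 610.0, 640.0, 360.0  # made-up D430-like intrinsics
CELL, EXTENT = 0.10, 5.0                     # grid resolution / reach, meters

def drivable_grid(depth, cam_height=1.2, max_bump=0.05):
    """depth: (720, 1280) array of forward distances in meters.
    Returns an (N, N) boolean grid, True where the floor looks drivable."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                                 # forward distance
    x = (us - CX) * z / FX                    # right of camera center
    y = (vs - CY) * z / FY                    # below camera center
    valid = (z > 0.2) & (z < EXTENT)          # sensor's useful range
    height = cam_height - y[valid]            # point height above the floor
    n = int(EXTENT / CELL)
    gi = (z[valid] / CELL).astype(int)        # forward cell index
    gj = ((x[valid] + EXTENT / 2) / CELL).astype(int)  # lateral cell index
    inb = (gi >= 0) & (gi < n) & (gj >= 0) & (gj < n)
    gi, gj, height = gi[inb], gj[inb], height[inb]
    floor = np.abs(height) < max_bump         # near the floor plane -> walkable
    seen_floor = np.zeros((n, n), dtype=bool)
    seen_block = np.zeros((n, n), dtype=bool)
    seen_floor[gi[floor], gj[floor]] = True
    seen_block[gi[~floor], gj[~floor]] = True
    return seen_floor & ~seen_block           # obstacle evidence wins
```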

This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.
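
Rendering that guidance then comes down to projecting each free floor cell back into the camera image. Another sketch, reusing the same invented pinhole intrinsics as above; Double’s actual renderer is not public:

```python
# Sketch: project free floor cells back into the image as overlay dots,
# via a standard pinhole model. Intrinsics are the same invented values
# as in the grid sketch; assumes a level camera.
FX, FY, CX, CY = 610.0, 610.0, 640.0, 360.0

def dot_pixels(free_cells, cam_height=1.2):
    """free_cells: iterable of (z_forward, x_lateral) floor points, meters.
    Returns (u, v) pixel coordinates for the overlay dots."""
    dots = []
    for z, x in free_cells:
        y = cam_height              # floor lies cam_height below the lens
        u = FX * x / z + CX         # pinhole projection
        v = FY * y / z + CY
        dots.append((u, v))
    return dots

# e.g. a dot 3 m ahead, dead center: dot_pixels([(3.0, 0.0)]) -> [(640.0, 604.0)]
```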

Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.

For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.

IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?

Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.

How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?

We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.

It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?

We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.

Do you expect developers to be excited for this new mapping capability?

We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.

We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.
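
Since those adapters hadn’t shipped at the time of writing, the following is a purely hypothetical sketch of what a minimal ROS 1 adapter for one hardware component could look like; the topic name, rate, range limits and sensor-reading stub are all invented:

```python
#!/usr/bin/env python
# Hypothetical sketch of the kind of ROS adapter described above: a node
# that republishes one hardware component's output on a standard topic.
# Topic name, rate, limits and read_ultrasonic() are invented; Double's
# real adapters were unreleased at the time of writing.
import rospy
from sensor_msgs.msg import Range

def read_ultrasonic():
    return 1.0  # stub standing in for the (non-public) Double 3 sensor API

def main():
    rospy.init_node("double3_ultrasonic_adapter")
    pub = rospy.Publisher("double3/ultrasonic", Range, queue_size=10)
    rate = rospy.Rate(20)  # publish at 20 Hz
    while not rospy.is_shutdown():
        msg = Range()
        msg.header.stamp = rospy.Time.now()
        msg.radiation_type = Range.ULTRASOUND
        msg.min_range, msg.max_range = 0.05, 2.0  # invented sensor limits
        msg.range = read_ultrasonic()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```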

Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?

When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.

Do you think 5G will make a significant difference to telepresence robots?

We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.

DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.

When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.

[ Double Robotics ]

#435541 This Giant AI Chip Is the Size of an ...

People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.

Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.

On Monday, Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have focused on making their products ever smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.

The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.

There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.

Building all 400,000 cores into the same chip should get round that bottleneck, but there are reasons it’s not been done before, and Cerebras has had to come up with some clever hacks to get around those obstacles.

Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a wafer of silicon. Wafers are several inches across, so multiple chips are built onto them at once and then split apart afterwards. But at 8.5 inches across, the WSE uses the entire wafer for a single chip.

The problem is that while imperfections in a standard chip-making process at most mean ditching a few processors out of several hundred on a wafer, for Cerebras a defect could mean scrapping the entire wafer. To get around this, the company built in redundant circuits so that even if there are a few defects, the chip can route around them.
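
A back-of-the-envelope yield calculation shows why. The sketch below uses the textbook Poisson yield model with a defect density invented purely for illustration; Cerebras’ actual process numbers aren’t public:

```python
import math

# Back-of-the-envelope Poisson yield model: the probability a die is
# defect-free is exp(-A * D) for die area A and defect density D.
# The defect density below is purely illustrative.
D = 0.001            # defects per mm^2 (hypothetical)
small_die = 100.0    # mm^2, a conventional processor die
wafer_die = 46225.0  # mm^2, the entire WSE

print(math.exp(-small_die * D))  # ~0.90: most conventional dies survive
print(math.exp(-wafer_die * D))  # ~1e-20: a flawless full wafer never happens

# Hence the spare cores and reroutable fabric: the wafer only has to
# tolerate its defects, not avoid them entirely.
```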

The other big issue with a giant chip is the enormous amount of heat the processors can kick off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connections and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.

There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show they are reducing training time from months to minutes.

We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.
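
Batching is a good example of such a hardware-driven convention. As a toy illustration (linear regression in numpy, all sizes invented), per-example and minibatch gradient descent optimize the same loss; big batches mostly exist to keep parallel hardware busy:

```python
import numpy as np

# Toy contrast between per-example and minibatch gradient descent on a
# linear regression problem. Both minimize the same squared loss; large
# batches mainly keep parallel hardware efficient.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = X @ rng.normal(size=8)
lr = 0.01

w = np.zeros(8)
for xi, yi in zip(X, y):              # batch size 1: one update per example
    w -= lr * (xi @ w - yi) * xi

w2 = np.zeros(8)
for i in range(0, len(X), 32):        # batch size 32: GPU-friendly updates
    xb, yb = X[i:i + 32], y[i:i + 32]
    w2 -= lr * xb.T @ (xb @ w2 - yb) / len(xb)
```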

Whether that turns out to be the case or not, the WSE might be the first indication of an innovative new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers that we need some out-of-the-box thinking to square the slowing of Moore’s Law with skyrocketing demand for computing power.

It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.

These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.

This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.

Image Credit: Used with permission from Cerebras Systems.

#435505 This Week’s Awesome Stories From ...

AUGMENTED REALITY
This Is the Computer You’ll Wear on Your Face in 10 Years
Mark Sullivan | Fast Company
“[Snap’s new Spectacles 3] foreshadow a device that many of us may wear as our primary personal computing device in about 10 years. Based on what I’ve learned by talking AR with technologists in companies big and small, here is what such a device might look like and do.”

ROBOTICS
These Robo-Shorts Are the Precursor to a True Robotic Exoskeleton
Devin Coldewey | TechCrunch
“The whole idea, then, is to leave behind the idea of an exosuit as a big mechanical thing for heavy industry or work, and bring in the idea that one could help an elderly person stand up from a chair, or someone recovering from an accident walk farther without fatigue.”

ENVIRONMENT
Artificial Tree Promises to Suck Up as Much Air Pollution as a Small Forest
Luke Dormehl | Digital Trends
“The company has developed an artificial tree that it claims is capable of sucking up the equivalent amount of air pollution as 368 living trees. That’s not only a saving on growing time, but also on the space needed to accommodate them.”

FUTURE
The Anthropocene Is a Joke
Peter Brannen | The Atlantic
“Unless we fast learn how to endure on this planet, and on a scale far beyond anything we’ve yet proved ourselves capable of, the detritus of civilization will be quickly devoured by the maw of deep time.”

ARTIFICIAL INTELLIGENCE
DeepMind’s Losses and the Future of Artificial Intelligence
Gary Marcus | Wired
“Still, the rising magnitude of DeepMind’s losses is worth considering: $154 million in 2016, $341 million in 2017, $572 million in 2018. In my view, there are three central questions: Is DeepMind on the right track scientifically? Are investments of this magnitude sound from Alphabet’s perspective? And how will the losses affect AI in general?”

Image Credit: Tithi Luadthong / Shutterstock.com
