Tag Archives: 2016
#435674 MIT Future of Work Report: We ...
Robots aren’t going to take everyone’s jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT’s Task Force on the Work of the Future.
The supergroup of MIT academics was set up by MIT President Rafael Reif in early 2018 to investigate how emerging technologies will impact employment and devise strategies to steer developments in a positive direction. And the headline finding from their first publication is that it’s not the quantity of jobs we should be worried about, but the quality.
Widespread press reports of a looming “employment apocalypse” brought on by AI and automation are probably wide of the mark, according to the authors. Shrinking workforces as developed countries age, combined with persistent limitations in what machines can do, mean we’re unlikely to face a shortage of jobs.
But while unemployment is historically low, recent decades have seen a polarization of the workforce: the number of both high- and low-skilled jobs has grown at the expense of middle-skilled ones, driving growing income inequality and depriving non-college-educated workers of viable careers.
This is at least partly attributable to the growth of digital technology and automation, the report notes, which are rendering obsolete many middle-skilled jobs based around routine work like assembly lines and administrative support.
That leaves workers to either pursue high-skilled jobs that require deep knowledge and creativity, or settle for low-paid jobs that rely on skills—like manual dexterity or interpersonal communication—that are still beyond machines but common to most humans, and therefore not highly valued by employers. And the growth of emerging technologies like AI and robotics is only likely to exacerbate the problem.
This isn’t the first report to note this trend. The World Bank’s 2016 World Development Report noted how technology is causing a “hollowing out” of labor markets. But the MIT report goes further in saying that the cause isn’t simply technology, but the institutions and policies we’ve built around it.
The motivation for introducing new technology is broadly assumed to be to increase productivity, but the authors note a rarely-acknowledged fact: “Not all innovations that raise productivity displace workers, and not all innovations that displace workers substantially raise productivity.”
Examples of the former include computer-aided design software that makes engineers and architects more productive, while examples of the latter include self-service checkouts and automated customer support that replace human workers, often at the expense of a worse customer experience.
While the report notes that companies have increasingly adopted the language of technology augmenting labor, in reality this has only really benefited high-skilled workers. For lower-skilled jobs the motivation is primarily labor cost savings, which highlights the other major force shaping technology’s impact on employment: shareholder capitalism.
The authors note that up until the 1980s, increasing productivity resulted in wage growth across the economic spectrum, but since then average wage growth has failed to keep pace and gains have dramatically skewed towards the top earners.
The report shies away from directly linking this trend to the birth of Reaganomics (something others have been happy to do), but it notes that American veneration of the shareholder as the primary stakeholder in a business and tax policies that incentivize investment in capital rather than labor have exacerbated the negative impacts technology can have on employment.
That means the current focus on re-skilling workers to thrive in the new economy is a necessary, but not sufficient, solution to the disruptive impact technology is having on work, the authors say.
Alongside significant investment in education, fiscal policies need to be re-balanced away from subsidizing investment in physical capital and towards boosting investment in human capital, the authors write, and workers need to have a greater say in corporate decision-making.
The authors point to other developed economies where productivity growth, income growth, and equality haven’t become so disconnected thanks to investments in worker skills, social safety nets, and incentives to invest in human capital. Whether such a radical reshaping of US economic policy is achievable in today’s political climate remains to be seen, but the authors conclude with a call to arms.
“The failure of the US labor market to deliver broadly shared prosperity despite rising productivity is not an inevitable byproduct of current technologies or free markets,” they write. “We can and should do better.”
Image Credit: Simon Abrams / Unsplash
#435601 New Double 3 Robot Makes Telepresence ...
Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.
Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.
It’s those new depth sensors that really make Double 3 special. Each D430 module uses a pair of stereo cameras plus a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. Double 3 uses all of this high-quality depth data to locate obstacles, but at this point it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented-reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there, avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it much easier to get people to try telepresence for the first time. The augmented-reality implementation is particularly impressive, I think: it looks intuitive enough for an inexperienced user without being restrictive, and it’s a clever way of mitigating even significant amounts of lag.
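For a sense of the raw data the robot is working with, here is a minimal sketch of reading one depth frame from a standalone D400-series RealSense camera using Intel's pyrealsense2 SDK. It talks directly to the sensor, not through Double's own software stack; the stream settings simply mirror the 1280 x 720 figure quoted above.

```python
# Minimal sketch: read one 1280x720 depth frame from a RealSense D400-series
# camera with Intel's pyrealsense2 SDK (not Double Robotics' own stack).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance (in meters) to whatever is at the center of the image.
    center_distance = depth.get_distance(640, 360)
    print(f"Obstacle at image center: {center_distance:.2f} m")
finally:
    pipeline.stop()
```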
Otherwise, Double 3’s mobility system is exactly the same as the one on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.
For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.
IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?
Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.
How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?
We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.
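As a rough illustration of the layered sensing they describe, where the depth cameras do the heavy lifting and the ultrasonic rangefinders act as a backstop for glass and anything else the cameras miss, here is a toy sketch. The function names and thresholds are hypothetical, not Double's API.

```python
# Toy sketch of layered obstacle checks: trust the depth cameras first,
# then let ultrasonic rangefinders veto motion as a last safety measure.
# All names and thresholds here are hypothetical, not Double's API.
ULTRASONIC_STOP_M = 0.3  # stop if anything is within 30 cm

def path_is_clear(depth_obstacle_map, ultrasonic_ranges_m, planned_cells):
    # Depth cameras: is any cell along the planned path marked as an obstacle?
    if any(depth_obstacle_map[cell] for cell in planned_cells):
        return False
    # Ultrasonics: catch glass and other surfaces the cameras can't see.
    if any(r < ULTRASONIC_STOP_M for r in ultrasonic_ranges_m):
        return False
    return True
```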
It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?
We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.
Do you expect developers to be excited for this new mapping capability?
We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.
We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.
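The pose-plus-depth fusion DeVidts and Cann describe boils down to transforming each depth point from the camera frame into a common world frame using the robot's current pose. Here is a minimal NumPy sketch of that math; it is a generic point-cloud transform under an assumed x-forward camera convention, not Double's implementation.

```python
# Minimal sketch: transform depth points from the camera frame into the world
# frame using the robot's pose (generic math, not Double's implementation).
import numpy as np

def camera_points_to_world(points_cam, yaw_rad, position_xyz):
    """points_cam: (N, 3) array of 3D points in the camera frame (meters)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # Rotation about the vertical axis, for a robot driving on a flat floor.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points_cam @ R.T + np.asarray(position_xyz)

# Example: a point 2 m straight ahead, robot at (1, 0) turned 90 degrees left.
print(camera_points_to_world(np.array([[2.0, 0.0, 0.0]]), np.pi / 2, [1.0, 0.0, 0.0]))
# -> approximately [[1.0, 2.0, 0.0]]
```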
Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?
When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.
Do you think 5G will make a significant difference to telepresence robots?
We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.
DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.
When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying a Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.
[ Double Robotics ]
#435541 This Giant AI Chip Is the Size of an ...
People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.
Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.
On Monday Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have been focused on making their products ever-smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.
The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.
There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.
Building all 400,000 cores into the same chip should get around that bottleneck, but there are reasons it hasn’t been done before, and Cerebras has had to come up with some clever hacks to overcome those obstacles.
Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a silicon wafer. Wafers are typically around 12 inches across, so many chips are built on each one at once and then cut apart afterwards. But at roughly 8.5 inches on a side, the WSE uses essentially the entire wafer for a single chip.
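As a quick sanity check, that 8.5-inch figure is consistent with the 46,225-square-millimeter area quoted in the caption above:

```python
# Quick check: does a 46,225 mm^2 square die match "8.5 inches on a side"?
import math

side_mm = math.sqrt(46_225)   # 215.0 mm per side
side_in = side_mm / 25.4      # about 8.46 inches
print(f"{side_mm:.0f} mm per side is about {side_in:.2f} inches")
```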
The problem is that with standard chip-making processes, manufacturing defects at most mean ditching a few of the several hundred chips on a wafer; for Cerebras, they would mean scrapping the entire wafer. To get around this, the company built in redundant circuits so that even if there are a few defects, the chip can route around them.
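Cerebras hasn't published the details of its redundancy scheme, but the general idea, provisioning spare cores and remapping the logical layout around whatever comes out of the fab broken, can be sketched in a few lines. Everything below is a toy illustration with made-up numbers, not Cerebras's actual design.

```python
# Toy illustration of defect tolerance through redundancy: fabricate more
# physical cores than the logical design needs, then map logical cores only
# onto the ones that pass test. Not Cerebras's actual scheme.
import random

PHYSICAL_CORES = 420_000   # hypothetical: includes spares
LOGICAL_CORES = 400_000    # what the chip exposes
DEFECT_RATE = 0.01         # hypothetical per-core defect probability

defective = {i for i in range(PHYSICAL_CORES) if random.random() < DEFECT_RATE}
good_cores = [i for i in range(PHYSICAL_CORES) if i not in defective]

if len(good_cores) >= LOGICAL_CORES:
    logical_to_physical = dict(enumerate(good_cores[:LOGICAL_CORES]))
    print(f"Chip usable: {len(defective)} defects absorbed by spares.")
else:
    print("Too many defects: wafer would have to be scrapped.")
```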
The other big issue with a giant chip is the enormous amount of heat the processor gives off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connectors and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.
There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras, results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show training times dropping from months to minutes.
We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.
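For readers unfamiliar with the batching convention Feldman is referring to: on a GPU, gradient updates are usually computed over batches of examples at once so the hardware stays busy, rather than after every individual data point. The sketch below shows the two styles side by side with NumPy; it is a generic illustration of per-example versus mini-batch updates, not anything specific to Cerebras's hardware.

```python
# Generic illustration of per-example vs. mini-batch gradient updates for a
# least-squares model; nothing here is specific to Cerebras or GPUs.
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
lr = 0.01

def grad(w, xb, yb):
    # Gradient of mean squared error for a linear model.
    return 2 * xb.T @ (xb @ w - yb) / len(yb)

# Style 1: update after every individual data point.
w = np.zeros(8)
for i in range(len(X)):
    w -= lr * grad(w, X[i:i+1], y[i:i+1])

# Style 2: update once per batch of 256 examples (the GPU-friendly way).
w_batched = np.zeros(8)
for start in range(0, len(X), 256):
    xb, yb = X[start:start+256], y[start:start+256]
    w_batched -= lr * grad(w_batched, xb, yb)

print(w[:3], w_batched[:3])
```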
Whether or not Feldman’s claims hold up, the WSE might be the first sign of a new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers: some out-of-the-box thinking is needed to square the slowing of Moore’s Law with skyrocketing demand for computing power.
It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.
These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.
This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.
Image Credit: Used with permission from Cerebras Systems.