
#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures about an inch to a side and has a mere 54 billion transistors. The former is new, largely untested, and, so far, one-of-a-kind. The latter is the newest member of a well-loved, mass-produced family of chips that has taken over the world of AI and supercomputing in the last decade.

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—housed in an all-in-one system, called the CS-1, that’s about the size of a dorm-room mini-fridge—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than the Joule 2.0 supercomputer completed a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” At some point, scaling a supercomputer like Joule no longer produces better results on this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
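The scaling ceiling Feldman describes can be sketched with a toy strong-scaling model. The constants below are arbitrary and chosen purely for illustration—they are not measured Joule data—but the shape is the point: per-core compute time shrinks as cores are added, while synchronization overhead grows, so speedup peaks and then declines.

```python
import math

def step_time(cores, work=1_000_000, sync_cost=50.0):
    """Toy model of one simulation step: compute is divided evenly
    across cores, but per-step synchronization overhead grows with
    the number of cores participating."""
    return work / cores + sync_cost * math.log2(cores)

# Speedup relative to a single core, over powers of two.
speedup = {n: step_time(1) / step_time(n)
           for n in (2 ** k for k in range(21))}
best = max(speedup, key=speedup.get)
print(best)  # the peak sits far below the largest core count tried
```

Past the peak, adding cores makes the simulation slower, which is why throwing more conventional hardware at the problem eventually stops helping.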

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.
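That four-neighbor mesh maps naturally onto the nearest-neighbor math at the heart of fluid solvers. Here’s a minimal sketch—generic Jacobi relaxation in NumPy, not Cerebras’s actual software—in which each grid point repeatedly updates from its four neighbors, just as each WSE core exchanges data with its four neighbors.

```python
import numpy as np

def jacobi_step(u):
    """One update of a 2D grid: each interior cell becomes the average
    of its four neighbors, the same nearest-neighbor communication
    pattern a mesh of cores maps onto."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Toy example: heat diffusing from a hot top edge into a cold plate.
grid = np.zeros((64, 64))
grid[0, :] = 100.0          # fixed hot boundary
for _ in range(500):
    grid = jacobi_step(grid)
    grid[0, :] = 100.0      # re-apply the boundary condition
```

On a conventional cluster, this array would be split across many machines, with edge rows exchanged over the network every step; on a wafer-scale chip, the neighbors are physically adjacent cores.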

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiler software—is far more efficient than that of a traditional supercomputer, which has to network together thousands of individual chips.

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input for training a neural network that also resides on the chip. In an intriguing related example, a Caltech machine learning technique recently proved to be 1,000 times faster than conventional solvers at the same kind of partial differential equations at play here in simulating fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras

Posted in Human Robots

#437550 McDonald’s Is Making a Plant-Based ...

Fast-food chains have been doing what they can in recent years to health-ify their menus. For better or worse, burgers, fries, fried chicken, roast beef sandwiches, and the like will never go out of style—this is America, after all—but consumers are increasingly gravitating towards healthier options.

One of those options is plant-based foods, and not just salads and veggie burgers, but “meat” made from plants. Burger King was one of the first big fast-food chains to jump on the plant-based meat bandwagon, introducing its Impossible Whopper in restaurants across the country last year after a successful pilot program. Dunkin’ (formerly Dunkin’ Donuts) uses plant-based patties in its Beyond Sausage breakfast sandwiches.

But there’s one big player in the fast food market that’s been oddly missing from the plant-based trend—until now. McDonald’s announced last week that it will debut a sandwich called the McPlant in key US markets next year. Unlike Burger King, which worked with Impossible Foods on its plant-based products, McDonald’s turned to Dunkin’s partner, Los Angeles-based Beyond Meat, which makes chicken-, beef-, and pork-like products from plants.

According to Bloomberg, though, McDonald’s ultimately decided to forgo a partnership with Beyond Meat in favor of creating its own plant-based products. Imitation chicken nuggets and plant-based breakfast sandwiches are in its plans as well.

McDonald’s has bounced back impressively from its March low (when the coronavirus lockdowns first happened in the US). Last month the company’s stock reached a 52-week high of $231 per share (as compared to its low in March of $124 per share).

To keep those numbers high and make it as easy as possible for customers to get their hands on plant-based burgers and all the traditional menu items too, the fast food chain is investing in tech and integrating more digital offerings into its restaurants.

McDonald’s has acquired a couple of artificial intelligence companies in the last year and a half. Dynamic Yield is an Israeli company that uses AI to personalize customers’ experiences. McDonald’s is using Dynamic Yield’s tech on its smart menu boards—for example, by customizing the items displayed on the drive-thru menu based on the weather and the time of day, and by recommending additional items based on what a customer asks for first (i.e., “You know what would go great with that coffee? Some pancakes!”).
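The contextual logic described above can be illustrated with a toy rule set. To be clear, the rules and item names below are invented for illustration; Dynamic Yield’s actual product is a machine-learned personalization engine, not hand-written rules like these.

```python
def recommend(hour, weather, ordered_item):
    """Toy context-aware menu suggestions: weather and time of day
    shape the list, and a pairing table suggests complements to
    whatever the customer ordered first."""
    suggestions = []
    if weather == "hot":
        suggestions.append("iced drink")
    elif weather == "cold":
        suggestions.append("hot coffee")
    if hour < 11:
        suggestions.append("breakfast item")
    pairings = {"coffee": "pancakes", "burger": "fries"}
    if ordered_item in pairings:
        suggestions.append(pairings[ordered_item])
    return suggestions

# A cold morning, customer orders coffee:
print(recommend(hour=8, weather="cold", ordered_item="coffee"))
# → ['hot coffee', 'breakfast item', 'pancakes']
```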

The fast food giant also bought Apprente, a startup that uses AI in voice-based ordering platforms. McDonald’s is using the tech to help automate its drive-throughs.

In addition to these investments, the company plans to launch a digital hub called MyMcDonald’s that will include a loyalty program, start doing deliveries of its food through its mobile app, and test different ways of streamlining the food order and pickup process—with many of the new ideas geared towards pandemic times, like express pickup lanes for people who placed digital orders and restaurants with drive-throughs for delivery and pickup orders only.

Plant-based meat patties appear to be just one small piece of McDonald’s modernization plans. Those of us who were wondering what they were waiting for should have known—one of the most-recognized fast food chains in the world wasn’t about to let itself get phased out. It seems it will only be a matter of time until you can pull out your phone, make a few selections, and have a burger made from plants—with a side of fries made from more plants—show up at your door a little while later. Drive-throughs, shouting your order into a fuzzy speaker with a confused teen on the other end, and burgers made from beef? So 2019.

Image Credit: McDonald’s


#437157 A Human-Centric World of Work: Why It ...

Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.

The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.

To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.

Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.

What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.

“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.

We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.

Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrank; all the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. As a result, tens of millions of people around the world are out of work—and those are just the ones being counted, a fraction of the true total. Millions more in seasonal jobs, gig work, or informal economies are now without work, too.

“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”

Bolles posed five key questions—some spurred by the impact of the pandemic—on which future of work conversations should focus to make sure it’s a human-centric future.

1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.

2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.

3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?

4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?

5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.

Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”

One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.

But we have a slew of new problems on our hands, and we’re not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles’ definition of work holds up, we’ve got our work cut out for us.

In an article from April titled “The Great Reset,” Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”

Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.

Image Credit: www_slon_pics from Pixabay


#436984 Robots to the Rescue: How They Can Help ...

As the coronavirus pandemic forces people to keep their distance, could this be robots’ time to shine? A group of scientists think so, and they’re calling for robots to do the “dull, dirty, and dangerous jobs” of infectious disease management.

Social distancing has emerged as one of the most effective strategies for slowing the spread of COVID-19, but it’s also bringing many jobs to a standstill and severely restricting our daily lives. And unfortunately, the one group that can’t rely on its protective benefits is the medical and emergency services workers we’re relying on to save us.

Robots could be a solution, according to the editorial board of Science Robotics, by helping replace humans in a host of critical tasks, from disinfecting hospitals to collecting patient samples and automating lab tests.

According to the authors, the key areas where robots could help are clinical care, logistics, and reconnaissance, which refers to tasks like identifying the infected or making sure people comply with quarantines or social distancing requirements. Outside of the medical sphere, robots could also help keep the economy and infrastructure going by standing in for humans in factories or vital utilities like waste management or power plants.

When it comes to clinical care, robots can play important roles in disease prevention, diagnosis and screening, and patient care, the researchers say. Robots have already been widely deployed to disinfect hospitals and other public spaces either using UV light that kills bugs or by repurposing agricultural robots and drones to spray disinfectant, reducing the exposure of cleaning staff to potentially contaminated surfaces. They are also being used to carry out crucial deliveries of food and medication without exposing humans.

But they could also play an important role in tracking the disease, say the researchers. Thermal cameras combined with image recognition algorithms are already being used to detect potential cases at places like airports, but incorporating them into mobile robots or drones could greatly expand the coverage of screening programs.
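The screening step can be sketched in a few lines. This is a deliberately simplified illustration: deployed systems pair the temperature check with face detection, sensor calibration, and distance correction rather than a bare threshold.

```python
import numpy as np

def flag_fever(thermal_frame, threshold_c=38.0):
    """Flag a thermal camera frame (a 2D array of surface temperatures
    in Celsius) if its hottest pixel suggests an elevated temperature.
    A bare-bones sketch of the screening idea, not a medical device."""
    return bool(thermal_frame.max() >= threshold_c)

# Toy frame: ambient skin temperature with one hot spot.
frame = np.full((8, 8), 33.5)
frame[3, 4] = 38.6
print(flag_fever(frame))  # → True
```

Mounting such a sensor on a mobile robot or drone, as the researchers suggest, would let one screening system cover far more ground than a fixed camera at an airport gate.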

A more complex challenge—but one that could significantly reduce medical workers’ exposure to the virus—would be to design robots that could automate the collection of the nasal swabs used to test for COVID-19. Similarly, automated blood collection for tests could be of significant help, and researchers are already investigating using ultrasound to help robots locate veins to draw blood from.

Convincing people it’s safe to let a robot stick a swab up their nose or jab a needle in their arm might be a hard sell right now, but a potentially more realistic scenario would be to get robots to carry out laboratory tests on collected samples to reduce exposure to lab technicians. Commercial laboratory automation systems already exist, so this might be a more achievable near-term goal.

Not all solutions need to be automated, though. While autonomous systems will be helpful for reducing the workload of stretched health workers, remotely operated systems can still provide useful distancing. Remote-controlled robotic systems are already becoming increasingly common in the delicate business of surgery, so it would be entirely feasible to create remote systems to carry out more prosaic medical tasks.

Such systems would make it possible for experts to contribute remotely in many different places without having to travel. And robotic systems could combine medical tasks like patient monitoring with equally important social interaction for people who may have been shut off from human contact.

In a teleconference last week, Guang-Zhong Yang, a medical roboticist from Shanghai Jiao Tong University and founding editor of Science Robotics, highlighted the importance of including both doctors and patients in the design of these robots to ensure they are safe and effective, but also to make sure people trust them to observe social protocols and not invade their privacy.

But Yang also stressed the importance of putting the pieces in place to enable the rapid development and deployment of solutions. During the 2015 Ebola outbreak, the White House Office of Science and Technology Policy and the National Science Foundation organized workshops to identify where robotics could help deal with epidemics.

But once the threat receded, attention shifted elsewhere, and by the time the next pandemic came around little progress had been made on potential solutions. The result is that it’s unclear how much help robots will really be able to provide to the COVID-19 response.

That means it’s crucial to invest in a sustained research effort into this field, say the paper’s authors, with more funding and multidisciplinary research partnerships between government agencies and industry so that next time around we will be prepared.

“These events are rare and then it’s just that people start to direct their efforts to other applications,” said Yang. “So I think this time we really need to nail it, because without a sustained approach to this, history will repeat itself and robots won’t be ready.”

Image Credit: ABB’s YuMi collaborative robot, courtesy of ABB
