
#437635 Toyota Research Demonstrates ...

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which reflects the long-term nature of what TRI is trying to do, because the home is both where we’ll need robots the most and where it’s going to be hardest to deploy them. The unpredictable nature of homes, and the fact that homes tend to have squishy fragile people in them, are robot-unfriendly characteristics, but as the population continues to age (an increasingly acute problem in Japan), homes offer an enormous amount of potential for helping us maintain our independence.

Today, Toyota is showing off some of the research that it’s been working on recently, in the form of a virtual reality presentation in lieu of an in-person press event. For journalists, TRI pre-loaded the recording onto a VR headset, which was FedEx’ed to my house. You can watch the entire 40-minute presentation in 360 video on YouTube (or in VR if you have a headset of your own), but if you don’t watch the whole thing, you should at least check out the full-on GLaDOS (with arms) that TRI thinks belongs in your home.

The presentation features an introduction from Gill Pratt, who looks entirely too comfortable embedded inside of one of TRI’s telepresence robots. It covers a lot of territory, but the highlight is almost certainly the new hardware that TRI demonstrates.

Soft bubble gripper

Photo: TRI

This is a “soft bubble gripper,” under development at TRI’s Cambridge, Mass., branch. These passively compliant, air-filled grippers make it easier to grasp many different kinds of objects safely, but the nifty thing is that they’ve got cameras inside them watching a pattern of dots on the interior of the soft membrane.

When the outside of the bubble makes contact with an object, the bubble deforms, and the deformation of the dot pattern on the inside can be tracked by the camera to determine both the direction and magnitude of the applied forces. This is a concept that we’ve seen elsewhere before, but TRI’s implementation is a clever way of making an inherently safe end effector that can still perform all the sensing you need for relatively complex manipulation tasks.
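To make the sensing idea concrete, here’s a minimal sketch of how a force could be estimated from tracked dot displacements. It assumes the dot centroids have already been detected and matched between frames; the stiffness constant and function name are hypothetical stand-ins, not TRI’s actual pipeline.

import numpy as np

def estimate_contact_force(ref_dots, cur_dots, stiffness=0.05):
    """Estimate a net contact force from tracked dot displacements.

    ref_dots, cur_dots: (N, 2) arrays of matched dot centroids (pixels)
    in the undeformed and deformed membrane images.
    stiffness: hypothetical calibration constant (newtons per pixel);
    a real sensor would be calibrated empirically.
    """
    disp = cur_dots - ref_dots                  # per-dot displacement vectors
    mean_disp = disp.mean(axis=0)               # dominant shear direction
    magnitude = stiffness * np.linalg.norm(disp, axis=1).sum()
    direction = mean_disp / (np.linalg.norm(mean_disp) + 1e-9)
    return direction, magnitude

# Toy usage: every dot shifting right suggests a lateral push on the bubble.
ref = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
cur = ref + np.array([2.0, 0.0])
print(estimate_contact_force(ref, cur))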

The bubble gripper was presented at ICRA this year, and you can read the technical paper here.

Ceiling-mounted home robot

Photo: TRI

I don’t know whether robots dangling from the ceiling seemed somehow sinister pre-Portal, but they sure as heck do to me after playing through that game a couple of times, an impression that’s since been reinforced by AUTO from WALL-E.

The reason that we generally see robots mounted on the floor or on tables or on mobile bases is that we’re bipeds, not bats, and giving a robot access to a human-like workspace is easiest if you also give that robot a human-like position and orientation. And if you want to be able to reach stuff high up, you do what TRI did with its previous generation of kitchen manipulator, and just give it the ability to make itself super tall. But TRI is convinced that the ceiling is a good place to put our future home robots:

One innovative concept is a “gantry robot” that would descend from an overhead framework to perform tasks such as loading the dishwasher, wiping surfaces, and clearing clutter. By traveling on the ceiling, the robot avoids the problems of navigating household floor clutter and navigating cramped spaces. When not in use, the robot would tuck itself up out of the way. To further investigate this idea, the team has built a laboratory prototype robot that can do all the same tasks as a floor-based mobile robot but with the innovative overhead mobility system.

The obvious problem with the gantry robot is that you have to install all kinds of stuff in your ceiling for it to work, which makes it very impractical (if not totally impossible) to introduce a system like this into a home that wasn’t built specifically for it. If, however, you do build a home with a robot like this in mind, the animation below from TRI shows how it could be extra useful. Suddenly, stairs are a non-issue. Payload is presumably also a non-issue, since loads can be transferred to the ceiling. Batteries become unnecessary, so the whole robot can be much lighter, which in turn makes it safer. Sensors get a fantastic view, and obstacle avoidance becomes trivial.

Robots as “time machines”

Photo: TRI

TRI’s presentation covered more than what we’ve highlighted here—our focus has been on the hardware prototypes, but TRI had more to talk about, including learning through demonstration, scaling learning through simulation, and how TRI has been working with users to figure out what research directions should be explored. It’s all available right now on YouTube, and it’s well worth 40 minutes of your time.

“What we’re really focused on is this principal idea of amplifying, rather than replacing, human beings”
—Gill Pratt, TRI

It’s only been five years since Toyota announced the $1 billion investment that established TRI, and it feels like the progress that’s been made since then has been substantial. It’s not often that vision, resources, and long-term commitment come together like this, and TRI’s emphasis on making life better for people is one of the things that helps to keep us optimistic about the future of robotics.

“What we’re really focused on is this principal idea of amplifying, rather than replacing, human beings,” Gill Pratt told us. “And what it means to amplify a person, particularly as they’re aging—what we’re really trying to do is build a time machine. This may sound fanciful, and of course we can’t build a real time machine, but maybe we can build robotic assistants to make our lives as we age seem as if we are actually using a time machine.” He explains that it doesn’t mean building robots for convenience or to do our jobs for us. “It means building technology that enables us to continue to live and to work and to relate to each other as if we were younger,” he says. “And that’s really what our main goal is.”

Posted in Human Robots

#437630 How Toyota Research Envisions the Future ...

Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.

To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.

Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?

For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.

IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?

Photo: TRI

Toyota is exploring robots capable of manipulating dishes in a sink and a dishwasher, performing experiments and simulations to make sure that the robots can handle a wide range of conditions.

Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about it at the earliest possible stage with research as well. The hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about, is it possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.

The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.
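As a rough illustration of the kind of large-scale randomized testing Pratt describes, here’s a minimal sketch assuming a generic simulator interface; the sim API, parameter ranges, and task setup are hypothetical stand-ins, not TRI’s actual tooling.

import random

def run_dish_trial(sim, seed):
    # One randomized pick-and-place trial; "sim" is a hypothetical
    # simulator handle, not TRI's real API.
    rng = random.Random(seed)
    sim.reset(
        dish_type=rng.choice(["plate", "bowl", "mug", "wine_glass"]),
        pose_jitter_m=rng.uniform(0.0, 0.05),   # dish position noise, meters
        friction=rng.uniform(0.2, 1.0),
        lighting=rng.uniform(0.3, 1.0),
    )
    return sim.attempt_pick_and_place()          # True on success

def estimate_robustness(sim, n_trials=1_000_000):
    successes = sum(run_dish_trial(sim, seed) for seed in range(n_trials))
    return successes / n_trials                  # success rate across conditions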

A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.

“A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time.”
—Gill Pratt, TRI

In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?

Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.

Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.

Photo: TRI

Toyota recently demonstrated a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.

I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?

Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what that trade is between changing the environment and giving people robotic assistance and tools.

We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.

To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?

Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.
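Here’s a minimal sketch of what that slow loop might look like, assuming a hypothetical robot and human interface; the confidence threshold and method names are illustrative, not TRI’s actual architecture.

def act_with_human_in_the_loop(robot, human, state, threshold=0.8):
    # Slow-loop assistance: the robot asks for help only when unsure.
    action, confidence = robot.policy(state)    # hypothetical policy API
    if confidence < threshold:                  # likely a corner case
        action = human.demonstrate(state)       # a person teaches the fix
        robot.record_demo(state, action)        # robot learns for next time
    return action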

Posted in Human Robots

#437628 Video Friday: An In-Depth Look at Mesmer ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Bear Robotics, a robotics and artificial intelligence company, and SoftBank Robotics Group, a leading robotics manufacturer and solutions provider, have collaborated to bring a new robot named Servi to the food service and hospitality field.

[ Bear Robotics ]

A literal in-depth look at Engineered Arts’ Mesmer android.

[ Engineered Arts ]

Is your robot running ROS? Is it connected to the Internet? Are you actually in control of it right now? Are you sure?

I appreciate how the researchers admitted to finding two of their own robots as part of the scan, a Baxter and a drone.

[ Brown ]

Smile Robotics describes this as “(possibly) world’s first full-autonomous clear-up-the-table robot.”

We’re not qualified to make a judgement on the world firstness, but personally I hate clearing tables, so this robot has my vote.

Smile Robotics founder and CEO Takashi Ogura, along with chief engineer Mitsutaka Kabasawa and engineer Kazuya Kobayashi, are former Google roboticists. Ogura also worked at SCHAFT. Smile says its robot uses ROS and is controlled by a framework written mainly in Rust, adding: “We are hiring Rustacean Roboticists!”

[ Smile Robotics ]

We’re not entirely sure why, but Panasonic has released plans for an Internet of Things system for hamsters.

We devised a “small animal healthcare device” that can measure the weight and activity of small animals, as well as the temperature and humidity of their enclosure, to help manage their health. The device visualizes the animal’s health status and living environment to promote early detection of disease. Imagining the scenes in which such a device would actually be used for a beloved small animal, we hope to help overcome the current difficult situation through manufacturing.

[ Panasonic ] via [ RobotStart ]

Researchers at Yale have developed a robotic fabric, a breakthrough that could lead to such innovations as adaptive clothing, self-deploying shelters, or lightweight shape-changing machinery.

The researchers focused on processing functional materials into fiber form so they could be integrated into fabrics while retaining their advantageous properties. For example, they made variable-stiffness fibers out of an epoxy embedded with particles of Field’s metal, an alloy that liquefies at relatively low temperatures. When cool, the particles are solid metal and make the material stiffer; when warm, the particles melt into liquid and make the material softer.

[ Yale ]

In collaboration with Armasuisse and SBB, RSL demonstrated the use of a teleoperated Menzi Muck M545 to clean up a rock slide in Central Switzerland. The machine can be operated from a teleoperation platform with visual and motion feedback. The walking excavator features an active chassis that can adapt to uneven terrain.

[ ETHZ RSL ]

An international team of JKU researchers is continuing to develop their vision for robots made out of soft materials. A new article in the journal “Communications Materials” demonstrates how these kinds of soft machines react to weak magnetic fields to move very quickly. A triangle-shaped robot can roll itself in air at high speed and walk forward when exposed to an alternating in-plane square-wave magnetic field (3.5 mT, 1.5 Hz). The robot is 18 mm in diameter and 80 µm thick. A six-arm robot can grab, transport, and release non-magnetic objects such as a polyurethane foam cube, controlled by a permanent magnet.

Okay but tell me more about that cute sheep.

[ JKU ]

Interbotix has this “research level robotic crawler,” which both looks mean and runs ROS, a dangerous combination.

And here’s how it all came together:

[ Interbotix ]

I guess if you call them “loitering missile systems” rather than “drones that blow things up” people are less likely to get upset?

[ AeroVironment ]

In this video, we show a planner for a master dual-arm robot to manipulate tethered tools with an assistant dual-arm robot’s help. The assistant robot provides assistance to the master robot by manipulating the tool cable and avoiding collisions. The provided assistance allows the master robot to perform tool placements on the robot workspace table to regrasp the tool, which would typically fail since the tool cable tension may change the tool positions. It also allows the master robot to perform tool handovers, which would normally cause entanglements or collisions with the cable and the environment without the assistance.

[ Harada Lab ]

This video shows a flexible and robust robotic system for autonomous drawing on 3D surfaces. The system takes 2D drawing strokes and a 3D target surface (mesh or point clouds) as input. It maps the 2D strokes onto the 3D surface and generates a robot motion to draw the mapped strokes using visual recognition, grasp pose reasoning, and motion planning.

[ Harada Lab ]

Weekly mobility test. This time the Warthog takes on a fallen tree. Will it cross it? The answer is in the video!

And the answer is: kinda?

[ NORLAB ]

One of the advantages of walking machines is their ability to apply forces in all directions and of various magnitudes to the environment. Many of the multi-legged robots are equipped with point contact feet as these simplify the design and control of the robot. The iStruct project focuses on the development of a foot that allows extensive contact with the environment.

[ DFKI ]

An urgent medical transport was simulated in NASA’s second Systems Integration and Operationalization (SIO) demonstration Sept. 28 with partner Bell Textron Inc. Bell used the remotely-piloted APT 70 to conduct a flight representing an urgent medical transport mission. It is envisioned in the future that an operational APT 70 could provide rapid medical transport for blood, organs, and perishable medical supplies (payload up to 70 pounds). The APT 70 is estimated to move three times as fast as ground transportation.

Always a little suspicious when the video just shows the drone flying, and sitting on the ground, but not that tricky transition between those two states.

[ NASA ]

A Lockheed Martin Robotics Seminar on “Socially Assistive Mobile Robots,” by Yi Guo from Stevens Institute of Technology.

The use of autonomous mobile robots in human environments is on the rise. Assistive robots have been seen in real-world environments, such as robot guides in airports, robot police in public parks, and patrolling robots in supermarkets. In this talk, I will first present current research activities conducted in the Robotics and Automation Laboratory at Stevens. I’ll then focus on robot-assisted pedestrian regulation, where pedestrian flows are regulated and optimized through passive human-robot interaction.

[ UMD ]

This week’s CMU RI Seminar is by CMU’s Zachary Manchester, on “The World’s Tiniest Space Program.”

The aerospace industry has experienced a dramatic shift over the last decade: Flying a spacecraft has gone from something only national governments and large defense contractors could afford to something a small startup can accomplish on a shoestring budget. A virtuous cycle has developed where lower costs have led to more launches and the growth of new markets for space-based data. However, many barriers remain. This talk will focus on driving these trends to their ultimate limit by harnessing advances in electronics, planning, and control to build spacecraft that cost less than a new smartphone and can be deployed in large numbers.

[ CMU RI ]

Posted in Human Robots

#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine (WSE) is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.
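A quick back-of-envelope comparison using those figures (treating both dies as squares; this arithmetic is my illustration, not Cerebras’s):

wse_area = 8.5 ** 2             # ~72.25 square inches of silicon
a100_area = 1.0 ** 2            # ~1 square inch
print(wse_area / a100_area)     # ~72x the area...
print(1.2e12 / 54e9)            # ...but roughly 22x the transistors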

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than the Joule 2.0 supercomputer completed a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” At a point, scaling a supercomputer like Joule no longer produces better results in this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
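That plateau is the classic strong-scaling limit: as per-core work shrinks, fixed costs such as communication come to dominate. A toy Amdahl’s-law model makes the shape of the curve clear (my illustration, not the paper’s actual communication analysis):

def strong_scaling_speedup(serial_fraction, cores):
    # Amdahl's law: only the parallelizable share of the work speeds up.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even if 99.99 percent of the work parallelizes perfectly, returns
# collapse long before you reach 86,400 cores.
for n in (1_024, 16_384, 86_400):
    print(n, round(strong_scaling_speedup(1e-4, n)))  # ~929, ~6210, ~8963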

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient compared to a traditional supercomputer that needs to network a ton of traditional chips.
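This locality is also why a fluid simulation maps so well onto the wafer: stencil-style updates only ever need data from adjacent cells. Here’s a minimal sketch of that pattern, a toy Jacobi update in NumPy (my illustration, not Cerebras’s programming model), where each wafer core would own one tile of the grid and exchange only tile edges with its four neighbors:

import numpy as np

def jacobi_step(grid):
    # One stencil update: every interior cell averages its four neighbors.
    # On a wafer-scale mesh, each core would hold one tile of this grid in
    # local memory and swap only tile edges with its neighbors, so no data
    # ever has to cross the whole chip.
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

field = np.zeros((64, 64))
field[0, :] = 100.0              # hot boundary, e.g. a heated wall
for _ in range(500):
    field = jacobi_step(field)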

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras

Posted in Human Robots

#437407 Nvidia’s Arm Acquisition Brings the ...

Artificial intelligence and mobile computing have been two of the most disruptive technologies of this century. The unification of the two companies that made them possible could have wide-ranging consequences for the future of computing.

California-based Nvidia’s graphics processing units (GPUs) have powered the deep learning revolution ever since Google researchers discovered in 2011 that they could run neural networks far more efficiently than conventional CPUs. UK company Arm’s energy-efficient chip designs have dominated the mobile and embedded computing markets for even longer.

Now the two will join forces after the American company announced a $40 billion deal to buy Arm from its Japanese owner, SoftBank. In a press release announcing the deal, Nvidia touted its potential to rapidly expand the reach of AI into all areas of our lives.

“In the years ahead, trillions of computers running AI will create a new internet-of-things that is thousands of times larger than today’s internet-of-people,” said Nvidia founder and CEO Jensen Huang. “Uniting NVIDIA’s AI computing capabilities with the vast ecosystem of Arm’s CPU, we can advance computing from the cloud, smartphones, PCs, self-driving cars and robotics, to edge IoT, and expand AI computing to every corner of the globe.”

There are good reasons to believe the hype. The two companies are absolutely dominant in their respective fields—Nvidia’s GPUs support more than 97 percent of AI computing infrastructure offered by big cloud service providers, and Arm’s chips power more than 90 percent of smartphones. And there’s little overlap in their competencies, which means the relationship could be a truly symbiotic one.

“I think the deal ‘fits like a glove’ in that Arm plays in areas that Nvidia does not or isn’t that successful, while NVIDIA plays in many places Arm doesn’t or isn’t that successful,” analyst Patrick Moorhead wrote in Forbes.

One of the most obvious directions would be to expand Nvidia’s AI capabilities to the kind of low-power edge devices that Arm excels in. There’s growing demand for AI in devices like smartphones, wearables, cars, and drones, where transmitting data to the cloud for processing is undesirable either for reasons of privacy or speed.

But there might also be fruitful exchanges in the other direction. Huang told Moorhead a major focus would be bringing Arm’s expertise in energy efficiency to the data center. That’s a big concern for technology companies whose electricity bills and green credentials are taking a battering thanks to the huge amounts of energy required to run millions of computer chips around the clock.

The deal may not be plain sailing, though, most notably due to the two companies’ differing business models. While Nvidia sells ready-made processors, Arm simply creates chip designs and then licenses them to other companies, which can then customize them to their particular hardware needs. It operates on an open-license basis whereby any company with the necessary cash can access its designs.

As a result, its designs are found in products built by hundreds of companies that license its innovations, including Apple, Samsung, Huawei, Qualcomm, and even Nvidia. Some, including two of the company’s co-founders, have raised concerns that the purchase by Nvidia, which competes with many of these other companies, could harm the neutrality that has been central to its success.

It’s possible this could push more companies towards RISC-V, an open-source technology developed by researchers at the University of California at Berkeley that rivals Arm’s and is not owned by any one company. However, there are plenty of reasons why most companies still prefer Arm over the less feature-rich open-source option, and it might take a considerable push to convince Arm’s customers to jump ship.

The deal will also have to navigate some thorny political issues. Unions, politicians, and business leaders in the UK have voiced concerns that it could lead to the loss of high-tech jobs, and government sources have suggested conditions could be placed on the deal.

Regulators in other countries could also put a spanner in the works. China is concerned that if Arm becomes US-owned, many of the Chinese companies that rely on its technology could become victims of export restrictions as the China-US trade war drags on. South Korea is also wary that the deal could create a new technology juggernaut that could dent Samsung’s growth in similar areas.

Nvidia has made commitments to keep Arm’s headquarters in the UK, which it says should lessen concerns around jobs and export restrictions. It’s also pledged to open a new world-class technology center in Cambridge and build a state-of-the-art AI supercomputer powered by Arm’s chips there. Whether the deal goes through still hangs in the balance, but if it does, it could spur a whole new wave of AI innovation.

Image Credit: Nvidia

Posted in Human Robots