#435660 Toyota Research Developing New ...

With the Olympics taking place next year in Japan, Toyota is (among other things) stepping up its robotics game to help provide “mobility for all.” We know that Toyota’s HSR will be doing work there, along with a few other mobile systems, but the Toyota Research Institute (TRI) has just announced a new telepresence robot called the T-TR1, featuring an absolutely massive screen designed to give you a near-lifesize virtual presence.

T-TR1 is a virtual mobility/tele-presence robot developed by Toyota Research Institute in the United States. It is equipped with a camera atop a large, near-lifesize display.
By projecting an image of a user from a remote location, the robot will help that person feel more physically present at the robot’s location.
With the T-TR1, Toyota aims to give people who are physically unable to attend events such as the Games a chance to attend virtually, with an on-screen presence and two-way conversation between the locations.

TRI isn’t ready to share much more detail on this system yet (we asked, of course), but we can infer some things from the video and the rest of the info that’s out there. For example, that ball on top is a 360-degree camera (that looks a lot like an Insta360 Pro), giving the remote user just as good of an awareness of their surroundings as they would if they were there in person. There are multiple 3D-sensing systems, including at least two depth cameras plus a lidar at the base. It’s not at all clear whether the robot is autonomous or semi-autonomous (using the sensors for automated obstacle avoidance, say), and since the woman on the other end of the robot does not seem to be controlling it at all for the demo, it’s hard to make an educated guess about the level of autonomy, or even how it’s supposed to be controlled.

We really like that enormous screen—despite the fact that telepresence now requires pants. It adds to the embodiment that makes independent telepresence robots useful. It’s also nice that the robot can move fast enough to keep up with a person walking briskly. Hopefully, it’s safe for it to move at that speed in an environment more realistic than a carpeted, half-empty conference room, although it’ll probably have to lean on all of those sensors to do so. The other challenge for the T-TR1 will be bandwidth—even assuming that all of the sensor data processing is done on the robot, 360 cameras are huge bandwidth hogs, plus there’s the primary (presumably high-quality) feed from the main camera, and then the video of the user coming the other way. That’s a lot of data in a very latency-sensitive application, and the robot will presumably be operating in places where connectivity is a challenge because of crowds. This has always been a problem for telepresence robots—no matter how amazing your robot is, the experience will often, for better or worse, be defined by Internet connections that you may have no control over.
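For a rough sense of scale, here is a back-of-envelope bandwidth estimate. The bitrates are our own assumptions based on typical compressed video streams (Toyota hasn't published any numbers), but even conservative figures add up quickly:

```python
# Back-of-envelope bandwidth estimate for a telepresence robot like the T-TR1.
# All bitrates are assumptions based on typical compressed-video figures,
# not specifications from Toyota or TRI.

streams_mbps = {
    "360-degree camera uplink (4K equirectangular)": 25.0,       # assumed
    "primary camera uplink (1080p)": 5.0,                        # assumed
    "remote user video downlink (near-lifesize display)": 8.0,   # assumed
    "audio, telemetry, and control": 0.5,                        # assumed
}

total_mbps = sum(streams_mbps.values())
for name, mbps in streams_mbps.items():
    print(f"{name}: {mbps:.1f} Mbps")
print(f"Total sustained bandwidth: {total_mbps:.1f} Mbps")

# Sustaining tens of megabits per second with low, stable latency over a
# crowded venue network is the hard part.
print(f"Data transferred per hour: {total_mbps * 3600 / 8 / 1000:.1f} GB")
```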

We should emphasize that Toyota has only released the bare minimum of information about the T-TR1, although we’re told that we can expect more as the 2020 Olympics approach: opening ceremonies are one year from today.

[ TRI ]


#435646 Video Friday: Kiki Is a New Social Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

The DARPA Subterranean Challenge tunnel circuit takes place in just a few weeks, and we’ll be there!

[ DARPA SubT ]

Time-lapse video of the robotic arm on NASA’s Mars 2020 rover handily maneuvering 88 pounds (40 kilograms) worth of sensor-laden turret as it moves from a deployed to a stowed configuration.

If you haven’t read our interview with Matt Robinson, now would be a great time, since he’s one of the folks at JPL who designed this arm.

[ Mars 2020 ]

Kiki is a small, white, stationary social robot with an evolving personality who promises to be your friend. It costs $800 and is currently on Kickstarter.

The Kickstarter page is filled with the same type of overpromising that we’ve seen with other (now very dead) social robots: Kiki is “conscious,” “understands your feelings,” and “loves you back.” Oof. That said, we’re happy to see more startups trying to succeed in this space, which is certainly one of the toughest in consumer electronics, and hopefully they’ve been learning from the recent string of failures. And we have to say Kiki is a cute robot. Its overall design, especially the body mechanics and expressive face, looks neat. And kudos to the team—the company was founded by two ex-Googlers, Mita Yun and Jitu Das—for including the “unedited prototype videos,” which help counterbalance the hype.

Another thing that Kiki has going for it is that everything runs on the robot itself. This simplifies privacy and means that the robot won’t partially die on you if the company behind it goes under, but also limits how clever the robot will be able to be. The Kickstarter campaign is already over a third funded, so… we’ll see.

[ Kickstarter ]

When your UAV isn’t enough UAV, so you put a UAV on your UAV.

[ CanberraUAV ]

ABB’s YuMi is testing ATMs because a human trying to do this task would go broke almost immediately.

[ ABB ]

DJI has a fancy new FPV system that features easy setup, digital HD streaming at up to 120 FPS, and <30ms latency.

If it looks expensive, that’s because it costs $930 with the remote included.

[ DJI ]

Honeybee Robotics has recently developed a regolith excavation and rock cleaning system for NASA JPL’s PUFFER rovers. This system, called POCCET (PUFFER-Oriented Compact Cleaning and Excavation Tool), uses compressed gas to perform all excavation and cleaning tasks. Weighing less than 300 grams with potential for further mass reduction, POCCET can be used not just on the Moon, but on other Solar System bodies such as asteroids, comets, and even Mars.

[ Honeybee Robotics ]

DJI’s 2019 RoboMaster tournament, which takes place this month in Shenzhen, looks like it’ll be fun to watch, with plenty of action and rules that are easy to understand.

[ RoboMaster ]

Robots and baked goods are an automatic Video Friday inclusion.

Wow I want a cupcake right now.

[ Soft Robotics ]

The ICRA 2019 Best Paper Award went to Michelle A. Lee at Stanford, for “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks.”

The ICRA video is here, and you can find the paper at the link below.

[ Paper ] via [ RoboHub ]

Cobalt Robotics put out a bunch of marketing-y videos this week, but this one is reasonably interesting, even if you’re familiar with what they’re doing over there.

[ Cobalt Robotics ]

RightHand Robotics launched RightPick2 with a gala event, which looked like fun as long as you were really, really into robots.

[ RightHand Robotics ]

Thanks Jeff!

This video presents a framework for whole-body control applied to the assistive robotic system EDAN. We show how the proposed method can be used for a task like opening, passing through, and closing a door. We also show the efficiency of the whole-body coordination when controlling the end-effector with respect to a fixed reference, and how easily the system can be manually maneuvered by direct interaction with the end-effector, without the need for an extra input device.

[ DLR ]

You’ll probably need to turn on auto-translated subtitles for most of this, but it’s worth it for the adorable little single-seat robotic car designed to help people get around airports.

[ ZMP ]

In this week’s episode of Robots in Depth, Per speaks with Gonzalo Rey from Moog about their fancy 3D printed integrated hydraulic actuators.

Gonzalo talks about how Moog got started with hydraulic control, taking part in the space program and early robotics development. He shares how Moog’s technology is used in fly-by-wire systems in aircraft and in flow control in deep space probes. They have even reached Mars.

[ Robots in Depth ]


#435628 Soft Exosuit Makes Walking and Running ...

Researchers at Harvard’s Wyss Institute have been testing a flexible, lightweight exosuit that can improve your metabolic efficiency by 4 to 10 percent while walking and running. This is very important because, according to a press release from Harvard, the suit can help you be faster and more efficient, whether you’re “walking at a leisurely pace,” or “running for your life.” Great!

Making humans better at running for their lives is something that we don’t put nearly enough research effort into, I think. The problem may not come up very often, but when it does, it’s super important (because, bears). So, sign me up for anything that we can do to make our desperate flights faster or more efficient—especially if it’s a lightweight, wearable exosuit that’s soft, flexible, and comfortable to wear.

This is the same sort of exosuit that was part of a DARPA program that we wrote about a few years ago, which was designed to make it easier for soldiers to carry heavy loads for long distances.

Photos: Wyss Institute at Harvard University

The system uses two waist-mounted electrical motors connected with cables to thigh straps that run down around your butt. The motors pull on the cables at the same time that your muscles actuate, helping them out and reducing the amount of work that your muscles put in without decreasing the amount of force they exert on your legs. The entire suit (batteries included) weighs 5 kilograms (11 pounds).

In order for the cables to actuate at the right time, the suit tracks your gait with two inertial measurement units (IMUs) on the thighs and one on the waist, and then adjusts its actuation profile accordingly. It works well, too, with measurable increases in performance:

We show that a portable exosuit that assists hip extension can reduce the metabolic rate of treadmill walking at 1.5 meters per second by 9.3 percent and that of running at 2.5 meters per second by 4.0 percent compared with locomotion without the exosuit. These reduction magnitudes are comparable to the effects of taking off 7.4 and 5.7 kilograms during walking and running, respectively, and are in a range that has shown meaningful athletic performance changes.
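To make the gait-sensing idea a little more concrete, here is a minimal sketch of detecting a gait event from a thigh-angle signal and timing assistance off of it. To be clear, this is not the Wyss team's controller (their actuation profiles are tuned experimentally and are far more sophisticated); it is just an illustration of the general approach, using a synthetic signal and made-up timing parameters.

```python
# Hypothetical sketch of IMU-based gait-phase timing. This is NOT the Wyss
# Institute's controller; it only illustrates detecting a gait event from a
# thigh-angle signal and scheduling assistance at a fixed fraction of the stride.

import math

def thigh_angle(t, stride_period=1.1):
    """Synthetic thigh-swing angle in degrees (stand-in for a real IMU signal)."""
    return 30.0 * math.sin(2 * math.pi * t / stride_period)

def detect_peaks(samples, dt):
    """Return times of local maxima (a proxy for peak hip flexion)."""
    return [i * dt for i in range(1, len(samples) - 1)
            if samples[i - 1] < samples[i] >= samples[i + 1]]

dt = 0.005  # 200 Hz sampling rate, an assumption
signal = [thigh_angle(i * dt) for i in range(int(5.0 / dt))]
peaks = detect_peaks(signal, dt)

# Estimate the stride period from consecutive peaks, then schedule the cable
# pull at an assumed 20 percent of the stride after peak flexion.
if len(peaks) >= 2:
    stride = peaks[-1] - peaks[-2]
    onset = peaks[-1] + 0.2 * stride
    print(f"Estimated stride period: {stride:.2f} s")
    print(f"Next assistance onset scheduled at t = {onset:.2f} s")
```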

By increasing your efficiency, you can think of the suit as being able to make you walk or run faster, or farther, or carry a heavier load, all while spending the same amount of energy (or less), which could be just enough to outrun the bear that’s chasing you. Plus, it doesn’t appear to be uncomfortable to wear, and doesn’t require the user to do anything differently, which means that (unlike most robotics things) it’s maybe actually somewhat practical for real-world use—whether you’re indoors or outdoors, or walking or running, or being chased by a bear or not.

Sadly, I have no idea when you might be able to buy one of these things. But the researchers are looking for ways to make the suit even easier to use, while also reducing the weight and making the efficiency increase more pronounced. Harvard’s Conor Walsh says they’re “excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors.” As a weekend warrior who is not entirely sure whether he can outrun a bear, I’m excited for this.

Reducing the metabolic rate of walking and running with a versatile, portable exosuit, by Jinsoo Kim, Giuk Lee, Roman Heimgartner, Dheepak Arumukhom Revi, Nikos Karavas, Danielle Nathanson, Ignacio Galiana, Asa Eckert-Erdheim, Patrick Murphy, David Perry, Nicolas Menard, Dabin Kim Choe, Philippe Malcolm, and Conor J. Walsh from the Wyss Institute for Biologically Inspired Engineering at Harvard University, appears in the current issue of Science.


#435619 Video Friday: Watch This Robot Dog ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, CA, USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Team PLUTO (University of Pennsylvania, Ghost Robotics, and Exyn Technologies) put together this video giving us a robot’s-eye-view (or whatever they happen to be using for eyes) of the DARPA Subterranean Challenge tunnel circuits.

[ PLUTO ]

Zhifeng Huang has been improving his jet-stepping humanoid robot, which features new hardware and the ability to take larger and more complex steps.

This video reports the latest progress of an ongoing project that uses a ducted-fan propulsion system to improve a humanoid robot’s ability to step over large ditches. The landing point of the robot’s swing foot can be placed not only in front of the robot but also to the side. While maintaining quasi-static balance, the robot was able to step over a ditch 450 mm wide (up to 97 percent of its leg length) using 3D stepping.

[ Paper ]

Thanks Zhifeng!

These underactuated hands from Matei Ciocarlie’s lab at Columbia are magically able to reconfigure themselves to grasp different object types with just one or two motors.

[ Paper ] via [ ROAM Lab ]

This is one reason we should pursue not “autonomous cars” but “fully autonomous cars” that never require humans to take over. We can’t be trusted.

During our early days as the Google self-driving car project, we invited some employees to test our vehicles on their commutes and weekend trips. What we were testing at the time was similar to the highway driver assist features that are now available on cars today, where the car takes over the boring parts of the driving, but if something outside its ability occurs, the driver has to take over immediately.

What we saw was that our testers put too much trust in that technology. They were doing things like texting, applying makeup, and even falling asleep that made it clear they would not be ready to take over driving if the vehicle asked them to. This is why we believe that nothing short of full autonomy will do.

[ Waymo ]

Buddy is a DIY and fetchingly minimalist social robot (of sorts) that will be coming to Kickstarter this month.

We have created a new Arduino kit. His name is Buddy. He is a DIY social robot meant to serve as a replacement for Jibo, Cozmo, or any of the other bots that are no longer available. Fully 3D printed and supported, he adds much more to our series of Arduino STEM robotics kits.

Buddy is able to look around and map his surroundings and react to changes within them. He can be surprised and he will always have a unique reaction to changes. The kit can be built very easily in less than an hour. It is even robust enough to take the abuse that kids can give it in a classroom.

[ Littlebots ]

The android Mindar, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom. Developed at a cost of almost $1 million (¥106 million) in a joint project between the Zen temple and robotics professor Hiroshi Ishiguro, the robot teaches about compassion and the dangers of desire, anger and ego.

[ Japan Times ]

I’m not sure whether it’s the sound or what, but this thing scares me for some reason.

[ BIRL ]

This gripper uses magnets as a sort of adjustable spring for dynamic stiffness control, which seems pretty clever.

[ Buffalo ]

What a package of medicine sees while being flown by drone from a hospital to a remote clinic in the Dominican Republic. The drone flew 11 km horizontally and 800 meters vertically, and I can’t even imagine what it would take to make that drive.

[ WeRobotics ]

My first ride in a fully autonomous car was at Stanford in 2009. I vividly remember getting in the back seat of a descendant of Junior, and watching the steering wheel turn by itself as the car executed a perfect parking maneuver. Ten years later, it’s still fun to watch other people have that experience.

[ Waymo ]

Flirtey, the pioneer of the commercial drone delivery industry, has unveiled the much-anticipated first video of its next-generation delivery drone, the Flirtey Eagle. The aircraft designer and manufacturer also unveiled the Flirtey Portal, a sophisticated take off and landing platform that enables scalable store-to-door operations; and an autonomous software platform that enables drones to deliver safely to homes.

[ Flirtey ]

EPFL scientists are developing new approaches for improved control of robotic hands – in particular for amputees – that combine individual finger control and automation for improved grasping and manipulation. This interdisciplinary proof-of-concept between neuroengineering and robotics was successfully tested on three amputees and seven healthy subjects.

[ EPFL ]

This video is a few years old, but we’ll take any excuse to watch the majestic sage-grouse be majestic in all their majesticness.

[ UC Davis ]

I like the idea of a game of soccer (or, football to you weirdos in the rest of the world) where the ball has a mind of its own.

[ Sphero ]

Looks like the whole delivery glider idea is really taking off! Or, you know, not taking off.

Weird that they didn’t show the landing, because it sure looked like it was going to plow into the side of the hill at full speed.

[ Yates ] via [ sUAS News ]

This video is from a 2018 paper, but it’s not like we ever get tired of seeing quadrupeds do stuff, right?

[ MIT ]

Founder and Head of Product, Ian Bernstein, and Head of Engineering, Morgan Bell, have been involved in the Misty project for years and they have learned a thing or two about building robots. Hear how and why Misty evolved into a robot development platform, learn what some of the earliest prototypes did (and why they didn’t work for what we envision), and take a deep dive into the technology decisions that form the Misty II platform.

[ Misty Robotics ]

Lex Fridman interviews Vijay Kumar on the Artificial Intelligence Podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is from Ross Knepper at Cornell, on Formalizing Teamwork in Human-Robot Interaction.

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

[ CMU RI ]

In this week’s episode of Robots in Depth, Per speaks with Julien Bourgeois about Claytronics, a project from Carnegie Mellon and Intel to develop “programmable matter.”

Julien started out as a computer scientist. He was always interested in robotics privately but then had the opportunity to get into micro robots when his lab was merged into the FEMTO-ST Institute. He later worked with Seth Copen Goldstein at Carnegie Mellon on the Claytronics project.

Julien shows an enlarged mock-up of the small robots that make up programmable matter, catoms, and speaks about how they are designed. Currently he is working on a unit that is one centimeter in diameter and he shows us the very small CPU that goes into that model.

[ Robots in Depth ]


#435614 3 Easy Ways to Evaluate AI Claims

When every other tech startup claims to use artificial intelligence, it can be tough to figure out if an AI service or product works as advertised. In the midst of the AI “gold rush,” how can you separate the nuggets from the fool’s gold?

There’s no shortage of cautionary tales involving overhyped AI claims. And applying AI technologies to health care, education, and law enforcement means that getting it wrong can have real consequences for society—not just for investors who bet on the wrong unicorn.

So IEEE Spectrum asked experts to share their tips for how to identify AI hype in press releases, news articles, research papers, and IPO filings.

“It can be tricky, because I think the people who are out there selling the AI hype—selling this AI snake oil—are getting more sophisticated over time,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative.

The term “AI” is perhaps most frequently used to describe machine learning algorithms (and deep learning algorithms, which require even less human guidance) that analyze huge amounts of data and make predictions based on patterns that humans might miss. These popular forms of AI are mostly suited to specialized tasks, such as automatically recognizing certain objects within photos. For that reason, they are sometimes described as “weak” or “narrow” AI.

Some researchers and thought leaders like to talk about the idea of “artificial general intelligence” or “strong AI” that has human-level capacity and flexibility to handle many diverse intellectual tasks. But for now, this type of AI remains firmly in the realm of science fiction and is far from being realized in the real world.

“AI has no well-defined meaning and many so-called AI companies are simply trying to take advantage of the buzz around that term,” says Arvind Narayanan, a computer scientist at Princeton University. “Companies have even been caught claiming to use AI when, in fact, the task is done by human workers.”

Here are three ways to recognize AI hype.

Look for Buzzwords
One red flag is what Hwang calls the “hype salad.” This means stringing together the term “AI” with many other tech buzzwords such as “blockchain” or “Internet of Things.” That doesn’t automatically disqualify the technology, but spotting a high volume of buzzwords in a post, pitch, or presentation should raise questions about what exactly the company or individual has developed.

Other experts agree that strings of buzzwords can be a red flag. That’s especially true if the buzzwords are never really explained in technical detail, and are simply tossed around as vague, poorly-defined terms, says Marzyeh Ghassemi, a computer scientist and biomedical engineer at the University of Toronto in Canada.

“I think that if it looks like a Google search—picture ‘interpretable blockchain AI deep learning medicine’—it's probably not high-quality work,” Ghassemi says.

Hwang also suggests mentally replacing all mentions of “AI” in an article with the term “magical fairy dust.” It’s a way of seeing whether an individual or organization is treating the technology like magic. If so—that’s another good reason to ask more questions about what exactly the AI technology involves.

And even the visual imagery used to illustrate AI claims can indicate that an individual or organization is overselling the technology.

“I think that a lot of the people who work on machine learning on a day-to-day basis are pretty humble about the technology, because they’re largely confronted with how frequently it just breaks and doesn't work,” Hwang says. “And so I think that if you see a company or someone representing AI as a Terminator head, or a big glowing HAL eye or something like that, I think it’s also worth asking some questions.”

Interrogate the Data

It can be hard to evaluate AI claims without any relevant expertise, says Ghassemi at the University of Toronto. Even experts need to know the technical details of the AI algorithm in question and have some access to the training data that shaped the AI model’s predictions. Still, savvy readers with some basic knowledge of applied statistics can search for red flags.

To start, readers can look for possible bias in training data based on small sample sizes or a skewed population that fails to reflect the broader population, Ghassemi says. After all, an AI model trained only on health data from white men would not necessarily achieve similar results for other populations of patients.
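One practical way to probe for this is to ask for performance broken out by subgroup rather than as a single aggregate number. Here is a quick sketch with made-up numbers (not from any real study) showing how an impressive-looking overall accuracy can hide a group the model serves poorly:

```python
# Hypothetical example of how an aggregate accuracy figure can hide subgroup bias.
# All numbers are made up for illustration.

groups = {
    # group: (number of patients, model accuracy within that group)
    "group A (well represented in training data)": (9000, 0.95),
    "group B (under-represented in training data)": (1000, 0.70),
}

total_n = sum(n for n, _ in groups.values())
overall = sum(n * acc for n, acc in groups.values()) / total_n

print(f"Overall accuracy: {overall:.1%}")  # 92.5%, which sounds great
for name, (n, acc) in groups.items():
    print(f"{name}: {acc:.1%} on {n} patients")  # the gap only shows up here
```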

“For me, a red flag is not demonstrating deep knowledge of how your labels are defined.”
—Marzyeh Ghassemi, University of Toronto

How machine learning and deep learning models perform also depends on how well humans labeled the sample datasets used to train these programs. This task can be straightforward when labeling photos of cats versus dogs, but gets more complicated when assigning disease diagnoses to certain patient cases.

Medical experts frequently disagree with each other on diagnoses—which is why many patients seek a second opinion. Not surprisingly, this ambiguity can also affect the diagnostic labels that experts assign in training datasets. “For me, a red flag is not demonstrating deep knowledge of how your labels are defined,” Ghassemi says.

Such training data can also reflect the cultural stereotypes and biases of the humans who labeled the data, says Narayanan at Princeton University. Like Ghassemi, he recommends taking a hard look at exactly what the AI has learned: “A good way to start critically evaluating AI claims is by asking questions about the training data.”

Another red flag is presenting an AI system’s performance through a single accuracy figure without much explanation, Narayanan says. Claiming that an AI model achieves “99 percent” accuracy doesn’t mean much without knowing the baseline for comparison—such as whether other systems have already achieved 99 percent accuracy—or how well that accuracy holds up in situations beyond the training dataset.

Narayanan also emphasized the need to ask questions about an AI model’s false positive rate—the rate of making wrong predictions about the presence of a given condition. Even if the false positive rate of a hypothetical AI service is just one percent, that could have major consequences if that service ends up screening millions of people for cancer.
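Narayanan's point is easy to check with a base-rate calculation. The prevalence and rates below are illustrative assumptions, not figures from any real screening program, but they show how even a 1 percent false positive rate plays out at population scale:

```python
# Why a "1 percent false positive rate" matters when screening millions of people.
# Prevalence, sensitivity, and false positive rate are illustrative assumptions.

population = 1_000_000       # people screened
prevalence = 0.005           # 0.5 percent actually have the condition
sensitivity = 0.99           # true positive rate
false_positive_rate = 0.01

sick = population * prevalence
healthy = population - sick

true_positives = sick * sensitivity
false_positives = healthy * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"Chance that a flagged person actually has the condition: {precision:.1%}")
# In this scenario, roughly two out of three positive results are false alarms.
```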

Readers can also consider whether using AI in a given situation offers any meaningful improvement compared to traditional statistical methods, says Clayton Aldern, a data scientist and journalist who serves as managing director for Caldern LLC. He gave the hypothetical example of a “super-duper-fancy deep learning model” that achieves a prediction accuracy of 89 percent, compared to a “little polynomial regression model” that achieves 86 percent on the same dataset.

“We're talking about a three-percentage-point increase on something that you learned about in Algebra 1,” Aldern says. “So is it worth the hype?”
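That comparison is easy to run for yourself: fit a simple baseline on the same data before trusting the fancier model. Here is a sketch using scikit-learn on synthetic data (the dataset and models are stand-ins for illustration, not the ones from Aldern's example):

```python
# Compare a simple baseline against a fancier model on the same synthetic data.
# The point is the habit of checking the baseline, not these particular models.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
fancy = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

for name, model in [("simple baseline", baseline), ("fancier model", fancy)]:
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.1%} accuracy")

# If the gap is only a couple of percentage points, ask whether the added
# complexity (and opacity) is worth it for your application.
```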

Don’t Ignore the Drawbacks

The hype surrounding AI isn’t just about the technical merits of services and products driven by machine learning. Overblown claims about the beneficial impacts of AI technology—or vague promises to address ethical issues related to deploying it—should also raise red flags.

“If a company promises to use its tech ethically, it is important to question if its business model aligns with that promise,” Narayanan says. “Even if employees have noble intentions, it is unrealistic to expect the company as a whole to resist financial imperatives.”

One example might be a company with a business model that depends on leveraging customers’ personal data. Such companies “tend to make empty promises when it comes to privacy,” Narayanan says. And, if companies hire workers to produce training data, it’s also worth asking whether the companies treat those workers ethically.

The transparency—or lack thereof—about any AI claim can also be telling. A company or research group can minimize concerns by publishing technical claims in peer-reviewed journals or allowing credible third parties to evaluate their AI without giving away big intellectual property secrets, Narayanan says. Excessive secrecy is a big red flag.

With these strategies, you don’t need to be a computer engineer or data scientist to start thinking critically about AI claims. And, Narayanan says, the world needs many people from different backgrounds for societies to fully consider the real-world implications of AI.

Editor’s Note: The original version of this story misspelled Clayton Aldern’s last name as Alderton.
