Tag Archives: short

#436202 Trump CTO Addresses AI, Facial ...

Michael Kratsios, the Chief Technology Officer of the United States, took the stage at Stanford University last week to field questions from Stanford’s Eileen Donahoe and attendees at the 2019 Fall Conference of the Institute for Human-Centered Artificial Intelligence (HAI).

Kratsios, the fourth to hold the U.S. CTO position since its creation by President Barack Obama in 2009, was confirmed in August as President Donald Trump’s first CTO. Before joining the Trump administration, he was chief of staff at investment firm Thiel Capital and chief financial officer of hedge fund Clarium Capital. Donahoe is Executive Director of Stanford’s Global Digital Policy Incubator and served as the first U.S. Ambassador to the United Nations Human Rights Council during the Obama Administration.

The conversation jumped around, hitting on both accomplishments and controversies. Kratsios touted the administration’s success in fixing policy around the use of drones, its memorandum on STEM education, and an increase in funding for basic research in AI—though the magnitude of that increase wasn’t specified. He pointed out that the Trump administration’s AI policy has been a continuation of the policies of the Obama administration, and will continue to build on that foundation. As proof of this, he pointed to Trump’s signing of the American AI Initiative earlier this year. That executive order, Kratsios said, was intended to bring various government agencies together to coordinate their AI efforts and to push the idea that AI is a tool for the American worker. The AI Initiative, he noted, also took into consideration that AI will cause job displacement, and asked private companies to pledge to retrain workers.

The administration, he said, is also looking to remove barriers to AI innovation. In service of that goal, the government will, in the next month or so, release a regulatory guidance memo instructing government agencies about “how they should think about AI technologies,” said Kratsios.

U.S. vs China in AI

A few of the exchanges between Kratsios and Donahoe hit on current hot topics, starting with the tension between the U.S. and China.

Donahoe:

“You talk a lot about the unique U.S. ecosystem. In which aspect of AI is the U.S. dominant, and where is China challenging us in dominance?”

Kratsios:

“They are challenging us on machine vision. They have more data to work with, given that they have surveillance data.”

Donahoe:

“To what extent would you say the quantity of data collected and available will be a determining factor in AI dominance?”

Kratsios:

“It makes a big difference in the short term. But we do research on how we get over these data humps. There is a future where you don’t need as much data; a lot of federal grants are going to [research in] how you can train models using less data.”

Donahoe turned the conversation to a different tension—that between innovation and values.

Donahoe:

“A lot of conversation yesterday was about the tension between innovation and values, and how do you hold those things together and lead in both realms.”

Kratsios:

“We recognized that the U.S. hadn’t signed on to principles around developing AI. In May, we signed [the Organization for Economic Cooperation and Development Principles on Artificial Intelligence], coming together with other Western democracies to say that these are values that we hold dear.

[Meanwhile,] we have adversaries around the world using AI to surveil people, to suppress human rights. That is why American leadership is so critical: We want to come out with the next great product. And we want our values to underpin the use cases.”

A member of the audience pushed further:

“Maintaining U.S. leadership in AI might have costs in terms of individuals and society. What costs should individuals and society bear to maintain leadership?”

Kratsios:

“I don’t view the world that way. Our companies big and small do not hesitate to talk about the values that underpin their technology. [That is] markedly different from the way our adversaries think. The alternatives are so dire [that we] need to push efforts to bake the values that we hold dear into this technology.”

Facial recognition

And then the conversation turned to the use of AI for facial recognition, an application which (at least for police and other government agencies) was recently banned in San Francisco.

Donahoe:

“Some private sector companies have called for government regulation of facial recognition, and there already are some instances of local governments regulating it. Do you expect federal regulation of facial recognition anytime soon? If not, what ought the parameters be?”

Kratsios:

“A patchwork of regulation of technology is not beneficial for the country. We want to avoid that. Facial recognition has important roles—for example, finding lost or displaced children. There are use cases, but they need to be underpinned by values.”

A member of the audience followed up on that topic, referring to some data presented earlier at the HAI conference on bias in AI:

“Frequently the example of finding missing children is given as the example of why we should not restrict use of facial recognition. But we saw Joy Buolamwini’s presentation on bias in data. I would like to hear your thoughts about how government thinks we should use facial recognition, knowing about this bias.”

Kratsios:

“Fairness, accountability, and robustness are things we want to bake into any technology—not just facial recognition—as we build rules governing use cases.”

Immigration and innovation

A member of the audience brought up the issue of immigration:

“One major pillar of innovation is immigration. Does your office advocate for it?”

Kratsios:

“Our office pushes for the best and brightest people from around the world to come to work here and study here. There are a few efforts we have made to move towards a more merit-based immigration system, without congressional action. [For example, in] the H-1B visa system, you go through two lotteries. We switched the order of them in order to get more people with advanced degrees through.”

The government’s tech infrastructure

Donahoe brought the conversation around to the tech infrastructure of the government itself:

“We talk about the shiny object, AI, but the 80 percent is the unsexy stuff, at federal and state levels. We don’t have a modern digital infrastructure to enable all the services—like a research cloud. How do we create this digital infrastructure?”

Kratsios:

“I couldn’t agree more; the least partisan issue in Washington is about modernizing IT infrastructure. We spend like $85 billion a year on IT at the federal level, we can certainly do a better job of using those dollars.”


#436188 The Blogger Behind “AI ...

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume?

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.”

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.”

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101.

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume.

Janelle Shane on . . .

The un-delicious origin of her blog
“The narrower the problem, the smarter the AI will seem”
Why overestimating AI is dangerous
Giraffing!
Machine and human creativity

The un-delicious origin of her blog

IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI?
Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes—some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.
I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.
Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about?
Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all.
Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?
Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set.
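To make that surface-pattern point concrete, here is a minimal sketch in Python. It uses a character-level Markov chain rather than the recurrent neural nets Shane actually trains, and the tiny recipe corpus is invented, but it fails in the same instructive way: it learns only which characters tend to follow which, with no concept of what a recipe describes.

```python
import random
from collections import defaultdict

# Invented mini-corpus of recipe-like text (a stand-in for a real data set).
corpus = (
    "Brown the salmon in oil. Add the onions and cook until soft. "
    "Fold in the cream and simmer. Roll the dough into cubes. "
    "Take 1/4 pound of fresh bread and brown in butter. "
)

ORDER = 4  # characters of context -- far too little memory to stay coherent

# Learn which character tends to follow each 4-character context.
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

# Generate: plausible-looking cooking text, zero understanding.
state = corpus[:ORDER]
out = [state]
for _ in range(150):
    nxt = random.choice(model.get(state, [" "]))
    out.append(nxt)
    state = state[1:] + nxt

print("".join(out))
```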
“The narrower the problem, the smarter the AI will seem”

Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game.
Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem.
The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. If this same algorithm is asked to generate images of any animal, if its task is that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.
Spectrum: That sounds… disturbing.
Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”
Why overestimating AI is dangerous

Spectrum: Do you see it as your role to puncture the AI hype?
Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding they have. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is.
Spectrum: If people overestimate the abilities of AI, what risk does that pose?
Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.”

“If the algorithm’s task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias.”
—Janelle Shane, AI Weirdness blogger
That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand.
If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias.
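Her point about bias is easy to demonstrate. Below is a minimal sketch using entirely synthetic data and a hypothetical two-feature résumé model: a plain logistic regression trained to replicate biased historical decisions ends up with a large weight on the gender feature, even though nothing told it to use gender.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic, invented data: skill is what *should* matter, but the
# historical labels were produced by biased humans who also weighed gender.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)
label = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 0.75).astype(float)

X = np.column_stack([skill, gender])

# Plain logistic regression by gradient descent -- no fairness constraints.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - label) / n)
    b -= 0.5 * float(np.mean(p - label))

print("weight on skill: ", round(float(w[0]), 2))
print("weight on gender:", round(float(w[1]), 2))  # large: the bias was copied
```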
Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks?
Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is.
Giraffing

Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?
Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns.
Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?
Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks.
There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two.
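That quirk is easy to reproduce in miniature. The sketch below is a toy stand-in (the question-answer pairs are invented, not drawn from any real VQA data set): it simply answers with the most common training answer for a question, and because nobody asked the giraffe question when the answer was zero, zero can never come out.

```python
from collections import Counter

# Invented training pairs mimicking the skew: people only asked the
# giraffe question when giraffes were actually in the photo.
training_qa = [
    ("how many giraffes are there?", "2"),
    ("how many giraffes are there?", "1"),
    ("how many giraffes are there?", "2"),
    ("how many giraffes are there?", "3"),
    # ...and never ("how many giraffes are there?", "0")
]

answer_counts = Counter(a for q, a in training_qa
                        if q == "how many giraffes are there?")

def answer(question, image=None):
    # The image is ignored entirely -- the answer prior does all the work.
    return answer_counts.most_common(1)[0][0]

print(answer("how many giraffes are there?", image="person_with_wii_remote.jpg"))
# -> "2", no matter what is actually in the picture
```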
Machine and human creativity

Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?
Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people.

The Halloween costume algorithm “was able to draw on its knowledge of which words are related to suggest things like sexy barnacle.”
—Janelle Shane, AI Weirdness blogger
Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd?
Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman.
Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested?
Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts.


#436155 This MIT Robot Wants to Use Your ...

MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.

The robot is called Little HERMES, and it’s currently just a pair of little legs, about a third the size of an average adult. It can step and jump in place or walk a short distance while supported by a gantry. While that in itself is not very impressive, the researchers say their approach could help bring capable disaster robots closer to reality. They explain that, despite recent advances, building fully autonomous robots with motor and decision-making skills comparable to those of humans remains a challenge. That’s where a more advanced teleoperation system could help.

The researchers, João Ramos, now an assistant professor at the University of Illinois at Urbana-Champaign, and Sangbae Kim, director of MIT’s Biomimetic Robotics Lab, describe the project in this week’s issue of Science Robotics. In the paper, they argue that existing teleoperation systems often can’t effectively match the operator’s motions to those of a robot. In addition, conventional systems provide no physical feedback to the human teleoperator about what the robot is doing. Their new approach addresses these two limitations, and to see how it would work in practice, they built Little HERMES.

Image: Science Robotics

The main components of MIT’s bipedal robot Little HERMES: (A) Custom actuators designed to withstand impact and capable of producing high torque. (B) Lightweight limbs with low inertia and fast leg swing. (C) Impact-robust and lightweight foot sensors with three-axis contact force sensor. (D) Ruggedized IMU to estimate the robot’s torso posture, angular rate, and linear acceleration. (E) Real-time computer sbRIO 9606 from National Instruments for robot control. (F) Two three-cell lithium-polymer batteries in series. (G) Rigid and lightweight frame to minimize the robot’s mass.

Early this year, the MIT researchers wrote an in-depth article for IEEE Spectrum about the project, which includes Little HERMES and also its big brother, HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System). In that article, they describe the two main components of the system:

[…] We are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.

You could say we’re putting a human brain inside the machine.

Image: Science Robotics

The human-machine interface built by the MIT researchers for controlling Little HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. The researchers call it the balance-feedback interface, or BFI. The main modules of the BFI include: (A) Custom interface attachments for torso and feet designed to capture human motion data at high speed (1 kHz). (B) Two underactuated modules to track the position and orientation of the torso and apply forces to the operator. (C) Each actuation module has three DoFs, one of which is a push/pull rod actuated by a DC brushless motor. (D) A series of linkages with passive joints connected to the operator’s feet to track their spatial translation. (E) Real-time controller cRIO 9082 from National Instruments to close the BFI control loop. (F) Force plate to estimate the operator’s center of pressure position and measure the shear and normal components of the operator’s net contact force.

Here’s more footage of the experiments, showing Little HERMES stepping and jumping in place, walking a few steps forward and backward, and balancing. Watch until the end to see a compilation of unsuccessful stepping experiments. Poor Little HERMES!

In the new Science Robotics paper, the MIT researchers explain how they solved one of the key challenges in making their teleoperation system effective:

The challenge of this strategy lies in properly mapping human body motion to the machine while simultaneously informing the operator how closely the robot is reproducing the movement. Therefore, we propose a solution for this bilateral feedback policy to control a bipedal robot to take steps, jump, and walk in synchrony with a human operator. Such dynamic synchronization was achieved by (i) scaling the core components of human locomotion data to robot proportions in real time and (ii) applying feedback forces to the operator that are proportional to the relative velocity between human and robot.
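Read as pseudocode, those two ingredients look roughly like the Python sketch below. The gains, scale factor, and signal names are invented for illustration; the actual controller described in the paper is considerably richer.

```python
# A minimal sketch of the bilateral feedback idea, with made-up numbers.
HEIGHT_RATIO = 0.33  # Little HERMES is roughly a third of human size
K_FEEDBACK = 40.0    # N per (m/s) of human-robot velocity mismatch

def human_to_robot_reference(human_com_pos, human_com_vel):
    """(i) Scale core locomotion data to robot proportions in real time."""
    return human_com_pos * HEIGHT_RATIO, human_com_vel * HEIGHT_RATIO

def operator_feedback_force(human_com_vel, robot_com_vel):
    """(ii) Force on the operator, proportional to the relative velocity.

    If the robot lags or starts to tip, the operator feels a tug and
    reflexively shifts weight; that correction becomes the next reference.
    """
    return -K_FEEDBACK * (human_com_vel - robot_com_vel / HEIGHT_RATIO)

# One tick of the loop: the operator leans forward at 0.3 m/s while the
# robot only manages the scaled equivalent of 0.05 m/s.
ref_pos, ref_vel = human_to_robot_reference(0.10, 0.30)
force = operator_feedback_force(0.30, 0.05)
print(f"robot reference velocity: {ref_vel:.3f} m/s")
print(f"feedback force on operator: {force:.1f} N")
```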

Little HERMES is now taking its first steps, quite literally, but the researchers say they hope to use robotic legs with a similar design as part of a more advanced humanoid. One possibility they’ve envisioned is a fast-moving quadruped robot that could run through various kinds of terrain and then transform into a bipedal robot that would use its hands to perform dexterous manipulations. This could involve merging some of the robots the MIT researchers have built in their lab, possibly creating hybrids between Cheetah and HERMES, or Mini Cheetah and Little HERMES. We can’t wait to see what the resulting robots will look like.

[ Science Robotics ]


#435824 A Q&A with Cruise’s head of AI, ...

In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.

Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.

Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, thanks to the pull that the challenge of autonomous vehicles exerts on AI experts from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.

Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.

IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?

Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.

“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”

I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.

In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.

There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.

IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?

Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.

Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.

I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge of these problems is 1000 times more than other problems. They aren’t as well understood yet, and they require far deeper technology. And also the quality at which they are expected to operate is off the roof.

The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.

Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.

IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?

Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.

And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”

IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?

Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.

“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”

Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.

It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.

I can see how I can spend a lifetime career trying to solve this problem.

IEEE Spectrum: Why are you doing most of your development in San Francisco?

Mehanna: I think the best talent of the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful, you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.

“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”

[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.

A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”


#435681 Video Friday: This NASA Robot Uses ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Let us know if you have suggestions for next week, and enjoy today’s videos.

Robots can land on the Moon and drive on Mars, but what about the places they can’t reach? Designed by engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, a four-limbed robot named LEMUR (Limbed Excursion Mechanical Utility Robot) can scale rock walls, gripping with hundreds of tiny fishhooks in each of its 16 fingers and using artificial intelligence to find its way around obstacles. In its last field test in Death Valley, California, in early 2019, LEMUR chose a route up a cliff, scanning the rock for ancient fossils from the sea that once filled the area.

The LEMUR project has since concluded, but it helped lead to a new generation of walking, climbing and crawling robots. In future missions to Mars or icy moons, robots with AI and climbing technology derived from LEMUR could discover similar signs of life. Those robots are being developed now, honing technology that may one day be part of future missions to distant worlds.

[ NASA ]

This video demonstrates the autonomous footstep planning developed by IHMC. Robots in this video are the Atlas humanoid robot (DRC version) and the NASA Valkyrie. The operator specifies a goal location in the world, which is modeled as planar regions using the robot’s perception sensors. The planner then automatically computes the necessary steps to reach the goal using a Weighted A* algorithm. The algorithm does not reject footholds that have only partial support; instead it modifies them after the plan is found to try to increase that support area.

Currently, narrow terrain has a success rate of about 50%, rough terrain is about 90%, whereas flat ground is near 100%. We plan on increasing planner speed and the ability to plan through mazes and to unseen goals by including a body-path planner as the first step. Control, Perception, and Planning algorithms by IHMC Robotics.
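For readers unfamiliar with Weighted A*: it is ordinary A* with the heuristic inflated by a factor w > 1 (f = g + w·h), trading strict optimality for much faster search. Here is a minimal grid-world sketch of the idea; note that the real IHMC planner searches footstep placements over planar regions, not grid cells.

```python
import heapq

def weighted_astar(grid, start, goal, w=2.5):
    """Weighted A* on a 4-connected grid; 1 = blocked, 0 = free."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(w * h(start), 0, start)]  # entries are (f, g, cell)
    best_g = {start: 0}
    came_from = {}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:  # reconstruct the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] or g + 1 >= best_g.get(nxt, float("inf")):
                continue
            best_g[nxt] = g + 1
            came_from[nxt] = cur
            heapq.heappush(open_set, (g + 1 + w * h(nxt), g + 1, nxt))
    return None  # no path found

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
print(weighted_astar(grid, (0, 0), (3, 3)))
```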

[ IHMC ]

I’ve never really been able to get into watching people play poker, but throw an AI from CMU and Facebook into a game of no-limit Texas hold’em with five humans, and I’m there.

[ Facebook ]

In this video, Cassie Blue is navigating autonomously. Right now, her world is very small, the Wavefield at the University of Michigan, where she is told to turn left at intersections. You’re right, that is not a lot of independence, but it’s a first step away from a human and an RC controller!

Using a RealSense RGBD camera, an IMU, and our version of an InEKF with contact factors, Cassie Blue is building a 3D semantic map in real time that identifies sidewalks, grass, poles, bicycles, and buildings. From the semantic map, occupancy and cost maps are built with the sidewalk identified as walkable area and everything else considered as an obstacle. A planner then sets a goal to stay approximately 50 cm to the right of the sidewalk’s left edge and plans a path around obstacles and corners using D*. The path is translated into waypoints that are achieved via Cassie Blue’s gait controller.
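As a rough illustration of that mapping step, here is a sketch with an invented label set and a 10 cm grid (the Michigan pipeline builds its maps in 3D from the RGBD semantic segmentation): sidewalk cells get zero cost, everything else becomes an obstacle, and the reference line sits five cells, about 50 cm, to the right of the sidewalk’s left edge.

```python
import numpy as np

CELL = 0.10  # meters per grid cell (invented resolution)
SIDEWALK, GRASS, OBSTACLE = 0, 1, 2

# Toy semantic map: a sidewalk running "up" the map, with a pole on it.
semantic = np.full((6, 10), GRASS)
semantic[:, 2:9] = SIDEWALK
semantic[3, 5] = OBSTACLE

# Cost map: sidewalk is walkable, everything else is an obstacle.
cost = np.where(semantic == SIDEWALK, 0.0, np.inf)
print(cost[0])  # [inf inf  0.  0.  0.  0.  0.  0.  0. inf]

# Reference line: ~50 cm (5 cells) right of the sidewalk's left edge.
left_edge_col = int(np.argmax(semantic[0] == SIDEWALK))
ref_col = left_edge_col + int(round(0.5 / CELL))
print(f"track column {ref_col}, {ref_col * CELL:.1f} m from the map origin")
```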

[ University of Michigan ]

Thanks Jesse!

Dave from HEBI Robotics wrote in to share some new actuators that are designed to get all kinds of dirty: “The R-Series takes HEBI’s X-Series to the next level, providing a sealed robotics solution for rugged, industrial applications and laying the groundwork for industrial users to address challenges that are not well met by traditional robotics. To prove it, we shot some video right in the Allegheny River here in Pittsburgh. Not a bad way to spend an afternoon :-)”

The R-Series Actuator is a full-featured robotic component as opposed to a simple servo motor. The output rotates continuously, requires no calibration or homing on boot-up, and contains a thru-bore for easy daisy-chaining of wiring. Modular in nature, R-Series Actuators can be used in everything from wheeled robots to collaborative robotic arms. They are sealed to IP67 and designed with a lightweight form factor for challenging field applications, and they’re packed with sensors that enable simultaneous control of position, velocity, and torque.

[ HEBI Robotics ]

Thanks Dave!

If your robot hands out karate chops on purpose, that’s great. If it hands out karate chops accidentally, maybe you should fix that.

COVR is short for “being safe around collaborative and versatile robots in shared spaces”. Our mission is to significantly reduce the complexity in safety certifying cobots. Increasing safety for collaborative robots enables new innovative applications, thus increasing production and job creation for companies utilizing the technology. Whether you’re an established company seeking to deploy cobots or an innovative startup with a prototype of a cobot related product, COVR will help you analyze, test and validate the safety for that application.

[ COVR ]

Thanks Anna!

EPFL startup Flybotix has developed a novel drone with just two propellers and an advanced stabilization system that allow it to fly for twice as long as conventional models. That fact, together with its small size, makes it perfect for inspecting hard-to-reach parts of industrial facilities such as ducts.

[ Flybotix ]

SpaceBok is a quadruped robot designed and built by a Swiss student team from ETH Zurich and ZHAW Zurich, currently being tested using Automation and Robotics Laboratories (ARL) facilities at our technical centre in the Netherlands. The robot is being used to investigate the potential of ‘dynamic walking’ and jumping to get around in low gravity environments.

SpaceBok could potentially go up to 2 m high in lunar gravity, although such a height poses new challenges. Once it comes off the ground the legged robot needs to stabilise itself to come down again safely—like a mini-spacecraft. So, like a spacecraft, SpaceBok uses a reaction wheel to control its orientation.

[ ESA ]

A new video from GITAI showing progress on their immersive telepresence robot for space.

[ GITAI ]

Tech United’s HERO robot (a Toyota HSR) competed in the RoboCup@Home competition, and it had a couple of garbage-related hiccups.

[ Tech United ]

Even small drones are getting better at autonomous obstacle avoidance in cluttered environments at useful speeds, as this work from the HKUST Aerial Robotics Group shows.

[ HKUST ]

DelFly Nimbles now come in swarms.

[ DelFly Nimble ]

This is a very short video, but it’s a fairly impressive look at a Baxter robot collaboratively helping someone put a shirt on, a useful task for folks with disabilities.

[ Shibata Lab ]

ANYmal can inspect the concrete in sewers for deterioration by sliding its feet along the ground.

[ ETH Zurich ]

HUG is a haptic user interface for teleoperating advanced robotic systems such as the humanoid robot Justin or the assistive robotic system EDAN. With its lightweight robot arms, HUG can measure human movements and simultaneously display forces from the distant environment. In addition to such teleoperation applications, HUG serves as a research platform for virtual assembly simulations, rehabilitation, and training.

[ DLR ]

This video about “image understanding” from CMU in 1979 (!) is amazing, and even though it’s long, you won’t regret watching until 3:30. Or maybe you will.

[ ARGOS (pdf) ]

Will Burrard-Lucas’ BeetleCam turned 10 this month, and in this video, he recounts the history of his little robotic camera.

[ BeetleCam ]

In this week’s episode of Robots in Depth, Per speaks with Gabriel Skantze from Furhat Robotics.

Gabriel Skantze is co-founder and Chief Scientist at Furhat Robotics and Professor in speech technology at KTH with a specialization in conversational systems. He has a background in research into how humans use spoken communication to interact.

In this interview, Gabriel talks about how the social robot revolution makes it necessary to communicate with humans in a human way, through speech and facial expressions. This becomes more pressing as we expand both the number of people that interact with robots and the types of interaction. Gabriel gives us more insight into the many challenges of implementing spoken communication for co-bots, where robots and humans work closely together. They need to communicate about the world, the objects in it and how to handle them. We also get to hear how having an embodied system using the Furhat robot head helps the interaction between humans and the system.

[ Robots in Depth ]
