#436042 Video Friday: Caltech’s Drone With ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Caltech has been making progress on LEONARDO (LEg ON Aerial Robotic DrOne), their leggy thruster-powered humanoid-thing. It can now balance and walk, which is quite impressive to see.
We’ll circle back again when they’ve got it jumping and floating around.
[ Caltech ]
Turn the subtitles on to learn how robots became experts at slicing bubbly, melty, delicious cheese.
These robots learned how to prepare traditional Swiss raclette from demonstration. The Robot Learning & Interaction group at the Idiap Research Institute has developed an imitation learning technique that allows the robot to acquire new skills by considering both position and force information, with automatic adaptation to new situations. The range of applications is wide, including industrial robots, service robots, and assistive robots.
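The video doesn’t include implementation details, but the basic learning-from-demonstration recipe behind this kind of skill — fit a model to demonstrated position and force trajectories, then track stiffly where the demonstrations agree and stay compliant where they vary — can be sketched compactly. The following is a minimal illustration of that idea only, not Idiap’s actual (considerably more sophisticated) method; every name in it is hypothetical:

```python
# Minimal sketch of imitation from position + force demonstrations:
# fit a per-timestep Gaussian over the demos, then derive tracking
# gains from the variance (stiff where demos agree, compliant where
# they vary). Not Idiap's actual method; names are hypothetical.
import numpy as np

def fit_demonstrations(demos):
    """demos: list of (T, D) arrays, D = position dims + force dims.
    Returns per-timestep mean and standard deviation across demos."""
    stacked = np.stack(demos)                  # (N, T, D)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6

def tracking_gains(std, k_min=50.0, k_max=500.0):
    """Map demo variability to stiffness: low variance -> stiff
    tracking, high variance -> compliance (room to adapt)."""
    w = 1.0 / std
    w = (w - w.min()) / (w.max() - w.min() + 1e-9)
    return k_min + w * (k_max - k_min)

# Synthetic example: 3 demos, 100 timesteps, 1-D position + 1-D force
rng = np.random.default_rng(0)
demos = [np.cumsum(rng.normal(0, 0.01, (100, 2)), axis=0) for _ in range(3)]
mean_traj, std_traj = fit_demonstrations(demos)
gains = tracking_gains(std_traj)
print(mean_traj.shape, gains.shape)            # (100, 2) (100, 2)
```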
[ Idiap ]
Thanks Sylvain!
Some amazing news this week from Skydio, with the announcement of their better-in-every-single-way Skydio 2 autonomous drone. Read our full article for details, but here’s a getting-started video that gives you an overview of what the drone can do.
The first batch sold out in 36 hours, but you can put down a $100 deposit to reserve the $999 drone for 2020 delivery.
[ Skydio ]
UBTECH is introducing a couple new robot kits for the holidays: ChampBot and FireBot.
$130 each, available on October 20.
[ Ubtech ]
NASA’s InSight lander on Mars is trying to use its robotic arm to get the mission’s heat flow probe, or mole, digging again. InSight team engineer Ashitey Trebi-Ollennu, based at NASA’s Jet Propulsion Laboratory in Pasadena, California, explains what has been attempted and the game plan for the coming weeks. The next tactic they’ll try will be “pinning” the mole against the hole it’s in.
[ NASA ]
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions.
[ Ryo Suzuki ]
Robot abuse!
Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs. Using only high-transparency actuators and 2 kHz algorithmic stability control… 4 limbs and 12 motors with only a velocity command.
[ Ghost Robotics ]
We asked real people to bring in real products they needed picked for their application. In MINUTES, we assembled the right tool.
This is a cool idea, but for a real challenge they should try it outside a supermarket. Or a pet store.
[ Soft Robotics ]
Good water quality is important to humans and to nature. In a country with as much water as the Netherlands has, ensuring water quality is a very labour-intensive undertaking. To address this issue, researchers from TU Delft have developed a ‘pelican drone’: a drone capable of taking water samples quickly, in combination with a measuring instrument that immediately analyses the water quality. The drone was tested this week at the new Marker Wadden nature area ‘Living Lab’.
[ MAVLab ]
In an international collaboration led by scientists in Switzerland, three amputees merge with their bionic prosthetic legs as they climb over various obstacles without having to look. The amputees report using and feeling their bionic leg as part of their own body, thanks to sensory feedback from the prosthetic leg that is delivered to nerves in the leg’s stump.
[ EPFL ]
It’s a little hard to see, but this is one way of testing out asteroid imaging spacecraft without actually going into space: a fake asteroid and a 2D microgravity simulator.
[ Caltech ]
Drones can help filmmakers do the kinds of shots that would be otherwise impossible.
[ DJI ]
Two long interviews this week from Lex Fridman’s AI Podcast, and both of them are worth watching: Gary Marcus, and Peter Norvig.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Tucker Hermans at the University of Utah, on “Improving Multi-fingered Robot Manipulation by Unifying Learning and Planning.”
Multi-fingered hands offer autonomous robots increased dexterity, versatility, and stability over simple two-fingered grippers. Naturally, this increased ability comes with increased complexity in planning and executing manipulation actions. As such, I propose combining model-based planning with learned components to improve over purely data-driven or purely model-based approaches to manipulation. This talk examines multi-fingered autonomous manipulation when the robot has only partial knowledge of the object of interest. I will first present results on planning multi-fingered grasps for novel objects using a learned neural network. I will then present our approach to planning in-hand manipulation tasks when dynamic properties of objects are not known. I will conclude with a discussion of our ongoing and future research to further unify these two approaches.
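To make the “unifying learning and planning” idea a bit more concrete: a common pattern in this line of work is to sample candidate grasps, score them with a learned model, and iteratively refine the best candidates. Here’s a rough sketch of that pattern — the scorer below is a stand-in for a trained network, and none of this is Hermans’ actual system:

```python
# Sketch of grasp planning against a learned scoring model: sample
# candidate grasp parameters, score them, refine with a cross-entropy
# style update. grasp_score is a placeholder for a trained network.
import numpy as np

def grasp_score(grasps):
    """Stand-in for a learned network mapping grasp parameters
    (e.g., palm pose + preshape) to predicted grasp success."""
    target = np.array([0.1, 0.0, 0.25, 0.0, 0.0, 0.0])  # fake optimum
    return -np.linalg.norm(grasps - target, axis=1)

def plan_grasp(n_iters=10, n_samples=256, n_elite=16, dim=6):
    """Cross-entropy-method refinement over grasp parameters."""
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        samples = mean + std * np.random.randn(n_samples, dim)
        elite = samples[np.argsort(grasp_score(samples))[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mean

print(plan_grasp())  # converges toward the scorer's high-value region
```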
[ CMU RI ]
#436021 AI Faces Speed Bumps and Potholes on Its ...
Implementing machine learning in the real world isn’t easy. The tools are available and the road is well-marked—but the speed bumps are many.
That was the conclusion of panelists wrapping up a day of discussions at the IEEE AI Symposium 2019, held at Cisco’s San Jose, Calif., campus last week.
The toughest problem, says Ben Irving, senior manager of Cisco’s strategy innovations group, is people.
It’s tough to find data science expertise, he indicated, so companies are looking into non-traditional sources of personnel, like political science. “There are some untapped areas with a lot of untapped data science expertise,” Irving says.
Lazard’s artificial intelligence manager Trevor Mottl agreed that would-be data scientists don’t need formal training or experience to break into the field. “This field is changing really rapidly,” he says. “There are new language models coming out every month, and new tools, so [anyone should] expect to not know everything. Experiment, try out new tools and techniques, read, study, spend time; there aren’t any true experts at this point because the foundational elements are shifting so rapidly.”
“It is a wonderful time to get into a field,” he reasons, noting that it doesn’t take long to catch up because there aren’t 20 years of history.
Confusion about what different kinds of machine learning specialists do doesn’t help the personnel situation. An audience member asked panelists to explain the difference between data scientist, data analyst, and data engineer. Darrin Johnson, Nvidia global director of technical marketing for enterprise, admitted it’s hard to sort out, and any two companies could define the positions differently. “Sometimes,” he says, particularly at smaller companies, “a data scientist plays all three roles. But as companies grow, there are different groups that ingest data, clean data, and use data. At some companies, training and inference are separate. It really depends, which is a challenge when you are trying to hire someone.”
Mitigating the risks of a hot job market
The competition to hire data scientists, analysts, engineers, or whatever companies call them requires that managers make sure any work being done is structured and comprehensible at all times, the panelists cautioned.
“We need to remember that our data scientists go home every day and sometimes they don’t come back because they go home and then go to a different company,” says Lazard’s Mottl. “That’s a fact of life. If you give people choice on [how they do development], and have a successful person who gets poached by a competitor, you have to either hire a team to unwrap what that person built or jettison their work and rebuild it.”
By contrast, he says, “places that have structured coding and structured commits and organized constructions of software have done very well.”
But keeping all of a company’s engineers working with the same languages and on the same development paths is not easy to do in a field that moves as fast as machine learning. Zongjie Diao, Cisco director of product management for machine learning, quipped: “I have a data scientist friend who says the speed at which he changes girlfriends is less than the speed at which he changes languages.”
The data scientist/IT manager clash
Once a company finds the data engineers and scientists it needs and gets them started on the task of applying machine learning to that company’s operations, one of the first obstacles they face just might be the company’s IT department, the panelists suggested.
“IT is process oriented,” Mottl says. The IT team “knows how to keep data secure, to set up servers. But when you bring in a data science team, they want sandboxes, they want freedom, they want to explore and play.”
Also, Nvidia’s Johnson pointed out, “There is a language barrier.” The AI world, he says, is very different from networking or storage, and data scientists find it hard to articulate their requirements to IT.
On the ground or in the cloud?
And then there is the decision of where exactly machine learning should happen—on site, or in the cloud? At Lazard, Mottl says, the deep learning engineers do their experimentation on premises; that’s their sandbox. “But when we deploy, we deploy in the cloud,” he says.
Nvidia, Johnson says, takes the opposite approach. “We see the cloud as the sandbox,” he says. “So you can run as many experiments as possible, fail fast, and learn faster.”
For Cisco’s Irving, the “where” of machine learning depends on the confidentiality of the data.
Mottl, who says rolling machine learning technology into operation can hit resistance from all across the company, had one last word of caution for those aiming to implement AI:
Data scientists are building things that might change the ways other people in the organization work, like sales and even knowledge workers. [You need to] think about the internal stakeholders and prepare them, because the last thing you want to do is to create a valuable new thing that nobody likes and people take potshots against.
The AI Symposium was organized by the Silicon Valley chapters of the IEEE Young Professionals, the IEEE Consultants’ Network, and IEEE Women in Engineering and supported by Cisco.
#436005 NASA Hiring Engineers to Develop “Next ...
It’s been nearly six years since NASA unveiled Valkyrie, a state-of-the-art full-size humanoid robot. After the DARPA Robotics Challenge, NASA has continued to work with Valkyrie at Johnson Space Center, and has also provided Valkyrie robots to several different universities. Although it’s not a new platform anymore (six years is a long time in robotics), Valkyrie is still very capable, with plenty of potential for robotics research.
With that in mind, we were caught by surprise when, over the last several months, Jacobs, a Dallas-based engineering company that appears to provide a wide variety of technical services to anyone who wants them, posted several job openings for roboticists in the Houston, Texas, area who are interested in working with NASA on “the next generation of humanoid robot.”
Here are the relevant bullet points from one of the job descriptions (which you can view at this link):
Work directly with NASA Johnson Space Center in designing the next generation of humanoid robot.
Join the Valkyrie humanoid robot team in NASA’s Robotic Systems Technology Branch.
Build on the success of the existing Valkyrie and Robonaut 2 humanoid robots and advance NASA’s ability to project a remote human presence and dexterous manipulation capability into challenging, dangerous, and distant environments both in space and here on earth.
The question is, why is NASA developing its own humanoid robot (again) when it could instead save a whole bunch of time and money by using a platform that already exists, whether it’s Atlas, Digit, Valkyrie itself, or one of the small handful of other humanoids that are more or less available? The only answer that I can come up with is that no existing platforms meet NASA’s requirements, whatever those may be. And if that’s the case, what kind of requirements are we talking about? The obvious one would be the ability to work in the kinds of environments that NASA specializes in—space, the Moon, and Mars.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working on the surface of Mars.
NASA’s existing humanoid robots, including Robonaut 2 and Valkyrie, were designed to operate on Earth. Robonaut 2 ended up going to space anyway (it’s recently returned to Earth for repairs), but its hardware was certainly never intended to function outside of the International Space Station. Working in a vacuum involves designing for a much more rigorous set of environmental challenges, and things get even worse on the Moon or on Mars, where highly abrasive dust gets everywhere.
We know that it’s possible to design robots for long-term operation in these kinds of environments because we’ve done it before. But if you’re not actually going to send your robot off-world, there’s very little reason to bother making sure that it can operate through (say) 300° Celsius temperature swings like you’d find on the Moon. In the past, NASA has quite sensibly focused on designing robots that can be used as platforms for the development of software and techniques that could one day be applied to off-world operations, without over-engineering those specific robots to operate in places that they would almost certainly never go. As NASA increasingly focuses on a return to the Moon, though, maybe it’s time to start thinking about a humanoid robot that could actually do useful stuff on the lunar surface.
Image: NASA
Artist’s concept of the Gateway moon-orbiting space station (seen on the right) with an Orion crew vehicle approaching.
The other possibility that I can think of, and perhaps the more likely one, is that this next humanoid robot will be a direct successor to Robonaut 2, intended for NASA’s Gateway space station orbiting the Moon. Some of the robotics folks at NASA that we’ve talked to recently have emphasized how important robotics will be for Gateway:
Trey Smith, NASA Ames: Everybody at NASA is really excited about work on the Gateway space station that would be in near lunar space. We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations. And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.
If you have an un-crewed cargo vehicle that shows up stuffed to the rafters with cargo bags and it docks with the Gateway when there’s no crew there, it would be very useful to have intra-vehicular robots that can pull all those cargo bags out, unpack them, stow all the items, and then even allow the cargo vehicle to detach before the crew show up so that the crew don’t have to waste their time with that.
Julia Badger, NASA JSC: One of the systems on board Gateway is going to be intravehicular robots. They’re not going to necessarily look like Robonaut, but they’ll have some of the same functionality as Robonaut—being mobile, being able to carry payloads from one part of the module to another, doing some dexterous manipulation tasks, inspecting behind panels, those sorts of things.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working inside a spacecraft.
Since Gateway won’t be crewed by humans all of the time, it’ll be important to have a permanent robotic presence to keep things running while nobody is home. Robots also save on resources, since they aren’t always eating food, drinking water, consuming oxygen, demanding that the temperature stay just so, and producing a variety of disgusting kinds of waste. Obviously, robots won’t be as capable as humans, but if they can manage even basic ongoing maintenance tasks (most likely through at least partial teleoperation), that would be very useful.
Photo: Evan Ackerman/IEEE Spectrum
NASA’s Robonaut team plans to perform a variety of mobility and motion-planning experiments using the robot’s new legs, which can grab handrails on the International Space Station.
As for whether robots designed for Gateway would really fall into the “humanoid” category, it’s worth considering that Gateway is designed for humans, implying that an effective robotic system on Gateway would need to be able to interact with the station in similar ways to how a human astronaut would. So, you’d expect to see arms with end-effectors that can grip things as well as push buttons, and some kind of mobility system—the legged version of Robonaut 2 seems like a likely template, but redesigned from the ground up to work in space, incorporating all the advances in robotics hardware and computing that have taken place over the last decade.
We’ve been pestering NASA about this for a little bit now, and they’re not ready to comment on this project, or even to confirm it. And again, everything in this article (besides the job post, which you should totally check out and consider applying for) is just speculation on our part, and we could be wrong about absolutely all of it. As soon as we hear more, we’ll definitely let you know.
#435828 Video Friday: Boston Dynamics’ ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:
[ Boston Dynamics ]
Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.
[ BattleBots ]
Thanks Trey!
Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.
As of Friday afternoon, the current bid is just over $100,000 with a week to go.
[ MegaBots ]
Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.
[ Michigan Engineering ]
Michael Burke from the University of Edinburgh writes:
We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
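For the curious, Burke’s description maps onto a very simple control loop: classify the medium from joint torque, then nudge the demonstrated trajectory so that the classifier stays maximally uncertain (probability near 0.5), which keeps the knife on the peel/pulp boundary. Here’s a toy sketch of that idea — the classifier and the robot interface are hypothetical placeholders, not the actual PR2 implementation:

```python
# Toy sketch of the maximum-uncertainty cutting idea described above:
# steer the knife so P(peel | torque) stays near 0.5, i.e., on the
# peel/pulp boundary. Classifier and sensor are placeholders, not the
# actual PR2 code.
import numpy as np

def p_peel(torque, w=4.0, b=-2.0):
    """Placeholder learned classifier: probability that the knife is
    in peel, given a joint-torque feature (simple logistic model)."""
    return 1.0 / (1.0 + np.exp(-(w * torque + b)))

def cutting_controller(nominal_traj, read_torque, k_p=0.02):
    """Track the demonstrated trajectory, adding a depth correction
    that drives the classifier output toward 0.5."""
    offset = 0.0
    for target in nominal_traj:
        p = p_peel(read_torque())
        offset += k_p * (p - 0.5)  # too sure it's peel? move toward pulp
        yield target + offset      # corrected depth command

# Example with a fake torque sensor
rng = np.random.default_rng(1)
torques = iter(rng.normal(0.5, 0.1, 100))
nominal = np.linspace(0.0, 0.05, 100)      # nominal 5 cm cut
commands = list(cutting_controller(nominal, lambda: next(torques)))
```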
[ Paper ] via [ Robust Autonomy and Decisions Group ]
Thanks Michael!
Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!
[ EMYS ]
We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN Arm will be able to change between multiple “end effector” tools such as grippers and inspection sensors – those are still in development, so they’re not shown in this video.
[ Tethers Unlimited ]
UBTECH’s Alpha Mini Robot with Smart Robot’s “Maatje” software is offering healthcare services to children at Praktijk Intraverte Multidisciplinary Institution in the Netherlands.
This institution is using Alpha Mini in counseling children on their behavior. Alpha Mini can move and talk to children and offers games and activities to stimulate and interact with them. By talking with, helping, and motivating children, Alpha Mini helps them become more flexible in social situations.
[ UBTECH ]
Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.
Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
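The core PDDM loop — learn a dynamics model from real data, then plan online with sampling-based model-predictive control — is easy to sketch in skeleton form. Below is a generic random-shooting variant with toy stand-ins for the learned model and the reward; the actual method uses an ensemble of neural-network models and a reward-weighted action update, so treat this as the general pattern rather than the authors’ algorithm:

```python
# Skeleton of sampling-based MPC with a learned dynamics model, the
# general pattern behind PDDM. Dynamics and reward are toy stand-ins;
# the real method uses a neural-network ensemble and a softer,
# reward-weighted update over action sequences.
import numpy as np

def learned_dynamics(state, action):
    """Placeholder for a trained model predicting the next state."""
    return state + 0.1 * action

def reward(states):
    return -np.sum(states**2, axis=-1)  # drive the state to the origin

def mpc_action(state, horizon=10, n_samples=500, act_dim=2):
    """Random-shooting MPC: sample action sequences, roll them out
    through the model, execute the first action of the best one."""
    actions = np.random.uniform(-1, 1, (n_samples, horizon, act_dim))
    states = np.tile(state, (n_samples, 1))
    returns = np.zeros(n_samples)
    for t in range(horizon):
        states = learned_dynamics(states, actions[:, t])
        returns += reward(states)
    return actions[np.argmax(returns), 0]

state = np.array([1.0, -0.5])
for _ in range(20):                      # re-plan at every step
    state = learned_dynamics(state, mpc_action(state))
print(state)                             # should end near the origin
```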
[ PDDM ]
Thanks Vikash!
CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.
A paper on this has been submitted to IROS 2019.
[ CMU ]
The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.
[ Autonomous Robots Lab ]
More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.
[ YouTube ]
Whatever you think of military drones, we can all agree that they look cool.
[ Boeing ]
I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.
[ EPFL LASA ]
Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by a 3D prior map built in an earlier mapping step.
[ CMU ]
The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.
[ Stanford ASL ]
In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.
[ Misty Robotics ]
This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”
The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.
[ CMU ]
Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”
[ UC Berkeley ]
#435824 A Q&A with Cruise’s head of AI, ...
In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.
Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.
Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, due to the appeal of the challenge of autonomous vehicles in drawing in AI experts from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.
Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.
IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?
Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.
I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.
In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.
There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.
IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?
Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.
Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.
I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge of these problems is 1000 times more than other problems. They aren’t as well understood yet, and they require far deeper technology. And also the quality at which they are expected to operate is off the roof.
The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.
Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.
IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?
Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.
And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”
IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?
Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.
Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.
It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how we can tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows what other applications.
I can see how I can spend a lifetime career trying to solve this problem.
IEEE Spectrum: Why are you doing most of your development in San Francisco?
Mehanna: I think the best talent of the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful, you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.
[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.
A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”