Tag Archives: generation
#435824 A Q&A with Cruise’s head of AI, ...
In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.
Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring at a time of AI engineer shortages.
Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, thanks to the appeal of the autonomous vehicle challenge, which draws AI experts in from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. He had been there about a year and a half, a relatively quick career stop after a short stint at Snap, which itself followed four years working in machine learning at Facebook.
Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.
IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?
Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying that if anyone could figure out how to use a neural net for speech, he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.
“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”
I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.
In the early days, Facebook wasn’t that open to PhDs or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift: they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.
There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.
IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?
Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.
Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.
I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge is 1000 times that of other problems. These problems aren’t as well understood yet, and they require far deeper technology. And the quality at which they are expected to operate is through the roof.
The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.
Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.
IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?
Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.
And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”
IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?
Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.
“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”
Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.
It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.
I can see how I can spend a lifetime career trying to solve this problem.
IEEE Spectrum: Why are you doing most of your development in San Francisco?
Mehanna: I think the best talent in the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.
“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”
[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco are technically an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. That means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.
A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”
#435775 Jaco Is a Low-Power Robot Arm That Hooks ...
We usually think of robots as taking the place of humans in various tasks, but robots of all kinds can also enhance human capabilities. This may be especially true for people with disabilities. And while the Cybathlon competition showed what's possible when cutting-edge research robotics is paired with expert humans, that competition isn't necessarily reflective of the kind of robotics available to most people today.
Kinova Robotics's Jaco arm is an assistive robotic arm designed to be mounted on an electric wheelchair. With six degrees of freedom plus a three-fingered gripper, the lightweight carbon fiber arm is frequently used in research because it's rugged and versatile. But from the start, Kinova created it to add autonomy to the lives of people with mobility constraints.
Earlier this year, Kinova shared the story of Mary Nelson, an 11-year-old girl with spinal muscular atrophy, who uses her Jaco arm to show her horse in competition. Spinal muscular atrophy is a neuromuscular disorder that impairs voluntary muscle movement, including muscles that help with respiration, and Mary depends on a power chair for mobility.
We wanted to learn more about how Kinova designs its Jaco arm, and what that means for folks like Mary, so we spoke with both Kinova and Mary's parents to find out how much of a difference a robot arm can make.
IEEE Spectrum: How did Mary interact with the world before having her arm, and what was involved in the decision to try a robot arm in general? And why then Kinova's arm specifically?
Ryan Nelson: Mary interacts with the world much like you and I do, she just uses different tools to do so. For example, she is 100 percent independent using her computer, iPad, and phone, and she prefers to use a mouse. However, she cannot move a standard mouse, so she connects her wheelchair to each device with Bluetooth to move the mouse pointer/cursor using her wheelchair joystick.
For years, we had a Manfrotto magic arm and super clamp attached to her wheelchair and she used that much like the robotic arm. We could put a baseball bat, paint brush, toys, etc. in the super clamp so that Mary could hold the object and interact as physically able children do. Mary has always wanted to be more independent, so we knew the robotic arm was something she must try. We had seen videos of the Kinova arm on YouTube and on their website, so we reached out to them to get a trial.
Can you tell us about the Jaco arm, and how the process of designing an assistive robot arm is different from the process of designing a conventional robot arm?
Nathaniel Swenson, Director of U.S. Operations — Assistive Technologies at Kinova: Jaco is our flagship robotic arm. Inspired by our CEO's uncle and its namesake, Jacques “Jaco” Forest, it was designed as assistive technology with power wheelchair users in mind.
The primary differences between Jaco and our other robots, such as the new Gen3, which was designed to meet the needs of academic and industry research teams, are speed and power consumption. Other robots such as the Gen3 can move faster and draw slightly more power because they aren't limited by the battery size of power wheelchairs. Depending on the use case, they might not interact directly with a human being in the research setting and can safely move more quickly. Jaco, by contrast, is designed to move at safe speeds, to make direct contact with the end user, and to draw very little power directly from their wheelchair.
The most important consideration in the design process of an assistive robot is the safety of the end user. Jaco users operate their robots through their existing drive controls to assist them in daily activities such as eating, drinking, and opening doors, and they don't have to worry about the robot draining their chair's batteries throughout the day. The elegant design that results from meeting the needs of our power chair users has benefited subsequent iterations [of products], such as the Gen3, as well: Kinova's robots are lightweight, extremely efficient in their power consumption, and safe for direct human-robot interaction. This is not true of conventional industrial robots.
What was the learning process like for Mary? Does she feel like she's mastered the arm, or is it a continuous learning process?
Ryan Nelson: The learning process was super quick for Mary. However, she amazes us every day with the new things that she can do with the arm. Literally within minutes of installing the arm on her chair, Mary had it figured out and was shaking hands with the Kinova rep. The control of the arm is super intuitive and the Kinova reps say that SMA (Spinal Muscular Atrophy) children are perfect users because they are so smart—they pick it up right away. Mary has learned to do many fine motor tasks with the arm, from picking up small objects like a pencil or a ruler, to adjusting her glasses on her face, to doing science experiments.
Photo: The Nelson Family
Mary uses a headset microphone to amplify her voice, and she will use the arm and fingers to adjust the microphone in front of her mouth after she is done eating (also a task she mastered quickly with the arm). Additionally, Mary will use the arm to reach down and adjust her feet or leg by grabbing them and moving them to a more comfortable position. All of these are things she never really asked us to do; they were things she needed and just did on her own, with the help of the arm.
What is the most common feedback that you get from new users of the arm? How about from experienced users who have been using the arm for a while?
Nathaniel Swenson: New users always tell us how excited they are to see what they can accomplish with their new Jaco. From day one, they are able to do things that they have longed to do without assistance from a caregiver: take a drink of water or coffee, scratch an itch, push the button to open an “accessible” door or elevator, or even feed their baby with a bottle.
The most common feedback I hear from experienced users is that Jaco has changed their life. Our experienced users like Mary are rock stars: everywhere they go, people get excited to see what they'll do next. The difference between a new user and an experienced user could be as little as two weeks. People who operate power wheelchairs every day are already expert drivers and we just add a new “gear” to their chair: robot mode. It's fun to see how quickly new users master the intuitive Jaco control modes.
What changes would you like to see in the next generation of Jaco arm?
Ryan Nelson: Titanium fingers! Make it lift heavier objects, hold heavier items like a baseball bat, machine gun, flame thrower, etc., and Mary literally said this last night: “I wish the arm moved fast enough to play the piano.”
Nathaniel Swenson: I love the idea of titanium fingers! Jaco's fingers are made from a flexible polymer and designed to avoid harm. This allows the fingers to bend or dislocate, rather than break, but it also means they are not as durable as a material like titanium. Increased payload, the ability to manipulate heavier objects, requires increased power consumption. We've struck a careful balance between providing enough strength to accomplish most medically necessary Activities of Daily Living and efficient use of the power chair's batteries.
We take Isaac Asimov's Laws of Robotics pretty seriously. When we start to combine machine guns, flame throwers, and artificial intelligence with robots, I get very nervous!
I wish the arm moved fast enough to play the piano, too! I am also a musician and I share Mary's dream of an assistive robot that would enable her to make music. In the meantime, while we work on that, please enjoy this beautiful violin piece by Manami Ito and her one-of-a-kind violin prosthesis:
To what extent could more autonomy for the arm be helpful for users? What would be involved in implementing that?
Nathaniel Swenson: Artificial intelligence, machine learning, and deep learning will introduce greater autonomy in future iterations of assistive robots. This will enable them to perform more complex tasks that aren't currently possible, and enable them to accomplish routine tasks more quickly and with less input than the current manual control requires.
For assistive robots, implementing greater autonomy involves a focus on end-user safety and improvements in the robot's awareness of its environment. Autonomous robots that work in close proximity to humans need vision: they must be able to see in order to avoid collisions, and they rely on haptic feedback to sense how much force is being exerted on objects. All of these technologies exist, but the largest obstacle to bringing them to the assistive technology market is proving to the health insurance companies who will fund them that they are both safe and medically necessary.
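To make the haptic side of that concrete, here is a minimal sketch in Python of the kind of force-feedback grasp loop described above: close the gripper until the sensed contact force reaches a safe limit, then hold. This is not Kinova's software; the force threshold and the spring-like sensor model are assumptions made so the loop can run on its own.

# Illustrative sketch only -- not Kinova's software.
FORCE_LIMIT_N = 5.0   # assumed safe contact force, in newtons
STEP = 0.01           # gripper closure increment per control cycle

def simulated_contact_force(closure: float, object_size: float = 0.6,
                            stiffness: float = 40.0) -> float:
    """Stand-in for a haptic/force sensor: zero force until the fingers
    touch the object, then force grows with further closure."""
    return max(0.0, closure - object_size) * stiffness

def force_limited_grasp() -> float:
    """Close the gripper (0.0 = open, 1.0 = fully closed) until the sensed
    force reaches FORCE_LIMIT_N, then stop and hold that position."""
    closure = 0.0
    while closure < 1.0:
        if simulated_contact_force(closure) >= FORCE_LIMIT_N:
            break
        closure = min(1.0, closure + STEP)
    return closure

print(f"Grasp stopped at closure {force_limited_grasp():.2f}")  # ~0.73 with the defaults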
#435621 ANYbotics Introduces Sleek New ANYmal C ...
Quadrupedal robots are making significant advances lately, and just in the past few months we’ve seen Boston Dynamics’ Spot hauling a truck, IIT’s HyQReal pulling a plane, MIT’s MiniCheetah doing backflips, Unitree Robotics’ Laikago towing a van, and Ghost Robotics’ Vision 60 exploring a mine. Robot makers are betting that their four-legged machines will prove useful in a variety of applications in construction, security, delivery, and even at home.
ANYbotics has been working on such applications for years, testing out their ANYmal robot in places where humans typically don’t want to go (like offshore platforms) as well as places where humans really don’t want to go (like sewers), and they have a better idea than most companies what can make quadruped robots successful.
This week, ANYbotics is announcing a completely new quadruped platform, ANYmal C, a major upgrade from the really quite research-y ANYmal B. The new quadruped has been optimized for ruggedness and reliability in industrial environments, with a streamlined body painted a color that lets you know it means business.
ANYmal C’s physical specs are pretty impressive for a production quadruped. It can move at 1 meter per second, manage 20-degree slopes and 45-degree stairs, cross 25-centimeter gaps, and squeeze through passages just 60 centimeters wide. It’s packed with cameras and 3D sensors, including a lidar for 3D mapping and simultaneous localization and mapping (SLAM). All these sensors (along with the vast volume of gait research that’s been done with ANYmal) make this one of the most reliably autonomous quadrupeds out there, with real-time motion planning and obstacle avoidance.
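To put those specs in one place, here is a minimal sketch in Python (illustrative only, not ANYbotics software; the class and field names are hypothetical) that checks whether a candidate path segment stays within ANYmal C's published mobility envelope.

# Illustrative sketch only -- not ANYbotics code. The limits below are the
# published ANYmal C figures; the class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class MobilityEnvelope:
    max_speed_mps: float = 1.0          # top walking speed, m/s
    max_slope_deg: float = 20.0         # steepest continuous slope
    max_stair_deg: float = 45.0         # steepest stairs
    max_gap_m: float = 0.25             # widest crossable gap
    min_passage_width_m: float = 0.60   # narrowest passable corridor

@dataclass
class PathSegment:
    slope_deg: float
    is_stairs: bool
    gap_m: float
    passage_width_m: float

def is_traversable(seg: PathSegment, env: MobilityEnvelope = MobilityEnvelope()) -> bool:
    """Return True if the segment stays within the stated mobility limits."""
    slope_limit = env.max_stair_deg if seg.is_stairs else env.max_slope_deg
    return (seg.slope_deg <= slope_limit
            and seg.gap_m <= env.max_gap_m
            and seg.passage_width_m >= env.min_passage_width_m)

# Example: a 30-degree staircase in a 70-cm-wide corridor with no gaps.
print(is_traversable(PathSegment(slope_deg=30, is_stairs=True, gap_m=0.0, passage_width_m=0.70)))  # True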
Image: ANYbotics
ANYmal can autonomously attach itself to a cone-shaped docking station to recharge.
ANYmal C is also one of the ruggedest legged robots in existence. The 50-kilogram robot is IP67 rated, meaning that it’s completely impervious to dust and can withstand being submerged in a meter of water for an hour. If it’s submerged for longer than that, you’re absolutely doing something wrong. The robot will run for over 2 hours on battery power, and if that’s not enough endurance, don’t worry, because ANYmal can autonomously impale itself on a weird cone-shaped docking station to recharge.
Photo: ANYbotics
ANYmal C’s sensor payload includes cameras and a lidar for 3D mapping and SLAM.
As far as what ANYmal C is designed to actually do, it’s mostly remote inspection tasks where you need to move around through a relatively complex environment, but where for whatever reason you’d be better off not sending a human. ANYmal C has a sensor payload that gives it lots of visual options, like thermal imaging, and with the ability to handle a 10-kilogram payload, the robot can be adapted to many different environments.
Over the next few months, we’re hoping to see more examples of ANYmal C being deployed to do useful stuff in real-world environments, but for now, we do have a bit more detail from ANYbotics CTO Christian Gehring.
IEEE Spectrum: Can you tell us about the development process for ANYmal C?
Christian Gehring: We tested the previous generation of ANYmal (B) in a broad range of environments over the last few years and gained a lot of insights. Based on our learnings, it became clear that we would have to re-design the robot to meet the requirements of industrial customers in terms of safety, quality, reliability, and lifetime. There were different prototype stages both for the new drives and for single robot assemblies. Apart from electrical tests, we thoroughly tested the thermal control and ingress protection of various subsystems like the depth cameras and actuators.
What can ANYmal C do that the previous version of ANYmal can’t?
ANYmal C was redesigned with a focus on increased performance in actuation (new drives), computational power (new hexacore Intel i7 PCs), locomotion and navigation skills, and autonomy (new depth cameras). The new robot additionally features a docking system for autonomous recharging and an optional inspection payload. The design of ANYmal C is far more integrated than that of its predecessor, which increases both performance and reliability.
How much of ANYmal C’s development and design was driven by your experience with commercial or industry customers?
Tests (such as the offshore installation with TenneT) and discussions with industry customers were important to get the necessary design input in terms of performance, safety, quality, reliability, and lifetime. Most customers ask for very similar inspection tasks that can be performed with our standard inspection payload and the required software packages. Some are looking for a robot that can also solve some simple manipulation tasks like pushing a button. Overall, most use cases customers have in mind are realistic and achievable, but some are really tough for the robot, like climbing 50° stairs in hot environments of 50°C.
Can you describe how much autonomy you expect ANYmal C to have in industrial or commercial operations?
ANYmal C is primarily developed to perform autonomous routine inspections in industrial environments. This autonomy especially adds value for operations that are difficult to access, as human operation is extremely costly. The robot can naturally also be operated via a remote control and we are working on long-distance remote operation as well.
Do you expect that researchers will be interested in ANYmal C? What research applications could it be useful for?
ANYmal C has been designed to also address the needs of the research community. The robot comes with two powerful hexacore Intel i7 computers and can additionally be equipped with an NVIDIA Jetson Xavier graphics card for learning-based applications. Payload interfaces enable users to easily install and test new sensors. By joining our established ANYmal Research community, researchers get access to simulation tools and software APIs, which boosts their research in various areas like control, machine learning, and navigation.
[ ANYmal C ]