#435828 Video Friday: Boston Dynamics’ ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:
[ Boston Dynamics ]
Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.
[ BattleBots ]
Thanks Trey!
Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.
As of Friday afternoon, the current bid is just over $100,000 with a week to go.
[ MegaBots ]
Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.
[ Michigan Engineering ]
Michael Burke from the University of Edinburgh writes:
We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
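The control idea is compact enough to sketch. Below is a minimal Python illustration of boundary following by staying maximally uncertain: a classifier maps joint torques to the probability that the knife is in peel, and a small depth correction to the demonstrated trajectory pushes that probability back toward 0.5. The function names, gain, and offset parameterization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def peel_probability(joint_torques, clf):
    """Hypothetical torque classifier: probability that the knife tip
    is currently in peel rather than pulp."""
    return clf.predict_proba(joint_torques.reshape(1, -1))[0, 1]

def update_depth_offset(p_peel, offset, gain=0.002, max_offset=0.01):
    """Push the classifier back toward maximum uncertainty (p = 0.5).
    If we are confidently in peel (p > 0.5), cut deeper; if confidently
    in pulp (p < 0.5), cut shallower. Offset is in meters."""
    offset += gain * (p_peel - 0.5)
    return float(np.clip(offset, -max_offset, max_offset))

def corrected_waypoint(nominal_point, surface_normal, offset):
    """Apply the depth correction along the local inward normal of the
    nominal cutting trajectory learned from human demonstration."""
    return nominal_point + offset * surface_normal
```

Driving the uncertainty to its maximum, rather than minimizing it, is what keeps the blade riding the peel/pulp boundary instead of settling into either medium.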
[ Paper ] via [ Robust Autonomy and Decisions Group ]
Thanks Michael!
Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!
[ EMYS ]
We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN Arm will be able to change between multiple “end effector” tools such as grippers and inspection sensors – those are in development so they’re not shown in this video.
[ Tethers Unlimited ]
UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is offering healthcare services to children at the Praktijk Intraverte Multidisciplinary Institution in the Netherlands.
The institution is using Alpha Mini in counseling children on their behavior. Alpha Mini can move and talk to children and offers games and activities to stimulate and interact with them. By talking to, helping, and motivating children, Alpha Mini helps them become more flexible in society.
[ UBTECH ]
Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.
Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
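The core loop of online planning with a learned dynamics model is easy to sketch. PDDM itself uses an ensemble of neural-network dynamics models and a reward-weighted sampling scheme; the simplified random-shooting version below, with assumed dynamics_model and reward_fn callables, only illustrates the replan-at-every-step structure of model-predictive control.

```python
import numpy as np

def mpc_step(state, dynamics_model, reward_fn, horizon=15,
             n_candidates=500, action_dim=24, rng=np.random):
    """One step of model-predictive control with a learned dynamics model:
    sample candidate action sequences, roll them out through the model,
    score them with the task reward, and return only the first action of
    the best sequence. The controller replans from scratch at every step."""
    best_return, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state, 0.0
        for a in actions:
            s = dynamics_model(s, a)   # learned model: next state from state and action
            total += reward_fn(s, a)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action
```

Because all trial and error happens inside the learned model, only the executed actions consume real-world data, which is how methods like this get away with just a few hours of experience on real hardware.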
[ PDDM ]
Thanks Vikash!
CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.
A paper on this has been submitted to IROS 2019.
[ CMU ]
The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.
[ Autonomous Robots Lab ]
More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.
[ YouTube ]
Whatever you think of military drones, we can all agree that they look cool.
[ Boeing ]
I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.
[ EPFL LASA ]
Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by utilizing a 3D prior map built in Step 1.
[ CMU ]
The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.
[ Stanford ASL ]
In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.
[ Misty Robotics ]
This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”
The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.
[ CMU ]
Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”
[ UC Berkeley ]
#435824 A Q&A with Cruise’s head of AI, ...
In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.
Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.
Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, thanks to the pull that the autonomous vehicle challenge exerts on AI experts working in other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.
Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.
IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?
Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.
“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”
I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.
In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.
There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.
IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?
Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.
Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.
I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge of these problems is 1000 times more than other problems. They aren’t as well understood yet, and they require far deeper technology. And also the quality at which they are expected to operate is off the roof.
The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.
Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.
IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?
Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.
And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”
IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?
Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.
“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”
Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.
It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.
I can see how I can spend a lifetime career trying to solve this problem.
IEEE Spectrum: Why are you doing most of your development in San Francisco?
Mehanna: I think the best talent of the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful, you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, SoftBank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.
“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”
[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.
A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”
#435793 Tiny Robots Carry Stem Cells Through a ...
Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper published today in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.
Under a rotating magnetic field, the microrobots moved with rolling and corkscrew-style locomotion. The researchers, led by Hongsoo Choi and his team at the Daegu Gyeongbuk Institute of Science & Technology (DGIST), in South Korea, also demonstrated their bot’s moves in slices of mouse brain, in blood vessels isolated from rat brains, and in a multi-organ-on-a-chip.
The invention provides an alternative way to deliver stem cells, which are increasingly important in medicine. Such cells can be coaxed into becoming nearly any kind of cell, making them great candidates for treating neurodegenerative disorders such as Alzheimer’s.
But delivering stem cells typically requires an injection with a needle, which lowers the survival rate of the stem cells, and limits their reach in the body. Microrobots, however, have the potential to deliver stem cells to precise, hard-to-reach areas, with less damage to surrounding tissue, and better survival rates, says Jin-young Kim, a principal investigator at the DGIST-ETH Microrobotics Research Center, and an author on the paper.
The virtues of microrobots have inspired several research groups to propose and test different designs in simple conditions, such as microfluidic channels and other static environments. A group out of Hong Kong last year described a burr-shaped bot that carried cells through live, transparent zebrafish.
The new research presents a magnetically-actuated microrobot that successfully carried stem cells through a live mouse. In additional experiments, the cells, which had differentiated into brain cells such as astrocytes, oligodendrocytes, and neurons, transferred to microtissues on the multi-organ-on-a-chip. Taken together, the proof-of-concept experiments demonstrate the potential for microrobots to be used in human stem cell therapy, says Kim.
The team fabricated the robots with 3D laser lithography, and designed them in two shapes: spherical and helical. Using a rotating magnetic field, the scientists navigated the spherical-shaped bots with a rolling motion, and the helical bots with a corkscrew motion. These styles of locomotion proved more efficient than that from a simple pulling force, and were more suitable for use in biological fluids, the scientists reported.
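For a rough sense of scale, the idealized kinematics of the two gaits are simple: a rolling sphere locked to the field advances one circumference per field revolution, and a corkscrewing helix advances one pitch per revolution, minus slip. The short sketch below uses made-up dimensions and frequencies purely for illustration; real microrobots slip in viscous fluid, so actual speeds are lower than this ideal and the paper’s measured values should be consulted.

```python
import math

def rolling_speed(field_freq_hz, radius_m):
    """Idealized no-slip rolling: a spherical bot locked to the rotating
    field advances one circumference per field revolution."""
    return 2.0 * math.pi * radius_m * field_freq_hz

def corkscrew_speed(field_freq_hz, helix_pitch_m, slip=0.0):
    """Idealized corkscrew propulsion below the step-out frequency:
    one helix pitch per field revolution, reduced by a slip factor."""
    return (1.0 - slip) * helix_pitch_m * field_freq_hz

# Example with made-up numbers: a 75-micrometer-radius sphere rolled at
# 5 Hz would ideally cover about 2.4 mm/s.
print(rolling_speed(5.0, 75e-6))   # ~0.0024 m/s
```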
The big challenge in navigating microbots in a live animal (or human body) is being able to see them in real time. Imaging with fMRI doesn’t work, because the magnetic fields interfere with the system. “To precisely control microbots in vivo, it is important to actually see them as they move,” the authors wrote in their paper.
That wasn’t possible during experiments in a live mouse, so the researchers had to check the location of the microrobots before and after the experiments using an optical tomography system called IVIS. They also had to resort to using a pulling force with a permanent magnet to navigate the microrobots inside the mouse, due to the limitations of the IVIS system.
Kim says he and his colleagues are developing imaging systems that will enable them to view in real time the locomotion of their microrobots in live animals.