
#436079 Video Friday: This Humanoid Robot Will ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:

Yes I’d very much like one please. Full paper at the link below.

[ Paper ] via [ MIT ]

A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!

[ Roboy ]

The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!

I feel bad, that Roomba was probably just hungry 🙁

[ iRobot ]

We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.

[ Voliro ]

Thanks Mina!

Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.

It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.

[ Harada Lab ]

A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.

[ ABB ]

Measuring glacial movements in situ is a challenging but necessary task for modeling glaciers and predicting their future evolution. However, installing GPS stations on ice can be dangerous and expensive, if not impossible in the presence of large crevasses. In this project, the ASL is developing UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of the first tests performed at Gorner glacier, Switzerland, in July 2019.

[ EPFL ]

Turns out Tertills actually do a pretty great job fighting weeds.

Plus, they leave all those cute lil’ Tertill tracks.

[ Franklin Robotics ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.
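
For a rough sense of what dead-reckoning involves, here is a minimal strapdown IMU integration sketch in Python. It is for intuition only: the Michigan team's invariant EKF (InvEKF) additionally propagates uncertainty on a matrix Lie group and fuses leg kinematics, none of which is reproduced here.

```python
# Minimal dead-reckoning sketch: strapdown integration of gyro and accelerometer
# readings. Illustrative only; not the InvEKF used on Cassie Blue.
import numpy as np

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues' formula: exponential map from so(3) to a rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def propagate(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One IMU step: update orientation R, velocity v, and position p."""
    R_next = R @ expm_so3(gyro * dt)   # integrate body angular rate
    a_world = R @ accel + g            # rotate specific force into world frame, add gravity
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt**2
    return R_next, v_next, p_next
```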

[ GTSAM ] via [ University of Michigan ]

UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.

[ UBTECH ]

ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.

[ ABB ]

In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.

[ Misty Robotics ]

My guess is that this was not one of iRobot’s testing environments for the Roomba.

You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.

[ How Farms Work ]

Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.

[ Panama Flying Labs ]

Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August, 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.

[ SubT ]

Special thanks to PBS, Mark Knobil, Joe Seamans and Stan Brandorff and many others who produced this program in 1991.

It features Reid Simmons (and his 1-year-old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.

[ CMU RI ]

As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences and society, and explores what this could mean for the future.

[ Sonar+D ]

#436065 From Mainframes to PCs: What Robot ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Autonomous robots are coming around slowly. We already have autonomous vacuum cleaners, autonomous lawn mowers, toys that bleep and blink, and (maybe) soon autonomous cars. Yet, generation after generation, we keep waiting for the robots that we all know from movies and TV shows. Instead, businesses seem to get farther and farther away from robots that can do a large variety of tasks using general-purpose, human anatomy-inspired hardware.

Although these are the droids we have been looking for, anything that came close, such as Willow Garage’s PR2 or Rethink Robotics’ Baxter, has bitten the dust. With building a robotics company being particularly hard, compounding business risk with technological risk, the trend has gone from selling robots to selling actual services like mowing your lawn, providing taxi rides, fulfilling retail orders, or picking strawberries by the pound. Unfortunately for fans of R2-D2 and C-3PO, these kinds of business models emphasize specialized, room- or fridge-sized hardware that is optimized for one very specific task but does not contribute to a general-purpose robotic platform.

We have actually seen something very similar in the personal computer (PC) industry. In the 1950s, even though computers could be as big as an entire room and were only available to a select few, the public already had a good idea of what computers would look like. A long list of fictional computers started to populate mainstream entertainment during that time. In a 1962 New York Times article titled “Pocket Computer to Replace Shopping List,” visionary scientist John Mauchly stated that “there is no reason to suppose the average boy or girl cannot be master of a personal computer.”

In 1968, Douglas Engelbart gave us the “mother of all demos,” browsing hypertext with a graphical screen and a mouse, and showing other ideas that would become standard only decades later. Now that we have finally seen all of this, it might be helpful to examine what actually enabled the computing revolution in order to learn where robotics really stands and what we need to do next.

The parallels between computers and robots

In the 1970s, mainframes were about to be replaced by the emerging class of minicomputers, fridge-sized devices that cost less than US $25,000 ($165,000 in 2019 dollars). These computers did not use punch cards but could be programmed in Fortran and BASIC, dramatically expanding the ease with which potential applications could be created. Yet it was still unclear whether minicomputers could ever replace big mainframes in applications that require fast and efficient processing of large amounts of data, let alone enter every living room. This is very similar to the robotics industry right now, where large-scale factory robots (mainframes) that have existed since the 1960s are seeing competition from a growing industry of collaborative robots that can safely work next to humans and can easily be installed and programmed (minicomputers). As in the ’70s, applications for these devices, which reach system prices comparable to that of a luxury car, are quite limited, and it is hard to see how they could ever become a consumer product.

Yet, as in the computer industry, successful architectures are quickly being cloned, driving prices down, and entirely new approaches to constructing or programming robotic arms are sprouting left and right. Arm makers are joined by manufacturers of autonomous carts, robotic grippers, and sensors. These components can be combined, paving the way for standard general-purpose platforms that follow the model of the IBM PC, which built a capable, open architecture relying as much as possible on commodity parts.

General-purpose robotic systems have not been successful for reasons similar to why general-purpose, also known as “personal,” computers took decades to emerge. Mainframes were custom-built for each application, while typewriters got smarter and smarter, not really leaving room for general-purpose computers in between. Indeed, given the cost of hardware and the relatively limited abilities of today’s autonomous robots, it is almost always smarter to build a special-purpose machine than to try to make a collaborative mobile manipulator smart.

A current example is e-commerce grocery fulfillment. The current trend is to reserve underutilized parts of a brick-and-mortar store for a micro-fulfillment center that stores goods in little crates with an automated retrieval system and a (human) picker. A number of startups like Alert Innovation, Fabric, Ocado Technology, TakeOff Technologies, and Tompkins Robotics, to name just a few, have recently raised hundreds of millions in venture capital to build mainframe equivalents of robotic fulfillment centers. This is in contrast with a robotic picker, which would drive through the aisles to restock and pick from shelves. Such a robotic store clerk would come much closer to our vision of a general-purpose robot, but it would require many copies of itself crowding the aisles to churn out hundreds of orders per hour the way a microwarehouse can. Although it might eventually be more efficient, margins in retail are already low, making it unlikely that this industry will produce the technological jump that we need to get friendly C-3POs manning the aisles.

Mainframes were also attacked from the bottom. Fascination with the new digital technology led to a hobbyist movement to create microcomputers that were sold via mail order or at RadioShack. Initially, a large number of small businesses were selling tens, at most hundreds, of devices, usually as a kit and with wooden enclosures. This trend culminated in the “1977 Trinity” in the form of the Apple II, the Commodore PET, and the Tandy TRS-80, complete computers that sold for around $2,500 (TRS) to $5,000 (Apple) in today’s dollars. The main selling point of these computers was their programmability (in BASIC), which would enable consumers to “learn to chart your biorhythms, balance your checking account, or even control your home environment,” according to an original Apple advertisement. Similarly, there exists a myriad of gadgets that explore different aspects of robotics such as mobility, manipulation, and entertainment.

As in the fledgling personal computing industry, the advertised functionality was at best a model of the real deal. A now-famous milestone in entertainment robotics was Sony’s original Aibo, a robotic dog that was advertised to have many of the properties of a real dog, such as developing its own personality, playing with a toy, and interacting with its owner. Released in 1999, and re-launched in 2018, the platform has a solid following among hobbyists and academics who like its programmability, but probably very few users accept the device as a pet stand-in.

There also exist countless “build-your-own-robotic-arm” kits. One of the more successful examples is the uArm, which sells for around $800 and is advertised to perform pick and place, assembly, 3D printing, laser engraving, and many other things that sound like high-value applications. Compelling videos of the robot actually doing these things in a constrained environment have led to two successful crowdfunding campaigns and have established the robot as a successful educational tool.

Finally, there exist platforms that allow hobbyist programmers to explore mobility and construct robots that patrol your house, deliver items, or provide their users with telepresence abilities. An example is the Misty II. Much as with the original Apple II, there remains a disconnect between the price of the hardware and the fidelity of the applications that are available.

For computers, this disconnect began to disappear with the invention of the first electronic spreadsheet software, VisiCalc, which spun out of Harvard in 1979 and prompted many people to buy an entire microcomputer just to run the program. VisiCalc was soon joined by WordStar, a word processing application that sold for close to $2,000 in today’s dollars. WordStar, too, would entice many people to buy the entire hardware just to use the software. The two programs are early examples of what became known as “killer applications.”

With factory automation being mature, and robots with the price tag of a minicomputer being capable of driving around and autonomously carrying out many manipulation tasks, the robotics industry is roughly where the PC industry was between 1973—the release of the Xerox Alto, the first computer with a graphical user interface, mouse, and special software—and 1979—when microcomputers in the under-$5,000 category began to take off.

Killer apps for robots
So what would it take for robotics to continue to advance like computers did? The market itself has already done a good job of distilling what the possible killer apps are. VCs and customers alike push companies that have set out with lofty goals to reduce their offering to a simple value proposition. As a result, companies that started at opposite ends often converge to mirror images of each other, offering very similar autonomous carts, (bin) picking, palletizing, depalletizing, or sorting solutions. Each of these companies usually serves a single application in a single vertical—for example, bin-picking clothes, transporting warehouse goods, or picking strawberries by the pound. They are trying to prove that their specific technology works without spreading themselves too thin.

Very few of these companies have really taken off. One example is Kiva Systems, which turned into the logistics robotics division of Amazon. Kiva and others are structured around sound value propositions that are grounded in well-known user needs. Because these solutions are so specialized, however, it is unlikely that they will yield economies of scale of the same magnitude that early computer users enjoyed when they could buy both a spreadsheet and a word processor application for their expensive minicomputer. What would make these robotic solutions more interesting is stackable functionality: instead of just being able to do bin picking, palletizing, and transportation with the same hardware, these three skills could be combined to model entire processes.

A skill that startups have so far barely addressed, and that historically belongs to the mainframe equivalent of robotics, is the assembly of simple mechatronic devices. The ability to assemble mechatronic parts is comparable to everyday tasks such as changing a light bulb, changing the batteries in a remote control, or tending machines like a lever-based espresso machine. Such an ability would make the autonomous execution of complete workflows possible using a single machine, eventually leading to an explosion of industrial productivity across all sectors. For example, picking up an item from a bin, arranging it on the robot, moving it elsewhere, and placing it into a shelf or a machine is a process that applies equally to a manufacturing environment, a retail store, or someone’s kitchen.

Image: Robotic Materials Inc.

Autonomous, vision- and force-based assembly of the Siemens robot learning challenge.

Even though many of the above applications are becoming possible, it is still very hard to get a platform off the ground without added components that provide “killer app” value of their own. Interesting examples are Rethink Robotics and the Robot Operating System (ROS). Rethink Robotics’ Baxter and Sawyer robots pioneered a great user experience (like the 1973 Xerox Alto, really the first PC), but their applications were difficult to extend beyond simple pick-and-place, palletizing, and depalletizing tasks.

ROS pioneered interprocess communication software that was adapted to robotic needs (multiple computers, different programming languages) and the idea of software modularity in robotics, but—in the absence of a common hardware platform—hasn’t yet delivered a single application, e.g. for navigation, path planning, or grasping, that performs beyond research-grade demonstration level and won’t get discarded once developers turn to production systems. At the same time, an increasing number of robotic devices, such as robot arms or 3D perception systems that offer intelligent functionality, provide other ways to wire them together that do not require an intermediary computer, while keeping close control over the real-time aspects of their hardware.

Image: Robotic Materials Inc.

The Robotic Materials GPR-1 combines a MIR-100 autonomous cart with a UR-5 collaborative robotic arm, an OnRobot force/torque sensor, and Robotic Materials’ SmartHand to perform out-of-the-box mobile assembly, bin picking, palletizing, and depalletizing tasks.

At my company, Robotic Materials Inc., we have made strides to identify a few applications such as bin picking and assembly, making them configurable with a single click by combining machine learning and optimization with an intuitive user interface. Here, users can define object classes and how to grasp them using a web browser, which then appear as first-class objects in a robot-specific graphical programming language. We have also done this for assembly, allowing users to stack perception-based picking and force-based assembly primitives by simply dragging and dropping appropriate commands together.
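
As a purely hypothetical illustration of what such stacking could look like in code (the class and function names below are invented for this sketch and are not Robotic Materials’ actual interface), a workflow reduces to an ordered list of perception and force primitives:

```python
# Hypothetical sketch of stacking skill primitives into a workflow.
# Names are illustrative only, not Robotic Materials' actual API.
from dataclasses import dataclass

@dataclass
class ObjectClass:
    name: str
    grasp_strategy: str  # e.g. "top-down pinch", defined by the user in a browser

def pick_from_bin(obj: ObjectClass) -> None:
    """Perception-based primitive: locate an instance of obj in the bin and grasp it."""
    print(f"Picking {obj.name} using {obj.grasp_strategy}")

def insert(obj: ObjectClass, target: str, max_force: float = 10.0) -> None:
    """Force-based primitive: comply with contact forces until the part is seated."""
    print(f"Inserting {obj.name} into {target} with a {max_force} N force limit")

gear = ObjectClass("gear", "top-down pinch")

# The drag-and-drop workflow is just an ordered stack of primitives.
workflow = [
    lambda: pick_from_bin(gear),
    lambda: insert(gear, "gearbox shaft"),
]

for step in workflow:
    step()
```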

While such an approach might answer the question of a killer app for robots priced in the “minicomputer” range, it is unclear how killer app-type value can be generated with robots in the less-than-$5000 category. A possible answer is two-fold: First, with low-cost arms, mobility platforms, and entertainment devices continuously improving, a confluence of technology readiness and user innovation, like with the Apple II and VisiCalc, will eventually happen. For example, there is not much innovation needed to turn Misty into a home security system; the uArm into a low-cost bin-picking system; or an Aibo-like device into a therapeutic system for the elderly or children with autism.

Second, robots and their components have to become dramatically cheaper. Indeed, computers have seen an exponential reduction in price accompanied by an exponential increase in computational power, thanks in great part to Moore’s Law. This development has helped robotics too, allowing us to reach breakthroughs in mobility and manipulation due to the ability to process massive amounts of image and depth data in real-time, and we can expect it to continue to do so.

Is there a Moore’s Law for robots?
One might ask, however, how similar dynamics might be possible for robots as a whole, including all their motors and gears, and what a “Moore’s Law” would look like for the robotics industry. Here, it helps to remember that the perpetuation of Moore’s Law is not the cause of the PC revolution but its result. Indeed, the first killer apps for bookkeeping, editing, and gaming were so good that they unleashed tremendous consumer demand, beating the benchmark on what was thought to be physically possible over and over again. (I vividly remember 56 kbps being considered the absolute maximum data rate for copper phone lines until DSL appeared.)

That these economies of scale also apply to mechatronics is impressively demonstrated by the car industry. A good example is the 2020 Prius Prime, a highly computerized plug-in hybrid that is available for one-third the cost of my company’s GPR-1 mobile manipulator while being orders of magnitude more complex, sporting an electric motor, a combustion engine, and a myriad of sensors and computers. It is therefore entirely conceivable to produce a mobile manipulator that retails at one-tenth the cost of a modern car, once robots enjoy similar mass-market appeal. Given that these robots are part of the equation, actively lowering the cost of production, this might happen faster than ever before in the history of industrialization.

There is one more driver that might make robots exponentially more capable: the cloud. Once a general-purpose robot has learned or been programmed with a new skill, it could share it with every other robot. At some point, a grocer who buys a robot could assume that it already knows how to recognize and handle 99 percent of the retail items in the store. Likewise, a manufacturer could assume that the robot can handle and assemble every item available from McMaster-Carr and Misumi. Finally, families could expect a robot to know every kitchen item that Ikea and Pottery Barn sell. It sounds like a labor-intensive problem, but it is probably more manageable than collecting footage for Google’s Street View using cars, tricycles, and snowmobiles, among other vehicles.

Strategies for robot startups
While we are waiting for these two trends—better and better applications and hardware with decreasing cost—to converge, we as a community have to keep exploring what the canonical robotic applications beyond mobility, bin picking, palletizing, depalletizing, and assembly are. We must also continue to solve the fundamental challenges that stand in the way of making these solutions truly general and robust.

For both questions, it might help to look at the strategies that have been critical in the development of the personal computer, which might equally well apply to robotics:

Start with a solution to a problem your customers have. Unfortunately, their problem is almost never that they need your sensor, widget, or piece of code, but something that already costs them money or negatively affects them in some other way. Example: There were many more people who had a problem calculating their taxes (and wanted to buy VisiCalc) than people who wanted to write their own solution in BASIC.

Build as little of your own hardware as necessary. Your business model should be stronger than the margin you can make on the hardware. Why take the risk? Example: Why build your own typewriter if you can write the best typewriting application, one that makes it worth buying a computer just for that?

If your goal is a platform, make sure it comes with a killer application, which alone justifies the platform cost. Example: Microcomputer companies came and went until the “1977 Trinity” intersected with the killer apps spreadsheet and word processors. Corollary: You can also get lucky.

Use an open architecture, which creates an ecosystem where others compete on creating better components and peripherals, while allowing others to integrate your solution into their vertical and stack it with other devices. Example: Both the Apple II and the IBM PC were completely open architectures, enabling many clones, thereby growing the user and developer base.

It’s worthwhile pursuing this. With most business processes already being digitized, general purpose robots will allow us to fill in gaps in mobility and manipulation, increasing productivity at levels only limited by the amount of resources and energy that are available, possibly creating a utopia in which creativity becomes the ultimate currency. Maybe we’ll even get R2-D2.

Nikolaus Correll is an associate professor of computer science at the University of Colorado at Boulder where he works on mobile manipulation and other robotics applications. He’s co-founder and CTO of Robotic Materials Inc., which is supported by the National Science Foundation and the National Institute of Standards and Technology via their Small Business Innovative Research (SBIR) programs.

#436042 Video Friday: Caltech’s Drone With ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Caltech has been making progress on LEONARDO (LEg ON Aerial Robotic DrOne), their leggy, thruster-powered humanoid-thing. It can now balance and walk, which is quite impressive to see.

We’ll circle back again when they’ve got it jumping and floating around.

[ Caltech ]

Turn the subtitles on to learn how robots became experts at slicing bubbly, melty, delicious cheese.

These robots learned how to do the traditional Swiss raclette from demonstration. The Robot Learning & Interaction group at the Idiap Research Institute has developed an imitation learning technique allowing the robot to acquire new skills by considering position and force information, with an automatic adaptation to new situations. The range of applications is wide, including industrial robots, service robots, and assistive robots.
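
As a toy illustration of the learning-from-demonstration idea (a heavily simplified sketch, not Idiap’s actual technique, which encodes demonstrations probabilistically), one can average time-aligned position and force recordings and use the observed variability to decide where to track tightly and where to stay compliant:

```python
# Toy sketch: average time-aligned demonstrations of position and force, and use
# variability across demonstrations to modulate tracking stiffness. Simplified;
# not the Idiap group's actual formulation.
import numpy as np

# Each demonstration: (T, 4) array of [x, y, z, contact_force], already time-aligned.
# Placeholder random data stands in for kinesthetic-teaching recordings.
demos = [np.random.rand(100, 4) for _ in range(5)]

stacked = np.stack(demos)            # (n_demos, T, 4)
mean_traj = stacked.mean(axis=0)     # nominal trajectory to reproduce
std_traj = stacked.std(axis=0)       # variability across demonstrations

# Low variance across demos -> track the mean tightly; high variance -> be compliant.
stiffness = 1.0 / (std_traj[:, :3] + 1e-3)
```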

[ Idiap ]

Thanks Sylvain!

Some amazing news this week from Skydio, with the announcement of their better-in-every-single-way Skydio 2 autonomous drone. Read our full article for details, but here’s a getting-started video that gives you an overview of what the drone can do.

The first batch sold out in 36 hours, but you can put down a $100 deposit to reserve the $999 drone for 2020 delivery.

[ Skydio ]

UBTECH is introducing a couple new robot kits for the holidays: ChampBot and FireBot.

$130 each, available on October 20.

[ Ubtech ]

NASA’s InSight lander on Mars is trying to use its robotic arm to get the mission’s heat flow probe, or mole, digging again. InSight team engineer Ashitey Trebbi-Ollennu, based at NASA’s Jet Propulsion Laboratory in Pasadena, California, explains what has been attempted and the game plan for the coming weeks. The next tactic they’ll try will be “pinning” the mole against the hole it’s in.

[ NASA ]

We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20cm) in both horizontal and vertical directions.

[ Ryo Suzuki ]

Robot abuse!

Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs. Using only high-transparency actuators and 2 kHz algorithmic stability control… four limbs and 12 motors with only a velocity command.

[ Ghost Robotics ]

We asked real people to bring in real products they needed picked for their application. In MINUTES, we assembled the right tool.

This is a cool idea, but for a real challenge they should try it outside a supermarket. Or a pet store.

[ Soft Robotics ]

Good water quality is important to humans and to nature. In a country with as much water as the Netherlands has, ensuring water quality is a very labour-intensive undertaking. To address this issue, researchers from TU Delft have developed a ‘pelican drone’: a drone capable of taking water samples quickly, in combination with a measuring instrument that immediately analyses the water quality. The drone was tested this week at the new Marker Wadden nature area ‘Living Lab’.

[ MAVLab ]

In an international collaboration led by scientists in Switzerland, three amputees merge with their bionic prosthetic legs as they climb over various obstacles without having to look. The amputees report using and feeling their bionic leg as part of their own body, thanks to sensory feedback from the prosthetic leg that is delivered to nerves in the leg’s stump.

[ EPFL ]

It’s a little hard to see, but this is one way of testing out asteroid imaging spacecraft without actually going into space: a fake asteroid and a 2D microgravity simulator.

[ Caltech ]

Drones can help filmmakers do the kinds of shots that would be otherwise impossible.

[ DJI ]

Two long interviews this week from Lex Fridman’s AI Podcast, and both of them are worth watching: Gary Marcus, and Peter Norvig.

[ AI Podcast ]

This week’s CMU RI Seminar comes from Tucker Hermans at the University of Utah, on “Improving Multi-fingered Robot Manipulation by Unifying Learning and Planning.”

Multi-fingered hands offer autonomous robots increased dexterity, versatility, and stability over simple two-fingered grippers. Naturally, this increased ability comes with increased complexity in planning and executing manipulation actions. As such, I propose combining model-based planning with learned components to improve over purely data-driven or purely-model based approaches to manipulation. This talk examines multi-fingered autonomous manipulation when the robot has only partial knowledge of the object of interest. I will first present results on planning multi-fingered grasps for novel objects using a learned neural network. I will then present our approach to planning in-hand manipulation tasks when dynamic properties of objects are not known. I will conclude with a discussion of our ongoing and future research to further unify these two approaches.

[ CMU RI ]

#435828 Video Friday: Boston Dynamics’ ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:

[ Boston Dynamics ]

Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.

[ BattleBots ]

Thanks Trey!

Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.

As of Friday afternoon, the current bid is just over $100,000 with a week to go.

[ MegaBots ]

Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.

[ Michigan Engineering ]

Michael Burke from the University of Edinburgh writes:

We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
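
Here is a rough sketch of that control idea, assuming a logistic classifier over joint torques and a simple proportional correction; the weights, gain, and correction direction are illustrative assumptions, not the parameters from the paper.

```python
# Rough sketch: estimate p(knife is in peel) from joint torques, and nudge the
# nominal cutting waypoint sideways so the estimate stays near 0.5, i.e. on the
# peel/pulp boundary. Weights, gain, and direction are illustrative assumptions.
import numpy as np

def p_peel(torques, w, b):
    """Logistic model mapping a joint-torque vector to the probability of being in peel."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, torques) + b)))

def corrected_waypoint(nominal_xyz, torques, w, b, lateral_dir, gain=0.005):
    """Shift the nominal waypoint toward the point of maximum classifier uncertainty."""
    error = p_peel(torques, w, b) - 0.5   # positive: too deep in peel; negative: in pulp
    return np.asarray(nominal_xyz) + gain * error * np.asarray(lateral_dir)
```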

[ Paper ] via [ Robust Autonomy and Decisions Group ]

Thanks Michael!

Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!

[ EMYS ]

We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN arm will be able to change between multiple ’end effector’ tools such as grippers and inspection sensors – those are in development so they’re not shown in this video.

[ Tethers Unlimited ]

UBTECH’s Alpha Mini Robot with Smart Robot’s “Maatje” software is offering healthcare services to children at the Praktijk Intraverte Multidisciplinary Institution in the Netherlands.

The institution is using Alpha Mini in counseling children on their behavior. Alpha Mini can move and talk to children and offers games and activities to stimulate them and interact with them. Alpha Mini talks to, helps, and motivates children, thereby helping them become more flexible in society.

[ UBTECH ]

Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
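
To make the “online planning with a learned dynamics model” idea concrete, here is a bare-bones random-shooting MPC sketch; PDDM itself uses model ensembles and a more refined sampling strategy, so treat this as a simplified stand-in rather than the authors’ method.

```python
# Illustrative sketch of model-predictive control with a learned dynamics model,
# in the spirit of PDDM. Plain random shooting with a single model; the real
# method uses ensembles and a more sophisticated action-sampling scheme.
import numpy as np

def plan_action(state, dynamics_model, reward_fn, horizon=10, n_candidates=500, act_dim=24):
    """Sample action sequences, roll them through the learned model, return the best first action."""
    best_return, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        actions = np.random.uniform(-1, 1, size=(horizon, act_dim))
        s, total = state, 0.0
        for a in actions:
            s = dynamics_model(s, a)      # learned one-step prediction s_{t+1} = f(s_t, a_t)
            total += reward_fn(s, a)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action              # execute this action, then re-plan at the next step
```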

[ PDDM ]

Thanks Vikash!

CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.

A paper on this has been submitted to IROS 2019.

[ CMU ]

The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.

[ Autonomous Robots Lab ]

More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.

[ YouTube ]

Whatever you think of military drones, we can all agree that they look cool.

[ Boeing ]

I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.

[ EPFL LASA ]

Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by utilizing a 3D prior map built in Step 1.

[ CMU ]

The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.

[ Stanford ASL ]

In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.

[ Misty Robotics ]

This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”

The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.

[ CMU ]

Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”

[ UC Berkeley ]

#435824 A Q&A with Cruise’s head of AI, ...

In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.

Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.

Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, due to the appeal of the challenge of autonomous vehicles in drawing in AI experts from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.

Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.

IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?

Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.

I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.

In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.

There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.

IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?

Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.

Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.

I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge is 1,000 times that of other problems. They aren’t as well understood yet, and they require far deeper technology. And the quality at which they are expected to operate is off the roof.

The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.

Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.

IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?

Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.

And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”

IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?

Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.

Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.

It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.

I can see how I can spend a lifetime career trying to solve this problem.

IEEE Spectrum: Why are you doing most of your development in San Francisco?

Mehanna: I think the best talent of the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.

[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.

A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”
