#436044 Want a Really Hard Machine Learning ...
What’s the world’s hardest machine learning problem? Autonomous vehicles? Robots that can walk? Cancer detection?
Nope, says Julian Sanchez. It’s agriculture.
Sanchez might be a little biased. He is the director of precision agriculture for John Deere, and is in charge of adding intelligence to traditional farm vehicles. But he does have a little perspective, having spent time working on software for both medical devices and air traffic control systems.
I met with Sanchez and Alexey Rostapshov, head of digital innovation at John Deere Labs, at the organization’s San Francisco offices last month. Labs launched in 2017 to take advantage of the area’s tech expertise, both to apply machine learning to in-house agricultural problems and to work with partners to build technologies that play nicely with Deere’s big green machines. Deere’s neighbors in San Francisco’s tech-heavy South of Market are LinkedIn, Salesforce, and Planet Labs, which puts it in a good position for recruiting.
“We’ve literally had folks knock on the door and say, ‘What are you doing here?’” says Rostapshov, and some return to drop off resumes.
Here’s why Sanchez believes agriculture is such a big challenge for artificial intelligence.
“It’s not just about driving tractors around,” he says, although autonomous driving technologies are part of the mix. (John Deere is doing a lot of work with precision GPS to improve autonomous driving, for example, and allow tractors to plan their own routes around fields.)
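Deere hasn't published details of its route planners; purely as a sketch of the kind of problem, here is a minimal boustrophedon ("lawnmower") coverage plan over an idealized rectangular field, with pass spacing set by the implement width. The field dimensions and function name are hypothetical.

```python
# Minimal sketch of boustrophedon ("lawnmower") coverage planning for a
# rectangular field. Purely illustrative; John Deere's actual planners are
# not public, and real fields are irregular polygons with obstacles.

def coverage_waypoints(width_m, length_m, implement_width_m):
    """Return a back-and-forth list of (x, y) waypoints covering the field."""
    waypoints = []
    x = implement_width_m / 2.0   # center the first pass on the implement
    heading_up = True
    while x <= width_m:
        if heading_up:
            waypoints.append((x, 0.0))
            waypoints.append((x, length_m))
        else:
            waypoints.append((x, length_m))
            waypoints.append((x, 0.0))
        heading_up = not heading_up
        x += implement_width_m
    return waypoints

if __name__ == "__main__":
    # Hypothetical 400 m x 800 m field with a 12 m implement.
    plan = coverage_waypoints(400.0, 800.0, 12.0)
    print(f"{len(plan)} waypoints, first pass: {plan[:2]}")
```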
But more complex than the driving problem, says Sanchez, are the classification problems.
Corn: A Classic Classification Problem
Photo: Tekla Perry
One key effort, Sanchez says, is building AI systems “that allow me to tell whether grain being harvested is good quality or low quality and to make automatic adjustment systems for the harvester.” The company is already selling an early version of this image analysis technology. But the many differences between grain types, and between grains grown under different conditions, make this a tough task for machine learning.
“Take corn,” Sanchez says. “Let’s say we are building a deep learning algorithm to detect this corn. And we take lots of pictures of kernels to give it. Say we pick those kernels in central Illinois. But, one mile over, the farmer planted a slightly different hybrid which has slightly different coloration of yellow. Meanwhile, this other farm harvested three days later in a field five miles away; it’s the same hybrid, but it also looks different.
“It’s an overwhelming classification challenge, and that’s just for corn. But you are not only doing it for corn, you have to add 20 more varieties of grain to the mix; and some, like canola, are almost microscopic.”
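Deere hasn't disclosed its model architecture, but the task Sanchez describes is essentially image classification. Here is a minimal sketch, in PyTorch, of a binary kernel-quality classifier; the image size, labels, and network are assumptions for illustration only.

```python
# Minimal sketch (PyTorch) of a binary grain-quality classifier of the kind
# described above. Architecture, image size, and labels are illustrative
# assumptions, not John Deere's actual model.
import torch
import torch.nn as nn

class KernelQualityNet(nn.Module):
    def __init__(self, num_classes=2):  # good vs. low quality
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = KernelQualityNet()
    batch = torch.randn(4, 3, 64, 64)            # four fake 64x64 RGB crops
    logits = model(batch)
    print(logits.shape)                          # torch.Size([4, 2])
```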
Even the ground conditions vary dramatically—far more than road conditions, Sanchez points out.
“Let’s say we are building a deep learning algorithm to detect how much residue is left on the soil after a harvest, including stubble and some chaff. Let’s drive 2,000 acres of fields in the Midwest looking at residue. That’s great, but I guarantee that if you go drive those the next year, it will look significantly different.
“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen. And in agriculture, you always feel that there is a set of conditions that you haven’t yet classified.”
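One standard way to quantify the extrapolation gap Sanchez describes is to hold out entire fields (or seasons) at evaluation time, rather than splitting samples at random. A minimal sketch using scikit-learn's GroupKFold, with synthetic stand-in data and hypothetical field IDs:

```python
# Sketch: measuring how well a model extrapolates to unseen fields by
# splitting train/test on field ID instead of at random. Data, features,
# and field IDs here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))            # fake residue-image features
y = rng.integers(0, 2, size=600)         # fake high/low residue labels
fields = rng.integers(0, 10, size=600)   # which field each sample came from

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=fields):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))  # accuracy on unseen fields

print(f"mean accuracy on held-out fields: {np.mean(scores):.2f}")
```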
A Flood of Big Data
The scale of the data is also daunting, Rostapshov points out. “We are one of the largest users of cloud computing services in the world,” he says. “We are gathering 5 to 15 million measurements per second from 130,000 connected machines globally. We have over 150 million acres in our databases, using petabytes and petabytes [of storage]. We process more data than Twitter does.”
Much of this information is so-called dirty data: it doesn’t share a common format or structure, because it comes not only from a wide variety of John Deere machines but also from some 100 other companies that have access to the platform, and it includes weather information, aerial imagery, and soil analyses.
As a result, says Sanchez, Deere has had to make “tremendous investments in back-end data cleanup.”
“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen.”
—Julian Sanchez, John Deere
“We have gotten progressively more skilled at that problem,” he says. “We started simply by cleaning up our own data. You’d think it would be nice and neat, since it’s coming from our own machines, but there is a wide variety of different models and different years. Then we started geospatially tagging the agronomic data—the information about where you are applying herbicides and fertilizer and the like—coming in from our vehicles. When we started bringing in other data, from drones, say, we were already good at cleaning it up.”
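Deere's pipeline isn't public; as a loose sketch of what cleaning up and geospatially tagging heterogeneous records might involve, here is a toy normalizer that maps records with different field names and units into one geotagged schema. The record formats, field names, and sources are invented for illustration.

```python
# Toy sketch of normalizing "dirty" records from different sources into one
# geotagged schema. Field names, units, and sources are invented for
# illustration; Deere's actual pipeline is not public.
from dataclasses import dataclass

@dataclass
class Measurement:
    source: str
    lat: float
    lon: float
    timestamp: str       # ISO 8601
    quantity: str        # e.g. "fertilizer_rate"
    value: float         # normalized to kg/ha

def normalize(record: dict) -> Measurement:
    if record["source"] == "deere_machine":
        # Machine telemetry: already metric, GPS fix nested in the record.
        return Measurement("deere_machine",
                           record["gps"]["lat"], record["gps"]["lon"],
                           record["time"], record["channel"], record["value"])
    if record["source"] == "partner_soil_lab":
        # Partner data: flat fields, pounds per acre -> kg per hectare.
        return Measurement("partner_soil_lab",
                           record["latitude"], record["longitude"],
                           record["sampled_at"], record["measurement"],
                           record["lbs_per_acre"] * 1.1209)
    raise ValueError(f"unknown source: {record['source']}")

raw = {"source": "partner_soil_lab", "latitude": 40.1, "longitude": -89.4,
       "sampled_at": "2019-10-01T14:00:00Z",
       "measurement": "nitrogen", "lbs_per_acre": 150.0}
print(normalize(raw))
```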
John Deere’s Hiring Pitch
Hard problems can be a good thing to have for a company looking to hire machine learning engineers.
“Our opening line to potential recruits,” Sanchez says, “is ‘This stuff matters.’ Then, if we get a chance to talk to them more, we follow up with ‘Not only does this stuff matter, but the problems are really hard and interesting.’ When we explain the variability in farming and how we have to apply all the latest tools to these problems, we get their attention.”
Software engineers “know that feeding a growing population is a massive problem and are excited about the prospect of making a difference,” Rostapshov says.
Only 20 engineers work in the San Francisco labs right now, and that’s on a busy day—some of the researchers spend part of their time at Blue River Technology, a startup based in Sunnyvale that was acquired by Deere in 2017. About half of the researchers are focusing on AI. The Lab is in the process of doubling its office space (no word on staffing plans for that expansion yet).
“We are one of the largest users of cloud computing services in the world.”
—Alexey Rostapshov, John Deere Labs
Company-wide, Deere has thousands of software engineers, with many using AI and machine learning tools in their work, and about the same number of mechanical and electrical engineers, Sanchez reports. “If you look at our hiring 10 years ago,” he says, “it was heavily weighted to mechanical engineers. But if you look at those numbers now, it is by a large majority [engineers working] in the software space. We still need mechanical engineers—we do build green machines—but if you go by our footprint of tech talent, it is pretty safe to call John Deere a software company. And if you follow the key conversations that are happening in the company right now, 95 percent of them are software-related.”
For now, these software engineers are focused on developing technologies that allow farmers to “do more with less,” Sanchez says: to get more and better crops from less fuel, less seed, less fertilizer, less pesticide, and fewer workers, while putting together building blocks that, he says, could eventually lead to fully autonomous farm vehicles. The data Deere collects today, for the most part, stays in silos (the virtual kind), with AI algorithms that analyze specific sets of data to provide guidance to individual farmers. At some point, however, with tools to anonymize data and buy-in from farmers, aggregating data could provide some powerful insights.
“We are not asking farmers for that yet,” Sanchez says. “We are not doing aggregation to look for patterns. We are focused on offering technology that allows an individual farmer to use less, on positioning ourselves to be in a neutral spot. We are not about selling you more seed or more fertilizer. So we are building up a good trust level. In the long term, we can have conversations about doing more with deep learning.”
#436005 NASA Hiring Engineers to Develop “Next ...
It’s been nearly six years since NASA unveiled Valkyrie, a state-of-the-art full-size humanoid robot. After the DARPA Robotics Challenge, NASA has continued to work with Valkyrie at Johnson Space Center, and has also provided Valkyrie robots to several different universities. Although it’s not a new platform anymore (six years is a long time in robotics), Valkyrie is still very capable, with plenty of potential for robotics research.
With that in mind, we were caught by surprise when, over the last several months, Jacobs, a Dallas-based engineering company that appears to provide a wide variety of technical services to anyone who wants them, posted several job openings for roboticists in the Houston, Texas, area who are interested in working with NASA on “the next generation of humanoid robot.”
Here are the relevant bullet points from one of the job descriptions:
Work directly with NASA Johnson Space Center in designing the next generation of humanoid robot.
Join the Valkyrie humanoid robot team in NASA’s Robotic Systems Technology Branch.
Build on the success of the existing Valkyrie and Robonaut 2 humanoid robots and advance NASA’s ability to project a remote human presence and dexterous manipulation capability into challenging, dangerous, and distant environments, both in space and here on Earth.
The question is, why is NASA developing its own humanoid robot (again) when it could instead save a whole bunch of time and money by using a platform that already exists, whether it’s Atlas, Digit, Valkyrie itself, or one of the small handful of other humanoids that are more or less available? The only answer that I can come up with is that no existing platforms meet NASA’s requirements, whatever those may be. And if that’s the case, what kind of requirements are we talking about? The obvious one would be the ability to work in the kinds of environments that NASA specializes in—space, the Moon, and Mars.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working on the surface of Mars.
NASA’s existing humanoid robots, including Robonaut 2 and Valkyrie, were designed to operate on Earth. Robonaut 2 ended up going to space anyway (it’s recently returned to Earth for repairs), but its hardware was certainly never intended to function outside of the International Space Station. Working in a vacuum involves designing for a much more rigorous set of environmental challenges, and things get even worse on the Moon or on Mars, where highly abrasive dust gets everywhere.
We know that it’s possible to design robots for long-term operation in these kinds of environments because we’ve done it before. But if you’re not actually going to send your robot off-world, there’s very little reason to bother making sure that it can operate through (say) 300° Celsius temperature swings like you’d find on the Moon. In the past, NASA has quite sensibly focused on designing robots that can be used as platforms for the development of software and techniques that could one day be applied to off-world operations, without over-engineering those specific robots to operate in places that they would almost certainly never go. As NASA increasingly focuses on a return to the Moon, though, maybe it’s time to start thinking about a humanoid robot that could actually do useful stuff on the lunar surface.
Image: NASA
Artist’s concept of the Gateway moon-orbiting space station (seen on the right) with an Orion crew vehicle approaching.
The other possibility that I can think of, and perhaps the more likely one, is that this next humanoid robot will be a direct successor to Robonaut 2, intended for NASA’s Gateway space station orbiting the Moon. Some of the robotics folks at NASA that we’ve talked to recently have emphasized how important robotics will be for Gateway:
Trey Smith, NASA Ames: Everybody at NASA is really excited about work on the Gateway space station that would be in near lunar space. We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations. And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.
If you have an un-crewed cargo vehicle that shows up stuffed to the rafters with cargo bags and it docks with the Gateway when there’s no crew there, it would be very useful to have intra-vehicular robots that can pull all those cargo bags out, unpack them, stow all the items, and then even allow the cargo vehicle to detach before the crew show up so that the crew don’t have to waste their time with that.
Julia Badger, NASA JSC: One of the systems on board Gateway is going to be intravehicular robots. They’re not going to necessarily look like Robonaut, but they’ll have some of the same functionality as Robonaut—being mobile, being able to carry payloads from one part of the module to another, doing some dexterous manipulation tasks, inspecting behind panels, those sorts of things.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working inside a spacecraft.
Since Gateway won’t be crewed by humans all of the time, it’ll be important to have a permanent robotic presence to keep things running while nobody is home. Robots also save on resources, since they aren’t always eating food, drinking water, consuming oxygen, demanding that the temperature stay just so, and producing a variety of disgusting kinds of waste. Obviously, these robots won’t be as capable as humans, but if they can manage even basic ongoing maintenance tasks (most likely through at least partial teleoperation), that would be very useful.
Photo: Evan Ackerman/IEEE Spectrum
NASA’s Robonaut team plans to perform a variety of mobility and motion-planning experiments using the robot’s new legs, which can grab handrails on the International Space Station.
As for whether robots designed for Gateway would really fall into the “humanoid” category, it’s worth considering that Gateway is designed for humans, implying that an effective robotic system on Gateway would need to be able to interact with the station in similar ways to how a human astronaut would. So, you’d expect to see arms with end-effectors that can grip things as well as push buttons, and some kind of mobility system—the legged version of Robonaut 2 seems like a likely template, but redesigned from the ground up to work in space, incorporating all the advances in robotics hardware and computing that have taken place over the last decade.
We’ve been pestering NASA about this for a little bit now, and they’re not ready to comment on this project, or even to confirm it. And again, everything in this article (besides the job post, which you should totally check out and consider applying for) is just speculation on our part, and we could be wrong about absolutely all of it. As soon as we hear more, we’ll definitely let you know.
#435828 Video Friday: Boston Dynamics’ ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RoboBusiness 2019 – October 1-3, 2019 – Santa Clara, Calif., USA
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
You’ve almost certainly seen the new Spot and Atlas videos from Boston Dynamics, if for no other reason than we posted about Spot’s commercial availability earlier this week. But what, are we supposed to NOT include them in Video Friday anyway? Psh! Here you go:
[ Boston Dynamics ]
Eight deadly-looking robots. One Giant Nut trophy. Tonight is the BattleBots season finale, airing on Discovery, 8 p.m. ET, or check your local channels.
[ BattleBots ]
Thanks Trey!
Speaking of battling robots… Having giant robots fight each other is one of those things that sounds really great in theory, but doesn’t work out so well in reality. And sadly, MegaBots is having to deal with reality, which means putting their giant fighting robot up on eBay.
As of Friday afternoon, the current bid is just over $100,000 with a week to go.
[ MegaBots ]
Michigan Engineering has figured out the secret formula to getting 150,000 views on YouTube: drone plus nail gun.
[ Michigan Engineering ]
Michael Burke from the University of Edinburgh writes:
We’ve been learning to scoop grapefruit segments using a PR2, by “feeling” the difference between peel and pulp. We use joint torque measurements to predict the probability that the knife is in the peel or pulp, and use this to apply feedback control to a nominal cutting trajectory learned from human demonstration, so that we remain in a position of maximum uncertainty about which medium we’re cutting. This means we slice along the boundary between the two mediums. It works pretty well!
[ Paper ] via [ Robust Autonomy and Decisions Group ]
Thanks Michael!
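The controller in Burke's paper isn't reproduced here, but the "stay at maximum uncertainty" idea he describes can be sketched as a simple proportional rule that nudges the knife off its nominal trajectory whenever the predicted probability of being in peel drifts away from 0.5. The torque-based classifier and gain below are placeholders, not the paper's method.

```python
# Sketch of the "stay at maximum uncertainty" idea: adjust a nominal cutting
# depth so the predicted probability of being in peel stays near 0.5 (the
# peel/pulp boundary). The torque-based classifier is a crude stand-in.
def predict_peel_probability(joint_torques):
    # Placeholder for a learned classifier over joint-torque features;
    # a simple threshold on the mean torque just to make this runnable.
    mean_torque = sum(joint_torques) / len(joint_torques)
    return min(1.0, max(0.0, mean_torque / 2.0))

def depth_correction(joint_torques, gain=0.002):
    """Return an offset (meters) to the nominal trajectory; positive = toward the pulp."""
    p_peel = predict_peel_probability(joint_torques)
    error = p_peel - 0.5
    # If confidently in peel, nudge toward the pulp; if confidently in pulp,
    # nudge back toward the peel, keeping the knife on the boundary.
    return gain * error

# Example: torques suggest we're mostly in peel, so nudge toward the pulp.
print(depth_correction([1.6, 1.4, 1.5]))   # small positive offset
```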
Hey look, it’s Jan with eight EMYS robot heads. Hi, Jan! Hi, EMYSes!
[ EMYS ]
We’re putting the KRAKEN Arm through its paces, demonstrating that it can unfold from an Express Rack locker on the International Space Station and access neighboring lockers in NASA’s FabLab system to enable transfer of materials and parts between manufacturing, inspection, and storage stations. The KRAKEN arm will be able to change between multiple “end effector” tools such as grippers and inspection sensors – those are in development, so they’re not shown in this video.
[ Tethers Unlimited ]
UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is providing healthcare services to children at the Praktijk Intraverte Multidisciplinary Institution in the Netherlands.
The institution is using Alpha Mini in behavioral counseling for children. Alpha Mini can move around, talk with the children, and offer games and activities to stimulate and interact with them. By talking with, helping, and motivating the children, the robot helps them become more comfortable in social situations.
[ UBTECH ]
Some impressive work here from Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar at Google Brain, training a dexterous multi-fingered hand to do that thing with two balls that I’m really bad at.
Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills — and that too, on a 24-DoF anthropomorphic hand in the real world, using just 2-4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects.
[ PDDM ]
Thanks Vikash!
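The actual PDDM implementation is linked above; purely as a rough illustration of the core idea (model-predictive control with a learned dynamics model, here via simple random shooting rather than PDDM's refinements), here is a toy sketch. The dynamics model, reward, and dimensions are placeholders, not the paper's.

```python
# Toy sketch of model-based planning with a learned dynamics model: sample
# random action sequences, roll them out through the model, score them, and
# execute the first action of the best sequence. The "learned" model and
# reward here are placeholders, not PDDM's implementation.
import numpy as np

STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 6, 3, 10, 256
rng = np.random.default_rng(0)

def dynamics_model(state, action):
    # Stand-in for a learned neural network f(s, a) -> s'.
    return 0.95 * state + 0.1 * np.tanh(action).sum() * np.ones(STATE_DIM)

def reward(state):
    # Toy objective: keep the state close to the origin.
    return -np.linalg.norm(state)

def plan_first_action(state):
    candidates = rng.uniform(-1, 1, size=(N_CANDIDATES, HORIZON, ACTION_DIM))
    returns = np.zeros(N_CANDIDATES)
    for i, actions in enumerate(candidates):
        s = state.copy()
        for a in actions:                 # roll out through the learned model
            s = dynamics_model(s, a)
            returns[i] += reward(s)
    return candidates[np.argmax(returns), 0]   # execute only the first action

state = rng.normal(size=STATE_DIM)
print("chosen action:", plan_first_action(state))
```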
CMU’s Ballbot has a deceptively light touch that’s ideal for leading people around.
A paper on this has been submitted to IROS 2019.
[ CMU ]
The Autonomous Robots Lab at the University of Nevada is sharing some of the work they’ve done on path planning and exploration for aerial robots during the DARPA SubT Challenge.
[ Autonomous Robots Lab ]
More proof that anything can be a drone if you staple some motors to it. Even 32 feet of styrofoam insulation.
[ YouTube ]
Whatever you think of military drones, we can all agree that they look cool.
[ Boeing ]
I appreciate the fact that iCub has eyelids, I really do, but sometimes, it ends up looking kinda sleepy in research videos.
[ EPFL LASA ]
Video shows autonomous flight of a lightweight aerial vehicle outdoors and indoors on the campus of Carnegie Mellon University. The vehicle is equipped with limited onboard sensing from a front-facing camera and a proximity sensor. The aerial autonomy is enabled by utilizing a 3D prior map built in Step 1.
[ CMU ]
The Stanford Space Robotics Facility allows researchers to test innovative guidance and navigation algorithms on a realistic frictionless, underactuated system.
[ Stanford ASL ]
In this video, Ian and CP discuss Misty’s many capabilities including robust locomotion, obstacle avoidance, 3D mapping/SLAM, face detection and recognition, sound localization, hardware extensibility, photo and video capture, and programmable personality. They also talk about some of the skills he’s built using these capabilities (and others) and how those skills can be expanded upon by you.
[ Misty Robotics ]
This week’s CMU RI Seminar comes from Aaron Parness at Caltech and NASA JPL, on “Robotic Grippers for Planetary Applications.”
The previous generation of NASA missions to the outer solar system discovered salt water oceans on Europa and Enceladus, each with more liquid water than Earth – compelling targets to look for extraterrestrial life. Closer to home, JAXA and NASA have imaged sky-light entrances to lava tube caves on the Moon more than 100 m in diameter and ESA has characterized the incredibly varied and complex terrain of Comet 67P. While JPL has successfully landed and operated four rovers on the surface of Mars using a 6-wheeled rocker-bogie architecture, future missions will require new mobility architectures for these extreme environments. Unfortunately, the highest value science targets often lie in the terrain that is hardest to access. This talk will explore robotic grippers that enable missions to these extreme terrains through their ability to grip a wide variety of surfaces (shapes, sizes, and geotechnical properties). To prepare for use in space where repair or replacement is not possible, we field-test these grippers and robots in analog extreme terrain on Earth. Many of these systems are enabled by advances in autonomy. The talk will present a rapid overview of my work and a detailed case study of an underactuated rock gripper for deflecting asteroids.
[ CMU ]
Rod Brooks gives some of the best robotics talks ever. He gave this one earlier this week at UC Berkeley, on “Steps Toward Super Intelligence and the Search for a New Path.”
[ UC Berkeley ]