#436044 Want a Really Hard Machine Learning ...
What’s the world’s hardest machine learning problem? Autonomous vehicles? Robots that can walk? Cancer detection?
Nope, says Julian Sanchez. It’s agriculture.
Sanchez might be a little biased. He is the director of precision agriculture for John Deere, and is in charge of adding intelligence to traditional farm vehicles. But he does have a little perspective, having spent time working on software for both medical devices and air traffic control systems.
I met with Sanchez and Alexey Rostapshov, head of digital innovation at John Deere Labs, at the organization’s San Francisco offices last month. Labs launched in 2017 to take advantage of the area’s tech expertise, both to apply machine learning to in-house agricultural problems and to work with partners to build technologies that play nicely with Deere’s big green machines. Deere’s neighbors in San Francisco’s tech-heavy South of Market are LinkedIn, Salesforce, and Planet Labs, which puts it in a good position for recruiting.
“We’ve literally had folks knock on the door and say, ‘What are you doing here?’” says Rostapshov, and some return to drop off resumes.
Here’s why Sanchez believes agriculture is such a big challenge for artificial intelligence.
“It’s not just about driving tractors around,” he says, although autonomous driving technologies are part of the mix. (John Deere is doing a lot of work with precision GPS to improve autonomous driving, for example, and allow tractors to plan their own routes around fields.)
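To make the route-planning point concrete, here is a minimal sketch in Python of the simplest version of the problem: covering a rectangular field in back-and-forth passes spaced by the implement's working width. It is purely illustrative, not Deere's planner, and the numbers are made up.

```python
# Illustrative only: a naive boustrophedon (back-and-forth) coverage path for a
# rectangular field. A real field planner must also handle irregular boundaries,
# obstacles, headlands, turning radii, and GPS corrections.

def coverage_path(field_width_m, field_length_m, implement_width_m):
    """Return (x, y) waypoints, in meters, for straight passes spaced one implement width apart."""
    waypoints = []
    x = implement_width_m / 2.0          # center the first pass on the implement
    heading_up = True
    while x <= field_width_m - implement_width_m / 2.0:
        start_y, end_y = (0.0, field_length_m) if heading_up else (field_length_m, 0.0)
        waypoints.append((x, start_y))   # start of this pass
        waypoints.append((x, end_y))     # end of this pass
        x += implement_width_m           # shift over one working width
        heading_up = not heading_up      # alternate direction to avoid dead-heading
    return waypoints

if __name__ == "__main__":
    # Hypothetical numbers: a 96 m x 400 m field worked with a 12 m implement.
    path = coverage_path(96.0, 400.0, 12.0)
    print(len(path) // 2, "passes,", len(path), "waypoints; first pass:", path[:2])
```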
But more complex than the driving problem, says Sanchez, are the classification problems.
Corn: A Classic Classification Problem
Photo: Tekla Perry
One key effort, Sanchez says, involves AI systems “that allow me to tell whether grain being harvested is good quality or low quality and to make automatic adjustment systems for the harvester.” The company is already selling an early version of this image-analysis technology. But the many differences between grain types, and between grains grown under different conditions, make this a tough task for machine learning.
“Take corn,” Sanchez says. “Let’s say we are building a deep learning algorithm to detect this corn. And we take lots of pictures of kernels to give it. Say we pick those kernels in central Illinois. But, one mile over, the farmer planted a slightly different hybrid which has slightly different coloration of yellow. Meanwhile, this other farm harvested three days later in a field five miles away; it’s the same hybrid, but it also looks different.
“It’s an overwhelming classification challenge, and that’s just for corn. But you are not only doing it for corn, you have to add 20 more varieties of grain to the mix; and some, like canola, are almost microscopic.”
Even the ground conditions vary dramatically—far more than road conditions, Sanchez points out.
“Let’s say we are building a deep learning algorithm to detect how much residue is left on the soil after a harvest, including stubble and some chaff. Let’s drive 2,000 acres of fields in the Midwest looking at residue. That’s great, but I guarantee that if you go drive those fields the next year, it will look significantly different.
“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen. And in agriculture, you always feel that there is a set of conditions that you haven’t yet classified.”
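To illustrate the interpolation-versus-extrapolation point, here is a toy sketch in Python with entirely synthetic numbers (it has nothing to do with Deere's actual models or data): a classifier that learns to grade kernels partly from coloration in one field loses accuracy when a neighboring field's hybrid shifts that coloration.

```python
# Toy illustration of distribution shift; all values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_field(n, hue_offset=0.0):
    """Fake per-kernel features [mean hue, blemish score]; label 1 = good grain.
    In the training field, good kernels happen to be slightly more yellow;
    hue_offset models a different hybrid's overall coloration."""
    good = np.column_stack([rng.normal(0.60 + hue_offset, 0.02, n),
                            rng.normal(0.20, 0.08, n)])
    bad = np.column_stack([rng.normal(0.52 + hue_offset, 0.02, n),
                           rng.normal(0.35, 0.08, n)])
    X = np.vstack([good, bad])
    y = np.array([1] * n + [0] * n)
    return X, y

X_train, y_train = make_field(500)                    # kernels from one field
X_same, y_same = make_field(500)                      # same conditions: interpolation
X_shift, y_shift = make_field(500, hue_offset=-0.08)  # different hybrid: extrapolation

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy, same conditions:   ", clf.score(X_same, y_same))
print("accuracy, shifted coloration:", clf.score(X_shift, y_shift))
```

Run it and the second number drops sharply: the model interpolates well within the conditions it was trained on, but the shifted coloration pushes it outside them.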
A Flood of Big Data
The scale of the data is also daunting, Rostapshov points out. “We are one of the largest users of cloud computing services in the world,” he says. “We are gathering 5 to 15 million measurements per second from 130,000 connected machines globally. We have over 150 million acres in our databases, using petabytes and petabytes [of storage]. We process more data than Twitter does.”
Much of this information is so-called dirty data: it doesn’t share a common format or structure, because it comes not only from a wide variety of John Deere machines but also from some 100 other companies that have access to the platform, and it includes weather information, aerial imagery, and soil analyses.
As a result, says Sanchez, Deere has had to make “tremendous investments in back-end data cleanup.”
“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen.”
—Julian Sanchez, John Deere
“We have gotten progressively more skilled at that problem,” he says. “We started simply by cleaning up our own data. You’d think it would be nice and neat, since it’s coming from our own machines, but there is a wide variety of different models and different years. Then we started geospatially tagging the agronomic data—the information about where you are applying herbicides and fertilizer and the like—coming in from our vehicles. When we started bringing in other data, from drones, say, we were already good at cleaning it up.”
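As a rough sketch of what that kind of cleanup pipeline does, the Python snippet below maps records from two imaginary sources into one geotagged schema. The source names, field names, and units are all hypothetical; it only illustrates the shape of the problem, not Deere's actual platform.

```python
# Hypothetical example: normalizing heterogeneous agronomic records into one
# geotagged schema. Source names, field names, and values are made up.
from datetime import datetime, timezone

GAL_PER_ACRE_TO_L_PER_HA = 9.354  # 3.78541 L/gal divided by 0.404686 ha/acre

def normalize(record, source):
    """Map a raw record from a given source into a common geotagged row."""
    if source == "sprayer_telemetry":  # older machine format: epoch time, imperial units
        return {
            "lat": record["gps"][0],
            "lon": record["gps"][1],
            "time": datetime.fromtimestamp(record["ts"], tz=timezone.utc),
            "kind": "herbicide_rate_l_per_ha",
            "value": record["rate_gal_per_acre"] * GAL_PER_ACRE_TO_L_PER_HA,
        }
    if source == "drone_imagery":      # partner data: ISO timestamps, metric
        return {
            "lat": record["latitude"],
            "lon": record["longitude"],
            "time": datetime.fromisoformat(record["captured_at"]),
            "kind": "ndvi_mean",
            "value": record["ndvi_mean"],
        }
    raise ValueError(f"unknown source: {source}")

rows = [
    normalize({"gps": (40.11, -88.24), "ts": 1571000000, "rate_gal_per_acre": 15.0},
              "sprayer_telemetry"),
    normalize({"latitude": 40.12, "longitude": -88.25,
               "captured_at": "2019-10-14T16:30:00+00:00", "ndvi_mean": 0.71},
              "drone_imagery"),
]
for row in rows:
    print(row)
```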
John Deere’s Hiring Pitch
Hard problems can be a good thing to have for a company looking to hire machine learning engineers.
“Our opening line to potential recruits,” Sanchez says, “is ‘This stuff matters.’ Then, if we get a chance to talk to them more, we follow up with ‘Not only does this stuff matter, but the problems are really hard and interesting.’ When we explain the variability in farming and how we have to apply all the latest tools to these problems, we get their attention.”
Software engineers “know that feeding a growing population is a massive problem and are excited about the prospect of making a difference,” Rostapshov says.
Only 20 engineers work in the San Francisco labs right now, and that’s on a busy day—some of the researchers spend part of their time at Blue River Technology, a startup based in Sunnyvale that was acquired by Deere in 2017. About half of the researchers are focusing on AI. The Lab is in the process of doubling its office space (no word on staffing plans for that expansion yet).
“We are one of the largest users of cloud computing services in the world.”
—Alexey Rostapshov, John Deere Labs
Company-wide, Deere has thousands of software engineers, with many using AI and machine learning tools in their work, and about the same number of mechanical and electrical engineers, Sanchez reports. “If you look at our hiring 10 years ago,” he says, “it was heavily weighted to mechanical engineers. But if you look at those numbers now, it is by a large majority [engineers working] in the software space. We still need mechanical engineers—we do build green machines—but if you go by our footprint of tech talent, it is pretty safe to call John Deere a software company. And if you follow the key conversations that are happening in the company right now, 95 percent of them are software-related.”
For now, these software engineers are focused on developing technologies that allow farmers to “do more with less,” Sanchez says. That means getting more and better crops from less fuel, less seed, less fertilizer, less pesticide, and fewer workers, while putting together building blocks that, he says, could eventually lead to fully autonomous farm vehicles. The data Deere collects today, for the most part, stays in silos (the virtual kind), with AI algorithms analyzing specific sets of data to provide guidance to individual farmers. At some point, however, with tools to anonymize data and buy-in from farmers, aggregating that data could provide some powerful insights.
“We are not asking farmers for that yet,” Sanchez says. “We are not doing aggregation to look for patterns. We are focused on offering technology that allows an individual farmer to use less, on positioning ourselves to be in a neutral spot. We are not about selling you more seed or more fertilizer. So we are building up a good trust level. In the long term, we can have conversations about doing more with deep learning.”
#436042 Video Friday: Caltech’s Drone With ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.
Caltech has been making progress on LEONARDO (LEg ON Aerial Robotic DrOne), their leggy, thruster-powered humanoid-thing. It can now balance and walk, which is quite impressive to see.
We’ll circle back again when they’ve got it jumping and floating around.
[ Caltech ]
Turn the subtitles on to learn how robots became experts at slicing bubbly, melty, delicious cheese.
These robots learned how to do the traditional Swiss raclette from demonstration. The Robot Learning & Interaction group at the Idiap Research Institute has developed an imitation learning technique allowing the robot to acquire new skills by considering position and force information, with an automatic adaptation to new situations. The range of applications is wide, including industrial robots, service robots, and assistive robots.
[ Idiap ]
Thanks Sylvain!
Some amazing news this week from Skydio, with the announcement of their better-in-every-single-way Skydio 2 autonomous drone. Read our full article for details, but here’s a getting-started video that gives you an overview of what the drone can do.
The first batch sold out in 36 hours, but you can put down a $100 deposit to reserve the $999 drone for 2020 delivery.
[ Skydio ]
UBTECH is introducing a couple new robot kits for the holidays: ChampBot and FireBot.
$130 each, available on October 20.
[ Ubtech ]
NASA’s InSight lander on Mars is trying to use its robotic arm to get the mission’s heat flow probe, or mole, digging again. InSight team engineer Ashitey Trebi-Ollennu, based at NASA’s Jet Propulsion Laboratory in Pasadena, California, explains what has been attempted and the game plan for the coming weeks. The next tactic they’ll try will be “pinning” the mole against the hole it’s in.
[ NASA ]
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions.
[ Ryo Suzuki ]
Robot abuse!
Vision 60 legged robot managing unstructured terrain without vision or force sensors in its legs, using only high-transparency actuators and 2 kHz algorithmic stability control… 4 limbs and 12 motors with only a velocity command.
[ Ghost Robotics ]
We asked real people to bring in real products they needed picked for their application. In MINUTES, we assembled the right tool.
This is a cool idea, but for a real challenge they should try it outside a supermarket. Or a pet store.
[ Soft Robotics ]
Good water quality is important to humans and to nature. In a country with as much water as the Netherlands has, ensuring water quality is a very labour-intensive undertaking. To address this issue, researchers from TU Delft have developed a ‘pelican drone’: a drone capable of taking water samples quickly, in combination with a measuring instrument that immediately analyses the water quality. The drone was tested this week at the new Marker Wadden nature area ‘Living Lab’.
[ MAVLab ]
In an international collaboration led by scientists in Switzerland, three amputees merge with their bionic prosthetic legs as they climb over various obstacles without having to look. The amputees report using and feeling their bionic leg as part of their own body, thanks to sensory feedback from the prosthetic leg that is delivered to nerves in the leg’s stump.
[ EPFL ]
It’s a little hard to see, but this is one way of testing out asteroid imaging spacecraft without actually going into space: a fake asteroid and a 2D microgravity simulator.
[ Caltech ]
Drones can help filmmakers do the kinds of shots that would be otherwise impossible.
[ DJI ]
Two long interviews this week from Lex Fridman’s AI Podcast, and both of them are worth watching: Gary Marcus, and Peter Norvig.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Tucker Hermans at the University of Utah, on “Improving Multi-fingered Robot Manipulation by Unifying Learning and Planning.”
Multi-fingered hands offer autonomous robots increased dexterity, versatility, and stability over simple two-fingered grippers. Naturally, this increased ability comes with increased complexity in planning and executing manipulation actions. As such, I propose combining model-based planning with learned components to improve over purely data-driven or purely model-based approaches to manipulation. This talk examines multi-fingered autonomous manipulation when the robot has only partial knowledge of the object of interest. I will first present results on planning multi-fingered grasps for novel objects using a learned neural network. I will then present our approach to planning in-hand manipulation tasks when dynamic properties of objects are not known. I will conclude with a discussion of our ongoing and future research to further unify these two approaches.
[ CMU RI ]
#436005 NASA Hiring Engineers to Develop “Next ...
It’s been nearly six years since NASA unveiled Valkyrie, a state-of-the-art full-size humanoid robot. After the DARPA Robotics Challenge, NASA has continued to work with Valkyrie at Johnson Space Center, and has also provided Valkyrie robots to several different universities. Although it’s not a new platform anymore (six years is a long time in robotics), Valkyrie is still very capable, with plenty of potential for robotics research.
With that in mind, we were caught by surprise when, over the last several months, Jacobs, a Dallas-based engineering company that appears to provide a wide variety of technical services to anyone who wants them, posted several job openings for roboticists in the Houston, Texas, area who are interested in working with NASA on “the next generation of humanoid robot.”
Here are the relevant bullet points from one of the job descriptions (which you can view at this link):
Work directly with NASA Johnson Space Center in designing the next generation of humanoid robot.
Join the Valkyrie humanoid robot team in NASA’s Robotic Systems Technology Branch.
Build on the success of the existing Valkyrie and Robonaut 2 humanoid robots and advance NASA’s ability to project a remote human presence and dexterous manipulation capability into challenging, dangerous, and distant environments both in space and here on earth.
The question is, why is NASA developing its own humanoid robot (again) when it could instead save a whole bunch of time and money by using a platform that already exists, whether it’s Atlas, Digit, Valkyrie itself, or one of the small handful of other humanoids that are more or less available? The only answer that I can come up with is that no existing platforms meet NASA’s requirements, whatever those may be. And if that’s the case, what kind of requirements are we talking about? The obvious one would be the ability to work in the kinds of environments that NASA specializes in—space, the Moon, and Mars.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working on the surface of Mars.
NASA’s existing humanoid robots, including Robonaut 2 and Valkyrie, were designed to operate on Earth. Robonaut 2 ended up going to space anyway (it’s recently returned to Earth for repairs), but its hardware was certainly never intended to function outside of the International Space Station. Working in a vacuum involves designing for a much more rigorous set of environmental challenges, and things get even worse on the Moon or on Mars, where highly abrasive dust gets everywhere.
We know that it’s possible to design robots for long-term operation in these kinds of environments because we’ve done it before. But if you’re not actually going to send your robot off-world, there’s very little reason to bother making sure that it can operate through (say) 300° Celsius temperature swings like you’d find on the Moon. In the past, NASA has quite sensibly focused on designing robots that can be used as platforms for the development of software and techniques that could one day be applied to off-world operations, without over-engineering those specific robots to operate in places that they would almost certainly never go. As NASA increasingly focuses on a return to the Moon, though, maybe it’s time to start thinking about a humanoid robot that could actually do useful stuff on the lunar surface.
Image: NASA
Artist’s concept of the Gateway moon-orbiting space station (seen on the right) with an Orion crew vehicle approaching.
The other possibility that I can think of, and perhaps the more likely one, is that this next humanoid robot will be a direct successor to Robonaut 2, intended for NASA’s Gateway space station orbiting the Moon. Some of the robotics folks at NASA that we’ve talked to recently have emphasized how important robotics will be for Gateway:
Trey Smith, NASA Ames: Everybody at NASA is really excited about work on the Gateway space station that would be in near lunar space. We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations. And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.
If you have an un-crewed cargo vehicle that shows up stuffed to the rafters with cargo bags and it docks with the Gateway when there’s no crew there, it would be very useful to have intra-vehicular robots that can pull all those cargo bags out, unpack them, stow all the items, and then even allow the cargo vehicle to detach before the crew show up so that the crew don’t have to waste their time with that.
Julia Badger, NASA JSC: One of the systems on board Gateway is going to be intravehicular robots. They’re not going to necessarily look like Robonaut, but they’ll have some of the same functionality as Robonaut—being mobile, being able to carry payloads from one part of the module to another, doing some dexterous manipulation tasks, inspecting behind panels, those sorts of things.
Image: NASA
Artist’s concept of NASA’s Valkyrie humanoid robot working inside a spacecraft.
Since Gateway won’t be crewed by humans all of the time, it’ll be important to have a permanent robotic presence to keep things running while nobody is home. It also saves on resources, since robots aren’t always eating food, drinking water, consuming oxygen, demanding that the temperature stay just so, and producing a variety of disgusting kinds of waste. Obviously, the robots won’t be as capable as humans, but if they can manage to do even basic ongoing maintenance tasks (most likely through at least partial teleoperation), that would be very useful.
Photo: Evan Ackerman/IEEE Spectrum
NASA’s Robonaut team plans to perform a variety of mobility and motion-planning experiments using the robot’s new legs, which can grab handrails on the International Space Station.
As for whether robots designed for Gateway would really fall into the “humanoid” category, it’s worth considering that Gateway is designed for humans, implying that an effective robotic system on Gateway would need to be able to interact with the station in similar ways to how a human astronaut would. So, you’d expect to see arms with end-effectors that can grip things as well as push buttons, and some kind of mobility system—the legged version of Robonaut 2 seems like a likely template, but redesigned from the ground up to work in space, incorporating all the advances in robotics hardware and computing that have taken place over the last decade.
We’ve been pestering NASA about this for a little bit now, and they’re not ready to comment on this project, or even to confirm it. And again, everything in this article (besides the job post, which you should totally check out and consider applying for) is just speculation on our part, and we could be wrong about absolutely all of it. As soon as we hear more, we’ll definitely let you know.
#435824 A Q&A with Cruise’s head of AI, ...
In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.
Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers who are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.
Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, due to the appeal of the challenge of autonomous vehicles in drawing in AI experts from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.
Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.
IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?
Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech, he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.
“In the early days, Facebook wasn’t that open to PhDs, it actually had a negative sentiment about researchers, and then Facebook shifted”
I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.
In the early days, Facebook wasn’t that open to PhDs, or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift, they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.
There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.
IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?
Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.
Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.
I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge of these problems is 1000 times more than other problems. They aren’t as well understood yet, and they require far deeper technology. And also the quality at which they are expected to operate is off the roof.
The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.
Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.
IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?
Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.
And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”
IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?
Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.
“You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move…now would be a great time for AI experts working on other problems to shift their attention to autonomous vehicles.”
Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.
It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.
I can see how I can spend a lifetime career trying to solve this problem.
IEEE Spectrum: Why are you doing most of your development in San Francisco?
Mehanna: I think the best talent in the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda, Softbank, and T. Rowe Price]. That [funding] is another reason Cruise is ahead of many others, because this problem requires deep resources.
“If you can do an autonomous vehicle in San Francisco you can do it almost anywhere.”
[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.
A version of this post appears in the September 2019 print magazine as “AI Engineers: The Autonomous-Vehicle Industry Wants You.”
#435784 Amazon Uses 800 Robots to Run This ...
At Amazon’s re:MARS conference in Las Vegas today, who else but Amazon is introducing two new robots designed to make its fulfillment centers even more fulfilling. Xanthus (named after a mythological horse that could very briefly talk but let’s not read too much into that) is a completely redesigned drive unit, one of the robotic mobile bases that carries piles of stuff around for humans to pick from. It has a thinner profile, a third of the parts, costs half as much, and can wear different modules on top to perform a much wider variety of tasks than its predecessor.
Pegasus (named after a mythological horse that could fly but let’s not read too much into that either) is also a mobile robot, but much smaller than Xanthus, designed to help the company quickly and accurately sort individual packages. For Amazon, it’s a completely new large-scale robotic system involving tightly coordinated fleets of robots tossing boxes down chutes, and it’s just as fun to watch as it sounds.
Amazon has 800 Pegasus units already deployed at a sorting facility in the United States, adding to its newly updated total of 200,000 robotic drive units worldwide.
If the Pegasus system looks familiar, it’s because other warehouse automation companies have had something that’s at least superficially very similar up and running for years.
Photo: Amazon
Pegasus is one of Amazon’s new warehouse robots, equipped with a conveyor belt on top and used in the company’s sorting facilities.
But the most interesting announcement that Amazon made, kind of low key and right at the end of their re:MARS talk, is that they’re working on ways of making some of their mobile robots actually collaborative, leveraging some of the technology that they acquired from Boulder, Colo.-based warehouse robotics startup Canvas Technology earlier this year:
“With our recent acquisition of Canvas, we expect to be able to combine this drive platform with AI and autonomous mobility capabilities, and for the first time, allow our robots to move outside of our robotic drive fields, and interact collaboratively with our associates to do a number of mobility tasks,” said Brad Porter, VP of robotics at Amazon.
At the moment, Amazon’s robots are physically separated from humans except for one highly structured station where the human only interacts with the robot in one or two very specific ways. We were told a few months ago that Amazon would like to have mobile robots that are able to move things through the areas of fulfillment centers that have people in them, but that they’re (quite rightly) worried about the safety aspects of having robots and humans work around each other. Other companies are already doing this on a smaller scale, and it means developing a reliable safety system that can handle randomly moving humans, environmental changes, and all kinds of other stuff. It’s much more difficult than having a nice, clean, roped-off area to work in where a wayward human would be an exception rather than just another part of the job.
Photo: Canvas Technology
A robot created by Canvas Technology, a Boulder, Colo.-based warehouse robotics startup acquired by Amazon earlier this year.
It now seems like Canvas has provided the secret sauce that Amazon needed to start implementing this level of autonomy. As for what it’s going to look like, our best guess is that Amazon is going to have to do a little bit more than slap some extra sensors onto Xanthus or Pegasus, if for no other reason than the robots will almost certainly need more ground clearance to let them operate away from the reliably flat floors that they’re accustomed to. We’re expecting to see them performing many of the tasks that companies like Fetch Robotics and OTTO Motors are doing already—moving everything from small boxes to large pallets to keep humans from having to waste time walking.
Of course, this all feeds back into what drives Amazon more than anything else: efficiency. And for better or worse, humans are not uniquely good at moving things from place to place, so it’s no surprise that Amazon wants to automate that, too. The good news is that, at least for now, Amazon still needs humans to babysit all those robots.
[ Amazon ]