Tag Archives: 2017

#436220 How Boston Dynamics Is Redefining Robot ...

Gif: Bob O’Connor/IEEE Spectrum

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Spot

Photo: Bob O’Connor

HEIGHT: 84 cm

WEIGHT: 25 kg

SPEED: 5.76 km/h

SENSING: Stereo cameras, inertial measurement unit, position/force sensors

ACTUATION: 12 DC motors

POWER: Battery (90 minutes per charge)

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Atlas

Photo: Bob O’Connor

HEIGHT: 150 cm

WEIGHT: 80 kg

SPEED: 5.4 km/h

SENSING: Lidar and stereo vision

ACTUATION: 28 hydraulic actuators

POWER: Battery

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that don’t seem possible for such a large humanoid. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

In the Spotlight

Photo: Bob O’Connor

Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

Super Senses

Photo: Bob O’Connor

Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.
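To give a rough sense of how that works, here is a deliberately tiny predict/correct loop in Python. It is not Boston Dynamics’ code, and because the landmark’s position is given, it is really just the localization half of SLAM (the full problem estimates the map too); the blending gain and all numbers are invented for illustration:

```python
# Toy predict/correct cycle, the idea underneath SLAM-style navigation.
# NOT Boston Dynamics' code: the landmark position is known here, the
# blending gain is fixed, and all numbers are invented.
import numpy as np

def predict(pose, v, omega, dt):
    """Dead-reckon from odometry and the IMU; on its own, this drifts."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + omega * dt])

def observe(pose, landmark):
    """Range and bearing from a pose to a landmark, as a camera might report."""
    dx, dy = landmark - pose[:2]
    return np.hypot(dx, dy), np.arctan2(dy, dx) - pose[2]

def correct(pose, landmark, measurement, gain=0.3):
    """Blend in the position implied by the landmark sighting."""
    r, b = measurement
    implied = landmark - r * np.array([np.cos(pose[2] + b),
                                       np.sin(pose[2] + b)])
    pose = pose.copy()
    pose[:2] += gain * (implied - pose[:2])
    return pose

rng = np.random.default_rng(1)
true_pose = np.array([0.0, 0.0, 0.0])   # x (m), y (m), heading (rad)
est_pose = true_pose.copy()
landmark = np.array([2.0, 1.0])          # a feature the cameras can track
for _ in range(40):
    true_pose = predict(true_pose, v=1.0, omega=0.1, dt=0.05)
    # The estimate dead-reckons with noisy odometry, then gets corrected:
    est_pose = predict(est_pose, v=1.0 + rng.normal(0, 0.2), omega=0.1, dt=0.05)
    est_pose = correct(est_pose, landmark, observe(true_pose, landmark))
print("true:", true_pose, "estimate:", est_pose)
```

Without the correct step, the noisy odometry makes the estimate wander; with it, the landmark sightings keep pulling the estimate back toward the true pose.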

Stepping Up

Photo: Bob O’Connor

In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

Funky Feet

Gif: Bob O’Connor/IEEE Spectrum

Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”
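To illustrate the timing behind that trot, here is a minimal, generic sketch of gait phasing, in which diagonal leg pairs swing together, half a cycle apart. It is purely illustrative; Spot’s real controller balances dynamically rather than playing back fixed phases like this:

```python
# Minimal sketch of trot timing: diagonal leg pairs move together, half a
# gait cycle apart. Illustrative only; the cycle time and foot clearance
# are invented numbers, not Spot specifications.
import math

PHASE_OFFSET = {"front_left": 0.0, "rear_right": 0.0,    # diagonal pair A
                "front_right": 0.5, "rear_left": 0.5}    # diagonal pair B

def leg_phase(leg: str, t: float, cycle_s: float = 0.5) -> float:
    """Where this leg is in its gait cycle, as a fraction in [0, 1)."""
    return (t / cycle_s + PHASE_OFFSET[leg]) % 1.0

def foot_height(phase: float, clearance_m: float = 0.05) -> float:
    """Lift the foot during the first half of the cycle (swing), then stance."""
    return clearance_m * math.sin(2 * math.pi * phase) if phase < 0.5 else 0.0

for leg in PHASE_OFFSET:
    p = leg_phase(leg, t=0.125)
    print(f"{leg:12s} phase={p:.2f} height={foot_height(p):.3f} m")
```

Shifting those phase offsets is what distinguishes a trot from a pace or a crawl.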

Robot Blood

Photo: Bob O’Connor

Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.
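The underlying physics is simple even if the engineering isn’t: a cylinder’s force is the fluid pressure times the piston area, F = P × A. With illustrative numbers (assumptions, not published Atlas specifications), a 21-megapascal supply (about 3,000 psi) acting on a 5-square-centimeter piston produces 21 × 10⁶ Pa × 5 × 10⁻⁴ m² ≈ 10.5 kilonewtons, roughly a metric ton of force from a cylinder a few centimeters across.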

Next Generation

Gif: Bob O’Connor/IEEE Spectrum

The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

Walk This Way

Photo: Bob O’Connor

To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

Biologically Inspired

Photos: Bob O’Connor

Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.
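One standard way to capture that model-based approach (not necessarily Atlas’s exact formulation, which Boston Dynamics has not published) is the linear inverted pendulum model used throughout humanoid balance control: ẍ = (g / z_c)(x − p), where x is the horizontal position of the center of mass, z_c its height, g gravitational acceleration, and p the center of pressure under the feet. The equation says the center of mass accelerates away from its support point, so a controller must keep moving p, by shifting weight or taking a step, to stay upright; swinging the arms, as Atlas does mid-flip, manages angular momentum in extensions of this basic model.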

This article appears in the December 2019 print issue as “By Leaps and Bounds.”

Posted in Human Robots

#436178 Within 10 Years, We’ll Travel by ...

What’s faster than autonomous vehicles and flying cars?

Try Hyperloop, rocket travel, and robotic avatars. Hyperloop is currently working towards 670 mph (1,080 km/h) passenger pods, capable of zipping us from Los Angeles to downtown Las Vegas in under 30 minutes. Rocket travel (think SpaceX’s Starship) promises to deliver you almost anywhere on the planet in under an hour. Think New York to Shanghai in 39 minutes.

But wait, it gets even better…

As 5G connectivity, hyper-realistic virtual reality, and next-gen robotics continue their exponential progress, the emergence of “robotic avatars” will all but nullify the concept of distance, replacing human travel with immediate remote telepresence.

Let’s dive in.

Hyperloop One: LA to SF in 35 Minutes
Did you know that Hyperloop was the brainchild of Elon Musk? Just one in a series of transportation innovations from a man determined to leave his mark on the industry.

In 2013, in an attempt to shorten the long commute between Los Angeles and San Francisco, the California state legislature proposed a $68 billion budget allocation for what appeared to be the slowest and most expensive bullet train in history.

Musk was outraged. The cost was too high, the train too sluggish. Teaming up with a group of engineers from Tesla and SpaceX, he published a 58-page concept paper for “The Hyperloop,” a high-speed transportation network that used magnetic levitation to propel passenger pods down vacuum tubes at speeds of up to 670 mph. If successful, it would zip you across California in 35 minutes—just enough time to watch your favorite sitcom.
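That 35-minute figure is easy to sanity-check with nothing more than distance over speed. The corridor length below is an approximation, and a real trip would add acceleration and braking time, so treat the result as a lower bound:

```python
# Back-of-envelope check of the LA-to-SF claim: time = distance / speed.
# The ~380-mile corridor length is a rough estimate, not from the paper.
distance_miles = 380
top_speed_mph = 670
minutes = distance_miles / top_speed_mph * 60
print(f"LA to SF at top speed: ~{minutes:.0f} minutes")  # ~34 minutes
```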

In January 2013, venture capitalist Shervin Pishevar, with Musk’s blessing, started Hyperloop One with me, Jim Messina (former White House Deputy Chief of Staff for President Obama), and tech entrepreneurs Joe Lonsdale and David Sacks as founding board members. A couple of years after that, the Virgin Group invested in this idea, Richard Branson was elected chairman, and Virgin Hyperloop One was born.

“The Hyperloop exists,” says Josh Giegel, co-founder and chief technology officer of Hyperloop One, “because of the rapid acceleration of power electronics, computational modeling, material sciences, and 3D printing.”

Thanks to these convergences, there are now ten major Hyperloop One projects—in various stages of development—spread across the globe. Chicago to DC in 35 minutes. Pune to Mumbai in 25 minutes. According to Giegel, “Hyperloop is targeting certification in 2023. By 2025, the company plans to have multiple projects under construction and running initial passenger testing.”

So think about this timetable: Autonomous car rollouts by 2020. Hyperloop certification and aerial ridesharing by 2023. By 2025—going on vacation might have a totally different meaning. Going to work most definitely will.

But what’s faster than Hyperloop?

Rocket Travel
As if autonomous vehicles, flying cars, and Hyperloop weren’t enough, in September of 2017, speaking at the International Astronautical Congress in Adelaide, Australia, Musk promised that for the price of an economy airline ticket, his rockets will fly you “anywhere on Earth in under an hour.”

Musk wants to use SpaceX’s megarocket, Starship, which was designed to take humans to Mars, for terrestrial passenger delivery. The Starship travels at 17,500 mph. It’s an order of magnitude faster than the supersonic jet Concorde.
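The arithmetic supports that comparison: Concorde cruised at roughly 1,350 mph (about Mach 2; a public figure, not from this article), and 17,500 ÷ 1,350 ≈ 13, so Starship would be on the order of ten times faster.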

Think about what this actually means: New York to Shanghai in 39 minutes. London to Dubai in 29 minutes. Hong Kong to Singapore in 22 minutes.

So how real is the Starship?

“We could probably demonstrate this [technology] in three years,” Musk explained, “but it’s going to take a while to get the safety right. It’s a high bar. Aviation is incredibly safe. You’re safer on an airplane than you are at home.”

That demonstration is proceeding as planned. In September 2017, Musk announced his intentions to retire his current rocket fleet, both the Falcon 9 and Falcon Heavy, and replace them with the Starships in the 2020s.

Less than a year later, LA mayor Eric Garcetti tweeted that SpaceX was planning to break ground on an 18-acre rocket production facility near the port of Los Angeles. And April of this year marked an even bigger milestone: the very first test flights of the rocket.

Thus, sometime in the next decade or so, “off to Europe for lunch” may become a standard part of our lexicon.

Avatars
Wait, wait, there’s one more thing.

While the technologies we’ve discussed will decimate the traditional transportation industry, there’s something on the horizon that will disrupt travel itself. What if, to get from A to B, you didn’t have to move your body? What if you could quote Captain Kirk and just say “Beam me up, Scotty”?

Well, shy of the Star Trek transporter, there’s the world of avatars.

An avatar is a second self, typically in one of two forms. The digital version has been around for a couple of decades. It emerged from the video game industry and was popularized by virtual world sites like Second Life and books-turned-blockbusters like Ready Player One.

A VR headset teleports your eyes and ears to another location, while a set of haptic sensors shifts your sense of touch. Suddenly, you’re inside an avatar inside a virtual world. As you move in the real world, your avatar moves in the virtual.

Use this technology to give a lecture and you can do it from the comfort of your living room, skipping the trip to the airport, the cross-country flight, and the ride to the conference center.

Robots are the second form of avatars. Imagine a humanoid robot that you can occupy at will. Maybe, in a city far from home, you’ve rented the bot by the minute—via a different kind of ridesharing company—or maybe you have spare robot avatars located around the country.

Either way, put on VR goggles and a haptic suit, and you can teleport your senses into that robot. This allows you to walk around, shake hands, and take action—all without leaving your home.
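In software terms, an avatar of this kind is a low-latency loop: stream the operator’s pose out, stream the robot’s senses back. Here is a hypothetical skeleton; every class and method name is invented, standing in for a real VR SDK and robot API:

```python
# Hypothetical telepresence loop. The Headset and RemoteRobot classes are
# stand-ins invented for illustration, not any real SDK or robot API.
import time
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    z: float = 1.7        # meters; roughly eye height
    yaw: float = 0.0      # radians

class Headset:
    """Stand-in for a VR headset SDK."""
    def read_pose(self) -> Pose:
        return Pose()
    def show(self, frame: bytes) -> None:
        pass

class RemoteRobot:
    """Stand-in for a robot avatar's network API."""
    def command_head(self, pose: Pose) -> None:
        pass
    def camera_frame(self) -> bytes:
        return b""

def teleoperate(headset: Headset, robot: RemoteRobot, hz: float = 60.0) -> None:
    """Mirror the operator's head motion onto the robot, stream video back."""
    period = 1.0 / hz
    while True:
        robot.command_head(headset.read_pose())  # operator -> robot
        headset.show(robot.camera_frame())       # robot -> operator
        time.sleep(period)  # real systems fight for far lower latency

# teleoperate(Headset(), RemoteRobot())  # runs until interrupted
```

The engineering battle in real systems is almost entirely against latency: the longer the round trip, the worse the sense of presence.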

And like the rest of the tech we’ve been talking about, even this future isn’t far away.

In 2018, entrepreneur Dr. Harry Kloor recommended to All Nippon Airways (ANA), Japan’s largest airline, the design of an Avatar XPRIZE. ANA then funded this vision to the tune of $10 million to speed the development of robotic avatars. Why? Because ANA knows this is one of the technologies likely to disrupt their own airline industry, and they want to be ready.

ANA recently announced its “newme” robot that humans can use to virtually explore new places. The colorful robots have Roomba-like wheeled bases and cameras mounted around eye-level, which capture surroundings viewable through VR headsets.

If the robot was stationed in your parents’ home, you could cruise around the rooms and chat with your family at any time of day. After revealing the technology at Tokyo’s Combined Exhibition of Advanced Technologies in October, ANA plans to deploy 1,000 newme robots by 2020.

With virtual avatars like newme, geography, distance, and cost will no longer limit our travel choices. From attractions like the Eiffel Tower or the pyramids of Egypt to unreachable destinations like the moon or deep sea, we will be able to transcend our own physical limits, explore the world and outer space, and access nearly any experience imaginable.

Final Thoughts
Individual car ownership has enjoyed over a century of ascendancy and dominance.

The first real threat it faced—today’s ride-sharing model—only showed up in the last decade. But that ridesharing model won’t even get ten years to dominate. Already, it’s on the brink of autonomous car displacement, which is on the brink of flying car disruption, which is on the brink of Hyperloop and rockets-to-anywhere decimation. Plus, avatars.

The most important part: All of this change will happen over the next ten years. Welcome to a future of human presence where the only constant is rapid change.

Note: This article—an excerpt from my next book The Future Is Faster Than You Think, co-authored with Steven Kotler, to be released January 28th, 2020—originally appeared on my tech blog at diamandis.com. Read the original article here.


Image Credit: Virgin Hyperloop One

Posted in Human Robots

#436165 Video Friday: DJI’s Mavic Mini Is ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

DJI’s new Mavic Mini looks like a pretty great drone for US $400 ($500 for a combo with more accessories): It’s tiny, flies for 30 minutes, and will do what you need as far as pictures and video (although not a whole lot more).

DJI seems to have put a bunch of effort into making the drone 249 grams, 1 gram under what’s required for FAA registration. That means you save $5 and a few minutes of your time, but that does not mean you don’t have to follow the FAA’s rules and regulations governing drone use.

[ DJI ]

Don’t panic, but Clearpath and HEBI Robotics have armed the Jackal:

After locking eyes across a crowded room at ICRA 2019, Clearpath Robotics and HEBI Robotics basked in that warm and fuzzy feeling that comes with starting a new and exciting relationship. Over a conference hall coffee, they learned that the two companies have many overlapping interests. The most compelling was the realization that customers across a variety of industries are hunting for an elusive true love of their own – a robust but compact robotic platform combined with a long reach manipulator for remote inspection tasks.

After ICRA concluded, Arron Griffiths, Application Engineer at Clearpath, and Matthew Tesch, Software Engineer at HEBI, kept in touch and decided there had been enough magic in the air to warrant further exploration. A couple of months later, Matthew arrived at Clearpath to formally introduce HEBI’s X-Series Arm to Clearpath’s Jackal UGV. It was love.

[ Clearpath ]

Thanks Dave!

I’m really not a fan of the people-carrying drones, but heavy lift cargo drones seem like a more okay idea.

Volocopter, the pioneer in Urban Air Mobility, presented the demonstrator of its VoloDrone. This marks Volocopter’s expansion into the logistics, agriculture, infrastructure, and public services industries. The VoloDrone is an unmanned, fully electric, heavy-lift utility drone capable of carrying a payload of 200 kg (440 lbs) up to 40 km (25 miles). With a standardized payload attachment, VoloDrone can serve a great variety of purposes, from transporting boxes, to liquids, to equipment and beyond. It can be remotely piloted or flown in automated mode on pre-set routes.

[ Volocopter ]

JAY is a mobile service robot that projects a display on the floor and plays sound with its speaker. By playing sounds and videos, it provides visual and audio entertainment in various places such as exhibition halls, airports, hotels, department stores and more.

[ Rainbow Robotics ]

The DARPA Subterranean Challenge Virtual Tunnel Circuit concluded this week—it was the same idea as the physical challenge that took place in August, just with a lot less IRL dirt.

The awards ceremony and team presentations are in this next video, and we’ll have more on this once we get back from IROS.

[ DARPA SubT ]

NASA is sending a mobile robot to the south pole of the Moon to get a close-up view of the location and concentration of water ice in the region and, for the first time ever, actually sample the water ice at the same pole where the first woman and next man will land in 2024 under the Artemis program.

About the size of a golf cart, the Volatiles Investigating Polar Exploration Rover, or VIPER, will roam several miles, using its four science instruments — including a 1-meter drill — to sample various soil environments. Planned for delivery in December 2022, VIPER will collect about 100 days of data that will be used to inform development of the first global water resource maps of the Moon.

[ NASA ]

Happy Halloween from HEBI Robotics!

[ HEBI ]

Happy Halloween from Soft Robotics!

[ Soft Robotics ]

Halloween must be really, really confusing for autonomous cars.

[ Waymo ]

Once a year at Halloween, hardworking JPL engineers put their skills to the test in a highly competitive pumpkin-carving contest. The result: a pumpkin gently landed on the Moon, its retrorockets smoldering, while across the room a Nemo-inspired pumpkin explored the subsurface ocean of Jupiter’s moon Europa. Suffice it to say that when the scientists and engineers at NASA’s Jet Propulsion Laboratory compete in a pumpkin-carving contest, the solar system’s the limit. Take a look at some of the masterpieces from 2019.

Now in its ninth year, the contest gives teams only one hour to carve and decorate their pumpkin, though they can prepare non-pumpkin materials – like backgrounds, sound effects and motorized parts – ahead of time.

[ JPL ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

[ BPL ]

Misty II is now available to anyone who wants one, and she’s on sale for a mere $2900.

[ Misty ]

We leveraged LiDAR-based SLAM, in conjunction with our specialized relative localization sensor UVDAR, to perform a decentralized, communication-free swarm flight without the units knowing their absolute locations. The swarming and obstacle avoidance control is based on a modified Boids-like algorithm, while the whole swarm is controlled by directing a selected leader unit.

[ MRS ]
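For the curious, the textbook Boids update the researchers say they modified combines three steering terms per agent: cohesion (drift toward neighbors’ center), separation (avoid crowding), and alignment (match neighbors’ velocity). A generic sketch, not the MRS group’s actual controller:

```python
# Minimal Boids update: cohesion + separation + alignment.
# Generic textbook version, not the modified algorithm in the video;
# all weights and the neighbor radius are invented.
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=3.0,
               w_cohesion=0.01, w_separation=0.05, w_alignment=0.05):
    """pos, vel: (N, 2) arrays. Returns updated copies."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < radius)   # neighbors of agent i
        if not mask.any():
            continue
        cohesion = offsets[mask].mean(axis=0)                  # toward center
        separation = -(offsets[mask] / dists[mask, None]**2).sum(axis=0)
        alignment = vel[mask].mean(axis=0) - vel[i]            # match velocity
        new_vel[i] += (w_cohesion * cohesion +
                       w_separation * separation +
                       w_alignment * alignment)
    return pos + new_vel * dt, new_vel

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 10, (8, 2)), rng.normal(0, 1, (8, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
print(pos.round(2))
```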

The MallARD robot is an autonomous surface vehicle (ASV), designed for the monitoring and inspection of wet storage facilities, for example spent fuel pools or wet silos. The MallARD is holonomic, uses a LiDAR for localisation, and features a robust trajectory tracking controller.

The University of Manchester’s researcher Dr Keir Groves designed and built the autonomous surface vehicle (ASV) for the challenge; it came in the top three in the second round, in November 2017. The MallARD went on to compete in the final third round, where it was deployed by the IAEA in a spent fuel pond at a nuclear power plant in Finland, along with two other entries. The MallARD came second overall, in November 2018.

[ RNE ]

Thanks Jennifer!

I sometimes get the sense that in the robotic grasping and manipulation world, suction cups are kinda seen as cheating. But their nature allows you to do some pretty interesting things.

More clever octopus footage please.

[ CMU ]

A Personal, At-Home Teacher For Playful Learning: From academic topics to child-friendly news bulletins, fun facts and more, Miko 2 is packed with relevant and freshly updated content specially designed by educationists and child-specialists. Your little one won’t even realize they’re learning.

As we point out pretty much every time we post a video like this, keep in mind that you’re seeing a heavily edited version of a hypothetical best case scenario for how this robot can function. And things like “creating a relationship that they can then learn how to form with their peers” is almost certainly overselling things. But at $300 (shipping included), this may be a decent robot as long as your expectations are appropriately calibrated.

[ Miko ]

ICRA 2018 plenary talk by Rodney Brooks: “Robots and People: the Research Challenge.”

[ IEEE RAS ]

ICRA-X 2018 talk by Ron Arkin: “Lethal Autonomous Robots and the Plight of the Noncombatant.”

[ IEEE RAS ]

On the most recent episode of the AI Podcast, Lex Fridman interviews Garry Kasparov.

[ AI Podcast ]

Posted in Human Robots

#436100 Labrador Systems Developing Affordable ...

Developing robots for the home is still a challenge, especially if you want those robots to interact with people and help them do practical, useful things. However, the potential markets for home robots are huge, and one of the most compelling markets is for home robots that can assist humans who need them. Today, Labrador Systems, a startup based in California, is announcing a pre-seed funding round of $2 million (led by SOSV’s hardware accelerator HAX with participation from Amazon’s Alexa Fund and iRobot Ventures, among others) with the goal of expanding development and conducting pilot studies of “a new [assistive robot] platform for supporting home health.”

Labrador was founded two years ago by Mike Dooley and Nikolai Romanov. Both Mike and Nikolai have backgrounds in consumer robotics at Evolution Robotics and iRobot, but as an ’80s gamer, I found Mike’s bio (or at least the parts of it on LinkedIn) particularly attention-grabbing: From 1995 to 1997, Mike worked at Brøderbund Software, helping to manage play testing for games like Myst and Riven and the Where in the World is Carmen San Diego series. He then spent three years at Lego as the product manager for MindStorms. After doing some marginally less interesting things, Mike was the VP of product development at Evolution Robotics from 2006 to 2012, where he led the team that developed the Mint floor-sweeping robot. Evolution was acquired by iRobot in 2012, and Mike ended up as the VP of product development over there until 2017, when he co-founded Labrador.

I was pretty much sold at Where in the World is Carmen San Diego (the original version of which I played from a 5.25” floppy on my dad’s Apple IIe)*, but as you can see from all that other stuff, Mike knows what he’s doing in robotics as well.

And according to Labrador’s press release, what they’re doing is this:

Labrador Systems is an early stage technology company developing a new generation of assistive robots to help people live more independently. The company’s core focus is creating affordable solutions that address practical and physical needs at a fraction of the cost of commercial robots. … Labrador’s technology platform offers an affordable solution to improve the quality of care while promoting independence and successful aging.

Labrador’s personal robot, the company’s first offering, will enter pilot studies in 2020.

That’s about as light on detail as a press release gets, but there’s a bit more on Labrador’s website, including:

Our core focus is creating affordable solutions that address practical and physical needs. (we are not a social robot company)
By affordable, we mean products and technologies that will be available at less than 1/10th the cost of commercial robots.
We achieve those low costs by fusing the latest technologies coming out of augmented reality with robotics to move things in the real world.

The only hardware we’ve actually seen from Labrador at this point is a demo that they put together for Amazon’s re:MARS conference, which took place a few months ago, showing a “demonstration project” called Smart Walker:

This isn’t the home assistance robot that Labrador got its funding for, but rather a demonstration of some of their technology. So of course, the question is, what’s Labrador working on, then? It’s still a secret, but Mike Dooley was able to give us a few more details.

IEEE Spectrum: Your website shows a smart walker concept—how is that related to the assistive robot that you’re working on?

Mike Dooley: The smart walker was a request from a major senior living organization to have our robot (which is really good at navigation) guide residents from place to place within their communities. To test the idea with residents, it turned out to be much quicker to take the navigation system from the robot and put it on an existing rollator walker. So when you see the clips of the technology in the smart walker video on our website, that’s actually the robot’s navigation system localizing in real time and path planning in an environment.

“Assistive robot” can cover a huge range of designs and capabilities—can you give us any more detail about your robot, and what it’ll be able to do?

One of the core features of our robot is to help people move things where they have difficulty moving themselves, particularly in the home setting. That may sound trivial, but to someone who has impaired mobility, it can be a major daily challenge and negatively impact their life and health in a number of ways. Some examples we repeatedly hear are people not staying hydrated or taking their medication on time simply because there is a distance between where they are and the items they need. Once we have those base capabilities, i.e. the ability to navigate around a home and move things within it, then the robot becomes a platform for a wider variety of applications.
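To make “navigate around a home” concrete, here is a toy breadth-first-search path planner on an occupancy grid. It is purely illustrative, standing in for whatever navigation stack Labrador actually uses:

```python
# Toy path planner: breadth-first search on an occupancy grid of a "home."
# Illustrative only; real home robots plan over richer maps than this.
from collections import deque

def plan(grid, start, goal):
    """grid: list of strings, '#' = obstacle. Returns a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # walk back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route, e.g. a closed door

home = ["..........",
        "..####....",
        "..#..#....",
        "..#..#....",
        ".....#...."]
print(plan(home, (0, 0), (4, 9)))
```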

What made you decide to develop assistive robots, and why are robots a good solution for seniors who want to live independently?

Supporting independent living has been seen as a massive opportunity in robotics for some time, but also as something off in the future. The turning point for me was watching my mother enter that stage in her life and seeing her transition to using a cane, then a walker, and eventually to a wheelchair. That made the problems very real for me. It also made things much clearer about how we could start addressing specific needs with the tools that are becoming available now.

In terms of why robots can be a good solution, the basic answer is the level of need is so overwhelming that even helping with “basic” tasks can make an appreciable difference in the quality of someone’s daily life. It’s also very much about giving individuals a degree of control back over their environment. That applies to seniors as well as others whose world starts getting more complex to manage as their abilities become more impaired.

What are the particular challenges of developing assistive robots, and how are you addressing them? Why do you think there aren’t more robotics startups in this space?

The setting (operating in homes and personal spaces) and the core purpose of the product (aiding a wide variety of individuals) bring a lot of complexity to any capability you want to build into an assistive robot. Our approach is to put as much structure as we can into the system to make it functional, affordable, understandable and reliable.

I think one of the reasons you don’t see more startups in the space is that a lot of roboticists want to skip ahead and do the fancy stuff, such as taking on human-level capabilities around things like manipulation. Those are very interesting research topics, but we think those are also very far away from being practical solutions you can productize for people to use in their homes.

How do you think assistive robots and human caregivers should work together?

The ideal scenario is allowing caregivers to focus more of their time on the high-touch, personal side of care. The robot can offload the more basic support tasks as well as extend the impact of the caregiver for the long hours of the day they can’t be with someone at their home. We see that applying to both paid care providers as well as the 40 million unpaid family members and friends that provide assistance.

The robot is really there as a tool, both for individuals in need and the people that help them. What’s promising in the research discussions we’ve had so far is that even when a caregiver is present, giving control back to the individual for simple things can mean a lot in the relationship between them and the caregiver.

What should we look forward to from Labrador in 2020?

Our big goal in 2020 is to start placing the next version of the robot with individuals with different types of needs to let them experience it naturally in their own homes and provide feedback on what they like, what they don’t like, and how we can make it better. We are currently reaching out to companies in the healthcare and home health fields to participate in those studies and test specific applications related to their services. We plan to share more detail about those studies and the robot itself as we get further into 2020.

If you’re an organization (or individual) who wants to possibly try out Labrador’s prototype, the company encourages you to connect with them through their website. And as we learn more about what Labrador is up to, we’ll have updates for you, presumably in 2020.

[ Labrador Systems ]

* I just lost an hour of my life after finding out that you can play Where in the World is Carmen San Diego in your browser for free.

Posted in Human Robots

#436044 Want a Really Hard Machine Learning ...

What’s the world’s hardest machine learning problem? Autonomous vehicles? Robots that can walk? Cancer detection?

Nope, says Julian Sanchez. It’s agriculture.

Sanchez might be a little biased. He is the director of precision agriculture for John Deere, and is in charge of adding intelligence to traditional farm vehicles. But he does have a little perspective, having spent time working on software for both medical devices and air traffic control systems.

I met with Sanchez and Alexey Rostapshov, head of digital innovation at John Deere Labs, at the organization’s San Francisco offices last month. Labs launched in 2017 to take advantage of the area’s tech expertise, both to apply machine learning to in-house agricultural problems and to work with partners to build technologies that play nicely with Deere’s big green machines. Deere’s neighbors in San Francisco’s tech-heavy South of Market are LinkedIn, Salesforce, and Planet Labs, which puts it in a good position for recruiting.

“We’ve literally had folks knock on the door and say, ‘What are you doing here?’” says Rostapshov, and some return to drop off resumes.

Here’s why Sanchez believes agriculture is such a big challenge for artificial intelligence.

“It’s not just about driving tractors around,” he says, although autonomous driving technologies are part of the mix. (John Deere is doing a lot of work with precision GPS to improve autonomous driving, for example, and allow tractors to plan their own routes around fields.)

But more complex than the driving problem, says Sanchez, are the classification problems.

Corn: A Classic Classification Problem

Photo: Tekla Perry

One key effort, Sanchez says, is AI systems “that allow me to tell whether grain being harvested is good quality or low quality and to make automatic adjustment systems for the harvester.” The company is already selling an early version of this image analysis technology. But the many differences between grain types, and grains grown under different conditions, make this task a tough one for machine learning.

“Take corn,” Sanchez says. “Let’s say we are building a deep learning algorithm to detect this corn. And we take lots of pictures of kernels to give it. Say we pick those kernels in central Illinois. But, one mile over, the farmer planted a slightly different hybrid which has slightly different coloration of yellow. Meanwhile, this other farm harvested three days later in a field five miles away; it’s the same hybrid, but it also looks different.

“It’s an overwhelming classification challenge, and that’s just for corn. But you are not only doing it for corn, you have to add 20 more varieties of grain to the mix; and some, like canola, are almost microscopic.”
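To make the shape of that classification problem concrete, here is a toy kernel-quality classifier: a generic small convolutional network in PyTorch. It is not Deere’s model, and as Sanchez explains, the network is the easy part; covering all that variation with training data is the hard one:

```python
# Toy grain-quality classifier: a generic small CNN, not Deere's model.
# The hard part Sanchez describes isn't this network; it's covering the
# endless variation (hybrid, weather, timing) in the training data.
import torch
import torch.nn as nn

class KernelQualityNet(nn.Module):
    def __init__(self, n_classes=2):   # e.g. good vs. low-quality grain
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):               # x: (batch, 3, 64, 64) kernel crops
        return self.head(self.features(x).flatten(1))

model = KernelQualityNet()
dummy_batch = torch.randn(8, 3, 64, 64)   # stand-in for camera crops
print(model(dummy_batch).shape)           # torch.Size([8, 2])
```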

Even the ground conditions vary dramatically—far more than road conditions, Sanchez points out.

“Let’s say we are building a deep learning algorithm to detect how much residue is left on the soil after a harvest, including stubble and some chaff. Let’s drive 2,000 acres of fields in the Midwest looking at residue. That’s great, but I guarantee that if you go drive those the next year, it will look significantly different.

“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen. And in agriculture, you always feel that there is a set of conditions that you haven’t yet classified.”
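That interpolation-versus-extrapolation distinction is easy to demonstrate with any fitted model. In this tiny example, a polynomial stands in for a deep network: it tracks the underlying function inside its training range and fails badly outside it:

```python
# Interpolation vs. extrapolation in miniature: a model fit on one range
# of conditions degrades sharply outside it. A degree-7 polynomial stands
# in for a deep network; the failure mode is the same in kind.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 50)                # "conditions we've seen"
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 50)
coeffs = np.polyfit(x_train, y_train, deg=7)

for x in (0.5, 1.5):    # inside vs. outside the training range
    error = abs(np.polyval(coeffs, x) - np.sin(2 * np.pi * x))
    print(f"x = {x}: absolute error {error:.2f}")
```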

A Flood of Big Data

The scale of the data is also daunting, Rostapshov points out. “We are one of the largest users of cloud computing services in the world,” he says. “We are gathering 5 to 15 million measurements per second from 130,000 connected machines globally. We have over 150 million acres in our databases, using petabytes and petabytes [of storage]. We process more data than Twitter does.”
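(Divided across the fleet, those figures work out to roughly 40 to 115 measurements per second per connected machine.)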

Much of this information is so-called dirty data, that is, it doesn’t share the same format or structure, because it’s coming not only from a wide variety of John Deere machines, but also includes data from some 100 other companies that have access to the platform, including weather information, aerial imagery, and soil analyses.
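“Doesn’t share the same format or structure” means code along these lines has to exist for every source. A contrived example; every field name, unit, and format here is invented, not Deere’s schema:

```python
# Contrived sketch of "dirty data" cleanup: the same yield measurement
# arriving in two different shapes. All field names are invented.
def normalize(record: dict) -> dict:
    """Coerce heterogeneous records into one schema: lat, lon, yield in t/ha."""
    if "gps" in record:                        # e.g. a newer machine's format
        lat, lon = record["gps"]["lat"], record["gps"]["lon"]
        yield_t_ha = record["yield_t_ha"]
    elif "latitude" in record:                 # e.g. a partner's format
        lat, lon = record["latitude"], record["longitude"]
        # 1 bu/acre of corn (56 lb) is roughly 0.0628 metric tons/hectare
        yield_t_ha = record["yield_bu_ac"] * 0.0628
    else:
        raise ValueError(f"unrecognized record shape: {sorted(record)}")
    return {"lat": lat, "lon": lon, "yield_t_ha": round(yield_t_ha, 2)}

raw = [{"gps": {"lat": 40.1, "lon": -88.2}, "yield_t_ha": 11.3},
       {"latitude": 40.2, "longitude": -88.3, "yield_bu_ac": 170}]
print([normalize(r) for r in raw])
```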

As a result, says Sanchez, Deere has had to make “tremendous investments in back-end data cleanup.”

“Deep learning is great at interpolating conditions between what it knows; it is not good at extrapolating to situations it hasn’t seen.”
—Julian Sanchez, John Deere

“We have gotten progressively more skilled at that problem,” he says. “We started simply by cleaning up our own data. You’d think it would be nice and neat, since it’s coming from our own machines, but there is a wide variety of different models and different years. Then we started geospatially tagging the agronomic data—the information about where you are applying herbicides and fertilizer and the like—coming in from our vehicles. When we started bringing in other data, from drones, say, we were already good at cleaning it up.”

John Deere’s Hiring Pitch

Hard problems can be a good thing to have for a company looking to hire machine learning engineers.

“Our opening line to potential recruits,” Sanchez says, “is ‘This stuff matters.’ Then, if we get a chance to talk to them more, we follow up with ‘Not only does this stuff matter, but the problems are really hard and interesting.’ When we explain the variability in farming and how we have to apply all the latest tools to these problems, we get their attention.”

Software engineers “know that feeding a growing population is a massive problem and are excited about the prospect of making a difference,” Rostapshov says.

Only 20 engineers work in the San Francisco labs right now, and that’s on a busy day—some of the researchers spend part of their time at Blue River Technology, a startup based in Sunnyvale that was acquired by Deere in 2017. About half of the researchers are focusing on AI. The Lab is in the process of doubling its office space (no word on staffing plans for that expansion yet).

“We are one of the largest users of cloud computing services in the world.”
—Alexey Rostapshov, John Deere Labs

Company-wide, Deere has thousands of software engineers, with many using AI and machine learning tools in their work, and about the same number of mechanical and electrical engineers, Sanchez reports. “If you look at our hiring 10 years ago,” he says, “it was heavily weighted to mechanical engineers. But if you look at those numbers now, it is by a large majority [engineers working] in the software space. We still need mechanical engineers—we do build green machines—but if you go by our footprint of tech talent, it is pretty safe to call John Deere a software company. And if you follow the key conversations that are happening in the company right now, 95 percent of them are software-related.”

For now, these software engineers are focused on developing technologies that allow farmers to “do more with less,” Sanchez says. Meaning: getting more and better crops from less fuel, less seed, less fertilizer, less pesticide, and fewer workers, while putting together building blocks that, he says, could eventually lead to fully autonomous farm vehicles. The data Deere collects today, for the most part, stays in silos (the virtual kind), with AI algorithms that analyze specific sets of data to provide guidance to individual farmers. At some point, however, with tools to anonymize data and buy-in from farmers, aggregating data could provide some powerful insights.

“We are not asking farmers for that yet,” Sanchez says. “We are not doing aggregation to look for patterns. We are focused on offering technology that allows an individual farmer to use less, on positioning ourselves to be in a neutral spot. We are not about selling you more seed or more fertilizer. So we are building up a good trust level. In the long term, we can have conversations about doing more with deep learning.”

Posted in Human Robots