
#436234 Robot Gift Guide 2019

Welcome to the eighth edition of IEEE Spectrum’s Robot Gift Guide!

This year we’re featuring 15 robotic products that we think will make fantastic holiday gifts. As always, we tried to include a broad range of robot types and prices, focusing mostly on items released this year. (A reminder: While we provide links to places where you can buy these items, we’re not endorsing any in particular, and a little bit of research may result in better deals.)

If you need even more robot gift ideas, take a look at our past guides: 2018, 2017, 2016, 2015, 2014, 2013, and 2012. Some of those robots are still great choices and might be way cheaper now than when we first posted about them. And if you have suggestions that you’d like to share, post a comment below to help the rest of us find the perfect robot gift.

Skydio 2

Image: Skydio

What makes robots so compelling is their autonomy, and the Skydio 2 is one of the most autonomous robots we’ve ever seen. It uses an array of cameras to map its environment and avoid obstacles in real time, making flight safe and effortless and enabling the kinds of shots that would be impossible otherwise. Seriously, this thing is magical, and it’s amazing that you can actually buy one.
$1,000
Skydio
UBTECH Jimu MeeBot 2

Image: UBTECH

The Jimu MeeBot 2.0 from UBTECH is a STEM education robot designed to be easy to build and program. It includes six servo motors, a color sensor, and LED lights. An app for iPhone or iPad provides step-by-step 3D instructions, and helps you code different behaviors for the robot. It’s available exclusively from Apple.
$130
Apple
iRobot Roomba s9+

Image: iRobot

We know that $1,400 is a crazy amount of money to spend on a robot vacuum, but the Roomba s9+ is a crazy robot vacuum. As if all of its sensors and mapping intelligence weren’t enough, it empties itself, which means that you can have your floors vacuumed every single day for a month without ever having to think about it. This is what home robots are supposed to be.
$1,400
iRobot
PFF Gita

Photo: Piaggio Fast Forward

Nobody likes carrying things, which is why Gita is perfect for everyone with an extra $3,000 lying around. Developed by Piaggio Fast Forward, this autonomous robot will follow you around with a cargo hold full of your most important stuff, and do it in a way guaranteed to attract as much attention as possible.
$3,250
Gita
DJI Mavic Mini

Photo: DJI

It’s tiny, it’s cheap, and it takes good pictures—what more could you ask for from a drone? And for $400, this is an excellent drone to get if you’re on a budget and comfortable with manual flight. Keep in mind that while the Mavic Mini is small enough that you don’t need to register it with the FAA, you do still need to follow all the same rules and regulations.
$400
DJI
LEGO Star Wars Droid Commander

Image: LEGO

Designed for kids ages 8+, this LEGO set includes more than 1,000 pieces, enough to build three different droids: R2-D2, Gonk Droid, and Mouse Droid. Using a Bluetooth-controlled robotic brick called Move Hub, which connects to the LEGO BOOST Star Wars app, kids can change how the robots behave and solve challenges, learning basic robotics and coding skills.
$200
LEGO
Sony Aibo

Photo: Sony

Robot pets don’t get much more sophisticated (or expensive) than Sony’s Aibo. In fact, it’s one of the most complex consumer robots you can buy, and Sony continues to add to Aibo’s software. Recent new features include user programmability and the ability to “feed” it.
$2,900 (free aibone and paw pads until 12/29/2019)
Sony
Neato Botvac D4 Connected

Photo: Neato

The Neato Botvac D4 may not have all of the features of its fancier and more expensive siblings, but it does have the features that you probably care the most about: The ability to make maps of its environment for intelligent cleaning (using lasers!), along with user-defined no-go lines that keep it where you want it. And it cleans quite well, too.
$350 (on sale; regularly $530)
Neato Robotics
Cubelets Curiosity Set

Photo: Modular Robotics

Cubelets are magnetic blocks that you can snap together to make an endless variety of robots with no programming and no wires. The newest set, called Curiosity, is designed for kids ages 4+ and comes with 10 robotic cubes. These include light and distance sensors, motors, and a Bluetooth module, which connects the robot constructions to the Cubelets app.
$250
Modular Robotics
Tertill

Photo: Franklin Robotics

Tertill does one simple job: It weeds your garden. It’s waterproof, dirt proof, solar powered, and fully autonomous, meaning that you can leave it out in your garden all summer and just enjoy eating your plants rather than taking care of them.
$350
Tertill
iRobot Root

Photo: iRobot

Root was originally developed at Harvard University as a tool to help kids progressively learn to code. iRobot has taken over Root and is now supporting the curriculum, which starts before kids even know how to read and should keep them busy for years afterward.
$200
iRobot
LOVOT

Image: Lovot

Let’s be honest: Nobody is really quite sure what LOVOT is. We can all agree that it’s kinda cute, though. And kinda weird. But cute. Created by Japanese robotics startup Groove X, LOVOT does have a whole bunch of tech packed into its bizarre little body and it will do its best to get you to love it.
$2,750 (¥300,000)
LOVOT
Sphero RVR

Photo: Sphero

RVR is a rugged, versatile, easy-to-program mobile robot. It’s a development platform designed to be a bridge between educational robots like Sphero and more sophisticated and expensive systems like Misty. It’s mostly affordable, very expandable, and comes from a company with a lot of experience making robots.
$250
Sphero
“How to Train Your Robot”

Image: Lawrence Hall of Science

Aimed at 4th and 5th graders, “How to Train Your Robot,” written by Blooma Goldberg, Ken Goldberg, and Ashley Chase, and illustrated by Dave Clegg, is a perfect introduction to robotics for kids who want to get started with designing and building robots. But the book isn’t just for beginners: It’s also a fun, inspiring read for kids who are already into robotics and want to go further—it even introduces concepts like computer simulations and deep learning. You can download a free digital copy or request hardcopies here.
Free
UC Berkeley
MIT Mini Cheetah

Photo: MIT

Yes, Boston Dynamics’ Spot, now available for lease, is probably the world’s most famous quadruped, but MIT is starting to pump out Mini Cheetahs en masse for researchers, and while we’re not exactly sure how you’d manage to get one of these things short of stealing one directly from MIT, a Mini Cheetah is our fantasy robotics gift this year. Mini Cheetah looks like a ton of fun—it’s portable, highly dynamic, super rugged, and easy to control. We want one!
Price N/A
MIT Biomimetic Robotics Lab

For more tech gift ideas, see also IEEE Spectrum’s annual Gift Guide.


#436220 How Boston Dynamics Is Redefining Robot ...

Gif: Bob O’Connor/IEEE Spectrum

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Spot

Photo: Bob O’Connor

HEIGHT: 84 cm

WEIGHT: 25 kg

SPEED: 5.76 km/h

SENSING: Stereo cameras, inertial measurement unit, position/force sensors

ACTUATION: 12 DC motors

POWER: Battery (90 minutes per charge)

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Atlas

Photo: Bob O’Connor

HEIGHT: 150 cm

WEIGHT: 80 kg

SPEED: 5.4 km/h

SENSING: Lidar and stereo vision

ACTUATION: 28 hydraulic actuators

POWER: Battery

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that hardly seem possible for such a large humanoid. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

In the Spotlight

Photo: Bob O’Connor

Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

Super Senses

Photo: Bob O’Connor

Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.

Stepping Up

Photo: Bob O’Connor

In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

Funky Feet

Gif: Bob O’Connor/IEEE Spectrum

Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”

Robot Blood

Photo: Bob O’Connor

Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.

Next Generation

Gif: Bob O’Connor/IEEE Spectrum

The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

Walk This Way

Photo: Bob O’Connor

To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

Biologically Inspired

Photos: Bob O’Connor

Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.

This article appears in the December 2019 print issue as “By Leaps and Bounds.”


#436218 An AI Debated Its Own Potential for Good ...

Artificial intelligence is going to overhaul the way we live and work. But will the changes it brings be for the better? As the technology slowly develops (let’s remember that right now, we’re still very much in the narrow AI space and nowhere near an artificial general intelligence), whether it will end up doing us more harm than good is a question at the top of everyone’s mind.

What kind of response might we get if we posed this question to an AI itself?

Last week at the Cambridge Union in England, IBM did just that. Its Project Debater (an AI that narrowly lost a debate to human debating champion Harish Natarajan in February) gave the opening arguments in a debate about the promise and peril of artificial intelligence.

Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument.

This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. Another 511 were found to be pro-AI, and the rest were irrelevant to the topic at hand.

Debater grouped the arguments into five themes; the technology’s ability to take over dangerous or monotonous jobs was a pro-AI theme, and on the flip side was its potential to perpetuate the biases of its creators. “AI companies still have too little expertise on how to properly assess datasets and filter out bias,” the tall black box that houses Project Debater said. “AI will take human bias and will fixate it for generations.”

After Project Debater kicked off the debate by giving opening arguments for both sides, two teams of people took over, elaborating on its points and coming up with their own counter-arguments.

In the end, an audience poll voted in favor of the pro-AI side, but just barely; 51.2 percent of voters felt convinced that AI can help us more than it can hurt us.

The software’s natural language processing was able to identify racist, obscene, or otherwise inappropriate comments and weed them out as irrelevant to the debate. But it also repeated the same arguments multiple times, and it misclassified a statement about bias as pro-AI rather than anti-AI.

IBM has been working on Project Debater for over six years, and though it aims to iron out small glitches like these, the system’s goal isn’t to ultimately outwit and defeat humans. On the contrary, the AI is meant to support our decision-making by taking in and processing huge amounts of information in a nuanced way, more quickly than we ever could.

IBM engineer Noam Slonim envisions Project Debater’s tech being used, for example, by a government seeking citizens’ feedback about a new policy. “This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.

As for the question of whether AI will do more good or harm, perhaps Sylvie Delacroix put it best. A professor of law and ethics at the University of Birmingham who argued on the pro-AI side of the debate, she pointed out that the impact AI will have depends on the way we design it, saying “AI is only as good as the data it has been fed.”

She’s right; rather than asking what sort of impact AI will have on humanity, we should start by asking what sort of impact we want it to have. The people working on AI—not AIs themselves—are ultimately responsible for how much good or harm will be done.

Image Credit: IBM Project Debater at Cambridge Union Society, photo courtesy of IBM Research


#436215 Help Rescuers Find Missing Persons With ...

There’s a definite sense that robots are destined to become a critical part of search and rescue missions and disaster relief efforts, working alongside humans to help first responders move faster and more efficiently. And we’ve seen all kinds of studies that include the claim “this robot could potentially help with disaster relief,” to varying degrees of plausibility.

But it takes a long time, and a lot of extra effort, for academic research to actually become anything useful—especially for first responders, where there isn’t a lot of financial incentive for further development.

It turns out that if you actually ask first responders what they most need for disaster relief, they’re not necessarily interested in the latest and greatest robotic platform or other futuristic technology. They’re using commercial off-the-shelf drones, often consumer-grade ones, because they’re simple and cheap and great at surveying large areas. The challenge is doing something useful with all of the imagery that these drones collect. Computer vision algorithms could help with that, as long as those algorithms are readily accessible and nearly effortless to use.

The IEEE Robotics and Automation Society and the Center for Robotic-Assisted Search and Rescue (CRASAR) at Texas A&M University have launched a contest to bridge this gap between the kinds of tools that roboticists and computer vision researchers might call “basic” and a system that’s useful to first responders in the field. It’s a simple and straightforward idea, and somewhat surprising that no one had thought of it before now. And if you can develop such a system, it’s worth some cash.

CRASAR does already have a Computer Vision Emergency Response Toolkit (created right after Hurricane Harvey), which includes a few pixel filters and some edge and corner detectors. Through this contest, you can get paid your share of a $3,000 prize pool for adding some other excessively basic tools, including:

Image enhancement through histogram equalization, which can be applied to electro-optical (visible light cameras) and thermal imagery

Color segmentation for a range

Grayscale segmentation for a range in a thermal image

If it seems like this contest is really not that hard, that’s because it isn’t. “The first thing to understand about this contest is that strictly speaking, it’s really not that hard,” says Robin Murphy, director of CRASAR. “This contest isn’t necessarily about coming up with algorithms that are brand new, or even state-of-the-art, but rather algorithms that are functional and reliable and implemented in a way that’s immediately [usable] by inexperienced users in the field.”

Murphy readily admits that some of what needs to be done is not particularly challenging at all, but that’s not the point—the point is to make these functionalities accessible to folks who have better things to do than solve these problems themselves, as Murphy explains.

“A lot of my research is driven by problems that I’ve seen in the field that you’d think somebody would have solved, but apparently not. More than half of this is available in OpenCV, but who’s going to find it, download it, learn Python, that kind of thing? We need to get these tools into an open framework. We’re happy if you take libraries that already exist (just don’t steal code)—not everything needs to be rewritten from scratch. Just use what’s already there. Some of it may seem too simple, because it IS that simple. It already exists and you just need to move some code around.”
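
To make that concrete, here’s a minimal sketch (ours, not code from the CRASAR toolkit) of what the histogram-equalization and color-segmentation entries could look like in Python with stock OpenCV calls; the file name and HSV thresholds below are placeholder values:

```python
import cv2
import numpy as np

def equalize(path):
    # Histogram equalization operates on single-channel images, so it
    # applies directly to grayscale electro-optical or thermal frames.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.equalizeHist(img)

def segment_color(path, lower_hsv, upper_hsv):
    # Keep only pixels whose HSV values fall inside the requested range;
    # everything outside the range is masked to black.
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return cv2.bitwise_and(img, img, mask=mask)

# Hypothetical usage: pull bright-orange regions (life vests, flagging
# tape) out of a drone frame.
# highlighted = segment_color("frame0001.jpg", (5, 100, 100), (25, 255, 255))
```

The algorithms really are one-liners; the prize money is for wrapping them in something a first responder can run in the field without touching code.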

If you want to get very slightly more complicated, there’s a second category that involves a little bit of math (we sketch the geotag step in code after the list):

Coders must provide a system that does the following for each nadir image in a set:

Reads the geotag embedded in the .jpg
Overlays a USNG grid for a user-specified interval (e.g., every 50, 100, or 200 meters)
Gives the GPS coordinates of each pixel if a cursor is rolled over the image
Given a set of images with the GPS or USNG coordinate and a bounding box, finds all images in the set that have a pixel intersecting that location
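
For the geotag item, here’s a rough sketch of the idea (again ours, and hedged: it assumes a recent version of Pillow and standard EXIF GPS tags, and it leaves out the USNG conversion, for which an off-the-shelf library such as mgrs could be used):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def read_geotag(path):
    """Return (lat, lon) in decimal degrees from a JPEG's EXIF, or None."""
    exif = Image.open(path)._getexif() or {}
    raw = exif.get(34853)  # 34853 is the standard GPSInfo EXIF tag
    if not raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}

    def dms_to_deg(dms, ref):
        # EXIF stores degrees/minutes/seconds; south and west are negative.
        d, m, s = (float(x) for x in dms)
        deg = d + m / 60.0 + s / 3600.0
        return -deg if ref in ("S", "W") else deg

    return (dms_to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            dms_to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))
```

Given that center coordinate plus the drone’s altitude and camera field of view, the per-pixel GPS lookup reduces to computing the image’s ground footprint and interpolating across it.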

The final category awards prizes to anyone who comes up with anything else that turns out to be useful. Or, more specifically, “entrants can submit any algorithm they believe will be of value.” Whether or not it’s actually of value will be up to a panel of judges that includes both first responders and computer vision experts. More detailed rules can be found here, along with sample datasets that you can use for testing.

The contest deadline is 16 December, so you’ve got about a month to submit an entry. Winners will be announced at the beginning of January.


#436209 Video Friday: Robotic Endoscope Travels ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, WA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Kuka has just announced the results of its annual Innovation Award. From an initial batch of 30 applicants, five teams reached the finals (we were part of the judging committee). The five finalists worked for nearly a year on their applications, which they demonstrated this week at the Medica trade show in Düsseldorf, Germany. And the winner of the €20,000 prize is…Team RoboFORCE, led by the STORM Lab in the U.K., which developed a “robotic magnetic flexible endoscope for painless colorectal cancer screening, surveillance, and intervention.”

The system could improve colonoscopy procedures by reducing pain and discomfort as well as other risks such as bleeding and perforation, according to the STORM Lab researchers. It uses a magnetic field to control the endoscope, pulling rather than pushing it through the colon.

The other four finalists also presented some really interesting applications—you can see their videos below.

“Because we were so pleased with the high quality of the submissions, we will have next year’s finals again at the Medica fair, and the challenge will be named ‘Medical Robotics’,” says Rainer Bischoff, vice president for corporate research at Kuka. He adds that the selected teams will again use Kuka’s LBR Med robot arm, which is “already certified for integration into medical products and makes it particularly easy for startups to use a robot as the main component for a particular solution.”

Applications are now open for Kuka’s Innovation Award 2020. You can find more information on how to enter here. The deadline is 5 January 2020.

[ Kuka ]

Oh good, Aibo needs to be fed now.

You know what comes next, right?

[ Aibo ]

Your cat needs this robot.

It's about $200 on Kickstarter.

[ Kickstarter ]

Enjoy this tour of the Skydio offices courtesy of the Skydio 2, which runs into not even one single thing.

If any Skydio employees had important piles of papers on their desks, well, they don’t anymore.

[ Skydio ]

Artificial intelligence is everywhere nowadays, but what exactly does it mean? We asked a group of MIT computer science grad students and postdocs how they personally define AI.

“When most people say AI, they actually mean machine learning, which is just pattern recognition.” Yup.

[ MIT ]

Using event-based cameras, this drone control system can track attitude at 1600 degrees per second (!).

[ UZH ]

Introduced at CES 2018, Walker is an intelligent humanoid service robot from UBTECH Robotics. Below are the latest features and technologies used during our latest round of development to make Walker even better.

[ Ubtech ]

Introducing the Alpha Prime by #VelodyneLidar, the most advanced lidar sensor on the market! Alpha Prime delivers an unrivaled combination of field-of-view, range, high-resolution, clarity and operational performance.

Performance looks good, but don’t expect it to be cheap.

[ Velodyne ]

Ghost Robotics’ Spirit 40 will start shipping to researchers in January of next year.

[ Ghost Robotics ]

Unitree is about to ship the first batch of their AlienGo quadrupeds as well:

[ Unitree ]

Mechanical engineering’s Sarah Bergbreiter discusses her work on micro robotics, how they draw inspiration from insects and animals, and how tiny robots can help humans in a variety of fields.

[ CMU ]

Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment. By adopting a similar strategy, robots can also achieve more robust manipulation. In this paper, we enable a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures provide hard constraints that limit the outcome of robot actions. Thereby, they funnel uncertainty from perception and motor control and scaffold manipulation skill learning.

[ Stanford ]

Since 2016, Verity's drones have completed more than 200,000 flights around the world. Completely autonomous, client-operated and designed for live events, Verity is making the magic real by turning drones into flying lights, characters, and props.

[ Verity ]

To monitor and stop the spread of wildfires, University of Michigan engineers developed UAVs that could find, map and report fires. One day UAVs like this could work with disaster response units, firefighters and other emergency teams to provide real-time accurate information to reduce damage and save lives. For their research, the University of Michigan graduate students won first place at a competition for using a swarm of UAVs to successfully map and report simulated wildfires.

[ University of Michigan ]

Here’s an important issue that I haven’t heard talked about all that much: How first responders should interact with self-driving cars.

“To put the car in manual mode, you must call Waymo.” Huh.

[ Waymo ]

Here’s what Gitai has been up to recently, from a Humanoids 2019 workshop talk.

[ Gitai ]

The latest CMU RI seminar comes from Girish Chowdhary at the University of Illinois at Urbana-Champaign on “Autonomous and Intelligent Robots in Unstructured Field Environments.”

What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms! Teams of small aerial and ground robots could be a potential solution to many of the serious problems that modern agriculture is facing. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing season are not yet practical. I will discuss my group’s theoretical and practical work towards the underlying challenging problems in robotic systems, autonomy, sensing, and learning. I will begin with our lightweight, compact, and autonomous field robot TerraSentia and the recent successes of this type of undercanopy robots for high-throughput phenotyping with deep learning-based machine vision. I will also discuss how to make a team of autonomous robots learn to coordinate to weed large agricultural farms under partial observability. These direct applications will help me make the case for the type of reinforcement learning and adaptive control that are necessary to usher in the next generation of autonomous field robots that learn to solve complex problems in harsh, changing, and dynamic environments. I will then end with an overview of our new MURI, in which we are working towards developing AI and control that leverages neurodynamics inspired by the Octopus brain.

[ CMU RI ]
