DARPA Subterranean Challenge: Tunnel Circuit
The Tunnel Circuit of the DARPA Subterranean Challenge starts later this week at the NIOSH research mine just outside of Pittsburgh, Pennsylvania. From 15-22 August, 11 teams will send robots into a mine that they've never seen before, with the goal of making maps and locating items. All DARPA SubT events involve tunnels of one sort or another, but in this case, the “Tunnel Circuit” refers to mines as opposed to urban underground areas or natural caves. This month’s challenge is the first of three discrete events leading up to a huge final event in August of 2021.
While the Tunnel Circuit competition will be closed to the public, and media are only allowed access for a single day (which we'll be at, of course), DARPA has provided a substantial amount of information about what teams will be able to expect. We also have details from the SubT Integration Exercise, called STIX, which was a completely closed event that took place back in April. STIX was aimed at giving some teams (and DARPA) a chance to practice in a real tunnel environment.
For more general background on SubT, here are some articles to get you all caught up:
SubT: The Next DARPA Challenge for Robotics
Q&A with DARPA Program Manager Tim Chung
Meet The First Nine Teams
It makes sense to take a closer look at what happened at April's STIX exercise, because it is (probably) very similar to what teams will experience in the upcoming Tunnel Circuit. STIX took place at Edgar Experimental Mine in Colorado, and while no two mines are the same (and many are very, very different), there are enough similarities for STIX to have been a valuable experience for teams. Here's an overview video of the exercise from DARPA:
DARPA has also put together a much more detailed walkthrough of the STIX mine exercise, which gives you a sense of just how vast, complicated, and (frankly) challenging for robots the mine environment is:
So, that's the kind of thing that teams had to deal with back in April. Since the event was an exercise, rather than a competition, DARPA didn't really keep score, and wouldn't comment on the performance of individual teams. We've been trolling YouTube for STIX footage, though, to get a sense of how things went, and we found a few interesting videos.
Here's a nice overview from Team CERBERUS, which used drones plus an ANYmal quadruped:
Team CTU-CRAS also used drones, along with a tracked robot:
Team Robotika was brave enough to post video of a “fatal failure” experienced by its wheeled robot; the poor little bot gets rescued at about 7:00 in case you get worried:
So that was STIX. But what about the Tunnel Circuit competition this week? Here's a course preview video from DARPA:
It sort of looks like the NIOSH mine might be a bit less dusty than the Edgar mine was, but it could also be wetter and muddier. It’s hard to tell, because we’re just getting a few snapshots of what’s probably an enormous area with kilometers of tunnels that the robots will have to explore. But DARPA has promised “constrained passages, sharp turns, large drops/climbs, inclines, steps, ladders, and mud, sand, and/or water.” Combine that with the serious challenge to communications imposed by the mine itself, and robots will have to be both physically capable, and almost entirely autonomous. Which is, of course, exactly what DARPA is looking to test with this challenge.
Lastly, we had a chance to catch up with Tim Chung, Program Manager for the Subterranean Challenge at DARPA, and ask him a few brief questions about STIX and what we have to look forward to this week.
IEEE Spectrum: How did STIX go?
Tim Chung: It was a lot of fun! I think it gave a lot of the teams a great opportunity to really get a taste of what these types of real world environments look like, and also what DARPA has in store for them in the SubT Challenge. STIX I saw as an experiment—a learning experience for all the teams involved (as well as the DARPA team) so that we can continue our calibration.
What do you think teams took away from STIX, and what do you think DARPA took away from STIX?
I think the thing that teams took away was that, when DARPA hosts a challenge, we have very audacious visions for what the art of the possible is. And that's what we want—in my mind, the purpose of a DARPA Grand Challenge is to provide that inspiration of, ‘Holy cow, someone thinks we can do this!’ So I do think the teams walked away with a better understanding of what DARPA's vision is for the capabilities we're seeking in the SubT Challenge, and hopefully walked away with a better understanding of the technical, physical, even maybe mental challenges of doing this in the wild— which will all roll back into how they think about the problem, and how they develop their systems.
This was a collaborative exercise, so the DARPA field team was out there interacting with the other engineers, figuring out what their strengths and weaknesses and needs might be, and even understanding how to handle the robots themselves. That will help [strengthen] connections between these university teams and DARPA going forward. Across the board, I think that collaborative spirit is something we really wish to encourage, and something that the DARPA folks were able to take away.
What do we have to look forward to during the Tunnel Circuit?
The vision here is that the Tunnel Circuit is representative of one of the three subterranean subdomains, along with urban and cave. Characteristics of all of these three subdomains will be mashed together in an epic final course, so that teams will have to face hints of tunnel once again in that final event.
Without giving too much away, the NIOSH mine will be similar to the Edgar mine in that it's a human-made environment that supports mining operations and research. But of course, every site is different, and these differences, I think, will provide good opportunities for the teams to shine.
Again, we'll be visiting the NIOSH mine in Pennsylvania during the Tunnel Circuit and will post as much as we can from there. But if you’re an actual participant in the Subterranean Challenge, please tweet me @BotJunkie so that I can follow and help share live updates.
[ DARPA Subterranean Challenge ]
3 Easy Ways to Evaluate AI Claims
When every other tech startup claims to use artificial intelligence, it can be tough to figure out if an AI service or product works as advertised. In the midst of the AI “gold rush,” how can you separate the nuggets from the fool’s gold?
There’s no shortage of cautionary tales involving overhyped AI claims. And applying AI technologies to health care, education, and law enforcement means that getting it wrong can have real consequences for society—not just for investors who bet on the wrong unicorn.
So IEEE Spectrum asked experts to share their tips for how to identify AI hype in press releases, news articles, research papers, and IPO filings.
“It can be tricky, because I think the people who are out there selling the AI hype—selling this AI snake oil—are getting more sophisticated over time,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative.
The term “AI” is perhaps most frequently used to describe machine learning algorithms (and deep learning algorithms, which require even less human guidance) that analyze huge amounts of data and make predictions based on patterns that humans might miss. These popular forms of AI are mostly suited to specialized tasks, such as automatically recognizing certain objects within photos. For that reason, they are sometimes described as “weak” or “narrow” AI.
Some researchers and thought leaders like to talk about the idea of “artificial general intelligence” or “strong AI” that has human-level capacity and flexibility to handle many diverse intellectual tasks. But for now, this type of AI remains firmly in the realm of science fiction and is far from being realized in the real world.
“AI has no well-defined meaning and many so-called AI companies are simply trying to take advantage of the buzz around that term,” says Arvind Narayanan, a computer scientist at Princeton University. “Companies have even been caught claiming to use AI when, in fact, the task is done by human workers.”
Here are three ways to recognize AI hype.
Look for Buzzwords
One red flag is what Hwang calls the “hype salad.” This means stringing together the term “AI” with many other tech buzzwords such as “blockchain” or “Internet of Things.” That doesn’t automatically disqualify the technology, but spotting a high volume of buzzwords in a post, pitch, or presentation should raise questions about what exactly the company or individual has developed.
Other experts agree that strings of buzzwords can be a red flag. That’s especially true if the buzzwords are never really explained in technical detail, and are simply tossed around as vague, poorly-defined terms, says Marzyeh Ghassemi, a computer scientist and biomedical engineer at the University of Toronto in Canada.
“I think that if it looks like a Google search—picture ‘interpretable blockchain AI deep learning medicine’—it's probably not high-quality work,” Ghassemi says.
Hwang also suggests mentally replacing all mentions of “AI” in an article with the term “magical fairy dust.” It’s a way of seeing whether an individual or organization is treating the technology like magic. If so—that’s another good reason to ask more questions about what exactly the AI technology involves.
And even the visual imagery used to illustrate AI claims can indicate that an individual or organization is overselling the technology.
“I think that a lot of the people who work on machine learning on a day-to-day basis are pretty humble about the technology, because they’re largely confronted with how frequently it just breaks and doesn't work,” Hwang says. “And so I think that if you see a company or someone representing AI as a Terminator head, or a big glowing HAL eye or something like that, I think it’s also worth asking some questions.”
Interrogate the Data
It can be hard to evaluate AI claims without any relevant expertise, says Ghassemi at the University of Toronto. Even experts need to know the technical details of the AI algorithm in question and have some access to the training data that shaped the AI model’s predictions. Still, savvy readers with some basic knowledge of applied statistics can search for red flags.
To start, readers can look for possible bias in training data based on small sample sizes or a skewed sample that fails to reflect the broader population, Ghassemi says. After all, an AI model trained only on health data from white men would not necessarily achieve similar results for other populations of patients.
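If you do have access to the training data, a quick audit of how each group is represented can surface these problems early. Here is a minimal sketch in Python using pandas; the file name and demographic column names are hypothetical placeholders for whatever the dataset actually contains.

```python
import pandas as pd

# Hypothetical training-data file and column names; substitute your own.
df = pd.read_csv("training_data.csv")

# How many examples does each demographic group contribute?
group_counts = df.groupby(["sex", "ethnicity"]).size()
print(group_counts)

# Flag groups that make up less than, say, 5 percent of the data --
# a model may perform far worse on them than headline metrics suggest.
shares = group_counts / len(df)
print("Under-represented groups:")
print(shares[shares < 0.05])
```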
How machine learning and deep learning models perform also depends on how well humans labeled the sample datasets used to train these programs. This task can be straightforward when labeling photos of cats versus dogs, but gets more complicated when assigning disease diagnoses to certain patient cases.
Medical experts frequently disagree with each other on diagnoses—which is why many patients seek a second opinion. Not surprisingly, this ambiguity can also affect the diagnostic labels that experts assign in training datasets. “For me, a red flag is not demonstrating deep knowledge of how your labels are defined,” Ghassemi says.
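One concrete way to probe how well-defined those labels are is to measure how often independent annotators agree with each other. Below is a minimal sketch using scikit-learn’s Cohen’s kappa statistic; the two lists of diagnoses are made-up stand-ins for labels from two hypothetical experts.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses assigned by two experts to the same ten cases
expert_a = ["disease", "healthy", "disease", "healthy", "disease",
            "healthy", "disease", "disease", "healthy", "healthy"]
expert_b = ["disease", "healthy", "healthy", "healthy", "disease",
            "disease", "disease", "healthy", "healthy", "healthy"]

# Cohen's kappa corrects raw agreement for chance: values near 1 mean the
# label definition is consistent, values near 0 mean it is mostly noise.
kappa = cohen_kappa_score(expert_a, expert_b)
print(f"Inter-annotator agreement (kappa): {kappa:.2f}")
```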
Such training data can also reflect the cultural stereotypes and biases of the humans who labeled the data, says Narayanan at Princeton University. Like Ghassemi, he recommends taking a hard look at exactly what the AI has learned: “A good way to start critically evaluating AI claims is by asking questions about the training data.”
Another red flag is presenting an AI system’s performance through a single accuracy figure without much explanation, Narayanan says. Claiming that an AI model achieves “99 percent” accuracy doesn’t mean much without knowing the baseline for comparison—such as whether other systems have already achieved 99 percent accuracy—or how well that accuracy holds up in situations beyond the training dataset.
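To see why, compare the headline number against the most naive baseline available. In the hypothetical sketch below, a screening dataset where 99 percent of cases are negative lets a “model” that always predicts the majority class reach roughly 99 percent accuracy without learning anything at all.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Hypothetical screening dataset: about 99 percent negative, 1 percent positive
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = (rng.random(10_000) < 0.01).astype(int)

# A "model" that always predicts the majority class
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print("Baseline accuracy:", accuracy_score(y, baseline.predict(X)))
# ~0.99 -- so a vendor's "99 percent accurate" claim tells you little
# unless you also know the class balance and this trivial baseline.
```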
Narayanan also emphasized the need to ask questions about an AI model’s false positive rate—the rate of making wrong predictions about the presence of a given condition. Even if the false positive rate of a hypothetical AI service is just one percent, that could have major consequences if that service ends up screening millions of people for cancer.
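The back-of-the-envelope arithmetic is easy to run yourself. The numbers below describe a hypothetical screening program with an assumed disease prevalence, not any real product:

```python
# Hypothetical cancer-screening program
people_screened = 1_000_000
prevalence = 0.005           # 0.5% of those screened actually have the condition
false_positive_rate = 0.01   # 1% of healthy people are flagged anyway
true_positive_rate = 0.95    # assumed sensitivity of the hypothetical AI

healthy = people_screened * (1 - prevalence)
sick = people_screened * prevalence

false_positives = healthy * false_positive_rate   # ~9,950 people
true_positives = sick * true_positive_rate        # ~4,750 people

print(f"False positives: {false_positives:,.0f}")
print(f"True positives:  {true_positives:,.0f}")
# Even at a 1 percent false positive rate, wrongly flagged patients
# outnumber correctly flagged ones roughly two to one in this scenario.
```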
Readers can also consider whether using AI in a given situation offers any meaningful improvement compared to traditional statistical methods, says Clayton Aldern, a data scientist and journalist who serves as managing director for Caldern LLC. He gave the hypothetical example of a “super-duper-fancy deep learning model” that achieves a prediction accuracy of 89 percent, compared to a “little polynomial regression model” that achieves 86 percent on the same dataset.
“We're talking about a three-percentage-point increase on something that you learned about in Algebra 1,” Aldern says. “So is it worth the hype?”
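That comparison is straightforward to reproduce in spirit: fit a simple model and a fancier one on the same data and look at the size of the gap. The sketch below uses a synthetic dataset, with logistic regression and gradient boosting standing in for Aldern’s polynomial regression and deep learning models, so the exact numbers will not match his example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in dataset
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

simple = LogisticRegression(max_iter=1_000)
fancy = GradientBoostingClassifier(random_state=0)

for name, model in [("simple logistic regression", simple),
                    ("gradient-boosted ensemble", fancy)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
# If the fancy model only edges out the simple one by a point or two,
# ask whether the extra complexity (and the hype) is warranted.
```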
Don’t Ignore the Drawbacks
The hype surrounding AI isn’t just about the technical merits of services and products driven by machine learning. Overblown claims about the beneficial impacts of AI technology—or vague promises to address ethical issues related to deploying it—should also raise red flags.
“If a company promises to use its tech ethically, it is important to question if its business model aligns with that promise,” Narayanan says. “Even if employees have noble intentions, it is unrealistic to expect the company as a whole to resist financial imperatives.”
One example might be a company with a business model that depends on leveraging customers’ personal data. Such companies “tend to make empty promises when it comes to privacy,” Narayanan says. And, if companies hire workers to produce training data, it’s also worth asking whether the companies treat those workers ethically.
The transparency—or lack thereof—about any AI claim can also be telling. A company or research group can minimize concerns by publishing technical claims in peer-reviewed journals or allowing credible third parties to evaluate their AI without giving away big intellectual property secrets, Narayanan says. Excessive secrecy is a big red flag.
With these strategies, you don’t need to be a computer engineer or data scientist to start thinking critically about AI claims. And, Narayanan says, the world needs many people from different backgrounds for societies to fully consider the real-world implications of AI.
Editor’s Note: The original version of this story misspelled Clayton Aldern’s last name as Alderton.