Watch This Drone Explode Into Maple Seed ...
As useful as conventional fixed-wing and quadrotor drones have become, they still tend to be relatively complicated, expensive machines that you really want to be able to use more than once. When a one-way trip is all that you have in mind, you want something simple, reliable, and cheap, and we’ve seen a bunch of different designs for drone gliders that more or less fulfill those criteria.
For an even simpler gliding design, you want to minimize both airframe mass and control surfaces, and the maple tree provides some inspiration in the form of samara, those distinctive seed pods that whirl to the ground in the fall. Samara are essentially just an unbalanced wing that spins, and while the natural ones don’t steer, adding an actuated flap to the robotic version and moving it at just the right time results in enough controllability to aim for a specific point on the ground.
Roboticists at the Singapore University of Technology and Design (SUTD) have been experimenting with samara-inspired drones, and in a new paper in IEEE Robotics and Automation Letters, they explore what happens if you attach five of the drones together and then separate them in midair.
The drone with all five wings attached (top left), and details of the individual wings: (a) smaller 44.9-gram wing for semi-indoor testing; (b) larger 83.4-gram wing able to carry a Pixracer, GPS, and magnetometer for directional control experiments. Image: Singapore University of Technology and Design
Fundamentally, a samara design acts as a decelerator for an aerial payload. You can think of it like a parachute: It makes sure that whatever you toss out of an airplane gets to the ground intact rather than just smashing itself to bits on impact. Steering is possible, but you don’t get a lot of stability or precision control. The RA-L paper describes one solution to this, which is to collaboratively use five drones at once in a configuration that looks a bit like a helicopter rotor.
And once the multi-drone is right where you want it, the five individual samara drones can split off all at once, heading out on their own missions. It's quite a sight:
The concept features a collaborative autorotation in the initial stage of drop whereby several wings are attached to each other to form a rotor hub. The combined form achieves higher rotational energy and a collaborative control strategy is possible. Once closer to the ground, they can exit the collaborative form and continue to descend to unique destinations. A section of each wing forms a flap and a small actuator changes its pitch cyclically. Since all wing-flaps can actuate simultaneously in collaborative mode, better maneuverability is possible, hence higher resistance against environmental conditions. The vertical and horizontal speeds can be controlled to a certain extent, allowing it to navigate towards a target location and land softly.
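To get a feel for the cyclic flap control described above, here is a toy sketch of the idea, not the SUTD team's actual controller: deflect the flap hardest once per revolution, when the spinning wing points along the direction you want to drift, so the asymmetric lift nudges the wing toward the target each turn. The function name, the 15-degree amplitude, and the cosine profile are all illustrative assumptions.

```python
import math

def cyclic_flap_command(azimuth_rad, target_bearing_rad, amplitude_deg=15.0):
    """Toy cyclic control: peak flap deflection once per revolution, when
    the wing's azimuth lines up with the desired drift direction."""
    return amplitude_deg * math.cos(azimuth_rad - target_bearing_rad)

# One simulated revolution in 30-degree steps, steering toward a bearing
# of 90 degrees: the commanded deflection peaks at that azimuth.
for deg in range(0, 360, 30):
    cmd = cyclic_flap_command(math.radians(deg), math.radians(90))
    print(f"azimuth {deg:3d} deg -> flap {cmd:+6.1f} deg")
```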
The samara autorotating wing drones themselves could conceivably carry small payloads like sensors or emergency medical supplies, with the small-scale versions in the video able to handle an extra 30 grams of payload. While they might not have as much capacity as a traditional fixed-wing glider, they have the advantage of being able to descend vertically, and can perform better than a parachute due to their ability to steer. The researchers plan on improving the design of their little drones, with the goal of increasing the rotation speed and improving the control performance of both the individual drones and the multi-wing collaborative version.
“Dynamics and Control of a Collaborative and Separating Descent of Samara Autorotating Wings,” by Shane Kyi Hla Win, Luke Soe Thura Win, Danial Sufiyan, Gim Song Soh, and Shaohui Foong from Singapore University of Technology and Design, appears in the current issue of IEEE Robotics and Automation Letters.
[ SUTD ]
3 Easy Ways to Evaluate AI Claims
When every other tech startup claims to use artificial intelligence, it can be tough to figure out if an AI service or product works as advertised. In the midst of the AI “gold rush,” how can you separate the nuggets from the fool’s gold?
There’s no shortage of cautionary tales involving overhyped AI claims. And applying AI technologies to health care, education, and law enforcement means that getting it wrong can have real consequences for society—not just for investors who bet on the wrong unicorn.
So IEEE Spectrum asked experts to share their tips for how to identify AI hype in press releases, news articles, research papers, and IPO filings.
“It can be tricky, because I think the people who are out there selling the AI hype—selling this AI snake oil—are getting more sophisticated over time,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative.
The term “AI” is perhaps most frequently used to describe machine learning algorithms (and deep learning algorithms, which require even less human guidance) that analyze huge amounts of data and make predictions based on patterns that humans might miss. These popular forms of AI are mostly suited to specialized tasks, such as automatically recognizing certain objects within photos. For that reason, they are sometimes described as “weak” or “narrow” AI.
Some researchers and thought leaders like to talk about the idea of “artificial general intelligence” or “strong AI” that has human-level capacity and flexibility to handle many diverse intellectual tasks. But for now, this type of AI remains firmly in the realm of science fiction.
“AI has no well-defined meaning and many so-called AI companies are simply trying to take advantage of the buzz around that term,” says Arvind Narayanan, a computer scientist at Princeton University. “Companies have even been caught claiming to use AI when, in fact, the task is done by human workers.”
Here are three ways to recognize AI hype.
Look for Buzzwords
One red flag is what Hwang calls the “hype salad.” This means stringing together the term “AI” with many other tech buzzwords such as “blockchain” or “Internet of Things.” That doesn’t automatically disqualify the technology, but spotting a high volume of buzzwords in a post, pitch, or presentation should raise questions about what exactly the company or individual has developed.
Other experts agree that strings of buzzwords can be a red flag. That’s especially true if the buzzwords are never really explained in technical detail, and are simply tossed around as vague, poorly defined terms, says Marzyeh Ghassemi, a computer scientist and biomedical engineer at the University of Toronto in Canada.
“I think that if it looks like a Google search—picture ‘interpretable blockchain AI deep learning medicine’—it's probably not high-quality work,” Ghassemi says.
Hwang also suggests mentally replacing all mentions of “AI” in an article with the term “magical fairy dust.” It’s a way of seeing whether an individual or organization is treating the technology like magic. If so—that’s another good reason to ask more questions about what exactly the AI technology involves.
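Hwang’s substitution test is easy to run mechanically over a whole press release. A two-line sketch (the sample pitch text here is invented):

```python
import re

pitch = "Our AI-first platform leverages AI and blockchain to disrupt logistics."
# If the sentence reads just as plausibly afterward, the claim explains nothing.
print(re.sub(r"\bAI\b", "magical fairy dust", pitch))
```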
And even the visual imagery used to illustrate AI claims can indicate that an individual or organization is overselling the technology.
“I think that a lot of the people who work on machine learning on a day-to-day basis are pretty humble about the technology, because they’re largely confronted with how frequently it just breaks and doesn't work,” Hwang says. “And so I think that if you see a company or someone representing AI as a Terminator head, or a big glowing HAL eye or something like that, I think it’s also worth asking some questions.”
Interrogate the Data
It can be hard to evaluate AI claims without any relevant expertise, says Ghassemi at the University of Toronto. Even experts need to know the technical details of the AI algorithm in question and have some access to the training data that shaped the AI model’s predictions. Still, savvy readers with some basic knowledge of applied statistics can search for red flags.
To start, readers can look for possible bias in training data based on small sample sizes or a skewed sample that fails to reflect the broader population, Ghassemi says. After all, an AI model trained only on health data from white men would not necessarily achieve similar results for other populations of patients.
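Anyone with access to a dataset, or even its summary statistics, can make that check in a few lines. A minimal sketch, assuming a hypothetical patient manifest whose columns and values are invented for illustration:

```python
import pandas as pd

# Hypothetical training-set manifest; the columns and values are invented.
manifest = pd.DataFrame({
    "sex":   ["M", "M", "M", "M", "F", "M", "M", "F", "M", "M"],
    "label": [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
})

# A heavily skewed sample is a warning sign that performance claims may
# not transfer to underrepresented groups.
print(manifest["sex"].value_counts(normalize=True))
```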
How machine learning and deep learning models perform also depends on how well humans labeled the sample datasets used to train these programs. This task can be straightforward when labeling photos of cats versus dogs, but gets more complicated when assigning disease diagnoses to certain patient cases.
Medical experts frequently disagree with each other on diagnoses—which is why many patients seek a second opinion. Not surprisingly, this ambiguity can also affect the diagnostic labels that experts assign in training datasets. “For me, a red flag is not demonstrating deep knowledge of how your labels are defined,” Ghassemi says.
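One standard way to quantify that disagreement is an inter-annotator agreement statistic such as Cohen’s kappa. A minimal sketch, with invented labels from two hypothetical annotators:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses from two experts labeling the same ten cases.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Kappa near 1 means strong agreement; near 0 means the "ground truth"
# is mostly noise, whatever accuracy a model later claims against it.
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```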
Such training data can also reflect the cultural stereotypes and biases of the humans who labeled the data, says Narayanan at Princeton University. Like Ghassemi, he recommends taking a hard look at exactly what the AI has learned: “A good way to start critically evaluating AI claims is by asking questions about the training data.”
Another red flag is presenting an AI system’s performance through a single accuracy figure without much explanation, Narayanan says. Claiming that an AI model achieves “99 percent” accuracy doesn’t mean much without knowing the baseline for comparison—such as whether other systems have already achieved 99 percent accuracy—or how well that accuracy holds up in situations beyond the training dataset.
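A quick way to see why a lone accuracy figure is hollow: on imbalanced data, a model that always guesses the majority class can look impressive. A sketch on synthetic data, where the 9:1 class split is an assumption chosen to make the point:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data where roughly 90 percent of samples share one class.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Without the baseline, the model's number alone would sound impressive.
print(f"always-guess-majority baseline: {baseline.score(X_te, y_te):.2%}")
print(f"actual model:                   {model.score(X_te, y_te):.2%}")
```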
Narayanan also emphasized the need to ask questions about an AI model’s false positive rate—the rate of making wrong predictions about the presence of a given condition. Even if the false positive rate of a hypothetical AI service is just one percent, that could have major consequences if that service ends up screening millions of people for cancer.
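The arithmetic is worth doing explicitly. Taking the article’s hypothetical one percent false positive rate and an assumed screening population of 10 million people (a figure invented for illustration):

```python
screened = 10_000_000       # assumed screening population, for illustration
false_positive_rate = 0.01  # the article's hypothetical "just one percent"

# Every false alarm means follow-up tests, costs, and anxiety for a
# healthy person.
print(f"{int(screened * false_positive_rate):,} false positives")
```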
Readers can also consider whether using AI in a given situation offers any meaningful improvement compared to traditional statistical methods, says Clayton Aldern, a data scientist and journalist who serves as managing director for Caldern LLC. He gave the hypothetical example of a “super-duper-fancy deep learning model” that achieves a prediction accuracy of 89 percent, compared to a “little polynomial regression model” that achieves 86 percent on the same dataset.
“We're talking about a three-percentage-point increase on something that you learned about in Algebra 1,” Aldern says. “So is it worth the hype?”
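Aldern’s comparison is easy to rerun on any dataset. Here is a minimal sketch of the idea on synthetic data; the models, sizes, and seeds are stand-ins, not his actual experiment:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# The "little polynomial regression model" you learned about in algebra...
simple = make_pipeline(PolynomialFeatures(degree=2),
                       LogisticRegression(max_iter=2000)).fit(X_tr, y_tr)
# ...versus a fancier neural network trained on the same data.
fancy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=1).fit(X_tr, y_tr)

# If the gap is a few points, ask whether the extra complexity earns its hype.
print(f"polynomial baseline: {simple.score(X_te, y_te):.2%}")
print(f"neural network:      {fancy.score(X_te, y_te):.2%}")
```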
Don’t Ignore the Drawbacks
The hype surrounding AI isn’t just about the technical merits of services and products driven by machine learning. Overblown claims about the beneficial impacts of AI technology—or vague promises to address ethical issues related to deploying it—should also raise red flags.
“If a company promises to use its tech ethically, it is important to question if its business model aligns with that promise,” Narayanan says. “Even if employees have noble intentions, it is unrealistic to expect the company as a whole to resist financial imperatives.”
One example might be a company with a business model that depends on leveraging customers’ personal data. Such companies “tend to make empty promises when it comes to privacy,” Narayanan says. And, if companies hire workers to produce training data, it’s also worth asking whether the companies treat those workers ethically.
The transparency—or lack thereof—about any AI claim can also be telling. A company or research group can minimize concerns by publishing technical claims in peer-reviewed journals or allowing credible third parties to evaluate their AI without giving away big intellectual property secrets, Narayanan says. Excessive secrecy is a big red flag.
With these strategies, you don’t need to be a computer engineer or data scientist to start thinking critically about AI claims. And, Narayanan says, the world needs many people from different backgrounds for societies to fully consider the real-world implications of AI.
Editor’s Note: The original version of this story misspelled Clayton Aldern’s last name as Alderton.