#437596 IROS Robotics Conference Is Online Now ...
The 2020 International Conference on Intelligent Robots and Systems (IROS) was originally going to be held in Las Vegas this week. Like ICRA last spring, IROS has transitioned to a completely online conference, which is wonderful news: Now everyone everywhere can participate in IROS without having to spend a dime on travel.
IROS officially opened yesterday, and the best news is that registration is entirely free! We’ll take a quick look at what IROS has on offer this year, which includes some stuff that’s brand new to IROS.
Registration for IROS is super easy, and did we mention that it’s free? To register, just go here and fill out a quick and easy form. You don’t even have to be an IEEE Member or anything like that, although in our unbiased opinion, an IEEE membership is well worth it. Once you get the confirmation email, go to https://www.iros2020.org/ondemand/, put in the email address you used to register, and that’s it, you’ve got IROS!
Here are some highlights:
Plenaries and Keynotes
Without the normal space and time constraints, you won’t have to pick and choose between any of the three plenaries or 10 keynotes. Some of them are fancier than others, but we’re used to that sort of thing by now. It’s worth noting that all three plenaries (and three of the 10 keynotes) are given by extraordinarily talented women, which is excellent to see.
Technical Tracks
There are over 1,400 technical talks, divided up into 12 categories of 20 sessions each. Note that each of the 12 categories that you see on the main page can be scrolled through to show all 20 of the sessions; if there’s a bright red arrow pointing left or right you can scroll, and if the arrow is transparent, you’ve reached the end.
On the session page, you’ll see an autoplaying advertisement (that you can mute but not stop), below which each talk has a preview slide, a link to a ~15 minute presentation video, and another link to a PDF of the paper. No supplementary videos are available, which is a bit disappointing. While you can leave a comment on the video, there’s no way of interacting with the author(s) directly through the IROS site, so you’ll have to check the paper for an email address if you want to ask a question.
Award Finalists
IROS has thoughtfully grouped all of the paper award finalists together into nine sessions. These are some truly outstanding papers, and it’s worth watching these sessions even if you’re not interested in specific subject matter.
Workshops and Tutorials
This content is affected a bit more by the asynchronous, on-demand format, and some of the workshops and tutorials have already taken place. But IROS has done a good job of collecting videos of everything and making them easy to access, and the dedicated websites for the workshops and tutorials themselves sometimes have more detailed info. If you’re having trouble finding the workshops and tutorials section, try the “Entrance” drop-down menu up at the top.
IROS Original Series
In place of social events and lab tours, IROS this year has come up with the “IROS Original Series,” which “hosts unique content that would be difficult to see at in-person events.” Right now, there are some interviews with a diverse group of interesting roboticists, and hopefully more will show up later on.
Enjoy!
Everything on the IROS On-Demand site should be available for at least the next month, so there’s no need to try and watch a thousand presentations over three days (which is what we normally have to do). So, relax, and enjoy yourself a bit by browsing all the options. And additional content will be made available over the next several weeks, so make sure to check back often to see what’s new.
[ IROS 2020 ]
#437554 Ending the COVID-19 Pandemic
Photo: F.J. Jimenez/Getty Images
The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.
At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?
Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.
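The study’s exact pipeline isn’t reproduced here, but the general recipe (turn a recording into spectral features, then feed them to a trained classifier) is easy to sketch. A rough illustration in Python, assuming the librosa and scikit-learn libraries and hypothetical pre-labeled training data:

```python
# Rough sketch only: not the MIT model, just the generic audio-classification recipe.
# Assumes librosa and scikit-learn, plus hypothetical pre-extracted training data files.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(wav_path: str) -> np.ndarray:
    """Summarize a recording as its mean MFCCs, a standard audio feature."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)          # one 20-dimensional vector per recording

X = np.load("cough_features.npy")     # hypothetical labeled dataset
y = np.load("cough_labels.npy")       # 1 = COVID-positive, 0 = negative
model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new recording is COVID-positive
print(model.predict_proba(cough_features("new_cough.wav").reshape(1, -1))[0, 1])
```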
But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have an ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:
Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent.
Hands: Wash them.
Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.
These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.
The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.
Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.
Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.
#437491 3.2 Billion Images and 720,000 Hours of ...
Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.
Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330.”
The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.
A FALSE video claiming Biden forgot what state he was in was viewed more than 1 million times on Twitter in the past 24 hours
In the video, Biden says “Hello, Minnesota.”
The event did indeed happen in MN — signs on stage read MN
But false video edited signs to read Florida pic.twitter.com/LdHQVaky8v
— Donie O'Sullivan (@donie) November 1, 2020
If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?
While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defense—and the only one you can control—is you.
Seeing Shouldn’t Always Be Believing
Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organizations, coalitions and social movements. However, fake photos and videos are often the most potent.
For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarized environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.
Only 11-25 percent of journalists globally use social media content verification tools, according to the International Centre for Journalists.
Could You Spot a Doctored Image?
Consider this photo of Martin Luther King Jr.
Dr. Martin Luther King Jr. Giving the middle finger #DopeHistoricPics pic.twitter.com/5W38DRaLHr
— Dope Historic Pics (@dopehistoricpic) December 20, 2013
This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit, and white supremacist websites.
In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.
“Those who love peace must learn to organize as effectively as those who love war.”
Dr. Martin Luther King Jr.
This photo was taken on June 19th, 1964, showing Dr King giving a peace sign after hearing that the civil rights bill had passed the senate. @snopes pic.twitter.com/LXHmwMYZS5
— Willie's Reserve (@WilliesReserve) January 21, 2019
Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.
Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.
You mean this guy who’s been photoshopped into three separate photos released by Fox News? pic.twitter.com/fAXpIKu77a
— Zander Yates ザンダーイェーツ (@ZanderYates) June 13, 2020
Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.
Image is more powerful than screams of Greta. A silent girl is holding a koala. She looks straight at you from the waters of the ocean where they found a refuge. She is wearing a breathing mask. A wall of fire is behind them. I do not know the name of the photographer #Australia pic.twitter.com/CrTX3lltdh
— EVC Music (@EVCMusicUK) January 6, 2020
Fully and Partially Synthetic Content
Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.
Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.
These people don’t exist; they’re just images generated by artificial intelligence. Generated Photos, CC BY
Editing Pixel Values and the (not so) Simple Crop
Cropping can greatly alter the context of a photo, too.
We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.
Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). AP
But what about edits that only alter pixel values such as color, saturation, or contrast?
One historical example illustrates the consequences of this. In 1994, Time magazine’s cover considerably “darkened” OJ Simpson’s police mugshot. This added fuel to a case already charged with racial tension; the magazine responded that “no racial implication was intended, by Time or by the artist.”
Tools for Debunking Digital Fakery
For those of us who don’t want to be duped by visual mis/disinformation, there are tools available—although each comes with its own limitations (something we discuss in our recent paper).
Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.
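To make the idea concrete, here is a toy sketch of what “invisible” watermarking can look like: a short provenance string hidden in the least-significant bits of an image’s red channel. Real schemes are far more robust (they survive recompression and cropping); this sketch assumes only the Pillow library and hypothetical file names.

```python
# Toy illustration of invisible watermarking (least-significant-bit embedding).
# Not a production scheme: it does not survive JPEG recompression or resizing.
from PIL import Image

def embed_watermark(src: str, message: str, dst: str) -> None:
    img = Image.open(src).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in message.encode()) + "00000000"
    pixels = list(img.getdata())
    for i in range(min(len(bits), len(pixels))):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | int(bits[i]), g, b)   # overwrite the lowest red bit
    img.putdata(pixels)
    img.save(dst, "PNG")                              # lossless format, so the bits survive

embed_watermark("original.png", "publisher:example-news;id=12345", "marked.png")
```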
Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:
Relies on unedited copies of the media already being online.
Doesn’t search the entire web.
Doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
Returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one (the short sketch below illustrates why).
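You can see the near-match limitation for yourself by comparing perceptual hashes of an image and a mirrored copy. This sketch assumes the Pillow and imagehash packages and a hypothetical local file name:

```python
# Perceptual hashes of an image and its mirror image usually differ substantially,
# which is one reason exact/near-match reverse search can miss simple edits.
from PIL import Image, ImageOps
import imagehash

original = Image.open("photo.jpg")       # hypothetical local image
flipped = ImageOps.mirror(original)      # horizontal flip

h_original = imagehash.phash(original)
h_flipped = imagehash.phash(flipped)

# Hamming distance between the 64-bit hashes; 0 means "perceptually identical"
print("hash distance:", h_original - h_flipped)
```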
Most Reliable Tools Are Sophisticated
Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive, and need specialized expertise.
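One classic first-pass check is simple enough to script yourself, though: “error level analysis,” which re-saves a JPEG and looks at where the recompression error differs across the frame. It is a cue, not proof, and this sketch assumes only Pillow and a hypothetical file name.

```python
# Error level analysis (ELA): re-save the JPEG and amplify the difference.
# Regions recompressed a different number of times can stand out, which
# sometimes (not always) indicates pasted-in content. Assumes Pillow only.
from PIL import Image, ImageChops

original = Image.open("suspect.jpg").convert("RGB")    # hypothetical file
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg").convert("RGB")

diff = ImageChops.difference(original, resaved)
diff = diff.point(lambda value: min(255, value * 15))  # amplify the residual
diff.save("ela_map.png")                               # bright patches deserve a closer look
```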
Still, you can access work in this field by visiting sites such as Snopes.com—which has a growing repository of “fauxtography.”
Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.
Moreover, improving them involves using large volumes of “training data,” but the image repositories used for this usually don’t contain the real-world images seen in the news.
If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.
The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:
Was it originally made for social media?
How widely and for how long was it circulated?
What responses did it receive?
Who were the intended audiences?
Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Simon Steinberger from Pixabay