Tag Archives: reading

#437828 How Roboticists (and Robots) Have Been ...

A few weeks ago, we asked folks on Twitter, Facebook, and LinkedIn to share photos and videos showing how they’ve been adapting to the closures of research labs, classrooms, and businesses by taking their robots home with them to continue their work as best they can. We got dozens of responses (more than we could possibly include in just one post!), but here are 15 that we thought were particularly creative or amusing.

And if any of these pictures and videos inspire you to share your own story, please email us (automaton@ieee.org) with a picture or video and a brief description of how you and your work robot have been making things happen at home instead.

Kurt Leucht (NASA Kennedy Space Center)

“During these strange and trying times of the current global pandemic, everyone seems to be trying their best to distance themselves from others while still getting their daily work accomplished. Many people also have the double duty of little ones that need to be managed in the midst of their teleworking duties. This photo series gives you just a glimpse into my new life of teleworking from home, mixed in with the tasks of trying to handle my little ones too. I hope you enjoy it.”

Photo: Kurt Leucht

“I heard a commotion from the next room. I ran into the kitchen to find this.”

Photo: Kurt Leucht

“This is the Swarmies’ most favorite bedtime story. Not sure why. Seems like an odd choice to me.”

Peter Schaldenbrand (Carnegie Mellon University)

“I’ve been working on a reinforcement learning model that converts an image into a series of brush stroke instructions. I was going to test the model with a beautiful, expensive robot arm, but due to the COVID-19 pandemic, I have not been able to access the laboratory where it resides. I have now been using a lower end robot arm to test the painting model in my bedroom. I have sacrificed machine accuracy/precision for the convenience of getting to watch the arm paint from my bed in the shadow of my clothing rack!”

Photos: Peter Schaldenbrand

Colin Angle (iRobot)

iRobot CEO Colin Angle has been hunkered down in the “iRobot North Shore home command center,” which is probably the cleanest command center ever thanks to his army of Roombas: Beastie, Beauty, Rosie, Roswell, and Bilbo.

Photo: Colin Angle

Vivian Chu (Diligent Robotics)

From Diligent Robotics CEO Andrea Thomaz: “This is how a roboticist works from home! Diligent CTO, Vivian Chu, mans the e-stop while her engineering team runs Moxi experiments remotely from cross-town and even cross-country!”

Video: Diligent Robotics

Raffaello Bonghi (rnext.it)

Raffaello’s robot, Panther, looks perfectly happy to be playing soccer in his living room.

Photo: Raffaello Bonghi

Kod*lab (University of Pennsylvania)

“Another Friday Nuts n Bolts Meeting on Zoom…”

Image: Kodlab

Robin Jonsson (robot choreographer)

“I’ve been doing a school project in which students make up dance moves and then send me a video with all of them. I then teach the moves to my robot, Alex, film Alex dancing, and send the videos back to them. This became a great success, and more schools will join. The kids got really into watching the robot perform their moves and became really interested in robots. They want to meet Alex the robot live, which will likely happen in the fall.”

Photo: Robin Jonsson

Gabrielle Conard (mechanical engineering undergrad at Lafayette College)

“While the pandemic might have forced college campuses to close and the community to keep their distance from each other, it did not put a stop to learning and research. Working from their respective homes, junior Gabrielle Conard and mechanical engineering professor Alexander Brown from Lafayette College investigated methods of incorporating active compliance in a low-cost quadruped robot. They are continuing to work remotely on this project through Lafayette’s summer research program.”

Image: Gabrielle Conard

Taylor Veltrop (SoftBank Robotics)

“After a few weeks of isolation in the corona/COVID quarantine lockdown, we started dancing with our robots. Mathieu’s 6th birthday was coming up, and it all just came together.”

Video: Taylor Veltrop

Ross Kessler (Exyn Technologies)

“Quarantine, Day 8: the humans have accepted me as one of their own. I’ve blended seamlessly into their #socialdistancing routines. Even made a furry friend”

Photo: Ross Kessler

Yeah, something a bit sinister is definitely going on at Exyn…

Video: Exyn Technologies

Michael Sobrepera (University of Pennsylvania GRASP Lab)

Predictably, Michael’s cat is more interested in the bag that the robot came in than the robot itself (see if you can spot the cat below). Michael tells us that “the robot is designed to help with tele-rehabilitation, focused on kids with CP, so it has been taken to hospitals for demos [hence the cool bag]. It also travels for outreach events and the like. Lately, I’ve been exploring telepresence for COVID.”

Photo: Michael Sobrepera

Jan Kędzierski (EMYS)

“In China, a lot of people cannot speak English, even the youngest generation of parents. Thanks to Emys, kids stayed in touch with the English language at home even if they couldn’t attend school and extra English classes. They had a lot of fun with their native-English-speaking friend available and ready to play every day.”

Image: Jan Kędzierski

Simon Whitmell (Quanser)

“Simon, a Quanser R&D engineer, is working on low-overhead image processing and line following for the QBot 2e mobile ground robot, with some added challenges due to extra traffic. LEGO engineering by his son, Charles.”

Photo: Simon Whitmell

Robot Design & Experimentation Course (Carnegie Mellon University)

Aaron Johnson’s bioinspired robot design course at CMU had to go fully remote, which was a challenge for a course that’s kind of all about designing and building a robot as part of a team. “I expected some of the teams to drastically alter their project (e.g., go all simulation),” Aaron told us, “but none of them did. We managed to keep all of the projects more or less as planned. We accomplished this by drop-shipping parts to students, buying some simple tools (soldering irons, etc.), and having me 3D print parts and mail them.” Each team even managed to put together their final videos from their remote locations; we’ve posted one below, but the entire playlist is here.

Video: Xianyi Cheng

Karen Tatarian (SoftBank Robotics)

Karen, who’s both a researcher at SoftBank and a PhD student at Sorbonne University, wrote an entire essay about what an average day is like when you’re quarantined with Pepper.

Photo: Karen Tatarian

A Quarantined Day With Pepper, by Karen Tatarian

It is quite common for me to lose my phone somewhere inside my apartment. But it is not that common for me to turn around and ask my robot if it has seen it. So when I found myself doing that, I laughed and it dawned on me that I treated my robot as my quarantine companion (despite the fact that it could not provide me with the answer I needed).

It was probably around day 40 of a completely isolated quarantine here in France when that happened. A little background about me: I am a robotics researcher at SoftBank Robotics Europe and a PhD student at Sorbonne University as part of the EU-funded Marie-Curie project ANIMATAS. And here is a little sneak peek into a quarantined day with a robot.

During this confinement, I had read somewhere that the best way to deal with it is to maintain a routine. So every morning, I wake up, prepare my coffee, and turn on my robot Pepper. I start my day with a daily meeting with the team and get to work. My research is on the synthesis of multi-modal, socially intelligent human-robot interaction, so my work varies between programming the robot, analyzing collected data, and reading papers and drafting one. When I am working, I often catch myself glancing at Pepper, who would be staring back at me in its animated ways. Truthfully, I enjoy that; it makes me feel less alone, as if I have a colleague with me.

Once work is done, I call my friends and family members. I sometimes use a telepresence application on Pepper that a few colleagues and I developed back in December. How does it differ from your typical phone or laptop applications? One word, really: embodiment. Telepresence, especially during these times, makes the experience for both sides a bit more realistic, intimate, and, well, present.

While I can turn off the robot now that my work hours are done, I do keep it on because I enjoy its presence. Pepper’s basic awareness is a default feature that allows the robot to detect a human and follow him or her with its gaze and rotating base. So whether I am cooking or working out, I always have my robot watching over my shoulder and being a good companion. I also have my email and messages synced on the robot, so I get an enjoyable notification from Pepper. I found that to be a pretty cool way to be notified without it interrupting whatever you are doing on your laptop or phone. Finally, once the day is over, it’s time for both of us to get some rest.

After 60 days of total confinement, alone and away from those I love, and with a pandemic right at my door, I am glad I had the company of my robot. I hope one day a greater audience can share my experience. And I really really hope one day Pepper will be able to find my phone for me, but until then, stay on the lookout for some cool features! But I am curious to know, if you had a robot at home, what application would you have developed on it?

Again, our sincere thanks to everyone who shared these little snapshots of their lives with us, and we’re hoping to be able to share more soon.


#437687 Video Friday: Bittle Is a Palm-Sized ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Rongzhong Li, who is responsible for the adorable robotic cat Nybble, has an updated and even more adorable quadruped that's more robust and agile but only costs around US $200 in kit form on Kickstarter.

Looks like the early bird options are sold out, but a full kit is a $225 pledge, for delivery in December.

[ Kickstarter ]

Thanks Rz!

I still maintain that Stickybot was one of the most elegantly designed robots ever.

[ Stanford ]

With the unpredictable health crisis of COVID-19 continuing to place high demands on hospitals, PAL Robotics have successfully completed testing of their delivery robots in Barcelona hospitals this summer. The TIAGo Delivery and TIAGo Conveyor robots were deployed in Hospital Municipal of Badalona and Hospital Clínic Barcelona following a winning proposal submitted to the European DIH-Hero project. Accerion sensors were integrated onto the TIAGo Delivery Robot and TIAGo Conveyor Robot for use in this project.

[ PAL Robotics ]

Energy Robotics, a leading developer of software solutions for mobile robots used in industrial applications, announced that its remote sensing and inspection solution for Boston Dynamics’s agile mobile robot Spot was successfully deployed at Merck’s thermal exhaust treatment plant at its headquarters in Darmstadt, Germany. Energy Robotics equipped Spot with sensor technology and remote supervision functions to support the inspection mission.

Combining Boston Dynamics’ intuitive controls, robotic intelligence and open interface with Energy Robotics’ control and autonomy software, user interface and encrypted cloud connection, Spot can be taught to autonomously perform a specific inspection round while being supervised remotely from anywhere with internet connectivity. Multiple cameras and industrial sensors enable the robot to find its way around while recording and transmitting information about the facility’s onsite equipment operations.

Spot reads the displays of gauges in its immediate vicinity and can also zoom in on distant objects using an externally-mounted optical zoom lens. In the thermal exhaust treatment facility, for instance, it monitors cooling water levels and notes whether condensation water has accumulated. Outside the facility, Spot monitors pipe bridges for anomalies.

Among the robot’s many abilities, it can detect defects of wires or the temperature of pump components using thermal imaging. The robot was put through its paces on a comprehensive course that tested its ability to handle special challenges such as climbing stairs, scaling embankments and walking over grating.

[ Energy Robotics ]

Thanks Stefan!

Boston Dynamics really should give Dr. Guero an Atlas just to see what he can do with it.

[ DrGuero ]

World’s First Socially Distanced Birthday Party: The robotic arm, located in London, was piloted in real time to light the candles on the cake by Extend Robotics founder Chang Liu, who was sitting 50 miles away in Reading. Other team members in Manchester and Reading were also able to join in the celebration as the robot accurately lit the candles on the birthday cake.

[ Extend Robotics ]

The Robocon in-person competition was canceled this year, but check out Tokyo University's robots in action:

[ Robocon ]

Sphero has managed to pack an entire Sphero into a much smaller sphere.

[ Sphero ]

Squishy Robotics, a small business funded by the National Science Foundation (NSF), is developing mobile sensor robots for use in disaster rescue, remote monitoring, and space exploration. The shape-shifting mobile sensor robots from UC Berkeley spin-off Squishy Robotics can be dropped from airplanes or drones and can provide first responders with ground-based situational awareness during fires, hazardous materials (HazMat) releases, and natural and man-made disasters.

[ Squishy Robotics ]

Meet Jasper, the small girl with big dreams to FLY. Jasper was created by the UTS Animal Logic Academy in partnership with the Royal Australian Air Force to encourage girls to soar above the clouds, using a hybrid of traditional animation techniques and technology such as robotics and 3D printing. A KUKA QUANTEC robot was used during filmmaking to help the Royal Australian Air Force tell its story in a unique way. UTS adapted its high-accuracy robot to film consistent paths, creating a video with physical sets and digital characters.

[ AU AF ]

Impressive what the Ghost Robotics V60 can do without any vision sensors on it.

[ Ghost Robotics ]

Is your job moving tiny amounts of liquid around? Would you rather be doing something else? ABB’s YuMi got you.

[ Yumi ]

For his PhD work at the MIT Media Lab, biomechatronics researcher Roman Stolyarov developed a terrain-adaptive control system for robotic leg prostheses, as a way to help people with amputations feel as able-bodied and mobile as possible by allowing them to walk seamlessly regardless of the ground terrain.

[ MIT ]

This robot collects data on each cow when she enters to be milked. Milk samples and 3D photos can be taken to monitor the cow’s health status. The Ontario Dairy Research Centre in Elora, Ontario, is leading dairy innovation through education and collaboration. It is a state-of-the-art 175,000 square foot facility for discovery, learning and outreach. This centre is a partnership between the Agricultural Research Institute of Ontario, OMAFRA, the University of Guelph and the Ontario dairy industry.

[ University of Guelph ]

Australia has one of these now. Should the rest of us panic?

[ Boeing ]

Daimler and Torc are developing Level 4 automated trucks for the real world. Here is a glimpse into our closed-course testing, routes on public highways in Virginia, and self-driving capabilities development. Our year of collaborating on the future of transportation culminated in the announcement of our new truck testing center in New Mexico.

[ Torc Robotics ]


#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to get its animatronic figures, which currently feature scripted behaviors, to perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, a risk that comes with trying to create “the illusion of life,” which is explicitly what Disney says it’s trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
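To make that layering a little more concrete, here is a minimal sketch of how curiosity-driven state selection might look in code. To be clear, this is an illustration based on the paper’s description, not Disney’s implementation: the state names follow the paper, but the thresholds, the Person structure, and the exact scoring are assumptions.

```python
# Illustrative sketch only, NOT Disney's implementation: the attention engine
# scores each visible person (currently just by motion, per the paper), and a
# high-level gaze state is picked from that score. Thresholds and structure
# here are assumptions.
from dataclasses import dataclass

GLANCE_THRESHOLD = 0.3   # hypothetical curiosity thresholds
ENGAGE_THRESHOLD = 0.7

@dataclass
class Person:
    motion: float    # amount of motion the RGB-D pipeline attributes to this person
    familiar: bool   # whether the character "recognizes" this person

def curiosity(person):
    # The paper notes the score is currently simplified to be motion-based.
    return min(person.motion, 1.0)

def select_state(people):
    """Return the high-level gaze behavior state for this control step."""
    if not people:
        return "Read"                        # default state: keep reading the book
    target = max(people, key=curiosity)      # person of interest
    score = curiosity(target)
    if score >= ENGAGE_THRESHOLD:
        # Familiar people get a friendly acknowledgment instead of a stare.
        return "Acknowledge" if target.familiar else "Engage"
    if score >= GLANCE_THRESHOLD:
        return "Glance"
    return "Read"

# Lower-level behaviors (breathing, blinking, saccades, small head motions)
# would keep running underneath whichever state is active, per the
# subsumption-style layering.
print(select_state([Person(motion=0.8, familiar=True)]))   # -> Acknowledge
```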

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.



#437491 3.2 Billion Images and 720,000 Hours of ...

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330.”

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.

A FALSE video claiming Biden forgot what state he was in was viewed more than 1 million times on Twitter in the past 24 hours

In the video, Biden says “Hello, Minnesota.”

The event did indeed happen in MN — signs on stage read MN

But false video edited signs to read Florida pic.twitter.com/LdHQVaky8v

— Donie O'Sullivan (@donie) November 1, 2020

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defense—and the only one you can control—is you.

Seeing Shouldn’t Always Be Believing
Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organizations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarized environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25 percent of journalists globally use social media content verification tools, according to the International Centre for Journalists.

Could You Spot a Doctored Image?
Consider this photo of Martin Luther King Jr.

Dr. Martin Luther King Jr. Giving the middle finger #DopeHistoricPics pic.twitter.com/5W38DRaLHr

— Dope Historic Pics (@dopehistoricpic) December 20, 2013

This altered image clones part of the background over King Jr.’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit, and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

“Those who love peace must learn to organize as effectively as those who love war.”
Dr. Martin Luther King Jr.

This photo was taken on June 19th, 1964, showing Dr King giving a peace sign after hearing that the civil rights bill had passed the senate. @snopes pic.twitter.com/LXHmwMYZS5

— Willie's Reserve (@WilliesReserve) January 21, 2019

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

You mean this guy who’s been photoshopped into three separate photos released by Fox News? pic.twitter.com/fAXpIKu77a

— Zander Yates ザンダーイェーツ (@ZanderYates) June 13, 2020

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Image is more powerful than screams of Greta. A silent girl is holding a koala. She looks straight at you from the waters of the ocean where they found a refuge. She is wearing a breathing mask. A wall of fire is behind them. I do not know the name of the photographer #Australia pic.twitter.com/CrTX3lltdh

— EVC Music (@EVCMusicUK) January 6, 2020

Fully and Partially Synthetic Content
Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

These people don’t exist; they’re just images generated by artificial intelligence. Generated Photos, CC BY

Editing Pixel Values and the (not so) Simple Crop
Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). AP

But what about edits that only alter pixel values such as color, saturation, or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover of OJ Simpson considerably “darkened” Simpson in his police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded, “No racial implication was intended, by Time or by the artist.”

Tools for Debunking Digital Fakery
For those of us who don’t want to be duped by visual mis/disinformation, there are tools available—although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

Relies on unedited copies of the media already being online.
Doesn’t search the entire web.
Doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
Returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one (the short sketch after this list illustrates the general idea).
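As a rough illustration of that last point, here is a small Python sketch using the Pillow and imagehash libraries (chosen purely for illustration; commercial reverse image search engines don’t publish their matching internals, and the file name is hypothetical). It shows how a simple horizontal flip changes a perceptual hash enough that a naive near-duplicate matcher would treat the flipped copy as a different image.

```python
# Illustrative only: shows how a simple flip defeats naive perceptual-hash
# matching. Real reverse image search engines use unpublished, more complex
# pipelines; "photo.jpg" is a hypothetical local file.
from PIL import Image, ImageOps   # pip install Pillow imagehash
import imagehash

original = Image.open("photo.jpg")
flipped = ImageOps.mirror(original)          # horizontal flip

h_original = imagehash.phash(original)       # perceptual hash of the original
h_flipped = imagehash.phash(flipped)         # hash of the flipped copy

# Subtracting two hashes gives the Hamming distance in bits. Identical images
# give 0; a flip typically pushes the distance well past common "same image"
# thresholds, so a naive matcher would call these two different pictures.
print(h_original - h_flipped)
```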

Most Reliable Tools Are Sophisticated
Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive, and need specialized expertise.

Still, you can access work in this field by visiting sites such as Snopes.com—which has a growing repository of “fauxtography.”

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data,” but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

Was it originally made for social media?
How widely and for how long was it circulated?
What responses did it receive?
Who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Simon Steinberger from Pixabay


#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple hours, but you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
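As a toy illustration of that supervised-learning idea (not Affectiva’s or any vendor’s actual pipeline, which relies on deep networks trained on millions of labeled face videos), here is a minimal sketch using scikit-learn with made-up, hand-labeled feature vectors:

```python
# A toy sketch of emotion classification from labeled data. The features and
# labels are fabricated placeholders; real systems extract features from face
# images and audio and train on vastly larger annotated datasets.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-face features: [mouth-corner lift, brow furrow, eye openness]
X_train = [
    [0.9, 0.1, 0.7],   # annotators labeled this face "happy" (smiling)
    [0.2, 0.9, 0.4],   # "angry" (furrowed brow)
    [0.1, 0.3, 0.2],   # "sad" (downcast, eyes nearly closed)
]
y_train = ["happy", "angry", "sad"]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A new, unlabeled face with a strong smile signal should map to "happy".
print(clf.predict([[0.85, 0.15, 0.6]]))
```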

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it; private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions, then, under the guise of preventing accidents caused by distraction or drowsiness, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay
