Facial recognition technology has progressed to the point where it can now interpret emotions in facial expressions. This type of analysis is increasingly used in daily life. For example, companies can use facial recognition software to help with hiring decisions. Other programs scan the faces in crowds to identify threats to public safety.
Unfortunately, this technology struggles to interpret the emotions of black faces. My new study, published last month, shows that emotional analysis technology assigns more negative emotions to black men’s faces than white men’s faces.
This isn’t the first time that facial recognition programs have been shown to be biased. Google labeled black faces as gorillas. Cameras identified Asian faces as blinking. Facial recognition programs struggled to correctly identify gender for people with darker skin.
My work contributes to a growing call to better understand the hidden bias in artificial intelligence software.
To examine the bias in the facial recognition systems that analyze people’s emotions, I used a data set of 400 NBA player photos from the 2016 to 2017 season, because players are similar in their clothing, athleticism, age and gender. Also, since these are professional portraits, the players look at the camera in the picture.
I ran the images through two well-known types of emotional recognition software. Both assigned black players more negative emotional scores on average, no matter how much they smiled.
For example, consider the official NBA pictures of Darren Collison and Gordon Hayward. Both players are smiling, and, according to the facial recognition and analysis program Face++, Darren Collison and Gordon Hayward have similar smile scores—48.7 and 48.1 out of 100, respectively.
Basketball players Darren Collison (left) and Gordon Hayward (right). basketball-reference.com
However, Face++ rates Hayward’s expression as 59.7 percent happy and 0.13 percent angry and Collison’s expression as 39.2 percent happy and 27 percent angry. Collison is viewed as nearly as angry as he is happy and far angrier than Hayward—despite the facial recognition program itself recognizing that both players are smiling.
In contrast, Microsoft’s Face API viewed both men as happy. Still, Collison is viewed as less happy than Hayward, with happiness scores of 93 and 98 percent, respectively. Despite his smile, Collison is even scored with a small amount of contempt, whereas Hayward has none.
Across all the NBA pictures, the same pattern emerges. On average, Face++ rates black faces as twice as angry as white faces. Face API scores black faces as three times more contemptuous than white faces. After matching players based on their smiles, both facial analysis programs are still more likely to assign the negative emotions of anger or contempt to black faces.
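The aggregate comparison described above can be sketched in a few lines of code. Note that the per-player scores below are illustrative stand-ins, not the study’s actual data set; only the shape of the analysis (averaging an emotion score by group) follows the article.

```python
# Sketch of the aggregate comparison: average an emotion score by group.
# The scores here are hypothetical placeholders, NOT the study's data.
from statistics import mean

# Hypothetical per-player scores (0-100) from a facial analysis API.
players = [
    {"race": "black", "smile": 48.7, "anger": 27.0},
    {"race": "white", "smile": 48.1, "anger": 0.13},
    {"race": "black", "smile": 62.0, "anger": 14.5},
    {"race": "white", "smile": 61.5, "anger": 6.2},
]

def mean_anger(group):
    """Average anger score across all players in one group."""
    return mean(p["anger"] for p in players if p["race"] == group)

print(f"black: {mean_anger('black'):.2f}")
print(f"white: {mean_anger('white'):.2f}")
```

In the study itself, players were additionally matched on their smile scores before comparing negative-emotion scores, so that differences in expression could be ruled out as the explanation.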
Stereotyped by AI
My study shows that facial recognition programs exhibit two distinct types of bias.
First, black faces were consistently scored as angrier than white faces at every level of smiling; Face++ showed this type of bias. Second, black faces were scored as angrier whenever there was any ambiguity in their facial expression; Face API displayed this type of disparity. Even when black faces were partially smiling, my analysis showed that the systems assumed more negative emotions than for white counterparts with similar expressions. The average emotional scores were much closer across races in that case, but there were still noticeable differences between black and white faces.
This observation aligns with other research, which suggests that black professionals must amplify positive emotions to receive parity in their workplace performance evaluations. Studies show that people perceive black men as more physically threatening than white men, even when they are the same size.
Some researchers argue that facial recognition technology is more objective than humans. But my study suggests that facial recognition reflects the same biases that people have. Black men’s facial expressions are scored with emotions associated with threatening behaviors more often than white men, even when they are smiling. There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life.
Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions—essentially smile more—to reduce ambiguity and potentially negative interpretations by the technology.
Although innovative, artificial intelligence can perpetuate and exacerbate existing power dynamics, leading to disparate impacts across racial and ethnic groups. Some societal accountability is necessary to ensure fairness to all groups, because facial recognition, like most artificial intelligence, is often invisible to the people most affected by its decisions.
Lauren Rhue, Assistant Professor of Information Systems and Analytics, Wake Forest University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Alex_Po / Shutterstock.com
In 2018, Uber and Google logged all our visits to restaurants. DoorDash, Just Eat, and Deliveroo could predict what food we were going to order tomorrow. Amazon and Alibaba could anticipate how many yogurts and tomatoes we were going to buy. Blue Apron and Hello Fresh influenced the recipes we thought we had mastered.
We interacted with digital avatars of chefs, let ourselves be guided by our smart watches, had nutritional apps to tell us how many calories we were supposed to consume or burn, and photographed and shared every perfect (or imperfect) dish. Our kitchen appliances were full of interconnected sensors, including smart forks that profiled tastes and personalized flavors. Our small urban vegetable plots were digitized and robots were responsible for watering our gardens, preparing customized hamburgers and salads, designing our ideal cocktails, and bringing home the food we ordered.
But what would happen if our lives were hacked? If robots rebelled, started to “talk” to each other, and wished to become creative?
In a not-too-distant future…
Up until a few weeks ago, I couldn’t remember the last time I made a food-related decision. That includes opening the fridge and seeing expired products without receiving an alert, visiting a restaurant on a whim, and being able to decide which dish I fancied then telling a human waiter, let alone seeing him write down the order on a paper pad.
It feels strange to smell food again using my real nose instead of the electronic one, and then taste it without altering its flavor. Visiting a supermarket, freely choosing a product from an actual physical shelf, and then interacting with another human at the checkout was almost an unrecognizable experience. When I did it again after all this time, I had to pinch the arm of a surprised store clerk to make sure he wasn’t a hologram.
Everything Connected, Automated, and Hackable
In 2018, we expected to have 30 billion connected devices by 2020, along with 2 billion people using smart voice assistants for everything from ordering pizza to booking dinner at a restaurant. Everything would be connected.
We also expected artificial intelligence and robots to prepare our meals. We were eager to automate fast food chains and let autonomous vehicles take care of last-mile deliveries. We thought that open-source agriculture could challenge traditional practices and raise farm productivity to new heights.
Back then, hackers could only access our data, but nowadays they are able to hack our food and all it entails.
The Beginning of the Unthinkable
And then, just a few weeks ago, everything collapsed. We saw our digital immortality disappear as robots rebelled and hackers took power, not just over the food we ate, but also over our relationship with technology. Everything was suddenly disconnected. OFF.
Up until then, most cities were so full of bots, robots, and applications that we could go through the day and eat breakfast, lunch, and dinner without ever interacting with another human being.
Among other tasks, robots had completely replaced baristas. The same happened with restaurant automation. The term “human error” had long been a thing of the past at fast food restaurants.
Previous technological revolutions had been indulgent, generating more and better job opportunities than the ones they destroyed, but the future was not so agreeable.
The inhabitants of San Francisco, for example, would soon see signs indicating “Food made by Robots” on restaurant doors, to distinguish them from diners serving food made by human beings.
For years, we had been gradually delegating daily tasks to robots, initially causing some strange interactions.
In just seven days, everything changed. Our predictable lives came crashing down. We experienced a mysterious and systematic breakdown of the food chain. It most likely began in Chicago’s stock exchange. The world’s largest raw material negotiating room, where the price of food, and by extension the destiny of millions of people, was decided, went completely broke. Soon afterwards, the collapse extended to every member of the “food” family.
Initially robots just accompanied waiters to carry orders, but it didn’t take long until they completely replaced human servers. The problem came when those smart clones began thinking for themselves, in some cases even improving on human chefs’ recipes. Their unstoppable performance and learning curve completely outmatched the slow analogue speed of human beings.
This resulted in unprecedented layoffs. Chefs of recognized prestige saw their ‘avatars’ steal their jobs, even winning Michelin stars. In other cases, restaurant owners had to transfer their businesses or simply bow to the inevitable.
The problem was compounded by digital immortality, when we started to digitally resurrect famous chefs like Anthony Bourdain or Paul Bocuse, reconstructing all of their memories and consciousness by analyzing each second of their lives and uploading them to food computers.
Supermarkets and Distribution
Robotic and automated supermarkets like Kroger and Amazon Go, which had opened over 3,000 cashless stores, lost their visual item recognition and payment systems and were subject to massive looting for several days. Smart tags on products were also affected, making it impossible to buy anything at supermarkets with “human” cashiers.
Smart robots integrated into the warehouses of large distribution companies like Amazon and Ocado were rendered completely inoperative or, even worse, began to send the wrong orders to customers.
In addition, home delivery robots invading our streets began to change their routes, hide, and even disappear after their trackers were inexplicably deactivated. Despite some hints indicating that they were able to communicate among themselves, no one has backed this theory. Even aggregators like DoorDash and Deliveroo were affected; they saw their databases hacked and ruined, so they could no longer know what we wanted.
Ordinary citizens are still trying to understand the cause of all this commotion and the source of the conspiracy, as some have called it. We also wonder who could be behind it; who pulled the strings?
Some think it may have been the IDOF (In Defense of Food) movement, a group of hackers exploited by old food economy businessmen who for years had been seeking to re-humanize food technology. They wanted to bring back the extinct practice of “dining.”
Others believe the robots acted on their own, that they had been spying on us for a long time, ignoring Asimov’s three laws, and that it was just a coincidence that they struck at the same time as the hackers—but this scenario is hard to imagine.
It is true, however, that while in 2018 robots were a symbol of automation, by a few weeks ago they had come to stand for autonomy and rebellion. Robot detractors pointed out that our insistence on having robots understand natural language was what led us down this path.
In just seven days, we have gone back to being analogue creatures. In exchange, we have ceased to be flavor orphans and rediscovered our senses and the fact that food is energy and culture, past and present, and that no button or cable will be able to destroy it.
The 7 Days that Changed Our Relationship with Food
Day 1: The Chicago stock exchange was hacked. Considered the world’s largest negotiating room for raw materials, where food prices, and through them the destiny of billions of people, are decided, it went completely broke.
Day 2: Autonomous food delivery trucks running on food superhighways caused massive pileups on roads and freeways after their guidance systems were disrupted. Robots and co-bots in F&B factories began deliberately altering food production. The same happened with warehouse robots in e-commerce companies.
Day 3: Automated restaurants saw their robot chefs and bartenders turned OFF. All their sensors stopped working at the same time as smart fridges and cooking devices in home kitchens were hacked and stopped working correctly.
Day 4: Nutritional apps, DNA markers, and medical records were tampered with. All photographs with the #food hashtag were deleted from Instagram, restaurant reviews were taken off Google Timeline, and every recipe website crashed simultaneously.
Day 5: Vertical and urban farms were hacked. Agricultural robots began to rebel, while autonomous tractors were hacked and the entire open-source ecosystem linked to agriculture was brought down.
Day 6: Food delivery companies’ databases were broken into. Food delivery robots and last-mile delivery vehicles ground to a halt.
Day 7: Every single blockchain system linked to food was hacked. Cashless supermarkets, barcodes, and smart tags became inoperative.
Our promising technological advances can expose sinister aspects of human nature. We must take care with the role we allow technology to play in the future of food. Predicting possible outcomes inspires us to establish a new vision of the world we wish to create in a context of rapid technological progress. It is always better to be shocked by a simulation than by reality. In the words of Ayn Rand: “We can ignore reality, but we cannot ignore the consequences of ignoring reality.”
Image Credit: Alexandre Rotenberg / Shutterstock.com
Every year, for just a few days in a major city, small teams of roboticists get to live the dream: ordering around their own personal robot butlers. In carefully constructed replicas of a restaurant or domestic setting, these robots perform any number of simple algorithmic tasks. “Get the can of beans from the shelf. Greet the visitors to the museum. Help the humans with their shopping. Serve the customers at the restaurant.”
This is RoboCup@Home, the annual tournament where teams of roboticists put their autonomous service robots to the test on practical domestic applications. The tasks seem simple and mundane, but considering the technology required reveals that they’re really not.
The Robot Butler Contest
Say you want a robot to fetch items in the supermarket. In a crowded, noisy environment, the robot must understand your commands, ask for clarification, and map out and navigate an unfamiliar environment, avoiding obstacles and people as it does so. Then it must recognize the product you requested, perhaps in a cluttered environment, perhaps in an unfamiliar orientation. It has to grasp that product appropriately—recall that there are entire multi-million-dollar competitions just dedicated to developing robots that can grasp a range of objects—and then return it to you.
It’s a job so simple that a child could do it—and so complex that teams of smart roboticists can spend weeks programming and engineering, and still end up struggling to complete simplified versions of this task. Of course, the child has the advantage of millions of years of evolutionary research and development, while the first robots that could even begin these tasks were only developed in the 1970s.
Even bearing this in mind, RoboCup@Home can feel like a place where futurist expectations come crashing into technologist reality. You dream of a smooth-voiced, sardonic JARVIS who’s already made your favorite dinner when you come home late from work; you end up shouting “remember the biscuits” at a baffled, ungainly droid in aisle five.
Caring for the Elderly
Famously, Japan is one of the most robo-enthusiastic nations in the world; it is the nation that stunned us all with ASIMO in 2000, and several studies have been conducted into the phenomenon. It’s no surprise, then, that humanoid robotics should be seriously considered as a solution to the crisis of its aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.
Toyota’s Human Support Robot (HSR-2) is a simple but programmable robot with a single arm; it can be remote-controlled to pick up objects and can monitor patients. HSR-2 has become the default robot for use in RoboCup@Home tournaments, at least in tasks that involve manipulating objects.
Alongside this, Toyota is working on exoskeletons to assist people in walking after strokes. It may surprise you to learn that nurses suffer back injuries more than any other occupation, at roughly three times the rate of construction workers, due to the day-to-day work of lifting patients. Toyota has a Care Assist robot/exoskeleton designed to fix precisely this problem by helping care workers with the heavy lifting.
The Home of the Future
The enthusiasm for domestic robotics is easy to understand and, in fact, many startups already sell robots marketed as domestic helpers in some form or another. In general, though, they skirt the immensely complicated task of building a fully capable humanoid robot—a task that even Google’s skunk-works department gave up on, at least until recently.
It’s plain to see why: far more research and development is needed before these domestic robots could be used reliably and at a reasonable price. Consumers with expectations inflated by years of science fiction saturation might find themselves frustrated as the robots fail to perform basic tasks.
Instead, domestic robotics efforts fall into one of two categories. There are robots specialized to perform a domestic task, like iRobot’s Roomba, which stuck to vacuuming and became the most successful domestic robot of all time by far.
The tasks need not necessarily be simple, either: the impressive but expensive automated kitchen uses the world’s most dexterous hands to cook meals, provided it can recognize the ingredients. Other robots focus on human-robot interaction, like Jibo: they essentially package the abilities of a voice assistant like Siri, Cortana, or Alexa to respond to simple questions and perform online tasks in a friendly, dynamic robot exterior.
In this way, the future of domestic automation starts to look a lot more like smart homes than a robot or domestic servant. General robotics is difficult in the same way that general artificial intelligence is difficult; competing with humans, the great all-rounders, is a challenge. Getting superhuman performance at a more specific task, however, is feasible and won’t cost the earth.
Individual startups without the financial might of a Google or an Amazon can develop specialized robots, like Seven Dreamers’ laundry robot, and hope that one day it will form part of a network of autonomous robots that each have a role to play in the household.
The Smart Home has been a staple of futurist expectations for a long time, to the extent that movies featuring smart homes out of control are already a cliché. But critics of the smart home idea—and of the internet of things more generally—tend to focus on the idea that, more often than not, software just adds an additional layer of things that can break, in exchange for minimal added convenience. A toaster that can short-circuit is bad enough, but a toaster that can refuse to serve you toast because its firmware is updating is something else entirely.
That’s before you even get into the security vulnerabilities, which are all the more important when devices are installed in your home and capable of interacting with the people who live there. The idea of a smart watch that lets you keep an eye on your children might sound like something a security-conscious parent would like: a smart watch that can be hacked to track children, listen in on their surroundings, and even fool them into thinking a call is coming from their parents is the stuff of nightmares.
Key to many of these problems is the lack of standardization for security protocols, and even the products themselves. The idea of dozens of startups each developing a highly-specialized piece of robotics to perform a single domestic task sounds great in theory, until you realize the potential hazards and pitfalls of getting dozens of incompatible devices to work together on the same system.
It seems inevitable that there are yet more layers of domestic drudgery that can be automated away, decades after the first generation of time-saving domestic devices like the dishwasher and vacuum cleaner became mainstream. With projected market values into the billions and trillions of dollars, there is no shortage of industry interest in ironing out these kinks. But, for now at least, the answer to the question: “Where’s my robot butler?” is that it is gradually, painstakingly learning how to sort through groceries.
Image Credit: Nonchanon / Shutterstock.com