Tag Archives: develops
#431389 Tech Is Becoming Emotionally ...
Many people get frustrated with technology when it malfunctions or is counterintuitive. The last thing people might expect is for that same technology to pick up on their emotions and engage with them differently as a result.
All of that is now changing. Computers are increasingly able to figure out what we’re feeling—and it’s big business.
A recent report predicts that the global affective computing market will grow from $12.2 billion in 2016 to $53.98 billion by 2021. The report by research and consultancy firm MarketsandMarkets observed that enabling technologies have already been adopted in a wide range of industries and noted a rising demand for facial feature extraction software.
Affective computing is also referred to as emotion AI or artificial emotional intelligence. Although many people are still unfamiliar with the category, researchers in academia have already discovered a multitude of uses for it.
At the University of Tokyo, Professor Toshihiko Yamasaki decided to develop a machine learning system that evaluates the quality of TED Talk videos. Of course, a TED Talk is only considered to be good if it resonates with a human audience. On the surface, this would seem too qualitatively abstract for computer analysis. But Yamasaki wanted his system to watch videos of presentations and predict user impressions. Could a machine learning system accurately evaluate the emotional persuasiveness of a speaker?
Yamasaki and his colleagues came up with a method that analyzed correlations and “multimodal features including linguistic as well as acoustic features” in a dataset of 1,646 TED Talk videos. The experiment was successful. The method obtained “a statistically significant macro-average accuracy of 93.3 percent, outperforming several competitive baseline methods.”
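To make the idea of multimodal prediction concrete, here is a minimal sketch in Python. It is not Yamasaki's actual pipeline: the features are synthetic stand-ins for linguistic and acoustic measurements, and a plain logistic regression stands in for the paper's models. It simply shows how features from two modalities can be fused into one matrix and used to predict an audience-rating label.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_talks = 1646  # size of the TED Talk dataset cited above

# Hypothetical multimodal features: e.g. word/sentiment statistics (linguistic)
# and pitch/pause statistics (acoustic). Synthetic values for illustration only.
linguistic = rng.normal(size=(n_talks, 20))
acoustic = rng.normal(size=(n_talks, 10))

X = np.hstack([linguistic, acoustic])   # feature-level fusion of both modalities
y = rng.integers(0, 2, size=n_talks)    # 1 = "highly rated" (synthetic label)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")

In the real study, of course, the features come from transcripts and audio tracks rather than random numbers, and the labels come from viewer impressions.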
A machine was able to predict whether a person would emotionally connect with other people. In their report, the authors noted that these findings could be used for recommendation purposes and as feedback to presenters to help them improve the quality of their talks. However, the usefulness of affective computing goes far beyond the way people present content. It may also transform the way they learn it.
Researchers from North Carolina State University explored the connection between students’ affective states and their ability to learn. Their software was able to accurately predict the effectiveness of online tutoring sessions by analyzing the facial expressions of participating students. The software tracked fine-grained facial movements such as eyebrow raising, eyelid tightening, and mouth dimpling to determine engagement, frustration, and learning. The authors concluded that “analysis of facial expressions has great potential for educational data mining.”
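The general recipe is easy to sketch. The Python snippet below is purely illustrative rather than the NC State system: it assumes facial action unit intensities (the brow-raising, eyelid-tightening, and dimpling signals a tool such as OpenFace can extract) have already been computed for each session, generates synthetic data in their place, and trains an off-the-shelf classifier to label a tutoring session as engaged or not.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_sessions, n_aus = 500, 17   # synthetic tutoring sessions, action units per face

# Mean and variance of each action unit's intensity over a session,
# e.g. AU01 (inner brow raiser), AU07 (lid tightener), AU14 (dimpler).
au_means = rng.uniform(0, 5, size=(n_sessions, n_aus))
au_vars = rng.uniform(0, 2, size=(n_sessions, n_aus))
X = np.hstack([au_means, au_vars])
y = rng.integers(0, 2, size=n_sessions)   # 1 = engaged session (synthetic label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")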
This type of technology is increasingly being used within the private sector. Affectiva is a Boston-based company that makes emotion recognition software. When asked to comment on this emerging technology, Gabi Zijderveld, chief marketing officer at Affectiva, explained in an interview for this article, “Our software measures facial expressions of emotion. So basically all you need is our software running and then access to a camera so you can basically record a face and analyze it. We can do that in real time or we can do this by looking at a video and then analyzing data and sending it back to folks.”
The technology has particular relevance for the advertising industry.
Zijderveld said, “We have products that allow you to measure how consumers or viewers respond to digital content…you could have a number of people looking at an ad, you measure their emotional response so you aggregate the data and it gives you insight into how well your content is performing. And then you can adapt and adjust accordingly.”
Zijderveld explained that advertising was the first market where the company gained traction. The company has since packaged its core technology into software development kits (SDKs), which let other companies integrate emotion detection into whatever they are building.
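The integration pattern Zijderveld describes is straightforward to picture. The sketch below is a generic illustration, not the Affectiva SDK's actual API: it uses OpenCV to grab webcam frames and detect faces, then hands each face crop to a placeholder classify_emotions function standing in for whatever vendor SDK or in-house model a developer plugs in.

import cv2  # pip install opencv-python

# Standard OpenCV Haar cascade for frontal face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotions(face_img):
    """Placeholder for a vendor SDK or in-house model returning emotion scores."""
    return {"joy": 0.0, "surprise": 0.0, "anger": 0.0}

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        scores = classify_emotions(frame[y:y + h, x:x + w])
        print(scores)  # aggregate these per viewer for content analytics
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()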
By licensing its technology to others, Affectiva is now rapidly expanding into a wide variety of markets, including gaming, education, robotics, and healthcare. The core technology is also used in human resources for the purposes of video recruitment. The software analyzes the emotional responses of interviewees, and that data is factored into hiring decisions.
Richard Yonck is founder and president of Intelligent Future Consulting and the author of a book about our relationship with technology. “One area I discuss in Heart of the Machine is the idea of an emotional economy that will arise as an ecosystem of emotionally aware businesses, systems, and services are developed. This will rapidly expand into a multi-billion-dollar industry, leading to an infrastructure that will be both emotionally responsive and potentially exploitive at personal, commercial, and political levels,” said Yonck, in an interview for this article.
According to Yonck, these emotionally-aware systems will “better anticipate needs, improve efficiency, and reduce stress and misunderstandings.”
Affectiva is uniquely positioned to profit from this “emotional economy.” The company has already created the world’s largest emotion database. “We’ve analyzed a little bit over 4.7 million faces in 75 countries,” said Zijderveld. “This is data first and foremost, it’s data gathered with consent. So everyone has opted in to have their faces analyzed.”
The vastness of that database is essential for deep learning approaches. The software would be inaccurate if the data was inadequate. According to Zijderveld, “If you don’t have massive amounts of data of people of all ages, genders, and ethnicities, then your algorithms are going to be pretty biased.”
This massive database has already revealed cultural insights into how people express emotion. Zijderveld explained, “Obviously everyone knows that women are more expressive than men. But our data confirms that, but not only that, it can also show that women smile longer. They tend to smile more often. There’s also regional differences.”
Yonck believes that affective computing will inspire unimaginable forms of innovation and that change will happen at a fast pace.
He explained, “As businesses, software, systems, and services develop, they’ll support and make possible all sorts of other emotionally aware technologies that couldn’t previously exist. This leads to a spiral of increasingly sophisticated products, just as happened in the early days of computing.”
Those who are curious about affective technology will soon be able to interact with it.
Hubble Connected unveiled the Hubble Hugo at multiple trade shows this year. Hugo is billed as “the world’s first smart camera,” with emotion AI video analytics powered by Affectiva. The product can identify individuals, figure out how they’re feeling, receive voice commands, monitor your home by video, and act as a photographer and videographer of events. Media can then be transmitted to the cloud. The company’s website describes Hugo as “a fun pal to have in the house.”
Although he sees the potential for improved efficiencies and expanding markets, Richard Yonck cautions that AI technology is not without its pitfalls.
“It’s critical that we understand we are headed into very unknown territory as we develop these systems, creating problems unlike any we’ve faced before,” said Yonck. “We should put our focus on ensuring AI develops in a way that represents our human values and ideals.”
Image Credit: Kisan / Shutterstock.com
#430640 RE2 Robotics Receives Air Force Funding ...
PITTSBURGH, PA – June 21, 2017 – RE2 Robotics announced today that the Company was selected by the Air Force to develop a drop-in robotic system to rapidly convert a variety of traditionally manned aircraft to robotically piloted, autonomous aircraft under the Small Business Innovation Research (SBIR) program. This robotic system, named “Common Aircraft Retrofit for Novel Autonomous Control” (CARNAC), will operate the aircraft similarly to a human pilot and will not require any modifications to the aircraft.
Automation and autonomy have broad value to the Department of Defense with the potential to enhance system performance of existing platforms, reduce costs, and enable new missions and capabilities, especially with reduced human exposure to dangerous or life-threatening situations. The CARNAC project leverages existing aviation assets and advances in vehicle automation technologies to develop a cutting-edge drop-in robotic flight system.
During the program, RE2 Robotics will demonstrate system architecture feasibility, humanoid-like robotic manipulation capabilities, vision-based flight-status recognition, and cognitive architecture-based decision making.
“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”
###
About RE2 Robotics
RE2 Robotics develops mobile robotic technologies that enable robot users to remotely interact with their world from a safe distance — whether on the ground, in the air, or underwater. RE2 creates interoperable robotic manipulator arms with human-like performance, intuitive human-robot interfaces, and advanced autonomy software for mobile robotics. For more information, please visit www.resquared.com or call 412.681.6382.
Media Contact: RE2 Public Relations, pr@resquared.com, 412.681.6382.
The post RE2 Robotics Receives Air Force Funding to Develop Robotic Pilot appeared first on Roboticmagazine.
#430579 What These Lifelike Androids Can Teach ...
For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, change the concept of being human if a human-made creation starts telling you what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.
ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between the body and the feeling of self. The Ishiguro avatar once gave a presentation in Austria. Its creator recalls feeling distinctly as though he were in Austria himself, even sensing touch on his own body when people laid their hands on the android. When distracted, he felt almost ‘sucked’ back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split in two. One is the chatbot approach that companies like Amazon, Microsoft, Google, and more recently Apple employ, using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to the fact that Japanese, as both a language and a market, is somewhat limited, which has a direct impact on the viability and practicality of ‘pure’ voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots and hold a different cultural and religious view of objects having a ‘soul’. However, it may also mean Japanese companies and android scientists are a lap ahead of their Western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR. All rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.