Tag Archives: face

#430652 The Jobs AI Will Take Over First

11th July 2017: The robotic revolution is set to cause the biggest transformation in the world’s workforce since the industrial revolution. In fact, research suggests that over 30% of jobs in Britain are under threat from breakthroughs in artificial intelligence (AI) technology.

With pioneering advances in technology, many jobs that weren't considered ripe for automation suddenly are. RS Components has used PwC data to reveal how many jobs per sector are at risk of being taken by robots by 2030, a mere 13 years away. Did you think you were exempt from the robot revolution?

The top three sectors most exposed to the threat of robots are Transport and Storage, Manufacturing, and Wholesale and Retail, with a 56%, 46% and 44% risk of automation respectively. The PwC report states that the key factor differentiating the probability of losing a job to automation is education: those with a GCSE-level education or lower face a 46% risk, whilst those with undergraduate degrees or higher face a 12% risk. If a job is repetitive, physical and requires minimal training, it has a higher likelihood of being automated by machines.

The manufacturing industry has the third-highest likelihood of automation at 46.6%, behind Transportation and Storage (56.4%) and Water, Sewage and Waste Management (62.6%). Although it ranks third by likelihood, manufacturing has the second-largest number of jobs at risk of being taken by robots: an astonishing 1.22 million jobs are at risk in the near future. Repetitive manual labour and routine tasks can be taught to fixed machines and mimicked easily, saving employers both time and money.

The three sectors least at risk are Education, Health and Social, and Agriculture, Forestry and Fishing, with a 9%, 17% and 19% risk of automation respectively. These roles are non-repetitive and rely on qualities that cannot easily be taught and are harder to replicate with AI and robotics.

These are not the only fields where the introduction of AI will have an impact on employment prospects; Administrative and Support Services, Accommodation and Food Services, Finance and Insurance, Construction, Real Estate, Public Administration and Defence, and Arts and Entertainment are not out of the woods either.

The future is not all doom and gloom. Automation is set to boost productivity, enabling workers to focus on higher-value, more rewarding jobs and leaving repetitive, uncomplicated ones to the robots. Growth is also expected in sectors that are harder to automate, helped by lower running costs elsewhere. Wealth and spending are also expected to rise as AI takes over routine work. And there are some things AI simply cannot learn, so those jobs will remain safe.

In some sectors half of the jobs could be taken by a fully automated system. Is your job next?

The post The Jobs AI Will Take Over First appeared first on Roboticmagazine.

Posted in Human Robots

#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, change the concept of being human if a human-made creation starts telling you what kind of boys 'she' likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and sense of self. When the Ishiguro avatar was once giving a presentation in Austria, its creator recalls feeling distinctly as if he were in Austria, even sensing touch on his own body when people laid their hands on the android. If he was distracted, he felt almost 'sucked' back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split into two. One is the chat-bot approach that companies like Amazon, Microsoft, Google, and recently Apple, employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese as a language—and market—is somewhat limited. This has a direct impact on the viability and practicality of 'pure' voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots, and have a different cultural/religious view of objects having a 'soul'. It may also mean Japanese companies and android scientists are stealing a march on their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.

Posted in Human Robots

#430556 Forget Flying Cars, the Future Is ...

Flying car concepts have been around nearly as long as their earthbound cousins, but no one has yet made them a commercial success. MIT engineers think we’ve been coming at the problem from the wrong direction; rather than putting wings on cars, we should be helping drones to drive.
The team from the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) added wheels to a fleet of eight mini-quadcopters and tested driving and flying them around a tiny toy town made out of cardboard and fabric.
Adding the ability to drive reduced the distance the drone could fly by 14 percent compared to a wheel-less version. But while driving was slower, the drone could travel 150 percent further than when flying. The result is a vehicle that combines the speed and mobility of flying with the energy-efficiency of driving.
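To see what those percentages mean in practice, here is a back-of-the-envelope sketch; the 100-meter baseline is an invented figure for illustration, and only the 14 percent and 150 percent numbers come from the reported results:

```python
# Back-of-envelope range tradeoff for a drive-capable quadcopter.
# Only the percentages come from the article; the baseline is hypothetical.
baseline_flight_range = 100.0  # flight range of a wheel-less drone (meters)

# Adding wheels costs about 14 percent of flight range...
flight_range = baseline_flight_range * (1 - 0.14)  # 86.0 m

# ...but on the same battery the drone drives 150 percent further than it flies.
drive_range = flight_range * (1 + 1.50)  # 215.0 m

print(f"fly:   {flight_range:.1f} m")   # prints "fly:   86.0 m"
print(f"drive: {drive_range:.1f} m")    # prints "drive: 215.0 m"
```

Even after paying the weight penalty of the wheels, the hybrid more than doubles its total reach by rolling instead of hovering.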

CSAIL director Daniela Rus told MIT News their work suggested that when looking to create flying cars, it might make more sense to build on years of research into drones rather than trying to simply “put wings on cars.”
Historically, flying car concepts have looked like someone took apart a Cessna light aircraft and a family sedan, mixed all the parts up, and bolted them back together again. Not everyone has abandoned this approach—two of the most developed flying car designs from Terrafugia and AeroMobil are cars with folding wings that need an airstrip to take off.
But flying car concepts are looking increasingly drone-like these days, with multiple small rotors, electric propulsion and vertical take-off abilities. Take the EHang 184 autonomous aerial vehicle being developed in China; the Kitty Hawk all-electric aircraft backed by Google founder Larry Page, which is little more than a quadcopter with a seat; the AirQuadOne designed by UK consortium Neva Aerospace; or Lilium Aviation's Jet.
The attraction is obvious. Electric-powered drones are more compact, maneuverable, and environmentally friendly, making them suitable for urban environments.
Most of these vehicles are not quite the same as those proposed by the MIT engineers, as they’re pure flying machines. But a recent Airbus concept builds on the same principle that the future of urban mobility is vehicles that can both fly and drive. Its Pop.Up design is a two-passenger pod that can either be clipped to a set of wheels or hang under a quadcopter.
Importantly, Airbus envisages its creation being autonomous in both flight and driving modes. And it's not the only one to think the future of flying cars is driverless. Uber has committed to developing a network of autonomous air taxis within a decade. This spring, Dubai announced it would launch a pilotless passenger drone service using the EHang 184 as early as next month (July).
While integrating fully-fledged autonomous flying cars into urban environments will be far more complex, the study by Rus and her colleagues provides a good starting point for the kind of 3D route-planning and collision avoidance capabilities this would require.
The team developed multi-robot path planning algorithms that were able to control all eight drones as they flew and drove around their mock-up city, while also making sure they didn't crash into each other and avoided no-fly zones.
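The paper's actual algorithms aren't detailed here, but the general idea can be sketched with a simple prioritized-planning scheme (an assumption, not the team's method): route each drone in turn with a time-expanded breadth-first search over a grid, treat no-fly zones as blocked cells, and reserve each planned (cell, time) pair so later drones route around earlier ones. This simplification ignores swap conflicts and drones parked at their goals.

```python
from collections import deque

def plan(grid, start, goal, reserved, horizon=50):
    """Time-expanded BFS: avoid no-fly cells (grid value 1) and reserved (cell, time) pairs."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        if t >= horizon:
            continue
        # Four grid moves plus a "wait in place" move.
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)):
            nr, nc = r + dr, c + dc
            state = ((nr, nc), t + 1)
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0        # not a no-fly cell
                    and state not in reserved    # not claimed by another drone
                    and state not in seen):
                seen.add(state)
                frontier.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None  # no conflict-free route within the horizon

def plan_fleet(grid, tasks):
    """Prioritized planning: route drones one by one, reserving each path in space-time."""
    reserved, paths = set(), []
    for start, goal in tasks:
        path = plan(grid, start, goal, reserved)
        paths.append(path)
        for t, cell in enumerate(path or []):
            reserved.add((cell, t))
    return paths
```

On a 3x3 grid with a blocked center cell and two drones on crossing routes, `plan_fleet` returns paths that skirt the no-fly zone and never occupy the same cell at the same time step.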
“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” Jingjin Yu, a computer science professor at Rutgers University who was not involved in the research, told MIT News.
This vision of a driverless future for flying cars might be a bit of a disappointment for those who’d envisaged themselves one day piloting their own hover car just like George Jetson. But autonomy and Uber-like ride-hailing business models are likely to be attractive, as they offer potential solutions to three of the biggest hurdles drone-like passenger vehicles face.
Firstly, autonomy makes the vehicles accessible to anyone by removing the need to learn how to safely pilot an aircraft. Secondly, battery life still limits most electric vehicles to flight times measured in minutes. For a personal vehicle this could be frustrating, but if you're just hopping into a driverless air taxi for a five-minute trip across town, it's unlikely to bother you.
Operators of the service simply need to make sure they have a big enough fleet to ensure a charged vehicle is never too far away, or they’ll need a way to swap out batteries easily, such as the one suggested by the makers of the Volocopter electric helicopter.
Finally, there has already been significant progress in developing the technology and regulations needed to integrate autonomous drones into our airspace, which future driverless flying cars can most likely piggyback on.
Safety requirements will inevitably be more stringent, but adding more predictable and controllable autonomous drones to the skies is likely to be more attractive to regulators than trying to license and police thousands of new amateur pilots.
Image Credit: Lilium

Posted in Human Robots

#429587 Zoe, the emotional talking head

Zoe is a digital avatar with uncannily humanlike expressions and emotions! She's envisaged as the face of a personal assistant, or one day you'll be able to create your own similar digital avatar to personalize your online and …

Posted in Human Robots

#428181 Adding social touch to robotics

A squeeze of the arm, a pat on the shoulder, or a slap in the face – touch is an important part of the social interaction between people. Social touch, however, remains relatively unexplored when it comes to robots, even though robots operate with increasing frequency in society at large, rather than just in the controlled environment of a factory. Merel Jung is conducting research at the University of Twente's CTIT research institute into social touch interaction with robots. Using a relatively simple system – a mannequin's arm with pressure sensors, connected to a computer – she has succeeded in getting it to recognize sixty percent of all touches. The research is published today in the Journal on Multimodal User Interfaces.
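The study's actual recognition method is not described above, so purely as an illustrative sketch (the features, threshold, and centroid values below are all invented), one plausible pipeline summarizes each pressure-sensor time series into a few simple features and assigns the label of the nearest gesture centroid:

```python
# Hypothetical touch-gesture classifier: summarize pressure frames into
# features, then pick the closest centroid. Not the published study's method.
import math

def features(frames):
    """frames: list of sensor snapshots, each a list of pressure readings (0-1)."""
    peak = max(max(f) for f in frames)                        # hardest press
    duration = len(frames)                                    # frames of contact
    area = max(sum(1 for p in f if p > 0.1) for f in frames)  # sensors touched at once
    return (peak, duration, area)

def classify(frames, centroids):
    """Return the gesture label whose feature centroid is closest (Euclidean)."""
    x = features(frames)
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Invented centroids, as if learned from labeled examples: (peak, duration, area).
centroids = {
    "pat":     (0.6,  3, 4),   # brief, moderate, broad contact
    "squeeze": (0.9, 10, 6),   # long, firm, broad contact
    "poke":    (0.8,  2, 1),   # brief, firm, tiny contact
}

touch = [[0.0, 0.9, 0.0, 0.0], [0.0, 0.8, 0.1, 0.0]]  # two frames, one hot sensor
print(classify(touch, centroids))  # prints "poke"
```

A real system would learn such prototypes from many labeled touches per gesture, and misclassify similar gestures in ways consistent with the sixty-percent figure reported above.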

Posted in Human Robots