Tag Archives: institute

#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it wasn’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models to understand the changes around them and thereby predict what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
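A forecast-driven controller like the one RainMachine describes can be pictured as a simple rule: skip or scale watering based on expected rain and temperature. The sketch below is a toy illustration; the function name and thresholds are invented here, not taken from any actual product.

```python
def watering_minutes(base_minutes, forecast_rain_mm, temp_c):
    """Toy forecast-driven irrigation rule (thresholds are invented):
    skip watering when meaningful rain is expected, water longer on hot days."""
    if forecast_rain_mm >= 5:          # enough rain forecast: skip this cycle
        return 0
    if temp_c >= 30:                   # hot day: extend the cycle by half
        return int(base_minutes * 1.5)
    return base_minutes                # otherwise run the normal schedule

print(watering_minutes(20, forecast_rain_mm=8, temp_c=25))   # 0 (rain expected)
print(watering_minutes(20, forecast_rain_mm=0, temp_c=32))   # 30 (hot, no rain)
```

Real systems weigh soil type, plant profiles, and historical usage, but the core idea is the same: an external knowledge system (the forecast) changes the home’s behavior.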
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also issues of cost and reliability, interoperability and fragmentation of devices, and, conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5

Posted in Human Robots

#430855 Why Education Is the Hardest Sector of ...

We’ve all heard the warning cries: automation will disrupt entire industries and put millions of people out of jobs. In fact, up to 45 percent of existing jobs can be automated using current technology.
However, this may not necessarily apply to the education sector. After a detailed analysis of more than 2,000 work activities for more than 800 occupations, a report by McKinsey & Co. states that of all the sectors examined, “…the technical feasibility of automation is lowest in education.”
There is no doubt that technological trends will have a powerful impact on global education, both by improving the overall learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are just a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The Most Difficult Sector to Automate
While various tasks revolving around education—like administrative tasks or facilities maintenance—are open to automation, teaching itself is not.
Effective education involves more than just transfer of information from a teacher to a student. Good teaching requires complex social interactions and adaptation to the individual student’s learning needs. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards the student’s state of mind. It’s about maximizing human potential.
Furthermore, students don’t just rely on effective teachers to teach them the course material, but also as a source of life guidance and career mentorship. Deep and meaningful human interaction is crucial and is something that is very difficult, if not impossible, to automate.
Automating teaching is an example of a task that would require artificial general intelligence (as opposed to narrow or specific intelligence). In other words, it is the kind of task that would require an AI that understands natural human language, is empathetic toward human emotions, and can plan, strategize, and make impactful decisions under unpredictable circumstances.
This would be the kind of machine that can do anything a human can do, and it doesn’t exist—at least, not yet.
We’re Getting There
Let’s not forget how quickly AI is evolving. Just because it’s difficult to fully automate teaching, it doesn’t mean the world’s leading AI experts aren’t trying.
Meet Jill Watson, the teaching assistant from Georgia Institute of Technology. Watson isn’t your average TA. She’s an IBM-powered artificial intelligence that is being implemented in universities around the world. Watson is able to answer students’ questions with 97 percent certainty.
Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being trained and refined to perform automatic essay scoring. One project has achieved a 0.945 correlation with human graders.
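A correlation of 0.945 refers to agreement between machine and human scores, conventionally measured with Pearson’s correlation coefficient. A minimal sketch of that calculation, using invented scores for six essays:

```python
import math

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical machine vs. human scores for six essays (0-10 scale)
machine = [7, 4, 9, 5, 8, 6]
human = [8, 4, 9, 5, 7, 6]
print(round(pearson_correlation(machine, human), 3))  # -> 0.943
```

A value near 1.0 means the automatic scorer ranks and spaces essays almost exactly as human graders do.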
All of this will have a remarkable impact on online education as we know it and dramatically increase online student retention rates.

Any student with a smartphone can access a wealth of information and free courses from universities around the world. MOOCs have allowed valuable courses to become available to millions of students. But at the moment, not all participants can receive customized feedback for their work. Currently, this is limited by manpower, but in the future that may not be the case.
What chatbots like Jill Watson allow is the opportunity for hundreds of thousands, if not millions, of students to have their work reviewed and all their questions answered at a minimal cost.
AI algorithms also have a significant role to play in personalization of education. Every student is unique and has a different set of strengths and weaknesses. Data analysis can be used to improve individual student results, assess each student’s strengths and weaknesses, and create mass-customized programs. Algorithms can analyze student data and consequently make flexible programs that adapt to the learner based on real-time feedback. According to the McKinsey Global Institute, all of this data in education could unlock between $900 billion and $1.2 trillion in global economic value.
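One simple version of such an adaptive rule is to keep serving the topic where the learner is weakest until it crosses a mastery threshold. The sketch below is a toy illustration; the function name, threshold, and data are invented, and real systems use far richer models.

```python
def next_topic(accuracy_by_topic, mastery_threshold=0.8):
    """Pick the weakest not-yet-mastered topic; None if everything is mastered."""
    unmastered = {t: a for t, a in accuracy_by_topic.items() if a < mastery_threshold}
    if not unmastered:
        return None
    return min(unmastered, key=unmastered.get)

# Hypothetical per-topic accuracy from a student's recent exercises
progress = {"fractions": 0.55, "decimals": 0.90, "geometry": 0.70}
print(next_topic(progress))  # -> fractions
```

Feed the rule fresh accuracy numbers after every exercise and the curriculum reshapes itself around the student in real time.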
Beyond Automated Teaching
It’s important to recognize that technological automation alone won’t fix the many issues in our global education system today. The system is dominated by outdated curricula, standardized tests, and an emphasis on short-term knowledge, and many experts are calling for a transformation of how we teach.
It is not enough to simply automate the process. We can have a completely digital learning experience that continues to focus on outdated skills and fails to prepare students for the future. In other words, we must not only be innovative with our automation capabilities, but also with educational content, strategy, and policies.
Are we equipping students with the most important survival skills? Are we inspiring young minds to create a better future? Are we meeting the unique learning needs of each and every student? There’s no point automating and digitizing a system that is already flawed. We need to ensure the system that is being digitized is itself being transformed for the better.
Stock Media provided by davincidig / Pond5


#430854 Get a Live Look Inside Singularity ...

Singularity University’s (SU) second annual Global Summit begins today in San Francisco, and the Singularity Hub team will be there to give you a live look inside the event, exclusive speaker interviews, and articles on great talks.
Whereas SU’s other summits each focus on a specific field or industry, Global Summit is a broad look at emerging technologies and how they can help solve the world’s biggest challenges.
Talks will cover the latest in artificial intelligence, the brain and technology, augmented and virtual reality, space exploration, the future of work, the future of learning, and more.
We’re bringing three full days of live Facebook programming, streaming on Singularity Hub’s Facebook page, complete with 30+ speaker interviews, tours of the EXPO innovation hall, and tech demos. You can also livestream main stage talks at Singularity University’s Facebook page.
Interviews include Peter Diamandis, cofounder and chairman of Singularity University; Sylvia Earle, National Geographic explorer-in-residence; Esther Wojcicki, founder of the Palo Alto High Media Arts Center; Bob Richards, founder and CEO of Moon Express; Matt Oehrlein, cofounder of MegaBots; and Craig Newmark, founder of Craigslist and the Craig Newmark Foundation.
Pascal Finette, SU vice president of startup solutions, and Alison Berman, SU staff writer and digital producer, will host the show, and Lisa Kay Solomon, SU chair of transformational practices, will put on a special daily segment on exponential leadership with thought leaders.
Make sure you don’t miss anything by ‘liking’ the Singularity Hub and Singularity University Facebook pages and turning on notifications from both pages so you know when we go live. And to get a taste of what’s in store, check out the below selection of stories from last year’s event.
Are We at the Edge of a Second Sexual Revolution?
By Vanessa Bates Ramirez
“Brace yourself, because according to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…According to Varsavsky, the second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.”
VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before
By Jason Ganz
“Milk is already a legend in the VR community…But [he] is just getting started. His company Within has plans to help shape the language we use for virtual reality storytelling. Because let’s be clear, VR storytelling is still very much in its infancy. This fact makes it even crazier there are already VR films out there that can inspire and captivate on such a profound level. And we’re only going up from here.”
7 Key Factors Driving the Artificial Intelligence Revolution
By David Hill
“Jacobstein calmly and optimistically assures that this revolution isn’t going to disrupt humans completely, but usher in a future in which there’s a symbiosis between human and machine intelligence. He highlighted 7 factors driving this revolution.”
Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox
By Alison Berman
“Cliché or not, when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there? During a panel discussion on space exploration at Singularity University’s Global Summit, Jill Tarter, the Bernard M. Oliver chair at the SETI Institute, was asked to explain the Fermi paradox and her position on it. Her answer was pretty brilliant.”
Engineering Will Soon Be ‘More Parenting Than Programming’
By Sveta McShane
“In generative design, the user states desired goals and constraints and allows the computer to generate entire designs, iterations and solution sets based on those constraints. It is, in fact, a lot like parents setting boundaries for their children’s activities. The user basically says, ‘Yes, it’s ok to do this, but it’s not ok to do that.’ The resulting solutions are ones you might never have thought of on your own.”
Biohacking Will Let You Connect Your Body to Anything You Want
By Vanessa Bates Ramirez
“How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there…[Hannes] Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health.”
Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies
By Jason Dorrier
“[Diamandis] said humans aren’t the longest-lived animals. Other species have multi-hundred-year lifespans. Last year, a study “dating” Greenland sharks found they can live roughly 400 years. Though the technique isn’t perfectly precise, they estimated one shark to be about 392. Its approximate birthday was 1624…Diamandis said he asked himself: If these animals can live centuries—why can’t I?”


#430830 Biocomputers Made From Cells Can Now ...

When it comes to biomolecules, RNA doesn’t get a lot of love.
Maybe you haven’t even heard of the silent workhorse. RNA is the cell’s de facto translator: like a game of telephone, RNA carries DNA’s genetic code to cellular factories called ribosomes. There, the cell makes proteins based on RNA’s message.
But RNA isn’t just a middleman. It controls what proteins are formed. Because proteins whiz around the cell completing all sorts of important processes, you could say that RNA is the gatekeeper: no RNA message, no proteins, no life.
In a new study published in Nature, RNA finally took center stage. By adding bits of genetic material to E. coli bacteria, a team of biohackers at the Wyss Institute hijacked the organism’s RNA messengers so that they only spring into action following certain inputs.
The result? A bacterial biocomputer capable of performing 12-input logic operations—AND, OR, and NOT—following specific inputs. Rather than outputting 0s and 1s, these biocircuits produce results based on the presence or absence of proteins and other molecules.
“It’s the greatest number of inputs in a circuit that a cell has been able to process,” says study author Dr. Alexander Green at Arizona State University. “To be able to analyze those signals and make a decision is the big advance here.”
When given a specific set of inputs, the bacteria spit out a protein that made them glow neon green under fluorescent light.
But synthetic biology promises far more than just a party trick—by tinkering with a cell’s RNA repertoire, scientists may one day coax them to photosynthesize, produce expensive drugs on the fly, or diagnose and hunt down rogue tumor cells.
Illustration of an RNA-based ‘ribocomputing’ device that makes logic-based decisions in living cells. The long gate RNA (blue) detects the binding of an input RNA (red). The ribosome (purple/mauve) reads the gate RNA to produce an output protein. Image Credit: Alexander Green / Arizona State University
The software of life
This isn’t the first time that scientists have hijacked life’s algorithms to reprogram cells into nanocomputing systems. Previous work has already given the world yeast cells that can make anti-malaria drugs from sugar and mammalian cells that can perform Boolean logic.
Yet circuits with multiple inputs and outputs remain hard to program. The reason is this: synthetic biologists have traditionally focused on snipping, fusing, or otherwise arranging a cell’s DNA to produce the outcomes they want.
But DNA is two steps removed from proteins, and tinkering with life’s code often leads to unexpected consequences. For one, the cell may not even accept and produce the extra bits of DNA code. For another, the added code, when transformed into proteins, may not act accordingly in the crowded and ever-changing environment of the cell.
What’s more, tinkering with one gene is often not enough to program an entirely new circuit. Scientists often need to amp up or shut down the activity of multiple genes, or multiple biological “modules” each made up of tens or hundreds of genes.
It’s like trying to fit new Lego pieces in a specific order into a room full of Lego constructions. Each new piece has the potential to wander off track and click onto something it’s not supposed to touch.
Getting every moving component to work in sync—as you might have guessed—is a giant headache.
The RNA way
With “ribocomputing,” Green and colleagues set out to tackle a main problem in synthetic biology: predictability.
Named after the “R (ribo)” in “RNA,” the method grew out of an idea that first struck Green back in 2012.
“The synthetic biological circuits to date have relied heavily on protein-based regulators that are difficult to scale up,” Green wrote at the time. We only have a limited handful of “designable parts” that work well, and these circuits require significant resources to encode and operate, he explained.
RNA, in comparison, is a lot more predictable. Like its more famous sibling DNA, RNA is composed of units that come in four different flavors: A, G, C, and U. Although RNA is only single-stranded, rather than the double helix for which DNA is known, it can bind short DNA-like sequences in a very predictable manner: Gs always match up with Cs and As always with Us.
Because of this predictability, it’s possible to design RNA components that bind together perfectly. In other words, it reduces the chance that added RNA bits might go rogue in an unsuspecting cell.
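Those fixed pairing rules make complementarity easy to compute, which is exactly what lets designers predict which strands will bind. A minimal sketch (the sequences are hypothetical, and real toehold design also accounts for folding energetics):

```python
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """The strand that base-pairs with `seq` (strands bind antiparallel)."""
    return "".join(RNA_PAIR[base] for base in reversed(seq))

def binds(trigger, exposed_region):
    """True if a trigger is the exact reverse complement of an exposed region."""
    return trigger == reverse_complement(exposed_region)

toehold = "AUGCUG"                      # hypothetical exposed single-stranded region
print(reverse_complement(toehold))      # -> CAGCAU
print(binds("CAGCAU", toehold))         # -> True
```

Because the matching trigger can be read straight off the switch sequence, large libraries of switch-trigger pairs can be designed so that each trigger activates only its own switch.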
Normally, once RNA is produced it immediately rushes to the ribosome—the cell’s protein-building factory. Think of it as a constantly “on” system.
However, Green and his team found a clever mechanism to slow them down. Dubbed the “toehold switch,” it works like this: the artificial RNA component is first incorporated into a chain of A, G, C, and U folded into a paperclip-like structure.
This blocks the RNA from accessing the ribosome. Because one RNA strand generally maps to one protein, the switch prevents that protein from ever getting made.
In this way, the switch is set to “off” by default—a “NOT” gate, in Boolean logic.
To activate the switch, the cell needs another component: a “trigger RNA,” which binds to the RNA toehold switch. This flips it on: the RNA grabs onto the ribosome, and bam—proteins.
BioLogic gates
String a few RNA switches together, with the activity of each one relying on the one before, and it forms an “AND” gate. Alternatively, if the activity of each switch is independent, that’s an “OR” gate.
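In boolean terms, the chained arrangement behaves like AND and the independent arrangement like OR, with absence-dependent switches supplying NOT. The toy model below captures just that logic; the gate and trigger names are invented, and nothing here models the actual biochemistry:

```python
def gate_output(triggers, required=(), blocked=()):
    """Toy gate RNA: output protein is made only if every required trigger
    is present (AND) and no blocking trigger is present (NOT)."""
    return all(t in triggers for t in required) and not any(t in triggers for t in blocked)

def circuit(triggers):
    """Hypothetical two-gate circuit: (A AND B) OR (C AND NOT D)."""
    return (gate_output(triggers, required=("A", "B"))
            or gate_output(triggers, required=("C",), blocked=("D",)))

print(circuit({"A", "B"}))   # True: first AND gate fires
print(circuit({"C"}))        # True: second gate fires, D absent
print(circuit({"C", "D"}))   # False: D blocks the second gate
```

Evaluating the circuit for every combination of trigger RNAs gives the truth table the cell is effectively computing.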
“Basically, the toehold switches performed so well that we wanted to find a way to best exploit them for cellular applications,” says Green. They’re “kind of the equivalent of your first transistors,” he adds.
Once the team optimized the designs for different logic gates, they carefully condensed the switches into “gate RNA” molecules. These gate RNAs contain both codes for proteins and the logic operations needed to kickstart the process—a molecular logic circuit, so to speak.
If you’ve ever played around with an Arduino-controlled electrical circuit, you probably know the easiest way to test its function is with a light bulb.
That’s what the team did here, though with a biological bulb: green fluorescent protein, a fluorescent marker not normally present in these bacteria that—when turned on—makes the microbugs glow neon green.
In a series of experiments, Green and his team genetically inserted gate RNAs into bacteria. Then, depending on the type of logical function, they added different combinations of trigger RNAs—the inputs.
When the input RNA matched up with its corresponding gate RNA, it flipped on the switch, causing the cell to light up.

Their most complex circuit contained five AND gates, five OR gates, and two NOTs—a 12-input ribocomputer that functioned exactly as designed.
That’s quite the achievement. “Everything is interacting with everything else and there are a million ways those interactions could flip the switch on accident,” says RNA researcher Dr. Julius Lucks at Northwestern University.
The specificity is thanks to RNA, the authors explain. Because RNAs bind to others so predictably, we can now design massive libraries of gate and trigger units to mix-and-match into all types of nano-biocomputers.
RNA BioNanobots
Although the technology doesn’t have any immediate applications, the team has high hopes.
For the first time, it’s now possible to massively scale up the process of programming new circuits into living cells. We’ve expanded the library of available biocomponents that can be used to reprogram life’s basic code, the authors say.
What’s more, when freeze-dried onto a piece of tissue paper, RNA keeps very well. We could potentially print RNA toehold switches onto paper that respond to viruses or to tumor cells, the authors say, essentially transforming the technology into highly accurate diagnostic platforms.
But Green’s hopes are even wilder for his RNA-based circuits.
“Because we’re using RNA, a universal molecule of life, we know these interactions can also work in other cells, so our method provides a general strategy that could be ported to other organisms,” he says.
Ultimately, the hope is to program neural network-like capabilities into the body’s other cells.
Imagine cells endowed with circuits capable of performing the kinds of computation the brain does, the authors say.
Perhaps one day, synthetic biology will transform our own cells into fully programmable entities, turning us all into biological cyborgs from the inside. How wild would that be?
Image Credit: Wyss Institute at Harvard University


#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, change the concept of being human if a human-made creation starts telling you about what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and the feeling of self. The Ishiguro avatar was once giving a presentation in Austria. Its creator recalls how he felt distinctly like he was in Austria, even capable of feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost ‘sucked’ back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split into two. One is the chatbot approach that companies like Amazon, Microsoft, Google, and recently Apple, employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese as a language—and market—is somewhat limited. This has a direct impact on the viability and practicality of ‘pure’ voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots, and have a different cultural and religious view of objects having a ‘soul.’ However, it may also mean Japanese companies and android scientists have stolen a march on their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR, all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.
