Tag Archives: virtual

#430988 The Week’s Awesome Stories From Around ...

BIOTECH
Lab-Grown Food Startup Memphis Meats Raises $17 Million From DFJ, Cargill, Bill Gates, Others
Paul Sawers | Venture Beat
“Meat grown in a laboratory is the future, if certain sustainable food advocates have their way, and one startup just raised a bucketload of cash from major investors to make this goal a reality….Leading the $17 million series A round was venture capital (VC) firm DFJ, backer of Skype, Tesla, SpaceX, Tumblr, Foursquare, Baidu, and Box.”
ROBOTICS
Blossom: A Handmade Approach to Social Robotics From Cornell and Google
Evan Ackerman | IEEE Spectrum
“Blossom’s overall aesthetic is, in some ways, a response to the way that the design of home robots (and personal technology) has been trending recently. We’re surrounding ourselves with sterility embodied in metal and plastic, perhaps because of a perception that tech should be flawless. And I suppose when it comes to my phone or my computer, sterile flawlessness is good.”
AUTOMOTIVE
Mercedes’ Outrageously Swoopy Concept Says Nein to the Pod-Car Future
Alex Davies | WIRED
“The swooping concept car, unveiled last weekend at the Pebble Beach Concours d’Elegance, rejects all notions of practicality. It measures nearly 18.7 feet long and 6.9 feet wide, yet offers just two seats…Each wheel gets its own electric motor that draws power from the battery that comprises the car’s underbody. All told, they generate 750 horsepower, and the car will go 200 miles between charges.”
EDTECH
Amazon’s TenMarks Releases a New Curriculum for Educators That Teaches Kids Writing Using Digital Assistants, Text Messaging and More
Sarah Perez | TechCrunch
“Now, the business is offering an online curriculum for teachers designed to help students learn how to be better writers. The program includes a writing coach that leverages natural language processing, a variety of resources for teachers, and something called ‘bursts,’ which are short writing prompts kids will be familiar with because of their use of mobile apps.”
VIRTUAL REALITY
What We Can Learn From Immersing Mice, Fruit Flies, and Zebrafish in VR
Alessandra Potenza | The Verge
“The VR system, called FreemoVR, pretty much resembles a holodeck from the TV show Star Trek. It’s an arena surrounded by computer screens that immerses the animals in a virtual world. Researchers tested the system on mice, fruit flies, and zebrafish, and found that the animals reacted to the virtual objects and environments as they would to real ones.”

Posted in Human Robots

#430854 Get a Live Look Inside Singularity ...

Singularity University’s (SU) second annual Global Summit begins today in San Francisco, and the Singularity Hub team will be there to give you a live look inside the event, exclusive speaker interviews, and articles on great talks.
Whereas SU’s other summits each focus on a specific field or industry, Global Summit is a broad look at emerging technologies and how they can help solve the world’s biggest challenges.
Talks will cover the latest in artificial intelligence, the brain and technology, augmented and virtual reality, space exploration, the future of work, the future of learning, and more.
We’re bringing three full days of live Facebook programming, streaming on Singularity Hub’s Facebook page, complete with 30+ speaker interviews, tours of the EXPO innovation hall, and tech demos. You can also livestream main stage talks at Singularity University’s Facebook page.
Interviews include Peter Diamandis, cofounder and chairman of Singularity University; Sylvia Earle, National Geographic explorer-in-residence; Esther Wojcicki, founder of the Palo Alto High Media Arts Center; Bob Richards, founder and CEO of Moon Express; Matt Oehrlein, cofounder of MegaBots; and Craig Newmark, founder of Craigslist and the Craig Newmark Foundation.
Pascal Finette, SU vice president of startup solutions, and Alison Berman, SU staff writer and digital producer, will host the show, and Lisa Kay Solomon, SU chair of transformational practices, will put on a special daily segment on exponential leadership with thought leaders.
Make sure you don’t miss anything by ‘liking’ the Singularity Hub and Singularity University Facebook pages and turning on notifications from both pages so you know when we go live. And to get a taste of what’s in store, check out the selection of stories below from last year’s event.
Are We at the Edge of a Second Sexual Revolution?
By Vanessa Bates Ramirez
“Brace yourself, because according to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…According to Varsavsky, the second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.”
VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before
By Jason Ganz
“Milk is already a legend in the VR community…But [he] is just getting started. His company Within has plans to help shape the language we use for virtual reality storytelling. Because let’s be clear, VR storytelling is still very much in its infancy. This fact makes it even crazier there are already VR films out there that can inspire and captivate on such a profound level. And we’re only going up from here.”
7 Key Factors Driving the Artificial Intelligence Revolution
By David Hill
“Jacobstein calmly and optimistically assures that this revolution isn’t going to disrupt humans completely, but usher in a future in which there’s a symbiosis between human and machine intelligence. He highlighted 7 factors driving this revolution.”
Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox
By Alison Berman
“Cliché or not, when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there? During a panel discussion on space exploration at Singularity University’s Global Summit, Jill Tarter, the Bernard M. Oliver chair at the SETI Institute, was asked to explain the Fermi paradox and her position on it. Her answer was pretty brilliant.”
Engineering Will Soon Be ‘More Parenting Than Programming’
By Sveta McShane
“In generative design, the user states desired goals and constraints and allows the computer to generate entire designs, iterations and solution sets based on those constraints. It is, in fact, a lot like parents setting boundaries for their children’s activities. The user basically says, ‘Yes, it’s ok to do this, but it’s not ok to do that.’ The resulting solutions are ones you might never have thought of on your own.”
Biohacking Will Let You Connect Your Body to Anything You Want
By Vanessa Bates Ramirez
“How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there…[Hannes] Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health.”
Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies
By Jason Dorrier
“[Diamandis] said humans aren’t the longest-lived animals. Other species have multi-hundred-year lifespans. Last year, a study “dating” Greenland sharks found they can live roughly 400 years. Though the technique isn’t perfectly precise, they estimated one shark to be about 392. Its approximate birthday was 1624…Diamandis said he asked himself: If these animals can live centuries—why can’t I?”

Posted in Human Robots

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Aside from industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress in empowering robots with the ability to see and respond to their surroundings just like humans.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics ranging from making robots more conversational and helping them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. Where their approach differs is that they confine the space in which the robot learns to classify the objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
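To make the subspace idea concrete, here is a minimal sketch of what restricting classification to a learned subspace can look like. This is not Burchfiel and Konidaris’s implementation: the voxel grids below are random stand-ins for real 3D scans, the subspace comes from plain PCA, and the nearest-neighbor classifier is only a placeholder for their more sophisticated approach.

```python
# Toy illustration (not the paper's method) of classifying 3D objects by
# working in a learned low-dimensional subspace of voxel grids.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: N flattened 16x16x16 occupancy grids with class labels.
N, D, K = 400, 16 * 16 * 16, 10              # samples, voxel dims, subspace size
X = (rng.random((N, D)) > 0.7).astype(float)  # random "scans" standing in for real ones
y = rng.integers(0, 4, size=N)                # 4 made-up classes (beds, chairs, ...)

# Learn the restricted subspace with PCA: every object is approximated as the
# mean shape plus a few coefficients over K basis directions.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:K]                           # K basis directions
coeffs = (X - mean) @ components.T            # each training object as a K-vector

def classify(new_grid: np.ndarray) -> int:
    """Project a new scan into the subspace and label it with the class
    of the nearest training object."""
    z = (new_grid - mean) @ components.T
    nearest = np.argmin(np.linalg.norm(coeffs - z, axis=1))
    return int(y[nearest])

query = (rng.random(D) > 0.7).astype(float)
print("predicted class:", classify(query))
```

Working with a handful of coefficients per object instead of thousands of raw voxels is what makes the classification tractable; the real system’s gains come from learning a subspace that actually captures household objects rather than random noise.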
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned not only to follow instructions but to recognize their level of abstraction. That was key to kickstarting its problem-solving abilities to tackle the job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. Conversely, when it could not identify the level of specificity of a task, the robot took 20 or more seconds to plan about 50 percent of the time.
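As a rough sketch of the two-step idea, the snippet below first guesses how abstract a command is and then plans at that level. It is not the Brown system: the keyword check stands in for the learned language model, and the canned plans are hypothetical placeholders for a real task planner and motion controller.

```python
# Toy sketch (not the Brown system): infer a command's abstraction level,
# then plan at that level. Keywords and plans are invented placeholders.
HIGH, LOW = "high-level", "low-level"

def infer_abstraction(command: str) -> str:
    """Guess whether a command is a broad goal or step-by-step instructions."""
    step_words = ("go north", "go south", "turn", "forward", "then")
    return LOW if any(w in command.lower() for w in step_words) else HIGH

def plan(command: str) -> list[str]:
    if infer_abstraction(command) == HIGH:
        # Broad goal: let a task planner decompose it into subgoals.
        return ["navigate to chair", "grasp chair",
                "navigate to blue room", "release chair"]
    # Step-by-step instructions: execute the primitives more or less directly.
    return [step.strip() for step in command.lower().split("then")]

print(plan("Take the chair to the blue room"))
print(plan("Go north two squares, then go east, then push the chair forward"))
```

Matching the plan to the command’s level of abstraction is the point of the exercise: as Arumugam notes above, mismatches are what make a robot plan inefficiently or fail outright.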
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
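Burchfiel’s puddle example can be read as a prior at work: the image alone is ambiguous, but everyday experience weights one interpretation heavily. The toy Bayes calculation below is only an illustration, not something from the paper, and the numbers are invented.

```python
# Toy Bayes illustration of inductive bias (invented numbers, not from the paper).
# Observation: a shiny patch on the street that looks equally like water or glass.
likelihood = {"water": 0.5, "glass": 0.5}   # the image alone is ambiguous

def posterior(prior: dict) -> dict:
    """Combine a prior with the likelihood and normalize."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# A strong prior from experience settles the question in favor of water...
print(posterior({"water": 0.95, "glass": 0.05}))
# ...while an unbiased system is left undecided.
print(posterior({"water": 0.5, "glass": 0.5}))
```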
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#430686 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind’s AI Is Teaching Itself Parkour, and the Results Are Adorable
James Vincent | The Verge
“The research explores how reinforcement learning (or RL) can be used to teach a computer to navigate unfamiliar and complex environments. It’s the sort of fundamental AI research that we’re now testing in virtual worlds, but that will one day help program robots that can navigate the stairs in your house.”
VIRTUAL REALITY
Now You Can Broadcast Facebook Live Videos From Virtual Reality
Daniel Terdiman | Fast Company
“The idea is fairly simple. Spaces allows up to four people—each of whom must have an Oculus Rift VR headset—to hang out together in VR. Together, they can talk, chat, draw, create new objects, watch 360-degree videos, share photos, and much more. And now, they can live-broadcast everything they do in Spaces, much the same way that any Facebook user can produce live video of real life and share it with the world.”
ROBOTICS
I Watched Two Robots Chat Together on Stage at a Tech Event
Jon Russell | TechCrunch
“The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids. The duo took to the stage at Rise in Hong Kong with Hanson Robotics’ Chief Scientist Ben Goertzel directing the banter. The conversation, which was partially scripted, wasn’t as slick as the human-to-human panels at the show, but it was certainly a sight to behold for the packed audience.”
BIOTECH
Scientists Used CRISPR to Put a GIF Inside a Living Organism’s DNA
Emily Mullin | MIT Technology Review
“They delivered the GIF into the living bacteria in the form of five frames: images of a galloping horse and rider, taken by English photographer Eadweard Muybridge…The researchers were then able to retrieve the data by sequencing the bacterial DNA. They reconstructed the movie with 90 percent accuracy by reading the pixel nucleotide code.”
DIGITAL MEDIA
AI Creates Fake Obama
Charles Q. Choi | IEEE Spectrum
“In the new study, the neural net learned what mouth shapes were linked to various sounds. The researchers took audio clips and dubbed them over the original sound files of a video. They next took mouth shapes that matched the new audio clips and grafted and blended them onto the video. Essentially, the researchers synthesized videos where Obama lip-synched words he said up to decades beforehand.”
Stock Media provided by adam121 / Pond5

Posted in Human Robots

#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, do something to the concept of being human if a human-made creation starts telling you about what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and feeling of self. The Ishiguro avatar was once giving a presentation in Austria. Its creator recalls how he felt distinctly like he was in Austria, even capable of feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost ‘sucked’ back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split into two. One is the chatbot approach that companies like Amazon, Microsoft, Google, and, more recently, Apple employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese, as both a language and a market, is somewhat limited, which has a direct impact on the viability and practicality of ‘pure’ voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots and have a different cultural and religious view of objects having a ‘soul.’ However, it may also mean Japanese companies and android scientists are stealing a march on their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR; all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.

Posted in Human Robots