Tag Archives: flawless

#436256 Alphabet Is Developing a Robot to Take ...

Robots excel at carrying out specialized tasks in controlled environments, but put them in your average office and they’d be lost. Alphabet wants to change that by developing what they call the Everyday Robot, which could learn to help us out with our daily chores.

For a long time most robots were painstakingly hand-coded to carry out their functions, but since the deep learning revolution earlier this decade there’s been a growing effort to imbue them with AI that lets them learn new tasks through experience.

That’s led to some impressive breakthroughs, like a robotic hand nimble enough to solve a Rubik’s cube and a robotic arm that can accurately toss bananas across a room.

And it turns out Alphabet’s early-stage research and development division, Alphabet X, has also secretly been using similar machine learning techniques to develop robots adaptable enough to carry out a range of tasks in cluttered and unpredictable human environments like homes and offices.

The robots they’ve built combine a wheeled base with a single arm and a head full of sensors (including LIDAR) for 3D scanning, borrowed from Alphabet’s self-driving car division, Waymo.

For now, though, they’re largely restricted to sorting trash for recycling, project leader Hans Peter Brondmo writes in a blog post. While that might sound mundane, identifying different kinds of trash, grasping it, and moving it to the correct bin is still a difficult thing for a robot to do consistently. Some of the robots also have to navigate around the office to sort trash at various recycling stations.

Alphabet says even its human staff were getting it wrong 20 percent of the time, but after several months of training the robots have managed to get that down to 3.5 percent.

Every day, 30 robots toil away in what’s been dubbed the “playpen” sorting trash, and then every night thousands of virtual robots continue to practice in a simulation. This experience is then used to update the robots’ control algorithms each night. All the robots also share their experiences with the others through a process called collaborative learning.
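The collaborative learning idea, many robots pooling their experience to improve one shared policy, can be sketched in miniature. Everything below (the bins, the success probabilities, the epsilon-greedy rule) is invented for illustration and is not Alphabet's actual system:

```python
import random

random.seed(0)

# Toy "trash sorting" task: each item type has one correct bin.
# The true mapping is unknown to the agents.
CORRECT_BIN = {"can": 0, "paper": 1, "compost": 2}
ITEMS = list(CORRECT_BIN)
N_BINS = 3

# Shared experience: counts of (item, bin) attempts and successes,
# pooled across every robot -- the "collaborative learning" part.
attempts = {(t, b): 0 for t in ITEMS for b in range(N_BINS)}
successes = {(t, b): 0 for t in ITEMS for b in range(N_BINS)}

def choose_bin(item, epsilon=0.1):
    """Epsilon-greedy choice using the *shared* statistics."""
    if random.random() < epsilon:
        return random.randrange(N_BINS)
    rates = [successes[item, b] / attempts[item, b] if attempts[item, b] else 0.5
             for b in range(N_BINS)]
    return rates.index(max(rates))

def attempt_sort(item, bin_choice):
    """Simulated outcome: the right bin usually works, a wrong bin usually fails."""
    p = 0.9 if bin_choice == CORRECT_BIN[item] else 0.1
    return random.random() < p

# 30 robots each handle 200 items; all write into the shared tables,
# so every robot benefits from every other robot's trial and error.
for robot in range(30):
    for _ in range(200):
        item = random.choice(ITEMS)
        b = choose_bin(item)
        attempts[item, b] += 1
        successes[item, b] += attempt_sort(item, b)

# After pooling experience, the greedy policy picks the correct bin.
policy = {t: choose_bin(t, epsilon=0.0) for t in ITEMS}
```

A single robot running 200 trials alone would have far noisier estimates; pooling 30 robots' attempts is what makes the greedy policy reliable, which is the intuition behind sharing experience across a fleet.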

The process isn’t flawless, though. Wired’s Tom Simonite notes that while the robots exhibit some uncannily smart behaviors, like stirring piles of rubbish to make it easier to grab specific items, they also frequently miss or fumble the objects they’re trying to grasp.

Nonetheless, the project’s leaders are happy with their progress so far. And the hope is that creating robots that are able to learn from little more than experience in complex environments like an office should be a first step towards general-purpose robots that can pick up a variety of useful skills to assist humans.

Taking that next step will be the major test of the project. So far there’s been limited evidence that experience gained by robots in one task can be transferred to learning another. That’s something the group hopes to demonstrate next year.

And it seems there may be more robot news coming out of Alphabet X soon. The group has several other robotics “moonshots” in the pipeline, built on technology and talent transferred over in 2016 from the remains of a broadly unsuccessful splurge on robotics startups by former Google executive Andy Rubin.

Whether this robotics renaissance at Alphabet will finally help robots break into our homes and offices remains to be seen, but with the resources they have at hand, they just may be able to make it happen.

Image Credit: Everyday Robot, Alphabet X

Posted in Human Robots

#433386 What We Have to Gain From Making ...

The borders between the real world and the digital world keep crumbling, and the latter’s importance in both our personal and professional lives keeps growing. Some describe the melding of virtual and real worlds as part of the fourth industrial revolution. Said revolution’s full impact on us as individuals, our companies, communities, and societies is still unknown.

Greg Cross, chief business officer of New Zealand-based AI company Soul Machines, thinks one inescapable consequence of these crumbling borders is people spending more and more time interacting with technology. In a presentation at Singularity University’s Global Summit in San Francisco last month, Cross unveiled Soul Machines’ latest work and shared his views on the current state of human-like AI and where the technology may go in the near future.

Humanizing Technology Interaction
Cross started by introducing Rachel, one of Soul Machines’ “emotionally responsive digital humans.” The company has built 15 different digital humans of various sexes, groups, and ethnicities. Rachel, along with her “sisters” and “brothers,” has a virtual nervous system based on neural networks and biological models of different paths in the human brain. The system is controlled by virtual neurotransmitters and hormones akin to dopamine, serotonin, and oxytocin, which influence learning and behavior.

As a result, each digital human can have its own unique set of “feelings” and responses to interactions. People interact with them via visual and audio sensors, and the machines respond in real time.
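As a rough illustration of how virtual neurotransmitter levels could modulate responses, here is a toy state machine: a few scalar levels decay toward a baseline, get nudged by each stimulus, and shape the reply. The variable names, update rules, and thresholds are all invented for the sketch and are not Soul Machines' actual model:

```python
# Toy model of "virtual neurotransmitters": scalar levels that decay toward
# a baseline and are nudged by stimuli, then shape the agent's response.
# All names and numbers are invented for illustration.

BASELINE = {"dopamine": 0.5, "serotonin": 0.5, "oxytocin": 0.5}
DECAY = 0.9  # each step, levels relax 10% of the way back to baseline

# How each stimulus nudges each level.
STIMULI = {
    "smile":    {"dopamine": +0.2, "oxytocin": +0.3},
    "frown":    {"serotonin": -0.2, "dopamine": -0.1},
    "greeting": {"oxytocin": +0.1, "serotonin": +0.1},
}

def step(levels, stimulus):
    """Decay toward baseline, then apply the stimulus, clamped to [0, 1]."""
    out = {}
    for k, v in levels.items():
        v = BASELINE[k] + DECAY * (v - BASELINE[k])
        v += STIMULI.get(stimulus, {}).get(k, 0.0)
        out[k] = min(1.0, max(0.0, v))
    return out

def respond(levels):
    """Pick a response style from the current internal state."""
    if levels["oxytocin"] > 0.7:
        return "warm"
    if levels["serotonin"] < 0.3:
        return "withdrawn"
    return "neutral"

levels = dict(BASELINE)
for stimulus in ["smile", "smile", "greeting"]:
    levels = step(levels, stimulus)
```

Because the levels persist and decay between interactions, two digital humans with the same rules but different histories would respond differently to the same stimulus, which is the sense in which each can have its own "feelings."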

“Over the last 20 or 30 years, the way we think about machines and the way we interact with machines has changed,” Cross said. “We’ve always had this view that they should actually be more human-like.”

The realism of the digital humans’ graphic representations comes thanks to the work of Soul Machines’ other co-founder, Dr. Mark Sagar, who has won two Academy Awards for his work on computer-generated characters in movies including James Cameron’s Avatar.

Cross pointed out, for example, that rather than being unrealistically flawless and clear, Rachel’s skin has blemishes and sun spots, just like real human skin would.

The Next Human-Machine Frontier
When people interact with each other face to face, emotional and intellectual engagement both heavily influence the interaction. What would it look like for machines to bring those same emotional and intellectual capacities to our interactions with them, and how would this type of interaction affect the way we use, relate to, and feel about AI?

Cross and his colleagues believe that humanizing artificial intelligence will make the technology more useful to humanity, and prompt people to use AI in more beneficial ways.

“What we think is a very important view as we move forward is that these machines can be more helpful to us. They can be more useful to us. They can be more interesting to us if they’re actually more like us,” Cross said.

It is an approach that seems to resonate with companies and organizations. In the UK, for example, NatWest Bank is testing out Cora as a digital employee to help answer customer queries. In Germany, Daimler Financial Group plans to employ Sarah as something “similar to a personal concierge” for its customers. According to Cross, Daimler is looking at other ways it could deploy digital humans across the organization, from digital service people and digital sales people to, perhaps one day, digital chauffeurs.

Soul Machines’ latest creation is Will, a digital teacher that can interact with children through a desktop, tablet, or mobile device and help them learn about renewable energy. Cross sees other social uses for digital humans, including potentially serving as doctors to rural communities.

Our Digital Friends—and Twins
Soul Machines is not alone in its quest to humanize technology. It is a direction many technology companies, including the likes of Amazon, also seem to be pursuing. Amazon is working on building a home robot that, according to Bloomberg, “could be a sort of mobile Alexa.”

Finding a more human form for technology seems like a particularly pervasive pursuit in Japan, and not just when it comes to its many, many robots; it extends to virtual assistants like Gatebox.

The Japanese approach was perhaps best summed up by famous android researcher Dr. Hiroshi Ishiguro, who I interviewed last year: “The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction.”

During Cross’s presentation, Rob Nail, CEO and associate founder of Singularity University, joined him on the stage, extending an invitation to Rachel to be SU’s first fully digital faculty member. Rachel accepted, and though she’s the only digital faculty member right now, she predicted this won’t be the case for long.

“In 10 years, all of you will have digital versions of yourself, just like me, to take on specific tasks and make your life a whole lot easier,” she said. “This is great news for me. I’ll have millions of digital friends.”

Image Credit: Soul Machines

Posted in Human Robots

#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.

One of the more recent and famous examples came out of NASA at the end of 2017, when the US space agency announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.
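In spirit, this is supervised binary classification: label vetted signals as planet or false positive, train on them, then score new candidates. A minimal stand-in, using invented synthetic "signals" and plain logistic regression rather than NASA's actual network or Kepler data:

```python
import math
import random

random.seed(1)

# Synthetic stand-in for labeled transit signals: two features per signal
# (say, transit-depth consistency and shape symmetry), invented for the demo.
def make_signal(is_planet):
    if is_planet:
        return [random.gauss(0.8, 0.15), random.gauss(0.7, 0.15)], 1
    return [random.gauss(0.3, 0.15), random.gauss(0.4, 0.15)], 0

train = [make_signal(random.random() < 0.5) for _ in range(2000)]

# Logistic regression trained by plain gradient descent.
w, b = [0.0, 0.0], 0.0
lr = 0.1
n = len(train)
for _ in range(200):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def classify(x):
    """1 = planet, 0 = false positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Score held-out candidates, the analogue of turning the trained model
# loose on the weaker, unvetted signals.
test_set = [make_signal(random.random() < 0.5) for _ in range(500)]
accuracy = sum(classify(x) == y for x, y in test_set) / len(test_set)
```

The real pipeline used a deep neural network over full light curves rather than two hand-picked features, but the workflow, train on vetted labels and then rank new candidates by score, is the same shape.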

The machine detected Kepler-90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible dip in brightness captured as the planet passed in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can flawlessly remember vast numbers of facts and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at the University of Kansas, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
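Stearns's point, that the target falls out of a combination of predictors rather than any single one, can be shown with a tiny regression experiment. The variables, coefficients, and noise below are invented; the actual study used 22 real geologic inputs and a more sophisticated model:

```python
import random

random.seed(2)

# Invented stand-in predictors (the real study used 22 geologic inputs):
# x = [crustal_thickness, magnetic_anomaly, distance_to_ridge], all scaled.
def sample():
    x = [random.uniform(0, 1) for _ in range(3)]
    # Synthetic "heat flux": a combination of all three plus noise,
    # so no single variable explains it on its own.
    y = 0.5 * x[0] + 0.3 * x[1] - 0.4 * x[2] + random.gauss(0, 0.02)
    return x, y

data = [sample() for _ in range(1000)]

def fit(feature_idx, epochs=500, lr=0.5):
    """Least-squares linear fit by gradient descent over chosen features."""
    w = [0.0] * len(feature_idx)
    b = 0.0
    n = len(data)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in data:
            pred = b + sum(w[j] * x[i] for j, i in enumerate(feature_idx))
            err = pred - y
            for j, i in enumerate(feature_idx):
                gw[j] += err * x[i]
            gb += err
        w = [wj - lr * g / n for wj, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def mse(w, b, feature_idx):
    """Mean squared error of a fitted model on the data."""
    return sum((b + sum(w[j] * x[i] for j, i in enumerate(feature_idx)) - y) ** 2
               for x, y in data) / len(data)

w1, b1 = fit([0])        # model using one variable, like the older approaches
w3, b3 = fit([0, 1, 2])  # model combining all variables
```

The combined model's error lands near the noise floor, while the single-variable model is stuck with the variance the other predictors would have explained.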

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short- and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
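That closed loop, hypothesize, design an experiment, run it, interpret, repeat, can be sketched as a simple elimination cycle. Everything below (the hypothesis space, the simulated "lab," the experiment-selection rule) is invented for illustration; Eve's real machinery is far richer:

```python
# A toy "robot scientist" loop: maintain candidate hypotheses, pick the
# experiment that discriminates among them best, run it in a simulated lab,
# and discard hypotheses the result contradicts. Entirely illustrative.

# Hypotheses: "the culture grows iff nutrient dose >= d" for unknown d.
TRUE_THRESHOLD = 13          # hidden ground truth the "lab" obeys
hypotheses = set(range(32))  # candidate thresholds

def run_experiment(dose):
    """Simulated lab robotics: does the culture grow at this dose?"""
    return dose >= TRUE_THRESHOLD

log = []
while len(hypotheses) > 1:
    # Design: choose the dose that splits the remaining hypotheses most
    # evenly -- the maximally informative experiment.
    dose = sorted(hypotheses)[(len(hypotheses) - 1) // 2]
    grew = run_experiment(dose)
    log.append((dose, grew))
    # Interpret: keep only hypotheses consistent with the observation.
    if grew:
        hypotheses = {d for d in hypotheses if d <= dose}
    else:
        hypotheses = {d for d in hypotheses if d > dose}

(discovered,) = hypotheses  # the surviving hypothesis
```

Because each experiment halves the hypothesis space, 32 candidates resolve in about five experiments; the same logic, scaled up to real assays and vastly larger hypothesis spaces, is what makes automating the cycle attractive.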

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unravel the complexities of biological systems, as even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory made up of two enormous laser interferometers located thousands of miles apart in Washington State and Louisiana. The research was published in Physics Letters B.

In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.
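ConvNetQuake itself is a convolutional network, but the core idea, sliding a filter along a seismogram and flagging windows where the response spikes, can be mimicked with a plain cross-correlation detector. The waveform, noise level, and threshold below are invented for the sketch:

```python
import math
import random

random.seed(3)

# Synthetic seismogram: low-amplitude noise with a small "quake" waveform
# buried at a known position.
template = [math.sin(2 * math.pi * t / 8) * math.exp(-t / 16) for t in range(32)]
signal = [random.gauss(0, 0.1) for _ in range(1024)]
QUAKE_AT = 600
for i, v in enumerate(template):
    signal[QUAKE_AT + i] += v

def detect(signal, template, threshold=1.5):
    """Slide the template along the signal; report window starts whose
    correlation with the template exceeds the threshold."""
    norm_t = math.sqrt(sum(v * v for v in template))
    hits = []
    for start in range(len(signal) - len(template)):
        window = signal[start:start + len(template)]
        score = sum(w * v for w, v in zip(window, template)) / norm_t
        if score > threshold:
            hits.append(start)
    return hits

hits = detect(signal, template)
```

A fixed template only finds events that look like the template; the advantage of a trained convolutional network is that it learns its filters from labeled seismograms, which is how ConvNetQuake picks up small events that template banks miss.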

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com

Posted in Human Robots

#430988 The Week’s Awesome Stories From Around ...

BIOTECH
Lab-Grown Food Startup Memphis Meats Raises $17 Million From DFJ, Cargill, Bill Gates, Others
Paul Sawers | VentureBeat
“Meat grown in a laboratory is the future, if certain sustainable food advocates have their way, and one startup just raised a bucketload of cash from major investors to make this goal a reality….Leading the $17 million series A round was venture capital (VC) firm DFJ, backer of Skype, Tesla, SpaceX, Tumblr, Foursquare, Baidu, and Box.”
ROBOTICS
Blossom: A Handmade Approach to Social Robotics From Cornell and Google
Evan Ackerman | IEEE Spectrum
“Blossom’s overall aesthetic is, in some ways, a response to the way that the design of home robots (and personal technology) has been trending recently. We’re surrounding ourselves with sterility embodied in metal and plastic, perhaps because of a perception that tech should be flawless. And I suppose when it comes to my phone or my computer, sterile flawlessness is good.”
AUTOMOTIVE
Mercedes’ Outrageously Swoopy Concept Says Nein to the Pod-Car Future
Alex Davies | WIRED
“The swooping concept car, unveiled last weekend at the Pebble Beach Concours d’Elegance, rejects all notions of practicality. It measures nearly 18.7 feet long and 6.9 feet wide, yet offers just two seats…Each wheel gets its own electric motor that draws power from the battery that comprises the car’s underbody. All told, they generate 750 horsepower, and the car will go 200 miles between charges.”
EDTECH
Amazon’s TenMarks Releases a New Curriculum for Educators That Teaches Kids Writing Using Digital Assistants, Text Messaging and More
Sarah Perez | TechCrunch
“Now, the business is offering an online curriculum for teachers designed to help students learn how to be better writers. The program includes a writing coach that leverages natural language processing, a variety of resources for teachers, and something called “bursts,” which are short writing prompts kids will be familiar with because of their use of mobile apps.”
VIRTUAL REALITY
What We Can Learn From Immersing Mice, Fruit Flies, and Zebrafish in VR
Alessandra Potenza | The Verge
“The VR system, called FreemoVR, pretty much resembles a holodeck from the TV show Star Trek. It’s an arena surrounded by computer screens that immerses the animals in a virtual world. Researchers tested the system on mice, fruit flies, and zebrafish, and found that the animals reacted to the virtual objects and environments as they would to real ones.”

Posted in Human Robots