Tag Archives: host

#431682 Oxford Study Says Alien Life Would ...

The alternative universe known as science fiction has given our culture a menagerie of alien species. From overstuffed teddy bears like Ewoks and Wookiees to terrifying nightmares such as Alien and Predator, our collective imagination of what form alien life from another world may take has been irrevocably imprinted by Hollywood.
It might all be possible, or all these bug-eyed critters might turn out to be just B-movie versions of how real extraterrestrials will appear if and when they finally make the evening news.
One thing is certain, though: aliens from another world will be shaped by the same evolutionary force that shapes life here on Earth, natural selection. That’s the conclusion of a team of scientists from the University of Oxford in a study published this month in the International Journal of Astrobiology.
A complex alien that comprises a hierarchy of entities, where each lower-level collection of entities has aligned evolutionary interests. Image Credit: Helen S. Cooper/University of Oxford.
The researchers suggest that evolutionary theory—famously put forth by Charles Darwin in his seminal book On the Origin of Species 158 years ago this month—can be used to make some predictions about alien species. In particular, the team argues that extraterrestrials will undergo natural selection, because that is the only process by which organisms can adapt to their environment.
“Adaptation is what defines life,” lead author Samuel Levin tells Singularity Hub.
While it’s likely that NASA or some SpaceX-like private venture will eventually kick over a few space rocks and discover microbial life in the not-too-distant future, the sorts of aliens Levin and his colleagues are interested in describing are more complex, because that kind of complexity can only be built up by natural selection.
A quick evolutionary theory 101 refresher: Natural selection is the process by which certain traits are favored over others in a given population. For example, take a group of brown and green beetles. It just so happens that birds prefer foraging on green beetles, allowing more brown beetles to survive and reproduce than the more delectable green ones. Eventually, if these population pressures persist, brown beetles will become the dominant type. Brown wins, green loses.
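To see how quickly such a pressure can reshape a population, here is a toy simulation of the beetle example. It is a minimal sketch, assuming made-up survival probabilities and population size rather than anything from the study:

```python
import random

# Illustrative parameters only: birds catch green beetles more often,
# so brown beetles survive predation at a higher rate.
SURVIVAL = {"brown": 0.9, "green": 0.6}
POP_SIZE = 1000

def next_generation(population):
    """One round of selection, then reproduction back up to POP_SIZE."""
    survivors = [b for b in population if random.random() < SURVIVAL[b]]
    # Offspring inherit their parent's color.
    return [random.choice(survivors) for _ in range(POP_SIZE)]

population = ["brown"] * 500 + ["green"] * 500
for _ in range(20):
    population = next_generation(population)

print("brown:", population.count("brown"), "green:", population.count("green"))
# After roughly 20 generations, brown typically dominates the population.
```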
And just as human beings are the result of millions of years of adaptations—eyes and thumbs, for example—aliens will similarly be constructed from parts that were once free living but through time came together to work as one organism.
“Life has so many intricate parts, so much complexity, for that to happen (randomly),” Levin explains. “It’s too complex and too many things working together in a purposeful way for that to happen by chance, as how certain molecules come about. Instead you need a process for making it, and natural selection is that process.”
Just don’t expect ET to show up as a bipedal humanoid with a large head and almond-shaped eyes, Levin says.
“They can be built from entirely different chemicals and so visually, superficially, unfamiliar,” he explains. “They will have passed through the same evolutionary history as us. To me, that’s way cooler and more exciting than them having two legs.”
Need for Data
Seth Shostak, senior astronomer at the SETI Institute and host of the organization’s Big Picture Science radio show, wrote that while the argument is interesting, it doesn’t answer the question of ET’s appearance.
Shostak argues that a more productive approach would invoke convergent evolution, where similar environments lead to similar adaptations, at least assuming a range of Earth-like conditions such as liquid oceans and thick atmospheres. For example, an alien species that evolved in a liquid environment would evolve a streamlined body to move through water.
“Happenstance and the specifics of the environment will produce variations on an alien species’ planet as it has on ours, and there’s really no way to predict these,” Shostak concludes. “Alas, an accurate cosmic bestiary cannot be written by the invocation of biological mechanisms alone. We need data. That requires more than simply thinking about alien life. We need to actually discover it.”
Search is On
The search is on. On one hand, the task seems easy enough: There are at least 100 billion planets in the Milky Way alone, and at least 20 percent of those are likely to be capable of producing a biosphere. Even if the evolution of life is exceedingly rare—take a conservative estimate of 0.001 percent of those potentially habitable worlds, or 200,000 planets, as proposed by the Oxford paper—you have to like the odds.
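For the record, the arithmetic behind that 200,000 figure is just three multiplications; the snippet below reproduces the numbers quoted above:

```python
planets_in_milky_way = 100e9        # at least 100 billion planets
habitable_fraction = 0.20           # ~20 percent could support a biosphere
life_fraction = 0.001 / 100         # "0.001 percent," expressed as a probability

habitable = planets_in_milky_way * habitable_fraction   # 20 billion worlds
with_life = habitable * life_fraction

print(f"{with_life:,.0f} planets with life")  # 200,000 planets with life
```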
Of course, it’s not that easy by a billion light years.
Planet hunters can’t even agree on what signatures of life they should focus on. The idea is that where there’s smoke there’s fire. In the case of an alien world home to biological life, astrobiologists are searching for the presence of “biosignature gases,” vapors that could only be produced by alien life.
As Quanta Magazine reported, scientists do this by measuring how starlight filters through a planet’s atmosphere as the planet passes in front of its star. Gases in the atmosphere absorb certain frequencies of that starlight, offering a clue as to what is brewing around a particular planet.
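A heavily simplified sketch of that matching step is below. The absorption wavelengths are approximate textbook values used purely for illustration; real analyses fit full model spectra rather than checking a handful of dips:

```python
# Approximate absorption bands (in micrometers) for a few gases often
# discussed as potential biosignatures. Values are rough, for illustration.
ABSORPTION_BANDS_UM = {
    "O2": 0.76,   # oxygen A-band
    "H2O": 1.4,
    "CH4": 3.3,
    "O3": 9.6,
}

def candidate_gases(observed_dips_um, tolerance=0.05):
    """Return gases whose bands line up with dips seen in a transmission spectrum."""
    matches = []
    for gas, band in ABSORPTION_BANDS_UM.items():
        if any(abs(dip - band) <= tolerance for dip in observed_dips_um):
            matches.append(gas)
    return matches

print(candidate_gases([0.76, 1.43]))  # ['O2', 'H2O']
```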
The presence of oxygen would seem to be a biological no-brainer, but there are instances where a planet can produce a false positive, meaning non-biological processes are responsible for the exoplanet’s oxygen. Scientists like Sara Seager, an astrophysicist at MIT, have argued there are plenty of examples of other types of gases produced by organisms right here on Earth that could also produce the smoking gun, er, planet.

Life as We Know It
Indeed, the existence of Earth-bound extremophiles—organisms that defy conventional wisdom about where life can exist, such as in the vacuum of space—offers another clue as to what kind of aliens we might eventually meet.
Lynn Rothschild, an astrobiologist and synthetic biologist in the Earth Science Division at NASA’s Ames Research Center in Silicon Valley, takes extremophiles as a baseline and then supersizes them through synthetic biology.
For example, say a bacterium is capable of surviving at 120 degrees Celsius. Rothschild’s lab might tweak its DNA to see if it could metabolize at 150 degrees. The idea, as she explains, is to expand the envelope for life without ever getting into a rocket ship.

While researchers may not always agree on the “where” and “how” and “what” of the search for extraterrestrial life, most do share one belief: Alien life must be out there.
“It would shock me if there weren’t [extraterrestrials],” Levin says. “There are few things that would shock me more than to find out there aren’t any aliens…If I had to bet on it, I would bet on the side of there being lots and lots of aliens out there.”
Image Credit: NASA

Posted in Human Robots

#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible for a human to discern but are highly accurate at predicting a particular object.

“An attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human.”

What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
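In the “white box” setting, where an attacker can inspect the network and compute its gradients, the best-known recipe is a single gradient step on the input pixels, often called the fast gradient sign method. The sketch below is a generic illustration rather than any specific system mentioned here; it assumes a PyTorch classifier, and the epsilon value is an arbitrary placeholder:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """One-step adversarial perturbation (fast gradient sign method).

    model: any differentiable classifier that returns class logits.
    epsilon: caps how far each pixel value may move, keeping the change
    imperceptible to a human viewer.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss,
    # i.e., away from the correct classification.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```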
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically-colored spectacles could trick a facial recognition system into thinking people were celebrities. In August scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.

“Bypassing cybersecurity defenses is one of the more worrying and probable near-term applications for this approach.”

Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
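A toy example shows why a single misspelling can matter. The keyword-counting “classifier” below is not the researchers’ system; it simply illustrates that a model which only recognizes words it has seen before loses a decisive clue when one letter is swapped for a digit:

```python
# Words the toy classifier learned to associate with spam during training.
SPAM_WORDS = {"free", "winner", "prize", "urgent"}

def looks_like_spam(text):
    """Flag text containing at least two known spam words."""
    tokens = text.lower().split()
    return sum(token in SPAM_WORDS for token in tokens) >= 2

print(looks_like_spam("urgent you are a winner claim your prize"))  # True
print(looks_like_spam("urg3nt you are a w1nner claim your prize"))  # False
```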
The garbled-audio example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
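The general shape of that loop is easy to sketch. The code below is deliberately generic: the scoring function is a stand-in for any black-box detector, and the candidates are plain bit strings, so it only illustrates the mutate, score, and select cycle rather than anything malware-specific:

```python
import random

def black_box_score(candidate):
    """Stand-in for a detector's suspicion score in [0, 1]; lower is better
    for the attacker. A real attack would query the actual system."""
    return sum(candidate) / len(candidate)

def mutate(candidate, rate=0.05):
    # Flip a few bits at random; a real attack would only apply edits
    # that preserve the sample's original behavior.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(seed, generations=50, pop_size=20):
    population = [mutate(seed) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=black_box_score)           # least suspicious first
        parents = population[: pop_size // 2]          # keep the best half
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(population, key=black_box_score)

best = evolve([1] * 32)
print(black_box_score(best))  # drifts toward 0 as generations pass
```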
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
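A toy illustration of the idea, with made-up numbers: a nearest-centroid “detector” is retrained on a set that now contains mislabeled examples, and its decision boundary shifts in the attacker’s favor:

```python
def centroid(points):
    return sum(points) / len(points)

def train(benign, malicious):
    """Summarize each class by the average of its training examples."""
    return centroid(benign), centroid(malicious)

def classify(x, benign_c, malicious_c):
    return "malicious" if abs(x - malicious_c) < abs(x - benign_c) else "benign"

benign = [1.0, 1.2, 0.8]
malicious = [9.0, 9.5, 8.7]
b_c, m_c = train(benign, malicious)
print(classify(5.5, b_c, m_c))   # "malicious" before poisoning

# The attacker slips borderline samples, falsely labeled benign, into training.
poisoned_benign = benign + [6.0, 6.5, 7.0]
b_c, m_c = train(poisoned_benign, malicious)
print(classify(5.5, b_c, m_c))   # now "benign": the boundary has shifted
```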
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com

Posted in Human Robots

#430854 Get a Live Look Inside Singularity ...

Singularity University’s (SU) second annual Global Summit begins today in San Francisco, and the Singularity Hub team will be there to give you a live look inside the event, exclusive speaker interviews, and articles on great talks.
Whereas SU’s other summits each focus on a specific field or industry, Global Summit is a broad look at emerging technologies and how they can help solve the world’s biggest challenges.
Talks will cover the latest in artificial intelligence, the brain and technology, augmented and virtual reality, space exploration, the future of work, the future of learning, and more.
We’re bringing three full days of live Facebook programming, streaming on Singularity Hub’s Facebook page, complete with 30+ speaker interviews, tours of the EXPO innovation hall, and tech demos. You can also livestream main stage talks at Singularity University’s Facebook page.
Interviews include Peter Diamandis, cofounder and chairman of Singularity University; Sylvia Earle, National Geographic explorer-in-residence; Esther Wojcicki, founder of the Palo Alto High Media Arts Center; Bob Richards, founder and CEO of Moon Express; Matt Oehrlein, cofounder of MegaBots; and Craig Newmark, founder of Craigslist and the Craig Newmark Foundation.
Pascal Finette, SU vice president of startup solutions, and Alison Berman, SU staff writer and digital producer, will host the show, and Lisa Kay Solomon, SU chair of transformational practices, will put on a special daily segment on exponential leadership with thought leaders.
Make sure you don’t miss anything by ‘liking’ the Singularity Hub and Singularity University Facebook pages and turning on notifications from both pages so you know when we go live. And to get a taste of what’s in store, check out the below selection of stories from last year’s event.
Are We at the Edge of a Second Sexual Revolution? By Vanessa Bates Ramirez
“Brace yourself, because according to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…According to Varsavsky, the second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.”
VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before. By Jason Ganz
“Milk is already a legend in the VR community…But [he] is just getting started. His company Within has plans to help shape the language we use for virtual reality storytelling. Because let’s be clear, VR storytelling is still very much in its infancy. This fact makes it even crazier there are already VR films out there that can inspire and captivate on such a profound level. And we’re only going up from here.”
7 Key Factors Driving the Artificial Intelligence Revolution. By David Hill
“Jacobstein calmly and optimistically assures that this revolution isn’t going to disrupt humans completely, but usher in a future in which there’s a symbiosis between human and machine intelligence. He highlighted 7 factors driving this revolution.”
Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox. By Alison Berman
“Cliché or not, when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there? During a panel discussion on space exploration at Singularity University’s Global Summit, Jill Tarter, the Bernard M. Oliver chair at the SETI Institute, was asked to explain the Fermi paradox and her position on it. Her answer was pretty brilliant.”
Engineering Will Soon Be ‘More Parenting Than Programming’. By Sveta McShane
“In generative design, the user states desired goals and constraints and allows the computer to generate entire designs, iterations and solution sets based on those constraints. It is, in fact, a lot like parents setting boundaries for their children’s activities. The user basically says, ‘Yes, it’s ok to do this, but it’s not ok to do that.’ The resulting solutions are ones you might never have thought of on your own.”
Biohacking Will Let You Connect Your Body to Anything You Want. By Vanessa Bates Ramirez
“How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there…[Hannes] Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health.”
Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies. By Jason Dorrier
“[Diamandis] said humans aren’t the longest-lived animals. Other species have multi-hundred-year lifespans. Last year, a study “dating” Greenland sharks found they can live roughly 400 years. Though the technique isn’t perfectly precise, they estimated one shark to be about 392. Its approximate birthday was 1624…Diamandis said he asked himself: If these animals can live centuries—why can’t I?”

Posted in Human Robots

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Aside from industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel, we wanted cool robots; instead we got 140 characters and Flippy the burger bot. But scientists are making progress toward empowering robots with the ability to see and respond to their surroundings just like humans.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. How their approach differs is that they confine the space in which the robot learns to classify the objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
That near-infinite space of possibilities is mostly filled with objects no human or machine would ever recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
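The paper’s actual model is more sophisticated than this, but the general idea of classifying within a learned subspace can be sketched with off-the-shelf PCA. The voxel grids and labels below are random stand-ins for real 3D scans:

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-ins for flattened 3D voxel scans and their category labels.
rng = np.random.default_rng(0)
train_voxels = rng.random((500, 16 * 16 * 16))
train_labels = rng.integers(0, 10, size=500)

# Learn a low-dimensional subspace that the known objects occupy, then
# represent every object by its coordinates inside that subspace.
subspace = PCA(n_components=50).fit(train_voxels)
train_coords = subspace.transform(train_voxels)

def classify(new_scan):
    """Nearest neighbor in the learned subspace (illustrative only)."""
    coords = subspace.transform(new_scan.reshape(1, -1))
    distances = np.linalg.norm(train_coords - coords, axis=1)
    return int(train_labels[np.argmin(distances)])

print(classify(rng.random(16 * 16 * 16)))
```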
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. Conversely, when it could not pin down an instruction’s level of abstraction, the robot took 20 or more seconds to plan a task about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#428433 UK Robotics Week To Return – 24th June ...

Today marks the official launch of the second UK Robotics Week; entries are now open for the Surgical Robot, Autonomous Driving and School Robot Challenges.
London, UK, 7th November 2016. – UK Robotics Week 2017 officially launches today, with a range of robotics activities and challenges open to schools, academic institutions and industry sectors. These activities culminate in a national week of celebration being held 24th – 30th June 2017. The second annual UK Robotics Week is set to be even bigger and better, building on the huge success of the inaugural event. Any institutions or organisations planning to hold their own robotics events – either in the run-up to or during UK Robotics Week – can also apply now to be included in the official Programme of Activities (please visit www.roboticsweek.uk for details of how to register).
The first ever UK Robotics Week proved a huge success, encompassing a host of events up and down the UK, including public lectures, open labs, hackathons, tech weekends, conferences, and a state-of-the-art robotics showcase held on the last day. The UK Robotics Week initiative is jointly spearheaded by founding supporters, the Engineering and Physical Sciences Research Council (EPSRC), The Royal Academy of Engineering, the Institution of Engineering and Technology, the Institution of Mechanical Engineers and the UK-RAS Special Interest Group, and is being coordinated by the EPSRC UK-RAS network.
As part of the official launch, this year’s School Robot Challenge is now open for entries to all schools nationwide. The competition offers schoolchildren the opportunity to design their own virtual robot bug and teach it to move, with the option of printing their bug in 3D. The challenge aims to develop children’s interest and skills in digital technology, design, science, engineering and biology. This year’s competition has been split into two age group categories – 4-12 years and 13-18 years – with top prizes to be awarded in each. Schools are actively encouraged to register their interest on the website now to access the information packs and software at http://www.roboticsweek.uk/schoolrobotchallenge.htm
The first Surgical Robot Challenge attracted participation from the world’s leading institutions, with top robotics research teams travelling to the UK to demonstrate their outstanding innovations during last year’s competition finals. The 2017 competition is now open for entry, and any international researchers interested in participating in this prestigious challenge can download all the competition information at http://www.roboticsweek.uk/surgicalrobotchallenge.htm
The second Autonomous Driving Challenge is also launched today. This is an international competition to inspire the next generation of designers and engineers, and involves designing your own vehicle and teaching it to drive autonomously. The challenge is open to everyone: children and adults, amateurs and professionals.
Commenting on today’s official launch, Professor Guang-Zhong Yang PhD, FREng, Director and Co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London and Chair of the UK-RAS Network, said: “We have been delighted with the response to UK Robotics Week, which looks set to become one of the key highlights in the science and technology calendar. This is a unique opportunity to celebrate the UK’s technology leadership in robotics and autonomous systems, and for individuals and institutions to get involved – hands-on – with robotics development.”
Professor Philip Nelson, Chief Executive of EPSRC, added: “From inspiring the nation’s budding engineers in STEM subjects to engaging people of all ages in a national debate about the contribution robotic technology can make to society and our economy, we’re looking forward to creating even more of a buzz with UK Robotics Week this year, and shining an even bigger spotlight on the fantastic robotics innovation being driven from the UK.”
For full information about all the activities planned for UK Robotics Week, please visit the website: www.roboticsweek.uk and follow UK Robotics Week on Twitter (@ukroboticsweek)
About the EPSRC UK-RAS Network (http://www.uk-ras.org) : The EPSRC UK Robotics and Autonomous Systems Network (UK-RAS Network) is dedicated to robotics innovation across the UK, with a mission to provide academic leadership in Robotics and Autonomous Systems (RAS), expand collaboration with industry, and integrate and coordinate activities at eight Engineering and Physical Sciences Research Council (EPSRC) funded RAS capital facilities and Centres for Doctoral Training (CDTs) across the country.
PRESS CONTACT:
Nicky Denovan
EvokedSet
Email: nicky[@]evokedset[dot]com
Mobile: +44 (0)7747 017654

Posted in Human Robots