Tag Archives: Space

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to catch up with The Jetsons. The robots the public sees seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress toward empowering robots to see and respond to their surroundings just as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes his research is not the first to train machines on 3D object classification. What sets their approach apart is that it confines the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
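The subspace idea can be illustrated with a much simpler stand-in. The sketch below is not the authors' actual algorithm; it is a minimal PCA-based analogue in which flattened voxel grids are projected into a low-dimensional subspace learned from training objects, and a new object is classified by its nearest neighbor in that subspace. The data, dimensions, and class names are all invented for illustration.

```python
import numpy as np

# Illustrative sketch only (not the Duke/Brown algorithm): learn a
# low-dimensional subspace from flattened 3D voxel grids via PCA, then
# classify a new object by projecting it into that subspace and finding
# the nearest training example there.

rng = np.random.default_rng(0)

# Toy data: 40 "objects", each a flattened 8x8x8 voxel grid, in 2 classes.
n_per_class, dim, k = 20, 8 * 8 * 8, 10
class_a = rng.random((n_per_class, dim)) + 1.0   # e.g. "chairs"
class_b = rng.random((n_per_class, dim)) - 1.0   # e.g. "tables"
X = np.vstack([class_a, class_b])
labels = np.array([0] * n_per_class + [1] * n_per_class)

# Fit the subspace: the top-k principal components of the training set.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:k]                           # k x dim projection matrix

def classify(voxels):
    """Project into the learned subspace and return the nearest label."""
    z = basis @ (voxels - mean)          # new object's k-dim coordinates
    Z = (X - mean) @ basis.T             # training set in the subspace
    return labels[np.argmin(np.linalg.norm(Z - z, axis=1))]

print(classify(rng.random(dim) + 1.0))   # classify a new "chair"-like object
```

The restriction does the work: instead of comparing objects in the full 512-dimensional voxel space, all comparisons happen in the 10-dimensional subspace the training data actually occupies.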
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to manipulate from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
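A toy version of that abstraction-level step might look like the following. This is a hypothetical sketch, not the Brown system: it trains a tiny bag-of-words scorer on a few invented commands labeled "high" (goal-level) or "low" (step-by-step), then routes a new command to a matching planner. The example commands, labels, and function names are all assumptions for illustration.

```python
from collections import Counter

# Illustrative sketch only (not the Brown system): infer whether a command
# is a high-level goal or a step-by-step instruction from its words, then
# hand it to the planner matching that level of abstraction.

TRAINING = [
    ("take the chair to the blue room", "high"),
    ("bring the chair into the red room", "high"),
    ("go north one step", "low"),
    ("move forward then turn left", "low"),
    ("push the chair north then go east", "low"),
]

# Per-level word counts (a crude bag-of-words model, no smoothing).
counts = {"high": Counter(), "low": Counter()}
for text, level in TRAINING:
    counts[level].update(text.split())

def abstraction_level(command):
    """Score each level by how often its training words appear in the command."""
    words = command.split()
    score = {lvl: sum(c[w] for w in words) for lvl, c in counts.items()}
    return max(score, key=score.get)

def plan(command):
    level = abstraction_level(command)
    # A real system would dispatch to different planners here.
    return f"{level}-level planner handles: {command!r}"

print(plan("take the chair to the green room"))
```

The point mirrors the paper's motivation: once the system knows which level a command lives at, it can pick the planning strategy suited to that level instead of treating every instruction the same way.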
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. Conversely, when it could not identify the task's level of abstraction, the robot took 20 or more seconds to plan about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#430645 How a One-Man Team from California Won ...

By mastering a Mars robot simulator, an engineer and stay-at-home dad took home the $125,000 top prize

Posted in Human Robots

#430550 This Week’s Awesome Stories From ...

DRONES
MIT Is Building Autonomous Drones That Can Both Drive and Fly
April Glaser | Recode
“The drones, which were built at MIT’s Computer Science and Artificial Intelligence Laboratory, also include route-planning software that can help calculate when the flying robot switches from air to ground in order to optimize its battery life.”
SPACE
SpaceX Is Making Commercial Space Launches Look Like Child’s Play
Jamie Condliffe | MIT Technology Review
“Late Friday, SpaceX launched a satellite into orbit from Florida using one of its refurbished Falcon 9 rockets. Then on Sunday, for good measure, it lofted 10 smaller satellites using a new version of the same rocket, which it launched from California. The feat is a sign that the private space company seems more likely than ever to turn its vision of competitively priced, rapid-turnaround rocket launches into reality.”
CYBERSECURITY
A New Ransomware Attack Is Infecting Airlines, Banks, and Utilities Across Europe
Russell Brandom | The Verge
“The origins of the attack are still unclear, but the involvement of Ukraine’s electric utilities is likely to cast suspicion on Russia. Ukraine’s power grid was hit by a persistent and sophisticated attack in December 2015, which many attributed to Russia. The attack ultimately left 230,000 residents without power for as long as six hours.”
SILICON VALLEY NEWS
Mark Zuckerberg’s Probably Nonexistent 2020 Presidential Campaign, Explained
Timothy B. Lee | VOX
“After all, the kind of outreach Zuckerberg would do in a presidential campaign isn’t that different from the kind of outreach he’d do if he were simply trying to understand Facebook users better and build public goodwill for his massive social media site.”
AUTONOMOUS CARS
Riding in a Robocar That Sees Around Corners
Philip E. Ross | IEEE Spectrum
“It takes 20 to 30 minutes to fit a car with the necessary hardware: a GPS sensor and a wireless transceiver. Here in the MCity compound, at least, the GPS system uses a repeater to enhance its accuracy down to centimeter level—good enough to locate a car precisely and to allow other cars to figure out its trajectory and measure its speed.”
Image Credit: SpaceX / Flickr

Posted in Human Robots

#428367 Fusion for Energy signs multi-million ...

Fusion for Energy signs multi-million deal with Airbus Safran Launchers, Nuvia Limited and Cegelec CEM to develop robotics equipment for ITER
The contract, with a value of nearly 100 million EUR, is considered the single biggest robotics deal to date in the field of fusion energy. The state-of-the-art equipment will form part of ITER, the world’s largest experimental fusion facility and the first in history designed to produce 500 MW of fusion power. The prestigious project brings together seven parties (China, Europe, Japan, India, the Republic of Korea, the Russian Federation and the USA), which together represent 50% of the world’s population and 80% of global GDP.
The collaboration between Fusion for Energy (F4E), the EU organisation managing Europe’s contribution to ITER, with a consortium of companies consisting of Airbus Safran Launchers (France-Germany), Nuvia Limited (UK) and Cegelec CEM (France), companies of the VINCI Group, will run for a period of seven years. The UK Atomic Energy Authority (UK), Instituto Superior Tecnico (Portugal), AVT Europe NV (Belgium) and Millennium (France) will also be part of this deal which will deliver remotely operated systems for the transportation and confinement of components located in the ITER vacuum vessel.
The contract also carries symbolic importance, marking the signature of all procurement packages managed by Europe in the field of remote handling. Carlo Damiani, F4E’s Project Manager for ITER Remote Handling Systems, explained that “F4E’s stake in ITER offers an unparalleled opportunity to companies and laboratories to develop expertise and an industrial culture in fusion reactors’ maintenance.”

Why does ITER require remote handling?
Remote handling refers to the high-tech systems that will help maintain and repair the ITER machine. The space where the bulky equipment will operate is limited, and the exposure of some components to radioactivity prohibits any manual intervention inside the vacuum vessel.

What will be delivered through this contract?
The transfer of components from the ITER vacuum vessel to the Hot Cell building, where they will be deposited for maintenance, will need to be carried out with the help of massive double-door containers known as casks. According to current estimates, 15 of these casks will need to be manufactured; in their largest configuration they will measure 8.5 m x 3.7 m x 2.6 m and approach 100 tonnes when transporting the heaviest components. These enormous “boxes”, resembling a conventional lorry container, will be remotely operated as they move between the different levels and buildings of the machine. Apart from the transportation and confinement of components, the ITER Cask and Plug Remote Handling System will also ensure the installation of the remote handling equipment entering the vacuum vessel to pick up the components to be removed. The technologies underpinning this system will encompass a variety of high-tech skills and comply with nuclear safety requirements. A proven manufacturing record in similar fields and the development of bespoke systems to perform mechanical transfers will be essential.

Background information
MEMO: Fusion for Energy signs multi-million deal with Airbus Safran Launchers, Nuvia Limited and Cegelec CEM to develop robotics equipment for ITER

Image captions
Cut-away image of the ITER machine showing the casks at the three levels of the ITER machine. ITER IO © (Remote1 web)

Illustration of lorry next to an ITER cask. F4E © (Remote 2 web)

Aerial view of the ITER construction site, October 2016. F4E © (ITER site aerial Oct)

The consortium of companies
The consortium combines the space expertise of Airbus Safran Launchers, adapted to this extreme environment to ensure safe conditions for the ITER teams; the nuclear experience of Nuvia, dating back to the beginnings of the UK nuclear industry, which has delivered solutions to some of the world’s most complex nuclear challenges; and the expertise of Cegelec CEM, a specialist in mechanical projects for the French nuclear sector that contributes over 30 years in the nuclear arena, including turnkey projects for large scientific installations and the realisation of complex mechanical systems.

Fusion for Energy
Fusion for Energy (F4E) is the European Union’s organisation for Europe’s contribution to ITER.
One of the main tasks of F4E is to work together with European industry, SMEs and research organisations to develop and provide a wide range of high technology components together with engineering, maintenance and support services for the ITER project.
F4E supports fusion R&D initiatives through the Broader Approach Agreement signed with Japan and prepares for the construction of demonstration fusion reactors (DEMO).
F4E was created by a decision of the Council of the European Union as an independent legal entity and was established in April 2007 for a period of 35 years.
Its offices are in Barcelona, Spain.
http://www.fusionforenergy.europa.eu
http://www.youtube.com/user/fusionforenergy
http://twitter.com/fusionforenergy
http://www.flickr.com/photos/fusionforenergy

ITER
ITER is a first-of-a-kind global collaboration. It will be the world’s largest experimental fusion facility and is designed to demonstrate the scientific and technological feasibility of fusion power. It is expected to produce a significant amount of fusion power (500 MW) for about seven minutes. Fusion is the process which powers the sun and the stars. When light atomic nuclei fuse together to form heavier ones, a large amount of energy is released. Fusion research is aimed at developing a safe, limitless and environmentally responsible energy source.
Europe will contribute almost half of the costs of its construction, while the other six parties to this joint international venture (China, Japan, India, the Republic of Korea, the Russian Federation and the USA), will contribute equally to the rest.
The site of the ITER project is in Cadarache, in the South of France.
http://www.iter.org

For Fusion for Energy media enquiries contact:
Aris Apollonatos
E-mail: aris.apollonatos@f4e.europa.eu
Tel: + 34 93 3201833 + 34 649 179 42
The post Fusion for Energy signs multi-million deal to develop robotics equipment for ITER appeared first on Roboticmagazine.

Posted in Human Robots

#428348 Google’s New AI Gets Smarter Thanks to ...

“The behavior of the computer at any moment is determined by the symbols which he is observing and his 'state of mind' at that moment.” – Alan Turing Artificial intelligence has a memory problem. Back in early 2015, Google’s mysterious DeepMind unveiled an algorithm that could teach itself to play Atari games. Based on deep neural nets, the AI impressively mastered nostalgic favorites such as Space Invaders and Pong without needing any explicit programming.

Posted in Human Robots