
#434569 From Parkour to Surgery, Here Are the ...

The robot revolution may not be here quite yet, but our mechanical cousins have made some serious strides. And now some of the leading experts in the field have provided a rundown of what they see as the 10 most exciting recent developments.

Compiled by the editors of the journal Science Robotics, the list includes some of the most impressive original research and innovative commercial products to make a splash in 2018, as well as a couple from 2017 that really changed the game.

1. Boston Dynamics’ Atlas doing parkour

It seems like barely a few months go by without Boston Dynamics rewriting the book on what a robot can and can’t do. Last year they really outdid themselves when they got their Atlas humanoid robot to do parkour, leaping over logs and jumping between wooden crates.

Atlas’s creators have admitted that the videos we see are cherry-picked from multiple attempts, many of which don’t go so well. But they say they’re meant to be inspirational and aspirational rather than an accurate picture of where robotics is today. And combined with the company’s dog-like Spot robot, they are certainly pushing boundaries.

2. Intuitive Surgical’s da Vinci SP platform
Robotic surgery isn’t new, but the technology is improving rapidly. Market leader Intuitive’s da Vinci surgical robot was first cleared by the FDA in 2000, but since then it’s come a long way, with the company now producing three separate systems.

The latest addition is the da Vinci SP (single port) system, which can insert three instruments into the body through a single 2.5 cm cannula (tube), bringing a whole new meaning to minimally invasive surgery. The system was granted FDA clearance for urological procedures last year, and the company has now started shipping it to customers.

3. Soft robot that navigates through growth

Roboticists have long borrowed principles from the animal kingdom, but a new robot design that mimics the way plant tendrils and fungal mycelia move by growing at the tip has really broken the mold on robot navigation.

The editors point out that this is a perfect example of bio-inspired design: the researchers didn’t simply copy nature; they took a general principle and expanded on it. The tube-like robot unfolds from the front as pneumatic pressure is applied, but unlike a plant, it can grow at the speed of a walking animal and can navigate using visual feedback from a camera.

4. 3D printed liquid crystal elastomers for soft robotics
Soft robotics is one of the fastest-growing sub-disciplines in the field, but powering these devices without rigid motors or pumps is an ongoing challenge. A variety of shape-shifting materials have been proposed as potential artificial muscles, including liquid crystal elastomeric actuators.

Harvard engineers have now demonstrated that these materials can be 3D printed using a special ink that allows the designer to easily program in all kinds of unusual shape-shifting abilities. What’s more, their technique produces actuators capable of lifting significantly more weight than previous approaches.

5. Muscle-mimetic, self-healing, and hydraulically amplified actuators
In another effort to find a way to power soft robots, last year researchers at the University of Colorado Boulder designed a series of super low-cost artificial muscles that can lift 200 times their own weight and even heal themselves.

The devices rely on pouches filled with a liquid that makes them contract with the force and speed of mammalian skeletal muscles when a voltage is applied. The most promising for robotics applications is the so-called Peano-HASEL, which features multiple rectangular pouches connected in series that contract linearly, just like real muscle.

6. Self-assembled nanoscale robot from DNA

While you may think of robots as hulking metallic machines, a substantial number of scientists are working on making nanoscale robots out of DNA. And last year German researchers built the first remote-controlled DNA robotic arm.

They created a length of tightly-bound DNA molecules to act as the arm and attached it to a DNA base plate via a flexible joint. Because DNA carries a charge, they were able to get the arm to swivel around like the hand of a clock by applying a voltage and switch direction by reversing that voltage. The hope is that this arm could eventually be used to build materials piece by piece at the nanoscale.

7. DelFly nimble bioinspired robotic flapper

Robotics doesn’t only borrow from biology—sometimes it gives back to it, too. And a new flapping-winged robot designed by Dutch engineers that mimics the humble fruit fly has done just that, by revealing how the animals that inspired it carry out predator-dodging maneuvers.

The lab has been building flapping robots for years, but this time they ditched the airplane-like tail used to control previous incarnations. Instead, they used insect-inspired adjustments to the motions of its twin pairs of flapping wings to hover, pitch, and roll with the agility of a fruit fly. That has provided a useful platform for investigating insect flight dynamics, as well as more practical applications.

8. Soft exosuit wearable robot

Exoskeletons could prevent workplace injuries, help people walk again, and even boost soldiers’ endurance. Strapping on bulky equipment isn’t ideal, though, so researchers at Harvard are working on a soft exoskeleton that combines specially-designed textiles, sensors, and lightweight actuators.

And last year the team made an important breakthrough by combining their novel exoskeleton with a machine-learning algorithm that automatically tunes the device to the user’s particular walking style. Using physiological data, it is able to adjust when and where the device needs to deliver a boost to the user’s natural movements to improve walking efficiency.
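The team has reported using human-in-the-loop optimization for this tuning. As a rough sketch of the general idea (not their actual algorithm, which is a sample-efficient Bayesian method), here is a toy Python loop that searches for the assistance timing that minimizes a stubbed-out metabolic-cost measurement; all names and numbers are illustrative.

```python
import random

def measure_metabolic_cost(onset, offset):
    """Stand-in for a real physiological measurement (e.g., respirometry).
    Here: an arbitrary quadratic bowl plus sensor noise."""
    return (onset - 0.32) ** 2 + (offset - 0.61) ** 2 + random.gauss(0, 0.01)

# Search over when in the gait cycle (0..1) assistance turns on and off.
best_params, best_cost = None, float("inf")
for trial in range(20):  # each trial would cost minutes of real walking
    onset = random.uniform(0.1, 0.5)
    offset = random.uniform(0.5, 0.9)
    cost = measure_metabolic_cost(onset, offset)
    if cost < best_cost:
        best_params, best_cost = (onset, offset), cost

print(f"best assist timing: onset={best_params[0]:.2f}, offset={best_params[1]:.2f}")
```

Because each evaluation requires minutes of human walking, the real system replaces this naive random search with an optimizer that makes every trial count.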

9. Universal Robots (UR) e-Series Cobots
Robots in factories are nothing new. The enormous mechanical arms you see in car factories normally have to be kept in cages to prevent them from accidentally crushing people. In recent years there’s been growing interest in “co-bots,” collaborative robots designed to work side-by-side with their human colleagues and even learn from them.

Earlier this year brought the demise of Rethink Robotics, the pioneer of the approach. But the simple single-arm devices made by Danish firm Universal Robots are becoming ubiquitous in workshops and warehouses around the world, accounting for about half of global co-bot sales. Last year the company released its latest e-Series, with enhanced safety features and force/torque sensing.

10. Sony’s aibo
After a nearly 20-year hiatus, Sony’s robotic dog aibo is back, and it’s had some serious upgrades. As well as a revamp to its appearance, the new robotic pet takes advantage of advances in AI, with improved environmental and command awareness and the ability to develop a unique character based on interactions with its owner.

The editors note that this new context awareness marks the device out as a significant evolution in social robots, which many hope could aid childhood learning or provide companionship for the elderly.

Image Credit: DelFly Nimble / CC BY-SA 4.0


#434311 Understanding the Hidden Bias in ...

Facial recognition technology has progressed to the point where it now interprets emotions in facial expressions. This type of analysis is increasingly used in daily life. For example, companies can use facial recognition software to help with hiring decisions. Other programs scan the faces in crowds to identify threats to public safety.

Unfortunately, this technology struggles to interpret the emotions of black faces. My new study, published last month, shows that emotional analysis technology assigns more negative emotions to black men’s faces than white men’s faces.

This isn’t the first time that facial recognition programs have been shown to be biased. Google labeled black faces as gorillas. Cameras identified Asian faces as blinking. Facial recognition programs struggled to correctly identify gender for people with darker skin.

My work contributes to a growing call to better understand the hidden bias in artificial intelligence software.

Measuring Bias
To examine the bias in the facial recognition systems that analyze people’s emotions, I used a data set of 400 NBA player photos from the 2016 to 2017 season, because players are similar in their clothing, athleticism, age and gender. Also, since these are professional portraits, the players look at the camera in the picture.

I ran the images through two well-known types of emotional recognition software. Both assigned black players more negative emotional scores on average, no matter how much they smiled.

For example, consider the official NBA pictures of Darren Collison and Gordon Hayward. Both players are smiling, and, according to the facial recognition and analysis program Face++, Darren Collison and Gordon Hayward have similar smile scores—48.7 and 48.1 out of 100, respectively.

Basketball players Darren Collison (left) and Gordon Hayward (right). basketball-reference.com

However, Face++ rates Hayward’s expression as 59.7 percent happy and 0.13 percent angry and Collison’s expression as 39.2 percent happy and 27 percent angry. Collison is viewed as nearly as angry as he is happy and far angrier than Hayward—despite the facial recognition program itself recognizing that both players are smiling.

In contrast, Microsoft’s Face API viewed both men as happy. Still, Collison is viewed as less happy than Hayward, with Hayward and Collison receiving happiness scores of 98 and 93 percent, respectively. Despite his smile, Collison is even scored with a small amount of contempt, whereas Hayward has none.

Across all the NBA pictures, the same pattern emerges. On average, Face++ rates black faces as twice as angry as white faces. Face API scores black faces as three times more contemptuous than white faces. After matching players based on their smiles, both facial analysis programs are still more likely to assign the negative emotions of anger or contempt to black faces.
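For readers who want to run a similar audit, here is a minimal pandas sketch of the two comparisons above: raw group averages, then averages after matching on smile score. The CSV file and column names are hypothetical, not taken from the study.

```python
import pandas as pd

# One row per photo, with scores returned by an emotion-recognition API.
df = pd.read_csv("emotion_scores.csv")  # columns: race, smile, anger, contempt

# Raw gap: average negative-emotion scores by race.
print(df.groupby("race")[["anger", "contempt"]].mean())

# Smile-matched gap: compare within narrow bands of smile score, so any
# remaining difference can't be explained by one group simply smiling less.
df["smile_band"] = pd.cut(df["smile"], bins=10)
print(df.groupby(["smile_band", "race"], observed=True)["anger"].mean().unstack())
```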

Stereotyped by AI
My study shows that facial recognition programs exhibit two distinct types of bias.

First, black faces were consistently scored as angrier than white faces for every smile. Face++ showed this type of bias. Second, black faces were always scored as angrier if there was any ambiguity about their facial expression. Face API displayed this type of disparity. Even if black faces are partially smiling, my analysis showed that the systems assumed more negative emotions as compared to their white counterparts with similar expressions. The average emotional scores were much closer across races, but there were still noticeable differences for black and white faces.

This observation aligns with other research, which suggests that black professionals must amplify positive emotions to receive parity in their workplace performance evaluations. Studies show that people perceive black men as more physically threatening than white men, even when they are the same size.

Some researchers argue that facial recognition technology is more objective than humans. But my study suggests that facial recognition reflects the same biases that people have. Black men’s facial expressions are scored with emotions associated with threatening behaviors more often than white men, even when they are smiling. There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life.

Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions—essentially smile more—to reduce ambiguity and potentially negative interpretations by the technology.

Although innovative, artificial intelligence can perpetuate and exacerbate existing power dynamics, leading to disparate impact across racial and ethnic groups. Some societal accountability is necessary to ensure fairness to all groups, because facial recognition, like most artificial intelligence, is often invisible to the people most affected by its decisions.

Lauren Rhue, Assistant Professor of Information Systems and Analytics, Wake Forest University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Alex_Po / Shutterstock.com


#434260 The Most Surprising Tech Breakthroughs ...

Development across the entire information technology landscape certainly didn’t slow down this year. From CRISPR babies, to the rapid decline of the crypto markets, to a new robot on Mars, to the possible discovery of a subatomic particle that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.

As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.

I reached out to a few Singularity University speakers and faculty across the various technology domains we cover, asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:

“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”

I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly-traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space and the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.

For the rest, here’s what our experts had to say.

DIGITAL BIOLOGY
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University

“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly-available data, and the targeting of a disease that we already know how to prevent and treat and that the children were at low risk of anyway.

I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”

For more, read Dr. Vora’s summary of this recent stunning news from China regarding CRISPR-editing human embryos.

QUANTUM COMPUTING
Andrew Fursman | Co-Founder/CEO 1Qbit, Faculty, Quantum Computing, Singularity University

“There were two last-minute holiday season surprise quantum computing funding and technology breakthroughs:

First, right before the government shutdown, a priority piece of legislation was signed that will provide $1.2 billion for quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”

*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing.

ENERGY
Ramez Naam | Chair, Energy and Environmental Systems, Singularity University

“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, which is one of the least sunny countries on earth (it gets less sunlight than Canada), the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.

In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.

Here comes the sun.”

GLOBAL GRAND CHALLENGES
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University

“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.

What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around issues such as gun and school safety, global warming, and the environment. We often talk about the harm technology can cause to young people, but on the flip side, it can be a very powerful tool for youth to start changing the world today and something I hope we see more of in the future.”

BUSINESS STRATEGY
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University

“Without a doubt the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into the boardroom and leadership meetings, and all the way down into the innovation and IT department’s agenda. You are hard-pressed to find a mid- to large-sized company today that is not experimenting or implementing AI in various aspects of its business.

On the slightly snarkier side of answering this question: The very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short, ferocious, and ended earlier than most would have anticipated, with a huge hangover for some. The good news—with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”

*Author note: snark is welcome and appreciated

ROBOTICS
Hod Lipson | Director, Creative Machines Lab, Columbia University

“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation remained elusive.

There seemed to be something almost magical in how we humans can manipulate the physical world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, the Berkeley OpenAI group demonstrated that this hurdle may finally succumb to machine learning as well. Given 200 years’ worth of practice, machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”

MACHINE LEARNING
Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University

“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).

The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.

Earlier this year, two systems (one, called ELMo, from the Allen Institute for AI, and the other, called ULMFiT, developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state of the art in nearly every NLP task that researchers study. This work was further developed by OpenAI, whose approach was in turn greatly scaled up by Google Brain, which created a system called BERT that reached human-level performance on some of NLP’s toughest challenges.

Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”
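To make Howard’s “finishing your sentence” example concrete: a language model repeatedly predicts the most likely next word given everything written so far. Here is a minimal sketch using the Hugging Face transformers library and a small pretrained GPT-2, a later successor to the systems he mentions; the prompt text is arbitrary.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Greedy decoding: append the single most probable next token, then repeat.
prompt = tokenizer.encode("The robots of 2018 can", return_tensors="pt")
output = model.generate(prompt, max_new_tokens=12, do_sample=False)
print(tokenizer.decode(output[0]))
```

Fine-tuning, as in ULMFiT and BERT, starts from exactly this kind of pretrained model and continues training it on a specific task.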

DIGITAL MANUFACTURING
Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University

“Most surprising to me was the extent and speed at which the industry finally opened up.

While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is being done with that data.

Another example of this openness is the seemingly endless string of announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling a comprehensive digital workflow to drive agile and resilient manufacturing.

I’m more optimistic we’ll achieve that now than I was at the end of 2017.”

SCIENCE AND DISCOVERY
Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network

“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the ‘neptmoon,’ a Neptune-scale moon circling a Jupiter-scale planet 8,000 light-years from us; the successful deployment of the Mars InSight lander a month ago; and the tantalizing ANITA detection, which could be a new subatomic particle that would in turn blow the standard model wide open. The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”

ROBOTICS
Pablos Holman | Inventor, Hacker, Faculty, Singularity University

“Just five or ten years ago, if you’d asked any of us technologists, ‘What is harder for robots? Eyes, or fingers?’ we’d all have said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a kingpin that allows robots to go everywhere they haven’t been yet.”

BLOCKCHAIN
Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University

“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and technological development, even as the broader cryptocurrency markets have faced a winter. It’s now about seeing the rise of adoption and of applications that people want and need to use. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through WhatsApp. That’s surprisingly fast mainstream adoption of this new technology, and it indicates how powerful it is.”

ARTIFICIAL INTELLIGENCE
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University

“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”

NEUROSCIENCE
Divya Chander | Chair, Neuroscience, Singularity University

“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that started expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two fetuses grown through IVF. Although this was ostensibly to provide genetic resilience against HIV infections, imagine what would happen if we started tinkering with neural circuitry and intelligence.”

Image Credit: Yurchanka Siarhei / Shutterstock.com


#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and on cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will get more and more used to them making independent decisions in our homes, streets, and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a few such test sites around the EU, the apartment is focused entirely on people with special needs—and on how robots can help them by interacting with connected devices in a smart home.

Suppose a smart doorbell with video features rings. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.

Equally, it could offer protection. It could inform them the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or not heard.
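As a flavor of the plumbing such scenarios need, here is a minimal sketch of a robot process subscribing to smart-home events over MQTT, a lightweight publish/subscribe protocol widely used in the Internet of Things. The broker address and topic names are hypothetical, and the paho-mqtt 1.x callback API is assumed.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = msg.payload.decode()
    if msg.topic == "home/doorbell" and event == "ring":
        pass  # here: navigate to the user and announce the visitor
    elif msg.topic == "home/oven" and event == "left_on":
        pass  # here: find the user and deliver a spoken warning

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("smart-home-hub.local", 1883)
client.subscribe([("home/doorbell", 0), ("home/oven", 0)])
client.loop_forever()  # block, dispatching events as they arrive
```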

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds which are crucially important for making users more accepting of the support and making it more effective.

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotic League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies, for example, are looking at the Internet of Things, experimenting with wireless sensors that collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is one initiative developing such systems. The aim is to reduce the costs and the risks of humans working in remote, hazardous locations.

ORCA tests a drone robot. ORCA
Working underwater is particularly challenging, since radio waves don’t travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (about 1,500 meters per second versus 300 million meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
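A back-of-the-envelope comparison shows what that speed gap means in practice for a one-kilometer link:

```python
distance = 1_000.0              # meters
acoustic, radio = 1_500.0, 3e8  # propagation speeds in meters per second
print(f"acoustic delay: {distance / acoustic:.2f} s")      # ~0.67 s
print(f"radio delay:    {distance / radio * 1e6:.1f} µs")  # ~3.3 µs
```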

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. There needs to be more effort to create a joint community, such as the dedicated workshops for such collaboration that we organized at the European Robotics Forum and the IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry/Shutterstock.com


#433901 The SpiNNaker Supercomputer, Modeled ...

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.

The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.

That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine that will eventually be able to simulate up to a billion neurons was switched on earlier this month.

The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.

In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.

The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses: depending on the model, that might be 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data that contains its address, and spike timing is implicitly conveyed.
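As a toy illustration of that scheme (not SpiNNaker’s actual software stack), here is a minimal Python sketch of leaky integrate-and-fire neurons exchanging address-only events: each “packet” is just the index of the neuron that fired, and each receiver looks up the corresponding synaptic weights locally.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                           # neurons
W = rng.normal(0.0, 0.3, (N, N))  # synaptic weights, W[i, j]: i -> j
v = np.zeros(N)                   # membrane potentials
LEAK, V_TH = 0.9, 1.0             # leak factor and firing threshold

for t in range(100):
    v *= LEAK                          # passive decay toward rest
    v[rng.random(N) < 0.05] += 0.6     # sparse external input
    fired = np.flatnonzero(v >= V_TH)  # neurons crossing threshold
    v[fired] = 0.0                     # reset after a spike
    for addr in fired:                 # an address event carries only `addr`;
        v += W[addr]                   # its arrival time conveys spike timing
    if fired.size:
        print(f"t={t}: spikes from neurons {fired.tolist()}")
```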

By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.

Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.

Even at a billion neurons, that still only represents about one percent of the human brain, so the machine will be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to run useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.

The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.

And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.

That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.

This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.

But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to transfer between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to back that up.

At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.

For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.

Image Credit: Adrian Grosu / Shutterstock.com
