Tag Archives: technology

#429985 7 Big Tech Trends That Are Changing the ...

Manufacturing is dirty, dull, and outmoded. It’s a slow-moving industry stuck in the past as new technologies out of Silicon Valley threaten to upend it. Stereotypes are fun, and misleading.
Let’s not forget manufacturing is the industry that made the modern age.
While many were musing about robots in science fiction, manufacturers were putting them to practical use. As tech news headlines hyped up 3D printing, manufacturers had been prototyping with it for decades. And though information technology is the source of the latest revolution, manufacturing is the source of the source. No chip fab facilities, no chips.
Manufacturing is high tech and low tech. Greasy, hands-on problem solving in some places and spotless clean rooms in others. Aging assembly lines and lines of choreographed robot arms. At Singularity University’s Exponential Manufacturing Summit, the industry was in focus, with a good look at what’s coming next. Manufacturing is changing, but that isn’t new.
What’s notable is the quickening pace of change.
The big themes this year: How to sift, identify, and make use of the latest technologies and tools to get nimble, break old habits, and stay ahead of the next big wave. Of course, it’s impossible to fit so much information into so little space, and what matters most depends on your lens.
Disclaimers aside, here’s what caught our eye at this year’s summit.

1. You too can use AI like Google, Facebook, and Amazon.
Jane Zavalishina, CEO at Yandex Data Factory, said the biggest AI misconception is that it’s this futuristic thing. It’s not. And it’s not just for tech giants either. The same machine learning software helping you find, watch, and buy what you want online can now be put to use in other contexts, such as analyzing raw factory data to dial in industrial processes and save costs.
Jane Zavalishina, CEO at Yandex Data Factory, at Exponential Manufacturing.
Zavalishina said machine learning software like this is available now and, sometimes, even free.
“These systems have been really useful for quite a while. And what’s changed is, in 2017, the technology is now so available that you don’t have to have super-skilled people using it,” said Neil Jacobstein, faculty chair of Artificial Intelligence and Robotics at Singularity University.
“You can [apply] the technology…to a wide variety of problems in industry from design to quality control to manufacturing to customer services…You’re seeing really good results now.”
Further Exponential Manufacturing Summit reading:
Where AI Is Today, and Where It’s Going in the Future
2. Robots are now smart enough to avoid killing you.
Robots are old hat in manufacturing, but they’ve always needed tightly controlled environments to work in and PhDs to program them. Robotics legend Rodney Brooks demoed Rethink Robotics’ Sawyer robot live onstage to show it can be programmed by anyone. And thanks to cheap 3D sensing hardware and better software, robots are also getting smart, lightweight, and aware enough to work next to humans without accidentally hurting them. The next step isn’t the end of human workers; it’s a collaboration that combines the best of robots and the best of humans.
Further reading:

Veo Gives Robots ‘Eyes and a Brain’ So They Can Safely Work With People
4 Keys to Making the Robots of Our Imagination a Reality

3. 3D printing is gearing up to take on mass manufacturing.
The dream of 3D printing has always been to make anything, anywhere, anytime. But the challenges have been cost, quality, and speed. With emerging solutions from Carbon and others, 3D printing finally appears poised to take on mass manufacturing. In areas where 3D printed final parts are possible, assembly lines will be dematerialized. That is, we’ll go directly from design to part, without the need to retool and rebuild infrastructure for every new product.
“A 3D printer…is a programmable factory,” said futurist, hacker, and inventor Pablos Holman. “It doesn't care what it makes. It doesn't care if it ever makes the same thing twice. And that is the powerful thing about these machines. That lights up our imagination.”
Further Exponential Manufacturing Summit reading:

Carbon’s Bold Mission to Finally Dematerialize Manufacturing
How Reebok Is Breaking the Mold by ‘3D Drawing’ Shoe Soles

4. Augmented reality will transform how we design and build.
Lots of people have heard of or tried virtual reality by now. There are commercial devices on the market and plenty of speculation about when it’ll achieve mainstream adoption.
John Werner, VP Strategic Partnerships at Meta, at Exponential Manufacturing.
Right behind virtual reality is augmented reality. Whereas virtual reality is completely immersive, augmented reality lays the digital world right on top of the real world. It’s a more complicated engineering problem, but it also has more applications. In a world of advanced AR, we’ll use a small wearable device to interface with computers like Tony Stark in Iron Man.
In manufacturing, this means designers ditching 2D modeling programs to do their work more quickly and intuitively in 3D spaces hovering over their desk. It means workers on the factory floor getting real-time big data insights about machines and processes laid out in front of their eyes or hands-free, step-by-step instructions guiding them to repair and build things.
“So, the whole world will be our display…and we'll be used to being in augmented reality all the time,” said Ray Kurzweil, Singularity University co-founder and chancellor. “I think that's the future of interacting with technology. It'll be an increasingly seamless part of our world.”
Further Exponential Manufacturing Summit reading:
The Next Great Computer Interface Is Emerging—But It Doesn’t Have a Name Yet

5. We’re reprogramming biology to make industrial stuff.
A bit further down the road, biomanufacturing will be a big deal, according to Raymond McCauley, chair of Digital Biology at Singularity University.
We’re learning to reprogram simple organisms into sensors and miniature factories for making fibers, fuel, and food, he said. “Anything that is not just metal being bent. Most of the materials and how they’re produced and recycled will happen because of biological means.”
And we’ve seen progress. McCauley noted gene-tweaked algae making biofuels and modified bacteria spinning spider silk. But, he said, while the tools to make biomanufacturing a reality are getting cheaper and more powerful every year, scaling up is still a big challenge.
Further Exponential Manufacturing Summit reading:
Chisels to Genes: How We’ll Soon Grow What We Used to Build

6. To survive change, innovate on the edge more…a lot more.
Technology is clearly moving fast. So how do you keep the pace?
Companies on the S&P 500 once had 50- or 60-year lifespans. These days that number is more like 20. Small software startups can disrupt giants of industry. Innovation is no longer that thing you do on the side; it’s a critical and increasingly central survival skill.
Geoff Tuff, leader of digital transformation at Monitor Deloitte, and his team came up with the “golden ratio for innovation” five years ago. Their advice? Spend 70% of your innovation resources on the core, 20% on areas adjacent to the core, and 10% on the transformational space. This wasn’t supposed to be a rule set in stone, but rather a way to start the conversation: How much and where do we innovate? The short answer today: More and outside your comfort zone.
Tuff thinks his ratio is likely already outdated.
"70-20-10 no longer applies, and I have no idea what the right numbers are now," Tuff said. "But I'm pretty sure it's something more like 50-30-20…[or even] 50-25-25."
Further Exponential Manufacturing Summit reading:
How to Stay Innovative Amid the Fastest Pace of Change in History
7. The pace is accelerating. Can we keep up?
The tone of the conference was hopeful and excited, but speakers also discussed the implications, some of them worrying. Foremost among these was the pace of technology-fueled job creation and destruction. Advanced AI and robotics promise widespread automation. Historically, automation has done away with old, less satisfying jobs in favor of, on the whole, better ones.
“People say, great, what new jobs? I say, I don't know, we haven’t invented them yet,” Kurzweil said. “It's not a great political answer. It remains the answer today. It happens to be true.”
But transitioning from one skill set to another isn’t simple and can be too easily glossed over. In the past, such transitions have been very bumpy. And that’s the problem keeping Singularity University co-founder and chairman, Peter Diamandis, up at night. He worries the time it takes to make the transition won’t match the rate of change. It’ll all happen too fast.
“In 1810, the United States had 84% farmers. Today it's 2%. A huge change in our job markets. But that was over a long period of time,” Diamandis said. What if we lose “huge swathes” of jobs over a 20-year period instead? We’ll see social and political unrest on a grand scale.
Diamandis said universal basic income may be a way to help ease the transition. And while we can’t shrink from the coming challenges, neither can we let them blind us to the hugely positive and beneficial change being wrought alongside.
“The son or daughter of a billionaire in New York or the son or daughter of the poorest farmer in Kenya is going to have access to the same level of education delivered by an AI, the same level of healthcare delivered by an AI, or intervention delivered by a robot. So, we're going to start to demonetize all the things we think of as the higher stakes of living,” he said.
Further Exponential Manufacturing Summit reading:

What We’re Learning From a Big Universal Basic Income Experiment
Why the Cost of Living Is Poised to Plummet in the Next 20 Years
Why the World Is Better Than You Think in 10 Powerful Charts

Posted in Human Robots

#429983 Chess-playing robot star of Taiwan tech ...

A chess-playing robot stole the show as Asia's largest tech fair kicked off in Taiwan Tuesday with artificial intelligence centre stage.

Posted in Human Robots

#429981 CMU’s interactive tool helps ...

A new interactive design tool developed by Carnegie Mellon University's Robotics Institute enables both novices and experts to build customized legged or wheeled robots using 3D-printed components and off-the-shelf actuators.

Posted in Human Robots

#429972 How to Build a Mind? This Theory May ...

From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It's usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published June 19, 2016. We hope you enjoy it!
How do intelligent minds learn?
Consider a toddler navigating her day, bombarded by a kaleidoscope of experiences. How does her mind discover what’s normal happenstance and begin building a model of the world? How does she recognize unusual events and incorporate them into her worldview? How does she understand new concepts, often from just a single example?
These are the same questions machine learning scientists ask as they inch closer to AI that matches — or even beats — human performance. Many of AI’s recent victories — IBM Watson against Ken Jennings, Google’s AlphaGo versus Lee Sedol — are rooted in network architectures inspired by multi-layered processing in the human brain.
In a review paper, published in Trends in Cognitive Sciences, scientists from Google DeepMind and Stanford University penned a long-overdue update on a prominent theory of how humans and other intelligent animals learn.
In broad strokes, the Complementary Learning Systems (CLS) theory states that the brain relies on two systems that allow it to rapidly soak in new information, while maintaining a structured model of the world that’s resilient to noise.
“The core principles of CLS have broad relevance … in understanding the organization of memory in biological systems,” wrote the authors in the paper.
What’s more, the theory’s core principles — already implemented in recent themes in machine learning — will no doubt guide us towards designing agents with artificial intelligence, they wrote.
Dynamic Duo
In 1995, a team of prominent psychologists sought to explain a memory phenomenon: patients with damage to their hippocampus could no longer form new memories but had full access to remote memories and concepts from their past.
Given the discrepancy, the team reasoned that new learning and old knowledge likely relied on two separate learning systems. Empirical evidence soon pointed to the hippocampus as the site of new learning, and the cortex — the outermost layer of the brain — as the seat of remote memories.
In a landmark paper, they formalized their ideas into the CLS theory.
According to CLS, the cortex is the memory warehouse of the brain. Rather than storing single experiences or fragmented knowledge, it serves as a well-organized scaffold that gradually accumulates general concepts about the world.
This idea, wrote the authors, was inspired by evidence from early AI research.
Experiments with multi-layer neural nets, the precursors to today’s powerful deep neural networks, showed that, with training, the artificial learning systems gradually learned to extract structure from the training data by adjusting connection weights — the computer equivalent to neural connections in the brain.
Put simply, the layered structure of the networks allows them to gradually distill individual experiences (or examples) into high-level concepts.
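The gradual weight-adjustment the authors describe can be seen in a toy example. This sketch is illustrative and not from the paper: a tiny two-layer network, trained by repeated interleaved exposure to four examples, slowly nudges its connection weights until it captures a structure (XOR) that no single weight encodes on its own.

```python
# Toy illustration (not from the paper): a tiny two-layer network slowly
# adjusts its connection weights -- the computer equivalent of neural
# connections -- until it extracts the structure (here, XOR) hidden in
# four repeatedly presented examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                    # slow, interleaved exposure
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # network's prediction
    # Backpropagate the error, nudging each weight a little per pass.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))           # should approach [0, 1, 1, 0]
```

The many small updates are the point: like the cortex, the network only builds an accurate model by aggregating over thousands of passes, which is exactly the slowness the theory attributes to cortical learning.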
Similar to deep neural nets, the cortex is made up of multiple layers of neurons interconnected with each other, with several input and output layers. It readily receives data from other brain regions through input layers and distills them into databases (“prior knowledge”) to draw upon when needed.
“According to the theory, such networks underlie acquired cognitive abilities of all types in domains as diverse as perception, language, semantic knowledge representation and skilled action,” wrote the authors.
Perhaps unsurprisingly, the cortex is often touted as the basis of human intelligence.
Yet this system isn’t without fault. For one, it’s painfully slow. Since a single experience is considered a single “sample” in statistics, the cortex needs to aggregate over years of experience in order to build an accurate model of the world.
Another issue arises after the network matures. Information stored in the cortex is relatively faithful and stable. It’s a blessing and a curse. Consider when you need to dramatically change your perception of something after a single traumatic incident. It pays to be able to update your cortical database without having to go through multiple similar events.
But even the update process itself could radically disrupt the existing network. Jamming new knowledge into a multi-layer network, without regard for existing connections, results in intolerable changes to the network. The consequences are so dire that scientists call the phenomenon “catastrophic interference.”
Thankfully, we have a second learning system that complements the cortex.
Unlike the slow-learning cortex, the hippocampus concerns itself with breaking news. Not only does it encode a specific event (for example, drinking your morning coffee), it also jots down the context in which the event occurred (you were in your bed checking email while drinking coffee). This lets you easily distinguish between similar events that happened at different times.
The reason that the hippocampus can encode and delineate detailed memories — even when they’re remarkably similar — is due to its peculiar connection pattern. When information flows into the structure, it activates a different neural activity pattern for each experience in the downstream pathway. Different network pattern; different memory.
In a way, the hippocampus learning system is the antithesis of its cortical counterpart: it’s fast, very specific and tailored to each individual experience. Yet the two are inextricably linked: new experiences, temporarily stored in the hippocampus, are gradually integrated into the cortical knowledge scaffold so that new learning becomes part of the databank.
But how do connections from one neural network “jump” to another?
System to System
The original CLS theory didn’t yet have an answer. In the new paper, the authors synthesized findings from recent experiments and pointed out one way system transfer could work.
Scientists don’t yet have all the answers, but the process seems to happen during rest, including sleep. By recording brain activity of sleeping rats that had been trained on a certain task the day before, scientists repeatedly found that their hippocampi produced a type of electrical activity called sharp-wave ripples (SWR) that propagate to the cortex.
When examined closely, the ripples were actually “replays” of the same neural pattern that the animal had generated during learning, sped up by a factor of about 20. Picture fast-forwarding through a recording — that’s essentially what the hippocampus does during downtime. This speeding up compresses peaks of neural activity into tighter time windows, which in turn boosts plasticity between the hippocampus and the cortex.
In this way, changes in the hippocampal network can correspondingly tweak neural connections in the cortex.
Unlike catastrophic interference, SWRs represent a much gentler way to integrate new information into the cortical database.
Replay also has some other perks. You may remember that the cortex requires a lot of training data to build its concepts. Since a single event is often replayed many times during a sleep episode, SWRs offer a deluge of training data to the cortex.
SWRs also offer the brain a way to “hack reality” to its own benefit. The hippocampus doesn’t faithfully replay all recent activation patterns. Instead, it picks rewarding events and selectively replays them to the cortex.
This means that rare but meaningful events might be given privileged status, allowing them to preferentially reshape cortical learning.
“These ideas…view memory systems as being optimized to the goals of an organism rather than simply mirroring the structure of the environment,” explained the authors in the paper.
This reweighting process is particularly important in enriching the memories of biological agents, something important to consider for artificial intelligence, they wrote.
Biological to Artificial
The two-system set-up is nature’s solution to efficient learning.
“By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences,” says Stanford psychologist and article author Dr. James McClelland in a press interview.
According to DeepMind neuroscientists Dharshan Kumaran and Demis Hassabis, both authors of the paper, CLS has been instrumental in recent breakthroughs in machine learning.
Convolutional neural networks (CNN), for example, are a type of deep network modeled after the slow-learning neocortical system. Similar to its biological muse, CNNs also gradually learn through repeated, interleaved exposure to a large amount of training data. The system has been particularly successful in achieving state-of-the-art performance in challenging object-recognition tasks, including ImageNet.
Other aspects of CLS theory, such as hippocampal replay, have also been successfully implemented in systems such as DeepMind’s Deep Q-Network. Last year, the company reported that the system was capable of learning and playing dozens of Atari 2600 games at a level comparable to professional human gamers.
“As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of gameplay and replays them in interleaved fashion. This greatly amplifies the use of actual gameplay experience and avoids the tendency for a particular local run of experience to dominate learning in the system,” explains Kumaran.
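The memory buffer Kumaran describes can be sketched in a few lines. This is an illustrative toy, not DeepMind's implementation: recent transitions go into a fixed-size memory, and learning draws random minibatches from it, so old and new experience are interleaved and no single run of gameplay dominates the updates.

```python
# Illustrative replay buffer (not DeepMind's code): recent transitions
# are stored in a bounded memory and sampled uniformly at random,
# interleaving old and new experience during learning.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # oldest entries fall out

    def add(self, transition):
        self.memory.append(transition)

    def sample(self, batch_size):
        # Uniform sampling mixes experience from different episodes,
        # which prevents one local run from dominating the updates.
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))

buf = ReplayBuffer(capacity=1000)
for step in range(2000):              # 2000 steps; only the last 1000 kept
    buf.add((step, "state", "action", "reward"))

batch = buf.sample(32)
print(len(buf.memory), len(batch))    # 1000 32
```

The hippocampal analogy also suggests the refinement mentioned above: instead of sampling uniformly, weight the draw toward high-reward transitions, the machine-learning counterpart of the brain preferentially replaying rewarding events.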
Hassabis agrees.
“We believe that the updated CLS theory will likely continue to provide a framework for future research, for both neuroscience and the quest for artificial general intelligence,” he says.
Image Credit: Shutterstock

Posted in Human Robots

#429966 These Robots Can Teach Other Robots How ...

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.
Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.
Humans, on the other hand, can typically demonstrate a simple task, like how to stack logs, to someone else just once before they pick it up, and that person can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.
In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.
The operator then shows the robot a single demonstration of a multistep task, and using its database of potential moves, it devises a motion plan to carry out the job at hand.
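The two-stage idea can be sketched at a very high level. The names and structure below are invented for illustration and are not the actual C-LEARN code: a library of known motion primitives is built first, and a single demonstration is then matched against that library to produce a multistep plan.

```python
# Hypothetical sketch of the two-stage idea (names invented, not the
# actual C-LEARN system): a library of learned motion primitives, then
# a single demonstration mapped onto it to produce a multistep plan.
ACTION_LIBRARY = {
    "reach": lambda obj: f"move gripper to {obj}",
    "grasp": lambda obj: f"close gripper on {obj}",
    "place": lambda obj: f"release {obj} at target",
}

def plan_from_demo(demonstration):
    """Map each demonstrated step to a primitive from the library."""
    plan = []
    for action, obj in demonstration:
        if action not in ACTION_LIBRARY:
            raise ValueError(f"no primitive for '{action}'")
        plan.append(ACTION_LIBRARY[action](obj))
    return plan

# One demonstration of a multistep task yields a full motion plan.
demo = [("reach", "bolt"), ("grasp", "bolt"), ("place", "bolt")]
print(plan_from_demo(demo))
```

The division of labor is the key design choice: the expensive, general knowledge (the primitive library) is learned once, while each new task needs only a single cheap demonstration.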
“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.
“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”
The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.
But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.
The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.
“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.
“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”
The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.
Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.
A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.
So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.
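The structure of that transfer can be sketched conceptually. The code below is an invented illustration, not DeepMind's progressive network implementation: a column trained in simulation is frozen, and a new column for the real robot receives the frozen column's hidden features through lateral connections, so simulation knowledge is reused rather than overwritten.

```python
# Conceptual sketch (invented names, not DeepMind's code) of the
# progressive-network idea: a frozen "simulation" column feeds its
# features sideways into a new "real-world" column, so transfer never
# overwrites what was learned in simulation.
import numpy as np

rng = np.random.default_rng(1)

# "Simulation" column: already trained, weights frozen.
W_sim = rng.normal(size=(4, 8))

def sim_features(x):
    return np.tanh(x @ W_sim)       # frozen; never updated

# New "real-world" column: its own weights, plus lateral weights that
# read the frozen column's features. Only these would be trained.
W_real = rng.normal(size=(4, 8)) * 0.1
W_lateral = rng.normal(size=(8, 8)) * 0.1

def real_features(x):
    return np.tanh(x @ W_real + sim_features(x) @ W_lateral)

x = rng.normal(size=(1, 4))
print(real_features(x).shape)       # (1, 8)
```

Because the simulation column's weights are never touched, the real-world column gets the accelerated learning of the simulation for free while adapting only its own and lateral weights to messy real data.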
These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent 'master learning algorithms' that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”
If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.
Image Credit: MITCSAIL/YouTube

Posted in Human Robots