Tag Archives: industrial

#432572 Robots Can Swim, Fetch, Lift, and Dance. ...

Robotics has come a long way in the past few years. Robots can now fetch items from specific spots in massive warehouses, swim through the ocean to study marine life, and lift 200 times their own weight. They can even perform synchronized dance routines.

But the really big question is—can robots put together an Ikea chair?

A team of engineers from Nanyang Technological University in Singapore decided to find out, detailing their work in a paper published last week in the journal Science Robotics. The team took industrial robot arms and equipped them with parallel grippers, force-detecting sensors, and 3D cameras, and wrote software enabling the souped-up bots to tackle chair assembly. The robots’ starting point was a set of chair parts randomly scattered within reach.

As impressive as the above-mentioned robotic capabilities are, it’s worth noting that they’re mostly limited to a single skill. Putting together furniture, on the other hand, requires using and precisely coordinating multiple skills, including force control, visual localization, hand-eye coordination, and the patience to read each step of the manual without rushing through it and messing everything up.

Indeed, Ikea furniture, while meant to be simple and user-friendly, has left even the best of us scratching our heads and holding a spare oddly-shaped piece of wood as we stare at the desk or bed frame we just put together—or, for the less even-tempered among us, throwing said piece of wood across the room.

It’s a good thing robots don’t have tempers, because it took a few tries for the bots to get the chair assembly right.

Practice makes perfect, though (or in this case, rewriting code makes perfect), and these bots didn’t give up so easily. They had to hone three different skills: identifying which part was which among the scattered, differently-shaped pieces of wood, coordinating their movements to put those pieces in the right place, and knowing how much force to use in various steps of the process (i.e., more force is needed to connect two pieces than to pick up one piece).
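
To make the third of those skills concrete, here is a minimal, runnable sketch of a force-guided insertion loop, the kind of low-level primitive a chair-assembly pipeline leans on. The simulated arm and every name in it are hypothetical illustrations for this article, not the NTU team’s actual software.

```python
# A toy force-controlled insertion: press down in small steps until the force
# sensor reports enough resistance to conclude the part is seated.
# SimulatedArm is a stand-in for a real robot interface (hypothetical names).

class SimulatedArm:
    def __init__(self):
        self.depth_mm = 0.0              # how far the pin has been pressed in

    def nudge_down(self, step_mm):
        self.depth_mm += step_mm         # advance the end effector slightly

    def sensed_force(self):
        # Pretend resistance ramps up once the pin starts seating at ~8 mm.
        return max(0.0, (self.depth_mm - 8.0) * 5.0)


def insert_until_seated(arm, force_threshold_n=20.0, step_mm=0.5, max_depth_mm=30.0):
    """Push down in small increments until the sensed force says the part is seated."""
    while arm.sensed_force() < force_threshold_n:
        if arm.depth_mm >= max_depth_mm:
            raise RuntimeError("Part never seated; back off and replan.")
        arm.nudge_down(step_mm)
    return arm.depth_mm


if __name__ == "__main__":
    arm = SimulatedArm()
    depth = insert_until_seated(arm)
    print(f"Seated at {depth:.1f} mm with {arm.sensed_force():.1f} N of resistance")
```

The point of the loop is that the robot stops when the measured resistance says the part is seated, rather than driving blindly to a pre-programmed depth, which is why force sensing matters as much as vision here.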

A few tries later, the bots were able to assemble the chair from start to finish in about nine minutes.

On the whole, nicely done. But before we applaud the robots’ success too loudly, it’s important to note that they didn’t autonomously assemble the chair. Rather, each step of the process was planned and coded by engineers, down to the millimeter.

However, the team believes this closely-guided chair assembly was just a first step, and they see a not-so-distant future where combining artificial intelligence with advanced robotic capabilities could produce smart bots that would learn to assemble furniture and do other complex tasks on their own.

Future applications mentioned in the paper include electronics and aircraft manufacturing, logistics, and other high-mix, low-volume sectors.

Image Credit: Francisco Suárez-Ruiz and Quang-Cuong Pham/Nanyang Technological University

Posted in Human Robots

#432519 Robot Cities: Three Urban Prototypes for ...

Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fueling our imagination about artificial humans and autonomous machines.

Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political, and healthcare domains. Our cities are becoming too big for humans to manage.

Good city governance enables and maintains smooth flow of things, data, and people. These include public services, traffic, and delivery services. Long queues in hospitals and banks imply poor management. Traffic congestion demonstrates that roads and traffic systems are inadequate. Goods that we increasingly order online don’t arrive fast enough. And the WiFi often fails our 24/7 digital needs. In sum, urban life, characterized by environmental pollution, a fast pace of life, traffic congestion, constant connectivity, and increased consumption, needs robotic solutions—or so we are led to believe.

Is this what the future holds? Image Credit: Photobank gallery / Shutterstock.com
In the past five years, national governments have started to see automation as the key to (better) urban futures. Many cities are becoming test beds where national and local governments experiment with robots in social spaces, where robots have both a practical purpose (to facilitate everyday life) and a symbolic role (to demonstrate good city governance). Whether through autonomous cars, automated pharmacists, service robots in local stores, or autonomous drones delivering Amazon parcels, cities are being automated at a steady pace.

Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses are also increasingly automated and robotized. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating, and optimizing traffic flows. Automated vertical farms are innovating production of food in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital.” Social robots in many guises—from police officers to restaurant waiters—are appearing in urban public and commercial spaces.

Vertical indoor farm. Image Credit: Aisyaqilumaranas / Shutterstock.com
As these examples show, urban automation is taking place in fits and starts, ignoring some areas and racing ahead in others. But as yet, no one seems to be taking account of all of these various and interconnected developments. So, how are we to forecast our cities of the future? Only a broad view allows us to do this. To give a sense, here are three examples: Tokyo, Singapore, and Dubai.

Tokyo
As it prepares to host the 2020 Olympics, Japan’s government plans to use the event to showcase many new robotic technologies. Tokyo is therefore becoming an urban living lab. The institution in charge is the Robot Revolution Realization Council, established in 2014 by the government of Japan.

Tokyo: city of the future. Image Credit: ESB Professional / Shutterstock.com
The main objectives of Japan’s robotization are economic reinvigoration, cultural branding, and international demonstration. In line with this, the Olympics will be used to introduce and influence global technology trajectories. In the government’s vision for the Olympics, robot taxis transport tourists across the city, smart wheelchairs greet Paralympians at the airport, ubiquitous service robots greet customers in 20-plus languages, and foreign visitors, aided by interactive augmentation, speak with the local population in Japanese.

Tokyo shows us what the process of state-controlled creation of a robotic city looks like.

Singapore
Singapore, on the other hand, is a “smart city.” Its government is experimenting with robots for a different purpose: as physical extensions of existing systems, to improve the management and control of the city.

In Singapore, the techno-futuristic national narrative sees robots and automated systems as a “natural” extension of the existing smart urban ecosystem. This vision is unfolding through autonomous delivery robots (the Singapore Post’s delivery drone trials in partnership with Airbus Helicopters) and driverless shuttle buses (EasyMile’s EZ10).

Meanwhile, Singapore hotels are employing state-subsidized service robots to clean rooms and deliver linen and supplies, and robots for early childhood education have been piloted to understand how robots can be used in pre-schools in the future. Health and social care is one of the fastest growing industries for robots and automation in Singapore and globally.

Dubai
Dubai is another emerging prototype of a state-controlled smart city. But rather than seeing robotization simply as a way to improve the running of systems, Dubai is intensively robotizing public services with the aim of creating the “happiest city on Earth.” Urban robot experimentation in Dubai reveals that authoritarian state regimes are finding innovative ways to use robots in public services, transportation, policing, and surveillance.

National governments are competing to position themselves on the global politico-economic landscape through robotics, and striving to establish themselves as regional leaders. This was the thinking behind the city’s September 2017 test flight of a flying taxi developed by the German drone firm Volocopter—staged to “lead the Arab world in innovation.” Dubai’s objective is to automate 25% of its transport system by 2030.

It is currently also experimenting with Barcelona-based PAL Robotics’ humanoid police officer and an autonomous vehicle from Singapore-based OTSAW. If the experiments are successful, the government has announced it will robotize 25% of the police force by 2030.

While imaginary robots are fueling our imagination more than ever—from Ghost in the Shell to Blade Runner 2049—real-world robots make us rethink our urban lives.

These three urban robotic living labs—Tokyo, Singapore, Dubai—help us gauge what kind of future is being created, and by whom. From hyper-robotized Tokyo to smartest Singapore and happy, crime-free Dubai, these three comparisons show that, no matter what the context, robots are perceived as a means to achieve global futures based on a specific national imagination. Just like the films, they demonstrate the role of the state in envisioning and creating that future.

This article was originally published on The Conversation. Read the original article.

Image Credit: 3000ad / Shutterstock.com

Posted in Human Robots

#432262 How We Can ‘Robot-Proof’ Education ...

Like millions of other individuals in the workforce, you’re probably wondering if you will one day be replaced by a machine. If you’re a student, you’re probably wondering if your chosen profession will even exist by the time you’ve graduated. From driving to legal research, there isn’t much that technology hasn’t already automated (or begun to automate). Many of us will need to adapt to this disruption in the workforce.

But it’s not enough for students and workers to adapt, become lifelong learners, and re-skill themselves. We also need to see innovation and initiative at an institutional and governmental level. According to research by The Economist, almost half of all jobs could be automated by computers within the next two decades, and no government in the world is prepared for it.

While many see the current trend in automation as a terrifying threat, others see it as an opportunity. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University president Joseph Aoun proposes educating students in a way that will allow them to do the things that machines can’t. He calls for a new paradigm that teaches young minds “to invent, to create, and to discover”—filling the relevant needs of our world that robots simply can’t fill. Aoun proposes a much-needed novel framework that will allow us to “robot-proof” education.

Literacies and Core Cognitive Capacities of the Future
Aoun lays out a framework for a new discipline, humanics, which describes the capacities and literacies that emerging education systems should cultivate. At its core, the framework emphasizes our uniquely human abilities and strengths.

The three key literacies include data literacy (being able to manage and analyze big data), technological literacy (being able to understand exponential technologies and conduct computational thinking), and human literacy (being able to communicate and evaluate social, ethical, and existential impact).

Beyond the literacies, at the heart of Aoun’s framework are four cognitive capacities that are crucial to develop in our students if they are to be resistant to automation: critical thinking, systems thinking, entrepreneurship, and cultural agility.

“These capacities are mindsets rather than bodies of knowledge—mental architecture rather than mental furniture,” he writes. “Going forward, people will still need to know specific bodies of knowledge to be effective in the workplace, but that alone will not be enough when intelligent machines are doing much of the heavy lifting of information. To succeed, tomorrow’s employees will have to demonstrate a higher order of thought.”

Like many other experts in education, Joseph Aoun emphasizes the importance of critical thinking. This is important not just for taking a skeptical approach to information, but also for logically breaking down a claim or problem into multiple layers of analysis. We spend so much time teaching students how to answer questions that we often neglect to teach them how to ask questions. Asking questions—and asking good ones—is a foundation of critical thinking. Before you can solve a problem, you must be able to critically analyze and question what is causing it. This is why critical thinking and problem solving are coupled together.

The second capacity, systems thinking, involves being able to think holistically about a problem. The most creative problem-solvers and thinkers are able to take a multidisciplinary perspective and connect the dots between many different fields. According to Aoun, it “involves seeing across areas that machines might be able to comprehend individually but that they cannot analyze in an integrated way, as a whole.” It represents the absolute opposite of how most traditional curricula are structured, with their emphasis on isolated subjects and content knowledge.

Among the most difficult-to-automate tasks or professions is entrepreneurship.

In fact, some have gone so far as to claim that in the future, everyone will be an entrepreneur. Yet traditionally, initiative has been something students show in spite of or in addition to their schoolwork. For most students, developing a sense of initiative and entrepreneurial skills has often been part of their extracurricular activities. It needs to be at the core of our curricula, not a supplement to them. At its core, teaching entrepreneurship is about teaching our youth to solve complex problems with resilience, to become global leaders, and to solve grand challenges facing our species.

Finally, in an increasingly globalized world, there is a need for more workers with cultural agility, the ability to operate effectively across different cultural contexts and norms.

One of the major trends today is the rise of the contingent workforce. We are seeing an increasing percentage of full-time employees working remotely through the cloud. Multinational corporations have teams of employees collaborating at different offices across the planet. Collaboration across online networks requires a skillset of its own. As education expert Tony Wagner points out, within these digital contexts, leadership is no longer about commanding with top-down authority, but rather about leading by influence.

An Emphasis on Creativity
The framework also puts an emphasis on experiential or project-based learning, wherein the heart of the student experience is not lectures or exams but solving real-life problems and learning by doing, creating, and executing. Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.

In fact, technological trends are giving rise to what many thought leaders refer to as the imagination economy. This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Consequently, we need to develop our students’ creative abilities to ensure their success against machines.

In its simplest form, creativity represents the ability to imagine radical ideas and then go about executing them in reality.

In many ways, we are already living in our creative imaginations. Consider this: every invention or human construct—whether it be the spaceship, an architectural wonder, or a device like an iPhone—once existed as a mere idea, imagined in someone’s mind. The world we have designed and built around us is an extension of our imaginations and is only possible because of our creativity. Creativity has played a powerful role in human progress—now imagine what the outcomes would be if we tapped into every young mind’s creative potential.

The Need for a Radical Overhaul
What is clear from the recommendations of Aoun and many other leading thinkers in this space is that an effective 21st-century education system is radically different from the traditional systems we currently have in place. There is a dramatic contrast between these future-oriented frameworks and the way we’ve structured our traditional, industrial-era and cookie-cutter-style education systems.

It’s time for a change, and incremental changes or subtle improvements are no longer enough. What we need to see are more moonshots and disruption in the education sector. In a world of exponential growth and accelerating change, it is never too soon for a much-needed dramatic overhaul.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.

One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency had announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.

The machine detected Kepler-90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible dip in brightness recorded when a planet passes in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.
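
To get a feel for what that kind of classifier does, here is a deliberately tiny, self-contained sketch that trains a small neural network to separate synthetic light curves containing a transit-like dip from pure noise. It is illustrative only: the researchers’ model was a far deeper network trained on real, vetted Kepler signals, and none of the data or numbers below come from that work.

```python
# Toy transit classifier: flat light curves with noise, some with a shallow dip.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_SAMPLES, N_POINTS = 2000, 200

def synthetic_light_curve(has_planet):
    flux = 1.0 + rng.normal(0, 0.001, N_POINTS)        # flat star plus noise
    if has_planet:
        start = rng.integers(20, N_POINTS - 30)
        flux[start:start + 10] -= 0.002                 # shallow transit dip
    return flux

y = rng.integers(0, 2, N_SAMPLES)                       # 1 = planet, 0 = false positive
X = np.array([synthetic_light_curve(label) for label in y])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Swapping the synthetic curves for labeled Kepler signals and the small dense network for a much larger one is, in rough outline, the shape of the real pipeline, at vastly greater scale and care.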

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can remember vast numbers of facts without error, extract information effortlessly from millions of scientific papers, and exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly what led a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.
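
As a rough illustration of why juggling many predictors helps, here is a short sketch that fits a standard regression model to synthetic data with 22 stand-in features. The feature values, the choice of model, and the numbers it prints are all assumptions for demonstration; they are not the study’s actual method or data.

```python
# Multi-variable regression in the spirit of the Greenland heat-flux study,
# with synthetic data standing in for the 22 geologic inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sites, n_features = 500, 22            # 22 stand-ins for the geologic variables
X = rng.normal(size=(n_sites, n_features))

# Pretend heat flux depends on a weighted mix of several variables plus noise.
true_weights = rng.normal(size=n_features)
heat_flux = X @ true_weights + rng.normal(scale=0.5, size=n_sites)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, heat_flux, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

# Which inputs mattered most? Fit once and inspect the importances.
model.fit(X, heat_flux)
top = np.argsort(model.feature_importances_)[::-1][:3]
print("three most informative feature indices:", top)
```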

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at the University of Kansas, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short- and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
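
That hypothesize-experiment-interpret cycle can be sketched in a few lines. The toy below keeps a pool of candidate hypotheses about which compound inhibits a target, runs a simulated assay each cycle, and prunes the pool based on the result. Eve’s real pipeline involves laboratory robotics and far richer statistical models, so treat every name and number here as a placeholder.

```python
# A closed hypothesis-experiment loop, in miniature.
import random

random.seed(0)

# Hypotheses: "compound i inhibits the target" for 16 candidate compounds.
hypotheses = set(range(16))
true_inhibitor = random.choice(list(hypotheses))

def run_experiment(compound):
    """Simulated assay: does this compound inhibit the target? (noise-free here)"""
    return compound == true_inhibitor

cycle = 0
while len(hypotheses) > 1:
    cycle += 1
    candidate = next(iter(hypotheses))      # pick a still-plausible hypothesis to test
    if run_experiment(candidate):
        hypotheses = {candidate}            # confirmed: drop everything else
    else:
        hypotheses.discard(candidate)       # refuted: drop this hypothesis
    print(f"cycle {cycle}: {len(hypotheses)} hypotheses remain")

print("surviving hypothesis:", hypotheses.pop(), "- true answer was", true_inhibitor)
```

The loop structure, not the trivial assay, is the point: each result feeds back into which experiment is worth running next.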

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unravel the complexities of biological systems, since even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory made up of two enormous laser interferometers located thousands of miles apart in Washington and Louisiana. The research was published in Physics Letters B.
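
As a schematic of what “real-time processing” means here, the sketch below slides a one-second window across a synthetic strain series and flags windows that trip a detector. A trivial excess-power threshold stands in for the trained neural network, and every number (sampling rate, amplitudes, threshold) is an assumption for illustration, not a property of LIGO data or of Deep Filtering itself.

```python
# Sliding-window detection over a synthetic "strain" time series.
import numpy as np

rng = np.random.default_rng(2)
SAMPLE_RATE = 4096                       # samples per second (illustrative)
WINDOW = SAMPLE_RATE                     # analyze one second of data at a time

def fake_detector(window):
    """Stand-in for the trained network: flag windows with excess power."""
    return float(np.mean(window ** 2)) > 2.0e-6

# Gaussian noise with a short oscillatory burst injected six seconds in.
strain = rng.normal(scale=1e-3, size=10 * SAMPLE_RATE)
burst_start = 6 * SAMPLE_RATE
strain[burst_start:burst_start + 512] += 5e-3 * np.sin(np.linspace(0, 60, 512))

# Slide the window forward a quarter-second at a time (75 percent overlap).
for start in range(0, len(strain) - WINDOW, WINDOW // 4):
    if fake_detector(strain[start:start + WINDOW]):
        print(f"candidate event near t = {start / SAMPLE_RATE:.2f} s")
```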

In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com

Posted in Human Robots

#432181 Putting AI in Your Pocket: MIT Chip Cuts ...

Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.

Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.

That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.

Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.

Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold it then transmits an output to multiple neurons above it. The strength of the connection between neurons is governed by a weight, which is set during training.

This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.

The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.

The approach requires the weights of the connections to be binary rather than a range of values, but previous theoretical work had suggested this wouldn’t dramatically impact accuracy, and the researchers found the chip’s results were generally within two to three percent of the conventional non-binary neural net running on a standard computer.
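
Here is a small, software-only numpy illustration of the binary-weight idea for a single layer: replace each weight with its sign plus one per-neuron scale factor and compare the outputs. The weights here are random rather than trained, so the printed correlation is only a rough indication; the two-to-three-percent figure above refers to end-to-end accuracy of networks trained with binary weights in mind, and the chip itself performs the summation in analog circuitry inside memory, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_inputs, n_neurons = 256, 32

x = rng.normal(size=n_inputs)                          # incoming activations
W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # stand-in "trained" weights

# Conventional layer: one multiply-accumulate per weight, which is
# exactly the data shuttling the MIT chip tries to avoid.
full_precision = W @ x

# Binary-weight layer: keep only the sign of each weight, plus one
# scale factor per neuron so output magnitudes stay comparable.
scale = np.mean(np.abs(W), axis=1)
binary = scale * (np.sign(W) @ x)

corr = np.corrcoef(full_precision, binary)[0, 1]
print(f"correlation between full-precision and binary-weight outputs: {corr:.2f}")
```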

This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.

“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.

“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”

It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year ARM unveiled two new chips: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.

Qualcomm’s latest mobile chip, the Snapdragon 845, features a GPU and is heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.

Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.

Image Credit: Blue Planet Studio / Shutterstock.com

Posted in Human Robots