Tag Archives: mobile

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark distinguishes three levels of computational sophistication for organisms. Life 1.0 comprises single-celled organisms like bacteria; here, hardware is indistinguishable from software. A bacterium’s behavior is encoded in its DNA, and it cannot learn new things within its lifetime.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But when engineering and manufacturing an individual gadget is already a complex process, making stuff that can turn into many different items is bound to be harder still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D-printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks, and there’s an important balance to strike. If the blocks are too big, the rearranged matter will be lumpy, making it useless for applications that demand fine detail—tools for delicate manipulation, say—and unable to simulate a range of textures. If the pieces are too small, a different set of problems arises.

Imagine a setup where each piece is a small robot. Each unit has to pack the robot’s power source and its brain—or at least some kind of signal generator and signal processor—into the same compact package. One might then simulate a range of textures and strengths by varying the strength of the “bond” between individual units: your desk needs to be a little firmer than your bed, which is nicer with a little more give.

Early steps toward creating this kind of matter have been taken by those developing modular robots. Plenty of groups are working on this, including teams at MIT, in Lausanne, and in Brussels.

In the Brussels system, one individual robot acts as a centralized decision-maker—the “brain” unit—but additional robots can autonomously join it as and when needed to change the shape and structure of the overall system. Although the system numbers only ten units at present, it’s a proof of concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the ideal form for a robot has been the subject of much debate. In fact, one of the major recent robotics competitions—DARPA’s Robotics Challenge—was won by a robot that could adapt, beating entrants based on Boston Dynamics’ famous ATLAS humanoid thanks to the addition of wheels that let it roll as well as walk.

Rather than building robots in a humanoid shape (only sometimes an advantage), allowing them to evolve and discover the ideal form for whatever they’ve been tasked to do could prove far more effective. This is particularly true in disaster response, where even expensive robots are preferable to risking human lives, but conditions are highly unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets”: tiny nanobots capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could find all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or reroute its flow on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com

Posted in Human Robots

#432519 Robot Cities: Three Urban Prototypes for ...

Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fueling our imagination about artificial humans and autonomous machines.

Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political, and healthcare domains. Our cities are becoming too big for humans to manage.

Good city governance enables and maintains the smooth flow of things, data, and people. These include public services, traffic, and delivery services. Long queues in hospitals and banks imply poor management. Traffic congestion demonstrates that roads and traffic systems are inadequate. Goods that we increasingly order online don’t arrive fast enough. And the WiFi often fails our 24/7 digital needs. In sum, urban life, characterized by environmental pollution, a fast pace, traffic congestion, connectivity, and increased consumption, needs robotic solutions—or so we are led to believe.

Is this what the future holds? Image Credit: Photobank gallery / Shutterstock.com
In the past five years, national governments have started to see automation as the key to (better) urban futures. Many cities are becoming test beds where national and local governments experiment with robots in social spaces, where robots have both a practical purpose (to facilitate everyday life) and a very symbolic role (to demonstrate good city governance). Whether through autonomous cars, automated pharmacists, service robots in local stores, or autonomous drones delivering Amazon parcels, cities are being automated at a steady pace.

Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses, too, are increasingly automated and robotized. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating, and optimizing traffic flows. Automated vertical farms are innovating production of food in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital.” Social robots in many guises—from police officers to restaurant waiters—are appearing in urban public and commercial spaces.

Vertical indoor farm. Image Credit: Aisyaqilumaranas / Shutterstock.com
As these examples show, urban automation is taking place in fits and starts, ignoring some areas and racing ahead in others. But as yet, no one seems to be taking account of all of these various and interconnected developments. So how are we to forecast our cities of the future? Only a broad view allows us to do this. To give a sense, here are three examples: Tokyo, Singapore, and Dubai.

Tokyo
Tokyo is currently preparing to host the 2020 Olympics, and Japan’s government plans to use the event to showcase many new robotic technologies, turning the city into an urban living lab. The institution in charge is the Robot Revolution Realization Council, established in 2014 by the government of Japan.

Tokyo: city of the future. Image Credit: ESB Professional / Shutterstock.com
The main objectives of Japan’s robotization are economic reinvigoration, cultural branding, and international demonstration. In line with this, the Olympics will be used to introduce and influence global technology trajectories. In the government’s vision for the Olympics, robot taxis transport tourists across the city, smart wheelchairs greet Paralympians at the airport, ubiquitous service robots greet customers in more than 20 languages, and foreigners, augmented by interactive translation systems, speak with the local population in Japanese.

Tokyo shows us what the process of state-controlled creation of a robotic city looks like.

Singapore
Singapore, on the other hand, is a “smart city.” Its government is experimenting with robots with a different objective: as physical extensions of existing systems to improve management and control of the city.

In Singapore, the techno-futuristic national narrative sees robots and automated systems as a “natural” extension of the existing smart urban ecosystem. This vision is unfolding through autonomous delivery robots (Singapore Post’s delivery drone trials in partnership with Airbus Helicopters) and driverless shuttle buses (EasyMile’s EZ10).

Meanwhile, Singapore hotels are employing state-subsidized service robots to clean rooms and deliver linen and supplies, and robots for early childhood education have been piloted to understand how robots can be used in pre-schools in the future. Health and social care is one of the fastest growing industries for robots and automation in Singapore and globally.

Dubai
Dubai is another emerging prototype of a state-controlled smart city. But rather than seeing robotization simply as a way to improve the running of systems, Dubai is intensively robotizing public services with the aim of creating the “happiest city on Earth.” Urban robot experimentation in Dubai reveals that authoritarian state regimes are finding innovative ways to use robots in public services, transportation, policing, and surveillance.

National governments are competing to position themselves on the global politico-economic landscape through robotics, and striving to establish themselves as regional leaders. This was the thinking behind the city’s September 2017 test flight of a flying taxi developed by the German drone firm Volocopter—staged to “lead the Arab world in innovation.” Dubai’s objective is to automate 25% of its transport system by 2030.

It is currently also experimenting with Barcelona-based PAL Robotics’ humanoid police officer and a driverless patrol vehicle from Singapore-based OTSAW. If the experiments are successful, the government has announced, it will robotize 25% of the police force by 2030.

While imaginary robots are fueling our imagination more than ever—from Ghost in the Shell to Blade Runner 2049—real-world robots make us rethink our urban lives.

These three urban robotic living labs—Tokyo, Singapore, Dubai—help us gauge what kind of future is being created, and by whom. From hyper-robotized Tokyo to ever-smarter Singapore and happy, crime-free Dubai, these comparisons show that, no matter what the context, robots are perceived as a means to achieve global futures based on a specific national imagination. Just like the films, they demonstrate the role of the state in envisioning and creating that future.

This article was originally published on The Conversation. Read the original article.

Image Credit: 3000ad / Shutterstock.com

Posted in Human Robots

#432287 Ubiquity Robotics Launches Beefy ROS ...

With a payload of 100 kilograms, Magni aims to make it easy to prototype a useful mobile robot.

Posted in Human Robots

#432181 Putting AI in Your Pocket: MIT Chip Cuts ...

Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.

Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.

That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.

Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.

Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold it then transmits an output to multiple neurons above it. The strength of the connection between neurons is governed by a weight, which is set during training.

This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.
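To make that concrete, here’s a minimal sketch in Python of the fetch-multiply-accumulate loop a conventional chip effectively runs for a single neuron (the function and numbers are our own illustration, not the MIT design):

```python
def neuron_output(inputs, weights, threshold=0.0):
    """Weighted sum of inputs followed by a simple threshold activation.

    On conventional hardware, each loop iteration implies fetching one
    input and one weight from memory, multiplying them, and storing the
    running total -- exactly the data shuttling the MIT chip avoids.
    """
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w  # one fetch-multiply-accumulate step
    return 1.0 if total > threshold else 0.0

# A neuron receiving three inputs from the layer below.
print(neuron_output([0.5, -1.2, 0.8], [0.9, 0.1, -0.4]))  # -> 1.0
```

Multiply that loop by thousands of neurons and millions of connections, and the energy cost of all those memory trips comes to dominate.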

The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.

The approach requires the weights of the connections to be binary rather than a range of values, but previous theoretical work had suggested this wouldn’t dramatically impact accuracy, and the researchers found the chip’s results were generally within two to three percent of the conventional non-binary neural net running on a standard computer.
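For a rough feel for why that works, here’s a hedged sketch: snap each weight to ±1, scaled by the mean weight magnitude so the sums stay comparable. This scaling trick is a common one from the binarized-network literature, not necessarily the exact scheme used on the MIT chip:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=256)   # activations arriving at one neuron
weights = rng.normal(size=256)  # full-precision trained weights

# Binarize: keep only each weight's sign, scaled by the mean absolute
# weight so the overall magnitude of the sum stays comparable.
scale = np.abs(weights).mean()
binary_weights = scale * np.sign(weights)

print(f"full precision: {inputs @ weights:.3f}")
print(f"binarized:      {inputs @ binary_weights:.3f}")
```

The two sums tend to track each other, and once training adapts to the binary constraint, the remaining accuracy gap is small.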

This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.

“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.

“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”

It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year Arm unveiled two new chips: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.

Qualcomm’s latest mobile chip, the Snapdragon 845, features a GPU and is heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.

Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.

Image Credit: Blue Planet Studio / Shutterstock.com

Posted in Human Robots

#431995 The 10 Grand Challenges Facing Robotics ...

Robotics research has been making great strides in recent years, but there are still many hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has now identified 10 grand challenges the field will have to grapple with to make that a reality.

Editors conducted an online survey on unsolved challenges in robotics and assembled an expert panel of roboticists to shortlist the 30 most important topics, which were then grouped into 10 grand challenges that could have major impact in the next 5 to 10 years. Here’s what they came up with.

1. New Materials and Fabrication Schemes
Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with things like artificial muscles, soft robotics, and new fabrication methods that combine multiple functions in one material. But most of these advances have been “one-off” demonstrations, which are not easy to combine.

Multi-functional materials merging things like sensing, movement, energy harvesting, or energy storage could allow more efficient robot designs. But combining these various properties in a single machine will require new approaches that blend micro-scale and large-scale fabrication techniques. Another promising direction is materials that can change over time to adapt or heal, but this requires much more research.

2. Bioinspired and Bio-Hybrid Robots
Nature has already solved many of the problems roboticists are trying to tackle, so many are turning to biology for inspiration or even incorporating living systems into their robots. But there are still major bottlenecks in reproducing the mechanical performance of muscle and the ability of biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and energy and power density need to be improved. Embedding living cells into robots can overcome challenges of powering small robots, as well as exploit biological features like self-healing and embedded sensing, though how to integrate these components is still a major challenge. And while a growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how animals transition between capabilities like flying and swimming to build multimodal platforms.

3. Power and Energy
Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric vehicles, and renewable energy is driving progress in battery technology, but the fundamental challenges have remained largely unchanged for years.

That means that in parallel to battery development, there need to be efforts to minimize robots’ power utilization and give them access to new sources of energy. Enabling them to harvest energy from their environment and transmitting power to them wirelessly are two promising approaches worthy of investigation.

4. Robot Swarms
Swarms of simple robots that assemble into different configurations to tackle various tasks can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more powerful hardware that lets simple robots sense their environment and communicate is combining with AI that can model the kind of behavior seen in nature’s flocks.

But there needs to be more work on the most efficient forms of control at different scales—small swarms can be controlled centrally, but larger ones need to be more decentralized. They also need to be made robust and adaptable to the changing conditions of the real world and resilient to deliberate or accidental damage. And more work is needed on swarms of heterogeneous robots with complementary capabilities.
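To give a flavor of what decentralized control means in practice, here’s a toy sketch (our own construction, not from the journal): each robot repeatedly nudges its heading toward the average of its two neighbors’, and the whole swarm aligns without any central coordinator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
headings = rng.uniform(0, np.pi, n)  # initial headings, radians

# Ring topology: each robot communicates with just two neighbors.
neighbors = {i: ((i - 1) % n, (i + 1) % n) for i in range(n)}

for _ in range(200):
    updated = headings.copy()
    for i in range(n):
        local = [headings[i], *(headings[j] for j in neighbors[i])]
        # Circular mean avoids wrap-around trouble at 0 / 2*pi.
        updated[i] = np.arctan2(np.mean(np.sin(local)),
                                np.mean(np.cos(local)))
    headings = updated

print("heading spread after consensus:", np.ptp(headings))  # ~0
```

No robot ever sees the whole swarm, which is exactly what makes the approach scale—and exactly what makes its robustness hard to guarantee.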

5. Navigation and Exploration
A key use case for robots is exploring places where humans cannot go, such as the deep sea, space, or disaster zones. That means they need to become adept at exploring and navigating unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from navigation failures and are able to make and recognize new discoveries. This will require high levels of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a picture of the world from multiple data sources of varying reliability and accuracy.
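One textbook tool for fusing “data sources of varying reliability and accuracy” is inverse-variance weighting, the basic move behind Kalman-style estimators. A minimal sketch, with sensor names and numbers of our own invention:

```python
# Three range estimates of the same landmark, each with a variance
# reflecting how much its sensor is trusted (smaller = more reliable).
measurements = [10.2, 9.8, 10.9]  # meters: lidar, camera, coarse sonar
variances = [0.04, 0.25, 1.00]    # illustrative reliability values

weights = [1.0 / v for v in variances]
fused = sum(w * z for w, z in zip(weights, measurements)) / sum(weights)
fused_var = 1.0 / sum(weights)

print(f"fused estimate: {fused:.2f} m (variance {fused_var:.3f})")
```

The fused variance comes out smaller than any single sensor’s, which is the formal sense in which combining unreliable sources can still yield a reliable picture of the world.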

6. AI for Robotics
Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be combined with model-based reasoning to create adaptable robots that can learn on the fly.

Key to this will be creating AI that’s aware of its own limitations and can learn how to learn new things. It will also be important to create systems that are able to learn quickly from limited data rather than the millions of examples used in deep learning. Further advances in our understanding of human intelligence will be essential to solving these problems.

7. Brain-Computer Interfaces
BCIs will enable seamless control of advanced robotic prosthetics but could also prove a faster, more natural way to communicate instructions to robots or simply help them understand human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though, so work on compact, low-power, and wireless devices will be important. They also tend to involve extended training, calibration, and adaptation due to the imprecise nature of reading brain activity. And it remains to be seen if they will outperform simpler techniques like eye tracking or reading muscle signals.

8. Social Interaction
If robots are to enter human environments, they will need to learn to deal with humans. But this will be difficult, as we have very few concrete models of human behavior and we are prone to underestimate the complexity of what comes naturally to us.

Social robots will need to be able to perceive minute social cues like facial expression or intonation, understand the cultural and social context they are operating in, and model the mental states of people they interact with to tailor their dealings with them, both in the short term and as they develop long-standing relationships with them.

9. Medical Robotics
Medicine is one of the areas where robots could have significant impact in the near future. Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of contexts and be able to use situational awareness and spoken commands to understand what’s required of them. In surgery, autonomous robots could perform the routine steps of a procedure, giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many roadblocks to their adoption, including effective delivery systems, tracking and control methods, and crucially, finding therapies where they improve on current approaches.

10. Robot Ethics and Security
As the preceding challenges are overcome and robots are increasingly integrated into our lives, this progress will create new ethical conundrums. Most importantly, we may become over-reliant on robots.

That could lead to humans losing certain skills and capabilities, making us unable to take the reins in the case of failures. We may end up delegating tasks that should, for ethical reasons, have some human supervision, and allow people to pass the buck to autonomous systems in the case of failure. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

Image Credit: Zenzen / Shutterstock.com

Posted in Human Robots