Tag Archives: big

#432519 Robot Cities: Three Urban Prototypes for ...

Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fueling our imagination about artificial humans and autonomous machines.

Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political, and healthcare domains. Our cities are becoming too big for humans to manage.

Good city governance enables and maintains the smooth flow of things, data, and people. These include public services, traffic, and deliveries. Long queues in hospitals and banks imply poor management. Traffic congestion shows that roads and traffic systems are inadequate. Goods we increasingly order online don’t arrive fast enough. And the WiFi often fails our 24/7 digital needs. In sum, urban life, characterized by pollution, a fast pace of life, traffic congestion, constant connectivity, and rising consumption, needs robotic solutions—or so we are led to believe.

Is this what the future holds? Image Credit: Photobank gallery / Shutterstock.com
In the past five years, national governments have started to see automation as the key to (better) urban futures. Many cities are becoming test beds for national and local governments to experiment with robots in social spaces, where robots have both a practical purpose (to facilitate everyday life) and a symbolic role (to demonstrate good city governance). Whether through autonomous cars, automated pharmacists, service robots in local stores, or autonomous drones delivering Amazon parcels, cities are being automated at a steady pace.

Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses are also increasingly automated and robotized. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating, and optimizing traffic flows. Automated vertical farms are innovating production of food in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital.” Social robots in many guises—from police officers to restaurant waiters—are appearing in urban public and commercial spaces.

Vertical indoor farm. Image Credit: Aisyaqilumaranas / Shutterstock.com
As these examples show, urban automation is taking place in fits and starts, ignoring some areas and racing ahead in others. But as yet, no one seems to be taking account of all of these various and interconnected developments. So, how are we to forecast our cities of the future? Only a broad view allows us to do this. To give a sense, here are three examples: Tokyo, Dubai, and Singapore.

As Tokyo prepares to host the 2020 Olympics, Japan’s government plans to use the event to showcase many new robotic technologies, and the city is therefore becoming an urban living lab. The institution in charge is the Robot Revolution Realization Council, established in 2014 by the government of Japan.

Tokyo: city of the future. Image Credit: ESB Professional / Shutterstock.com
The main objectives of Japan’s robotization are economic reinvigoration, cultural branding, and international demonstration. In line with this, the Olympics will be used to introduce and influence global technology trajectories. In the government’s vision for the Olympics, robot taxis transport tourists across the city, smart wheelchairs greet Paralympians at the airport, ubiquitous service robots greet customers in 20-plus languages, and interactively augmented foreigners speak with the local population in Japanese.

Tokyo shows us what the process of state-controlled creation of a robotic city looks like.

Singapore, on the other hand, is a “smart city.” Its government is experimenting with robots with a different objective: as physical extensions of existing systems to improve management and control of the city.

In Singapore, the techno-futuristic national narrative sees robots and automated systems as a “natural” extension of the existing smart urban ecosystem. This vision is unfolding through autonomous delivery robots (Singapore Post’s delivery drone trials in partnership with Airbus Helicopters) and EasyMile’s EZ10 driverless shuttle buses.

Meanwhile, Singapore hotels are employing state-subsidized service robots to clean rooms and deliver linen and supplies, and robots for early childhood education have been piloted to understand how robots can be used in pre-schools in the future. Health and social care is one of the fastest growing industries for robots and automation in Singapore and globally.

Dubai is another emerging prototype of a state-controlled smart city. But rather than seeing robotization simply as a way to improve the running of systems, Dubai is intensively robotizing public services with the aim of creating the “happiest city on Earth.” Urban robot experimentation in Dubai reveals that authoritarian state regimes are finding innovative ways to use robots in public services, transportation, policing, and surveillance.

National governments are in competition to position themselves on the global politico-economic landscape through robotics, and they are also striving to position themselves as regional leaders. This was the thinking behind the city’s September 2017 test flight of a flying taxi developed by the German drone firm Volocopter—staged to “lead the Arab world in innovation.” Dubai’s objective is to automate 25% of its transport system by 2030.

It is currently also experimenting with a humanoid police officer from Barcelona-based PAL Robotics and an autonomous security vehicle from Singapore-based OTSAW. If the experiments are successful, the government has announced it will robotize 25% of the police force by 2030.

While imaginary robots are fueling our imagination more than ever—from Ghost in the Shell to Blade Runner 2049—real-world robots make us rethink our urban lives.

These three urban robotic living labs—Tokyo, Singapore, Dubai—help us gauge what kind of future is being created, and by whom. From hyper-robotized Tokyo to smart-city Singapore and happy, crime-free Dubai, these three cases show that, no matter the context, robots are perceived as a means to achieve global futures based on a specific national imagination. Just like the films, they demonstrate the role of the state in envisioning and creating that future.

This article was originally published on The Conversation. Read the original article.

Image Credit: 3000ad / Shutterstock.com

Posted in Human Robots

#432508 Drones will soon decide who to kill

The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI). This is a big step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement.

Posted in Human Robots

#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
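The buoyancy trick can be illustrated with a little arithmetic: shrinking the air pocket reduces the volume of water the robot displaces, tipping the net force from upward to downward. Below is a minimal Python sketch of that relationship; the function, mass, and volumes are hypothetical illustrations, not figures from the SoFi paper.

```python
# Illustrative sketch (not SoFi's actual control code): how compressing
# air in a ballast chamber shifts a robot's net buoyancy. All numbers
# are hypothetical.

RHO_WATER = 1000.0  # water density, kg/m^3 (seawater is closer to 1025)
G = 9.81            # gravitational acceleration, m/s^2

def net_buoyant_force(robot_mass_kg, hull_volume_m3, air_volume_m3):
    """Net upward force in newtons.

    Compressing the ballast air shrinks air_volume_m3, which reduces
    the displaced volume and therefore the buoyant force, so the robot
    sinks; decompressing does the opposite.
    """
    displaced = hull_volume_m3 + air_volume_m3
    return RHO_WATER * G * displaced - robot_mass_kg * G

# A hypothetical ~1.6 kg robot with a 1.5-liter rigid hull:
print(net_buoyant_force(1.6, 0.0015, 0.0002) > 0)   # inflated ballast: floats
print(net_buoyant_force(1.6, 0.0015, 0.00005) < 0)  # compressed ballast: sinks
```

The same comparison, run in reverse by the onboard controller, is what lets a robot hold a target depth: pump air until the net force is approximately zero.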

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
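As a rough illustration of the speed point, the pumping cycle can be modeled as a sinusoidal tail deflection whose frequency tracks the pump rate, and the 21.7 cm figure lets us back out a body length of about 43 cm. The function and parameter values below are assumptions for illustration, not the team's actual dynamics model.

```python
# A toy model (not the CSAIL paper's dynamics) of SoFi's hydraulic tail:
# the pump pushes water between two chambers, and the tail angle follows
# the fill difference, producing a fish-like side-to-side oscillation.
import math

def tail_angle(t, pump_hz=1.4, max_angle_deg=30.0):
    """Tail deflection (degrees) at time t, assuming a sinusoidal beat.

    Driving the pump faster (higher pump_hz) beats the tail faster,
    which is how varying the water flow changes swimming speed.
    Both default parameters are invented for this sketch.
    """
    return max_angle_deg * math.sin(2 * math.pi * pump_hz * t)

# "Half a body length (21.7 cm) per second" implies a body length of
# roughly 2 * 21.7 = 43.4 cm.
BODY_LENGTH_CM = 2 * 21.7
print(BODY_LENGTH_CM)             # 43.4
print(round(tail_angle(0.0), 6))  # 0.0 -- tail centered at t = 0
```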

Besides looking neat, SoFi’s lifelike appearance matters: it helps the robot blend in with marine life instead of scaring real fish away, so it can get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL

Posted in Human Robots

#432311 Everyone Is Talking About AI—But Do ...

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Elon Musk engaged again in a similar exchange with Harvard professor Steven Pinker. Musk tweeted that Pinker doesn’t understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they’re on a totally different spectrum than human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.

However, technologists no longer need to perform the feature extraction by themselves. They also have access to greater computing power. All of these advancements are welcomed, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. He therefore views the debate over what counts as AI as entirely semantic.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”
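Decker’s porch light makes a neat worked example, because the entire “intelligence” reduces to a single threshold comparison fixed in advance by the programmer. A hypothetical sketch (the threshold value and function are invented for illustration):

```python
# Decker's porch-light example as code: the "AI" here is one fixed
# threshold comparison, set by the programmer ahead of time.

DARKNESS_THRESHOLD = 0.2  # normalized photodiode reading: 0 = dark, 1 = bright

def porch_light_on(photodiode_reading: float) -> bool:
    """Turn the light on when the sensor reads below the threshold.

    Nothing here learns or decides; the behavior is fully determined
    by preexisting programming, exactly as Decker argues.
    """
    return photodiode_reading < DARKNESS_THRESHOLD

print(porch_light_on(0.05))  # True  -- dark outside, light on
print(porch_light_on(0.8))   # False -- daylight, light off
```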

Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-branded products are already moving past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.

Image Credit: vrender / Shutterstock.com

Posted in Human Robots

#432279 This Week’s Awesome Stories From ...

Google Thinks It’s Close to ‘Quantum Supremacy.’ Here’s What That Really Means.
Martin Giles and Will Knight | MIT Technology Review
“Seventy-two may not be a large number, but in quantum computing terms, it’s massive. This week Google unveiled Bristlecone, a new quantum computing chip with 72 quantum bits, or qubits—the fundamental units of computation in a quantum machine…John Martinis, who heads Google’s effort, says his team still needs to do more testing, but he thinks it’s ‘pretty likely’ that this year, perhaps even in just a few months, the new chip can achieve ‘quantum supremacy.'”

How Project Loon Built the Navigation System That Kept Its Balloons Over Puerto Rico
Amy Nordrum | IEEE Spectrum
“Last year, Alphabet’s Project Loon made a big shift in the way it flies its high-altitude balloons. And that shift—from steering every balloon in a huge circle around the world to clustering balloons over specific areas—allowed the project to provide basic Internet service to more than 200,000 people in Puerto Rico after Hurricane Maria.”

The Grim Conclusions of the Largest-Ever Study of Fake News
Robinson Meyer | The Atlantic
“The massive new study analyzes every major contested news story in English across the span of Twitter’s existence—some 126,000 stories, tweeted by 3 million users, over more than 10 years—and finds that the truth simply cannot compete with hoax and rumor.”

Magic Leap Raises $461 Million in Fresh Funding From the Kingdom of Saudi Arabia
Lucas Matney | TechCrunch
“Magic Leap still hasn’t released a product, but they’re continuing to raise a lot of cash to get there. The Plantation, Florida-based augmented reality startup announced today that it has raised $461 million from the Kingdom of Saudi Arabia’s sovereign investment arm, The Public Investment Fund…Magic Leap has raised more than $2.3 billion in funding to date.”

Social Inequality Will Not Be Solved by an App
Safiya Umoja Noble | Wired
“An app will not save us. We will not sort out social inequality lying in bed staring at smartphones. It will not stem from simply sending emails to people in power, one person at a time…We need more intense attention on how these types of artificial intelligence, under the auspices of individual freedom to make choices, forestall the ability to see what kinds of choices we are making and the collective impact of these choices in reversing decades of struggle for social, political, and economic equality. Digital technologies are implicated in these struggles.”

Image Credit: topseller / Shutterstock.com

Posted in Human Robots