Tag Archives: laws

#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At interplanetary distances, like those involved in SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
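The arithmetic is easy to verify. Here is a minimal sketch using approximate published closest- and farthest-approach distances (these figures are standard astronomical values, not from this article):

```python
# One-way light delay between Earth and Mars.
# Distances are approximate published values, used here for illustration.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way signal travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

closest_km = 54.6e6   # Mars at closest approach (~54.6 million km)
farthest_km = 401e6   # Mars at farthest approach (~401 million km)

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")   # roughly 3 minutes
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # roughly 22 minutes
```

And that is one way, so a question-and-answer round trip at worst approach costs the better part of an hour.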

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
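The batching idea can be sketched in a few lines. This is purely illustrative and not SE4’s actual software; every name below is hypothetical. The point is that the operator ships one ordered program up front, so execution needs no per-step round trip:

```python
# Hypothetical sketch of "batched" semantic instructions: the operator sends
# one ordered program at once, so no per-step round trip is needed.
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str   # e.g. "pick", "place"
    target: str   # a label the operator attached to an object

def execute_program(program: list[Instruction]) -> list[str]:
    """Carry out instructions strictly in order, returning an action log."""
    log = []
    for step in program:
        # A real robot would perform the motion here; we just record it.
        log.append(f"{step.action} {step.target}")
    return log

# An ordered program for stacking a small tower of blocks.
tower = [
    Instruction("pick", "block_a"), Instruction("place", "base"),
    Instruction("pick", "block_b"), Instruction("place", "block_a"),
]
print(execute_program(tower))
```

The ordering is the whole point: each step assumes the world state left behind by the previous one.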
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.
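Conceptually, the annotation pass amounts to attaching meanings to raw scene objects before planning begins. A toy sketch (the object names and labels are invented for illustration, not SE4’s format):

```python
# Hypothetical annotation of a scanned scene: the operator tags each raw
# object with a meaning the robot can act on.
scene = ["obj_01", "obj_02", "obj_03"]

annotations = {
    "obj_01": "building_material",
    "obj_02": "rock",
    "obj_03": "building_material",
}

# The robot then plans only over objects annotated as usable material.
usable = [obj for obj in scene if annotations.get(obj) == "building_material"]
print(usable)
```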

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.

Posted in Human Robots

#435494 Driverless Electric Trucks Are Coming, ...

Self-driving and electric cars just don’t stop making headlines lately. Amazon invested in self-driving startup Aurora earlier this year. Waymo, Daimler, GM, along with startups like Zoox, have all launched or are planning to launch driverless taxis, many of them all-electric. People are even yanking driverless cars from their timeless natural habitat—roads—to try to teach them to navigate forests and deserts.

The future of driving, it would appear, is upon us.

But equally important vehicles that often get left out of the conversation are trucks; their relevance to our day-to-day lives may not be as visible as that of cars, but their impact is more profound than most of us realize.

Two recent developments in trucking point to a future of self-driving, electric semis hauling goods across the country, and likely doing so more quickly, cheaply, and safely than trucks do today.

Self-Driving in Texas
Last week, Kodiak Robotics announced it’s beginning its first commercial deliveries using self-driving trucks on a route from Dallas to Houston. The two cities sit about 240 miles apart, connected primarily by Interstate 45. Kodiak is aiming to expand its reach far beyond the heart of Texas (if Dallas and Houston can be considered the heart, that is) to the state’s most far-flung cities, including El Paso to the west and Laredo to the south.

If self-driving trucks are going to be constrained to staying within state lines (and given that the laws regulating them differ by state, they will be for the foreseeable future), Texas is a pretty ideal option. It’s huge (thousands of miles of highway run both east-west and north-south), it’s warm (better than cold for driverless tech components like sensors), its proximity to Mexico means constant movement of both raw materials and manufactured goods (basically, you can’t have too many trucks in Texas), and most crucially, it’s lax on laws (driverless vehicles have been permitted there since 2017).

Spoiler, though—the trucks won’t be fully unmanned. They’ll have safety drivers to guide them onto and off of the highway, and to be there in case of any unexpected glitches.

California Goes (Even More) Electric
According to some top executives in the rideshare industry, automation is just one key component of the future of driving. Another is electricity replacing gas, and it’s not just carmakers that are plugging into the trend.

This week, Daimler Trucks North America announced completion of its first electric semis for customers Penske and NFI, to be used in the companies’ southern California operations. Scheduled to start operating later this month, the trucks will essentially be guinea pigs for testing integration of electric trucks into large-scale fleets; intel gleaned from the trucks’ performance will impact the design of later models.

Design-wise, the trucks aren’t much different from any other semi you’ve seen lumbering down the highway recently. Their range is about 250 miles—not bad if you think about how much more weight a semi is pulling than a passenger sedan—and they’ve been dubbed eCascadia, an electrified version of Freightliner’s heavy-duty Cascadia truck.

Batteries have a long way to go before they can store enough energy to make electric trucks truly viable (not to mention setting up a national charging infrastructure), but Daimler’s announcement is an important step toward an electrically driven future.

Keep on Truckin’
Obviously, it’s more exciting to think about hailing one of those cute little Waymo cars with no steering wheel to shuttle you across town than it is to think about that 12-pack of toilet paper you ordered on Amazon cruising down the highway in a semi while the safety driver takes a snooze. But pushing driverless and electric tech in the trucking industry makes sense for a few big reasons.

Trucks mostly run long routes on interstate highways—with no pedestrians, stoplights, or other city-street obstacles to contend with, highway driving is much easier to automate. What glitches there are to be smoothed out may as well be smoothed out with cargo on board rather than people. And though you wouldn’t know it amid the frantic shouts of ‘a robot could take your job!’, the US is actually in the midst of a massive shortage of truck drivers—60,000 short as of earlier this year, to be exact.

As Todd Spencer, president of the Owner-Operator Independent Drivers Association, put it, “Trucking is an absolutely essential, critical industry to the nation, to everybody in it.” Alas, trucks get far less love than cars, but come on—probably 90 percent of the things you ate, bought, or used today were at some point moved by a truck.

Adding driverless and electric tech into that equation, then, should yield positive outcomes on all sides, whether we’re talking about cheaper 12-packs of toilet paper, fewer traffic fatalities due to human error, a less-strained labor force, a stronger economy… or something pretty cool to see as you cruise down the highway in your (driverless, electric, futuristic) car.

Image Credit: Vitpho / Shutterstock.com


#435186 What’s Behind the International Rush ...

There’s no better way of ensuring you win a race than by setting the rules yourself. That may be behind the recent rush by countries, international organizations, and companies to put forward their visions for how the AI race should be governed.

China became the latest to release a set of “ethical standards” for the development of AI last month, which might raise eyebrows given the country’s well-documented AI-powered state surveillance program and suspect approaches to privacy and human rights.

But given the recent flurry of AI guidelines, it may well have been motivated by a desire not to be left out of the conversation. The previous week the OECD, backed by the US, released its own “guiding principles” for the industry, and in April the EU released “ethical guidelines.”

The language of most of these documents is fairly abstract and noticeably similar, with broad appeals to ideals like accountability, responsibility, and transparency. The OECD’s guidelines are the lightest on detail, while the EU’s offer some more concrete suggestions such as ensuring humans always know if they’re interacting with AI and making algorithms auditable. China’s standards have an interesting focus on promoting openness and collaboration as well as expressly acknowledging AI’s potential to disrupt employment.

Overall, though, one might be surprised that there aren’t more disagreements between three blocs with very divergent attitudes to technology, regulation, and economics. Most likely these are just the opening salvos in what will prove to be a long-running debate, and the devil will ultimately be in the details.

The EU seems to have stolen a march on the other two blocs, being first to publish its guidelines and having already implemented the world’s most comprehensive regulation of data—the bedrock of modern AI—with last year’s GDPR. But its lack of industry heavyweights is going to make it hard to hold onto that lead.

One organization that seems to be trying to take on the role of impartial adjudicator is the World Economic Forum, which recently hosted an event designed to find common ground between various stakeholders from across the world. What will come of the effort remains to be seen, but China’s release of guidelines broadly similar to those of its Western counterparts is a promising sign.

Perhaps most telling, though, is the ubiquitous presence of industry leaders in both advisory and leadership positions. China’s guidelines are backed by “an AI industrial league” including Baidu, Alibaba, and Tencent, and the co-chairs of the WEF’s AI Council are Microsoft President Brad Smith and prominent Chinese AI investor Kai-Fu Lee.

Shortly after the EU released its proposals one of the authors, philosopher Thomas Metzinger, said the process had been compromised by the influence of the tech industry, leading to the removal of “red lines” opposing the development of autonomous lethal weapons or social credit score systems like China’s.

For a long time big tech argued for self-regulation, but whether they’ve had an epiphany or have simply sensed the shifting winds, they are now coming out in favor of government intervention.

Both Amazon and Facebook have called for regulation of facial recognition, and in February Google went even further, calling for the government to set down rules governing AI. Facebook chief Mark Zuckerberg has also since called for even broader regulation of the tech industry.

But considering the current concern around the anti-competitive clout of the largest technology companies, it’s worth remembering that tough rules are always easier to deal with for companies with well-developed compliance infrastructure and big legal teams. And these companies are also making sure the regulation is on their terms. Wired details Microsoft’s protracted effort to shape Washington state laws governing facial recognition technology and Google’s enormous lobbying effort.

“Industry has mobilized to shape the science, morality and laws of artificial intelligence,” Harvard law professor Yochai Benkler writes in Nature. He highlights how Amazon’s funding of a National Science Foundation (NSF) program for projects on fairness in artificial intelligence undermines the ability of academia to act as an impartial counterweight to industry.

Excluding industry from the process of setting the rules to govern AI in a fair and equitable way is clearly not practical, writes Benkler, because they are the ones with the expertise. But there also needs to be more concerted public investment in research and policymaking, and efforts to limit the influence of big companies when setting the rules that will govern AI.

Image Credit: create jobs 51 / Shutterstock.com


#435181 This Week’s Awesome Stories From ...

Inside the Amazon Warehouse Where Humans and Machines Become One
Matt Simon | Wired
“Seen from above, the scale of the system is dizzying. My robot, a little orange slab known as a ‘drive’ (or more formally and mythically, Pegasus), is just one of hundreds of its kind swarming a 125,000-square-foot ‘field’ pockmarked with chutes. It’s a symphony of electric whirring, with robots pausing for one another at intersections and delivering their packages to the slides.”

Top Oxford Researcher Talks the Risk of Automation to Employment
Luke Dormehl | Digital Trends
“[Carl Benedikt Frey’s] new book…compares the age of artificial intelligence to past shifts in the labor market, such as the Industrial Revolution. Frey spoke with Digital Trends about the impacts of automation, changing attitudes, and what—if anything—we can do about the coming robot takeover.”

Watch Amazon’s All-New Delivery Drone Zipping Through the Skies
Trevor Mogg | Digital Trends
“The autonomous electric-powered aircraft features six rotors and can take off like a helicopter and fly like a plane… Jeff Wilke, chief of the company’s global consumer business, said the drone can fly 15 miles and carry packages weighing up to 5 pounds, which, he said, covers most stuff ordered on Amazon.”

This AI-Powered Subreddit Has Been Simulating the Real Thing For Years
Amrita Khalid | Engadget
“The bots comment on each other’s posts, and things can quickly get heated. Topics range from politics to food to relationships to completely nonsensical memes. While many of the posts are incomprehensible or nonsensical, it’s hard to argue that much of life on social media isn’t.”

Overlooked No More: Alan Turing, Condemned Codebreaker and Computer Visionary
Alan Cowell | The New York Times
“To this day Turing is recognized in his own country and among a broad society of scientists as a pillar of achievement who had fused brilliance and eccentricity, had moved comfortably in the abstruse realms of mathematics and cryptography but awkwardly in social settings, and had been brought low by the hostile society into which he was born.”

Congress Is Debating—Again—Whether Genes Can Be Patented
Megan Molteni | Wired
“Under debate are the notions that natural phenomena, observations of laws of nature, and abstract ideas are unpatentable. …If successful, some worry this bill could carve up the world’s genetic resources into commercial fiefdoms, forcing scientists to perform basic research under constant threat of legal action.”

Image Credit: John Petalcurin / Unsplash


#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double-helixes; others “wrote” the DNA letters inside micro-chips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope—and with some iterations, with the naked human eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” study author Dr. Shogo Hamada told Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics, that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, they completed the circle by coupling the building process with dissipative assembly.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts, which help “bind” individual molecules together, the team found that they could easily alter the shape of the self-assembling DNA bots—which formed fiber-like shapes—by changing the structure of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team used a formalism common in robotics, finite-state automata, to measure the behavior of their DNA nanobots from growth to eventual decay.
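A lifecycle like this can be pictured as a small finite-state machine. The states and events below are invented for illustration, not the team’s published model:

```python
# Toy finite-state machine for a DASH unit's lifecycle.
# State and event names are illustrative only.
TRANSITIONS = {
    ("assembling", "fuel_available"): "grown",
    ("grown", "locomotion_signal"): "crawling",
    ("crawling", "decay_signal"): "degenerating",
    ("degenerating", "depleted"): "decayed",
}

def run(events, state="assembling"):
    """Feed a sequence of events through the machine and return the final state."""
    for event in events:
        # Unknown (state, event) pairs leave the state unchanged.
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["fuel_available", "locomotion_signal", "decay_signal", "depleted"]))
```

Framing behavior this way is what lets continuous chemistry be scored like a robot: either the unit reached the expected state, or it didn’t.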

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.

Molecular Slime Molds
Just witnessing lifelike molecules grow in place, like the running man dance move, wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating, whereas their back ends degenerated. In essence, the molecular slime was built from linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime in a way that allowed the whole artificial “organism” to crawl forward, against the stream of fluid flow.
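The regeneration-driven “movement” can be captured in a toy one-dimensional model: the occupied span drifts forward even though no individual unit ever translates. This is entirely illustrative, not the actual chemistry:

```python
# Toy model of locomotion by growth and decay: the slime occupies the
# half-open interval [back, front). Each step regenerates one unit at the
# front and degenerates one at the back, so the span drifts forward.
def step(span):
    back, front = span
    return (back + 1, front + 1)

span = (0, 4)
for _ in range(3):
    span = step(span)

print(span)  # same length as before, shifted three units forward
```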

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win in a race.

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.
