#435816 This Light-based Nervous System Helps ...

Last night, way past midnight, I stumbled onto my porch blindly grasping for my keys after a hellish day of international travel. Lights were low, I was half-asleep, yet my hand grabbed the keychain, found the lock, and opened the door.

If you’re rolling your eyes—yeah, it’s not exactly an epic feat for a human. Thanks to the intricate wiring between our brain and millions of sensors dotted on—and inside—our skin, we know exactly where our hand is in space and what it’s touching without needing visual confirmation. But this combined sense of the internal and the external is completely lost to robots, which generally rely on computer vision or surface mechanosensors to track their movements and their interaction with the outside world. It’s not always a winning strategy.

What if, instead, we could give robots an artificial nervous system?

This month, a team led by Dr. Rob Shepherd at Cornell University did just that, with a seriously clever twist. Rather than mimicking the electrical signals in our nervous system, his team turned to light. By embedding optical fibers inside a 3D-printed stretchable material, the team engineered an “optical lace” that can detect pressure changes as slight as a fraction of a pound and pinpoint their location to a spot half the width of a tiny needle.

The invention isn’t just an artificial skin. The delicate fibers can be distributed both inside a robot and on its surface, giving it a sense of touch and—most importantly—an awareness of its own body position in space. Optical lace isn’t a superficial coating of mechanical sensors; it’s an entire platform that may finally endow robots with nerve-like networks throughout the body.

Eventually, engineers hope to use this fleshy, washable material to coat the sharp, cold metal interior of current robots, transforming something like C-3PO into one of the human-like hosts of Westworld. Robots with a “bodily” sense could act as better caretakers for the elderly, said Shepherd, because they could assist fragile people without inadvertently bruising or otherwise harming them. The results were published in Science Robotics.

An Unconventional Marriage
The optical lace is especially creative because it marries two contrasting ideas: one biologically inspired, the other wholly alien.

The overarching idea for optical lace is based on the animal kingdom. Through sight, hearing, smell, taste, touch, and other senses, we’re able to interpret the outside world—something scientists call exteroception. Thanks to our nervous system, we perform these computations subconsciously, allowing us to constantly “perceive” what’s going on around us.

Our other perception is purely internal. Proprioception (sorry, it’s not called “inception,” though it should be) is how we know where our body parts are in space without having to look at them, which lets us perform complex tasks even when we can’t see. Although less intuitive than exteroception, proprioception relies on stretch and other deformations sensed by receptors in our muscles, tendons, and skin, which generate electrical signals that shoot up to the brain for further interpretation.

In other words, in theory it’s possible to recreate both perceptions with a single information-carrying system.

Here’s where the alien factor comes in. Rather than using electrical properties, the team turned to light as their data carrier. They had good reason. “Compared with electricity, light carries information faster and with higher data densities,” the team explained. Light can also transmit in multiple directions simultaneously, and is less susceptible to electromagnetic interference. Although optical nervous systems don’t exist in the biological world, the team decided to improve on Mother Nature and give it a shot.

Optical Lace
The construction starts with engineering a “sheath” for the optical nerve fibers. The team first used an elastic polyurethane—a synthetic material used in foam cushioning, for example—to make a lattice structure filled with large pores, somewhat like a lattice pie crust. Thanks to rapid, high-resolution 3D printing, the scaffold’s stiffness can vary from top to bottom. To increase sensitivity to the outside world, the team made the top of the lattice soft and pliable, to better transfer force to the mechanosensors below. In contrast, the “deeper” regions were stiffer, holding their shape under pressure.

Now the fun part. The team next threaded stretchable “light guides” into the scaffold. These fibers transmit photons and are illuminated by a blue LED. One, the input light guide, runs horizontally across the soft top part of the scaffold. Others run perpendicular to the input in a “U” shape, going from more superficial regions to deeper ones. These are the output guides. The architecture loosely resembles the wiring in our skin and flesh.

Normally, the output guides are separated from the input by a small air gap. When pressed down, the input light fiber distorts slightly, and if the pressure is high enough, it contacts one of the output guides. This causes light from the input fiber to “leak” to the output one, so that it lights up—the stronger the pressure, the brighter the output.

“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” said study author Patricia Xu. “The intensity of this determines the intensity of the deformation itself.”
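To make that readout concrete, here is a minimal Python sketch of the principle, assuming (as the paper describes qualitatively) that each output channel’s brightness rises with the local contact force. The noise floor, calibration constant, and readings below are invented for illustration; this is not the team’s actual calibration code.

```python
# Minimal sketch of the optical lace readout idea (illustrative only).
# Assumed model: each output channel reports an intensity that rises
# roughly with the local contact force; values and names are hypothetical.

NOISE_FLOOR = 0.05      # intensities below this count as "no contact"
FORCE_PER_UNIT = 2.0    # hypothetical calibration: force (N) per unit intensity

def read_contact(intensities):
    """Return (channel_index, estimated_force) for the strongest contact,
    or None if no channel rises above the noise floor."""
    peak_channel = max(range(len(intensities)), key=lambda i: intensities[i])
    peak = intensities[peak_channel]
    if peak < NOISE_FLOOR:
        return None
    return peak_channel, peak * FORCE_PER_UNIT

# Example: a press near channel 7 of a 12-channel lace
readings = [0.0, 0.01, 0.0, 0.02, 0.05, 0.12, 0.31, 0.48, 0.22, 0.04, 0.0, 0.0]
print(read_contact(readings))  # -> (7, 0.96): location plus a rough force estimate
```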

Double Perception
As a proof-of-concept for proprioception, the team made a cylindrical lace with one input and 12 output channels. They varied the stiffness of the scaffold along the cylinder, and by pressing down at different points, they were able to calculate how much each part stretched and deformed—a key precursor to knowing where different regions of the structure are moving in space. It’s a very rudimentary sort of proprioception, but one that will become more sophisticated with increasing numbers of strategically placed mechanosensors.
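Below is a rough sketch of how such per-channel readings could be turned into a deformation estimate, assuming one output channel per segment of the cylinder and a simple linear relationship between intensity, local stiffness, and strain. The scale factor and stiffness profile are hypothetical placeholders, not values from the study.

```python
# Hypothetical proprioception sketch: convert per-channel light intensities
# into an estimate of how much each segment of the cylinder has deformed.
# Assumed linear model: deformation_i ≈ K * intensity_i / stiffness_i
# Both K and the stiffness profile below are illustrative, not from the paper.

K = 1.5  # hypothetical scale factor relating intensity to strain

def estimate_deformation(intensities, stiffness):
    return [K * i / s for i, s in zip(intensities, stiffness)]

# A 12-segment cylinder whose stiffness increases toward one end,
# pressed hardest around segments 4-6.
stiffness   = [1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.8, 1.8, 2.2, 2.2, 2.6, 2.6]
intensities = [0.02, 0.03, 0.05, 0.20, 0.45, 0.40, 0.15, 0.05, 0.02, 0.0, 0.0, 0.0]

for seg, d in enumerate(estimate_deformation(intensities, stiffness)):
    print(f"segment {seg:2d}: deformation ≈ {d:.2f}")
```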

The test for exteroception was a whole lot stranger. Here, the team engineered another optical lace with 15 output channels and turned it into a squishy piano. When the lace was pressed, an Arduino microcontroller translated the light output signals into sound based on the position of each touch. The stronger the pressure, the louder the volume. While not a musical masterpiece, the demo proved their point: the optical lace faithfully reported the strength and location of each touch.
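The team’s Arduino firmware isn’t published, but the mapping logic amounts to a lookup from channel to pitch plus a scaling from intensity to volume. Here is a guess at that logic, written in Python to match the sketches above rather than Arduino C; the note table and press threshold are invented for the example.

```python
# Guess at the "squishy piano" mapping (illustrative; the team's actual Arduino
# firmware is not published). Each of the 15 output channels maps to a note;
# the brightness of the channel sets the volume.

NOTE_FREQS_HZ = [262, 277, 294, 311, 330, 349, 370, 392,
                 415, 440, 466, 494, 523, 554, 587]
PRESS_THRESHOLD = 0.1   # hypothetical: minimum intensity that counts as a key press
MAX_INTENSITY = 1.0

def key_events(intensities):
    """Yield (frequency_hz, volume_0_to_1) for every channel pressed hard enough."""
    for channel, intensity in enumerate(intensities):
        if intensity >= PRESS_THRESHOLD:
            volume = min(intensity, MAX_INTENSITY) / MAX_INTENSITY
            yield NOTE_FREQS_HZ[channel], volume

# Example frame of sensor readings: two "keys" pressed, one harder than the other.
frame = [0.0] * 15
frame[3], frame[10] = 0.6, 0.25
for freq, vol in key_events(frame):
    print(f"play {freq} Hz at volume {vol:.2f}")
```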

A More Efficient Robot
Although remarkably novel, the optical lace isn’t yet ready for prime time. One problem is scalability: because of light loss, the material is limited to a certain size. However, rather than coating an entire robot, it may help to add optical lace to body parts where perception is critical—for example, fingertips and hands.

The team sees plenty of potential to keep developing the artificial flesh. Depending on particular needs, both the light guides and the scaffold can be modified for sensitivity, spatial resolution, and accuracy. Multiple optical fibers that measure different qualities—pressure, pain, temperature—could potentially be embedded in the same region, giving robots a multitude of senses.

“In this way, we hope to reduce the number of electronics and combine signals from multiple sensors without losing information,” the authors said. By taking inspiration from biological networks, it may even be possible to use various inputs through an optical lace to control how the robot behaves, closing the loop from sensation to action.

Image Credit: Cornell Organic Robotics Lab. A flexible, porous lattice structure is threaded with stretchable optical fibers containing more than a dozen mechanosensors and attached to an LED light. When the lattice structure is pressed, the sensors pinpoint changes in the photon flow.

#435674 MIT Future of Work Report: We ...

Robots aren’t going to take everyone’s jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT’s Task Force on the Work of the Future.

The supergroup of MIT academics was set up by MIT President Rafael Reif in early 2018 to investigate how emerging technologies will impact employment and devise strategies to steer developments in a positive direction. And the headline finding from their first publication is that it’s not the quantity of jobs we should be worried about, but the quality.

Widespread press reports of a looming “employment apocalypse” brought on by AI and automation are probably wide of the mark, according to the authors. Shrinking workforces in aging developed countries and persistent limitations in what machines can do mean we’re unlikely to face a shortage of jobs.

But while unemployment is historically low, recent decades have seen a polarization of the workforce as the number of both high- and low-skilled jobs has grown at the expense of the middle-skilled ones, driving growing income inequality and depriving the non-college-educated of viable careers.

This is at least partly attributable to the growth of digital technology and automation, the report notes, which are rendering obsolete many middle-skilled jobs based around routine work like assembly lines and administrative support.

That leaves workers to either pursue high-skilled jobs that require deep knowledge and creativity, or settle for low-paid jobs that rely on skills—like manual dexterity or interpersonal communication—that are still beyond machines, but generic to most humans and therefore not valued by employers. And the growth of emerging technology like AI and robotics is only likely to exacerbate the problem.

This isn’t the first report to note this trend. The World Bank’s 2016 World Development Report noted how technology is causing a “hollowing out” of labor markets. But the MIT report goes further in saying that the cause isn’t simply technology, but the institutions and policies we’ve built around it.

The motivation for introducing new technology is broadly assumed to be to increase productivity, but the authors note a rarely-acknowledged fact: “Not all innovations that raise productivity displace workers, and not all innovations that displace workers substantially raise productivity.”

Examples of the former include computer-aided design software that makes engineers and architects more productive, while examples of the latter include self-service checkouts and automated customer support that replace human workers, often at the cost of a worse customer experience.

While the report notes that companies have increasingly adopted the language of technology augmenting labor, in reality this has only really benefited high-skilled workers. For lower-skilled jobs the motivation is primarily labor cost savings, which highlights the other major force shaping technology’s impact on employment: shareholder capitalism.

The authors note that up until the 1980s, increasing productivity resulted in wage growth across the economic spectrum, but since then average wage growth has failed to keep pace and gains have dramatically skewed towards the top earners.

The report shies away from directly linking this trend to the birth of Reaganomics (something others have been happy to do), but it notes that American veneration of the shareholder as the primary stakeholder in a business and tax policies that incentivize investment in capital rather than labor have exacerbated the negative impacts technology can have on employment.

That means the current focus on re-skilling workers to thrive in the new economy is a necessary, but not sufficient, solution to the disruptive impact technology is having on work, the authors say.

Alongside significant investment in education, fiscal policies need to be re-balanced away from subsidizing investment in physical capital and towards boosting investment in human capital, the authors write, and workers need to have a greater say in corporate decision-making.

The authors point to other developed economies where productivity growth, income growth, and equality haven’t become so disconnected thanks to investments in worker skills, social safety nets, and incentives to invest in human capital. Whether such a radical reshaping of US economic policy is achievable in today’s political climate remains to be seen, but the authors conclude with a call to arms.

“The failure of the US labor market to deliver broadly shared prosperity despite rising productivity is not an inevitable byproduct of current technologies or free markets,” they write. “We can and should do better.”

Image Credit: Simon Abrams / Unsplash

#435494 Driverless Electric Trucks Are Coming, ...

Self-driving and electric cars just don’t stop making headlines lately. Amazon invested in self-driving startup Aurora earlier this year. Waymo, Daimler, GM, along with startups like Zoox, have all launched or are planning to launch driverless taxis, many of them all-electric. People are even yanking driverless cars from their timeless natural habitat—roads—to try to teach them to navigate forests and deserts.

The future of driving, it would appear, is upon us.

But an equally important vehicle that often gets left out of the conversation is trucks; their relevance to our day-to-day lives may not be as visible as that of cars, but their impact is more profound than most of us realize.

Two recent developments in trucking point to a future of self-driving, electric semis hauling goods across the country, and likely doing so more quickly, cheaply, and safely than trucks do today.

Self-Driving in Texas
Last week, Kodiak Robotics announced it’s beginning its first commercial deliveries using self-driving trucks on a route from Dallas to Houston. The two cities sit about 240 miles apart, connected primarily by Interstate 45. Kodiak is aiming to expand its reach far beyond the heart of Texas (if Dallas and Houston can be considered the heart, that is) to the state’s most far-flung cities, including El Paso to the west and Laredo to the south.

If self-driving trucks are going to be constrained to staying within state lines (and given that the laws regulating them differ by state, they will be for the foreseeable future), Texas is a pretty ideal option. It’s huge (thousands of miles of highway run both east-west and north-south), it’s warm (better than cold for driverless tech components like sensors), its proximity to Mexico means constant movement of both raw materials and manufactured goods (basically, you can’t have too many trucks in Texas), and most crucially, it’s lax on laws (driverless vehicles have been permitted there since 2017).

Spoiler, though—the trucks won’t be fully unmanned. They’ll have safety drivers to guide them onto and off of the highway, and to be there in case of any unexpected glitches.

California Goes (Even More) Electric
According to some top executives in the rideshare industry, automation is just one key component of the future of driving. Another is electricity replacing gas, and it’s not just carmakers that are plugging into the trend.

This week, Daimler Trucks North America announced completion of its first electric semis for customers Penske and NFI, to be used in the companies’ southern California operations. Scheduled to start operating later this month, the trucks will essentially be guinea pigs for testing integration of electric trucks into large-scale fleets; intel gleaned from the trucks’ performance will impact the design of later models.

Design-wise, the trucks aren’t much different from any other semi you’ve seen lumbering down the highway recently. Their range is about 250 miles—not bad if you think about how much more weight a semi is pulling than a passenger sedan—and they’ve been dubbed eCascadia, an electrified version of Freightliner’s heavy-duty Cascadia truck.

Batteries have a long way to go before they can store enough energy to make electric trucks truly viable (not to mention setting up a national charging infrastructure), but Daimler’s announcement is an important step towards an electrically-driven future.

Keep on Truckin’
Obviously, it’s more exciting to think about hailing one of those cute little Waymo cars with no steering wheel to shuttle you across town than it is to think about that 12-pack of toilet paper you ordered on Amazon cruising down the highway in a semi while the safety driver takes a snooze. But pushing driverless and electric tech in the trucking industry makes sense for a few big reasons.

Trucks mostly run long routes on interstate highways—with no pedestrians, stoplights, or other city-street obstacles to contend with, highway driving is much easier to automate. What glitches there are to be smoothed out may as well be smoothed out with cargo on board rather than people. And though you wouldn’t know it amid the frantic shouts of ‘a robot could take your job!’, the US is actually in the midst of a massive shortage of truck drivers—60,000 short as of earlier this year, to be exact.

As Todd Spencer, president of the Owner-Operator Independent Drivers Association, put it, “Trucking is an absolutely essential, critical industry to the nation, to everybody in it.” Alas, trucks get far less love than cars, but come on—probably 90 percent of the things you ate, bought, or used today were at some point moved by a truck.

Adding driverless and electric tech into that equation, then, should yield positive outcomes on all sides, whether we’re talking about cheaper 12-packs of toilet paper, fewer traffic fatalities due to human error, a less-strained labor force, a stronger economy… or something pretty cool to see as you cruise down the highway in your (driverless, electric, futuristic) car.

Image Credit: Vitpho / Shutterstock.com

#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. In the meantime, humans continue to be superior when it comes to abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She is able to draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition will serve as a stimulant for engaging with critical questions about what kind of future we ought to create via such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by an algorithm trained, using deep learning neural networks, on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”
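PoemPortraits itself relies on a neural network, but the underlying idea Devlin describes (model the statistics of a corpus, then sample new phrases rather than copy existing ones) can be illustrated with a far simpler word-level Markov chain. The sketch below is a toy stand-in, not Google’s system, and the miniature “corpus” is invented for the example.

```python
import random
from collections import defaultdict

# Toy stand-in for style-emulating text generation: a word-level Markov chain.
# PoemPortraits uses a neural network trained on 20 million words of
# 19th-century poetry; the principle (model the statistics of a corpus, then
# sample new sequences from it) is the same, just vastly simplified here.

corpus = (
    "our transcendence adorns the silent stars "
    "the stars adorn the secret night "
    "the night conceals our silent transcendence"
).split()

# Build transition counts: which words tend to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate_line(seed_word, length=8):
    """Sample a new line by walking the transition table from a seed word."""
    word, line = seed_word, [seed_word]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:          # dead end: no observed continuation
            break
        word = random.choice(choices)
        line.append(word)
    return " ".join(line)

print(generate_line("the"))
```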

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s a collaborative project. All of the generated lines of poetry are combined to form an ever-growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration of people from around the world working with algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition at HG Contemporary showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was presented with 100,000 photos of Western art from over five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew on this historical knowledge, along with a mandate to produce something new, to create artworks without human intervention.
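Elgammal’s published work on creative adversarial networks suggests how that mandate to produce something new can be encoded: the generator is rewarded for images the discriminator judges to be art, yet penalized if they fit neatly into any one known style. The sketch below illustrates that two-term objective; it is an assumption based on the published CAN paper, not AICAN’s actual code, and the example numbers are arbitrary.

```python
import numpy as np

# Hedged sketch of a "learn the canon, then deviate from it" generator loss,
# in the spirit of Elgammal's creative adversarial network (CAN) work.
# Not AICAN's code; it only shows the two competing terms.

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def creative_generator_loss(art_vs_not_logit, style_logits):
    """art_vs_not_logit: discriminator's score that the image is 'art' (higher = more art-like).
    style_logits: discriminator's per-style scores (e.g. Baroque, Cubism, ...)."""
    # Term 1: adversarial -- the generated image should be judged as art.
    p_art = 1.0 / (1.0 + np.exp(-art_vs_not_logit))
    adversarial = -np.log(p_art + 1e-12)

    # Term 2: style ambiguity -- push the style distribution toward uniform,
    # i.e. maximize uncertainty about which established style the image fits.
    p_style = softmax(style_logits)
    uniform = np.full_like(p_style, 1.0 / len(p_style))
    ambiguity = np.sum(uniform * (np.log(uniform) - np.log(p_style + 1e-12)))  # KL(uniform || p_style)

    return adversarial + ambiguity

# Example: an image the discriminator finds art-like but confidently one style
# incurs a high ambiguity penalty, nudging the generator away from known styles.
print(creative_generator_loss(art_vs_not_logit=2.0,
                              style_logits=np.array([5.0, 0.1, 0.1, 0.1])))
```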

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods. The scores it generates are indistinguishable from those created by the most talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-created score and use it as a temp track on which to base the composition process. Another method involves using preset algorithms to compose music in predefined styles, including everything from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of the creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines are able to perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used our technological scaffolding to push ourselves beyond our biological limitations. We use the telescope to extend our line of sight, planes to fly, and smartphones to connect with others. Our machines are not always working against us, but rather working as an extension of our minds. Similarly, we could use our machines to expand on our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.

#435186 What’s Behind the International Rush ...

There’s no better way of ensuring you win a race than by setting the rules yourself. That may be behind the recent rush by countries, international organizations, and companies to put forward their visions for how the AI race should be governed.

China became the latest to release a set of “ethical standards” for the development of AI last month, which might raise eyebrows given the country’s well-documented AI-powered state surveillance program and suspect approaches to privacy and human rights.

But given the recent flurry of AI guidelines, it may well have been motivated by a desire not to be left out of the conversation. The previous week the OECD, backed by the US, released its own “guiding principles” for the industry, and in April the EU released “ethical guidelines.”

The language of most of these documents is fairly abstract and noticeably similar, with broad appeals to ideals like accountability, responsibility, and transparency. The OECD’s guidelines are the lightest on detail, while the EU’s offer some more concrete suggestions, such as ensuring humans always know if they’re interacting with AI and making algorithms auditable. China’s standards have an interesting focus on promoting openness and collaboration, as well as expressly acknowledging AI’s potential to disrupt employment.

Overall, though, one might be surprised that there aren’t more disagreements between three blocs with very divergent attitudes to technology, regulation, and economics. Most likely these are just the opening salvos in what will prove to be a long-running debate, and the devil will ultimately be in the details.

The EU seems to have stolen a march on the other two blocs, being first to publish its guidelines and having already implemented the world’s most comprehensive regulation of data—the bedrock of modern AI—with last year’s GDPR. But its lack of industry heavyweights is going to make it hard to hold onto that lead.

One organization that seems to be trying to take on the role of impartial adjudicator is the World Economic Forum, which recently hosted an event designed to find common ground between various stakeholders from across the world. What will come of the effort remains to be seen, but China’s release of guidelines broadly similar to those of its Western counterparts is a promising sign.

Perhaps most telling, though, is the ubiquitous presence of industry leaders in both advisory and leadership positions. China’s guidelines are backed by “an AI industrial league” including Baidu, Alibaba, and Tencent, and the co-chairs of the WEF’s AI Council are Microsoft President Brad Smith and prominent Chinese AI investor Kai-Fu Lee.

Shortly after the EU released its proposals one of the authors, philosopher Thomas Metzinger, said the process had been compromised by the influence of the tech industry, leading to the removal of “red lines” opposing the development of autonomous lethal weapons or social credit score systems like China’s.

For a long time big tech argued for self-regulation, but whether they’ve had an epiphany or have simply sensed the shifting winds, they are now coming out in favor of government intervention.

Both Amazon and Facebook have called for regulation of facial recognition, and in February Google went even further, calling for the government to set down rules governing AI. Facebook chief Mark Zuckerberg has also since called for even broader regulation of the tech industry.

But considering the current concern around the anti-competitive clout of the largest technology companies, it’s worth remembering that tough rules are always easier to deal with for companies with well-developed compliance infrastructure and big legal teams. And these companies are also making sure the regulation is on their terms. Wired details Microsoft’s protracted effort to shape Washington state laws governing facial recognition technology and Google’s enormous lobbying effort.

“Industry has mobilized to shape the science, morality and laws of artificial intelligence,” Harvard law professor Yochai Benkler writes in Nature. He highlights how Amazon’s funding of a National Science Foundation (NSF) program for projects on fairness in artificial intelligence undermines the ability of academia to act as an impartial counterweight to industry.

Excluding industry from the process of setting the rules to govern AI in a fair and equitable way is clearly not practical, writes Benkler, because they are the ones with the expertise. But there also needs to be more concerted public investment in research and policymaking, and efforts to limit the influence of big companies when setting the rules that will govern AI.

Image Credit: create jobs 51 / Shutterstock.com
