
#437103 How to Make Sense of Uncertainty in a ...

As the internet churns with information about Covid-19, about the virus that causes the disease, and about what we’re supposed to do to fight it, it can be difficult to see the forest for the trees. What can we realistically expect for the rest of 2020? And how do we even know what’s realistic?

Today, humanity’s primary, ideal goal is to eliminate the virus, SARS-CoV-2, and Covid-19. Our second-choice goal is to control virus transmission. Either way, we have three big aims: to save lives, to return to public life, and to keep the economy functioning.

To hit our second-choice goal—and maybe even our primary goal—countries are pursuing five major public health strategies. Note that many of these strategies cross-fertilize: for example, advances in virus testing and antibody testing will drive data-based prevention efforts.

Five major public health strategies are underway to bring Covid-19 under control and to contain the spread of SARS-CoV-2.
These strategies arise from things we can control based on the things that we know at any given moment. But what about the things we can’t control and don’t yet know?

The biology of the virus and how it interacts with our bodies is what it is, so we should seek to understand it as thoroughly as possible. How long any immunity gained from prior infection lasts—and indeed whether people develop meaningful immunity at all after infection—are open questions urgently in need of greater clarity. Similarly, right now it’s important to focus on understanding rather than making assumptions about environmental factors like seasonality.

But the biggest question on everyone’s lips is, “When?” When will we see therapeutic progress against Covid-19? And when will life get “back to normal”? There are lots of models out there on the internet; which of those models are right? The simple answer is “none of them.” That’s right—it’s almost certain that every model you’ve seen is wrong in at least one detail, if not all of them. But modeling is meant to be a tool for deeper thinking, a way to run mental (and computational) experiments before—and while—taking action. As George E. P. Box famously wrote in 1976, “All models are wrong, but some are useful.”
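To make that concrete, here is a minimal sketch of the kind of computational experiment Box’s dictum points at: a toy SIR (susceptible-infected-recovered) model. The population, contact rate (beta), and recovery rate (gamma) below are invented for illustration; the value lies in comparing scenarios, not in the numbers themselves.

```python
# A toy SIR (susceptible-infected-recovered) model: deliberately "wrong but
# useful," for running what-if experiments rather than making forecasts.
# The population and the beta/gamma parameters below are invented assumptions.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the epidemic by one day with simple Euler integration."""
    n = s + i + r
    new_infections = beta * s * i / n * dt
    new_recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def run(beta, gamma=0.1, days=365, population=1_000_000, initial_infected=100):
    """Return the day of peak infections, the peak size, and total ever infected."""
    s, i, r = population - initial_infected, initial_infected, 0.0
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        s, i, r = sir_step(s, i, r, beta, gamma)
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, round(peak_i), round(i + r)

# Two scenarios: unchecked spread vs. distancing that cuts the contact rate.
print("baseline:  ", run(beta=0.25))
print("distancing:", run(beta=0.15))
```

In this toy world, cutting the contact rate both shrinks the epidemic’s peak and pushes it months later, which is exactly the kind of qualitative insight a “wrong” model can still deliver.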

Here, we’re seeking useful insights, as opposed to exact predictions, which is why we’re pulling back from quantitative details to get at the mindsets that will support agency and hope. To that end, I’ve been putting together timelines that I believe will yield useful expectations for the next year or two—and asking how optimistic I need to be in order to believe a particular timeline.

For a moderately optimistic scenario to hold, breakthroughs in science and technology must arrive at the pace expected from previous efforts, and the assumptions behind them must turn out to be basically correct; accessibility of those breakthroughs must increase at a reasonable pace; regulation must achieve its desired effects without major surprises; and compliance with regulations must be reasonably high.

In contrast, if I’m being highly optimistic, breakthroughs in science and technology and their accessibility come more quickly than they ever have before; regulation is evidence-based and successful in the first try or two; and compliance with those regulations is high and uniform. If I’m feeling not-so-optimistic, then I anticipate serious setbacks to breakthroughs and accessibility (with the overturning of many important assumptions), repeated failure of regulations to achieve their desired outcomes, and low compliance with those regulations.

The following scenarios outline the things that need to happen in the fight against Covid-19, when I expect to see them, and how confident I feel in those expectations. They focus on North America and Europe because there are data missing about China’s 2019 outbreak and other regions are still early in their outbreaks. Perhaps the most important thing to keep in mind throughout: We know more today than we did yesterday, but we still have much to learn. New knowledge derived from greater study and debate will almost certainly inspire ongoing course corrections.

As you dive into the scenarios below, practice these three mindset shifts. First, defeating Covid-19 will be a marathon, not a sprint. We shouldn’t expect life to look like 2019 for the next year or two—if ever. As Ed Yong wrote recently in The Atlantic, “There won’t be an obvious moment when everything is under control and regular life can safely resume.” Second, remember that you have important things to do for at least a year. And third, we are all in this together. There is no “us” and “them.” We must all be alert, responsive, generous, and strong throughout 2020 and 2021—and willing to throw away our assumptions when scientific evidence invalidates them.

The Middle Way: Moderate Optimism
Let’s start with the case in which I have the most confidence: moderate optimism.

This timeline considers milestones through late 2021, the earliest that I believe vaccines will become available. The “normal” timeline for developing a vaccine for diseases like seasonal flu is 18 months, which leads to my projection that we could potentially have vaccines as soon as 18 months from the first quarter of 2020. While Melinda Gates agrees with that projection, others (including AI) believe that 3 to 5 years is far more realistic, based on past vaccine development and the need to test safety and efficacy in humans. However, repurposing existing vaccines against other diseases—or piggybacking off clever synthetic platforms—could lead to vaccines being available sooner. I tried to balance these considerations for this moderately optimistic scenario. Either way, deploying vaccines at the end of 2021 is probably much later than you may have been led to believe by the hype engine. Again, if you take away only one message from this article, remember that the fight against Covid-19 is a marathon, not a sprint.

Here, I’ve visualized a moderately optimistic scenario as a baseline. Think of these timelines as living guides, as opposed to exact predictions. There are still many unknowns. More or less optimistic views (see below) and new information could shift these timelines forward or back and change the details of the strategies.
Based on current data, I expect that the first wave of Covid-19 cases (where we are now) will continue to subside in many areas, leading governments to ease restrictions in an effort to get people back to work. We’re already seeing movement in that direction, with a variety of benchmarks and changes at state and country levels around the world. But depending on the details of the changes, easing restrictions will probably cause a second wave of sickness (see Germany and Singapore), which should lead governments to reimpose at least some restrictions.

In tandem, therapeutic efforts will be transitioning from emergency treatments to treatments that have been approved based on safety and efficacy data in clinical trials. In a moderately optimistic scenario, assuming clinical trials currently underway yield at least a few positive results, this shift to mostly approved therapies could happen as early as the third or fourth quarter of this year and continue from there. One approval that should come rather quickly is for plasma therapies, in which the blood from people who have recovered from Covid-19 is used as a source of antibodies for people who are currently sick.

Companies around the world are working on both viral and antibody testing, focusing on speed, accuracy, reliability, and wide accessibility. While these tests are currently being run in hospitals and research laboratories, at-home testing is a critical component of the mass testing we’ll need to keep viral spread in check. These are needed to minimize the impact of asymptomatic cases, test the assumption that infection yields resistance to subsequent infection (and whether it lasts), and construct potential immunity passports if this assumption holds. Testing is also needed for contact tracing efforts to prevent further spread and get people back to public life. Finally, it’s crucial to our fundamental understanding of the biology of SARS-CoV-2 and Covid-19.

We need tests that are very reliable, both in the clinic and at home. So, don’t go buying any at-home test kits just yet, even if you find them online. Wait for reliable test kits and deeper understanding of how a test result translates to everyday realities. If we’re moderately optimistic, in-clinic testing will rapidly expand this quarter and/or next, with the possibility of broadly available, high-quality at-home sampling (and perhaps even analysis) thereafter.

Note that testing is not likely to be a “one-and-done” endeavor, as a person’s infection and immunity status change over time. Expect to be testing yourself—and your family—often as we move later into 2020.

Testing data are also going to inform distancing requirements at the country and local levels. In this scenario, restrictions—at some level of stringency—could persist at least through the end of 2020, as most countries are way behind the curve on testing (Iceland is an informative exception). Governments will likely continue to ask citizens to work from home if at all possible; to wear masks or face coverings in public; to employ heightened hygiene and social distancing in workplaces; and to restrict travel and social gatherings. So while it’s likely we’ll be eating in local restaurants again in 2020 in this scenario, at least for a little while, it’s not likely we’ll be heading to big concerts any time soon.

The Extremes: High and Low Optimism
How would high and low levels of optimism change our moderately optimistic timeline? The milestones are the same, but the time required to achieve them is shorter or longer, respectively. Quantifying these shifts is less important than acknowledging and incorporating a range of possibilities into our view. It pays to pay attention to our own biases. Here are a few examples of reasonable possibilities that could shift the moderately optimistic timeline.

When vaccines become available
Vaccine repurposing could shorten the time for vaccines to become available; today, many vaccine candidates are in various stages of testing. On the other hand, difficulties in manufacture and distribution, or faster-than-expected mutation of SARS-CoV-2, could slow vaccine development. Given what we know now, I am not strongly concerned about either of these possibilities—drug companies are rapidly expanding their capabilities, and viral mutation isn’t an urgent concern at this time based on sequencing data—but they could happen.

At first, governments will likely supply vaccines to essential workers such as healthcare workers, but it is essential that vaccines become widely available around the world as quickly and as safely as possible. Overall, I suggest a dose of skepticism when reading highly optimistic claims about a vaccine (or multiple vaccines) being available in 2020. Remember, a vaccine is a knockout punch, not a first line of defense for an outbreak.

When testing hits its stride
While I am confident that testing is a critical component of our response to Covid-19, reliability is incredibly important to testing for SARS-CoV-2 and for immunity to the disease, particularly at home. For an individual, a false negative (being told you don’t have antibodies when you really do) could be just as bad as a false positive (being told you do have antibodies when you really don’t). Those errors are compounded when governments are trying to make evidence-based policies for social and physical distancing.
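A quick worked example shows why reliability matters so much, particularly while prevalence is still low. The sensitivity, specificity, and prevalence figures below are hypothetical, chosen only to illustrate the arithmetic, not to describe any real test.

```python
# Back-of-the-envelope test reliability: sensitivity, specificity, and
# prevalence values here are hypothetical, chosen only to show how a
# seemingly accurate test behaves when few people have been infected.

def predictive_values(sensitivity, specificity, prevalence):
    """Return P(infected | positive test) and P(not infected | negative test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# If only 5% of people have antibodies, a test with 95% sensitivity and 95%
# specificity yields a positive predictive value of just 50%: half of the
# "you have antibodies" results are wrong.
ppv, npv = predictive_values(sensitivity=0.95, specificity=0.95, prevalence=0.05)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")
```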

If you’re highly optimistic, high-quality testing will ramp up quickly as companies and scientists innovate rapidly by cleverly combining multiple test modalities, digital signals, and cutting-edge tech like CRISPR. Pop-up testing labs could also take some pressure off hospitals and clinics.

If things don’t go well, reliability issues could hinder testing, manufacturing bottlenecks could limit availability, and both could hamstring efforts to control spread and ease restrictions. And if it turns out that immunity to Covid-19 isn’t working the way we assumed, then we must revisit our assumptions about our path(s) back to public life, as well as our vaccine-development strategies.

How quickly safe and effective treatments appear
Drug development is known to be long, costly, and fraught with failure. It’s not uncommon for hope in a drug to spike early, only to be dashed later down the road. With that in mind, the number of treatments currently under investigation is astonishing, as is the speed at which they’re proceeding through testing. Breakthroughs in a therapeutic area—for example in treating the seriously ill or in reducing viral spread after an infection takes hold—could motivate changes in the focus of distancing regulations.

While speed will save lives, we cannot overlook the importance of knowing a treatment’s efficacy (does it work against Covid-19?) and safety (does it make you sick in a different, or worse, way?). Repurposing drugs that have already been tested for other diseases is speeding innovation here, as is artificial intelligence.

Remarkable collaborations among governments and companies, large and small, are driving innovation in therapeutics and devices such as ventilators for treating the sick.

Whether government policies are effective and responsive
Those of us who have experienced lockdown are eager for it to be over. Businesses, economists, and governments are also eager to relieve the terrible pressure that is being exerted on the global economy. However, lifting restrictions will almost certainly lead to a resurgence in sickness.

Here, the future is hard to model because there are many, many factors at play, and at play differently in different places—including the extent to which individuals actually comply with regulations.

Reliable testing—both in the clinic and at home—is crucial to designing and implementing restrictions, monitoring their effectiveness, and updating them; delays in reliable testing could seriously hamper this design cycle. Lack of trust in governments and/or companies could also suppress uptake. That said, systems are already in place for contact tracing in East Asia. Other governments could learn important lessons, but must also earn—and keep—their citizens’ trust.

Expect to see restrictions imposed and then lifted in response to changes in the number of Covid-19 cases and in the effectiveness of our prevention strategies. Also expect country-specific and perhaps even area-specific responses that differ from each other. The benefit of this approach? Governments around the world are running perhaps hundreds of real-time experiments and design cycles in balancing health and the economy, and we can learn from the results.

A Way Out
As Jeremy Farrar, head of the Wellcome Trust, told Science magazine, “Science is the exit strategy.” Some of our greatest technological assistance is coming from artificial intelligence, digital tools for collaboration, and advances in biotechnology.

Our exit strategy also needs to include empathy and future visioning—because in the midst of this crisis, we are breaking ground for a new, post-Covid future.

What do we want that future to look like? How will the hard choices we make now about data ethics impact the future of surveillance? Will we continue to embrace inclusiveness and mass collaboration? Perhaps most importantly, will we lay the foundation for successfully confronting future challenges? Whether we’re thinking about the next pandemic (and there will be others) or the cascade of catastrophes that climate change is bringing ever closer—it’s important to remember that we all have the power to become agents of that change.

Special thanks to Ola Kowalewski and Jason Dorrier for significant conversations.

Image Credit: Drew Beamer / Unsplash


#436220 How Boston Dynamics Is Redefining Robot ...

Gif: Bob O’Connor/IEEE Spectrum

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Spot

Photo: Bob O’Connor

84 cm HEIGHT

25 kg WEIGHT

5.76 km/h SPEED

SENSING: Stereo cameras, inertial measurement unit, position/force sensors

ACTUATION: 12 DC motors

POWER: Battery (90 minutes per charge)

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Atlas

Photo: Bob O’Connor

150 cm HEIGHT

80 kg WEIGHT

5.4 km/h SPEED

SENSING: Lidar and stereo vision

ACTUATION: 28 hydraulic actuators

POWER: Battery

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that don’t seem possible for such a large humanoid to do. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

In the Spotlight

Photo: Bob O’Connor

Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

Super Senses

Photo: Bob O’Connor

Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.
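Boston Dynamics has not published Spot’s navigation software, but the mapping half of SLAM can be sketched in a few lines: fold range measurements taken from known sensor poses into an occupancy grid. A full SLAM system also estimates those poses at the same time, which is the hard part; the grid size, cell size, and update weights below are illustrative assumptions.

```python
# Not Spot's actual software (which is proprietary); a minimal sketch of the
# mapping half of SLAM: folding range measurements taken from known sensor
# poses into a 2D occupancy grid. Grid size, cell size, and the hit/miss
# weights are all illustrative assumptions.
import math

GRID_SIZE, CELL_M = 200, 0.05                          # a 10 m x 10 m map, 5 cm cells
grid = [[0.5] * GRID_SIZE for _ in range(GRID_SIZE)]   # 0.5 = unknown occupancy

def update_cell(gx, gy, occupied, hit=0.9, miss=0.3):
    """Bayes-style update of one cell's occupancy probability."""
    p = grid[gy][gx]
    likelihood = hit if occupied else miss
    grid[gy][gx] = likelihood * p / (likelihood * p + (1 - likelihood) * (1 - p))

def integrate_scan(x, y, heading, ranges, angle_step=math.radians(1)):
    """Mark the endpoint of each range reading (in meters) as occupied."""
    for k, dist in enumerate(ranges):
        a = heading + k * angle_step
        gx = int((x + dist * math.cos(a)) / CELL_M)
        gy = int((y + dist * math.sin(a)) / CELL_M)
        if 0 <= gx < GRID_SIZE and 0 <= gy < GRID_SIZE:
            update_cell(gx, gy, occupied=True)

# Example: a sensor 0.5 m from the map corner, facing +x, sees a wall 2 m away
# across a 30-degree field of view.
integrate_scan(x=0.5, y=0.5, heading=0.0, ranges=[2.0] * 30)
```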

Stepping Up

Photo: Bob O’Connor

In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

Funky Feet

Gif: Bob O’Connor/IEEE Spectrum

Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”

Robot Blood

Photo: Bob O’Connor

Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.

Next Generation

Gif: Bob O’Connor/IEEE Spectrum

The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

Walk This Way

Photo: Bob O’Connor

To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

Biologically Inspired

Photos: Bob O’Connor

Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.
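Boston Dynamics has not published Atlas’s controller, but the flavor of this model-based approach can be sketched with the textbook linear inverted pendulum: lump the robot’s mass at its center of mass and choose where to press on the ground so that the mass stays over the feet. The gains, center-of-mass height, and foot length below are invented for illustration.

```python
# Not Boston Dynamics' controller; a sketch of the model-based idea using the
# linear inverted pendulum: lump the robot's mass at its center of mass (CoM)
# and choose a center of pressure (CoP) under the feet that pushes the CoM
# back toward a target. Gains, CoM height, and foot size are invented values.
import math

G, COM_HEIGHT = 9.81, 0.9    # gravity (m/s^2), assumed CoM height (m)
OMEGA_SQ = G / COM_HEIGHT    # pendulum constant: com_acc = OMEGA_SQ * (com - cop)

def desired_cop(com_x, com_vx, target_x=0.0, k_p=3.0, k_d=1.5):
    """PD law on the CoM, converted into a ground contact point."""
    desired_acc = -k_p * (com_x - target_x) - k_d * com_vx
    return com_x - desired_acc / OMEGA_SQ

def simulate(com_x=0.08, com_vx=0.0, dt=0.002, steps=1500, foot_half_len=0.11):
    """Push the CoM 8 cm off target and let the controller recover it."""
    for _ in range(steps):
        cop = max(-foot_half_len, min(foot_half_len, desired_cop(com_x, com_vx)))
        acc = OMEGA_SQ * (com_x - cop)    # dynamics of the inverted pendulum
        com_vx += acc * dt
        com_x += com_vx * dt
    return com_x, com_vx

print(simulate())   # the CoM settles back toward zero if the push is recoverable
```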

This article appears in the December 2019 print issue as “By Leaps and Bounds.”


#436190 What Is the Uncanny Valley?

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seemed a bit off or unsettling, though you couldn’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This zone of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori

The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.
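Mori drew the curve from intuition rather than measurement, but its shape is easy to reproduce with an invented function: affinity rises with human likeness, plunges just short of full realism, then recovers. The formula below has no empirical basis; it exists only to make the shape concrete.

```python
# Mori's curve was drawn from intuition, not measurements; this is an invented
# function that merely reproduces its shape: affinity rises with human
# likeness, plunges just short of full realism, then recovers.
import math

def affinity(likeness):
    """likeness in [0, 1]; returns a unitless affinity score."""
    rise = likeness                                             # general upward trend
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.003)  # sharp dip near 85% likeness
    return rise - valley

for pct in range(0, 101, 5):
    x = pct / 100
    print(f"{pct:3d}% humanlike -> affinity {affinity(x):+.2f}")
```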

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

​3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley wasn’t actually proposed as a scientific concept, yet has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”


#434324 Big Brother Nation: The Case for ...

Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.

As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.

In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly-accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.
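As a sketch of what those three rules imply in software, here is a minimal, hypothetical data trust: uploads carry camera metadata, every search is appended to a log that anyone can read, and there is deliberately no role or permission check. The names and fields are invented for illustration, not drawn from any real city system.

```python
# A minimal, hypothetical sketch of the three rules in code: uploads carry
# camera metadata, every search is appended to a publicly readable log, and
# there is deliberately no role or permission check. All names and fields are
# invented for illustration, not a real city system.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class VideoRecord:
    camera_id: str
    owner: str
    location: Tuple[float, float]   # (latitude, longitude) from camera metadata
    recorded_on: str                # e.g. "2019-06-01"
    uri: str                        # where the raw footage lives in cloud storage

@dataclass
class AccessLogEntry:
    requester: str
    query: str
    timestamp: str

class DataTrust:
    def __init__(self):
        self.videos: List[VideoRecord] = []
        self.access_log: List[AccessLogEntry] = []

    def upload(self, record: VideoRecord) -> None:
        """Rule 1: camera owners deposit each day's raw footage plus metadata."""
        self.videos.append(record)

    def search(self, requester: str, query: str) -> List[VideoRecord]:
        """Rule 2: every search is logged. Rule 3: anyone may call this."""
        self.access_log.append(AccessLogEntry(
            requester, query, datetime.now(timezone.utc).isoformat()))
        return [v for v in self.videos
                if query in (v.camera_id, v.owner, v.recorded_on)]

    def who_searched(self) -> List[AccessLogEntry]:
        # The log itself is public, so anyone can see who searched for what.
        return list(self.access_log)
```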

This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.

It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion during talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.

The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”

Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.

Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.

By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the meta-data that’s wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
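Generating such a heat map is straightforward once camera metadata is public. The sketch below simply bins camera coordinates into coarse grid cells and counts them; the coordinates and cell size are hypothetical, and a real map would project latitude and longitude properly.

```python
# A sketch of the heat-map idea: bin camera locations (taken from uploaded
# video metadata) into coarse grid cells and count cameras per cell. The
# coordinates and cell size are hypothetical; a real map would project
# latitude/longitude properly rather than binning raw degrees.
from collections import Counter

def surveillance_heatmap(camera_locations, cell_deg=0.005):
    """camera_locations: iterable of (lat, lon) pairs pulled from metadata.
    Returns a Counter mapping grid cells to camera counts."""
    cells = Counter()
    for lat, lon in camera_locations:
        cells[(round(lat / cell_deg), round(lon / cell_deg))] += 1
    return cells

# Hypothetical example: three cameras clustered downtown, one in a quieter area.
cameras = [(40.7128, -74.0060), (40.7130, -74.0058),
           (40.7127, -74.0063), (40.7500, -73.9900)]
for cell, count in surveillance_heatmap(cameras).most_common():
    print(cell, count)
```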

Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.

Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.

Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.

Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.

Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, the creator of the open-source operating system Linux, inspired the famous observation that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, that law could be restated as “given enough eyeballs, all crimes are shallow.”

If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.

While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.

The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is searching for the right codec to crack the dozens of different file formats they encounter so they can watch and analyze the footage.

The result of these practical barriers is that as it stands today, only people with considerable legal or political clout are able to successfully gain access into a city’s privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras were pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.

Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.

Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.

Image Credit: VladFotoMag / Shutterstock.com


#433799 The First Novel Written by AI Is ...

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.

People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.

This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.

In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.

But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?

Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.

“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel that his machine created. 1 The Road is currently marketed as the first novel written by AI.

Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.

The results are… mixed.

The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin according to the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds, becoming surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.

Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.

A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.

The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. By default, information contained in individual neurons is preserved, and only small parts can be “forgotten” or “learned” in an individual timestep, rather than neurons being entirely overwritten.

The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.
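Goodwin has not published the full system, but the core mechanism, a character-level LSTM warmed up on a seed phrase and then sampled one character at a time, can be sketched in PyTorch. The model sizes, the stoi/itos character maps, and the sampling temperature below are assumptions for illustration, and training on the 360 MB corpus is omitted.

```python
# A sketch of the core idea, not Goodwin's actual system: a character-level
# LSTM that is "warmed up" on a seed phrase and then samples one character at
# a time. Model sizes are illustrative; training on the corpus is omitted, and
# stoi/itos (character <-> index maps built from the corpus) are assumed.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

def generate(model, stoi, itos, seed, length=300, temperature=0.8):
    """Seed the hidden state with a phrase (e.g. a timestamp or GPS caption),
    then emit `length` characters by sampling from the model's predictions."""
    model.eval()
    ids = torch.tensor([[stoi[c] for c in seed]])
    out = list(seed)
    with torch.no_grad():
        logits, state = model(ids)              # warm up on the seed
        for _ in range(length):
            probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, num_samples=1)   # shape (1, 1)
            out.append(itos[nxt.item()])
            logits, state = model(nxt, state)   # feed the sample back in
    return "".join(out)
```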

In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.

The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem with conversational artificial intelligence that computer scientists have been trying to solve since Turing’s day. The machines can detect and reproduce complex patterns increasingly better than humans can, but they have no understanding of what those patterns mean.

Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.

When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.

Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.

Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.

Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”

AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely-convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.

Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.

Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.

Neural networks might not be capable of writing intricately-plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake mish-mash.

You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.

Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”

Image Credit: eurobanks / Shutterstock.com
