
#439157 This Week’s Awesome Tech Stories From ...

COMPUTING
Now for AI’s Latest Trick: Writing Computer Code
Will Knight | Wired
“It can take years to learn how to write computer code well. SourceAI, a Paris startup, thinks programming shouldn’t be such a big deal. The company is fine-tuning a tool that uses artificial intelligence to write code based on a short text description of what the code should do. Tell the company’s tool to ‘multiply two numbers given by a user,’ for example, and it will whip up a dozen or so lines in Python to do just that.”
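For a sense of what that output could look like (purely an illustration, not SourceAI’s actual generated code), a dozen or so lines of Python for the “multiply two numbers given by a user” prompt might read:

    # Illustrative sketch only -- not SourceAI's output.
    def multiply(a: float, b: float) -> float:
        """Return the product of two numbers."""
        return a * b

    if __name__ == "__main__":
        first = float(input("Enter the first number: "))
        second = float(input("Enter the second number: "))
        print(f"{first} x {second} = {multiply(first, second)}")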

SPACE
NASA’s Perseverance Rover Just Turned CO2 Into Oxygen
Morgan McFall-Johnsen | Business Insider
“That’s good news for the prospect of sending human explorers to Mars. Oxygen takes up a lot of room on a spacecraft, and it’s unlikely that astronauts will be able to bring enough with them to Mars. So they’ll need to produce their own oxygen from the Martian atmosphere, both for breathing and for fueling rockets to return to Earth.”

ARTIFICIAL INTELLIGENCE
Latest Neural Nets Solve World’s Hardest Equations Faster Than Ever Before
Anil Ananthaswamy | Quanta
“…researchers have built new kinds of artificial neural networks that can approximate solutions to partial differential equations orders of magnitude faster than traditional PDE solvers. And once trained, the new neural nets can solve not just a single PDE but an entire family of them without retraining.”
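As a very rough illustration of the surrogate idea (a minimal sketch under assumed details, not the operator-learning architectures the article describes), a single small network can be fit to a whole parameterized family of equations, here the 1D heat equation with a known analytic solution, and then queried at a diffusivity it was never explicitly fit to, without retraining:

    # Minimal sketch (assumed details): one small network fit to a family of
    # 1D heat equations u_t = k * u_xx on [0, 1] with u(x, 0) = sin(pi * x),
    # whose exact solution is u(x, t) = exp(-k * pi^2 * t) * sin(pi * x).
    # Conditioning on the diffusivity k lets one trained net answer queries
    # for new k values without retraining.
    import numpy as np
    import torch
    import torch.nn as nn

    def exact(x, t, k):
        return np.exp(-k * np.pi ** 2 * t) * np.sin(np.pi * x)

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 20000)
    t = rng.uniform(0.0, 0.5, 20000)
    k = rng.uniform(0.1, 2.0, 20000)          # a whole family of heat equations
    inputs = torch.tensor(np.stack([x, t, k], axis=1), dtype=torch.float32)
    targets = torch.tensor(exact(x, t, k)[:, None], dtype=torch.float32)

    model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(2000):                  # full-batch regression on the family
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        opt.step()

    # Query a diffusivity the net was not explicitly fit to (k = 1.37).
    query = torch.tensor([[0.3, 0.1, 1.37]], dtype=torch.float32)
    print("network:", model(query).item(), "   exact:", exact(0.3, 0.1, 1.37))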

SPACE
NASA’s Bold Bet on Starship for the Moon May Change Spaceflight Forever
Eric Berger | Ars Technica
“Until now, the plans NASA had contemplated for human exploration in deep space all had echoes of the Apollo program. …By betting on Starship, which entails a host of development risks, NASA is taking a chance on what would be a much brighter future. One in which not a handful of astronauts go to the Moon or Mars, but dozens and then hundreds. In this sense, Starship represents a radical departure for NASA and human exploration.”

AUTOMATION
Who Will Win the Self-Driving Race? Here Are Eight Possibilities
Timothy B. Lee | Ars Technica
“…predicting what the next couple of years will bring is a challenge. So rather than offering a single prediction, here are eight: I’ve broken down the future into eight possible scenarios, each with a rough probability. …A decade from now, we’ll be able to look back and say which companies or approaches were on the right track. For now, we can only guess.”

TECHNOLOGY
Europe’s Proposed Limits on AI Would Have Global Consequences
Will Knight | Wired
“The rules are the most significant international effort to regulate AI to date, covering facial recognition, autonomous driving, and the algorithms that drive online advertising, automated hiring, and credit scoring. The proposed rules could help shape global norms and regulations around a promising but contentious technology.”

SCIENCE
What Do You Call a Bunch of Black Holes: A Crush? A Scream?
Dennis Overbye | The New York Times
“[Astrophysicist Jocelyn Kelly Holley-Bockelmann] was trying to run a Zoom meeting of the [Laser Interferometer Space Antenna] recently ‘when one of the members said his daughter was wondering what you call a collective of black holes—and then the meeting fell apart, with everyone trying to up one another,’ she said in an email. ‘Each time I saw a suggestion, I had to stop and giggle like a loon, which egged us all on more.’”

ENVIRONMENT
Stopping Plastic in Rivers From Reaching the Ocean With New Tech From the Ocean Cleanup Project
Stephen Beacham | CNET
“First announced by Ocean Cleanup founder and CEO Boyan Slat in 2019, the Interceptors are moored to river beds and use the currents to snag debris floating on the surface. Then they direct the trash onto a conveyor belt that shuttles it into six large onboard dumpsters. The Interceptors run completely autonomously day and night, getting power from solar panels.”

FUTURE
Hackers Used to Be Humans. Soon, AIs Will Hack Humanity
Bruce Schneier | Wired
“Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long. As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope.”

Image Credit: NASA (Image of Martian sand dunes taken by NASA’s Curiosity rover)


#438754 TALOS Humanoid Robot in Scotland

Video of TALOS arriving at the University of Edinburgh, being unpacked, and activated.


#439147 Robots Versus Toasters: How The Power of ...

Kate Darling is an expert on human-robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant.

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.

iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks, Robovie!


#439132 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
15 Graphs You Need to See to Understand AI in 2021
Charles Q. Choi | IEEE Spectrum
“If you haven’t had time to read the AI Index Report for 2021, which clocks in at 222 pages, don’t worry—we’ve got you covered. The massive document, produced by the Stanford Institute for Human-Centered Artificial Intelligence, is packed full of data and graphs, and we’ve plucked out 15 that provide a snapshot of the current state of AI.”

FUTURE
Geoffrey Hinton Has a Hunch About What’s Next for Artificial Intelligence
Siobhan Roberts | MIT Technology Review
“Back in November, the computer scientist and cognitive psychologist Geoffrey Hinton had a hunch. After a half-century’s worth of attempts—some wildly successful—he’d arrived at another promising insight into how the brain works and how to replicate its circuitry in a computer.”

ROBOTICS
Robotic Exoskeletons Could One Day Walk by Themselves
Charles Q. Choi | IEEE Spectrum
“Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user’s current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.”

TECHNOLOGY
Microsoft Buys AI Speech Tech Company Nuance for $19.7 Billion
James Vincent | The Verge
“The $19.7 billion acquisition of Nuance is Microsoft’s second-largest behind its purchase of LinkedIn in 2016 for $26 billion. It comes at a time when speech tech is improving rapidly, thanks to the deep learning boom in AI, and there are simultaneously more opportunities for its use.”

ENVIRONMENT
Google’s New 3D Time-Lapse Feature Shows How Humans Are Affecting the Planet
Sam Rutherford | Gizmodo
“Described by Google Earth director Rebecca Moore as the biggest update to Google Earth since 2017, Timelapse in Google Earth combines more than 24 million satellite photos, two petabytes of data, and 2 million hours of CPU processing time to create a 4.4-terapixel interactive view showing how the Earth has changed from 1984 to 2020.”

GENETICS
The Genetic Mistakes That Could Shape Our Species
Zaria Gorvett | BBC
“New technologies may have already introduced genetic errors to the human gene pool. How long will they last? And how could they affect us? …According to [Stanford’s Hank] Greely, who has written a book about the implications of He [Jiankui]’s project, the answer depends on what the edits do and how they’re inherited.”

SPACE
The Era of Reusability in Space Has Begun
Eric Berger | Ars Technica
“As [Earth orbit] becomes more cluttered [due to falling launch costs], the responsible thing is to more actively refuel, recycle, and dispose of satellites. Northrop Grumman has made meaningful progress toward such a future of satellite servicing. As a result, reusability is now moving into space.”

COMPUTING
100 Million More IoT Devices Are Exposed—and They Won’t Be the Last
Lily Hay Newman | Wired
“Over the last few years, researchers have found a shocking number of vulnerabilities in seemingly basic code that underpins how devices communicate with the internet. Now a new set of nine such vulnerabilities are exposing an estimated 100 million devices worldwide, including an array of internet-of-things products and IT management servers.”

Image Credit: Naitian (Tony) Wang / Unsplash


#438745 Social robot from India

The Indian humanoid “SHALU” can speak 9 Indian and 38 foreign languages, recognize faces, and identify people and objects!
