#439023 In ‘Klara and the Sun,’ We Glimpse ...
In a store in the center of an unnamed city, humanoid robots are displayed alongside housewares and magazines. They watch the fast-moving world outside the window, anxiously awaiting the arrival of customers who might buy them and take them home. Among them is Klara, a particularly astute robot who loves the sun and wants to learn as much as possible about humans and the world they live in.
So begins Kazuo Ishiguro’s new novel Klara and the Sun, published earlier this month. The book, told from Klara’s perspective, portrays an eerie future society in which intelligent machines and other advanced technologies have been integrated into daily life, but not everyone is happy about it.
Technological unemployment, the progress of artificial intelligence, inequality, the safety and ethics of gene editing, increasing loneliness and isolation—all of which we’re grappling with today—show up in Ishiguro’s world. It’s like he hit a fast-forward button, mirroring back to us how things might play out if we don’t approach these technologies with caution and foresight.
The wealthy genetically edit or “lift” their children to set them up for success, while the poor have to make do with the regular old brains and bodies bequeathed them by evolution. Lifted and unlifted kids generally don’t mix, and this is just one of many sinister delineations between a new breed of haves and have-nots.
There’s anger about robots’ steady infiltration into everyday life, and questions about how similar their rights should be to those of humans. “First they take the jobs. Then they take the seats at the theater?” one woman fumes.
References to “changes” and “substitutions” allude to an economy where automation has eliminated millions of jobs. While “post-employed” people squat in abandoned buildings and fringe communities arm themselves in preparation for conflict, those whose livelihoods haven’t been destroyed can afford to have live-in housekeepers and buy Artificial Friends (or AFs) for their lonely children.
“The old traditional model that we still live with now—where most of us can get some kind of paid work in exchange for our services or the goods we make—has broken down,” Ishiguro said in a podcast discussion of the novel. “We’re not talking just about the difference between rich and poor getting bigger. We’re talking about a gap appearing between people who participate in society in an obvious way and people who do not.”
He has a point; techno-optimists claim the economic changes brought by automation and AI will give us all more free time, letting us work less and devote ourselves to passion projects, but how would that actually play out? What would millions of "post-employed" people receiving basic income actually do with their time and energy?
In the novel, we don’t get much of a glimpse of this side of the equation, but we do see how the wealthy live. After a long wait, just as the store manager seems ready to give up on selling her, Klara is chosen by a 14-year-old girl named Josie, the daughter of a woman who wears “high-rank clothes” and lives in a large, sunny home outside the city. Cheerful and kind, Josie suffers from an unspecified illness that periodically flares up and leaves her confined to her bed for days at a time.
Her life seems somewhat bleak, the need for an AF clear. In this future world, the children of the wealthy no longer go to school together, instead studying alone at home on their digital devices. “Interaction meetings” are set up for them to learn to socialize, their parents carefully eavesdropping from the next room and trying not to intervene when there’s conflict or hurt feelings.
Klara does her best to be a friend, aide, and confidante to Josie while continuing to learn about the world around her and decode the mysteries of human behavior. We surmise that she was programmed with a basic ability to understand emotions, which evolves along with her other types of intelligence. “I believe I have many feelings. The more I observe, the more feelings become available to me,” she explains to one character.
Ishiguro does an excellent job of representing Klara’s mind: a blend of pre-determined programming, observation, and continuous learning. Her narration has qualities both robotic and human; we can tell when something has been programmed in—she “Gives Privacy” to the humans around her when that’s appropriate, for example—and when she’s figured something out for herself.
But the author maintains some mystery around Klara’s inner emotional life. “Does she actually understand human emotions, or is she just observing human emotions and simulating them within herself?” he said. “I suppose the question comes back to, what are our emotions as human beings? What do they amount to?”
Klara is particularly attuned to human loneliness, since she essentially was made to help prevent it. It is, in her view, people's biggest fear, and something they'll go to great lengths to avoid, yet can never fully escape. "Perhaps all humans are lonely," she says.
Warding off loneliness through technology isn't a futuristic idea; it's something we've been doing for a long time, with the technologies at hand growing more and more sophisticated. Products like AFs already exist. There's XiaoIce, a chatbot that uses "sentiment analysis" to keep its 660 million users engaged, and Azuma Hikari, a character-based AI designed to "bring comfort" to users whose lives lack emotional connection with other humans.
The mere existence of these tools might seem sinister if they weren't so widely adopted; when millions of people use AIs to fill a void in their lives, it raises deeper questions about our ability to connect with each other and whether technology is strengthening that connection or eroding it.
This isn’t the only big question the novel tackles. An overarching theme is one we’ve been increasingly contemplating as computers start to acquire more complex capabilities, like the beginnings of creativity or emotional awareness: What is it that truly makes us human?
“Do you believe in the human heart?” one character asks. “I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?”
The alternative, at least in the story, is that people don’t have a unique essence, but rather we’re all a blend of traits and personalities that can be reduced to strings of code. Our understanding of the brain is still elementary, but at some level, doesn’t all human experience boil down to the firing of billions of neurons between our ears? Will we one day—in a future beyond that painted by Ishiguro, but certainly foreshadowed by it—be able to “decode” our humanity to the point that there’s nothing mysterious left about it? “A human heart is bound to be complex,” Klara says. “But it must be limited.”
Whether or not you agree, Klara and the Sun is worth the read. It’s both a marvelous, engaging story about what it means to love and be human, and a prescient warning to approach technological change with caution and nuance. We’re already living in a world where AI keeps us company, influences our behavior, and is wreaking various forms of havoc. Ishiguro’s novel is a snapshot of one of our possible futures, told through the eyes of a robot who keeps you rooting for her to the end.
Image Credit: Marion Wellmann from Pixabay
#438982 Quantum Computing and Reinforcement ...
Deep reinforcement learning is having a superstar moment.
Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.
Except there's a massive roadblock: these algorithms take forever to run. Because they're built on trial and error, a reinforcement learning AI "agent" only learns after being rewarded for its correct decisions. For complex problems, the time it takes an agent to try, fail, and eventually learn a solution can quickly become untenable.
But what if you could try multiple solutions at once?
This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.
The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests showing that adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.
Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.
The Bottleneck
Learning from trial and error comes intuitively to our brains.
Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)
Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number of trials, and the time each one takes, skyrockets.
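To make that trial-and-error loop concrete, here's a minimal tabular Q-learning sketch in Python of the campground example. The environment, reward values, and learning parameters are invented for illustration; this is generic reinforcement learning, not the algorithm used in the study.

```python
import random

# Toy campground: at each of four branches, turning right (action 1) moves you
# one step closer to the campsite; turning left (action 0) sends you back to
# the bathroom (state 0). Reaching the campsite earns a reward of 1.
NUM_BRANCHES = 4
ACTIONS = [0, 1]                     # 0 = left, 1 = right

def step(state, action):
    if action == 1:
        next_state = state + 1
        done = next_state == NUM_BRANCHES
        return next_state, (1.0 if done else 0.0), done
    return 0, 0.0, False             # wrong turn: back to the start

# One value estimate per (branch, turn) pair, learned purely by trial and error.
q = [[0.0, 0.0] for _ in range(NUM_BRANCHES + 1)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(300):
    state = 0
    for _ in range(50):              # cap the episode length
        if random.random() < epsilon:
            action = random.choice(ACTIONS)   # explore a random turn
        else:                                  # exploit, breaking ties randomly
            action = max(ACTIONS, key=lambda a: (q[state][a], random.random()))
        next_state, reward, done = step(state, action)
        # The reward (or its absence) nudges the estimate for the tried action.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
        if done:
            break

print([round(max(row), 2) for row in q])   # learned values rise toward the campsite
```

Each episode is one walk through the campground; the agent only improves after it has stumbled onto the reward, which is exactly the slow part the new study targets.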
“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.
Many attempts have been made to speed up reinforcement learning. Giving the AI agent a short-term "memory." Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a "quantum brain" of sorts can help propel an AI agent's decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.
The Hybrid AI
The new study went straight for that jugular: the learning process itself.
The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.
At the heart of the quantum "trial" process is a quirk called superposition. Stay with me. Classical computers store information in bits, which can only ever be in one of two states, 0 or 1. Quantum mechanics is far weirder: a photon (a particle of light) can be in both states at once, with some probability of "leaning towards" one or the other.
This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.
“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.
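For a rough sense of what "leaning towards" one outcome or the other means, here is a tiny NumPy sketch of a single light-based qubit whose basis states stand for "turn left" and "turn right." The amplitudes and the 70/30 split are arbitrary numbers chosen for illustration, not values from the experiment.

```python
import numpy as np

# Basis states for one light-based qubit: |0> = "turn left", |1> = "turn right".
left  = np.array([1.0, 0.0])
right = np.array([0.0, 1.0])

# An equal superposition: before measurement the photon "leans towards" both
# turns at once; the squared amplitudes give the odds of each outcome.
psi = (left + right) / np.sqrt(2)
print(np.abs(psi) ** 2)              # [0.5 0.5]

# An unbalanced split biases the lean, e.g. 70 percent right, 30 percent left.
theta = np.arcsin(np.sqrt(0.7))
psi_biased = np.cos(theta) * left + np.sin(theta) * right
print(np.abs(psi_biased) ** 2)       # [0.3 0.7]
```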
It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.
The chips aren't all that exotic. A nanophotonic processor works a bit like a pair of eyeglasses: both transform the light that passes through them. In the case of glasses, the result is sharper vision; in a light-based chip, it's computation. Rather than using electrical wiring, the chips use "waveguides" to shuttle photons around and perform calculations based on how they interact.
The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.
In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”
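A back-of-the-envelope way to picture that alternation is the loop below, written in plain Python. The quantum_trial function here is only a classical stand-in for the photonic processor (it samples one option at a time rather than exploring them in superposition), and the reward scheme is made up; the point is the division of labor between the two sides, not the physics.

```python
import random

def quantum_trial(weights):
    """Stand-in for the photonic 'trial' step: sample one candidate action in
    proportion to its current weight. (The real chip explores these options in
    superposition rather than one at a time.)"""
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for action, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return action
    return len(weights) - 1

def classical_feedback(action, correct_action):
    """The traditional computer judges the proposal and hands back a reward."""
    return 1.0 if action == correct_action else 0.0

# Hybrid loop: the quantum side proposes, the classical side scores, repeat.
correct_action = 2                       # made-up target for this toy problem
weights = [1.0, 1.0, 1.0, 1.0]           # start with no preference among 4 options
for _ in range(200):
    action = quantum_trial(weights)
    reward = classical_feedback(action, correct_action)
    if reward > 0:
        weights[action] += 1.0           # reinforce whatever earned the reward

print(weights)                           # the correct option's weight dominates
```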
A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.
The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.
Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could "aid specifically in problems where frequent search is needed, for example, network routing problems," which are prevalent in a smooth-running internet, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.
“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.
Image Credit: Oleg Gamulinskiy from Pixabay
#438925 Nanophotonics Could Be the ‘Dark ...
The race to build the first practical quantum computers looks like a two-horse contest between machines built from superconducting qubits and those that use trapped ions. But new research suggests a third contender—machines based on optical technology—could sneak up on the inside.
The most advanced quantum computers today are the ones built by Google and IBM, which rely on superconducting circuits to generate the qubits that form the basis of quantum calculations. They are now able to string together tens of qubits, and while controversial, Google claims its machines have achieved quantum supremacy—the ability to carry out a computation beyond normal computers.
Recently this approach has been challenged by a wave of companies looking to use trapped ion qubits, which are more stable and less error-prone than superconducting ones. While these devices are less developed, engineering giant Honeywell has already released a machine with 10 qubits, which it says is more powerful than a machine made of a greater number of superconducting qubits.
But despite this progress, both of these approaches have major drawbacks. They require specialized fabrication methods and incredibly precise control mechanisms, and they need to be cooled to close to absolute zero to protect the qubits from outside interference.
That’s why researchers at Canadian quantum computing hardware and software startup Xanadu are backing an alternative quantum computing approach based on optics, which was long discounted as impractical. In a paper published last week in Nature, they unveiled the first fully programmable and scalable optical chip that can run quantum algorithms. Not only does the system run at room temperature, but the company says it could scale to millions of qubits.
The idea isn’t exactly new. As Chris Lee notes in Ars Technica, people have been experimenting with optical approaches to quantum computing for decades, because encoding information in photons’ quantum states and manipulating those states is relatively easy. The biggest problem was that optical circuits were very large and not readily programmable, which meant you had to build a new computer for every new problem you wanted to solve.
That started to change thanks to the growing maturity of photonic integrated circuits. While early experiments with optical computing involved complex table-top arrangements of lasers, lenses, and detectors, today it’s possible to buy silicon chips not dissimilar to electronic ones that feature hundreds of tiny optical components.
In recent years, the reliability and performance of these devices have improved dramatically, and they're now regularly used by the telecommunications industry. Some companies believe they could be the future of artificial intelligence, too.
This allowed the Xanadu researchers to design a silicon chip that implements a complex optical network made up of beam splitters, waveguides, and devices called interferometers that cause light sources to interact with each other.
The chip can generate and manipulate up to eight qubits, but unlike conventional qubits, which can simultaneously be in two states, these qubits can be in any configuration of three states, which means they can carry more information.
Once the light has travelled through the network, it is then fed out to cutting-edge photon-counting detectors that provide the result. This is one of the potential limitations of the system, because currently these detectors need to be cryogenically cooled, although the rest of the chip does not.
But most importantly, the chip is easily re-programmable, which allows it to tackle a variety of problems. The computation can be controlled by adjusting the settings of these interferometers, but the researchers have also developed a software platform that hides the physical complexity from users and allows them to program it using fairly conventional code.
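Xanadu's open-source library Strawberry Fields is one example of that kind of interface. The toy program below, run on a local simulator with arbitrary gate settings, is only meant to show the programming style; it is not the circuit or either of the algorithms reported in the Nature paper.

```python
import strawberryfields as sf
from strawberryfields import ops

# Four optical modes: squeezed-light sources feeding a small interferometer,
# followed by photon-counting detectors.
prog = sf.Program(4)
with prog.context as q:
    ops.S2gate(0.5) | (q[0], q[1])        # two-mode squeezed-light source
    ops.S2gate(0.5) | (q[2], q[3])
    ops.BSgate(0.5, 0.0) | (q[0], q[2])   # programmable beam splitter settings
    ops.BSgate(0.5, 0.0) | (q[1], q[3])
    ops.MeasureFock() | q                 # photon counting on all modes

# Run on a local Fock-basis simulator; a real photonic device would be
# targeted through a remote engine instead.
eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
print(result.samples)                     # detected photon numbers per mode
```

Changing the beam splitter and squeezing parameters is the software equivalent of re-dialing the interferometer settings on the chip, which is what makes the hardware re-programmable for different problems.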
The company announced that its chips were available on the cloud in September of 2020, but the Nature paper is the first peer-reviewed test of their system. The researchers verified that the computations being done were genuinely quantum mechanical in nature, but they also implemented two more practical algorithms: one for simulating molecules and the other for judging how similar two graphs are, which has applications in a variety of pattern recognition problems.
In an accompanying opinion piece, Ulrik Andersen from the Technical University of Denmark says the quality of the qubits needs to be improved considerably and photon losses reduced if the technology is ever to scale to practical problems. But, he says, this breakthrough suggests optical approaches “could turn out to be the dark horse of quantum computing.”
Image Credit: Shahadat Rahman on Unsplash