#432882 Why the Discovery of Room-Temperature ...
Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.
A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.
What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!
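The energy banked in such a persistent current is easy to estimate: a lossless coil stores E = ½LI², the same formula used for superconducting magnetic energy storage (SMES) systems. A minimal sketch, with made-up round numbers for the coil:

```python
# Hypothetical illustration of superconducting magnetic energy storage (SMES):
# with zero resistance, a current circulating in a coil persists indefinitely,
# storing E = 1/2 * L * I^2. The inductance and current values are invented.

def smes_energy_joules(inductance_henries: float, current_amps: float) -> float:
    """Energy stored in a lossless (superconducting) coil."""
    return 0.5 * inductance_henries * current_amps ** 2

# A 10 H coil carrying 1,000 A would hold:
energy = smes_energy_joules(10.0, 1_000.0)
print(f"{energy:.0f} J stored ({energy / 3.6e6:.2f} kWh)")  # 5000000 J (1.39 kWh)
```

Because the energy grows with the square of the current, the huge currents a superconductor can carry are what make this kind of storage interesting.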
With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.
You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.
Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.
For all known materials, it’s well below freezing. Superconductors also have a critical magnetic field; beyond a certain field strength, they cease to work. The two properties are linked: materials with an intrinsically high critical temperature can often also sustain the strongest magnetic fields, but only when cooled well below that temperature.
This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.
But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.
The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident in 1911, while attempting to test Lord Kelvin’s theory that resistance would increase as temperature fell towards absolute zero, theorists scrambled to explain the new property in the hope that understanding it might allow room-temperature superconductors to be synthesized.
In 1957, Bardeen, Cooper, and Schrieffer came up with BCS theory, which explained the key properties of conventional superconductors. It also dashed the dream of technologists: according to BCS theory, a room-temperature superconductor could not exist, since the maximum temperature for superconductivity was around 30 K.
Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.
Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.
Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.
Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.
Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Doped with various other elements, exotic cuprate compounds such as mercury barium calcium copper oxide are amongst the best superconductors known today.
Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.
In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.
The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.
It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.
Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.
Not very practical, you might say, but tantalizing nonetheless.
Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.
Microsoft is betting on topological insulators as the key component in their attempt at a quantum computer. They’ve also been considered potentially important components in miniaturized circuitry.
A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be just one atom or molecule thick. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.
Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.
Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.
Image Credit: ktsdesign / Shutterstock.com
#432880 Google’s Duplex Raises the Question: ...
By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.
Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots, which were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots allowed simple commands like “yes” or “no,” replacing the buttons with a limited ability to recognize a few words. Using them was often far more frustrating than using a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
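The “glorified menu” pattern can be sketched in a few lines: a fixed lookup over a handful of exact keywords, with everything else falling through to the familiar error. All prompts and responses here are invented for illustration:

```python
# A minimal sketch of a "glorified menu" chatbot: it recognizes only a few
# exact keywords, so any natural phrasing falls through to the error message.
# The recognized commands and replies below are hypothetical.

RECOGNIZED = {"yes": "Confirmed.", "no": "Please re-enter your details."}

def menu_bot(utterance: str) -> str:
    reply = RECOGNIZED.get(utterance.strip().lower())
    if reply is None:
        return "Sorry, your response was not recognized."
    return reply

print(menu_bot("Yes"))          # exact keyword: works
print(menu_bot("Yeah, sure"))   # natural phrasing: not recognized
```

A real NLP system has to map the endless variants of “Yeah, sure” onto the same intent—which is exactly what this lookup-table design cannot do.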
[Demo audio: Google Duplex scheduling a hair salon appointment]
[Demo audio: Google Duplex calling a restaurant]
Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.
You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.
Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. SoundHound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).
The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
Yet Google, which surely remembers from Glass the risk of a premature debut for technology, especially the kind that asks you to rethink how you interact with or trust software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.
It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.
Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.
A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.
Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.
“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”
From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.
In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.
Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.
Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.
As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?
Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.
Image Credit: Besjunior / Shutterstock.com
#432568 Tech Optimists See a Golden ...
Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, imagining AI will enable us to become superhuman, enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.
Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is, how do we get there?
Economist John Maynard Keynes notably argued in favor of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was, “In the long run, we are all dead.” After all, if it takes 50 years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.
Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, automation of labor might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls, especially in exacerbating inequality within societies where AI takes on a larger role. A new report from the Institute for Public Policy Research has deep concerns about the future of work.
Uneven Distribution
While the report doesn’t foresee the same gloom and doom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290 billion worth of wages in today’s economy could potentially be automated with current technology. But these are disproportionately jobs held by people who are already suffering from social inequality.
Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often suggested is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.
No Room for Beginners
Optimists say algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to humans. This boosts productivity and provides invaluable tools for workers.
But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours. Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication, traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.
Automating Our Biases
The adoption of algorithms into employment has already had negative impacts on equality. Cathy O’Neil, a Harvard mathematics PhD, raises these concerns in her excellent book Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of the society that produced them, whether racial or based on gender and sexuality.
Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to re-offend than white offenders, and those offenders receive correspondingly longer sentences. It needn’t be that bias has been actively programmed in; perhaps the algorithms just learn from historical data, but this means they will perpetuate historical inequalities.
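The mechanism is simple enough to show with a toy model: a “predictor” fitted purely to historical outcomes simply echoes the base rates in its training labels. The data below is invented, and the skew in it stands in for biased record-keeping, not real behavior:

```python
# Toy sketch (entirely invented data) of how a model trained on historical
# outcomes reproduces historical bias: nothing is "programmed in", but the
# skewed base rates in the labels carry straight through to the predictions.

from collections import defaultdict

# Hypothetical records: (group, was_rearrested). The disparity here reflects
# bias in how the historical data was collected, not true re-offense rates.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def train_base_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [rearrests, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: rearrests / total for g, (rearrests, total) in counts.items()}

model = train_base_rates(history)
print(model)  # -> {'A': 0.6, 'B': 0.3}: the model echoes the historical skew
```

Real risk-scoring systems are far more complex, but the failure mode is the same: a model optimized to reproduce the past will reproduce the past’s inequities.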
Take the candidate-screening software HireVue, used by many major corporations to assess new employees. It analyzes the “verbal and non-verbal cues” of candidates, comparing them to employees who historically did well. According to O’Neil, such tools are “using people’s fear and trust of mathematics to prevent them from asking questions.” With no transparency into how the algorithm generates its results, and no consensus over who’s responsible for them, discrimination can occur automatically, on a massive scale.
Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that due to the accumulation of wealth in older generations, millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favors people who are already wealthy.
Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolize it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider and strengthen existing unfair power structures, is it something worth striving for?
What Makes a Utopia?
We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned—the things it seeks to maximize are not the things we want maximized. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. Growing wealth with increased inequality is not progress.
Some people will take the position that there are always winners and losers in society, and that any attempt to redress inequality will stifle economic growth and leave everyone worse off. Others will see the coming disruption as an argument for a new economic model, based around a universal basic income. Any move towards this will need to ensure it’s affordable and sustainable, and that it doesn’t entrench a two-tier society.
Walter Scheidel’s book The Great Leveler is a huge survey of inequality across human history, from prehistoric cave-dwellers to the 21st century. He argues that only revolutions, wars, and other catastrophes have historically reduced inequality: a perfect example is the Black Death in Europe, which, by reducing the population and therefore the supply of labor, increased wages and reduced inequality. Meanwhile, our solution to the financial crisis of 2007-8 may have only made the problem worse.
But in a world of nuclear weapons, of biowarfare, of cyberwarfare—a world of unprecedented, complex, distributed threats—the consequences of these “safety valves” could be worse than ever before. Inequality increases the risk of global catastrophe, and global catastrophes could scupper any progress towards the techno-utopia that the utopians dream of. And a society with entrenched inequality is no utopia at all.
Image Credit: OliveTree / Shutterstock.com