#432467 Dungeons and Dragons, Not Chess and Go: ...
Everyone had died—not that you’d know it, from how they were laughing about their poor choices and bad rolls of the dice. As a social anthropologist, I study how people understand artificial intelligence (AI) and our efforts towards attaining it; I’m also a life-long fan of Dungeons and Dragons (D&D), the inventive fantasy roleplaying game. During a recent quest, when I was playing an elf ranger, the trainee paladin (or holy knight) acted according to his noble character, and announced our presence at the mouth of a dragon’s lair. The results were disastrous. But while success in D&D means “beating the bad guy,” the game is also a creative sandbox, where failure can count as collective triumph so long as you tell a great tale.
What does this have to do with AI? In computer science, games are frequently used as a benchmark for an algorithm’s “intelligence.” The late Robert Wilensky, a professor at the University of California, Berkeley and a leading figure in AI, offered one reason why this might be. Computer scientists “looked around at who the smartest people were, and they were themselves, of course,” he told the authors of Compulsive Technology: Computers as Culture (1985). “They were all essentially mathematicians by training, and mathematicians do two things—they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart.” No surprise that demonstrations of AI’s “smarts” have focused on the artificial player’s prowess.
Yet the games that get chosen—like Go, the main battlefield for Google DeepMind’s algorithms in recent years—tend to be tightly bounded, with set objectives and clear paths to victory or defeat. These experiences have none of the open-ended collaboration of D&D. Which got me thinking: do we need a new test for intelligence, where the goal is not simply about success, but storytelling? What would it mean for an AI to “pass” as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?
Of course, this is just a playful thought experiment, but it does highlight the flaws in certain models of intelligence. First, it reveals how intelligence has to work across a variety of environments. D&D participants can inhabit many characters in many games, and the individual player can “switch” between roles (the fighter, the thief, the healer). Meanwhile, AI researchers know that it’s notoriously difficult to get a well-trained algorithm to apply its insights to even a slightly different domain—something that we humans manage surprisingly well.
Second, D&D reminds us that intelligence is embodied. In computer games, the bodily aspect of the experience might range from pressing buttons on a controller in order to move an icon or avatar (a ping-pong paddle; a spaceship; an anthropomorphic, eternally hungry, yellow sphere), to more recent and immersive experiences involving virtual-reality goggles and haptic gloves. Even without these add-ons, games can still produce biological responses associated with stress and fear (if you’ve ever played Alien: Isolation you’ll understand). In the original D&D, the players encounter the game while sitting around a table together, feeling the story and its impact. Recent research in cognitive science suggests that bodily interactions are crucial to how we grasp more abstract mental concepts. But we give minimal attention to the embodiment of artificial agents, and how that might affect the way they learn and process information.
Finally, intelligence is social. AI algorithms typically learn through multiple rounds of competition, in which successful strategies get reinforced with rewards. True, it appears that humans also evolved to learn through repetition, reward and reinforcement. But there’s an important collaborative dimension to human intelligence. In the 1930s, the psychologist Lev Vygotsky identified the interaction of an expert and a novice as an example of what came to be called “scaffolded” learning, where the teacher demonstrates and then supports the learner in acquiring a new skill. In unbounded games, this cooperation is channelled through narrative. Games of It among small children can evolve from win/lose into attacks by terrible monsters, before shifting again to more complex narratives that explain why the monsters are attacking, who the hero is, and what they can do and why—narratives that aren’t always logical or even internally consistent. An AI that could engage in social storytelling is doubtless on a surer, more multifunctional footing than one that plays chess; and there’s no guarantee that chess is even a step on the road to attaining intelligence of this sort.
In some ways, this failure to treat roleplaying as a technical hurdle for intelligence is strange. D&D was a key cultural touchstone for technologists in the 1980s and the inspiration for many early text-based computer games, as Katie Hafner and Matthew Lyon point out in Where Wizards Stay Up Late: The Origins of the Internet (1996). Even today, AI researchers who play games in their free time often mention D&D specifically. So instead of beating adversaries in games, we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Benny Mazur / Flickr / CC BY 2.0
#432352 Watch This Lifelike Robot Fish Swim ...
Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.
Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.
Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.
To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.
SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.
It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”
The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.
It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?
Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.
It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
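To make that pumping rhythm concrete, here is a deliberately simplified control-loop sketch in Python. It is not SoFi’s actual firmware, and the pump.fill() interface is hypothetical; it only illustrates how alternating which chamber gets filled produces the side-to-side tail beat, and how a shorter period would quicken the stroke and raise the swimming speed.

```python
import time

def swim(pump, period_s=0.5, duration_s=10.0):
    """Alternate tail chambers to produce a fish-like tail beat.

    `pump` is a hypothetical actuator with a fill(side) method.
    A shorter period_s means a faster tail beat, hence faster swimming.
    """
    side = "left"
    elapsed = 0.0
    while elapsed < duration_s:
        pump.fill(side)                    # fill one chamber; the tail bends that way
        time.sleep(period_s)               # hold before switching sides
        side = "right" if side == "left" else "left"
        elapsed += period_s
```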
Besides looking neat, it’s important that SoFi look lifelike so it can blend in with marine life rather than scaring real fish away, which lets it get close enough to observe them.
“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.
Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.
They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.
“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”
The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.
Image Credit: MIT CSAIL
#432262 How We Can ‘Robot-Proof’ Education ...
Like millions of other individuals in the workforce, you’re probably wondering if you will one day be replaced by a machine. If you’re a student, you’re probably wondering if your chosen profession will even exist by the time you’ve graduated. From driving to legal research, there isn’t much that technology hasn’t already automated (or begun to automate). Many of us will need to adapt to this disruption in the workforce.
But it’s not enough for students and workers to adapt, become lifelong learners, and re-skill themselves. We also need to see innovation and initiative at an institutional and governmental level. According to research by The Economist, almost half of all jobs could be automated by computers within the next two decades, and no government in the world is prepared for it.
While many see the current trend in automation as a terrifying threat, others see it as an opportunity. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University president Joseph Aoun proposes educating students in a way that will allow them to do the things that machines can’t. He calls for a new paradigm that teaches young minds “to invent, to create, and to discover”—filling the relevant needs of our world that robots simply can’t fill. Aoun proposes a much-needed novel framework that will allow us to “robot-proof” education.
Literacies and Core Cognitive Capacities of the Future
Aoun lays out a framework for a new discipline, humanics, which describes the capacities and literacies that emerging education systems should cultivate. At its core, the framework emphasizes our uniquely human abilities and strengths.
The three key literacies include data literacy (being able to manage and analyze big data), technological literacy (being able to understand exponential technologies and conduct computational thinking), and human literacy (being able to communicate and evaluate social, ethical, and existential impact).
Beyond the literacies, at the heart of Aoun’s framework are four cognitive capacities that are crucial to develop in our students if they are to be resistant to automation: critical thinking, systems thinking, entrepreneurship, and cultural agility.
“These capacities are mindsets rather than bodies of knowledge—mental architecture rather than mental furniture,” he writes. “Going forward, people will still need to know specific bodies of knowledge to be effective in the workplace, but that alone will not be enough when intelligent machines are doing much of the heavy lifting of information. To succeed, tomorrow’s employees will have to demonstrate a higher order of thought.”
Like many other experts in education, Joseph Aoun emphasizes the importance of critical thinking. This is important not just when it comes to taking a skeptical approach to information, but also being able to logically break down a claim or problem into multiple layers of analysis. We spend so much time teaching students how to answer questions that we often neglect to teach them how to ask questions. Asking questions—and asking good ones—is a foundation of critical thinking. Before you can solve a problem, you must be able to critically analyze and question what is causing it. This is why critical thinking and problem solving are coupled together.
The second capacity, systems thinking, involves being able to think holistically about a problem. The most creative problem-solvers and thinkers are able to take a multidisciplinary perspective and connect the dots between many different fields. According to Aoun, it “involves seeing across areas that machines might be able to comprehend individually but that they cannot analyze in an integrated way, as a whole.” It represents the absolute opposite of how most traditional curricula are structured, with their emphasis on isolated subjects and content knowledge.
Among the most difficult-to-automate tasks or professions is entrepreneurship.
In fact, some have gone so far as to claim that in the future, everyone will be an entrepreneur. Yet traditionally, initiative has been something students show in spite of or in addition to their schoolwork. For most students, developing a sense of initiative and entrepreneurial skills has often been part of their extracurricular activities. It needs to be at the core of our curricula, not a supplement to it. At its core, teaching entrepreneurship is about teaching our youth to solve complex problems with resilience, to become global leaders, and to solve grand challenges facing our species.
Finally, in an increasingly globalized world, there is a need for more workers with cultural agility, the ability to work and collaborate across different cultural contexts and norms.
One of the major trends today is the rise of the contingent workforce. We are seeing an increasing percentage of full-time employees working on the cloud. Multinational corporations have teams of employees collaborating at different offices across the planet. Collaboration across online networks requires a skillset of its own. As education expert Tony Wagner points out, within these digital contexts, leadership is no longer about commanding with top-down authority, but rather about leading by influence.
An Emphasis on Creativity
The framework also puts an emphasis on experiential or project-based learning, wherein the heart of the student experience is not lectures or exams but solving real-life problems and learning by doing, creating, and executing. Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.
In fact, technological trends are giving rise to what many thought leaders refer to as the imagination economy. This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Consequently, we need to develop our students’ creative abilities to ensure their success against machines.
In its simplest form, creativity represents the ability to imagine radical ideas and then go about executing them in reality.
In many ways, we are already living in our creative imaginations. Consider this: every invention or human construct—whether it be the spaceship, an architectural wonder, or a device like an iPhone—once existed as a mere idea, imagined in someone’s mind. The world we have designed and built around us is an extension of our imaginations and is only possible because of our creativity. Creativity has played a powerful role in human progress—now imagine what the outcomes would be if we tapped into every young mind’s creative potential.
The Need for a Radical Overhaul
What is clear from the recommendations of Aoun and many other leading thinkers in this space is that an effective 21st-century education system is radically different from the traditional systems we currently have in place. There is a dramatic contrast between these future-oriented frameworks and the way we’ve structured our traditional, industrial-era and cookie-cutter-style education systems.
It’s time for a change, and incremental changes or subtle improvements are no longer enough. What we need to see are more moonshots and disruption in the education sector. In a world of exponential growth and accelerating change, it is never too soon for a much-needed dramatic overhaul.
Image Credit: Besjunior / Shutterstock.com
#432249 New Malicious AI Report Outlines Biggest ...
Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the footage of Barack Obama that can be generated from his speeches, or listen to Lyrebird’s voice impersonations. You could easily, today or in the very near future, create a forgery that might be indistinguishable from the real thing. What would that do to politics?
Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to respond to genuine threats, is already undermined by a lack of agreement on the facts. Once you can’t believe the evidence of your senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.
How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.
The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?
The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. Andrew Ng, AI researcher, says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” There are clearly risks that arise even from the simple algorithms we have today.
The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.
Some of the concerns the report explores are enhancements to familiar threats.
Automated hacking can get better and smarter, with algorithms that adapt to changing security protocols. “Phishing emails,” where people are scammed by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members, but can be labor intensive. If AI algorithms enable every phishing scam to become sharper in this way, more people are going to get taken in.
Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.
These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary application is adversarial examples. Machine learning algorithms are often used for image recognition. But it’s possible, if you know a little about how the algorithm is structured, to construct the perfect level of noise to add to an image, and fool the machine. Two images can be almost completely indistinguishable to the human eye. But by adding some cleverly-calculated noise, the hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out examples on stickers.
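To see how little it takes, here is a minimal sketch of the fast gradient sign method, one widely known recipe for crafting this kind of noise; it is not necessarily the exact technique behind the OpenAI examples. It assumes a differentiable PyTorch image classifier and an image tensor scaled to the range 0–1, and the function name is just for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged just enough to mislead `model`.

    image: tensor of shape (batch, channels, height, width), values in [0, 1]
    epsilon controls how large (and how perceptible) the change is.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small enough epsilon, the perturbed image looks identical to the original to a human, yet the classifier’s answer can flip, which is exactly the panda-to-gibbon effect described above.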
Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.
As the OpenAI foundation freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”
There are ways to defend against these attacks.
Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
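As a rough sketch of what adversarial training looks like in code, the step below perturbs each training batch with the same gradient-sign trick before the weight update, so the model gradually learns to resist it. The names are placeholders rather than any specific library’s defense API, and this is only one of many proposed, and imperfect, defenses.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Craft adversarial versions of this batch (gradient-sign perturbation).
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Train on the perturbed batch instead of the clean one.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The extra forward and backward pass on every batch is where the computational cost comes from, and an attacker who knows the defense can hunt for perturbations the training never covered.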
Just look at the Meltdown and Spectre vulnerabilities, which weren’t widely known about for more than 20 years but could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.
This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.
As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.
Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.
Image Credit: lolloj / Shutterstock.com
#432190 In the Future, There Will Be No Limit to ...
New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.
While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.
One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency had announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.
The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.
The machine detected Kepler 90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible change in brightness captured when a planet passes a star. It also found a sixth Earth-sized planet in the Kepler-80 system.
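As a rough illustration of that kind of classification setup (and emphatically not the architecture the NASA and Google researchers actually used), the sketch below defines a tiny one-dimensional convolutional network in PyTorch that scores a light-curve segment as planet versus false positive. The layer sizes and input length are invented for the example.

```python
import torch
import torch.nn as nn

class LightCurveClassifier(nn.Module):
    """Toy binary classifier: does this light-curve segment contain a transit?"""
    def __init__(self, curve_length=2001):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (curve_length // 16), 1)

    def forward(self, flux):                     # flux: (batch, 1, curve_length)
        x = self.features(flux)
        return torch.sigmoid(self.classifier(x.flatten(1)))

model = LightCurveClassifier()
score = model(torch.randn(1, 1, 2001))           # probability-like "planet" score
```

In practice a network like this would be trained with a binary cross-entropy loss on labeled signals, such as the 15,000 vetted examples mentioned above.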
AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.
Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.
AI systems can remember vast numbers of facts without error and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.
AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.
That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.
Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.
The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.
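A hedged sketch of that multi-variable approach is below: fit one regression model on all the geologic variables at once, then use it to estimate heat flux where there are no direct measurements. The arrays here are random placeholders, and the gradient-boosting model is an assumption for illustration, not necessarily the method the glaciology team used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_sites, n_variables = 1000, 22                  # 22 variables: topography, crustal thickness, ...
X = rng.normal(size=(n_sites, n_variables))      # placeholder geologic measurements
y = rng.normal(loc=50, scale=10, size=n_sites)   # placeholder heat flux values (mW/m^2)

model = GradientBoostingRegressor().fit(X, y)
predicted_flux = model.predict(X[:5])            # estimate flux for a handful of sites
```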
“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at Kansas University, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.
“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
The research was published last month in Geophysical Research Letters.
Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.
Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.
“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.
Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.
King argues that machines are better suited to unravel the complexities of biological systems, since even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.
“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.
Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.
For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.
Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory made up of two enormous laser interferometers located in Washington State and Louisiana, nearly 2,000 miles apart. The research was published in Physics Letters B.
In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.
ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.
King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.
“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”
Image Credit: Romaset / Shutterstock.com