How Can Leaders Ensure Humanity in a ...
It’s hard to avoid the prominence of AI in our lives, and predictions abound about how it will shape our future. In their new book Solomon’s Code: Humanity in a World of Thinking Machines, co-authors Olaf Groth, Professor of Strategy, Innovation and Economics at HULT International Business School and CEO of advisory network Cambrian.ai, and Mark Nitzberg, Executive Director of UC Berkeley’s Center for Human-Compatible AI, argue that a shift in the balance of power between intelligent machines and humans is already underway.
I caught up with the authors to discuss the continuing integration of technology and humans, and their call for a “Digital Magna Carta,” a broadly accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies to harness their power for the benefit of all humanity.
Lisa Kay Solomon: Your new book, Solomon’s Code, explores artificial intelligence and its broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that’s been in development for decades. Why is it so urgent to focus on these topics now?
Olaf Groth and Mark Nitzberg: Popular perception tends to frame AI in terms of game-changing narratives—for instance, Deep Blue beating Garry Kasparov at chess. But it’s the way these AI applications are “getting into our heads” and making decisions for us that really influences our lives. That’s not to say the big, headline-grabbing breakthroughs aren’t important; they are.
But it’s the proliferation of prosaic apps and bots that changes our lives the most, by either empowering or counteracting who we are and what we do. Today, we turn a rapidly growing number of our decisions over to these machines, often without knowing it—and even more often without understanding the second- and third-order effects of both the technologies and our decisions to rely on them.
There is genuine power in what we call a “symbio-intelligent” partnership between human, machine, and natural intelligences. These relationships can optimize not just economic interests, but help improve human well-being, create a more purposeful workplace, and bring more fulfillment to our lives.
However, mitigating the risks while taking advantage of the opportunities will require a serious, multidisciplinary consideration of how AI influences human values, trust, and power relationships. Whether or not we acknowledge their existence in our everyday life, these questions are no longer just thought exercises or fodder for science fiction.
In many ways, these technologies can challenge what it means to be human, and their ramifications already affect us in real and often subtle ways. We need to understand how they influence us before those effects compound.
LKS: There is a lot of hype and misconceptions about AI. In your book, you provide a useful distinction between the cognitive capability that we often associate with AI processes, and the more human elements of consciousness and conscience. Why are these distinctions so important to understand?
OG & MN: Could machines develop consciousness some day as they become more powerful and complex? It’s hard to say. But there’s little doubt that, as machines become more capable, humans will start to think of them as something conscious—if for no other reason than our natural inclination to anthropomorphize.
Machines are already learning to recognize our emotional states and our physical health. Once they start reflecting that back to us and adjusting their behavior accordingly, we will be tempted to develop a certain rapport with them, potentially more trusting or more intimate because the machine recognizes us in our various states.
Consciousness is hard to define and may well be an emergent property, rather than something you can easily create or—in turn—reduce to its parts. So, could it happen as we put more and more elements together, from the realms of AI, quantum computing, or brain-computer interfaces? We can’t exclude that possibility.
Either way, we need to make sure we’re charting out a clear path and guardrails for this development through the Three Cs in machines: cognition (where AI is today); consciousness (where AI could go); and conscience (what we need to instill in AI before we get there). The real concern is that we reach machine consciousness—or what humans decide to grant as consciousness—without a conscience. If that happens, we will have created an artificial sociopath.
LKS: We have been seeing major developments in how AI is influencing product development and industry shifts. How is the rise of AI changing power at the global level?
OG & MN: Both in the public and private sectors, the data holder has the power. We’ve already seen the ascendance of about 10 “digital barons” in the US and China who sit on huge troves of data, massive computing power, and the resources and money to attract the world’s top AI talent. With these gaps already open between the haves and the have-nots on the technological and corporate side, we’re becoming increasingly aware that similar inequalities are forming at a societal level as well.
Economic power flows with data, leaving few options for socio-economically underprivileged populations, whose digital footprints are often corrupt, biased, or sparse. By concentrating power and overlooking values, we fracture trust.
We can already see this tension emerging between the two dominant geopolitical models of AI. China and the US have emerged as the most powerful in both technological and economic terms, and both remain eager to drive that influence around the world. The EU countries are more contained on these economic and geopolitical measures, but they’ve leaped ahead on privacy and social concerns.
The problem is, no one has yet combined leadership on all three critical elements of values, trust, and power. The nations and organizations that foster all three of these elements in their AI systems and strategies will lead the future. Some are starting to recognize the need for the combination, but we found just 13 countries that have created significant AI strategies. Countries that wait too long to join them risk subjecting themselves to a new “data colonialism” that could change their economies and societies from the outside.
LKS: Solomon’s Code looks at AI from a variety of perspectives, considering both positive and potentially dangerous effects. You caution against the rising global threat and weaponization of AI and data, suggesting that “biased or dirty data is more threatening than nuclear arms or a pandemic.” For global leaders, entrepreneurs, technologists, policy makers and social change agents reading this, what specific strategies do you recommend to ensure ethical development and application of AI?
OG & MN: We’ve surrendered many of our most critical decisions to the Cult of Data. In most cases, that’s a great thing, as we rely more on scientific evidence to understand our world and our way through it. But we swing too far in other instances, assuming that datasets and algorithms produce a complete story that’s unsullied by human biases or intellectual shortcomings. We might choose to ignore it, but no one is blind to the dangers of nuclear war or pandemic disease. Yet, we willfully blind ourselves to the threat of dirty data, instead believing it to be pristine.
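To make the “dirty data” point concrete, here is a minimal, hypothetical Python sketch (all names and numbers are invented for illustration) of one common way a dataset can mislead: selection bias. Outcomes were only ever recorded for one slice of a population, so any conclusion drawn from the recorded data looks far rosier than reality.

```python
import random

random.seed(42)

# Hypothetical population: each applicant has a credit score, and the true
# repayment outcome correlates with that score.
population = []
for _ in range(10_000):
    score = random.uniform(300, 850)
    repaid = random.random() < (score - 300) / 550  # higher score -> more likely to repay
    population.append((score, repaid))

# "Dirty" historical data: outcomes were only recorded for applicants
# approved under an old score cutoff -- classic selection bias.
recorded = [(s, r) for (s, r) in population if s >= 650]

true_rate = sum(r for _, r in population) / len(population)
biased_rate = sum(r for _, r in recorded) / len(recorded)

print(f"true repayment rate in population: {true_rate:.2f}")
print(f"repayment rate in recorded data:   {biased_rate:.2f}")
```

A model trained only on the recorded slice would badly overestimate repayment rates for the whole population, which is the kind of silent distortion the authors argue we too readily treat as pristine evidence.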
So, what do we do about it? On an individual level, it’s a matter of awareness, knowing who controls your data and how outsourcing of decisions to thinking machines can present opportunities and threats alike.
For business, government, and political leaders, we need to see a much broader expansion of ethics committees with transparent criteria with which to evaluate new products and services. We might consider something akin to clinical trials for pharmaceuticals—a sort of testing scheme that can transparently and independently measure the effects on humans of algorithms, bots, and the like. All of this needs to be multidisciplinary, bringing in expertise from across technology, social systems, ethics, anthropology, psychology, and so on.
Finally, on a global level, we need a new charter of rights—a Digital Magna Carta—that formalizes these protections and guides the development of new AI technologies toward all of humanity’s benefit. We’ve suggested the creation of a multi-stakeholder Cambrian Congress (harkening back to the explosion of life during the Cambrian period) that can not only begin to frame benefits for humanity, but build the global consensus around principles for a basic code-of-conduct, and ideas for evaluation and enforcement mechanisms, so we can get there without any large-scale failures or backlash in society. So, it’s not one or the other—it’s both.
Image Credit: whiteMocca / Shutterstock.com