Tag Archives: center
#434297 How Can Leaders Ensure Humanity in a ...
It’s hard to avoid the prominence of AI in our lives, and there is a plethora of predictions about how it will influence our future. In their new book Solomon’s Code: Humanity in a World of Thinking Machines, co-authors Olaf Groth, Professor of Strategy, Innovation and Economics at Hult International Business School and CEO of advisory network Cambrian.ai, and Mark Nitzberg, Executive Director of UC Berkeley’s Center for Human-Compatible AI, believe that the shift in the balance of power between intelligent machines and humans is already here.
I caught up with the authors about the continued integration of technology and humans, and about their call for a “Digital Magna Carta,” a broadly accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies and harness their power for the benefit of all humanity.
Lisa Kay Solomon: Your new book, Solomon’s Code, explores artificial intelligence and its broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that’s been in development for decades. Why is it so urgent to focus on these topics now?
Olaf Groth and Mark Nitzberg: Popular perception tends to frame AI in terms of game-changing narratives—for instance, Deep Blue beating Garry Kasparov at chess. But it’s the way these AI applications are “getting into our heads” and making decisions for us that really influences our lives. That’s not to say the big, headline-grabbing breakthroughs aren’t important; they are.
But it’s the proliferation of prosaic apps and bots that changes our lives the most, by either empowering or counteracting who we are and what we do. Today, we turn a rapidly growing number of our decisions over to these machines, often without knowing it—and even more often without understanding the second- and third-order effects of both the technologies and our decisions to rely on them.
There is genuine power in what we call a “symbio-intelligent” partnership between human, machine, and natural intelligences. These relationships can optimize not just economic interests, but help improve human well-being, create a more purposeful workplace, and bring more fulfillment to our lives.
However, mitigating the risks while taking advantage of the opportunities will require a serious, multidisciplinary consideration of how AI influences human values, trust, and power relationships. Whether or not we acknowledge their existence in our everyday life, these questions are no longer just thought exercises or fodder for science fiction.
In many ways, these technologies can challenge what it means to be human, and their ramifications already affect us in real and often subtle ways. We need to understand how they do so.
LKS: There is a lot of hype and misconceptions about AI. In your book, you provide a useful distinction between the cognitive capability that we often associate with AI processes, and the more human elements of consciousness and conscience. Why are these distinctions so important to understand?
OG & MN: Could machines take on consciousness someday as they become more powerful and complex? It’s hard to say. But there’s little doubt that, as machines become more capable, humans will start to think of them as something conscious—if for no other reason than our natural inclination to anthropomorphize.
Machines are already learning to recognize our emotional states and our physical health. Once they start reflecting that back to us and adjusting their behavior accordingly, we will be tempted to develop a certain rapport with them, becoming potentially more trusting or more intimate because the machine recognizes us in our various states.
Consciousness is hard to define and may well be an emergent property, rather than something you can easily create or—in turn—reduce to its parts. So, could it happen as we put more and more elements together, from the realms of AI, quantum computing, or brain-computer interfaces? We can’t exclude that possibility.
Either way, we need to make sure we’re charting out a clear path and guardrails for this development through the Three Cs in machines: cognition (where AI is today); consciousness (where AI could go); and conscience (what we need to instill in AI before we get there). The real concern is that we reach machine consciousness—or what humans decide to grant as consciousness—without a conscience. If that happens, we will have created an artificial sociopath.
LKS: We have been seeing major developments in how AI is influencing product development and industry shifts. How is the rise of AI changing power at the global level?
OG & MN: Both in the public and private sectors, the data holder has the power. We’ve already seen the ascendance of about 10 “digital barons” in the US and China who sit on huge troves of data, massive computing power, and the resources and money to attract the world’s top AI talent. With these gaps already open between the haves and the have-nots on the technological and corporate side, we’re becoming increasingly aware that similar inequalities are forming at a societal level as well.
Economic power flows with data, leaving few options for socio-economically underprivileged populations, whose digital footprints are often corrupted, biased, or sparse. By concentrating power and overlooking values, we fracture trust.
We can already see this tension emerging between the two dominant geopolitical models of AI. China and the US have emerged as the most powerful in both technological and economic terms, and both remain eager to drive that influence around the world. The EU countries are more contained on these economic and geopolitical measures, but they’ve leaped ahead on privacy and social concerns.
The problem is, no one has yet combined leadership on all three critical elements of values, trust, and power. The nations and organizations that foster all three of these elements in their AI systems and strategies will lead the future. Some are starting to recognize the need for the combination, but we found just 13 countries that have created significant AI strategies. Countries that wait too long to join them risk subjecting themselves to a new “data colonialism” that could change their economies and societies from the outside.
LKS: Solomon’s Code looks at AI from a variety of perspectives, considering both positive and potentially dangerous effects. You caution against the rising global threat and weaponization of AI and data, suggesting that “biased or dirty data is more threatening than nuclear arms or a pandemic.” For global leaders, entrepreneurs, technologists, policy makers and social change agents reading this, what specific strategies do you recommend to ensure ethical development and application of AI?
OG & MN: We’ve surrendered many of our most critical decisions to the Cult of Data. In most cases, that’s a great thing, as we rely more on scientific evidence to understand our world and our way through it. But we swing too far in other instances, assuming that datasets and algorithms produce a complete story that’s unsullied by human biases or intellectual shortcomings. We might choose to ignore it, but no one is blind to the dangers of nuclear war or pandemic disease. Yet, we willfully blind ourselves to the threat of dirty data, instead believing it to be pristine.
So, what do we do about it? On an individual level, it’s a matter of awareness, knowing who controls your data and how outsourcing of decisions to thinking machines can present opportunities and threats alike.
For business, government, and political leaders, we need to see a much broader expansion of ethics committees with transparent criteria with which to evaluate new products and services. We might consider something akin to clinical trials for pharmaceuticals—a sort of testing scheme that can transparently and independently measure the effects on humans of algorithms, bots, and the like. All of this needs to be multidisciplinary, bringing in expertise from across technology, social systems, ethics, anthropology, psychology, and so on.
Finally, on a global level, we need a new charter of rights—a Digital Magna Carta—that formalizes these protections and guides the development of new AI technologies toward all of humanity’s benefit. We’ve suggested the creation of a multi-stakeholder Cambrian Congress (harkening back to the explosion of life during the Cambrian period) that can not only begin to frame benefits for humanity, but build the global consensus around principles for a basic code-of-conduct, and ideas for evaluation and enforcement mechanisms, so we can get there without any large-scale failures or backlash in society. So, it’s not one or the other—it’s both.
Image Credit: whiteMocca / Shutterstock.com
#434235 The Milestones of Human Progress We ...
When you look back at 2018, do you see a good or a bad year? Chances are, your perception of the year involves fixating on all the global and personal challenges it brought. In fact, every year, we tend to look back at the previous year as “one of the most difficult” and hope that the following year is more exciting and fruitful.
But in the grander context of human history, 2018 was an extraordinarily positive year. In fact, every year has been getting progressively better.
Before we dive into some of the highlights of human progress from 2018, let’s make one thing clear. There is no doubt that there are many overwhelming global challenges facing our species. From climate change to growing wealth inequality, we are far from living in a utopia.
Yet it’s important to recognize that both our news outlets and audiences have been disproportionately fixated on negative news. This emphasis on bad news is detrimental to our sense of empowerment as a species.
So let’s take a break from all the disproportionate negativity and have a look back on how humanity pushed boundaries in 2018.
On Track to Becoming an Interplanetary Species
We often forget how far we’ve come since the very first humans left the African savanna, populated the entire planet, and developed powerful technological capabilities. Our desire to explore the unknown has shaped the course of human evolution and will continue to do so.
This year, we continued to push the boundaries of space exploration. As depicted in the enchanting short film Wanderers, humanity’s destiny is the stars. We are born to be wanderers of the cosmos and the everlasting unknown.
SpaceX had 21 successful launches in 2018 and closed the year with a successful GPS launch. The latest test flight by Virgin Galactic was also an incredible milestone, as SpaceShipTwo reached space for the first time. Richard Branson and his team expect that space tourism will be a reality within the next 18 months.
Our understanding of the cosmos is also moving forward with continuous breakthroughs in astrophysics and astronomy. One notable example is the Mars InSight mission, which uses cutting-edge instruments to study Mars’ interior structure and has even given us the first recordings of sound on Mars.
Understanding and Tackling Disease
Thanks to advancements in science and medicine, we are currently living longer, healthier, and wealthier lives than at any other point in human history. In fact, for most of human history, life expectancy at birth was around 30. Today it is more than 70 worldwide, and in the developed parts of the world, more than 80.
Brilliant researchers around the world are pushing for even better health outcomes. This year, we saw promising treatments emerge against Alzheimer’s disease, rheumatoid arthritis, multiple sclerosis, and even the flu.
The deadliest disease of them all, cancer, is also being tackled. According to the American Association for Cancer Research, 22 revolutionary cancer treatments were approved in the last year, and the death rate in adults is also in decline. Advances in immunotherapy, genetic engineering, stem cells, and nanotechnology are all powerful resources for tackling killer diseases.
Breakthrough Mental Health Therapy
While cleaner energy, access to education, and higher employment rates can improve quality of life, they do not guarantee happiness and inner peace. According to the World Economic Forum, mental health disorders affect one in four people globally, and in many places they are significantly under-reported. More people are beginning to realize that our mental health is just as important as our physical health, and that we ought to take care of our minds just as much as our bodies.
We are seeing the rise of applications that put mental well-being at their center. Breakthrough advances in genetics are allowing us to better understand the genetic makeup of disorders like clinical depression and schizophrenia, paving the way for personalized medical treatment. We are also seeing the rise of increasingly effective therapeutic treatments for anxiety.
This year saw many milestones for a whole new revolutionary area in mental health: psychedelic therapy. Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD, after several phases of successful trials. Similar research has found that psilocybin (the active compound in “magic mushrooms”) combined with therapy is far more effective than traditional forms of treatment for depression and anxiety.
Moral and Social Progress
Innovation is often associated with economic and technological progress. However, we also need leaps of progress in our morality, values, and policies. Throughout the 21st century, we’ve made massive strides in rights for women and children, civil rights, LGBT rights, animal rights, and beyond. However, with rising nationalism and xenophobia in many parts of the developed world, there is significant work to be done on this front.
All hope is not lost, as we saw many noteworthy milestones this year. In January 2018, Iceland’s equal pay law took effect, requiring companies to prove they pay men and women equally, a major step toward closing the gender wage gap. On September 6, the Indian Supreme Court decriminalized homosexuality, marking a historic moment. In December, the European Commission released a draft of ethics guidelines for trustworthy artificial intelligence. These are just a few examples of positive progress in social justice, ethics, and policy.
We are also seeing a global rise in social impact entrepreneurship. Emerging startups are no longer valued simply based on their profits and revenue, but also on the level of positive impact they are having on the world at large. The world’s leading innovators are not asking themselves “How can I become rich?” but rather “How can I solve this global challenge?”
Intelligently Optimistic for 2019
It’s becoming more and more clear that we are living in the most exciting time in human history. What’s more, we mustn’t be afraid to be optimistic about 2019.
An optimistic mindset can be grounded in rationality and evidence. Intelligent optimism is all about being excited about the future in an informed and rational way. The mindset is critical if we are to get everyone excited about the future by highlighting the rapid progress we have made and recognizing the tremendous potential humans have to find solutions to our problems.
In his latest TED talk, Steven Pinker points out, “Progress does not mean that everything becomes better for everyone everywhere all the time. That would be a miracle, and progress is not a miracle but problem-solving. Problems are inevitable and solutions create new problems which have to be solved in their turn.”
Let us not forget that in cosmic time scales, our entire species’ lifetime, including all of human history, is the equivalent of the blink of an eye. The probability of us existing both as an intelligent species and as individuals is so astoundingly low that it’s practically non-existent. We are the products of 14 billion years of cosmic evolution and extraordinarily good fortune. Let’s recognize and leverage this wondrous opportunity, and pave an exciting way forward.
Image Credit: Virgin Galactic / Virgin Galactic 2018.
#433852 How Do We Teach Autonomous Cars To Drive ...
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
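To make that workflow concrete, here is a minimal sketch of the kind of per-pixel classifier such simulated frames could feed, written in PyTorch. The class list, the tiny network, and the tensor shapes are illustrative assumptions rather than the system actually used at the Center for Advanced Vehicular Systems; the key idea is simply that rendered scenes come with labels the simulator already knows.

```python
# Minimal sketch (not the MSU team's code): training a per-pixel classifier on
# simulator frames so it can label camera images as sky, vegetation, path, obstacle.
# Class names, tensor shapes, and the toy network are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. sky, vegetation, open path, obstacle (assumed labels)

# A tiny fully convolutional network standing in for a real segmentation model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),  # per-pixel class scores
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(frames, labels):
    """frames: (B, 3, H, W) simulator images; labels: (B, H, W) per-pixel class IDs."""
    logits = model(frames)  # (B, NUM_CLASSES, H, W)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in batch; in practice these would be rendered scenes and their ground truth.
fake_frames = torch.rand(8, 3, 128, 128)
fake_labels = torch.randint(0, NUM_CLASSES, (8, 128, 128))
print(training_step(fake_frames, fake_labels))
```

In practice, the trained network would then be evaluated on camera footage from the test vehicle, and the mismatches would suggest what extra data to render next.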
Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
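As a rough illustration of how ground-facing lidar sweeps can be turned into a roughness estimate, the sketch below fits a plane to the returns ahead of the vehicle and scores how far the points deviate from it. The plane-fit approach and all the numbers are assumptions for illustration, not the Halo Project's actual processing pipeline.

```python
# Minimal sketch (an assumption, not the Halo Project's actual processing):
# estimate how rough the approaching ground is by fitting a plane to the lidar
# returns and measuring how far the points deviate from that plane.
import numpy as np

def ground_roughness(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z lidar returns from the ground ahead.
    Returns the RMS deviation (in the units of z) from a best-fit plane."""
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    # Least-squares plane z = a*x + b*y + c
    coeffs, *_ = np.linalg.lstsq(xy1, points[:, 2], rcond=None)
    residuals = points[:, 2] - xy1 @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# A smooth, gently sloping patch versus the same patch with bumps added.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, size=(500, 2))
smooth = np.column_stack([xy, 0.1 * xy[:, 0]])
bumpy = smooth.copy()
bumpy[:, 2] += rng.normal(0, 0.05, size=500)
print(ground_roughness(smooth), ground_roughness(bumpy))  # bumpy patch scores higher
```

A smooth slope fits a plane almost perfectly, while ruts, rocks, or vegetation raise the deviation score the vehicle has to plan around.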
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY ND
#433785 DeepMind’s Eerie Reimagination of the ...
If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, and DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called generative adversarial network (GAN) to generate the images. In a GAN, two AI systems compete with each other in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed real images of dogs. The second AI tells the first one how far off its efforts were. The first one uses this feedback to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
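To make that back-and-forth concrete, here is a minimal toy sketch of the adversarial training loop in PyTorch. The tiny fully connected networks, dimensions, and learning rates are illustrative assumptions and bear no resemblance to BigGAN’s actual architecture; the point is only to show the two losses pulling against each other.

```python
# Minimal GAN sketch (illustrative only; BigGAN is far larger and more elaborate).
# A generator turns random noise into samples while a discriminator learns to tell
# its output from real data; each network's loss pushes the other to improve.
import torch
import torch.nn as nn

LATENT, DATA_DIM = 16, 64  # assumed sizes for this toy example

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    batch = real_batch.size(0)
    noise = torch.randn(batch, LATENT)
    fake = generator(noise)

    # Discriminator step: label real data 1, generated data 0.
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into labeling its output as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a batch of real images (e.g. flattened pictures of dogs).
print(gan_step(torch.randn(32, DATA_DIM)))
```

Each call to gan_step first teaches the discriminator to separate real from generated samples, then teaches the generator to fool the freshly updated discriminator.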
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelike nature beyond almost anything seen before. In part, this was achieved by feeding the GAN far more images at each training step than is usually the case. According to IFLScience, the standard is to feed about 64 images into the GAN at each step. In this case, the research team fed about 2,000 images per step into the system, leading to it being nicknamed BigGAN.
Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As artist and Innovator-In-Residence at the Library of Congress Jer Thorp tongue-in-cheek put it on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation showed that generating the images would require roughly the output of 27,000 square feet of solar panels.
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GAN networks were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.
One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, but it could be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GAN networks to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of BigGAN, these potential use cases are still some way off. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.
Image Credit: Ondrej Prosicky/Shutterstock