Revolt on the Horizon? How Young People ...
As digital technologies facilitate the growth of both new and incumbent organizations, the darker sides of the digital economy have begun to come to light. In recent years, many unethical business practices have been exposed, including the capture and use of consumers’ data, anticompetitive activities, and covert social experiments.
But what do young people who grew up with the internet think about this development? Our research with 400 digital natives—19- to 24-year-olds—shows that this generation, dubbed “GenTech,” may be the one to turn the digital revolution on its head. Our findings point to a frustration and disillusionment with the way organizations have accumulated real-time information about consumers without their knowledge and often without their explicit consent.
Many from GenTech now understand that their online lives are of commercial value to an array of organizations that use this insight for the targeting and personalization of products, services, and experiences.
This era of accumulating and commercializing user data through real-time monitoring has been termed “surveillance capitalism,” and it signifies a new economic system.
Artificial Intelligence
A central pillar of the modern digital economy is our interaction with artificial intelligence (AI) and machine learning algorithms. We found that 47 percent of GenTech do not want AI technology to monitor their lifestyle, purchases, and financial situation in order to recommend particular things for them to buy.
In fact, only 29 percent see this as a positive intervention. Instead, they wish to maintain a sense of autonomy in their decision making and have the opportunity to freely explore new products, services, and experiences.
As individuals living in the digital age, we constantly negotiate with technology to let go of or retain control. This pendulum-like effect reflects the ongoing battle between humans and technology.
My Life, My Data?
Our research also reveals that 54 percent of GenTech are very concerned about the access organizations have to their data, while only 19 percent are not worried. Despite the EU General Data Protection Regulation being introduced in May 2018, this remains a major concern, grounded in a belief that too much of their data is in the possession of a small group of global companies, including Google, Amazon, and Facebook. Some 70 percent feel this way.
In recent weeks, both Facebook and Google have vowed to make privacy a top priority in the way they interact with users. Both companies have faced public outcry for their lack of openness and transparency when it comes to how they collect and store user data. It wasn’t long ago that a hidden microphone was found in one of Google’s home alarm products.
Google now plans to offer auto-deletion of users’ location history, browsing, and app activity data, as well as extend its “incognito mode” to Google Maps and search. This will enable users to turn off tracking.
At Facebook, CEO Mark Zuckerberg is keen to reposition the platform as a “privacy focused communications platform” built on principles such as private interactions, encryption, safety, interoperability (communications across Facebook-owned apps and platforms), and secure data storage. This will be a tough turnaround for the company that is fundamentally dependent on turning user data into opportunities for highly individualized advertising.
Privacy and transparency are critically important themes for organizations today, both for those that have “grown up” online as well as the incumbents. While GenTech want organizations to be more transparent and responsible, 64 percent also believe that they cannot do much to keep their data private. Being tracked and monitored online by organizations is seen as part and parcel of being a digital consumer.
Despite these views, there is a growing revolt simmering under the surface. GenTech want to take ownership of their own data. They see this as a valuable commodity, which they should be given the opportunity to trade with organizations. Some 50 percent would willingly share their data with companies if they got something in return, for example a financial incentive.
Rewiring the Power Shift
GenTech are looking to enter into a transactional relationship with organizations. This reflects a significant change in attitudes from perceiving the free access to digital platforms as the “product” in itself (in exchange for user data), to now wishing to use that data to trade for explicit benefits.
This has created an opportunity for companies that seek to empower consumers and give them back control of their data. Several companies now offer consumers the opportunity to sell the data they are comfortable sharing or take part in research that they get paid for. More and more companies are joining this space, including People.io, Killi, and Ocean Protocol.
Sir Tim Berners-Lee, the creator of the World Wide Web, has also been working on a way to shift power from organizations and institutions back to citizens and consumers. His platform, Solid, offers users the opportunity to be in charge of where they store their data and who can access it. It is a form of re-decentralization.
The Solid POD (Personal Online Data storage) is a secure place on a hosted server or the individual’s own server. Users can grant apps access to their POD; the data stays in one place under the person’s control rather than being held by each app developer or on an organization’s server. We see this as potentially being a way to let people take back control from technology and other companies.
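To make that model concrete, here is a minimal sketch of such a personal data store in Python. It is purely illustrative: the class and method names are hypothetical and do not reflect Solid’s actual API, only the idea that data lives in one place the user controls and apps must be granted access explicitly.

```python
# Hypothetical sketch of a user-controlled personal data store.
# Names are illustrative only, not Solid's real API.

class PersonalDataStore:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}      # resource path -> stored value
        self._grants = {}    # app name -> set of paths it may read

    def write(self, path, value):
        """The owner stores data in their own POD-like store."""
        self._data[path] = value

    def grant(self, app, path):
        """The owner explicitly allows an app to read one resource."""
        self._grants.setdefault(app, set()).add(path)

    def revoke(self, app, path=None):
        """Withdraw access to one resource, or to everything."""
        if path is None:
            self._grants.pop(app, None)
        else:
            self._grants.get(app, set()).discard(path)

    def read(self, app, path):
        """Apps only see data the owner has chosen to share with them."""
        if path not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no access to {path}")
        return self._data[path]

# Usage: the user keeps the data and decides who reads it.
pod = PersonalDataStore(owner="alice")
pod.write("/profile/music-taste", ["jazz", "electronica"])
pod.grant("music-app", "/profile/music-taste")
print(pod.read("music-app", "/profile/music-taste"))
pod.revoke("music-app")  # control stays with the user
```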
GenTech have woken up to a reality where a life lived “plugged in” has significant consequences for their individual privacy and are starting to push back, questioning those organizations that have shown limited concern and continue to exercise exploitative practices.
It’s no wonder that we see these signs of revolt. GenTech is the generation with the most to lose. They face a life ahead intertwined with digital technology as part of their personal and private lives. With continued pressure on organizations to become more transparent, the time is now for young people to make their move.
Dr Mike Cooray, Professor of Practice, Hult International Business School and Dr Rikke Duus, Research Associate and Senior Teaching Fellow, UCL
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ser Borakovskyy / Shutterstock.com
Less Like Us: An Alternate Theory of ...
The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.
Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”
But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.
Both of these ideas are far easier to conceive of than they are to achieve. Emulating even the 302 neurons of the nematode worm’s brain remains an extremely difficult engineering challenge, let alone the 86 billion neurons in a human brain.
Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.
This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, if proper safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or to convert most of the world into computing infrastructure in pursuit of its goal.
Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.
With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” acting to maximize a specific goal is too narrow and comes close to anthropomorphizing the technology. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative, more realistic route to artificial general intelligence.
What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.
AI-driven improvements in technology and software, argues Drexler, will lead to a proliferation of different algorithms that can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring—take the newer versions of AlphaGo, which can learn to improve themselves by playing against previous versions.
Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.
One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.
For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”; the system is more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms developed by different groups.
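As a rough illustration of that “search engine over services” picture, the Python sketch below uses hypothetical service names and a deliberately crude string-matching dispatcher. It routes a requested task to the closest registered narrow service, and shows how cutting off a capability amounts to nothing more than removing it from the registry.

```python
# Illustrative sketch of the CAIS "task router" idea described above.
# Service names and matching logic are hypothetical, not any real system's API.
import difflib

# Registry of narrow "services": each task description maps to a
# specialized function that performs only that task.
SERVICES = {
    "translate text": lambda text: f"[translation of: {text}]",
    "recommend music": lambda profile: ["song A", "song B"],
    "classify image": lambda image: "cat",
}

def dispatch(task, payload):
    """Match the requested task to the closest registered service and run it."""
    matches = difflib.get_close_matches(task, SERVICES.keys(), n=1, cutoff=0.4)
    if not matches:
        raise ValueError(f"No service available for task: {task!r}")
    return SERVICES[matches[0]](payload)

# "Cutting off" a capability is just removing it from the registry;
# there is no single agent whose goals need to be contained.
SERVICES.pop("classify image", None)

print(dispatch("translate this text", "Hej verden"))
```

The point of the sketch is the architecture, not the matching: the “general” layer does no reasoning of its own beyond choosing which narrow service to call.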
This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.
Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in a vain pursuit of immortality. This model cannot hold forever. Still, its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities in a huge range of different tasks, such as computer programming, natural language understanding, and so on. In other words, CAIS arrives first.
The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.
But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.
In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.
Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of “misalignment” in these service algorithms are already multiplying around us.
The CAIS model bridges the gap between real-world AI and machine learning developments, with their practical safety considerations, and the speculative world of superintelligent agents and the challenge of controlling their behavior. We should keep our minds open as to what form AI and machine learning will take and how they will influence our societies—and we must take care to ensure that the systems we create don’t end up forcing us all to live in a world of unintended consequences.
Image Credit: MF Production/Shutterstock.com