#435186 What’s Behind the International Rush ...
There’s no better way of ensuring you win a race than by setting the rules yourself. That may be behind the recent rush by countries, international organizations, and companies to put forward their visions for how the AI race should be governed.
China became the latest to release a set of “ethical standards” for the development of AI last month, which might raise eyebrows given the country’s well-documented AI-powered state surveillance program and suspect approaches to privacy and human rights.
But given the recent flurry of AI guidelines, it may well have been motivated by a desire not to be left out of the conversation. The previous week the OECD, backed by the US, released its own “guiding principles” for the industry, and in April the EU released “ethical guidelines.”
The language of most of these documents is fairly abstract and noticeably similar, with broad appeals to ideals like accountability, responsibility, and transparency. The OECD’s guidelines are the lightest on detail, while the EU’s offer some more concrete suggestions, such as ensuring humans always know if they’re interacting with AI and making algorithms auditable. China’s standards have an interesting focus on promoting openness and collaboration, as well as expressly acknowledging AI’s potential to disrupt employment.
Overall, though, one might be surprised that there aren’t more disagreements between three blocs with very divergent attitudes to technology, regulation, and economics. Most likely these are just the opening salvos in what will prove to be a long-running debate, and the devil will ultimately be in the details.
The EU seems to have stolen a march on the other two blocs, being first to publish its guidelines and having already implemented the world’s most comprehensive regulation of data—the bedrock of modern AI—with last year’s GDPR. But its lack of industry heavyweights is going to make it hard to hold onto that lead.
One organization that seems to be trying to take on the role of impartial adjudicator is the World Economic Forum, which recently hosted an event designed to find common ground between various stakeholders from across the world. What will come of the effort remains to be seen, but China’s release of guidelines broadly similar to those of its Western counterparts is a promising sign.
Perhaps most telling, though, is the ubiquitous presence of industry leaders in both advisory and leadership positions. China’s guidelines are backed by “an AI industrial league” including Baidu, Alibaba, and Tencent, and the co-chairs of the WEF’s AI Council are Microsoft President Brad Smith and prominent Chinese AI investor Kai-Fu Lee.
Shortly after the EU released its proposals one of the authors, philosopher Thomas Metzinger, said the process had been compromised by the influence of the tech industry, leading to the removal of “red lines” opposing the development of autonomous lethal weapons or social credit score systems like China’s.
For a long time big tech argued for self-regulation, but whether they’ve had an epiphany or have simply sensed the shifting winds, they are now coming out in favor of government intervention.
Both Amazon and Facebook have called for regulation of facial recognition, and in February Google went even further, calling for the government to set down rules governing AI. Facebook chief Mark Zuckerberg has also since called for even broader regulation of the tech industry.
But considering the current concern around the anti-competitive clout of the largest technology companies, it’s worth remembering that tough rules are always easier to deal with for companies with well-developed compliance infrastructure and big legal teams. And these companies are also making sure the regulation is on their terms. Wired details Microsoft’s protracted effort to shape Washington state laws governing facial recognition technology and Google’s enormous lobbying effort.
“Industry has mobilized to shape the science, morality and laws of artificial intelligence,” Harvard law professor Yochai Benkler writes in Nature. He highlights how Amazon’s funding of a National Science Foundation (NSF) program for projects on fairness in artificial intelligence undermines the ability of academia to act as an impartial counterweight to industry.
Excluding industry from the process of setting the rules to govern AI in a fair and equitable way is clearly not practical, writes Benkler, because they are the ones with the expertise. But there also needs to be more concerted public investment in research and policymaking, and efforts to limit the influence of big companies when setting the rules that will govern AI.
Image Credit: create jobs 51 / Shutterstock.com
#435174 Revolt on the Horizon? How Young People ...
As digital technologies facilitate the growth of both new and incumbent organizations, we have started to see the darker sides of the digital economy unravel. In recent years, many unethical business practices have been exposed, including the capture and use of consumers’ data, anticompetitive activities, and covert social experiments.
But what do young people who grew up with the internet think about this development? Our research with 400 digital natives—19- to 24-year-olds—shows that this generation, dubbed “GenTech,” may be the one to turn the digital revolution on its head. Our findings point to a frustration and disillusionment with the way organizations have accumulated real-time information about consumers without their knowledge and often without their explicit consent.
Many from GenTech now understand that their online lives are of commercial value to an array of organizations that use this insight for the targeting and personalization of products, services, and experiences.
This era of accumulating and commercializing user data through real-time monitoring has been termed “surveillance capitalism,” and it signifies a new economic system.
Artificial Intelligence
A central pillar of the modern digital economy is our interaction with artificial intelligence (AI) and machine learning algorithms. We found that 47 percent of GenTech do not want AI technology to monitor their lifestyle, purchases, and financial situation in order to recommend them particular things to buy.
In fact, only 29 percent see this as a positive intervention. Instead, they wish to maintain a sense of autonomy in their decision making and have the opportunity to freely explore new products, services, and experiences.
As individuals living in the digital age, we constantly negotiate with technology to let go of or retain control. This pendulum-like effect reflects the ongoing battle between humans and technology.
My Life, My Data?
Our research also reveals that 54 percent of GenTech are very concerned about the access organizations have to their data, while only 19 percent were not worried. Despite the EU General Data Protection Regulation being introduced in May 2018, this is still a major concern, grounded in a belief that too much of their data is in the possession of a small group of global companies, including Google, Amazon, and Facebook. Some 70 percent felt this way.
In recent weeks, both Facebook and Google have vowed to make privacy a top priority in the way they interact with users. Both companies have faced public outcry for their lack of openness and transparency when it comes to how they collect and store user data. It wasn’t long ago that a hidden microphone was found in one of Google’s home alarm products.
Google now plans to offer auto-deletion of users’ location history data, browsing, and app activity as well as extend its “incognito mode” to Google Maps and search. This will enable users to turn off tracking.
At Facebook, CEO Mark Zuckerberg is keen to reposition the platform as a “privacy focused communications platform” built on principles such as private interactions, encryption, safety, interoperability (communications across Facebook-owned apps and platforms), and secure data storage. This will be a tough turnaround for the company that is fundamentally dependent on turning user data into opportunities for highly individualized advertising.
Privacy and transparency are critically important themes for organizations today, both for those that have “grown up” online as well as the incumbents. While GenTech want organizations to be more transparent and responsible, 64 percent also believe that they cannot do much to keep their data private. Being tracked and monitored online by organizations is seen as part and parcel of being a digital consumer.
Despite these views, there is a growing revolt simmering under the surface. GenTech want to take ownership of their own data. They see this as a valuable commodity, which they should be given the opportunity to trade with organizations. Some 50 percent would willingly share their data with companies if they got something in return, for example a financial incentive.
Rewiring the Power Shift
GenTech are looking to enter into a transactional relationship with organizations. This reflects a significant change in attitudes from perceiving the free access to digital platforms as the “product” in itself (in exchange for user data), to now wishing to use that data to trade for explicit benefits.
This has created an opportunity for companies that seek to empower consumers and give them back control of their data. Several companies now offer consumers the opportunity to sell the data they are comfortable sharing or take part in research that they get paid for. More and more companies are joining this space, including People.io, Killi, and Ocean Protocol.
Sir Tim Berners-Lee, the creator of the World Wide Web, has also been working on a way to shift power from organizations and institutions back to citizens and consumers. His platform, Solid, offers users the opportunity to be in charge of where they store their data and who can access it. It is a form of re-decentralization.
The Solid POD (Personal Online Data store) is a secure place on a hosted server or on the individual’s own server. Because a person’s data is stored in this one location rather than scattered across app developers’ and organizations’ servers, users can grant and revoke each app’s access to their POD. We see this as potentially being a way for people to take back control from technology companies.
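The core idea is simple to express in code: data lives in one store the owner controls, and apps only ever see what the owner has explicitly granted. Below is a minimal Python sketch of that access model; the class and method names are illustrative, not the real Solid API.

```python
class PersonalDataPod:
    """Toy model of a user-controlled data store in the spirit of a Solid POD:
    the owner keeps all data in one place and explicitly grants or revokes
    each app's access to individual resources."""

    def __init__(self):
        self._data = {}    # resource name -> stored value
        self._grants = {}  # app name -> set of resources it may read

    def store(self, resource, value):
        # Only the owner writes to the POD.
        self._data[resource] = value

    def grant(self, app, resource):
        # Owner allows an app to read one specific resource.
        self._grants.setdefault(app, set()).add(resource)

    def revoke(self, app, resource):
        # Owner withdraws access at any time.
        self._grants.get(app, set()).discard(resource)

    def read(self, app, resource):
        # Apps see nothing they weren't granted.
        if resource not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no access to {resource}")
        return self._data[resource]
```

For example, a user could store a location, grant a hypothetical `maps_app` access to it, and later revoke that grant, after which the app’s reads fail. The key design point is that the permission check lives with the data owner, not with the app developer’s server.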
GenTech have woken up to a reality where a life lived “plugged in” has significant consequences for their individual privacy and are starting to push back, questioning those organizations that have shown limited concern and continue to exercise exploitative practices.
It’s no wonder that we see these signs of revolt. GenTech is the generation with the most to lose. They face a life ahead intertwined with digital technology as part of their personal and private lives. With continued pressure on organizations to become more transparent, the time is now for young people to make their move.
Dr Mike Cooray, Professor of Practice, Hult International Business School and Dr Rikke Duus, Research Associate and Senior Teaching Fellow, UCL
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ser Borakovskyy / Shutterstock.com
#435161 Less Like Us: An Alternate Theory of ...
The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.
Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”
But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.
Both of these ideas are far easier to conceive of than they are to achieve. Emulating even the 302 neurons of the nematode worm’s brain remains an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.
Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.
This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, or if improper safety features are in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or convert most of the world into computing infrastructure to pursue its goal.
Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.
With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, and comes close to anthropomorphizing it. Instead, he proposes “Comprehensive AI Services” (CAIS) as a more realistic alternative route to artificial general intelligence.
What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.
AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms that can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring—take the newer versions of AlphaGo, which learn to improve by playing against previous versions of themselves.
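The evaluate-and-promote loop behind that kind of self-play improvement can be sketched in a few lines. This is a deliberately abstract Python toy, not AlphaGo’s actual training pipeline: a single number stands in for a trained policy’s strength, and each “training step” simply proposes a stronger challenger that must beat the reigning champion to replace it.

```python
import random

def play_match(challenger_skill, champion_skill, games=1000, seed=0):
    """Toy match: each game is won by the challenger with probability
    proportional to its share of the combined skill."""
    rng = random.Random(seed)
    wins = sum(
        rng.random() < challenger_skill / (challenger_skill + champion_skill)
        for _ in range(games)
    )
    return wins / games

def self_play_improvement(generations=5):
    """Keep a 'champion' policy; a stronger challenger replaces it only
    when it wins a head-to-head match, mirroring the evaluate-and-promote
    loop used in self-play training."""
    champion = 1.0  # abstract skill rating standing in for a trained policy
    history = [champion]
    for gen in range(generations):
        challenger = champion * 1.5  # assumed gain from one round of training
        if play_match(challenger, champion, seed=gen) > 0.5:
            champion = challenger  # promote the stronger version
        history.append(champion)
    return history
```

The point of the gate (“only promote on a match win”) is that the system’s measured performance never regresses between generations, which is what makes iterated self-play a reliable improvement loop.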
Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.
One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.
For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”: more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms that have been developed by different groups.
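The “search engine for services” picture above can be made concrete with a small Python sketch. Everything here is hypothetical: the three service functions stand in for narrow AI models, and fuzzy string matching via `difflib` stands in for a learned task-matching model. Note how Drexler’s safety lever appears for free—removing an entry from the registry cuts the system off from that capability entirely.

```python
import difflib

# Hypothetical specialized services standing in for narrow AI models.
def translate(text):
    return f"[translated] {text}"

def recommend_music(profile):
    return f"[playlist for] {profile}"

def classify_image(image):
    return f"[label for] {image}"

# The registry of capabilities: the "comprehensive services" themselves.
SERVICES = {
    "translate text": translate,
    "recommend music": recommend_music,
    "classify image": classify_image,
}

def dispatch(task, payload):
    """Route a requested task to the closest-matching specialized service,
    rather than reasoning about it with a single general agent."""
    match = difflib.get_close_matches(task, list(SERVICES), n=1, cutoff=0.0)[0]
    return SERVICES[match](payload)
```

A caller asking to `dispatch("translate text", "hola")` gets the translation service; deleting `SERVICES["translate text"]` would make that capability simply unavailable, with no agent around to route around the restriction.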
This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.
Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in the vain attempt to pursue immortality. This model cannot hold forever. Yet its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities in a huge range of different tasks, such as computer programming, natural language understanding, and so on. In other words, CAIS arrives first.
The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.
But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.
In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.
Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these services algorithms are already multiplying around us.
The CAIS model bridges the gap between real-world AI, machine learning developments, and real-world safety considerations, as well as the speculative world of superintelligent agents and the safety considerations involved with controlling their behavior. We should keep our minds open as to what form AI and machine learning will take, and how it will influence our societies—and we must take care to ensure that the systems we create don’t end up forcing us all to live in a world of unintended consequences.
Image Credit: MF Production/Shutterstock.com
#435119 Are These Robots Better Than You at ...
Robot technology is evolving at breakneck speed. SoftBank’s Pepper is found in companies across the globe and is rapidly improving its conversation skills. Telepresence robots open up new opportunities for remote working, while Boston Dynamics’ Handle robot could soon (literally) take a load off human colleagues in warehouses.
But warehouses and offices aren’t the only places where robots are lining up next to humans.
Toyota’s Cue 3 robot recently showed off its basketball skills, putting up better numbers than the NBA’s most accurate three-point shooter, the Golden State Warriors’ Steph Curry.
Cue 3 is still some way from being ready to take on Curry, or even amateur basketball players, in a real game. However, it is the latest member of a growing cast of robots challenging human dominance in sports.
As these robots continue to develop, they not only exemplify the speed of exponential technology development, but also how those technologies are improving human capabilities.
Meet the Contestants
The list of robots in sports is surprisingly long and diverse. There are robot skiers, tumblers, soccer players, sumos, and even robot game jockeys. Introductions to a few of them are in order.
Robot: Forpheus
Sport: Table tennis
Intro: Looks like something out of War of the Worlds equipped with a ping pong bat instead of a death ray.
Ability level: Capable of counteracting spin shots and good enough to beat many beginners.
Robot: Sumo bot
Sport: Sumo wrestling
Intro: Hyper-fast, hyper-aggressive. Think robot equivalent to an angry wasp on six cans of Red Bull crossed with a very small tank.
Ability level: Flies around the ring way faster than any human sumo, though it tends to drive straight out of the ring at times.
Robot: Cue 3
Sport: Basketball
Intro: Stands at an imposing 6 feet 10 inches, so pretty much built for the NBA. Looks a bit like something that belongs in a video game.
Ability level: Shoots three-pointers at 62.5 percent, better than Steph Curry, but is less mobile than Charles Barkley in his current form.
Robot: RoboCup robots
Sport: Soccer
Intro: The future of soccer. If everything goes to plan, a team of robots will take on the Lionel Messis and Cristiano Ronaldos of 2050 and beat them in a full 11 vs. 11 game.
Ability level: Currently plays soccer more like the six-year-olds I used to coach than Lionel Messi.
The Limiting Factor
The skill level of all the robots above is impressive, and they are doing things that no human contestant can. The sumo bots’ inhuman speed is self-evident. Forpheus’ ability to track the ball with two cameras while simultaneously tracking its opponent with two other cameras requires a look at the spec sheet, but is similarly beyond human capability. While Cue 3 can’t move, it makes shots from the mid-court logo look easy.
Robots are performing at a level that was confined to the realm of science fiction at the start of the millennium. The speed of development indicates that in the near future, my national team soccer coach would likely call up a robot instead of me (he must have lost my number since he hasn’t done so yet. It’s the only logical explanation), and he’d be right to do so.
It is also worth considering that many current sports robots have a humanoid form, which limits their ability. If engineers were to optimize robot design to outperform humans in specific categories, many world champions would likely already be metallic.
Swimming is perhaps one of the most obvious. Even Michael Phelps would struggle to keep up with a torpedo-shaped robot, and if you beefed up a sumo robot to human size, human sumo wrestlers might impress you by sprinting away from it at a 100-meter pace close to Usain Bolt’s.
In other areas, the playing field for humans and robots is rapidly leveling. One likely candidate for the first head-to-head competitions is racing, where self-driving cars from the Roborace League could perhaps soon be ready to race the likes of Lewis Hamilton.
Tech Pushing Humans
Perhaps one of the biggest reasons why it may still take some time for robots to surpass us is that they, along with other exponential technologies, are already making us better at sports.
In Japan, elite volleyball players use a robot to practice their attacks. Some American football players also practice against robot opponents and hone their skills using VR.
On the sidelines, AI is being used to analyze and improve athletes’ performance, and we may soon see the first AI coaches, not to mention referees.
We may even compete in games dreamt up by our electronic cousins. SpeedGate, a new game created by an AI that studied 400 different sports, is a prime example of how quickly that is becoming a possibility.
However, we will likely still need to make the final call on what constitutes a good game. The AI that created SpeedGate reportedly also suggested less suitable pastimes, like underwater parkour and a game that featured exploding frisbees. Both of these could be fun…but only if you’re as sturdy as a robot.
Image Credit: RoboCup Standard Platform League 2018, ©The Robocup Federation. Published with permission of reproduction granted by the RoboCup Federation.