#433758 DeepMind’s New Research Plan to Make ...
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That's probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it's also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That's why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we're going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment about losing control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so we can't switch it off before turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly-specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; there is rarely a neat way to encompass both the explicit and implicit goals in terms that are understandable to the machine and leave no room for ambiguity, meaning we often rely on incomplete approximations.
The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
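The divergence between the reward a game hands out and the goal its designer intended can be sketched in a few lines. This is a toy illustration only, not OpenAI's actual setup; the action names and point values are invented:

```python
# Toy illustration of specification gaming: the proxy reward (points for
# hitting targets) diverges from the intended goal (finishing the race).
# All action names and numbers here are made up for illustration.

def proxy_reward(actions):
    """Points the game actually hands out: 10 per target hit."""
    return 10 * actions.count("hit_target")

def intended_reward(actions):
    """What the designer wanted: credit only for finishing the course."""
    return 100 if "finish_race" in actions else 0

# Policy A: race to the finish, hitting a couple of targets on the way.
finisher = ["hit_target", "hit_target", "finish_race"]

# Policy B: circle forever, knocking over regenerating targets.
looper = ["hit_target"] * 50

# A reward-maximizing learner prefers the looper under the proxy...
assert proxy_reward(looper) > proxy_reward(finisher)
# ...even though it scores zero on what the designer actually intended.
assert intended_reward(looper) == 0
```

The learner isn't malfunctioning; it is optimizing exactly the objective it was given, which is precisely the specification problem.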
Another key concern for AI designers is making their creation robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly-curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including thinking a 3D printed tortoise was actually a gun.
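The core trick behind such attacks can be shown on a deliberately simple model. The sketch below uses an invented linear classifier and made-up feature values; real attacks like FGSM apply the same idea (nudge each input against the model's gradient) to deep networks and images:

```python
# Minimal sketch of an adversarial example against a toy linear classifier.
# The weights, inputs, and class labels are invented for illustration.

def classify(weights, x):
    """Linear score: positive -> 'tortoise', negative -> 'gun'."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return "tortoise" if score > 0 else "gun"

def adversarial(weights, x, eps=0.2):
    """FGSM-style perturbation: nudge each feature against the weight sign."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

w = [0.5, -0.3, 0.8]
x = [0.2, 0.1, 0.1]          # classified correctly as 'tortoise'

x_adv = adversarial(w, x)    # small, targeted perturbation

print(classify(w, x))        # tortoise
print(classify(w, x_adv))    # gun
```

A bounded per-feature nudge is enough to flip the decision, even though the perturbed input is close to the original, which is why brittleness is so hard to engineer away.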
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com
#433728 AI Is Kicking Space Exploration into ...
Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.
“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.
Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.
The Earth Observer 1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize analysis of and response to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware the event had taken place.
Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of low-resolution objects that humans would have struggled to identify. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.
AI's ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of the available data. ESA's ENVISAT produces around 400 terabytes of new data every year—but it will be dwarfed by the Square Kilometre Array, which will produce roughly as much data in a single day as currently exists on the entire internet.
AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.
An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.
“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.
AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.
“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.
First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter's moons have long fascinated scientists, especially Europa, which could house a subsurface ocean buried beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.
While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope in 2020, into an orbit around 1.5 million kilometers from Earth. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope's 705-kilogram mirror.
The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Examples from the Mars rover missions show that communication between a rover and Earth can take 20 minutes because of the vast distance. A Europa mission would face much longer communication times.
Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.
Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of flash storage sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but that's actually the 'brain' that will be onboard the next rover.
Private Companies Taking Off
Private companies are helping to push those limitations. CB Insights charts 57 startups in the space-space, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.
David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.
“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.
Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.
Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.
As is often the case with exponential technologies, there is a great potential for synergies and convergence. For example with AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that by using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.
One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.
“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”
Image Credit: Taily / Shutterstock.com
#433696 3 Big Ways Tech Is Disrupting Global ...
Disruptive business models are often powered by alternative financing. In Part 1 of this series, I discussed how mobile is redefining money and banking and shared some of the dramatic transformations in the global remittance infrastructure.
In this article, we’ll discuss:
Peer-to-peer lending
AI financial advisors and robo traders
Seamless Transactions
Let’s dive right back in…
Decentralized Lending = Democratized Access to Finances
Peer-to-peer (P2P) lending is an age-old practice, traditionally with high risk and extreme locality. Now, the P2P funding model is being digitized and delocalized, bringing lending online and across borders.
Zopa, the first official crowdlending platform, arrived in the United Kingdom in 2004. Since then, the consumer crowdlending platform has facilitated over 3 billion euros (~$3.5 billion USD) in loans.
Person-to-business crowdlending took off, again in the UK, in 2005 with Funding Circle, which has now loaned over 5 billion euros (~$5.8 billion USD) of capital to small businesses around the world.
Crowdlending next took off in the US in 2006, with platforms like Prosper and Lending Club. The US crowdlending industry has boomed to $21 billion in loans, across 515,000 loans.
Let’s take a step back… to a time before banks, when lending took place between trusted neighbors in small villages across the globe. Lending started as peer-to-peer transactions.
As villages turned into towns, towns turned into cities, and cities turned into sprawling metropolises, neighborly trust and the ability to communicate across urban landscapes broke down. That’s where banks and other financial institutions came into play—to add trust back into the lending equation.
With crowdlending, we are evidently returning to this pre-centralized-banking model of loans, and moving away from cumbersome intermediaries (e.g. high fees, regulations, and extra complexity).
Fueled by the permeation of the internet, P2P lending took on a new form as ‘crowdlending’ in the early 2000s. Now, as blockchain and artificial intelligence arrive on the digital scene, P2P lending platforms are being overhauled with transparency, accountability, reliability, and immutability.
Artificial Intelligence Micro Lending & Credit Scores
We are beginning to augment our quantitative decision-making with neural networks that process borrowers' financial data to determine their financial 'fate' (or, as some call it, their credit score). Companies like Smart Finance Group (backed by Kai-Fu Lee and Sinovation Ventures) are using artificial intelligence to minimize default rates for tens of millions of microloans.
Smart Finance is fueled by users’ personal data, particularly smartphone data and usage behavior. Users are required to give Smart Finance access to their smartphone data, so that Smart Finance’s artificial intelligence engine can generate a credit score from the personal information.
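A scoring system of this kind can be sketched as a logistic model over behavioral features. Everything below is hypothetical: the feature names, weights, bias, and approval cutoff are invented for illustration, whereas a real lender would learn them from millions of repayment outcomes:

```python
# Illustrative-only sketch of scoring a borrower from behavioral features.
# Features, weights, and threshold are invented; this is not Smart
# Finance's actual model.
import math

def credit_score(features, weights, bias=0.0):
    """Logistic score in (0, 1): estimated probability of repayment."""
    z = bias + sum(w * features[name] for name, w in weights.items())
    return 1 / (1 + math.exp(-z))

weights = {"months_of_phone_history": 0.05,   # hypothetical signals
           "on_time_bill_ratio": 3.0,
           "app_install_churn": -1.5}

applicant = {"months_of_phone_history": 24,
             "on_time_bill_ratio": 0.95,
             "app_install_churn": 0.2}

score = credit_score(applicant, weights, bias=-3.0)
approved = score > 0.5      # hypothetical approval cutoff
print(round(score, 2), approved)
```

Because evaluating such a model is just a dot product and a sigmoid, sub-second approval times like the eight seconds quoted below are plausible; the expensive part is training, not serving.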
The benefits of this AI-powered lending platform do not stop at increased loan payback rates; there’s a massive speed increase as well. Smart Finance loans are frequently approved in under eight seconds. As we’ve seen with other artificial intelligence disruptions, data is the new gold.
Digitizing access to P2P loans paves the way for billions of people currently without access to banking to leapfrog the centralized banking system, just as Africa bypassed landline phones and went straight to mobile. Leapfrogging centralized banking and the credit system is exactly what Smart Finance has done for hundreds of millions of people in China.
Blockchain-Backed Crowdlending
As artificial intelligence accesses even the most mundane mobile browsing data to assign credit scores, blockchain technologies, particularly immutable ledgers and smart contracts, are massive disruptors to the archaic banking system, building additional trust and transparency on top of current P2P lending models.
Immutable ledgers provide the necessary transparency for accurate credit and loan defaulting history. Smart contracts executed on these immutable ledgers bring the critical ability to digitally replace cumbersome, expensive third parties (like banks), allowing individual borrowers or businesses to directly connect with willing lenders.
Two of the leading blockchain platforms for P2P lending are ETHLend and SALT Lending.
ETHLend is an Ethereum-based decentralized application aiming to bring transparency and trust to P2P lending through Ethereum network smart contracts.
Secure Automated Lending Technology (SALT) allows cryptocurrency asset holders to use their digital assets as collateral for cash loans, without the need to liquidate their holdings, giving rise to a digital-asset-backed lending market.
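The collateralized-loan logic that such platforms automate on-chain can be sketched in plain Python. This is a hedged model of the general pattern (lock collateral, lend against it, release or liquidate), not SALT's or ETHLend's actual contract code; the class name, loan-to-value ratio, and state labels are all invented:

```python
# Plain-Python sketch of a collateralized loan's state machine, the kind
# of logic a lending smart contract encodes. Names and numbers are
# illustrative, not any platform's real contract.

class CollateralLoan:
    def __init__(self, collateral_value, loan_to_value=0.5):
        self.collateral = collateral_value            # locked crypto assets
        self.principal = collateral_value * loan_to_value
        self.outstanding = self.principal
        self.state = "active"

    def repay(self, amount):
        """Pay down the loan; full repayment unlocks the collateral."""
        self.outstanding = max(0.0, self.outstanding - amount)
        if self.outstanding == 0:
            self.state = "collateral_released"

    def mark_default(self):
        """On default, the contract liquidates the collateral instead."""
        if self.outstanding > 0:
            self.state = "collateral_liquidated"

loan = CollateralLoan(collateral_value=10_000)   # e.g. $10k of crypto locked
assert loan.principal == 5_000                    # 50% loan-to-value
loan.repay(5_000)
assert loan.state == "collateral_released"
```

On a blockchain, these transitions execute automatically and immutably, which is exactly what lets the contract stand in for the bank as the trusted third party.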
While blockchain poses a threat to many of the large, centralized banking institutions, some are taking advantage of the new technology to optimize their internal lending, credit scoring, and collateral operations.
In March 2018, ING and Credit Suisse successfully exchanged 25 million euros using HQLA-X, a blockchain-based collateral lending platform.
HQLA-X runs on the R3 Corda blockchain, a platform designed specifically to help heritage financial and commerce institutions migrate away from their inefficient legacy financial infrastructure.
Blockchain and tokenization are going through their own fintech and regulation shakeup right now. In a future blog, I’ll discuss the various efforts to more readily assure smart contracts, and the disruptive business model of security tokens and the US Securities and Exchange Commission.
Parallels to the Global Abundance of Capital
The abundance of capital being created by the advent of P2P loans closely relates to the unprecedented global abundance of capital.
Initial coin offerings (ICOs) and crowdfunding are taking a strong stand in disrupting the $164 billion venture capital market. The total amount invested in ICOs rose from $6.6 billion in 2017 to $7.15 billion in the first half of 2018 alone. Crowdfunding helped projects raise more than $34 billion in 2017, with experts projecting that global crowdfunding investments will reach $300 billion by 2025.
In the last year alone, using ICOs, over a dozen projects have raised hundreds of millions of dollars in mere hours. Take Filecoin, for example, which raised $257 million in only 30 days; its first $135 million was raised in the first hour. Similarly, the Dragon Coin project (which itself is revolutionizing remittance in high-stakes casinos around the world) raised $320 million in its 30-day public ICO.
Some Important Takeaways…
Technology-backed fundraising and financial services are disrupting the world’s largest financial institutions. Anyone, anywhere, at anytime will be able to access the capital they need to pursue their idea.
The journey from "I've got an idea" to "I run a billion-dollar company" is faster than it has ever been.
Following Ray Kurzweil's Law of Accelerating Returns, the rapid decrease in the time it takes to access capital is intimately linked to (and greatly dependent on) a financial infrastructure (technology, institutions, platforms, and policies) that can adapt and evolve just as rapidly.
This new abundance of capital requires financial decision-making with ever-higher market prediction precision. That’s exactly where artificial intelligence is already playing a massive role.
Artificial Intelligence, Robo Traders, and Financial Advisors
On May 6, 2010, the Dow Jones Industrial Average suddenly collapsed by 998.5 points (about 9 percent, or roughly $1 trillion in market value). The crash lasted over 35 minutes and is now known as the 'Flash Crash'. While no one knows the specific reason for this 2010 stock market anomaly, experts widely agree that the Flash Crash had to do with algorithmic trading.
With the ability to have instant, trillion-dollar market impacts, algorithmic trading and artificial intelligence are undoubtedly ingrained in how financial markets operate.
In 2017, CNBC.com estimated that 90 percent of daily stock trading volume is done by machine algorithms, with only 10 percent carried out directly by humans.
Artificial intelligence and financial management algorithms are not only available to top Wall Street players.
Robo-advisor financial management apps, like Wealthfront and Betterment, are rapidly permeating the global market. Wealthfront currently has $9.5 billion in assets under management, and Betterment has $10 billion.
Artificially intelligent financial agents are already helping financial institutions protect your money and fight fraud. A prime application for machine learning is detecting anomalies in your spending and transaction habits and flagging potentially fraudulent transactions.
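The simplest version of that idea is a statistical outlier check against an account's history. The sketch below is a toy (the transactions and threshold are invented, and production systems use far richer features and learned models), but it shows the shape of the flagging step:

```python
# Toy sketch of transaction anomaly flagging: score each new charge by
# how far it sits from the account's typical spending. Data and the
# z-score threshold are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, z_threshold=3.0):
    """Flag transactions more than z_threshold standard deviations
    above the account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_transactions if (t - mu) / sigma > z_threshold]

history = [42.0, 18.5, 27.0, 35.0, 22.5, 30.0, 25.0, 40.0]   # past charges
incoming = [33.0, 29.0, 950.0]   # one wildly out-of-pattern charge

print(flag_anomalies(history, incoming))  # [950.0]
```

Real fraud systems replace the z-score with learned models over hundreds of features (merchant, location, time of day), but the pipeline is the same: model "normal," then flag what deviates.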
As artificial intelligence continues to exponentially increase in power and capabilities, increasingly powerful trading and financial management bots will come online, finding massive new and previously lost streams of wealth.
How else are artificial intelligence and automation transforming finance?
Disruptive Remittance and Seamless Transactions
When was the last time you paid in cash at a toll booth? How about for a taxi ride?
E-ZPass, the electronic tolling system implemented extensively on the East Coast, has done wonders to reduce traffic congestion and increase traffic flow.
Driving down I-95 on the East Coast of the United States, drivers rarely notice their financial transaction with the state’s tolling agencies. The transactions are seamless.
The Uber app enables me to travel without my wallet. I can forget about payment on my trip, freeing up mental bandwidth and time for higher-priority tasks. The entire process is digitized and, by extension, automated and integrated into Uber's platform. (Note: this incredible convenience has many times caused me to accidentally walk out of taxi cabs without paying!)
In January 2018, the first cutting-edge, AI-powered Amazon Go store opened in Seattle, Washington. The store marked a new era in remittance and transactions. Gone are the days of carrying credit cards and cash, and gone are the cash registers. Now, on the heels of these early 'beta tests', Amazon is considering opening as many as 3,000 of these cashierless stores by 2023.
Amazon Go stores use AI algorithms that watch various video feeds (from advanced cameras) throughout the store to identify who picks up groceries, exactly what products they select, and how much to charge that person when they walk out of the store. It's a grab-and-go experience.
Let’s extrapolate the notion of seamless, integrated payment systems from Amazon Go and Uber’s removal of post-ride payment to the rest of our day-to-day experience.
Imagine this near future:
As you near the front door of your home, your AI assistant summons a self-driving Uber that takes you to the Hyperloop station (after all, you work in L.A. but live in San Francisco).
At the station, you board your pod, without noticing that your ticket purchase was settled via a wireless payment checkpoint.
After work, you stop at the Amazon Go and pick up dinner. Your virtual AI assistant passes your Amazon account information to the store's payment checkpoint as the store's cameras and sensors track you and your cart, charging you auto-magically.
At home, unbeknownst to you, your AI has already restocked your fridge and pantry with whatever items you failed to pick up at the Amazon Go.
Once we remove the actively transacting aspect of finance, what else becomes possible?
Top Conclusions
Extraordinary transformations are happening in the finance world. We’ve only scratched the surface of the fintech revolution. All of these transformative financial technologies require high-fidelity assurance, robust insurance, and a mechanism for storing value.
I’ll dive into each of these other facets of financial services in future articles.
For now, thanks to the global communication networks coming online (5G, Alphabet's Loon, SpaceX's Starlink, and OneWeb), nearly all 8 billion people on Earth could be online by 2024.
Once connected, these new minds, entrepreneurs, and customers need access to money and financial services to meaningfully participate in the world economy.
By connecting lenders and borrowers around the globe, decentralized lending drives down global interest rates, increases global financial market participation, and extends economic opportunity to the billions of people who are about to come online.
We’re living in the most abundant time in human history, and fintech is just getting started.
Join Me
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my 'onramp' for exponential entrepreneurs – those who want to get involved and play at a higher level.
Image Credit: Novikov Aleksey / Shutterstock.com
#433689 The Rise of Dataism: A Threat to Freedom ...
What would happen if we made all of our data public—everything from wearables monitoring our biometrics, all the way to smartphones monitoring our location, our social media activity, and even our internet search history?
Would such insights into our lives simply provide companies and politicians with greater power to invade our privacy and manipulate us by using our psychological profiles against us?
A burgeoning new philosophy called dataism doesn’t think so.
In fact, this trending ideology believes that liberating the flow of data is the supreme value of the universe, and that it could be the key to unleashing the greatest scientific revolution in the history of humanity.
What Is Dataism?
First mentioned by David Brooks in his 2013 New York Times article "The Philosophy of Data," dataism is an ethical system that has been most heavily explored and popularized by renowned historian Yuval Noah Harari.
In his 2016 book Homo Deus, Harari described dataism as a new form of religion that celebrates the growing importance of big data.
Its core belief centers around the idea that the universe gives greater value and support to systems, individuals, and societies that contribute most heavily and efficiently to data processing. In an interview with Wired, Harari stated, “Humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case.”
Now, big data and machine learning are proving themselves more sophisticated, and dataists believe we should hand over as much information and power to these algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.
Pros: Progress and Personal Growth
When you let data run freely, it’s bound to be mixed and matched in new ways that inevitably spark progress. And as we enter the exponential future where every person is constantly connected and sharing their data, the potential for such collaborative epiphanies becomes even greater.
We can already see important increases in quality of life thanks to companies like Google. With Google Maps on your phone, your position is constantly updating on their servers. This information, combined with everyone else on the planet using a phone with Google Maps, allows your phone to inform you of traffic conditions. Based on the speed and location of nearby phones, Google can reroute you to less congested areas or help you avoid accidents. And since you trust that these algorithms have more data than you, you gladly hand over your power to them, following your GPS’s directions rather than your own.
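The rerouting step described above boils down to recomputing a cheapest path when live data changes an edge weight. The sketch below is a minimal illustration with an invented road graph and made-up travel times; real mapping systems are vastly more sophisticated:

```python
# Minimal sketch of congestion-aware rerouting: edge weights are travel
# times (minutes), updated from hypothetical aggregate speed data, and
# the cheapest path is recomputed with Dijkstra's algorithm.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a dict-of-dicts graph; returns (total_time, route)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return float("inf"), []

roads = {"home": {"highway": 10, "backroad": 15},
         "highway": {"office": 10},
         "backroad": {"office": 12},
         "office": {}}

print(shortest_path(roads, "home", "office"))   # highway route, 20 min

# Live data shows an accident on the highway: its travel time triples.
roads["home"]["highway"] = 30
print(shortest_path(roads, "home", "office"))   # reroute via backroad, 27 min
```

The interesting part isn't the algorithm, which is decades old, but the data: aggregated positions from millions of phones are what keep those edge weights honest in real time.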
We can do the same sort of thing with our bodies.
Imagine, for instance, a world where each person has biosensors in their bloodstream—a not unlikely or distant possibility when you consider that people with diabetes already wear glucose monitors and insulin pumps that constantly track their blood sugar levels. And let's assume this data was freely shared with the world.
Now imagine a virus like Zika or the bird flu breaks out. Thanks to this technology, the odd change in biodata coming from a particular region flags an artificial intelligence that feeds data to the CDC (Centers for Disease Control and Prevention). Recognizing that a pandemic could be possible, AIs begin 3D printing vaccines on demand, predicting the number of people who may be afflicted. When our personal AIs tell us the locations of the spreading epidemic and to take the vaccine just delivered by drone to our homes, are we likely to follow their instructions? Almost certainly—and if so, it's likely millions, if not billions, of lives will have been saved.
But to quickly create such vaccines, we’ll also need to liberate research.
Currently, universities and companies seeking to benefit humankind with medical solutions have to pay extensively to organize clinical trials and to find people who match their needs. But if all our biodata was freely aggregated, perhaps they could simply say “monitor all people living with cancer” to an AI, and thanks to the constant stream of data coming in from the world’s population, a machine learning program may easily be able to detect a pattern and create a cure.
As always in research, the more sample data you have, the higher the chance that such patterns will emerge. If data is flowing freely, then anyone in the world can suddenly decide they have a hunch they want to explore, and without having to spend months of time and money hunting down the data, they can simply test their hypothesis.
Whether garage tinkerers, at-home scientists, or PhD students—an abundance of free data allows for science to progress unhindered, each person able to operate without being slowed by lack of data. And any progress they make is immediately liberated, becoming free data shared with anyone else that may find a use for it.
Any individual with a curious passion would have the entire world’s data at their fingertips, empowering every one of us to become an expert in any subject that inspires us. Expertise we can then share back into the data stream—a positive feedback loop spearheading progress for the entirety of humanity’s knowledge.
Such exponential gains represent a dataism utopia.
Unfortunately, our current incentives and economy also show us the tragic failures of this model.
As Harari has pointed out, the rise of dataism means that "humanism is now facing an existential challenge and the idea of 'free will' is under threat."
Cons: Manipulation and Extortion
In 2017, The Economist declared that data was the most valuable resource on the planet—even more valuable than oil.
Perhaps this is because data is ‘priceless’: it represents understanding, and understanding represents control. And so, in the world of advertising and politics, having data on your consumers and voters gives you an incredible advantage.
This was evidenced by the Cambridge Analytica scandal, in which it's believed that Donald Trump's campaign and the architects of Brexit leveraged users' Facebook data to create psychological profiles that enabled them to manipulate the masses.
How powerful are these psychological models?
A team that built a model similar to the one used by Cambridge Analytica said their model could understand someone as well as a coworker could with access to only 10 Facebook likes. With 70 likes they could know them as well as a friend might, with 150 likes as well as their parents, and at 300 likes they could even come to know someone better than their lover. With more likes still, they could come to know someone better than that person knows themselves.
Proceeding With Caution
In a capitalist democracy, do we want businesses and politicians to know us better than we know ourselves?
In spite of the remarkable benefits that may result for our species by freely giving away our information, do we run the risk of that data being used to exploit and manipulate the masses towards a future without free will, where our daily lives are puppeteered by those who own our data?
It’s extremely possible.
And it’s for this reason that one of the most important conversations we’ll have as a species centers around data ownership: do we just give ownership of the data back to the users, allowing them to choose who to sell or freely give their data to? Or will that simply deter the entrepreneurial drive and cause all of the free services we use today, like Google Search and Facebook, to begin charging inaccessible prices? How much are we willing to pay for our freedom? And how much do we actually care?
If recent history has taught us anything, it’s that humans are willing to give up more privacy than they like to think. Fifteen years ago, it would have been crazy to suggest we’d all allow ourselves to be tracked by our cars, phones, and daily check-ins to our favorite neighborhood locations; but now most of us see it as a worthwhile trade for optimized commutes and dating. As we continue navigating that fine line between exploitation and innovation into a more technological future, what other trade-offs might we be willing to make?
Image Credit: graphicINmotion / Shutterstock.com