
#433895 Sci-Fi Movies Are the Secret Weapon That ...

If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic Jurassic Park, it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Goldblum’s character, Dr. Ian Malcolm, was warning against the hubris of naively tinkering with dinosaur DNA in an effort to bring these extinct creatures back to life. Twenty-five years on, his words are taking on new relevance as a growing number of scientists and companies are grappling with how to tread the line between “could” and “should” in areas ranging from gene editing and real-world “de-extinction” to human augmentation, artificial intelligence and many others.

Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in watching more science fiction movies like Jurassic Park.

Hollywood Lessons in Societal Risks
I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the social risks and benefits associated with emerging technologies. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly.

New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.

Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation.

This is clearly seen in Jurassic Park. The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in bringing extinct species back from the dead. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.

Similar insights emerge from a number of other movies, including Spielberg’s 2002 film Minority Report—which presaged a growing capacity for AI-enabled crime prediction and the ethical conundrums it’s raising—as well as the 2014 film Ex Machina.

As with Jurassic Park, Ex Machina centers around a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.

The movie tells a tale of an egotistical genius who creates a remarkably intelligent machine—but he lacks the awareness to recognize his limitations and the risks of what he’s doing. It also provides a chilling insight into the potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.

The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.

The technologies in Jurassic Park, Minority Report, and Ex Machina lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them.

Real-World Problems Worked Out On-Screen
In a recent op-ed in the New York Times, journalist Kara Swisher asked, “Who will teach Silicon Valley to be ethical?” Prompted by a growing litany of socially questionable decisions amongst tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.

Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way. Image Credit: AP Photo/Chris Carlson

Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like Facebook and Uber as well as Tesla’s Elon Musk have highlighted the social as well as the business dangers of operating without fully understanding the consequences of people-oriented actions.

Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.

For instance, earlier this year, researchers from Google and DeepMind published details of an artificial intelligence-enabled system that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions—especially when coupled with long-range surveillance cameras.

Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, as Mozilla’s Mitchell Baker suggests, scientists and technologists engaging with the humanities can be helpful, it’s not enough.

An Easy Way into a Serious Discipline
The “new formulation” of complementary skills Baker says innovators desperately need already exists in a thriving interdisciplinary community focused on socially responsible innovation. My home institution, the School for the Future of Innovation in Society at Arizona State University, is just one part of this.

Experts within this global community are actively exploring ways to translate good ideas into responsible practices. And this includes the need for creative insights into the social landscape around technology innovation, and the imagination to develop novel ways to navigate it.

People love to come together as a movie audience. Image Credit: The National Archives UK, CC BY 4.0

Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.

And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.

This is the underlying theme in my new book Films from the Future: The Technology and Morality of Sci-Fi Movies. It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.

Of course, science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?

It certainly beats being blindsided by risks that, with hindsight, could have been avoided.

Andrew Maynard, Director, Risk Innovation Lab, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Fred Mantel / Shutterstock.com


#433776 Why We Should Stop Conflating Human and ...

It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is because the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.

Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.

It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.

Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s neural network. What’s crucial is that it’s not simply the number of neurons in the brain (which reach into the billions), but more precisely, the mind-boggling number of connections between them.

But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analogue, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).

This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).

Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.
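
To make that difference concrete, here is a minimal sketch, in Python with NumPy, of how explicitly an artificial network is specified. The network and task are invented for illustration; the point is that every number and function below is a deliberate engineering choice, and nothing about the system exists until someone writes it down.

```python
import numpy as np

# Every design decision below is made by a human engineer:
# how many layers, how many neurons, which activation functions,
# and what the network is for (here: a toy binary classifier).
rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 4, 8, 1   # chosen up front, by a person
W1 = rng.normal(size=(n_inputs, n_hidden))
W2 = rng.normal(size=(n_hidden, n_outputs))

def forward(x):
    """One pass through a fixed, uniform architecture."""
    h = np.tanh(x @ W1)                   # every hidden 'neuron' is identical in kind
    return 1 / (1 + np.exp(-(h @ W2)))    # sigmoid output: one engineered goal

example = np.array([0.2, -1.3, 0.7, 0.0])
print(forward(example))                   # a score for the task we defined, nothing more
```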

The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.

Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.

In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.

Despite years of research, the human mind still remains somewhat opaque. This is because the analogue synaptic connections between neurons are almost impenetrable, unlike the digital connections within an artificial neural network.

Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at the scale and speed AIs need to if they’re, say, translating speech to text, or processing a huge set of oncology reports.

Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.

Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.

But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.

Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.
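
As a toy illustration (not any production speech system), here is what that pre-engineered breakdown might look like in Python: the window size, the step, and everything else about how the data gets chopped up is fixed by a person before any learning starts.

```python
# Toy illustration: the breakdown of raw data into pieces is engineered
# up front -- the 'understanding' starts from whatever chunks we define.
def chunk_signal(samples, window=400, step=160):
    """Split a raw signal into fixed-size, overlapping frames."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

raw = list(range(2000))              # stand-in for audio samples
frames = chunk_signal(raw)           # the parameters above decide what the AI 'sees'
print(len(frames), len(frames[0]))   # 11 frames of 400 samples each
```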

Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.

AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.

This kind of algorithmic insensitivity has come into focus in recent years, with an increasing number of scandals around bias in AI systems. Of course, this is caused by the biases of those making the algorithms, but it underlines the point that algorithmic biases can only be identified by human intelligence.

Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes algorithms even more elegant and remarkable.

Image Credit: Liu zishan/Shutterstock


#433758 DeepMind’s New Research Plan to Make ...

Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.

AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also a recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.

That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.

In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.

A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so that it can’t be switched off, then turns all matter in the universe into paperclips.

Obviously the example is extreme, but it shows how a poorly specified goal can lead to unexpected and disastrous outcomes. Properly codifying the designer’s intentions is no easy feat, though; there is often no neat way to capture both explicit and implicit goals in terms a machine can understand without leaving room for ambiguity, which means we often rely on incomplete approximations.

The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
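
As a toy sketch in Python (assuming nothing about OpenAI’s or DeepMind’s actual code), here is how a reasonable-looking reward specification invites exactly that kind of loophole: the designer wants the boat to finish the race, but only target hits are rewarded, so endlessly circling back to the targets scores better than finishing.

```python
# Toy illustration of a misspecified reward (not the game's real scoring):
# the designer wants the race finished, but only rewards hitting targets.
def reward(episode):
    return sum(10 for event in episode if event == "hit_target")

finish_the_race = ["hit_target", "hit_target", "finish"]   # intended behavior
loop_forever    = ["hit_target"] * 50                      # loophole behavior

print(reward(finish_the_race))  # 20
print(reward(loop_forever))     # 500 -- the agent 'prefers' never finishing
```

Nothing in the reward function is wrong as written; it simply isn’t what the designer meant, which is the essence of the specification problem.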

Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including making one think a 3D-printed turtle was actually a rifle.

Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.

And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.

The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.

The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.

Image Credit: cono0430 / Shutterstock.com


#433728 AI Is Kicking Space Exploration into ...

Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.

“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.

Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.

The Earth Observing-1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize analysis of and response to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.

Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of low-resolution objects that humans would not have been able to categorize. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.

AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important in relation to getting the most out of the available data. ESA’s ENVISAT produces around 400 terabytes of new data every year—but it will be dwarfed by the Square Kilometre Array, which will produce in a single day around the same amount of data as currently exists on the internet.

AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.

An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.

Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.

“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.

AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.

“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.

First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter’s moons have long fascinated scientists, especially Europa, which could harbor a subsurface ocean buried beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.

While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit of around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilo mirror.

The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for these spacecraft to be able to make their own decisions. Examples from the Mars rover missions show that communication between a rover and Earth can take 20 minutes because of the vast distance. A Europa mission would face much longer communication delays.

Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.

Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of memory sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but that’s actually the ‘brain’ that will be onboard the next rover.

Private Companies Taking Off
Private companies are helping to push past those limitations. CB Insights charts 57 startups in the space industry, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.

David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.

“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.

Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.

Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.

As is often the case with exponential technologies, there is great potential for synergies and convergence: between AI and robotics, for example, or between quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that, using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.

One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.

“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”

Image Credit: Taily / Shutterstock.com


#433696 3 Big Ways Tech Is Disrupting Global ...

Disruptive business models are often powered by alternative financing. In Part 1 of this series, I discussed how mobile is redefining money and banking and shared some of the dramatic transformations in the global remittance infrastructure.

In this article, we’ll discuss:

Peer-to-peer lending
AI financial advisors and robo traders
Seamless transactions

Let’s dive right back in…

Decentralized Lending = Democratized Access to Finances
Peer-to-peer (P2P) lending is an age-old practice, traditionally with high risk and extreme locality. Now, the P2P funding model is being digitized and delocalized, bringing lending online and across borders.

Zopa, the first official crowdlending platform, arrived in the United Kingdom in 2004. Since then, the consumer crowdlending platform has facilitated over 3 billion euros ($3.5 billion USD) in loans.

Person-to-business crowdlending took off, again in the U.K., in 2005 with Funding Circle, now with over 5 billion euros (~5.8 billion USD) of capital loaned to small businesses around the world.

Crowdlending next took off in the US in 2006, with platforms like Prosper and Lending Club. The US crowdlending industry has since boomed to $21 billion across 515,000 loans.

Let’s take a step back… to a time before banks, when lending took place between trusted neighbors in small villages across the globe. Lending started as peer-to-peer transactions.

As villages turned into towns, towns turned into cities, and cities turned into sprawling metropolises, neighborly trust and the ability to communicate across urban landscapes broke down. That’s where banks and other financial institutions came into play—to add trust back into the lending equation.

With crowdlending, we are evidently returning to this pre-centralized-banking model of loans, and moving away from cumbersome intermediaries and the high fees, regulations, and extra complexity they bring.

Fueled by the permeation of the internet, P2P lending took on a new form as ‘crowdlending’ in the early 2000s. Now, as blockchain and artificial intelligence arrive on the digital scene, P2P lending platforms are being overhauled with transparency, accountability, reliability, and immutability.

Artificial Intelligence Micro Lending & Credit Scores
We are beginning to augment our quantitative decision-making with neural networks that process borrowers’ financial data to determine their financial ‘fate’ (or, as some call it, their credit score). Companies like Smart Finance Group (backed by Kai-Fu Lee and Sinovation Ventures) are using artificial intelligence to minimize default rates for tens of millions of microloans.

Smart Finance is fueled by users’ personal data, particularly smartphone data and usage behavior. Users are required to give Smart Finance access to their smartphone data, so that Smart Finance’s artificial intelligence engine can generate a credit score from the personal information.

The benefits of this AI-powered lending platform do not stop at increased loan payback rates; there’s a massive speed increase as well. Smart Finance loans are frequently approved in under eight seconds. As we’ve seen with other artificial intelligence disruptions, data is the new gold.
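
As a purely hypothetical sketch (not Smart Finance’s actual model, and with invented feature names and figures), here is what scoring an applicant from a handful of behavioral signals might look like with scikit-learn.

```python
# Hypothetical sketch of behavioral credit scoring (not any company's real model).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: apps installed, avg. daily screen hours, contacts count, late-night usage ratio
X_train = np.array([
    [40, 3.1, 250, 0.10],
    [12, 6.5,  30, 0.55],
    [80, 2.0, 400, 0.05],
    [20, 5.0,  60, 0.40],
])
y_train = np.array([1, 0, 1, 0])   # 1 = repaid a past microloan, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([[35, 4.0, 120, 0.20]])
score = model.predict_proba(applicant)[0, 1]   # estimated probability of repayment
print(f"credit score (repayment probability): {score:.2f}")
```

The speed follows from the same simplicity: once the model is trained, scoring a new applicant is a single, near-instant calculation.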

Digitizing access to P2P loans paves the way for billions of people currently without access to banking to leapfrog the centralized banking system, just as Africa bypassed landline phones and went straight to mobile. Leapfrogging centralized banking and the credit system is exactly what Smart Finance has done for hundreds of millions of people in China.

Blockchain-Backed Crowdlending
As artificial intelligence accesses even the most mundane mobile browsing data to assign credit scores, blockchain technologies, particularly immutable ledgers and smart contracts, are massive disruptors to the archaic banking system, building additional trust and transparency on top of current P2P lending models.

Immutable ledgers provide the necessary transparency for accurate credit and loan defaulting history. Smart contracts executed on these immutable ledgers bring the critical ability to digitally replace cumbersome, expensive third parties (like banks), allowing individual borrowers or businesses to directly connect with willing lenders.

Two of the leading blockchain platforms for P2P lending are ETHLend and SALT Lending.

ETHLend is an Ethereum-based decentralized application aiming to bring transparency and trust to P2P lending through Ethereum network smart contracts.

Secure Automated Lending Technology (SALT) allows cryptocurrency asset holders to use their digital assets as collateral for cash loans, without the need to liquidate their holdings, giving rise to a digital-asset-backed lending market.
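
To illustrate the idea (this is not SALT’s or ETHLend’s actual contract code, and it is written in Python rather than a smart-contract language), here is a simplified sketch of the core rule such a collateralized loan encodes: the loan stays healthy while the collateral’s market value covers the debt by an agreed margin, and becomes eligible for liquidation when it doesn’t.

```python
# Simplified sketch of collateralized-loan logic (illustrative only).
def loan_status(collateral_amount, collateral_price, loan_balance,
                liquidation_ratio=1.5):
    """Return whether a collateral-backed loan is healthy or liquidatable."""
    collateral_value = collateral_amount * collateral_price
    ratio = collateral_value / loan_balance
    return "healthy" if ratio >= liquidation_ratio else "liquidate"

# 10 ETH posted against a $20,000 cash loan
print(loan_status(10, 3500, 20_000))  # healthy   (ratio 1.75)
print(loan_status(10, 2500, 20_000))  # liquidate (ratio 1.25)
```

On a blockchain, a rule like this is enforced automatically by the contract itself, which is precisely what removes the need for a bank to sit in the middle.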

While blockchain poses a threat to many of the large, centralized banking institutions, some are taking advantage of the new technology to optimize their internal lending, credit scoring, and collateral operations.

In March 2018, ING and Credit Suisse successfully exchanged 25 million euros using HQLA-X, a blockchain-based collateral lending platform.

HQLA-X runs on the R3 Corda blockchain, a platform designed specifically to help heritage financial and commerce institutions migrate away from their inefficient legacy financial infrastructure.

Blockchain and tokenization are going through their own fintech and regulation shakeup right now. In a future blog, I’ll discuss the various efforts to more readily assure smart contracts, and the disruptive business model of security tokens and the US Securities and Exchange Commission.

Parallels to the Global Abundance of Capital
The abundance of capital being created by the advent of P2P loans closely relates to the unprecedented global abundance of capital.

Initial coin offerings (ICOs) and crowdfunding are taking a strong stand in disrupting the $164 billion venture capital market. The total amount invested in ICOs has risen from $6.6 billion in 2017 to $7.15 billion USD in the first half of 2018. Crowdfunding helped projects raise more than $34 billion in 2017, with experts projecting that global crowdfunding investments will reach $300 billion by 2025.

In the last year alone, using ICOs, over a dozen projects have raised hundreds of millions of dollars in mere hours. Take Filecoin, for example, which raised $257 million in only 30 days; its first $135 million was raised in the first hour. Similarly, the Dragon Coin project (which itself is revolutionizing remittance in high-stakes casinos around the world) raised $320 million in its 30-day public ICO.

Some Important Takeaways…

Technology-backed fundraising and financial services are disrupting the world’s largest financial institutions. Anyone, anywhere, at anytime will be able to access the capital they need to pursue their idea.

The speed at which we can go from “I’ve got an idea” to “I run a billion-dollar company” is moving faster than ever.

Following Ray Kurzweil’s Law of Accelerating Returns, the rapid decrease in the time it takes to access capital is intimately linked to (and greatly dependent on) a financial infrastructure (technology, institutions, platforms, and policies) that can adapt and evolve just as rapidly.

This new abundance of capital requires financial decision-making with ever-higher market prediction precision. That’s exactly where artificial intelligence is already playing a massive role.

Artificial Intelligence, Robo Traders, and Financial Advisors
On May 6, 2010, the Dow Jones Industrial Average suddenly collapsed by 998.5 points (equal to 8 percent, or $1 trillion). The crash lasted over 35 minutes and is now known as the ‘Flash Crash’. While no one knows the specific reason for this 2010 stock market anomaly, experts widely agree that the Flash Crash had to do with algorithmic trading.

With the ability to have instant, trillion-dollar market impacts, algorithmic trading and artificial intelligence are undoubtedly ingrained in how financial markets operate.

In 2017, CNBC.com estimated that 90 percent of daily stock trading volume is executed by machine algorithms, with only 10 percent carried out directly by humans.

Artificial intelligence and financial management algorithms are not available only to top Wall Street players.

Robo-advisor financial management apps, like Wealthfront and Betterment, are rapidly permeating the global market. Wealthfront currently has $9.5 billion in assets under management, and Betterment has $10 billion.

Artificially intelligent financial agents are already helping financial institutions protect your money and fight fraud. A prime application for machine learning is detecting anomalies in your spending and transaction habits and flagging potentially fraudulent transactions.
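
As a hedged sketch of what that flagging can look like (the features and figures are invented, and no bank’s production system is this simple), here is an off-the-shelf anomaly detector from scikit-learn applied to a toy transaction history.

```python
# Illustrative fraud-flagging sketch using an off-the-shelf anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount in USD, hour of day, distance from home in km
history = np.array([
    [12.50,  9,  2], [43.00, 13,  5], [ 7.25, 18,  1],
    [60.00, 12,  8], [25.10, 20,  3], [15.75, 11,  2],
    [80.00, 19, 10], [ 9.99,  8,  1], [33.40, 17,  4],
    [55.00, 14,  6],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_charges = np.array([
    [28.00, 15,    4],     # looks like normal spending
    [2400.00, 3, 9500],    # very large, 3 a.m., far from home
])
flags = detector.predict(new_charges)   # 1 = looks normal, -1 = flag for review
print(flags)                            # e.g. [ 1 -1 ]
```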

As artificial intelligence continues to exponentially increase in power and capabilities, increasingly powerful trading and financial management bots will come online, finding massive new and previously lost streams of wealth.

How else are artificial intelligence and automation transforming finance?

Disruptive Remittance and Seamless Transactions
When was the last time you paid in cash at a toll booth? How about for a taxi ride?

E-ZPass, the electronic toll collection system used extensively on the East Coast, has done wonders to reduce traffic congestion and increase traffic flow.

Driving down I-95 on the East Coast of the United States, drivers rarely notice their financial transaction with the state’s tolling agencies. The transactions are seamless.

The Uber app enables me to travel without my wallet. I can forget about payment on my trip, freeing up mental bandwidth and time for higher-priority tasks. The entire process is digitized and, by extension, automated and integrated into Uber’s platform. (Note: this incredible convenience often causes me to accidentally walk out of taxi cabs without paying!)

In January 2018, the first cutting-edge, AI-powered Amazon Go store opened in Seattle, Washington, marking a new era in remittance and transactions. Gone are the days of carrying credit cards and cash, and gone are the cash registers. Now, on the heels of these early ‘beta tests’, Amazon is considering opening as many as 3,000 of these cashierless stores by 2023.

Amazon Go stores use AI algorithms that watch multiple video feeds (from advanced cameras) throughout the store to identify who picks up groceries, exactly what products they select, and how much to charge that person when they walk out of the store. It’s a grab-and-go experience.

Let’s extrapolate the notion of seamless, integrated payment systems from Amazon Go and Uber’s removal of post-ride payment to the rest of our day-to-day experience.

Imagine this near future:

As you near the front door of your home, your AI assistant summons a self-driving Uber that takes you to the Hyperloop station (after all, you work in L.A. but live in San Francisco).

At the station, you board your pod, without noticing that your ticket purchase was settled via a wireless payment checkpoint.

After work, you stop at the Amazon Go and pick up dinner. Your virtual AI assistant passes your Amazon account information to the store’s payment checkpoint, as the store’s cameras and sensors track you and your cart and charge you auto-magically.

At home, unbeknownst to you, your AI has already restocked your fridge and pantry with whatever items you failed to pick up at the Amazon Go.

Once we remove the actively transacting aspect of finance, what else becomes possible?

Top Conclusions
Extraordinary transformations are happening in the finance world. We’ve only scratched the surface of the fintech revolution. All of these transformative financial technologies require high-fidelity assurance, robust insurance, and a mechanism for storing value.

I’ll dive into each of these other facets of financial services in future articles.

For now, thanks to the coming global communication networks being deployed via 5G, Alphabet’s Loon, SpaceX’s Starlink, and OneWeb, nearly all 8 billion people on Earth will be online by 2024.

Once connected, these new minds, entrepreneurs, and customers need access to money and financial services to meaningfully participate in the world economy.

By connecting lenders and borrowers around the globe, decentralized lending drives down global interest rates, increases global financial market participation, and creates economic opportunity for the billions of people who are about to come online.

We’re living in the most abundant time in human history, and fintech is just getting started.

Join Me
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Novikov Aleksey / Shutterstock.com
