Tag Archives: cameras
#433852 How Do We Teach Autonomous Cars To Drive ...
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
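As a rough sketch of the idea, a simulator can emit sensor readings that arrive pre-labeled, because the virtual world knows exactly what each beam hit. Everything here (the terrain function, obstacle heights, label names) is invented for illustration, not taken from the MSU simulator:

```python
import math
import random

def sample_terrain(x, y):
    """Toy height field standing in for the simulator's terrain model."""
    return 0.2 * math.sin(x) * math.cos(y)

def simulate_lidar_scan(n_points, obstacle_prob=0.1, seed=0):
    """Return labeled samples: each is ((x, y, measured height), label).
    Points that protrude above the expected ground surface are obstacles."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_points):
        x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
        z = sample_terrain(x, y)
        if rng.random() < obstacle_prob:
            z += rng.uniform(0.5, 2.0)  # a rock, stump, or fallen trunk
            label = "obstacle"
        else:
            label = "ground"
        samples.append(((x, y, z), label))
    return samples

scan = simulate_lidar_scan(1000)
labels = [label for _, label in scan]
```

Because the ground truth comes for free, a virtual run of a few minutes yields the kind of labeled training set that would take days to annotate by hand from real camera or lidar footage.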
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY ND

#433799 The First Novel Written by AI Is ...
Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.
People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.
This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.
In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.
But is creativity a fundamentally human phenomenon? Or can it be learned by machines?
And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?
Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.
“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel that his machine created. 1 The Road is currently marketed as the first novel written by AI.
Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
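The mechanics of seeded, one-step-at-a-time generation can be illustrated with a much simpler stand-in for Goodwin's LSTM: a character-level Markov chain. The corpus and seed below are toy inputs; the real system was trained on 60 million words:

```python
import random
from collections import defaultdict

def train_char_model(corpus, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, length=60, order=3, rng=None):
    """Grow text one character at a time from a seed, analogous to how
    Goodwin's system grows sentences from GPS, camera, and clock seeds."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training; stop generating
        out += rng.choice(choices)
    return out

corpus = "it was nine seventeen in the morning and the house was heavy. "
model = train_char_model(corpus * 3)
text = generate(model, seed="it was ")
```

The Markov chain only remembers the last few characters; the appeal of an LSTM is precisely that it can carry context much further, though, as the results show, still not far enough for plot.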
The results are… mixed.
The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin according to the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds, becoming surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.
Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.
A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.
The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. In an LSTM, the information stored in each cell is preserved by default; gates allow only small parts to be "forgotten" or "learned" at each timestep, rather than the cell state being entirely overwritten.
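That gating behavior can be sketched for a single scalar LSTM unit. The weights here are arbitrary placeholders; a real network has vector states and learned parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One timestep of a single-unit LSTM cell (scalar weights for clarity).
    The forget gate decides how much old cell state survives; the input gate
    decides how much new information is written in, so the state is mostly
    preserved rather than overwritten."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g   # cell state: partly kept, partly updated
    h = o * math.tanh(c)     # hidden state exposed to the next timestep
    return h, c

# Placeholder weights, purely for demonstration.
weights = {k: 0.5 for k in
           ("wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, w=weights)
```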
The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.
In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.
The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem with conversational artificial intelligence that computer scientists have been trying to solve since Turing's day. Machines can recognize and reproduce complex patterns increasingly well, often better than humans can, but they have no understanding of what those patterns mean.
Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.
When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.
Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.
Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.
Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”
AI Is Still No Kerouac
A coherent tone and semantic "style" might be enough to produce some vaguely convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.
Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.
Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.
Neural networks might not be capable of writing intricately plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake mish-mash.
You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.
Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”
Image Credit: eurobanks / Shutterstock.com
#433728 AI Is Kicking Space Exploration into ...
Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.
“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.
Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.
The Earth Observer 1 (EO-1) satellite is a good example. Since its launch in the early 2000s, its onboard AI systems have helped optimize analysis of and responses to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew even knew the event had taken place.
Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) has assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands more faint, low-resolution objects than a human would have been able to. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses that play a crucial role in research into dark matter.
AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important in relation to getting the most out of the available data. ESA’s ENVISAT produces around 400 terabytes of new data every year—but will be dwarfed by the Square Kilometre Array, which will produce around the same amount of data that is currently on the internet in a day.
AI Readying For Mars
AI is also being used for trajectory and payload optimization. Both are important preliminary steps to NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.
An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.
“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.
AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.
“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.
First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter’s moons have long fascinated scientists, especially Europa, which could house a subsurface ocean buried beneath an ice crust roughly 10 km thick. It is one of the most likely candidates for finding life elsewhere in the solar system.
While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit of around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilo mirror.
The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Examples from the Mars rover missions show that communication between a rover and Earth can take around 20 minutes because of the vast distance. A Europa mission would face much longer communication delays.
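The delay is simple light-travel arithmetic. The distances below are representative averages (actual values swing widely with orbital positions):

```python
C_KM_PER_S = 299_792  # speed of light in km/s

def one_way_delay_minutes(distance_km):
    """Minimum one-way signal delay: distance divided by the speed of light."""
    return distance_km / C_KM_PER_S / 60

# Representative average distances; real values vary with orbital geometry.
mars_km = 225_000_000     # average Earth-Mars distance
europa_km = 628_000_000   # average Earth-Jupiter (and so Europa) distance

mars_delay = one_way_delay_minutes(mars_km)      # roughly 12-13 minutes
europa_delay = one_way_delay_minutes(europa_km)  # roughly 35 minutes
```

A round-trip question-and-answer to Europa thus takes over an hour at best, which is why a lander there cannot wait for ground control before reacting.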
Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.
Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of memory sounds a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but it’s actually the ‘brain’ that will be onboard the next rover.
Private Companies Taking Off
Private companies are helping to push those limitations. CB Insights charts 57 startups in the space-space, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.
David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.
“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.
Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.
Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.
As is often the case with exponential technologies, there is a great potential for synergies and convergence. For example with AI and robotics, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that by using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.
One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.
“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”
Image Credit: Taily / Shutterstock.com
#433696 3 Big Ways Tech Is Disrupting Global ...
Disruptive business models are often powered by alternative financing. In Part 1 of this series, I discussed how mobile is redefining money and banking and shared some of the dramatic transformations in the global remittance infrastructure.
In this article, we’ll discuss:
Peer-to-peer lending
AI financial advisors and robo traders
Seamless Transactions
Let’s dive right back in…
Decentralized Lending = Democratized Access to Finances
Peer-to-peer (P2P) lending is an age-old practice, traditionally with high risk and extreme locality. Now, the P2P funding model is being digitized and delocalized, bringing lending online and across borders.
Zopa, the first official crowdlending platform, arrived in the United Kingdom in 2004. Since then, the consumer crowdlending platform has facilitated over 3 billion euros (~$3.5 billion) in loans.
Person-to-business crowdlending took off, again in the U.K., in 2005 with Funding Circle, now with over 5 billion euros (~$5.8 billion) of capital loaned to small businesses around the world.
Crowdlending next took off in the US in 2006, with platforms like Prosper and Lending Club. The US crowdlending industry has boomed to $21 billion in loans, across 515,000 loans.
Let’s take a step back… to a time before banks, when lending took place between trusted neighbors in small villages across the globe. Lending started as peer-to-peer transactions.
As villages turned into towns, towns turned into cities, and cities turned into sprawling metropolises, neighborly trust and the ability to communicate across urban landscapes broke down. That’s where banks and other financial institutions came into play—to add trust back into the lending equation.
With crowdlending, we are evidently returning to this pre-centralized-banking model of loans, and moving away from cumbersome intermediaries and their high fees, regulations, and extra complexity.
Fueled by the permeation of the internet, P2P lending took on a new form as ‘crowdlending’ in the early 2000s. Now, as blockchain and artificial intelligence arrive on the digital scene, P2P lending platforms are being overhauled with transparency, accountability, reliability, and immutability.
Artificial Intelligence Micro Lending & Credit Scores
We are beginning to augment our quantitative decision-making with neural networks processing borrowers’ financial data to determine their financial ‘fate’ (or, as some call it, your credit score). Companies like Smart Finance Group (backed by Kai-Fu Lee and Sinovation Ventures) are using artificial intelligence to minimize default rates for tens of millions of microloans.
Smart Finance is fueled by users’ personal data, particularly smartphone data and usage behavior. Users are required to give Smart Finance access to their smartphone data, so that Smart Finance’s artificial intelligence engine can generate a credit score from the personal information.
The benefits of this AI-powered lending platform do not stop at increased loan payback rates; there’s a massive speed increase as well. Smart Finance loans are frequently approved in under eight seconds. As we’ve seen with other artificial intelligence disruptions, data is the new gold.
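Conceptually, such a scoring engine reduces to weighting behavioral features and squashing the result into a repayment probability. The feature names and weights below are entirely invented for illustration; Smart Finance's actual model is proprietary:

```python
import math

def credit_score(features, weights, bias=0.0):
    """Hypothetical logistic scoring: weight phone-usage signals and squash
    the sum into a 0-1 repayment probability. All features are made up."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Invented feature weights; a real model learns these from repayment data.
weights = {"months_on_device": 0.05, "contacts_count_norm": 0.8,
           "late_night_usage": -0.6, "app_diversity": 0.4}

applicant = {"months_on_device": 24, "contacts_count_norm": 0.7,
             "late_night_usage": 0.2, "app_diversity": 0.5}
score = credit_score(applicant, weights)
```

Because the scoring is a single arithmetic pass over precollected data, sub-ten-second approvals like Smart Finance's are plausible; the hard part is learning weights that actually predict repayment.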
Digitizing access to P2P loans paves the way for billions of people currently without access to banking to leapfrog the centralized banking system, just as Africa bypassed landline phones and went straight to mobile. Leapfrogging centralized banking and the credit system is exactly what Smart Finance has done for hundreds of millions of people in China.
Blockchain-Backed Crowdlending
As artificial intelligence accesses even the most mundane mobile browsing data to assign credit scores, blockchain technologies, particularly immutable ledgers and smart contracts, are massive disruptors to the archaic banking system, building additional trust and transparency on top of current P2P lending models.
Immutable ledgers provide the necessary transparency for accurate credit and loan defaulting history. Smart contracts executed on these immutable ledgers bring the critical ability to digitally replace cumbersome, expensive third parties (like banks), allowing individual borrowers or businesses to directly connect with willing lenders.
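A minimal sketch of an immutable loan ledger shows why tampering is detectable: each entry's hash commits to the previous entry, so rewriting history breaks every later hash. This is a toy, not ETHLend's or SALT's actual implementation:

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with its predecessor's hash, chaining entries."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class LoanLedger:
    """Append-only ledger: altering any historical record invalidates the chain."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record):
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((record, block_hash(record, prev)))

    def verify(self):
        prev = "genesis"
        for record, h in self.chain:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = LoanLedger()
ledger.append({"borrower": "alice", "lender": "bob", "amount": 500})
ledger.append({"borrower": "alice", "event": "repaid", "amount": 500})
intact = ledger.verify()

# Tamper with history: quietly shrink the original loan amount.
ledger.chain[0] = ({"borrower": "alice", "lender": "bob", "amount": 5},
                   ledger.chain[0][1])
tampered_ok = ledger.verify()
```

A smart contract adds executable terms on top of such a chain (release funds when conditions are met), which is what lets borrowers and lenders connect without a bank in the middle.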
Two of the leading blockchain platforms for P2P lending are ETHLend and SALT Lending.
ETHLend is an Ethereum-based decentralized application aiming to bring transparency and trust to P2P lending through Ethereum network smart contracts.
Secure Automated Lending Technology (SALT) allows cryptocurrency asset holders to use their digital assets as collateral for cash loans, without the need to liquidate their holdings, giving rise to a digital-asset-backed lending market.
While blockchain poses a threat to many of the large, centralized banking institutions, some are taking advantage of the new technology to optimize their internal lending, credit scoring, and collateral operations.
In March 2018, ING and Credit Suisse successfully exchanged 25 million euros using HQLA-X, a blockchain-based collateral lending platform.
HQLA-X runs on the R3 Corda blockchain, a platform designed specifically to help heritage financial and commerce institutions migrate away from their inefficient legacy financial infrastructure.
Blockchain and tokenization are going through their own fintech and regulation shakeup right now. In a future blog, I’ll discuss the various efforts to more readily assure smart contracts, and the disruptive business model of security tokens and the US Securities and Exchange Commission.
Parallels to the Global Abundance of Capital
The abundance of capital being created by the advent of P2P loans closely relates to the unprecedented global abundance of capital.
Initial coin offerings (ICOs) and crowdfunding are mounting a strong challenge to the $164 billion venture capital market. The total amount invested in ICOs has risen from $6.6 billion in 2017 to $7.15 billion in the first half of 2018 alone. Crowdfunding helped projects raise more than $34 billion in 2017, with experts projecting that global crowdfunding investments will reach $300 billion by 2025.
In the last year alone, using ICOs, over a dozen projects have raised hundreds of millions of dollars in mere hours. Take Filecoin, for example, which raised $257 million in only 30 days; its first $135 million was raised in the first hour. Similarly, the Dragon Coin project (which itself is revolutionizing remittance in high-stakes casinos around the world) raised $320 million in its 30-day public ICO.
Some Important Takeaways…
Technology-backed fundraising and financial services are disrupting the world’s largest financial institutions. Anyone, anywhere, at any time will be able to access the capital they need to pursue their idea.
The speed at which we can go from “I’ve got an idea” to “I run a billion-dollar company” is moving faster than ever.
Following Ray Kurzweil’s Law of Accelerating Returns, the rapid decrease in time to access capital is intimately linked to (and greatly dependent on) a financial infrastructure (technology, institutions, platforms, and policies) that can adapt and evolve just as rapidly.
This new abundance of capital requires financial decision-making with ever-higher market prediction precision. That’s exactly where artificial intelligence is already playing a massive role.
Artificial Intelligence, Robo Traders, and Financial Advisors
On May 6, 2010, the Dow Jones Industrial Average suddenly collapsed by 998.5 points (equal to 8 percent, or $1 trillion). The crash lasted over 35 minutes and is now known as the ‘Flash Crash’. While no one knows the specific reason for this 2010 stock market anomaly, experts widely agree that the Flash Crash had to do with algorithmic trading.
With the ability to have instant, trillion-dollar market impacts, algorithmic trading and artificial intelligence are undoubtedly ingrained in how financial markets operate.
In 2017, CNBC.com estimated that 90 percent of daily stock trading volume is executed by machine algorithms, with only 10 percent carried out directly by humans.
Artificial intelligence and financial management algorithms are not only available to top Wall Street players.
Robo-advisor financial management apps, like Wealthfront and Betterment, are rapidly permeating the global market. Wealthfront currently has $9.5 billion in assets under management, and Betterment has $10 billion.
Artificially intelligent financial agents are already helping financial institutions protect your money and fight fraud. A prime application for machine learning is detecting anomalies in your spending and transaction habits, and flagging potentially fraudulent transactions.
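A bare-bones version of such anomaly flagging scores each new charge by how many standard deviations it sits from the customer's typical spend. Real systems use many more features than the amount alone (merchant, location, time of day); this is only a sketch:

```python
import statistics

def flag_anomalies(history, new_charges, z_threshold=3.0):
    """Flag charges far outside the customer's typical spending pattern.
    Amount-only scoring is illustrative; production models use many signals."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    flagged = []
    for amount in new_charges:
        z = abs(amount - mu) / sigma if sigma else float("inf")
        if z > z_threshold:
            flagged.append(amount)
    return flagged

# Typical weekly card charges for one customer, then three new charges.
history = [12.5, 40.0, 23.8, 31.0, 18.2, 27.5, 35.0, 22.1]
flagged = flag_anomalies(history, [25.0, 30.0, 950.0])
```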
As artificial intelligence continues to exponentially increase in power and capabilities, increasingly powerful trading and financial management bots will come online, finding massive new and previously lost streams of wealth.
How else are artificial intelligence and automation transforming finance?
Disruptive Remittance and Seamless Transactions
When was the last time you paid in cash at a toll booth? How about for a taxi ride?
EZ-Pass, the electronic tolling system used extensively on the East Coast, has done wonders to reduce congestion and improve traffic flow.
Driving down I-95 on the East Coast of the United States, drivers rarely notice their financial transaction with the state’s tolling agencies. The transactions are seamless.
The Uber app enables me to travel without my wallet. I can forget about payment on my trip, freeing up mental bandwidth and time for higher-priority tasks. The entire process is digitized and, by extension, automated and integrated into Uber’s platform (Note: This incredible convenience many times causes me to accidentally walk out of taxi cabs without paying!).
In January 2018, we saw the success of the first cutting-edge, AI-powered Amazon Go store open in Seattle, Washington. The store marked a new era in remittance and transactions. Gone are the days of carrying credit cards and cash, and gone are the cash registers. And now, on the heels of these early ‘beta-tests’, Amazon is considering opening as many as 3,000 of these cashierless stores by 2023.
Amazon Go stores use AI algorithms that watch various video feeds (from advanced cameras) throughout the store to identify who picks up groceries, exactly what products they select, and how much to charge that person when they walk out of the store. It’s a grab and go experience.
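The settlement step reduces to turning a stream of inferred take/return events into a bill. The event format and prices below are hypothetical, not Amazon's actual system:

```python
from collections import Counter

PRICES = {"milk": 2.49, "bread": 1.99, "apples": 3.50}  # illustrative catalog

def settle(events):
    """Turn camera-inferred (action, item) events into a final charge.
    'take' adds an item to the virtual cart; 'return' removes it."""
    cart = Counter()
    for action, item in events:
        if action == "take":
            cart[item] += 1
        elif action == "return" and cart[item] > 0:
            cart[item] -= 1
    return round(sum(PRICES[item] * n for item, n in cart.items()), 2)

# A shopper grabs three items, then puts the bread back on the shelf.
total = settle([("take", "milk"), ("take", "bread"),
                ("take", "apples"), ("return", "bread")])
```

The hard problem is upstream of this arithmetic: the computer vision that infers who took what, which is where the AI actually lives.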
Let’s extrapolate the notion of seamless, integrated payment systems from Amazon Go and Uber’s removal of post-ride payment to the rest of our day-to-day experience.
Imagine this near future:
As you near the front door of your home, your AI assistant summons a self-driving Uber that takes you to the Hyperloop station (after all, you work in L.A. but live in San Francisco).
At the station, you board your pod, without noticing that your ticket purchase was settled via a wireless payment checkpoint.
After work, you stop at the Amazon Go and pick up dinner. Your virtual AI assistant passes your Amazon account information to the store’s payment checkpoint, as the store’s cameras and sensors track you and your cart and charge you auto-magically.
At home, unbeknownst to you, your AI has already restocked your fridge and pantry with whatever items you failed to pick up at the Amazon Go.
Once we remove the actively transacting aspect of finance, what else becomes possible?
Top Conclusions
Extraordinary transformations are happening in the finance world. We’ve only scratched the surface of the fintech revolution. All of these transformative financial technologies require high-fidelity assurance, robust insurance, and a mechanism for storing value.
I’ll dive into each of these other facets of financial services in future articles.
For now, thanks to global communication networks being deployed via 5G, Alphabet’s Loon, SpaceX’s Starlink, and OneWeb, by 2024 nearly all 8 billion people on Earth will be online.
Once connected, these new minds, entrepreneurs, and customers need access to money and financial services to meaningfully participate in the world economy.
By connecting lenders and borrowers around the globe, decentralized lending drives down global interest rates, increases global financial market participation, and enables economic opportunity to the billions of people who are about to come online.
We’re living in the most abundant time in human history, and fintech is just getting started.
Join Me
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Novikov Aleksey / Shutterstock.com