Tag Archives: cars

#433911 Thanksgiving Food for Thought: The Tech ...

With the Thanksgiving holiday upon us, it’s a great time to reflect on the future of food. Over the last few years, we have seen a dramatic rise in exponential technologies transforming the food industry from seed to plate. Food is important in many ways—too little or too much of it can kill us, and it is often at the heart of family, culture, our daily routines, and our biggest celebrations. The agriculture and food industries are also two of the world’s biggest employers. Let’s take a look to see what is in store for the future.

Robotic Farms
Over the last few years, we have seen a number of new companies emerge in the robotic farming industry. This includes new types of farming equipment used in arable fields, as well as indoor robotic vertical farms. In November 2017, Hands Free Hectare became the first in the world to remotely grow an arable crop. They used autonomous tractors to sow and spray crops, small rovers to take soil samples, drones to monitor crop growth, and an unmanned combine harvester to collect the crops. Since then, they’ve also grown and harvested a field of winter wheat, and have been adding additional technologies and capabilities to their arsenal of robotic farming equipment.

Indoor vertical farming is also rapidly expanding. As Engadget reported in October 2018, a number of startups are now growing crops like leafy greens, tomatoes, flowers, and herbs. These farms can grow food in urban areas, reducing transport, water, and fertilizer costs, and often don’t need pesticides since they are indoors. Iron Ox, which is using robots to grow plants with navigation technology used by self-driving cars, can grow 30 times more food per acre of land using 90 percent less water than traditional farms. Vertical farming company Plenty was recently funded by Softbank’s Vision Fund, Jeff Bezos, and others to build 300 vertical farms in China.

These startups are not only succeeding in wealthy countries. Hello Tractor, an “uberized” tractor, has worked with 250,000 smallholder farms in Africa, creating both food security and tech-infused agriculture jobs. The World Food Program’s Innovation Accelerator (an impact partner of Singularity University) works with hundreds of startups aimed at creating zero hunger. One project is focused on supporting refugees in developing “food computers” in refugee camps—computerized devices that grow food while also adjusting to the conditions around them. As exponential trends drive down the costs of robotics, sensors, software, and energy, we should see robotic farming scaling around the world and becoming the main way farming takes place.

Cultured Meat
Exponential technologies are not only revolutionizing how we grow vegetables and grains, but also how we generate protein and meat. The new cultured meat industry is rapidly expanding, led by startups such as Memphis Meats, Mosa Meats, JUST Meat, Inc. and Finless Foods, and backed by heavyweight investors including DFJ, Bill Gates, Richard Branson, Cargill, and Tyson Foods.

Cultured meat is grown in a bioreactor using cells from an animal, a scaffold, and a culture. The process is humane and, potentially, scientists can make the meat healthier by adding vitamins, removing fat, or customizing it to an individual’s diet and health concerns. Another benefit is that cultured meats, if grown at scale, would dramatically reduce environmental destruction, pollution, and climate change caused by the livestock and fishing industries. Similar to vertical farms, cultured meat is produced using technology and can be grown anywhere, on-demand and in a decentralized way.

Similar to robotic farming equipment, bioreactors will also follow exponential trends, rapidly falling in cost. In fact, the first cultured meat hamburger (created by Singularity University faculty member Mark Post of Mosa Meats in 2013) cost $350,000. In 2018, Fast Company reported the cost was down to about $11 per burger, and the Israeli startup Future Meat Technologies predicted it would produce beef at about $2 per pound in 2020, which would be competitive with existing prices. For those who have turkey on their mind, New Harvest (one of the leading think tanks and research centers for the cultured meat and cellular agriculture industry) has funded efforts to generate a nugget of cultured turkey meat.
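Those figures imply a startlingly fast cost decline. As a rough back-of-the-envelope check (our arithmetic, not a figure from the reporting), the drop from $350,000 in 2013 to about $11 per burger in 2018 works out to roughly an eightfold cost reduction per year:

```python
# Rough estimate of the implied annual cost decline for cultured meat,
# using the figures cited above ($350,000 per burger in 2013, ~$11 in 2018).
def annual_decline_factor(start_cost, end_cost, years):
    """Return the constant factor by which cost shrinks each year."""
    return (start_cost / end_cost) ** (1 / years)

factor = annual_decline_factor(350_000, 11, 5)  # 2013 -> 2018
print(f"~{factor:.1f}x cheaper per year")  # roughly 8x per year
```

Sustained even in a much weaker form, a decline like that is what would bring cultured beef into the $2-per-pound range the article mentions.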

One outstanding question is whether cultured meat is safe to eat and how it will interact with the overall food supply chain. In the US, regulators like the Food and Drug Administration (FDA) and the US Department of Agriculture (USDA) are working out their roles in this process, with the FDA overseeing the cellular process and the USDA overseeing production and labeling.

Food Processing
Tech companies are also making great headway in streamlining food processing. Norwegian company Tomra Foods was an early leader in using image recognition, sensors, artificial intelligence, and analytics to more efficiently sort food based on shape, composition of fat, protein, and moisture, and other food safety and quality indicators. Their technologies have improved food yield by 5-10 percent, a significant gain given the company holds 25 percent of its market.

These advances are also not limited to large food companies. In 2016 Google reported how a small family farm in Japan built a world-class cucumber sorting device using their open-source machine learning tool TensorFlow. SU startup Impact Vision uses hyper-spectral imaging to analyze food quality, which increases revenues and reduces food waste and product recalls from contamination.
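The cucumber farm’s sorter was a TensorFlow image classifier; as a much simpler stand-in for the same idea, here is a toy nearest-centroid sorter over hand-measured features. The grade names, features, and centroid values are all hypothetical, purely to illustrate how measurement-based sorting works:

```python
# Toy illustration of automated produce sorting (not the farm's actual
# TensorFlow model): assign each cucumber to the grade whose "typical"
# measurements it sits closest to. All values here are hypothetical.
GRADE_CENTROIDS = {
    "premium": (32.0, 0.05),    # (length_cm, curvature)
    "standard": (25.0, 0.15),
    "irregular": (18.0, 0.40),
}

def classify(length_cm, curvature):
    def dist(c):
        # Scale curvature so both features contribute comparably.
        dl, dc = length_cm - c[0], (curvature - c[1]) * 50
        return dl * dl + dc * dc
    return min(GRADE_CENTROIDS, key=lambda g: dist(GRADE_CENTROIDS[g]))

print(classify(31.0, 0.06))  # -> premium
print(classify(17.0, 0.35))  # -> irregular
```

A real system replaces the hand-picked features with ones a neural network learns directly from camera images, but the sorting decision at the end is the same kind of nearest-match rule.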

These examples point to a question many have on their mind: will we live in a future where a few large companies use advanced technologies to grow the majority of food on the planet, or will the falling costs of these technologies allow family farms, startups, and smaller players to take part in creating a decentralized system? Currently, the future could flow either way, but it is important for smaller companies to take advantage of the most cutting-edge technology in order to stay competitive.

Food Purchasing and Delivery
In the last year, we have also seen a number of new developments in technology improving access to food. Amazon Go is opening grocery stores in Seattle, San Francisco, and Chicago where customers use an app that allows them to pick up their products and pay without going through cashier lines. Sam’s Club is not far behind, with an app that also allows a customer to purchase goods in-store.

The market for food delivery is also growing. In 2017, Morgan Stanley estimated that the online food delivery market from restaurants could grow to $32 billion by 2021, from $12 billion in 2017. Companies like Zume are pioneering robot-powered pizza making and delivery. In addition to using robotics to create affordable high-end gourmet pizzas in their shop, they also have a pizza delivery truck that can assemble and cook pizzas while driving. Their system uses predictive analytics on past customer data to prepare pizzas for certain neighborhoods before the orders even come in. In early November 2018, the Wall Street Journal estimated that Zume is valued at up to $2.25 billion.

Looking Ahead
While each of these developments is promising on its own, it’s also important to note that since all these technologies are in some way digitized and connected to the internet, the various food tech players can collaborate. In theory, self-driving delivery restaurants could share data on what they are selling to their automated farm equipment, facilitating coordination of future crops. There is a tremendous opportunity to improve efficiency, lower costs, and create an abundance of healthy, sustainable food for all.

On the other hand, these technologies are also deeply disruptive. According to the Food and Agricultural Organization of the United Nations, in 2010 about one billion people, or a third of the world’s workforce, worked in the farming and agricultural industries. We need to ensure these farmers are linked to new job opportunities, as well as facilitate collaboration between existing farming companies and technologists so that the industries can continue to grow and lead rather than be displaced.

Just as importantly, each of us might think about how these changes in the food industry might impact our own ways of life and culture. Thanksgiving celebrates community and sharing of food during a time of scarcity. Technology will help create an abundance of food and less need for communities to depend on one another. What are the ways that you will create community, sharing, and culture in this new world?

Image Credit: nikkytok / Shutterstock.com

Posted in Human Robots

#433901 The SpiNNaker Supercomputer, Modeled ...

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.

The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.

That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine that will eventually be able to simulate up to a billion neurons was switched on earlier this month.

The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.

In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.

The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the model, each core could handle 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data that contains its address, and spike timing is implicitly conveyed.
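The address-event idea is simple enough to sketch in a few lines. This is an illustrative packet layout, not SpiNNaker’s actual wire format: a spike carries only the address of the neuron that fired, and the receiver supplies the timestamp on arrival:

```python
# Sketch of Address Event Representation: a spike event is just "who fired,"
# broadcast at a moment in time. Timing is implicit in when the packet arrives.
# The 32-bit layout here is illustrative, not SpiNNaker's real format.
import struct
import time

def make_spike_packet(neuron_address: int) -> bytes:
    # The entire payload is the sending neuron's address.
    return struct.pack("<I", neuron_address)

def receive_spike(packet: bytes):
    # The receiver timestamps the event itself on arrival.
    (address,) = struct.unpack("<I", packet)
    return address, time.monotonic()

pkt = make_spike_packet(0x00AB12CD)
addr, arrival_time = receive_spike(pkt)
print(hex(addr), len(pkt))  # four bytes carry the whole message
```

Contrast this with conventional supercomputer messages, which must pack in task descriptions or data payloads; here the network fabric only ever routes tiny, uniform events.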

By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.

Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.

Even at a billion neurons, that still only represents about one percent of the human brain, so it’s still going to be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to do useful simulations of the Basal Ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.

The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.

And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.

That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.

This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.

But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to transfer between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to back that up.

At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.

For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.

Image Credit: Adrian Grosu / Shutterstock.com

Posted in Human Robots

#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and the sensors and computers needed to navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
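One plausible way to turn those returns into a roughness estimate (our assumption, not the Halo Project’s published pipeline) is to detrend the point heights along a scan strip and then measure the spread of the residuals, so that a smooth slope doesn’t register as rough terrain:

```python
# Hypothetical surface-roughness estimate from a strip of lidar height
# readings: remove the strip's mean slope, then take the spread of what's
# left. A smooth incline scores near zero; scattered returns score high.
import statistics

def roughness(heights):
    """Std-dev of point heights after removing the strip's linear trend."""
    n = len(heights)
    xs = range(n)
    mean_x, mean_h = (n - 1) / 2, statistics.fmean(heights)
    slope = sum((x - mean_x) * (h - mean_h) for x, h in zip(xs, heights)) / \
            sum((x - mean_x) ** 2 for x in xs)
    residuals = [h - (mean_h + slope * (x - mean_x))
                 for x, h in zip(xs, heights)]
    return statistics.pstdev(residuals)

smooth = [0.0, 0.01, 0.02, 0.03, 0.04]   # gentle ramp: low roughness
rocky = [0.0, 0.15, -0.1, 0.2, -0.05]    # scattered returns: high roughness
print(roughness(smooth) < roughness(rocky))
```

A real pipeline would also have to separate returns from grass and plants, which reflect lidar beams without actually obstructing the vehicle.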

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
We’ve seen some exciting early results from our research. For example, we have promising preliminary evidence that machine learning algorithms trained in simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Posted in Human Robots

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com

Posted in Human Robots

#433770 Will Tech Make Insurance Obsolete in the ...

We profit from it, we fear it, and we find it impossibly hard to quantify: risk.

While not the sexiest of industries, insurance can be a life-saving protector, pooling everyone’s premiums to safeguard against some of our greatest, most unexpected losses.

One of the most profitable industries in the world, insurance has exceeded $1.2 trillion in annual revenue in the US alone every year since 2011.

But risk is becoming predictable. And insurance is getting disrupted fast.

By 2025, we’ll be living in a trillion-sensor economy. And as we enter a world where everything is measured all the time, we’ll start to transition from protecting against damages to preventing them in the first place.

But what happens to health insurance when Big Brother is always watching? Do rates go up when you sneak a cigarette? Do they go down when you eat your vegetables?

And what happens to auto insurance when most cars are autonomous? Or life insurance when the human lifespan doubles?

For that matter, what happens to insurance brokers when blockchain makes them irrelevant?

In this article, I’ll be discussing four key transformations:

Sensors and AI replacing your traditional broker
Blockchain
The ecosystem approach
IoT and insurance connectivity

Let’s dive in.

AI and the Trillion-Sensor Economy
As sensors continue to proliferate across every context—from smart infrastructure to millions of connected home devices to medicine—smart environments will allow us to ask any question, anytime, anywhere.

And as I often explain, once your AI has access to this treasure trove of ubiquitous sensor data in real time, it will be the quality of your questions that make or break your business.

But perhaps the most exciting insurance application of AI’s convergence with sensors is in healthcare. Tremendous advances in genetic screening are empowering us with predictive knowledge about our long-term health risks.

Leading the charge in genome sequencing, Illumina predicts that in a matter of years, decoding the full human genome will drop to $100 and take merely one hour to complete. Other companies are racing to sequence your genome even faster and cheaper.

Adopting an ecosystem approach, incumbent insurers and insurtech firms will soon be able to collaborate to provide risk-minimizing services in the health sector. Using sensor data and AI-driven personalized recommendations, insurance partnerships could keep consumers healthy, dramatically reducing the cost of healthcare.

Some fear that information asymmetry will allow consumers to learn of their health risks and leave insurers in the dark. However, both parties could benefit if insurers become part of the screening process.

A remarkable example of this is Gilad Meiri’s company, Neura AI. Aiming to predict health patterns, Neura has developed machine learning algorithms that analyze data from all of a user’s connected devices (sometimes from up to 54 apps!).

Neura predicts a user’s behavior and draws staggering insights about consumers’ health risks. Meiri soon began selling his personal risk assessment tool to insurers, who could then help insured customers mitigate long-term health risks.

But artificial intelligence will impact far more than just health insurance.

In October of 2016, a claim was submitted to Lemonade, the world’s first peer-to-peer insurance company. Rather than being processed by a human, every step in this claim resolution chain—from initial triage through fraud mitigation through final payment—was handled by an AI.

This transaction marks the first time an AI has processed an insurance claim. And it won’t be the last. A traditional human-processed claim takes 40 days to pay out. In Lemonade’s case, payment was transferred within three seconds.

However, Lemonade’s achievement only marks a starting point. Over the course of the next decade, nearly every facet of the insurance industry will undergo a similarly massive transformation.

New business models like peer-to-peer insurance are replacing traditional brokerage relationships, while AI and blockchain pairings significantly reduce the layers of bureaucracy required (with each layer getting a cut) for traditional insurance.

Consider Juniper, a startup that scrapes social media to build your risk assessment, subsequently asking you 12 questions via an iPhone app. Geared with advanced analytics, the platform can generate a million-dollar life insurance policy, approved in less than five minutes.

But what’s keeping all your data from unwanted hands?

Blockchain Building Trust
Current distrust in centralized financial services has led to staggering rates of underinsurance. Add to this the fear of poor data and privacy protection, particularly in the wake of 2017’s widespread cybercriminal hacks.

Enabling secure storage and transfer of personal data, blockchain holds remarkable promise against the fraudulent activity that often plagues insurance firms.

The centralized model of insurance companies and other organizations is becoming redundant. Symbiont, which develops blockchain-based solutions for capital markets, builds smart contracts that execute payments with little to no human involvement.

But distributed ledger technology (DLT) is enabling far more than just smart contracts.

Also targeting insurance is Tradle, leveraging blockchain for its proclaimed goal of “building a trust provisioning network.” Built around “know-your-customer” (KYC) data, Tradle aims to verify KYC data so that it can be securely forwarded to other firms without any further verification.

By requiring a certain number of parties to reuse pre-verified data, the platform makes your data much less vulnerable to hacking and allows you to keep it on a personal device. Only its verification—let’s say of a transaction or medical exam—is registered in the blockchain.
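The register-only-the-fingerprint pattern can be sketched without any actual blockchain machinery. In this illustration a plain Python set stands in for the shared ledger; the point is that it stores only hashes, never the underlying KYC record, which stays on the user’s device:

```python
# Illustrative sketch of on-chain verification as described above: the ledger
# holds only a cryptographic fingerprint of the KYC record. Any firm can later
# check a record against the ledger without the ledger ever seeing raw data.
import hashlib
import json

ledger = set()  # stand-in for a blockchain: stores fingerprints only

def fingerprint(record: dict) -> str:
    # Canonical serialization so the same record always hashes identically.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def register(record: dict) -> str:
    digest = fingerprint(record)
    ledger.add(digest)
    return digest

def verify(record: dict) -> bool:
    return fingerprint(record) in ledger

kyc = {"name": "A. Customer", "passport": "X1234567", "verified_by": "Bank A"}
register(kyc)
print(verify(kyc))                                 # the original record checks out
print(verify({**kyc, "passport": "FORGED"}))       # any tampering fails
```

A real deployment would add signatures from the verifying institution and an append-only distributed ledger, but the privacy property is the same: hashes on-chain, data off-chain.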

As insurance data grow increasingly decentralized, key insurance players will experience more and more pressure to adopt an ecosystem approach.

The Ecosystem Approach
Just as exponential technologies converge to provide new services, exponential businesses must combine the strengths of different sectors to expand traditional product lines.

By partnering with platform-based insurtech firms, forward-thinking insurers will no longer serve only as reactive policy-providers, but provide risk-mitigating services as well.

Especially as digital technologies demonetize security services—think autonomous vehicles—insurers must create new value chains and span more product categories.

For instance, France’s multinational AXA recently partnered with Alibaba and Ant Financial Services to sell a varied range of insurance products on Alibaba’s global e-commerce platform at the click of a button.

Building another ecosystem, Alibaba has also collaborated with Ping An Insurance and Tencent to create ZhongAn Online Property and Casualty Insurance—China’s first internet-only insurer, offering over 300 products. Now with a multibillion-dollar valuation, ZhongAn has generated about half its business from selling shipping return insurance to Alibaba consumers.

But it doesn’t stop there. Insurers that participate in digital ecosystems can now sell risk-mitigating services that prevent damage before it occurs.

Imagine a corporate manufacturer whose sensors collect data on environmental factors affecting crop yield in an agricultural community. With the backing of investors and advanced risk analytics, such a manufacturer could sell crop insurance to farmers. By implementing an AI-driven claims interface, it could make payments automatically when sensors detect weather damage to crops.
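A parametric payout rule of that kind is, at its core, just a threshold check over sensor data. The thresholds and amounts below are hypothetical, sketching how a payment could trigger with no claims adjuster involved:

```python
# Sketch of a parametric crop-insurance payout: pay automatically when agreed
# sensor thresholds are crossed. All thresholds and amounts are hypothetical.
RAINFALL_MIN_MM = 20      # weekly rainfall below this counts as drought
WIND_MAX_KMH = 90         # peak winds above this count as storm damage
PAYOUT_PER_HECTARE = 150  # fixed in the policy up front

def weekly_payout(rain_mm, peak_wind_kmh, hectares):
    """Return the automatic payout triggered by this week's sensor readings."""
    triggered = rain_mm < RAINFALL_MIN_MM or peak_wind_kmh > WIND_MAX_KMH
    return PAYOUT_PER_HECTARE * hectares if triggered else 0

print(weekly_payout(rain_mm=35, peak_wind_kmh=40, hectares=12))  # no trigger: 0
print(weekly_payout(rain_mm=8, peak_wind_kmh=40, hectares=12))   # drought: 1800
```

Because the payout depends only on measured conditions rather than assessed losses, there is nothing to dispute and nothing to process manually, which is what makes the three-second settlement times described above plausible.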

Now let’s apply this concept to your house, your car, your health insurance.

What’s stopping insurers from partnering with third-party IoT platforms to predict fires, collisions, chronic heart disease—and then empowering the consumer with preventive services?

This brings us to the powerful field of IoT.

Internet of Things and Insurance Connectivity
Leap ahead a few years. With a centralized hub like Echo, your smart home protects itself with a network of sensors. While you’re out, you’ve left a gas burner on, and your internet-connected stove notifies you via a home app.

Better yet, home sensors monitoring heat and humidity levels run this data through an AI, which then remotely controls heating, humidity levels, and other connected devices based on historical data patterns and fire risk factors.

Several firms are already working toward this reality.

AXA plans one day to cooperate with a centralized home hub, using remote monitoring to collect data for future analysis and detect abnormalities.

With remote monitoring and app-centralized control for users, MonAXA aims to customize insurance bundles that reflect the exact security features embedded in a smart home.

Wouldn’t you prefer not to have to rely on insurance after a burglary? With digital ecosystems, insurers may soon prevent break-ins from the start.

By gathering sensor data from third parties on neighborhood conditions, historical theft data, suspicious activity and other risk factors, an insurtech firm might automatically put your smart home on high alert, activating alarms and specialized locks in advance of an attack.

Insurance premiums are predicted to fall sharply as the likelihood of insured losses drops. But insurers moving into preventive insurtech will likely turn a profit from other areas of their business. PricewaterhouseCoopers predicts that the connected home market will reach $149 billion USD by 2020.

Let’s look at car insurance.

Car insurance premiums are currently calculated according to the driver and traits of the car. But as more autonomous vehicles take to the roads, not only does liability shift to manufacturers and software engineers, but the risk of collision falls dramatically.

But let’s take this a step further.

In a future of autonomous cars, you will no longer own your car, instead subscribing to Transport as a Service (TaaS) and giving up the purchase of automotive insurance altogether.

This paradigm shift has already begun with Waymo, which automatically provides passengers with insurance every time they step into a Waymo vehicle.

And with the rise of smart traffic systems, sensor-embedded roads, and skyrocketing autonomous vehicle technology, the risks involved in transit only continue to plummet.

Final Thoughts
Insurtech firms are hitting the market fast. IoT, autonomous vehicles and genetic screening are rapidly making us invulnerable to risk. And AI-driven services are quickly pushing conventional insurers out of the market.

By 2024, the roll-out of 5G on the ground, along with OneWeb and Starlink in orbit, will bring 4.2 billion new consumers to the web—most of whom will need insurance. Yet, because of the changes afoot in the industry, none of them will buy policies from a human broker.

While today’s largest insurance companies continue to ignore this fact (and this segment of the market) at their peril, thousands of entrepreneurs see it more clearly: as one of the largest opportunities ahead.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: 24Novembers / Shutterstock.com

Posted in Human Robots