Tag Archives: features

#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smartwatches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to detecting floods.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and cruise ships, for instance. As prices fall and the artificial intelligence (AI) and mechanical technology continues to improve, we will get more and more used to them making independent decisions in our homes, streets and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a few such test sites around the EU, the apartment is wholly focused on people with special needs—and on how robots can help them by interacting with connected devices in a smart home.

Suppose a smart doorbell with video features rings. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.

Equally, it could offer protection. It could inform its user that the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or go unheard.
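To make the idea concrete, here is a minimal sketch in Python of how such a robot might react to smart-home events. The event bus, topic names, and robot methods are all hypothetical stand-ins for whatever middleware a real deployment would use (ROS, MQTT, and the like):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    topic: str      # e.g. "doorbell/ring" or "oven/left_on" (invented names)
    payload: dict

class EventBus:
    """A tiny in-process stand-in for an MQTT-style smart-home bus."""
    def __init__(self):
        self._subs: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[Event], None]):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, event: Event):
        for handler in self._subs.get(event.topic, []):
            handler(event)

class HomeRobot:
    """Hypothetical robot that locates the user and delivers alerts in person."""
    def __init__(self, bus: EventBus, locate_user: Callable[[], str]):
        self.locate_user = locate_user  # e.g. backed by in-home presence sensors
        bus.subscribe("doorbell/ring", self.on_doorbell)
        bus.subscribe("oven/left_on", self.on_oven)

    def go_to(self, room: str):
        print(f"[robot] navigating to {room}")

    def say(self, text: str):
        print(f"[robot] says: {text}")

    def on_doorbell(self, event: Event):
        self.go_to(self.locate_user())
        self.say(f"{event.payload.get('visitor', 'Someone')} is at the door.")

    def on_oven(self, event: Event):
        self.go_to(self.locate_user())
        self.say("The oven has been left on.")

bus = EventBus()
robot = HomeRobot(bus, locate_user=lambda: "living room")  # presence-sensor stub
bus.publish(Event("doorbell/ring", {"visitor": "A delivery courier"}))
bus.publish(Event("oven/left_on", {}))
```

Unlike a phone notification, the alert here is delivered wherever the user actually is, which is the point of giving the smart home a mobile body.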

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds which are crucially important for making users more accepting of the support and making it more effective.

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotic League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies, for example, are looking at the Internet of Things, experimenting with wireless sensors that collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is developing such systems. The aim is to reduce the costs and risks of humans working in remote, hazardous locations.

ORCA tests a drone robot. ORCA
Working underwater is particularly challenging, since radio waves don’t propagate well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are orders of magnitude slower (about 1,500 meters per second, versus roughly 300 million meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
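The speed gap translates directly into latency. A quick back-of-the-envelope calculation shows why acoustic links feel so sluggish over typical working distances:

```python
# One-way propagation delay for a signal over a given distance.
SOUND_IN_SEAWATER_M_S = 1_500          # acoustic waves, roughly
RADIO_IN_AIR_M_S = 300_000_000         # radio waves, roughly the speed of light

def one_way_delay_s(distance_m: float, speed_m_s: float) -> float:
    return distance_m / speed_m_s

for distance_m in (100, 1_000, 10_000):
    acoustic = one_way_delay_s(distance_m, SOUND_IN_SEAWATER_M_S)
    radio = one_way_delay_s(distance_m, RADIO_IN_AIR_M_S)
    print(f"{distance_m:>6} m: acoustic {acoustic:8.4f} s vs radio {radio:.7f} s")

# At 10 km, an acoustic message needs about 6.7 seconds one way; a radio
# signal would cover the same distance in roughly 33 microseconds.
```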

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. There needs to be more effort to create a joint community, such as the dedicated workshops for such collaboration that we organized at the European Robotics Forum and the IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics with the Internet of Things, the combination genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry/Shutterstock.com


#433907 How the Spatial Web Will Fix What’s ...

Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally-enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.

This is the age of Web 3.0.

Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.

To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.

During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.

Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.

And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.

In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:

News & Media Coverage
Smart Advertising
Personalized Retail

Let’s dive in.

Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.

The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll more likely be searching for a cave to hide in, rather than technology for the launch of your next business.

Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.

But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?

Enter mesh networks, AI, public ledgers, and virtual reality.

While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
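As a toy illustration of the idea (real mesh protocols such as 802.11s or B.A.T.M.A.N. are far more sophisticated), here is a sketch of non-hierarchical flooding, where each device relays a message to its neighbors exactly once:

```python
# Toy flooding broadcast over a non-hierarchical mesh: every node relays a
# message to its neighbors exactly once, so it spreads with no central router.
from collections import deque

# Adjacency list: who is in radio range of whom (hypothetical topology).
mesh = {
    "phone_a": ["phone_b", "phone_c"],
    "phone_b": ["phone_a", "phone_d"],
    "phone_c": ["phone_a", "phone_d"],
    "phone_d": ["phone_b", "phone_c", "phone_e"],
    "phone_e": ["phone_d"],
}

def flood(origin: str, message: str):
    seen = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in mesh[node]:
            if neighbor not in seen:           # relay each message only once
                seen.add(neighbor)
                print(f"{node} -> {neighbor}: {message}")
                queue.append(neighbor)
    return seen

reached = flood("phone_a", "360-degree feed available at phone_a")
assert reached == set(mesh)  # every node hears it, with no access point at all
```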

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.

Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.

We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education of current events.

And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.

The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.

Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.

Currently, targeted advertising leverages tomes of personal data and online consumer behavior to subtly engage you with products you might not want, or sell you on falsely advertised services promising inaccurate results.

With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.

And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but covert advertiser bombardment without your authorization.

Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.

But perhaps most exciting is the convergence of artificial intelligence and augmented reality.

As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.

Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.

As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and get textual input instead, and preferences are reset to list appliance properties visually.

After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
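Here is a rough sketch of the bookkeeping behind such a purchase: an append-only, hash-chained ledger records the payment and the ownership transfer in one step. A real deployment would run on an actual blockchain platform; the wallet and item structures below are invented for illustration:

```python
# Toy "smart contract" for the fridge purchase: an append-only ledger records
# the payment and the ownership transfer atomically.
import hashlib, json, time

ledger = []  # append-only list of blocks, each chained to the previous hash

def append_block(record: dict) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body

def purchase(buyer_wallet: dict, item: dict, price: float) -> dict:
    if buyer_wallet["balance"] < price:
        raise ValueError("insufficient funds; contract not executed")
    buyer_wallet["balance"] -= price
    return append_block({
        "event": "ownership_transfer",
        "item_id": item["id"],
        "new_owner": buyer_wallet["owner"],
        "price": price,
    })

wallet = {"owner": "you", "balance": 2_000.0}   # hypothetical smart wallet
fridge = {"id": "fridge-42"}
block = purchase(wallet, fridge, price=1_499.0)
print("ownership recorded:", block["record"], "hash:", block["hash"][:12])
```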

Better yet, you head over to a friend’s home for dinner after moving into the neighborhood. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.

But what if you’d rather not get auto-populated product info in the first place? No problem!

Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.

And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and a lower-valued advertising profile causes an ebb in advertiser input.

With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.

This brings us to the transformative future of retail.

Personalized Retail: The Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.

As I pick out my selection, my AI assistant hones its algorithm to reflect my new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.

Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.

After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).

Already, several big players have broken into the VR market. Just this year, Walmart has announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.

And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.

But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.

Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning…

As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brand, and price points.

Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly-customized piece, droning it to my doorstep. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any user confusions are communicated to a company database.

Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.

In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com


#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
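For readers curious what that training step might look like, here is a minimal sketch in the style of PyTorch. The tiny network and random tensors are placeholders for real simulator frames and labels, not the actual models we use:

```python
# Minimal sketch of training a per-pixel scene classifier on simulated frames.
# Random tensors stand in for rendered camera images and their labels
# (e.g., 0=sky, 1=tree, 2=open path, 3=obstacle); the real pipeline would pull
# these from the simulator instead.
import torch
import torch.nn as nn

NUM_CLASSES = 4

model = nn.Sequential(              # a toy fully-convolutional segmenter
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.rand(8, 3, 64, 64)                    # simulated frames
    labels = torch.randint(0, NUM_CLASSES, (8, 64, 64))  # per-pixel classes
    logits = model(images)                               # (8, C, 64, 64)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training on simulated scenes, the same weights would be loaded onto
# the test vehicle and run against real camera feeds.
```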

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and the sensors and computers it needs to navigate various off-road environments. The Halo Project car carries additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided
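One simple way to turn such sweeps into a roughness estimate is to fit the underlying grade and measure how far the height readings scatter around it. The sketch below does this for a synthetic 2D profile; it is only an illustration of the idea, not our actual processing pipeline:

```python
# Sketch: estimate ground roughness from one lidar sweep as the spread of
# height readings around a fitted line. Real processing would fuse both
# sensors and work in 3D; this 2D profile version just shows the idea.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 200)            # distance ahead of the vehicle (m)
ground = 0.05 * x                         # gentle uphill slope
z = ground + rng.normal(scale=0.03, size=x.size)  # bumps, grass, debris

slope, intercept = np.polyfit(x, z, deg=1)        # fit the underlying grade
residuals = z - (slope * x + intercept)
roughness_m = residuals.std()                     # RMS deviation from grade

print(f"estimated grade: {slope*100:.1f}%  roughness: {roughness_m*100:.1f} cm")
```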
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND


#433772 Explore the World’s Coolest Robots, ...

New IEEE site features 200 robots from 19 countries, with hundreds of photos, videos, and interactives to get people excited about robotics and STEM.


#433770 Will Tech Make Insurance Obsolete in the ...

We profit from it, we fear it, and we find it impossibly hard to quantify: risk.

While not the sexiest of industries, insurance can be a life-saving protector, pooling everyone’s premiums to safeguard against some of our greatest, most unexpected losses.

One of the most profitable industries in the world, insurance has generated more than $1.2 trillion in annual revenue every year since 2011 in the US alone.

But risk is becoming predictable. And insurance is getting disrupted fast.

By 2025, we’ll be living in a trillion-sensor economy. And as we enter a world where everything is measured all the time, we’ll start to transition from protecting against damages to preventing them in the first place.

But what happens to health insurance when Big Brother is always watching? Do rates go up when you sneak a cigarette? Do they go down when you eat your vegetables?

And what happens to auto insurance when most cars are autonomous? Or life insurance when the human lifespan doubles?

For that matter, what happens to insurance brokers when blockchain makes them irrelevant?

In this article, I’ll be discussing four key transformations:

Sensors and AI replacing your traditional broker
Blockchain
The ecosystem approach
IoT and insurance connectivity

Let’s dive in.

AI and the Trillion-Sensor Economy
As sensors continue to proliferate across every context—from smart infrastructure to millions of connected home devices to medicine—smart environments will allow us to ask any question, anytime, anywhere.

And as I often explain, once your AI has access to this treasure trove of ubiquitous sensor data in real time, it will be the quality of your questions that make or break your business.

But perhaps the most exciting insurance application of AI’s convergence with sensors is in healthcare. Tremendous advances in genetic screening are empowering us with predictive knowledge about our long-term health risks.

Leading the charge in genome sequencing, Illumina predicts that in a matter of years, decoding the full human genome will drop to $100 and take merely one hour to complete. Other companies are racing to sequence your genome even faster and cheaper.

Adopting an ecosystem approach, incumbent insurers and insurtech firms will soon be able to collaborate to provide risk-minimizing services in the health sector. Using sensor data and AI-driven personalized recommendations, insurance partnerships could keep consumers healthy, dramatically reducing the cost of healthcare.

Some fear that information asymmetry will allow consumers to learn of their health risks and leave insurers in the dark. However, both parties could benefit if insurers become part of the screening process.

A remarkable example of this is Gilad Meiri’s company, Neura AI. Aiming to predict health patterns, Neura has developed machine learning algorithms that analyze data from all of a user’s connected devices (sometimes from up to 54 apps!).

Neura predicts a user’s behavior and draws staggering insights about consumers’ health risks. Meiri soon began selling his personal risk assessment tool to insurers, who could then help insured customers mitigate long-term health risks.

But artificial intelligence will impact far more than just health insurance.

In October of 2016, a claim was submitted to Lemonade, the world’s first peer-to-peer insurance company. Rather than being processed by a human, every step in this claim resolution chain—from initial triage through fraud mitigation through final payment—was handled by an AI.

This transaction marks the first time an AI has processed an insurance claim. And it won’t be the last. A traditional human-processed claim takes 40 days to pay out. In Lemonade’s case, payment was transferred within three seconds.
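A toy version of that triage-fraud-payment chain might look like the following. The thresholds and scoring rules are invented placeholders, not Lemonade's actual models:

```python
# Toy claim pipeline in the spirit described above: triage, fraud screening,
# then instant payment.
from dataclasses import dataclass

@dataclass
class Claim:
    policy_id: str
    amount: float
    description: str

def triage(claim: Claim) -> bool:
    # Auto-handle small, simple claims; route the rest to a human adjuster.
    return claim.amount <= 2_000

def fraud_score(claim: Claim) -> float:
    # Placeholder heuristic; a real system would use trained models.
    score = 0.0
    if claim.amount > 1_500:
        score += 0.4
    if "stolen" in claim.description.lower():
        score += 0.2
    return score

def process(claim: Claim) -> str:
    if not triage(claim):
        return "escalated to human adjuster"
    if fraud_score(claim) > 0.5:
        return "flagged for investigation"
    return f"paid ${claim.amount:,.2f} in seconds"

print(process(Claim("P-1", 979.0, "stolen winter coat")))
```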

However, Lemonade’s achievement only marks a starting point. Over the course of the next decade, nearly every facet of the insurance industry will undergo a similarly massive transformation.

New business models like peer-to-peer insurance are replacing traditional brokerage relationships, while AI and blockchain pairings significantly reduce the layers of bureaucracy required (with each layer getting a cut) for traditional insurance.

Consider Juniper, a startup that scrapes social media to build your risk assessment, subsequently asking you 12 questions via an iPhone app. Geared with advanced analytics, the platform can generate a million-dollar life insurance policy, approved in less than five minutes.

But what’s keeping all your data from unwanted hands?

Blockchain Building Trust
Current distrust in centralized financial services has led to staggering rates of underinsurance. Add to this the fear of poor data and privacy protection, particularly in the wake of 2017’s widespread cybercriminal hacks.

Enabling secure storage and transfer of personal data, blockchain holds remarkable promise against the fraudulent activity that often plagues insurance firms.

The centralized model of insurance companies and other organizations is becoming redundant. Symbiont, for example, which develops blockchain-based solutions for capital markets, builds smart contracts that execute payments with little to no human involvement.

But distributed ledger technology (DLT) is enabling far more than just smart contracts.

Also targeting insurance is Tradle, leveraging blockchain for its proclaimed goal of “building a trust provisioning network.” Built around “know-your-customer” (KYC) data, Tradle aims to verify KYC data so that it can be securely forwarded to other firms without any further verification.

By requiring a certain number of parties to reuse pre-verified data, the platform makes your data much less vulnerable to hacking and allows you to keep it on a personal device. Only its verification—let’s say of a transaction or medical exam—is registered in the blockchain.
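The core trick is easy to sketch: hash the verified record (with a salt kept off-chain) and anchor only the digest. Here is a simplified illustration, with the ledger replaced by an in-memory dictionary:

```python
# Sketch of the "register only the verification" idea: the KYC record stays
# on the user's device, and only a salted digest of the verified record is
# anchored on a (here, simulated) ledger for other firms to check against.
import hashlib, json, os

chain_registry = {}  # stands in for an on-chain mapping: digest -> verifier

def digest(record: dict, salt: bytes) -> str:
    blob = salt + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# A verifier (e.g., a bank) checks the documents once, then anchors the digest.
record = {"name": "A. Customer", "dob": "1990-01-01", "passport": "X123"}
salt = os.urandom(16)                      # kept with the record, off-chain
chain_registry[digest(record, salt)] = "verified_by:first_bank"

# A second firm receives the record and salt directly from the user and only
# needs to confirm the digest is anchored; no re-verification of documents.
assert digest(record, salt) in chain_registry
print("KYC accepted without repeating verification")
```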

As insurance data grow increasingly decentralized, key insurance players will experience more and more pressure to adopt an ecosystem approach.

The Ecosystem Approach
Just as exponential technologies converge to provide new services, exponential businesses must combine the strengths of different sectors to expand traditional product lines.

By partnering with platform-based insurtech firms, forward-thinking insurers will no longer serve only as reactive policy-providers, but provide risk-mitigating services as well.

Especially as digital technologies demonetize security services—think autonomous vehicles—insurers must create new value chains and span more product categories.

For instance, France’s multinational AXA recently partnered with Alibaba and Ant Financial Services to sell a varied range of insurance products on Alibaba’s global e-commerce platform at the click of a button.

Building another ecosystem, Alibaba has also collaborated with Ping An Insurance and Tencent to create ZhongAn Online Property and Casualty Insurance—China’s first internet-only insurer, offering over 300 products. Now with a multibillion-dollar valuation, ZhongAn has generated about half its business from selling shipping-return insurance to Alibaba consumers.

But it doesn’t stop there. Insurers that participate in digital ecosystems can now sell risk-mitigating services that prevent damage before it occurs.

Imagine a corporate manufacturer whose sensors collect data on environmental factors affecting crop yield in an agricultural community. With the backing of investors and advanced risk analytics, such a manufacturer could sell crop insurance to farmers. By implementing an automated, AI-driven UI, they could automatically make payments when sensors detect weather damage to crops.
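Such "parametric" policies reduce claims handling to a rule over sensor readings. A minimal sketch, with invented thresholds and payout amounts:

```python
# Sketch of a parametric crop policy: payouts trigger automatically when
# sensor readings cross agreed thresholds, with no claims process at all.
def parametric_payout(readings: dict, policy: dict) -> float:
    payout = 0.0
    if readings["rainfall_mm_30d"] < policy["drought_mm"]:
        payout += policy["drought_payout"]
    if readings["wind_gust_kmh"] > policy["storm_kmh"]:
        payout += policy["storm_payout"]
    return payout

policy = {"drought_mm": 20, "drought_payout": 5_000.0,
          "storm_kmh": 90, "storm_payout": 8_000.0}

sensor_readings = {"rainfall_mm_30d": 12, "wind_gust_kmh": 105}
print(f"automatic payout: ${parametric_payout(sensor_readings, policy):,.2f}")
# -> automatic payout: $13,000.00
```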

Now let’s apply this concept to your house, your car, your health insurance.

What’s stopping insurers from partnering with third-party IoT platforms to predict fires, collisions, chronic heart disease—and then empowering the consumer with preventive services?

This brings us to the powerful field of IoT.

Internet of Things and Insurance Connectivity
Leap ahead a few years. With a centralized hub like Echo, your smart home protects itself with a network of sensors. Say you’ve left a gas burner on while you’re out; your internet-connected stove notifies you via a home app.

Better yet, home sensors monitoring heat and humidity levels run this data through an AI, which then remotely controls heating, humidity levels, and other connected devices based on historical data patterns and fire risk factors.

Several firms are already working toward this reality.

AXA plans to one day cooperate with a centralized home hub whereby remote monitoring will collect data for future analysis and detect abnormalities.

With remote monitoring and app-centralized control for users, MonAXA aims to customize insurance bundles that reflect the exact security features embedded in each smart home.

Wouldn’t you prefer not to have to rely on insurance after a burglary? With digital ecosystems, insurers may soon prevent break-ins from the start.

By gathering sensor data from third parties on neighborhood conditions, historical theft data, suspicious activity and other risk factors, an insurtech firm might automatically put your smart home on high alert, activating alarms and specialized locks in advance of an attack.
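Under the hood, that could be as simple as a weighted risk score armed against a threshold. A sketch with purely illustrative signals and weights:

```python
# Sketch of preventive alerting: combine third-party risk signals into one
# score and arm countermeasures above a threshold.
RISK_WEIGHTS = {
    "neighborhood_thefts_30d": 0.05,   # per reported incident nearby
    "suspicious_motion_events": 0.15,  # per event at the property tonight
    "occupants_away": 0.30,            # flat bump when the home is empty
}

def risk_score(signals: dict) -> float:
    score = signals["neighborhood_thefts_30d"] * RISK_WEIGHTS["neighborhood_thefts_30d"]
    score += signals["suspicious_motion_events"] * RISK_WEIGHTS["suspicious_motion_events"]
    if signals["occupants_away"]:
        score += RISK_WEIGHTS["occupants_away"]
    return score

signals = {"neighborhood_thefts_30d": 4,
           "suspicious_motion_events": 2,
           "occupants_away": True}
score = risk_score(signals)
if score >= 0.7:
    print(f"risk {score:.2f}: arming alarms and engaging specialized locks")
else:
    print(f"risk {score:.2f}: normal monitoring")
```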

Insurance premiums are predicted to fall sharply as insured losses become less likely. But insurers moving into preventive insurtech will likely turn a profit from other areas of their business. PricewaterhouseCoopers predicts that the connected home market will reach $149 billion by 2020.

Let’s look at car insurance.

Car insurance premiums are currently calculated according to the driver and traits of the car. But as more autonomous vehicles take to the roads, not only does liability shift to manufacturers and software engineers, but the risk of collision falls dramatically.

But let’s take this a step further.

In a future of autonomous cars, you will no longer own your car, instead subscribing to Transport as a Service (TaaS) and giving up the purchase of automotive insurance altogether.

This paradigm shift has already begun with Waymo, which automatically provides passengers with insurance every time they step into a Waymo vehicle.

And with the rise of smart traffic systems, sensor-embedded roads, and skyrocketing autonomous vehicle technology, the risks involved in transit only continue to plummet.

Final Thoughts
Insurtech firms are hitting the market fast. IoT, autonomous vehicles, and genetic screening are rapidly shrinking the risks we face. And AI-driven services are quickly pushing conventional insurers out of the market.

By 2024, the roll-out of 5G on the ground, along with OneWeb and Starlink in orbit, will bring 4.2 billion new consumers to the web—most of whom will need insurance. Yet, because of the changes afoot in the industry, none of them will buy policies from a human broker.

While today’s largest insurance companies continue to ignore this fact (and this segment of the market) at their peril, thousands of entrepreneurs see it more clearly: as one of the largest opportunities ahead.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: 24Novembers / Shutterstock.com
