#433785 DeepMind’s Eerie Reimagination of the ...

If a recent project using Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.

But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.

From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics who worked on the project as a DeepMind intern, and DeepMind researchers Jeff Donahue and Karen Simonyan.

They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed real images of dogs. The second AI tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
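
To make the adversarial back-and-forth concrete, here is a minimal sketch of a GAN training loop in Python with PyTorch. It is emphatically not BigGAN: instead of images, the “artist” network here learns to produce samples from a simple bell-curve distribution. But the loop has the same shape, with the discriminator learning to separate real from fake while the generator learns to fool it.

```python
# Minimal GAN sketch: the generator learns to mimic a 1D Gaussian.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # the first AI's attempts

    # Second AI: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # First AI: adjust so the second AI mistakes fakes for real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```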

The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelike nature beyond almost anything seen before. In part, this was achieved by feeding the GAN more images at each training step than is usually the case. According to IFLScience, the standard is to show a GAN batches of about 64 images at a time; here, the research team fed batches of about 2,000 images into the system, leading to it being nicknamed BigGAN.

Their results showed that feeding the system more images per step and using masses of raw computing power markedly increased the GAN’s precision and ability to create lifelike renditions of the subjects it was trained to reproduce.

“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.

The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512 by 512 pixel images. Each experiment took between 24 and 48 hours to run.

That kind of computing power needs a lot of electricity. As artist and Library of Congress Innovator-in-Residence Jer Thorp put it tongue-in-cheek on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”

Thorp added that a back-of-the-envelope calculation suggested it would take about 27,000 square feet of solar panels to power the computations that produced the images.

BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’

However, they are still not perfect. The number of legs on a given creature is one example of where BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but it seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially in images meant to show them in motion.

Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.

The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.

One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, but it could be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.

For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could make fake news look more real than ever before.

For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these potential use cases become reality. However, if there’s one lesson from Moore’s Law and exponential technology, it’s that today’s technical roadblock quickly becomes tomorrow’s minor issue.

Image Credit: Ondrej Prosicky/Shutterstock


#433776 Why We Should Stop Conflating Human and ...

It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is because the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.

Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.

It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.

Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s network. What’s crucial is not simply the number of neurons in the brain (which runs into the billions), but, more precisely, the mind-boggling number of connections between them.

But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analog, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).

This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).

Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.

The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.

Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.

In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.
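
A toy example makes the “spreadsheet” point concrete: everything an artificial neural network holds (weights, biases, activations) is a grid of discrete, specific numbers. The tiny layer below is invented purely for illustration.

```python
# Every value a neural network stores or computes is a specific number.
import numpy as np

weights = np.array([[0.2, -1.3],
                    [0.7, 0.5],
                    [-0.4, 1.1]])        # 3 inputs feeding 2 neurons
bias = np.array([0.1, -0.2])
x = np.array([1.0, 0.5, -1.0])           # one input example

activation = np.maximum(0, x @ weights + bias)  # ReLU neuron outputs
print(activation)                        # just more specific numbers
```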

Despite years of research, the human mind remains somewhat opaque. This is because the brain’s analog synaptic connections are far harder to probe and map than the digital connections within an artificial neural network.

Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at the scale and speed an AI needs to if it’s, say, translating speech to text or processing a huge set of oncology reports.

Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.
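
As a rough illustration of this chunking step, the sketch below slices a signal into short overlapping frames, the way speech pipelines typically carve up audio before any recognition happens. The sine wave is a synthetic stand-in for recorded speech.

```python
# Frame one second of "audio" into 25 ms chunks, hopping 10 ms at a time.
import numpy as np

sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 440 * t)      # stand-in for recorded speech

frame_len, hop = 400, 160                # 25 ms frames, 10 ms hop
frames = np.stack([audio[i:i + frame_len]
                   for i in range(0, len(audio) - frame_len + 1, hop)])
print(frames.shape)                      # (98, 400): tiny constituent parts
```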

Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.

But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.

Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.

Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.

AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.

Algorithmic bias has come into focus in recent years, with an increasing number of scandals around biased AI systems. Of course, these biases originate with the humans who make the algorithms, but this underlines the point that algorithmic biases can only be identified, and corrected, by human intelligence.

Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ beyond our control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and it is what makes these algorithms so elegant and remarkable.

Image Credit: Liu zishan/Shutterstock


#433770 Will Tech Make Insurance Obsolete in the ...

We profit from it, we fear it, and we find it impossibly hard to quantify: risk.

While not the sexiest of industries, insurance can be a life-saving protector, pooling everyone’s premiums to safeguard against some of our greatest, most unexpected losses.

One of the most profitable industries in the world, insurance has exceeded $1.2 trillion in annual revenue every year since 2011 in the US alone.

But risk is becoming predictable. And insurance is getting disrupted fast.

By 2025, we’ll be living in a trillion-sensor economy. And as we enter a world where everything is measured all the time, we’ll start to transition from protecting against damages to preventing them in the first place.

But what happens to health insurance when Big Brother is always watching? Do rates go up when you sneak a cigarette? Do they go down when you eat your vegetables?

And what happens to auto insurance when most cars are autonomous? Or life insurance when the human lifespan doubles?

For that matter, what happens to insurance brokers when blockchain makes them irrelevant?

In this article, I’ll be discussing four key transformations:

Sensors and AI replacing your traditional broker
Blockchain
The ecosystem approach
IoT and insurance connectivity

Let’s dive in.

AI and the Trillion-Sensor Economy
As sensors continue to proliferate across every context—from smart infrastructure to millions of connected home devices to medicine—smart environments will allow us to ask any question, anytime, anywhere.

And as I often explain, once your AI has access to this treasure trove of ubiquitous sensor data in real time, it will be the quality of your questions that make or break your business.

But perhaps the most exciting insurance application of AI’s convergence with sensors is in healthcare. Tremendous advances in genetic screening are empowering us with predictive knowledge about our long-term health risks.

Leading the charge in genome sequencing, Illumina predicts that within a matter of years, decoding a full human genome will cost $100 and take merely one hour to complete. Other companies are racing to sequence your genome even faster and cheaper.

Adopting an ecosystem approach, incumbent insurers and insurtech firms will soon be able to collaborate to provide risk-minimizing services in the health sector. Using sensor data and AI-driven personalized recommendations, insurance partnerships could keep consumers healthy, dramatically reducing the cost of healthcare.

Some fear that information asymmetry will allow consumers to learn of their health risks and leave insurers in the dark. However, both parties could benefit if insurers become part of the screening process.

A remarkable example of this is Gilad Meiri’s company, Neura AI. Aiming to predict health patterns, Neura has developed machine learning algorithms that analyze data from all of a user’s connected devices (sometimes from up to 54 apps!).

Neura predicts a user’s behavior and draws staggering insights about consumers’ health risks. Meiri soon began selling his personal risk assessment tool to insurers, who could then help insured customers mitigate long-term health risks.

But artificial intelligence will impact far more than just health insurance.

In October of 2016, a claim was submitted to Lemonade, the world’s first peer-to-peer insurance company. Rather than being processed by a human, every step in the claim resolution chain—from initial triage through fraud mitigation to final payment—was handled by an AI.

This transaction marks the first time an AI has processed an insurance claim. And it won’t be the last. A traditional human-processed claim takes 40 days to pay out. In Lemonade’s case, payment was transferred within three seconds.

However, Lemonade’s achievement only marks a starting point. Over the course of the next decade, nearly every facet of the insurance industry will undergo a similarly massive transformation.

New business models like peer-to-peer insurance are replacing traditional brokerage relationships, while AI and blockchain pairings significantly reduce the layers of bureaucracy required (with each layer getting a cut) for traditional insurance.

Consider Juniper, a startup that scrapes social media to build your risk assessment, then asks you just 12 questions via an iPhone app. Equipped with advanced analytics, the platform can generate a million-dollar life insurance policy, approved in less than five minutes.

But what’s keeping all your data from unwanted hands?

Blockchain Building Trust
Current distrust in centralized financial services has led to staggering rates of underinsurance. Add to this the fear of poor data and privacy protection, particularly in the wake of 2017’s widespread cybercriminal hacks.

Enabling secure storage and transfer of personal data, blockchain holds remarkable promise against the fraudulent activity that often plagues insurance firms.

The centralized model of insurance companies and other organizations is becoming redundant. Symbiont, which develops blockchain-based solutions for capital markets, builds smart contracts that execute payments with little to no human involvement.

But distributed ledger technology (DLT) is enabling far more than just smart contracts.

Also targeting insurance is Tradle, leveraging blockchain for its proclaimed goal of “building a trust provisioning network.” Built around “know-your-customer” (KYC) data, Tradle aims to verify KYC data so that it can be securely forwarded to other firms without any further verification.

Because parties can reuse pre-verified data, the platform makes your data much less vulnerable to hacking and allows you to keep it on a personal device. Only its verification—let’s say of a transaction or medical exam—is registered in the blockchain.
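
A small sketch shows the general mechanism; this illustrates hash anchoring in general, not Tradle’s actual implementation, and the record below is invented. The raw document stays on your device, and only a cryptographic fingerprint of the verified data lands on the ledger.

```python
# Anchor a fingerprint of verified KYC data; the data itself stays local.
import hashlib

kyc_record = b"name=Jane Doe;passport=X1234567;verified_by=BankA"
fingerprint = hashlib.sha256(kyc_record).hexdigest()

ledger = {fingerprint}                   # the chain stores only the hash

# A second firm holding the same record can confirm it was already
# verified, without seeing anything new or re-running verification.
print(hashlib.sha256(kyc_record).hexdigest() in ledger)  # True
```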

As insurance data grows increasingly decentralized, key insurance players will experience more and more pressure to adopt an ecosystem approach.

The Ecosystem Approach
Just as exponential technologies converge to provide new services, exponential businesses must combine the strengths of different sectors to expand traditional product lines.

By partnering with platform-based insurtech firms, forward-thinking insurers will no longer serve only as reactive policy-providers, but provide risk-mitigating services as well.

Especially as digital technologies demonetize security services—think autonomous vehicles—insurers must create new value chains and span more product categories.

For instance, France’s multinational AXA recently partnered with Alibaba and Ant Financial Services to sell a varied range of insurance products on Alibaba’s global e-commerce platform at the click of a button.

Building another ecosystem, Alibaba has also collaborated with Ping An Insurance and Tencent to create ZhongAn Online Property and Casualty Insurance—China’s first internet-only insurer, offering over 300 products. Now with a multibillion-dollar valuation, ZhongAn has generated about half its business from selling shipping-return insurance to Alibaba consumers.

But it doesn’t stop there. Insurers that participate in digital ecosystems can now sell risk-mitigating services that prevent damage before it occurs.

Imagine a corporate manufacturer whose sensors collect data on environmental factors affecting crop yield in an agricultural community. With the backing of investors and advanced risk analytics, such a manufacturer could sell crop insurance to farmers. By implementing an automated, AI-driven UI, they could automatically make payments when sensors detect weather damage to crops.
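
A stripped-down sketch of such a parametric payout rule might look like the following; the thresholds and amounts are invented for illustration, and a real product would be driven by actuarial models and live sensor feeds.

```python
# Hypothetical parametric crop cover: payouts trigger on sensor readings,
# with no human claims handler in the loop.
def weather_payout(rainfall_mm: float, insured_value: float) -> float:
    """Return an automatic payout based on a week's rainfall reading."""
    if rainfall_mm < 10:                 # severe drought: full payout
        return insured_value
    if rainfall_mm < 25:                 # partial drought: half payout
        return insured_value * 0.5
    return 0.0                           # normal conditions: no claim

print(weather_payout(7.0, 20_000))       # a $20,000 field pays out in full
```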

Now let’s apply this concept to your house, your car, your health insurance.

What’s stopping insurers from partnering with third-party IoT platforms to predict fires, collisions, chronic heart disease—and then empowering the consumer with preventive services?

This brings us to the powerful field of IoT.

Internet of Things and Insurance Connectivity
Leap ahead a few years. With a centralized hub like Echo, your smart home protects itself with a network of sensors. If you leave a gas burner on while you’re out, your internet-connected stove notifies you via a home app.

Better yet, home sensors monitoring heat and humidity levels run this data through an AI, which then remotely controls heating, humidity levels, and other connected devices based on historical data patterns and fire risk factors.
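
As a rough sketch (not AXA’s or any insurer’s actual system), the hub’s anomaly check could be as simple as comparing each new reading against the home’s historical pattern:

```python
# Flag temperature readings far outside the home's usual range.
import numpy as np

history = np.random.normal(21.0, 0.8, size=500)  # past living-room temps, C
mean, std = history.mean(), history.std()

def is_abnormal(reading: float, threshold: float = 4.0) -> bool:
    """True when a reading sits many standard deviations from normal."""
    return abs(reading - mean) / std > threshold

print(is_abnormal(22.0))   # False: ordinary fluctuation
print(is_abnormal(48.0))   # True: plausible fire risk, raise an alert
```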

Several firms are already working toward this reality.

AXA plans to one day integrate with a centralized home hub, using remote monitoring to collect data for analysis and detect abnormalities.

With remote monitoring and app-centralized control for users, MonAXA aims to customize insurance bundles that reflect the exact security features embedded in a smart home.

Wouldn’t you prefer not to have to rely on insurance after a burglary? With digital ecosystems, insurers may soon prevent break-ins from the start.

By gathering sensor data from third parties on neighborhood conditions, historical theft data, suspicious activity and other risk factors, an insurtech firm might automatically put your smart home on high alert, activating alarms and specialized locks in advance of an attack.

Insurance premiums are predicted to fall sharply as the likelihood of insured losses drops. But insurers moving into preventive insurtech will likely turn a profit in other areas of their business. PricewaterhouseCoopers predicts that the connected home market will reach $149 billion by 2020.

Let’s look at car insurance.

Car insurance premiums are currently calculated according to the traits of the driver and the car. But as more autonomous vehicles take to the roads, not only does liability shift to manufacturers and software engineers, but the risk of collision falls dramatically.

But let’s take this a step further.

In a future of autonomous cars, you will no longer own your car, instead subscribing to Transport as a Service (TaaS) and giving up the purchase of automotive insurance altogether.

This paradigm shift has already begun with Waymo, which automatically provides passengers with insurance every time they step into a Waymo vehicle.

And with the rise of smart traffic systems, sensor-embedded roads, and skyrocketing autonomous vehicle technology, the risks involved in transit only continue to plummet.

Final Thoughts
Insurtech firms are hitting the market fast. IoT, autonomous vehicles, and genetic screening are rapidly making risk predictable and preventable. And AI-driven services are quickly pushing conventional insurers out of the market.

By 2024, the roll-out of 5G on the ground, along with OneWeb and Starlink in orbit, is expected to bring 4.2 billion new consumers to the web—most of whom will need insurance. Yet, because of the changes afoot in the industry, none of them will buy policies from a human broker.

While today’s largest insurance companies continue to ignore this fact (and this segment of the market) at their peril, thousands of entrepreneurs see it more clearly: as one of the largest opportunities ahead.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.

Image Credit: 24Novembers / Shutterstock.com


#433758 DeepMind’s New Research Plan to Make ...

Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.

AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.

That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.

In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.

A classic thought experiment helps illustrate the problem of specification and how we could lose control of an AI system. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so that it can’t be switched off, then sets about turning all matter in the universe into paperclips.

Obviously the example is extreme, but it shows how a poorly-specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; often there are no neat ways to encompass both the explicit and implicit goals in terms a machine can understand without leaving room for ambiguity, so we often rely on incomplete approximations.

The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
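
The CoastRunners failure is easy to caricature in a few lines of code. In this invented toy version, the designer wants the boat to finish the course, but the reward actually pays per target hit, and one target respawns; “camping” that target beats racing.

```python
# A mis-specified reward: finishing is what the designer intended,
# but camping the respawning target scores higher.
def score(policy: str, steps: int = 100) -> int:
    if policy == "finish_course":
        return 10              # ten targets along the route, race over
    if policy == "camp_respawning_target":
        return steps // 2      # hit the same target every couple of steps
    return 0

print(score("finish_course"))            # 10: the intended behavior
print(score("camp_respawning_target"))   # 50: what the learner discovers
```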

Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly-curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including making one think a 3D-printed turtle was a rifle.
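
One of the best-known tricks is the fast gradient sign method (FGSM) of Goodfellow and colleagues, which nudges every pixel in exactly the direction that increases the model’s error. Below is a minimal PyTorch sketch; the untrained model and random “image” are stand-ins for a real classifier and photo.

```python
# FGSM in miniature: perturb an input along the sign of the loss gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])                # the class the model currently picks

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.1                            # keep the change imperceptible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
# 'adversarial' looks nearly identical to 'image' to a human eye, yet
# every pixel has moved in the direction that degrades the model most.
```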

Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.

And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.

The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.

The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.

Image Credit: cono0430 / Shutterstock.com


#433754 This Robotic Warehouse Fills Orders in ...

Shopping is becoming less and less of a consumer experience—or, for many, less of a chore—as the list of things that can be bought online and delivered to our homes grows to include, well, almost anything you can think of. An Israeli startup is working to make shopping and deliveries even faster and cheaper—and they’re succeeding.

Last week, CommonSense Robotics announced the launch of its first autonomous micro-fulfillment center in Tel Aviv. The company claims the 6,000-square-foot facility is the smallest of its type in the world. For comparison’s sake, most fulfillment hubs that incorporate robotics are at least 120,000 square feet, and Amazon’s upcoming facility in Bessemer, Alabama will be a massive 855,000 square feet.

The thing about a building whose square footage is in the hundreds of thousands is, you can fit a lot of stuff inside it, but there aren’t many places you can fit the building itself, especially not in major urban areas. So most fulfillment centers are outside cities, which means more time and more money to get your Moroccan oil shampoo, or your vegetable garden starter kit, or your 100-pack of organic protein bars from that fulfillment center to your front door.

CommonSense Robotics built the Tel Aviv center in an area that was previously thought too small for warehouse infrastructure. “In order to fit our site into small, tight urban spaces, we’ve designed every single element of it to optimize for space efficiency,” said Avital Sterngold, VP of operations. Using a robotic sorting system that includes hundreds of robots, plus AI software that assigns them specific tasks, the facility can prepare orders in less than five minutes end-to-end.
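
CommonSense Robotics hasn’t published its dispatch logic, but the general idea of software assigning fetch tasks can be sketched as a greedy rule: send each requested shelf to the nearest idle robot. Everything below is a hypothetical illustration.

```python
# Greedy dispatch: match each shelf request to the closest idle robot.
def assign_tasks(robots: dict[str, tuple[int, int]],
                 tasks: list[tuple[int, int]]) -> dict[str, tuple[int, int]]:
    idle, plan = dict(robots), {}
    for shelf in tasks:
        name = min(idle, key=lambda r: abs(idle[r][0] - shelf[0])
                                        + abs(idle[r][1] - shelf[1]))
        plan[name] = shelf               # nearest idle robot takes the job
        del idle[name]                   # that robot is now busy
    return plan

robots = {"r1": (0, 0), "r2": (5, 5), "r3": (9, 2)}
print(assign_tasks(robots, [(1, 1), (8, 3)]))  # {'r1': (1, 1), 'r3': (8, 3)}
```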

It’s not all automated, though—there’s still some human labor in the mix. The robots fetch goods and bring them to a team of people, who then pack the individual orders.

CommonSense raised $20 million this year in a funding round led by Palo Alto-based Playground Global. The company hopes to expand its operations to the US and UK in 2019. Its business model is to charge retailers a fee for each order fulfilled, while maintaining ownership and operation of the fulfillment centers. The first retailers to jump on the bandwagon were Super-Pharm, a drugstore chain, and Rami Levy, a retail supermarket chain.

“Staying competitive in today’s market is anchored by delivering orders quickly and determining how to fulfill and deliver orders efficiently, which are always the most complex aspects of any ecommerce operation. With robotics, we will be able to fulfill and deliver orders in under one hour, all while saving costs on said fulfillment and delivery,” said Super-Pharm VP Yossi Cohen. “Before CommonSense Robotics, we offered our customers next-day home delivery. With this partnership, we are now able to offer our customers same-day delivery and will very soon be offering them one-hour delivery.”

Long live the instant gratification economy—and the increasingly sophisticated technology that’s enabling it.

Image Credit: SasinTipchai / Shutterstock.com
