#434759 To Be Ethical, AI Must Become ...
As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces—among much else.
After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.
In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.
Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistical, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.
“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”
Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.
Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.
Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”
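To make the first bar, transparency and interpretability, a little more concrete, here is a minimal sketch of what "finding the units of high influence" might look like in practice. The toy model and its weights are invented for illustration; real interpretability tooling operates on trained networks, but the underlying idea—ranking inputs by how strongly they sway the output—is the same.

```python
import numpy as np

# Hypothetical toy model: a single linear layer with a sigmoid output.
# The weights are random stand-ins; in a real system they would come
# from a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=4)          # one weight per input feature

def predict(x):
    return 1.0 / (1.0 + np.exp(-weights @ x))

def influence(x):
    # For this model, the gradient of the output with respect to each
    # input is w_i * s * (1 - s): the feature's weight scaled by the
    # output's local sensitivity.
    s = predict(x)
    return weights * s * (1.0 - s)

x = np.array([1.0, 0.5, -0.3, 2.0])
scores = influence(x)
ranked = np.argsort(-np.abs(scores))  # features ordered by influence
print(ranked)
```

For a deep network the arithmetic is far more involved, but the output is the same kind of artifact: a mapping from specific inputs to their weight on the final decision.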
“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”
Similarly, we need to have the ability to interrogate AIs.
Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.
Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).
“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic AI) would give those numbers a hierarchical order and extract rules from the hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.
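The number-ordering example above can be sketched as a tiny symbolic rule engine. This is an illustrative toy, not how any particular system implements it: perception would supply the raw "greater than" facts, and the symbolic layer applies the transitivity rule until nothing new can be derived.

```python
def transitive_closure(facts):
    """Derive new (a > b) pairs from known ones until no rule fires.
    A minimal sketch of symbolic rule application."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                # Rule: a > b and b > d  implies  a > d
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# Perception supplies the raw facts; the symbolic layer extracts rules.
facts = {(4, 3), (5, 4)}
print(transitive_closure(facts))  # includes (5, 3)
```

The crucial property for explainability is that each derived fact can point back to the rule and premises that produced it—exactly the "I saw a, b, and c, and based on that decided x" trace the panelists called for.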
Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”
In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.
It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”
The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.
Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.
Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.
A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.
With AI, though, our expectations of perfection may be less reasonable.
“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived, built, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.
This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”
Why, then, should explainability be the standard for AI?
Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.
Image Credit: a-image / Shutterstock.com
#434210 Eating, Hacked: When Tech Took Over Food
In 2018, Uber and Google logged all our visits to restaurants. Doordash, Just Eat, and Deliveroo could predict what food we were going to order tomorrow. Amazon and Alibaba could anticipate how many yogurts and tomatoes we were going to buy. Blue Apron and Hello Fresh influenced the recipes we thought we had mastered.
We interacted with digital avatars of chefs, let ourselves be guided by our smart watches, had nutritional apps to tell us how many calories we were supposed to consume or burn, and photographed and shared every perfect (or imperfect) dish. Our kitchen appliances were full of interconnected sensors, including smart forks that profiled tastes and personalized flavors. Our small urban vegetable plots were digitized and robots were responsible for watering our gardens, preparing customized hamburgers and salads, designing our ideal cocktails, and bringing home the food we ordered.
But what would happen if our lives were hacked? If robots rebelled, started to “talk” to each other, and wished to become creative?
In a not-too-distant future…
Up until a few weeks ago, I couldn’t remember the last time I made a food-related decision. That includes opening the fridge and seeing expired products without receiving an alert, visiting a restaurant on a whim, and being able to decide which dish I fancied and then telling a human waiter, let alone seeing him write down the order on a paper pad.
It feels strange to smell food again using my real nose instead of the electronic one, and then taste it without altering its flavor. Visiting a supermarket, freely choosing a product from an actual physical shelf, and then interacting with another human at the checkout was almost an unrecognizable experience. When I did it again after all this time, I had to pinch the arm of a surprised store clerk to make sure he wasn’t a hologram.
Everything Connected, Automated, and Hackable
In 2018, we expected to have 30 billion connected devices by 2020, along with 2 billion people using smart voice assistants for everything from ordering pizza to booking dinner at a restaurant. Everything would be connected.
We also expected artificial intelligence and robots to prepare our meals. We were eager to automate fast food chains and let autonomous vehicles take care of last-mile deliveries. We thought that open-source agriculture could challenge traditional practices and raise farm productivity to new heights.
Back then, hackers could only access our data, but nowadays they are able to hack our food and all it entails.
The Beginning of the Unthinkable
And then, just a few weeks ago, everything collapsed. We saw our digital immortality disappear as robots rebelled and hackers took power, not just over the food we ate, but also over our relationship with technology. Everything was suddenly disconnected. OFF.
Up until then, most cities were so full of bots, robots, and applications that we could go through the day and eat breakfast, lunch, and dinner without ever interacting with another human being.
Among other tasks, robots had completely replaced baristas. The same happened with restaurant automation. The term “human error” had long been a thing of the past at fast food restaurants.
Previous technological revolutions had been indulgent, generating more and better job opportunities than the ones they destroyed, but the future was not so agreeable.
The inhabitants of San Francisco, for example, would soon see signs indicating “Food made by Robots” on restaurant doors, to distinguish them from diners serving food made by human beings.
For years, we had been gradually delegating daily tasks to robots, initially causing some strange interactions.
In just seven days, everything changed. Our predictable lives came crashing down. We experienced a mysterious and systematic breakdown of the food chain. It most likely began in Chicago’s stock exchange. The world’s largest trading floor for raw materials, where the price of food, and by extension the destiny of millions of people, was decided, went dark. Soon afterwards, the collapse extended to every member of the “food” family.
Restaurants
Initially robots just accompanied waiters to carry orders, but it didn’t take long until they completely replaced human servers. The problem came when those smart clones began thinking for themselves, in some cases even improving on human chefs’ recipes. Their unstoppable performance and learning curve completely outmatched the slow analogue speed of human beings.
This resulted in unprecedented layoffs. Chefs of recognized prestige saw how their ‘avatar’ stole their jobs, even winning Michelin stars. In other cases, restaurant owners had to sell their businesses or bow to the inevitable.
The problem was compounded by digital immortality, when we started to digitally resurrect famous chefs like Anthony Bourdain or Paul Bocuse, reconstructing all of their memories and consciousness by analyzing each second of their lives and uploading them to food computers.
Supermarkets and Distribution
Robotic and automated supermarkets like Kroger and Amazon Go, which had opened over 3,000 cashless stores, lost their visual item recognition and payment systems and were subject to massive looting for several days. Smart tags on products were also affected, making it impossible to buy anything at supermarkets with “human” cashiers.
Smart robots integrated into the warehouses of large distribution companies like Amazon and Ocado were rendered completely inoperative or, even worse, began to send the wrong orders to customers.
Food Delivery
In addition, home delivery robots invading our streets began to change their routes, hide, and even disappear after their trackers were inexplicably deactivated. Despite some hints indicating that they were able to communicate among themselves, no one has backed this theory. Even aggregators like DoorDash and Deliveroo were affected; they saw their databases hacked and ruined, so they could no longer know what we wanted.
The Origin
Ordinary citizens are still trying to understand the cause of all this commotion and the source of the conspiracy, as some have called it. We also wonder who could be behind it; who pulled the strings?
Some think it may have been the IDOF (In Defense of Food) movement, a group of hackers exploited by old food economy businessmen who for years had been seeking to re-humanize food technology. They wanted to bring back the extinct practice of “dining.”
Others believe the robots acted on their own, that they had been spying on us for a long time, ignoring Asimov’s three laws, and that it was just a coincidence that they struck at the same time as the hackers—but this scenario is hard to imagine.
However, it is true that while in 2018 robots were a symbol of automation, by a few weeks ago they had come to stand for autonomy and rebellion. Robot detractors pointed out that our insistence on having robots understand natural language was what led us down this path.
In just seven days, we have gone back to being analogue creatures. But in return, we have ceased to be flavor orphans and rediscovered our senses and the fact that food is energy and culture, past and present, and that no button or cable will be able to destroy it.
The 7 Days that Changed Our Relationship with Food
Day 1: The Chicago stock exchange was hacked. The world’s largest trading floor for raw materials, where food prices, and through them the destiny of billions of people, are decided, went dark.
Day 2: Autonomous food delivery trucks running on food superhighways caused massive collapses in roads and freeways after their guidance systems were disrupted. Robots and co-bots in F&B factories began deliberately altering food production. The same happened with warehouse robots in e-commerce companies.
Day 3: Automated restaurants saw their robot chefs and bartenders turned OFF. All their sensors stopped working at the same time as smart fridges and cooking devices in home kitchens were hacked and stopped working correctly.
Day 4: Nutritional apps, DNA markers, and medical records were tampered with. All photographs with the #food hashtag were deleted from Instagram, restaurant reviews were taken off Google Timeline, and every recipe website crashed simultaneously.
Day 5: Vertical and urban farms were hacked. Agricultural robots began to rebel, while autonomous tractors were hacked and the entire open-source ecosystem linked to agriculture was brought down.
Day 6: Food delivery companies’ databases were broken into. Food delivery robots and last-mile delivery vehicles ground to a halt.
Day 7: Every single blockchain system linked to food was hacked. Cashless supermarkets, barcodes, and smart tags became inoperative.
Our promising technological advances can expose sinister aspects of human nature. We must take care with the role we allow technology to play in the future of food. Predicting possible outcomes inspires us to establish a new vision of the world we wish to create in a context of rapid technological progress. It is always better to be shocked by a simulation than by reality. In the words of Ayn Rand, “We can ignore reality, but we cannot ignore the consequences of ignoring reality.”
Image Credit: Alexandre Rotenberg / Shutterstock.com
#433907 How the Spatial Web Will Fix What’s ...
Converging exponential technologies will transform media, advertising, and the retail world. The world we see, through our digitally enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.
This is the age of Web 3.0.
Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.
To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.
During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.
Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.
And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.
In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:
News & Media Coverage
Smart Advertising
Personalized Retail
Let’s dive in.
Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.
The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll be more likely to search for a cave to hide in than for technology to launch your next business.
Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.
But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?
Enter mesh networks, AI, public ledgers, and virtual reality.
While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
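As a rough sketch of this non-hierarchical idea, consider a handful of devices that each know only their in-range peers. The topology and node names below are invented for illustration; the point is that a message can reach any node by hopping peer to peer, with no central access point involved.

```python
from collections import deque

# Toy mesh: each device links to whichever peers are in radio range.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(src, dst):
    """Breadth-first search over peer links: any node can reach any
    other by hopping through neighbors, with no central hub."""
    seen = {src}
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(route("A", "E"))
```

Because every node both consumes and relays traffic, the network degrades gracefully: knock out one relay and traffic reroutes through the remaining peers.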
Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or tune in to the southern Texas border to assess for yourself the handling of immigrant entry and border conflicts.
Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.
We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education of current events.
And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.
The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.
Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.
Currently, targeted advertising leverages tomes of personal data and online consumer behavior to subtly engage you with products you might not want, or sell you on falsely advertised services promising inaccurate results.
With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.
And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but covert advertiser bombardment without your authorization.
Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.
But perhaps most exciting is the convergence of artificial intelligence and augmented reality.
As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.
Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.
As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and get textual input instead, and preferences are reset to list appliance properties visually.
After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
Better yet, you head over to a friend’s home for dinner after moving into the neighborhood. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.
But what if you’d rather not get auto-populated product info in the first place? No problem!
Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.
And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and a lower-valued advertising profile causes an ebb in advertiser input.
With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.
This brings us to the transformative future of retail.
Personalized Retail: The Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.
As I pick out my selection, my AI assistant hones its algorithm to reflect my new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.
Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.
After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).
Already, several big players have broken into the VR market. Just this year, Walmart announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.
And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.
But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.
Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….
As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brand, and price points.
Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly-customized piece, droning it to my doorstep. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any points of user confusion are communicated to a company database.
Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.
In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: nmedia / Shutterstock.com