
#436252 After AI, Fashion and Shopping Will ...

AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.

What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.

Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.

Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.

Let’s dive in.

A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.

No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.

Instead, you’ve entered your own personal clothing store. Everything is in your exact size… and I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.

When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.

The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…

Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.

Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy exactly the right thing most of the time.

For the rest of us, enter the digital assistant.

Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google Assistant, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.

For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, they’re just the next logical step in a world that is auto-magical.

And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.

We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.

And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.

Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.

Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.

According to a recent Zendesk study, good customer service increases the likelihood of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.

During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Duplex. The first call made a reservation at a restaurant; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.

In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:

(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.

Based on research involving more than 70,000 subjects in over 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. It has already been integrated into call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant, and also more profitable.

For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.

(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making it one of several companies pioneering the field of emotionally intelligent computing.

With its technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.

The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.

For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.

This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and the scale of disruption only grows, upping the FQ—the frictionless quotient—of our shopping adventure.

Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.

Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”

Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.

Either way, it’s a top-to-bottom transformation of the retail world.

Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.

Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Pexels from Pixabay


#436220 How Boston Dynamics Is Redefining Robot ...

Gif: Bob O’Connor/IEEE Spectrum

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Spot

Photo: Bob O’Connor

HEIGHT: 84 cm

WEIGHT: 25 kg

SPEED: 5.76 km/h

SENSING: Stereo cameras, inertial measurement unit, position/force sensors

ACTUATION: 12 DC motors

POWER: Battery (90 minutes per charge)

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Atlas

Photo: Bob O’Connor

HEIGHT: 150 cm

WEIGHT: 80 kg

SPEED: 5.4 km/h

SENSING: Lidar and stereo vision

ACTUATION: 28 hydraulic actuators

POWER: Battery

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that hardly seem possible for such a large humanoid. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

In the Spotlight

Photo: Bob O’Connor

Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

Super Senses

Photo: Bob O’Connor

Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.

Stepping Up

Photo: Bob O’Connor

In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

Funky Feet

Gif: Bob O’Connor/IEEE Spectrum

Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”

Robot Blood

Photo: Bob O’Connor

Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.

Next Generation

Gif: Bob O’Connor/IEEE Spectrum

The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

Walk This Way

Photo: Bob O’Connor

To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

Biologically Inspired

Photos: Bob O’Connor

Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.

This article appears in the December 2019 print issue as “By Leaps and Bounds.”


#436215 Help Rescuers Find Missing Persons With ...

There’s a definite sense that robots are destined to become a critical part of search and rescue missions and disaster relief efforts, working alongside humans to help first responders move faster and more efficiently. And we’ve seen all kinds of studies that include the claim “this robot could potentially help with disaster relief,” to varying degrees of plausibility.

But it takes a long time, and a lot of extra effort, for academic research to actually become anything useful—especially for first responders, where there isn’t a lot of financial incentive for further development.

It turns out that if you actually ask first responders what they most need for disaster relief, they’re not necessarily interested in the latest and greatest robotic platform or other futuristic technology. They’re using commercial off-the-shelf drones, often consumer-grade ones, because they’re simple and cheap and great at surveying large areas. The challenge is doing something useful with all of the imagery that these drones collect. Computer vision algorithms could help with that, as long as those algorithms are readily accessible and nearly effortless to use.

The IEEE Robotics and Automation Society and the Center for Robotic-Assisted Search and Rescue (CRASAR) at Texas A&M University have launched a contest to bridge this gap between the kinds of tools that roboticists and computer vision researchers might call “basic” and a system that’s useful to first responders in the field. It’s a simple and straightforward idea, and it’s somewhat surprising that no one thought of it sooner. And if you can develop such a system, it’s worth some cash.

CRASAR does already have a Computer Vision Emergency Response Toolkit (created right after Hurricane Harvey), which includes a few pixel filters and some edge and corner detectors. Through this contest, you can get paid your share of a $3,000 prize pool for adding some other excessively basic tools (sketched in code after this list), including:

Image enhancement through histogram equalization, which can be applied to electro-optical (visible light cameras) and thermal imagery

Color segmentation for a range

Grayscale segmentation for a range in a thermal image
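
To make that concrete, here is a minimal sketch of what those three tools might look like in Python with OpenCV, the same library Murphy points to below. The file names and the segmentation bounds are hypothetical placeholders, not values from the contest rules:

```python
import cv2

# Hypothetical input file; any electro-optical or thermal image on disk.
img = cv2.imread("nadir_0001.jpg")

# 1. Image enhancement through histogram equalization.
#    Thermal/grayscale imagery can be equalized directly; for color imagery,
#    equalize only the luminance channel so hues are not distorted.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
enhanced_gray = cv2.equalizeHist(gray)

y, cr, cb = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb))
enhanced_color = cv2.cvtColor(cv2.merge((cv2.equalizeHist(y), cr, cb)),
                              cv2.COLOR_YCrCb2BGR)

# 2. Color segmentation for a range, e.g. searching for a blue tarp.
#    These HSV bounds are illustrative, not field-tuned values.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
blue_mask = cv2.inRange(hsv, (100, 80, 50), (130, 255, 255))

# 3. Grayscale segmentation for a range in a thermal image, e.g. intensities
#    that might correspond to body heat (again, illustrative bounds).
thermal_mask = cv2.inRange(enhanced_gray, 180, 220)

cv2.imwrite("enhanced.jpg", enhanced_color)
cv2.imwrite("blue_mask.png", blue_mask)
cv2.imwrite("thermal_mask.png", thermal_mask)
```

As Murphy says below, the real work is not these half-dozen calls but packaging them so an exhausted responder can apply them to a folder of drone imagery without writing a line of Python.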

If it seems like this contest is really not that hard, that’s because it isn’t. “The first thing to understand about this contest is that strictly speaking, it’s really not that hard,” says Robin Murphy, director of CRASAR. “This contest isn’t necessarily about coming up with algorithms that are brand new, or even state-of-the-art, but rather algorithms that are functional and reliable and implemented in a way that’s immediately [usable] by inexperienced users in the field.”

Murphy readily admits that some of what needs to be done is not particularly challenging at all, but that’s not the point—the point is to make these functionalities accessible to folks who have better things to do than solve these problems themselves, as Murphy explains.

“A lot of my research is driven by problems that I’ve seen in the field that you’d think somebody would have solved, but apparently not. More than half of this is available in OpenCV, but who’s going to find it, download it, learn Python, that kind of thing? We need to get these tools into an open framework. We’re happy if you take libraries that already exist (just don’t steal code)—not everything needs to be rewritten from scratch. Just use what’s already there. Some of it may seem too simple, because it IS that simple. It already exists and you just need to move some code around.”

If you want to get very slightly more complicated, there’s a second category that involves a little bit of math:

Coders must provide a system that does the following for each nadir image in a set (a sketch of the geotag step follows this list):

Reads the geotag embedded in the .jpg
Overlays a USNG grid for a user-specified interval (e.g., every 50, 100, or 200 meters)
Gives the GPS coordinates of each pixel if a cursor is rolled over the image
Given a set of images with the GPS or USNG coordinate and a bounding box, finds all images in the set that have a pixel intersecting that location
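
To illustrate just the first requirement, here is a minimal sketch of reading an embedded geotag using Pillow, a common Python imaging library. The file name is hypothetical, and the sketch assumes a recent Pillow version whose EXIF rationals convert cleanly with float(); a real contest entry would also need to handle missing or malformed EXIF data:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def to_decimal_degrees(dms, ref):
    # EXIF stores latitude/longitude as (degrees, minutes, seconds) rationals.
    degrees = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -degrees if ref in ("S", "W") else degrees

def read_geotag(path):
    # Returns (latitude, longitude) in decimal degrees, or None if untagged.
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    gps_raw = tags.get("GPSInfo")
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    lat = to_decimal_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_decimal_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(read_geotag("nadir_0001.jpg"))  # hypothetical sample image
```

For the USNG overlay, an existing coordinate-conversion library (the Python mgrs package is one option) would fit the contest’s use-what’s-already-there spirit better than reimplementing the math.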

The final category awards prizes to anyone who comes up with anything else that turns out to be useful. Or, more specifically, “entrants can submit any algorithm they believe will be of value.” Whether or not it’s actually of value will be up to a panel of judges that includes both first responders and computer vision experts. More detailed rules can be found here, along with sample datasets that you can use for testing.

The contest deadline is 16 December, so you’ve got about a month to submit an entry. Winners will be announced at the beginning of January.


#436204 We’re at IROS 2019 to Bring You ...

The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is taking place in Macau this week, featuring well over a thousand presentations on the newest and most amazing robotics research from around the world. There are also posters, workshops, tutorials, an exhibit hall, and plenty of social events where roboticists have the chance to get a little tipsy and talk about all the really interesting stuff.

As always, our plan is to bring you all of the coolest, weirdest, and most interesting things that we find at the show, and here are just a few of the things we’re looking forward to this week:

Flying robots with wings, tails, and… arms?
Spherical robot turtles
An update on that crazy jet-powered iCub
Agile and tiny robot insects
Metallic self-healing robot bones
How to train robots by messing with them
A weird robot sea urchin

And all that is happening just on Tuesday!

Our IROS coverage will continue beyond this week, so keep checking back for more of the best new robotics from Macau.

[ IROS 2019 ]


#436202 Trump CTO Addresses AI, Facial ...

Michael Kratsios, the Chief Technology Officer of the United States, took the stage at Stanford University last week to field questions from Stanford’s Eileen Donahoe and attendees at the 2019 Fall Conference of the Institute for Human-Centered Artificial Intelligence (HAI).

Kratsios, the fourth to hold the U.S. CTO position since its creation by President Barack Obama in 2009, was confirmed in August as President Donald Trump’s first CTO. Before joining the Trump administration, he was chief of staff at investment firm Thiel Capital and chief financial officer of hedge fund Clarium Capital. Donahoe is Executive Director of Stanford’s Global Digital Policy Incubator and served as the first U.S. Ambassador to the United Nations Human Rights Council during the Obama Administration.

The conversation jumped around, hitting on both accomplishments and controversies. Kratsios touted the administration’s success in fixing policy around the use of drones, its memorandum on STEM education, and an increase in funding for basic research in AI—though the magnitude of that increase wasn’t specified. He pointed out that the Trump administration’s AI policy has been a continuation of the policies of the Obama administration, and will continue to build on that foundation. As proof of this, he pointed to Trump’s signing of the American AI Initiative earlier this year. That executive order, Kratsios said, was intended to bring various government agencies together to coordinate their AI efforts and to push the idea that AI is a tool for the American worker. The AI Initiative, he noted, also took into consideration that AI will cause job displacement, and asked private companies to pledge to retrain workers.

The administration, he said, is also looking to remove barriers to AI innovation. In service of that goal, the government will, in the next month or so, release a regulatory guidance memo instructing government agencies about “how they should think about AI technologies,” said Kratsios.

U.S. vs China in AI

A few of the exchanges between Kratsios and Donahoe hit on current hot topics, starting with the tension between the U.S. and China.

Donahoe:

“You talk a lot about the unique U.S. ecosystem. In which aspect of AI is the U.S. dominant, and where is China challenging us in dominance?”

Kratsios:

“They are challenging us on machine vision. They have more data to work with, given that they have surveillance data.”

Donahoe:

“To what extent would you say the quantity of data collected and available will be a determining factor in AI dominance?”

Kratsios:

“It makes a big difference in the short term. But we do research on how we get over these data humps. There is a future where you don’t need as much data, a lot of federal grants are going to [research in] how you can train models using less data.”

Donahoe turned the conversation to a different tension—that between innovation and values.

Donahoe:

“A lot of conversation yesterday was about the tension between innovation and values, and how do you hold those things together and lead in both realms.”

Kratsios:

“We recognized that the U.S. hadn’t signed on to principles around developing AI. In May, we signed [the Organization for Economic Cooperation and Development Principles on Artificial Intelligence], coming together with other Western democracies to say that these are values that we hold dear.

[Meanwhile,] we have adversaries around the world using AI to surveil people, to suppress human rights. That is why American leadership is so critical: We want to come out with the next great product. And we want our values to underpin the use cases.”

A member of the audience pushed further:

“Maintaining U.S. leadership in AI might have costs in terms of individuals and society. What costs should individuals and society bear to maintain leadership?”

Kratsios:

“I don’t view the world that way. Our companies big and small do not hesitate to talk about the values that underpin their technology. [That is] markedly different from the way our adversaries think. The alternatives are so dire [that we] need to push efforts to bake the values that we hold dear into this technology.”

Facial recognition

And then the conversation turned to the use of AI for facial recognition, an application which (at least for police and other government agencies) was recently banned in San Francisco.

Donahoe:

“Some private sector companies have called for government regulation of facial recognition, and there already are some instances of local governments regulating it. Do you expect federal regulation of facial recognition anytime soon? If not, what ought the parameters be?”

Kratsios:

“A patchwork of regulation of technology is not beneficial for the country. We want to avoid that. Facial recognition has important roles—for example, finding lost or displaced children. There are use cases, but they need to be underpinned by values.”

A member of the audience followed up on that topic, referring to some data presented earlier at the HAI conference on bias in AI:

“Frequently the example of finding missing children is given as the example of why we should not restrict use of facial recognition. But we saw Joy Buolamwini’s presentation on bias in data. I would like to hear your thoughts about how government thinks we should use facial recognition, knowing about this bias.”

Kratsios:

“Fairness, accountability, and robustness are things we want to bake into any technology—not just facial recognition—as we build rules governing use cases.”

Immigration and innovation

A member of the audience brought up the issue of immigration:

“One major pillar of innovation is immigration; does your office advocate for it?”

Kratsios:

“Our office pushes for best and brightest people from around the world to come to work here and study here. There are a few efforts we have made to move towards a more merit-based immigration system, without congressional action. [For example, in] the H-1B visa system, you go through two lotteries. We switched the order of them in order to get more people with advanced degrees through.”

The government’s tech infrastructure

Donahoe brought the conversation around to the tech infrastructure of the government itself:

“We talk about the shiny object, AI, but the 80 percent is the unsexy stuff, at federal and state levels. We don’t have a modern digital infrastructure to enable all the services—like a research cloud. How do we create this digital infrastructure?”

Kratsios:

“I couldn’t agree more; the least partisan issue in Washington is about modernizing IT infrastructure. We spend like $85 billion a year on IT at the federal level, we can certainly do a better job of using those dollars.”
