
#435174 Revolt on the Horizon? How Young People ...

As digital technologies facilitate the growth of both new and incumbent organizations, the darker sides of the digital economy have begun to come to light. In recent years, many unethical business practices have been exposed, including the capture and use of consumers’ data, anticompetitive activities, and covert social experiments.

But what do young people who grew up with the internet think about this development? Our research with 400 digital natives—19- to 24-year-olds—shows that this generation, dubbed “GenTech,” may be the one to turn the digital revolution on its head. Our findings point to a frustration and disillusionment with the way organizations have accumulated real-time information about consumers without their knowledge and often without their explicit consent.

Many from GenTech now understand that their online lives are of commercial value to an array of organizations that use this insight for the targeting and personalization of products, services, and experiences.

This era of accumulating and commercializing user data through real-time monitoring has been termed “surveillance capitalism,” and it signifies a new economic system.

Artificial Intelligence
A central pillar of the modern digital economy is our interaction with artificial intelligence (AI) and machine learning algorithms. We found that 47 percent of GenTech do not want AI technology to monitor their lifestyle, purchases, and financial situation in order to recommend particular purchases to them.

In fact, only 29 percent see this as a positive intervention. Instead, they wish to maintain a sense of autonomy in their decision making and have the opportunity to freely explore new products, services, and experiences.

As individuals living in the digital age, we constantly negotiate with technology to let go of or retain control. This pendulum-like effect reflects the ongoing battle between humans and technology.

My Life, My Data?
Our research also reveals that 54 percent of GenTech are very concerned about the access organizations have to their data, while only 19 percent are not worried. Despite the EU General Data Protection Regulation being introduced in May 2018, this remains a major concern, grounded in a belief that too much of their data is in the possession of a small group of global companies, including Google, Amazon, and Facebook. Some 70 percent felt this way.

In recent weeks, both Facebook and Google have vowed to make privacy a top priority in the way they interact with users. Both companies have faced public outcry over their lack of openness and transparency when it comes to how they collect and store user data. It wasn’t long ago that a hidden microphone was found in one of Google’s home alarm products.

Google now plans to offer auto-deletion of users’ location history data, browsing, and app activity as well as extend its “incognito mode” to Google Maps and search. This will enable users to turn off tracking.

At Facebook, CEO Mark Zuckerberg is keen to reposition the platform as a “privacy focused communications platform” built on principles such as private interactions, encryption, safety, interoperability (communications across Facebook-owned apps and platforms), and secure data storage. This will be a tough turnaround for the company that is fundamentally dependent on turning user data into opportunities for highly individualized advertising.

Privacy and transparency are critically important themes for organizations today, both for those that have “grown up” online as well as the incumbents. While GenTech want organizations to be more transparent and responsible, 64 percent also believe that they cannot do much to keep their data private. Being tracked and monitored online by organizations is seen as part and parcel of being a digital consumer.

Despite these views, there is a growing revolt simmering under the surface. GenTech want to take ownership of their own data. They see this as a valuable commodity, which they should be given the opportunity to trade with organizations. Some 50 percent would willingly share their data with companies if they got something in return, for example a financial incentive.

Rewiring the Power Shift
GenTech are looking to enter into a transactional relationship with organizations. This reflects a significant change in attitudes from perceiving the free access to digital platforms as the “product” in itself (in exchange for user data), to now wishing to use that data to trade for explicit benefits.

This has created an opportunity for companies that seek to empower consumers and give them back control of their data. Several companies now offer consumers the opportunity to sell the data they are comfortable sharing or take part in research that they get paid for. More and more companies are joining this space, including People.io, Killi, and Ocean Protocol.

Sir Tim Berners-Lee, the creator of the World Wide Web, has also been working on a way to shift power from organizations and institutions back to citizens and consumers. The platform, Solid, offers users the opportunity to be in charge of where they store their data and who can access it. It is a form of re-decentralization.

The Solid POD (Personal Online Data storage) is a secure place on a hosted server or the individual’s own server. Users can grant apps access to their POD, so a person’s data is kept in one place under their control rather than by an app developer or on an organization’s server. We see this as potentially being a way to let people take back control from technology and other companies.

GenTech have woken up to a reality where a life lived “plugged in” has significant consequences for their individual privacy and are starting to push back, questioning those organizations that have shown limited concern and continue to exercise exploitative practices.

It’s no wonder that we see these signs of revolt. GenTech is the generation with the most to lose. They face a life ahead intertwined with digital technology as part of their personal and private lives. With continued pressure on organizations to become more transparent, the time is now for young people to make their move.

Dr Mike Cooray, Professor of Practice, Hult International Business School and Dr Rikke Duus, Research Associate and Senior Teaching Fellow, UCL

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ser Borakovskyy / Shutterstock.com

Posted in Human Robots

#435161 Less Like Us: An Alternate Theory of ...

The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.

Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”

But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.

Both of these ideas are far easier to conceive of than they are to achieve. Emulating even the 302 neurons of the nematode worm’s brain remains an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.

Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.

This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, if proper safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or convert most of the world into computing infrastructure to pursue its goal.

Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.

With respect to AI, Drexler believes our view of an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, almost anthropomorphizing AI, and is not a realistic model of how general intelligence will emerge. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative route to artificial general intelligence.

What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.

AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms: ongoing technology and software improvement that can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring—take the newer versions of AlphaGo, which can learn to improve themselves by playing against previous versions.

Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.

One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.

For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”: more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms that have been developed by different groups.
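Drexler’s proposal is architectural rather than algorithmic, but the dispatch-and-revoke idea described above can be caricatured in a few lines of Python. Everything here, the service names, the fuzzy string matching, the registry class, is an invented illustration of the general shape, not part of CAIS itself:

```python
# Toy sketch of a CAIS-style dispatcher: a registry of narrow "service"
# algorithms plus a router that matches a requested task to the closest
# registered service. Revoking a service removes the capability outright,
# the safety property Drexler emphasizes.

from difflib import get_close_matches

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, task_name, fn):
        """Add a narrow service that handles one kind of task."""
        self._services[task_name] = fn

    def revoke(self, task_name):
        """Safety lever: cut the system off from a capability entirely."""
        self._services.pop(task_name, None)

    def dispatch(self, task_name, *args):
        """Route a request to the closest-matching registered service."""
        match = get_close_matches(task_name, list(self._services), n=1, cutoff=0.6)
        if not match:
            raise LookupError(f"no service can handle {task_name!r}")
        return self._services[match[0]](*args)

registry = ServiceRegistry()
registry.register("translate", lambda text: f"[translated] {text}")
registry.register("classify", lambda text: "positive" if "good" in text else "neutral")

print(registry.dispatch("translation", "hello"))  # fuzzy-matches "translate"
registry.revoke("translate")  # the capability is removed, not outwitted
```

The point of the sketch is that there is no single goal-seeking agent anywhere in it: intelligence, such as it is, lives in the individual services, and the router is closer to a lookup table than to a mind.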

This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.

Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in the vain attempt to pursue immortality. This model cannot hold forever. Yet its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities in a huge range of different tasks, such as computer programming, natural language understanding, and so on. In other words, CAIS arrives first.

The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.

But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.

In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.

Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these services algorithms are already multiplying around us.

The CAIS model bridges the gap between real-world AI and machine learning development, with its practical safety considerations, and the speculative world of superintelligent agents and the safety considerations involved in controlling their behavior. We should keep our minds open as to what form AI and machine learning will take, and how they will influence our societies—and we must take care to ensure that the systems we create don’t end up forcing us all to live in a world of unintended consequences.

Image Credit: MF Production/Shutterstock.com

Posted in Human Robots

#435145 How Big Companies Can Simultaneously Run ...

We live in the age of entrepreneurs. New startups seem to appear out of nowhere and challenge not only established companies, but entire industries. Where startup unicorns were once mythical creatures, they now seem abundant, increasing not only in number but also in the speed with which they reach the one-billion-dollar valuation that confers the title.

But no matter how well things go for innovative startups, how many new success stories we hear, and how much space they take up in the media, the story that they are the best or only source of innovation isn’t entirely accurate.

Established organizations, or legacy organizations, can be incredibly innovative too. And while innovation is much more difficult in established organizations than in startups, because they have much more complex systems, nobody is more likely to succeed in their innovation efforts than established organizations.

Unlike startups, established organizations have all the resources. They have money, customers, data, suppliers, partners, and infrastructure, which put them in a far better position to transform new ideas into concrete, value-creating, successful offerings than startups.

However, for established organizations, becoming an innovation champion in these times of rapid change requires new rules of engagement.

Many organizations commit the mistake of engaging in innovation as if it were a homogeneous thing that should be approached in the same way every time, regardless of its purpose. In my book, Transforming Legacy Organizations, I argue that innovation in established organizations must actually be divided into three different tracks: optimizing, augmenting, and mutating innovation.

All three are important, and to complicate matters further, organizations must execute all three types of innovation at the same time.

Optimizing Innovation
The first track is optimizing innovation. This type of innovation is the majority of what legacy organizations already do today. It is, metaphorically speaking, the extra blade on the razor. A razor manufacturer might launch a new razor that has not just three, but four blades, to ensure an even better, closer, and more comfortable shave. Then one or two years later, they say they are now launching a razor that has not only four, but five blades for an even better, closer, and more comfortable shave. That is optimizing innovation.

Adding extra blades on the razor is where the established player reigns.

No startup with so much as a modicum of sense would even try to beat the established company at this type of innovation. And this continuous optimization, on both the operational and customer-facing sides, is important. In the short term, it pays the rent. But it’s far from enough. There are limits to how many blades a razor needs, and optimizing innovation only improves upon the past.

Augmenting Innovation
Established players must also go beyond optimization and prepare for the future through augmenting innovation.

The digital transformation projects that many organizations are initiating can be characterized as augmenting innovation. In the first instance, it is about upgrading core offerings and processes from analog to digital. Or, if you’re born digital, you’ve probably had to augment the core to become mobile-first. Perhaps you have even entered the next augmentation phase, which involves implementing artificial intelligence. Becoming AI-first, like the Amazons, Microsofts, Baidus, and Googles of the world, requires great technological advancements. And it’s difficult. But technology may, in fact, be a minor part of the task.

The biggest challenge for augmenting innovation is probably culture.

Only legacy organizations that manage to transform their cultures from status quo cultures—cultures with a preference for things as they are—into cultures full of incremental innovators can thrive in constant change.

To create a strong innovation culture, an organization needs to thoroughly understand its immune systems. These are the mechanisms that protect the organization and operate around the clock to keep it healthy and stable, just as the body’s immune system operates to keep the body healthy and stable. But in a rapidly changing world, many of these defense mechanisms are no longer appropriate and risk weakening organizations’ innovation power.

When talking about organizational immune systems, there is a clear tendency to simply point to the individual immune system, people’s unwillingness to change.

But this is too simplistic.

Of course, there is human resistance to change, but the organizational immune system, consisting of a company’s key performance indicators (KPIs), rewards systems, legacy IT infrastructure and processes, and investor and shareholder demands, is far more important. So is the organization’s societal immune system, such as legislative barriers, legacy customers and providers, and economic climate.

Luckily, there are many culture hacks that organizations can apply to strengthen their innovation cultures by upgrading their physical and digital workspaces, transforming their top-down work processes into decentralized, agile ones, and empowering their employees.

Mutating Innovation
Upgrading your core and preparing for the future through augmenting innovation is crucial if you want success in the medium term. But to win in the long run and be as successful, or more so, 20 to 30 years from now, you need to invent the future, and challenge your core, through mutating innovation.

This requires involving radical innovators who have a bold focus on experimenting with that which is not currently understood and for which a business case cannot be prepared.

Here you must also physically move away from the core organization when you initiate and run such initiatives. This is sometimes called “innovation on the edges” because the initiatives will not have a chance at succeeding within the core. It will be too noisy as they challenge what currently exists—precisely what the majority of the organization’s employees are working to optimize or augment.

Forward-looking organizations experiment to mutate their core through “X divisions,” sometimes called skunk works or innovation labs.

Lowe’s Innovation Labs, for instance, worked with startups to build in-store robot assistants and zero-gravity 3D printers to explore the future. Mutating innovation might include pursuing partnerships across all imaginable domains or establishing brand new companies, rather than traditional business units, as we see automakers such as Toyota now doing to build software for autonomous vehicles. Companies might also engage in radical open innovation by sponsoring others’ ingenuity. Japan’s top airline ANA is exploring a future of travel that does not involve flying people from point A to point B via the ANA Avatar XPRIZE competition.

Increasing technological opportunities challenge the core of any organization but also create unprecedented potential. No matter what product, service, or experience you create, you can’t rest on your laurels. You have to bring yourself to a position where you have a clear strategy for optimizing, augmenting, and mutating your core and thus transforming your organization.

It’s not an easy job. But, hey, if it were easy, everyone would be doing it. Those who make it, on the other hand, will be the innovation champions of the future.

Image Credit: rock-the-stock / Shutterstock.com


Posted in Human Robots

#435119 Are These Robots Better Than You at ...

Robot technology is evolving at breakneck speed. SoftBank’s Pepper is found in companies across the globe and is rapidly improving its conversation skills. Telepresence robots open up new opportunities for remote working, while Boston Dynamics’ Handle robot could soon (literally) take a load off human colleagues in warehouses.

But warehouses and offices aren’t the only places where robots are lining up next to humans.

Toyota’s Cue 3 robot recently showed off its basketball skills, putting up better numbers than the NBA’s most accurate three-point shooter, the Golden State Warriors’ Steph Curry.

Cue 3 is still some way from being ready to take on Curry, or even amateur basketball players, in a real game. However, it is the latest member of a growing cast of robots challenging human dominance in sports.

As these robots continue to develop, they not only exemplify the speed of exponential technology development, but also how those technologies are improving human capabilities.

Meet the Contestants
The list of robots in sports is surprisingly long and diverse. There are robot skiers, tumblers, soccer players, sumos, and even robot game jockeys. Introductions to a few of them are in order.

Robot: Forpheus
Sport: Table tennis
Intro: Looks like something out of War of the Worlds, equipped with a ping pong bat instead of a death ray.
Ability level: Capable of counteracting spin shots and good enough to beat many beginners.

Robot: Sumo bot
Sport: Sumo wrestling
Intro: Hyper-fast, hyper-aggressive. Think robot equivalent to an angry wasp on six cans of Red Bull crossed with a very small tank.
Ability level: Flies around the ring way faster than any human sumo. Tends to drive straight out of the ring at times.

Robot: Cue 3
Sport: Basketball
Intro: Stands at an imposing 6 feet 10 inches, so pretty much built for the NBA. Looks a bit like something that belongs in a video game.
Ability level: A 62.5 percent three-point shooting percentage, which is better than Steph Curry’s; less mobile than Charles Barkley in his current form.

Robot: RoboCup robots
Sport: Soccer
Intro: The future of soccer. If everything goes to plan, a team of robots will take on the Lionel Messis and Cristiano Ronaldos of 2050 and beat them in a full 11 vs. 11 game.
Ability level: Currently play soccer more like the six-year-olds I used to coach than Lionel Messi.

The Limiting Factor
The skill level of all the robots above is impressive, and they are doing things that no human contestant can. The sumo bots’ inhuman speed is self-evident. Forpheus’ ability to track the ball with two cameras while simultaneously tracking its opponent with two other cameras requires a look at the spec sheet, but is similarly beyond human capability. While Cue 3 can’t move, it makes shots from the mid-court logo look easy.

Robots are performing at a level that was confined to the realm of science fiction at the start of the millennium. The speed of development indicates that in the near future, my national team soccer coach would likely call up a robot instead of me (he must have lost my number since he hasn’t done so yet. It’s the only logical explanation), and he’d be right to do so.

It is also worth considering that many current sports robots have a humanoid form, which limits their ability. If engineers were to optimize robot design to outperform humans in specific categories, many world champions would likely already be metallic.

Swimming is perhaps one of the most obvious. Even Michael Phelps would struggle to keep up with a torpedo-shaped robot, and if you beefed up a sumo robot to human size, human sumos might impress you most by running away from it at something close to Usain Bolt’s 100-meter speed.

In other areas, the playing field for humans and robots is rapidly leveling. One likely candidate for the first head-to-head competitions is racing, where self-driving cars from the Roborace League could perhaps soon be ready to race the likes of Lewis Hamilton.

Tech Pushing Humans
Perhaps one of the biggest reasons why it may still take some time for robots to surpass us is that they, along with other exponential technologies, are already making us better at sports.

In Japan, elite volleyball players use a robot to practice their attacks. Some American football players also practice against robot opponents and hone their skills using VR.

On the sidelines, AI is being used to analyze and improve athletes’ performance, and we may soon see the first AI coaches, not to mention referees.

We may even compete in games dreamt up by our electronic cousins. SpeedGate, a new game created by an AI that studied 400 different sports, is a prime example of how that could quickly become a possibility.

However, we will likely still need to make the final call on what constitutes a good game. The AI that created SpeedGate reportedly also suggested less suitable pastimes, like underwater parkour and a game that featured exploding frisbees. Both of these could be fun…but only if you’re as sturdy as a robot.

Image Credit: RoboCup Standard Platform League 2018, ©The Robocup Federation. Published with permission of reproduction granted by the RoboCup Federation.

Posted in Human Robots

#435106 Could Artificial Photosynthesis Help ...

Plants are the planet’s lungs, but they’re struggling to keep up due to rising CO2 emissions and deforestation. Engineers are giving them a helping hand, though, by augmenting their capacity with new technology and creating artificial substitutes to help them clean up our atmosphere.

Imperial College London, one of the UK’s top engineering schools, recently announced that it was teaming up with startup Arborea to build the company’s first outdoor pilot of its BioSolar Leaf cultivation system at the university’s White City campus in West London.

Arborea is developing large solar panel-like structures that house microscopic plants and can be installed on buildings or open land. The plants absorb light and carbon dioxide as they photosynthesize, removing greenhouse gases from the air and producing organic material, which can be processed to extract valuable food additives like omega-3 fatty acids.

The idea of growing algae to produce useful materials isn’t new, but Arborea’s pitch seems to be flexibility and affordability. The more conventional approach is to grow algae in open ponds, which are less efficient and open to contamination, or in photo-bioreactors, which typically require CO2 to be piped in rather than getting it from the air and can be expensive to run.

There’s little detail on how the technology deals with issues like nutrient supply and harvesting, or how efficient it is. The company claims it can remove carbon dioxide as fast as 100 trees using the surface area of just a single tree, but there’s no published research to back that up, and it’s hard to compare the surface area of flat panels to that of a complex object like a tree. If you flattened out every inch of a tree’s surface, it would cover a surprisingly large area.

Nonetheless, the ability to install these panels directly on buildings could present a promising way to soak up the huge amount of CO2 produced in our cities by transport and industry. And Arborea isn’t the only one trying to give plants a helping hand.

For decades researchers have been working on ways to use light-activated catalysts to split water into oxygen and hydrogen fuel, and more recently there have been efforts to fuse this with additional processes to combine the hydrogen with carbon from CO2 to produce all kinds of useful products.

Most notably, in 2016 Harvard researchers showed that water-splitting catalysts could be augmented with bacteria that combine the resulting hydrogen with CO2 to create oxygen and biomass, fuel, or other useful products. The approach was more efficient than plants at turning CO2 into fuel and was built using cheap materials, but turning it into a commercially viable technology will take time.

Not everyone is looking to mimic or borrow from biology in their efforts to suck CO2 out of the atmosphere. There’s been a recent glut of investment in startups working on direct-air capture (DAC) technology, which had previously been written off for using too much power and space to be practical. The looming climate change crisis appears to be rewriting some of those assumptions, though.

Most approaches aim to use the concentrated CO2 to produce synthetic fuels or other useful products, creating a revenue stream that could help improve their commercial viability. But we look increasingly likely to surpass the safe greenhouse gas limits, so attention is instead turning to carbon-negative technologies.

That means capturing CO2 from the air and then putting it into long-term storage. One way could be to grow lots of biomass and then bury it, mimicking the process that created fossil fuels in the first place. Or DAC plants could pump the CO2 they produce into deep underground wells.

But the former would take up unreasonably large amounts of land to make a significant dent in emissions, while the latter would require huge amounts of already scant and expensive renewable power. According to a recent analysis, artificial photosynthesis could sidestep these issues because it’s up to five times more efficient than its natural counterpart and could be cheaper than DAC.

Whether the technology will develop quickly enough for it to be deployed at scale and in time to mitigate the worst effects of climate change remains to be seen. Emissions reductions certainly present a more sure-fire way to deal with the problem, but nonetheless, cyborg plants could soon be a common sight in our cities.

Image Credit: GiroScience / Shutterstock.com

Posted in Human Robots