Tag Archives: lines

#435070 5 Breakthroughs Coming Soon in Augmented ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

In this third installment of my Convergence Catalyzer series, I’ll be synthesizing key insights from my annual entrepreneurs’ mastermind event, Abundance 360. This five-blog series looks at 3D printing, artificial intelligence, VR/AR, energy and transportation, and blockchain.

Today, let’s dive into virtual and augmented reality.

Today’s most prominent tech giants are leaping onto the VR/AR scene, each driving forward new and upcoming product lines. Think: Microsoft’s HoloLens, Facebook’s Oculus, Amazon’s Sumerian, and Google’s Cardboard (Apple plans to release a headset by 2021).

And as plummeting prices meet exponential advancements in VR/AR hardware, this burgeoning disruptor is on its way out of the early adopters’ market and into the majority of consumers’ homes.

My good friend Philip Rosedale is my go-to expert on AR/VR and one of the foremost creators of today’s most cutting-edge virtual worlds. After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip went on to co-found High Fidelity, which explores the future of next-generation shared VR.

In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.

Let’s dive in…

Top 5 Predictions for VR/AR Breakthroughs (2019-2024)
“If you think you kind of understand what’s going on with that tech today, you probably don’t,” says Philip. “We’re still in the middle of landing the airplane of all these new devices.”

(1) Transition from PC-based to standalone mobile VR devices

Historically, VR devices have relied on PC connections, usually involving wires and clunky hardware that restrict a user’s field of motion. However, as VR enters the dematerialization stage, we are about to witness the rapid rise of a standalone and highly mobile VR experience economy.

Oculus Go, the leading standalone mobile VR device on the market, requires only a mobile app for setup and can be transported anywhere with WiFi.

With a consumer audience in mind, the 32GB headset is priced at $200 and shares an app ecosystem with Samsung’s Gear VR. While Google’s Daydream View also operates without a PC, it requires a docked mobile phone rather than the Oculus Go’s built-in screen.

In the AR space, Microsoft’s standalone HoloLens 2 leads the way in providing tetherless experiences.

Freeing headsets from the constraints of heavy hardware will make VR/AR increasingly interactive and transportable, a seamless add-on whenever, wherever. Within a matter of years, it may be as simple as carrying lightweight VR goggles wherever you go and throwing them on at a moment’s notice.

(2) Wide field-of-view AR displays

Microsoft’s HoloLens 2 leads the AR industry in headset comfort and display quality. The most significant issue with its predecessor was a limited rectangular field of view (FOV).

By implementing laser technology to create a microelectromechanical systems (MEMS) display, however, the HoloLens 2 can position waveguides in front of users’ eyes, directed by mirrors. Enlarging the image is then accomplished by shifting the angles of these mirrors. Coupled with a resolution of 47 pixels per degree, the HoloLens 2 has now doubled its predecessor’s FOV. Microsoft anticipates releasing the headset by the end of this year at a $3,500 price point, first targeting businesses and eventually rolling it out to consumers.

Magic Leap provides a similar FOV but with lower resolution than the HoloLens 2. The Meta 2 boasts an even wider 90-degree FOV, but requires a cable attachment. The race to achieve the natural human 120-degree horizontal FOV continues.

“The technology to expand the field of view is going to make those devices much more usable by giving you bigger than a small box to look through,” Rosedale explains.

(3) Mapping of real world to enable persistent AR ‘mirror worlds’

‘Mirror worlds’ are alternative dimensions of reality that can blanket a physical space. While seated in your office, the floor beneath you could dissolve into a calm lake and each desk into a sailboat. In the classroom, mirror worlds would convert pencils into magic wands and tabletops into touch screens.

Pokémon Go provides an introductory glimpse into the mirror world concept and its massive potential to unite people in real action.

To create these mirror worlds, AR headsets must precisely understand the architecture of the surrounding world. Rosedale predicts the scanning accuracy of devices will improve rapidly over the next five years to make these alternate dimensions possible.

(4) 5G mobile devices reduce latency to imperceptible levels

Verizon has already launched 5G networks in Minneapolis and Chicago, compatible with the Moto Z3. Sprint plans to follow with its own 5G launch in May. Samsung, LG, Huawei, and ZTE have all announced upcoming 5G devices.

“5G is rolling out this year and it’s going to materially affect particularly my work, which is making you feel like you’re talking to somebody else directly face to face,” explains Rosedale. “5G is critical because currently the cell devices impose too much delay, so it doesn’t feel real to talk to somebody face to face on these devices.”

To operate seamlessly from anywhere on the planet, standalone VR/AR devices will require a strong 5G network. Enhancing real-time connectivity in VR/AR will transform the communication methods of tomorrow.

(5) Eye-tracking and facial expressions built in for full natural communication

Companies like Pupil Labs and Tobii provide eye tracking hardware add-ons and software to VR/AR headsets. This technology allows for foveated rendering, which renders a given scene in high resolution only in the fovea region, while the peripheral regions appear in lower resolution, conserving processing power.
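To make the foveated rendering idea concrete, here is a minimal sketch in Python; the `foveated_render` function, its parameters, and the random test frame are illustrative assumptions, not any headset vendor’s pipeline.

```python
import numpy as np

def foveated_render(frame: np.ndarray, gaze_xy: tuple, fovea_radius: int = 120,
                    downscale: int = 4) -> np.ndarray:
    """Toy foveated rendering: keep full resolution inside the fovea circle
    around the gaze point; show the periphery at reduced resolution."""
    h, w = frame.shape[:2]
    # Cheap low-res periphery: downsample by striding, then repeat back to full size.
    low = frame[::downscale, ::downscale]
    periphery = np.repeat(np.repeat(low, downscale, axis=0), downscale, axis=1)[:h, :w]
    # Boolean mask selecting pixels within the fovea radius of the gaze point.
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    fovea = (xs - gx) ** 2 + (ys - gy) ** 2 <= fovea_radius ** 2
    out = periphery.copy()
    out[fovea] = frame[fovea]  # restore full detail only where the eye is looking
    return out

# Example: a 720p RGB frame with the user gazing near the center.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
rendered = foveated_render(frame, gaze_xy=(640, 360))
```

In a real headset the gaze point would come from the eye tracker at every frame, and the periphery would be rendered at lower resolution in the first place rather than downsampled afterward.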

As seen in the HoloLens 2, eye tracking can also be used to identify users and customize lens widths to provide a comfortable, personalized experience for each individual.

According to Rosedale, “The fundamental opportunity for both VR and AR is to improve human communication.” He points out that current VR/AR headsets miss many of the subtle yet important aspects of communication. Eye movements and microexpressions provide valuable insight into a user’s emotions and desires.

Coupled with emotion-detecting AI software, such as Affectiva’s, VR/AR devices might soon convey much more richly textured and expressive interactions between any two people, transcending physical boundaries and even language gaps.

Final Thoughts
As these promising trends begin to transform the market, VR/AR will undoubtedly revolutionize our lives… possibly to the point at which our virtual worlds become just as consequential and enriching as our physical world.

A boon for next-gen education, VR/AR will empower youth and adults alike with holistic learning that incorporates social, emotional, and creative components through visceral experiences, storytelling, and simulation. Traveling to another time, manipulating the insides of a cell, or even designing a new city will become daily phenomena of tomorrow’s classrooms.

In real estate, buyers will increasingly make decisions through virtual tours. Corporate offices might evolve into spaces that only exist in ‘mirror worlds’ or grow virtual duplicates for remote workers.

In healthcare, accuracy of diagnosis will skyrocket, while surgeons gain access to digital aids as they conduct life-saving procedures. Or take manufacturing, wherein training and assembly will become exponentially more efficient as visual cues guide complex tasks.

In the mere matter of a decade, VR and AR will unlock limitless applications for new and converging industries. And as virtual worlds converge with AI, 3D printing, computing advancements and beyond, today’s experience economies will explode in scale and scope. Prepare yourself for the exciting disruption ahead!

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements, and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Mariia Korneeva / Shutterstock.com


#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience will positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the inferotemporal (IT) cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow extract visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures are based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists as they build AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
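For readers who want the shape of that loop, here is a schematic sketch in Python. `generate_image` and `neuron_response` are hypothetical stand-ins for the study’s deep generative network and the recorded neuron’s firing rate, and the selection scheme is simplified from XDREAM’s actual genetic algorithm.

```python
import random

def generate_image(parents=None):
    """Placeholder: sample (or mutate/recombine) a latent code and decode it.
    Here an 'image' is just a toy latent vector."""
    return [random.random() for _ in range(10)]

def neuron_response(image):
    """Placeholder: firing rate of the recorded IT neuron for this image."""
    return sum(image)  # toy fitness function

population = [generate_image() for _ in range(40)]   # initial set of 40 images
for generation in range(250):                        # 250 generations
    ranked = sorted(population, key=neuron_response, reverse=True)
    elites = ranked[:10]                             # top 10 activators survive
    offspring = [generate_image(parents=elites) for _ in range(30)]  # 30 new images
    population = elites + offspring                  # next generation of 40
best = max(population, key=neuron_response)          # the neuron's 'favorite' image
```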

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do seem to rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

The result suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also concluded that today’s ANNs have a degree of understanding and generalization. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com


#434786 AI Performed Like a Human on a Gestalt ...

Dr. Been Kim wants to rip open the black box of deep learning.

A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.

The more interpretable ANNs are, the reasoning goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.

There are already several ways to tap into ANN reasoning, but Kim’s inspiration for unraveling the AI black box came from an entirely different field: cognitive psychology. The field aims to discover fundamental rules of how the human mind—essentially also a tantalizing black box—operates, Kim wrote with her colleagues.

In a new paper uploaded to the pre-publication server arXiv, the team described a way to essentially perform a human cognitive test on ANNs. The test probes how we automatically complete gaps in what we see, so that they form entire objects—for example, perceiving a circle from a bunch of loose dots arranged along a clock face. Psychologists dub this the “law of completion,” a highly influential idea that led to explanations of how our minds generalize data into concepts.

Because deep neural networks in machine vision loosely mimic the structure and connections of the visual cortex, the authors naturally asked: do ANNs also exhibit the law of completion? And what does that tell us about how an AI thinks?

Enter the Germans
The law of completion is part of a series of ideas from Gestalt psychology. Back in the 1920s, long before the advent of modern neuroscience, a group of German experimental psychologists asked: in this chaotic, flashy, unpredictable world, how do we piece together input in a way that leads to meaningful perceptions?

The result is a group of principles known together as the Gestalt effect: that the mind self-organizes to form a global whole. In the more famous words of Gestalt psychologist Kurt Koffka, our perception forms a whole that’s “something else than the sum of its parts.” Not greater than; just different.

Although the theory has its critics, subsequent studies in humans and animals suggest that the law of completion happens on both the cognitive and neuroanatomical level.

Take a look at the drawing below. You immediately “see” a shape that’s actually the negative: a triangle or a square (A and B). Or you further perceive a 3D ball (C), or a snake-like squiggle (D). Your mind fills in blank spots, so that the final perception is more than just the black shapes you’re explicitly given.

Image Credit: Wikimedia Commons contributors, the free media repository.
Neuroscientists now think that the effect comes from how our visual system processes information. Arranged in multiple layers and columns, lower-level neurons—those first to wrangle the data—tend to extract simpler features such as lines or angles. In Gestalt speak, they “see” the parts.

Then, layer by layer, perception becomes more abstract, until higher levels of the visual system directly interpret faces or objects—or things that don’t really exist. That is, the “whole” emerges.

The Experiment Setup
Inspired by these classical experiments, Kim and team developed a protocol to test the Gestalt effect on two feed-forward ANNs: one simple, the other—the “Inception V3” network—far more complex and widely used in the machine vision community.

The main idea is similar to the triangle drawings above. First, the team generated three datasets: one set shows complete, ordinary triangles. The second, the “illusory” set, shows triangles with the edges removed but the corners intact. Thanks to the Gestalt effect, to us humans these generally still look like triangles. The third set also shows only incomplete triangle corners. But here, the corners are randomly rotated so that we can no longer imagine a line connecting them—hence, no more triangle.

To generate a dataset large enough to tease out small effects, the authors changed the background color, image rotation, and other aspects of the dataset. In all, they produced nearly 1,000 images to test their ANNs on.

“At a high level, we compare an ANN’s activation similarities between the three sets of stimuli,” the authors explained. The process is two steps: first, train the AI on complete triangles. Second, test them on the datasets. If the response is more similar between the illusory set and the complete triangle—rather than the randomly rotated set—it should suggest a sort of Gestalt closure effect in the network.
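As a rough illustration of that comparison (not the paper’s code), the sketch below uses a pretrained Inception V3 from torchvision to embed each stimulus set and compares cosine similarities; the random tensors stand in for the three preprocessed datasets, and the choice of layer is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Load a pretrained Inception V3 and capture a late hidden layer's activations.
model = models.inception_v3(weights="DEFAULT").eval()
activations = {}

def hook(_module, _inputs, output):
    activations["feat"] = output.flatten(1)  # (N, features)

model.Mixed_7c.register_forward_hook(hook)   # one of Inception V3's final blocks

def embed(batch: torch.Tensor) -> torch.Tensor:
    """Mean activation vector for a whole stimulus set."""
    with torch.no_grad():
        model(batch)
    return activations["feat"].mean(dim=0)

# Stand-ins for the three datasets, shape (N, 3, 299, 299) after preprocessing.
complete = torch.randn(16, 3, 299, 299)
illusory = torch.randn(16, 3, 299, 299)
rotated  = torch.randn(16, 3, 299, 299)

ref = embed(complete)
sim_illusory = F.cosine_similarity(ref, embed(illusory), dim=0)
sim_rotated  = F.cosine_similarity(ref, embed(rotated), dim=0)
# A closure-like effect would show up as sim_illusory > sim_rotated.
print(float(sim_illusory), float(sim_rotated))
```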

Machine Gestalt
Right off the bat, the team got their answer: yes, ANNs do seem to exhibit the law of closure.

When trained on natural images, the networks classified the illusory set as triangles more readily than networks with randomized connection weights or networks trained on white noise did.

When the team dug into the “why,” things got more interesting. The ability to complete an image correlated with the network’s ability to generalize.

Humans subconsciously do this constantly: anything with a handle made out of ceramic, regardless of shape, could easily be a mug. ANNs still struggle to grasp common features—clues that immediately tell us “hey, that’s a mug!” But when they do, it sometimes allows the networks to better generalize.

“What we observe here is that a network that is able to generalize exhibits…more of the closure effect [emphasis theirs], hinting that the closure effect reflects something beyond simply learning features,” the team wrote.

What’s more, remarkably similar to the visual cortex, “higher” levels of the ANNs showed more of the closure effect than lower layers, and—perhaps unsurprisingly—the more layers a network had, the more it exhibited the closure effect.

As the networks learned, their ability to map out objects from fragments also improved. When the team messed around with the brightness and contrast of the images, the AI still learned to see the forest for the trees.

“Our findings suggest that neural networks trained with natural images do exhibit closure,” the team concluded.

AI Psychology
That’s not to say that ANNs recapitulate the human brain. As Google’s Deep Dream, an effort to coax AIs into spilling what they’re perceiving, clearly demonstrates, machine vision sees some truly weird stuff.

At the same time, because they’re modeled after the human visual cortex, perhaps it’s not all that surprising that these networks also exhibit higher-level properties inherent to how we process information.

But to Kim and her colleagues, that’s exactly the point.

“The field of psychology has developed useful tools and insights to study human brains—tools that we may be able to borrow to analyze artificial neural networks,” they wrote.

By tweaking these tools to better analyze machine minds, the authors were able to gain insight on how similarly or differently they see the world from us. And that’s the crux: the point isn’t to say that ANNs perceive the world sort of, kind of, maybe similar to humans. It’s to tap into a wealth of cognitive psychology tools, established over decades using human minds, to probe that of ANNs.

“The work here is just one step along a much longer path,” the authors conclude.

“Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”

Image Credit: Popova Alena / Shutterstock.com


#434753 Top Takeaways From The Economist ...

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is fast, new, and scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain
There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and we haven’t heard of many use cases. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it goes through about 80 different parties. Seventy percent of the relevant data is replicated and prone to error, with paper-based documents often to blame. Blockchain provides a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.
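The tamper-evidence that makes such supply-chain records trustworthy can be illustrated in a few lines. Below is a minimal, assumption-laden sketch of the hash-chain idea in Python; real supply-chain blockchains add consensus, digital signatures, and distributed peers.

```python
import hashlib, json, time

def block(prev_hash: str, record: dict) -> dict:
    """Append a record that commits to the hash of the previous block."""
    body = {"prev": prev_hash, "ts": time.time(), "record": record}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def valid(chain: list) -> bool:
    """Recompute every hash; any altered record or broken link fails."""
    for i, b in enumerate(chain):
        body = {k: v for k, v in b.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [block("0" * 64, {"shipment": "leafy greens", "origin": "farm A"})]
chain.append(block(chain[-1]["hash"], {"customs": "cleared", "port": "Oakland"}))
assert valid(chain)
chain[0]["record"]["origin"] = "farm B"   # tamper with an earlier entry...
assert not valid(chain)                   # ...and verification fails
```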

Beth Devin, head of Citi Ventures’ innovation network, added “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology
Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality
Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications? “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing
If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space
Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has long been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in subsequent years the company has drastically cut the cost of spaceflight. More importantly, it published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work
From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes
This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been shown to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com


#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will get more and more used to robots making independent decisions in our homes, streets, and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a few such test sites around the EU, our whole focus is around people with special needs—and how robots can help them by interacting with connected devices in a smart home.

Suppose a doorbell rings that has smart video features. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.

Equally, it could offer protection. It could inform them the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or not heard.

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing, and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds which are crucially important for making users more accepting of the support and making it more effective.
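As a concrete flavor of how such a robot might plug into a smart home, here is a hedged sketch using MQTT, a common IoT messaging protocol (paho-mqtt 1.x API). The broker address, topic names, and the `Robot` stub are all hypothetical placeholders, not a product API.

```python
import paho.mqtt.client as mqtt

class Robot:
    """Stand-in for a real robot platform's navigation and speech interfaces."""
    def go_to(self, location: str) -> None:
        print(f"[robot] navigating to {location}")
    def say(self, text: str) -> None:
        print(f"[robot] says: {text}")

robot = Robot()

def locate_user() -> str:
    return "living room"  # placeholder: would come from presence sensors

def on_message(client, userdata, msg):
    # React to smart-home events published by connected devices.
    if msg.topic == "home/doorbell":
        robot.go_to(locate_user())
        robot.say(f"Someone is at the door: {msg.payload.decode()}")
    elif msg.topic == "home/oven/state" and msg.payload == b"on_unattended":
        robot.say("It looks like the oven has been left on.")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local")  # hypothetical broker on the home network
client.subscribe([("home/doorbell", 0), ("home/oven/state", 0)])
client.loop_forever()
```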

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotic League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies are looking at the Internet of Things, for example; experimenting with wireless sensors to collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is developing such systems. The aim is to reduce the costs and the risks of humans working in remote, hazardous locations.

ORCA tests a drone robot. Image Credit: ORCA
Working underwater is particularly challenging, since radio waves don’t travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (roughly 1,500 meters per second versus 300 million meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
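A quick back-of-the-envelope calculation shows how large that gap is; the 3 km link distance below is an arbitrary illustrative choice.

```python
# One-way propagation delay over a 3 km link: underwater acoustic vs. radio.
SPEED_SOUND_SEAWATER = 1_500       # meters per second (approximate)
SPEED_RADIO = 300_000_000          # meters per second (approximate)

distance_m = 3_000
acoustic_delay = distance_m / SPEED_SOUND_SEAWATER   # 2.0 seconds
radio_delay = distance_m / SPEED_RADIO               # 0.00001 seconds

print(f"acoustic: {acoustic_delay:.3f} s, radio: {radio_delay * 1e6:.0f} µs")
print(f"acoustic is {acoustic_delay / radio_delay:,.0f}x slower")  # 200,000x
```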

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. There needs to be more effort to create a joint community, such as the dedicated workshops for such collaboration that we organized at the European Robotics Forum and the IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry / Shutterstock.com
