Tag Archives: wall

#437491 3.2 Billion Images and 720,000 Hours of ...

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330.”

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.

A FALSE video claiming Biden forgot what state he was in was viewed more than 1 million times on Twitter in the past 24 hours

In the video, Biden says “Hello, Minnesota.”

The event did indeed happen in MN — signs on stage read MN

But false video edited signs to read Florida pic.twitter.com/LdHQVaky8v

— Donie O'Sullivan (@donie) November 1, 2020

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defense—and the only one you can control—is you.

Seeing Shouldn’t Always Be Believing
Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organizations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarized environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25 percent of journalists globally use social media content verification tools, according to the International Centre for Journalists.

Could You Spot a Doctored Image?
Consider this photo of Martin Luther King Jr.

Dr. Martin Luther King Jr. Giving the middle finger #DopeHistoricPics pic.twitter.com/5W38DRaLHr

— Dope Historic Pics (@dopehistoricpic) December 20, 2013

This altered image clones part of the background over King Jr.'s finger, so it looks like he's flipping off the camera. It has been shared as genuine on Twitter, Reddit, and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

“Those who love peace must learn to organize as effectively as those who love war.”
Dr. Martin Luther King Jr.

This photo was taken on June 19th, 1964, showing Dr King giving a peace sign after hearing that the civil rights bill had passed the senate. @snopes pic.twitter.com/LXHmwMYZS5

— Willie's Reserve (@WilliesReserve) January 21, 2019

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

You mean this guy who’s been photoshopped into three separate photos released by Fox News? pic.twitter.com/fAXpIKu77a

— Zander Yates ザンダーイェーツ (@ZanderYates) June 13, 2020

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Image is more powerful than screams of Greta. A silent girl is holding a koala. She looks straight at you from the waters of the ocean where they found a refuge. She is wearing a breathing mask. A wall of fire is behind them. I do not know the name of the photographer #Australia pic.twitter.com/CrTX3lltdh

— EVC Music (@EVCMusicUK) January 6, 2020

Fully and Partially Synthetic Content
Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

These people don't exist; they're just images generated by artificial intelligence. Generated Photos, CC BY

Editing Pixel Values and the (not so) Simple Crop
Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). AP

But what about edits that only alter pixel values such as color, saturation, or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine's cover considerably "darkened" OJ Simpson's police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded, "No racial implication was intended, by Time or by the artist."

Tools for Debunking Digital Fakery
For those of us who don’t want to be duped by visual mis/disinformation, there are tools available—although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

Relies on unedited copies of the media already being online.
Doesn’t search the entire web.
Doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
Returns only exact matches or near-matches, so it's not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it's an entirely different one (see the sketch below).
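To see why near-match search is so brittle, here is a minimal sketch using the open-source Python library imagehash (not anything Google uses); it compares a perceptual hash of an image against a mirrored copy of itself. The file name is a hypothetical placeholder.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

original = Image.open("photo.jpg")  # hypothetical input file
flipped = original.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # Pillow >= 9.1

# A perceptual hash summarizes an image's structure in 64 bits;
# subtracting two hashes gives the Hamming distance between them.
h_original = imagehash.phash(original)
h_flipped = imagehash.phash(flipped)

# The mirrored copy typically lands far away in hash space, so naive
# near-duplicate matching no longer recognizes it as the "same" picture.
print(h_original - h_flipped)
```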

Most Reliable Tools Are Sophisticated
Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive, and need specialized expertise.
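One widely used manual check of this kind is error level analysis (ELA): re-save a JPEG at a known quality and look at how differently each region recompresses, since areas edited after the last save often stand out. A minimal sketch follows (the file names are hypothetical, and, as noted above, interpreting the output still takes expertise):

```python
# pip install pillow
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, brightness=15):
    """Re-save a JPEG at a fixed quality and amplify the per-pixel difference."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Regions that were pasted in or retouched after the photo's last save
    # tend to recompress differently and therefore appear brighter here.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(brightness)

error_level_analysis("suspect_photo.jpg").save("ela_result.png")
```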

Still, you can access work in this field by visiting sites such as Snopes.com—which has a growing repository of “fauxtography.”

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data,” but the image repositories used for this usually don’t contain the real-world images seen in the news.
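For a sense of what "training" such a detector involves, and why the training data matters so much, here is a heavily simplified transfer-learning sketch in PyTorch. The folder layout and labels are hypothetical, and a model trained this way only learns the kinds of manipulation its training set happens to contain.

```python
# pip install torch torchvision
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: data/authentic/*.jpg and data/manipulated/*.jpg
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer
# with a two-class head: authentic vs. manipulated.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, purely for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

If the training images look nothing like the photos circulating in the news, the classifier's judgments won't transfer, which is exactly the limitation described above.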

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

Was it originally made for social media?
How widely and for how long was it circulated?
What responses did it receive?
Who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Simon Steinberger from Pixabay

Posted in Human Robots

#437224 This Week’s Awesome Tech Stories From ...

VIRTUAL REALITY
How Holographic Tech Is Shrinking VR Displays to the Size of Sunglasses
Kyle Orland | Ars Technica
“…researchers at Facebook Reality Labs are using holographic film to create a prototype VR display that looks less like ski goggles and more like lightweight sunglasses. With a total thickness less than 9mm—and without significant compromises on field of view or resolution—these displays could one day make today’s bulky VR headset designs completely obsolete.”

TRANSPORTATION
Stock Surge Makes Tesla the World’s Most Valuable Automaker
Timothy B. Lee | Ars Technica
“It’s a remarkable milestone for a company that sells far fewer cars than its leading rivals. …But Wall Street is apparently very optimistic about Tesla’s prospects for future growth and profits. Many experts expect a global shift to battery electric vehicles over the next decade or two, and Tesla is leading that revolution.”

FUTURE OF FOOD
These Plant-Based Steaks Come Out of a 3D Printer
Adele Peters | Fast Company
“The startup, launched by cofounders who met while developing digital printers at HP, created custom 3D printers that aim to replicate meat by printing layers of what they call ‘alt-muscle,’ ‘alt-fat,’ and ‘alt-blood,’ forming a complex 3D model.”

AUTOMATION
The US Air Force Is Turning Old F-16s Into AI-Powered Fighters
Amit Katwala | Wired UK
“Maverick’s days are numbered. The long-awaited sequel to Top Gun is due to hit cinemas in December, but the virtuoso fighter pilots at its heart could soon be a thing of the past. The trustworthy wingman will soon be replaced by artificial intelligence, built into a drone, or an existing fighter jet with no one in the cockpit.”

ROBOTICS
NASA Wants to Build a Steam-Powered Hopping Robot to Explore Icy Worlds
Georgina Torbet | Digital Trends
“A bouncing, ball-like robot that’s powered by steam sounds like something out of a steampunk fantasy, but it could be the ideal way to explore some of the distant, icy environments of our solar system. …This round robot would be the size of a soccer ball, with instruments held in the center of a metal cage, and it would use steam-powered thrusters to make jumps from one area of terrain to the next.”

FUTURE
Could Teleporting Ever Work?
Daniel Kolitz | Gizmodo
“Have the major airlines spent decades suppressing teleportation research? Have a number of renowned scientists in the field of teleportation studies disappeared under mysterious circumstances? Is there a cork board at the FBI linking Delta Airlines, shady foreign security firms, and dozens of murdered research professors? …No. None of that is the case. Which begs the question: why doesn’t teleportation exist yet?”

ENERGY
Nuclear ‘Power Balls’ Could Make Meltdowns a Thing of the Past
Daniel Oberhaus | Wired
“Not only will these reactors be smaller and more efficient than current nuclear power plants, but their designers claim they’ll be virtually meltdown-proof. Their secret? Millions of submillimeter-size grains of uranium individually wrapped in protective shells. It’s called triso fuel, and it’s like a radioactive gobstopper.”

TECHNOLOGY
A Plan to Redesign the Internet Could Make Apps That No One Controls
Will Douglas Heaven | MIT Technology Review
“[John Perry] Barlow’s ‘home of Mind’ is ruled today by the likes of Google, Facebook, Amazon, Alibaba, Tencent, and Baidu—a small handful of the biggest companies on earth. Yet listening to the mix of computer scientists and tech investors speak at an online event on June 30 hosted by the Dfinity Foundation…it is clear that a desire for revolution is brewing.”

IMPACT
To Save the World, the UN Is Turning It Into a Computer Simulation
Will Bedingfield | Wired
“The UN has now announced its new secret recipe to achieve [its 17 sustainable development goals or SDGs]: a computer simulation called Policy Priority Inference (PPI). …PPI is a budgeting software—it simulates a government and its bureaucrats as they allocate money on projects that might move a country closer to an SDG.”

Image credit: Benjamin Suter / Unsplash

Posted in Human Robots

#436119 How 3D Printing, Vertical Farming, and ...

Food. What we eat, and how we grow it, will be fundamentally transformed in the next decade.

Already, indoor farming is projected to be a US$40.25 billion industry by 2022, with a compound annual growth rate of 9.65 percent. Meanwhile, the food 3D printing industry is expected to grow at an even higher rate, averaging 50 percent annual growth.

And converging exponential technologies—from materials science to AI-driven digital agriculture—are not slowing down. Today’s breakthroughs will soon allow our planet to boost its food production by nearly 70 percent, using a fraction of the real estate and resources, to feed 9 billion by mid-century.

What you consume, how it was grown, and how it will end up in your stomach will all ride the wave of converging exponentials, revolutionizing the most basic of human needs.

Printing Food
3D printing has already had a profound impact on the manufacturing sector. We are now able to print in hundreds of different materials, making anything from toys to houses to organs. However, we are finally seeing the emergence of 3D printers that can print food itself.

Redefine Meat, an Israeli startup, wants to tackle industrial meat production using 3D printers that can generate meat, no animals required. The printer takes in fat, water, and three different plant protein sources, using these ingredients to print a meat fiber matrix with trapped fat and water, thus mimicking the texture and flavor of real meat.

Slated for release in 2020 at a cost of $100,000, their machines are rapidly demonetizing and will begin by targeting clients in industrial-scale meat production.

Anrich3D aims to take this process a step further, 3D printing meals that are customized to your medical records, health data from your smart wearables, and patterns detected by your sleep trackers. The company plans to use multiple extruders for multi-material printing, allowing them to dispense each ingredient precisely for nutritionally optimized meals. Currently in an R&D phase at the Nanyang Technological University in Singapore, the company hopes to have its first taste tests in 2020.
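The "nutritionally optimized" part is, at bottom, a constrained optimization problem: choose how much of each printable ingredient to dispense so that nutrient targets are met at minimal cost. The sketch below uses a linear program for this; all ingredient names, nutrient values, and targets are made-up placeholders, not Anrich3D's data.

```python
# pip install numpy scipy
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-gram properties of three printable ingredient pastes.
ingredients = ["pea_protein", "oat_base", "veg_fat"]
protein = np.array([0.80, 0.10, 0.00])   # grams of protein per gram
calories = np.array([3.50, 3.80, 9.00])  # kcal per gram
cost = np.array([0.020, 0.004, 0.010])   # dollars per gram

# Minimize cost subject to: protein >= 30 g and 500 <= calories <= 700 kcal,
# with at most 150 g of any single ingredient.
A_ub = np.vstack([-protein, -calories, calories])
b_ub = np.array([-30.0, -500.0, 700.0])

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 150)] * 3)
print(dict(zip(ingredients, result.x.round(1))), f"cost ${result.fun:.2f}")
```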

These are only a few of the many 3D food printing startups springing into existence. The benefits from such innovations are boundless.

Not only will food 3D printing grant consumers control over the ingredients and mixtures they consume, but it is already beginning to enable new innovations in flavor itself, democratizing far healthier meal options in newly customizable cuisine categories.

Vertical Farming
Vertical farming, whereby food is grown in vertical stacks (in skyscrapers and buildings rather than outside in fields), marks a classic case of converging exponential technologies. Over just the past decade, the technology has surged from a handful of early-stage pilots to a full-grown industry.

Today, the average American meal travels 1,500-2,500 miles to get to your plate. As summed up by Worldwatch Institute researcher Brian Halweil, “We are spending far more energy to get food to the table than the energy we get from eating the food.” Additionally, the longer foods are out of the soil, the less nutritious they become, losing on average 45 percent of their nutrition before being consumed.

Yet beyond cutting down on time and transportation losses, vertical farming eliminates a whole host of issues in food production. Relying on hydroponics and aeroponics, vertical farms allow us to grow crops with 90 percent less water than traditional agriculture—which is critical for our increasingly thirsty planet.

Currently, the largest player around is Bay Area-based Plenty Inc. With over $200 million in funding from Softbank, Plenty is taking a smart tech approach to indoor agriculture. Plants grow on 20-foot-high towers, monitored by tens of thousands of cameras and sensors, optimized by big data and machine learning.

This allows the company to pack 40 plants in the space previously occupied by 1. The process also produces yields 350 times greater than outdoor farmland, using less than 1 percent as much water.

And rather than bespoke veggies for the wealthy few, Plenty's processes allow it to knock 20-35 percent off the costs of traditional grocery stores. To date, Plenty has its home base in South San Francisco, a 100,000 square-foot farm in Kent, Washington, an indoor farm in the United Arab Emirates, and recently started construction on over 300 farms in China.

Another major player is New Jersey-based Aerofarms, which can now grow two million pounds of leafy greens without sunlight or soil.

To do this, Aerofarms leverages AI-controlled LEDs to provide optimized wavelengths of light for each plant. Using aeroponics, the company delivers nutrients by misting them directly onto the plants’ roots—no soil required. Rather, plants are suspended in a growth mesh fabric made from recycled water bottles. And here too, sensors, cameras, and machine learning govern the entire process.
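In control terms the core loop is simple, even if the production systems are not. Here is a toy sketch of a sensor-driven misting loop; the sensor and actuator functions are hypothetical stand-ins, not AeroFarms' actual stack, which layers machine learning on top of this kind of feedback.

```python
import random
import time

def read_root_moisture():
    """Hypothetical stand-in for a root-zone humidity sensor (0.0-1.0)."""
    return random.uniform(0.3, 0.9)

def mist_roots(seconds):
    """Hypothetical actuator call that pulses the aeroponic misting nozzles."""
    print(f"misting nutrient solution for {seconds:.1f} s")

SETPOINT = 0.6
for _ in range(5):  # a real controller runs continuously
    moisture = read_root_moisture()
    if moisture < SETPOINT:
        # Mist longer the further the reading has fallen below the setpoint.
        mist_roots(2.0 * (SETPOINT - moisture) / SETPOINT)
    time.sleep(1)
```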

While 50-80 percent of the cost of vertical farming is human labor, autonomous robotics promises to solve that problem. Enter contenders like Iron Ox, a firm that has developed the Angus robot, capable of moving around plant-growing containers.

The writing is on the wall, and traditional agriculture is fast being turned on its head.

Materials Science
In an era where materials science, nanotechnology, and biotechnology are rapidly becoming the same field of study, key advances are enabling us to create healthier, more nutritious, more efficient, and longer-lasting food.

For starters, we are now able to boost the photosynthetic abilities of plants. Using novel techniques to improve a micro-step in the photosynthesis process chain, researchers at UCLA were able to boost tobacco crop yield by 14-20 percent. Meanwhile, the RIPE Project, backed by Bill Gates and run out of the University of Illinois, has matched and improved those numbers.

And to top things off, the University of Essex was even able to improve tobacco yield by 27-47 percent by increasing the levels of protein involved in photorespiration.

In yet another win for food-related materials science, Santa Barbara-based Apeel Sciences is further tackling the vexing challenge of food waste. Now approaching commercialization, Apeel uses lipids and glycerolipids found in the peels, seeds, and pulps of all fruits and vegetables to create “cutin”—the fatty substance that composes the skin of fruits and prevents them from rapidly spoiling by trapping moisture.

By then spraying fruits with this generated substance, Apeel can preserve foods 60 percent longer using an odorless, tasteless, colorless organic substance.

And stores across the US are already using this method. By leveraging our advancing knowledge of plants and chemistry, materials science is allowing us to produce more food with far longer-lasting freshness and greater nutritional value than ever before.

Convergence
With advances in 3D printing, vertical farming, and materials sciences, we can now make food smarter, more productive, and far more resilient.

By the end of the next decade, you should be able to 3D print a fusion cuisine dish from the comfort of your home, using ingredients harvested from vertical farms, with nutritional value optimized by AI and materials science. However, even this picture doesn’t account for all the rapid changes underway in the food industry.

Join me next week for Part 2 of the Future of Food for a discussion on how food production will be transformed, quite literally, from the bottom up.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Vanessa Bates Ramirez

Posted in Human Robots

#435765 The Four Converging Technologies Giving ...

How each of us sees the world is about to change dramatically.

For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.

The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.

Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.

Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.

As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.

In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.

A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.

It all begins with graphics processing units (GPUs)—electric circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)

However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.

Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.

The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.

In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.

In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.

Image Credit: PHD Ventures.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.

(1) Convergence with AI
In initial application, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.

Already, the International Air Transport Association (IATA) — whose airlines supply 82 percent of air travel — recently implemented industrial tech company Atheer’s AR headsets in cargo management. And with barely any delay, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.

With similar success rates, Boeing brought Skylight’s smart AR glasses to the runway, now used in the manufacturing of hundreds of airplanes. Sure enough—the aerospace giant has now seen a 25 percent drop in production time and near-zero error rates.

Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.

And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.

Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.

After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.

And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.

As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”

Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.

Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.

(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.

To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing reflected light to generate 3D maps of the environment. Stereoscopic imaging, using two lenses, has also been commonly used for depth measurements. But leading devices like Microsoft's HoloLens 2 and Intel's RealSense 400-series cameras implement a new method called "phased time-of-flight" (ToF).

In ToF sensing, the HoloLens 2 uses numerous lasers, each emitting 100 milliwatts (mW) of power in quick bursts. The distance to nearby objects is then measured from how far the phase of the returning light has shifted relative to the emitted signal. That phase difference reveals the location of each object within the field of view, which enables accurate hand-tracking and surface reconstruction.
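To make the phase-to-distance relationship concrete, here is a small sketch of the underlying formula; the modulation frequency used in the example is an arbitrary illustration, not a HoloLens 2 specification.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, modulation_freq_hz):
    """Depth from the phase shift of an amplitude-modulated ToF signal.

    A 2*pi phase shift corresponds to one modulation wavelength of
    round-trip travel, so depth = c * phi / (4 * pi * f). The unambiguous
    range of a single modulation frequency is c / (2 * f).
    """
    return C * phase_shift_rad / (4 * math.pi * modulation_freq_hz)

# A 90-degree shift at a (hypothetical) 100 MHz modulation frequency:
print(tof_depth(math.pi / 2, 100e6))  # ~0.375 m
```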

With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easily mass-produced, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.

To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
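As a rough illustration of how those inertial readings get combined for head tracking, here is a generic single-axis complementary filter; this is a textbook technique, not Microsoft's actual fusion pipeline, and the sensor-reading functions are hypothetical.

```python
import math
import time

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def read_gyro_pitch_rate():
    """Hypothetical driver call: angular rate about the pitch axis, rad/s."""
    return 0.0

def read_accel():
    """Hypothetical driver call: (ax, ay, az) acceleration in g."""
    return (0.0, 0.0, 1.0)

pitch = 0.0
last = time.monotonic()
for _ in range(100):  # a real tracker runs continuously
    now = time.monotonic()
    dt, last = now - last, now

    # Integrate the gyro (smooth but drifts over time), then pull the estimate
    # toward the accelerometer's gravity-based tilt (noisy but drift-free).
    ax, ay, az = read_accel()
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))
    pitch = ALPHA * (pitch + read_gyro_pitch_rate() * dt) + (1 - ALPHA) * accel_pitch
    time.sleep(0.01)
```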

For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia's work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background, mimicking natural visual perception and concentrating computing power on the area that needs it most.

Gaze tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will activate commands instantaneously.
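A dwell-based gaze command of the kind described could look roughly like the toy sketch below; the three-second threshold comes from the text, while the gaze-sample format and target test are hypothetical.

```python
DWELL_SECONDS = 3.0

def gaze_dwell_select(gaze_samples, is_on_target):
    """Return True once the gaze has stayed on the target for DWELL_SECONDS.

    gaze_samples: iterable of (timestamp_s, x, y) tuples from an eye tracker.
    is_on_target: callable that decides whether a gaze point hits the target.
    """
    dwell_start = None
    for timestamp, x, y in gaze_samples:
        if is_on_target(x, y):
            if dwell_start is None:
                dwell_start = timestamp
            if timestamp - dwell_start >= DWELL_SECONDS:
                return True  # activate the command
        else:
            dwell_start = None  # gaze left the target: reset the timer
    return False

# Synthetic 10 Hz samples that all land inside a button's bounding box.
samples = [(t / 10, 100, 100) for t in range(40)]
print(gaze_dwell_select(samples, lambda x, y: 90 <= x <= 110 and 90 <= y <= 110))
```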

And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.

Take healthcare, for example, wherein biometric sensors will soon customize users' AR experiences. Already, MIT Media Lab's Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG. The experience even adapts to users' biometric data, from heart rate to electrodermal activity (captured by an Empatica E4 wristband).

Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.

While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.

(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.

A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network RNDR. Built specifically on the Ethereum blockchain for holographic media, and undergoing Beta testing, this network is set to revolutionize AR deployment accessibility.

Alphabet Chairman Eric Schmidt (an investor in Otoy’s network), has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”

Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed in cash or used to create users’ own AR content.

In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.

And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.

On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.

Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
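Stripped of everything app-specific, "hashed into blocks and securely recorded" comes down to chaining records by their hashes. A minimal, generic sketch, not Scanetchain's or NEM's actual data structures:

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Bundle transactions with the previous block's hash, then hash the bundle."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block([], previous_hash="0" * 64)
block_1 = make_block(
    [{"item": "sneaker-123", "action": "scan", "user": "alice"}],  # hypothetical record
    previous_hash=genesis["hash"],
)

# Tampering with block_1's transactions would change its hash and break the link
# stored in every later block, which is what makes the ledger tamper-evident.
print(block_1["hash"])
```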

The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.

Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.

Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.

And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.

As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.

Join Me
(1) A360 Executive Mastermind: Want even more context about how converging exponential technologies will transform your business and industry? Consider joining Abundance 360, a highly selective community of 360 exponentially minded CEOs, who are on a 25-year journey with me—or as I call it, a “countdown to the Singularity.” If you’d like to learn more and consider joining our 2020 membership, apply here.

Share this with your friends, especially if they are interested in any of the areas outlined above.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

This article originally appeared on Diamandis.com

Image Credit: Funky Focus / Pixabay

Posted in Human Robots

#435752 T-RHex Is a Hexapod Robot With ...

In Aaron Johnson’s “Robot Design & Experimentation” class at CMU, teams of students have a semester to design and build an experimental robotic system based on a theme. For spring 2019, that theme was “Bioinspired Robotics,” which is definitely one of our favorite kinds of robotics—animals can do all kinds of crazy things, and it’s always a lot of fun watching robots try to match them. They almost never succeed, of course, but even basic imitation can lead to robots with some unique capabilities.

One of the projects from this year’s course, from Team ScienceParrot, is a new version of RHex called T-RHex (pronounced T-Rex, like the dinosaur). T-RHex comes with a tail, but more importantly, it has tiny tapered toes, which help it grip onto rough surfaces like bricks, wood, and concrete. It’s able to climb its way up very steep slopes, and hang from them, relying on its toes to keep itself from falling off.

T-RHex’s toes are called microspines, and we’ve seen them in all kinds of robots. The most famous of these is probably JPL’s LEMUR IIB (which wins on sheer microspine volume), although the concept goes back at least 15 years to Stanford’s SpinyBot. Robots that use microspines to climb tend to be fairly methodical at it, since the microspines have to be engaged and disengaged with care, limiting their non-climbing mobility.

T-RHex manages to perform many of the same sorts of climbing and hanging maneuvers without losing RHex’s ability for quick, efficient wheel-leg (wheg) locomotion.

If you look closely at T-RHex walking in the video, you’ll notice that in its normal forward gait, it’s sort of walking on its ankles, rather than its toes. This means that the microspines aren’t engaged most of the time, so that the robot can use its regular wheg motion to get around. To engage the microspines, the robot moves its whegs backwards, meaning that its tail is arguably coming out of its head. But since all of T-RHex’s capability is mechanical in nature and it has no active sensors, it doesn’t really need a head, so that’s fine.

The highest climbable slope that T-RHex could manage was 55 degrees, meaning that it can’t, yet, conquer vertical walls. The researchers were most surprised by the robot’s ability to cling to surfaces, where it was perfectly happy to hang out on a slope of 135 degrees, which is a 45 degree overhang (!). I have no idea how it would ever reach that kind of position on its own, but it’s nice to know that if it ever does, its spines will keep doing their job.

Photo: CMU

T-RHex uses laser-cut acrylic legs, with the microspines embedded into 3D-printed toes. The tail is needed to prevent the robot from tipping backward.

For more details about the project, we spoke with Team ScienceParrot member (and CMU PhD student) Catherine Pavlov via email.

IEEE Spectrum: We’re used to seeing RHex with compliant, springy legs—how do the new legs affect T-RHex’s mobility?

Catherine Pavlov: There’s some compliance in the legs, though not as much as RHex—this is driven by the use of acrylic, which was chosen for budget/manufacturing reasons. Matching the compliance of RHex with acrylic would have made the tines too weak (since often only a few hold the load of the robot during climbing). It definitely means you can’t use energy storage in the legs the way RHex does, for example when pronking. T-RHex is probably more limited by motor speed in terms of mobility though. We were using some borrowed Dynamixels that didn’t allow for good positioning at high speeds.

How did you design the climbing gait? Why not use the middle legs, and why is the tail necessary?

The gait was a lot of hand-tuning and trial-and-error. We wanted a left/right symmetric gait to enable load sharing among more spines and prevent out-of-plane twisting of the legs. When using all three pairs, you have to have very accurate angular positioning or one leg pair gets pushed off the wall. Since two legs should be able to hold the full robot, using the middle legs was hurting more than it was helping, with the middle legs sometimes pushing the rear ones off of the wall.

The tail is needed to prevent the robot from tipping backward and “sitting” on the wall. During static testing we saw the robot tip backward, disengaging the front legs, at around 35 degrees incline. The tail allows us to load the front legs, even when they’re at a shallow angle to the surface. The climbing gait we designed uses the tail to allow the rear legs to fully recirculate without the robot tipping backward.
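Pavlov's tipping observation follows from simple statics: on an incline, the robot tips backward once the vertical line through its center of mass passes downhill of the rearmost contact point. The sketch below makes that concrete with made-up geometry, not measured T-RHex dimensions.

```python
import math

def tip_back_angle_deg(com_offset_m, com_height_m):
    """Incline angle at which a climber tips backward about its rearmost contact.

    com_offset_m: distance along the slope from the rearmost contact point
                  to the point directly beneath the center of mass.
    com_height_m: height of the center of mass above the slope surface.
    Tipping begins when tan(theta) = offset / height.
    """
    return math.degrees(math.atan2(com_offset_m, com_height_m))

# Hypothetical geometry: without a tail the rear legs are the last contact point.
print(tip_back_angle_deg(0.07, 0.10))  # ~35 degrees for this made-up geometry
# A tail moves the effective contact farther downhill, lengthening the lever arm.
print(tip_back_angle_deg(0.20, 0.10))  # ~63 degrees
```

With these placeholder numbers the no-tail case tips near 35 degrees, consistent with the static testing described above, and lengthening the effective lever arm is exactly why the tail lets the gait keep the front legs loaded at much steeper inclines.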

Photo: CMU

Team ScienceParrot with T-RHex.

What prevents T-RHex from climbing even steeper surfaces?

There are a few limiting factors. One is that the tines of the legs break pretty easily. I think we also need a lighter platform to get fully vertical—we’re going to look at MiniRHex for future work. We’re also not convinced our gait is the best it can be, we can probably get marginal improvements with more tuning, which might be enough.

Can the microspines assist with more dynamic maneuvers?

Dynamic climbing maneuvers? I think that would only be possible on surfaces with very good surface adhesion and very good surface strength, but it’s certainly theoretically possible. The current instance of T-RHex would definitely break if you tried to wall jump though.

What are you working on next?

Our main target is exploring the space of materials for leg fabrication, such as fiberglass, PLA, urethanes, and maybe metallic glass. We think there’s a lot of room for improvement in the leg material and geometry. We’d also like to see MiniRHex equipped with microspines, which will require legs about half the scale of what we built for T-RHex. Longer-term improvements would be the addition of sensors e.g. for wall detection, and a reliable floor-to-wall transition and dynamic gait transitions.

[ T-RHex ]

Posted in Human Robots