#437258 This Startup Is 3D Printing Custom ...

Around 1.9 million people in the US are currently living with limb loss. The trauma of losing a limb is just the beginning of what amputees have to face, with the sky-high cost of prosthetics making their circumstance that much more challenging.

Prosthetics can run over $50,000 for a complex limb (like an arm or a leg) and aren’t always covered by insurance. As if shelling out that sum one time wasn’t costly enough, kids’ prosthetics need to be replaced as they outgrow them, meaning the total expense can reach hundreds of thousands of dollars.

A startup called Unlimited Tomorrow is trying to change this, and using cutting-edge technology to do so. Based in Rhinebeck, New York, a town about two hours north of New York City, the company was founded by 23-year-old Easton LaChappelle. He’d been teaching himself the basics of robotics and building prosthetics since grade school (his 8th grade science fair project was a robotic arm) and launched his company in 2014.

After six years of research and development, the company launched its TrueLimb product last month, describing it as an affordable, next-generation prosthetic arm using a custom remote-fitting process where the user never has to leave home.

The technologies used for TrueLimb’s customization and manufacturing are pretty impressive, in that they both cut costs and make the user’s experience a lot less stressful.

For starters, the entire purchase, sizing, and customization process for the prosthetic can be done remotely. Here’s how it works. First, prospective users fill out an eligibility form and give information about their residual limb. If they’re a qualified candidate for a prosthetic, Unlimited Tomorrow sends them a 3D scanner, which they use to scan their residual limb.

The company uses the scans to design a set of test sockets (the component that connects the residual limb to the prosthetic), which are mailed to the user. The company schedules a video meeting with the user for them to try on and discuss the different sockets, with the goal of finding the one that’s most comfortable; new sockets can be made based on the information collected during the video consultation. The user selects their skin tone from a swatch with 450 options, then Unlimited Tomorrow 3D prints and assembles the custom prosthetic and tests it before shipping it out.

“We print the socket, forearm, palm, and all the fingers out of durable nylon material in full color,” LaChappelle told Singularity Hub in an email. “The only components that aren’t 3D printed are the actuators, tendons, electronics, batteries, sensors, and the nuts and bolts. We are an extreme example of final use 3D printing.”

Unlimited Tomorrow’s website lists TrueLimb’s cost as “as low as $7,995.” When you consider the customization and capabilities of the prosthetic, this is incredibly low. According to LaChappelle, the company created a muscle sensor that picks up muscle movement at a higher resolution than the industry standard electromyography sensors. The sensors read signals from nerves in the residual limb used to control motions like fingers bending. This means that when a user thinks about bending a finger, the nerve fires and the prosthetic’s sensors can detect the signal and translate it into the action.
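Unlimited Tomorrow hasn't published its signal-processing details, but the pipeline the article describes (read a muscle signal, detect activation, trigger a motion) can be sketched generically. Everything below is a hypothetical placeholder: the sample values, the smoothing window, the threshold, and the "actuator" print statement stand in for the company's actual design.

```python
# Generic sketch of a threshold-based muscle-signal pipeline.
# All values here are illustrative placeholders, not TrueLimb internals.

def rectify_and_smooth(samples, window=5):
    """Rectify the raw signal and apply a moving average to reduce noise."""
    rectified = [abs(s) for s in samples]
    smoothed = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def detect_intent(smoothed, threshold=0.3):
    """Return True once the smoothed signal crosses the activation threshold."""
    return any(level >= threshold for level in smoothed)

# A burst of muscle activity in an otherwise quiet signal:
raw = [0.05, -0.04, 0.06, 0.9, 0.85, -0.8, 0.07, 0.03]
if detect_intent(rectify_and_smooth(raw)):
    print("bend finger")  # in a real device, drive the finger actuator
```

A production prosthetic would use far more sophisticated classification, but the basic loop (sense, filter, decide, actuate) is the same.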

“Working with children using our device, I’ve witnessed a physical moment where the brain ‘clicks’ and starts moving the hand rather than focusing on moving the muscles,” LaChappelle said.

The cost savings come both from the direct-to-consumer model and the fact that Unlimited Tomorrow doesn’t use any outside suppliers. “We create every piece of our product,” LaChappelle said. “We don’t rely on another prosthetic manufacturer to make expensive sensors or electronics. By going direct to consumer, we cut out all the middlemen that usually drive costs up.” Similar devices on the market can cost up to $100,000.

Unlimited Tomorrow is primarily focused on making prosthetics for kids; when they outgrow their first TrueLimb, they send it back, and the company upcycles the expensive, high-quality components into a new customized device.

Unlimited Tomorrow isn’t the first to use 3D printing for prosthetics. Florida-based Limbitless Solutions does so too, and industry experts believe the technology is the future of artificial limbs.

“I am constantly blown away by this tech,” LaChappelle said. “We look at technology as the means to augment the human body and empower people.”

Image Credit: Unlimited Tomorrow

Posted in Human Robots

#437145 3 Major Materials Science ...

Few recognize the vast implications of materials science.

Building today’s smartphone with 1980s technology would have cost about $110 million, consumed nearly 200 kilowatts of energy (versus the roughly 2kW per year a phone uses today), and produced a device 14 meters tall, according to Applied Materials CTO Omkaram Nalamasu.

That’s the power of materials advances. Materials science has democratized smartphones, bringing the technology to the pockets of over 3.5 billion people. But far beyond devices and circuitry, materials science stands at the center of innumerable breakthroughs across energy, future cities, transit, and medicine. And amid the Covid-19 pandemic, materials scientists are forging ahead with biomaterials, nanotechnology, and other materials research to accelerate a solution.

As the name suggests, materials science is the branch devoted to the discovery and development of new materials. It’s an outgrowth of both physics and chemistry, using the periodic table as its grocery store and the laws of physics as its cookbook.

And today, we are in the middle of a materials science revolution. In this article, we’ll unpack the most important materials advancements happening now.

Let’s dive in.

The Materials Genome Initiative
In June 2011 at Carnegie Mellon University, President Obama announced the Materials Genome Initiative, a nationwide effort to use open source methods and AI to double the pace of innovation in materials science. Obama considered this acceleration critical to US global competitiveness and believed it held the key to solving significant challenges in clean energy, national security, and human welfare. And it worked.

By using AI to map the hundreds of millions of different possible combinations of elements—hydrogen, boron, lithium, carbon, etc.—the initiative created an enormous database that allows scientists to play a kind of improv jazz with the periodic table.

This new map of the physical world lets scientists combine elements faster than ever before and is helping them create all sorts of novel materials. And an array of new fabrication tools are further amplifying this process, allowing us to work at altogether new scales and sizes, including the atomic scale, where we’re now building materials one atom at a time.

Biggest Materials Science Breakthroughs
These tools have helped create the metamaterials used in carbon fiber composites for lighter-weight vehicles, advanced alloys for more durable jet engines, and biomaterials to replace human joints. We’re also seeing breakthroughs in energy storage and quantum computing. In robotics, new materials are helping us create the artificial muscles needed for humanoid, soft robots—think Westworld in your world.

Let’s unpack some of the leading materials science breakthroughs of the past decade.

(1) Lithium-ion batteries

The lithium-ion battery, which today powers everything from our smartphones to our autonomous cars, was first proposed in the 1970s. It couldn’t make it to market until the 1990s, and didn’t begin to reach maturity until the past few years.

An exponential technology, these batteries have been dropping in price for three decades, plummeting 90 percent between 1990 and 2010, and 80 percent since. Concurrently, they’ve seen an eleven-fold increase in capacity.
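Those two percentage drops compound: a 90 percent decline followed by a further 80 percent decline leaves prices at just 2 percent of their 1990 level. A quick calculation makes this concrete (the $1,000 starting figure is purely illustrative; only the percentages come from the text):

```python
# Compounding two successive price declines.
# The starting price is illustrative; only the percentages are from the text.
start_price = 1000.0                    # hypothetical 1990 price per unit of capacity
after_2010 = start_price * (1 - 0.90)   # 90% drop between 1990 and 2010
today = after_2010 * (1 - 0.80)         # a further 80% drop since 2010

total_decline = 1 - today / start_price
print(f"Price today: {today:.0f} (a {total_decline:.0%} total decline)")
# Price today: 20 (a 98% total decline)
```

Pair that 98 percent price decline with an eleven-fold capacity increase, and the cost per unit of storage has fallen by a factor of several hundred.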

But producing enough of them to meet demand has been an ongoing problem. Tesla has stepped up to the challenge: one of the company’s Gigafactories in Nevada churns out 20 gigawatt-hours of battery storage per year, marking the first time we’ve seen lithium-ion batteries produced at scale.

Musk predicts 100 Gigafactories could meet the energy storage needs of the entire globe. Other companies are moving quickly to integrate this technology as well: Renault is building home energy storage systems based on its Zoe batteries, BMW is integrating 500 of its i3 battery packs into the UK’s national energy grid, and Toyota, Nissan, and Audi have all announced pilot projects.

Lithium-ion batteries will continue to play a major role in renewable energy storage, helping bring down solar and wind energy prices to compete with those of coal and gasoline.

(2) Graphene

Derived from the same graphite found in everyday pencils, graphene is a sheet of carbon just one atom thick. It is nearly weightless, but 200 times stronger than steel. Conducting electricity and dissipating heat faster than any other known substance, this super-material has transformative applications.

Graphene enables sensors, high-performance transistors, and even gel that helps neurons communicate in the spinal cord. Many flexible device screens, drug delivery systems, 3D printers, solar panels, and protective fabric use graphene.

As manufacturing costs decrease, this material has the power to accelerate advancements of all kinds.

(3) Perovskite

Right now, the “conversion efficiency” of the average solar panel—a measure of how much captured sunlight can be turned into electricity—hovers around 16 percent, at a cost of roughly $3 per watt.

Perovskite, a light-sensitive crystal and one of the newest materials on the scene, has the potential to push that figure to 66 percent, double what silicon panels can muster.

Perovskite’s ingredients are widely available and inexpensive to combine. What do all these factors add up to? Affordable solar energy for everyone.

Materials of the Nano-World
Nanotechnology is the outer edge of materials science, the point where matter manipulation gets nano-small—that’s a million times smaller than an ant, 8,000 times smaller than a red blood cell, and 2.5 times smaller than a strand of DNA.

Nanobots are machines that can be directed to produce more of themselves, or more of whatever else you’d like. And because this takes place at an atomic scale, these nanobots can pull apart any kind of material—soil, water, air—atom by atom, and use those atoms as raw material to construct just about anything.

Progress has been surprisingly swift in the nano-world, with a bevy of nano-products now on the market. Never want to fold clothes again? Nanoscale additives to fabrics help them resist wrinkling and staining. Don’t do windows? Not a problem! Nano-films make windows self-cleaning, anti-reflective, and capable of conducting electricity. Want to add solar to your house? We’ve got nano-coatings that capture the sun’s energy.

Nanomaterials make lighter automobiles, airplanes, baseball bats, helmets, bicycles, luggage, power tools—the list goes on. Researchers at Harvard built a nanoscale 3D printer capable of producing miniature batteries less than one millimeter wide. And if you don’t like those bulky VR goggles, researchers are now using nanotech to create smart contact lenses with a resolution six times greater than that of today’s smartphones.

And even more is coming. Right now, in medicine, drug delivery nanobots are proving especially useful in fighting cancer. Computing is a stranger story, as a bioengineer at Harvard recently stored 700 terabytes of data in a single gram of DNA.

On the environmental front, scientists can take carbon dioxide from the atmosphere and convert it into super-strong carbon nanofibers for use in manufacturing. If we can do this at scale—powered by solar—a system one-tenth the size of the Sahara Desert could reduce CO2 in the atmosphere to pre-industrial levels in about a decade.

The applications are endless. And coming fast. Over the next decade, the impact of the very, very small is about to get very, very large.

Final Thoughts
With the help of artificial intelligence and quantum computing over the next decade, the discovery of new materials will accelerate exponentially.

And with these new discoveries, customized materials will grow commonplace. Future knee implants will be personalized to meet the exact needs of each body, both in terms of structure and composition.

Though invisible to the naked eye, nanoscale materials will integrate into our everyday lives, seamlessly improving medicine, energy, smartphones, and more.

Ultimately, the path to demonetization and democratization of advanced technologies starts with redesigning materials, the invisible enabler and catalyst. Our future depends on the materials we create.

(Note: This article is an excerpt from The Future Is Faster Than You Think—my new book, just released on January 28th! To get your own copy, click here!)

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Anand Kumar from Pixabay

Posted in Human Robots

#436977 The Top 100 AI Startups Out There Now, ...

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They’re some of the solutions that the world’s most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly available information and non-traditional signals (think social media sentiment, for example) thanks to more than $1 million in grants from the National Science Foundation.

It uses that algorithm-generated data from what it calls a company’s Mosaic score—pulling together information on market trends, money, and momentum—along with other details ranging from patent activity to the latest news analysis to identify the best of the best.
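CB Insights has not published the Mosaic formula, but the idea of rolling market, money, and momentum signals into one number can be sketched as a simple weighted composite. The signal names, weights, and inputs below are entirely hypothetical, not the firm's actual model:

```python
# Illustrative weighted composite in the spirit of a "Mosaic"-style score.
# Signal names, weights, and inputs are hypothetical, not CB Insights' model.

WEIGHTS = {"market": 0.40, "money": 0.35, "momentum": 0.25}

def composite_score(signals):
    """Combine normalized 0-100 signals into one weighted score."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

startup = {"market": 80, "money": 65, "momentum": 90}  # made-up inputs
print(composite_score(startup))
```

The real system layers far more inputs (patents, news, hiring) on top, but any such ranking ultimately reduces heterogeneous signals to a comparable scalar like this.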

“Our final list of companies is a mix of startups at various stages of R&D and product commercialization,” said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world’s most valuable AI startups. For instance, there’s San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Image courtesy of CB Insights
Funding for AI in Healthcare
Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform’s diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year’s AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There’s even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there’s OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company’s AI platform helps train algorithms without sharing the necessary patient data required to provide the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.
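OWKIN's platform is proprietary, but the core idea of federated learning (train locally, share only model updates, never raw patient records) can be shown with a toy federated-averaging round. The two "hospitals," their data, and the one-parameter model below are stand-ins invented for illustration:

```python
# Toy federated averaging: each site trains on its private data and shares
# only model weights with the server; raw records never leave the site.
# The sites, data, and one-parameter model are illustrative stand-ins.

def local_update(weight, data, lr=0.1):
    """One gradient step fitting y = w * x on a site's private data."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(weights):
    """The server averages the sites' updated weights."""
    return sum(weights) / len(weights)

# Two "hospitals" with private (x, y) pairs drawn from roughly y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(1.0, 2.1), (3.0, 5.9)]

w = 0.0
for _ in range(50):  # repeated rounds converge toward w ≈ 2
    w = federated_average([local_update(w, site_a), local_update(w, site_b)])
print(round(w, 2))
```

Real deployments add secure aggregation and differential privacy on top, but the data-stays-home structure is exactly this.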

Keeping Cyber Networks Healthy
Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

“I think this is an interesting field because it’s a bit of a cat and mouse game,” noted Varadharajan. “As your cyber defenses get smarter, your cyber attacks get even smarter, and so it’s a constant game of who’s going to match the other in terms of tech capabilities.”

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company’s platform automates what’s called endpoint security, referring to laptops, phones, and other devices at the “end” of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the “edge” of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfakes Get a Friendly Makeover
Deepfakes, videos and other types of AI-manipulated media in which faces or voices are synthesized to fool viewers or listeners, have posed a different type of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user’s face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company called Synthesia helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup’s claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speaking in nine different languages.

There’s also a Seattle-based company, WellSaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

AI Helps Make Smart Cities Smarter
Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensure that nobody is ever stuck in traffic again or without an umbrella at the wrong time. At least that’s the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It’s sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty-gritty of recycling by training robots to sort trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters don’t even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.
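ClimaCell's algorithms are proprietary, but the physics they exploit is well established: rain attenuates a microwave link according to a power law, with specific attenuation gamma = k * R**alpha (in dB per km), where k and alpha depend on the link's frequency. Inverting that relation recovers the rain rate R. The coefficients below are illustrative, not ClimaCell's calibrated values:

```python
# Estimate rain rate from extra attenuation on a tower-to-tower microwave
# link, using the standard power law gamma = k * R**alpha (dB per km).
# The coefficients k and alpha below are illustrative, not calibrated values.

def rain_rate_mm_per_hr(extra_loss_db, link_km, k=0.05, alpha=1.1):
    """Invert the power law: R = (gamma / k) ** (1 / alpha)."""
    gamma = extra_loss_db / link_km  # attenuation per km attributed to rain
    return (gamma / k) ** (1 / alpha)

# A 10 km link losing an extra 5 dB during a storm:
print(round(rain_rate_mm_per_hr(5.0, 10.0), 1))
```

With thousands of such links blanketing a city, each acts as a street-level rain gauge, which is what makes block-by-block forecasting plausible.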

And those are just some of the highlights of what some of the world’s most promising AI startups are doing.

“You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards,” Varadharajan said. “So a lot of creative ways in which companies are applying AI to solve different issues in different industries.”

Image Credit: Butterfly Network

Posted in Human Robots

#436550 Work in the Age of Web 3.0

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or will tomorrow’s workplace be completely virtualized, allowing us to hang out at home in our PJs while “walking” about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which artificial intelligence, virtual reality, and the spatial web converge to transform every element of our careers, from training, to execution, to free time.

To offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments. Suddenly, all our information will be manipulated, stored, understood and experienced in spatial ways.

In this blog, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business & the Virtual Workplace
Smart Permissions & Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market. As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training. By mid-2019, Walmart had tracked a 10-15 percent boost in employee confidence as a result of newly implemented VR training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months by turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

The partnership aims to dramatically reduce the time and trouble of VR-testing pilots by giving touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these devices contain a suite of actuators that simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands. VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace & Digital Data Integrity
In addition to enabling a virtual goods marketplace, the Spatial Web is also giving rise to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading. But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial. What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent. Forgoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with Internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
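The "incorruptible record" piece needs no exotic machinery: chaining each log entry to a cryptographic hash of the previous one makes any later tampering detectable, which is the basic trick behind blockchain-style logs. A minimal sketch (the entry fields are invented for illustration, not a real Spatial Web schema):

```python
import hashlib
import json

# Minimal hash-chained log: each entry commits to the previous entry's
# hash, so altering any past record breaks every link after it.
# The entry fields are illustrative, not a real Spatial Web schema.

def entry_hash(entry):
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(log, record):
    prev = entry_hash(log[-1]) if log else "genesis"
    log.append({"prev_hash": prev, "record": record})

def verify(log):
    """Return True only if every link in the chain is intact."""
    return all(
        log[i]["prev_hash"] == entry_hash(log[i - 1])
        for i in range(1, len(log))
    )

log = []
append_entry(log, {"meeting": "Q3 review", "present": ["avatar:alice", "avatar:bob"]})
append_entry(log, {"agreed": "contract v2"})
assert verify(log)

log[0]["record"]["present"].append("avatar:mallory")  # tamper with history
assert not verify(log)  # the broken chain exposes the edit
```

A production system would add digital signatures and distributed replication, but tamper-evidence itself is this simple.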

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real time and easily updated. Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading. And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the duration of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Gerd Altmann from Pixabay


#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but, despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even one using pseudo-random processes or machine-learning algorithms, is no different than throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash
