Tag Archives: teeth

#438738 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
A New Artificial Intelligence Makes Mistakes—on Purpose
Will Knight | Wired
“It took about 50 years for computers to eviscerate humans in the venerable game of chess. A standard smartphone can now play the kind of moves that make a grandmaster’s head spin. But one artificial intelligence program is taking a few steps backward, to appreciate how average humans play—blunders and all.”

CRYPTOCURRENCY
Bitcoin’s Price Rises to $50,000 as Mainstream Institutions Hop On
Timothy B. Lee | Ars Technica
“Bitcoin’s price is now far above the previous peak of $19,500 reached in December 2017. Bitcoin’s value has risen by almost 70 percent since the start of 2021. No single factor seems to be driving the cryptocurrency’s rise. Instead, the price is rising as more and more mainstream organizations are deciding to treat it as an ordinary investment asset.”

SCIENCE
Million-Year-Old Mammoth Teeth Contain Oldest DNA Ever Found
Jeanne Timmons | Gizmodo
“An international team of scientists has sequenced DNA from mammoth teeth that is at least a million years old, if not older. This research, published today in Nature, not only provides exciting new insight into mammoth evolutionary history, it reveals an entirely unknown lineage of ancient mammoth.”

SCIENCE
Scientists Accidentally Discover Strange Creatures Under a Half Mile of Ice
Matt Simon | Wired
“‘It’s like, bloody hell!’ Smith says. ‘It’s just one big boulder in the middle of a relatively flat seafloor. It’s not as if the seafloor is littered with these things.’ Just his luck to drill in the only wrong place. Wrong place for collecting seafloor muck, but the absolute right place for a one-in-a-million shot at finding life in an environment that scientists didn’t reckon could support much of it.”

BIOTECH
Highest-Resolution Images of DNA Reveal It’s Surprisingly Jiggly
George Dvorsky | Gizmodo
“Scientists have captured the highest-resolution images ever taken of DNA, revealing previously unseen twisting and squirming behaviors. …These hidden movements were revealed by computer simulations fed with the highest-resolution images ever taken of a single molecule of DNA. The new study is exposing previously unseen behaviors in the self-replicating molecule, and this research could eventually lead to the development of powerful new genetic therapies.”

TRANSPORTATION
The First Battery-Powered Tanker Is Coming to Tokyo
Maria Gallucci | IEEE Spectrum
“The Japanese tanker is Corvus’s first fully-electric coastal freighter project; the company hopes the e5 will be the first of hundreds more just like it. ‘We see it [as] a beachhead for the coastal shipping market globally,’ Puchalski said. ‘There are many other coastal freighter types that are similar in size and energy demand.’ The number of battery-powered ships has ballooned from virtually zero a decade ago to hundreds worldwide.”

SPACE
Report: NASA’s Only Realistic Path for Humans on Mars Is Nuclear Propulsion
Eric Berger | Ars Technica
“Conducted at the request of NASA, a broad-based committee of experts assessed the viability of two means of propulsion—nuclear thermal and nuclear electric—for a human mission launching to Mars in 2039. ‘One of the primary takeaways of the report is that if we want to send humans to Mars, and we want to do so repeatedly and in a sustainable way, nuclear space propulsion is on the path,’ said [JPL’s] Bobby Braun.”

NASA’s Perseverance Rover Successfully Lands on Mars
Joey Roulette | The Verge
“Perseverance hit Mars’ atmosphere on time at 3:48PM ET at speeds of about 12,100 miles per hour, diving toward the surface in an infamously challenging sequence engineers call the ‘seven minutes of terror.’ With an 11-minute comms delay between Mars and Earth, the spacecraft had to carry out its seven-minute plunge all by itself with a wickedly complex set of pre-programmed instructions.”

ENVIRONMENT
A First-of-Its-Kind Geoengineering Experiment Is About to Take Its First Step
James Temple | MIT Technology Review
“When I visited Frank Keutsch in the fall of 2019, he walked me down to the lab, where the tube, wrapped in gray insulation, ran the length of a bench in the back corner. By filling it with the right combination of gases, at particular temperatures and pressures, Keutsch and his colleagues had simulated the conditions some 20 kilometers above Earth’s surface. In testing how various chemicals react in this rarefied air, the team hoped to conduct a crude test of a controversial scheme known as solar geoengineering.”

Image Credit: Garcia / Unsplash

Posted in Human Robots

#437562 Video Friday: Aquanaut Robot Takes to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-November 25, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand “BLT Gripper” that can change to various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI’s enormous new agricultural drone is that it’s got a spinning obstacle-detecting sensor that’s a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he’s been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech — “The Road to Vehicle Automation, a Toyota Guardian Approach” — to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, summarizes TRI's Guardian approach — which amplifies human drivers, rather than replacing them — and highlights TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]

Posted in Human Robots

#436190 What Is the Uncanny Valley?

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This void of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori

The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley wasn’t actually proposed as a scientific concept, yet has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”

Posted in Human Robots

#435804 New AI Systems Are Here to Personalize ...

The narratives about automation and its impact on jobs range from urgent to hopeful and everything in between. Regardless of where you land, it’s hard to argue against the idea that technologies like AI and robotics will change our economy and the nature of work in the coming years.

A recent World Economic Forum report noted that some estimates show automation could displace 75 million jobs by 2022, while at the same time creating 133 million new roles. While these estimates predict a net positive for the number of new jobs in the coming decade, displaced workers will need to learn new skills to adapt to the changes. If employees can’t be retrained quickly for jobs in the changing economy, society is likely to face some degree of turmoil.

According to Bryan Talebi, CEO and founder of AI education startup Ahura AI, the same technologies erasing and creating jobs can help workers bridge the gap between the two.

Ahura is developing a product to capture biometric data from adult learners who are using computers to complete online education programs. The goal is to feed this data to an AI system that can modify and adapt their program to optimize for the most effective teaching method.

While the prospect of a computer recording and scrutinizing a learner’s behavioral data will surely generate unease across a society growing more aware and uncomfortable with digital surveillance, some people may look past such discomfort if they experience improved learning outcomes. Users of the system would, in theory, have their own personalized instruction shaped specifically for their unique learning style.

And according to Talebi, their systems are showing some promise.

“Based on our early tests, our technology allows people to learn three to five times faster than traditional education,” Talebi told me.

Currently, Ahura’s system uses the video camera and microphone that come standard on the laptops, tablets, and mobile devices most students are using for their learning programs.

With the computer’s camera Ahura can capture facial movements and micro expressions, measure eye movements, and track fidget score (a measure of how much a student moves while learning). The microphone tracks voice sentiment, and the AI leverages natural language processing to review the learner’s word usage.

From this collection of data Ahura can, according to Talebi, identify the optimal way to deliver content to each individual.

For some users that might mean a video tutorial is the best style of learning, while others may benefit more from some form of experiential or text-based delivery.

“The goal is to alter the format of the content in real time to optimize for attention and retention of the information,” said Talebi. One of Ahura’s main goals is to reduce the frequency with which students switch from their learning program to distractions like social media.

“We can now predict with a 60 percent confidence interval ten seconds before someone switches over to Facebook or Instagram. There’s a lot of work to do to get that up to a 95 percent level, so I don’t want to overstate things, but that’s a promising indication that we can work to cut down on the amount of context-switching by our students,” Talebi said.
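
To make that concrete, here is a minimal sketch of how a distraction predictor of this kind could be wired up. It is purely illustrative and not Ahura’s actual pipeline: the feature names (fidget score, off-screen gaze, voice sentiment), the hand-picked logistic weights, and the alert threshold are all assumptions for the sake of the example.

```python
import math
from dataclasses import dataclass

@dataclass
class AttentionSnapshot:
    """One second of hypothetical biometric features (all names are assumptions)."""
    fidget_score: float      # 0..1, how much the learner is moving
    off_screen_gaze: float   # 0..1, fraction of the second spent looking away
    voice_sentiment: float   # -1..1, negative = frustrated, positive = engaged

# Illustrative weights only; a real system would learn these from labeled sessions.
WEIGHTS = {"fidget_score": 2.0, "off_screen_gaze": 3.0, "voice_sentiment": -1.5}
BIAS = -2.5

def distraction_probability(window: list) -> float:
    """Estimate the probability that the learner switches away soon,
    given the last few seconds of features, using a simple logistic model."""
    if not window:
        return 0.0
    n = len(window)
    # Average each feature over the window, then apply the logistic function.
    avg = {
        "fidget_score": sum(s.fidget_score for s in window) / n,
        "off_screen_gaze": sum(s.off_screen_gaze for s in window) / n,
        "voice_sentiment": sum(s.voice_sentiment for s in window) / n,
    }
    z = BIAS + sum(WEIGHTS[k] * v for k, v in avg.items())
    return 1.0 / (1.0 + math.exp(-z))

# Usage: score the last few seconds and act when the risk of distraction is high.
recent = [AttentionSnapshot(0.7, 0.5, -0.2), AttentionSnapshot(0.8, 0.6, -0.3)]
if distraction_probability(recent) > 0.6:
    print("Likely distraction soon: switch to a more engaging content format.")
```

In a real product the hand-set weights would be replaced by a model trained on labeled sessions, but the shape of the problem is the same: a rolling window of biometric features in, a probability of imminent context-switching out.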

Talebi repeatedly mentioned his ambition to leverage the same design principles used by Facebook, Twitter, and others to increase the time users spend on those platforms, but instead use them to design more compelling and even addictive education programs that can compete for attention with social media.

But the notion that Ahura’s system could one day be used to create compelling or addictive education necessarily presses against a set of justified fears surrounding data privacy. Growing anxiety surrounding the potential to misuse user data for social manipulation is widespread.

“Of course there is a real danger, especially because we are collecting so much data about our users which is specifically connected to how they consume content. And because we are looking so closely at the ways people interact with content, it’s incredibly important that this technology never be used for propaganda or to sell things to people,” Talebi tried to assure me.

Unsurprisingly (and worryingly), using this AI system to sell products to people is exactly where some investors’ ambitions immediately turn once they learn about the company’s capabilities, according to Talebi. During our discussion Talebi regularly cited the now infamous example of Cambridge Analytica, the political consulting firm hired by the Trump campaign to run a psychographically targeted persuasion campaign on the US population during the most recent presidential election.

“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.

Talebi will surely need to take real action on such a claim, but says the company is in the process of identifying a structure for an ethics review board—one that carries significant influence with similar voting authority as the executive team and the regular board.

“Our goal is to build an ethics review board that has teeth, is diverse in both gender and background but also in thought and belief structures. The idea is to have our ethics review panel ensure we’re building things ethically,” he said.

Data privacy appears to be an important issue for Talebi, who occasionally referenced a major competitor in the space based in China. According to a recent article from MIT Tech Review outlining the astonishing growth of AI-powered education platforms in China, data privacy concerns may be less severe there than in the West.

Ahura is currently developing upgrades to an early alpha-stage prototype, but is already capturing data from students from at least one Ivy League school and a variety of other places. Their next step is to roll out a working beta version to over 200,000 users as part of a partnership with an unnamed corporate client who will be measuring the platform’s efficacy against a control group.

Going forward, Ahura hopes to add to its suite of biometric data capture by including things like pupil dilation and facial flushing, heart rate, sleep patterns, or whatever else may give their system an edge in improving learning outcomes.

As information technologies increasingly automate work, it’s likely we’ll also see rapid changes to our labor systems. It’s also looking increasingly likely that those same technologies will be used to improve our ability to give people the right skills when they need them. It may be one way to address the challenges automation is sure to bring.

Image Credit: Gerd Altmann / Pixabay

Posted in Human Robots

#434792 Extending Human Longevity With ...

Lizards can regrow entire limbs. Flatworms, starfish, and sea cucumbers regrow entire bodies. Sharks constantly replace lost teeth, often growing over 20,000 teeth throughout their lifetimes. How can we translate these near-superpowers to humans?

The answer: through the cutting-edge innovations of regenerative medicine.

While big data and artificial intelligence transform how we practice medicine and invent new treatments, regenerative medicine is about replenishing, replacing, and rejuvenating our physical bodies.

In Part 5 of this blog series on Longevity and Vitality, I detail three of the regenerative technologies working together to fully augment our vital human organs.

Replenish: Stem cells, the regenerative engine of the body
Replace: Organ regeneration and bioprinting
Rejuvenate: Young blood and parabiosis

Let’s dive in.

Replenish: Stem Cells – The Regenerative Engine of the Body
Stem cells are undifferentiated cells that can transform into specialized cells, such as heart, nerve, liver, lung, or skin cells, and can also divide to produce more stem cells.

In a child or young adult, these stem cells are in large supply, acting as a built-in repair system. They are often summoned to the site of damage or inflammation to repair and restore normal function.

But as we age, our supply of stem cells begins to diminish as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing your body.

Imagine your stem cells as a team of repairmen in your newly constructed mansion. When the mansion is new and the repairmen are young, they can fix everything perfectly. But as the repairmen age and reduce in number, your mansion eventually goes into disrepair and finally crumbles.

What if you could restore and rejuvenate your stem cell population?

One option to accomplish this restoration and rejuvenation is to extract and concentrate your own autologous adult stem cells from places like your adipose (or fat) tissue or bone marrow.

These stem cells, however, are fewer in number and have undergone mutations (depending on your age) from their original ‘software code.’ Many scientists and physicians now prefer an alternative source, obtaining stem cells from the placenta or umbilical cord, the leftovers of birth.

These stem cells, available in large supply and expressing the undamaged software of a newborn, can be injected into joints or administered intravenously to rejuvenate and revitalize.

Think of these stem cells as chemical factories generating vital growth factors that can help to reduce inflammation, fight autoimmune disease, increase muscle mass, repair joints, and even revitalize skin and grow hair.

Over the last decade, the number of publications per year on stem cell-related research has increased 40x, and the stem cell market is expected to increase to $297 billion by 2022.

Rising research and development initiatives to develop therapeutic options for chronic diseases and growing demand for regenerative treatment options are the most significant drivers of this budding industry.

Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye’s lens, and more, using only a small sample of adult skin.

In a Stanford study, seven of 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer’s, Parkinson’s, and ALS.

Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.

In 2019, doctors in the U.K. cured a patient with HIV for the second time ever thanks to the efficacy of stem cells. After giving the cancer patient (who also had HIV) an allogeneic haematopoietic (e.g. blood) stem cell treatment for his Hodgkin’s lymphoma, the patient went into long-term HIV remission—18 months and counting at the time of the study’s publication.

Replace: Organ Regeneration and 3D Printing
Every 10 minutes, someone is added to the US organ transplant waiting list, totaling over 113,000 people waiting for replacement organs as of January 2019.

Countless more people in need of ‘spare parts’ never make it onto the waiting list. And on average, 20 people die each day while waiting for a transplant.

All told, an estimated 35 percent of all US deaths (~900,000 people per year) could be prevented or delayed with access to organ replacements.

Demand for donated organs will only outstrip supply further as technologies like self-driving cars make the world safer, given that many donated organs come from victims of auto and motorcycle accidents. Safer vehicles mean fewer accidents, and therefore fewer donations.

Clearly, replacement and regenerative medicine represent a massive opportunity.

Organ Entrepreneurs
Enter United Therapeutics CEO, Dr. Martine Rothblatt. A one-time aerospace entrepreneur (she was the founder of Sirius Satellite Radio), Rothblatt changed careers in the 1990s after her daughter developed a rare lung disease.

Her moonshot today is to create an industry of replacement organs. With an initial focus on diseases of the lung, Rothblatt set out to create replacement lungs. To accomplish this goal, her company United Therapeutics has pursued a number of technologies in parallel.

3D Printing Lungs
In 2017, United teamed up with one of the world’s largest 3D printing companies, 3D Systems, to build a collagen bioprinter and is paying another company, 3Scan, to slice up lungs and create detailed maps of their interior.

This 3D Systems bioprinter now operates according to a method called stereolithography. A UV laser flickers through a shallow pool of collagen doped with photosensitive molecules. Wherever the laser lingers, the collagen cures and becomes solid.

Gradually, the object being printed is lowered and new layers are added. The printer can currently lay down collagen at a resolution of around 20 micrometers, but it will need to reach a resolution of about one micrometer to make the lung functional.

Once a collagen lung scaffold has been printed, the next step is to infuse it with human cells, a process called recellularization.

The goal here is to use stem cells that grow on scaffolding and differentiate, ultimately providing the proper functionality. Early evidence indicates this approach can work.

In 2018, Harvard University experimental surgeon Harald Ott reported that he pumped billions of human cells (from umbilical cords and diced lungs) into a pig lung stripped of its own cells. When Ott’s team reconnected it to a pig’s circulation, the resulting organ showed rudimentary function.

Humanizing Pig Lungs
Another of Rothblatt’s organ manufacturing strategies is called xenotransplantation, the idea of transplanting an animal’s organs into humans who need a replacement.

Given the fact that adult pig organs are similar in size and shape to those of humans, United Therapeutics has focused on genetically engineering pigs to allow humans to use their organs. “It’s actually not rocket science,” said Rothblatt in her 2015 TED talk. “It’s editing one gene after another.”

To accomplish this goal, United Therapeutics made a series of investments in companies such as Revivicor Inc. and Synthetic Genomics Inc., and signed large funding agreements with the University of Maryland, University of Alabama, and New York Presbyterian/Columbia University Medical Center to create xenotransplantation programs for new hearts, kidneys, and lungs, respectively. Rothblatt hopes to see human translation in three to four years.

In preparation for that day, United Therapeutics owns a 132-acre property in Research Triangle Park and built a 275,000-square-foot medical laboratory that will ultimately have the capability to annually produce up to 1,000 sets of healthy pig lungs—known as xenolungs—from genetically engineered pigs.

Lung Ex Vivo Perfusion Systems
Beyond 3D printing and genetically engineering pig lungs, Rothblatt has already begun implementing a third near-term approach to improve the supply of lungs across the US.

Only about 30 percent of potential donor lungs meet transplant criteria in the first place, and only about 85 percent of those are usable once they arrive at the surgery center. As a result, nearly 75 percent of possible lungs never make it to a recipient in need.

What if these lungs could be rejuvenated? This concept informs Dr. Rothblatt’s next approach.

In 2016, United Therapeutics invested $41.8 million in TransMedics Inc., an Andover, Massachusetts company that develops ex vivo perfusion systems for donor lungs, hearts, and kidneys.

The XVIVO Perfusion System takes marginal-quality lungs that initially failed to meet transplantation standard-of-care criteria and perfuses and ventilates them at normothermic conditions, providing an opportunity for surgeons to reassess transplant suitability.

Rejuvenate: Young Blood and Parabiosis
In HBO’s parody of the Bay Area tech community, Silicon Valley, one of the episodes (Season 4, Episode 5) is named “The Blood Boy.”

In this installment, tech billionaire Gavin Belson (Matt Ross) is meeting with Richard Hendricks (Thomas Middleditch) and his team, speaking about the future of the decentralized internet. A young, muscled twenty-something disrupts the meeting when he rolls in a transfusion stand and silently hooks an intravenous connection between himself and Belson.

Belson then introduces the newcomer as his “transfusion associate” and begins to explain the science of parabiosis: “Regular transfusions of the blood of a younger physically fit donor can significantly retard the aging process.”

While the sitcom is fiction, that science has merit, and the scenario portrayed in the episode is already happening today.

On the first point, research at Stanford and Harvard has demonstrated that older animals, when transfused with the blood of young animals, experience regeneration across many tissues and organs.

The opposite is also true: young animals, when transfused with the blood of older animals, experience accelerated aging. But capitalizing on this virtual fountain of youth has been tricky.

Ambrosia
One company, a San Francisco-based startup called Ambrosia, recently commenced one of the trials on parabiosis. Their protocol is simple: Healthy participants aged 35 and older get a transfusion of blood plasma from donors under 25, and researchers monitor their blood over the next two years for molecular indicators of health and aging.

Ambrosia’s founder Jesse Karmazin became interested in launching a company around parabiosis after seeing impressive data from animals and studies conducted abroad in humans: In one trial after another, subjects experience a reversal of aging symptoms across every major organ system. “The effects seem to be almost permanent,” he said. “It’s almost like there’s a resetting of gene expression.”

Infusing your own banked cord blood stem cells as you age may also have longevity benefits. Ambrosia, however, halted its consumer-facing treatment after several months of operation, following an FDA statement in February 2019 warning against unproven young-plasma infusions.

Understandably, the FDA raised concerns about the practice of parabiosis because to date, there is a marked lack of clinical data to support the treatment’s effectiveness.

Elevian
On the other end of the reputability spectrum is a startup called Elevian, spun out of Harvard University. Elevian is approaching longevity with a careful, scientifically validated strategy. (Full Disclosure: I am both an advisor to and investor in Elevian.)

CEO Mark Allen, MD, is joined by a dozen MDs and PhDs out of Harvard. Elevian’s scientific founders started the company after identifying specific circulating factors that may be responsible for the “young blood” effect.

One example: A naturally occurring molecule known as “growth differentiation factor 11,” or GDF11, when injected into aged mice, reproduces many of the regenerative effects of young blood, regenerating heart, brain, muscles, lungs, and kidneys.

More specifically, GDF11 supplementation reduces age-related cardiac hypertrophy, accelerates skeletal muscle repair, improves exercise capacity, improves brain function and cerebral blood flow, and improves metabolism.

Elevian is developing a number of therapeutics that regulate GDF11 and other circulating factors. The goal is to restore our body’s natural regenerative capacity, which Elevian believes can address some of the root causes of age-associated disease with the promise of reversing or preventing many aging-related diseases and extending the healthy lifespan.

Conclusion
In 1992, futurist Leland Kaiser coined the term “regenerative medicine”:

“A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.”

Since then, the powerful regenerative medicine industry has grown exponentially, and this rapid growth is anticipated to continue.

A dramatic extension of the human healthspan is just over the horizon. Soon, we’ll all have the regenerative superpowers previously relegated to a handful of animals and comic books.

What new opportunities open up when anybody, anywhere, and at any time can regenerate, replenish, and replace entire organs and metabolic systems on command?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Giovanni Cancemi / Shutterstock.com

Posted in Human Robots