#434580 How Genome Sequencing and Senolytics Can ...

The causes of aging are extremely complex and still unclear. With the dramatic demonetization of genome reading and editing over the past decade, and with Big Pharma, startups, and the FDA beginning to treat aging as a disease, we are starting to find practical ways to extend our healthspan.

Here, in Part 2 of a series of blogs on longevity and vitality, I explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.

In this blog I’ll cover two classes of emerging technologies:

Genome Sequencing and Editing;
Senolytics, Nutraceuticals & Pharmaceuticals.

Let’s dive in.

Genome Sequencing & Editing
Your genome is the software that runs your body.

A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity to disease, your lifespan, and so on.

Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean.

Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $5,000 in 2012.

Today, the cost of genome sequencing has dropped below $500, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
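As a rough check on the claim above, a few lines of Python can turn the quoted price points into an implied cost-halving time and compare it with Moore's Law's roughly 18-month doubling period. Attaching 2019 to the $500 figure is an assumption for this sketch:

```python
import math

# Sequencing cost milestones quoted in the text (year, US dollars).
# Dating the $500 figure to 2019 is an assumption for illustration.
milestones = [(2001, 3.7e9), (2006, 10e6), (2012, 5_000), (2019, 500)]

def halving_time_years(y0, c0, y1, c1):
    """Years for the cost to halve, assuming a smooth exponential decline."""
    halvings = math.log2(c0 / c1)  # how many times the cost halved overall
    return (y1 - y0) / halvings

(y0, c0), (y1, c1) = milestones[0], milestones[-1]
overall = halving_time_years(y0, c0, y1, c1)
moore = 1.5  # Moore's Law: doubling roughly every 18 months

print(f"sequencing cost halves every {overall:.2f} years "
      f"(Moore's Law doubles every {moore} years)")
```

Over the whole period the implied halving time comes out well under a year, comfortably faster than Moore's Law, and the earliest segment (2001 to 2006) fell faster still.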

This represents one of the most powerful and transformative technology revolutions in healthcare.

When we understand your genome, we’ll be able to understand how to optimize “you.”

We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later blog).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).

CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit one using CRISPR/Cas9, a system first observed in bacteria in 1987.

Short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, the editing system was adapted from a naturally occurring defense mechanism that bacteria use against invading viruses.

Here’s how it works:

The bacteria capture snippets of DNA from invading viruses (bacteriophages) and use them to create DNA segments known as CRISPR arrays.
The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones), and defend against future invasions.
If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.
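The three steps above can be sketched as a toy Python model. This is purely illustrative: real spacers are around 20 or more base pairs, and Cas9 also requires a PAM sequence next to the target, both omitted here.

```python
# Toy sketch of the CRISPR defense steps described in the text.
def acquire_spacer(crispr_array, phage_dna, length=8):
    """Step 1: capture a snippet of invader DNA into the CRISPR array."""
    crispr_array.append(phage_dna[:length])

def defend(crispr_array, invader_dna):
    """Steps 2-3: spacers act as guide sequences; on a match, 'Cas9'
    cuts the invader's DNA at the matching site, disabling it."""
    for spacer in crispr_array:
        i = invader_dna.find(spacer)
        if i != -1:
            return invader_dna[:i] + "|CUT|" + invader_dna[i + len(spacer):]
    return invader_dna  # no memory of this invader: infection proceeds

array = []
phage = "ATGCCGTAGGCTTACGA"
assert defend(array, phage) == phage  # first attack: no immunity yet
acquire_spacer(array, phage)          # the bacterium "remembers" the phage
print(defend(array, phage))           # prints |CUT|GGCTTACGA
```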

Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome.

A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.

2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers from the University of Chicago recently used CRISPR to genetically engineer cocaine resistance into mice.

Researchers at the University of Texas Southwestern Medical Center used CRISPR to reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs (DMD is the most common fatal genetic disease in children).

With great power comes great responsibility, and moral and ethical dilemmas.

In 2015, Chinese scientists sparked global controversy when they first edited human embryo cells in the lab, attempting to correct the gene defect behind the blood disorder beta-thalassemia (the embryos used were non-viable and never implanted).

Three years later, in November 2018, researcher He Jiankui announced that the first CRISPR-edited babies, twin girls, had been born.

To accomplish this, He disabled the gene for CCR5, a receptor on the surface of white blood cells, mimicking a rare natural genetic variation that makes it more difficult for HIV to infect those cells.

Setting aside the significant ethical conversations, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.

Senolytics, Nutraceuticals & Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach what is known as the Hayflick limit: the number of times a normal human cell population will divide before cell division stops, typically about 50 divisions.

What normally follows next is programmed cell death or destruction by the immune system. A very small fraction of cells, however, become senescent cells and evade this fate to linger indefinitely.
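The process above can be sketched as a toy simulation. The 1 percent evasion rate below is an illustrative assumption, not a measured figure.

```python
import random

# Toy model: each cell divides until it hits the Hayflick limit, then is
# cleared (programmed death or immune destruction) unless it evades
# clearance and lingers as a senescent cell.
random.seed(0)
HAYFLICK_LIMIT = 50
P_EVADE_CLEARANCE = 0.01  # assumed fraction that turns senescent

def cell_fate():
    divisions = 0
    while divisions < HAYFLICK_LIMIT:
        divisions += 1          # one more division toward the limit
    # division has stopped; decide the cell's fate
    if random.random() < P_EVADE_CLEARANCE:
        return "senescent"
    return "cleared"

fates = [cell_fate() for _ in range(10_000)]
print(f"{fates.count('senescent')} of 10,000 cells linger as senescent")
```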

These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse.

Senescent cells appear to be one of the root causes of aging, causing everything from fibrosis and blood vessel calcification, to localized inflammatory conditions such as osteoarthritis, to diminished lung function.

Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.

Prominent companies in the field include the following:

Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells with an initial focus on delivering localized therapy in osteoarthritis, ophthalmology and pulmonary disease.
Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.
SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.

In recent years, researchers have identified or designed a handful of compounds that may curb aging by targeting or regulating senescent cells. Two drugs that have gained mainstream research traction are rapamycin and metformin.

Rapamycin
Originally isolated from soil bacteria found on Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway to selectively block a key protein that facilitates cell division.

Currently, rapamycin derivatives are widely used as immunosuppressants in organ and bone marrow transplants. Research now suggests the drug may also prolong lifespan and enhance cognitive and immune function.

PureTech Health subsidiary resTORbio (which started 2018 by going public) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.

Results of the drug’s recent clinical trial include:

Decreased incidence of infection
Improved influenza vaccination response
A 30.6 percent decrease in respiratory tract infections

Impressive, to say the least.

Metformin
Metformin is a widely used generic drug that reduces glucose production by the liver in Type 2 diabetes patients.

Researchers have found that metformin also reduces oxidative stress and inflammation, both of which otherwise increase as we age.

There is strong evidence that, by damping these two processes, metformin can augment cellular regeneration and dramatically mitigate cellular senescence.

Over 100 studies registered on ClinicalTrials.gov are currently following up on evidence of metformin's protective effect against cancer.

Nutraceuticals and NAD+
Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.

NAD+ (nicotinamide adenine dinucleotide) exists in every cell, participating in processes ranging from DNA repair to the production of cellular energy. NAD+ levels have been shown to decline as we age.

The Elysium Health Basis supplement aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s clinical study reports that Basis increases NAD+ levels consistently by a sustained 40 percent.

Conclusion
This is just a taste of the tremendous momentum that longevity and aging technology has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.

The next blog in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.

We are edging closer to a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add an additional 40 healthy years to your life?

Join Me
Abundance Digital is my online educational portal and community of abundance-minded entrepreneurs. You’ll find weekly video updates from Peter, a curated newsfeed of exponential news, and a place to share your bold ideas. Click here to learn more and sign up.

Image Credit: ktsdesign / Shutterstock.com

Posted in Human Robots

#434569 From Parkour to Surgery, Here Are the ...

The robot revolution may not be here quite yet, but our mechanical cousins have made some serious strides. And now some of the leading experts in the field have provided a rundown of what they see as the 10 most exciting recent developments.

Compiled by the editors of the journal Science Robotics, the list includes some of the most impressive original research and innovative commercial products to make a splash in 2018, as well as a couple from 2017 that really changed the game.

1. Boston Dynamics’ Atlas doing parkour

It seems like barely a few months go by without Boston Dynamics rewriting the book on what a robot can and can’t do. Last year they really outdid themselves when they got their Atlas humanoid robot to do parkour, leaping over logs and jumping between wooden crates.

Atlas’s creators have admitted that the videos we see are cherry-picked from multiple attempts, many of which don’t go so well. But they say they’re meant to be inspirational and aspirational rather than an accurate picture of where robotics is today. And combined with the company’s dog-like Spot robot, they are certainly pushing boundaries.

2. Intuitive Surgical’s da Vinci SP platform
Robotic surgery isn’t new, but the technology is improving rapidly. Market leader Intuitive’s da Vinci surgical robot was first cleared by the FDA in 2000, but since then it’s come a long way, with the company now producing three separate systems.

The latest addition is the da Vinci SP (single port) system, which can insert three instruments into the body through a single 2.5 cm cannula (tube), bringing a whole new meaning to minimally invasive surgery. The system was granted FDA clearance for urological procedures last year, and the company has now started shipping it to customers.

3. Soft robot that navigates through growth

Roboticists have long borrowed principles from the animal kingdom, but a new robot design that mimics the way plant tendrils and fungal mycelia move by growing at the tip has really broken the mold on robot navigation.

The editors point out that this is the perfect example of bio-inspired design; the researchers didn’t simply copy nature, they took a general principle and expanded on it. The tube-like robot unfolds from the front as pneumatic pressure is applied, but unlike a plant, it can grow at the speed of an animal walking and can navigate using visual feedback from a camera.

4. 3D printed liquid crystal elastomers for soft robotics
Soft robotics is one of the fastest-growing sub-disciplines in the field, but powering these devices without rigid motors or pumps is an ongoing challenge. A variety of shape-shifting materials have been proposed as potential artificial muscles, including liquid crystal elastomeric actuators.

Harvard engineers have now demonstrated that these materials can be 3D printed using a special ink that allows the designer to easily program in all kinds of unusual shape-shifting abilities. What’s more, their technique produces actuators capable of lifting significantly more weight than previous approaches.

5. Muscle-mimetic, self-healing, and hydraulically amplified actuators
In another effort to find a way to power soft robots, last year researchers at the University of Colorado Boulder designed a series of super low-cost artificial muscles that can lift 200 times their own weight and even heal themselves.

The devices rely on pouches filled with a liquid that makes them contract with the force and speed of mammalian skeletal muscles when a voltage is applied. The most promising for robotics applications is the so-called Peano-HASEL, which features multiple rectangular pouches connected in series that contract linearly, just like real muscle.

6. Self-assembled nanoscale robot from DNA

While you may think of robots as hulking metallic machines, a substantial number of scientists are working on making nanoscale robots out of DNA. And last year German researchers built the first remote-controlled DNA robotic arm.

They created a length of tightly-bound DNA molecules to act as the arm and attached it to a DNA base plate via a flexible joint. Because DNA carries a charge, they were able to get the arm to swivel around like the hand of a clock by applying a voltage and switch direction by reversing that voltage. The hope is that this arm could eventually be used to build materials piece by piece at the nanoscale.

7. DelFly nimble bioinspired robotic flapper

Robotics doesn’t only borrow from biology—sometimes it gives back to it, too. And a new flapping-winged robot designed by Dutch engineers that mimics the humble fruit fly has done just that, by revealing how the animals that inspired it carry out predator-dodging maneuvers.

The lab has been building flapping robots for years, but this time they ditched the airplane-like tail used to control previous incarnations. Instead, they used insect-inspired adjustments to the motions of its twin pairs of flapping wings to hover, pitch, and roll with the agility of a fruit fly. That has provided a useful platform for investigating insect flight dynamics, as well as more practical applications.

8. Soft exosuit wearable robot

Exoskeletons could prevent workplace injuries, help people walk again, and even boost soldiers’ endurance. Strapping on bulky equipment isn’t ideal, though, so researchers at Harvard are working on a soft exoskeleton that combines specially-designed textiles, sensors, and lightweight actuators.

And last year the team made an important breakthrough by combining their novel exoskeleton with a machine-learning algorithm that automatically tunes the device to the user’s particular walking style. Using physiological data, it is able to adjust when and where the device needs to deliver a boost to the user’s natural movements to improve walking efficiency.

9. Universal Robots (UR) e-Series Cobots
Robots in factories are nothing new. The enormous mechanical arms you see in car factories normally have to be kept in cages to prevent them from accidentally crushing people. In recent years there’s been growing interest in “cobots”: collaborative robots designed to work side by side with their human colleagues and even learn from them.

Last year saw the demise of Rethink Robotics, the pioneer of the approach. But the simple single-arm devices made by Danish firm Universal Robots are becoming ubiquitous in workshops and warehouses around the world, accounting for about half of global cobot sales. Last year the company released its latest e-Series, with enhanced safety features and force/torque sensing.

10. Sony’s aibo
After a nearly 20-year hiatus, Sony’s robotic dog aibo is back, and it’s had some serious upgrades. As well as a revamp to its appearance, the new robotic pet takes advantage of advances in AI, with improved environmental and command awareness and the ability to develop a unique character based on interactions with its owner.

The editors note that this new context awareness marks the device out as a significant evolution in social robots, which many hope could aid in childhood learning or provide companionship for the elderly.

Image Credit: DelFly Nimble / CC BY-SA 4.0


#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car could be fooled by a few stickers, it might not be so fun for the passengers.

These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. The main insight analyzing a trained network itself can give us is a series of statistical weights, associating certain groups of points with certain objects: this can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into 1 of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, and only the outline of the object remained. Ordinarily, the trained neural net recognized the original objects, assigning more than 90% probability to the correct classification. On the silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to overall shape. On average, the correct object was ranked as the 209th most likely answer by the neural network, even though the overall shapes were an exact match.
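The “ranked 209th” figure is simply the number of categories the network scored above the true label, plus one. A quick sketch with random stand-in scores (not real VGG-19 output):

```python
import random

# Random stand-ins for a network's scores over 1,000 classes.
random.seed(42)
scores = [random.random() for _ in range(1000)]
true_class = 123  # hypothetical index of the correct label

def rank_of(scores, idx):
    """Rank of class idx: 1 + number of classes scored higher."""
    return 1 + sum(s > scores[idx] for s in scores)

print("rank of true class:", rank_of(scores, true_class))
```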

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.

Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, probing how neural networks and artificial intelligence algorithms perceive the world takes something closer to experimental psychology. The tests employed here resemble those a scientist might use to understand the senses of an animal or the developing brain of a young child, rather than conventional software testing.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com


#434508 The Top Biotech and Medicine Advances to ...

2018 was bonkers for science.

From a woman who gave birth using a transplanted uterus, to the infamous CRISPR baby scandal, to forensics adopting consumer-based genealogy test kits to track down criminals, last year was a factory churning out scientific “whoa” stories with consequences for years to come.

With CRISPR still in the headlines, Britain ready to bid Europe au revoir, and multiple scientific endeavors taking off, 2019 is shaping up to be just as tumultuous.

Here are the science and health stories that may blow up in the new year. But first, a caveat: predicting the future is tough. Forecasting is the lovechild of statistics and (a good deal of) intuition, and entire disciplines have been dedicated to the endeavor. But January is the perfect time to gaze into the crystal ball for wisps of insight into the year to come. Last year we predicted the widespread approval of gene therapy products—for the most part, we nailed it. This year we’re hedging our bets with multiple predictions.

Gene Drives Used in the Wild
The concept of gene drives scares many, for good reason. Gene drives are a step up in severity (and consequences) from CRISPR and other gene-editing tools. Even with germline editing, in which the sperm, egg, or embryos are altered, gene editing affects just one genetic line—one family—at least at the beginning, before they reproduce with the general population.

Gene drives, on the other hand, have the power to wipe out entire species.

In a nutshell, they’re little bits of DNA code that help a gene transfer from parent to child with almost 100 percent probability. The “half of your DNA comes from dad, the other half comes from mom” dogma? Gene drives smash that to bits.
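The difference between ordinary Mendelian inheritance and a gene drive can be sketched in a toy simulation. The 95 percent transmission rate and all the population numbers below are illustrative assumptions, not measured values:

```python
import random

random.seed(1)

def spread(p_transmit, start_freq=0.01, generations=20, pop=10_000):
    """Carrier frequency per generation in a toy random-mating model.

    p_transmit: chance a carrier parent passes the allele on.
    Mendelian inheritance gives 0.5; a gene drive copies itself into
    the other chromosome, pushing this toward 1.0 (0.95 assumed here).
    """
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        carriers = 0
        for _ in range(pop):
            # a child carries the allele if either parent is a carrier
            # who transmits it
            mom = random.random() < freq and random.random() < p_transmit
            dad = random.random() < freq and random.random() < p_transmit
            if mom or dad:
                carriers += 1
        freq = carriers / pop
        history.append(freq)
    return history

mendelian = spread(p_transmit=0.50)
drive = spread(p_transmit=0.95)
print(f"after 20 generations: Mendelian carriers {mendelian[-1]:.2f}, "
      f"gene-drive carriers {drive[-1]:.2f}")
```

Starting from a 1 percent carrier frequency, the Mendelian allele hovers near where it began, while the gene-drive allele sweeps to near-fixation within a couple of dozen generations.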

In other words, the only time one would consider using a gene drive is to change the genetic makeup of an entire population. It sounds like the plot of a supervillain movie, but scientists have been toying around with the idea of deploying the technology—first in mosquitoes, then (potentially) in rodents.

By releasing just a handful of mutant mosquitoes that carry gene drives for infertility, for example, scientists could potentially wipe out entire populations that carry infectious scourges like malaria, dengue, or Zika. The technology is so potent—and dangerous—that the US Defense Advanced Research Projects Agency (DARPA) is shelling out $65 million to suss out how to deploy, control, counter, or even reverse the effects of tampering with ecology.

Last year, the UN gave a cautious go-ahead for the technology to be deployed in the wild on a limited basis. Now, the first release of genetically modified mosquitoes is set for testing in Burkina Faso in Africa—a step toward the first-ever field use of gene drives.

The experiment will only release mosquitoes in the Anopheles genus, which are the main culprits in transmitting the disease. As a first step, over 10,000 genetically sterile males (carrying no gene drive) are set for release into the wild, helping scientists examine how modified mosquitoes survive and disperse in preparation for deploying gene-drive-carrying ones.

Hot on the project’s heels, the nonprofit consortium Target Malaria, backed by the Bill & Melinda Gates Foundation, is engineering a gene drive called Mosq that would spread infertility across the population or kill off all female insects. Their attempt to hack the rules of inheritance—and save millions of lives in the process—is slated for 2024.

A Universal Flu Vaccine
People often brush off the flu as a mere annoyance, but the infection kills hundreds of thousands of people worldwide each year, according to CDC estimates.

The flu virus is as difficult a nemesis as HIV—it mutates at an extremely rapid rate, making effective vaccines almost impossible to engineer on time. Scientists currently forecast the strains most likely to explode into an epidemic and urge the public to vaccinate against those predictions. That’s partly why, on average, flu vaccines have a success rate of only roughly 50 percent—not much better than a coin toss.

Tired of relying on educated guesses, scientists have been chipping away at a universal flu vaccine that targets all strains—perhaps even those we haven’t yet identified. Often referred to as the “holy grail” in epidemiology, these vaccines try to alert our immune systems to parts of a flu virus that are least variable from strain to strain.

Last November, a first universal flu vaccine developed by BiondVax entered Phase 3 clinical trials, meaning it has already been shown safe and effective in smaller trials and is now being tested in a broader population. The vaccine doesn’t rely on inactivated viruses, a common technique. Rather, it uses a small chain of amino acids—the chemical components that make up proteins—to put the immune system on high alert.

With the government pouring $160 million into the research and several other universal candidates entering clinical trials, universal flu vaccines may finally experience a breakthrough this year.

In-Body Gene Editing Shows Further Promise
CRISPR and other gene-editing tools headed the news last year, with both downers (reports suggesting many of us may already have immunity to the Cas9 protein) and hopeful news of the technology getting ready to treat inherited muscle-wasting diseases.

But what wasn’t widely broadcast were the in-body gene editing experiments that have been rolling out with gusto. Last September, Sangamo Therapeutics in Richmond, California revealed that it had injected gene-editing enzymes into a patient in an effort to correct a genetic defect that prevents him from breaking down complex sugars.

The effort is markedly different than the better-known CAR-T therapy, which extracts cells from the body for genetic engineering before returning them to the hosts. Rather, Sangamo’s treatment directly injects viruses carrying the edited genes into the body. So far, the procedure looks to be safe, though at the time of reporting it was too early to determine effectiveness.

This year the company hopes to finally answer whether it really worked.

If successful, it means that devastating genetic disorders could potentially be treated with just a few injections. With a gamut of new and more precise CRISPR and other gene-editing tools in the works, the list of treatable inherited diseases is likely to grow. And with the CRISPR baby scandal potentially dampening efforts at germline editing via regulations, in-body gene editing will likely receive more attention if Sangamo’s results return positive.

Neuralink and Other Brain-Machine Interfaces
Neuralink is the stuff of sci-fi: tiny particles implanted in the brain could link up your biological wetware with silicon hardware and the internet.

But that’s exactly what Elon Musk’s company, founded in 2016, seeks to develop: brain-machine interfaces that could tinker with your neural circuits in an effort to treat diseases or even enhance your abilities.

Last November, Musk broke his silence on the secretive company, suggesting that he may announce something “interesting” in a few months, that’s “better than anyone thinks is possible.”

Musk’s aspiration for achieving symbiosis with artificial intelligence isn’t the driving force for all brain-machine interfaces (BMIs). In the clinics, the main push is to rehabilitate patients—those who suffer from paralysis, memory loss, or other nerve damage.

2019 may be the year that BMIs and neuromodulators cut the cord in the clinic. These devices may finally work autonomously within a malfunctioning brain, applying electrical stimulation only when necessary to reduce side effects, without requiring external monitoring. Or they could allow scientists to control brains with light without needing bulky optical fibers.
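The core of that "only when necessary" approach, often called responsive or closed-loop stimulation, is a simple control loop: monitor a neural signal, fire a pulse only when it crosses a threshold, then hold off briefly so a single event doesn't trigger a burst of pulses. A minimal sketch in Python; the signal values, threshold, and refractory period here are all made up, not taken from any real device:

```python
# Toy sketch of closed-loop ("responsive") neurostimulation: deliver a pulse
# only when the monitored signal crosses a threshold, then suppress further
# pulses for a refractory period. All numbers are illustrative.

def closed_loop_stimulation(samples, threshold=0.8, refractory=5):
    """Return the sample indices at which a stimulus pulse would fire."""
    pulses = []
    holdoff = 0  # samples remaining before stimulation is allowed again
    for i, amplitude in enumerate(samples):
        if holdoff > 0:
            holdoff -= 1
        elif abs(amplitude) >= threshold:
            pulses.append(i)      # deliver stimulation here
            holdoff = refractory  # don't retrigger on the same event
    return pulses

# A quiet signal with one large excursion: only the onset triggers a pulse.
signal = [0.1, 0.2, 0.9, 1.1, 0.95, 0.3, 0.1, 0.85, 0.2]
print(closed_loop_stimulation(signal))  # [2]
```

In a real implant the "signal" would be something like filtered band power from recording electrodes, with thresholds tuned per patient; the point of the sketch is only that stimulation is gated by the signal rather than running continuously.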

Cutting the cord is just the first step to fine-tuning neurological treatments—or enhancements—to the tune of your own brain, and 2019 will keep on bringing the music.

Image Credit: angellodeco / Shutterstock.com

Posted in Human Robots

#434336 These Smart Seafaring Robots Have a ...

Drones. Self-driving cars. Flying robo-taxis. If the headlines of the last few years are to be believed, terrestrial transportation will someday be filled with robotic conveyances and contraptions that require little input from a human other than to download an app.

But what about the other 70 percent of the planet’s surface—the part that’s made up of water?

Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.

Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.
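Those "pre-programmed instructions" usually amount to a mission plan: a fixed list of waypoints the AUV visits in order, with no decisions made en route. A toy sketch of that idea, with invented coordinates and step sizes:

```python
# Bare-bones waypoint following, the "pre-programmed mission" model most
# AUVs use today: head toward each waypoint in turn, with no sensing or
# decision-making along the way. Coordinates and step size are invented.
import math

def run_mission(start, waypoints, step=1.0, tolerance=0.5):
    """Visit each waypoint in order; return the path of positions taken."""
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        d = math.hypot(wx - x, wy - y)
        while d > tolerance:
            move = min(step, d)        # don't overshoot the waypoint
            x += move * (wx - x) / d
            y += move * (wy - y) / d
            path.append((x, y))
            d = math.hypot(wx - x, wy - y)
    return path

path = run_mission((0.0, 0.0), [(3.0, 0.0), (3.0, 4.0)])
print(len(path), path[-1])  # 8 (3.0, 4.0)
```

The contrast with "smart" ocean robots is that everything interesting happens outside this loop: sensing, re-planning, and reacting to what the vehicle actually finds.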

A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—is beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.

The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”

Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.

When Aquanaut reaches its destination—oil and gas is the primary industry HMI hopes to disrupt first—its four specially designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.

The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.

“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.

Hardware wasn’t the only problem the team, made up of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters over an underwater acoustic communications system that harkens back to the days of dial-up internet connections.

To tackle that low-bandwidth problem, HMI equipped Aquanaut with a machine vision system composed of acoustic, optical, and laser-based sensors. All of that dense data is compressed using technology designed in-house and transmitted to a single human operator, who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.
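HMI's compression scheme is proprietary, but the general trick for squeezing sensor data through an acoustic link is familiar: quantize the readings, encode only the changes between consecutive samples, then run a general-purpose compressor over the result. A toy illustration (the scale factor and data are invented, and this is not HMI's pipeline):

```python
# Toy sensor-data compression for a low-bandwidth link: quantize floats to
# integers, delta-encode them (slowly-varying readings produce tiny deltas),
# then deflate. Illustrative only; real pipelines are far more elaborate.
import struct
import zlib

def compress_readings(readings, scale=100):
    """Quantize to 2 decimal places, delta-encode, and deflate."""
    quantized = [round(r * scale) for r in readings]
    deltas = [quantized[0]] + [b - a for a, b in zip(quantized, quantized[1:])]
    raw = struct.pack(f"<{len(deltas)}i", *deltas)
    return zlib.compress(raw)

def decompress_readings(blob, scale=100):
    """Invert compress_readings: inflate, undo deltas, rescale."""
    deltas = struct.unpack(f"<{len(zlib.decompress(blob)) // 4}i",
                           zlib.decompress(blob))
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total / scale)
    return out

# Slowly-varying depth readings compress far below their raw size.
readings = [12.34, 12.35, 12.35, 12.36, 12.38, 12.37] * 50
blob = compress_readings(readings)
print(len(blob), "bytes vs", len(readings) * 4, "raw")
```

On an acoustic channel with dial-up-era throughput, every byte saved this way is more imagery or telemetry the operator actually sees.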

“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.

HMI raised $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.

“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”

On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI joining the list of investors in a $10 million Series A earlier this month.

Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted onto existing commercial vessels or installed on newly built working ships.

For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similarly to the advanced driver-assistance systems found in automobiles, helping the vessel avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.
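Whatever the sensor suite, marine collision avoidance typically starts from a standard quantity: the closest point of approach (CPA). Given two vessels' positions and velocities, how near will they pass, and when? A minimal 2D sketch assuming constant velocities, with invented numbers (this is not Sea Machines' or anyone's actual shipboard implementation):

```python
# Closest point of approach (CPA) for two vessels moving at constant
# velocity: the standard first check in marine collision avoidance.
# Positions in km, velocities in km/h; all values are illustrative.

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (time, distance) of closest approach between two vessels."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    vv = vx * vx + vy * vy
    # Time when relative distance is minimized (clamped: the past is moot).
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t
    return t, (dx * dx + dy * dy) ** 0.5

# Head-on situation: 10 km apart, closing at a combined 20 km/h.
t, d = closest_point_of_approach((0, 0), (10, 0), (10, 0), (-10, 0))
print(f"CPA in {t:.1f} h at {d:.1f} km")  # CPA in 0.5 h at 0.0 km
```

A system like the one on the Maersk ship would compute something like this continuously for every tracked target and alert (or eventually steer) when the predicted CPA falls inside a safety margin.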

It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls Royce—yes, that Rolls Royce—is leading the way in the development of autonomous ships. Its Intelligence Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.

In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.

While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls Royce Intelligence Awareness system.

Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or rather, invaded—is the spindly lionfish.

A venomous critter native to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.

That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.

At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
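The decision logic such a robot needs on top of its classifier is simple: act only when the model is confident the target really is a lionfish. Here is a minimal sketch of that gating step with a stand-in linear classifier. The real WPI system uses a neural network trained on those thousands of images; every weight and feature value below is invented:

```python
# Confidence-gated targeting: classify, then fire only on a high-confidence
# "lionfish" prediction. The two-feature linear classifier is a stand-in
# for a trained vision model; all weights and features are invented.
import math

LABELS = ["lionfish", "other"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights):
    """Dot-product scores per class, turned into probabilities."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

def should_fire(features, weights, confidence_threshold=0.95):
    """Fire the spear only on a confident lionfish prediction."""
    label, confidence = classify(features, weights)
    return label == "lionfish" and confidence >= confidence_threshold

# Illustrative weights: the first row responds to "lionfish-like" features.
weights = [[3.0, -1.0], [-1.0, 3.0]]
print(should_fire([2.0, 0.1], weights))  # True  (confident lionfish)
print(should_fire([1.0, 0.9], weights))  # False (too ambiguous to spear)
```

The asymmetry matters: a false negative means one lionfish swims away, while a false positive means spearing a native reef fish, so the confidence bar is set high.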

Meanwhile, a small startup called the American Marine Research Corporation, out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearfishing, the AMRC drone would stun and capture the lionfish, turning a profit by selling the creatures to local seafood restaurants.

Lionfish: It’s what’s for dinner.

Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.

Image Credit: Houston Mechatronics, Inc.
