Tag Archives: generation

#434658 The Next Data-Driven Healthtech ...

Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence. And, as we saw in last week’s blog, healthcare AI systems are extremely data-hungry.

Fortunately, a slew of new sensors and data acquisition methods—including over 122 million wearables shipped in 2018—are bursting onto the scene to meet the massive demand for medical data.

From ubiquitous biosensors, to the mobile healthcare revolution, to the transformative power of the Health Nucleus, converging exponential technologies are fundamentally transforming our approach to healthcare.

In Part 4 of this blog series on Longevity & Vitality, I expand on how we’re acquiring the data to fuel today’s AI healthcare revolution.

In this blog, I’ll explore:

How the Health Nucleus is transforming “sick care” to healthcare
Sensors, wearables, and nanobots
The advent of mobile health

Let’s dive in.

Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care. Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.

Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it. At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.

What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?

Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts. Such a system becomes personalized, predictive, and possibly preventative.

This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI). While not continuous—that will come later, with the next generation of wearable and implantable sensors—the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.

During a half-day visit, the Health Nucleus provides you with the following tests:

Whole genome sequencing (30x coverage)
Whole body (non-contrast) MRI
Brain magnetic resonance imaging/angiography (MRI/MRA)
CT (computed tomography) of the heart and lungs
Coronary artery calcium scoring
Electrocardiogram
Echocardiogram
Continuous cardiac monitoring
Clinical laboratory tests and metabolomics

In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus. The results were eye-opening—especially since these patients were all financially well-off, and already had access to the best doctors.

Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.

Physiological Findings

2 percent had previously unknown tumors detected by MRI
2.5 percent had previously undetected aneurysms detected by MRI
8 percent had previously unknown cardiac arrhythmias found on cardiac rhythm monitoring
9 percent had previously unknown moderate-to-severe coronary artery disease risk
16 percent had previously unknown cardiac structure/function abnormalities
30 percent had previously unknown elevated liver fat

Genomic Findings

24 percent of clients had a rare (previously unknown) genetic mutation uncovered by whole genome sequencing (WGS)
63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding

In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.

Findings of long-term value were identified in 40 percent of the clients screened. These long-term clinical findings include discoveries that require medical attention or monitoring but are not immediately life-threatening.

The bottom line: most people truly don’t know their actual state of health. The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at stage zero or stage one, when it is most curable.

Sensors, Wearables, and Nanobots
Wearables, connected devices, and quantified self apps will allow us to continuously collect enormous amounts of useful health information.

Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture, and stress levels anywhere on the planet.

In April 2017, we were proud to grant $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.

Using a group of noninvasive sensors that collect data on vital signs, body chemistry, and biological functions, Final Frontier integrates this data into its powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.

Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.

Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.

In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-cleared, on-the-fly mobile ECG. Having secured its first FDA clearance, Apple appears to be moving deeper into the healthcare sensing market.

Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.

Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite. This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.

Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking, and battery capability that we can now cheaply and compactly integrate.

Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.

The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:

Two infrared LEDs
One infrared sensor
Three temperature sensors
One accelerometer
A six-axis gyro
A curved battery with a seven-day life
The memory, processing, and transmission capability required to connect with your smartphone

Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a package smaller than most bandages, powered by a smartphone. Dramatically disrupting ultrasound is just the beginning.

Nanobots and Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate, and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.

Some of the most exciting breakthroughs in smart nanotechnology from the past year include:

Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics, or power transmission.

Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.

MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out pre-programmed computational tasks.

Engineers at University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.

But it doesn’t stop there.

As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.

Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.

Step aside, WebMD.

In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.

As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission, and remote diagnosis by medical professionals.

Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.

Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, delivering cognitive behavioral therapy over Facebook Messenger to patients suffering from depression.

In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-cleared Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.

Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips—all you need to do is take a few photos.

With mHealth platforms like ClickMedix, which connects remotely-located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?

Welcome to the age of smartphone-as-a-medical-device.

Conclusion
With these DIY data collection and diagnostic tools, we save on transportation costs (time and money) and eliminate time bottlenecks.

No longer will you need to wait for your urine or blood results to go through the current information chain: samples sent to a lab, analyzed by a technician, results interpreted by your doctor, and only then relayed to you.

Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem. Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own.

The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Titima Ongkantong / Shutterstock.com

#434336 These Smart Seafaring Robots Have a ...

Drones. Self-driving cars. Flying robo taxis. If the headlines of the last few years are to be believed, terrestrial transportation in the future will be filled with robotic conveyances and contraptions that require little input from a human other than to download an app.

But what about the other 70 percent of the planet’s surface—the part that’s made up of water?

Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.

Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.

A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—is beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.

The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”

Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.

When Aquanaut reaches its destination—oil and gas is the primary industry HMI hopes to disrupt to start—its four specially-designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.

The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.

“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.

Hardware wasn’t the only problem the team, comprised of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters using an underwater acoustics communications system that harkens back to the days of dial-up internet connections.

To tackle that problem of low bandwidth, HMI equipped Aquanaut with a machine vision system comprised of acoustic, optical, and laser-based sensors. All of that dense data is compressed using in-house designed technology and transmitted to a single human operator who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.

“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.

HMI raised $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.

“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”

On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI Ventures joining the list of investors in a $10 million Series A earlier this month.

Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted on existing commercial vessels or installed on newly-built working ships.

For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similarly to the advanced driver-assistance systems found in automobiles to avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.

It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls-Royce—yes, that Rolls-Royce—is leading the way in the development of autonomous ships. Its Intelligent Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.

In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.

While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls-Royce Intelligent Awareness system.

Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or rather, invaded—is the spindly lionfish.

A venomous critter endemic to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.

That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.

At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
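
The article doesn’t detail the students’ pipeline, but a common recipe for this kind of task is transfer learning: take a convolutional network pretrained on a large generic image dataset and retrain its final layer to separate lionfish from everything else. A minimal sketch in Python with PyTorch follows; the folder layout, backbone, and hyperparameters are illustrative assumptions, not details from the WPI project.

```python
# Illustrative sketch only: a binary "lionfish vs. other species" image
# classifier built by fine-tuning a pretrained CNN. The folder layout,
# backbone, and hyperparameters are assumptions, not details from the
# WPI project described above.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/train/lionfish/*.jpg, data/train/other/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load a pretrained ResNet and swap in a new two-class output layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new classification head is updated by the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With thousands of labeled images per class, a fine-tuned classifier of this sort can plausibly reach the accuracy range the students report, though the real number depends heavily on image quality and how visually distinct lionfish are from the other species photographed.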

Meanwhile, a small startup called the American Marine Research Corporation, out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearfishing, the AMRC drone would stun and capture the lionfish, turning a profit by selling the creatures to local seafood restaurants.

Lionfish: It’s what’s for dinner.

Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and into its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.

Image Credit: Houston Mechatronics, Inc.

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as the philosopher Martin Heidegger observed in his meditation on the hammer, seem to disappear into the hand. They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify that output accordingly. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and interpret nonverbal prompts (the faint neuromuscular signals of silent speech), which enables the device to ‘read’ the user’s internal speech and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific users of technology (you may recall the viral 2016 New York Times article, Artificial Intelligence’s White Guy Problem). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate in what Clayton Christensen, in The Innovator’s Dilemma, called the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market, because serving an overlooked segment often opens up a much larger market over time.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds they create are often ingenious, precisely because they arise not from preference but from necessity, which forces disadvantaged users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort device captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com

#434182 Why AI robot toys could be good for kids

A new generation of robot toys with personalities powered by artificial intelligence could give kids more than just a holiday plaything, according to a University of Alberta researcher.

#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will get more and more used to them making independent decisions in our homes, streets, and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a few such test sites around the EU, the apartment focuses entirely on people with special needs—and on how robots can help them by interacting with connected devices in a smart home.

Suppose a doorbell rings that has smart video features. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.

Equally, it could offer protection. It could inform them the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or not heard.

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing, and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds that are crucially important for making users more accepting of the support and making it more effective.
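
The plumbing behind these scenarios isn’t spelled out here, but the basic pattern is event-driven: connected devices publish events, and the robot maps each event to a physical action such as navigating to the resident or speaking a message. The following minimal Python sketch illustrates the idea; the device names, payload fields, and robot methods are hypothetical, not details of our Heriot-Watt setup.

```python
# Illustrative sketch only: mapping smart-home events to robot actions.
# Device names, payload fields, and robot methods are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Event:
    device: str     # e.g. "presence_sensor", "doorbell", "oven"
    payload: dict


class HomeRobot:
    def __init__(self):
        self.resident_location = "living_room"  # kept up to date by presence sensors
        self.handlers: Dict[str, Callable[[dict], None]] = {
            "presence_sensor": self.update_location,
            "doorbell": self.handle_doorbell,
            "oven": self.handle_oven,
        }

    def on_event(self, event: Event) -> None:
        handler = self.handlers.get(event.device)
        if handler:
            handler(event.payload)

    def update_location(self, payload: dict) -> None:
        self.resident_location = payload["room"]

    def handle_doorbell(self, payload: dict) -> None:
        # Go to the resident and relay who is at the door
        self.navigate_to(self.resident_location)
        self.say(f"Someone is at the door: {payload.get('visitor', 'unknown visitor')}")

    def handle_oven(self, payload: dict) -> None:
        if payload.get("state") == "on" and payload.get("unattended_minutes", 0) > 30:
            self.navigate_to(self.resident_location)
            self.say("Reminder: the oven has been on for a while.")

    def navigate_to(self, room: str) -> None:   # stand-in for the navigation stack
        print(f"[nav] moving to {room}")

    def say(self, text: str) -> None:           # stand-in for text-to-speech
        print(f"[speech] {text}")


robot = HomeRobot()
robot.on_event(Event("presence_sensor", {"room": "bedroom"}))
robot.on_event(Event("doorbell", {"visitor": "a family member"}))
robot.on_event(Event("oven", {"state": "on", "unattended_minutes": 45}))
```

In a real deployment the events would arrive over a home-automation bus or messaging protocol, and the navigation and speech calls would hook into the robot’s own software stack.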

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotics League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies, for example, are looking at the Internet of Things, experimenting with wireless sensors to collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is developing such systems. The aim is to reduce the costs and the risks of humans working in remote hazardous locations.

ORCA tests a drone robot. Image Credit: ORCA
Working underwater is particularly challenging, since radio waves don’t travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (about 1,500 meters per second vs. roughly 300 million meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
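
To put that speed difference in perspective, here is a back-of-the-envelope comparison of one-way signal delay over a 3 km link (the distance is an illustrative assumption, not a figure from the article):

```python
# Back-of-the-envelope comparison of one-way signal delay over a 3 km link.
# The distance is an illustrative assumption, not a figure from the article.
distance_m = 3_000
acoustic_speed_mps = 1_500          # speed of sound in seawater, roughly
radio_speed_mps = 300_000_000       # speed of radio waves in air / free space

print(f"acoustic delay: {distance_m / acoustic_speed_mps:.1f} s")      # ~2.0 s
print(f"radio delay:    {distance_m / radio_speed_mps * 1e6:.0f} µs")  # ~10 µs
```

Seconds of delay each way, combined with very limited bandwidth, is why underwater sensor networks need communication schemes designed specifically for acoustic links.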

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. There needs to be more effort to create a joint community, such as the dedicated collaboration workshops we organized at the European Robotics Forum and IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry/Shutterstock.com
