Tag Archives: IoT

#437337 6G Will Be 100 Times Faster Than ...

Though 5G—a next-generation speed upgrade to wireless networks—is scarcely up and running (and still nonexistent in many places), researchers are already working on what comes next. It lacks an official name, but they’re calling it 6G for the sake of simplicity (and hey, it’s tradition). 6G promises to be up to 100 times faster than 5G—fast enough to download 142 hours of Netflix in a second—but researchers are still trying to figure out exactly how to make such ultra-speedy connections happen.

A new chip, described in a paper in Nature Photonics by a team from Osaka University and Nanyang Technological University in Singapore, may give us a glimpse of our 6G future. The team was able to transmit data at a rate of 11 gigabits per second, topping 5G’s theoretical maximum speed of 10 gigabits per second and fast enough to stream 4K high-def video in real time. They believe the technology has room to grow, and with more development, might hit those blistering 6G speeds.

NTU final-year PhD student Abhishek Kumar, Assoc. Prof. Ranjan Singh, and postdoc Dr. Yihao Yang. Dr. Singh is holding the photonic topological insulator chip made from silicon, which can transmit terahertz waves at ultrahigh speeds. Credit: NTU Singapore

But first, some details about 5G and its predecessors so we can differentiate them from 6G.

Electromagnetic waves are characterized by a wavelength and a frequency; the wavelength is the distance a cycle of the wave covers (peak to peak or trough to trough, for example), and the frequency is the number of waves that pass a given point in one second. Cellphones use miniature radios to pick up electromagnetic signals and convert those signals into the sights and sounds on your phone.
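
To put numbers on that relationship: wavelength is simply the speed of light divided by frequency. The short snippet below (an illustration added here, not part of the original research) shows how wavelengths shrink from roughly 30 centimeters at 1 gigahertz to a fraction of a millimeter at the terahertz frequencies discussed below.

```python
# Illustration: free-space wavelength = speed of light / frequency.
C = 299_792_458  # speed of light, meters per second

def wavelength_mm(frequency_hz: float) -> float:
    """Return the free-space wavelength in millimeters."""
    return C / frequency_hz * 1000

for label, f_hz in [("1 GHz (roughly 4G low/mid-band)", 1e9),
                    ("300 GHz (upper end of 5G millimeter waves)", 300e9),
                    ("1 THz (the frequency used by the new chip)", 1e12)]:
    print(f"{label}: {wavelength_mm(f_hz):.2f} mm")
# 1 GHz -> ~299.79 mm, 300 GHz -> ~1.00 mm, 1 THz -> ~0.30 mm
```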

4G wireless networks run on the low- and mid-band spectrum, at frequencies a little below (low-band) and a little above (mid-band) one gigahertz (one billion cycles per second). 5G kicked that up several notches by adding millimeter waves at much higher frequencies of up to 300 gigahertz, or 300 billion cycles per second. Those higher frequencies can carry information-dense data—like video—much faster.

The 6G chip kicks 5G up several more notches. It can transmit waves at more than three times the frequency of 5G: one terahertz, or a trillion cycles per second. The team says this yields a data rate of 11 gigabits per second. While that’s faster than the fastest 5G will get, it’s only the beginning for 6G. One wireless communications expert even estimates 6G networks could handle rates up to 8,000 gigabits per second; they’ll also have much lower latency and higher bandwidth than 5G.
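
As a rough back-of-the-envelope comparison (the per-stream bitrate of about 25 megabits per second for 4K is an assumption on my part, not a figure from the paper), here is what those headline data rates translate to in simultaneous 4K streams:

```python
# Hedged back-of-the-envelope math; the 4K bitrate is an assumed typical value.
RATES_GBPS = {
    "5G theoretical maximum": 10,
    "Terahertz chip demo": 11,
    "Projected 6G estimate": 8000,
}
ASSUMED_4K_STREAM_MBPS = 25  # typical 4K streaming bitrate (assumption)

for name, gbps in RATES_GBPS.items():
    streams = gbps * 1000 / ASSUMED_4K_STREAM_MBPS
    print(f"{name}: {gbps} Gbit/s = roughly {streams:,.0f} simultaneous 4K streams")
```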

Terahertz waves fall between infrared waves and microwaves on the electromagnetic spectrum. Generating and transmitting them is difficult and expensive, requiring special lasers, and even then the usable frequency range is limited. To transmit terahertz waves, the team turned to a new class of material called photonic topological insulators (PTIs). PTIs conduct light waves along their surface and edges rather than through the bulk of the material, and they allow light to be redirected around corners without disturbing its flow.

The chip is made completely of silicon and has rows of triangular holes. The team’s research showed the chip was able to transmit terahertz waves error-free.

Nanyang Technological University associate professor Ranjan Singh, who led the project, said, “Terahertz technology […] can potentially boost intra-chip and inter-chip communication to support artificial intelligence and cloud-based technologies, such as interconnected self-driving cars, which will need to transmit data quickly to other nearby cars and infrastructure to navigate better and also to avoid accidents.”

Besides being used for AI and self-driving cars (and, of course, downloading hundreds of hours of video in seconds), 6G would also make a big difference for data centers, IoT devices, and long-range communications, among other applications.

Given that 5G networks are still in the process of being set up, though, 6G won’t be coming on the scene anytime soon. A recent whitepaper on 6G from Japanese telecom NTT DoCoMo estimates we’ll see it in 2030, pointing out that wireless generations have so far arrived about 10 years apart: 3G in the early 2000s, 4G in 2010, and 5G in 2020.

In the meantime, as 6G continues to develop, we’re still looking forward to the widespread adoption of 5G.

Image Credit: Hans Braxmeier from Pixabay

Posted in Human Robots

#436149 Blue Frog Robotics Answers (Some of) Our ...

In September of 2015, Buddy the social home robot closed its Indiegogo crowdfunding campaign more than 600 percent over its funding goal. A thousand people pledged for a robot originally scheduled to be delivered in December of 2016. But nearly three years later, the future of Buddy is still unclear. Last May, Blue Frog Robotics asked for forgiveness from its backers and announced the launch of an “equity crowdfunding campaign” to try to raise the additional funding necessary to deliver the robot in April of 2020.

By the time the crowdfunding campaign launched in August, the delivery date had slipped again, to September 2020, even as Blue Frog attempted to draw investors by estimating that sales of Buddy would “increase from 2000 robots in 2020 to 20,000 in 2023.” Blue Frog’s most recent communication with backers, in September, mentions a new CTO and a North American office, but does little to reassure backers of Buddy that they’ll ever be receiving their robot.

Backers of the robot are understandably concerned about the future of Buddy, so we sent a series of questions to the founder and CEO of Blue Frog Robotics, Rodolphe Hasselvander.

We’ve edited this interview slightly for clarity, but we should also note that Hasselvander was unable to provide answers to every question. In particular, we asked for some basic information about Blue Frog’s near-term financial plans, on which the entire future of Buddy seems to depend. We’ve left those questions in the interview anyway, along with Hasselvander’s response.

1. At this point, how much additional funding is necessary to deliver Buddy to backers?
2. Assuming funding is successful, when can backers expect to receive Buddy?
3. What happens if the fundraising goal is not met?
4. You estimate that sales of Buddy will increase 10x over three years. What is this estimate based on?

Rodolphe Hasselvander: Regarding questions 1-4: unfortunately, as we are fundraising under Regulation D, we do not comment on prospects, customer data, sales forecasts, or figures. Please refer to our press release here for information about the fundraising.

5. Do you feel that you are currently being transparent enough about this process to satisfy backers?
6. Buddy’s launch date has moved from April 2020 to September 2020 over the last four months. Why should backers remain confident about Buddy’s schedule?

Since the last newsletter, we haven’t changed our communication: the backers will be the first to receive their Buddy, and we plan an official launch in September 2020.

7. What is the goal of My Buddy World?

At Blue Frog, we think that matching a great product with a big market can only happen through continual experimentation, iteration and incorporation of customer feedback. That’s why we created the forum My Buddy World. It has been designed for our Buddy Community to join us, discuss the world’s first emotional robot, and create with us. The objective is to deepen our conversation with Buddy’s fans and users, stay agile in testing our hypothesis and validate our product-market fit. We trust the value of collaboration. Behind Buddy, there is a team of roboticists, engineers, and programmers that are eager to know more about our consumers’ needs and are excited to work with them to create the perfect human/robot experience.

8. How is the current version of Buddy different from the 2015 version that backers pledged for during the successful crowdfunding campaign, in both hardware and software?

We have completely revised some parts of Buddy as well as replaced and/or added more accurate and reliable components to ensure we fully satisfy our customers’ requirements for a mature and high-quality robot from day one. We sourced more innovative components to make sure that Buddy has the most up-to-date technologies such as adding four microphones, a high def thermal matrix, a 3D camera, an 8-megapixel RGB camera, time-of-flight sensors, and touch sensors.

If you want more info, we just posted an article about what Buddy is here.

9. Will the version of Buddy that ships to backers in 2020 do everything that was shown in the original crowdfunding video?

Concerning the capabilities of Buddy shown in the video published on YouTube, I confirm that Buddy will be able to do everything you can see, such as patrolling autonomously to secure your home, telepresence, mathematics applications, interactive stories for children, IoT/smart home management, face recognition, alarm clock, reminders, message/photo sharing, music, hands-free calls, people following, and games like hide-and-seek (and more). In addition, everyone will be able to create their own apps thanks to the “BuddyLab” application.

10. What makes you confident that Buddy will be successful when Jibo, Kuri, and other social robots have not?

Consumer robotics is a new market. Some people think it is a tough one. But we, at Blue Frog Robotics, believe it is a path of learning, understanding, and finding new ways to serve consumers. Here are the five key factors that will make Buddy successful.

1) A market-fit robot

Blue Frog Robotics is a consumer-centric company. We know that a successful business model and a compelling market fit for Buddy must come from solving consumers’ frustrations and problems in a way that’s new and exciting. We started from there.

By leveraging existing research and syndicated consumer data sets to understand our customers’ needs and aspirations, we learned that creating a robot is not about the best tech innovation and features, but about how well technology serves basic human needs and assets: convenience, connection, security, fun, self-improvement, and time. To answer these consumers’ needs and wants, we designed an all-in-one robot with four vital capabilities: intelligence, emotionality, mobility, and customization.

With his multi-purpose brain, he addresses a broad range of needs in modern-day life, from securing homes to carrying out his owners’ daily activities, from helping people with disabilities to educating children, from entertaining to just becoming a robot friend.

Buddy is a disruptive, innovative robot that is about to transform the way we live, learn, use information, play, and even care for our health.

2) Endless possibilities

One of the major advantages of Buddy is his adaptability. Beyond being adorable, playful, talkative, and able to accompany anyone in their daily life at home, whether they are comfortable with technology or not, he offers platform applications that engage his owners in a wide range of activities. From fitness to cooking, from health monitoring to education, from games to meditation, the combination of intelligence, sensors, mobility, and a multi-touch panel opens endless possibilities for consumers and organizations to adapt their Buddy to their own needs.

3) An affordable price

Buddy will be the first robot combining smart, social, and mobile capabilities with a developed platform and a personality to enter the U.S. market at an affordable price.

Our competitors are social or assistant robots, but rarely both. Competitors differentiate themselves by features: mobile or non-mobile; by shape: humanoid or not; by skills: social versus smart; by targeting a specific domain like entertainment, retail assistance, eldercare, or education for children; and by price. Regarding our six competitors: Moorebot, Elli-Q, and Olly are not mobile; Lynx and Nao are in the toy category; Pepper costs above $10k and targets the B2B market; and finally, Temi can’t be considered an emotional robot.

Buddy remains highly differentiated as an all-in-one, best-in-class experience, covering his owners’ needs for social interaction and assistance at each stage of their life, at an affordable price.

The price range of Buddy will be between US $1700 and $2000.

4) A winning business model

Buddy’s great business model combines hardware, software, and services, and provides game-changing convenience for consumers, organizations, and developers.

Buddy offers a multi-sided value proposition focused on three vertical markets: direct consumers, corporations (healthcare, education, hospitality), and developers. The model creates engagement and sustained usage and produces stable and diverse cash flow.

5) A passion for people and technology

From day one, we have always believed in the power of our dream: to bring the services and the fun of an emotional robot to every house, every hospital, and every care home. Each day, we refuse to think that we are stuck or limited; we work hard to make Buddy a reality that will help people all over the world and make them smile.

While we certainly appreciate Hasselvander’s consistent optimism and obvious enthusiasm, we’re obligated to point out that some of our most important questions were not directly answered. We haven’t learned anything that makes us all that much more confident that Blue Frog will be able to successfully deliver Buddy this time. Hasselvander also didn’t address our specific question about whether he feels like Blue Frog’s communication strategy with backers has been adequate, which is particularly relevant considering that over the four months between the last two newsletters, Buddy’s launch date slipped by six months.

At this point, all we can do is hope that the strategy Blue Frog has chosen will be successful. We’ll let you know as soon as we learn more.

[ Buddy ]

Posted in Human Robots

#435822 The Internet Is Coming to the Rest of ...

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate among each other and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano before, and both displayed an exquisite sense of musical timing and melody.

Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”

“It blew me away,” he says.

Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”

Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.

“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”

So, for the past six years, the architects of the Dolittlesque initiative embarked on two small pilot projects, one for dolphins and one for chimpanzees.

At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.

Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.

Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.

Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical system in which images are projected through an underwater viewing window onto a glass panel while infrared sensors track where the dolphins touch, allowing the animals to play specially designed apps, including one dubbed Whack-a-Fish.

Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.

The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.

The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.

“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.

Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”

Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
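
The Earth Species Project’s actual pipeline isn’t detailed in this post, but one standard building block of unsupervised machine translation is aligning two embedding spaces without any paired examples. The toy sketch below, using entirely synthetic data, shows the orthogonal Procrustes step often used for that alignment; a real system would add adversarial training and iterative refinement, and the embeddings would come from actual vocalization and language models.

```python
# Toy sketch of one unsupervised-translation building block: learn an orthogonal
# map W that aligns a "source" embedding space to a "target" one (Procrustes).
# All data here is synthetic; real pipelines are far more involved.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(500, 64))                    # stand-in "human language" embeddings
rotation = np.linalg.qr(rng.normal(size=(64, 64)))[0]  # hidden ground-truth rotation
source = target @ rotation.T + 0.01 * rng.normal(size=target.shape)  # stand-in "vocalization" embeddings

u, _, vt = np.linalg.svd(source.T @ target)            # closed-form Procrustes solution
W = u @ vt

relative_error = np.linalg.norm(source @ W - target) / np.linalg.norm(target)
print(f"relative alignment error: {relative_error:.4f}")  # small value -> spaces aligned
```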

Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.

“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.

A challenge with this sort of AI software remains verification and validation. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say if a cybernetic translation of a sperm whale’s clicks is accurate or not?

One could back-translate an English expression into sperm whale-ese and then into English again. But with the great apes, there might be a better option.

According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.

Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha were.

Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata that had lived for six years in the Congolese rainforests before her capture.

Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes thus grew up fluent in both bonobo and human.

Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.

Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.

And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”

“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”

That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.

At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)

Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.

“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”

An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”

Posted in Human Robots

#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can prevent disasters in the first place, and how we can get help to victims during that first golden hour in which immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.

Partnering with the cities of Los Angeles and San Francisco, as well as numerous municipalities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. By building robust models of each system, One Concern’s AI platform can then monitor the site-specific impacts not only of climate change but of each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with records of city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.
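
One Concern’s models aren’t public in this post, so the sketch below is only a minimal illustration of the general idea: learn a per-building damage prediction from structural attributes plus hazard intensity. The features, thresholds, and synthetic training data are all invented for illustration and do not reflect the company’s system.

```python
# Minimal, illustrative damage-prediction sketch (not One Concern's system):
# predict whether a building suffers significant damage from a few attributes
# plus shaking intensity, trained on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
year_built = rng.integers(1900, 2020, n)
stories = rng.integers(1, 30, n)
shaking = rng.uniform(0.1, 1.2, n)  # peak ground acceleration in g (synthetic)

# Synthetic "ground truth": older, taller buildings under stronger shaking fare worse.
risk = 0.8 * shaking + 0.3 * (stories / 30) + 0.5 * ((2020 - year_built) / 120)
damaged = (risk + rng.normal(0, 0.15, n) > 0.9).astype(int)

X = np.column_stack([year_built, stories, shaking])
X_train, X_test, y_train, y_test = train_test_split(X, damaged, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```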

And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are just as formidable a tool for triage in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

Aggregating one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in the crowdsourcing of witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.
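
To make the crowdsourcing idea concrete, here is a deliberately tiny sketch (not Ushahidi’s actual code) of the core aggregation step: bucket verified, geotagged reports into a coarse grid and surface the cells with the most activity.

```python
# Toy crisis-mapping aggregation (illustrative only): count verified geotagged
# reports per grid cell and flag hotspots.
from collections import Counter

reports = [  # (latitude, longitude, verified) -- made-up sample data
    (-1.29, 36.82, True), (-1.30, 36.81, True), (-1.28, 36.82, True),
    (-1.29, 36.83, False), (-0.10, 34.75, True),
]

CELL_DEG = 0.05  # grid cell size in degrees
counts = Counter(
    (round(lat / CELL_DEG), round(lon / CELL_DEG))
    for lat, lon, verified in reports
    if verified
)

for (row, col), n in counts.items():
    if n >= 2:  # arbitrary hotspot threshold
        print(f"hotspot near lat={row * CELL_DEG:.2f}, lon={col * CELL_DEG:.2f}: {n} verified reports")
```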

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
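
A minimal way to picture this (a simulation sketch, not any particular mesh protocol) is that each device knows only its radio-range neighbors, and a message spreads by each node re-broadcasting it once:

```python
# Minimal flooding simulation over a hypothetical mesh of devices.
from collections import deque

mesh = {  # hypothetical adjacency: which devices are in radio range of which
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}

def flood(origin: str) -> dict:
    """Return the hop count at which each node first receives the message."""
    hops = {origin: 0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in mesh[node]:
            if neighbor not in hops:  # each node re-broadcasts only once
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    return hops

print(flood("A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}
```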

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts, each cluster of eyewitnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
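
A highly simplified sketch of that imagined dispatch loop might look like the following; every name, score, and threshold here is hypothetical, meant only to show how verification and urgency could gate an automated (but human-approved) drone response.

```python
# Hypothetical triage-and-dispatch sketch: score aid requests by crowd
# corroboration and sensor confirmation, then queue the most urgent for drones.
from dataclasses import dataclass

@dataclass
class AidRequest:
    lat: float
    lon: float
    need: str             # e.g. "insulin", "water", "blankets"
    crowd_reports: int    # number of corroborating crowdsourced reports
    sensor_confirmed: bool

def urgency(req: AidRequest) -> float:
    score = min(req.crowd_reports, 10) / 10  # more corroboration, higher score
    if req.sensor_confirmed:
        score += 0.5                         # sensor cross-reference boosts confidence
    return score

requests = [
    AidRequest(29.76, -95.37, "insulin", crowd_reports=7, sensor_confirmed=True),
    AidRequest(29.80, -95.40, "blankets", crowd_reports=2, sensor_confirmed=False),
]

DISPATCH_THRESHOLD = 0.6  # arbitrary cutoff before human approval
for req in sorted(requests, key=urgency, reverse=True):
    if urgency(req) >= DISPATCH_THRESHOLD:
        print(f"queue drone with {req.need} for ({req.lat}, {req.lon}), score {urgency(req):.2f}")
```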

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even adding a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns at the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHi) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHi’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHi’s Cannon Bot will not only save numerous lives but also help avoid population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are racing to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.

And as shown during Houston’s Hurricane Harvey, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, a Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. These flights were critical not only for monitoring levee infrastructure but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, the company’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens up no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com

Posted in Human Robots

#434658 The Next Data-Driven Healthtech ...

Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence. And, as we saw in last week’s blog, healthcare AI systems are extremely data-hungry.

Fortunately, a slew of new sensors and data acquisition methods—including over 122 million wearables shipped in 2018—are bursting onto the scene to meet the massive demand for medical data.

From ubiquitous biosensors, to the mobile healthcare revolution, to the transformative power of the Health Nucleus, converging exponential technologies are fundamentally transforming our approach to healthcare.

In Part 4 of this blog series on Longevity & Vitality, I expand on how we’re acquiring the data to fuel today’s AI healthcare revolution.

In this blog, I’ll explore:

How the Health Nucleus is transforming “sick care” to healthcare
Sensors, wearables, and nanobots
The advent of mobile health

Let’s dive in.

Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care. Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.

Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it. At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.

What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?

Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts. Such a system becomes personalized, predictive, and possibly preventative.

This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI). While not continuous—that will come later, with the next generation of wearable and implantable sensors—the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.

The Health Nucleus visit provides you with the following tests during a half-day visit:

Whole genome sequencing (30x coverage)
Whole body (non-contrast) MRI
Brain magnetic resonance imaging/angiography (MRI/MRA)
CT (computed tomography) of the heart and lungs
Coronary artery calcium scoring
Electrocardiogram
Echocardiogram
Continuous cardiac monitoring
Clinical laboratory tests and metabolomics

In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus. The results were eye-opening—especially since these patients were all financially well-off, and already had access to the best doctors.

Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.

Physiological Findings

2 percent had previously unknown tumors detected by MRI
2.5 percent had previously undetected aneurysms detected by MRI
8 percent had a previously unknown cardiac arrhythmia found on cardiac rhythm monitoring
9 percent had previously unknown moderate-to-severe coronary artery disease risk
16 percent had previously unknown cardiac structure/function abnormalities
30 percent had previously unknown elevated liver fat

Genomic Findings

24 percent of clients had a rare (previously unknown) genetic mutation uncovered by whole genome sequencing (WGS)
63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding

In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.

Findings of long-term value were identified in 40 percent of the clients screened. These long-term clinical findings include discoveries that require medical attention or monitoring but are not immediately life-threatening.

The bottom line: most people truly don’t know their actual state of health. The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at stage zero or stage one, when it is most curable.

Sensors, Wearables, and Nanobots
Wearables, connected devices, and quantified self apps will allow us to continuously collect enormous amounts of useful health information.

Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture, and stress levels anywhere on the planet.

In April 2017, we were proud to grant $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.

Using a group of noninvasive sensors that collect data on vital signs, body chemistry, and biological functions, Final Frontier integrates this data in their powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.

Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.

Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.

In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-cleared mobile, on-the-fly ECG. With that first regulatory clearance in hand, Apple appears to be moving deeper into the sensing healthcare market.

Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.

Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite. This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.

Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking, and battery capability that we can now cheaply and compactly integrate.

Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.

The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:

Two infrared LEDs
One infrared sensor
Three temperature sensors
One accelerometer
A six-axis gyro
A curved battery with a seven-day life
The memory, processing, and transmission capability required to connect with your smartphone

Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a package smaller than most bandages, powered by a smartphone. Dramatically disrupting ultrasound is just the beginning.

Nanobots and Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate, and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.

Some of the most exciting breakthroughs in smart nanotechnology from the past year include:

Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics, or power transmission.

Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.

MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out pre-programmed computational tasks.

Engineers at the University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.

But it doesn’t stop there.

As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.

Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.

Step aside, WebMD.

In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.

As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission, and remote diagnosis by medical professionals.

Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.

Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, using cognitive behavioral therapy in communications over Facebook Messenger with patients suffering from depression.

In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.

Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips—all you need to do is take a few photos.
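
Healthy.io hasn’t published its algorithm in this post, but the basic colorimetric idea is straightforward: sample the color of each reagent pad in the photo and match it against a reference chart. The sketch below uses Pillow with made-up pad coordinates and reference values, and substitutes a synthetic image for the photo.

```python
# Toy colorimetric strip reading (illustrative, not Healthy.io's method).
from PIL import Image

REFERENCE = {  # pad color -> reading (made-up reference chart)
    (250, 240, 150): "negative",
    (200, 160, 60): "trace",
    (120, 60, 30): "high",
}

def average_color(img, box):
    """Average RGB over a crop box (left, upper, right, lower)."""
    pixels = list(img.crop(box).convert("RGB").getdata())
    return tuple(sum(channel) // len(pixels) for channel in zip(*pixels))

def read_pad(img, box):
    """Match the pad's average color to the nearest reference entry."""
    color = average_color(img, box)
    nearest = min(REFERENCE, key=lambda ref: sum((a - b) ** 2 for a, b in zip(ref, color)))
    return REFERENCE[nearest]

# Stand-in for a real photo (e.g. Image.open on the user's snapshot): a synthetic
# pad colored close to the "trace" reference entry.
strip_photo = Image.new("RGB", (200, 100), (205, 158, 62))
print("protein pad:", read_pad(strip_photo, (120, 40, 160, 80)))  # -> trace
```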

With mHealth platforms like ClickMedix, which connects remotely-located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?

Welcome to the age of smartphone-as-a-medical-device.

Conclusion
With these DIY data collection and diagnostic tools, we save on transportation costs (time and money) and avoid time bottlenecks.

No longer will you need to wait for your urine or blood results to go through the current information chain: samples will be sent to the lab, analyzed by a technician, results interpreted by your doctor, and only then relayed to you.

Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem. Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own.

The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Titima Ongkantong / Shutterstock.com

Posted in Human Robots