
#435098 Coming of Age in the Age of AI: The ...

The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.

Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.

“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”

Technology may well impart Marvel-superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad umbrella of artificial intelligence will shape this as-yet-unformed generation more definitively than any before it.

What will it be like to come of age in the Age of AI?

The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications in PubMed’s database of biomedical literature.

AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.

A study published earlier this month in npj Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos, taken five days after fertilization, to train an AI algorithm to tell which in vitro fertilized embryos had the best chance of a successful pregnancy based on their quality.

Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.
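The study’s actual model is a deep neural network trained on raw images, and its code isn’t reproduced here. As a rough sketch of the underlying technique (train a classifier on graded examples, then score a held-out set), here is a minimal logistic-regression version in which synthetic feature vectors stand in for embryo images; all data below is fabricated for illustration.

```python
import numpy as np

# Sketch of the general technique behind a system like Stork:
# fit a classifier on labeled examples, then measure accuracy on
# images the model has never seen. Synthetic features stand in
# for real embryo photos here.
rng = np.random.default_rng(0)

n, d = 1000, 20
X = rng.normal(size=(n, d))          # stand-in "image features"
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)   # 1 = "good quality", 0 = "poor"

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# Logistic regression trained by plain gradient descent
w = np.zeros(d)
for _ in range(500):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / n

# Evaluate on a fresh set of "images", as the study did
X_test = rng.normal(size=(200, d))
y_test = (X_test @ w_true > 0).astype(float)
pred = (sigmoid(X_test @ w) > 0.5).astype(float)
accuracy = (pred == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The real system swaps the linear model for a convolutional network and the synthetic features for pixels, but the train-then-validate loop is the same.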

“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”

Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.

Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract a newborn’s movements, turning them into a simplified “stick figure” that medical experts could use to more easily detect clinically relevant data.
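The paper’s own pipeline isn’t shown here, but the idea is easy to illustrate: a pose estimator outputs joint coordinates per video frame, and reducing those to a few connected segments gives the “stick figure” plus simple movement summaries. The joint names, skeleton, and sample data below are hypothetical.

```python
import math

# Illustrative stick-figure reduction: per-frame joint positions
# (as a pose estimator might output) become segment lengths and
# per-joint movement totals. All names and data are made up.
SKELETON = [("head", "torso"), ("torso", "l_hand"),
            ("torso", "r_hand"), ("torso", "l_foot"), ("torso", "r_foot")]

def segment_lengths(frame):
    """One length per stick-figure segment for a single frame."""
    return {(a, b): math.dist(frame[a], frame[b]) for a, b in SKELETON}

def movement(frames, joint):
    """Total distance a joint travels across consecutive frames."""
    return sum(math.dist(f1[joint], f2[joint])
               for f1, f2 in zip(frames, frames[1:]))

frames = [
    {"head": (0, 2), "torso": (0, 1), "l_hand": (-1, 1),
     "r_hand": (1, 1), "l_foot": (-0.5, 0), "r_foot": (0.5, 0)},
    {"head": (0, 2), "torso": (0, 1), "l_hand": (-1, 1.5),
     "r_hand": (1, 1), "l_foot": (-0.5, 0), "r_foot": (0.5, 0)},
]
print(segment_lengths(frames[0])[("head", "torso")])  # 1.0
print(movement(frames, "l_hand"))                     # 0.5
```

Clinicians then look at such summaries over many frames, which is far easier to read than raw video.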

The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.

AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.

China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.

To reach dominance by its stated goal of 2030, China is also incorporating AI education into school curricula. Last year, the country published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program involving SenseTime, one of the country’s biggest AI companies.

In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.

Sandra Chang-Kredl, associate professor in the Department of Education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning, creativity, or emotional connectedness.

Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.

Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
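Cognimates itself teaches these ideas through a visual, block-based interface, so the code below is not its implementation; it is a hypothetical sketch of the kind of learning behind a Rock Paper Scissors model that “gets better over time”: count the opponent’s past moves and play the counter to their favorite.

```python
import random
from collections import Counter

# A toy learning opponent: track the human's move frequencies and
# counter the most common one. The longer it plays, the better it
# exploits a predictable player.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSLearner:
    def __init__(self):
        self.seen = Counter()

    def move(self):
        if not self.seen:
            return random.choice(list(BEATS))
        # Counter the opponent's historically most frequent move
        likely = self.seen.most_common(1)[0][0]
        return BEATS[likely]

    def observe(self, opponent_move):
        self.seen[opponent_move] += 1

bot = RPSLearner()
# Against a player who overwhelmingly favors rock, the bot adapts:
for _ in range(10):
    bot.observe("rock")
print(bot.move())  # "paper"
```

A child experimenting with a model like this sees firsthand that “intelligence” here is just statistics over examples, which is exactly the kind of informed, critical understanding Druga describes.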

“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.

Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.

“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.

AI Goes Back to School
It turns out that AI also has much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.

For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.

In an interview with Vox, she described how, while DeepMind’s AlphaZero was trained to be a chess master, it struggles with even the simplest changes in the rules, such as allowing the bishop to move horizontally instead of diagonally.

“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”

Last year, the federal defense agency DARPA announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”

Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.

According to an article on the project in MIT Technology Review, the research leverages cognitive science to understand human intelligence, exploring, for example, how young children visualize the world using their own innate 3D models.

“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and heads the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”

In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.

Image Credit: phoelixDE / Shutterstock.com

Posted in Human Robots

#434772 Traditional Higher Education Is Losing ...

Should you go to graduate school? If so, why? If not, what are your alternatives? Millions of young adults across the globe—and their parents and mentors—find themselves asking these questions every year.

Earlier this month, I explored how exponential technologies are rising to meet the needs of the rapidly changing workforce.

In this blog, I’ll dive into a highly effective way to build the business acumen and skills needed to make the most significant impact in these exponential times.

To start, let’s dive into the value of graduate school versus apprenticeship—especially during this time of extraordinarily rapid growth, and the micro-diversification of careers.

The True Value of an MBA
All graduate schools are not created equal.

For complex technical trades like medicine, engineering, and law, formal graduate-level training provides a critical foundation for safe, ethical practice (until these trades are fully augmented by artificial intelligence and automation…).

For the purposes of today’s blog, let’s focus on the value of a Master in Business Administration (MBA) degree, compared to acquiring your business acumen through various forms of apprenticeship.

The Waning of Business Degrees
Ironically, business schools are facing a tough business problem. The rapid rate of technological change, a booming job market, and the digitization of education are chipping away at the traditional graduate-level business program.

The data speaks for itself.

The Decline of Graduate School Admissions
Enrollment in two-year, full-time MBA programs in the US fell by more than one-third from 2010 to 2016.

While top business schools (e.g. Stanford, Harvard, and Wharton) had previously been insulated from the decrease in applications, this year they too felt the waning interest in MBA programs.

Harvard Business School: 4.5 percent decrease in applications, the school’s biggest drop since 2005.
Wharton: 6.7 percent decrease in applications.
Stanford Graduate School: 4.6 percent decrease in applications.

Another signal of change began unfolding over the past week. You may have read news headlines about an emerging college admissions scam, which implicates highly selective US universities, sports coaches, parents, and students in a conspiracy to game the undergraduate admissions process.

Already, students are filing multibillion-dollar civil lawsuits arguing that the scheme has devalued their degrees or denied them a fair admissions opportunity.

MBA Graduates in the Workforce
To meet today’s business needs, startups and massive companies alike are increasingly hiring technologists, developers, and engineers in place of the MBA graduates they may have preferentially hired in the past.

While 85 percent of US employers expect to hire MBA graduates this year (a decrease from 91 percent in 2017), 52 percent of employers worldwide expect to hire graduates with a master’s in data analytics (an increase from 35 percent last year).

We’re also seeing the waning of MBA degree holders at the CEO level.

For decades, an MBA was the hallmark of upward mobility towards the C-suite of top companies.

But as exponential technologies permeate not only products but every part of the supply chain—from manufacturing and shipping to sales, marketing and customer service—that trend is changing by necessity.

Looking at the Harvard Business Review’s Top 100 CEOs in 2018 list, more CEOs on the list held engineering degrees than MBAs (34 held engineering degrees, while 32 held MBAs).

There’s much more to leading innovative companies than an advanced business degree.

How Are Schools Responding?
With disruption to the advanced business education system already here, some business schools are applying notes from their own innovation classes to brace for change.

Over the past half-decade, we’ve seen schools with smaller MBA programs shut their doors in favor of advanced degrees with more specialization. This directly responds to market demand for skills in data science, supply chain, and manufacturing.

Some degrees resemble the precise skills training of technical trades. Others are very much in line with the apprenticeship models we’ll explore next.

Regardless, this new specialization strategy is working and attracting more new students. Over the past decade (2006 to 2016), enrollment in specialized graduate business programs doubled.

Higher education is also seeing a preference shift toward for-profit trade schools, like coding boot camps. This shift is one of several forces pushing universities to adopt skill-specific advanced degrees.

But some schools are slow to adapt, raising the question: how and when will these legacy programs be disrupted? A survey of over 170 business school deans around the world showed that many programs are operating at a loss.

But if these schools are world-class business institutions, as advertised, why do they keep the doors open even while they lose money? The surveyed deans revealed an important insight: they keep the degree program open because of the program’s prestige.

Why Go to Business School?
Shorthand Credibility, Cognitive Biases, and Prestige
Regardless of what knowledge a person takes away from graduate school, attending one of the world’s most rigorous and elite programs gives grads external validation.

With over 55 percent of MBA applicants applying to just 6 percent of graduate business schools, we have a clear cognitive bias toward the perceived elite status of certain universities.

To the outside world, thanks to the power of cognitive biases, an advanced degree is credibility shorthand for your capabilities.

Simply passing through a top school’s filtration system signals that you have some baseline of ability and merit.

And startup success statistics tend to back up that perceived enhanced capability. Let’s take, for example, universities with the most startup unicorn founders (see the figure below).

When you consider the 320+ unicorn startups around the world today, these numbers become even more impressive. Stanford’s 18 unicorn companies account for over 5 percent of global unicorns, and Harvard is responsible for producing just under 5 percent.

Combined, just these two universities (out of over 5,000 in the US, and thousands more around the world) account for 1 in 10 of the billion-dollar private companies in the world.
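The shares above are easy to sanity-check. Stanford’s count (18) and the global total (320) come from the text; Harvard’s exact count is not stated, so the value of 15 below is an illustrative assumption consistent with “just under 5 percent.”

```python
# Sanity check of the unicorn-share arithmetic. Only Stanford's
# count and the global total are from the text; Harvard's count
# is an assumption for illustration.
total_unicorns = 320
stanford = 18
harvard = 15  # assumed, not stated in the text

print(f"Stanford: {stanford / total_unicorns:.1%}")              # over 5 percent
print(f"Harvard:  {harvard / total_unicorns:.1%}")               # just under 5 percent
print(f"Combined: {(stanford + harvard) / total_unicorns:.1%}")  # roughly 1 in 10
```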

By the numbers, the prestigious reputation of these elite business programs has a firm basis in current innovation success.

While prestige may be inherent to the degree earned by graduates from these business programs, the credibility boost from holding one of these degrees is not a guaranteed path to success in the business world.

For example, you might expect that the Harvard School of Business or Stanford Graduate School of Business would come out on top when tallying up the alma maters of Fortune 500 CEOs.

It turns out that the University of Wisconsin-Madison leads the business school pack with 14 CEOs to Harvard’s 12. Beyond prestige, the success of these elite business programs translates directly into unmatched networks and relationships.

Graduate schools—particularly at the upper echelon—are excellent at attracting sharp students.

At an elite business school, if you meet just five to ten people with extraordinary skill sets, personalities, ideas, or networks, then you have returned your $200,000 education investment.

It’s no coincidence that some 40 percent of Silicon Valley venture capitalists are alumni of either Harvard or Stanford.

From future investors to advisors, friends, and potential business partners, relationships are critical to an entrepreneur’s success.

As we saw above, graduate business degree programs are melting away in the current wave of exponential change.

With US student debt now exceeding $1.5 trillion, there must be a more impactful alternative to attending graduate school for those starting their careers.

When I think about the most important skills I use today as an entrepreneur, writer, and strategic thinker, they didn’t come from my decade of graduate school at Harvard or MIT… they came from my experiences building real technologies and companies, and working with mentors.

Apprenticeship comes in a variety of forms; here, I’ll cover three top-of-mind approaches:

Real-world business acumen via startup accelerators
A direct apprenticeship model
The 6 D’s of mentorship

Startup Accelerators and Business Practicum
Let’s contrast the shrinking interest in MBA programs with applications to a relatively new model of business education: startup accelerators.

Startup accelerators are short-term (typically three to six months), cohort-based programs focusing on providing startup founders with the resources (capital, mentorship, relationships, and education) needed to refine their entrepreneurial acumen.

While graduate business programs have been condensing, startup accelerators are alive, well, and expanding rapidly.

In the 10 years from 2005 (when Paul Graham founded Y Combinator) through 2015, the number of startup accelerators in the US increased more than tenfold.

The increase in startup accelerator activity hints at a larger trend: our best and brightest business minds are opting to invest their time and efforts in obtaining hands-on experience, creating tangible value for themselves and others, rather than diving into the theory often taught in business school classrooms.

The “Strike Force” Model
The Strike Force is my elite team of young entrepreneurs who work directly with me across all of my companies, travel by my side, sit in on every meeting with me, and help build businesses that change the world.

Previous Strike Force members have gone on to launch successful companies, including Bold Capital Partners, my $250 million venture capital firm.

Strike Force is an apprenticeship for the next generation of exponential entrepreneurs.

To paraphrase my good friend Tony Robbins: If you want to short-circuit the video game, find someone who’s been there and done that and is now doing something you want to one day do.

Every year, over 500,000 apprentices in the US follow this precise template. These apprentices are learning a craft they wish to master, under the mentorship of experts (skilled metal workers, bricklayers, medical technicians, electricians, and more) who have already achieved the desired result.

What if we more readily applied this model to young adults with aspirations of creating massive value through the vehicles of entrepreneurship and innovation?

For the established entrepreneur: How can you bring young entrepreneurs into your organization to create more value for your company, while also passing on your ethos and lessons learned to the next generation?

For the young, driven millennial: How can you find your mentor and convince him or her to take you on as an apprentice? What value can you create for this person in exchange for their guidance and investment in your professional development?

The 6 D’s of Mentorship
In my last blog on education, I shared how mobile device and internet penetration will transform adult literacy and basic education. Mobile phones and connectivity already create extraordinary value for entrepreneurs and young professionals looking to take their business acumen and skill set to the next level.

For all of human history up until the last decade or so, if you wanted to learn from the best and brightest in business, leadership, or strategy, you either needed to search for a dated book that they wrote at the local library or bookstore, or you had to be lucky enough to meet that person for a live conversation.

Now you can access the mentorship of just about any thought leader on the planet, at any time, for free.

Thanks to the power of the internet, mentorship has been digitized, demonetized, dematerialized, and democratized.

What do you want to learn about?

Investing? Leadership? Technology? Marketing? Project management?

You can access a near-infinite stream of cutting-edge tools, tactics, and lessons from thousands of top performers from nearly every field—instantaneously, and for free.

For example, every one of Warren Buffett’s letters to his Berkshire Hathaway investors over the past 40 years is available for free on a device that fits in your pocket.

The rise of audio—particularly podcasts and audiobooks—is another underestimated driving force away from traditional graduate business programs and toward apprenticeships.

Over 28 million podcast episodes are available for free. Once you identify the strong signals in the noise, you’re still left with thousands of hours of long-form podcast conversation from which to learn valuable lessons.

Whenever and wherever you want, you can learn from the world’s best. In the future, mentorship and apprenticeship will only become more personalized. Imagine accessing a high-fidelity, AI-powered avatar of Bill Gates, Richard Branson, or Arthur C. Clarke (one of my early mentors) to help guide you through your career.

Virtual mentorship and coaching are powerful education forces that are here to stay.

Bringing It All Together
The education system is rapidly changing. Traditional master’s programs for business are ebbing away in the tides of exponential technologies. Apprenticeship models are reemerging as an effective way to train tomorrow’s leaders.

In a future blog, I’ll revisit the concept of apprenticeships and other effective business school alternatives.

If you are a young, ambitious entrepreneur (or the parent of one), remember that you live in the most abundant time ever in human history to refine your craft.

Right now, you have access to world-class mentorship and cutting-edge best-practices—literally in the palm of your hand. What will you do with this extraordinary power?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: fongbeerredhot / Shutterstock.com


#434658 The Next Data-Driven Healthtech ...

Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence. And, as we saw in last week’s blog, healthcare AI systems are extremely data-hungry.

Fortunately, a slew of new sensors and data acquisition methods—including over 122 million wearables shipped in 2018—are bursting onto the scene to meet the massive demand for medical data.

From ubiquitous biosensors, to the mobile healthcare revolution, to the transformative power of the Health Nucleus, converging exponential technologies are fundamentally transforming our approach to healthcare.

In Part 4 of this blog series on Longevity & Vitality, I expand on how we’re acquiring the data to fuel today’s AI healthcare revolution.

In this blog, I’ll explore:

How the Health Nucleus is transforming “sick care” to healthcare
Sensors, wearables, and nanobots
The advent of mobile health

Let’s dive in.

Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care. Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.

Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it. At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.

What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?

Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts. Such a system becomes personalized, predictive, and possibly preventative.

This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI). While not continuous—that will come later, with the next generation of wearable and implantable sensors—the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.

The Health Nucleus visit provides you with the following tests during a half-day visit:

Whole genome sequencing (30x coverage)
Whole body (non-contrast) MRI
Brain magnetic resonance imaging/angiography (MRI/MRA)
CT (computed tomography) of the heart and lungs
Coronary artery calcium scoring
Continuous cardiac monitoring
Clinical laboratory tests and metabolomics

In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus. The results were eye-opening—especially since these patients were all financially well-off, and already had access to the best doctors.

Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.

Physiological Findings

2 percent had previously unknown tumors detected by MRI
2.5 percent had previously unknown aneurysms detected by MRI
8 percent had previously unknown cardiac arrhythmias found on cardiac rhythm monitoring
9 percent had previously unknown moderate-to-severe coronary artery disease risk
16 percent had previously unknown cardiac structure/function abnormalities
30 percent had previously unknown elevated liver fat

Genomic Findings

24 percent of clients were found to carry a rare (previously unknown) genetic mutation on whole genome sequencing
63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding

In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.

Long-term-value findings appeared in 40 percent of the clients screened; these include discoveries that require medical attention or monitoring but are not immediately life-threatening.

The bottom line: most people truly don’t know their actual state of health. The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at stage zero or stage one, when it is most curable.

Sensors, Wearables, and Nanobots
Wearables, connected devices, and quantified self apps will allow us to continuously collect enormous amounts of useful health information.

Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture, and stress levels anywhere on the planet.

In April 2017, we were proud to grant $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.

Using a group of noninvasive sensors that collect data on vital signs, body chemistry, and biological functions, Final Frontier integrates this data in their powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.

Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.

Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.

In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-cleared mobile, on-the-fly ECG. With this, its first FDA clearance, Apple appears to be moving deeper into the healthcare sensing market.

Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.

Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite. This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.

Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking, and battery capability that we can now cheaply and compactly integrate.

Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.

The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:

Two infrared LEDs
One infrared sensor
Three temperature sensors
One accelerometer
A six-axis gyro
A curved battery with a seven-day life
The memory, processing, and transmission capability required to connect with your smartphone

Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a packaging smaller than most bandages, powered by a smartphone. Dramatically disrupting ultrasound is just the beginning.

Nanobots and Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate, and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.

Some of the most exciting breakthroughs in smart nanotechnology from the past year include:

Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics, or power transmission.

Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.

MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out pre-programmed computational tasks.

Engineers at University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.

But it doesn’t stop there.

As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.

Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.

Step aside, WebMD.

In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.

As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission, and remote diagnosis by medical professionals.

Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.

Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, using cognitive behavioral therapy in conversations over Facebook Messenger with patients suffering from depression.

In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.

Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips—all you need to do is take a few photos.

With mHealth platforms like ClickMedix, which connects remotely-located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?

Welcome to the age of smartphone-as-a-medical-device.

With these DIY data collection and diagnostic tools, we save on transportation costs (both time and money) and eliminate time bottlenecks.

No longer will your urine or blood results need to pass through the current information chain: sample sent to the lab, analyzed by a technician, interpreted by your doctor, and only then relayed to you.

Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem. Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own.

The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Titima Ongkantong / Shutterstock.com

Posted in Human Robots

#434648 The Pediatric AI That Outperformed ...

Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.

Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th in Nature Medicine, has demonstrated a natural-language processing AI that is capable of outperforming rookie pediatricians in diagnosing common childhood ailments.

The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.

The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.

Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.

Like traditionally-trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
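The two-stage logic can be caricatured in a few lines. The following is a hypothetical sketch for illustration only, not the study's actual model, which learned its associations statistically from millions of EHR data points; the keyword tables and diagnosis labels below are invented:

```python
# Hypothetical sketch of hierarchical diagnosis: classify into a major
# organ group first, then refine to a diagnosis within that group.
# All keywords and labels are invented for illustration.

def classify(symptoms):
    """Two-stage diagnosis: organ group first, then a finer label."""
    ORGAN_KEYWORDS = {
        "respiratory": {"cough", "wheezing", "sore throat"},
        "gastrointestinal": {"vomiting", "diarrhea", "abdominal pain"},
    }
    DIAGNOSES = {
        "respiratory": {
            "sore throat": "upper respiratory infection",
            "wheezing": "lower respiratory infection",
        },
        "gastrointestinal": {
            "diarrhea": "acute gastroenteritis",
        },
    }
    symptoms = set(symptoms)
    # Stage 1: pick the organ group with the most symptom overlap.
    group = max(ORGAN_KEYWORDS, key=lambda g: len(symptoms & ORGAN_KEYWORDS[g]))
    # Stage 2: refine within that group.
    for symptom, dx in DIAGNOSES[group].items():
        if symptom in symptoms:
            return group, dx
    return group, "unspecified"

print(classify({"cough", "sore throat"}))  # → ('respiratory', 'upper respiratory infection')
```

The real system replaces these hand-written tables with associations learned from data, but the deductive shape (organ group, then subcategory) is the same.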

Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.

When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against a level playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.

Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.

That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.

Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.

Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.

In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.

Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.

In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.

Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.

Image Credit: Nadia Snopek / Shutterstock.com

Posted in Human Robots

#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training the robot’s neural network, it can associate the feedback from the sensors with the position of the finger detected in the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and translate them into the language of these soft sensors.
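In outline, the training step is supervised regression: sensor readings in, motion-capture position out. The sketch below uses synthetic numbers invented for illustration and ordinary least squares as a minimal stand-in for the paper's neural network; nothing here reflects the actual sensor data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: 200 frames of readings from 12
# randomly placed soft sensors, with the "ground truth" fingertip position
# for each frame supplied by a motion-capture system.
true_map = rng.normal(size=(12, 3))   # unknown sensor-to-position relation
sensors = rng.normal(size=(200, 12))  # sensor readings per frame
positions = sensors @ true_map + 0.01 * rng.normal(size=(200, 3))  # mocap labels

# "Training": fit a model mapping sensors to position. The paper trains a
# neural network; least squares is the simplest possible stand-in.
model, *_ = np.linalg.lstsq(sensors, positions, rcond=None)

# After training, the motion-capture rig can be discarded: position is
# now estimated from the soft sensors alone.
estimate = sensors @ model
print(float(np.abs(estimate - positions).max()))  # small residual error
```

The key point survives the simplification: the expensive external measurement system is only needed during training, after which the robot's own sensors suffice.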

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then executes a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it's ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
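The explore-then-exploit loop can be sketched in miniature. This is a toy illustration under invented assumptions: a two-link arm stands in for the real robot, and a nearest-neighbor lookup over recorded experience stands in for the paper's deep self-model:

```python
import math
import random

random.seed(0)

# Toy two-link arm: the "real" physics, which the robot does not know.
def arm(q1, q2, l1=1.0, l2=0.8):
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Self-modeling phase: a thousand random trajectories, recording where
# each random joint command actually put the hand.
experience = []
for _ in range(1000):
    q = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    experience.append((q, arm(*q)))

# Task phase: use the self-model (here, nearest-neighbor lookup standing in
# for a deep network) to reach any target with no task-specific training.
def reach(target):
    return min(experience, key=lambda e: math.dist(e[1], target))[0]

q = reach((1.2, 0.5))
print(arm(*q))  # lands near the target (1.2, 0.5)
```

Because the self-model captures the arm's behavior in general rather than any one task, the same thousand random trajectories serve pick-and-place, handwriting, or any other goal expressible as a target position.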

Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But before they can get out and shape the world, as these studies show, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com

Posted in Human Robots