Tag Archives: sense

#435436 Undeclared Wars in Cyberspace Are ...

The US is at war. That’s probably not exactly news, as the country has been engaged in one type of conflict or another for most of its history. The last time we officially declared war was after Japan bombed Pearl Harbor in December 1941.

Our biggest undeclared war today is not being fought by drones in the mountains of Afghanistan or even through the less-lethal barrage of threats over the nuclear programs in North Korea and Iran. In this particular war, it is the US that is under attack and on the defensive.

This is cyberwarfare.

The definition of what constitutes a cyber attack is a broad one, according to Greg White, executive director of the Center for Infrastructure Assurance and Security (CIAS) at The University of Texas at San Antonio (UTSA).

At the level of nation-state attacks, cyberwarfare could involve “attacking systems during peacetime—such as our power grid or election systems—or it could be during war time in which case the attacks may be designed to cause destruction, damage, deception, or death,” he told Singularity Hub.

For the US, the Pearl Harbor of cyberwarfare occurred during 2016 with the Russian interference in the presidential election. However, according to White, an Air Force veteran who has been involved in computer and network security since 1986, the history of cyber war can be traced back much further, to at least the first Gulf War of the early 1990s.

“We started experimenting with cyber attacks during the first Gulf War, so this has been going on a long time,” he said. “Espionage was the prime reason before that. After the war, the possibility of expanding the types of targets utilized expanded somewhat. What is really interesting is the use of social media and things like websites for [psychological operation] purposes during a conflict.”

The 2008 conflict between Russia and the Republic of Georgia is often cited as a cyberwarfare case study due to the large scale and overt nature of the cyber attacks. Russian hackers managed to bring down more than 50 news, government, and financial websites through denial-of-service attacks. In addition, about 35 percent of Georgia’s internet networks suffered decreased functionality during the attacks, coinciding with the Russian invasion of South Ossetia.

The cyberwar also offers lessons for today on Russia’s approach to cyberspace as a tool for “holistic psychological manipulation and information warfare,” according to a 2018 report called Understanding Cyberwarfare from the Modern War Institute at West Point.

US Fights Back
News in recent years has highlighted how Russian hackers have attacked various US government entities and critical infrastructure such as energy and manufacturing. In particular, a shadowy group known as Unit 26165 within the country’s military intelligence directorate is believed to be behind the 2016 US election interference campaign.

However, the US hasn’t been standing idly by. Since at least 2012, the US has put reconnaissance probes into the control systems of the Russian electric grid, The New York Times reported. More recently, we learned that the US military has gone on the offensive, putting “crippling malware” inside the Russian power grid as the U.S. Cyber Command flexes its online muscles thanks to new authority granted to it last year.

“Access to the power grid that is obtained now could be used to shut something important down in the future when we are in a war,” White noted. “Espionage is part of the whole program. It is important to remember that cyber has just provided a new domain in which to conduct the types of activities we have been doing in the real world for years.”

The US is also beginning to pour more money into cybersecurity. The 2020 fiscal budget calls for spending $17.4 billion throughout the government on cyber-related activities, with the Department of Defense (DoD) alone earmarked for $9.6 billion.

Despite the growing emphasis on cybersecurity in the US and around the world, the demand for skilled security professionals is well outpacing the supply, with a projected shortfall of nearly three million open or unfilled positions according to the non-profit IT security organization (ISC)².

UTSA is rare among US educational institutions in that security courses and research are being conducted across three different colleges, according to White. About 10 percent of the school’s 30,000-plus students are enrolled in a cyber-related program, he added, and UTSA is one of only 21 schools that have received the Cyber Operations Center of Excellence designation from the National Security Agency.

“This track in the computer science program is specifically designed to prepare students for the type of jobs they might be involved in if they went to work for the DoD,” White said.

However, White is extremely doubtful there will ever be enough cybersecurity professionals to meet demand. “I’ve been preaching that we’ve got to worry about cybersecurity in the workforce, not just the cybersecurity workforce, not just cybersecurity professionals. Everybody has a responsibility for cybersecurity.”

Artificial Intelligence in Cybersecurity
Indeed, humans are often seen as the weak link in cybersecurity. That point was driven home at a cybersecurity roundtable discussion during this year’s Brainstorm Tech conference in Aspen, Colorado.

Participant Dorian Daley, general counsel at Oracle, said insider threats are at the top of the list when it comes to cybersecurity. “Sadly, I think some of the biggest challenges are people, and I mean that in a number of ways. A lot of the breaches really come from insiders. So the more that you can automate things and you can eliminate human malicious conduct, the better.”

White noted that automation is already the norm in cybersecurity. “Humans can’t react as fast as systems can launch attacks, so we need to rely on automated defenses as well,” he said. “This doesn’t mean that humans are not in the loop, but much of what is done these days is ‘scripted’.”

The use of artificial intelligence, machine learning, and other advanced automation techniques has been part of the cybersecurity conversation for quite some time, according to White, such as pattern analysis to look for specific behaviors that might indicate an attack is underway.

“What we are seeing quite a bit of today falls under the heading of big data and data analytics,” he explained.
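
To make that concrete, here is a minimal sketch of the kind of behavioral pattern analysis White describes: flag a host whose activity suddenly deviates from its own recent baseline. The event counts, window, and threshold are invented for illustration and do not reflect any particular defensive product.

```python
# Illustrative sketch of behavioral pattern analysis: flag a host when its current
# activity deviates sharply from its own recent baseline.
# All numbers (event counts, window, threshold) are made up for this example.
from statistics import mean, stdev

def is_anomalous(recent_counts, current_count, threshold=3.0):
    """Return True if current_count is a statistical outlier versus the recent window."""
    baseline = mean(recent_counts)
    spread = stdev(recent_counts) or 1.0  # avoid dividing by zero on a flat baseline
    z_score = (current_count - baseline) / spread
    return z_score > threshold

# Hypothetical failed-login counts per hour for one host over the past half day.
history = [2, 3, 1, 0, 2, 4, 3, 2, 1, 3, 2, 2]
print(is_anomalous(history, current_count=45))  # True: looks like a brute-force attempt
```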

But there are signs that AI is going off-script when it comes to cyber attacks. In the hands of threat groups, AI applications could lead to an increase in the number of cyberattacks, wrote Michelle Cantos, a strategic intelligence analyst at cybersecurity firm FireEye.

“Current AI technology used by businesses to analyze consumer behavior and find new customer bases can be appropriated to help attackers find better targets,” she said. “Adversaries can use AI to analyze datasets and generate recommendations for high-value targets they think the adversary should hit.”

In fact, security researchers have already demonstrated how a machine learning system could be used for malicious purposes. The Social Network Automated Phishing with Reconnaissance system, or SNAP_R, generated more than four times as many spear-phishing tweets on Twitter as a human—and was just as successful at targeting victims in order to steal sensitive information.

Cyber war is upon us. And like the current war on terrorism, there are many battlefields from which the enemy can attack and then disappear. While total victory is highly unlikely in the traditional sense, innovations through AI and other technologies can help keep the lights on against the next cyber attack.

Image Credit: pinkeyes / Shutterstock.com


#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink—among other players—is actively pursuing these bi-directional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel at one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects and between soft and hard ones while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves, the flexible e-skin also transmits electrical data roughly 1,000 times faster than human nerves.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients hoping to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—but its wiring also “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms were often paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; while in full contact, the signal is much lower. The resulting electrical strength curve resembles a “U.”
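
The gap between the two schemes is easy to sketch. The toy code below uses arbitrary units and gains purely for illustration; the actual stimulation model in the Utah study is more sophisticated than this.

```python
# Cartoon comparison of "standard" proportional feedback vs. biomimetic feedback.
# Units, gains, and the transient boost are arbitrary assumptions for illustration.

def standard_feedback(force):
    """Stimulation intensity simply proportional to grip force."""
    return 1.0 * force

def biomimetic_feedback(force, previous_force):
    """Strong bursts at contact onset and offset, weaker signal during steady grip,
    roughly tracing the U-shaped response described above."""
    transient = 5.0 * abs(force - previous_force)  # emphasize making/breaking contact
    sustained = 0.3 * force                        # damped signal while the grip is held
    return transient + sustained

# A grip that ramps up, holds, then releases (hypothetical force trace).
forces = [0.0, 0.8, 1.0, 1.0, 1.0, 0.2, 0.0]
previous = 0.0
for f in forces:
    print(f"force={f:.1f}  standard={standard_feedback(f):.2f}  "
          f"biomimetic={biomimetic_feedback(f, previous):.2f}")
    previous = f
```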

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicon, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
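
A toy version of that single-receiver design is sketched below; the pulse signatures and event format are invented for the example and are not the actual ACES encoding.

```python
# Toy model of the ACES idea: many sensors share one receiver, and every event
# carries a pulse signature that identifies which sensor produced it.
# The signatures and event format are invented for illustration only.
import random

NUM_SENSORS = 240  # roughly the number of sensors in the published prototype

# Give every sensor a unique "pulse signature" (here, just a labeled code word).
signatures = {f"sig-{i:03d}": i for i in range(NUM_SENSORS)}

def receiver(event_stream):
    """Single artificial 'ganglion': decode which sensor fired and what it measured."""
    decoded = []
    for signature, reading in event_stream:
        sensor_id = signatures.get(signature)
        if sensor_id is not None:  # unknown or damaged channels are simply ignored
            decoded.append((sensor_id, reading))
    return decoded

# Sensors fire asynchronously, in no particular order, onto the shared line.
events = [(f"sig-{random.randrange(NUM_SENSORS):03d}", round(random.random(), 2))
          for _ in range(5)]
print(receiver(events))
```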

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system still functions even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup could easily scale up. Although the team only tested the idea with 240 sensors, theoretically the system should work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.


#435174 Revolt on the Horizon? How Young People ...

As digital technologies facilitate the growth of both new and incumbent organizations, we have started to see the darker sides of the digital economy emerge. In recent years, many unethical business practices have been exposed, including the capture and use of consumers’ data, anticompetitive activities, and covert social experiments.

But what do young people who grew up with the internet think about this development? Our research with 400 digital natives—19- to 24-year-olds—shows that this generation, dubbed “GenTech,” may be the one to turn the digital revolution on its head. Our findings point to a frustration and disillusionment with the way organizations have accumulated real-time information about consumers without their knowledge and often without their explicit consent.

Many from GenTech now understand that their online lives are of commercial value to an array of organizations that use this insight for the targeting and personalization of products, services, and experiences.

This era of accumulation and commercialization of user data through real-time monitoring has been coined “surveillance capitalism” and signifies a new economic system.

Artificial Intelligence
A central pillar of the modern digital economy is our interaction with artificial intelligence (AI) and machine learning algorithms. We found that 47 percent of GenTech do not want AI technology to monitor their lifestyle, purchases, and financial situation in order to recommend them particular things to buy.

In fact, only 29 percent see this as a positive intervention. Instead, they wish to maintain a sense of autonomy in their decision making and have the opportunity to freely explore new products, services, and experiences.

As individuals living in the digital age, we constantly negotiate with technology to let go of or retain control. This pendulum-like effect reflects the ongoing battle between humans and technology.

My Life, My Data?
Our research also reveals that 54 percent of GenTech are very concerned about the access organizations have to their data, while only 19 percent were not worried. Despite the EU General Data Protection Regulation being introduced in May 2018, this is still a major concern, grounded in a belief that too much of their data is in the possession of a small group of global companies, including Google, Amazon, and Facebook. Some 70 percent felt this way.

In recent weeks, both Facebook and Google have vowed to make privacy a top priority in the way they interact with users. Both companies have faced public outcry for their lack of openness and transparency when it comes to how they collect and store user data. It wasn’t long ago that a hidden microphone was found in one of Google’s home alarm products.

Google now plans to offer auto-deletion of users’ location history data, browsing, and app activity as well as extend its “incognito mode” to Google Maps and search. This will enable users to turn off tracking.

At Facebook, CEO Mark Zuckerberg is keen to reposition the platform as a “privacy focused communications platform” built on principles such as private interactions, encryption, safety, interoperability (communications across Facebook-owned apps and platforms), and secure data storage. This will be a tough turnaround for the company that is fundamentally dependent on turning user data into opportunities for highly individualized advertising.

Privacy and transparency are critically important themes for organizations today, both for those that have “grown up” online as well as the incumbents. While GenTech want organizations to be more transparent and responsible, 64 percent also believe that they cannot do much to keep their data private. Being tracked and monitored online by organizations is seen as part and parcel of being a digital consumer.

Despite these views, there is a growing revolt simmering under the surface. GenTech want to take ownership of their own data. They see this as a valuable commodity, which they should be given the opportunity to trade with organizations. Some 50 percent would willingly share their data with companies if they got something in return, for example a financial incentive.

Rewiring the Power Shift
GenTech are looking to enter into a transactional relationship with organizations. This reflects a significant change in attitudes from perceiving the free access to digital platforms as the “product” in itself (in exchange for user data), to now wishing to use that data to trade for explicit benefits.

This has created an opportunity for companies that seek to empower consumers and give them back control of their data. Several companies now offer consumers the opportunity to sell the data they are comfortable sharing or take part in research that they get paid for. More and more companies are joining this space, including People.io, Killi, and Ocean Protocol.

Sir Tim Berners-Lee, the creator of the world wide web, has also been working on a way to shift the power from organizations and institutions back to citizens and consumers. The platform, Solid, offers users the opportunity to be in charge of where they store their data and who can access it. It is a form of re-decentralization.

The Solid POD (Personal Online Data storage) is a secure place on a hosted server or the individual’s own server. Users can grant apps access to their POD, so a person’s data is kept in one place under their own control rather than held by an app developer or on an organization’s server. We see this as potentially being a way to let people take back control from technology and other companies.
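
Conceptually, the access model looks something like the sketch below. This is a plain-Python illustration of the “user grants an app access to a personal data store” idea, not Solid’s actual API or data format.

```python
# Conceptual sketch of a personal online data store with user-controlled access.
# This illustrates the idea only; it is not Solid's actual API or storage format.

class PersonalDataStore:
    def __init__(self, owner):
        self.owner = owner
        self.data = {}    # e.g. {"contacts": [...], "photos": [...]}
        self.grants = {}  # app name -> set of keys it may read

    def grant(self, app, keys):
        """Owner explicitly allows an app to read specific pieces of data."""
        self.grants.setdefault(app, set()).update(keys)

    def revoke(self, app):
        """Owner withdraws an app's access entirely."""
        self.grants.pop(app, None)

    def read(self, app, key):
        """Apps only see what the owner has granted; everything else is refused."""
        if key in self.grants.get(app, set()):
            return self.data.get(key)
        raise PermissionError(f"{app} has no access to '{key}'")

pod = PersonalDataStore(owner="alice")
pod.data["contacts"] = ["bob", "carol"]
pod.grant("calendar-app", {"contacts"})
print(pod.read("calendar-app", "contacts"))  # allowed while the grant is in place
pod.revoke("calendar-app")                   # access can be withdrawn at any time
```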

GenTech have woken up to a reality where a life lived “plugged in” has significant consequences for their individual privacy and are starting to push back, questioning those organizations that have shown limited concern and continue to exercise exploitative practices.

It’s no wonder that we see these signs of revolt. GenTech is the generation with the most to lose. They face a life ahead intertwined with digital technology as part of their personal and private lives. With continued pressure on organizations to become more transparent, the time is now for young people to make their move.

Dr Mike Cooray, Professor of Practice, Hult International Business School and Dr Rikke Duus, Research Associate and Senior Teaching Fellow, UCL

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ser Borakovskyy / Shutterstock.com


#435145 How Big Companies Can Simultaneously Run ...

We live in the age of entrepreneurs. New startups seem to appear out of nowhere and challenge not only established companies, but entire industries. Where startup unicorns were once mythical creatures, they now seem abundant, increasing not only in number but also in the speed with which they reach the minimum one-billion-dollar valuation that defines the status.

But no matter how well things go for innovative startups, how many new success stories we hear, and how much space they take up in the media, the story that they are the best or only source of innovation isn’t entirely accurate.

Established organizations, or legacy organizations, can be incredibly innovative too. And while innovation is much more difficult in established organizations than in startups because they have far more complex systems, nobody is better positioned to succeed in their innovation efforts than established organizations.

Unlike startups, established organizations have all the resources. They have money, customers, data, suppliers, partners, and infrastructure, which put them in a far better position to transform new ideas into concrete, value-creating, successful offerings than startups.

However, for established organizations, becoming an innovation champion in these times of rapid change requires new rules of engagement.

Many organizations commit the mistake of engaging in innovation as if it were a homogeneous thing that should be approached in the same way every time, regardless of its purpose. In my book, Transforming Legacy Organizations, I argue that innovation in established organizations must actually be divided into three different tracks: optimizing, augmenting, and mutating innovation.

All three are important, and to complicate matters further, organizations must execute all three types of innovation at the same time.

Optimizing Innovation
The first track is optimizing innovation. This type of innovation is the majority of what legacy organizations already do today. It is, metaphorically speaking, the extra blade on the razor. A razor manufacturer might launch a new razor that has not just three, but four blades, to ensure an even better, closer, and more comfortable shave. Then one or two years later, they say they are now launching a razor that has not only four, but five blades for an even better, closer, and more comfortable shave. That is optimizing innovation.

Adding extra blades on the razor is where the established player reigns.

No startup with so much as a modicum of sense would even try to beat the established company in this type of innovation. And this continuous optimization, on both the operational and customer-facing sides, is important in the short term. It pays the rent. But it’s far from enough. There are limits to how many blades a razor needs, and optimizing innovation only improves upon the past.

Augmenting Innovation
Established players must also go beyond optimization and prepare for the future through augmenting innovation.

The digital transformation projects that many organizations are initiating can be characterized as augmenting innovation. In the first instance, it is about upgrading core offerings and processes from analog to digital. Or, if you’re born digital, you’ve probably had to augment the core to become mobile-first. Perhaps you have even entered the next augmentation phase, which involves implementing artificial intelligence. Becoming AI-first, like the Amazons, Microsofts, Baidus, and Googles of the world, requires great technological advancements. And it’s difficult. But technology may, in fact, be a minor part of the task.

The biggest challenge for augmenting innovation is probably culture.

Only legacy organizations that manage to transform their cultures from status quo cultures—cultures with a preference for things as they are—into cultures full of incremental innovators can thrive in constant change.

To create a strong innovation culture, an organization needs to thoroughly understand its immune systems. These are the mechanisms that protect the organization and operate around the clock to keep it healthy and stable, just as the body’s immune system operates to keep the body healthy and stable. But in a rapidly changing world, many of these defense mechanisms are no longer appropriate and risk weakening organizations’ innovation power.

When talking about organizational immune systems, there is a clear tendency to simply point to the individual immune system, people’s unwillingness to change.

But this is too simplistic.

Of course, there is human resistance to change, but the organizational immune system, consisting of a company’s key performance indicators (KPIs), rewards systems, legacy IT infrastructure and processes, and investor and shareholder demands, is far more important. So is the organization’s societal immune system, such as legislative barriers, legacy customers and providers, and economic climate.

Luckily, there are many culture hacks that organizations can apply to strengthen their innovation cultures by upgrading their physical and digital workspaces, transforming their top-down work processes into decentralized, agile ones, and empowering their employees.

Mutating Innovation
Upgrading your core and preparing for the future by augmenting innovation is crucial if you want success in the medium term. But to win in the long run and be as or more successful 20 to 30 years from now, you need to invent the future, and challenge your core, through mutating innovation.

This requires involving radical innovators who have a bold focus on experimenting with that which is not currently understood and for which a business case cannot be prepared.

Here you must also physically move away from the core organization when you initiate and run such initiatives. This is sometimes called “innovation on the edges” because the initiatives will not have a chance at succeeding within the core. There is simply too much friction, because they challenge what currently exists—precisely what the majority of the organization’s employees are working to optimize or augment.

Forward-looking organizations experiment to mutate their core through “X divisions,” sometimes called skunk works or innovation labs.

Lowe’s Innovation Labs, for instance, worked with startups to build in-store robot assistants and zero-gravity 3D printers to explore the future. Mutating innovation might include pursuing partnerships across all imaginable domains or establishing brand new companies, rather than traditional business units, as we see automakers such as Toyota now doing to build software for autonomous vehicles. Companies might also engage in radical open innovation by sponsoring others’ ingenuity. Japan’s top airline ANA is exploring a future of travel that does not involve flying people from point A to point B via the ANA Avatar XPRIZE competition.

Increasing technological opportunities challenge the core of any organization but also create unprecedented potential. No matter what product, service, or experience you create, you can’t rest on your laurels. You have to bring yourself to a position where you have a clear strategy for optimizing, augmenting, and mutating your core and thus transforming your organization.

It’s not an easy job. But, hey, if it were easy, everyone would be doing it. Those who make it, on the other hand, will be the innovation champions of the future.

Image Credit: rock-the-stock / Shutterstock.com



#435098 Coming of Age in the Age of AI: The ...

The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.

Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.

“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”

No doubt technology may impart Marvel superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this yet unformed generation more definitively than any before it.

What will it be like to come of age in the Age of AI?

The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 biomedical research publications indexed in the PubMed database.

AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.

A study published earlier this month in NPJ Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos taken five days after fertilization to train an AI algorithm on how to tell which in vitro fertilized embryo had the best chance of a successful pregnancy based on its quality.

Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.
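
To give a sense of what training such a classifier involves, here is a minimal transfer-learning sketch. The directory layout, image size, and training settings are assumptions made for illustration; the published Stork system is a deep convolutional network, but this is not its code.

```python
# Minimal sketch of an image-quality classifier in the spirit of the system described
# above. The directory layout, image size, and training settings are hypothetical,
# and this is not the Stork code itself.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Assume embryo photos are sorted into "good_quality" and "poor_quality" subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "embryo_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "embryo_images/val", image_size=IMG_SIZE, batch_size=32)

# Reuse an ImageNet-pretrained backbone and train only a small head on top of it.
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    backbone,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted probability of "good quality"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```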

“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”

Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.

Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract the movements of a newborn from video, turning them into a simplified “stick figure” that medical experts could use to more easily detect clinically relevant data.
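
A rough sketch of the idea: once joint positions have been extracted from each video frame, the stick figure is just those points, and simple movement statistics can be computed from them. The joint list and the feature below are simplified assumptions, not the study’s actual method.

```python
# Simplified sketch: reduce each video frame to a few joint positions (the "stick
# figure"), then summarize how much each joint moves between frames.
# The joint set and the movement feature are simplified assumptions for illustration.
import numpy as np

JOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

def movement_per_joint(frames):
    """frames: array of shape (num_frames, num_joints, 2) holding (x, y) positions.
    Returns the mean frame-to-frame displacement of each joint."""
    displacements = np.linalg.norm(np.diff(frames, axis=0), axis=2)  # (frames-1, joints)
    return dict(zip(JOINTS, displacements.mean(axis=0).round(3)))

# Hypothetical 100-frame recording: small jitter everywhere, plus a busier right hand.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.01, size=(100, len(JOINTS), 2))
frames[:, JOINTS.index("right_hand"), :] += rng.normal(0.0, 0.05, size=(100, 2))
print(movement_per_joint(frames))
```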

The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.

AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.

China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.

To reach dominance by its stated goal of 2030, Chinese cities are also incorporating AI education into their school curriculum. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.

In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.

Sandra Chang-Kredl, an associate professor in the Department of Education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning creativity or emotional connectedness.

Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.

Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
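
The “gets better over time” part can be shown in a few lines; the snippet below is a generic frequency-counting player of the kind a child might train, not Cognimates’ own implementation.

```python
# A tiny Rock Paper Scissors opponent that improves as it sees more of your moves.
# This is a generic illustration of the learning idea, not Cognimates' own code.
from collections import Counter
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class LearningPlayer:
    def __init__(self):
        self.history = Counter()  # counts of the human's past moves

    def choose(self):
        if not self.history:
            return random.choice(list(BEATS))
        predicted = self.history.most_common(1)[0][0]  # your most frequent move so far
        return BEATS[predicted]                        # play whatever beats it

    def observe(self, human_move):
        self.history[human_move] += 1

ai = LearningPlayer()
for human_move in ["rock", "rock", "paper", "rock", "rock"]:
    ai_move = ai.choose()
    print(f"you: {human_move:8s} ai: {ai_move}")
    ai.observe(human_move)
```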

“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.

Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.

“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.

AI Goes Back to School
It also turns out that AI has as much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.

For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.

In an interview with Vox, she explained that while DeepMind’s AlphaZero was trained to be a chess master, it struggles with even the simplest changes in the rules, such as allowing the bishop to move horizontally instead of diagonally.

“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”

Last year, the federal defense agency DARPA announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”

Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.

The research leverages cognitive science to understand human intelligence, according to an article on the project in MIT Technology Review, such as exploring how young children visualize the world using their own innate 3D models.

“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and helps lead the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”

In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.

Image Credit: phoelixDE / Shutterstock.com
