Tag Archives: movements

#435658 Video Friday: A Two-Armed Robot That ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

I’m sure you’ve seen this video already because you read this blog every day, but if you somehow missed it because you were skiing across Antarctica (the only valid excuse we’re accepting today), here’s our video introducing HMI’s Aquanaut transforming robot submarine.

And after you recover from all that frostbite, make sure to read our in-depth feature article here.

[ Aquanaut ]

Last week we complained about not having seen a ballbot with a manipulator, so Roberto from CMU shared a new video of their ballbot, featuring a pair of 7-DoF arms.

We should learn more at Humanoids 2019.

[ CMU ]

Thanks Roberto!

The FAA is making it easier for recreational drone pilots to get near-realtime approval to fly in lightly controlled airspace.

[ LAANC ]

Self-reconfigurable modular robots are usually composed of multiple modules with uniform docking interfaces that allow them to transform into different configurations by themselves. The reconfiguration planning problem is to find the sequence of reconfiguration actions required to transform one arrangement of modules into another. We present a novel reconfiguration planning algorithm for modular robots. The algorithm efficiently compares the initial configuration with the goal configuration. The reconfiguration actions can be executed in a distributed manner, so that each module can efficiently finish its own reconfiguration task, resulting in a global reconfiguration of the system. Finally, the algorithm is demonstrated on real modular robots, along with several example reconfiguration tasks.
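
For a rough sense of what comparing an initial configuration against a goal configuration can look like, here is a deliberately naive sketch of our own (not the CKbot team’s planner). It treats a configuration as a set of docking connections and emits the undock/dock actions that turn one set into the other:

```python
# Illustrative sketch only, NOT the CKbot reconfiguration algorithm.
# A configuration is modeled as a set of module-to-module docking pairs;
# the "plan" is simply: undock pairs missing from the goal, then dock new ones.

def plan_reconfiguration(initial, goal):
    """initial, goal: sets of frozenset({module_a, module_b}) docking pairs."""
    to_undock = initial - goal          # connections that must be released
    to_dock = goal - initial            # connections that must be formed
    actions = [("undock", tuple(sorted(pair))) for pair in to_undock]
    actions += [("dock", tuple(sorted(pair))) for pair in to_dock]
    return actions

initial = {frozenset({"M1", "M2"}), frozenset({"M2", "M3"})}
goal = {frozenset({"M1", "M2"}), frozenset({"M1", "M3"})}
print(plan_reconfiguration(initial, goal))
# [('undock', ('M2', 'M3')), ('dock', ('M1', 'M3'))]
```

A real planner also has to order these actions so the structure stays connected and modules can physically reach their docking sites, which is where the hard part lives.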

[ CKbot ]

A nice gripper design that uses a passive thumb of sorts to pick up flat objects from flat surfaces.

[ Paper ] via [ Laval University ]

I like this video of a palletizing robot from Kawasaki because in the background you can see a human doing the exact same job and obviously not enjoying it.

[ Kawasaki ]

This robot cleans and “brings joy and laughter.” What else do we need?

I do appreciate that all the robots are named Leo, and that they’re also all female.

[ LionsBot ]

This is less of a dishwashing robot and more of a dish-sorting robot, but we’ll forgive it because it doesn’t drop a single dish.

[ TechMagic ]

Thanks Ryosuke!

A slight warning here that the robot in the following video (which costs something like $180,000) appears “naked” in some scenes, none of which are strictly objectionable, we hope.

Beautifully slim and delicate life-size motion figures are ideal avatars for expressing emotions to customers in various arts, content, and businesses. We can provide a system that integrates not only motion figures but all moving devices.

[ Speecys ]

The best way to operate a Husky with a pair of manipulators on it is to become the robot.

[ UT Austin ]

The FlyJacket drone control system from EPFL has been upgraded so that it can yank you around a little bit.

In several fields of human-machine interaction, haptic guidance has proven to be an effective training tool for enhancing user performance. This work presents the results of psychophysical and motor learning studies that were carried out with human participants to assess the effect of cable-driven haptic guidance for a task involving aerial robotic teleoperation. The guidance system was integrated into an exosuit, called the FlyJacket, that was developed to control drones with torso movements. Results for the Just Noticeable Difference (JND) and from the Stevens Power Law suggest that the perception of force on the users’ torso scales linearly with the amplitude of the force exerted through the cables, and that the perceived force is close to the magnitude of the stimulus. Motor learning studies reveal that this form of haptic guidance improves user performance in training, but this improvement is not retained when participants are evaluated without guidance.
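
In Stevens’ power law, perceived magnitude grows as the stimulus raised to some exponent, and an exponent near 1 corresponds to the linear scaling described above. Here is a small illustration of how that exponent can be estimated from perception data; the numbers are made up and are not the EPFL dataset:

```python
# Fit the Stevens power law perceived = k * stimulus**a by linear regression
# in log-log space. Hypothetical force/response values, not the FlyJacket data.
import numpy as np

stimulus_n = np.array([1.0, 2.0, 4.0, 8.0])    # cable force in newtons (assumed)
perceived = np.array([1.1, 2.0, 4.3, 7.8])     # reported magnitude (assumed)

a, log_k = np.polyfit(np.log(stimulus_n), np.log(perceived), 1)
print(f"exponent a = {a:.2f}, k = {np.exp(log_k):.2f}")  # a near 1 means linear scaling
```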

[ EPFL ]

The SAND Challenge is an opportunity for small businesses to compete in an autonomous unmanned aerial vehicle (UAV) competition to help NASA address safety-critical risks associated with flying UAVs in the national airspace. Set in a post-natural disaster scenario, SAND will push the envelope of aviation.

[ NASA ]

Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to predict properties of the terrain ahead of the robot. In this work, we propose a method to collect data from robot-terrain interaction and associate it to images, to then train a neural network to predict terrain properties from images.

[ RSL ]

Misty wants to be your new receptionist.

[ Misty Robotics ]

For years, we’ve been pointing out that while new Roombas have lots of great features, older Roombas still do a totally decent job of cleaning your floors. This video is a performance comparison between the newest Roomba (the S9+) and the original 2002 Roomba (!), and the results will surprise you. Or maybe they won’t.

[ Vacuum Wars ]

Lex Fridman from MIT interviews Chris Urmson, who was involved in some of the earliest autonomous vehicle projects, Google’s original self-driving car among them, and is currently CEO of Aurora Innovation.

Chris Urmson was the CTO of the Google Self-Driving Car team, a key engineer and leader behind the Carnegie Mellon autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today he is the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, the former director of Tesla Autopilot, and Drew Bagnell, Uber’s former autonomy and perception lead.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Lael Odhner from RightHand Robotics.

Lael Odhner is a co-founder of RightHand Robotics, which is developing a gripper based on the combination of control and soft, compliant parts to get better grasping of objects. Their work focuses on grasping and manipulating everyday human objects in everyday environments. This mimics how human hands combine control and flexibility to grasp objects with great dexterity.

The combination of control and compliance makes the RightHand Robotics gripper lightweight and affordable. The compliance makes it easier to grasp objects of unknown shape and differs from the way industrial robots usually grip. The compliance also helps in more unstructured environments, where contact with the object and its surroundings cannot be exactly predicted.

[ RightHand Robotics ] via [ Robots in Depth ]

Posted in Human Robots

#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, the volunteers attempted to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At outer-orbit intervals, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.
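
To make the idea concrete, here is a hypothetical sketch of what annotated objects and ordered instruction modules could look like as data. These structures are ours for illustration and are not SE4’s actual Semantic Control interface:

```python
# Hypothetical data structures illustrating "annotate, then execute in order."
# Not SE4 code: object names, labels, and fields are invented for this sketch.
from dataclasses import dataclass

@dataclass
class AnnotatedObject:
    name: str
    label: str          # meaning supplied by the human operator, e.g. "building_material"
    position: tuple     # (x, y, z) in the robot's frame

@dataclass
class InstructionModule:
    action: str         # a logical unit such as "stack"
    target: str         # name of an annotated object
    destination: tuple  # goal (x, y, z)

scene = [AnnotatedObject("block_1", "building_material", (0.2, 0.0, 0.0)),
         AnnotatedObject("boulder_1", "rock", (0.5, 0.3, 0.0))]

plan = [InstructionModule("stack", "block_1", (0.0, 0.0, 0.1))]

for step in plan:       # carried out locally, with no round trip to the operator
    obj = next(o for o in scene if o.name == step.target)
    print(f"{step.action} {obj.name} ({obj.label}) -> {step.destination}")
```

The point of the ordered batch is latency: the whole plan can be sent once, and the robot works through it while relying on the annotations to interpret what it sees.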

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.

Posted in Human Robots

#435522 Harvard’s Smart Exo-Shorts Talk to the ...

Exosuits don’t generally scream “fashionable” or “svelte.” Take the mind-controlled robotic exoskeleton that allowed a paraplegic man to kick off the World Cup back in 2014. Is it cool? Hell yeah. Is it practical? Not so much.

Quibbling about wearability might seem trivial when the technology already helps people with impaired mobility move around dexterously. But the lesson of ill-fated Google Glass, whose awkward head tilts and conspicuous voice commands earned wearers the “Glasshole” nickname, clearly shows that wearable computer assistants can’t just work technologically—they have to look natural and let the user behave as usual. They have to, in a sense, disappear.

To Dr. Jose Pons at the Legs + Walking Ability Lab in Chicago, exosuits need three main selling points to make it in the real world. One, they have to physically interact with their wearer and seamlessly deliver assistance when needed. Two, they should cognitively interact with the host to guide and control the robot at all times. Finally, they need to feel like a second skin—move with the user without adding too much extra mass or reducing mobility.

This week, a US-Korean collaboration delivered the whole shebang in a Lululemon-style skin-hugging package combined with a retro waist pack. The portable exosuit, weighing only 11 pounds, looks like a pair of spandex shorts but can support the wearer’s hip movement when needed. Unlike their predecessors, the shorts are embedded with sensors that let them know when the wearer is walking versus running by analyzing gait.

Switching between the two movement modes may not seem like much, but what naturally comes to our brains doesn’t translate directly to smart exosuits. “Walking and running have fundamentally different biomechanics, which makes developing devices that assist both gaits challenging,” the team said. Their algorithm, computed in the cloud, allows the wearer to easily switch between both, with the shorts providing appropriate hip support that makes the movement experience seamless.

To Pons, who was not involved in the research but wrote a perspective piece, the study is an exciting step towards future exosuits that will eventually disappear under the skin—that is, implanted neural interfaces to control robotic assistance or activate the user’s own muscles.

“It is realistic to think that we will witness, in the next several years…robust human-robot interfaces to command wearable robotics based on…the neural code of movement in humans,” he said.

A “Smart” Exosuit Hack
There are a few ways you can hack a human body to move with an exosuit. One is using implanted electrodes inside the brain or muscles to decipher movement intent. With heavy practice, a neural implant can help paralyzed people walk again or dexterously move external robotic arms. But because the technique requires surgery, it’s not an immediate sell for people who experience low mobility because of aging or low muscle tone.

The other approach is to look to biophysics. Rather than decoding neural signals that control movement, here the idea is to measure gait and other physical positions in space to decipher intent. As you can probably guess, accurately deciphering user intent isn’t easy, especially when the wearable tries to accommodate multiple gaits. But the gains are many: there’s no surgery involved, and the wearable is low in energy consumption.

Double Trouble
The authors decided to tackle an everyday situation. You’re walking to catch the train to work, realize you’re late, and immediately start sprinting.

That seemingly easy transition hides a complex switch in biomechanics. When you walk, your legs act like an inverted pendulum that swings over the stance foot in a predictable way. When you run, however, the legs move more like a spring-loaded system, and the joints involved in the motion differ from those of a casual stroll. Engineering an assistive wearable for either gait alone is relatively simple; making one for both is exceedingly hard.

Led by Dr. Conor Walsh at Harvard University, the team started with an intuitive idea: assisted walking and running require specialized “actuation” profiles tailored to each gait. When the user is moving in a way that doesn’t require assistance, the wearable needs to stay out of the way so that it doesn’t restrict mobility. A quick analysis found that assisting hip extension has the largest impact, because it’s important to both gaits and doesn’t add mass to the lower legs.

Building on that insight, the team made a waist belt connected to two thigh wraps, similar to a climbing harness. Two electrical motors embedded inside the device connect the waist belt to other components through a pulley system to help the hip joints move. The whole contraption weighed about 11 lbs and didn’t obstruct natural movement.

Next, the team programmed two separate supporting profiles for walking and running. The goal was to reduce the “metabolic cost” for both movements, so that the wearer expends as little energy as needed. To switch between the two programs, they used a cloud-based classification algorithm to measure changes in energy fluctuation to figure out what mode—running or walking—the user is in.
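
As a rough illustration of how a gait classifier can exploit those energy fluctuations, consider the textbook distinction that potential and kinetic energy of the body’s center of mass trade off out of phase during walking and rise and fall together during running. The sketch below leans on that distinction with synthetic signals; it is a simplification, not the Harvard team’s cloud-based algorithm:

```python
# Toy walk-vs-run classifier based on the phase relationship between potential
# and kinetic energy fluctuations. Synthetic signals; not the exosuit's model.
import numpy as np

def classify_gait(potential_energy, kinetic_energy):
    """Both inputs are 1-D arrays sampled over the same short window (~1 s)."""
    pe = potential_energy - np.mean(potential_energy)
    ke = kinetic_energy - np.mean(kinetic_energy)
    corr = np.dot(pe, ke) / (np.linalg.norm(pe) * np.linalg.norm(ke))
    return "running" if corr > 0 else "walking"   # in phase -> spring-like running

t = np.linspace(0, 1, 200)
print(classify_gait(np.sin(2 * np.pi * 2 * t), -np.sin(2 * np.pi * 2 * t)))  # walking
print(classify_gait(np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 3 * t)))   # running
```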

Smart Booster
Initial trials on treadmills were highly positive. Six male volunteers of similar age and build donned the exosuit and either ran or walked on the treadmill at varying inclines. The algorithm performed perfectly at distinguishing between the two gaits in all conditions, even at steep angles.

An outdoor test with eight volunteers also proved the algorithm nearly perfect. Even on uneven terrain, only two steps out of all test trials were misclassified. In an additional trial on mud or snow, the algorithm performed just as well.

“The system allows the wearer to use their preferred gait for each speed,” the team said.

Software excellence translated to performance. A test found that the exosuit reduced the energy cost of walking by over nine percent and of running by four percent. It may not sound like much, but that range of improvement is meaningful in athletic performance. Putting things into perspective, the team said, the metabolic rate reduction during walking is similar to taking 16 pounds off at the waist.

The Wearable Exosuit Revolution
The study’s lightweight exoshorts are hardly the only players in town. Back in 2017, SRI International’s spin-off, Superflex, engineered an Aura suit to support mobility in the elderly. The Aura used a different mechanism: rather than a pulley system, it incorporated a type of smart material that contracts in a manner similar to human muscles when zapped with electricity.

The Aura was embedded with a myriad of motion sensors, including accelerometers and gyroscopes, but its smartness came from mini-computers that measured how fast the wearer was moving and tracked the user’s posture. The data were integrated and processed locally inside hexagon-shaped computing pods near the thighs and upper back. The pods also acted as the control center for sending electrical zaps to give the wearer a boost when needed.

Around the same time, a collaboration between Harvard’s Wyss Institute and ReWalk Robotics introduced a fabric-based wearable robot to assist a wearer’s legs for balance and movement. Meanwhile, a Swiss team coated normal fabric with electroactive material to weave soft, pliable artificial “muscles” that move with the skin.

Although health support is the current goal, the military is obviously interested in similar technologies to enhance soldiers’ physicality. Superflex’s Aura, for example, was originally inspired by technology born from DARPA’s Warrior Web Program, which aimed to reduce a soldier’s mechanical load.

That said, military gear has a long history of trickling down to consumer use. Just as camouflage, cargo pants, and GORE-TEX made their way into the consumer ecosphere, it’s not hard to imagine your local Target eventually stocking intelligent exowear.

Image and Video Credit: Wyss Institute at Harvard University.

Posted in Human Robots

#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. In the meantime, humans continue to be superior when it comes to abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She is able to draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition will serve as a stimulant for engaging with critical questions about what kind of future we ought to create via such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by an algorithm trained, using deep neural networks, on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”
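
The actual PoemPortraits model is a neural network, but the core idea of building a statistical model from a body of text and sampling new phrases in its style can be shown with something far simpler. The toy word-level Markov chain below is our illustration only, trained on a made-up scrap of text rather than the project’s 19th-century corpus:

```python
# Toy Markov-chain text generator: learn which words follow which, then sample.
# Illustration only; PoemPortraits itself uses a deep neural network.
import random
from collections import defaultdict

corpus = "the stars adorn the night and the night adorns the secret stars".split()

model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)        # record observed transitions

def generate(start, length=8):
    word, phrase = start, [start]
    for _ in range(length - 1):
        word = random.choice(model[word]) if model[word] else random.choice(corpus)
        phrase.append(word)
    return " ".join(phrase)

print(generate("the"))   # e.g. "the night and the secret stars adorn the"
```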

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s a collaborative project. All of the generated lines of poetry are combined to form a continuously growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration of people from around the world working with algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition at HG Contemporary showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was shown 100,000 photos of Western art spanning more than five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew on that historical knowledge, along with a mandate to produce something novel, to create artwork without human intervention.

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods. The scores it generates are indistinguishable from those created by the most talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-generated score and select the temp track to base the composition process on. Another method involves using preset algorithms to compose music in pre-defined styles, including everything from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of the creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines can perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used our technological scaffolding to push ourselves beyond our biological limitations. We use the telescope to extend our line of sight, planes to fly, and smartphones to connect with others. Our machines are not always working against us, but rather working as an extension of our minds. Similarly, we could use our machines to expand on our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.

Posted in Human Robots

#435098 Coming of Age in the Age of AI: The ...

The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.

Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.

“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”

No doubt technology may impart Marvel superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this yet unformed generation more definitively than any before it.

What will it be like to come of age in the Age of AI?

The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications from biomedical literature and research on the PubMed database.

AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.

A study published earlier this month in NPJ Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos taken five days after fertilization to train an AI algorithm on how to tell which in vitro fertilized embryo had the best chance of a successful pregnancy based on its quality.

Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.

“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”
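
The general recipe here, labeling a set of images, training a classifier, and then scoring new images it has never seen, is standard supervised learning. The sketch below uses random stand-in arrays instead of embryo photos and a simple linear model instead of the Stork network; it shows the shape of the workflow, not the Weill Cornell system:

```python
# Minimal supervised-classification workflow on stand-in data.
# Not the Stork model: images and "quality" labels here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.normal(size=(500, 64 * 64))                 # flattened grayscale stand-ins
labels = (images[:, :10].mean(axis=1) > 0).astype(int)   # synthetic "good quality" label

x_train, x_test, y_train, y_test = train_test_split(images, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
print(f"held-out accuracy: {clf.score(x_test, y_test):.2f}")
```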

Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.

Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract a newborn’s movements from video, turning them into a simplified “stick figure” that medical experts could use to more easily spot clinically relevant patterns.

The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.

AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.

China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.

To reach dominance by its stated goal of 2030, Chinese cities are also incorporating AI education into their school curriculum. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.

In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.

Sandra Chang-Kredl, associate professor of the department of education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning creativity or emotional connectedness.

Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.

Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
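
For a sense of what a kid-friendly “machine learning version of Rock Paper Scissors” can amount to, here is a tiny bot of our own that gets better the more it plays by counting its opponent’s habits. It is an illustration in the spirit of the project, not Cognimates code:

```python
# Toy Rock-Paper-Scissors learner: tally the opponent's past moves and play
# the counter to their favorite. Illustration only, not Cognimates code.
from collections import Counter
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSLearner:
    def __init__(self):
        self.history = Counter()

    def observe(self, opponent_move):
        self.history[opponent_move] += 1          # "learning" = updating counts

    def play(self):
        if not self.history:
            return random.choice(list(BEATS))     # no data yet, guess randomly
        favorite = self.history.most_common(1)[0][0]
        return BEATS[favorite]                    # counter the most frequent move

bot = RPSLearner()
for move in ["rock", "rock", "paper", "rock"]:    # a rock-heavy opponent
    bot.observe(move)
print(bot.play())   # "paper", since rock has been seen most often
```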

“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.

Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.

“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.

AI Goes Back to School
It also turns out that AI has as much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.

For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.

In an interview with Vox, she explained that while DeepMind’s AlphaZero was trained to be a chess master, it struggles with even the simplest changes to the rules, such as altering how the bishop is allowed to move.

“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”

Last year, the federal defense agency DARPA announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”

Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.

The research leverages cognitive science to understand human intelligence, according to an article on the project in MIT Technology Review, such as exploring how young children visualize the world using their own innate 3D models.

“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is head of the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”

In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.

Image Credit: phoelixDE / Shutterstock.com

Posted in Human Robots