Today, over 77 percent of Americans own a smartphone with access to the world’s information and near-limitless learning resources.
Yet nearly 36 million adults in the US are constrained by low literacy skills, excluding them from professional opportunities, prospects of upward mobility, and full engagement with their children’s education.
And beyond its direct impact, low literacy rates affect us all. Improving literacy among adults is predicted to save $230 billion in national healthcare costs and could result in US labor productivity increases of up to 2.5 percent.
Across the board, exponential technologies are making demonetized learning tools, digital training platforms, and literacy solutions more accessible than ever before.
With rising automation and major paradigm shifts underway in the job market, these tools not only promise to make today’s workforce more versatile, but could play an invaluable role in breaking the poverty cycles often associated with low literacy.
Just three years ago, the Barbara Bush Foundation for Family Literacy and the Dollar General Literacy Foundation joined forces to tackle this intractable problem, launching a $7 million Adult Literacy XPRIZE.
Challenging teams to develop smartphone apps that significantly increase literacy skills among adult learners in just 12 months, the competition brought five prize teams to the fore, each targeting multiple demographics across the nation.
Now, after four years of research, prototyping, testing, and evaluation, XPRIZE has just this week announced two grand prize winners: Learning Upgrade and People ForWords.
In this blog, I’ll be exploring the nuts and bolts of our two winning teams and how exponential technologies are beginning to address rapidly shifting workforce demands.
Meeting 100 percent adult literacy rates
Retooling today’s workforce for tomorrow’s job market
Granting the gift of lifelong learning
Let’s dive in.
Adult Literacy XPRIZE
Emphasizing the importance of accessible mediums and scalability, the Adult Literacy XPRIZE called for teams to create mobile solutions that lower the barrier to entry, encourage persistence, develop relevant learning content, and can scale nationally.
Outperforming the competition in two key demographic groups in aggregate—native English speakers and English language learners—teams Learning Upgrade and People ForWords together claimed the prize.
To win, both organizations had to generate the greatest gains between pre- and post-tests administered 12 months apart to learners in a field test across Los Angeles, Dallas, and Philadelphia.
Prize money in hand, Learning Upgrade and People ForWords are now scaling up their solutions, each targeting a key demographic in America’s pursuit of adult literacy.
Based in San Diego, Learning Upgrade has developed an Android and iOS app that helps students learn English and math through video, songs, and gamification. Offering a total of 21 courses from kindergarten through adult education, Learning Upgrade touts a growing platform of over 900 lessons spanning English, reading, math, and even GED prep.
To further personalize each student’s learning, Learning Upgrade measures time-on-task and builds out formative performance assessments, granting teachers a quantified, real-time view of each student’s progress across both lessons and criteria.
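For illustration, here's a minimal sketch of what time-on-task tracking with a real-time teacher view might look like under the hood. This is a hypothetical toy in Python, not Learning Upgrade's actual code; all names are invented.

```python
from dataclasses import dataclass, field
from time import monotonic

@dataclass
class LessonSession:
    """Hypothetical tracker for one learner's time-on-task in a single lesson."""
    lesson_id: str
    started_at: float = field(default_factory=monotonic)
    correct: int = 0
    attempted: int = 0

    def record_answer(self, is_correct: bool) -> None:
        """Log one formative-assessment response."""
        self.attempted += 1
        self.correct += int(is_correct)

    def snapshot(self) -> dict:
        """Real-time view a teacher dashboard could poll."""
        return {
            "lesson": self.lesson_id,
            "time_on_task_s": round(monotonic() - self.started_at, 1),
            "accuracy": self.correct / self.attempted if self.attempted else None,
        }
```

A production system would persist these events and aggregate them per criterion, but the core signals, elapsed time and answer accuracy, are this simple.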
Specializing in English reading skills, Dallas-based People ForWords offers a similarly delocalized model with its mobile game “Codex: Lost Words of Atlantis.” Based on an archaeological adventure storyline, the app features an immersive virtual environment.
Set in the Atlantis Library (now with a 3D rendering underway), Codex takes its students through narrative-peppered lessons covering everything from letter-sound practice to vocabulary reinforcement in a hidden object game.
But while both mobile apps have recruited initial piloting populations, the key to success is scale.
The second phase of the XPRIZE uses a similar incentive prize structure to drive recruitment: the $1 million Barbara Bush Foundation Adult Literacy XPRIZE Communities Competition. For 15 months, the competition will challenge organizations, communities, and individuals alike to onboard adult learners onto the two prize-winning platforms, as well as onto the apps of fellow finalist teams AmritaCREATE and Cell-Ed.
Each awarded $125,000 for participating in the Communities Competition, AmritaCREATE and Cell-Ed bring their own nuanced advantages to the table.
While AmritaCREATE curates culturally appropriate e-content relevant to given life skills, Cell-Ed takes a learn-on-the-go approach, offering micro-lessons, on-demand essential skills training, and individualized coaching on any mobile device, no internet required.
Although all these cases target slightly different demographics and problem niches, they converge upon common phenomena: mobility, efficiency, life skill relevance, personalized learning, and practicability.
And what better way to scale these benefits than through AI and immersive virtual environments?
In the case of education’s growing mobility, 5G and the explosion of connectivity speeds will continue to drive a learn-anytime-anywhere education model, whereby adult users learn on the fly, untethered to web access or rigid time strictures.
As I’ve explored in a previous blog on AI-crowd collaboration, we might also see the rise of AI learning consultants responsible for processing data on how you learn.
Quantifying and analyzing your interaction with course modules, where you get stuck, where you thrive, and what tools cause you ease or frustration, each user’s AI trainer might then issue personalized recommendations based on crowd feedback.
Adding a human touch, each app’s hired teaching consultants would thereby be freed to track many more students’ progress at once, vetting AI-generated tips and adjustments, and offering life coaching along the way.
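As a thought experiment, a first-pass version of that crowd-feedback loop could be as simple as counting which interventions helped other learners past the same sticking point. The sketch below is purely illustrative; the data shape and function names are assumptions, not any real product's API.

```python
from collections import Counter

def recommend_next(stuck_module: str, crowd_logs: list[dict], top_n: int = 3) -> list[str]:
    """Toy crowd-feedback recommender (hypothetical).

    crowd_logs entries are assumed to look like:
      {"module": "fractions-2", "intervention": "video-review", "helped": True}
    Returns the interventions that most often helped other learners
    get past the module this learner is stuck on.
    """
    helped = Counter(
        log["intervention"]
        for log in crowd_logs
        if log["module"] == stuck_module and log["helped"]
    )
    return [intervention for intervention, _ in helped.most_common(top_n)]
```

A real AI trainer would weight recommendations by learner similarity and confidence, but the underlying idea, mining the crowd's outcomes for what works, is the same.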
Lastly, virtual learning environments—and, one day, immersive VR—will facilitate both speed and retention, two of the most critical constraints as learners age.
As I often reference, we generally remember only 10 percent of what we see, 20 percent of what we hear, and 30 percent of what we read, but over a staggering 90 percent of what we do or experience.
By introducing gamification, immersive testing activities, and visually rich sensory environments, adult literacy platforms have a winning chance at scalability, retention, and user persistence.
Exponential Tools: Training and Retooling a Dynamic Workforce
Beyond literacy, however, virtual and augmented reality have already begun disrupting the professional training market.
As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.
Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.
Then in September of last year, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.
In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could concentrate a typical six-year aircraft design process into the course of six months, turning physical mockups into CAD-designed virtual replicas.
But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real time.
And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.
Aiming to dramatically reduce the time and trouble of VR pilot testing, the partners intend to give pilots touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these piloted devices contain a suite of actuators that simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.
When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.
Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.
But perhaps most urgently, virtual reality will offer an immediate solution to today’s constant industry turnover and large-scale re-education demands.
VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.
Want to become an electric, autonomous vehicle mechanic at age 44? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.
Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.
As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to try their hand at a new industry.
Learn Anything, Anytime, at Any Age
As VR and artificial intelligence converge with demonetized mobile connectivity, we are finally witnessing an era in which no one will be left behind.
Whether in pursuit of fundamental life skills, professional training, linguistic competence, or specialized retooling, users of all ages, career paths, income brackets, and goals are now encouraged to be students, no longer condemned to stagnancy.
Traditional constraints need no longer prevent non-native speakers from gaining an equal foothold, or specialists from pivoting into new professions, or low-income parents from staking new career paths.
As exponential technologies drive democratized access, initiatives such as the Barbara Bush Foundation Adult Literacy XPRIZE are blazing the trail to make education a scalable priority for all.
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.
Image Credit: Iulia Ghimisli / Shutterstock.com
Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.
Many paved roads, though, have faded paint, signs obscured behind trees, and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways, or off-road trails.
What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?
Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.
At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.
Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.
The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
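To give a flavor of how simulator output becomes training data, here's a stripped-down per-pixel classification loop in PyTorch. It's a generic sketch, not the CAVS team's code: the tiny network is a stand-in for a real encoder-decoder, and the random tensors stand in for rendered frames and the per-pixel labels a simulator can provide for free.

```python
import torch
import torch.nn as nn

# Four semantic classes mentioned in the article: tree, sky, open path, obstacle.
NUM_CLASSES = 4

# Deliberately tiny fully-convolutional net; real systems are far deeper,
# but the training loop has the same shape.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),  # per-pixel class logits
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Stand-ins for a simulator batch: RGB camera frames plus per-pixel
    # labels, which the simulator knows exactly because it rendered the scene.
    frames = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, NUM_CLASSES, (8, 64, 64))

    logits = model(frames)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The appeal of simulation is visible right in the loop: perfect ground-truth labels arrive with every frame, no hand-annotation required.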
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.
We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.
A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.
The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided.
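One common way to turn those ground sweeps into a roughness number is to fit a plane to the lidar returns and measure how far the points scatter from it. The function below is a generic illustration of that idea, not the Halo Project's actual processing pipeline.

```python
import numpy as np

def surface_roughness(points: np.ndarray) -> float:
    """Estimate ground roughness from lidar returns (illustrative only).

    points: (N, 3) array of x, y, z hits on the ground ahead of the vehicle.
    Fits a best-fit plane z = ax + by + c by least squares, then returns the
    standard deviation of the residuals: near zero for smooth pavement,
    larger for rutted or vegetated ground.
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(residuals.std())
```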
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.
Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Photo provided for The Conversation by Matthew Goudin / CC BY-ND
If a recent project using Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Center for Robotics and an intern at DeepMind, along with DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems are pitted against each other in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
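For readers who want to see that back-and-forth in code, here is a bare-bones GAN training step in PyTorch. It's a toy sketch of the general recipe only; BigGAN itself adds class conditioning, vastly larger networks, and many stabilization tricks.

```python
import torch
import torch.nn as nn

# Minimal GAN skeleton on flattened 28x28 images, for illustration only.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)
    fake_images = G(noise)

    # Discriminator: learn to tell real images from the generator's fakes.
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example: one step on a stand-in batch of 32 "images" scaled to [-1, 1].
train_step(torch.rand(32, 784) * 2 - 1)
```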
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and life-like nature that were beyond almost anything seen before. In part, this was achieved by feeding the GAN more images at each training step than is usually the case. According to IFLScience, the standard is to feed about 64 images into the GAN at every step. In this case, the research team fed about 2,000 images per step into the system, leading to it being nicknamed BigGAN.
Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As Jer Thorp, artist and Innovator-in-Residence at the Library of Congress, put it tongue-in-cheek on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation showed that powering the computations behind the images would require about 27,000 square feet of solar panels.
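Thorp didn't publish his inputs, but the shape of such a back-of-the-envelope calculation is easy to reproduce. In the sketch below, everything except the TPU count and runtime (which come from the article) is an assumption, chosen only to show the arithmetic lands in the same ballpark.

```python
# Figures from the article:
tpus = 512
hours = 48                   # upper end of the quoted 24-48 hour runs

# Assumptions (order-of-magnitude guesses, not published specs):
watts_per_tpu = 200          # assumed per-device power draw
panel_watts_per_sqft = 15    # assumed solar panel output in full sun
sun_hours_per_day = 5        # assumed usable sun-hours per day

energy_kwh = tpus * watts_per_tpu * hours / 1000
days = hours / 24
kwh_per_sqft = panel_watts_per_sqft * sun_hours_per_day * days / 1000
sqft = energy_kwh / kwh_per_sqft

print(f"~{energy_kwh:,.0f} kWh per experiment")  # ~4,915 kWh
print(f"~{sqft:,.0f} sq ft of panels")           # ~33,000, near the quoted 27,000
```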
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GAN networks were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.
One possible use is image correction: making pixelated images clearer. Not only could this help your future holiday snaps, but it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GAN networks to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of the BigGAN, there is still some way to go before reaching these potential use cases. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.
Image Credit: Ondrej Prosicky/Shutterstock
From how we get around to how we spend our time to how we manage our health, technology is changing our lives—not to mention economies, governments, and cities around the world. Tech has brought good to individuals and societies by, for example, democratizing access to information and lowering the cost of many products and services. But it’s also brought less-desirable effects we can’t ignore, like a rise in mental health problems and greater wealth inequality.
To keep pushing tech in a direction that will benefit humanity as a whole—rather than benefiting a select few—we must encourage open dialogues about these topics among leading figures in business, government, and spirituality.
To that end, SingularityU The Netherlands recently hosted a dialogue about compassion and technology with His Holiness the Dalai Lama. The event was attended by students and tech innovators, ambassadors, members of the Dutch royal family, and other political and business leaders.
The first half of the conversation focused on robotics, telepresence, and artificial intelligence. His Holiness spoke with Tilly Lockey, a British student helping tech companies create bionic limbs; Karen Dolva, CEO of telepresence company No Isolation; and Maarten Steinbuch, faculty chair of robotics at SingularityU The Netherlands and a professor of systems and control at TU Eindhoven.
When asked what big tech companies could be doing to help spread good around the world, His Holiness pointed out that while technology has changed many aspects of life in developed countries, there is still immense suffering in less-developed nations, and tech companies should pay more attention to the poorer communities around the world.
In the second half of the event, the focus switched to sickness, aging, and death. Speakers included Liz Parrish, CEO of BioViva Sciences; Kris Verburgh, faculty chair of health and medicine at SingularityU The Netherlands; Jeantine Lunshof, a bio-ethicist at the MIT Media Lab; and Selma Boulmalf, a religious studies student at the University of Amsterdam. Among other topics, they talked with His Holiness about longevity research and the drawbacks of trying to extend our lifespans or achieve immortality.
Both sessions were moderated by Christa Meindersma, founder and chair of the Himalaya Initiative for Culture and Society. The event served as the ceremonial opening of an exhibition called The Life of the Buddha, Path to the Present, on display in Amsterdam’s 15th-century De Nieuwe Kerk church through February 2019.
In the 21st century, His Holiness said, “There is real possibility to create a happier world, peaceful world. So now we need vision. A peaceful world on the basis of a sense of oneness of humanity.”
Technology’s role in that world is being developed and refined every day, and we must maintain an ongoing awareness of its positive and negative repercussions—on everyone.
Image Credit: vipflash / Shutterstock.com