Tag Archives: programs

#432021 Unleashing Some of the Most Ambitious ...

At Singularity University, we are unleashing a generation of women who are smashing through barriers and starting some of the most ambitious technology companies on the planet.

Singularity University was founded in 2008 to empower leaders to use exponential technologies to solve our world’s biggest challenges. Our flagship program, the Global Solutions Program, has historically brought 80 entrepreneurs from around the world to Silicon Valley for 10 weeks to learn about exponential technologies and create moonshot startups that improve the lives of a billion people within a decade.

After nearly 10 years of running this program, we can say that about 70 percent of our successful startups have been founded or co-founded by female entrepreneurs (see below for inspiring examples of their work). This is in sharp contrast to the typical 10–20 percent of venture-backed tech companies that have a female founder, as reported by TechCrunch.

How are we so dramatically changing the game? While 100 percent of the credit goes to these courageous women, as both an alumna of the Global Solutions Program and our current vice chair of Global Grand Challenges, I want to share my reflections on what has worked.

At the most basic level, it is essential to deeply believe in the inherent worth, intellectual genius, and profound entrepreneurial caliber of women. While this may seem obvious, this is not the way our world currently thinks—we live in a world that sees women’s ideas, contributions, work, and existence as inherently less valuable than men’s.

For example, a 2017 Harvard Business Review article noted that even when women engage in the same behaviors and work as men, their work is considered less valuable simply because a woman did the job. An additional 2017 Harvard Business Review article showed that venture capitalists are significantly less likely to invest in female entrepreneurs and are more likely to ask men questions about the potential success of their companies while grilling women about the potential downfalls of their companies.

This doubt and lack of recognition of the genius and caliber of women is also why women are still paid less than men for completing identical work. Further, it’s why women’s work often gets buried in “number two” support roles of men in leadership roles and why women are expected to take on second shifts at home managing tedious household chores in addition to their careers. I would also argue these views as well as the rampant sexual harassment, assault, and violence against women that exists today stems from stubborn, historical, patriarchal views of women as living for the benefit of men, rather than for their own sovereignty and inherent value.

Like any other organization, Singularity University has not been immune to these biases, but we are resolutely focused on recognizing the intellectual genius and entrepreneurial caliber of women and helping them harness powerful exponential technologies.

We create an environment where women can physically and intellectually thrive free of harassment to reach their full potential, and we are building a broader ecosystem of alumni and partners around the world who not only support our female entrepreneurs throughout their entrepreneurial journeys, but who are also sparking and leading systemic change in their own countries and communities.

Respecting the Intellectual Genius and Entrepreneurial Caliber of Women
The entrepreneurial legends of our time—Steve Jobs, Elon Musk, Mark Zuckerberg, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin—are men who have all built their empires using exponential technologies. Exponential technologies helped these men succeed faster and with greater impact thanks to Moore's Law and the Law of Accelerating Returns, which hold that digital technologies (such as computing, software, artificial intelligence, robotics, quantum computing, biotechnology, and nanotechnology) grow more sophisticated even as they dramatically fall in price, enabling rapid scaling.

Knowing this, an entrepreneur can plot her way to an ambitious global solution over time, releasing new applications just as the technology and market are ready. Furthermore, these rapidly advancing technologies often converge to create new tools and opportunities for innovators to come up with novel solutions to challenges that were previously impossible to solve.
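This kind of planning can be sketched with a toy model. The snippet below assumes an illustrative two-year halving period for price-performance; that figure is an assumption for the example, not a universal constant.

```python
def projected_cost(initial_cost, years, halving_period=2.0):
    """Cost of a fixed unit of capability after `years`, if
    price-performance halves every `halving_period` years.
    The two-year period here is an illustrative assumption."""
    return initial_cost * 0.5 ** (years / halving_period)

# A capability costing $10,000 today falls to $312.50 after a decade
# at this rate, which is what lets a founder time each new release
# to arrive just as the technology becomes affordable.
print(projected_cost(10_000, 10))
```

Plugging in different halving periods shows how sensitive such roadmaps are to the assumed pace of the underlying technology.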

For various reasons, women have not pursued exponential technologies as aggressively as men (or were prevented or discouraged from doing so).

While women in wealthy countries like the United States are founding firms at a higher rate than ever, the majority are small businesses in linear industries that have been around for hundreds of years, such as social assistance, health, education, administrative, or consulting services. In lower-income countries, international aid agencies and nonprofits often encourage women to pursue careers in traditional handicrafts, micro-enterprise, and micro-finance. While these jobs have historically helped women escape poverty and gain financial independence, they have done little to help women realize the enormous power, influence, wealth, and ability to transform the world for the better that comes from building companies, nonprofits, and solutions grounded in exponential technologies.

We need women to be working with exponential technologies today in order to be powerful leaders in the future.

Participants who enroll in our Global Solutions Program spend the first few weeks of the program learning about exponential technologies from the world’s experts and the final weeks launching new companies or nonprofits in their area of interest. We require that women (as well as men) utilize exponential technologies as a condition of the program.

In this sense, at Singularity University women start their endeavors with all of us believing and behaving in a way that assumes they can achieve global impact at the level of our world’s most legendary entrepreneurs.

Creating an Environment Where Women Can Thrive
While challenging women to embrace exponential technologies is essential, it is also important to create an environment where women can thrive. In particular, this means ensuring women feel at home on our campus by ensuring gender diversity, aggressively addressing sexual harassment, and flipping the traditional culture from one that penalizes women to one that values and supports them.

While women initially made up only a small minority of our Global Solutions Program, in 2014 we achieved around 50 percent female attendance—a statistic that has held in the years since.

This is not due to a quota—every year we turn away extremely qualified women from our program (and we are working on reformulating the program to allow more people to participate in the future). While part of our recruiting success is due to the efforts of our marketing team, we have also benefited from the efforts of some of our early female founders, staff, faculty, and alumnae, including Susan Fonseca, Emeline Paat-Dahlstrom, Kathryn Myronuk, Lajuanda Asemota, Chiara Giovenzana, and Barbara Silva Tronseca.

As early champions of Singularity University, these women not only launched diversity initiatives and personally reached out to women, but also served as crucial role models holding leadership roles in our community. In addition, Fonseca and Silva each created multiple organizations and initiatives outside of (or in conjunction with) the university that produced additional pipelines of female candidates. In particular, Fonseca founded Women@TheFrontier as well as other organizations focusing on women, technology, and innovation, while Silva founded BestInnovation (a women's accelerator in Latin America), led Singularity University's Chilean Chapter, and founded the first SingularityU Summit in Latin America.

These women’s efforts in globally scaling Singularity University have been critical in ensuring women around the world now see Singularity University as a place where they can lead and shape the future.

Also, thanks to Google (Alphabet) and many of our alumni and partners, we were able to provide full scholarships to any woman (or man) to attend our program regardless of their economic status. Google committed significant funding for full scholarships, while our partners around the world also hosted numerous Global Impact Competitions, where entrepreneurs pitched their solutions to their local communities, with the winners earning a full scholarship, funded by our partners, to attend the Global Solutions Program as their prize.

Google and our partners’ support helped individuals attend our program and created a wider buzz around exponential technology and social change in local communities around the world. It led to the founding of 110 SU chapters in 55 countries.

Another vital aspect of our work in supporting women has been trying to create a harassment-free environment. Across Silicon Valley, more than 60 percent of women report that while they are trying to build their companies or get their work done, they are also dealing with physical and sexual harassment, as well as being demeaned and excluded in other ways in the workplace. We have taken action to educate and train our staff on how to deal with situations should they occur. All staff receive training on harassment when they join Singularity University, and all Global Solutions Program participants attend mandatory trainings on sexual harassment when they first arrive on campus. We also have male and female wellness counselors available who can offer support to both individuals and teams of entrepreneurs throughout the entire program.

While at a minimum our campus must be physically safe for women, we also strive to create a culture that values women and supports them in the additional challenges and expectations they face. For example, one of our 2016 female participants, Van Duesterberg, was pregnant during the program and said that instead of having people doubt her commitment to her startup or make her prove she could handle having a child and running a startup at the same time, people went out of their way to help her.

“I was the epitome of a person not supposed to be doing a startup,” she said. “I was pregnant and would need to take care of my child. But Singularity University was supportive and encouraging. They made me feel super-included and that it was possible to do both. I continue to come back to campus even though the program is over because the network welcomes me and supports me rather than shuts me out because of my physical limitations. Rather than making me feel I had to prove myself, everyone just understood me and supported me, whether it was bringing me healthy food or recommending funders.”

Another strength that we have in supporting women is that after the Global Solutions Program, entrepreneurs have access to a much larger ecosystem.

Many entrepreneurs partake in SU Ventures, which can provide further support to startups as they develop, and we now have a larger community of over 200,000 people in almost every country. These members have often attended other Singularity University programs and events and are committed to our vision of the future. These women and men consist of business executives, Fortune 500 companies, investors, nonprofit and government leaders, technologists, members of the media, and other movers and shakers in the world. They have made introductions for our founders, collaborated with them on business ventures, invested in them, and showcased their work at high-profile events around the world.

Building for the Future
While our Global Solutions Program is making great strides in supporting female entrepreneurs, there is always more work to do. We are now focused on achieving the same degree of female participation across all of our programs and actively working to recruit and feature more female faculty and speakers on stage. As our community grows and scales around the world, we are also working out how best to uphold our values and policies around sexual harassment across diverse locations and cultures. And like all businesses everywhere, we are focused on recruiting more women to serve at senior leadership levels within SU. As we make our way forward, we hope that you will join us in boldly leading this change and recognizing the genius and power of female entrepreneurs.

Meet Some of Our Female Moonshots
While we have many remarkable female entrepreneurs in the Singularity University community, the list below features a few of the women who have founded or co-founded companies at the Global Solutions Program that have launched new industries and are on their way to changing the way our world works for millions if not billions of people.

Jessica Scorpio co-founded Getaround in 2009. Getaround was one of the first car-sharing service platforms allowing anyone to rent out their car using a smartphone app. Getaround was a revolutionary idea in 2009, not only because smartphones and apps were still in their infancy, but because it was unthinkable that a technology startup could disrupt the major entrenched car, transport, and logistics companies. Scorpio’s early insights and pioneering entrepreneurial work brought to life new ways that humans relate to car sharing and the future self-driving car industry. Scorpio and Getaround have won numerous awards, and Getaround now serves over 200,000 members.

Paola Santana co-founded Matternet in 2011, which pioneered the commercial drone transport industry. In 2011, only the military, hobbyists, or the film industry used drones. Matternet demonstrated that drones could be used for commercial transport in short point-to-point deliveries of high-value goods, laying the groundwork for drone transport around the world as well as some of the early thinking behind the future flying car industry. Santana was also instrumental in shaping regulations for the use of commercial drones around the world, making the industry possible.

Sara Naseri co-founded Qurasense in 2014, a life sciences startup that analyzes women’s health through menstrual blood, allowing women to track their health every month. Naseri is shifting our understanding of menstrual blood from a waste product and something “not to be talked about” to a rich, non-invasive, abundant source of information about women’s health.

Abi Ramanan co-founded ImpactVision in 2015, a software company that rapidly analyzes the quality and characteristics of food through hyperspectral images. Her long-term vision is to digitize food supply chains to reduce waste and fraud, given that one-third of all food is currently wasted before it reaches our plates. Ramanan is also helping the world understand that hyperspectral technology can be used in many industries to help us “see the unseen” and augment our ability to sense and understand what is happening around us in a much more sophisticated way.

Anita Schjøll Brede and Maria Ritola co-founded Iris AI in 2015, an artificial intelligence company that is building an AI research assistant that drastically improves the efficiency of R&D research and breaks down silos between different industries. Their long-term vision is for Iris AI to become smart enough that she will become a scientist herself. Fast Company named Iris AI one of the 10 most innovative artificial intelligence companies for 2017.

Hla Hla Win co-founded 360ed in 2016, a startup that conducts teacher training and student education through virtual reality and augmented reality in Myanmar. They have already connected teachers from 128 private schools in Myanmar with schools teaching 21st-century skills in Silicon Valley and around the world. Their moonshot is to build a platform where any teacher in the world can share best practices in teachers’ training. As they succeed, millions of children in some of the poorest parts of the world will have access to a 21st-century education.

Min FitzGerald and Van Duesterberg co-founded Nutrigene in 2017, a startup that ships freshly formulated, tailor-made supplement elixirs directly to consumers. Their long-term vision is to help people optimize their health using actionable data insights, so people can take a guided, tailored approach to thriving into longevity.

Anna Skaya co-founded Basepaws in 2016, which created the first genetic test for cats and is building a community of citizen scientist pet owners. They are creating personalized pet products such as supplements, therapeutics, treats, and toys while also developing a database of genetic data for future research that will help both humans and pets over the long term.

Olivia Ramos co-founded Deep Blocks in 2016, a startup using artificial intelligence to integrate and streamline the processes of architecture, pre-construction, and real estate. As digital technologies, artificial intelligence, and robotics advance, it no longer makes sense for these industries to exist separately. Ramos recognized the tremendous value and efficiency it is now possible to unlock by using exponential technologies to build an integrated industry.

Please also visit our website to learn more about other female entrepreneurs, staff, and faculty who are pioneering the future through exponential technologies.

Posted in Human Robots

#431928 How Fast Is AI Progressing? Stanford’s ...

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems remain unsolved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index adopts a unique approach, and tries to aggregate data across many regimes. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, an explosive growth in startups focused around AI, and corresponding venture capital investment. The issue with this metric is that it measures AI hype as much as AI progress. The two might be correlated, but then again, they may not.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other service in the world. They can track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, those considered “positive” outweigh the “negative” by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute that we’re in an age of considerable AI hype, but the progress of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems still can’t answer questions about images very well, a task that combines natural language processing and image recognition.) Speech recognition on phone calls is almost at parity.
In other narrow fields, AIs are still catching up to humans. Translation might be good enough that you can usually get the gist of what’s being said, but still scores poorly on the BLEU metric for translation accuracy. The AI index even keeps track of how well the programs can do on the SAT test, so if you took it, you can compare your score to an AI’s.
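To make the BLEU metric mentioned above concrete, here is a minimal sketch of the idea: score a candidate translation by its n-gram overlap with a reference, discounted by a brevity penalty. Real BLEU uses up to 4-grams and multiple references; this stripped-down version is for illustration only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Toy BLEU: geometric mean of modified n-gram precisions times a
    brevity penalty. A simplification of the real metric for clarity."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

reference = "the cat is on the mat".split()
print(bleu(reference, reference))                    # exact match scores 1.0
print(bleu("the cat sat on mat".split(), reference))  # partial match scores lower
```

A translation can convey the gist perfectly well while still scoring poorly here, which is exactly the weakness of n-gram metrics the article alludes to.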
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate about the best method of assessing translation or natural language understanding. The Loebner prize, a simplified question-and-answer Turing Test, recently adopted Winograd Schema type questions, which rely on contextual understanding. AI has more difficulty with these.
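For readers unfamiliar with the format, the classic trophy/suitcase example shows why Winograd Schema questions are hard: changing a single adjective flips which noun the pronoun refers to, so word statistics alone cannot resolve it.

```python
# A classic Winograd schema: swapping one word flips the referent of
# "it", so resolving the pronoun requires world knowledge (trophies go
# inside suitcases), not just co-occurrence statistics.
template = "The trophy doesn't fit in the suitcase because it is too {}."
answers = {"big": "trophy", "small": "suitcase"}

for adjective, referent in answers.items():
    print(f'{template.format(adjective)}  ->  "it" = the {referent}')
```

Humans answer such pairs near-perfectly; systems that rely on surface patterns score much closer to chance.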
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even a more complex game like Go. The braver predictors who came up with timelines thought AlphaGo’s success was faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.
The AI index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com


#431925 How the Science of Decision-Making Will ...

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.
As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences that make academic research and insights accessible and usable by curious leaders, helping them understand the forces shaping the future.
Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.
We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?
Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.
Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing; it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, and knowledge from different domains and disciplines. This is more important now than ever.
At Worldview, we create learning experiences that are an amalgamation of all of those things.
LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?
BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.
You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.
There’s no single set of answers. There are as many unanswered questions as there are answered questions.
LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?
BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”
As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?
We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.
And heuristics are great in, say, 95 percent of situations. It’s that five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them because in some situations they can become biases.
For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.
Let’s take hiring for a moment.
How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.
LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience helped us become better leaders? What do you see happening next?
BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which tells the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did on the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.
In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.
For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.
Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.
Image Credit: Black Salmon / Shutterstock.com


#431869 When Will We Finally Achieve True ...

The field of artificial intelligence goes back a long way, but many consider it to have been officially born when a group of scientists at Dartmouth College got together for a summer in 1956. Computers had, over the preceding decades, come on in incredible leaps and bounds; they could now perform calculations far faster than humans. Given the incredible progress that had been made, optimism was rational. Genius computer scientist Alan Turing had already mooted the idea of thinking machines just a few years before. The scientists had a fairly simple idea: intelligence is, after all, just a mathematical process. The human brain was a type of machine. Pick apart that process, and you can make a machine simulate it.
The problem didn’t seem too hard: the Dartmouth scientists wrote, “We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This research proposal, by the way, contains one of the earliest uses of the term artificial intelligence. They had a number of ideas—maybe simulating the human brain’s pattern of neurons could work and teaching machines the abstract rules of human language would be important.
The scientists were optimistic, and their efforts were rewarded. Before too long, they had computer programs that seemed to understand human language and could solve algebra problems. People were confidently predicting there would be a human-level intelligent machine built within, oh, let’s say, the next twenty years.
It’s fitting that the industry of predicting when we’d have human-level intelligent AI was born at around the same time as the AI industry itself. In fact, it goes all the way back to Turing’s first paper on “thinking machines,” where he predicted that the Turing Test—machines that could convince humans they were human—would be passed in 50 years, by 2000. Nowadays, of course, people are still predicting it will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: “I’ve already predicted what your question will be, and no, I can’t really predict that.”
The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach (at least, not until you’re trying to work on chips at the atomic scale). You cannot say the same about artificial intelligence.
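To see why Moore's Law is checkable in a way AI forecasts are not, consider that it reduces to a single formula you can test against real chips. Here is a minimal sketch; the starting transistor count and the exact two-year doubling period are round-number assumptions for illustration, not figures from Intel or any survey:

```python
# Illustrative sketch: Moore's Law as a quantitative, falsifiable prediction.
# The starting count and two-year doubling period are round-number assumptions.
def projected_transistors(years_elapsed, start_count=2_000_000_000, doubling_years=2.0):
    """Project a chip's transistor count, assuming one doubling every `doubling_years`."""
    return start_count * 2 ** (years_elapsed / doubling_years)

# A decade at one doubling every two years means 2**5 = 32x growth,
# a number you can compare against shipping hardware.
growth_over_decade = projected_transistors(10) / projected_transistors(0)
print(growth_over_decade)
```

There is no analogous formula for "progress toward human-level AI," which is precisely the problem with dating its arrival.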
Common Mistakes
Stuart Armstrong’s survey looked for trends in these predictions. Specifically, there were two major cognitive biases he was looking for. The first was the idea that AI experts predict true AI will arrive (and make them immortal) conveniently just before they’d be due to die. This is the “Rapture of the Nerds” criticism people have leveled at Kurzweil—his predictions are motivated by fear of death, desire for immortality, and are fundamentally irrational. The ability to create a superintelligence is taken as an article of faith. There are also criticisms by people working in the AI field who know first-hand the frustrations and limitations of today’s AI.
The second was the idea that people always pick a time span of 15 to 20 years. That's soon enough to convince people they're working on something that could prove revolutionary (people are less impressed by efforts that will lead to tangible results centuries down the line), but far enough away that you're unlikely to be embarrassingly proved wrong in the meantime. Of the two, Armstrong found more evidence for the second: people were perfectly happy to predict AI would arrive after they died, although most didn't, but there was a clear bias towards "15–20 years from now" in predictions throughout history.
Measuring Progress
Armstrong points out that, if you want to assess the validity of a specific prediction, there are plenty of parameters you can look at. For example, the idea that human-level intelligence will be developed by simulating the human brain does at least give you a clear pathway that allows you to assess progress. Every time we get a more detailed map of the brain, or successfully simulate another part of it, we can tell that we are progressing towards this eventual goal, which will presumably end in human-level AI. We may not be 20 years away on that path, but at least you can scientifically evaluate the progress.
Compare this to those that say AI, or else consciousness, will “emerge” if a network is sufficiently complex, given enough processing power. This might be how we imagine human intelligence and consciousness emerged during evolution—although evolution had billions of years, not just decades. The issue with this is that we have no empirical evidence: we have never seen consciousness manifest itself out of a complex network. Not only do we not know if this is possible, we cannot know how far away we are from reaching this, as we can’t even measure progress along the way.
There is an immense difficulty in understanding which tasks are hard, which has continued from the birth of AI to the present day. Just look at that original research proposal, where understanding human language, randomness and creativity, and self-improvement are all mentioned in the same breath. We have great natural language processing, but do our computers understand what they’re processing? We have AI that can randomly vary to be “creative,” but is it creative? Exponential self-improvement of the kind the singularity often relies on seems far away.
We also struggle to understand what’s meant by intelligence. For example, AI experts consistently underestimated the ability of AI to play Go. Many thought, in 2015, it would take until 2027. In the end, it took two years, not twelve. But does that mean AI is any closer to being able to write the Great American Novel, say? Does it mean it’s any closer to conceptually understanding the world around it? Does it mean that it’s any closer to human-level intelligence? That’s not necessarily clear.
Not Human, But Smarter Than Humans
But perhaps we’ve been looking at the wrong problem. For example, the Turing test has not yet been passed, in the sense that AI cannot convince people it’s human in conversation; but computers’ calculating ability, and perhaps soon their ability at other tasks like pattern recognition and driving cars, far exceeds human levels. As “weak” AI algorithms make more decisions, and Internet of Things evangelists and tech optimists seek to find more ways to feed more data into more algorithms, the impact on society from this “artificial intelligence” can only grow.
It may be that we don’t yet have the mechanism for human-level intelligence, but it’s also true that we don’t know how far we can go with the current generation of algorithms. Those scary surveys that state automation will disrupt society and change it in fundamental ways don’t rely on nearly as many assumptions about some nebulous superintelligence.
Then there are those who point out we should be worried about AI for other reasons. Just because we can’t say for sure whether human-level AI will arrive this century, or ever, doesn’t mean we shouldn’t prepare for the possibility that the optimistic predictors could be correct. We need to ensure that human values are programmed into these algorithms, so that they understand the value of human life and can act in “moral, responsible” ways.
Phil Torres, at the Project for Future Human Flourishing, expressed it well in an interview with me. He points out that if we suddenly decided, as a society, that we had to solve the problem of morality—determine what was right and wrong and feed it into a machine—in the next twenty years…would we even be able to do it?
So, we should take predictions with a grain of salt. Remember, it turned out the problems the AI pioneers foresaw were far more complicated than they anticipated. The same could be true today. At the same time, we cannot be unprepared. We should understand the risks and take our precautions. When those scientists met at Dartmouth in 1956, they had no idea of the vast, foggy terrain before them. Sixty years later, we still don’t know how much further there is to go, or how far we can go. But we’re going somewhere.
Image Credit: Ico Maker / Shutterstock.com

Posted in Human Robots

#431671 The Doctor in the Machine: How AI Is ...

Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a sort of catch-all term to basically describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer versus sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then go ahead and scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies on a scope that humans just aren’t capable of. That’s huge.
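For readers curious what "distinguishing between sets of pixels" means in practice, here is a toy sketch of the underlying idea. It uses a plain logistic classifier on synthetic 8x8 patches rather than a real deep network or real medical images; nothing here reflects an actual diagnostic system:

```python
import math
import random

# Toy sketch (not a diagnostic model): a logistic classifier learns to
# separate "bright blob" patches from "uniform" patches, standing in for
# the cancer / not-cancer distinction described above. All data is synthetic.
random.seed(0)

def make_patch(has_blob):
    # An 8x8 "image" as a flat list of 64 pixel intensities in [0, 1].
    patch = [random.uniform(0.0, 0.3) for _ in range(64)]
    if has_blob:
        for i in range(24, 40):  # brighten a central band of pixels
            patch[i] += 0.6
    return patch

data = [(make_patch(label), label) for label in [0, 1] * 100]

w = [0.0] * 64  # one weight per pixel
b = 0.0
lr = 0.1

def predict(patch):
    # Weighted sum of pixel intensities squashed to a probability.
    z = b + sum(wi * xi for wi, xi in zip(w, patch))
    return 1.0 / (1.0 + math.exp(-z))

# A few epochs of plain gradient descent on the logistic loss.
for _ in range(20):
    for patch, label in data:
        err = predict(patch) - label
        for i in range(64):
            w[i] -= lr * err * patch[i]
        b -= lr * err

accuracy = sum((predict(p) > 0.5) == bool(y) for p, y in data) / len(data)
print(accuracy)
```

A real system differs mainly in scale: deep convolutional networks, millions of labeled scans, and far subtler features than a bright blob, but the core loop of "predict, measure error, adjust weights" is the same.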
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave its seal of approval to an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
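The comparison quoted above works out to roughly a thousandfold speedup:

```python
# Back-of-the-envelope check of the Watson figures quoted above.
human_hours = 160
watson_minutes = 10
speedup = (human_hours * 60) / watson_minutes
print(speedup)  # 960.0, i.e., nearly a thousand times faster
```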
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
Image Credit: Jolygon / Shutterstock.com

Posted in Human Robots