
#432568 Tech Optimists See a Golden ...

Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, imagining AI will enable us to become superhuman, enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.

Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is, how do we get there?

Economist John Maynard Keynes notably argued in favor of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was, “In the long run, we are all dead.” After all, if it takes 50 years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.

Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, the automation of labor might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls, especially in exacerbating inequality within societies where AI takes on a larger role. A new report from the Institute for Public Policy Research raises deep concerns about the future of work.

Uneven Distribution
While the report doesn’t foresee the same gloom and doom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290 billion worth of wages in today’s economy could potentially be automated with current technology. But these are disproportionately jobs held by people who are already suffering from social inequality.

Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often suggested is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.

No Room for Beginners
Optimists say algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to humans. This boosts productivity and provides invaluable tools for workers.

But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours. Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication, traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.

Automating Our Biases
The adoption of algorithms in employment has already had negative impacts on equality. Cathy O’Neil, a Harvard mathematics PhD, raises these concerns in her excellent book Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of their society, whether racial or based on gender and sexuality.

Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to re-offend than white offenders; they receive correspondingly longer sentences. It needn’t necessarily be that bias has been actively programmed; perhaps the algorithms just learn from historical data, but this means they will perpetuate historical inequalities.
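The mechanism can be sketched in a few lines: a model that does nothing more than learn decision rates from biased historical data will faithfully reproduce that bias. This is a deliberately simplified, hypothetical illustration (the groups, numbers, and threshold are all invented), not a description of any real screening product.

```python
# Hypothetical sketch: a "model" that only learns hiring rates from
# (biased) historical decisions will reproduce that bias when it predicts.
from collections import defaultdict

def fit_hire_rates(history):
    """Learn P(hired | group) from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def recommend(rates, group, threshold=0.5):
    """Recommend a candidate only if their group was historically favored."""
    return rates[group] >= threshold

# Invented historical data: group A was hired far more often than group B.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

rates = fit_hire_rates(history)   # {"A": 0.8, "B": 0.2}
print(recommend(rates, "A"))      # True  - historically favored group passes
print(recommend(rates, "B"))      # False - disfavored group is screened out
```

No bias was "programmed in" anywhere: the skew in the training data alone is enough to make the model discriminate, which is exactly the perpetuation-of-historical-inequality effect described above.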

Take the candidate-screening software HireVue, used by many major corporations to assess new employees. It analyzes the “verbal and non-verbal cues” of candidates, comparing them to employees who historically did well. According to Cathy O’Neil, such systems are “using people’s fear and trust of mathematics to prevent them from asking questions.” With no transparency or understanding of how the algorithm generates its results, and no consensus over who’s responsible for those results, discrimination can occur automatically, on a massive scale.

Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that due to the accumulation of wealth in older generations, millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favors people who are already wealthy.

Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolize it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider and strengthen existing unfair power structures, is it something worth striving for?

What Makes a Utopia?
We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned—the things it seeks to maximize are not the things we want maximized. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. Growing wealth with increased inequality is not progress.

Some people will take the position that there are always winners and losers in society, and that any attempt to redress inequality will stifle economic growth and leave everyone worse off. Others will see this as an argument for a new economic model, based around universal basic income. Any moves towards such a model will need to ensure that it is affordable, sustainable, and doesn’t lead towards an entrenched two-tier society.

Walter Scheidel’s book The Great Leveler is a sweeping survey of inequality across all of human history, from prehistoric cave-dwellers to the 21st century. He argues that only revolutions, wars, and other catastrophes have historically reduced inequality: a perfect example is the Black Death in Europe, which (by reducing the population, and therefore the available labor supply) increased wages and reduced inequality. Meanwhile, our response to the financial crisis of 2007–8 may only have made the problem worse.

But in a world of nuclear weapons, of biowarfare, of cyberwarfare—a world of unprecedented, complex, distributed threats—the consequences of these “safety valves” could be worse than ever before. Inequality increases the risk of global catastrophe, and global catastrophes could scupper any progress towards the techno-utopia that the utopians dream of. And a society with entrenched inequality is no utopia at all.

Image Credit: OliveTree / Shutterstock.com

Posted in Human Robots

#432431 Why Slowing Down Can Actually Help Us ...

Leah Weiss believes that when we pay attention to how we do our work—our thoughts and feelings about what we do and why we do it—we can tap into a much deeper reservoir of courage, creativity, meaning, and resilience.

As a researcher, educator, and author, Weiss teaches a course called “Leading with Compassion and Mindfulness” at the Stanford Graduate School of Business, one of the most competitive MBA programs in the world, and runs programs at HopeLab.

Weiss is the author of the new book How We Work: Live Your Purpose, Reclaim Your Sanity, and Embrace the Daily Grind, endorsed by the Dalai Lama, among others. I caught up with Leah to learn more about how the practice of mindfulness can deepen our individual and collective purpose and passion.

Lisa Kay Solomon: We’re hearing a lot about mindfulness these days. What is mindfulness and why is it so important to bring into our work? Can you share some of the basic tenets of the practice?

Leah Weiss, PhD: Mindfulness is, in its most literal sense, “the attention to inattention.” It’s as simple as noticing when you’re not paying attention and then re-focusing. It is prioritizing what is happening right now over internal and external noise.

The ability to work well with difficult coworkers, handle constructive feedback and criticism, regulate emotions at work—all of these things can come from regular mindfulness practice.

Some additional benefits of mindfulness are a greater sense of compassion (both self-compassion and compassion for others) and a way to seek and find purpose in even mundane things (and especially at work). From the business standpoint, mindfulness at work leads to increased productivity and creativity, mostly because when we are focused on one task at a time (as opposed to multitasking), we produce better results.

We spend more time with our co-workers than we do with our families; if our work relationships are negative, we suffer both mentally and physically. Even worse, we take all of those negative feelings home with us at the end of the work day. The antidote to this prescription for unhappiness is to have clear, strong purpose (one third of people do not have purpose at work and this is a major problem in the modern workplace!). We can use mental training to grow as people and as employees.

LKS: What are some recommendations you would make to busy leaders who are working around the clock to change the world?

LW: I think the most important thing is to remember to tend to our relationship with ourselves while trying to change the world. If we’re beating up on ourselves all the time we’ll be depleted.

People passionate about improving the world can get into habits of believing self-care isn’t important. We demand a lot of ourselves. It’s okay to fail, to mess up, to make mistakes—what’s important is how we learn from those mistakes and what we tell ourselves about those instances. What is the “internal script” playing in your own head? Is it positive, supporting, and understanding? It should be. If it isn’t, you can work on it. And the changes you make won’t just improve your quality of life, they’ll make you more resilient to weather life’s inevitable setbacks.

A close second recommendation is to always consider where everyone in an organization fits and to help everyone (including yourself) find purpose. When you know your own purpose and show others theirs, you can motivate a team and help everyone on it take pride in their work. To get at this, ask the people on your team what really lights them up. What sucks their energy and depletes them? If we know our own answers to these questions and relate them to the people we work with, we can create more engaged organizations.

LKS: Can you envision a future where technology and mindfulness can work together?

LW: Technology and mindfulness are already starting to work together. Some artificial intelligence companies are considering things like mindfulness and compassion when building robots, and there are numerous apps that target spreading mindfulness meditations in a widely-accessible way.

LKS: Looking ahead at our future generations who seem more attached to their devices than ever, what advice do you have for them?

LW: It’s unrealistic to say “stop using your device so much,” so instead, my suggestion is to make time for doing things like scrolling social media and make the same amount of time for putting your phone down and watching a movie or talking to a friend. No matter what it is that you are doing, make sure you have meta-awareness or clarity about what you’re paying attention to. Be clear about where your attention is and recognize that you can be a steward of attention. Technology can support us in this or pull us away from this; it depends on how we use it.

Image Credit: frankie’s / Shutterstock.com


#432311 Everyone Is Talking About AI—But Do ...

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Elon Musk engaged again in a similar exchange with Harvard professor Steven Pinker. Musk tweeted that Pinker doesn’t understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they’re on a totally different spectrum than human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.

However, technologists no longer need to perform feature extraction by hand, and they have access to far greater computing power. These advancements are welcome, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. Therefore, he views AI as an entirely semantic argument.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”
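Decker’s porch-light example can be made concrete with a few lines of hypothetical code (the threshold and sensor values are invented for illustration). The light “knows” it is dark only in the sense that a fixed comparison, written in advance by a human, happens to evaluate to true:

```python
# A sketch of the photodiode porch light: no learning, no goals - the
# behavior is fully determined by a human-chosen threshold and one comparison.
DARK_THRESHOLD_LUX = 10.0  # hypothetical cutoff, picked by an engineer

def porch_light_on(ambient_lux: float) -> bool:
    # The "intelligence" is entirely this preexisting line of logic.
    return ambient_lux < DARK_THRESHOLD_LUX

print(porch_light_on(2.5))    # True  (night)
print(porch_light_on(800.0))  # False (daylight)
```

Whether one calls this "AI" is, as Decker says, a semantic argument: every bit flips only as directed by logic that was written before the device ever sensed anything.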

Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-powered products are already powering past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.

Image Credit: vrender / Shutterstock.com


#432021 Unleashing Some of the Most Ambitious ...

At Singularity University, we are unleashing a generation of women who are smashing through barriers and starting some of the most ambitious technology companies on the planet.

Singularity University was founded in 2008 to empower leaders to use exponential technologies to solve our world’s biggest challenges. Our flagship program, the Global Solutions Program, has historically brought 80 entrepreneurs from around the world to Silicon Valley for 10 weeks to learn about exponential technologies and create moonshot startups that improve the lives of a billion people within a decade.

After nearly 10 years of running this program, we can say that about 70 percent of our successful startups have been founded or co-founded by female entrepreneurs (see below for inspiring examples of their work). This is in sharp contrast to the typical 10–20 percent of venture-backed tech companies that have a female founder, as reported by TechCrunch.

How are we so dramatically changing the game? While 100 percent of the credit goes to these courageous women, as both an alumna of the Global Solutions Program and our current vice chair of Global Grand Challenges, I want to share my reflections on what has worked.

At the most basic level, it is essential to deeply believe in the inherent worth, intellectual genius, and profound entrepreneurial caliber of women. While this may seem obvious, this is not the way our world currently thinks—we live in a world that sees women’s ideas, contributions, work, and existence as inherently less valuable than men’s.

For example, a 2017 Harvard Business Review article noted that even when women engage in the same behaviors and work as men, their work is considered less valuable simply because a woman did the job. An additional 2017 Harvard Business Review article showed that venture capitalists are significantly less likely to invest in female entrepreneurs and are more likely to ask men questions about the potential success of their companies while grilling women about the potential downfalls of their companies.

This doubt and lack of recognition of the genius and caliber of women is also why women are still paid less than men for identical work. It is why women’s work often gets buried in “number two” roles supporting men in leadership positions, and why women are expected to take on a second shift at home, managing tedious household chores on top of their careers. I would also argue that these views, along with the rampant sexual harassment, assault, and violence against women that exist today, stem from stubborn, historical, patriarchal views of women as living for the benefit of men rather than for their own sovereignty and inherent value.

As with any other business, Singularity University has not been immune to these biases but is resolutely focused on helping women achieve intellectual genius and global entrepreneurial caliber by harnessing powerful exponential technologies.

We create an environment where women can physically and intellectually thrive free of harassment to reach their full potential, and we are building a broader ecosystem of alumni and partners around the world who not only support our female entrepreneurs throughout their entrepreneurial journeys, but who are also sparking and leading systemic change in their own countries and communities.

Respecting the Intellectual Genius and Entrepreneurial Caliber of Women
The entrepreneurial legends of our time—Steve Jobs, Elon Musk, Mark Zuckerberg, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin—are men who have all built their empires using exponential technologies. Exponential technologies helped these men succeed faster and with greater impact thanks to Moore’s Law and the Law of Accelerating Returns, which holds that any digital technology (such as computing, software, artificial intelligence, robotics, quantum computing, biotechnology, nanotechnology, etc.) becomes more sophisticated while dramatically falling in price, enabling rapid scaling.
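As a rough, hypothetical illustration of what “dramatically falling in price” means in practice: if price-performance doubles every 18 months (an assumed rate in the spirit of Moore’s Law, not a measured one), the cost of a fixed amount of computation falls roughly a hundredfold in a decade.

```python
# Back-of-the-envelope arithmetic behind the exponential claim, using an
# assumed 18-month doubling period for price-performance.
def cost_after(years: float, start_cost: float = 1.0,
               doubling_period_years: float = 1.5) -> float:
    """Relative cost per unit of performance after `years` of steady doubling."""
    return start_cost / (2 ** (years / doubling_period_years))

for years in (0, 5, 10):
    print(years, round(cost_after(years), 4))
# After 10 years, cost_after(10) is about 0.0098 - roughly a 100x drop.
```

This is why an entrepreneur can plan a product that is uneconomical today but viable in a few years: the curve, not the current price point, sets the roadmap.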

Knowing this, an entrepreneur can plot her way to an ambitious global solution over time, releasing new applications just as the technology and market are ready. Furthermore, these rapidly advancing technologies often converge, creating new tools and opportunities for innovators to devise novel solutions to challenges that were previously impossible to solve.

For various reasons, women have not pursued exponential technologies as aggressively as men (or were prevented or discouraged from doing so).

While women in wealthy countries like the United States are founding firms at a higher rate than ever, the majority are small businesses in linear industries that have been around for hundreds of years, such as social assistance, health, education, administrative, or consulting services. In lower-income countries, international aid agencies and nonprofits often encourage women to pursue careers in traditional handicrafts, micro-enterprise, and micro-finance. While these jobs have historically helped women escape poverty and gain financial independence, they have done little to help women realize the enormous power, influence, wealth, and ability to transform the world for the better that come from building companies, nonprofits, and solutions grounded in exponential technologies.

We need women to be working with exponential technologies today in order to be powerful leaders in the future.

Participants who enroll in our Global Solutions Program spend the first few weeks of the program learning about exponential technologies from the world’s experts and the final weeks launching new companies or nonprofits in their area of interest. We require that women (as well as men) utilize exponential technologies as a condition of the program.

In this sense, at Singularity University women start their endeavors with all of us believing and behaving in a way that assumes they can achieve global impact at the level of our world’s most legendary entrepreneurs.

Creating an Environment Where Women Can Thrive
While challenging women to embrace exponential technologies is essential, it is also important to create an environment where women can thrive. In particular, this means ensuring women feel at home on our campus by ensuring gender diversity, aggressively addressing sexual harassment, and flipping the traditional culture from one that penalizes women, to one that values and supports them.

While women were initially only a small minority of our Global Solutions Program, in 2014, we achieved around 50% female attendance—a statistic that has since held over the years.

This is not due to a quota—every year we turn away extremely qualified women from our program (and we are working on reformulating the program to allow more people to participate in the future). While part of our recruiting success is due to the efforts of our marketing team, we also benefited from the efforts of some of our early female founders, staff, faculty, and alumnae, including Susan Fonseca, Emeline Paat-Dahlstrom, Kathryn Myronuk, Lajuanda Asemota, Chiara Giovenzana, and Barbara Silva Tronseca.

As early champions of Singularity University these women not only launched diversity initiatives and personally reached out to women, but were crucial role models holding leadership roles in our community. In addition, Fonseca and Silva also both created multiple organizations and initiatives outside of (or in conjunction with) the university that produced additional pipelines of female candidates. In particular, Fonseca founded Women@TheFrontier as well as other organizations focusing on women, technology and innovation, and Silva founded BestInnovation (a woman’s accelerator in Latin America), as well as led Singularity University’s Chilean Chapter and founded the first SingularityU Summit in Latin America.

These women’s efforts in globally scaling Singularity University have been critical in ensuring women around the world now see Singularity University as a place where they can lead and shape the future.

Also, thanks to Google (Alphabet) and many of our alumni and partners, we were able to provide full scholarships to any woman (or man) to attend our program regardless of their economic status. Google committed significant funding for full scholarships while our partners around the world also hosted numerous Global Impact Competitions, where entrepreneurs pitched their solutions to their local communities with the winners earning a full scholarship funded by our partners to attend the Global Solution Program as their prize.

Google and our partners’ support helped individuals attend our program and created a wider buzz around exponential technology and social change around the world in local communities. It led to the founding of 110 SU chapters in 55 countries.

Another vital aspect of our work in supporting women has been striving to create a harassment-free environment. Across Silicon Valley, more than 60 percent of women report that while they are trying to build their companies or get their work done, they are also dealing with physical and sexual harassment while being demeaned and excluded in other ways in the workplace. We have taken action to educate and train our staff on how to deal with situations should they occur. All staff receive training on harassment when they join Singularity University, and all Global Solutions Program participants attend mandatory training on sexual harassment when they first arrive on campus. We also have male and female wellness counselors available who can offer support to both individuals and teams of entrepreneurs throughout the entire program.

While at a minimum our campus must be physically safe for women, we also strive to create a culture that values women and supports them in the additional challenges and expectations they face. For example, one of our 2016 female participants, Van Duesterberg, was pregnant during the program and said that instead of having people doubt her commitment to her startup or make her prove she could handle having a child and running a startup at the same time, people went out of their way to help her.

“I was the epitome of a person not supposed to be doing a startup,” she said. “I was pregnant and would need to take care of my child. But Singularity University was supportive and encouraging. They made me feel super-included and that it was possible to do both. I continue to come back to campus even though the program is over because the network welcomes me and supports me rather than shuts me out because of my physical limitations. Rather than making me feel I had to prove myself, everyone just understood me and supported me, whether it was bringing me healthy food or recommending funders.”

Another strength that we have in supporting women is that after the Global Solutions Program, entrepreneurs have access to a much larger ecosystem.

Many entrepreneurs take part in SU Ventures, which can provide further support to startups as they develop, and we now have a larger community of over 200,000 people in almost every country. These members have often attended other Singularity University programs and events and are committed to our vision of the future. These women and men include business executives, Fortune 500 companies, investors, nonprofit and government leaders, technologists, members of the media, and other movers and shakers in the world. They have made introductions for our founders, collaborated with them on business ventures, invested in them, and showcased their work at high-profile events around the world.

Building for the Future
While our Global Solutions Program is making great strides in supporting female entrepreneurs, there is always more work to do. We are now focused on achieving the same degree of female participation across all of our programs and actively working to recruit and feature more female faculty and speakers on stage. As our community grows and scales around the world, we are also intent on how best to uphold our values and policies around sexual harassment across diverse locations and cultures. And like all businesses everywhere, we are focused on recruiting more women to serve at senior leadership levels within SU. As we make our way forward, we hope that you will join us in boldly leading this change and recognizing the genius and power of female entrepreneurs.

Meet Some of Our Female Moonshots
While we have many remarkable female entrepreneurs in the Singularity University community, the list below features a few of the women who have founded or co-founded companies at the Global Solutions Program that have launched new industries and are on their way to changing the way our world works for millions if not billions of people.

Jessica Scorpio co-founded Getaround in 2009. Getaround was one of the first car-sharing platforms, allowing anyone to rent out their car using a smartphone app. Getaround was a revolutionary idea in 2009, not only because smartphones and apps were still in their infancy, but because it was unthinkable that a technology startup could disrupt the major entrenched car, transport, and logistics companies. Scorpio’s early insights and pioneering entrepreneurial work brought to life new ways that humans relate to car sharing and the future self-driving car industry. Scorpio and Getaround have won numerous awards, and Getaround now serves over 200,000 members.

Paola Santana co-founded Matternet in 2011, which pioneered the commercial drone transport industry. In 2011, only the military, hobbyists, and the film industry used drones. Matternet demonstrated that drones could be used for commercial transport in short point-to-point deliveries of high-value goods, laying the groundwork for drone transport around the world as well as some of the early thinking behind the future flying car industry. Santana was also instrumental in shaping regulations for the use of commercial drones around the world, making the industry possible.

Sara Naseri co-founded Qurasense in 2014, a life sciences startup that analyzes women’s health through menstrual blood, allowing women to track their health every month. Naseri is shifting our understanding of menstrual blood from a waste product and something “not to be talked about” to a rich, non-invasive, abundant source of information about women’s health.

Abi Ramanan co-founded ImpactVision in 2015, a software company that rapidly analyzes the quality and characteristics of food through hyperspectral images. Her long-term vision is to digitize food supply chains to reduce waste and fraud, given that one-third of all food is currently wasted before it reaches our plates. Ramanan is also helping the world understand that hyperspectral technology can be used in many industries to help us “see the unseen” and augment our ability to sense and understand what is happening around us in a much more sophisticated way.

Anita Schjøll Brede and Maria Ritola co-founded Iris AI in 2015, an artificial intelligence company that is building an AI research assistant that drastically improves the efficiency of R&D and breaks down silos between different industries. Their long-term vision is for Iris AI to become smart enough that she will become a scientist herself. Fast Company named Iris AI one of the 10 most innovative artificial intelligence companies of 2017.

Hla Hla Win co-founded 360ed in 2016, a startup that conducts teacher training and student education through virtual reality and augmented reality in Myanmar. They have already connected teachers from 128 private schools in Myanmar with schools teaching 21st-century skills in Silicon Valley and around the world. Their moonshot is to build a platform where any teacher in the world can share best practices in teachers’ training. As they succeed, millions of children in some of the poorest parts of the world will have access to a 21st-century education.

Min FitzGerald and Van Duesterberg co-founded Nutrigene in 2017, a startup that ships freshly formulated, tailor-made supplement elixirs directly to consumers. Their long-term vision is to help people optimize their health using actionable data insights, so people can take a guided, tailored approach to thriving into longevity.

Anna Skaya co-founded Basepaws in 2016, which created the first genetic test for cats and is building a community of citizen scientist pet owners. They are creating personalized pet products such as supplements, therapeutics, treats, and toys while also developing a database of genetic data for future research that will help both humans and pets over the long term.

Olivia Ramos co-founded Deep Blocks in 2016, a startup using artificial intelligence to integrate and streamline the processes of architecture, pre-construction, and real estate. As digital technologies, artificial intelligence, and robotics advance, it no longer makes sense for these industries to exist separately. Ramos recognized the tremendous value and efficiency it is now possible to unlock with exponential technologies by integrating these industries for the future.

Please also visit our website to learn more about other female entrepreneurs, staff, and faculty who are pioneering the future through exponential technologies.


#431928 How Fast Is AI Progressing? Stanford’s ...

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems are not yet solved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index adopts a unique approach, aggregating data across many dimensions. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, and published papers. The results are what you might expect: a tenfold increase in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with these metrics is that they measure AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
The index also scrapes data from the popular coding website GitHub, which hosts more source code than any other site in the world. This makes it possible to track the amount of AI-related software people are creating, as well as the interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, those considered “positive” outweigh the “negative” by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute the fact that we’re in an age of considerable AI hype, but the history of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index also tracks the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems still struggle to answer questions about images, a task that combines natural language processing and image recognition.) Speech recognition on phone calls is almost at parity with humans.
In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what’s being said, but it still scores poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you took the test, you can compare your score to an AI’s.
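To make the BLEU metric concrete, here is a toy implementation. This is a simplified sketch for illustration only: real BLEU uses up to 4-gram precision, multiple reference translations, and corpus-level statistics, whereas this version scores a single candidate against a single reference using unigram and bigram precision.

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=2):
    """Toy BLEU: geometric mean of modified n-gram precisions,
    scaled by a brevity penalty. Inputs are lists of tokens."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # "Modified" precision: clip each candidate n-gram count
        # by its count in the reference, so repeating a word
        # cannot inflate the score.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat sat on the mat".split()
print(bleu("the cat sat on the mat".split(), reference))  # exact match: 1.0
print(bleu("the cat on mat".split(), reference))          # partial: below 1.0
```

The key design point is the clipping step: a candidate that just repeats a common reference word over and over would otherwise achieve perfect unigram precision, which is exactly the kind of gaming the modified precision prevents.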
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate the best method of assessing translation or natural language understanding. The Loebner Prize, a simplified question-and-answer Turing test, recently adopted Winograd schema-style questions, which rely on contextual understanding. AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even at the more complex game of Go. The braver predictors who came up with timelines thought AlphaGo’s success came faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide, as more and more tasks are turned into algorithms and accomplished by software, rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps it will become possible to measure an AI system’s ability to learn and adapt to the work routines of humans in office-based tasks.
The AI index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of computer science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about overhyped “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com
