Tag Archives: robotics

#429632 How All Leaders Can Make the World a ...

This article is part of a new series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today's post, part three in the series, takes a more detailed look at leaders as humanitarians. Be sure to check out part two of the series, "How Leaders Dream Boldly to Bring New Futures to Life," and stay tuned for upcoming articles exploring leaders as technologists and innovators.
Recently, Mark Zuckerberg, Facebook’s founder and CEO, posted a public manifesto of nearly 6,000 words to Facebook’s community of almost 1.9 billion people called “Building a Global Community.” In the opening lines, readers quickly see that this isn’t about a product update or policy change, but rather focuses on a larger philosophical question that Zuckerberg courageously poses: “Are we building the world we all want?”
The manifesto was not without controversy, raising very public concerns from traditional media companies and questions from Washington insiders who actively wonder about Zuckerberg’s longer-term political aspirations.
Regardless of your interpretation of the manifesto’s intent, what’s remarkable is that a private sector CEO—someone who is typically laser-focused on growth projections and shareholder return—has declared a very ambitious aspiration to use the technology platform to promote and strengthen a global community.
As we enter an era of increasing globalization and connectivity, what is the responsibility of leaders, not just ones elected to public office, to support the betterment of the lives they touch? How might leaders support the foundational needs of their employees, customers, investors and strategic partners—to lead like a humanitarian?
What It Means to Lead Like a Humanitarian
To lead like a humanitarian requires making choices to transform scarce resources into abundant opportunities to positively and responsibly impact communities far beyond our own.
This might mean making big investments in solving our world’s biggest challenges. Or it might mean adopting a business model that intentionally serves a specific population in need or promotes sustainability, community service and employee engagement outside the office.
At its foundation, leading like a humanitarian means taking responsibility for how we connect our work—regardless of the job—to a meaningful purpose beyond growth and profitability.
Unlocking Possibilities by Liberating Scarce Resources
Technology is at the core of some of today’s biggest businesses, and organizations can have more impact now than in the past. While tech can be used to produce great products, it can also be aimed at solving big problems in the world by liberating resources that were once scarce and making them more abundant for more people.
What does this look like in practice? Apps abound that use the sensors and software on your phone for entertainment, everyday productivity, and socializing. But the same sensors, motivated by a different purpose, can be used to make your phone an intelligent aid for the blind, a diagnostic tool for doctors in remote areas, or an off-the-shelf radiation detector.
It’s not to say the first purpose is worthless—it’s great to relax with a quick game of Angry Birds every so often. But it isn’t the only goal worth pursuing, and with a dose of creativity and a different focus, the same skills used to produce games can make tools to help those in need.
This is an example using now-familiar mobile technology, but other technologies are coming with even greater potential for positive impact. These include breakthroughs in areas such as digital fabrication, biotechnology, and artificial intelligence and robotics. As these technologies arrive and become more accessible, we need to consider how they can be used for good too.
But technology isn’t the only resource to which leaders should pay heed.
Perhaps one of the most valuable resources technology can help liberate is human potential. No problem gets solved without someone taking up the challenge and working toward a solution. Leaders need to motivate and enable team members as much as possible.
And here, technology is proving a good tool too. A recent Deloitte report on global human capital trends found that the digitization of human capital processes is radically changing how employees engage with work, from the recruitment process through leadership development and career advancement.
Technology is enabling learning to move from episodic, generic training to continuous, blended social exchanges. Platforms such as Degreed, EdCast and Axonify move beyond bounded online classes by offering microlearning and on-demand learning opportunities.
Leaders need to assess if they are supporting a culture conducive to continuous learning and if they are empowering all employees to learn from and with each other.
As we widen our view of what’s possible, what actually happens in practice will change too. Together, the ability of people and technology to solve big problems has never been greater.
Developing New Business Models
As technology enables teams, big and small, to make an impact as never before, leaders and organizations need to reimagine who they are serving, what they are serving, and how they are serving them in viable, sustainable and profitable ways. Businesses no longer need to choose between maximizing profit and helping society. They can choose to do both.
Last year Fortune Magazine’s "Change the World" cover story featured 50 successful global companies that are doing well by doing good.
Its top-profiled company, GlaxoSmithKline, is making choices that sustain growth while helping people by reversing the traditional business model of maximizing revenue through protected drug patents. It no longer files patents in poor countries, enabling lower prices and improved access to medicine there. It is also partnering with NGOs to retrain workers on the proper administration of drugs and collaborating with governments to make its drugs part of national treatment programs for HIV and other widespread diseases.

Nearly five billion new people are expected to come online through high-speed internet in the next ten years. Now is the time to imagine what new opportunities are on the horizon—not just for tapping new markets and customers but for how you empower them too.
In an increasingly dynamic world, re-evaluating old business models is a key new strategy.
Leaders need to build proficiency in both critically examining current models and creatively exploring fundamentally new ways of thinking about value creation and capture.
Live a Higher Purpose Within Your Organization
One of the most powerful ways a leader can motivate and enable these changes is to actively and continuously clarify the organization’s higher purpose—the “why” that drives the work—and to make choices that are consistent with what the company stands for.
A growing body of social science research suggests all workers—especially those in “Generation Z”—are motivated by work that matters to them. In her book The Progress Principle, Harvard Business School professor Teresa Amabile argues that the most important motivator of great work is a sense of meaning and progress—the feeling that your work matters.
Leading as a humanitarian requires modeling meaning throughout the organization and behaving in ways congruent with core values, internally and externally.
Last year, Marc Benioff, the CEO of Salesforce, pushed for LGBT rights in Indiana, North Carolina and Georgia. In 2015, a company-wide survey revealed a gender pay gap at Salesforce, which Benioff remedied in what has been called the “$3 million raise.” In January, Salesforce said it would adjust pay again to level out the salaries of employees who had joined through acquisitions of companies that didn’t share its equal-pay policies. And the company has said it will monitor the gap as an ongoing initiative and commitment to employees.
In a Time article last year, Benioff stated his rationale for taking an active stance, “If I were to write a book today, I would call it CEO 2.0: How the Next Generation CEO Has to Be an Advocate for Stakeholders, Not Just Shareholders. That is, today CEOs need to stand up not just for their shareholders, but their employees, their customers, their partners, the community, the environment, schools, everybody. Anything that’s a key part of their ecosystem.”

There’s More Than One Way to Lead Like a Humanitarian
Leading like a humanitarian is a mindset and set of practices, not a single, defined position. But try asking a simple question as you make decisions about the direction of your organization: How does our work positively impact the world around us, and can we do better?
This shift in view looks beyond only productivity and profit toward empowerment and shared possibility. Equipped with ever-more-powerful technologies, capable of both greater harm and good, leaders need to consider how their decisions will make the world a better place.
Banner Image Credit: Zoe Brinkley

Posted in Human Robots

#429630 This Is What Happens When We Debate ...

Is there a uniform set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.
In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow—which proves to be no easy task.

Complex moral dilemmas often don’t have a clear-cut answer, and humans haven’t yet been able to translate ethics into a set of unambiguous rules. It’s questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.
So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it’s become a matter of mainstream debate in recent years.
OpenAI, for example, launched in late 2015 with a billion dollars in pledged funding to learn how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California, to debate best practices for building beneficial AI.
Concerns have been voiced about AI turning out racist or sexist, reflecting human bias in ways we never intended—but an AI can only learn from the data available to it, and in many cases that data is very human.
As much as the engineers in the film insist ethics can be “solved” and there must be a “definitive set of moral laws,” the philosopher argues that such a set of laws is impossible, because “ethics requires interpretation.”
There’s a sense of urgency to the conversation, and with good reason—all the while, the AI is listening and adjusting its algorithm. One of the most difficult to comprehend—yet most crucial—features of computing and AI is the speed at which it’s improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it, “The intelligence explosion will be faster than we can imagine.”
Futurists like Ray Kurzweil predict this intelligence explosion will lead to the singularity—a moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are what that moment will look like for humanity, and what we can do to ensure artificial superintelligence benefits rather than harms us.
The engineers and philosopher in the film are mortified when the AI offers to “act just like humans have always acted.” The AI’s idea to instead learn only from history’s religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or as the philosopher in the film so concisely puts it: "We can't rely on humanity to provide a model for humanity. That goes without saying."
If we’re unable to teach ethics to an AI, it will end up teaching itself, and what will happen then? It just may decide we humans can’t handle the awesome power we’ve bestowed on it, and it will take off—or take over.
Image Credit: The Guardian/YouTube

Posted in Human Robots

#429627 This Week’s Awesome Stories From ...

DRONES
Airbus Swears Its Pod/Car/Drone Is a Serious Idea Definitively
Jack Stewart | WIRED
"Airbus came up with a crazy idea to change all of that with Pop.Up, a conceptual two-passenger pod that clips to a set of wheels, hangs under a quadcopter, links with others to create a train, and even zips through a hyperloop tube…As humans pack into increasingly dense global mega-cities, they’ll need new ideas for transport to avoid gridlock."

ROBOTICS
How This Japanese Robotics Master Is Building Better, More Human Androids
Harry McCracken | Fast Company
"On the tech side, making a robot look and behave like a person involves everything from electronics to the silicone Ishiguro’s team uses to simulate skin. 'We have a technology to precisely control pneumatic actuators,' he says, noting, as an example of what they need to re-create, that 'the human shoulder has four degrees of freedom.'"

VIRTUAL REALITY
A Virtual Version of You That Can Visit Many VR Worlds
Rachel Metz | MIT Technology Review
"The Ready Room demo lets you choose your avatar’s gender, pick from two different body types (both somewhat cartoony), adjust a range of body traits like skin hue, weight, and head shape, and dial in such specific things as the shapes and spacing of eyes, nose, and lips. You can choose clothes, hairstyles, and sneakers, and you can keep a portfolio of the same avatar in different outfits or make several different ones."

PRIVACY
A New Bill Would Allow Employers to See Your Genetic Information—Unless You Pay a Fine
Julia Belluz | VOX
"Now this new bill, HR 1313—or the Preserving Employee Wellness Programs Act—seeks to clarify exactly how much personal health data employers can ask their employees to disclose. And in doing so, the bill also opens the door to employers requesting information from personal genetics tests or family medical histories. Unsurprisingly, HR 1313 has captured the media’s imagination. Vanity Fair suggested the bill 'could make one sci-fi dystopia a reality.'"
SELF-DRIVING CARS
Intel Buys Mobileye in $15.3 Billion Bid to Lead Self-Driving Car Market
Mark Scott | The New York Times
"Mobileye, founded in Jerusalem in 1999, has signed deals with several automakers, including Audi, for the use of its vision and camera technology, which uses machine learning and complex neuroscience to help drivers—and increasingly cars themselves—avoid obstacles on the road."
Image Credit: Italdesign

Posted in Human Robots

#429625 AI won’t kill you, but ignoring it ...

Relax. Artificial intelligence is making our lives easier, but won't be a threat to human existence, according to a panel of practitioners in the space.

Posted in Human Robots

#429619 New Artificial Synapse Bridges the Gap ...

From AlphaGo’s historic victory against world champion Lee Sedol to DeepStack’s sweeping win against professional poker players, artificial intelligence is clearly on a roll.
Part of the momentum comes from breakthroughs in artificial neural networks, which loosely mimic the multi-layer structure of the human brain. But that’s where the similarity ends. While the brain hums along on barely enough energy to power a light bulb, AlphaGo’s neural network runs on a whopping 1,920 CPUs and 280 GPUs, with a total power consumption of roughly one million watts—50,000 times more than its biological counterpart.
Extrapolate those numbers, and it’s easy to see that artificial neural networks have a serious problem—even if scientists design powerfully intelligent machines, they may demand too much energy to be practical for everyday use.
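The power gap described above is easy to sanity-check. A minimal sketch, assuming ~1,000,000 watts for AlphaGo's hardware (per the article) and ~20 watts for the human brain—a common neuroscience estimate, not a figure from the article itself:

```python
# Back-of-the-envelope check of the brain-vs-AlphaGo power gap.
# Assumptions: ~1,000,000 W for AlphaGo's 1,920 CPUs + 280 GPUs,
# ~20 W for the human brain (roughly a dim light bulb).
alphago_watts = 1_000_000
brain_watts = 20

ratio = alphago_watts / brain_watts
print(f"AlphaGo draws roughly {ratio:,.0f}x the brain's power budget")
# prints: AlphaGo draws roughly 50,000x the brain's power budget
```

The 50,000x figure in the article falls straight out of these two round numbers.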
Hardware structure is partly to blame. Our computers, with their separate processor and memory units, are simply not wired appropriately to support the type of massively parallel, energy-efficient computing that the brain elegantly performs.
Recently, a team from Stanford University and Sandia National Laboratories took a different approach to brain-like computing systems.
Rather than simulating a neural network with software, they made a device that behaves like the brain’s synapses—the connection between neurons that processes and stores information—and completely overhauled our traditional idea of computing hardware.
The artificial synapse, dubbed the “electrochemical neuromorphic organic device (ENODe),” may one day be used to create chips that perform brain-like computations with minimal energy requirements.
Made of flexible, organic material compatible with the brain, it may even lead to better brain-computer interfaces, paving the way for a cyborg future. The team published their findings in Nature Materials.
"It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics," says study lead author Dr. Alberto Salleo, a material engineer at Stanford.
The biological synapse
The brain’s computational architecture is fundamentally different from that of a classical computer. Rather than having separate processing and storage units, the brain uses synapses to perform both functions. Right off the bat, this arrangement is better: it saves the energy required to shuttle data back and forth from the processor to the memory module.
The synapse is a structure where the projections of two neurons meet. It looks a bit like a battery cell, with two membranes and a gap between. As the brain learns, electrical currents hop down one neuronal branch until they reach a synapse. There, they mix together with all the pulses coming from other branches and sum up into a single signal.
Neurotransmitters drift between synapses.
When sufficiently strong, the electricity triggers the neuron to release chemicals that drift toward a neighboring neuron’s synapse and, in turn, cause that neuron to fire.
Here’s the crucial bit: every time this happens, the synapse is modified slightly into a different state, in that it subsequently requires less (or more) energy to activate the downstream neuron. In fact, neuroscientists believe that different conductive states are how synapses store information.
The artificial synapse
The new device, ENODe, heavily borrows from nature’s design.
Like a biological synapse, the ENODe consists of two thin films made of flexible organic materials, separated by a thin gap containing an electrolyte that allows protons to pass through. The entire device is controlled by a master switch: when open, the device is in “read-only” mode; when closed, the device is “writable” and ready to store information.
To input data, researchers zapped the top layer of film with a small voltage, causing it to release an electron. To neutralize its charge, the film then “steals” a hydrogen ion from its bottom neighboring film. This redox reaction changes the device’s oxidation level, which in turn alters its conductivity.
Just like in biological synapses, the stronger or longer the initial electrical pulse, the more hydrogen ions get shuffled around, which corresponds to a larger change in conductivity. The response was reassuringly linear: with training, the researchers were able to predict, to within one percent uncertainty, the voltage needed to reach a particular state.
In all, the team programmed 500 distinct conductive states, every single one available for computation—a cornucopia compared to the two states (0 and 1) of a conventional computer bit, and perfect for supporting neuron-based computational models like artificial neural networks.
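To get a feel for what 500 programmable states buy you over a binary cell, here is a minimal sketch. The normalized 0–1 conductance range and the nearest-level programming rule are illustrative assumptions, not the authors' actual procedure:

```python
import numpy as np

# 500 evenly spaced conductance levels on a normalized 0-1 scale
# (range and spacing are illustrative assumptions, not device values).
N_STATES = 500
levels = np.linspace(0.0, 1.0, N_STATES)

def program(target):
    """Snap a target weight to the nearest available conductance state."""
    idx = int(np.argmin(np.abs(levels - target)))
    return idx, float(levels[idx])

idx, stored = program(0.4217)       # store an arbitrary analog value
step = 1.0 / (N_STATES - 1)         # spacing between adjacent states
worst_case_error = step / 2         # about 0.1% of full scale
```

A two-state cell, by contrast, could be off by as much as half the full range—which is why an analog weight fits so much more naturally into a 500-state device.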
The master switch design also helped solve a pesky problem that’s haunted previous generations of brain-like chips: the voltage-time dilemma, which states that you can’t simultaneously get both low-energy switching between states and long stability in a state.
This is because if ions only need a bit of voltage to move during switching (low energy), they can also easily diffuse away after the switch, which means the chip’s state can change randomly, explain Dr. J. Joshua Yang and Dr. Qiangfei Xia of the University of Massachusetts, who wrote an opinion piece about the study but were not directly involved in it.
The ENODe circumvents the problem with its “read-only” mode. Here, the master switch flips open, cutting off any external current to the device and preventing proton changes in the layers.

"A miniature version of the device could cut energy consumption by a factor of several million—well under the energy consumption of a biological synapse."

By decoupling the mechanism that maintains the state of the device from the one that governs switching, the team was able to use a switching voltage of roughly 0.5 millivolts to get to an adjacent state. For comparison, this is about one-tenth the energy needed for a state-of-the-art computer to move data from the processor to the memory unit.
Once locked into a state, the device could maintain it for 25 hours with 0.04 percent variation—a “striking feature” that puts ENODe well above other similar technologies in terms of reliability.
“Just like a battery, once you charge it, it stays charged” without needing additional energy input, explains study author Dr. A. Alec Talin.
ENODe’s energy requirement, though exceedingly low compared to current devices, is still thousands of times higher than the estimates for a single synapse. The team is working hard to miniaturize the device, which could drastically cut down energy consumption by a factor of several million—well under the energy consumption of a biological synapse.
Neuromorphic circuits
To show that ENODes actually mimic synapses, the team brought their design to life using biocompatible plastic and put the device through a series of tests.
First, they integrated the ENODe into an electrical circuit and demonstrated its ability to learn a textbook experiment: Pavlovian conditioning, where one stimulus is gradually associated with another after repeated exposure—like linking the sound of a bell to an involuntary mouth-watering response.
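The conditioning experiment can be caricatured in a few lines of code: a single "synaptic" weight strengthens with every paired presentation until the bell alone triggers a response. The learning rate and threshold below are illustrative assumptions, not parameters of the actual circuit:

```python
def condition(n_pairings, lr=0.25, threshold=0.5):
    """Strengthen a bell->salivation 'synapse' with each bell+food pairing."""
    w = 0.0                       # initial strength: bell alone does nothing
    for _ in range(n_pairings):
        w = min(1.0, w + lr)      # each paired trial strengthens the link
    responds_to_bell_alone = w > threshold
    return w, responds_to_bell_alone

print(condition(0))   # before training: (0.0, False) -- no response
print(condition(4))   # after repeated pairings: (1.0, True) -- conditioned
```

In the ENODe circuit, the role of `w` is played by the device's conductance state, which each paired stimulus nudges to a higher level.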
Next, the team implemented a three-layer network and trained it to identify hand-written digits—a type of benchmarking task that researchers often run artificial neural networks through to test their performances.
Because building a physical neural network is technologically challenging, for this test the team instead used a computational model of their device to simulate one.
The ENODe-based neural network managed an accuracy between 93 and 97 percent, far higher than that achieved by previous brain-like chips, the authors report.
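As a rough illustration of what "simulating the device" might look like, the sketch below runs a three-layer forward pass whose weights are snapped to one of 500 conductance levels. The layer sizes, random weights, and tanh activation are assumptions for illustration, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, n_states=500, lo=-1.0, hi=1.0):
    """Snap each weight to the nearest of n_states evenly spaced levels,
    mimicking storage in a device with discrete conductance states."""
    levels = np.linspace(lo, hi, n_states)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# A small three-layer network: 64 inputs -> 32 hidden -> 10 outputs,
# with every weight constrained to the 500 available states.
W1 = quantize(rng.uniform(-1, 1, (64, 32)))
W2 = quantize(rng.uniform(-1, 1, (32, 10)))

def forward(x):
    hidden = np.tanh(x @ W1)      # hidden layer with tanh nonlinearity
    return hidden @ W2            # raw class scores

x = rng.uniform(0, 1, 64)         # stand-in for a flattened digit image
scores = forward(x)               # one score per digit class, 0-9
```

With 500 levels, quantization barely perturbs each weight, which is consistent with the simulated network losing little accuracy relative to an unconstrained one.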
Computational prowess aside, the ENODe is also particularly well suited to interfacing with the brain. The device is made of an organic material that, while not present in brain tissue, is biocompatible and frequently used as a scaffold for growing cells. The material is also flexible, bendy enough to hug irregular surfaces, and may allow researchers to pack multiple ENODes into a tiny volume at high density.
Then there’s the device itself, with its 500 conductance states, which “naturally interfaces with the analog world, with no need for the traditional power-hungry and time consuming analog-to-digital converters,” remark Yang and Xia.
“[This] opens up a possibility of interfacing live biological cells [with circuits] that can do computing via artificial synapses,” says Talin. “We think that could have huge implications in the future for creating much better brain-machine interfaces.”
Image Credit: Shutterstock

Posted in Human Robots