
#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the IT cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see (what we perceive) is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction, whether for perception or reasoning, guides computer scientists building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
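The evolutionary loop described above (show a batch of images, keep the top scorers, breed new candidates from them) can be sketched as a simple genetic algorithm. This is an illustrative toy, not the authors' actual XDREAM code: the monitored neuron's firing rate is replaced by a stand-in fitness function with a hidden "preferred" code, and the genome length and mutation settings are invented for the example.

```python
import random

POP_SIZE = 40     # images shown per generation, as in the study
N_SURVIVORS = 10  # top images carried over
N_NEW = 30        # offspring generated per generation
GENOME_LEN = 64   # length of the generator's latent code (hypothetical)

def neuron_response(genome):
    """Stand-in for the recorded firing rate of the monitored neuron:
    a toy fitness rewarding genomes close to a hidden 'preferred' code."""
    return -sum((g - 0.5) ** 2 for g in genome)

def breed(parent_a, parent_b, mutation_rate=0.05):
    """Uniform crossover followed by occasional Gaussian mutation."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
            for g in child]

population = [[random.random() for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
history = []

for generation in range(250):
    ranked = sorted(population, key=neuron_response, reverse=True)
    survivors = ranked[:N_SURVIVORS]   # the 10 most stimulating images
    offspring = [breed(random.choice(survivors), random.choice(survivors))
                 for _ in range(N_NEW)]  # 30 new candidate images
    population = survivors + offspring
    history.append(neuron_response(survivors[0]))
```

Because the best candidates are carried over unchanged (elitism), the top response can only improve from one generation to the next. In the real experiment the "fitness function" is a live neuron, which is exactly what makes the evolved images informative.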

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do seem to rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, took a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also showed that today’s ANNs generalize well enough to drive real neurons in predictable ways. The results could help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception: both in our computational models of sensory processes and in our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots

#434772 Traditional Higher Education Is Losing ...

Should you go to graduate school? If so, why? If not, what are your alternatives? Millions of young adults across the globe—and their parents and mentors—find themselves asking these questions every year.

Earlier this month, I explored how exponential technologies are rising to meet the needs of the rapidly changing workforce.

In this blog, I’ll dive into a highly effective way to build the business acumen and skills needed to make the most significant impact in these exponential times.

To start, let’s dive into the value of graduate school versus apprenticeship—especially during this time of extraordinarily rapid growth, and the micro-diversification of careers.

The True Value of an MBA
Not all graduate schools are created equal.

For complex technical trades like medicine, engineering, and law, formal graduate-level training provides a critical foundation for safe, ethical practice (until these trades are fully augmented by artificial intelligence and automation…).

For the purposes of today’s blog, let’s focus on the value of a Master of Business Administration (MBA) degree, compared to acquiring your business acumen through various forms of apprenticeship.

The Waning of Business Degrees
Ironically, business schools are facing a tough business problem. The rapid rate of technological change, a booming job market, and the digitization of education are chipping away at the traditional graduate-level business program.

The data speaks for itself.

The Decline of Graduate School Admissions
Enrollment in two-year, full-time MBA programs in the US fell by more than one-third from 2010 to 2016.

While top business schools (e.g., Stanford, Harvard, and Wharton) were previously insulated from the decline in applications, this year they too felt the waning interest in MBA programs.

Harvard Business School: 4.5 percent decrease in applications, the school’s biggest drop since 2005.
Wharton: 6.7 percent decrease in applications.
Stanford Graduate School: 4.6 percent decrease in applications.

Another signal of change began unfolding over the past week. You may have read news headlines about an emerging college admissions scam, which implicates highly selective US universities, sports coaches, parents, and students in a conspiracy to game the undergraduate admissions process.

Already, students are filing multibillion-dollar civil lawsuits arguing that the scheme has devalued their degrees or denied them a fair admissions opportunity.

MBA Graduates in the Workforce
To meet today’s business needs, startups and massive companies alike are increasingly hiring technologists, developers, and engineers in place of the MBA graduates they may have preferentially hired in the past.

While 85 percent of US employers expect to hire MBA graduates this year (a decrease from 91 percent in 2017), 52 percent of employers worldwide expect to hire graduates with a master’s in data analytics (an increase from 35 percent last year).

We’re also seeing the waning of MBA degree holders at the CEO level.

For decades, an MBA was the hallmark of upward mobility towards the C-suite of top companies.

But as exponential technologies permeate not only products but every part of the supply chain—from manufacturing and shipping to sales, marketing and customer service—that trend is changing by necessity.

Looking at the Harvard Business Review’s Top 100 CEOs in 2018 list, more CEOs on the list held engineering degrees than MBAs (34 held engineering degrees, while 32 held MBAs).

There’s much more to leading innovative companies than an advanced business degree.

How Are Schools Responding?
With disruption to the advanced business education system already here, some business schools are applying notes from their own innovation classes to brace for change.

Over the past half-decade, we’ve seen schools with smaller MBA programs shut their doors in favor of advanced degrees with more specialization. This directly responds to market demand for skills in data science, supply chain, and manufacturing.

Some degrees resemble the precise skills training of technical trades. Others are very much in line with the apprenticeship models we’ll explore next.

Regardless, this new specialization strategy is working and attracting more new students. Over the past decade (2006 to 2016), enrollment in specialized graduate business programs doubled.

Higher education is also seeing a preference shift toward for-profit trade schools, like coding boot camps. This shift is one of several forces pushing universities to adopt skill-specific advanced degrees.

But some schools are slow to adapt, raising the question: how and when will these legacy programs be disrupted? A survey of over 170 business school deans around the world showed that many programs are operating at a loss.

But if these schools are world-class business institutions, as advertised, why do they keep the doors open even while they lose money? The surveyed deans revealed an important insight: they keep the degree program open because of the program’s prestige.

Why Go to Business School?
Shorthand Credibility, Cognitive Biases, and Prestige
Regardless of what knowledge a person takes away from graduate school, attending one of the world’s most rigorous and elite programs gives grads external validation.

With over 55 percent of MBA applicants applying to just 6 percent of graduate business schools, we have a clear cognitive bias toward the perceived elite status of certain universities.

To the outside world, thanks to the power of cognitive biases, an advanced degree is credibility shorthand for your capabilities.

Simply passing through a top school’s filtration system signals that you have some level of ability and merit.

And startup success statistics tend to back up that perceived enhanced capability. Let’s take, for example, universities with the most startup unicorn founders (see the figure below).

When you consider the 320+ unicorn startups around the world today, these numbers become even more impressive. Stanford’s 18 unicorn companies account for over 5 percent of global unicorns, and Harvard is responsible for producing just under 5 percent.

Combined, just these two universities (out of over 5,000 in the US, and thousands more around the world) account for 1 in 10 of the billion-dollar private companies in the world.

By the numbers, the prestigious reputation of these elite business programs has a firm basis in current innovation success.

While prestige may be inherent to the degree earned by graduates from these business programs, the credibility boost from holding one of these degrees is not a guaranteed path to success in the business world.

For example, you might expect that Harvard Business School or the Stanford Graduate School of Business would come out on top when tallying up the alma maters of Fortune 500 CEOs.

It turns out that the University of Wisconsin-Madison leads the business school pack with 14 CEOs to Harvard’s 12. Beyond prestige, the success these elite business programs see translates directly into cultivating unmatched networks and relationships.

Graduate schools—particularly at the upper echelon—are excellent at attracting sharp students.

At an elite business school, if you meet just five to ten people with extraordinary skill sets, personalities, ideas, or networks, then you have returned your $200,000 education investment.

It’s no coincidence that some 40 percent of Silicon Valley venture capitalists are alumni of either Harvard or Stanford.

From future investors to advisors, friends, and potential business partners, relationships are critical to an entrepreneur’s success.

As we saw above, graduate business degree programs are melting away in the current wave of exponential change.

With US student debt now topping $1.5 trillion, there must be a more impactful alternative to attending graduate school for those starting their careers.

When I think about the most important skills I use today as an entrepreneur, writer, and strategic thinker, they didn’t come from my decade of graduate school at Harvard or MIT… they came from my experiences building real technologies and companies, and working with mentors.

Apprenticeship comes in a variety of forms; here, I’ll cover three top-of-mind approaches:

Real-world business acumen via startup accelerators
A direct apprenticeship model
The 6 D’s of mentorship

Startup Accelerators and Business Practicum
Let’s contrast the shrinking interest in MBA programs with applications to a relatively new model of business education: startup accelerators.

Startup accelerators are short-term (typically three to six months), cohort-based programs focusing on providing startup founders with the resources (capital, mentorship, relationships, and education) needed to refine their entrepreneurial acumen.

While graduate business programs have been condensing, startup accelerators are alive, well, and expanding rapidly.

In the 10 years from 2005 (when Paul Graham founded Y Combinator) through 2015, the number of startup accelerators in the US increased more than tenfold.

The increase in startup accelerator activity hints at a larger trend: our best and brightest business minds are opting to invest their time and efforts in obtaining hands-on experience, creating tangible value for themselves and others, rather than diving into the theory often taught in business school classrooms.

The “Strike Force” Model
The Strike Force is my elite team of young entrepreneurs who work directly with me across all of my companies, travel by my side, sit in on every meeting with me, and help build businesses that change the world.

Previous Strike Force members have gone on to launch successful companies, including Bold Capital Partners, my $250 million venture capital firm.

Strike Force is an apprenticeship for the next generation of exponential entrepreneurs.

To paraphrase my good friend Tony Robbins: If you want to short-circuit the video game, find someone who’s been there and done that and is now doing something you want to one day do.

Every year, over 500,000 apprentices in the US follow this precise template. These apprentices are learning a craft they wish to master, under the mentorship of experts (skilled metal workers, bricklayers, medical technicians, electricians, and more) who have already achieved the desired result.

What if we more readily applied this model to young adults with aspirations of creating massive value through the vehicles of entrepreneurship and innovation?

For the established entrepreneur: How can you bring young entrepreneurs into your organization to create more value for your company, while also passing on your ethos and lessons learned to the next generation?

For the young, driven millennial: How can you find your mentor and convince him or her to take you on as an apprentice? What value can you create for this person in exchange for their guidance and investment in your professional development?

The 6 D’s of Mentorship
In my last blog on education, I shared how mobile device and internet penetration will transform adult literacy and basic education. Mobile phones and connectivity already create extraordinary value for entrepreneurs and young professionals looking to take their business acumen and skill set to the next level.

For all of human history up until the last decade or so, if you wanted to learn from the best and brightest in business, leadership, or strategy, you either needed to search for a dated book that they wrote at the local library or bookstore, or you had to be lucky enough to meet that person for a live conversation.

Now you can access the mentorship of just about any thought leader on the planet, at any time, for free.

Thanks to the power of the internet, mentorship has digitized, demonetized, dematerialized, and democratized.

What do you want to learn about?

Investing? Leadership? Technology? Marketing? Project management?

You can access a near-infinite stream of cutting-edge tools, tactics, and lessons from thousands of top performers from nearly every field—instantaneously, and for free.

For example, every one of Warren Buffett’s letters to his Berkshire Hathaway investors over the past 40 years is available for free on a device that fits in your pocket.

The rise of audio—particularly podcasts and audiobooks—is another underestimated driving force away from traditional graduate business programs and toward apprenticeships.

Over 28 million podcast episodes are available for free. Once you identify the strong signals in the noise, you’re still left with thousands of hours of long-form podcast conversation from which to learn valuable lessons.

Whenever and wherever you want, you can learn from the world’s best. In the future, mentorship and apprenticeship will only become more personalized. Imagine accessing a high-fidelity, AI-powered avatar of Bill Gates, Richard Branson, or Arthur C. Clarke (one of my early mentors) to help guide you through your career.

Virtual mentorship and coaching are powerful education forces that are here to stay.

Bringing It All Together
The education system is rapidly changing. Traditional master’s programs for business are ebbing away in the tides of exponential technologies. Apprenticeship models are reemerging as an effective way to train tomorrow’s leaders.

In a future blog, I’ll revisit the concept of apprenticeships and other effective business school alternatives.

If you are a young, ambitious entrepreneur (or the parent of one), remember that you live in the most abundant time ever in human history to refine your craft.

Right now, you have access to world-class mentorship and cutting-edge best-practices—literally in the palm of your hand. What will you do with this extraordinary power?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: fongbeerredhot / Shutterstock.com


#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image causes an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car can be fooled by a few stickers, it might not be so fun for the passengers.
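The "imperceptible noise" attack has a canonical form: the fast gradient sign method (FGSM), which nudges every input dimension by a small amount in whichever direction most increases the classifier's loss. Here is a minimal sketch on a toy linear classifier standing in for a deep network; the dimensions, weights, and epsilon are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image classifier": logistic regression on 100 flattened pixels.
dim = 100
w = rng.normal(size=dim)

def predict_prob(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: step each pixel by +/- eps in the
    direction that increases the cross-entropy loss for the true label."""
    grad = (predict_prob(x) - y_true) * w  # d(loss)/dx for this linear model
    return x + eps * np.sign(grad)

x_clean = w / np.linalg.norm(w)  # an input the model classifies confidently
x_adv = fgsm(x_clean, y_true=1.0, eps=0.3)
# Each pixel moves by at most 0.3, yet the confident prediction flips.
```

Against a real deep network, the same idea uses backpropagation to get the input gradient, and the per-pixel change can sit far below what a human notices while the predicted class changes entirely.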

These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. The main insight that analyzing a trained network itself can give us is a series of statistical weights, associating certain groups of points with certain objects, and these can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into one of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, while only the outline of the object remained. Ordinarily, the trained neural net was capable of recognizing these objects, assigning more than 90 percent probability to the correct classification. With silhouettes alone, this dropped to 10 percent. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as the 209th most likely solution by the neural network, even though the overall shapes were an exact match.
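The "209th most likely" figure is simply the rank of the correct class in the network's output scores. A small helper makes the metric concrete; the class scores below are made up for the example:

```python
import numpy as np

def label_rank(scores, true_idx):
    """Rank of the true class among all classes (1 = the model's top choice)."""
    order = np.argsort(-scores)  # class indices, most likely first
    return int(np.where(order == true_idx)[0][0]) + 1

# Hypothetical scores over 1,000 classes, with the true class scored worst:
scores = np.zeros(1000)
scores[7] = 5.0    # the model's favorite (wrong) class
scores[42] = -1.0  # the true class, buried at the bottom
```

With these scores, `label_rank(scores, 7)` is 1 and `label_rank(scores, 42)` is 1,000: a rank of 209 out of 1,000 means the network considered 208 other categories more plausible than the right answer.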

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
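The scrambling manipulation described above is easy to reproduce: cut the image into small tiles and shuffle them, which preserves local texture statistics while destroying global shape. A sketch, where the tile size and random seed are arbitrary choices:

```python
import random
import numpy as np

def scramble(image, patch=8, seed=0):
    """Cut a grayscale image into patch x patch tiles and shuffle them.
    Texture inside each tile survives; the object's outline does not."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    tiles = [image[r:r + patch, c:c + patch]
             for r in range(0, h, patch)
             for c in range(0, w, patch)]
    random.Random(seed).shuffle(tiles)  # destroy global arrangement
    cols = w // patch
    rows = [np.concatenate(tiles[i:i + cols], axis=1)
            for i in range(0, len(tiles), cols)]
    return np.concatenate(rows, axis=0)
```

The output has exactly the same pixels, and hence the same local color and texture statistics, as the input, just rearranged, which is why a texture-sensitive network can still classify it while a shape-sensitive human cannot.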

Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, it takes something closer to experimental psychology to probe how neural networks and artificial intelligence algorithms perceive the world. The tests employed against the neural network are closer to how scientists might try to understand the senses of an animal or the developing brain of a young child than to how they would analyze a piece of software.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com


#434256 Singularity Hub’s Top Articles of the ...

2018 was a big year for science and technology. The first gene-edited babies were born, as were the first cloned monkeys. SpaceX successfully launched the Falcon Heavy, and NASA’s InSight lander placed a seismometer on Mars. Bitcoin’s value plummeted, as did the cost of renewable energy. The world’s biggest neuromorphic supercomputer was switched on, and quantum communication made significant progress.

As 2018 draws to a close and we start anticipating the developments that will happen in 2019, here’s a look back at our ten most-read articles of the year.

This 3D Printed House Goes Up in a Day for Under $10,000
Vanessa Bates Ramirez | 3/18/18
“ICON and New Story’s vision is one of 3D printed houses acting as a safe, affordable housing alternative for people in need. New Story has already built over 800 homes in Haiti, El Salvador, Bolivia, and Mexico, partnering with the communities they serve to hire local labor and purchase local materials rather than shipping everything in from abroad.”

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI
Aaron Frank | 1/21/18
“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Low-Cost Soft Robot Muscles Can Lift 200 Times Their Weight and Self-Heal
Edd Gent | 1/11/18
“Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—using soft plastic pouches filled with electrically insulating liquids that contract with the force and speed of mammalian skeletal muscles when a voltage is applied to them.”

These Are the Most Exciting Industries and Jobs of the Future
Raya Bidshahri | 1/29/18
“Technological trends are giving rise to what many thought leaders refer to as the ‘imagination economy.’ This is defined as ‘an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.’ Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.”

Inside a $1 Billion Real Estate Company Operating Entirely in VR
Aaron Frank | 4/8/18
“Incredibly, this growth is largely the result of eXp Realty’s use of an online virtual world similar to Second Life. That means every employee, contractor, and the thousands of agents who work at the company show up to work—team meetings, training seminars, onboarding sessions—all inside a virtual reality campus. To be clear, this is a traditional real estate brokerage helping people buy and sell physical homes—but they use a virtual world as their corporate offices.”

How Fast Is AI Progressing? Stanford’s New Report Card for Artificial Intelligence
Thomas Hornigold | 1/18/18
“Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.”

When Will We Finally Achieve True Artificial Intelligence?
Thomas Hornigold | 1/1/18
“The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go. This is unlike Moore’s Law. Moore’s Law, the doubling of processing power roughly every couple of years, makes a very concrete prediction about a very specific phenomenon. We understand roughly how to get there—improved engineering of silicon wafers—and we know we’re not at the fundamental limits of our current approach. You cannot say the same about artificial intelligence.”
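The quote's point that Moore's Law makes a "very concrete prediction" can be shown with simple arithmetic: a doubling every couple of years compounds exponentially. Here is a minimal Python sketch of that compounding, where the two-year doubling period is the quote's rough figure, not an exact constant:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor implied by a fixed doubling period:
    processing power doubles roughly every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# A two-year doubling period implies a 32x increase over a decade.
print(f"Growth over 10 years: {moores_law_factor(10):.0f}x")
```

The contrast with AI is that no such formula exists for progress toward human-level intelligence, because we don't know how far away the target is.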

IBM’s New Computer Is the Size of a Grain of Salt and Costs Less Than 10 Cents
Edd Gent | 3/26/18
“Costing less than 10 cents to manufacture, the company envisions the device being embedded into products as they move around the supply chain. The computer’s sensing, processing, and communicating capabilities mean it could effectively turn every item in the supply chain into an Internet of Things device, producing highly granular supply chain data that could streamline business operations.”

Why the Rise of Self-Driving Vehicles Will Actually Increase Car Ownership
Melba Kurman and Hod Lipson | 2/14/18
“When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.”

A Model for the Future of Education
Peter Diamandis | 9/12/18
“I imagine a relatively near-term future in which robotics and artificial intelligence will allow any of us, from ages 8 to 108, to easily and quickly find answers, create products, or accomplish tasks, all simply by expressing our desires. From ‘mind to manufactured in moments.’ In short, we’ll be able to do and create almost whatever we want. In this future, what attributes will be most critical for our children to learn to become successful in their adult lives? What’s most important for educating our children today?”

Image Credit: Yurchanka Siarhei / Shutterstock.com


#432482 This Week’s Awesome Stories From ...

A Brain-Boosting Prosthesis Moves From Rats to Humans
Robbie Gonzalez | WIRED
“Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.”

Here’s How the US Needs to Prepare for the Age of Artificial Intelligence
Will Knight | MIT Technology Review
“The Trump administration has abandoned this vision and has no intention of devising its own AI plan, say those working there. They say there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes… That looks like a huge mistake. If it essentially ignores such a technological transformation, the US might never make the most of an opportunity to reboot its economy and kick-start both wage growth and job creation. Failure to plan could also cause the birthplace of AI to lose ground to international rivals.”

Underwater GPS Inspired by Shrimp Eyes
Jeremy Hsu | IEEE Spectrum
“A few years ago, U.S. and Australian researchers developed a special camera inspired by the eyes of mantis shrimp that can see the polarization patterns of light waves, which resemble those in a rope being waved up and down. That means the bio-inspired camera can detect how light polarization patterns change once the light enters the water and gets deflected or scattered.”

‘The Business of War’: Google Employees Protest Work for the Pentagon
Scott Shane and Daisuke Wakabayashi | The New York Times
“Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

“The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. ‘We believe that Google should not be in the business of war,’ says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ‘ever build warfare technology.’ (Read the text of the letter.)”

MIT’s New Headset Reads the ‘Words in Your Head’
Brian Heater | TechCrunch
“A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation. AlterEgo is a head-mounted—or, more properly, jaw-mounted—device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading ‘words in your head.’”

Image Credit: christitzeimaging.com / Shutterstock.com
