Tag Archives: world

#435313 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Microsoft Invests $1 Billion in OpenAI to Pursue Holy Grail of Artificial Intelligence
James Vincent | The Verge
“‘The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,’ said [OpenAI cofounder] Sam Altman. ‘Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.’”

ROBOTICS
UPS Wants to Go Full-Scale With Its Drone Deliveries
Eric Adams | Wired
“If UPS gets its way, it’ll be known for vehicles other than its famous brown vans. The delivery giant is working to become the first commercial entity authorized by the Federal Aviation Administration to use autonomous delivery drones without any of the current restrictions that have governed the aerial testing it has done to date.”

SYNTHETIC BIOLOGY
Scientists Can Finally Build Feedback Circuits in Cells
Megan Molteni | Wired
“Network a few LOCKR-bound molecules together, and you’ve got a circuit that can control a cell’s functions the same way a PID computer program automatically adjusts the pitch of a plane. With the right key, you can make cells glow or blow themselves apart. You can send things to the cell’s trash heap or zoom them to another cellular zip code.”

ENERGY
Carbon Nanotubes Could Increase Solar Efficiency to 80 Percent
David Grossman | Popular Mechanics
“Obviously, that sort of efficiency rating is unheard of in the world of solar panels. But even though a proof of concept is a long way from being used in the real world, any further developments in the nanotubes could bolster solar panels in ways we haven’t seen yet.”

FUTURE
What Technology Is Most Likely to Become Obsolete During Your Lifetime?
Daniel Kolitz | Gizmodo
“Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. …And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.”

NEUROSCIENCE
The Human Brain Project Hasn’t Lived Up to Its Promise
Ed Yong | The Atlantic
“The HBP, then, is in a very odd position, criticized for being simultaneously too grandiose and too narrow. None of the skeptics I spoke with was dismissing the idea of simulating parts of the brain, but all of them felt that such efforts should be driven by actual research questions. …Countless such projects could have been funded with the money channeled into the HBP, which explains much of the furor around the project.”

Image Credit: Aron Van de Pol / Unsplash

Posted in Human Robots

#435308 Brain-Machine Interfaces Are Getting ...

Elon Musk grabbed a lot of attention with his July 16 announcement that his company Neuralink plans to implant electrodes into the brains of people with paralysis by next year. Their first goal is to create assistive technology to help people who can’t move or are unable to communicate.

If you haven’t been paying attention, brain-machine interfaces (BMIs) that allow people to control robotic arms with their thoughts might sound like science fiction. But science and engineering efforts have already turned it into reality.

For over a decade, scientists and physicians in a few research labs around the world have been implanting devices into the brains of people who have lost the ability to control their arms or hands. In our own research group at the University of Pittsburgh, we’ve enabled people with paralyzed arms and hands to control robotic arms that allow them to grasp and move objects with relative ease. They can even experience touch-like sensations from their own hand when the robot grasps objects.

At its core, a BMI is pretty straightforward. In your brain, microscopic cells called neurons are sending signals back and forth to each other all the time. Everything you think, do and feel as you interact with the world around you is the result of the activity of these 80 billion or so neurons.

If you implant a tiny wire very close to one of these neurons, you can record the electrical activity it generates and send it to a computer. Record enough of these signals from the right area of the brain and it becomes possible to control computers, robots, or anything else you might want, simply by thinking about moving. But doing this comes with tremendous technical challenges, especially if you want to record from hundreds or thousands of neurons.
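To make that decoding step concrete, here is a minimal sketch of a linear decoder, a common baseline in BMI research. Everything here is simulated and invented for illustration (the array size, noise levels, and data are assumptions, not a real system's recordings): we fit a linear map from firing rates to intended 2D movement and use it to decode.

```python
import numpy as np

# Toy illustration of linear decoding. All data are simulated; a real
# system would use spike counts recorded from implanted electrodes.
rng = np.random.default_rng(0)

n_neurons, n_samples = 96, 500                 # e.g., a 96-channel array
true_weights = rng.normal(size=(n_neurons, 2))

# Simulated firing rates and the 2D velocities they (noisily) encode.
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder by least squares: rates @ W ≈ velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

def decode(firing_rates):
    """Map a vector of firing rates to an estimated (vx, vy)."""
    return firing_rates @ W
```

With clean simulated data the fitted weights closely recover the true ones; with real recordings, decoders like this must be recalibrated regularly as the neural signals drift, which is part of the engineering challenge described above.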

What Neuralink Is Bringing to the Table
Elon Musk founded Neuralink in 2017, aiming to address these challenges and raise the bar for implanted neural interfaces.

Perhaps the most impressive aspect of Neuralink’s system is the breadth and depth of their approach. Building a BMI is inherently interdisciplinary, requiring expertise in electrode design and microfabrication, implantable materials, surgical methods, electronics, packaging, neuroscience, algorithms, medicine, regulatory issues, and more. Neuralink has created a team that spans most, if not all, of these areas.

With all of this expertise, Neuralink is undoubtedly moving the field forward and improving its technology rapidly. Individually, many of the components of their system represent significant progress along predictable paths. For example, their electrodes, which they call threads, are very small and flexible; many researchers have tried to harness those properties to minimize the chance that the brain’s immune response would reject the electrodes after insertion. Neuralink has also developed high-performance miniature electronics, another focus area for labs working on BMIs.

Often overlooked in academic settings, however, is how an entire system would be efficiently implanted in a brain.

Neuralink’s BMI requires brain surgery. This is because implanted electrodes that are in intimate contact with neurons will always outperform non-invasive electrodes where neurons are far away from the electrodes sitting outside the skull. So, a critical question becomes how to minimize the surgical challenges around getting the device into a brain.

Maybe the most impressive aspect of Neuralink’s announcement was that they created a 3,000-electrode neural interface where electrodes could be implanted at a rate of between 30 and 200 per minute. Each thread of electrodes is implanted by a sophisticated surgical robot that essentially acts like a sewing machine. This all happens while specifically avoiding blood vessels that blanket the surface of the brain. The robotics and imaging that enable this feat, with tight integration to the entire device, are striking.

Neuralink has thought through the challenge of developing a clinically viable BMI from beginning to end in a way that few groups have done, though they acknowledge that many challenges remain as they work towards getting this technology into human patients in the clinic.

Figuring Out What More Electrodes Gets You
The quest for implantable devices with thousands of electrodes is not only the domain of private companies. DARPA, the NIH BRAIN Initiative, and international consortiums are working on neurotechnologies for recording and stimulating in the brain with goals of tens of thousands of electrodes. But what might scientists do with the information from 1,000, 3,000, or maybe even 100,000 neurons?

At some level, devices with more electrodes might not actually be necessary to have a meaningful impact in people’s lives. Effective control of computers for access and communication, of robotic limbs for grasping and moving objects, and even of paralyzed muscles is already happening in people, and has been for a number of years.

Since the 1990s, the Utah Array, which has just 100 electrodes and is manufactured by Blackrock Microsystems, has been a critical device in neuroscience and clinical research. This electrode array is FDA-cleared for temporary neural recording. Several research groups, including our own, have implanted Utah Arrays in people, with implants lasting multiple years.

Currently, the biggest constraints are related to connectors, electronics, and system-level engineering, not the implanted electrode itself—although increasing the electrodes’ lifespan to more than five years would represent a significant advance. As those technical capabilities improve, it might turn out that the ability to accurately control computers and robots is limited more by scientists’ understanding of what the neurons are saying—that is, the neural code—than by the number of electrodes on the device.

Even the most capable implanted system, and maybe the most capable devices researchers can reasonably imagine, might fall short of the goal of actually augmenting skilled human performance. Nevertheless, Neuralink’s goal of creating better BMIs has the potential to improve the lives of people who can’t move or are unable to communicate. Right now, Musk’s vision of using BMIs to meld physical brains and intelligence with artificial ones is no more than a dream.

So, what does the future look like for Neuralink and other groups creating implantable BMIs? Devices with more electrodes that last longer and are connected to smaller and more powerful wireless electronics are essential. Better devices themselves, however, are insufficient. Continued public and private investment in companies and academic research labs, as well as innovative ways for these groups to work together to share technologies and data, will be necessary to truly advance scientists’ understanding of the brain and deliver on the promise of BMIs to improve peoples’ lives.

While researchers need to keep the future societal implications of advanced neurotechnologies in mind—there’s an essential role for ethicists and regulation—BMIs could be truly transformative as they help more people overcome limitations caused by injury or disease in the brain and body.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: UPMC/Pitt Health Sciences / CC BY-NC-ND


#435224 Can AI Save the Internet from Fake News?

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Instagram post shared by Bill Posters (@bill_posters_uk) on Jun 7, 2019: “‘Imagine this…’ (2019) Mark Zuckerberg reveals the truth about Facebook and who really owns the future… see more @sheffdocfest VDR technology by @cannyai”

Scientists are scrambling for solutions on how to combat deepfakes, while at the same time others are continuing to refine the techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News
While the current spotlight is on how to combat video and image manipulation, a prolonged trench war against fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the fake news that some have come to use as a knee-jerk label for fact-based reporting that is less than flattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
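As a toy illustration of how cues like these can become numeric features, here is a sketch that scores text on hyperbole rate and average sentence length. This is not the U-M team's actual model; the word list, weights, and scoring formula are invented for the example.

```python
import re

# Invented word list for the sketch; a real system would learn its cues.
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def cue_features(text):
    """Extract two simple linguistic cues from a piece of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    hype = sum(w in HYPERBOLE for w in words) / max(len(words), 1)
    avg_len = sum(len(re.findall(r"[a-z']+", s.lower())) for s in sentences) / max(len(sentences), 1)
    return {"hyperbole_rate": hype, "avg_sentence_len": avg_len}

def suspicion_score(text):
    """Invented weighting: more hyperbole and shorter sentences score higher."""
    f = cue_features(text)
    return 5.0 * f["hyperbole_rate"] + 0.02 * max(0.0, 20.0 - f["avg_sentence_len"])
```

A sensational blurb full of "overwhelming" and "shocking" scores higher than sober reporting; a production classifier would combine hundreds of such features and learn the weights from labeled data.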

AI Versus AI
While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously-generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2, built by OpenAI, a nonprofit research lab cofounded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.

No Silver AI Bullet
While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing a set of detectors using elements of deep learning and natural language processing, among other techniques. He explained that the Logically models analyze information based on a three-pronged approach.

Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
Content: The AI scans articles for hundreds of known indicators typically found in misinformation.

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
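A hypothetical sketch of how per-model scores like these might be blended and routed is below. The weights, threshold, and field names are invented for illustration; Logically's actual pipeline is not public at this level of detail.

```python
# Invented weights and threshold for the sketch; a real pipeline would
# tune these against labeled examples.
WEIGHTS = {"publisher": 0.3, "network": 0.3, "content": 0.4}
REVIEW_THRESHOLD = 0.5

def combined_risk(signals):
    """signals: dict of per-model risk scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

def route(signals):
    # Anything above the threshold goes to a human fact-checker,
    # echoing the 'human layer in the pipeline' Williams describes.
    score = combined_risk(signals)
    return ("human_review", score) if score >= REVIEW_THRESHOLD else ("auto_pass", score)
```

The design point is the routing, not the arithmetic: the algorithms narrow the queue, and the human layer makes the final call.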

The company released a consumer app in India back in February, just before that country’s election cycle, which served as a “great testing ground” for refining its technology ahead of the next release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com


#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. In the meantime, humans continue to be superior when it comes to abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She is able to draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition will serve as a stimulant for engaging with critical questions about what kind of future we ought to create via such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by an algorithm trained via deep learning on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”
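The idea of building a statistical model from training text and then emitting original phrases, rather than copying lines, can be illustrated with a toy bigram model. This is vastly simpler than the neural network Devlin describes, and the training snippet is invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, which words ever follow it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the model from a start word, sampling each next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = model.get(out[-1])
        if not nexts:
            break
        out.append(rng.choice(nexts))
    return " ".join(out)
```

Every generated transition was seen in the training text, yet the full phrase need never have appeared there, which is the same principle, scaled up enormously, behind the project's neural generator.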

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s a collaborative project. All of the generated lines of poetry are combined to form a constantly growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration of people from around the world working with algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition at HG Commentary showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was presented with 100,000 photos of Western art spanning five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew on this historical knowledge, and on its mandate to make something new, to generate an artwork without human intervention.

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods. The scores it generates are indistinguishable from those created by the most talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-generated score and select the temp track to base the composition process on. Another method involves using preset algorithms to compose music in pre-defined styles, including everything from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of the creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines are able to perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used our technological scaffolding to push ourselves beyond our biological limitations. We use the telescope to extend our line of sight, planes to fly, and smartphones to connect with others. Our machines are not always working against us, but rather working as an extension of our minds. Similarly, we could use our machines to expand on our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.


#435196 Avatar Love? New ‘Black Mirror’ ...

This week, the widely anticipated fifth season of the dystopian series Black Mirror was released on Netflix. The storylines this season are less focused on far-out scenarios and increasingly aligned with current issues. With only three episodes, this season raises more questions than it answers, often leaving audiences bewildered.

The episode Smithereens explores our society’s crippling addiction to social media platforms and the monopoly they hold over our data. In Rachel, Jack and Ashley Too, we see the disruptive impact of technologies on the music and entertainment industry, and the price of fame for artists in the digital world. Like most Black Mirror episodes, these explore the sometimes disturbing implications of tech advancements on humanity.

But once again, in the midst of all the doom and gloom, the creators of the series leave us with a glimmer of hope. Aligned with Pride month, the episode Striking Vipers explores the impact of virtual reality on love, relationships, and sexual fluidity.

*This review contains a few spoilers.*

Striking Vipers
The first episode of the season, Striking Vipers may be one of the most thought-provoking episodes in Black Mirror history. Reminiscent of previous episodes San Junipero and Hang the DJ, the writers explore the potential for technology to transform human intimacy.

The episode tells the story of two old friends, Danny and Karl, whose friendship is reignited in an unconventional way. Karl unexpectedly appears at Danny’s 38th birthday and reintroduces him to the VR version of a game they used to play years before. In the game Striking Vipers X, each of the players is represented by an avatar of their choice in an uncanny digital reality. Following old tradition, Karl chooses to become the female fighter, Roxanne, and Danny takes on the role of the male fighter, Lance. The state-of-the-art VR headsets appear to use an advanced form of brain-machine interface to allow each player to be fully immersed in the virtual world, emulating all physical sensations.

To their surprise (and confusion), Danny and Karl find themselves transitioning from fist-fighting to kissing. Over the course of many games, they continue to explore a sexual and romantic relationship in the virtual world, leaving them confused and distant in the real world. The virtual and physical realities begin to blur, and so do the identities of the players with their avatars. Danny, who is married (in a heterosexual relationship) and is a father, begins to carry guilt and confusion in the real world. They both wonder if there would be any spark between them in real life.

The brain-machine interface (BMI) depicted in the episode is still science fiction, but that hasn’t stopped innovators from pushing the technology forward. Experts today are designing more intricate BMI systems while programming better algorithms to interpret the neural signals they capture. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate with one another purely through brainwaves.

The convergence of BMIs with virtual reality and artificial intelligence could make the experience of such immersive digital realities possible. Virtual reality, too, is decreasing exponentially in cost and increasing in quality.

The narrative provides meaningful commentary on another tech area—gaming. It highlights video games not necessarily as addictive distractions, but rather as a platform for connecting with others in a deeper way. This is already very relevant. Video games like Final Fantasy are often a tool for meaningful digital connections for their players.

The Implications of Virtual Reality on Love and Relationships
The narrative of Striking Vipers raises many novel questions about the implications of immersive technologies on relationships: could the virtual world allow us a safe space to explore suppressed desires? Can virtual avatars make it easier for us to show affection to those we care about? Can a sexual or romantic encounter in the digital world be considered infidelity?

Above all, the episode explores the therapeutic possibilities of such technologies. While many fears about virtual reality had been raised in previous seasons of Black Mirror, this episode was focused on its potential. This includes the potential of immersive technology to be a source of liberation, meaningful connections, and self-exploration, as well as a tool for realizing our true identities and desires.

Once again, this is aligned with emerging trends in VR. We are seeing the rise of social VR applications and platforms that allow you to hang out with your friends and family as avatars in the virtual space. The technology is allowing for animation movies, such as Coco VR, to become an increasingly social and interactive experience. Considering that meaningful social interaction can alleviate depression and anxiety, such applications could contribute to well-being.

Techno-philosopher and National Geographic host Jason Silva points out that immersive media technologies can be “engines of empathy.” VR allows us to enter virtual spaces that mimic someone else’s state of mind, allowing us to empathize with the way they view the world. Silva said, “Imagine the intimacy that becomes possible when people meet and they say, ‘Hey, do you want to come visit my world? Do you want to see what it’s like to be inside my head?’”

What is most fascinating about Striking Vipers is that it explores how we may redefine love with virtual reality; we are introduced to love between virtual avatars. While this kind of love may seem confusing to audiences, it may be one of the complex implications of virtual reality on human relationships.

In many ways, the title Black Mirror couldn’t be more appropriate, as each episode serves as a mirror to the most disturbing aspects of our psyches as they get amplified through technology. However, what we see in uplifting and thought-provoking plots like Striking Vipers, San Junipero, and Hang The DJ is that technology could also amplify the most positive aspects of our humanity. This includes our powerful capacity to love.

Image Credit: Arsgera / Shutterstock.com
