Tag Archives: lines

#437373 Microsoft’s New Deepfake Detector Puts ...

The upcoming US presidential election seems set to be something of a mess—to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn’t shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn’t be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines.

So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn’t pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent. While there’s not a ton tech can do to make people feel safe at crowded polling stations or up the Postal Service’s budget, tech can help with disinformation, and Microsoft is trying to do so.

On Tuesday the company released two new tools designed to combat disinformation, described in a blog post by VP of Customer Security and Trust Tom Burt and Chief Scientific Officer Eric Horvitz.

The first is Microsoft Video Authenticator, which is made to detect deepfakes. In case you’re not familiar with this wicked byproduct of AI progress, “deepfakes” refers to audio or visual files made using artificial intelligence that can manipulate people’s voices or likenesses to make it look like they said things they didn’t. Editing a video to string together words and form a sentence someone didn’t say doesn’t count as a deepfake; though there’s manipulation involved, you don’t need a neural network and you’re not generating any original content or footage.

The Authenticator analyzes videos or images and tells users the percentage chance that they’ve been artificially manipulated. For videos, the tool can even analyze individual frames in real time.
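As a rough illustration of how per-frame analysis could feed a single confidence score, here is a toy sketch in Python. The scoring function and numbers are hypothetical; Microsoft has not published the Authenticator’s internals.

```python
# Toy sketch only: the real Video Authenticator's model is not public.
# Suppose a hypothetical detector emits one manipulation score per frame,
# each between 0 (looks real) and 1 (looks fake).

def aggregate_scores(frame_scores):
    """Average per-frame scores into a single manipulation percentage."""
    if not frame_scores:
        raise ValueError("no frames to analyze")
    return round(100 * sum(frame_scores) / len(frame_scores), 1)

# Five frames, two of which look heavily manipulated:
print(aggregate_scores([0.1, 0.85, 0.92, 0.88, 0.15]))  # → 58.0
```

A real tool would do something far more sophisticated (weighting suspicious regions, tracking temporal consistency), but the end product is the same kind of number: a percentage chance of manipulation.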

Deepfake videos are made by feeding hundreds of hours of video of someone into a neural network, “teaching” the network the minutiae of the person’s voice, pronunciation, mannerisms, gestures, etc. It’s like when you do an imitation of your annoying coworker from accounting, complete with mimicking the way he makes every sentence sound like a question and the way his eyes widen when he talks about complex spreadsheets. You’ve spent hours—no, months—in his presence and have his personality quirks down pat. An AI algorithm that produces deepfakes needs to learn those same quirks, and more, about whoever the creator’s target is.

Given enough real information and examples, the algorithm can then generate its own fake footage, with deepfake creators using computer graphics and manually tweaking the output to make it as realistic as possible.

The scariest part? To make a deepfake, you don’t need a fancy computer or even a ton of knowledge about software. There are open-source programs people can access for free online, and as far as finding video footage of famous people—well, we’ve got YouTube to thank for how easy that is.

Microsoft’s Video Authenticator can detect the blending boundary of a deepfake and subtle fading or greyscale elements that the human eye may not be able to see.

In the blog post, Burt and Horvitz point out that as time goes by, deepfakes are only going to get better and become harder to detect; after all, they’re generated by neural networks that are continuously learning from and improving themselves.

Microsoft’s counter-tactic is to come in from the opposite angle, that is, being able to confirm beyond doubt that a video, image, or piece of news is real (I mean, can McDonald’s fries cure baldness? Did a seal slap a kayaker in the face with an octopus? Never has it been so imperative that the world know the truth).

A tool built into Microsoft Azure, the company’s cloud computing service, lets content producers add digital hashes and certificates to their content, and a reader (which can be used as a browser extension) checks the certificates and matches the hashes to indicate the content is authentic.
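The hash-matching half of that flow can be sketched in a few lines. This is a minimal illustration of the general idea, not Azure’s actual API; real systems also attach signed certificates so readers can verify who published the content, not just that it is unchanged.

```python
# Minimal sketch of hash-based authenticity checking; not Azure's API.
import hashlib

def publish(content: bytes) -> str:
    """Producer side: compute a hash to distribute alongside the content."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, published_hash: str) -> bool:
    """Reader side: re-hash what was received and compare."""
    return hashlib.sha256(content).hexdigest() == published_hash

original = b"authentic video bytes"
fingerprint = publish(original)
print(verify(original, fingerprint))                 # True: untouched
print(verify(b"tampered video bytes", fingerprint))  # False: content changed
```

Because even a one-byte edit produces a completely different hash, a match is strong evidence the content is what the producer published.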

Finally, Microsoft also launched an interactive “Spot the Deepfake” quiz it developed in collaboration with the University of Washington’s Center for an Informed Public, deepfake detection company Sensity, and USA Today. The quiz is intended to help people “learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact of synthetic media on democracy.”

The impact Microsoft’s new tools will have remains to be seen—but hey, we’re glad they’re trying. And they’re not alone; Facebook, Twitter, and YouTube have all taken steps to ban and remove deepfakes from their sites. The AI Foundation’s Reality Defender uses synthetic media detection algorithms to identify fake content. There’s even a coalition of big tech companies teaming up to try to fight election interference.

One thing is for sure: between a global pandemic, widespread protests and riots, mass unemployment, a hobbled economy, and the disinformation that’s remained rife through it all, we’re going to need all the help we can get to make it through not just the election, but the rest of the conga-line-of-catastrophes year that is 2020.

Image Credit: Darius Bashar on Unsplash

Posted in Human Robots

#437345 Moore’s Law Lives: Intel Says Chips ...

If you weren’t already convinced the digital world is taking over, you probably are now.

To keep the economy on life support as people stay home to stem the viral tide, we’ve been forced to digitize interactions at scale (for better and worse). Work, school, events, shopping, food, politics. The companies at the center of the digital universe are now powerhouses of the modern era—worth trillions and nearly impossible to avoid in daily life.

Six decades ago, this world didn’t exist.

A humble microchip in the early 1960s would have boasted a handful of transistors. Now, your laptop or smartphone runs on a chip with billions of transistors. As first described by Moore’s Law, this is possible because the number of transistors on a chip doubled with extreme predictability every two years for decades.
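That doubling is easy to make concrete. The 1964 starting point of 32 transistors below is illustrative, not historical data:

```python
# Moore's Law as arithmetic: a count that doubles every two years.

def transistors(start_count, start_year, year):
    """Project a transistor count, doubling once every two years."""
    doublings = (year - start_year) // 2
    return start_count * 2 ** doublings

# From a handful of transistors in the mid-1960s to billions fifty years on:
print(f"{transistors(32, 1964, 2014):,}")  # 1,073,741,824
```

Twenty-five doublings turn a few dozen transistors into more than a billion, which is why exponential trends are so easy to underestimate.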

But now progress is faltering as the size of transistors approaches physical limits, and the money and time it takes to squeeze a few more onto a chip are growing. There’ve been many predictions that Moore’s Law is, finally, ending. But, perhaps also predictably, the company whose founder coined Moore’s Law begs to differ.

In a keynote presentation at this year’s Hot Chips conference, Intel’s chief architect, Raja Koduri, laid out a roadmap to increase transistor density—that is, the number of transistors you can fit on a chip—by a factor of 50.

“We firmly believe there is a lot more transistor density to come,” Koduri said. “The vision will play out over time—maybe a decade or more—but it will play out.”

Why the optimism?

Calling the end of Moore’s Law is a bit of a tradition. As Peter Lee, vice president at Microsoft Research, quipped to The Economist a few years ago, “The number of people predicting the death of Moore’s Law doubles every two years.” To date, prophets of doom have been premature, and though the pace is slowing, the industry continues to dodge death with creative engineering.

Koduri believes the trend will continue this decade and outlined the upcoming chip innovations Intel thinks can drive more gains in computing power.

Keeping It Traditional
First, engineers can further shrink today’s transistors. Fin field effect transistors (or FinFET) first hit the scene in the 2010s and have since pushed chip features past 14 and 10 nanometers (or nodes, as such size checkpoints are called). Koduri said FinFET will again triple chip density before it’s exhausted.

The Next Generation
FinFET will hand the torch off to nanowire transistors (also known as gate-all-around transistors).

Here’s how they’ll work. A transistor is made up of three basic components: the source, where current is introduced, the gate and channel, where current selectively flows, and the drain. The gate is like a light switch. It controls how much current flows through the channel. A transistor is “on” when the gate allows current to flow, and it’s off when no current flows. The smaller transistors get, the harder it is to control that current.
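The switch analogy can be captured in a toy model. This is a pedagogical sketch, not circuit physics:

```python
# Pedagogical toy, not physics: a transistor as an ideal switch.

def drain_current(gate_on: bool, source_current: float) -> float:
    """Current reaching the drain: all of it when the gate is on, none when off."""
    return source_current if gate_on else 0.0

print(drain_current(True, 1.0))   # 1.0: the transistor is "on"
print(drain_current(False, 1.0))  # 0.0: the transistor is "off"
```

The engineering challenge described above is that as transistors shrink, the real device behaves less and less like this ideal switch, with current leaking through even when the gate is "off."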

FinFET maintained fine control of current by surrounding the channel with a gate on three sides. Nanowire designs kick that up a notch by surrounding the channel with a gate on four sides (hence, gate-all-around). They’ve been in the works for years and are expected around 2025. Koduri said first-generation nanowire transistors will be followed by stacked nanowire transistors, and together, they’ll quadruple transistor density.

Building Up
Growing transistor density won’t only be about shrinking transistors, but also going 3D.

This is akin to how skyscrapers increase a city’s population density by adding more usable space on the same patch of land. Along those lines, Intel recently launched its Foveros chip design. Instead of laying a chip’s various “neighborhoods” next to each other in a 2D silicon sprawl, they’ve stacked them on top of each other like a layer cake. Chip stacking isn’t entirely new, but it’s advancing and being applied to general purpose CPUs, like the chips in your phone and laptop.

Koduri said 3D chip stacking will quadruple transistor density.
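Taken together, the multipliers in the roadmap compound multiplicatively, which is where the factor of 50 comes from:

```python
# Compounding the density multipliers named in the keynote.
multipliers = {
    "FinFET refinements": 3,
    "nanowire + stacked nanowire": 4,
    "3D stacking": 4,
}

total = 1
for factor in multipliers.values():
    total *= factor

print(total)  # 48, close to the 50x figure in Koduri's roadmap
```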

A Self-Fulfilling Prophecy
The technologies Koduri outlines are an evolution of the same general technology in use today. That is, we don’t need quantum computing or nanotube transistors to augment or replace silicon chips yet. Rather, as it’s done many times over the years, the chip industry will get creative with the design of its core product to realize gains for another decade.

Last year, veteran chip engineer Jim Keller, who at the time was Intel’s head of silicon engineering but has since left the company, told MIT Technology Review there are over 100 variables driving Moore’s Law (including 3D architectures and new transistor designs). From the standpoint of pure performance, it’s also about how efficiently software uses all those transistors. Keller suggested that with some clever software tweaks “we could get chips that are a hundred times faster in 10 years.”

But whether Intel’s vision pans out as planned is far from certain.

Intel has faced challenges recently, taking five years instead of two to move its chips from 14 nanometers to 10 nanometers. After a delay of six months for its 7-nanometer chips, it’s now a year behind schedule and lagging behind other makers that already offer 7-nanometer chips. This is a key point. Yes, chipmakers continue making progress, but it’s getting harder, more expensive, and timelines are stretching.

The question isn’t whether Intel and its competitors can cram more transistors onto a chip—which Intel rival TSMC agrees is clearly possible—it’s how long it will take and at what cost.

That said, demand for more computing power isn’t going anywhere.

Amazon, Microsoft, Alphabet, Apple, and Facebook now make up a whopping 20 percent of the stock market’s total value. By that metric, tech is the most dominant industry in at least 70 years. And new technologies—from artificial intelligence and virtual reality to a proliferation of Internet of Things devices and self-driving cars—will demand better chips.

There’s ample motivation to push computing to its bitter limits and beyond. As is often said, Moore’s Law is a self-fulfilling prophecy, and likely whatever comes after it will be too.

Image credit: Laura Ockel / Unsplash

#437150 AI Is Getting More Creative. But Who ...

Creativity is a trait that makes humans unique from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.

Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged the notion that creativity is exclusively human and put into perspective how professionals can use artificial intelligence to enhance their abilities and produce work beyond known boundaries.

But when creativity is the result of code written by a programmer, using a framework built by a software engineer and trained on private and public datasets, how do we assign ownership of AI-generated content, and particularly of artwork? The stakes are considerable: McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.

In 2018, a portrait christened Edmond de Belamy was made by a French art collective called Obvious. The collective used a database of 15,000 portraits painted between the 1300s and the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and movies.

The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.

A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”

Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”

Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.

The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the high-level mathematics involved, which in this case often means black box algorithms whose outputs are impossible to trace back through the model.

Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”

In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.

First, programmers need to document their process through online code repositories like GitHub or Bitbucket.

Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.

Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.

Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
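As a concrete illustration of the third principle, a program could append one record per run, tying the chosen parameters to a hash of the dataset used. The file name and record fields here are hypothetical, not drawn from the article or the Nature Machine Intelligence paper:

```python
# Hypothetical run catalog; fields and file name are illustrative only.
import hashlib
import json
import time

def log_run(dataset: bytes, params: dict, catalog: str = "runs.jsonl") -> dict:
    """Append one record per program run, linking parameters to the dataset hash."""
    record = {
        "timestamp": time.time(),
        "dataset_sha256": hashlib.sha256(dataset).hexdigest(),
        "params": params,
    }
    with open(catalog, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_run(b"curated portrait dataset", {"epochs": 100, "seed": 42})
print(entry["params"])  # {'epochs': 100, 'seed': 42}
```

A catalog like this gives each collaborator a timestamped record of exactly which data and settings produced a given output, which is the kind of evidence the guiding principles call for.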

AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.

Image Credit: Wikimedia Commons

#437109 This Week’s Awesome Tech Stories From ...

Why the Coronavirus Is So Confusing
Ed Yong | The Atlantic
“…beyond its vast scope and sui generis nature, there are other reasons the pandemic continues to be so befuddling—a slew of forces scientific and societal, epidemiological and epistemological. What follows is an analysis of those forces, and a guide to making sense of a problem that is now too big for any one person to fully comprehend.”

Common Sense Comes Closer to Computers
John Pavlus | Quanta Magazine
“The problem of common-sense reasoning has plagued the field of artificial intelligence for over 50 years. Now a new approach, borrowing from two disparate lines of thinking, has made important progress.”

Scientists Create Glowing Plants Using Bioluminescent Mushroom DNA
George Dvorsky | Gizmodo
“New research published today in Nature Biotechnology describes a new technique, in which the DNA from bioluminescent mushrooms was used to create plants that glow 10 times brighter than their bacteria-powered precursors. Botanists could eventually use this technique to study the inner workings of plants, but it also introduces the possibility of glowing ornamental plants for our homes.”

Old Drugs May Find a New Purpose: Fighting the Coronavirus
Carl Zimmer | The New York Times
“Driven by the pandemic’s spread, research teams have been screening thousands of drugs to see if they have this unexpected potential to fight the coronavirus. They’ve tested the drugs on dishes of cells, and a few dozen candidates have made the first cut.”

OpenAI’s New Experiments in Music Generation Create an Uncanny Valley Elvis
Devin Coldewey | TechCrunch
“AI-generated music is a fascinating new field, and deep-pocketed research outfit OpenAI has hit new heights in it, creating recreations of songs in the style of Elvis, 2Pac and others. The results are convincing, but fall squarely in the unnerving ‘uncanny valley’ of audio, sounding rather like good, but drunk, karaoke heard through a haze of drugs.”

Neural Net-Generated Memes Are One of the Best Uses of AI on the Internet
Jay Peters | The Verge
“I’ve spent a good chunk of my workday so far creating memes thanks to this amazing website from Imgflip that automatically generates captions for memes using a neural network. …You can pick from 48 classic meme templates, including distracted boyfriend, Drake in ‘Hotline Bling,’ mocking Spongebob, surprised Pikachu, and Oprah giving things away.”

Can Genetic Engineering Bring Back the American Chestnut?
Gabriel Popkin | The New York Times Magazine
“The geneticists’ research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we’ve already assumed: engineers of everything, including nature.”

Image credit: Dan Gold / Unsplash

#436482 50+ Reasons Our Favorite Emerging ...

For most of history, technology was about atoms, the manipulation of physical stuff to extend humankind’s reach. But in the last five or six decades, atoms have partnered with bits, the elemental “particles” of the digital world as we know it today. As computing has advanced at the accelerating pace described by Moore’s Law, technological progress has become increasingly digitized.

SpaceX lands and reuses rockets, and self-driving cars do away with drivers, thanks to automation, sensors, and software. Businesses find and hire talent from anywhere in the world, and for better and worse, a notable fraction of the world learns and socializes online. From the sequencing of DNA to artificial intelligence and from 3D printing to robotics, more and more new technologies are moving at a digital pace and quickly emerging to reshape the world around us.

In 2019, stories charting the advances of some of these digital technologies consistently made headlines. Below is what is, at best, an incomplete list of some of the big stories that caught our eye this year. With so much happening, it’s likely we’ve missed some notable headlines and advances—as well as some of your personal favorites. Either way, share your thoughts and candidates for the biggest stories and breakthroughs on Facebook and Twitter.

With that said, let’s dive straight into the year.

Artificial Intelligence
No technology garnered as much attention as AI in 2019. With good reason. Intelligent computer systems are transitioning from research labs to everyday life. Healthcare, weather forecasting, business process automation, traffic congestion—you name it, and machine learning algorithms are likely beginning to work on it. Yet, AI has also been hyped up and overmarketed, and the latest round of AI technology, deep learning, is likely only one piece of the AI puzzle.

This year, OpenAI’s game-playing algorithms beat some of the world’s best Dota 2 players, DeepMind notched impressive wins in StarCraft, and Carnegie Mellon University’s Pluribus “crushed” pros at six-player Texas Hold’em.
Speaking of games, AI’s mastery of the incredibly complex game of Go prompted a former world champion to quit, stating that AI “cannot be defeated.”
But it isn’t just fun and games. Practical, powerful applications that make the best of AI’s pattern recognition abilities are on the way. Insilico Medicine, for example, used machine learning to help discover and design a new drug in just 46 days, and DeepMind is focused on using AI to crack protein folding.
Of course, AI can be a double-edged sword. When it comes to deepfakes and fake news, for example, AI makes both easier to create and detect, and early in the year, OpenAI created and announced a powerful AI text generator but delayed releasing it for fear of malicious use.
Recognizing AI’s power for good and ill, the OECD, EU, World Economic Forum, and China all took a stab at defining an ethical framework for the development and deployment of AI.

Computing Systems
Processors and chips kickstarted the digital boom and are still the bedrock of continued growth. While progress in traditional silicon-based chips continues, it’s slowing and getting more expensive. Some say we’re reaching the end of Moore’s Law. While that may be the case for traditional chips, specialized chips and entirely new kinds of computing are waiting in the wings.

In fall 2019, Google confirmed its quantum computer had achieved “quantum supremacy,” a term that means a quantum computer can perform a calculation a normal computer cannot. IBM pushed back on the claim, and it should be noted the calculation was highly specialized. But while it’s still early days, there does appear to be some real progress (and more to come).
Should quantum computing become truly practical, “the implications are staggering.” It could impact machine learning, medicine, chemistry, and materials science, just to name a few areas.
Specialized chips continue to take aim at machine learning—a giant new chip with over a trillion transistors, for example, may make machine learning algorithms significantly more efficient.
Cellular computers also saw advances in 2019 thanks to CRISPR. And the year witnessed the emergence of the first reprogrammable DNA computer and new chips inspired by the brain.
The development of hardware computing platforms is intrinsically linked to software. 2019 saw a continued move from big technology companies towards open sourcing (at least parts of) their software, potentially democratizing the use of advanced systems.

Networks
Increasing interconnectedness has, in many ways, defined the 21st century so far. Your phone is no longer just a phone. It’s access to the world’s population and accumulated knowledge—and it fits in your pocket. Pretty neat. This is all thanks to networks, which had some notable advances in 2019.

The biggest network development of the year may well be the arrival of the first 5G networks.
5G’s faster speeds promise advances across many emerging technologies.
Self-driving vehicles, for example, may become both smarter and safer thanks to 5G C-V2X networks. (Don’t worry about trying to remember that. If they catch on, they’ll hopefully get a better name.)
Wi-Fi may have heard the news and said “hold my beer,” as 2019 saw the introduction of Wi-Fi 6. Perhaps the most important upgrade, among others, is that Wi-Fi 6 ensures that the ever-growing number of network connected devices get higher data rates.
Networks also went to space in 2019, as SpaceX began launching its Starlink constellation of broadband satellites. In typical fashion, Elon Musk showed off the network’s ability to bounce data around the world by sending a Tweet.

Augmented Reality and Virtual Reality
Forget Pokemon Go (unless you want to add me as a friend in the game—in which case don’t forget Pokemon Go). 2019 saw AR and VR advance, even as Magic Leap, the most hyped of the lot, struggled to live up to outsized expectations and sell headsets.

Mixed reality AR and VR technologies, along with the explosive growth of sensor-based data about the world around us, are creating a one-to-one “Mirror World” of our physical reality—a digital world you can overlay on our own or dive into immersively thanks to AR and VR.
Facebook launched Replica, for example, which is a photorealistic virtual twin of the real world that, among other things, will help train AIs to better navigate their physical surroundings.
Our other senses (beyond sight) may also become part of the Mirror World through the use of peripherals like a newly developed synthetic skin that aims to bring a sense of touch to VR.
AR and VR equipment is also becoming cheaper—with more producers entering the space—and more user-friendly. Instead of a wired headset requiring an expensive gaming PC, the new Oculus Quest is a wireless, self-contained step toward the mainstream.
Niche uses also continue to gain traction, from Google Glass’s Enterprise edition to the growth of AR and VR in professional education—including on-the-job training and roleplaying emotionally difficult work encounters, like firing an employee.

Digital Biology and Biotech
The digitization of biology is happening at an incredible rate. With wild new research coming to light every year and just about every tech giant pouring money into new solutions and startups, we’re likely to see amazing advances in 2020 added to those we saw in 2019.

None were, perhaps, more visible than the success of protein-rich, plant-based substitutes for various meats. This was the year Beyond Meat was the top IPO on the NASDAQ stock exchange and people stood in line for the plant-based Impossible Whopper and KFC’s Beyond Chicken.
In the healthcare space, a report about three people with HIV who became virus-free thanks to bone marrow transplants of stem cells caused a huge stir. The research is still in relatively early stages and isn’t suitable for most people, but it does provide a glimmer of hope.
CRISPR technology, which almost deserves its own section, progressed by leaps and bounds. One tweak made CRISPR up to 50 times more accurate, while the latest CRISPR-based system, prime editing, was described as a “word processor” for gene editing.
Many areas of healthcare stand to gain from CRISPR. One is cancer treatment, where a first safety test showed ‘promising’ results.
CRISPR’s many potential uses, however, also include some weird and morally questionable areas, as exemplified by one of the year’s stranger CRISPR-related stories about a human-monkey hybrid embryo in China.
Incidentally, China could be poised to take the lead on CRISPR thanks to massive investments and research programs.
As a consequence of quick advances in gene editing, we are approaching a point where we will be able to design our own biology—but first we need to have a serious conversation as a society about the ethics of gene editing and what lines should be drawn.

3D Printing
3D printing has quietly been growing both market size and the objects the printers are capable of producing. While both are impressive, perhaps the biggest story of 2019 is their increased speed.

One example was a boat that was printed in just three days, which also set three new world records for 3D printing.
3D printing is also spreading in the construction industry. In Mexico, the technology is being used to construct 50 new homes with subsidized mortgages of just $20/month.
3D printers also produced all the major parts of a 640-square-meter building in Dubai.
Generally speaking, the use of 3D printing to make parts for everything from rocket engines (even entire rockets) to trains to cars illustrates the maturity of the technology as of 2019.
In healthcare, 3D printing is also advancing the cause of bio-printed organs and, in one example, was used to print vascularized parts of a human heart.

Robotics
Living in Japan, I get to see Pepper, Aibo, and other robots on pretty much a daily basis. The novelty of that experience is spreading to other countries, and robots are becoming a more visible addition to both our professional and private lives.

We can’t talk about robots and 2019 without mentioning Boston Dynamics’ Spot robot, which went on sale for the general public.
Meanwhile, Google, Boston Dynamics’ former owner, rebooted their robotics division with a more down-to-earth focus on everyday uses they hope to commercialize.
SoftBank’s Pepper robot is working as a concierge and receptionist in various countries. It is also being used as a home companion. Not satisfied, Pepper rounded off 2019 by heading to the gym—to coach runners.
Indeed, there’s a growing list of sports where robots perform as well—or better—than humans.
2019 also saw robots launch an assault on the kitchen, including the likes of Samsung’s robot chef, and invade the front yard, with iRobot’s Terra robotic lawnmower.
In the borderlands of robotics, full-body robotic exoskeletons got a bit more practical, as the (by all accounts) user-friendly, battery-powered Sarcos Robotics Guardian XO went commercial.

Autonomous Vehicles
Self-driving cars did not—if you will forgive the play on words—quite stay on track during 2019. The fallout from Uber’s 2018 fatal crash marred part of the year, while some big players ratcheted back expectations of a quick shift to a driverless future. Still, self-driving cars, trucks, and other autonomous systems did make progress this year.

Winner of my unofficial award for best name in self-driving goes to Optimus Ride. The company also illustrates that self-driving may not be about creating a one-size-fits-all solution but catering to specific markets.
Self-driving trucks had a good year, with tests across many countries and states. One of the year’s odder stories was a self-driving truck traversing the US with a delivery of butter.
A step above the competition may be the future slogan (or perhaps not) of Boeing’s self-piloted air taxi that saw its maiden test flight in 2019. It joins a growing list of companies looking to create autonomous, flying passenger vehicles.
2019 was also the year where companies seemed to go all in on last-mile autonomous vehicles. Who wins that particular competition could well emerge during 2020.

Blockchain and Digital Currencies
Bitcoin continues to be the cryptocurrency equivalent of a rollercoaster, but the underlying blockchain technology is progressing more steadily. Together, they may turn parts of our financial systems cashless and digital—though how and when remains a slightly open question.

One indication of this was Facebook’s hugely controversial announcement of Libra, its proposed cryptocurrency. The company faced immediate pushback and saw a host of partners jump ship. Still, it brought the tech into mainstream conversations as never before and is putting the pressure on governments and central banks to explore their own digital currencies.
Deloitte’s in-depth survey of the state of blockchain highlighted how the technology has moved from fintech into just about any industry you can think of.
One of the biggest issues facing the spread of many digital currencies—Bitcoin in particular, you could argue—is how much energy it consumes to mine them. 2019 saw the emergence of several new digital currencies with a much smaller energy footprint.
2019 was also a year where we saw a new kind of digital currency, stablecoins, rise to prominence. As the name indicates, stablecoins are a group of digital currencies whose price fluctuations are more stable than the likes of Bitcoin.
In a geopolitical sense, 2019 was a year of China playing catch-up. Having initially cracked down on cryptocurrencies, the country turned 180 degrees and announced that it was “quite close” to releasing a digital currency, along with a wave of blockchain programs.

Renewable Energy and Energy Storage
While not every government on the planet seems to be a fan of renewable energy, it keeps on outperforming fossil fuel after fossil fuel in places well suited to it—even without support from some of said governments.

One of the reasons for renewable energy’s continued growth is that energy efficiency levels keep on improving.
As a result, a growing number of coal plants are being forced to close due to an inability to compete, and the UK went coal-free for a record two weeks.
We are also seeing more and more financial institutions refusing to fund fossil fuel projects. One such example is the European Investment Bank.
Renewable energy’s advance is tied at the hip to the rise of energy storage, which also had a breakout 2019, in part thanks to investments from the likes of Bill Gates.
The size and capabilities of energy storage also grew in 2019. The best illustration came from Australia, where Tesla’s mega-battery proved that energy storage has reached a stage where it can prop up entire energy grids.

Image Credit: Mathew Schwartz / Unsplash
