Tag Archives: no

#437242 Robot jaws show medicated chewing gum ...

Medicated chewing gum has been recognized as a new advanced drug delivery method but currently there is no gold standard for testing drugs released from chewing gum in vitro. New research has shown a chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to develop medicated chewing gum.

Posted in Human Robots

#437224 This Week’s Awesome Tech Stories From ...

VIRTUAL REALITY
How Holographic Tech Is Shrinking VR Displays to the Size of Sunglasses
Kyle Orland | Ars Technica
“…researchers at Facebook Reality Labs are using holographic film to create a prototype VR display that looks less like ski goggles and more like lightweight sunglasses. With a total thickness less than 9mm—and without significant compromises on field of view or resolution—these displays could one day make today’s bulky VR headset designs completely obsolete.”

TRANSPORTATION
Stock Surge Makes Tesla the World’s Most Valuable Automaker
Timothy B. Lee | Ars Technica
“It’s a remarkable milestone for a company that sells far fewer cars than its leading rivals. …But Wall Street is apparently very optimistic about Tesla’s prospects for future growth and profits. Many experts expect a global shift to battery electric vehicles over the next decade or two, and Tesla is leading that revolution.”

FUTURE OF FOOD
These Plant-Based Steaks Come Out of a 3D Printer
Adele Peters | Fast Company
“The startup, launched by cofounders who met while developing digital printers at HP, created custom 3D printers that aim to replicate meat by printing layers of what they call ‘alt-muscle,’ ‘alt-fat,’ and ‘alt-blood,’ forming a complex 3D model.”

AUTOMATION
The US Air Force Is Turning Old F-16s Into AI-Powered Fighters
Amit Katwala | Wired UK
“Maverick’s days are numbered. The long-awaited sequel to Top Gun is due to hit cinemas in December, but the virtuoso fighter pilots at its heart could soon be a thing of the past. The trustworthy wingman will soon be replaced by artificial intelligence, built into a drone, or an existing fighter jet with no one in the cockpit.”

ROBOTICS
NASA Wants to Build a Steam-Powered Hopping Robot to Explore Icy Worlds
Georgina Torbet | Digital Trends
“A bouncing, ball-like robot that’s powered by steam sounds like something out of a steampunk fantasy, but it could be the ideal way to explore some of the distant, icy environments of our solar system. …This round robot would be the size of a soccer ball, with instruments held in the center of a metal cage, and it would use steam-powered thrusters to make jumps from one area of terrain to the next.”

FUTURE
Could Teleporting Ever Work?
Daniel Kolitz | Gizmodo
“Have the major airlines spent decades suppressing teleportation research? Have a number of renowned scientists in the field of teleportation studies disappeared under mysterious circumstances? Is there a cork board at the FBI linking Delta Airlines, shady foreign security firms, and dozens of murdered research professors? …No. None of that is the case. Which begs the question: why doesn’t teleportation exist yet?”

ENERGY
Nuclear ‘Power Balls’ Could Make Meltdowns a Thing of the Past
Daniel Oberhaus | Wired
“Not only will these reactors be smaller and more efficient than current nuclear power plants, but their designers claim they’ll be virtually meltdown-proof. Their secret? Millions of submillimeter-size grains of uranium individually wrapped in protective shells. It’s called triso fuel, and it’s like a radioactive gobstopper.”

TECHNOLOGY
A Plan to Redesign the Internet Could Make Apps That No One Controls
Will Douglas Heaven | MIT Technology Review
“[John Perry] Barlow’s ‘home of Mind’ is ruled today by the likes of Google, Facebook, Amazon, Alibaba, Tencent, and Baidu—a small handful of the biggest companies on earth. Yet listening to the mix of computer scientists and tech investors speak at an online event on June 30 hosted by the Dfinity Foundation…it is clear that a desire for revolution is brewing.”

IMPACT
To Save the World, the UN Is Turning It Into a Computer Simulation
Will Bedingfield | Wired
“The UN has now announced its new secret recipe to achieve [its 17 sustainable development goals or SDGs]: a computer simulation called Policy Priority Inference (PPI). …PPI is a budgeting software—it simulates a government and its bureaucrats as they allocate money on projects that might move a country closer to an SDG.”

Image Credit: Benjamin Suter / Unsplash

Posted in Human Robots

#437216 New Report: Tech Could Fuel an Age of ...

With rapid technological progress running headlong into dramatic climate change and widening inequality, most experts agree the coming decade will be tumultuous. But a new report predicts it could actually make or break civilization as we know it.

The idea that humanity is facing a major shake-up this century is not new. The Fourth Industrial Revolution being brought about by technologies like AI, gene editing, robotics, and 3D printing is predicted to cause dramatic social, political, and economic upheaval in the coming decades.

But according to think tank RethinkX, thinking about the coming transition as just another industrial revolution is too simplistic. In a report released last week called Rethinking Humanity, the authors argue that we are about to see a reordering of our relationship with the world as fundamental as when hunter-gatherers came together to build the first civilizations.

At the core of their argument is the fact that since the first large human settlements appeared 10,000 years ago, civilization has been built on the back of our ability to extract resources from nature, be they food, energy, or materials. This led to a competitive landscape where the governing logic is grow or die, which has driven all civilizations to date.

That could be about to change thanks to emerging technologies that will fundamentally disrupt the five foundational sectors underpinning society: information, energy, food, transportation, and materials. They predict that across all five, costs will fall by 10 times or more, while production processes will become 10 times more efficient and will use 90 percent fewer natural resources with 10 to 100 times less waste.

They say that this transformation has already happened in information, where the internet has dramatically reduced barriers to communication and knowledge. They predict the combination of cheap solar and grid storage will soon see energy costs drop as low as one cent per kilowatt hour, and they envisage widespread adoption of autonomous electric vehicles and the replacement of car ownership with ride-sharing.

The authors laid out their vision for the future of food in another report last year, where they predicted that traditional agriculture would soon be replaced by industrial-scale brewing of single-celled organisms genetically modified to produce all the nutrients we need. In a similar vein, they believe the same processes combined with additive manufacturing and “nanotechnologies” will allow us to build all the materials required for the modern world from the molecule up rather than extracting scarce natural resources.

They believe this could allow us to shift from a system of production based on extraction to one built on creation, as limitless renewable energy makes it possible to build everything we need from scratch and barriers to movement and information disappear. As a result, a lifestyle worthy of the “American Dream” could be available to anyone for as little as $250/month by 2030.

This will require a fundamental reimagining of our societies, though. All great civilizations have eventually hit fundamental limits on their growth, and we are no different, as demonstrated by our growing impact on the environment and the increasing concentration of wealth. Historically, this stage of development has led to a doubling down on old tactics in search of short-term gains, but that invariably leads to the collapse of the civilization.

The authors argue that we’re in a unique position. Because of the technological disruption detailed above, we have the ability to break through the limits on our growth. But only if we change what the authors call our “Organizing System.” They describe this as “the prevailing models of thought, belief systems, myths, values, abstractions, and conceptual frameworks that help explain how the world works and our relationship to it.”

They say that the current hierarchical, centralized system based on nation-states is unfit for the new system of production that is emerging. The cracks are already starting to appear, with problems like disinformation campaigns, fake news, and growing polarization demonstrating how ill-suited our institutions are for dealing with the distributed nature of today’s information systems. And as this same disruption comes to the other foundational sectors the shockwaves could lead to the collapse of civilization as we know it.

Their solution is a conscious shift towards a new way of organizing the world. As emerging technology allows communities to become self-sufficient, flows of physical resources will be replaced by flows of information, and we will require a decentralized but highly networked Organizing System.

The report includes detailed recommendations on how to usher this in. Examples include giving individuals control and ownership of data rights; developing new models for community ownership of energy, information, and transportation networks; and allowing states and cities far greater autonomy on policies like immigration, taxation, education, and public expenditure.

How easy it will be to get people on board with such a shift is another matter. The authors say it may require us to re-examine the foundations of our society, like representative democracy, capitalism, and nation-states. While they acknowledge that these ideas are deeply entrenched, they appear to believe we can reason our way around them.

That seems optimistic. Cultural and societal change can be glacial, and efforts to impose it top-down through reason and logic are rarely successful. The report seems to brush over many of the messy realities of humanity, such as the huge sway that tradition and religion hold over the vast majority of people.

It also doesn’t deal with the uneven distribution of the technology that is supposed to catapult us into this new age. And while the predicted revolutions in transportation, energy, and information do seem inevitable, the idea that in the next decade or two we’ll be able to produce any material we desire using cheap and abundant stock materials seems like a stretch.

Despite the techno-utopianism, though, many of the ideas in the report hold promise for building societies that are better adapted to the disruptive new age we are about to enter.

Image Credit: Futuristic Society/flickr

Posted in Human Robots

#437150 AI Is Getting More Creative. But Who ...

Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.

Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. These works have challenged the notion that creativity defies definition and shown how professionals can use artificial intelligence to enhance their abilities and produce work beyond known boundaries.

But when creativity is the result of code written by a programmer, using a framework built by a software engineer and trained on private and public datasets, how do we assign ownership of AI-generated content, particularly artwork? The stakes are high: McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.

In 2018, a French art collective called Obvious created a portrait christened Edmond de Belamy. The group used a database of 15,000 portraits from the 1300s to the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and in movies.

The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.

A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”

Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”

Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.

The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the high-level mathematics, which in this case often involves black-box algorithms whose inner workings are impossible to fully analyze.

Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”

In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.

First, programmers need to document their process through online code repositories like GitHub or Bitbucket.

Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.

Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.

Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.

AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.

Image Credit: Wikimedia Commons

Posted in Human Robots

#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but manually combing through digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
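To make the workflow concrete, here is a minimal sketch of automated candidate detection, with everything invented for illustration: a small synthetic bare-earth elevation grid stands in for a real LiDAR-derived model, and a simple local-relief rule stands in for the shape, size, and texture criteria Davis’s team actually used. It flags cells that rise noticeably above their surroundings, producing a candidate map that, as in the real project, would still need checking by hand.

```python
import numpy as np

def find_mound_candidates(dem, window=5, relief_threshold=0.3):
    """Flag cells that stand out above their local neighborhood.

    A toy stand-in for mound detection: a cell is a candidate if its
    elevation exceeds the mean of the surrounding window by
    `relief_threshold` meters.
    """
    h, w = dem.shape
    pad = window // 2
    padded = np.pad(dem, pad, mode="edge")
    candidates = np.zeros_like(dem, dtype=bool)
    for i in range(h):
        for j in range(w):
            neighborhood = padded[i:i + window, j:j + window]
            # Local relief: how far this cell rises above its surroundings.
            if dem[i, j] - neighborhood.mean() > relief_threshold:
                candidates[i, j] = True
    return candidates

# Synthetic 40x40 "bare-earth" grid: nearly flat ground plus one mound-like bump.
rng = np.random.default_rng(0)
dem = rng.normal(10.0, 0.05, size=(40, 40))
yy, xx = np.mgrid[0:40, 0:40]
dem += 3.0 * np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 18.0)  # the mound

mask = find_mound_candidates(dem)
print(mask[20, 20], mask[2, 2])  # mound center flagged, flat corner not
```

A production pipeline would use trained classifiers and real terrain derivatives rather than a single threshold, but the shape of the process—derive features from elevation data, flag candidates, verify manually—is the same one described above.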

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine-learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers along the Indus Valley civilization of what is now part of modern Pakistan and India. Such AI-powered mapping techniques wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to recreate missing inscriptions in ancient Greek from damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.
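As a toy illustration of the restoration task—not DeepMind’s method, since PYTHIA is a deep network trained on tens of thousands of real inscriptions—here is a bigram model over a made-up English “corpus” that fills each masked character with the most frequent follower of the character to its left:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each character, which characters tend to follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1
    return follows

def restore(text, follows, missing="?"):
    """Fill each missing character with the most frequent follower of the
    character to its left. PYTHIA conditions on far richer context with
    a neural network; this looks only one character back."""
    out = list(text)
    for i, ch in enumerate(out):
        if ch == missing and i > 0 and follows[out[i - 1]]:
            out[i] = follows[out[i - 1]].most_common(1)[0][0]
    return "".join(out)

# Tiny stand-in for a corpus of intact inscriptions.
corpus = "the gods honor the city and the people honor the gods "
model = train_bigrams(corpus)
print(restore("th? gods", model))  # 'e' is the most common follower of 'h'
```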

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.

Also known as coprolites, paleofeces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.
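The comparison at the heart of this kind of attribution can be sketched as a similarity test between abundance profiles. Everything below is invented for illustration—four made-up taxon abundances per host—whereas the real CoproID pipeline maps ancient DNA reads against genome databases and runs source tracking on full microbiome profiles:

```python
import math

def cosine(u, v):
    """Cosine similarity between two abundance vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical relative abundances of four gut taxa per host species.
reference = {
    "human": [0.40, 0.30, 0.20, 0.10],
    "dog":   [0.10, 0.15, 0.30, 0.45],
}

def classify(sample):
    """Attribute a coprolite's profile to the most similar reference host."""
    return max(reference, key=lambda host: cosine(sample, reference[host]))

print(classify([0.38, 0.28, 0.22, 0.12]))  # closer to the human profile
```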

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay

Posted in Human Robots