Tag Archives: ai

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots, which were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would accept simple commands like “Yes” or “No,” replacing the buttons with a limited ability to recognize a few words. Using them was often far more frustrating than navigating a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
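
To make the “glorified menu” point concrete, here is a minimal sketch of the kind of keyword matching those early bots relied on; the vocabulary and code are illustrative, not any particular vendor’s system:

```python
import string

# Tiny, hand-picked vocabularies: anything outside them falls through
# to the dreaded "not recognized" response. (Illustrative only.)
YES_WORDS = {"yes", "yeah", "yep", "confirm"}
NO_WORDS = {"no", "nope", "cancel"}

def interpret(utterance: str) -> str:
    words = {w.strip(string.punctuation) for w in utterance.lower().split()}
    if words & YES_WORDS:
        return "confirmed"
    if words & NO_WORDS:
        return "re-enter"
    return "Sorry, your response was not recognized."

print(interpret("yeah, that works"))   # confirmed
print(interpret("umm, I guess so?"))   # Sorry, your response was not recognized.
```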

[Audio demo: Google Duplex scheduling a hair salon appointment]

[Audio demo: Google Duplex calling a restaurant]

Even getting the response recognized is hard enough. After all, there are countless nuances and accents to baffle voice recognition software, and endless turns of phrase that say the same thing in different ways, which can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. SoundHound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
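
A quick back-of-envelope calculation shows how fast this blows up; the branching factor below is an assumed, round number purely for illustration:

```python
# If a user can say any of ~1,000 distinct things at each turn, the
# number of possible conversation paths grows exponentially with depth.
branching_factor = 1_000

for depth in range(1, 5):
    paths = branching_factor ** depth
    print(f"{depth} turn(s): {paths:,} possible paths")
# Two turns already give 1,000,000 paths; three give a billion,
# which is far beyond anything a hand-scripted menu can enumerate.
```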

Yet Google, which surely remembers from Glass the risk of premature debuts for technology, especially the kind that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see that other opinions even exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432878 Chinese Port Goes Full Robot With ...

By the end of 2018, something will be very different about the harbor area in the northern Chinese city of Caofeidian. If you were to visit, the whirring cranes and tractors driving containers to and fro would be the only things in sight.

Caofeidian is set to become the world’s first fully autonomous harbor by the end of the year. The US-Chinese startup TuSimple, a specialist in developing self-driving trucks, will replace human-driven terminal tractor-trucks with 20 self-driving models. A separate company handles crane automation, and a central control system will coordinate the movements of both.

According to Robert Brown, Director of Public Affairs at TuSimple, the project could quickly transform into a much wider trend. “The potential for automating systems in harbors and ports is staggering when considering the number of deep-water and inland ports around the world. At the same time, the closed, controlled nature of a port environment makes it a perfect proving ground for autonomous truck technology,” he said.

Going Global
The autonomous cranes and trucks have a big task ahead of them. Caofeidian currently processes around 300,000 TEU containers a year. Even if you were dealing with Lego bricks, that number of units would get you a decent-sized cathedral or a 22-foot-long aircraft carrier. For any maritime fans—or people who enjoy the moving of heavy objects—TEU stands for twenty-foot equivalent unit. It is the industry standard for containers. A TEU equals an 8-foot (2.43 meter) wide, 8.5-foot (2.59 meter) high, and 20-foot (6.06 meter) long container.
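
For a sense of scale, some quick, illustrative arithmetic on those figures:

```python
# One TEU, using the dimensions quoted above.
width_m, height_m, length_m = 2.43, 2.59, 6.06
teu_per_year = 300_000   # Caofeidian's stated annual throughput

volume_per_teu = width_m * height_m * length_m   # ~38.1 cubic meters
total_volume_m3 = volume_per_teu * teu_per_year
print(f"One TEU: {volume_per_teu:.1f} m^3")
print(f"Annual throughput: {total_volume_m3 / 1e6:.1f} million m^3")
# Roughly 11.4 million cubic meters of container volume a year.
```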

While impressive, the Caofeidian number pales in comparison with the biggest global ports like Shanghai, Singapore, Busan, or Rotterdam. For example, 2017 saw more than 40 million TEU moved through Shanghai port facilities.

Self-driving container vehicles have been trialled elsewhere, including in Yangshan, close to Shanghai, and Rotterdam. Qingdao New Qianwan Container Terminal in China recently laid claim to being the first fully automated terminal in Asia.

The potential for efficiencies has many ports interested in automation. Qingdao said its systems allow the terminal to operate in complete darkness and have reduced labor costs by 70 percent while increasing efficiency by 30 percent. In some cases, the number of workers needed to unload a cargo ship has gone from 60 to 9.

TuSimple says it is in negotiations with several other ports and also sees potential in related logistics-heavy fields.

Stable Testing Ground
For autonomous vehicles, ports seem like a perfect testing ground. They are restricted, confined areas with few to no pedestrians where operating speeds are limited. The predictability makes it unlike, say, city driving.

Robert Brown describes it as an ideal setting for the first adaptation of TuSimple’s technology. The company, which is backed by chipmaker Nvidia, among others, has been retrofitting existing vehicles from Shaanxi Automobile Group with sensors and technology.

At the same time, it is running open-road tests of its Class 8, Level 4 autonomous trucks in Arizona and China.

The Camera Approach
Dozens of autonomous truck startups are reported to have launched in China over the past two years. In other countries the situation is much the same, as the race for the future of goods transportation heats up. Startup companies like Embark, Einride, Starsky Robotics, and Drive.ai are just a few of the names in the space. They are facing competition from the likes of Tesla, Daimler, VW, Uber’s Otto subsidiary, and in March, Waymo announced it too was getting into the truck race.

Compared to many of its competitors, TuSimple’s autonomous driving system is based on a different approach. Instead of LIDAR, a laser-based ranging sensor, TuSimple primarily uses cameras to gather data about its surroundings. Currently, the company uses ten cameras, including forward-facing, backward-facing, and wide-lens. Together, they produce the 360-degree “God View” of the vehicle’s surroundings, which is interpreted by the onboard autonomous driving systems.

Each camera gathers information at 30 frames a second. Millimeter-wave radar is used as a secondary sensor. In total, the vehicles generate what Robert Brown describes with a laugh as “almost too much” data about their surroundings, and the system is accurate beyond 300 meters in locating and identifying objects. This includes objects that have given LIDAR problems, such as black vehicles.
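
A rough estimate suggests why even the engineers call the data volume “almost too much.” The resolution and bit depth below are assumptions for illustration, since only the camera count and frame rate are specified:

```python
cameras = 10
fps = 30
width, height = 1920, 1080   # assumed resolution
bytes_per_pixel = 3          # assumed 8-bit RGB

bytes_per_second = cameras * fps * width * height * bytes_per_pixel
print(f"Raw video: ~{bytes_per_second / 1e9:.1f} GB/s")
# ~1.9 GB/s of raw pixels, before the millimeter-wave radar stream
# is even counted, for the onboard system to interpret in real time.
```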

Another advantage is price. Companies often loathe revealing exact amounts, but Tesla has gone as far as to say that the ‘expected’ price of its autonomous truck will be from $150,000 upwards. While unconfirmed, TuSimple’s retrofitted, camera-based solution is thought to cost around $20,000.

Image Credit: chinahbzyg / Shutterstock.com

Posted in Human Robots

#432691 Is the Secret to Significantly Longer ...

Once upon a time, a powerful Sumerian king named Gilgamesh went on a quest, as such characters often do in these stories of myth and legend. Gilgamesh had witnessed the death of his best friend, Enkidu, and, fearing a similar fate, went in search of immortality. The great king failed to find the secret of eternal life but took solace that his deeds would live well beyond his mortal years.

Fast-forward four thousand years, give or take a century, and Gilgamesh (as famous as any B-list celebrity today, despite the passage of time) would probably be heartened to learn that many others have taken up his search for longevity. Today, though, instead of battling epic monsters and the machinations of fickle gods, those seeking to enhance and extend life are cutting-edge scientists and visionary entrepreneurs who are helping unlock the secrets of human biology.

Chief among them is Aubrey de Grey, a biomedical gerontologist who founded the SENS Research Foundation, a Silicon Valley-based research organization that seeks to advance the application of regenerative medicine to age-related diseases. SENS stands for Strategies for Engineered Negligible Senescence, a term coined by de Grey to describe a broad array (seven, to be precise) of medical interventions that attempt to repair or prevent different types of molecular and cellular damage that eventually lead to age-related diseases like cancer and Alzheimer’s.

Many of the strategies focus on senescent cells, which accumulate in tissues and organs as people age. Not quite dead, senescent cells stop dividing but are still metabolically active, spewing out all sorts of proteins and other molecules that can cause inflammation and other problems. In a young body, that’s usually not a problem (and probably part of general biological maintenance), as a healthy immune system can go to work to put out most fires.

However, as we age, senescent cells continue to accumulate, and at some point the immune system retires from fire watch. Welcome to old age.

Of Mice and Men
Researchers like de Grey believe that treating the cellular underpinnings of aging could not only prevent disease but significantly extend human lifespans. How long? Well, if you’re talking to de Grey, Biblical proportions—on the order of centuries.

De Grey says that science has made great strides toward that end in the last 15 years, such as the ability to copy mitochondrial DNA to the nucleus. Mitochondria serve as the power plant of the cell but are highly susceptible to mutations that lead to cellular degeneration. Copying the mitochondrial DNA into the nucleus would help protect it from damage.

Another achievement occurred about six years ago when scientists first figured out how to kill senescent cells. That discovery led to a spate of new experiments in mice indicating that removing these ticking-time-bomb cells prevented disease and even extended their lifespans. Now the anti-aging therapy is about to be tested in humans.

“As for the next few years, I think the stream of advances is likely to become a flood—once the first steps are made, things get progressively easier and faster,” de Grey tells Singularity Hub. “I think there’s a good chance that we will achieve really dramatic rejuvenation of mice within only six to eight years: maybe taking middle-aged mice and doubling their remaining lifespan, which is an order of magnitude more than can be done today.”

Not Horsing Around
Richard G.A. Faragher, a professor of biogerontology at the University of Brighton in the United Kingdom, recently made discoveries in the lab regarding the rejuvenation of senescent cells with chemical compounds found in foods like chocolate and red wine. He hopes to apply his findings to an animal model in the future—in this case, horses.

“We have been very fortunate in receiving some funding from an animal welfare charity to look at potential treatments for older horses,” he explains to Singularity Hub in an email. “I think this is a great idea. Many aspects of the physiology we are studying are common between horses and humans.”

What Faragher and his colleagues demonstrated in a paper published in BMC Cell Biology last year was that resveralogues, chemicals based on resveratrol, were able to reactivate a protein called a splicing factor that is involved in gene regulation. Within hours, the chemicals caused the cells to rejuvenate and start dividing like younger cells.

“If treatments work in our old pony systems, then I am sure they could be translated into clinical trials in humans,” Faragher says. “How long is purely a matter of money. Given suitable funding, I would hope to see a trial within five years.”

Show Them the Money
Faragher argues that the recent breakthroughs aren’t a result of emerging technologies like artificial intelligence or the gene-editing tool CRISPR, but of a paradigm shift in how scientists understand the underpinnings of cellular aging. Solving the “aging problem” isn’t a question of technology but of money, he says.

“Frankly, when AI and CRISPR have removed cystic fibrosis, Duchenne muscular dystrophy or Gaucher syndrome, I’ll be much more willing to hear tales of amazing progress. Go fix a single, highly penetrant genetic disease in the population using this flashy stuff and then we’ll talk,” he says. “My faith resides in the most potent technological development of all: money.”

De Grey is less flippant about the role that technology will play in the quest to defeat aging. AI, CRISPR, protein engineering, advances in stem cell therapies, and immune system engineering—all will have a part.

“There is not really anything distinctive about the ways in which these technologies will contribute,” he says. “What’s distinctive is that we will need all of these technologies, because there are so many different types of damage to repair and they each require different tricks.”

It’s in the Blood
A startup in the San Francisco Bay Area believes machines can play a big role in discovering the right combination of factors that lead to longer and healthier lives—and then develop drugs that exploit those findings.

BioAge Labs raised nearly $11 million last year for its machine learning platform that crunches big data sets to find blood factors, such as proteins or metabolites, that are tied to a person’s underlying biological age. The startup claims that these factors can predict how long a person will live.

“Our interest in this comes out of research into parabiosis, where joining the circulatory systems of old and young mice—so that they share the same blood—has been demonstrated to make old mice healthier and more robust,” Dr. Eric Morgen, chief medical officer at BioAge, tells Singularity Hub.

Based on that idea, he explains, it should be possible to alter those good or bad factors to produce a rejuvenating effect.

“Our main focus at BioAge is to identify these types of factors in our human cohort data, characterize the important molecular pathways they are involved in, and then drug those pathways,” he says. “This is a really hard problem, and we use machine learning to mine these complex datasets to determine which individual factors and molecular pathways best reflect biological age.”
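
As a very loose sketch of that general recipe, one could fit a model that predicts age from blood factors and read the gap between predicted and chronological age as a crude biological-age signal. This is an illustration of the idea, not BioAge’s platform; the data here are synthetic and the model choice is arbitrary:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_factors = 500, 20
blood_factors = rng.normal(size=(n_people, n_factors))  # proteins, metabolites, ...
age = 50 + 8 * blood_factors[:, 0] + rng.normal(scale=4, size=n_people)

X_train, X_test, y_train, y_test = train_test_split(blood_factors, age, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Positive gap: the model thinks your blood looks "older" than the calendar says.
age_gap = model.predict(X_test) - y_test
print(f"Mean absolute age gap on held-out people: {np.abs(age_gap).mean():.1f} years")
```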

Saving for the Future
Of course, there’s no telling when any of these anti-aging therapies will come to market. That’s why Forever Labs, a biotechnology startup out of Ann Arbor, Michigan, wants your stem cells now. The company offers a service to cryogenically freeze stem cells taken from bone marrow.

The theory behind the procedure, according to Forever Labs CEO Steven Clausnitzer, is based on research showing that stem cells may be a key component for repairing cellular damage. That’s because stem cells can develop into many different cell types and can divide endlessly to replenish other cells. Clausnitzer notes that there are upwards of a thousand clinical studies looking at using stem cells to treat age-related conditions such as cardiovascular disease.

However, stem cells come with their own expiration date, which usually coincides with the age that most people start experiencing serious health problems. Stem cells harvested from bone marrow at a younger age can potentially provide a therapeutic resource in the future.

“We believe strongly that by having access to your own best possible selves, you’re going to be well positioned to lead healthier, longer lives,” he tells Singularity Hub.

“There’s a compelling argument to be made that if you started to maintain the bone marrow population, the amount of nuclear cells in your bone marrow, and to re-up them so that they aren’t declining with age, it stands to reason that you could absolutely mitigate things like cardiovascular disease and stroke and Alzheimer’s,” he adds.

Clausnitzer notes that the stored stem cells can be used today in developing therapies to treat chronic conditions such as osteoarthritis. However, the more exciting prospect—and the reason he put his own 38-year-old stem cells on ice—is that he believes future stem cell therapies can help stave off the ravages of age-related disease.

“I can start reintroducing them not to treat age-related disease but to treat the decline in the stem-cell niche itself, so that I don’t ever get an age-related disease,” he says. “I don’t think that it equates to immortality, but it certainly is a step in that direction.”

Indecisive on Immortality
The societal implications of a longer-living human species are a guessing game at this point. We do know that by mid-century, the global population of those aged 65 and older will reach 1.6 billion, while those older than 80 will hit nearly 450 million, according to the National Academies of Sciences. If many of those people could enjoy healthy lives in their twilight years, enormous medical costs could be avoided.

Faragher is certainly working toward a future where human health is ubiquitous. Human immortality is another question entirely.

“The longer lifespans become, the more heavily we may need to control birth rates and thus we may have fewer new minds. This could have a heavy ‘opportunity cost’ in terms of progress,” he says.

And does anyone truly want to live forever?

“There have been happy moments in my life but I have also suffered some traumatic disappointments. No [drug] will wash those experiences out of me,” Faragher says. “I no longer view my future with unqualified enthusiasm, and I do not think I am the only middle-aged man to feel that way. I don’t think it is an accident that so many ‘immortalists’ are young.

“They should be careful what they wish for.”

Image Credit: Karim Ortiz / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 is single-celled organisms like bacteria; here, hardware is indistinguishable from software. The behavior of the bacteria is encoded into its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.
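
If it helps, the three levels can be caricatured as a toy class hierarchy; the names and structure below are an invented analogy, not anything from Tegmark’s book:

```python
class Life1:                  # bacteria: hardware and software both fixed
    hardware = "cell, as encoded by DNA"
    software = "behavior, as encoded by DNA"   # cannot learn

class Life2(Life1):           # humans: fixed hardware, learnable software
    def learn(self, skill):
        self.software = skill                  # e.g., "Spanish" instead of "Italian"

class Life3(Life2):           # Tegmark's true AI: both layers are rewritable
    def redesign(self, new_hardware):
        self.hardware = new_hardware

you = Life2()
you.learn("Spanish")          # software changes; hardware stays put
```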

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But if engineering and manufacturing a single gadget is already a complex process, you can imagine that making stuff able to turn into many different items is harder still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. If the blocks are too big, the rearranged matter comes out lumpy: fine details are impossible, simulating a range of textures is difficult, and the result may be useless for certain applications, such as tools for fine manipulation. On the other hand, if the pieces are too small, different problems can arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
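
In the simplest possible terms, that firmness dial is just Hooke’s law with an adjustable spring constant; the stiffness and load numbers below are invented for illustration:

```python
def compression_mm(load_n: float, stiffness_n_per_mm: float) -> float:
    """Hooke's law: displacement = force / spring constant."""
    return load_n / stiffness_n_per_mm

elbow_load = 50.0   # newtons, roughly a resting forearm

for surface, stiffness in [("desk", 500.0), ("bed", 25.0)]:
    give = compression_mm(elbow_load, stiffness)
    print(f"{surface}: gives {give:.1f} mm under {elbow_load:.0f} N")
# desk: 0.1 mm (firm); bed: 2.0 mm (noticeably more give).
```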

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.

In the latter configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system is only ten units at present, it’s a proof-of-concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions—DARPA’s Robotics Challenge—was won by a robot that could adapt, beating Boston Dynamics’ infamous ATLAS humanoid with the simple addition of a wheel that allowed it to drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com

Posted in Human Robots

#432640 Artificial Intelligence And Education

Today, we frequently hear the term artificial intelligence. Furthermore, we experience its benefits in our everyday life, but how does it influence the educational system? Can AI improve its quality and boost the productivity of college or university students? Several years ago, its impact wasn’t as noticeable as it is today. Nevertheless, it shows a …

The post Artificial Intelligence And Education appeared first on TFOT.

Posted in Human Robots