Tag Archives: think

#432193 Are ‘You’ Just Inside Your Skin or ...

In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without a pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Sergii Tverdokhlibov / Shutterstock.com

Posted in Human Robots

#432165 Silicon Valley Is Winning the Race to ...

Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, efficient and reasonably-priced mechanical transportation. Ford later came to dominate the industry thanks to the development of the moving assembly line.

Today, the sector is poised for another breakthrough with the advent of cars that drive themselves. But unlike the original wave of automobile innovation, the race for supremacy in autonomous vehicles is concentrated among a few corporate giants. So who is set to dominate this time?

We’ve analyzed six companies we think are leading the race to build the first truly driverless car. Three of these—General Motors, Ford, and Volkswagen—come from the existing car industry and need to integrate self-driving technology into their existing fleet of mass-produced vehicles. The other three—Tesla, Uber, and Waymo (owned by the same parent company as Google)—are newcomers from the digital technology world of Silicon Valley and have to build a mass manufacturing capability.

While it’s impossible to know all the developments at any given time, we have tracked investments, strategic partnerships, and official press releases to learn more about what’s happening behind the scenes. The car industry typically rates self-driving technology on a scale from Level 0 (no automation) to Level 5 (full automation). We’ve assessed where each company is now and estimated how far they are from reaching the top level. Here’s how we think each player is performing.
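
Before the company-by-company rundown, here is that automation scale sketched as a small lookup table. This is purely illustrative: the one-line summaries paraphrase the descriptions used in this article and common industry usage, not the full SAE J3016 definitions.

```python
# Illustrative only: the SAE-style driving automation levels referenced below.
AUTOMATION_LEVELS = {
    0: "No automation: a human does all the driving",
    1: "Driver assistance: a single function, such as cruise control, is automated",
    2: "Partial automation: combined functions, e.g. cruise control plus lane centering",
    3: "Conditional automation: the computer drives, but a human must be ready to take over",
    4: "High automation: the car drives itself, except in conditions like poor roads or bad weather",
    5: "Full automation: the car drives completely on its own",
}

for level, summary in AUTOMATION_LEVELS.items():
    print(f"Level {level}: {summary}")
```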

Volkswagen
Volkswagen has invested in taxi-hailing app Gett and partnered with chip-maker Nvidia to develop an artificial intelligence co-pilot for its cars. In 2018, the VW Group is set to release the Audi A8, the first production vehicle that reaches Level 3 on the scale, “conditional driving automation.” This means the car’s computer will handle all driving functions, but a human has to be ready to take over if necessary.

Ford
Ford already sells cars with a Level 2 autopilot, “partial driving automation.” This means one or more aspects of driving are controlled by a computer based on information about the environment, for example combined cruise control and lane centering. Alongside other investments, the company has put $1 billion into Argo AI, an artificial intelligence company for self-driving vehicles. Following a trial to test pizza delivery using autonomous vehicles, Ford is now testing Level 4 cars on public roads. These feature “high automation,” where the car can drive entirely on its own but not in certain conditions such as when the road surface is poor or the weather is bad.

General Motors
GM also sells vehicles with Level 2 automation but, after buying Silicon Valley startup Cruise Automation in 2016, now plans to launch the first mass-production-ready Level 5 vehicle—one that drives completely on its own—by 2019. The Cruise AV will have no steering wheel or pedals for a human to take over, and it will form part of a large fleet of driverless taxis the company plans to operate in big cities. But crucially, the company hasn’t yet secured permission to test the car on public roads.

Waymo (Google)

Waymo Level 5 testing. Image Credit: Waymo

Founded as a special project in 2009, Waymo separated from Google (though they’re both owned by the same parent firm, Alphabet) in 2016. Though it has never made, sold, or operated a car on a commercial basis, Waymo has created test vehicles that have clocked more than 4 million miles without human drivers as of November 2017. Waymo tested its Level 5 car, “Firefly,” between 2015 and 2017 but then decided to focus on hardware that could be installed in other manufacturers’ vehicles, starting with the Chrysler Pacifica.

Uber
The taxi-hailing app maker Uber has been testing autonomous cars on the streets of Pittsburgh since 2016, always with an employee behind the wheel ready to take over in case of a malfunction. After buying the self-driving truck company Otto in 2016 for a reported $680 million, Uber is now expanding its AI capabilities and plans to test NVIDIA’s latest chips in Otto’s vehicles. It has also partnered with Volvo to create a self-driving fleet of cars and with Toyota to co-create a ride-sharing autonomous vehicle.

Tesla
The first major car manufacturer to come from Silicon Valley, Tesla was also the first to introduce Level 2 autopilot back in 2015. The following year, it announced that all new Teslas would have the hardware for full autonomy, meaning once the software is finished it can be deployed on existing cars with an instant upgrade. Some experts have challenged this approach, arguing that the company has merely added surround cameras to its production cars that aren’t as capable as the laser-based sensing systems that most other carmakers are using.

But the company has collected data from hundreds of thousands of cars, driving millions of miles across all terrains. So we shouldn’t dismiss the firm’s founder, Elon Musk, when he claims a Level 4 Tesla will drive from LA to New York without any human intervention within the first half of 2018.

Winners

Who’s leading the race? Image Credit: IMD

At the moment, the disruptors like Tesla, Waymo, and Uber seem to have the upper hand. While the traditional automakers are focusing on bringing Level 3 and 4 partial automation to market, the new companies are leapfrogging them by moving more directly towards Level 5 full automation. Waymo may have the least experience of dealing with consumers in this sector, but it has already clocked up a huge amount of time testing some of the most advanced technology on public roads.

The incumbent carmakers are also focused on the difficult process of integrating new technology and business models into their existing manufacturing operations, often by buying up small companies. The challengers, on the other hand, are freely partnering with other big players, including manufacturers, to get the scale and expertise they need more quickly.

Tesla is building its own manufacturing capability but also collecting vast amounts of critical data that will enable it to more easily upgrade its cars when ready for full automation. In particular, Waymo’s experience, technology capability, and ability to secure solid partnerships puts it at the head of the pack.

This article was originally published on The Conversation. Read the original article.

Image Credit: Waymo

Posted in Human Robots

#432027 We Read This 800-Page Report on the ...

The longevity field is bustling but still fragmented, and the “silver tsunami” is coming.

That is the takeaway of The Science of Longevity, the behemoth first volume of a four-part series offering a bird’s-eye view of the longevity industry in 2017. The report, a joint production of the Biogerontology Research Foundation, Deep Knowledge Life Science, Aging Analytics Agency, and Longevity.International, synthesizes the growing array of academic and industry ventures related to aging, healthspan, and everything in between.

This is huge, not only in scale but also in ambition. The report, totally worth a read here, will be followed by four additional volumes in 2018, covering topics ranging from the business side of longevity ventures to financial systems to potential tensions between life extension and religion.

And that’s just the first step. The team hopes to publish updated versions of the report annually, giving scientists, investors, and regulatory agencies an easy way to keep their finger on the longevity pulse.

“In 2018, ‘aging’ remains an unnamed adversary in an undeclared war. For all intents and purposes it is mere abstraction in the eyes of regulatory authorities worldwide,” the authors write.

That needs to change.

People often arrive at the field of aging from disparate areas with wildly diverse opinions and strengths. The report compiles these individual efforts at cracking aging into a systematic resource—a “periodic table” for longevity that clearly lays out emerging trends and promising interventions.

The ultimate goal? A global framework serving as a road map to guide the burgeoning industry. With such a framework in hand, academics and industry alike are finally poised to petition the kind of large-scale investments and regulatory changes needed to tackle aging with a unified front.

Infographic depicting many of the key research hubs and non-profits within the field of geroscience. Image Credit: Longevity.International

The Aging Globe
The global population is rapidly aging. And our medical and social systems aren’t ready to handle this oncoming “silver tsunami.”

Take the medical field. Many age-related diseases such as Alzheimer’s lack effective treatment options. Others, including high blood pressure, stroke, lung or heart problems, require continuous medication and monitoring, placing enormous strain on medical resources.

What’s more, because disease risk rises exponentially with age, medical care for the elderly becomes a game of whack-a-mole: curing any individual disease such as cancer only increases healthy lifespan by two to three years before another one hits.

That’s why in recent years there’s been increasing support for turning the focus to the root of the problem: aging. Rather than tackling individual diseases, geroscience aims to add healthy years to our lifespan—extending “healthspan,” so to speak.

Despite this relative consensus, the field still faces a roadblock. The US FDA does not yet recognize aging as a bona fide disease. Without such a designation, scientists are banned from testing potential interventions for aging in clinical trials (that said, many have used alternate measures such as age-related biomarkers or Alzheimer’s symptoms as a proxy).

Luckily, the FDA’s stance is set to change. The promising anti-aging drug metformin, for example, is already in clinical trials, examining its effect on a variety of age-related symptoms and diseases. This report, and others to follow, may help push progress along.

“It is critical for investors, policymakers, scientists, NGOs, and influential entities to prioritize the amelioration of the geriatric world scenario and recognize aging as a critical matter of global economic security,” the authors say.

Biomedical Gerontology
The causes of aging are complex, stubborn, and not all clear.

But the report lays out two main streams of intervention with already promising results.

The first is to understand the root causes of aging and stop them before damage accumulates. It’s like meddling with cogs and other inner workings of a clock to slow it down, the authors say.

The report lays out several treatments to keep an eye on.

Geroprotective drugs are a big category. Often repurposed from drugs already on the market, these traditional small-molecule drugs target a wide variety of metabolic pathways that play a role in aging. Think antioxidants, anti-inflammatories, and drugs that mimic caloric restriction, a proven way to extend healthspan in animal models.

More exciting are the emerging technologies. One is nanotechnology. Nanoparticles of carbon, “bucky-balls,” for example, have already been shown to fight viral infections and dangerous ion particles, as well as stimulate the immune system and extend lifespan in mice (though others question the validity of the results).

Blood is another promising, if surprising, fountain of youth: recent studies found that molecules in the blood of the young rejuvenate the heart, brain, and muscles of aged rodents, though many of these findings have yet to be replicated.

Rejuvenation Biotechnology
The second approach is repair and maintenance.

Rather than meddling with the inner clockwork, here we simply wind the clock’s hands back. The main example? Stem cell therapy.

This type of approach would especially benefit the brain, which harbors small, scattered numbers of stem cells that deplete with age. For neurodegenerative diseases like Alzheimer’s, in which neurons progressively die off, stem cell therapy could in theory replace those lost cells and mend those broken circuits.

Once a blue-sky idea, stem cell therapy was hugely propelled toward reality by the discovery of induced pluripotent stem cells (iPSCs), which let scientists turn skin and other mature cells back into a stem-like state. But to date, stem cells haven’t been widely adopted in clinics.

It’s “a toolkit of highly innovative, highly invasive technologies with clinical trials still a great many years off,” the authors say.

But there is a silver lining. The boom in 3D tissue printing offers an alternative to stem cells for replacing aging organs. Recent investment from the Methuselah Foundation and other institutions suggests interest remains high, even though the technology is still a ways from mainstream use.

A Disruptive Future
“We are finally beginning to see an industry emerge from mankind’s attempts to make sense of the biological chaos,” the authors conclude.

Looking through the trends, they identified several technologies rapidly gaining steam.

One is artificial intelligence, which is already used to bolster drug discovery. Machine learning may also help identify new longevity genes or bring personalized medicine to the clinic based on a patient’s records or biomarkers.

Another is senolytics, a class of drugs that kill off “zombie cells.” Over 10 prospective candidates are already in the pipeline, with some expected to enter the market in less than a decade, the authors say.

Finally, there’s the big gun—gene therapy. The treatment, unlike others mentioned, can directly target the root of any pathology. With a snip (or a swap), genetic tools can turn off damaging genes or switch on ones that promote a youthful profile. It is the most preventative technology at our disposal.

There have already been some success stories in animal models. Using gene therapy, rodents given a boost in telomerase activity, which lengthens the protective caps of DNA strands, live healthier for longer.

“Although it is the prospect farthest from widespread implementation, it may ultimately prove the most influential,” the authors say.

Ultimately, can we stop the silver tsunami before it strikes?

Perhaps not, the authors say. But we do have defenses: the technologies outlined in the report, though still immature, could one day stop the oncoming tidal wave in its tracks.

Now we just have to bring them out of the lab and into the real world. To push the transition along, the team launched Longevity.International, an online meeting ground that unites various stakeholders in the industry.

By providing scientists, entrepreneurs, investors, and policy-makers a platform for learning and discussion, the authors say, we may finally generate enough drive to implement our defenses against aging. The war has begun.

Read the report in full here, and watch out for others coming soon here. The second part of the report profiles 650 (!!!) longevity-focused research hubs, non-profits, scientists, conferences, and literature. It’s an enormously helpful resource—totally worth keeping in your back pocket for future reference.

Image Credit: Worraket / Shutterstock.com

Posted in Human Robots

#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of mega-pixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense any more. Instead, we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
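
To make that measure-and-add step concrete, here is a minimal single-pixel simulation in Python, assuming NumPy, random binary illumination patterns, and a noiseless detector. The correlation-style reconstruction below is the simplest option (it's the approach used in so-called ghost imaging); real systems use carefully designed patterns and more sophisticated reconstruction algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "scene": a bright square on a dark background.
scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0
n_pixels = scene.size

# Each illumination pattern lights up a random half of the scene;
# oversampling keeps this naive reconstruction reasonably clean.
n_patterns = 8 * n_pixels
patterns = rng.integers(0, 2, size=(n_patterns, n_pixels)).astype(float)

# The single pixel records one number per pattern: total reflected light.
measurements = patterns @ scene.ravel()

# Correlation-based reconstruction: weight each pattern by how much its
# measurement deviates from the mean, then sum the weighted patterns.
weights = measurements - measurements.mean()
reconstruction = (weights[:, None] * patterns).sum(axis=0).reshape(scene.shape)

# Normalize for display; the bright square should stand out.
reconstruction = (reconstruction - reconstruction.min()) / np.ptp(reconstruction)
print(np.round(reconstruction[16, 4:12], 2))  # values should jump at the square's edge
```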

Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images from light particles that have never even interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information of which traditional techniques collect only a small part.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
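
The refocus-after-capture trick is, at its core, a shift-and-add operation: treat the capture as a grid of slightly offset sub-images, shift each one in proportion to its position on the aperture, and average. The sketch below is our illustration of that principle, assuming NumPy/SciPy and a light field already decoded into sub-aperture views; Lytro’s actual processing pipeline is proprietary and more involved.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetic refocusing by shift-and-add.

    light_field: array of shape (U, V, H, W), one sub-aperture view per
                 (u, v) position on the lens aperture.
    alpha:       refocus parameter; zero keeps the captured focal plane,
                 positive/negative values move it nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture center.
            du = alpha * (u - (U - 1) / 2)
            dv = alpha * (v - (V - 1) / 2)
            out += shift(light_field[u, v], (du, dv), order=1, mode="nearest")
    return out / (U * V)

# Usage sketch: sweep alpha to refocus after the shot has been taken.
# light_field = decode_raw_capture(raw)   # hypothetical decoding step
# near_focus = refocus(light_field, +1.5)
# far_focus = refocus(light_field, -1.5)
```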

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com

Posted in Human Robots

#431939 This Awesome Robot Is the Size of a ...

They say size isn’t everything, but when it comes to delta robots it seems like it’s pretty important.

The speed and precision of these machines see them employed in delicate pick-and-place tasks in all kinds of factories, as well as to control 3D printer heads. But Harvard researchers have found that scaling them down to millimeter scale makes them even faster and more precise, opening up applications in everything from microsurgery to manipulating tiny objects like circuit board components or even living cells.

Unlike the industrial robots you’re probably more familiar with, delta robots consist of three individually controlled arms supporting a platform. Different combinations of movements can move the platform in three directions, and a variety of tools can be attached to this platform.



The benefit of this design is that, unlike a typical robotic arm, all the motors are housed at the base rather than at the joints, which reduces the mechanical complexity but also—importantly—the weight of the arms. That means they can move and accelerate faster and with greater precision.

It’s been known for a while that the physics of these robots means the smaller you can make them, the more pronounced these advantages become, but scientists had struggled to build them at scales below tens of centimeters.

In a recent paper in the journal Science Robotics, the researchers describe how they used an origami-inspired micro-fabrication approach that relies on folding flat sheets of composite materials to create a robot measuring just 15 millimeters by 15 millimeters by 20 millimeters.

The robot, dubbed “milliDelta,” features joints that rely on a flexible polymer core to bend—a simplified version of the more complicated joints found in large-scale delta robots. The machine was powered by three piezoelectric actuators, which flex when a voltage is applied, and could perform movements at frequencies 15 to 20 times higher than current delta robots, with precision down to roughly 5 micrometers.

One potential use for the device is to cancel out surgeons’ hand tremors as they carry out delicate microsurgery procedures, such as operations on the eye’s retina. The researchers actually investigated this application in their paper. They got volunteers to hold a toothpick and measured the movement of the tip to map natural hand tremors. They fed this data to the milliDelta, which was able to match the movements and therefore cancel them out.
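
In control terms the idea is simple: estimate the unwanted tremor displacement of the tool tip at each instant and command the platform to move by the opposite amount. The snippet below is our conceptual sketch of that feed-forward cancellation, not the authors’ implementation, which also has to contend with sensing latency, filtering, and the robot’s limited workspace.

```python
import numpy as np

def cancellation_offsets(tremor_xy: np.ndarray) -> np.ndarray:
    """Return platform offsets that cancel measured tool-tip tremor.

    tremor_xy: (N, 2) array of tip displacements from the intended position,
               in micrometers, sampled at a fixed rate.
    """
    return -tremor_xy  # move the platform opposite to the measured tremor

# Toy tremor signal: physiological hand tremor is roughly an 8-12 Hz
# oscillation with an amplitude on the order of tens of micrometers.
t = np.linspace(0.0, 1.0, 1000)  # one second sampled at 1 kHz
tremor = np.stack([50 * np.sin(2 * np.pi * 10 * t),
                   30 * np.sin(2 * np.pi * 9 * t + 0.5)], axis=1)

residual = tremor + cancellation_offsets(tremor)  # motion the tool tip would see
print(np.max(np.abs(residual)))  # 0.0 in this idealized, delay-free case
```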

In an email to Singularity Hub, the researchers said that adding the robot to the end of a surgical tool could make it possible to stabilize needles or scalpels, though this would require some design optimization. For a start, the base would have to be redesigned to fit on a surgical tool, and sensors would have to be added to the robot to allow it to measure tremors in real time.

Another promising application for the device would be placing components on circuit boards at very high speeds, which could prove valuable in electronics manufacturing. The researchers even think the device’s precision means it could be used for manipulating living cells in research and clinical laboratories.

The researchers also said it would be feasible to integrate the devices onto microrobots to give them similarly impressive manipulation capabilities, though that would require considerable work to overcome control and sensing challenges.

Image Credit: Wyss Institute / Harvard

Posted in Human Robots