
#432482 This Week’s Awesome Stories From ...

CYBERNETICS
A Brain-Boosting Prosthesis Moves From Rats to Humans
Robbie Gonzalez | WIRED
“Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.”

ARTIFICIAL INTELLIGENCE
Here’s How the US Needs to Prepare for the Age of Artificial Intelligence
Will Knight | MIT Technology Review
“The Trump administration has abandoned this vision and has no intention of devising its own AI plan, say those working there. They say there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes… That looks like a huge mistake. If it essentially ignores such a technological transformation, the US might never make the most of an opportunity to reboot its economy and kick-start both wage growth and job creation. Failure to plan could also cause the birthplace of AI to lose ground to international rivals.”

BIOMIMICRY
Underwater GPS Inspired by Shrimp Eyes
Jeremy Hsu | IEEE Spectrum
“A few years ago, U.S. and Australian researchers developed a special camera inspired by the eyes of mantis shrimp that can see the polarization patterns of light waves, which resemble those in a rope being waved up and down. That means the bio-inspired camera can detect how light polarization patterns change once the light enters the water and gets deflected or scattered.”

POLITICS & TECHNOLOGY
‘The Business of War’: Google Employees Protest Work for the Pentagon
Scott Shane and Daisuke Wakabayashi | The New York Times
“Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. ‘We believe that Google should not be in the business of war,’ says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ‘ever build warfare technology.’ (Read the text of the letter.)”

CYBERNETICS
MIT’s New Headset Reads the ‘Words in Your Head’
Brian Heater | TechCrunch
“A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation. AlterEgo is a head-mounted—or, more properly, jaw-mounted—device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading ‘words in your head.’”



Image Credit: christitzeimaging.com / Shutterstock.com

Posted in Human Robots

#432431 Why Slowing Down Can Actually Help Us ...

Leah Weiss believes that when we pay attention to how we do our work—our thoughts and feelings about what we do and why we do it—we can tap into a much deeper reservoir of courage, creativity, meaning, and resilience.

As a researcher, educator, and author, Weiss teaches a course called “Leading with Compassion and Mindfulness” at the Stanford Graduate School of Business, one of the most competitive MBA programs in the world, and runs programs at HopeLab.

Weiss is the author of the new book How We Work: Live Your Purpose, Reclaim Your Sanity and Embrace the Daily Grind, endorsed by the Dalai Lama, among others. I caught up with Leah to learn more about how the practice of mindfulness can deepen our individual and collective purpose and passion.

Lisa Kay Solomon: We’re hearing a lot about mindfulness these days. What is mindfulness and why is it so important to bring into our work? Can you share some of the basic tenets of the practice?

Leah Weiss, PhD: Mindfulness is, in its most literal sense, “the attention to inattention.” It’s as simple as noticing when you’re not paying attention and then re-focusing. It is prioritizing what is happening right now over internal and external noise.

The ability to work well with difficult coworkers, handle constructive feedback and criticism, regulate emotions at work—all of these things can come from regular mindfulness practice.

Some additional benefits of mindfulness are a greater sense of compassion (both self-compassion and compassion for others) and a way to seek and find purpose in even mundane things (and especially at work). From the business standpoint, mindfulness at work leads to increased productivity and creativity, mostly because when we are focused on one task at a time (as opposed to multitasking), we produce better results.

We spend more time with our co-workers than we do with our families; if our work relationships are negative, we suffer both mentally and physically. Even worse, we take all of those negative feelings home with us at the end of the work day. The antidote to this prescription for unhappiness is to have clear, strong purpose (one third of people do not have purpose at work and this is a major problem in the modern workplace!). We can use mental training to grow as people and as employees.

LKS: What are some recommendations you would make to busy leaders who are working around the clock to change the world?

LW: I think the most important thing is to remember to tend to our relationship with ourselves while trying to change the world. If we’re beating up on ourselves all the time we’ll be depleted.

People passionate about improving the world can get into habits of believing self-care isn’t important. We demand a lot of ourselves. It’s okay to fail, to mess up, to make mistakes—what’s important is how we learn from those mistakes and what we tell ourselves about those instances. What is the “internal script” playing in your own head? Is it positive, supporting, and understanding? It should be. If it isn’t, you can work on it. And the changes you make won’t just improve your quality of life, they’ll make you more resilient to weather life’s inevitable setbacks.

A close second recommendation is to always consider where everyone in an organization fits and help everyone (including yourself) find purpose. When you know what your own purpose is and show others their purpose, you can motivate a team and help everyone on it take pride in their work. To get at this, make sure to ask people on your team what really lights them up. What sucks their energy and depletes them? If we know our own answers to these questions and relate them to the people we work with, we can create more engaged organizations.

LKS: Can you envision a future where technology and mindfulness can work together?

LW: Technology and mindfulness are already starting to work together. Some artificial intelligence companies are considering things like mindfulness and compassion when building robots, and there are numerous apps that target spreading mindfulness meditations in a widely-accessible way.

LKS: Looking ahead at our future generations who seem more attached to their devices than ever, what advice do you have for them?

LW: It’s unrealistic to say “stop using your device so much,” so instead, my suggestion is to make time for doing things like scrolling social media and make the same amount of time for putting your phone down and watching a movie or talking to a friend. No matter what it is that you are doing, make sure you have meta-awareness or clarity about what you’re paying attention to. Be clear about where your attention is and recognize that you can be a steward of attention. Technology can support us in this or pull us away from this; it depends on how we use it.

Image Credit: frankie’s / Shutterstock.com


#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
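The buoyancy trick described above is just Archimedes’ principle: the robot rises or sinks depending on whether the water it displaces weighs more or less than the robot itself. Here is a minimal sketch of that trade-off, with made-up mass and volume numbers; SoFi’s actual figures and control code aren’t given in the article.

```python
# Illustrative Archimedes-style buoyancy check (not SoFi's firmware):
# the robot sinks, floats, or hovers depending on whether its overall
# density exceeds, falls below, or matches the water's.

RHO_WATER = 1025.0  # kg/m^3, typical seawater density

def net_buoyant_force(mass_kg: float, displaced_volume_m3: float,
                      g: float = 9.81) -> float:
    """Upward buoyant force minus weight; positive means the body rises."""
    return (RHO_WATER * displaced_volume_m3 - mass_kg) * g

# Hypothetical numbers: a 1.6 kg robot whose air compartment lets it
# vary its total displaced volume slightly around the neutral point.
neutral_volume = 1.6 / RHO_WATER  # volume at which it hovers
for volume in (neutral_volume * 0.99, neutral_volume, neutral_volume * 1.01):
    force = net_buoyant_force(1.6, volume)
    state = "rises" if force > 1e-6 else "sinks" if force < -1e-6 else "hovers"
    print(f"volume={volume * 1e3:.3f} L -> net force {force:+.4f} N ({state})")
```

Compressing the air shrinks the displaced volume and the robot dives; decompressing does the opposite.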

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo-fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
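The back-and-forth tail sweep and the reported cruising speed can be sketched numerically. Only the 21.7-centimeter-per-second average speed comes from the article; the tail amplitude and beat frequency below are assumed purely for illustration.

```python
import math

AVG_SPEED_CM_S = 21.7  # reported average: half a body length per second

def tail_angle(t: float, amplitude_deg: float = 30.0,
               freq_hz: float = 1.4) -> float:
    """Idealized tail deflection over time: the two chambers fill
    alternately, so the tail sweeps side to side roughly sinusoidally.
    Amplitude and frequency are assumptions, not SoFi measurements."""
    return amplitude_deg * math.sin(2 * math.pi * freq_hz * t)

# At the reported average speed, a 40-minute dive covers about half a km.
dive_seconds = 40 * 60
distance_m = AVG_SPEED_CM_S * dive_seconds / 100
print(f"distance covered in 40 min: {distance_m:.1f} m")  # 520.8 m
```

Pumping water into one chamber faster than the other would bias the sweep to one side, which is how the hydraulic system steers as well as propels.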

Besides looking neat, it’s important that SoFi look lifelike so it can blend in with marine life without scaring real fish away, allowing it to get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo-fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL


#432193 Are ‘You’ Just Inside Your Skin or ...

In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Sergii Tverdokhlibov / Shutterstock.com


#432165 Silicon Valley Is Winning the Race to ...

Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, efficient, and reasonably priced mechanical transportation. Ford later came to dominate the industry thanks to the development of the moving assembly line.

Today, the sector is poised for another breakthrough with the advent of cars that drive themselves. But unlike the original wave of automobile innovation, the race for supremacy in autonomous vehicles is concentrated among a few corporate giants. So who is set to dominate this time?

I’ve analyzed six companies we think are leading the race to build the first truly driverless car. Three of these—General Motors, Ford, and Volkswagen—come from the existing car industry and need to integrate self-driving technology into their existing fleet of mass-produced vehicles. The other three—Tesla, Uber, and Waymo (owned by the same company as Google)—are newcomers from the digital technology world of Silicon Valley and have to build a mass manufacturing capability.

While it’s impossible to know all the developments at any given time, we have tracked investments, strategic partnerships, and official press releases to learn more about what’s happening behind the scenes. The car industry typically rates self-driving technology on a scale from Level 0 (no automation) to Level 5 (full automation). We’ve assessed where each company is now and estimated how far they are from reaching the top level. Here’s how we think each player is performing.
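The 0-to-5 scale the article uses (SAE J3016 terminology) can be summarized as a simple lookup; the short descriptions below paraphrase those given in the text, and the level-1 example is a common illustration rather than one drawn from the article.

```python
# The driving-automation scale referenced in the article, as a lookup table.
AUTOMATION_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: one function (e.g. cruise control) is automated.",
    2: "Partial automation: combined functions such as cruise control "
       "plus lane centering; the driver stays engaged.",
    3: "Conditional automation: the computer handles all driving, "
       "but a human must be ready to take over.",
    4: "High automation: the car drives itself except in certain "
       "conditions, such as poor road surfaces or bad weather.",
    5: "Full automation: the car drives entirely on its own.",
}

def describe(level: int) -> str:
    """Return the description for an SAE-style automation level."""
    if level not in AUTOMATION_LEVELS:
        raise ValueError(f"SAE levels run 0-5, got {level}")
    return AUTOMATION_LEVELS[level]

print(describe(3))
```

On this scale, the Audi A8 discussed below sits at level 3, Ford’s road tests at level 4, and GM’s planned Cruise AV at level 5.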

Volkswagen
Volkswagen has invested in taxi-hailing app Gett and partnered with chip-maker Nvidia to develop an artificial intelligence co-pilot for its cars. In 2018, the VW Group is set to release the Audi A8, the first production vehicle that reaches Level 3 on the scale, “conditional driving automation.” This means the car’s computer will handle all driving functions, but a human has to be ready to take over if necessary.

Ford
Ford already sells cars with a Level 2 autopilot, “partial driving automation.” This means one or more aspects of driving are controlled by a computer based on information about the environment, for example, combined cruise control and lane centering. Alongside other investments, the company has put $1 billion into Argo AI, an artificial intelligence company for self-driving vehicles. Following a trial to test pizza delivery using autonomous vehicles, Ford is now testing Level 4 cars on public roads. These feature “high automation,” where the car can drive entirely on its own but not in certain conditions, such as when the road surface is poor or the weather is bad.

General Motors
GM also sells vehicles with Level 2 automation but, after buying Silicon Valley startup Cruise Automation in 2016, now plans to launch the first mass-production-ready Level 5 vehicle, one that drives completely on its own, by 2019. The Cruise AV will have no steering wheel or pedals to allow a human to take over and will be part of a large fleet of driverless taxis the company plans to operate in big cities. But crucially, the company hasn’t yet secured permission to test the car on public roads.

Waymo (Google)

Waymo Level 5 testing. Image Credit: Waymo

Founded as a special project in 2009, Waymo separated from Google (though they’re both owned by the same parent firm, Alphabet) in 2016. Though it has never made, sold, or operated a car on a commercial basis, Waymo has created test vehicles that have clocked more than 4 million miles without human drivers as of November 2017. Waymo tested its Level 5 car, “Firefly,” between 2015 and 2017 but then decided to focus on hardware that could be installed in other manufacturers’ vehicles, starting with the Chrysler Pacifica.

Uber
The taxi-hailing app maker Uber has been testing autonomous cars on the streets of Pittsburgh since 2016, always with an employee behind the wheel ready to take over in case of a malfunction. After buying the self-driving truck company Otto in 2016 for a reported $680 million, Uber is now expanding its AI capabilities and plans to test Nvidia’s latest chips in Otto’s vehicles. It has also partnered with Volvo to create a self-driving fleet of cars and with Toyota to co-create a ride-sharing autonomous vehicle.

Tesla
The first major car manufacturer to come from Silicon Valley, Tesla was also the first to introduce Level 2 autopilot back in 2015. The following year, it announced that all new Teslas would have the hardware for full autonomy, meaning once the software is finished it can be deployed on existing cars with an instant upgrade. Some experts have challenged this approach, arguing that the company has merely added surround cameras to its production cars that aren’t as capable as the laser-based sensing systems that most other carmakers are using.

But the company has collected data from hundreds of thousands of cars, driving millions of miles across all terrains. So, we shouldn’t dismiss the firm’s founder, Elon Musk, when he claims a Level 4 Tesla will drive from LA to New York without any human interference within the first half of 2018.

Winners

Who’s leading the race? Image Credit: IMD

At the moment, the disruptors like Tesla, Waymo, and Uber seem to have the upper hand. While the traditional automakers are focusing on bringing Level 3 and 4 partial automation to market, the new companies are leapfrogging them by moving more directly towards Level 5 full automation. Waymo may have the least experience of dealing with consumers in this sector, but it has already clocked up a huge amount of time testing some of the most advanced technology on public roads.

The incumbent carmakers are also focused on the difficult process of integrating new technology and business models into their existing manufacturing operations by buying up small companies. The challengers, on the other hand, are easily partnering with other big players including manufacturers to get the scale and expertise they need more quickly.

Tesla is building its own manufacturing capability but also collecting vast amounts of critical data that will enable it to more easily upgrade its cars when ready for full automation. In particular, Waymo’s experience, technology capability, and ability to secure solid partnerships puts it at the head of the pack.

This article was originally published on The Conversation. Read the original article.

Image Credit: Waymo
