
#431368 This Week’s Awesome Stories From ...

INTERNET OF THINGS
Amazon Key Is a New Service That Lets Couriers Unlock Your Front Door
Ben Popper | The Verge
“When a courier arrives with a package for in-home delivery, they scan the barcode, sending a request to Amazon’s cloud. If everything checks out, the cloud grants permission by sending a message back to the camera, which starts recording. The courier then gets a prompt on their app, swipes the screen, and voilà, your door unlocks.”
ROBOTICS
Watch Yamaha’s Humanoid Robot Ride a Motorcycle Around a Racetrack
Philip E. Ross | IEEE Spectrum
“What’s striking is that the bike is unmodified: the robot is a hunched-over form on top. It senses the environment, calculates what to do, keeps the bike stable, manages acceleration and deceleration—all while factoring in road conditions, air resistance, and engine braking.”
ARTIFICIAL INTELLIGENCE
Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent
Cade Metz | The New York Times
“Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.”
HEALTH
This Doctor Diagnosed His Own Cancer With an iPhone Ultrasound
Antonio Regalado | MIT Technology Review
“The device he used, called the Butterfly IQ, is the first solid-state ultrasound machine to reach the market in the U.S. Ultrasound works by shooting sound into the body and capturing the echoes. Usually, the sound waves are generated by a vibrating crystal. But Butterfly’s machine instead uses 9,000 tiny drums etched onto a semiconductor chip.”
ENTREPRENEURSHIP
WeWork: A $20 Billion Startup Fueled by Silicon Valley Pixie Dust
Eliot Brown | Wall Street Journal
“WeWork’s strategy carries the costs and risks associated with traditional real estate. Its client list is heavily weighted toward startups that may or may not be around for long. WeWork is on the hook for long-term leases, and it doesn’t own its own buildings. Vacancy rates have risen recently, and the company is increasing incentives to draw tenants… The model has proved popular, with 150,000 individuals renting space in more than 170 locations globally.”
Image Credit: NIKITA TV / Shutterstock.com

Posted in Human Robots

#431315 Better Than Smart Speakers? Japan Is ...

While American internet giants are developing speakers, Japanese companies are working on robots and holograms. They all share a common goal: to create the future platform for the Internet of Things (IoT) and smart homes.
Names like Bocco, EMIEW3, Xperia Agent, and Gatebox may not ring a bell for most people outside Japan, but Sony, Hitachi, Sharp, and Softbank most certainly do. These companies, along with Japanese startups, are the ones behind that short list of names, having developed robots, robot concepts, and even holograms.
While there are distinct differences between the various systems, they share the potential to act as a remote control for IoT devices and smart homes. It is a very different direction than that taken by companies like Google, Amazon, and Apple, who have so far focused on building IoT speaker systems.
Bocco robot. Image Credit: Yukai Engineering
“Technology companies are pursuing the platform—or smartphone if you will—for IoT. My impression is that Japanese companies—and Japanese consumers—prefer that such a platform should not just be an object, but a companion,” says Kosuke Tatsumi, designer at Yukai Engineering, a startup that has developed the Bocco robot system.
At Hitachi, a spokesperson said that the company’s human symbiotic service robot, EMIEW3, is currently in the field, doing proof-of-value tests at customer sites to investigate needs and potential solutions. This could include working as an interactive control system for the Internet of Things:
“EMIEW3 is able to communicate with humans, thus receive instructions, and as it is connected to a robotics IT platform, it is very much capable of interacting with IoT-based systems,” the spokesperson said.
The power of speech is getting feet
Gartner analysis predicts that there will be 8.4 billion internet-connected devices—collectively making up the Internet of Things—by the end of 2017, with 5.2 billion of those devices in the consumer category. By the end of 2020, the number of IoT devices is predicted to rise to 12.8 billion—and that is just the consumer category.
As a child of the 80s, I can vividly remember how fun it was to have separate remote controls for the TV, video, and stereo. I can imagine a situation where my internet-connected refrigerator, thermostat, television, and toaster all try to work out which one of them I’m talking to and what I want it to do.
Consensus seems to be that speech will be the way we interact with many, if not most, IoT devices, and that some form of virtual assistant will function as the IoT platform, or remote control. Almost everything else is still an open question, despite an early surge for speaker-based systems like those from Amazon, Google, and Apple.
Why robots could rule
Famous android creator and robot scientist Dr. Hiroshi Ishiguro sees the interaction between humans and the AI embedded in speakers or robots as central to both approaches. From there, the approaches differ greatly.
Image Credit: Hiroshi Ishiguro Laboratories
“It is about more than the difference of form. Speaking to an Amazon Echo is not a natural kind of interaction for humans. That is part of what we in Japan are creating in many human-like robot systems,” he says. “The human brain is constructed to recognize and interact with humans. This is part of why it makes sense to focus on developing the body for the AI mind as well as the AI mind itself. In a way, you can describe it as the difference between developing an assistant, which could be said to be what many American companies are currently doing, and a companion, which is more the focus here in Japan.”
Another advantage is that robots are more kawaii—a multifaceted Japanese word that can be translated as “cute”—than speakers are. This makes it easy for people to relate to them and forgive them.
“People are more willing to forgive children when they make mistakes, and the same is true with a robot like Bocco, which is designed to look kawaii and childlike,” Kosuke Tatsumi explains.
Japanese robots and holograms with IoT-control capabilities
So, what exactly do these robot and hologram companions look like, what can they do, and who’s making them? Here are seven examples of Japanese companies working to go a step beyond smart speakers with personable robots and holograms.
1. In 2016 Sony’s mobile division demonstrated the Xperia Agent concept robot that recognizes individual users, is voice controlled, and can do things like control your television and receive calls from services like Skype.

2. Sharp launched its Home Assistant at CES 2016, a robot-like, voice-controlled assistant that can control, among other things, air conditioning units and televisions. Sharp has also launched a robotic phone called RoBoHon.
3. Gatebox has created a holographic virtual assistant. Skeptics will say that it is primarily the expression of an otaku (Japanese for nerd) dream of living with a manga heroine. Gatebox is, however, able to control things like lights, TVs, and other systems through API integration. It also provides its owner with weather-related advice like “remember your umbrella, it looks like it will rain later.” Gatebox can be controlled by voice, gesture, or via an app.
4. Hitachi’s EMIEW3 robot is designed to assist people in businesses and public spaces. It is connected via the cloud to a robotics IT platform that acts as a “remote brain.” Hitachi is currently investigating business use cases for EMIEW3, which could include acting as a control platform for IoT devices.

5. Softbank’s Pepper robot has been used by Avatarion as a platform to control medical IoT devices such as smart thermometers. The company has also developed various in-house systems that enable Pepper to control IoT devices like a coffee machine: a user simply asks Pepper to brew a cup of coffee, and the robot starts the machine.
6. Yukai Engineering’s Bocco registers when a person (e.g., young child) comes home and acts as a communication center between that person and other members of the household (e.g., parent still at work). The company is working on integrating voice recognition, voice control, and having Bocco control things like the lights and other connected IoT devices.
7. Last year Toyota launched the Kirobo Mini, a companion robot which aims to, among other things, help its owner by suggesting “places to visit, routes for travel, and music to listen to” during the drive.

Today, Japan. Tomorrow…?
One of the key questions is whether this emerging phenomenon is a purely Japanese thing, whether the country’s love of robots makes it fundamentally different. Japan is, after all, a country where new units of Softbank’s Pepper robot routinely sell out in minutes and where the RoBoHon robot-phone has its own cafe nights in Tokyo.
It is a country where TV introduces you to friendly, helpful robots like Doraemon and Astro Boy. I, on the other hand, first met robots in the shape of Arnold Schwarzenegger’s Terminator and struggled to work out why robots seemed intent on permanently borrowing things like clothes and motorcycles, not to mention why they hated people called Sarah.
However, research suggests that a big part of the reason the Japanese seem to like robots is a combination of exposure and positive experiences, which leads to greater acceptance. As robots spread to more and more industries—and into our homes—our acceptance of them will grow.
The argument is also supported by an Avatarion project that used Softbank’s Nao robot as a classroom stand-in for children who were in the hospital.
“What we found was that the other children quickly adapted to interacting with the robot and treating it as the physical representation of the child who was in hospital. They accepted it very quickly,” Thierry Perronnet, General Manager of Avatarion, explains.
His company has also developed solutions where Softbank’s Pepper robot is used as an in-home nurse and controls various medical IoT devices.
If robots end up becoming our preferred method for controlling IoT devices, it is by no means certain that said robots will be coming from Japan.
“I think that the goal for both Japanese and American companies—including the likes of Google, Amazon, Microsoft, and Apple—is to create human-like interaction. For this to happen, technology needs to evolve and adapt to us and how we are used to interacting with others, in other words, have a more human form. Humans’ speed of evolution cannot keep up with technology’s, so it must be the technology that changes,” Dr. Ishiguro says.
Image Credit: Sony Mobile Communications


#431301 Collective Intelligence Is the Root of ...

Many of us intuitively think of intelligence as an individual trait. As a society, we have a tendency to praise individual game-changers for accomplishments that would not be possible without their teams, often tens of thousands of people who work behind the scenes to make extraordinary things happen.
Matt Ridley, best-selling author of multiple books, including The Rational Optimist: How Prosperity Evolves, challenges this view. He argues that human achievement and intelligence are entirely “networking phenomena.” In other words, intelligence is collective and emergent as opposed to individual.
When asked what scientific concept would improve everybody’s cognitive toolkit, Ridley highlights collective intelligence: “It is by putting brains together through the division of labor—through trade and specialization—that human society stumbled upon a way to raise the living standards, carrying capacity, technological virtuosity, and knowledge base of the species.”
Ridley has spent a lifetime exploring human prosperity and the factors that contribute to it. In a conversation with Singularity Hub, he redefined how we perceive intelligence and human progress.
Raya Bidshahri: The common perspective seems to be that competition is what drives innovation and, consequently, human progress. Why do you think collaboration trumps competition when it comes to human progress?
Matt Ridley: There is a tendency to think that competition is an animal instinct that is natural and collaboration is a human instinct we have to learn. I think there is no evidence for that. Both are deeply rooted in us as a species. The evidence from evolutionary biology tells us that collaboration is just as important as competition. Yet, in the end, the Darwinian perspective is quite correct: it’s usually cooperation for the purpose of competition, wherein a given group tries to achieve something more effectively than another group. But the point is that the capacity to cooperate is very deep in our psyche.
RB: You write that “human achievement is entirely a networking phenomenon,” and we need to stop thinking about intelligence as an individual trait, and that instead we should look at what you refer to as collective intelligence. Why is that?
MR: The best way to think about it is that IQ doesn’t matter, because a hundred stupid people who are talking to each other will accomplish more than a hundred intelligent people who aren’t. It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain. You can’t possibly hold in your head all the knowledge you need to do these things. For the last 200,000 years we’ve been exchanging and specializing, which enables us to achieve much greater intelligence than we can as individuals.
RB: We often think of achievement and intelligence on individual terms. Why do you think it’s so counter-intuitive for us to think about collective intelligence?
MR: People are surprisingly myopic when it comes to understanding the nature of intelligence. I think it goes back to a pre-human tendency to think in terms of individual stories and actors. For example, we love to read about the famous inventor or scientist who invented or discovered something. We never tell these stories as network stories. We tell them as individual hero stories.

This idea of a brilliant hero who saves the world in the face of every obstacle seems to speak to tribal hunter-gatherer societies, where the alpha male leads and wins. But it doesn’t resonate with how human beings have structured modern society in the last 100,000 years or so. We modern-day humans haven’t internalized a way of thinking that incorporates this definition of distributed and collective intelligence.
RB: One of the books you’re best known for is The Rational Optimist. What does it mean to be a rational optimist?
MR: My optimism is rational because it’s not based on a feeling, it’s based on evidence. If you look at the data on human living standards over the last 200 years and compare it with the way that most people actually perceive our progress during that time, you’ll see an extraordinary gap. On the whole, people seem to think that things are getting worse, but things are actually getting better.
We’ve seen the most astonishing improvements in human living standards: we’ve brought the share of people living in extreme poverty down to 9 percent from about 70 percent when I was born. The human lifespan is expanding by five hours a day, child mortality has gone down by two thirds in half a century, and much more. These feats dwarf the things that are going wrong. Yet most people are quite pessimistic about the future despite the things we’ve achieved in the past.
RB: Where does this idea of collective intelligence fit in rational optimism?
MR: Underlying the idea of rational optimism was understanding what prosperity is, and why it happens to us and not to rabbits or rocks. Why are we the only species in the world that has concepts like GDP, growth rates, or living standards? My answer is that it comes back to this phenomenon of collective intelligence. The reason for a rise in living standards is innovation, and the cause of that innovation is our ability to collaborate.
The grand theme of human history is exchange of ideas, collaborating through specialization and the division of labor. Throughout history, it’s in places where there is a lot of open exchange and trade where you get a lot of innovation. And indeed, there are some extraordinary episodes in human history when societies get cut off from exchange and their innovation slows down and they start moving backwards. One example of this is Tasmania, which was isolated and lost a lot of the technologies it started off with.
RB: Lots of people like to point out that just because the world has been getting better doesn’t guarantee it will continue to do so. How do you respond to that line of argumentation?
MR: There is a quote by Thomas Babington Macaulay from 1830, where he was fed up with the pessimists of the time saying things will only get worse. He says, “On what principle is it that with nothing but improvement behind us, we are to expect nothing but deterioration before us?” And this was back in the 1830s, when Britain and a few other parts of the world were only seeing the beginning of the rise in living standards. It’s perverse to argue that because things were getting better in the past, now they are about to get worse.

Another thing to point out is that people have always said this. Every generation thought they were at the peak looking downhill. If you think about the opportunities technology is about to give us, whether it’s through blockchain, gene editing, or artificial intelligence, there is every reason to believe that 2017 is going to look like a time of absolute misery compared to what our children and grandchildren are going to experience.
RB: There seems to be a fair amount of mayhem in today’s world, and lots of valid problems to pay attention to in the news. What would you say to empower our readers that we will push through it and continue to grow and improve as a species?
MR: I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news. It’s happening in an inexorable way, as a result of ordinary people exchanging, specializing, collaborating, and innovating, and it’s surprisingly hard to stop it.
Even if you look back to the 1940s, at the end of a world war, there was still a lot of innovation happening. In some ways it feels like we are going through a bad period now. I do worry a lot about the anti-enlightenment values that I see spreading in various parts of the world. But then I remind myself that people are working on innovative projects in the background, and these things are going to come through and push us forward.
Image Credit: Sahacha Nilkumhang / Shutterstock.com



#431159 How Close Is Turing’s Dream of ...

The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brains for a test that would truly indicate that a computer program was intelligent, he landed on this area. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then it would indicate that artificial intelligence had advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some very notable failures, however: Joseph Weizenbaum, as early as 1966—when computers were still programmed with large punch-cards—developed a piece of natural language processing software called ELIZA. ELIZA was a machine intended to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
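Weizenbaum’s original script was far more elaborate, but the rephrase-and-reflect trick ELIZA relies on can be sketched in a few lines of Python. The patterns and responses below are illustrative, not taken from the original script:

```python
import re

# Reflect first-person phrases back at the speaker, ELIZA-style.
REFLECTIONS = {"i'm": "you are", "i am": "you are", "my": "your", "i": "you", "me": "you"}

# (pattern, response template) pairs; a tiny subset of what a real script holds.
RULES = [
    (re.compile(r"i'?m (.*)", re.IGNORECASE), "Did you come to me because you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance):
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # the fallback when nothing matches

print(respond("I'm feeling depressed"))  # → Did you come to me because you are feeling depressed?
print(respond("What's the weather?"))    # → Tell me more.
```

Because the response reuses the speaker’s own words, the illusion of understanding comes almost for free, which is part of why so little machinery produced so strong an effect.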
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect was named for this computer program: the tendency of humans to anthropomorphize machines, or think of them as human.

Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people who are trying to create conversational machines. Natural language processing has proceeded in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machine that comes closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon be there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.

This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
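The explosion is easy to make concrete. Assuming, purely for illustration, that each utterance admits 100 sensible replies, the number of distinct conversations grows as a power of the conversation depth:

```python
# Number of distinct conversations if every utterance admits `branching`
# sensible replies and the dialogue runs for `turns` exchanges.
def possible_conversations(branching: int, turns: int) -> int:
    return branching ** turns

# Even a modest branching factor explodes within a few turns.
for turns in (1, 2, 5, 10):
    print(f"{turns:>2} turns: {possible_conversations(100, turns):,} conversations")
```

By ten turns the count reaches 10^20, far beyond what any table of scripted responses could enumerate, which is why narrowing the topic (as discussed below) is such an attractive shortcut.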
But that hasn’t deterred people from trying. Most recently, tech giant Amazon has entered the fray in an effort to make its AI voice assistant, Alexa, friendlier. The company has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI—and a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can hold almost any conversation with a human interlocutor convincingly might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner is announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock


#431142 Will Privacy Survive the Future?

Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, all of our emotional experiences, reactions, and interactions in the virtual world could be accessed and analyzed. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with powerful implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. There are many parties that can benefit and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think and, consequently, in how their thoughts might be modified. Employers could use the information to find new ways to improve productivity, or even to monitor their employees. Notably, there will be risks of “brain hacking,” against which we must take extreme precautions. However, it is important to note that lesser versions of these risks already exist, i.e., through phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This has led to many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold in the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.

There also seems to be a contradiction between the positive trend toward mass transparency and the need to protect privacy. Many advocate for a massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy is used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve accomplished privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other words, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5
