Tag Archives: LED

#437416 Robotics firm expands autonomous data ...

Back in 2013, local Brooklyn papers were excitedly reporting on a new initiative aimed at getting residents involved in cleaning up the highly polluted Gowanus Canal. Brooklyn Atlantis, as the project was known, was the brainchild of NYU Tandon Professor of Mechanical and Aerospace Engineering Maurizio Porfiri, who envisioned building and launching robotic boats to collect water-quality data and capture images of the infamous canal, which citizen scientists would then view and help classify. Those robotic boats ultimately led to the formation of the company Manifold Robotics, which aimed to further develop the unmanned surface vehicles (USVs) with sensor technology. (The fledgling company received support from PowerBridgeNY, a collaborative initiative to bring university research to market.) More recently, the startup has branched out to develop a mobile data collection platform that allows unmanned aerial vehicles (UAVs) to operate safely in the sky near power lines. Continue reading

Posted in Human Robots

#437357 Algorithms Workers Can’t See Are ...

“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in 2001: A Space Odyssey has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.

In the movies, when a machine decides to be the boss (or humans let it), things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.

Algorithms—sets of instructions to solve a problem or complete a task—now drive everything from browser search results to better medical care.

They are helping design buildings. They are speeding up trading on financial markets, making and losing fortunes in micro-seconds. They are calculating the most efficient routes for delivery drivers.

In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance, and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”

Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called "algorithmic management." It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.

At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It's a black box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.

Algorithms at Work
Here are a few examples of algorithms already at work.

At Amazon's fulfillment center in south-east Melbourne, algorithms set the pace for "pickers," who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a "not quite walking, not quite running" speed.

Or how about AI determining your success in a job interview? More than 700 companies have trialed such technology. US developer HireVue says its software speeds up the hiring process by 90 percent by having applicants answer identical questions and then scoring them according to language, tone, and facial expressions.

Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges, probation, and parole officers to rate a person's risk of re-offending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45 percent of the time, compared with 23 percent for white subjects.
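
To make concrete what "incorrectly classifying black subjects as higher risk 45 percent of the time" measures, here is a minimal, hypothetical sketch. The data below is made up; only the calculation, false positives as a share of people who did not re-offend, reflects how such group-wise error rates are typically reported:

```python
# Group-wise false positive rate: among people who did NOT re-offend,
# what fraction did the tool label "high risk"? All data here is hypothetical.

def false_positive_rate(records):
    """records: list of (predicted_high_risk: bool, reoffended: bool) pairs."""
    non_reoffenders = [pred for pred, actual in records if not actual]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical toy data for two groups (not COMPAS data).
group_a = [(True, False)] * 45 + [(False, False)] * 55   # labeled high risk 45 times out of 100
group_b = [(True, False)] * 23 + [(False, False)] * 77   # labeled high risk 23 times out of 100

print(false_positive_rate(group_a))  # 0.45
print(false_positive_rate(group_b))  # 0.23
```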

How Gig Workers Cope
Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinize, or even understand.

Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo, and other platforms could not exist without algorithms allocating, monitoring, evaluating, and rewarding work.

Over the past year Uber Eats' bicycle couriers and drivers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs and incomes.

Riders can't be 100 percent sure it was all down to the algorithm. But that's part of the problem. The fact that those who depend on the algorithm don't know one way or the other has a powerful influence on them.

This is a key result from our interviews with 58 food-delivery couriers. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.

In response, they developed a range of strategies, based partly on guesswork, to "win" more jobs, such as accepting gigs as quickly as possible and waiting in "magic" locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.

The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.

Our data also confirmed others' findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there's no one to complain to anyway. When Uber Eats bicycle couriers asked why their incomes had plummeted, for example, the company's responses advised them that "we have no manual control over how many deliveries you receive."

Broader Lessons
When algorithmic management operates as a "black box," one of the consequences is that it can become an indirect control mechanism. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilize a reliable and scalable workforce while avoiding employer responsibilities.

"The absence of concrete evidence about how the algorithms operate," the Victorian government's inquiry into the "on-demand" workforce notes in its report, "makes it hard for a driver or rider to complain if they feel disadvantaged by one."

The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.”

But it is precisely the fact it is hard to confirm that’s the problem. How can we start to even identify, let alone resolve, issues like algorithmic management?

Fair conduct standards to ensure transparency and accountability are a start. One example is the Fair Work initiative, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions, and regulators to develop global principles for work in the platform economy. This includes “fair management,” which focuses on how transparent the results and outcomes of algorithms are for workers.

Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: PickPik Continue reading

Posted in Human Robots

#437337 6G Will Be 100 Times Faster Than ...

Though 5G—a next-generation speed upgrade to wireless networks—is scarcely up and running (and still nonexistent in many places), researchers are already working on what comes next. It lacks an official name, but they're calling it 6G for the sake of simplicity (and hey, it's tradition). 6G promises to be up to 100 times faster than 5G—fast enough to download 142 hours of Netflix in a second—but researchers are still trying to figure out exactly how to make such ultra-speedy connections happen.
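
As a rough sanity check on that Netflix figure, here is a minimal back-of-the-envelope sketch. The 1 terabit-per-second link rate and the roughly 1 GB per hour for a standard-definition stream are assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope check: hours of video downloadable per second on a ~1 Tbps link.
link_rate_bits_per_s = 1e12                    # assumed 6G data rate: 1 terabit per second
bytes_per_second = link_rate_bits_per_s / 8    # = 125 GB per second

gb_per_hour_of_video = 1.0                     # assumed standard-definition stream: ~1 GB per hour
bytes_per_hour_of_video = gb_per_hour_of_video * 1e9

hours_per_second = bytes_per_second / bytes_per_hour_of_video
print(f"~{hours_per_second:.0f} hours of video downloadable per second")
# ~125 hours per second, the same order of magnitude as the quoted 142 hours
```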

A new chip, described in a paper in Nature Photonics by a team from Osaka University and Nanyang Technological University in Singapore, may give us a glimpse of our 6G future. The team was able to transmit data at a rate of 11 gigabits per second, topping 5G’s theoretical maximum speed of 10 gigabits per second and fast enough to stream 4K high-def video in real time. They believe the technology has room to grow, and with more development, might hit those blistering 6G speeds.

Image caption: NTU final-year PhD student Abhishek Kumar, Assoc. Prof. Ranjan Singh, and postdoc Dr. Yihao Yang. Dr. Singh is holding the photonic topological insulator chip, made from silicon, which can transmit terahertz waves at ultrahigh speeds. Credit: NTU Singapore

But first, some details about 5G and its predecessors so we can differentiate them from 6G.

Electromagnetic waves are characterized by a wavelength and a frequency; the wavelength is the distance a cycle of the wave covers (peak to peak or trough to trough, for example), and the frequency is the number of waves that pass a given point in one second. Cellphones use miniature radios to pick up electromagnetic signals and convert those signals into the sights and sounds on your phone.

4G wireless networks run on low- and mid-band spectrum, defined as frequencies a little below (low-band) and a little above (mid-band) one gigahertz, or one billion cycles per second. 5G kicked that up several notches by adding much higher-frequency millimeter waves of up to 300 gigahertz, or 300 billion cycles per second. Those higher frequencies can carry far more data, which makes them well suited to information-dense content like video.
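
To make the wavelength-frequency relationship concrete, here is a small illustrative snippet (not from the article) that converts the frequencies discussed in this article into wavelengths using wavelength = c / frequency:

```python
# Wavelength from frequency: wavelength = c / f, with c the speed of light in vacuum.
C = 299_792_458  # speed of light in meters per second

def wavelength_mm(frequency_hz: float) -> float:
    """Return the wavelength in millimeters for a wave of the given frequency in hertz."""
    return C / frequency_hz * 1000

# Frequencies mentioned in this article:
for label, f in [("1 GHz (low/mid-band)", 1e9),
                 ("300 GHz (upper millimeter-wave limit)", 300e9),
                 ("1 THz (the 6G chip)", 1e12)]:
    print(f"{label}: {wavelength_mm(f):.2f} mm")
# 1 GHz -> ~300 mm, 300 GHz -> ~1 mm (hence "millimeter waves"), 1 THz -> ~0.3 mm
```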

The 6G chip kicks 5G up several more notches. It can transmit waves at more than three times the frequency of 5G: one terahertz, or a trillion cycles per second. The team says this yields a data rate of 11 gigabits per second. While that’s faster than the fastest 5G will get, it’s only the beginning for 6G. One wireless communications expert even estimates 6G networks could handle rates up to 8,000 gigabits per second; they’ll also have much lower latency and higher bandwidth than 5G.
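
To put that 11 gigabits per second in perspective, here is a quick illustrative calculation; the roughly 25 Mbps bit rate for a single 4K stream is an assumption, not a figure from the paper:

```python
# How many simultaneous 4K streams could the demonstrated 11 Gbps link carry?
chip_rate_bps = 11e9          # demonstrated data rate: 11 gigabits per second
assumed_4k_stream_bps = 25e6  # assumed bit rate of one 4K stream: ~25 Mbps

simultaneous_streams = chip_rate_bps / assumed_4k_stream_bps
print(f"~{simultaneous_streams:.0f} simultaneous 4K streams")  # ~440 streams
```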

Terahertz waves fall between infrared waves and microwaves on the electromagnetic spectrum. Generating and transmitting them is difficult and expensive, requiring special lasers, and even then the frequency range is limited. The team used a new class of material, called photonic topological insulators (PTIs), to transmit terahertz waves. PTIs can conduct light waves on their surface and edges rather than having them run through the material, and they allow light to be redirected around corners without disturbing its flow.

The chip is made completely of silicon and has rows of triangular holes. The team’s research showed the chip was able to transmit terahertz waves error-free.

Nanyang Technological University associate professor Ranjan Singh, who led the project, said, “Terahertz technology […] can potentially boost intra-chip and inter-chip communication to support artificial intelligence and cloud-based technologies, such as interconnected self-driving cars, which will need to transmit data quickly to other nearby cars and infrastructure to navigate better and also to avoid accidents.”

Besides being used for AI and self-driving cars (and, of course, downloading hundreds of hours of video in seconds), 6G would also make a big difference for data centers, IoT devices, and long-range communications, among other applications.

Given that 5G networks are still in the process of being set up, though, 6G won't be coming on the scene anytime soon. A recent whitepaper on 6G from the Japanese company NTT DoCoMo estimates we'll see it in 2030, pointing out that wireless connection tech generations have thus far been spaced about 10 years apart: we got 3G in the early 2000s, 4G in 2010, and 5G in 2020.

In the meantime, as 6G continues to develop, we’re still looking forward to the widespread adoption of 5G.

Image Credit: Hans Braxmeier from Pixabay Continue reading

Posted in Human Robots

#437222 China and AI: What the World Can Learn ...

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led—at least in the West—to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.

The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China—the good, the bad, and the unexpected.

The Good
China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

The Bad
But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.

The Unexpected
Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand—and learn from—the nuances of what’s really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash Continue reading

Posted in Human Robots

#437216 New Report: Tech Could Fuel an Age of ...

With rapid technological progress running headlong into dramatic climate change and widening inequality, most experts agree the coming decade will be tumultuous. But a new report predicts it could actually make or break civilization as we know it.

The idea that humanity is facing a major shake-up this century is not new. The Fourth Industrial Revolution being brought about by technologies like AI, gene editing, robotics, and 3D printing is predicted to cause dramatic social, political, and economic upheaval in the coming decades.

But according to think tank RethinkX, thinking about the coming transition as just another industrial revolution is too simplistic. In a report released last week called Rethinking Humanity, the authors argue that we are about to see a reordering of our relationship with the world as fundamental as when hunter-gatherers came together to build the first civilizations.

At the core of their argument is the fact that since the first large human settlements appeared 10,000 years ago, civilization has been built on the back of our ability to extract resources from nature, be they food, energy, or materials. This led to a competitive landscape where the governing logic is grow or die, which has driven all civilizations to date.

That could be about to change thanks to emerging technologies that will fundamentally disrupt the five foundational sectors underpinning society: information, energy, food, transportation, and materials. They predict that across all five, costs will fall by 10 times or more, while production processes will become 10 times more efficient and will use 90 percent fewer natural resources with 10 to 100 times less waste.

They say that this transformation has already happened in information, where the internet has dramatically reduced barriers to communication and knowledge. They predict the combination of cheap solar and grid storage will soon see energy costs drop as low as one cent per kilowatt hour, and they envisage widespread adoption of autonomous electric vehicles and the replacement of car ownership with ride-sharing.

The authors laid out their vision for the future of food in another report last year, where they predicted that traditional agriculture would soon be replaced by industrial-scale brewing of single-celled organisms genetically modified to produce all the nutrients we need. In a similar vein, they believe the same processes combined with additive manufacturing and “nanotechnologies” will allow us to build all the materials required for the modern world from the molecule up rather than extracting scarce natural resources.

They believe this could allow us to shift from a system of production based on extraction to one built on creation, as limitless renewable energy makes it possible to build everything we need from scratch and barriers to movement and information disappear. As a result, a lifestyle worthy of the “American Dream” could be available to anyone for as little as $250/month by 2030.

This will require a fundamental reimagining of our societies, though. All great civilizations have eventually hit fundamental limits on their growth, and we are no different, as demonstrated by our growing impact on the environment and the increasing concentration of wealth. Historically this stage of development has led to a doubling down on old tactics in search of short-term gains, but this invariably leads to the collapse of the civilization.

The authors argue that we’re in a unique position. Because of the technological disruption detailed above, we have the ability to break through the limits on our growth. But only if we change what the authors call our “Organizing System.” They describe this as “the prevailing models of thought, belief systems, myths, values, abstractions, and conceptual frameworks that help explain how the world works and our relationship to it.”

They say that the current hierarchical, centralized system based on nation-states is unfit for the new system of production that is emerging. The cracks are already starting to appear, with problems like disinformation campaigns, fake news, and growing polarization demonstrating how ill-suited our institutions are for dealing with the distributed nature of today’s information systems. And as this same disruption comes to the other foundational sectors the shockwaves could lead to the collapse of civilization as we know it.

Their solution is a conscious shift towards a new way of organizing the world. As emerging technology allows communities to become self-sufficient, flows of physical resources will be replaced by flows of information, and we will require a decentralized but highly networked Organizing System.

The report includes detailed recommendations on how to usher this in. Examples include giving individuals control and ownership of data rights; developing new models for community ownership of energy, information, and transportation networks; and allowing states and cities far greater autonomy on policies like immigration, taxation, education, and public expenditure.

How easy it will be to get people on board with such a shift is another matter. The authors say it may require us to re-examine the foundations of our society, like representative democracy, capitalism, and nation-states. While they acknowledge that these ideas are deeply entrenched, they appear to believe we can reason our way around them.

That seems optimistic. Cultural and societal change can be glacial, and efforts to impose it top-down through reason and logic are rarely successful. The report seems to brush over many of the messy realities of humanity, such as the huge sway that tradition and religion hold over the vast majority of people.

It also doesn’t deal with the uneven distribution of the technology that is supposed to catapult us into this new age. And while the predicted revolutions in transportation, energy, and information do seem inevitable, the idea that in the next decade or two we’ll be able to produce any material we desire using cheap and abundant stock materials seems like a stretch.

Despite the techno-utopianism, though, many of the ideas in the report hold promise for building societies that are better adapted for the disruptive new age we are about to enter.

Image Credit: Futuristic Society/flickr Continue reading

Posted in Human Robots