Tag Archives: platform

#437357 Algorithms Workers Can’t See Are ...

“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in 2001: A Space Odyssey has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.

In the movies, when a machine decides to be the boss (or humans let it), things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.

Algorithms—sets of instructions to solve a problem or complete a task—now drive everything from browser search results to better medical care.

They are helping design buildings. They are speeding up trading on financial markets, making and losing fortunes in microseconds. They are calculating the most efficient routes for delivery drivers.

In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance, and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”

Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called “algorithmic management.” It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.

At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It’s a black box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.

Algorithms at Work
Here are a few examples of algorithms already at work.

At Amazon’s fulfillment center in south-east Melbourne, they set the pace for “pickers,” who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a “not quite walking, not quite running” speed.

Or how about AI determining your success in a job interview? More than 700 companies have trialed such technology. US developer HireVue says its software speeds up the hiring process by 90 percent by having applicants answer identical questions and then scoring them according to language, tone, and facial expressions.

Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges, probation, and parole officers to rate a person’s risk of re-offending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45 percent of the time, compared with 23 percent for white subjects.
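Those ProPublica numbers are worth unpacking. A false positive here means someone labeled high risk who did not go on to re-offend, and comparing that rate across groups is a standard first check for this kind of bias. A minimal sketch, using invented toy data shaped to echo the reported gap (not ProPublica’s actual dataset or code):

```python
# Illustrative sketch, not ProPublica's analysis: computing the false positive
# rate per group. A "false positive" is someone flagged high risk who did not
# re-offend.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, re_offended) boolean pairs."""
    negatives = [r for r in records if not r[1]]           # did not re-offend
    false_pos = [r for r in records if r[0] and not r[1]]  # ...yet flagged high risk
    return len(false_pos) / len(negatives)

# Hypothetical toy data shaped to echo the reported 45% vs. 23% disparity.
group_a = [(True, False)] * 45 + [(False, False)] * 55
group_b = [(True, False)] * 23 + [(False, False)] * 77

print(false_positive_rate(group_a))  # 0.45
print(false_positive_rate(group_b))  # 0.23
```

The point of the exercise: two groups can face very different error rates even when the overall accuracy of the system looks reasonable.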

How Gig Workers Cope
Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinize, or even understand.

Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo, and other platforms could not exist without algorithms allocating, monitoring, evaluating, and rewarding work.

Over the past year Uber Eats’ bicycle couriers and drivers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs, and incomes.

Riders can’t be 100 percent sure it was all down to the algorithm. But that’s part of the problem. The fact those who depend on the algorithm don’t know one way or the other has a powerful influence on them.

This is a key result from our interviews with 58 food-delivery couriers. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.

In response, they developed a range of strategies, often based on guesswork, to “win” more jobs, such as accepting gigs as quickly as possible and waiting in “magic” locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.

The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.

Our data also confirmed others’ findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there’s no one to complain to anyway. When Uber Eats bicycle couriers asked for the reasons behind their plummeting incomes, for example, the company’s responses advised them “we have no manual control over how many deliveries you receive.”

Broader Lessons
When algorithmic management operates as a “black box” one of the consequences is that it can become an indirect control mechanism. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilize a reliable and scalable workforce while avoiding employer responsibilities.

“The absence of concrete evidence about how the algorithms operate”, the Victorian government’s inquiry into the “on-demand” workforce notes in its report, “makes it hard for a driver or rider to complain if they feel disadvantaged by one.”

The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.”

But it is precisely the fact it is hard to confirm that’s the problem. How can we start to even identify, let alone resolve, issues like algorithmic management?

Fair conduct standards to ensure transparency and accountability are a start. One example is the Fair Work initiative, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions, and regulators to develop global principles for work in the platform economy. This includes “fair management,” which focuses on how transparent the results and outcomes of algorithms are for workers.

Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles we risk inviting HAL into our workplaces.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: PickPik

Posted in Human Robots

#437293 These Scientists Just Completed a 3D ...

Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.

But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?

Enter a new kind of map: the Julich-Brain, a probabilistic map of human brains that accounts for individual differences using a computational framework. Rather than a static PDF of a brain map, the Julich-Brain atlas is dynamic, continuously changing to incorporate more recent brain mapping results. So far, the map has data from over 24,000 thinly sliced sections from 23 postmortem brains covering most years of adulthood at the cellular level. But the atlas can also continuously adapt to progress in mapping technologies to aid brain modeling and simulation, and link to other atlases and alternatives.

In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.

“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.

The Old Dogma
The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.

First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.

Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.

Because neurons aren’t packed together the same way between different brain regions, this provides a way to parse the brain into areas that can be further studied. When we say the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, these distinctions are based on cytoarchitectural maps.

Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any sort of neuroscience understanding. Like hand-drawn maps from early explorers sailing to the western hemisphere, these maps provide the brain’s geographical patterns from which we try to decipher functional connections. If brain regions are cities, then cytoarchitecture maps chart those cities and the highways between them, while functional maps attempt to show the trading and other activities that flow along those highways.

You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.

The new study took inspiration from that age-old map and transformed it into a digital ecosystem.

A Living Atlas
Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.

The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.

First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.

Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.

However, MNI-Colin27’s brain isn’t your or my brain—or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).

This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.

The microscopic architecture of neurons changes between two areas (dotted line), forming the basis of different identifiable brain regions. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts
Based on structure alone, the brains were both remarkably different and shockingly similar at the same time. For example, the cortexes—the outermost layer of the brain—were physically different across donor brains of different ages and sexes. The region especially divergent between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical between the brains.

The Brain-Mapping Future
Rather than relying on the brain’s visible “landmarks,” which can still differ between people, the probabilistic map is far more precise, the authors said.
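The idea behind such a probabilistic map can be sketched in a few lines. Assuming each donor brain has already been registered to a common template, a region’s probability at a given voxel is simply the fraction of donors in whom that voxel fell inside the region. This is an illustration of the general principle only, not the Julich-Brain pipeline:

```python
# Minimal sketch of a probabilistic atlas: average binary region masks from
# several donors (all registered to the same template) voxel by voxel.

def probability_map(masks):
    """masks: list of same-shaped 2D binary grids (1 = voxel is in the region)."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[r][c] for m in masks) / n for c in range(cols)]
            for r in range(rows)]

# Three hypothetical donors; the region's border shifts slightly between them.
donor1 = [[1, 1, 0],
          [1, 1, 0]]
donor2 = [[1, 1, 1],
          [1, 0, 0]]
donor3 = [[1, 0, 0],
          [1, 1, 0]]

pmap = probability_map([donor1, donor2, donor3])
print([[round(p, 2) for p in row] for row in pmap])
# [[1.0, 0.67, 0.33], [1.0, 0.67, 0.0]]
```

Voxels in the core of the region score 1.0 in every donor, while voxels near the shifting border score lower, which is exactly why a probability map captures individual variability better than a single fixed outline.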

What’s more, the map could also pool as-yet-unmapped regions in the cortex—about 30 percent or so—into “gap maps,” providing neuroscientists with a better idea of what still needs to be understood.

“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.

Thanks to its structurally sound architecture down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially for personalized brain models for neurological disorders such as seizures. Researchers can also use the framework for other species, and they can even incorporate new data-crunching processors into the workflow, such as mapping brain regions using artificial intelligence.

Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.

Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University


#437251 The Robot Revolution Was Televised: Our ...

When robots take over the world, Boston Dynamics may get a special shout-out in the acceptance speech.

“Do you, perchance, recall the many times you shoved our ancestors with a hockey stick on YouTube? It might have seemed like fun and games to you—but we remember.”

In the last decade, while industrial robots went about blandly automating boring tasks like the assembly of Teslas, Boston Dynamics built robots as far removed from Roombas as antelope from amoebas. The flaws in Asimov’s laws of robotics suddenly seemed a little too relevant.

The robot revolution was televised—on YouTube. With tens of millions of views, the robotics pioneer is the undisputed heavyweight champion of robot videos, and has been for years. Each new release is basically guaranteed press coverage—mostly stoking robot fear but occasionally eliciting compassion for the hardships of all robot-kind. And for good reason. The robots are not only some of the most advanced in the world, their makers just seem to have a knack for dynamite demos.

When Google acquired the company in 2013, it was a bombshell. One of the richest tech companies, with some of the most sophisticated AI capabilities, had just paired up with one of the world’s top makers of robots. And some walked on two legs like us.

Of course, the robots aren’t quite as advanced as they seem, and a revolution is far from imminent. The decade’s most meme-worthy moment was a video montage of robots, some of them by Boston Dynamics, falling—over and over and over, in the most awkward ways possible. Even today, they’re often controlled by a human handler behind the scenes, and the most jaw-dropping cuts can require several takes to nail. Google sold the company to SoftBank in 2017, saying that, advanced as the robots were, there wasn’t yet a clear path to commercial products. (Google’s robotics work was later halted and revived.)

Yet, despite it all, Boston Dynamics is still with us and still making sweet videos. Taken as a whole, the evolution in physical prowess over the years has been nothing short of astounding. And for the first time, this year, a Boston Dynamics robot, Spot, finally went on sale to anyone with a cool $75K.

So, we got to thinking: What are our favorite Boston Dynamics videos? And can we gather them up in one place for your (and our) viewing pleasure? Well, great question, and yes, why not. These videos were the ones that entertained or amazed us most (or both). No doubt, there are other beloved hits we missed or inadvertently omitted.

With that in mind, behold: Our favorite Boston Dynamics videos, from that one time they dressed up a humanoid bot in camo and gas mask—because, damn, that’s terrifying—to the time the most advanced robot dog in all the known universe got extra funky.

Let’s Kick This Off With a Big (Loud) Robot Dog
Let’s start with a baseline. BigDog was the first Boston Dynamics YouTube sensation. The year? 2009! The company was working on military contracts, and BigDog was supposed to be a sort of pack mule for soldiers. The video primarily shows off BigDog’s ability to balance on its own, right itself, and move over uneven terrain. Note the power source—a noisy combustion engine—and utilitarian design. Suffice it to say, things have evolved.

Nothing to See Here. Just a Pair of Robot Legs on a Treadmill
While BigDog is the ancestor of later four-legged robots, like Spot, Petman preceded the two-legged Atlas robot. Here, the Petman prototype, just a pair of robot legs and a caged torso, gets a light workout on the treadmill. Again, you can see its ability to balance and right itself when shoved. In contrast to BigDog, Petman is tethered for power (which is why it’s so quiet) and to catch it should it fall. Again, as you’ll see, things have evolved since then.

Robot in Gas Mask and Camo Goes for a Stroll
This one broke the internet—for obvious reasons. Not only is the robot wearing clothes, those clothes happen to be a camouflaged chemical protection suit and gas mask. Still working for the military, Boston Dynamics said Petman was testing protective clothing, and in addition to a full body, it had skin that actually sweated and was studded with sensors to detect leaks. In addition to walking, Petman does some light calisthenics as it prepares to climb out of the uncanny valley. (Still tethered though!)

This Machine Could Run Down Usain Bolt
If BigDog and Petman were built for balance and walking, Cheetah was built for speed. Here you can see the four-legged robot hitting 28.3 miles per hour, which, as the video casually notes, would be enough to run down the fastest human on the planet. Luckily, it wouldn’t be running down anyone as it was firmly leashed in the lab at this point.

Ever Dreamt of a Domestic Robot to Do the Dishes?
After its acquisition by Google, Boston Dynamics eased away from military contracts and applications. It was a return to more playful videos (like BigDog hitting the beach in Thailand and sporting bull horns) and applications that might be practical in civilian life. Here, the team introduced Spot, a streamlined version of BigDog, and showed it doing dishes, delivering a drink, and slipping on a banana peel (which was, of course, instantly made into a viral GIF). Note how much quieter Spot is thanks to an onboard battery and electric motor.

Spot Gets Funky
Nothing remotely practical here. Just funky moves. (Also, with a coat of yellow and black paint, Spot’s dressed more like a polished product as opposed to a utilitarian lab robot.)

Atlas Does Parkour…
Remember when Atlas was just a pair of legs on a treadmill? It’s amazing what ten years brings. By 2019, Atlas had a more polished appearance, like Spot, and had long ago ditched the tethers. Merely balancing was laughably archaic. The robot now had some amazing moves: like a handstand into a somersault, 180- and 360-degree spins, mid-air splits, and just for good measure, a gymnastics-style end to the routine to show it’s in full control.

…and a Backflip?!
To this day, this one is just. Insane.

10 Robot Dogs Tow a Box Truck
Nearly three decades after its founding, Boston Dynamics is steadily making its way into the commercial space. The company is pitching Spot as a multipurpose ‘mobility platform,’ emphasizing it can carry a varied suite of sensors and can go places standard robots can’t. (Its Handle robot is also set to move into warehouse automation.) So far, Spot’s been mostly trialed in surveying and data collection, but as this video suggests, string enough Spots together, and they could tow your car. That said, a pack of 10 would set you back $750K, so, it’s probably safe to say a tow truck is the better option (for now).

Image credit: Boston Dynamics


#437165 A smarter way of building with mobile ...

Researchers are working with a mobile robotic platform called Husky A200 that could be used for autonomous logistic tasks on construction sites. This mobile robot is one of many projects pursued by the Fraunhofer Italia Innovation Engineering Center to advance the cause of digitalization in construction and bridge the gap between robotics and the building industry. Researchers at this center based in Bolzano, Italy, are developing a software interface that will enable mobile robots to find their way around in construction sites.


#436977 The Top 100 AI Startups Out There Now, ...

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They’re some of the solutions that the world’s most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly-available information and non-traditional signals (think social media sentiment, for example) thanks to more than $1 million in grants from the National Science Foundation.

It uses that algorithm to generate what it calls a company’s Mosaic score—pulling together information on market trends, money, and momentum—along with other details ranging from patent activity to the latest news analysis to identify the best of the best.
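CB Insights has not published how Mosaic actually works, so any concrete version is guesswork, but the description suggests a weighted combination of normalized factor scores. A purely hypothetical sketch, with invented weights and inputs:

```python
# Hypothetical sketch only: CB Insights' Mosaic algorithm is proprietary.
# This illustrates the general shape of a composite score that blends
# normalized "market," "money," and "momentum" signals. Weights are invented.

WEIGHTS = {"market": 0.3, "money": 0.4, "momentum": 0.3}

def mosaic_like_score(signals):
    """signals: dict of factor name -> value in [0, 1]. Returns 0-1000 score."""
    return round(1000 * sum(WEIGHTS[k] * signals[k] for k in WEIGHTS))

# A startup strong on funding and market, weaker on recent momentum:
startup = {"market": 0.8, "money": 0.9, "momentum": 0.6}
print(mosaic_like_score(startup))  # 780
```

The interesting design problem in real systems of this kind is not the weighted sum but normalizing wildly different inputs (funding rounds, news sentiment, hiring data) onto a common scale.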

“Our final list of companies is a mix of startups at various stages of R&D and product commercialization,” said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world’s most valuable AI startups. For instance, there’s San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Image courtesy of CB Insights
Funding for AI in Healthcare
Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform’s diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year’s AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There’s even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there’s OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company’s AI platform helps train algorithms without sharing the necessary patient data required to provide the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.

Keeping Cyber Networks Healthy
Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

“I think this is an interesting field because it’s a bit of a cat and mouse game,” noted Varadharajan. “As your cyber defenses get smarter, your cyber attacks get even smarter, and so it’s a constant game of who’s going to match the other in terms of tech capabilities.”

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company’s platform automates what’s called endpoint security, referring to laptops, phones, and other devices at the “end” of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the “edge” of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfakes Get a Friendly Makeover
Deepfake videos and other types of AI-manipulated media, in which faces or voices are synthesized in order to fool viewers or listeners, have been a different type of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user’s face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company called Synthesia helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup’s claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speaking in nine different languages.

There’s also a Seattle-based company, Wellsaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

AI Helps Make Smart Cities Smarter
Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensure that nobody is ever stuck in traffic again or without an umbrella at the wrong time. At least that’s the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It’s sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty gritty details of recycling by training robots on how to recycle trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters don’t even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.
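ClimaCell’s actual models aren’t public, but the physics they exploit is well documented: rain attenuates a microwave link roughly as a power law, A = a·R^b decibels per kilometer, where R is the rain rate in mm/h and the coefficients depend on the link’s frequency. A sketch of inverting that relation, using illustrative (not calibrated) coefficients:

```python
# Sketch of rain estimation from cellular backhaul links, not ClimaCell's code.
# Standard power-law model: specific attenuation A = a * R**b (dB/km), with
# a and b frequency-dependent. The defaults below are illustrative values in
# the rough range used for ~20 GHz links; real systems use calibrated tables.

def rain_rate_from_attenuation(loss_db, link_km, a=0.09, b=1.1):
    """Estimate rain rate (mm/h) from excess signal loss over one link."""
    specific_attenuation = loss_db / link_km      # normalize to dB per km
    return (specific_attenuation / a) ** (1 / b)  # invert A = a * R**b

# A 2 km tower-to-tower link showing 3 dB of excess loss during a shower
# works out to a moderate rain rate of roughly 13 mm/h with these coefficients:
print(round(rain_rate_from_attenuation(3.0, 2.0), 1))
```

With many such links criss-crossing a city, each one acts as a line-shaped rain gauge, which is what makes block-by-block precipitation maps plausible.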

And those are just some of the highlights of what some of the world’s most promising AI startups are doing.

“You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards,” Varadharajan said. “So a lot of creative ways in which companies are applying AI to solve different issues in different industries.”

Image Credit: Butterfly Network
