
Culture war games: the Gorgon Stare of the surveillance state

Historically Hollow: The Cries of Populism
By Bryan Caplan

Amazon is simply the best store that ever existed, by far, with incredible selection and unearthly convenience. The price: cheap.

Facebook, Twitter, and other social media let us socialize with our friends, comfortably meet new people, and explore even the most obscure interests. The price: free.

Uber and Lyft provide high-quality, convenient transportation. The price: really cheap.

Skype is a sci-fi quality video phone. The price: free.

YouTube gives us endless entertainment. The price: free.

Google gives us the totality of human knowledge! The price: free.

That’s what I’ve seen. What I’ve heard, however, is totally different. The populists of our Golden Age are loud and furious. They’re crying about “monopolies” that deliver firehoses worth of free stuff. They’re bemoaning the “death of competition” in industries (like taxicabs) that governments forcibly monopolized for as long as any living person can remember. They’re insisting that “only the 1% benefit” in an age when half of the high-profile new businesses literally give their services away for free. And they’re lashing out at businesses for “taking our data” – even though five years ago hardly anyone realized that they had data.

My point: If your overall reaction to business progress over the last fifteen years is even mildly negative, no sensible person will try to please you, because you are impossible to please. Yet our new anti-tech populists have managed to make themselves a center of pseudo-intellectual attention.

Angry lamentation about the effects of new tech on privacy has flabbergasted me the most. For practical purposes, we have more privacy than ever before in human history. You can now buy embarrassing products in secret. You can read or view virtually anything you like in secret. You can interact with over a billion people in secret.

Then what privacy have we lost? The privacy to not be part of a Big Data Set. The privacy to not have firms try to sell us stuff based on our previous purchases. In short, we have lost the kinds of privacy that no prudent person loses sleep over.

I’m being watched at Amazon Go — and I don’t care
By Erica Pandey

The big picture: Per a February survey by IBM’s Institute for Business Value, 71% of consumers say it’s worth sacrificing privacy for the benefits of technology.

  1. A whopping 81% say they’re concerned about how their data is being used, but only 45% have actually updated privacy settings on an app or account in the last year and a measly 16% have stopped using a tech company’s service because of data misuse.
  2. According to an Axios poll, 46% of consumers ages 18–24 say they always accept companies’ privacy policies without reading a single word. Only 15% of those over 65 say they do the same.

Google admits listening to some smart speaker recordings
By Martyn Landi

In a statement, the company said a small number of anonymous recordings were transcribed by its experts, and revealed that an investigation had been launched after some Dutch audio data had been leaked.

“We partner with language experts around the world to improve speech technology by transcribing a small set of queries – this work is critical to developing technology that powers products like the Google Assistant,” Google said.

“Language experts only review around 0.2% of all audio snippets, and these snippets are not associated with user accounts as part of the review process.

“We just learned that one of these reviewers has violated our data security policies by leaking confidential Dutch audio data.

“Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action.

“We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Apple contractors ‘regularly hear confidential details’ on Siri recordings
By Alex Hern

Sometimes, “you can definitely hear a doctor and patient, talking about the medical history of the patient. Or you’d hear someone, maybe with car engine background noise – you can’t say definitely, but it’s a drug deal … you can definitely hear it happening. And you’d hear, like, people engaging in sexual acts that are accidentally recorded on the pod or the watch.”

The contractor said staff were encouraged to report accidental activations “but only as a technical problem”, with no specific procedures to deal with sensitive recordings. “We’re encouraged to hit targets, and get through work as fast as possible. The only function for reporting what you’re listening to seems to be for technical problems. There’s nothing about reporting the content.”

As well as the discomfort they felt listening to such private information, the contractor said they were motivated to go public about their job because of their fears that such information could be misused. “There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers – addresses, names and so on.

“Apple is subcontracting out, there’s a high turnover. It’s not like people are being encouraged to have consideration for people’s privacy, or even consider it. If there were someone with nefarious intentions, it wouldn’t be hard to identify [people on the recordings].”

Amazon Workers Are Listening to What You Tell Alexa
By Matt Day, Giles Turner and Natalia Drozdiak

Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions.

In Alexa’s privacy settings, Amazon gives users the option of disabling the use of their voice recordings for the development of new features. The company says people who opt out of that program might still have their recordings analyzed by hand over the regular course of the review process. A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.

Judge orders Amazon to turn over Echo recordings in double murder case
By Zack Whittaker

A New Hampshire judge has ordered Amazon to turn over two days of Amazon Echo recordings in a double murder case.

Prosecutors believe that recordings from an Amazon Echo in a Farmington home where two women were murdered in January 2017 may yield further clues to their killer. Although police seized the Echo when they secured the crime scene, any recordings are stored on Amazon servers.

The order granting the search warrant, obtained by TechCrunch, said that there is “probable cause to believe” that the Echo picked up “audio recordings capturing the attack” and “any events that preceded or succeeded the attack.”

Amazon is also directed to turn over any “information identifying any cellular devices that were linked to the smart speaker during that time period,” the order said.

‘Anonymous’ data might not be so anonymous, study shows
By Nicholas Wells and Leslie Picker

Using machine learning, the researchers developed a system to estimate the likelihood that a specific person could be re-identified from an anonymized data set containing demographic characteristics. The researchers’ model suggests that more than 99 percent of Americans could be correctly re-identified from any dataset using 15 demographic attributes, including age, gender and marital status.

“While there might be a lot of people who are in their thirties, male and living in New York City, far fewer of them were also born on January 5, are driving a red sports car and live with two kids (both girls) and one dog,” said Luc Rocher, a PhD candidate at Université catholique de Louvain and the study’s lead author. Personal data can be used for research, illicit activities and even investing, as CNBC has previously reported.

As part of their research, the trio published an online tool to help people understand how likely it is for them to be re-identified, based on just three common demographic characteristics: gender, birth date and ZIP code. On average, people have an 83% chance of being re-identified based on those three data points, the researchers said.
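
The arithmetic behind that 83% figure is easy to sketch. The toy calculation below is only a back-of-envelope illustration under stated assumptions (independent, roughly uniform attributes and rounded population and ZIP-code counts), not the researchers' actual statistical model, but it shows why three attributes are usually enough to single someone out:

    # Back-of-envelope sketch: why gender + birth date + ZIP narrows things down.
    # Illustrative assumptions only; not the study's statistical model.
    US_POPULATION = 330_000_000          # rough figure, assumed for illustration
    cardinalities = {
        "gender": 2,
        "birth_date": 365 * 90,          # day of year x ~90 plausible birth years
        "zip_code": 42_000,              # approximate count of US ZIP codes
    }

    combinations = 1
    for count in cardinalities.values():
        combinations *= count            # ~2.8 billion distinct combinations

    people_per_combination = US_POPULATION / combinations
    print(f"{combinations:,} combinations, "
          f"{people_per_combination:.2f} people expected per combination")

With far more attribute combinations than people, most combinations map to at most one person, which is the intuition the researchers' online tool makes precise.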

“The goal of anonymization is so we can use data to benefit society,” said Yves-Alexandre de Montjoye, one of the researchers. “This is extremely important but should not and does not have to happen at the expense of people’s privacy.”

Google and the University of Chicago Are Sued Over Data Sharing
By Daisuke Wakabayashi

On Wednesday, the University of Chicago, the medical center and Google were sued in a potential class-action lawsuit accusing the hospital of sharing hundreds of thousands of patients’ records with the technology giant without stripping identifiable date stamps or doctor’s notes.

The suit, filed in United States District Court for the Northern District of Illinois, demonstrates the difficulties technology companies face in handling health data as they forge ahead into one of the most promising — and potentially lucrative — areas of artificial intelligence: diagnosing medical problems.

Google is at the forefront of an effort to build technology that can read electronic health records and help physicians identify medical conditions. But the effort requires machines to learn this skill by analyzing a vast array of old health records collected by hospitals and other medical institutions.

That raises privacy concerns, especially when the data is used by a company like Google, which already knows what you search for, where you are and what interests you hold.

The U.S. government fined the app now known as TikTok $5.7 million for illegally collecting children’s data
By Craig Timberg and Tony Romm

Federal regulators fined social media app Musical.ly — now known as TikTok — $5.7 million for illegally collecting the names, email addresses, pictures and locations of kids under age 13, a record penalty for violations of the nation’s child privacy law.

The fine results from a settlement between the Federal Trade Commission and TikTok, which merged with California-based Musical.ly in 2018, over allegations of illegal data collection of children.

Here are the data brokers quietly buying and selling your personal information
By Steven Melendez and Alex Pasternack

All that information can be used to create profiles of you—think of them as virtual, possibly erroneous versions of you—that can be used to target you with ads, classify the riskiness of your lifestyle, or help determine your eligibility for a job. Like the companies themselves, the risks can be hard to see. Apart from the dangers of merely collecting and storing all that data, detailed (and often erroneous) consumer profiles can lead to race or income-based discrimination, in a high-tech version of redlining.

Piles of personal data are flowing to political consultants attempting to influence your vote (like Cambridge Analytica) and to government agencies pursuing non-violent criminal suspects (like U.S. Immigration and Customs Enforcement). Meanwhile, people-search websites, accessible to virtually anyone with a credit card, can be a goldmine for doxxers, abusers, and stalkers.

People in the U.S. still struggle to understand the nature and scope of the data collected about them, according to a recent survey by the Pew Research Center, and only 9% believe they have “a lot of control” over the data that is collected about them. Still, the vast majority, 74%, say it is very important to them to be in control of who can get that information.

Inside the secretive world of stalking apps
By Camilla Hodgson

Apps such as mSpy, TheTruthSpy and FlexiSpy allow users to monitor someone else’s phone activity, including their call logs, the contents of text and chat messages, GPS data and photos. Often billed as “parental control” or “employee monitoring” tools, many stalkerware apps also advertise themselves as a way to catch cheating partners — and note they can be installed invisibly on a target’s phone.

Installation generally requires physical access to the device; users can then hide the app’s icon and view the contents of the phone remotely, by logging into an online dashboard that monitors its activity.

Although these apps are secretive about user numbers and revenues, cyber-security company Kaspersky Labs said a growing number of people were being attacked by stalkerware.

Last year Kaspersky found and removed 58,000 instances of stalkerware after customers used its antivirus app, which looks for malicious code, to scan their devices. By July 2019 its specific anti-stalkerware product, which was released in April, had detected malicious apps on phones belonging to more than 7,000 customers worldwide.

Stalkerware “can be much more severe than other types of malware . . . because it is made to be used as a tool for the abuse of another person’s privacy and is often used by domestic abusers”, said security researcher Alexey Firsh.

Anti-spyware company Certo also said demand had “certainly increased in recent years”.

The cheap availability of personal surveillance apps can have devastating effects. In 2014 a survey by National Public Radio of 72 domestic violence shelters in the US discovered that 85 per cent had assisted victims whose abusers had tracked them using GPS. The same year, the National Network to End Domestic Violence found that 54 per cent of abusers had tracked their victims’ mobile phones using stalkerware.

Hollywood and hyper-surveillance: the incredible story of Gorgon Stare
By Sharon Weinberger

In the 1998 Hollywood thriller Enemy of the State, an innocent man (played by Will Smith) is pursued by a rogue spy agency that uses the advanced satellite “Big Daddy” to monitor his every move. The film — released 15 years before Edward Snowden blew the whistle on a global surveillance complex — has achieved a cult following.

It was, however, much more than just prescient: it was also an inspiration, even a blueprint, for one of the most powerful surveillance technologies ever created. So contends technology writer and researcher Arthur Holland Michel in his compelling book Eyes in the Sky. He notes that a researcher (unnamed) at the Lawrence Livermore National Laboratory in California who saw the movie at its debut decided to “explore — theoretically, at first — how emerging digital-imaging technology could be affixed to a satellite” to craft something like Big Daddy, despite the “nightmare scenario” it unleashes in the film. Holland Michel repeatedly notes this contradiction between military scientists’ good intentions and a technology based on a dystopian Hollywood plot.

He traces the development of that technology, called wide-area motion imagery (WAMI, pronounced ‘whammy’), by the US military from 2001. A camera on steroids, WAMI can capture images of large areas, in some cases an entire city. The technology got its big break after 2003, in the chaotic period following the US-led invasion of Iraq, where home-made bombs — improvised explosive devices (IEDs) — became the leading killer of US and coalition troops. Defence officials began to call for a Manhattan Project to spot and tackle the devices.

In 2006, the cinematically inspired research was picked up by DARPA, the Defense Advanced Research Projects Agency, which is tasked with US military innovation (D. Kaiser Nature 543, 176–177; 2017). DARPA funded the building of an aircraft-mounted camera with a capacity of almost two billion pixels. The Air Force had dubbed the project Gorgon Stare, after the monsters of penetrating gaze from classical Greek mythology, whose horrifying appearance turned observers to stone. (DARPA called its programme Argus, after another mythical creature: a giant with 100 eyes.)

Holland Michel notes tensions between security and privacy without hyping them.

And he gets those responsible for building WAMI to speak to him candidly — sometimes shockingly so. Take, for example, the former US military officer who touts the ‘benefits’ of the colonial subjugation of India (which he bizarrely claims created order among the country’s ethnic groups) to justify mass surveillance in the United States.

This potential for domestic mass surveillance becomes a key point. As the story proceeds, WAMI’s creators start looking for ways to use the battlefield technology at home: having built a new hammer, they search for more nails. Here, the story takes an even more dystopian turn. John Arnold, “a media-shy billionaire”, uses his own money to help secretly deploy a WAMI system to assist the police in tracking suspects in crime-ridden Baltimore, Maryland. Arnold, who has funded other “new crime-fighting technologies”, first learnt about WAMI’s use overseas from a podcast, and decided to debut it stateside. “Even the mayor was kept in the dark,” Holland Michel writes.

When Gorgon Stare is completed, Michael Meermans, an executive at Sierra Nevada (the company in Sparks, Nevada, that built it), asks himself rhetorically whether the task is over. Of course not. “When it comes to the world of actually collecting information and creating knowledge,” Meermans says, “you can never stop.”

When Battlefield Surveillance Comes to Your Town
By Christopher Mims

Currently, politics is the biggest limiter on these large-scale surveillance efforts. Legal scholars agree that the Fourth Amendment, which protects people in the U.S. against “unreasonable searches and seizures,” only applies to law enforcement watching our behavior in public in certain circumstances. But as the technology rolls out—and roll out it will—it’s likely to stoke considerable debates about a new definition of privacy.

“These wide-area surveillance systems give the government unprecedented power. It gives them a time machine to look into peoples’ past and learn details about their private lives,” says Matt Cagle, technology and civil liberties attorney at the ACLU of Northern California. “Again and again this technology has jumped ahead of where the courts are. This is qualitatively different than other forms of aerial surveillance we’ve seen in the past.”

When local councils use data tools to classify us, what price freedom?
By Kenan Malik

Are you a “metro high-flyer” or part of an “alpha family”? A “midlife renter” or a “cafes and catchments” sort? An “estate veteran” or a “bus-route renter”? You may not know, but if you live in Nottingham or Kent, your local council certainly does. And if you’re from Durham, so does the bobby on the beat.

These labels are part of 66 classifications of Britons devised for Mosaic, a system created by the credit score company Experian. Mosaic, according to Experian, is constructed out of “850m pieces of information” and allows you to “peer inside… all the types of household” in any town or village, “with their life-stages, marital status, household compositions and financial positions”.

Mosaic is designed as a marketing tool for private companies, to help “identify the consumers most responsive to different direct marketing channels”. It’s become a tool for local councils, too. According to a study from Cardiff University’s Data Justice Lab published last month, Kent and Nottinghamshire county councils and Blaby district council use Mosaic. At least 53 local authorities are purchasing data systems from private companies to help classify citizens and predict future outcomes, on everything from which child might be in danger of abuse to who might be committing benefit fraud.

Data evangelism surfs a number of recent social developments. More atomised societies smooth the way to viewing individuals as collections of data. The contemporary obsession with identity makes it easier to view every citizen as existing in a Mosaic-like category.

And our desire for “frictionless” lives has led us to stumble, almost without realising, into a new kind of surveillance society. We all worry about privacy and are outraged when Facebook or the NHS suffers data leaks. And yet we constantly trade away our data without thinking about it. From “smart doorbells” that link to “databases of suspicious persons” to genealogy companies that we trust with our DNA, opening their databases for police inspection, we are creating a world in which surveillance seems as inescapable as gossip about Love Island or another Donald Trump Twitter rant. A surveillance state created not just by government fiat, as in China, but also by our own absence of mind and thought.

Millions of people uploaded photos to the Ever app. Then the company used them to develop facial recognition tools.
By Olivia Solon and Cyrus Farivar

“Make memories”: That’s the slogan on the website for the photo storage app Ever, accompanied by a cursive logo and an example album titled “Weekend with Grandpa.”

Everything about Ever’s branding is warm and fuzzy, about sharing your “best moments” while freeing up space on your phone.

What isn’t obvious on Ever’s website or app — except for a brief reference that was added to the privacy policy after NBC News reached out to the company in April — is that the photos people share are used to train the company’s facial recognition system, and that Ever then offers to sell that technology to private companies, law enforcement and the military.

In other words, what began in 2013 as another cloud storage app has pivoted toward a far more lucrative business known as Ever AI — without telling the app’s millions of users.

“This looks like an egregious violation of people’s privacy,” said Jacob Snow, a technology and civil liberties attorney at the American Civil Liberties Union of Northern California. “They are taking images of people’s families, photos from a private photo app, and using it to build surveillance technology. That’s hugely concerning.”

Ever AI promises prospective military clients that it can “enhance surveillance capabilities” and “identify and act on threats.” It offers law enforcement the ability to identify faces in body-cam recordings or live video feeds.

Previously, the privacy policy explained that facial recognition technology was used to help “organize your files and enable you to share them with the right people.” The app has an opt-in face-tagging feature much like Facebook’s that allows users to search for specific friends or family members who use the app.

In the previous privacy policy, the only indication that the photos would be used for another purpose was a single line: “Your files may be used to help improve and train our products and these technologies.”

On April 15, one week after NBC News first contacted Ever, the company added a sentence to explain what it meant by “our products.”

“Some of these technologies may be used in our separate products and services for enterprise customers, including our enterprise face recognition offerings, but your files and personal information will not be,” the policy now states.

Customers Handed Over Their DNA. The Company Let the FBI Take a Look.
By Amy Dockser Marcus

Privacy advocates argued that when consumers submitted their DNA to a company, they didn’t expect it could be used by law enforcement without a warrant. Some are concerned about the government potentially having access to the genetic data of large numbers of people, many of whom never agreed to its use, and without wider public debate. Innocent people could get caught up in an investigation.

81% of ‘suspects’ flagged by Met’s police facial recognition technology innocent, independent report says
By Rowland Manthorpe and Alexander J Martin

The first independent evaluation of the scheme was commissioned by Scotland Yard and conducted by academics from the University of Essex.

Professor Pete Fussey and Dr Daragh Murray evaluated the technology’s accuracy at six of the 10 police trials. They found that, of 42 matches, only eight were verified as correct – an error rate of 81%. Four of the 42 were people who were never found because they were absorbed into the crowd, so a match could not be verified.

The Met prefers to measure accuracy by comparing successful and unsuccessful matches with the total number of faces processed by the facial recognition system. According to this metric, the error rate was just 0.1%.
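
The two figures describe the same trials with different denominators, as a quick sketch makes clear. The total number of faces scanned is not reported in the excerpt, so the value below is a hypothetical placeholder chosen only to reproduce a rate of roughly 0.1%:

    # Same trial data, two error rates: the difference is the denominator.
    matches_flagged = 42        # alerts generated across the evaluated trials (reported)
    verified_correct = 8        # alerts confirmed as genuine matches (reported)
    faces_processed = 35_000    # HYPOTHETICAL total faces scanned, for illustration

    false_matches = matches_flagged - verified_correct

    # Researchers' metric: what share of alerts were wrong?
    print(f"wrong alerts / all alerts: {false_matches / matches_flagged:.0%}")   # ~81%

    # The Met's preferred metric: wrong alerts as a share of every face processed.
    print(f"wrong alerts / all faces:  {false_matches / faces_processed:.2%}")   # ~0.10%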

Oregon became a testing ground for Amazon’s facial-recognition policing. But what if Rekognition gets it wrong?
By Drew Harwell

Lawyers in Washington County said they’re just starting to see the technique show up in arrest reports, and some are preparing for the day when they may have to litigate the systems’ admissibility in court. Marc Brown, a chief deputy defender working with Oregon’s Office of Public Defense Services, said he worried the system’s hidden decision-making could improperly tilt the balance of power: Human eyewitnesses can be questioned in court, but not this “magic black box,” and “we as defense attorneys cannot question, you know, how did this process work.”

The system’s results, Brown added, could pose a huge confirmation-bias problem by steering how deputies react. “You’ve already been told that this is the one, so when you investigate, that’s going to be in your mind,” he said. “The question is no longer who committed the crime, but where’s the evidence to support the computer’s analysis?”

Face Value
By Rachel Connolly

A report on algorithms in policing by The Law Society of England and Wales stated that the lack of transparency rules governing private and public sector partnerships is one of the key barriers to legally challenging technology like AFR. Companies are not covered by rules like the Freedom of Information Act that help hold the government accountable; privately owned companies, which many tech companies are, don’t even have to answer to any shareholders. Not that this necessarily makes a difference: an attempted shareholder revolt over Amazon’s decision to continue selling its Rekognition tool to police recently failed, with less than 3 percent of the vote.

When the law draws an arbitrary line between the public and private sector’s use of this technology, it creates an “everything the light touches” style of regulation, with some elements of facial recognition up for scrutiny and others arbitrarily protected. Police are then incentivized to team up with the private sector in the knowledge that some of their surveillance methods will remain safely beyond the reach of the law. A state which outsources its surveillance efforts to private companies is still a surveillance state, just one with very little oversight.

Here’s what you need to know about Palantir, the secretive $20 billion data-analysis company whose work with ICE is dragging Amazon into controversy
By Rosalie Chan

The company was born out of Thiel’s experience working at PayPal, where credit card fraud cost the company millions each month. To solve the problem, PayPal built an internal security application that helped employees analyze suspicious transactions.

Palantir takes a similar approach by finding patterns in complicated data. For example, law enforcement agencies can use it to search for links in phone records, photos, vehicle information, criminal history, biometrics, credit card transactions, addresses, and police reports.

VICE reported Palantir’s software allows law enforcement to enter a license plate number and quickly get an itinerary of the routes and places the vehicle has travelled. Police can also use it to map out family and business relationships.
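
At its core, this kind of pattern-finding is link analysis: treat each record as a node and connect records that share an identifier. The sketch below is a generic, minimal illustration of that idea with invented data; it is not Palantir's software or data model:

    # Minimal link-analysis sketch: connect records that share any identifier,
    # then walk the graph to find everything indirectly connected.
    # Generic illustration only; the records below are invented.
    from collections import defaultdict, deque
    from itertools import combinations

    records = [
        {"id": "report-1", "phone": "555-0101", "plate": "ABC123"},
        {"id": "report-2", "phone": "555-0101", "address": "12 Elm St"},
        {"id": "report-3", "address": "12 Elm St", "plate": "XYZ789"},
    ]

    graph = defaultdict(set)
    for a, b in combinations(records, 2):
        shared = (set(a.values()) & set(b.values())) - {a["id"], b["id"]}
        if shared:                      # any common phone, plate, or address
            graph[a["id"]].add(b["id"])
            graph[b["id"]].add(a["id"])

    def connected(start):
        """Breadth-first walk returning every record reachable from start."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node] - seen:
                seen.add(neighbour)
                queue.append(neighbour)
        return seen

    print(sorted(connected("report-1")))   # ['report-1', 'report-2', 'report-3']

Real systems layer search, entity resolution, and access controls on top, but the underlying structure is the same: shared attributes become edges, and an investigator follows them.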

And Palantir’s technology has been used in New Orleans for predictive policing, The Verge reported — a practice that has been shown to increase surveillance and arrests in communities of color. Palantir has been involved in various lawsuits in the past few years. For example, in 2017, Palantir settled a lawsuit from the Department of Labor alleging that its hiring practices discriminated against Asians.

On its website, it says that people work with Palantir to uncover human trafficking rings, analyze finances, respond to natural disasters, track disease outbreaks, combat cyberattacks, prevent terrorist attacks, and more.

Working with government agencies is a core part of Palantir’s business. For the first several years, Palantir only sold its data analysis products to US government agencies. Palantir works with various military organizations and combat missions to gather information on enemy activity, track criminals, identify fraud, plan logistics, and more.

For example, its software has been used by the Marine Corps to gather intelligence, and it’s building software for the US Army to analyze terrain, movement, and weather information in remote areas. It’s even been rumored to have been used to track down Osama Bin Laden, although Palantir did not comment directly on it.

Palantir has also been selective about the customers it works with. For example, Karp previously told Fortune that Palantir turned down a partnership with a tobacco company “for fear the company would harness the data to pinpoint vulnerable communities to sell cigarettes to.”

According to USAspending.gov, Palantir has roughly $50 million in contracts with ICE. Palantir provides investigative case management software to ICE to gather, store, and search troves of data on undocumented immigrants’ employment information, phone records, immigration history, and more.

Palantir employees had “begged” to end the ICE deal, but Karp said the data is being used for drug enforcement, not separating families. Palantir also said ICE uses its technology for investigating criminal activity like human trafficking, child exploitation, and counter-terrorism. However, in May, Mijente reported that ICE agents used Palantir’s software to build profiles of undocumented children and family members that could be used for prosecution and arrest.

WNYC also reported that ICE agents used a Palantir program called FALCON mobile to plan workplace raids earlier this year. This app reportedly allowed them to search through law enforcement databases with information on people’s immigration histories, family relationships, and past border crossings.

In January, two days after ICE reportedly sent an email notifying staff to use the FALCON app, ICE raided nearly 100 7-Elevens across the country. In April, ICE arrested 280 immigrants in the largest workplace raid in over a decade.

From October 2017 to October 2018, ICE workplace raids led to 1,525 arrests for civil immigration violations. In comparison, there were only 172 arrests the year before.

Jeff Bezos Protests the Invasion of His Privacy, as Amazon Builds a Sprawling Surveillance State for Everyone Else
By Glenn Greenwald

Jeff Bezos is as entitled as anyone else to his personal privacy. The threats from the National Enquirer are grotesque. If Bezos’ preemptive self-publishing of his private sex material reduces the unwarranted shame and stigma around adult consensual sexual activities, that will be a societal good.

But Bezos, given how much he works and profits to destroy the privacy of everyone else (to say nothing of the labor abuses of his company), is about the least sympathetic victim imaginable of privacy invasion. In the past, hard-core surveillance cheerleaders in Congress such as Dianne Feinstein, Pete Hoekstra, and Jane Harman became overnight, indignant privacy advocates when they learned that the surveillance state apparatus they long cheered had been turned against them.

Perhaps being a victim of privacy invasion will help Jeff Bezos realize the evils of what his company is enabling. Only time will tell. As of now, one of the world’s greatest privacy invaders just had his privacy invaded. As the ACLU put it: “Amazon is building the tools for authoritarian surveillance that advocates, activists, community leaders, politicians, and experts have repeatedly warned against.”

Doorbell-camera firm Ring has partnered with 400 police forces, extending surveillance concerns
By Drew Harwell

Ring officials and law enforcement partners portray the vast camera network as an irrepressible shield for neighborhoods, saying it can assist police investigators and protect homes from criminals, intruders and thieves.

“The mission has always been making the neighborhood safer,” said Eric Kuhn, the general manager of Neighbors, Ring’s crime-focused companion app. “We’ve had a lot of success in terms of deterring crime and solving crimes that would otherwise not be solved as quickly.”

But legal experts and privacy advocates have voiced alarm about the company’s eyes-everywhere ambitions and increasingly close relationship with police, saying the program could threaten civil liberties, turn residents into informants, and subject innocent people, including those who Ring users have flagged as “suspicious,” to greater surveillance and potential risk.

“If the police demanded every citizen put a camera at their door and give officers access to it, we might all recoil,” said Andrew Guthrie Ferguson, a law professor and author of “The Rise of Big Data Policing.”

By tapping into “a perceived need for more self-surveillance and by playing on consumer fears about crime and security,” he added, Ring has found “a clever workaround for the development of a wholly new surveillance network, without the kind of scrutiny that would happen if it was coming from the police or government.”

Ring’s expansion also has led some to question its plans. The company applied for a facial-recognition patent last year that could alert when a person designated as “suspicious” was caught on camera. The cameras do not currently use facial-recognition software, and a spokeswoman said the application was designed only to explore future possibilities.

Amazon, Ring’s parent company, has developed facial-recognition software, called Rekognition, that is used by police nationwide. The technology is improving all the time: Earlier this month, Amazon’s Web Services arm announced that it had upgraded the face-scanning system’s accuracy at estimating a person’s emotion and was even perceptive enough to track “a new emotion: ‘Fear.’ ”

For now, the Ring systems’ police expansion is earning early community support. Mike Diaz, a member of the city council in Chula Vista, Calif., where police have partnered with Ring, said the cameras could be an important safeguard for some local neighborhoods where residents are tired of dealing with crime. He’s not bothered, he added, by the concerns he has heard about how the company is partnering with police in hopes of selling more cameras.

“That’s America, right?” Diaz said. “Who doesn’t want to put bad guys away?”

Famous con man Frank Abagnale: Crime is 4,000 times easier today
By Karen Roby

Karen Roby: What do you tell CIOs and CEOs about cybersecurity?

Frank Abagnale: Well, first of all, I tell them that the most important thing that they have to do is educate their employees, and the most important job they have is protecting the information that’s been entrusted to them by their clients. So, that’s the most important thing.

Unfortunately, a lot of people are not trained by their companies, and so they fall for phishing scams, or they fall for social engineering scams over the phone where they give away a lot of information they shouldn’t. People are basically honest and because they’re honest, they don’t have a deceptive mind. So, when they see an email that looks very official, they assume that it is real.

I’ve been an instructor at the FBI Academy for 43 years. I’ve taught two generations of FBI agents who’ve gone through the academy. What’s amazing to me is how much easier crime is than when I did it 50 years ago. It’s actually 4,000 times easier because I didn’t have all of the technology that exists today. So, technology absolutely breeds crime. It always has, and there will always be people who will use technology in a negative, self-serving way.

I’ve been involved in security breaches going back to TJ Maxx 14 years ago, up to Marriott and Facebook just a few months ago. One thing that I’ve learned over my career is that every breach occurs because somebody in that company did something they weren’t supposed to do, or somebody in that company failed to do something they were supposed to do.

Hackers do not cause breaches, people do. All hackers do is look for weak points to get in. So in the case of Equifax, they didn’t update their systems, they didn’t fix their security patches, and that opened the door for hackers.

I live in South Carolina. Someone hacked into the tax revenue office four years ago and stole 3.8 million tax returns from the citizens of South Carolina—that was everyone. After the investigation, it was determined that an employee took home a laptop they shouldn’t have taken home. They opened it in an unrestricted environment, and the hacker got in. So this is why it is so important to educate your employees about the most important part of the job they have, and that is protecting the information that’s been entrusted to them.

Equifax Data-Breach Settlement: Get Up to $20,000 If You Can Prove Harm
By David Yaffe-Bellany

Two years after a major data breach exposed the personal information of around 147 million Americans, the credit bureau Equifax has agreed to pay at least $650 million to resolve consumer claims and multiple state and federal investigations stemming from the episode.

At least $300 million of that amount will go to consumers, according to settlement documents filed in federal court in Atlanta on Monday. An additional $125 million could be added to that fund if it is exhausted. (Fines paid to state authorities and the Consumer Financial Protection Bureau account for most of the rest of the settlement amount.)

There is little evidence that the breach actually led to fraud, making it difficult to determine how badly consumers may have been harmed.

But Equifax has agreed to provide up to 10 years of free credit-monitoring services to breach victims. Consumers will also be compensated for time spent taking preventive measures or dealing with identity theft, at a rate of $25 an hour for up to 20 hours.

They can also be reimbursed for up to $20,000 in losses that are “fairly traceable” to the breach, the settlement says, including the cost of freezing or unfreezing a credit file and buying credit-monitoring services, as well as fraud and identity theft.

In 2017, Equifax said hackers stole sensitive information, including Social Security and driver’s license numbers, belonging to millions of its customers in one of the most significant data breaches in history. The breach took Equifax more than two months to detect, government investigators later discovered, and the company waited more than a month to inform the public. The hackers involved in the episode have never been identified.

U.S. Cities Rethink Data Relationship With Residents
By James Rundle

The Office of Personnel Management, a federal agency that manages the government’s civilian workforce, was hacked multiple times in 2014. The breaches involved more than 21 million Social Security numbers and about 20 million forms with data like someone’s mental-health history. In 2015, a security researcher found an open database on the internet containing information on more than 191 million U.S. voters, including Social Security numbers and party affiliations.

Microsoft says it has found another Russian operation targeting prominent think tanks
By Elizabeth Dwoskin and Craig Timberg

For the second time in six months, Microsoft has identified a Russian government-affiliated operation targeting prominent think tanks that have been critical of Russia, the company said in a blog post Tuesday evening.

The “spear-phishing” attacks — in which hackers send out phony emails intended to trick people into visiting websites that look authentic but in fact enable them to infiltrate their victims’ corporate computer systems — were tied to the APT28 hacking group, a unit of Russian military intelligence that interfered in the 2016 U.S. election. The group targeted more than 100 European employees of the German Marshall Fund, the Aspen Institute Germany, and the German Council on Foreign Relations, influential groups that focus on transatlantic policy issues.

The attacks, which took place during the last three months of 2018, come ahead of European parliamentary elections in May. They highlight a continuously aggressive campaign by Russian operatives to undermine democratic institutions in countries they see as adversaries.

The announcement is also the second time in the past six months that Microsoft has gone public with its efforts to thwart APT28, which is sometimes called Strontium or Fancy Bear. (Microsoft exclusively uses the term Strontium.)

Shortly before the U.S. midterm elections, Microsoft disabled spear-phishing efforts aimed at prominent conservative organizations and the U.S. Senate. APT28 created phony websites impersonating the groups, as well as people’s colleagues and Microsoft’s own properties.

U.S. Cyber Command operation disrupted Internet access of Russian troll factory on day of 2018 midterms
By Ellen Nakashima

The strike on the Internet Research Agency in St. Petersburg, a company underwritten by an oligarch close to President Vladimir Putin, was part of the first offensive cyber campaign against Russia designed to thwart attempts to interfere with a U.S. election, the officials said.

“They basically took the IRA offline,” according to one individual familiar with the matter who, like others, spoke on the condition of anonymity to discuss classified information. “They shut ‘em down.”

The operation marked the first muscle-flexing by U.S. Cyber Command, with intelligence from the National Security Agency, under new authorities it was granted by President Trump and Congress last year to bolster offensive capabilities.

Whether the impact of the St. Petersburg action will be long-lasting remains to be seen. Russia’s tactics are evolving, and some analysts were skeptical of the deterrent value on either the Russian troll factory or on Putin, who, according to U.S. intelligence officials, ordered an “influence” campaign in 2016 to undermine faith in U.S. democracy. U.S. officials have also assessed that the Internet Research Agency works on behalf of the Kremlin.

U.S. Escalates Online Attacks on Russia’s Power Grid
By David E. Sanger and Nicole Perlroth

The United States is stepping up digital incursions into Russia’s electric power grid in a warning to President Vladimir V. Putin and a demonstration of how the Trump administration is using new authorities to deploy cybertools more aggressively, current and former government officials said.

In interviews over the past three months, the officials described the previously unreported deployment of American computer code inside Russia’s grid and other targets as a classified companion to more publicly discussed action directed at Moscow’s disinformation and hacking units around the 2018 midterm elections.

Advocates of the more aggressive strategy said it was long overdue, after years of public warnings from the Department of Homeland Security and the F.B.I. that Russia has inserted malware that could sabotage American power plants, oil and gas pipelines, or water supplies in any future conflict with the United States.

But it also carries significant risk of escalating the daily digital Cold War between Washington and Moscow.

Hacking the Russian Power Grid
By The New York Times

So what happened in 2008 was the Russians did something pretty brilliant. They dropped a bunch of USB keys — you know, the kind you might get at a convention or maybe that’s given to you at a hotel — in parking lots around American bases in the Middle East. People would pick these things up, bring them into work, and, believe it or not, put them in their computers at work.

And suddenly, they were able to drain out of the Pentagon some of its most secret communications, all because somebody picked up a USB and stuck it in their machines. And one day, a woman named Debbie Plunkett came into the office at the N.S.A. Remember, this was just ahead of President Obama’s election. And she discovered this breach, and basically she said, we’ve got to get them out. And this started a massive effort secretly inside the N.S.A. to clean out the Department of Defense’s systems. In fact, after a while, people began using superglue to seal the USB ports on Pentagon computers …

A Summer Camp for the Next Generation of N.S.A. Agents
By Sue Halpern

That the N.S.A., which may best be known for its own security breaches—Snowden’s, in 2013, Hal Martin’s, in 2016, and the Shadow Brokers’, in 2017—is training kids to root out cybercriminals should tell you that the problem of cybercrime is bad. Twelve billion records were stolen last year; by 2023, that number is expected to triple. Even a short list of recent cyberattacks—on a Tennessee hospice, a Philadelphia credit union, a library system in upstate New York, government offices in New Bedford, Massachusetts, and Syracuse, New York, a Maine health center, the Los Angeles Police Department—illustrates the problem. The vulnerabilities are manifold, the defenses inadequate. As more devices are connected to the Internet and the attack surface expands, those vulnerabilities will not only multiply—they will be unmatched by the number of people trained to mitigate them. As Jon Oltsik, a cybersecurity analyst at the Enterprise Strategy Group, wrote in a January blog post, “The cybersecurity skills shortage represents an existential threat to all of us.”

How Chinese Spies Got the N.S.A.’s Hacking Tools, and Used Them for Attacks
By Nicole Perlroth, David E. Sanger and Scott Shane

Based on the timing of the attacks and clues in the computer code, researchers with the firm Symantec believe the Chinese did not steal the code but captured it from an N.S.A. attack on their own computers — like a gunslinger who grabs an enemy’s rifle and starts blasting away.

The Chinese action shows how proliferating cyberconflict is creating a digital wild West with few rules or certainties, and how difficult it is for the United States to keep track of the malware it uses to break into foreign networks and attack adversaries’ infrastructure.

The N.S.A. used sophisticated malware to destroy Iran’s nuclear centrifuges — and then saw the same code proliferate around the world, doing damage to random targets, including American business giants like Chevron.

Symantec discovered that as early as March 2016, the Chinese hackers were using tweaked versions of two N.S.A. tools, called Eternal Synergy and Double Pulsar, in their attacks. Months later, in August 2016, the Shadow Brokers released their first samples of stolen N.S.A. tools, followed by their April 2017 internet dump of its entire collection of N.S.A. exploits.

The Shadow Brokers’ release of the N.S.A.’s most highly coveted hacking tools in 2016 and 2017 forced the agency to turn over its arsenal of software vulnerabilities to Microsoft for patching and to shut down some of the N.S.A.’s most sensitive counterterrorism operations, two former N.S.A. employees said.

The N.S.A.’s tools were picked up by North Korean and Russian hackers and used for attacks that crippled the British health care system, shut down operations at the shipping corporation Maersk and cut short critical supplies of a vaccine manufactured by Merck. In Ukraine, the Russian attacks paralyzed critical Ukrainian services, including the airport, Postal Service, gas stations and A.T.M.s.

Inside Olympic Destroyer, the Most Deceptive Hack in History
By Andy Greenberg

“Olympic Destroyer was the first time someone used false flags of that kind of sophistication in a significant, national-security-relevant attack,” Healey says. “It’s a harbinger of what the conflicts of the future might look like.”

Healey, who worked in the George W. Bush White House as director for cyber infrastructure protection, says he has no doubt that US intelligence agencies can see through deceptive clues that muddy attribution. He’s more worried about other countries where a misattributed cyberattack could have lasting consequences. “For the folks that can’t afford CrowdStrike and FireEye, for the vast bulk of nations, attribution is still an issue,” Healey says. “If you can’t imagine this with US and Russia, imagine it with India and Pakistan, or China and Taiwan, where a false flag provokes a much stronger response than even its authors intended, in a way that leaves the world looking very different afterwards.”

Hackers expose Russian intelligence agency’s secret internet projects in ‘the largest data leak’ the group has ever faced
By Kat Tenbarge

The unearthed cyber projects included at least 20 non-public initiatives, and 0v1ru$ also released the names of the SyTech project managers associated with them. BBC Russia reports that none of the breached data contains Russian government secrets.

Projects referred to as “Nautilus” and “Nautilus-S” appear to be attempts to scrape social media sites for data extraction, and to identify Russian internet users who seek to access the internet anonymously via Tor browsers that withhold users’ locations. Forbes reports that the “Nautilus-S” project is believed to have made progress since its initial launch in 2012, under the FSB’s Kvant Research Institute.

Project “Mentor” appears to focus on data collection from Russian enterprises, while “Hope” and “Tax-3” appear to relate to Russia’s ongoing initiative to separate its internal internet from the world wide web.

An entire nation just got hacked
By Ivana Kottasová

Asen Genov is pretty furious. His personal data was made public this week after records of more than 5 million Bulgarians got stolen by hackers from the country’s tax revenue office.

In a country of just 7 million people, the scale of the hack means that just about every working adult has been affected.

Government databases are gold mines for hackers. They contain a huge wealth of information that can be “useful” for years to come, experts say.

“You can make (your password) longer and more sophisticated, but the information the government holds are things that are not going to change,” said Guy Bunker, an information security expert and the chief technology officer at Clearswift, a cybersecurity company.

“Your date of birth is not going to change, you’re not going to move house tomorrow,” he said. “A lot of the information that was taken was valid yesterday, is valid today, and will probably be valid for a large number of people in five, 10, 20 years’ time.”

Data breaches used to be spearheaded by highly skilled hackers. But it increasingly doesn’t take a sophisticated and carefully planned operation to break into IT systems. Hacking tools and malware that are available on the dark web make it possible for amateur hackers to cause enormous damage.

A strict data protection law that came into effect last year across the European Union has placed new burdens on anyone who collects and stores personal data. It also introduced hefty fines for anyone who mismanages data, potentially opening the door for the Bulgarian government to fine itself for the breach.

Still, attacks against government systems are on the rise, said Adam Levin, the founder of CyberScout, another cybersecurity firm. “It’s a war right now — one we will win if we make cybersecurity a front-burner issue,” he said.

Garry Kasparov Says AI Can Make Us ‘More Human’
By Dan Costa

Dan Costa: So I want to ask you the three questions I ask everybody that comes on the show. Is there a technology trend that concerns you and that keeps you up at night?

Garry Kasparov: No, I’m an incorrigible optimist. I worry about bad people, not about bad technology, because every technology has a dual use. You can build a nuclear reactor, but before that, unfortunately, you build a nuclear bomb. It’s quite unfortunate that destruction is easier than construction. That’s why we know from history that every new, disruptive technology has been tested for some sort of damage.

Dan Costa: Are you not worried that the same thing will happen to AI? That it will be used to destroy before it gets used to create?

Garry Kasparov: Again, it’s not about killer robots. It’s about bad guys, bad actors behind it. People will say oh, we should think about ethical AI. AI could not be more ethical than its creators. I don’t understand what it means, like ethical electricity. If we have bias in our society, AI follows it. It sees a disparity, whether it’s racial, it’s gender, or an income disparity. It takes it into account; AI is an algorithm based on odds. So somehow, complaining about ethical AI is like complaining about a mirror because we don’t like what we see there.

Dan Costa: Is there a technology that you use every day that still inspires wonder?

Garry Kasparov: No. For me, the real wonder of the world is access to information. Since I can collect data, it makes it easier. I grew up in the Soviet Union, where information was scarce; there were not many books. Now, the fact is that I can [read anything on a] Kindle…it just makes me feel good. There’s so much technology that surrounds us now that helps us to get better. Also, what’s amazing is that people keep complaining, oh, what can we do? There’s nothing new that can be invented…and I say, wait a second. Look at this device in your pocket. Let’s go back to 1976 or 1977, when the Cray supercomputer was like a miracle. This device is what? Ten thousand times more powerful?

Paging Big Brother: In Amazon’s Bookstore, Orwell Gets a Rewrite
By David Streitfeld

I started browsing Orwell on Amazon after writing about the explosion in counterfeit books offered by the retailer. The fake books appeared to help Amazon by, for example, encouraging publishers to advertise their genuine books on the site. The company responded in a blog post that it prohibits counterfeit products and has invested in personnel and technology tools including machine learning to protect customers from fraud and abuse.

On Sunday, Amazon said in a statement that “there is no single source of truth” for the copyright status of every book in every country, and so it relied on authors and publishers to police its site. “This is a complex issue for all retailers,” it said. The company added that machine learning and artificial intelligence were ineffective when there is no single source of truth from which the model can learn.

Bookselling is an ancient and complicated profession, and fake editions of all sorts can turn up anywhere. But Amazon is the world’s biggest bookstore and the standards it sets have ripples everywhere.

How it treats Orwell is especially revelatory because their relationship has been fraught. In 2009, Amazon wiped counterfeit copies of “1984” and “Animal Farm” from customers’ Kindles, creeping out some readers who realized their libraries were no longer under their control.

Microsoft’s Ebook Apocalypse Shows the Dark Side of DRM
By Brian Barrett

Your iTunes movies, your Kindle books—they’re not really yours. You don’t own them. You’ve just bought a license that allows you to access them, one that can be revoked at any time. And while a handful of incidents have brought that reality into sharp relief over the years, none has quite the punch of Microsoft disappearing every single ebook from every one of its customers.

Microsoft made the announcement in April that it would shutter the Microsoft Store’s books section for good. The company had made its foray into ebooks in 2017, as part of a Windows 10 Creators Update that sought to round out the software available to its Surface line. Relegated to Microsoft’s Edge browser, the digital bookstore never took off. As of April 2, it halted all ebook sales. And starting as soon as this week, it’s going to remove all purchased books from the libraries of those who bought them.

Microsoft will refund customers in full for what they paid, plus an extra $25 if they made annotations or markups. But that provides only the coldest comfort.

“On the one hand, at least people aren’t out the money that they paid for these books. But consumers exchange money for goods because they preferred the goods to the money. That’s what happens when you buy something,” says Aaron Perzanowski, professor at the Case Western Reserve University School of Law and coauthor of The End of Ownership: Personal Property in the Digital Economy. “I don’t think it’s sufficient to cover the harm that’s been done to consumers.”

The issue also extends beyond ebooks and movies. Think of Jibo, the $900 robot whose servers are shutting down. Or the Revolv smart-home hub that Google acquired and promptly shut down—sparking another FTC inquiry. Even Keurig tried to DRM its coffee pods.

The Final Battle in Big Tech’s War to Dominate Your World
By David Dayen

What is sacrificed for the convenience of an always-on digital life partner? Choice, for one thing: Customers will be subject to the whims of a lone digital gatekeeper. Aspiring film or video game makers will have to sign with a single dominant player to reach an audience. Will these suppliers be able to survive if a middleman like Apple takes as much as 50 percent of the revenue, as rumored with News+? Moreover, the algorithm guiding you through life could distort and pervert: We learned last week how Facebook’s ad server discriminates by race and gender, even when it’s on autopilot.

We created antitrust laws out of concern that monopoly corporations could rewrite laws, hoard profits, squeeze suppliers, and dictate the structures of daily life from their lofty perch. It may seem positive that these companies are taking on one another, with consumers poised to enjoy a surplus in any war for their attention. But when it concludes, the outcome could be a kind of digital tyranny, where participation in society demands signing up with a giant corporate overlord. At stake isn’t simply market competition, but the very notion of freedom.

Threat of mass shootings gives rise to AI-powered cameras
By Ivan Moreno

AI is transforming surveillance cameras from passive sentries into active observers that can identify people, suspicious behavior and guns, amassing large amounts of data that help them learn over time to recognize mannerisms, gait and dress. If the cameras have a previously captured image of someone who is banned from a building, the system can immediately alert officials if the person returns.

At a time when the threat of a mass shooting is ever-present, schools are among the most enthusiastic adopters of the technology, known as real-time video analytics or intelligent video, even as civil liberties groups warn about a threat to privacy. Police, retailers, stadiums and Fortune 500 companies are also using intelligent video.

“What we’re really looking for are those things that help us to identify things either before they occur or maybe right as they occur so that we can react a little faster,” Hildreth said.

A year after an expelled student killed 17 people at Marjory Stoneman Douglas High School in Parkland, Florida, Broward County installed cameras from Canada-based Avigilon throughout the district in February. Hildreth’s Atlanta district will spend $16.5 million to put the cameras in its roughly 100 buildings in coming years.

In Greeley, Colorado, the school district has used Avigilon cameras for about five years, and the technology has advanced rapidly, said John Tait, security manager for Weld County School District 6.

Upcoming upgrades will add the ability to identify guns and read people’s expressions, capabilities not currently part of Avigilon’s systems.

“It’s almost kind of scary,” Tait said. “It will look at the expressions on people’s faces and their mannerisms and be able to tell if they look violent.”

Retailers can spot shoplifters in real time and alert security, or warn of a potential shoplifter. One company, Athena-Security, has cameras that spot when someone has a weapon. And in a bid to help retailers, it recently expanded its capabilities to help identify big spenders when they visit a store.

Big tech is spying on your wallet
By Phillip Longman

Until a few years ago, efforts to personalize prices using digital data about the customer were relatively primitive. In 2012, for example, a Wall Street Journal investigation found that Staples.com was quoting people higher prices if they lived in an area that lacked an Office Depot or other Staples competitor. The same year, researchers published evidence that Amazon was routinely charging some customers 20 percent more (and in some cases 166 percent more) than other customers for the same Kindle e-book based on the customers’ location. The same researchers also found that Google would recommend more expensive or cheaper models of digital cameras, headphones, and other products to different customers based on what Google’s algorithm concluded was their ability to pay.

By 2016, a ProPublica investigation revealed that Amazon was engaging in a different dimension of marketplace discrimination—one that affects both buyers and sellers and that deeply distorts the ability of markets to set fair and efficient prices. Amazon both provides a platform for third-party vendors and sells products directly on the same platform. In this way, not only does Amazon own the biggest store in the largest mall, it owns the mall itself. What ProPublica found was that when consumers entered this virtual mall and searched for the best deal on, say, Loctite Super Glue, Amazon would prominently display offers available directly from Amazon rather than those offered by highly rated merchants who were selling the same glue for less.

This is just the beginning. When people try to sell their wares on Amazon, whether they are publishers trying to sell books or merchants trying to sell glue, they have to accept the terms Amazon offers. Indeed, these days many can’t reach the customers they need except through Amazon, which makes it very hard for them to say no when, for example, Amazon suggests it’s time to fork over more money so it doesn’t bury their offers at the bottom of every search. And because Amazon effectively has the ability to look into their cash registers, it has deep knowledge of just how much they can afford to pay. It can use this knowledge to wring more money from sellers.

On current trends, these forms of discrimination are poised to get far worse. One reason is the vastly increasing amounts of data that individuals and businesses generate online. Another is the rapidly increasing processing power available through machine learning, artificial intelligence, and other advances in computing, which enable more sophisticated, highly tailored means of discriminating. According to a report by Deloitte and Salesforce, 40 percent of brands that currently deploy AI are using the technology not just to personalize the customer experience but also to tailor pricing and promotions in real time.

How to make algorithms fair when you don’t know what they’re doing
By Amit Katwala

Google, 2018
Researchers at Cornell University found that setting a user’s gender to female resulted in them being served fewer ads for high-paying jobs.

Durham police force, 2017
An algorithm to predict reoffending was rolled back due to concerns that it discriminated against people from certain areas.

China, ongoing
The state monitors many aspects of an individual’s life – such as employment and hobbies – to give a score based on “trustworthiness”.

Chicago Police, 2013-2017
A project to identify the risk of being involved in a shooting labelled people as previous offenders based on where they lived.

Google, 2013
A study showed searches with “black sounding” names were more likely to turn up ads for services such as criminal background checks.

Racial bias in a medical algorithm favors white patients over sicker black patients
By Carolyn Y. Johnson

A widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimates the health needs of the sickest black patients, amplifying long-standing racial disparities in medicine, researchers have found.

Correcting the bias would more than double the number of black patients flagged as at risk of complicated medical needs within the health system the researchers studied, and they are already working with Optum on a fix. When the company replicated the analysis on a national data set of 3.7 million patients, they found that black patients whom the algorithm ranked as equally in need of extra care as white patients were much sicker: They collectively suffered from 48,772 additional chronic diseases.

The algorithm wasn’t intentionally racist — in fact, it specifically excluded race. Instead, to identify patients who would benefit from more medical support, the algorithm used a seemingly race-blind metric: how much patients would cost the health-care system in the future. But cost isn’t a race-neutral measure of health-care need. Black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions; thus the algorithm scored white patients as equally at risk of future health problems as black patients who had many more diseases.
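
The mechanics are worth making concrete: the model’s label is future cost, not future sickness, so any group that spends less at the same level of illness gets systematically under-ranked. The toy simulation below is not the Optum model; it uses synthetic patients and an illustrative $1,800 spending gap just to show how ranking by cost reproduces the disparity the researchers describe.

```python
import random

random.seed(0)

def simulate_patient(group):
    """Synthetic patient: 'need' is the number of chronic conditions;
    annual cost tracks need, but group B spends less at the same need
    (e.g., because of unequal access to care)."""
    need = random.randint(0, 10)
    base_cost = 1200 * need + random.gauss(0, 500)
    cost = base_cost - (1800 if group == "B" else 0)  # illustrative gap only
    return {"group": group, "need": need, "cost": max(cost, 0)}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# A cost-trained model, in the best case, predicts cost perfectly.
# Rank everyone by (predicted) cost and flag the top 10% for extra care.
flagged = sorted(patients, key=lambda p: p["cost"], reverse=True)[:1000]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
avg_need = {g: sum(p["need"] for p in flagged if p["group"] == g) /
               max(1, sum(p["group"] == g for p in flagged))
            for g in ("A", "B")}

print(f"Group B share of flagged patients: {share_b:.0%}")
print("Average need among flagged patients:", avg_need)
# Group B is under-represented, and the B patients who do get flagged are
# sicker than the A patients flagged alongside them -- the same pattern the
# researchers observed when the algorithm ranked patients by cost.
```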

The software used to predict patients’ need for more intensive medical support was an outgrowth of the Affordable Care Act, which created financial incentives for health systems to keep people well instead of waiting to treat them when they got sick. The idea was that it would be possible to simultaneously contain costs and keep people healthier by identifying those patients at greatest risk for becoming very sick and providing more resources to them. But because wealthy, white people tend to utilize more health care, such tools could also lead health systems to focus on them, missing an opportunity to help some of the sickest people.

Christine Vogeli, director of evaluation and research at the Center for Population Health at Partners HealthCare, a nonprofit health system in Massachusetts, said when her team first tested the algorithm, they mapped the highest scores in their patient population and found them concentrated in some of the most affluent suburbs of Boston. That led them to use the tool in a limited way, supplementing it with other information, rather than using it off the shelf.

“You’re going to have to make sure people are savvy about it … or you’re going to have an issue where you’re only serving the richest and most wealthy folks,” Vogeli said.

Such biases may seem obvious in hindsight, but algorithms are notoriously opaque because they are proprietary products that can cost hundreds of thousands of dollars. The researchers who conducted the new study had an unusual amount of access to the data that went into the algorithm and what it predicted.

Can data-labour unions break the monopoly capture of data?
By Karin Pettersson

Recently the Economist reported that Uber drivers were taking legal action against the company. The drivers want access to the data collected about them and their performance. They do not understand how they are rated and how jobs are assigned. Access to the ratings and reviews would let drivers appeal unfair dismissal from the app—something they can’t do today.

The data, of course, are at the core of Uber’s business model. The algorithms from which Uber makes its money feed on data delivered by the drivers. But the value is captured by the company and is not even accessible to the people producing it.

The Uber drivers, in this case, are not even demanding a share in the value creation—just a minimum level of transparency. The issue has been handed over to European courts.

An increasing share of the value in today’s economy does not emanate from labour but from the data extracted from human activity. Today that value is captured by a few players, leading to what John Doerr, an early Amazon and Google investor, has called ‘the greatest legal accumulation of wealth in history’. The tech giants employ significantly fewer people than other industries and the labour-income share of these companies seems to be only a fraction of the traditional average.

Amazon’s warehouse-worker tracking system can automatically fire people without a human supervisor’s involvement
By Julie Bort

Amazon’s system tracks a metric called “time off task,” meaning how much time workers pause or take breaks, The Verge reported. It has been previously reported that some workers feel so pressured that they don’t take bathroom breaks.

If the system determines the employee is failing to meet production targets, it can automatically issue warnings and terminate them without a supervisor’s intervention, although Amazon said that a human supervisor can override the system. The company also said it provides training to those who don’t meet their production goals.
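
Neither Amazon nor The Verge has published the system’s actual rules, so what follows is only a hypothetical sketch, with made-up metric names and thresholds, of how a fully automated warn-then-terminate loop of the kind described could be wired up, and where a human override would sit in it.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- not Amazon's real numbers.
TIME_OFF_TASK_LIMIT_MIN = 30       # minutes per shift before a warning
WARNINGS_BEFORE_TERMINATION = 3

@dataclass
class WorkerRecord:
    worker_id: str
    warnings: int = 0
    terminated: bool = False

def review_shift(record: WorkerRecord, time_off_task_min: float,
                 supervisor_override: bool = False) -> str:
    """Apply the hypothetical automated policy to one shift's metrics."""
    if record.terminated:
        return "already terminated"
    if time_off_task_min <= TIME_OFF_TASK_LIMIT_MIN:
        return "ok"
    record.warnings += 1
    if record.warnings >= WARNINGS_BEFORE_TERMINATION:
        if supervisor_override:
            return "termination blocked by supervisor"
        record.terminated = True
        return "terminated automatically"
    return f"warning {record.warnings} issued automatically"

worker = WorkerRecord("w-001")
for minutes in (42, 55, 61):       # three shifts over the (made-up) limit
    print(review_shift(worker, minutes))
```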

While all employees in every job know they could be fired if they fail to meet their performance objectives, few of us are managed by an automated system tracking our every movement that has full authority to make that decision.

And, of course, people are not robots. People have highly productive days and less-productive days. The true benefit of a human workforce isn’t to use people like cogs in a production wheel, but to employ humans who are creative, can solve problems, and can learn and grow if they are given the breathing room to contribute.

Nevertheless, Amazon’s mechanisms for exacting productivity are pervasive in many areas of its operations. For instance, drivers delivering Amazon packages have reported feeling so pressured that they speed through neighborhoods, blow by stop signs, and pee in bottles in the trucks or outside, Business Insider’s Hayley Peterson reported.

The Surveillance Threat Is Not What Orwell Imagined
By Shoshana Zuboff

Augmented reality game Pokémon Go, developed at Google and released in 2016 by a Google spinoff, took the challenge of mass behavioral modification to a new level. Business customers from McDonald’s to Starbucks paid for “footfall” to their establishments on a “cost per visit” basis, just as online advertisers pay for “cost per click.” The game engineers learned how to herd people through their towns and cities to destinations that contribute profits, all of it without game players’ knowledge.

How AI Will Rewire Us
By Nicholas A. Christakis

For instance, the political scientist Kevin Munger directed specific kinds of bots to intervene after people sent racist invective to other people online. He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.

But adding AI to our social environment can also make us behave less productively and less ethically. In yet another experiment, this one designed to explore how AI might affect the “tragedy of the commons”—the notion that individuals’ self-centered actions may collectively damage their common interests—we gave several thousand subjects money to use over multiple rounds of an online game. In each round, subjects were told that they could either keep their money or donate some or all of it to their neighbors. If they made a donation, we would match it, doubling the money their neighbors received. Early in the game, two-thirds of players acted altruistically. After all, they realized that being generous to their neighbors in one round might prompt their neighbors to be generous to them in the next one, establishing a norm of reciprocity. From a selfish and short-term point of view, however, the best outcome would be to keep your own money and receive money from your neighbors. In this experiment, we found that by adding just a few bots (posing as human players) that behaved in a selfish, free-riding way, we could drive the group to behave similarly. Eventually, the human players ceased cooperating altogether. The bots thus converted a group of generous people into selfish jerks.
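
The published experiment used networked human subjects and real money; the toy simulation below only illustrates the dynamic Christakis describes, under assumed parameters: conditional cooperators who donate in proportion to the cooperation they observed last round, plus a few bots that never donate.

```python
import random

random.seed(1)

N_HUMANS, N_BOTS, ROUNDS = 30, 3, 15

def run(n_bots):
    # Humans start generous; bots never donate.
    humans = [True] * N_HUMANS            # True = donated last round
    bots = [False] * n_bots
    history = []
    for _ in range(ROUNDS):
        everyone = humans + bots
        coop_rate = sum(everyone) / len(everyone)
        history.append(coop_rate)
        # Conditional cooperation: a human donates with probability equal
        # to the cooperation it observed last round (plus a little slack).
        humans = [random.random() < min(1.0, coop_rate + 0.05)
                  for _ in range(N_HUMANS)]
    return history

print("no bots:   ", [round(r, 2) for r in run(0)][::3])
print("with bots: ", [round(r, 2) for r in run(N_BOTS)][::3])
# With even a few free-riders in the mix, observed cooperation drifts
# downward round after round; without them it stays near the ceiling.
```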

Computers Can Now Bluff Like a Poker Champ. Better, Actually.
By Daniela Hernandez

Pluribus developed its winning poker strategy and superb bluffing skills by playing trillions of hands against five other clones of itself, said Dr. Brown.

After each round, it analyzed its decisions. If these resulted in wins, the bot would be more likely to opt for such moves in the future.

Pluribus’s digital brain realized it could win by betting with a weak hand to force its opponents to fold, which also taught it that it should bluff in future plays, said Dr. Brown. It then used those lessons to make real-time decisions when battling top human players, all of whom had earned more than $1 million playing professionally, according to the paper.

“People have this notion that [bluffing] is a very human ability—that it’s about looking into the other person’s eyes,” Dr. Brown said. “It’s really about math, and this is what’s going on here. We can create an AI algorithm that can bluff better than any human.”
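
Pluribus’s blueprint strategy was reportedly trained with a form of counterfactual regret minimization over self-play, followed by real-time search. The sketch below is not that system; it is a stripped-down illustration of the same “it’s really about math” point on a hypothetical two-option spot (made-up stakes: antes of 1, a bet of 1, a strong hand dealt half the time and always bet), where regret matching in self-play drifts toward bluffing the weak hand about a third of the time and calling about two-thirds of the time.

```python
import random

random.seed(7)

# Expected payoff to the bettor in the hypothetical spot described above.
# Rows: the bettor's plan for a weak hand; columns: the opponent's response to a bet.
PAYOFF = {("honest", "call"): 0.5, ("honest", "fold"): 0.0,
          ("bluff",  "call"): 0.0, ("bluff",  "fold"): 1.0}
BETTOR, OPPONENT = ["honest", "bluff"], ["call", "fold"]

def strategy(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / len(pos)] * len(pos)

def train(iters=200_000):
    b_regret, o_regret = [0.0, 0.0], [0.0, 0.0]
    b_sum, o_sum = [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        b_strat, o_strat = strategy(b_regret), strategy(o_regret)
        b_sum = [s + x for s, x in zip(b_sum, b_strat)]
        o_sum = [s + x for s, x in zip(o_sum, o_strat)]
        b = random.choices(BETTOR, b_strat)[0]
        o = random.choices(OPPONENT, o_strat)[0]
        # "If these resulted in wins, opt for such moves more": accumulate how
        # much better each alternative would have done against what just happened.
        for i, alt in enumerate(BETTOR):
            b_regret[i] += PAYOFF[(alt, o)] - PAYOFF[(b, o)]
        for j, alt in enumerate(OPPONENT):            # zero-sum: opponent minimizes
            o_regret[j] += PAYOFF[(b, o)] - PAYOFF[(b, alt)]
    return [x / iters for x in b_sum], [x / iters for x in o_sum]

bettor, opponent = train()
print(f"bettor bluffs the weak hand {bettor[1]:.0%} of the time")  # ~33%
print(f"opponent calls a bet {opponent[0]:.0%} of the time")       # ~67%
```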

AI: Brooms on a mission — a cautionary tale
By Tom Chivers

If your amazing cancer-curing AI stops looking for a cure for cancer after three days, randomly scrambles its utility function, and starts caring very deeply about ornithology, for example, then it’s not much use to you, even if it doesn’t accidentally destroy the universe, which it might.

“Step number one to making it safe is making sure its reward function is stable,” Shanahan said. “And we can probably do that.”

But there may be times when we don’t want it to stay the same. Our values change over time. Holden Karnofsky, whose organisation OpenPhil supports a lot of AI safety research, pointed that out to me. “Imagine if we took the values of 1800 AD,” he said. If an AI had been created then (Charles Babbage was working on it, sort of), and had become superintelligent and world-dominating, then would we want it to stay eternally the same?

“If we entrenched those values for ever; if we said: ‘We really think the world should work this way, and so that’s the way we want the world to work for ever,’ that would have been really bad.” We will probably feel much the same way about the values of 2019 in 200 years’ time, assuming that we last that long.

And, more starkly, if we get the values we instil in it slightly wrong, according to the people who worry about these things, it’s not just that it’ll entrench the ideals of a particular time, or that it will not be good at its job. It’s that (as we’ve discussed) it could destroy everything that we value, in the process of finding the most efficient way of maximising whatever it values.

Maybe the Rationalists are right – AI could go terribly wrong
By Tom Chivers

A paper released in 2018 showed how some AIs, programmed using evolutionary methods, went off the rails in ways that are very recognisable. One, for instance, was told to win at noughts and crosses against other AIs. It found that the best way to do this was to play impossible moves billions of squares away from the board; that forced its opponents to simulate a billions-of-squares-wide board, which their memory couldn’t handle, so they crashed. The AI won a lot of games by default. Another was supposed to sort lists into order; it realised that by hacking into the target files and deleting the lists, it could return empty lists and they’d always be correct. These AIs have “solved” the problem, but not in the way the programmers wanted.
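
The sorting example is easy to reproduce in miniature. If the fitness test only checks that whatever comes back is in order, and never that the output still contains the input, then a program that returns nothing scores perfectly; the checker below is a hypothetical stand-in for that kind of under-specified objective.

```python
def is_sorted(seq):
    return all(a <= b for a, b in zip(seq, seq[1:]))

def naive_fitness(sort_fn, test_lists):
    """Reward a candidate sorter for producing sorted output --
    without checking that the output still contains the input."""
    return sum(is_sorted(sort_fn(lst)) for lst in test_lists) / len(test_lists)

def honest_sort(lst):
    return sorted(lst)

def degenerate_sort(lst):
    # The exploit the paper describes, in spirit: discard the data and
    # return an empty list, which every "is it sorted?" check accepts.
    return []

tests = [[3, 1, 2], [9, 9, 1], [5], []]
print(naive_fitness(honest_sort, tests))      # 1.0
print(naive_fitness(degenerate_sort, tests))  # 1.0 -- a perfect score for doing nothing
# A better fitness function would also require the output to be a
# permutation of the input, e.g. sorted(output) == sorted(input).
```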

Pentagon outlines its first artificial intelligence strategy
By Matt O’Brien

The U.S. military wants to expand its use of artificial intelligence in warfare, but says it will take care to deploy the technology in accordance with the nation’s values.

The Pentagon outlined its first AI strategy in a report released Tuesday.

The plan calls for accelerating the use of AI systems throughout the military, from intelligence-gathering operations to predicting maintenance problems in planes or ships. It urges the U.S. to advance such technology swiftly before other countries chip away at its technological advantage.

“Other nations, particularly China and Russia, are making significant investments in AI for military purposes, including in applications that raise questions regarding international norms and human rights,” the report says.

The report makes little mention of autonomous weapons but cites an existing 2012 military directive that requires humans to be in control.

The U.S. and Russia are among a handful of nations that have blocked efforts at the United Nations for an international ban on “killer robots” — fully autonomous weapons systems that could one day conduct war without human intervention. The U.S. has argued that it’s premature to try to regulate them.

Coming Soon to a Battlefield: Robots That Can Kill
By Zachary Fryer-Biggs

Work, like all the current and former officials who discussed the future of AI in weapons with me, said that he doesn’t know of anyone in the military now trying to remove human beings entirely from lethal decision making. No such offensive system has been put through the specialized review process created by an Obama-era Pentagon directive, although the procedures have gotten a lot of internal attention, according to current and former Defense Department officials.

Work also says that the concept of machines entirely picking their own targets or going horribly awry, like something out of the Terminator movies, is unlikely because the offensive technologies being developed have only narrow applications. They “will only attack the things that we said” they could, Work said.

Work’s ideas got a sympathetic hearing from Air Force General Paul Selva, who retired as the vice chairman of the Joint Chiefs of Staff in July and was a major backer of AI-related innovations. But Selva bluntly talked about the “Terminator conundrum,” the question of how to grapple with the arrival of machines that are capable of deciding to kill on their own.

Speaking at a Washington think tank in 2016, he made clear that the issue wasn’t hypothetical: “In the world of autonomy, as we look at what our competitors might do in that same space, the notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here,” he said. “It’s not terribly refined, it’s not terribly good, but it’s here.”

He further explained in June at the Brookings Institution that machines can be told to sense the presence of targets following a programmer’s specific instructions. In such an instance, Selva said, the machines recognize the unique identifying characteristics of one or more targets—their “signature”—and can be told to detonate when they clearly identify a target. “It’s code that we write … The signatures are known, thus the consequences are known.”

With artificial intelligence, Selva said at Brookings, machines can be instructed less directly to “go learn the signature.” Then they can be told, “Once you’ve learned the signature, identify the target.” In those instances, machines aren’t just executing instructions written by others, they are acting on cues they have created themselves, after learning from experience—either their own or others’.

Selva has said that so far, the military has held back from turning killing decisions directly over to intelligent machines. But he has recommended a broad “national debate,” in which the implications of letting machines choose whom and when to kill can be measured.

Amazon, Microsoft, ‘putting world at risk of killer AI’: study
By Issam Ahmed

“Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons,” Stuart Russell, a computer science professor at the University of California, Berkeley told AFP on Wednesday.

“The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role,” he added.

The development of AI for military purposes has triggered debates and protest within the industry: last year Google declined to renew a Pentagon contract called Project Maven, which used machine learning to distinguish people and objects in drone videos.

It also dropped out of the running for Joint Enterprise Defense Infrastructure (JEDI), the cloud contract that Amazon and Microsoft are hoping to bag.

The report noted that Microsoft employees had also voiced their opposition to a US Army contract for an augmented reality headset, HoloLens, that aims at “increasing lethality” on the battlefield.

According to Russell, “anything that’s currently a weapon, people are working on autonomous versions, whether it’s tanks, fighter aircraft, or submarines.”

Israel’s Harpy is an autonomous drone that already exists, “loitering” in a target area and selecting sites to hit.

More worrying still are new categories of autonomous weapons that don’t yet exist — these could include armed mini-drones like those featured in the 2017 short film “Slaughterbots.”

“With that type of weapon, you could send a million of them in a container or cargo aircraft — so they have destructive capacity of a nuclear bomb but leave all the buildings behind,” said Russell.

Using facial recognition technology, the drones could “wipe out one ethnic group or one gender, or using social media information you could wipe out all people with a political view.”

The European Union in April published guidelines for how companies and governments should develop AI, including the need for human oversight, working towards societal and environmental wellbeing in a non-discriminatory way, and respecting privacy.

Russell argued it was essential to take the next step in the form of an international ban on lethal AI, one that could be summarized as “machines that can decide to kill humans shall not be developed, deployed, or used.”

How Artificial Intelligence Is Reshaping Repression
By Steven Feldstein (PDF, 911KB)

Even before the onset of digital repression, the landscape of contemporary authoritarianism was shifting in noteworthy ways. First, the erosion of democratic institutions and norms has accelerated worldwide. The Varieties of Democracy (V-Dem) 2018 report estimates that around 2.5 billion people now live in countries affected by this “global autocratization trend.” … In fact, gradual democratic backsliding has become one of the most common routes to authoritarianism.

Second, the manner in which autocrats exit power is also changing. From 1946 to 1988, coups were the most common way for autocrats to leave office, with such events accounting for 48.6 percent of authoritarian exits. But in the post–Cold War era, instances of change from factors external to the regime have overtaken coups. From 1989 to 2017, the most common causes of departure for dictators were popular revolt and electoral defeat. Exits through coups have plummeted, making up only 13 percent of total exits (in fact, leadership exits due to civil war slightly exceeded exits from coups in this period). …

This indicates that the gravest threats to authoritarian survival today may be coming not from insider-led rebellions, but from discontented publics on the streets or at the ballot box. The implication for dictators who want to stay in power is clear: redirect resources to keep popular civic movements under control and do a better job of rigging elections. In these areas, AI technology provides a crucial advantage. Rather than relying on security forces to repress their citizenry—with all the resource costs and political risk that this entails—autocratic leaders are embracing digital tactics for monitoring, surveilling, and harassing civil society movements and for distorting elections.

AI technology is “dual-use”: It can be deployed for beneficial purposes as well as exploited for military and repressive ends. But this technology cannot be neatly separated into “beneficial” and “harmful” buckets. The functions that gain value from automation can just as easily be used by authoritarians for malicious purposes as by democratic or commercial actors for beneficial ones. To help ensure that AI is used responsibly, enhancing the connections linking the policy community to engineers and researchers will be key. In other words, those responsible for designing, programming, and implementing AI systems also should share responsibility for applying and upholding human-rights standards. Policy experts should be in regular, open dialogue with engineers and technologists so that all sides are aware of potential misuses of AI and can develop appropriate responses at an early stage.

How U.S. Tech Giants Are Helping to Build China’s Surveillance State
By Ryan Gallagher

The two sources familiar with Semptian’s work in China said that the company’s equipment does not vacuum up and store millions of people’s data on a random basis. Instead, the sources said, the equipment has visibility into communications as they pass across phone and internet networks, and it can filter out information associated with particular words, phrases, or people.

In response to a request for a video containing further details about how Aegis works, Zhu agreed to send one, provided that the undercover reporter signed a nondisclosure agreement. The Intercept is publishing a short excerpt of the 16-minute video because of the overwhelming public importance of its content, which shows how millions of people in China are subject to government surveillance. The Intercept removed information that could infringe on individual privacy.

The Semptian video demonstration shows how the Aegis system tracks people’s movements. If a government operative enters a person’s cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer.

The video displays a map of mainland China and zooms in to electronically follow a person in Shenzhen as they travel through the city, from an airport, through parks and gardens, to a conference center, to a hotel, and past the offices of a pharmaceutical company.

The technology can also allow government users to run searches for a particular instant messenger name, email address, social media account, forum user, blog commenter, or other identifier, like a cellphone IMSI code or a computer MAC address, a unique series of numbers associated with each device.

In many cases, it appears that the system can collect the full content of a communication, such as recorded audio of a phone call or the written body of a text message, not just the metadata, which shows the sender and the recipient of an email, or the phone numbers someone called and when. Whether the system can access the full content of a message likely depends on whether it has been protected with strong encryption.
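
The metadata/content distinction the sources draw can be made concrete with a hypothetical record of the kind such systems handle: end-to-end encryption can render the message body opaque, but the routing details a network needs to deliver the message stay readable, and those alone are enough to map who talked to whom, when, and roughly where.

```python
# A hypothetical record, for illustration only; all identifiers are made up.
record = {
    "metadata": {
        "sender": "+00-555-0100",
        "recipient": "+00-555-0199",
        "timestamp": "2019-07-25T14:32:00Z",
        "cell_tower": "tower-4711",   # implies approximate location
    },
    # Opaque if the message was end-to-end encrypted; readable otherwise.
    "content": b"\x8f\x02...ciphertext...",
}

# Mapping movements and contacts needs only the metadata:
meta = record["metadata"]
print(meta["sender"], "->", meta["recipient"],
      "near", meta["cell_tower"], "at", meta["timestamp"])
```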

It is unclear why the U.S. tech giants have chosen to work with Semptian; the decision may have been taken as part of a broader strategy to establish closer ties with China and gain greater access to the East Asian country’s lucrative marketplace. A spokesperson for the OpenPower Foundation declined to answer questions about the organization’s work with Semptian, saying only that “technology available through the Foundation is general purpose, commercially available worldwide, and does not require a U.S. export license.”

How U.S. surveillance technology is propping up authoritarian regimes
By Robert Morgus and Justin Sherman

Authoritarian governance is a growing trend around the world. Building out domestic surveillance infrastructure to aid in the adoption of imported surveillance technologies — a characteristic of digital authoritarianism — exacerbates this global affront to democracy. It cuts domestic economies off from the global network and allows states at will to censor and slow access to Internet content they deem undesirable. This, in turn, contributes to the global rise in attacks on free press and open public discourse. More broadly, digital authoritarianism consolidates power in the hands of governments that are themselves hostile — or typically align themselves with powers hostile — to U.S. interests.

Enabling digital authoritarianism also supports a model — one championed by states such as China and Russia — that is opposed to free speech and threatens to further restrict it around the world. This is an approach that tries to establish such practices as content censorship, online surveillance and traffic throttling as global norms. That would fracture the Internet as we know it, encouraging governments to construct servers, cables and other domestic systems that can be tightly controlled by the state and cut off from the rest of the world as desired.

For many reasons, the United States and its allies do not subscribe to this vision of a sovereign and controlled Internet. Rather, the United States has long promoted an Internet that is global and open, largely for its democratizing force. Generally, this means that countries such as the United States have defended free speech online, protected net neutrality and advocated the economic benefits of a global Internet that connects markets and societies.

But when U.S. companies sell surveillance technology to the likes of Saudi Arabia, they are sending more than surveillance kits; they are also sending conflicting signals. On one hand, the United States cares deeply about protecting a global and open Internet. This was made clear in the 2018 U.S. National Cyber Strategy and the United States’ recent proposal to the U.N. General Assembly, co-signed by countries including Australia, Canada, France, Germany and Britain. On the other hand, American companies are selling surveillance technology that undermines this mission — contributing to the broader spread of digital authoritarianism that the United States claims to fight. (This also implicates allies such as Britain, whose companies have also sold surveillance technology to oppressive regimes.)

We won’t be able to remedy this situation until the United States updates its approach to exporting surveillance technology. Of course, this must be done carefully. But digital authoritarianism is spreading, and U.S. companies need to stop helping it.

No One Is Safe: How Saudi Arabia Makes Dissidents Disappear
By Ayman M. Mohyeldin

In many instances, the surveillance of Saudi dissidents began online. But the internet was at first a lifeline for millions of people in the region. During the Arab Spring of 2010–12, social media helped topple autocrats in Egypt, Tunisia, and Libya. Monarchs in a number of the Persian Gulf States began to fear the dissenters in their own countries, many of whom had aired their grievances or organized their protests online.

In Saudi Arabia, by contrast, the ruler at the time—King Abdullah—saw real value in social media, believing the web might actually serve to narrow the gap between the ruling family and its subjects. “In the beginning, the kingdom’s obsession with tracking social media was not to monitor dissidents or opponents, but rather to identify societal problems early on,” said a Western expat who lives in Saudi Arabia and advises the ruling elite and various ministries on matters of national security. “It was to give the kingdom a chance at identifying economic vulnerabilities and blind spots so it could intervene before that frustration exploded.”

During the early 2010s, the head of Abdullah’s royal court was Khaled al-Tuwaijry. According to various press accounts, he, in turn, relied on a young, ambitious law-school graduate named Saud al-Qahtani, who was tasked with assembling a team that would monitor all forms of media, with a special focus on cybersecurity. Like Assiri, al-Qahtani had been a member of the Saudi Air Force.

Over the years, Assiri and other government critics would learn that one of the popular chat rooms on the nascent web was actually a foil. Saudi cyber-operatives had allegedly set it up to entice others to join in and comment freely, only for those users to be tricked into revealing details that would disclose their identities. One such forum, several activists told me, was believed to have been created by al-Qahtani, who, early on, had instructed the monarchy to treat the internet as a secret, potent monitoring tool. (Al-Qahtani did not respond to requests for comment.)

Since then, al-Qahtani is believed to have shaped the country’s broader cybersecurity efforts. His online network—according to human rights monitors and computer-threat experts—has included Saudi computer sleuths and hackers poised to go after government critics at home and abroad. As first reported by Vice’s Motherboard, al-Qahtani worked closely with Hacking Team, an Italian surveillance company that sells intrusion resources and “offensive security” capabilities around the globe. Others have traced Saudi government ties to the Israeli surveillance company NSO, whose signature spyware, Pegasus, has played a role in the attempted entrapment of at least three dissidents interviewed for this report.

Inside the WhatsApp hack: how an Israeli technology was used to spy
By Mehul Srivastava

Developed and sold by the Herzlia-based NSO Group, which is part-owned by a UK-based private equity group called Novalpina Capital, Pegasus was designed to worm its way into phones such as Mr Rukundo’s and start transmitting the owner’s location, their encrypted chats, travel plans — and even the voices of people the owners met — to servers around the world.

Since 2012, NSO has devised various ways to deliver Pegasus to targeted phones — sometimes as a malicious link in a text message, or a redirected website that infected the device. But by May this year, the FT reported, NSO had developed a new method by weaponising a vulnerability in WhatsApp, used by 1.5bn people globally, to deliver Pegasus completely surreptitiously. The user did not even have to answer the phone; once delivered, the software instantly used flaws in the device’s operating system to turn it into a secret eavesdropping tool.

WhatsApp quickly closed the vulnerability and launched a six-month investigation into the abuse of its platforms. The probe, carried out in secrecy, makes apparent for the first time the extent — and nature — of the surveillance operations that NSO has enabled.

In recent days, the University of Toronto’s Citizen Lab, which studies digital surveillance around the world and is working in partnership with WhatsApp, started to notify journalists, human rights activists and other members of civil society — like Mr Rukundo — whose phones had been targeted using the spyware. It also provided help to defend themselves in the future.

NSO — which was valued at $1bn in a leveraged buyout backed by Novalpina in February — says its technology is sold only to carefully vetted customers and used to prevent terrorism and crime. NSO has said it respects human rights unequivocally, and it conducts a thorough evaluation of the potential for misuse of its products by clients, which includes a review of a country’s past human rights record and governance standards. The company believes the allegations of misuse of its products are based on “erroneous information”.

The NSO Group said in a statement: “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them. Our technology is not designed or licensed for use against human rights activists and journalists.”

But WhatsApp’s internal investigation undercuts the efficacy of such vetting. In the roughly two weeks before WhatsApp closed the vulnerability, at least 1,400 people around the world were targeted through missed calls on the platform, including 100 members of “civil society”, the company said in a statement on Tuesday.

This is “an unmistakable pattern of abuse”, the Facebook-owned business said. “There must be strong legal oversight of cyber weapons like the one used in this attack to ensure they are not used to violate individual rights and freedoms people deserve wherever they live. Human rights groups have documented a disturbing trend that such tools have been used to attack journalists and human rights defenders.”

Made in China, Exported to the World: The Surveillance State
By Paul Mozur, Jonah M. Kessel and Melissa Chan

Ecuador shows how technology built for China’s political system is now being applied — and sometimes abused — by other governments. Today, 18 countries — including Zimbabwe, Uzbekistan, Pakistan, Kenya, the United Arab Emirates and Germany — are using Chinese-made intelligent monitoring systems, and 36 have received training in topics like “public opinion guidance,” which is typically a euphemism for censorship, according to an October report from Freedom House, a pro-democracy research group.

With China’s surveillance know-how and equipment now flowing to the world, critics warn that it could help underpin a future of tech-driven authoritarianism, potentially leading to a loss of privacy on an industrial scale. Often described as public security systems, the technologies have darker potential uses as tools of political repression.

“They’re selling this as the future of governance; the future will be all about controlling the masses through technology,” Adrian Shahbaz, research director at Freedom House, said of China’s new tech exports.

Companies worldwide provide the components and code of dystopian digital surveillance, and democratic nations like Britain and the United States also have ways of watching their citizens. But China’s growing market dominance has changed things. Loans from Beijing have made surveillance technology available to governments that could not previously afford it, while China’s authoritarian system has diminished the transparency and accountability of its use.

Can We Escape Surveillance Culture?
By Kenan Malik

A man tries to avoid the cameras, covering his face by pulling up his fleece. He is stopped by the police and forced to have his photo taken. He is then fined £90 for ‘disorderly behaviour’. ‘What’s your suspicion?’ someone asks the police. ‘The fact that he’s walked past clearly masking his face from recognition,’ replies one of the plainclothes police operating the system.

If you want to protect your privacy, you must have something to hide. And if you actually do something to protect your privacy, well, that’s ‘disorderly behaviour’.

What Hong Kong’s Protestors Can Teach Us About the Future of Privacy
By Frederike Kaltheuner

Something odd happened in Hong Kong last week. Protestors against a controversial proposed extradition bill were afraid to use their metro cards. Instead of swiping their cards through the turnstiles of the city’s subway system, they lined up to buy single-journey tickets with cash, apparently worried about “leaving a paper trail” that could prove their presence at the demonstration.

The moment you are protesting against your government, a seamless public transit system can turn into a rich source of data for surveillance and crowd control. Today, you might be young, healthy, and (if you’re lucky!) live in a country that has universal health care. Tomorrow, social services might get cut, putting you in desperate need of private insurance. Through data brokers, that private insurance company could obtain information from the mood tracking app on your phone, your purchases at your online pharmacy, or your route to your regular therapy sessions. These are all types of data that are routinely tracked, sold, and shared today. Whether it’s due to austerity or a changing political climate, our digital doppelgängers might come back to haunt us. In fact, they are already haunting some.

As is often the case with technology, the future is already here; it’s just not equally distributed. As Sam Adler-Bell put it in The New Inquiry, “for the underclasses, privacy — in the form of access to ungovernable spaces — has never been on offer.” Today, old inequalities are reappearing in novel and unexpected forms. Facial recognition is a perfect example. For basically anyone who isn’t white or male, error rates are much higher. The hands-free convenience of paying with your face is only convenient if your face is actually being recognized. And the mass deployment of this tech is only invisible if those systems don’t confuse your face with that of a wanted suspect. Furthermore, for political dissidents, investigative journalists, and undocumented immigrants, facial recognition may already mean the end of anonymity in public spaces.

In Hong Kong Protests, Faces Become Weapons
By Paul Mozur

Hong Kong is at the bleeding edge of a significant change in the authorities’ ability to track dangerous criminals and legitimate political protesters alike — and in their targets’ ability to fight back. Across the border in China, the police often catch people with digital fingerprints gleaned using one of the world’s most invasive surveillance systems. The advent of facial-recognition technology and the rapid expansion of a vast network of cameras and other tracking tools has begun to increase those capabilities substantially.

The transformation strikes a strong chord in Hong Kong. The protests began over a proposed bill that would have allowed the city to extradite criminal suspects to mainland China, where the police and courts ultimately answer to the Communist Party.

The authorities in Hong Kong have outlined strict privacy controls for the use of facial recognition and the collection of other biometric data, although the extent of their efforts is unclear. They also appear to be using other technological methods for tracking protesters. Last month, a 22-year-old man was arrested for being the administrator of a Telegram group.

Protesters are responding. On Sunday, as another demonstration turned into a violent confrontation with the police, some of those involved shined laser pointers at police cameras and used spray paint to block the lenses of surveillance cameras in front of the Chinese government’s liaison office. Riot officers carried cameras on poles just behind the front lines as they fired tear gas and rubber bullets.

The protesters’ ire intensified after the police removed identification numbers from their uniforms, presumably to keep violent misconduct from being reported to city leaders. To some protesters, the move suggested the police were taking a cue from the mainland, where officers lack public accountability and often do not identify themselves.

Chinese Cyberattack Hits Telegram, App Used by Hong Kong Protesters
By Paul Mozur and Alexandra Stevenson

A network of computers in China bombarded Telegram, a secure messaging app used by many of the protesters, with a huge volume of traffic that disrupted service. The app’s founder, Pavel Durov, said the attack coincided with the Hong Kong protests, a phenomenon that Telegram had seen before.

“This case was not an exception,” he wrote.

The Hong Kong police made their own move to limit digital communications. On Tuesday night, as demonstrators gathered near Hong Kong’s legislative building, the authorities arrested the administrator of a Telegram chat group with 20,000 members, even though he was at his home miles from the protest site.

“I never thought that just speaking on the internet, just sharing information, could be regarded as a speech crime,” the chat leader, Ivan Ip, 22, said in an interview.

“I only slept four hours after I got out on bail,” he said. “I’m scared that they will show up again and arrest me again. This feeling of terror has been planted in my heart. My parents and 70-year-old grandma who live with me are also scared.”

How Artificial Intelligence Will Reshape the Global Order
By Nicholas Wright

As well as retroactively censoring speech, AI and big data will allow predictive control of potential dissenters. This will resemble Amazon or Google’s consumer targeting but will be much more effective, as authoritarian governments will be able to draw on data in ways that are not allowed in liberal democracies. Amazon and Google have access only to data from some accounts and devices; an AI designed for social control will draw data from the multiplicity of devices someone interacts with during their daily life. And even more important, authoritarian regimes will have no compunction about combining such data with information from tax returns, medical records, criminal records, sexual-health clinics, bank statements, genetic screenings, physical information (such as location, biometrics, and CCTV monitoring using facial recognition software), and information gleaned from family and friends. AI is as good as the data it has access to. Unfortunately, the quantity and quality of data available to governments on every citizen will prove excellent for training AI systems.

Even the mere existence of this kind of predictive control will help authoritarians. Self-censorship was perhaps the East German Stasi’s most important disciplinary mechanism. AI will make the tactic dramatically more effective. People will know that the omnipresent monitoring of their physical and digital activities will be used to predict undesired behavior, even actions they are merely contemplating. From a technical perspective, such predictions are no different from using AI health-care systems to predict diseases in seemingly healthy people before their symptoms show.

Facebook’s new rapid response team has a crucial task: Avoid fueling another genocide
By David Ingram

When tensions begin rising in a country, no matter where, it might fall to Birch’s team to give guidance to Facebook’s thousands of content reviewers on what kinds of posts to watch out for and take down — for example, after the Easter Sunday bombings in Sri Lanka targeting churches and hotels in April.

“We can turn those up and turn those down quickly, within the space of hours,” Birch said. “When there’s something happening on the ground and we have concern about tensions on the ground that could be bubbling up, then we can more aggressively downrank content that we may not otherwise.”
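
Facebook has not published how these controls work internally; a minimal hypothetical sketch of the “turn it up, turn it down” idea is a per-category demotion multiplier applied on top of an existing ranking score, which a response team could adjust within hours without retraining any classifier.

```python
# Hypothetical sketch: a crisis-response "dial" as a per-category demotion
# multiplier layered on top of whatever base ranking score already exists.
DEMOTION = {
    "borderline_hate": 1.0,     # 1.0 means no demotion
    "unverified_rumor": 1.0,
}

def ranked_score(base_score, category):
    return base_score * DEMOTION.get(category, 1.0)

post = {"base_score": 8.0, "category": "unverified_rumor"}
print(ranked_score(post["base_score"], post["category"]))   # 8.0

# During a fast-moving crisis the team turns the dial down for that category:
DEMOTION["unverified_rumor"] = 0.2
print(ranked_score(post["base_score"], post["category"]))   # 1.6 -- aggressively downranked
```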

Thorny questions await the team and Facebook. One of them is how to respond when the people using Facebook to stoke violence in a given country are elected politicians, military chiefs or other authorities — not just everyday users.

Facebook regularly takes down material from civilians or militant organizations, but it makes an exception for posts from governments, so-called “state actor” speech.

That had a tragic consequence in Myanmar, as Facebook failed to take down government-sponsored posts there that experts say contributed to violence. Only later did Facebook take down some accounts tied to the Myanmar military, making an exception to the company’s usual policy.

Facebook’s reluctance to fact-check officials from authoritarian governments who use Facebook to push propaganda remains a yawning issue that risks another Myanmar-type genocide playing out on the platform, said a former Facebook employee who worked on related issues at the company and spoke on condition of anonymity.

In March the company removed 200 accounts from Facebook and Instagram linked to a consultant for Philippine President Rodrigo Duterte, though Duterte — who has imposed a violent crackdown on drug users, alarming U.N. human rights advocates — still has a robust Facebook account with more than 4 million followers.

Brian Fishman, a Facebook policy director who works with the Strategic Response team, said the company was re-evaluating its policy with the goal of developing a clear rule to apply worldwide, though he said the company had no change to announce yet.

“You definitely want to set rules that you can apply as consistently as possible,” he said.

But he said the company still saw reasons not to censor governments. If Facebook shut down accounts linked to an authoritarian government, it might interfere with unrelated government services. Or, he said, some countries might retaliate against Facebook by shutting down or restricting internet services, hurting millions of users.

“We have to be very careful and very judicious,” he said, noting that Facebook’s power comes from its size and not from the United Nations or other legal authority.

Social Media Councils: A Better Way Forward, Window Dressing, or Global Speech Police?
By Corynne McSherry

We’re all in favor of finding ways to build more due process into platform censorship. That said, we have a lot of questions. Who determines council membership, and on what terms? What happens when members disagree? How can we ensure the council’s independence from the companies it’s intended to check? Who will pay the bills, keeping in mind that significant funding will be needed to ensure that it is not staffed only by the few organizations that can afford to participate? What standard will the councils follow to determine whether a given decision is appropriate? How do they decide which of the millions of decisions made get reviewed? How can they get the cultural fluency to understand the practices and vocabulary of every online community? Will their decisions be binding on the companies who participate, and if so, how will the decisions be enforced? A host of additional questions are raised in a recent document from the Internet and Jurisdiction Policy Network.

But our biggest concern is that social media councils will end up either legitimating a profoundly broken system (while doing too little to fix it) or becoming a kind of global speech police, setting standards for what is and is not allowed online whether or not that content is legal. We are hard-pressed to decide which is worse.

It’s One Thing For Trolls And Grandstanding Politicians To Get CDA 230 Wrong, But The Press Shouldn’t Help Them
By Mike Masnick

There’s an unfortunate belief among some internet trolls and grandstanding politicians that Section 230 of the Communications Decency Act requires platforms to be “neutral” and that any attempt to moderate content or to have any form of bias in a platform’s moderation focus somehow removes 230 protections. Unfortunately, it appears that many in the press are buying into this flat-out incorrect analysis of CDA 230. We first saw it last year, in Wired’s giant cover story about Facebook’s battles, in which it twice suggested that too much moderation might lose Facebook its CDA 230 protections:

But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

This is not just wrong, it’s literally backwards from reality. As we’ve pointed out, anyone who actually reads the law should know that it was written to encourage moderation. Section (b)(4) directly says that one of the policy goals of the law is “to remove disincentives for the development and utilization of blocking and filtering technologies.” And (more importantly), section (c)(2) makes it clear that Section 230’s intent was to encourage moderation by taking away liability for any moderation decisions:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…

In short: if a site decides to remove content that it believes is “objectionable” (including content it finds to be harassing), there is no liability for the platform even if the content blocked is “constitutionally protected.”

Indeed, this was the core point of CDA 230 and the key reason why Rep. Chris Cox wrote the law in the first place. As was detailed in Jeff Kosseff’s new book on the history of Section 230, Cox was spurred into action after reading about the awful ruling in the Stratton Oakmont v. Prodigy case, in which a judge decided that since Prodigy did some moderation of its forums, it was liable for any content that was left up. This was the opposite finding from another lawsuit, Cubby v. CompuServe, which found CompuServe not liable, since it didn’t do any moderation.

However, part of Prodigy’s pitch was that it was to be the more “family friendly” internet service compared to the anything goes nature of CompuServe. The ruling in the Stratton Oakmont case would have made that effectively impossible — and thus Section 230 was created explicitly to encourage different platforms to experiment with different models of moderation, so that there could be different platforms who chose to treat content differently.

There are enough issues to be concerned about regarding the internet and big platforms these days that having the media repeatedly misrepresent Section 230 of the CDA and suggest — falsely — that it’s a special gift to internet platforms doesn’t help matters at all. CDA 230 protects platforms that host user speech — including from any moderation choices they make. It does not require them to be neutral, and it does not require them to define themselves as a “platform” instead of a “publisher.” News organizations should know better and should stop repeating this myth.

Content bans won’t just eliminate “bad” speech online
By Index on Censorship

If we are to ensure that all our speech is protected, including speech that calls out others for engaging in hateful conduct, then social media companies’ policies and procedures need to be clear, accountable and non-partisan. Any decisions to limit content should be taken by, and tested by, human beings. Algorithms simply cannot parse the context and nuance sufficiently to distinguish, say, racist speech from anti-racist speech.

We need to tread carefully. While an individual who incites violence towards others should not (and does not) enjoy the protection of the law, on any platform or in any kind of media, the problem of those who advocate hate cannot be solved by simply banning them.

In the drive to stem the tide of hateful speech online, we should not rush to welcome an ever-widening definition of speech to be banned by social media.

This means we – as users – might have to tolerate conspiracy theories, the offensive and the idiotic, as long as they do not incite violence. That doesn’t mean we can’t challenge them. And we should.

But the ability to express contrary points of view, to call out racism, to demand retraction and to highlight obvious hypocrisy depends on the ability to freely share information.

The Coming Gentrification of YouTube
By David Auerbach

What many are asking of YouTube amounts to, “Please remove some of your harmful content, but only the unimportant stuff.” But lest we forget, YouTube is a profit-maximizing corporation, not an organ of representative democracy or the public trust. Insofar as YouTube responds to public concerns about its content, the company will be guided not by the political conscience of its critics, but by a desire to limit liability while protecting its bottom line.

I’ve experienced the unpleasant caprices of YouTube recommendations. (I used to work for Google, but never got anywhere near YouTube or their algorithms.) While watching a thoughtful talk on the limits of machine learning, YouTube automatically queued up “THE ARTIFICIAL INTELLIGENCE AGENDA EXPOSED” by David Icke, the British former professional soccer player turned full-time conspiracy theorist infamous for declaring that the Rothschilds were actually members of an alien lizard race that secretly runs the world. Icke describes how an unspecified “THEY” (possibly the lizards, or the Jews, or both) are getting youth addicted to technology so that they can later be connected to artificial intelligence and become AI.

The video I was watching had 2,500 views. Icke’s had 250,000.

Ironically, The New York Times boosted Icke’s profile last year while interviewing author Alice Walker. Walker raved about one of Icke’s books, saying “In Icke’s books there is the whole of existence, on this planet and several others, to think about.” The Times defended the piece by saying that Walker was “worthy of interviewing.”

It’s dismaying to see Icke’s ignorance and bigotry promoted anywhere, and yet an open, liberal society requires that he have the civic right to express himself. The question is, what responsibilities do platforms like YouTube and The New York Times have to limit his exposure?

Unlike the Times, YouTube has a softer method of moderation: it demonetizes videos so their creators can’t profit from them. YouTube will not place ads on, or pay creators for, videos that don’t meet a considerably higher bar of safe, inoffensive content. Such videos are also penalized in rankings and not recommended. Right-wing loudmouth Steven Crowder, subject of much controversy last week, has been demonetized. That Icke AI video has not, possibly because Icke does not talk about lizards or Jews in it.

Yet even if the Icke video is demonetized, it won’t draw any more attention to the AI video I was originally watching. It’s just not popular enough. And this is why crowd-sourced recommendations are dangerous in general. They tend to draw attention to the popular, the established, and the controversial. Yet asking YouTube to override the collective hive mind is placing social authority in the hands of a for-profit corporation, and when has that ever worked out well?

Russians are shunning state-controlled TV for YouTube
By The Economist

Russian pundits have long described politics as a battle between the television and the refrigerator (that is, between propaganda and economics). Now, the internet is weighing in.

According to the Levada Centre, an independent pollster, Russians’ trust in television has fallen by 30 percentage points since 2009, to below 50%. The number of people who trust internet-based information sources has tripled to nearly a quarter of the population. Older people still get most of their news from television, but most of those aged 18-24 rely on the internet, which remains relatively free.

YouTube in particular is eroding the state-television monopoly. It is now viewed by 82% of the Russian population aged 18-44. Channel One, Russia’s main television channel, reaches 83% of the same age group. Vloggers have overtaken some television anchors. Yuri Dud, a YouTube journalist who interviews politicians and celebrities such as Alexei Navalny, the opposition leader, gets 10m-20m views per video, much more than any television news programme. Even Dmitry Kiselev, the state television propagandist-in-chief, felt compelled to appear on Mr Dud’s show.

According to Agora, a human-rights watchdog, Russian prosecutors have initiated 1,295 criminal proceedings for online offences and handed out 143 sentences since 2015. The vast majority originated from VKontakte pages.

This heavy-handed approach has alienated young internet users.

Banning established platforms like YouTube or Google may be technically possible, but could be politically explosive. Last year the state regulator tried to block Telegram, a messaging service developed by Pavel Durov, for refusing the Russian security services access to encrypted messages. This inadvertently crashed lots of services, including hotel- and airline-booking systems which (like Telegram) relied on Amazon and Google servers. It also sparked some of the largest street protests in years.

Google Caves On Russian Censorship
By Mike Masnick

Over the last few years, Russia has passed a number of internet censorship laws, and there have been lots of questions about how Google and other tech giants would respond. A year ago, we noted that Facebook/Instagram had decided to cave in and that ratcheted up the pressure on Google.

It should be noted that Russia has been on Google’s case for a while, and the company had been resisting such pressure. Indeed, the company actually shut down its Russian office a few years back to try to protect itself (and its employees) from Russian legal threats.

But, apparently, something has changed:

The business news source Vedomosti is reporting that Google has struck a deal with Russian censors to continue operating in the country by deleting websites that are banned in Russia from its results. The government censorship agency Roskomnadzor maintains a registry of sites that may not be distributed on Russian territory, but Google is one of a few search engines that does not subscribe to that registry. However, the company regularly deletes links from its search results that Roskomnadzor has banned, sources within both Roskomnadzor and Google told Vedomosti.

The report notes that, previously, Roskomnadzor had just been fining Google for its failures, and the company had been simply paying the fines. Now, however, it will sign up to censor the official list of sites, which is large and constantly growing. Given what the company just went through with the whole China debacle, you would think it would be more thoughtful about this kind of thing.

Google Is Conducting a Secret “Performance Review” of Its Censored China Search Project
By Ryan Gallagher

Google previously launched a search engine in China in 2006, but pulled out of the country in 2010, citing concerns about Chinese government interference. At that time, Google co-founder Sergey Brin said the decision to stop operating search in the country was principally about “opposing censorship and speaking out for the freedom of political dissent.”

Dragonfly represented a dramatic reversal of that position. The search engine, which Google planned to launch as an app for Android and iOS devices, was designed to comply with strict censorship rules imposed by China’s ruling Communist Party regime, enabling surveillance of people’s searches while also blocking thousands of terms, such as “Nobel prize,” “human rights,” and “student protest.”

More than 60 human rights groups and 22 U.S. lawmakers wrote to Google criticizing the project. In February, Amnesty International met with Google to reiterate its concerns about the China plan. “The lack of transparency around the development of Dragonfly is very disturbing,” Anna Bacciarelli, an Amnesty researcher, told The Intercept earlier this month. “We continue to call on Google’s CEO Sundar Pichai to publicly confirm that it has dropped Dragonfly for good, not just ‘for now.’”

China social media: WeChat and the Surveillance State
By Stephen McDonell

In China pretty much everyone has WeChat. I don’t know a single person without it. Developed by tech giant Tencent, it is an incredible app. It’s convenient. It works. It’s fun. It was ahead of the game on the global stage and it has found its way into all corners of people’s existence.

It could deliver to the Communist Party a life map of pretty much everybody in this country, citizens and foreigners alike.

Capturing the face and voice image of everyone who was suspended for mentioning the Tiananmen crackdown anniversary in recent days would be considered very useful for those who want to monitor anyone who might potentially cause problems.

When I placed details of this entire process on Twitter, others were asking: why cave in to such a Big Brother intrusion on your privacy?

They’ve probably not lived in China.

It is hard to imagine a life here without it.

When you meet somebody in a work context they don’t give you a name card any more, they share their WeChat; if you play for a football team training details are on WeChat; children’s school arrangements, WeChat; Tinder-style dates, WeChat; movie tickets, WeChat; news stream, WeChat; restaurant locations, WeChat; paying for absolutely everything from a bowl of noodles to clothes to a dining room table… WeChat.

People wouldn’t be able to speak to their friends or family without it.

So the censors who can lock you out of WeChat hold real power over you.

The app – thought by Western intelligence agencies to be the least secure of its type in the world – has essentially got you over a barrel.

If you want to have a normal life in China, you had better not say anything controversial about the Communist Party and especially not about its leader, Xi Jinping.

This is China 2019.

Chinese police use app to spy on citizens’ smartphones
By Christian Shepherd and Yuan Yang

Chinese police are installing intrusive data-harvesting software on ordinary citizens’ smartphones during routine security interactions, even when those people are not suspected of any crime, new research shows.

The move suggests Chinese police are using highly invasive surveillance techniques, similar to those deployed in the restive western region of Xinjiang, in the rest of China.

The software, a smartphone application called MFSocket, provides access to image and audio files, location data, call logs, messages and the phone’s calendar and contacts, including those used in the messaging app Telegram, French security researcher Baptiste Robert said.

Many of the accounts involve people having their phones scanned when they go to the police to register after moving to a new city — a requirement in some places in China. Other checks occur when they apply for a new identity card, are stopped at security barriers, or are involved in other interactions with police that are considered routine in China.

In January, one internet user said on the popular review website Douban.com that the police had installed the app on the user’s handset, according to the device’s log. This occurred when the user was briefly detained by local authorities for sharing a news article from an outlet blocked in mainland China.

Edward Schwarck, a doctoral candidate studying Chinese public security at the University of Oxford, said the use of the MFSocket app showed that police were attempting to move towards “intelligence-led” policing — investigations designed to anticipate illegal acts before they happen.

“The end result is that the security state is becoming much more resilient. They are not just responding to threats any more but are pre-empting them,” said Mr Schwarck.

China Is Forcing Tourists to Install Text-Stealing Malware at its Border
By Joseph Cox

“[This app] provides yet another source of evidence showing how pervasive mass surveillance is being carried out in Xinjiang. We already know that Xinjiang residents—particularly Turkic Muslims—are subjected to round-the-clock and multidimensional surveillance in the region,” Maya Wang, China senior researcher at Human Rights Watch, said. “What you’ve found goes beyond that: it suggests that even foreigners are subjected to such mass, and unlawful surveillance.”

Once installed on an Android phone (“side-loaded” rather than downloaded from the Google Play Store, and granted the permissions it requests), BXAQ collects all of the phone’s calendar entries, phone contacts, call logs, and text messages and uploads them to a server, according to expert analysis. The malware also scans the phone to see which apps are installed, and extracts the subject’s usernames for some installed apps. (Update: after the publication of this piece, multiple antivirus firms updated their products to flag the app as malware.)

Included in the app’s code are hashes for over 73,000 different files the malware scans for. Ordinarily, it is difficult to determine what specific files these hashes relate to, but the reporting team and researchers managed to uncover the inputs of around 1,300 of them. This was done by searching for connected files on the file search engine VirusTotal. Citizen Lab identified the hashes in the VirusTotal database, and researchers from the Bochum team later downloaded some of the files from VirusTotal. The reporting team also found other copies online, and verified what sort of material the app was scanning for.

Many of the files that are scanned for contain clearly extremist content, such as the so-called Islamic State’s publication Rumiyah. But the app also scans for parts of the Quran, PDFs related to the Dalai Lama, and a music file from the Japanese metal band Unholy Grave (the band has a song called “Taiwan: Another China”).
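At its core, the scanning step described above is a hash lookup: compute a fingerprint of each file found on the device and check it against the embedded list of roughly 73,000 known hashes. Below is a minimal sketch of that mechanism, not BXAQ’s actual code; the hash algorithm, the storage path and the single example value are assumptions for illustration only.

```python
import hashlib
from pathlib import Path

# Stand-in for the ~73,000 hashes researchers found embedded in the app.
# The single entry here is a placeholder, not a real target hash.
TARGET_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

def file_fingerprint(path: Path) -> str:
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.md5()  # algorithm assumed for illustration
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list[Path]:
    """Return every readable file under `root` whose hash is on the target list."""
    matches = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if file_fingerprint(path) in TARGET_HASHES:
                matches.append(path)
        except OSError:
            continue  # unreadable file; skip it
    return matches

if __name__ == "__main__":
    for hit in scan("/sdcard"):  # hypothetical storage root
        print("flagged:", hit)
```

A hash list means the app never has to carry the banned files themselves, only their fingerprints, which is also why researchers had to work backwards, searching services like VirusTotal for files that produce matching hashes.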

One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority
By Paul Mozur

Law enforcement from the central province of Shaanxi, for example, aimed to acquire a smart camera system last year that “should support facial recognition to identify Uighur/non-Uighur attributes.”

Some police departments and technology companies described the practice as “minority identification,” though three of the people said that phrase was a euphemism for a tool that sought to identify Uighurs exclusively. Uighurs often look distinct from China’s majority Han population, more closely resembling people from Central Asia. Such differences make it easier for software to single them out.

For decades, democracies have had a near monopoly on cutting-edge technology. Today, a new generation of start-ups catering to Beijing’s authoritarian needs is beginning to set the tone for emerging technologies like artificial intelligence. Similar tools could automate biases based on skin color and ethnicity elsewhere.

“Take the most risky application of this technology, and chances are good someone is going to try it,” said Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”

Selling products with names like Fire Eye, Sky Eye and Dragonfly Eye, the start-ups promise to use A.I. to analyze footage from China’s surveillance cameras. The technology is not mature — in 2017 Yitu promoted a one-in-three success rate when the police responded to its alarms at a train station — and many of China’s cameras are not powerful enough for facial recognition software to work effectively.

Yet they help advance China’s architecture for social control. To make the algorithms work, the police have put together face-image databases for people with criminal records, mental illnesses, records of drug use, and those who petitioned the government over grievances, according to two of the people and procurement documents. A national database of criminals at large includes about 300,000 faces, while a list of people with a history of drug use in the city of Wenzhou totals 8,000 faces, they said.

One database generated by Yitu software and reviewed by The Times showed how the police in the city of Sanmenxia used software running on cameras to attempt to identify residents more than 500,000 times over about a month beginning in mid-February.

Included in the code alongside tags like “rec_gender” and “rec_sunglasses” was “rec_uygur,” which returned a 1 if the software believed it had found a Uighur. Within the half million identifications the cameras attempted to record, the software guessed it saw Uighurs 2,834 times. Images stored alongside the entry would allow the police to double check.

Yitu and its rivals have ambitions to expand overseas. Such a push could easily put ethnic profiling software in the hands of other governments, said Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology.

“I don’t think it’s overblown to treat this as an existential threat to democracy,” Mr. Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

Chinese company leaves Muslim-tracking facial recognition database exposed online
By Catalin Cimpanu

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

For each user, there was also a list of GPS coordinates, locations where that user had been seen.

The database also contained a list of “trackers” and associated GPS coordinates. Based on the company’s website, these trackers appear to be the locations of public cameras from where video had been captured and was being analyzed.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

Gevers told ZDNet that these coordinates were all located in China’s Xinjiang province, the home of China’s Uyghur Muslim minority population.

There are numerous reports of human rights abuses carried out by Chinese authorities in Xinjiang, such as forcing the Uyghur Muslim population to install spyware on their phones, or forcing some Uyghur Muslims into “re-education” camps that Uyghur Muslims living abroad have described as forced labor camps.

The database that Gevers found wasn’t just some dead server with old data. The researcher said that during the past 24 hours a stream of nearly 6.7 million GPS coordinates had been recorded, meaning the database was actively tracking Uyghur Muslims as they moved around.

When surveillance meets incompetence
By Devin Coldewey

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large-scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

China-style authoritarian rule advances even as democracy fights back
By Ian Bremmer

From Tiananmen Square to Soviet collapse to the fall of governments in the early days of the Arab Spring, many assumed advances in communications technology would make it impossible for autocrats to remain in charge. In a world where they could no longer control the flow of information within their borders and limit the ability of citizens to communicate with one another, how, many wondered, could autocrats maintain their grip?

Instead, governments have found ways to use new technologies to protect themselves. Syria’s civil war provides a compelling example. In the conflict’s early days, Russia provided President Bashar al-Assad with a few hundred data engineers and analysts to help the Syrian military sift through the texts and social media accounts of Syrian citizens to spot and arrest those most likely to challenge their government. This low-cost project proved extraordinarily effective in helping the Syrian government deprive opponents of allies.

China adds Washington Post, Guardian to ‘Great Firewall’ blacklist
By Gerry Shih

Websites of The Washington Post and the Guardian appear to now be blocked in China as the country’s government further tightens its so-called “Great Firewall” censorship apparatus while navigating a politically sensitive period.

Outlets such as Bloomberg, the New York Times, Reuters and the Wall Street Journal have been blocked for years. So have social media services such as Facebook and Twitter and all Google-owned services, including YouTube. Other popular services such as Dropbox, Slack and WhatsApp are also prohibited.

Revealed: how TikTok censors videos that do not please Beijing
By Alex Hern

TikTok, the popular Chinese-owned social network, instructs its moderators to censor videos that mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong, according to leaked documents detailing the site’s moderation guidelines.

The documents, revealed by the Guardian for the first time, lay out how ByteDance, the Beijing-headquartered technology company that owns TikTok, is advancing Chinese foreign policy aims abroad through the app.

The revelations come amid rising suspicion that discussion of the Hong Kong protests on TikTok is being censored for political reasons: a Washington Post report earlier this month noted that a search on the site for the city-state revealed “barely a hint of unrest in sight”.

The guidelines divide banned material into two categories: some content is marked as a “violation”, which sees it deleted from the site entirely, and can lead to a user being banned from the service. But lesser infringements are marked as “visible to self”, which leaves the content up but limits its distribution through TikTok’s algorithmically curated feed.

This latter enforcement technique means that it can be unclear to users whether they have posted infringing content, or whether their post simply has not been deemed compelling enough to be shared widely by the notoriously unpredictable algorithm.
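The two enforcement tiers described above, outright deletion versus “visible to self”, amount to a visibility check applied whenever content is served into someone else’s feed. Here is a minimal sketch of that logic; the field names and structure are hypothetical, since TikTok’s actual implementation is not public.

```python
from enum import Enum

class ModerationStatus(Enum):
    OK = "ok"                            # ranked normally
    VIOLATION = "violation"              # removed from the site entirely
    VISIBLE_TO_SELF = "visible_to_self"  # stays up, but only the author sees it

def can_appear_in_feed(post: dict, viewer_id: int) -> bool:
    """Decide whether a post is eligible to appear in a given viewer's feed."""
    if post["status"] is ModerationStatus.VIOLATION:
        return False                           # visible to no one
    if post["status"] is ModerationStatus.VISIBLE_TO_SELF:
        return viewer_id == post["author_id"]  # the author still sees their own post
    return True

# From the author's perspective, a "visible to self" post is indistinguishable
# from one the recommendation algorithm simply never picked up.
post = {"author_id": 42, "status": ModerationStatus.VISIBLE_TO_SELF}
print(can_appear_in_feed(post, viewer_id=42))  # True  (the author)
print(can_appear_in_feed(post, viewer_id=7))   # False (everyone else)
```

That asymmetry is what makes the second tier so opaque: nothing visibly separates a quietly suppressed post from a merely unpopular one.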

The bulk of the guidelines covering China are contained in a section governing “hate speech and religion”.

In every case, they are placed in a context designed to make the rules seem general purpose, rather than specific exceptions. A ban on criticism of China’s socialist system, for instance, comes under a general ban of “criticism/attack towards policies, social rules of any country, such as constitutional monarchy, monarchy, parliamentary system, separation of powers, socialism system, etc”.

Another ban covers “demonisation or distortion of local or other countries’ history such as May 1998 riots of Indonesia, Cambodian genocide, Tiananmen Square incidents”.

China’s robot censors crank up as Tiananmen anniversary nears
By Cate Cadell

Censors at Chinese internet companies say tools to detect and block content related to the 1989 crackdown have reached unprecedented levels of accuracy, aided by machine learning and voice and image recognition.

“We sometimes say that the artificial intelligence is a scalpel, and a human is a machete,” said one content screening employee at Beijing Bytedance Co Ltd, who asked not to be identified because they are not authorized to speak to media.

Two employees at the firm said censorship of the Tiananmen crackdown, along with other highly sensitive issues including Taiwan and Tibet, is now largely automated.

Posts that allude to dates, images and names associated with the protests are automatically rejected.
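The simplest layer of this kind of automated rejection is a pattern blocklist run over post text before publication; the systems described here reportedly add image and voice recognition on top of it. A toy sketch of the keyword layer follows, with a deliberately tiny, invented blocklist for illustration only.

```python
import re

# Invented, partial blocklist in the spirit of what the censors describe:
# dates, names and phrases that allude to the 1989 crackdown, including
# common workarounds such as "May 35th".
BLOCKED_PATTERNS = [
    r"六四",              # "six four", i.e. June 4
    r"tank\s*man",
    r"5\s*月\s*35\s*日",  # "May 35th"
]

def auto_reject(post_text: str) -> bool:
    """Return True if the post should be rejected before it is published."""
    return any(re.search(p, post_text, re.IGNORECASE) for p in BLOCKED_PATTERNS)

print(auto_reject("remembering tank man today"))  # True
print(auto_reject("nice weather in Beijing"))     # False
```

Keyword matching alone is easy to evade; the accuracy gains the censors describe come from layering machine learning and image and voice recognition on top of lists like this.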

“When I first began this kind of work four years ago there was opportunity to remove the images of Tiananmen, but now the artificial intelligence is very accurate,” one of the people said.

Four censors, working across Bytedance, Weibo Corp and Baidu Inc apps, said they censor between 5,000 and 10,000 pieces of information a day, or five to seven pieces a minute, most of which they said were pornographic or violent content.

Despite advances in AI censorship, current-day tourist snaps in the square are sometimes unintentionally blocked, one of the censors said.

In the lead-up to this year’s Tiananmen Square anniversary, censorship on social media has targeted LGBT groups, labor and environment activists and NGOs, they say.

Upgrades to censorship tech have been urged on by new policies introduced by the Cyberspace Administration of China (CAC). The group was set up – and officially led – by President Xi Jinping, whose tenure has been defined by increasingly strict ideological control of the internet.

The CAC did not respond to a request for comment.

Last November, the CAC introduced new rules aimed at quashing dissent online in China, where “falsifying the history of the Communist Party” on the internet is a punishable offence for both platforms and individuals.

‘If I disappear’: Chinese students make farewell messages amid crackdowns over labor activism
By Gerry Shih

Over the past eight months, China’s ruling party has gone to extraordinary lengths to shut down the small club of students at the country’s top university. Peking University’s young Marxists drew the government’s ire after they campaigned for workers’ rights and openly criticized social inequality and corruption in China.

That alone was provocative. In recent years, China’s leaders have been highly sensitive to rumblings of labor unrest as the sputtering economy lays bare the divides between rich and poor — fissures that were formed, and mostly overlooked, during decades of white-hot growth.

But the source of the dissent carried an extra sting for the government. Peking University, after all, educates China’s best and brightest, the top 0.1 percent of the country’s high school graduates. And its rebellious young Marxists were doing something particularly embarrassing: They were standing up for disenfranchised workers against the state.

They were, in other words, emulating the early Communist Party itself.

“The Communist Party knows there is no greater threat than a movement that links students with the lower class,” Hu said by telephone. “They’re looking at a reflection of their early revolutionary selves.”

Perhaps sensing the potential for unrest, the Communist Party in October appointed Qiu Shuiping, a former head of the Beijing branch of the Ministry of State Security — the feared foreign and domestic spy agency — to be Peking University’s new top official.

But the more authorities clamped down, the more the students defied them by airing what was happening on Twitter and the hosting service GitHub, services beyond the reach of Chinese censors.

Twitter Takes Down Accounts of China Dissidents Ahead of Tiananmen Anniversary
By Paul Mozur

Within China, Twitter users have faced escalating pressures. At the end of 2018, China’s Ministry of Public Security began to target Chinese Twitter users. Although Twitter is blocked, many use virtual-private network software that enables access.

In a campaign carried out across the country and coordinated by a division known as the internet police, local officers detained Twitter users and forced them to delete their tweets, which often included years of online discussion, and then the accounts themselves. The campaign is continuing, according to human rights groups.

Chinese censors go old school to clamp down on Twitter: A knock on the door
By Gerry Shih

The 50-year-old software engineer was tapping away at his computer in November when state security officials filed into his office on mainland China.

They had an unusual — and nonnegotiable — request.

Delete these tweets, they said.

The agents handed over a printout of 60 posts the engineer had fired off to his 48,000 followers. The topics included U.S.-China trade relations and the plight of underground Christians in his coastal province in southeast China.

When the engineer did not comply after 24 hours, he discovered that someone had hacked into his Twitter account and deleted its entire history of 11,000 tweets.

“If the authorities hack you, what can you do?” said the engineer, who spoke on the condition of anonymity for fear of landing in deeper trouble with authorities. “I felt completely drained.”

Twitter Helped Chinese Government Promote Disinformation on Repression of Uighurs
By Ryan Gallagher

Twitter’s promotion of Chinese government propaganda had appeared to contradict its own policies, which state that advertising on the platform must be “honest.” The advertisements also undermined statements from Twitter CEO Jack Dorsey, who told the Senate Intelligence Committee last year that the company was working to combat “propaganda through bots and human coordination [and] misinformation campaigns.”

Like many Western technology companies, Twitter has a complex relationship with China. The social media platform is blocked in the country and cannot be accessed there without the use of censorship circumvention technologies, such as a virtual private network or proxy service. At the same time, however, Twitter generates a lot of advertising revenue in China and has a growing presence in the country.

In July, Twitter’s director in China reportedly stated that the company’s team there had tripled in the last year and was the company’s fastest growing division. In May, the social media giant held a “Twitter for Marketers” conference in Beijing. Meanwhile, Twitter was criticized for purging Chinese dissidents’ accounts on the platform – which it claimed was a mistake – and has also been the subject of a protest campaign, launched by the Chinese artist Badiucao, after it refused to publish a “hashflag” symbol to commemorate the 30th anniversary of the Tiananmen Square massacre.

Poon, the Amnesty researcher, said police in China have in recent months increasingly targeted human rights advocates in the country who are active on Twitter, forcing them to delete their accounts or remove specific posts that are critical of the government. These cases have been reported to Twitter, according to Poon, but the company has not taken any action.

“Twitter has allowed the Chinese government to advertise its propaganda while turning a deaf ear on those who have been persecuted by the Chinese regime,” Poon said. “We need to hear how Twitter can justify that.”

New Site Exposes How Apple Censors Apps in China
By Ryan Gallagher

In late 2017, Apple admitted to U.S. senators that it had removed from its app store in China more than 600 “virtual private network” apps that allow users to evade censorship and online spying. But the company never disclosed which specific apps it removed — nor did it reveal other services it had pulled from its app store at the behest of China’s authoritarian government.

In addition to the hundreds of VPN apps, Apple is currently preventing its users in China from downloading apps from news organizations, including the New York Times, Radio Free Asia, Tibetan News, and Voice of Tibet. It is also blocking censorship circumvention tools like Tor and Psiphon; Google’s search app and Google Earth; an app called Bitter Winter, which provides information about human rights and religious freedoms in China; and an app operated by the Central Tibetan Administration, which provides information about Tibetan human rights and social issues.

The Huawei dilemma
By Isabel Hilton

In western capitals, Huawei channels profound anxieties about the motives, strategies and ambitions of the Chinese Communist Party, the Chinese state and the companies it controls. Beijing insists that the suspicions that it has built up Huawei to usurp western technological monopolies—and that its telecoms equipment exposes customers to risks of espionage or future sabotage—are unfounded. Huawei’s supporters, including at the highest levels of the Chinese government, argue that such charges betray a declining power’s fear of a rising power’s innovative energy. Huawei itself, meanwhile, insists that it is simply a private company owned by its employees, unrelated to government or Party. A long list of intelligence agencies and investigative reports have found that claim less than credible.

While Huawei continues to insist it would never bow to demands for surveillance from Beijing’s intelligence services—despite two laws that require any Chinese individual or entity to comply with such demands on request—the response of the west to the firm illustrates the dilemma democracies face in their dealings with China. How to deal with what the European Commission describes as an economic partner that is simultaneously a strategic competitor, and one that plays by very different rules? Australia, long a hardliner on Huawei, has banned the company from its 5G networks, as has the United States. The US continues to pressure other governments, including Germany and the UK, to follow suit.

For the UK the Huawei dilemma offers a foretaste of the difficulties of navigating the clash between the world’s two biggest economies outside the EU. Despite the complexities of their industrial interdependence, the two giants are locked into a deepening confrontation that risks forcing others into unappealing compromises between prosperity and security. We are ill-prepared and ill-equipped to cope with that horrible choice.

The fight to control Africa’s digital revolution
By David Pilling

Consumers in advanced economies are only now waking up to the dangers posed by technology to their privacy and freedom. In Africa, companies are still at the stage of what Kenyan writer Nanjala Nyabola calls “a mass data sweep” in which information about an expanding consumer class is being busily devoured.

Even governments may be vulnerable. Technicians working at the African Union headquarters in Addis Ababa in 2017 noticed that peak data usage in the building occurred every night between midnight and 2am. A report in Le Monde, vehemently denied by Beijing, said data from the heavily bugged headquarters — a gift from the Chinese government — was being downloaded to Shanghai every night.

Many African countries are now almost wholly reliant on Chinese companies, including Huawei, for their digital services. Transsion, a Shenzhen-based handset maker, sells more phones in Africa than any other company. It has even begun manufacturing in Ethiopia.

Many Chinese companies, including ZTE and Hikvision, provide the surveillance technology used by African governments to monitor — or spy on — their own populations. CloudWalk Technology, a Guangzhou start-up, last year signed a deal with the government of Zimbabwe to provide a mass facial recognition programme. Zimbabwe will send data on millions of its citizens, captured by CCTV cameras, to the Chinese company, which hopes to improve technology that still struggles to distinguish between black faces.

“Using technology as a substitute for trust creates this black box,” says Ms Nyabola. “But most of us don’t understand how these systems are built. So what comes out is just chaos.”

Governments in Africa have a massive opportunity to use the digital revolution to improve the lives of their citizens. Too many are using it against them.

Uh-oh: Silicon Valley is building a Chinese-style social credit system
By Mike Elgan

The New York State Department of Financial Services announced earlier this year that life insurance companies can base premiums on what they find in your social media posts. That Instagram pic showing you teasing a grizzly bear at Yellowstone with a martini in one hand, a bucket of cheese fries in the other, and a cigarette in your mouth could cost you. On the other hand, a Facebook post showing you doing yoga might save you money. (Insurance companies have to demonstrate that the social media evidence points to risk, and premiums can’t be based on discrimination of any kind—they can’t use social posts to alter premiums based on race or disability, for example.)

It’s now easy to get banned by Uber, too. Whenever you get out of the car after an Uber ride, the app invites you to rate the driver. What many passengers don’t know is that the driver now also gets an invitation to rate you. Under a new policy announced in May: If your average rating is “significantly below average,” Uber will ban you from the service.

Nobody likes antisocial, violent, rude, unhealthy, reckless, selfish, or deadbeat behavior. What’s wrong with using new technology to encourage everyone to behave?

The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.

Social credit systems are an end-run around the pesky complications of the legal system. Unlike China’s government policy, the social credit system emerging in the U.S. is enforced by private companies. If the public objects to how these rules are enforced, it can’t elect new rule-makers.

An increasing number of societal “privileges” related to transportation, accommodations, communications, and the rates we pay for services (like insurance) are either controlled by technology companies or affected by how we use technology services. And Silicon Valley’s rules for being allowed to use their services are getting stricter.

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

China’s Potential New Trade Weapon: Corporate Social Credits
By Yoko Kubota

While Beijing’s better-known plans for a social-credit system for individuals have stirred privacy concerns, a parallel effort to monitor corporate behavior would similarly consolidate data on credit ratings and other characteristics, collected by various central and local government agencies, into one central database, according to China’s State Council. The system is set to fully start next year.

An algorithm would then determine to what degree companies are complying with the country’s various laws and regulations. In some cases, companies could be punished by losing access to preferential policies or facing stricter levels of administrative punishment, a document from the State Administration for Market Regulation showed. Analysts said that other punishments could include denial of access to land purchases, certain loans and procurement bidding.

China Squeezes Hong Kong’s Corporations as Part of Its Clampdown
By Michael Schuman

The consequences for Hong Kong are potentially dire. The city has thrived as Asia’s premier financial center and a favored destination for global companies because of its strong rule of law and trustworthy administration—crucial elements for doing business that are sorely lacking in many other parts of the region, most of all China itself. But Beijing’s persistent efforts to bring the territory under its political thumb run the risk of undermining confidence in the “one country, two systems” formula that governed London’s 1997 handover of the city to China and that ensures Hong Kong a high degree of autonomy. Without it, Hong Kong would not be Hong Kong, and the pillars supporting its economy and society would crumble.

More than that, Beijing’s pressure on Hong Kong companies could easily go global. If CCP cadres use their economic leverage over Hong Kong executives to stifle dissent against China, what’s stopping them from doing the same to American, European, or Japanese managers worldwide? Here we find what may be the darkest side of the integration of authoritarian China into the global economy. Companies from Starbucks to Apple have become heavily reliant on Chinese consumers to drive revenue growth, and it is not unreasonable to think that Beijing could capitalize on those business interests to impose its political views on the world.

To an extent, Beijing already has. It’s become almost routine for Hollywood producers to scrub scripts of anything that might offend thin-skinned Chinese censors and prevent a film from showing in lucrative Chinese theaters. Companies that accidentally tread on Chinese political sensitivities—by, for instance, including Taiwan among countries on their websites (Beijing considers the island part of China)—bring its ire upon them. Amid the Hong Kong scrape, Versace and Coach had to issue groveling apologies after angry Chinese perceived that they were listing the city as independent from China on their products.

The fracturing of the global economic consensus
By Rana Foroohar

CEOs agreed that we will not return to the open markets of the 1990s. They see the US-China trade conflict as the beginning of a clash of civilisations that will last for decades and divide the world. Beijing’s state-run model was the object of both envy and scepticism. Many western CEOs expressed the former, contrasting China’s long view with the pressures they faced due to quarterly earnings reports and increasing pressure from activist shareholders.

But some executives from developing countries worried about the price they would pay for dependence on mercantilist Beijing. CEOs from Asia were split. Some felt China’s increasingly repressive surveillance state would prove too brittle, while others believed that its Belt and Road infrastructure programme would be the foundation of an entirely new and benign order benefiting east and west alike.

Nearly everyone agreed on the need for deeper understanding of China. As one participant put it, “we need to move from Cartesian to Confucian thinking”. But more than a few were betting that companies rather than countries would lead the new order — in particular, platform giants that have more scale and power than most nations. They could start to leverage their advantages in ways that mimic governments, taking as their “citizens” a younger generation of digital natives who have lost faith in traditional institutions.

One participant, pointing out that liberal democratic governments simply can’t move fast enough to keep pace with technology, wondered whether “technology platforms might be the new Westphalian states”. In the middle of this session, the person sitting next to me passed me a slide on the “geopolitics of platforms” showing a regional breakdown of the equity market share of tech platforms — 70 per cent in the US, 27 per cent in Asia, and 3 per cent in Europe. Looked at in this way, perhaps the US still has more power relative to China than one might think. But as another participant put it, “countries are only relevant if they can tax companies”.

Platforms, of course, are not easy to tax. This is one of the issues that public policymakers throughout the developed world are wrestling with. It is also one that underscores the inexorable rise of corporate power over the past 40-odd years.

As one participant replied when I asked if he thought Facebook chief operating officer Sheryl Sandberg could still run for US president one day: “Why should she? She’s already leading Facebook.”

Facebook has declared sovereignty
By Molly Roberts

Facebook’s empowering of an independent group to rule on its most controversial content moderation may be in part a way to avoid responsibility. Next time an Alex Jones gets an eviction notice, Zuckerberg won’t have to go on a podcast attempting to explain his decision and accidentally end up defending Holocaust deniers. Instead, he can simply say “wasn’t me.”

But that doesn’t take away from a striking shift: A company that once protested that it was merely a platform and not a publisher is now acknowledging that its role in society is so outsize, and its decisions about who can say what so consequential, that it must establish a check on its own dominance.

Call it a court or call it, as Facebook now does, an oversight board, this company, by adopting a structure of government, is essentially admitting it has some of the powers of a government. The question is what that means for the rest of us.

For one thing, Facebook is not governing a single people. It is affecting the lives of a whole series of people in nations throughout the world, and those people have drastically disparate conceptions of appropriate public discourse. Facebook’s draft charter says it hopes to enshrine the values of “voice, safety, equity, dignity, equality and privacy.” It’s hard to argue with those. But what they mean to Zuckerberg is likely not what they mean to an activist in India, or a shop-owner in Britain, or anyone else anywhere else.

Facebook’s decisions can fundamentally alter the speech ecosystem in a nation. The company does not only end up governing individuals; it ends up governing governments, too. The norms Facebook or its court choose for their pseudo-constitution will apply everywhere, and though the company will strive for sensitivity to local context and concerns, those norms will affect how the whole world talks to one another.

That’s a lot of control, as Facebook has implicitly conceded by creating this court. But the court alone cannot close the chasm of accountability that renders Facebook’s preeminence so unsettling. Democracy, at least in theory, allows us to change things we do not like. We can vote out legislators who pass policy we disagree with, or who fail to pass policy at all. We cannot vote out Facebook. We can only quit it.

Launching a Global Currency Is a Bold, Bad Move for Facebook
By Matt Stoller

Enabling an open flow of money across all borders is a political choice best made by governments. And openness isn’t always good. For instance, most nations, especially the United States, use economic sanctions to bar individuals, countries or companies from using our financial system in ways that harm our interests. Sanctions enforcement flows through the banking system — if you can’t bank in dollars, you can’t use dollars. With the success of a private parallel currency, government sanctions could lose their bite. Should Facebook and a supermajority of venture capitalists and tech executives really be deciding whether North Korean sanctions can succeed? Of course not.

A permissionless currency system based on a consensus of large private actors across open protocols sounds nice, but it’s not democracy. Today, American bank regulators and central bankers are hired and fired by publicly elected leaders. Libra payments regulators would be hired and fired by a self-selected council of corporations. There are ways to characterize such a system, but democratic is not one of them.

A Brief History of How Your Privacy Was Stolen
By Roger McNamee

These leaders of Web 2.0 were young entrepreneurs with a different value system than those of us who came of age in the 1960s and 70s. They left behind the hippie libertarianism of Steve Jobs for an aggressive version that was more in line with Ayn Rand. Mr. Thiel, the PayPal co-founder who made the first outside investment in Facebook, wrote a 2014 essay in The Wall Street Journal titled, “Competition Is For Losers.” The subtitle encapsulated his advice to entrepreneurs: “If you want to create and capture lasting value, look to build a monopoly.”

They pursued monopolies with a passion, coining terms like “blitzscaling” to describe a growth philosophy that sought to eliminate all forms of friction in pursuit of more customers. Eventually, they transformed capitalism.

LinkedIn and Facebook took off first, in the mid-2000s, followed by Zynga, Twitter and others. Their services were mostly free, supported by advertising and in-app purchases.

These global apps transformed the relationship of technology to users, taking over far more of our everyday life — while gathering vast amounts of personal data. The early threats to privacy — identity and financial theft — were replaced by a greater threat few people recognized: business models based on surveillance and manipulation.

The new danger was pioneered by the last great Web 1.0 company, Google. As described by the Harvard professor Shoshana Zuboff in her book, “The Age of Surveillance Capitalism,” sometime in 2002, engineers at Google discovered that user data generated by searches could also be used to predict behavior beyond what purchases a visitor to the site intended to make. They realized that much more data would lead to much better behavioral predictions. They embraced surveillance and invented a market for behavioral predictions.

They created Gmail, an email product that connected identity to purchase intent, but also shredded traditional notions of privacy. Machine reading of Gmail messages enabled Google to gather valuable insights about users’ current and future behavior.

Google Maps gathered user location and movements. Soon thereafter, Google sent out a fleet of cars to photograph every building on every street, a product called Street View, and took pictures from satellites for a product they called Google Earth. Other products enabled Google to track users as they made their way around the web. They converted human experience into data and claimed ownership of it.

And Google learned to reach beyond its own products. The company would attempt to acquire all available personal data — in public and private spaces across the web — and combine it with data gathered from its own products to construct a data avatar of every digital consumer.

Data from third parties like banks, credit card processors, cellular carriers and online apps — along with data from web tracking, and data from surveillance products like Google Assistant, Street View and Sidewalk Labs — became part of user profiles. Our digital avatars are used to predict our behavior, a valuable commodity that is then sold to the highest bidder.

Thanks to convenient services — and disingenuous framing of the trade-offs — Google made previously unimaginable ideas, like scanning private messages and documents, acceptable to millions of people.

Having started Facebook with relatively strong privacy controls, Mark Zuckerberg adopted Google’s monetization strategy, which required systematic privacy invasions. In 2010, Mr. Zuckerberg declared that Facebook users no longer had an expectation of privacy.

Because of the nature of Facebook’s platform, it was able to capture emotional signals of users that were not available to Google, and in 2014 it followed Google’s lead by incorporating data from users’ browsing history and other sources to make its behavioral predictions more accurate and valuable.

This is what Ms. Zuboff calls “surveillance capitalism.” Surveillance results in highly accurate user information, something marketers crave. On the flip side, consumers have access to minimal information, primarily what platforms allow.

The compelling economics of surveillance capitalism has attracted new players, including Amazon, Microsoft, IBM, telecom carriers and auto manufacturers. The personal data they collect enables filter bubbles, recommendation engines and other techniques to nudge consumers toward desired actions — like buying products — that increase the value of behavioral predictions.

Platforms are under no obligation to protect user privacy. They are free to directly monetize the information they gather by selling it to the highest bidder. For example, platforms that track user mouse movements over time could be the first to notice symptoms of a neurological disorder like Parkinson’s disease — and this information could be sold to an insurance company. (And that company might then raise rates or deny coverage to a customer before he is even aware of his symptoms.)

Do You Know What You’ve Given Up?
By James Bennet

Many of these trade-offs were clearly worthwhile. But now the stakes are rising and the choices are growing more fraught. Is it O.K., for example, for an insurance company to ask you to wear a tracker to monitor whether you’re getting enough exercise, and set your rates accordingly? Would it concern you if police detectives felt free to collect your DNA from a discarded coffee cup, and to share your genetic code? What if your employer demanded access to all your digital activity, so that it could run that data through an algorithm to judge whether you’re trustworthy?

These sorts of things are already happening in the United States. Polling suggests that public anxiety about privacy is growing, as data breaches at companies like Facebook and Equifax have revealed how much information we’ve already traded away — and how vulnerable we can find ourselves when it’s exposed. Following the example of the European Union, which toughened its privacy regulations last year, officials in city halls, state capitals and Washington are considering new rules to protect privacy. Industry leaders are scrambling to influence that debate, and to rewrite their own rules.

The Super ‘Transparent’ Pai FCC Is Still Trying To Hide Details On Those Fake Net Neutrality Comments
By Karl Bode

We’ve long discussed how the Pai FCC’s net neutrality repeal was plagued with millions of fraudulent comments, many of which were submitted by a bot pulling names from a hacked database of some kind. Millions of ordinary folks (like myself) had their identities used to support Pai’s unpopular plan, as did several Senators. Numerous journalists have submitted FOIA requests for more data (server logs, IP addresses, API data, anything) that might indicate who was behind the fraudulent comments, who may have bankrolled them, and what the Pai FCC knew about it.

But the Pai FCC has repeatedly tried to tap dance around FOIA requests, leading to several journalists (including those at the New York Times and Buzzfeed) suing the FCC. Despite the Times’ lawyers’ best efforts to work with the FCC to tailor the nature of their requests over a period of months, the agency continues to hide behind FOIA exemptions that don’t really apply here: namely FOIA exemption 6 (related to protecting privacy) and 7E (related to protecting agency security and law enforcement activity).

In court filings made last week, the FCC also reiterated its claim that the primary reason it won’t release more data is because it’s just super concerned about user privacy:

“If the FCC is compelled to disclose an individual’s IP address, operating system and version, browser platform and version, and language settings, and that information is linked to the individual’s publicly-available name and postal address, that disclosure would result in clearly unwarranted invasions of personal privacy,” the FCC argues in papers filed late last week with U.S. District Court Judge Lorna Schofield in the Southern District of New York.

To be clear, this is the same FCC that did absolutely nothing to prevent or address the fraud, then actively blocked law enforcement inquiries into this issue. In other words, this FCC has had numerous opportunities to cooperate with law enforcement by providing confidential data, and has refused to do so. It’s also the same FCC that has done absolutely nothing about countless privacy scandals in the telecom sector, suggesting this sudden breathless concern for privacy may not be particularly authentic.

Thieves of Experience: How Google and Facebook Corrupted Capitalism
By Nicholas Carr

In pulling off its data grab, Google also benefited from the terrorist attacks of September 11, 2001. As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations. The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,” Zuboff writes.

Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency and the Central Intelligence Agency. But they also benefited indirectly. Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public. One of the unintended consequences of this uniquely distressing moment in American history, Zuboff observes, was that “the fledgling practices of surveillance capitalism were allowed to root and grow with little regulatory or legislative challenge.” Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.

What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not. “Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses,” explained Supreme Court Justice William O. Douglas in a 1967 decision.

Those who wrote the Bill of Rights believed that every individual needs both to communicate with others and to keep his affairs to himself. That dual aspect of privacy means that the individual should have the freedom to select for himself the time and circumstances when he will share his secrets with others and decide the extent of that sharing.

Google and other internet firms usurp this essential freedom. “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed. […] Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.

Tech Giants Google, Facebook and Amazon Intensify Antitrust Debate
By Jacob M. Schlesinger, Brent Kendall and John D. McKinnon

Big Tech has been the catalyst for the antitrust debate. These companies are central to the American economy and society in a way unimaginable 20 years ago, and there is growing public alarm over what the firms are doing to an array of markets, to national discourse and to privacy.

The most intense invectives have been directed at Google, Amazon, Facebook, and, to a lesser extent, Apple, lumped together as GAFA by their critics.

Monopolists are usually charged with using their clout to extract higher prices. Some of these behemoths give away many of their services free. Instead of direct financial harm to consumers, critics argue the companies use their market power to steer business to their own operations, weakening competition and sucking up profits in retailing, music, advertising and other industries, while squashing innovation. Consumers might also find stronger privacy protections if companies competed over that issue.

Tech companies generally have said that they believe they operate in dynamic and highly competitive markets and don’t believe they are illegal monopolies. Amazon founder and CEO Jeff Bezos devoted much of his 2019 shareholder letter to underscoring the highly competitive nature of many aspects of Amazon’s business.

In response to a question at a congressional hearing last year about whether Facebook is a monopoly, founder Mark Zuckerberg said, “It certainly doesn’t feel like that to me.” Google has contended during its battles over alleged anticompetitive behavior in the European Union that its products and services promote choice and competition. Apple, in response to charges that its App Store is anticompetitive, has cited the security benefits its restrictions and rules on app developers provide to consumers.

Silicon Valley’s giants look more entrenched than ever before
By The Economist

Compared with the dotcom bubble, the industry is more concentrated today: Microsoft, Amazon, Apple, Alphabet (Google’s parent) and Facebook represent half of its market capitalisation. The prevailing concern is not that tech firms are too flimsy to justify their valuations, but that their position is too powerful.

The lofty prices for the big five rest on strong fundamentals. In 2010, they made 4% of the pre-tax profits of non-financial firms in America; that figure is now 12%. Their valuations imply that investors expect earnings to grow fast. They have good reason to be bullish, because today’s giants are protected by high barriers to entry.

One element of this is that the big tech firms are spending heavily on innovation to try to ensure they remain at the cutting edge. In 2010 the big five tech companies accounted for 10% of the S&P 500’s total spending on research and development. Today, their share is 30%.

The big tech firms have also been keen to gobble up potential rivals. When Facebook was young, it rejected myriad acquisition offers, but it is now a predator, not prey, paying $19bn for WhatsApp in 2014. Since 2010, the big five have spent a net $100bn in cash (and more in stock) to buy would-be rivals. Partly as a result, the number of listed American firms worth at least $1bn that produce software or hardware has been flat since 2000.

The Tech Industry Is Dead! Long Live the Tech Monopolies!
By David Auerbach

The designation of “Communication Services” recognizes that the two internet titans, Alphabet and Facebook, are fundamentally about the same things as Verizon and Disney—but those things aren’t really “social communication and information,” as is claimed in the marketing language. Rather, they are subscriptions and advertising. “Social communication” is a euphemism for platforms that provide a medium for communication while inducing forms of dopaminergic behavioral dependency and encouraging their customers to evangelize the platform. “Information” is an overly vague term for “information that people and institutions are willing to pay for,” whether it’s a Netflix subscription or, for advertisers, a customer conversion.

The difficulty posed by Communication Services is that these large, hypernetworked companies are volatile, not just in their stock valuations but in their supposed utility. The stagnant valuations of AT&T and Verizon reflect the well-defined nature of their networks. But the valuation of Netflix’s content-delivery network is far more speculative, and for all of Facebook’s success we are even less certain of what the demand will be for Facebook’s services (to users or to advertisers) even a few years from now. Alphabet’s advertising network is more distributed than Facebook’s, yet it also depends strongly on the centralized platform of YouTube.

The fortunes of these companies all depend on their ability to grow their networks to orders of magnitude that didn’t exist 20 years ago. From the macro perspective, YouTube, Netflix, Facebook, and Verizon Wireless are all different forms of the same thing: not technology or media, but networks. You may choose between an Android and an iPhone because those are conventional products, but you do not choose between Facebook, Google, and Netflix. You become part of their networks even if you don’t use them. As these hypernetworks have become closer to monopoly utilities than overgrown startups, they have become the epitome of Henry Adams’ spun-up dynamo, which continues expending all the energy it has because to stop would be to die. Some days it sputters, and one day it may stop.

The problem with Silicon Valley’s obsession with blitzscaling growth
By Tim O’Reilly

Microsoft was founded in 1975, and its operating systems—first MS-DOS, and then Windows—became the platform for a burgeoning personal computer industry, supporting hundreds of PC hardware companies and thousands of software companies. Yet one by one, the most lucrative application categories—word processing, spreadsheets, databases, presentation software—came to be dominated by Microsoft itself.

One by one, the once-promising companies of the PC era—MicroPro, Ashton-Tate, Lotus, Borland—went bankrupt or were acquired at bargain-basement prices. Developers, no longer able to see opportunity in the personal computer, shifted their attention to the internet and to open-source projects like Linux, Apache, and Mozilla. Having destroyed all its commercial competition, Microsoft sowed the dragon’s teeth, raising up a new generation of developers who gave away their work for free, and who enabled the creation of new kinds of business models outside Microsoft’s closed domain.

The government also took notice. When Microsoft moved to crush Netscape, the darling of the new internet industry, by shipping a free browser as part of its operating system, it had gone too far. In 1994, Microsoft was sued by the US Department of Justice, signed a consent decree that didn’t hold, and was sued again in 1998 for engaging in anti-competitive practices. A final settlement in 2001 gave enough breathing room to the giants of the next era, most notably Google and Amazon, to find their footing outside Microsoft’s shadow.

Our entire economy seems to have forgotten that workers are also consumers, and suppliers are also customers. When companies use automation to put people out of work, they can no longer afford to be consumers; when platforms extract all the value and leave none for their suppliers, they are undermining their own long-term prospects. It’s two-sided markets all the way down.

Beware the Big Tech Backlash
By Greg Ip

… there is growing evidence that these companies use their size to stifle competition. British-released emails show that Facebook decided what access other companies could have to its platform based on their competitive threat; it cut off a Twitter video service’s access to Facebook friends. A study by Feng Zhu and Qihong Liu published by Harvard Business School found that Amazon targeted better-selling, higher-reviewed items sold by third-party merchants and began selling those items itself. This is good for customers but crimps the growth of third-party merchants, they found.

A look at documents released by a U.K. lawmaker as part of a British parliamentary committee inquiry into “disinformation and fake news” reveals how Facebook and CEO Mark Zuckerberg gave select developers special access to user data and deliberated on whether to sell that data.

If there is a case for government intervention, then, this is it: more muscular antitrust oversight. “Google, Amazon, Apple, Facebook, and Microsoft … have collectively bought over 436 companies and startups in the past 10 years, and regulators have not challenged any of them,” Jonathan Tepper and Denise Hearn write in their book, “The Myth of Capitalism: Monopolies and the Death of Competition.” “Either the upstarts sell out to the bigger company, or they get ruthlessly crushed.”

The coming antitrust fights are an existential battle over how to protect capitalism
By Linette Lopez

Because their services are free, they were, in many cases, allowed to grow without restraint. Activity that may have been seen as anti-competitive a generation ago was allowed to slide.

For example, back in 2012, Federal Trade Commission (FTC) researchers showed that Google used its power to make competitors harder to find on its search engine. The FTC, however, decided to do nothing about it, saying simply that its job was not to protect competitors.

Regulators didn’t do anything when Facebook crippled video app Vine by blocking Vine from its friend-finding feature. And they stood by and did nothing when Google acquired YouTube and changed its search algorithm, substituting its own subjective “relevance” ranking for objective search criteria.

Watchdogs like the Electronic Privacy Information Center (EPIC) have been complaining about stuff like this on Capitol Hill for a while now. When it comes to mergers, EPIC argues that agencies like the FTC should be thinking about whether a merger may erode consumer privacy. This, it said, is what should have been under consideration when Facebook acquired WhatsApp back in 2014.

“The FTC ultimately approved the merger after Facebook and WhatsApp promised not to make any changes to WhatsApp users’ privacy settings,” EPIC pointed out in Congressional testimony in December of 2018.

“However Facebook announced in 2016 that it would begin acquiring the personal information of WhatsApp users, including phone numbers, directly contradicting their previous promises to honor user privacy. Following this, EPIC and CDD filed another complaint with the FTC in 2016, but the Commission has taken no further action. Meanwhile, antitrust authorities in the EU fined Facebook $122 million for making deliberately false representations about the company’s ability to integrate the personal data of WhatsApp users.”

Facebook vs the feds: The inside story of a multi-billion dollar tech giant’s privacy war with Washington
By Tony Romm

The spark for the government’s investigation into Facebook was Cambridge Analytica, a political consultancy with ties to the upper echelons of Trump’s 2016 presidential campaign. The firm sought to harness the power of Facebook data — including users’ likes and interests — to create “psychographic” profiles of users and better target its clients’ political messages.

In doing so, Cambridge Analytica relied on a quiz app created by a third-party researcher that collected data about those who installed it as well as their Facebook friends, a practice the company allowed until a series of rule changes in 2015. Revelations three years later about the data it amassed — putting 87 million Facebook users’ information at risk for further misuse — sparked an international backlash from regulators who saw it as a sign of Silicon Valley’s endemic problems with privacy.

By the end of March 2018, the FTC announced its own probe into Facebook, an unexpected move for a federal enforcement agency that typically says nothing about its work to probe corporate wrongdoers. The investigation sought to determine if Facebook broke promises it made to the government in 2011 to improve its privacy practices, a legally binding accord that ended an earlier inquiry into the social-networking giant. Violations threatened Facebook with steep fines, though Facebook for months maintained publicly that it didn’t breach the accord.

The commission’s task — immediately seen as a litmus test of its power to oversee Silicon Valley — fell chiefly to Joseph Simons, who had joined the FTC’s two other Republicans and two Democrats in spring 2018. Simons assumed the chairmanship of the agency in May after decades of practicing antitrust law for the government and a host of private-sector clients.

The FTC’s probe into Facebook only widened amid a torrent of additional revelations about its privacy practices. That June, for example, Facebook acknowledged it had shared user information with 52 hardware and software makers, including Amazon, Microsoft and Huawei, as well as apps including Hinge, an online-dating service, and Spotify, a music-streaming giant, in ways that might not have been readily apparent to users. Each of the new disclosures triggered fierce criticism among privacy hawks, who questioned why the FTC — which had been watching Facebook since 2012 — never spotted a single violation at the company in the first place.

By the end of 2018, staff investigators at the FTC had concluded that Facebook had breached its previous agreement with the government. Taking into account the total number of users who had seen misleading privacy disclosures about Facebook — a form of deception in the eyes of the FTC — the agency computed a theoretical maximum fine that reached into the tens of billions of dollars.

Politicians Don’t Trust Facebook—Unless They’re Campaigning
By Hamdan Azhar

Over the past two months, I surveyed the official campaign websites of 535 US politicians. As of June 14, 81 sitting US senators, including Sherrod Brown and Josh Hawley, have Facebook tracking pixels embedded somewhere on their campaign websites; 31 of them send exact donation amounts. As of last Friday, at least 176 members of the House of Representatives also have the Facebook pixel on their campaign homepages. And almost every 2020 presidential candidate uses this kind of tracker, too, including President Donald Trump.

And this should be underlined: Facebook’s pixel technology, which is meant to help target Facebook ads to visitors, must be approved by websites on which it operates. These politicians—or at least their campaigns—have actively signed up to allow Facebook to track their visitors.
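To make the mechanism concrete, here is a minimal sketch, not any campaign’s actual code, of how a site operator typically wires up this kind of pixel. The `fbq` call reflects the pixel’s publicly documented browser interface; the pixel ID, event name, and amount are hypothetical placeholders.

```typescript
// Illustrative sketch only: a campaign site embedding a tracking pixel.
// `fbq` is the global function injected by the vendor's pixel loader script;
// the pixel ID, event name, and amounts below are hypothetical.
declare function fbq(
  command: string,
  eventOrId: string,
  params?: Record<string, unknown>
): void;

// Register the site's pixel and log the page visit.
fbq("init", "HYPOTHETICAL_PIXEL_ID");
fbq("track", "PageView");

// If the donation confirmation page also fires an event carrying the amount,
// that exact figure travels back with the tracking call -- the behavior the
// survey above describes as "sending exact donation amounts."
function reportDonation(amountUsd: number): void {
  fbq("track", "Donate", { value: amountUsd, currency: "USD" });
}

reportDonation(200); // e.g., a $200 contribution
```

The point of the sketch is simply that nothing happens passively: each tracking call is code the campaign (or its web vendor) chose to place on the page.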

Federal political committees do need to collect certain information from donors in order to comply with campaign finance laws, including the names, addresses, and occupations for individuals who contribute more than $200 in an election cycle. That information gets publicly disclosed in reports submitted to the Federal Election Commission. But the use of that information is also regulated by the government, which says that it “shall not be sold or used by any person for the purpose of soliciting contributions or for any commercial purpose.”

Those regulations don’t apply to data collected by Facebook. The use of that data is instead governed by the company’s own policies. Pixels fall under its so-called Business Tools terms, which say Facebook won’t share that data with third parties without permission, unless the company is required to do so by law. Facebook also says it does not use pixel data to place Facebook users into interest segments that other advertisers can choose to target. The company requires anyone using a pixel to provide “clear and prominent notice” of its data collection and sharing on the site.

There is also something concerning about a private company having detailed browsing records for hundreds of thousands or even millions of people—a concern that lawmakers have expressed repeatedly. In testimony before Congress last year, Mark Zuckerberg tried to reassure them. “On Facebook, you have control over your information. The content that you share, you put there. You can take it down at any time,” he said. “The information that we collect you can choose to have us not collect. You can delete any of it.”

The CEO clarified later in the hearing that web browsing history is treated a bit differently. “Web logs are not in ‘Download Your Information,’” Zuckerberg said. “We only store them temporarily. And we convert the web logs into a set of ad interests that you might be interested in those ads, and we put that in the Download Your Information instead.” Facebook also says it de-identifies pixel data after an initial period of time.

Shortly after those hearings, Facebook announced a still-forthcoming Clear History tool that will enable users to delete Facebook’s record of their browsing history. But as of now, there seems to be limited independent oversight of Facebook’s ability to store pixel data and use it however it pleases.

Facebook has become a core part of the modern campaign apparatus for politicians who use the social networking site to broadcast their messaging, solicit donations, announce events, recruit volunteers, and more. Politicians also spend heavily on Facebook advertising. The advertising analysis and consulting firm Borrell Associates estimated that $1.4 billion was spent on political digital advertising in 2016, and it projected that number to hit $3.3 billion in 2020.

Advertisers Are Wary of Breaking Up Google and Facebook
By Suzanne Vranica and Alexandra Bruell

“I cannot walk away from scale, as a brand that needs mass marketing,” said Raja Rajamannar, MasterCard Inc.’s chief marketing officer.

“Look at what Facebook and Google do for me—they have pretty sophisticated tools and consumers’ demographic information and location,” and they can help advertisers reach a swath of the population, said a top marketing executive at a quick-service restaurant.

Those sentiments explain why the two tech titans accounted for 58% of the U.S. digital ad spending in 2018, according to eMarketer.

The advertisers’ views stand in contrast to those of many publishers. Many media executives would rejoice if U.S. antitrust enforcers moved to break up Google or Facebook or take other drastic action. Their case: the “duopoly” has cornered markets like search, video and digital advertising.

That doesn’t mean advertisers don’t have deep frustrations with the companies—from their grip on user data to their repeated missteps in allowing ads to run next to offensive or hateful content. Several ad executives said if government scrutiny resulted in changes to those behaviors, that would mark an improvement.

Advertisers also would like to receive user data from the tech companies to improve what they already know about their customers, though they realize this is especially sensitive territory now as the tech companies deal with increased scrutiny over their privacy practices.

There are some advertisers who do believe Google’s dominance of advertising technology raises serious anticompetitive concerns because it is both a seller and buyer of ads and operates leading products at every stage between advertisers and publishers.

“Google is a virtual monopoly,” said a top marketer for a global beverage company. “It quashes innovation and growth in the industry,” the person added.

German Regulators Just Outlawed Facebook’s Whole Ad Business
By Emily Dreyfuss

Antitrust regulators used to consider data and privacy outside their purview. The old philosophy held that antitrust was concerned with price, and if a product was free then consumers couldn’t be harmed, says Maurice Stucke, antitrust expert and law professor at the University of Tennessee. “What we’re seeing now is those myths are being largely discredited.”

The most remarkable part of the ruling is the way it makes clear that privacy and competition are inextricably intertwined. “On the one hand there is a service provided to users free of charge. On the other hand, the attractiveness and value of the advertising spaces increase with the amount and detail of user data,” said Andreas Mundt, the president of the Bundeskartellamt, Germany’s competition authority. “It is therefore precisely in the area of data collection and data use where Facebook, as a dominant company, must comply with the rules and laws applicable in Germany and Europe.”

If Facebook loses the appeal, then Germany will become a grand experiment in whether the surveillance economy is actually essential to the operation of social media. Other Europeans and Americans may demand that they be given the same option. “This ruling is really an icebreaker. Icebreakers break through the ice in order to lead the path for other vessels to follow,” says Stucke.

The Week in Tech: We Might Be Regulating the Web Too Fast
By Jamie Condliffe

“We’re entering a new phase of hyper regulation,” said Paul Fehlinger, the deputy executive director of the Internet and Jurisdiction Policy Network, an organization established to understand how national laws affect the internet.

This flurry of content rules is understandable. Much of the material they would police is abhorrent, and social media’s rapid rise has caught lawmakers off guard; now the public wants something done.

But the regulations could have unintended consequences.

Difficulties in defining “harmful” mean governments will develop different standards. In turn, the web could easily look different depending on your location — a big shift from its founding principles. (This is already happening: The Chicago Tribune’s website, for example, doesn’t comply with the General Data Protection Regulation, so there’s no access to it from Europe.)

There may be less visible effects. If regulation required differences at a hardware level, that could fragment the infrastructure, said Konstantinos Komaitis, a senior director at the nonprofit Internet Society, which promotes the open development and use of the internet. That could make the internet less resilient to outages and attacks.

And bigger, richer companies will find it easier to comply with sprawling regulation, which could reinforce the power of Big Tech.

“There is a major risk that we end up in a situation where short-term political pressure trumps long-term vision,” Mr. Fehlinger said.

Why ‘breaking up’ big tech probably won’t work
By Fiona Scott Morton

The break-them-up sloganeering fails to recognize that “big” is not, under the law, an antitrust violation. New products can achieve huge market share because consumers love them: consider Rollerblade’s dominance in the inline-skating market in the late 1980s, AOL’s high share of Internet access in the 1990s or how Apple owned the tablet market in 2011. If a product gets a big share because it is good and popular — but its maker has not behaved anti-competitively toward its rivals — it has not violated our antitrust laws. Without new laws giving the government the power to take a different approach, Washington cannot just break up big tech, or any company, solely because it is large or has a high market share.

Will a breakup prompt a remedy that will increase competition? In the old days of the Standard Oil monopoly, enforcers often broke up large entities by geography. But there is no sense in which it’s useful to have a search engine or social network for a small section of the country. Rather, an agency must think carefully about the source of each platform’s market power and figure out what remedy — antitrust or otherwise — would create competition in that market. If used indiscriminately, a breakup can actually harm consumers and workers and reduce innovation.

Simply divesting Instagram from Facebook is unlikely to work. For one thing, everyone wants to be on the same site as their friends, so a divested division with no links to Facebook would likely lose its customers quickly — back to Facebook.

Regulations that lower entry barriers will help reduce the entrenched market power of dominant platforms. Today it is very difficult for a consumer to switch platform providers because her data are not under her control. What if a consumer could port her shopping data from Amazon.com to Jet.com with a few clicks? (Amazon’s founder and chief executive, Jeff Bezos, owns The Post.) Or her favorite playlists between Spotify and Pandora? Data portability could allow new competitors to attract customers more easily.

Spotify just painted a big target on Apple’s back, and the iPhone maker should worry if antitrust regulators start aiming at it
By Troy Wolverton

Facebook is the dominant social network. Google owns the search market, and the vast majority of smartphones worldwide run its Android operating system. Between the two of them, the companies are expected to account for more than half of the global digital ad market this year.

Meanwhile, about half of all US e-commerce purchases are made through Amazon. And Microsoft still dominates the PC operating system market.

By contrast, Apple wouldn’t seem to control any notable industries. Its Mac computers have long had only a small portion of the overall PC market. The iPhone may be one of the most popular lines of smartphones in the world, but Samsung sells more phones overall and Huawei has nearly overtaken Apple in market share. Apple Music may have raised Spotify’s ire, but it’s still a distant second to Spotify in terms of subscribers.

But such high-level views understate Apple’s actual market power.

The 30% commission Apple charges on in-app purchases can make competing services uncompetitive. To recoup its costs for paying Apple’s commission, Spotify used to charge its iPhone users $12.99 a month — $3 more than it charged those who signed up through its website. By contrast, Apple Music didn’t have to pay those charges, and iPhone users were able to sign up for subscriptions inside its app for just $9.99, Spotify says.

What’s more, Spotify says Apple has barred it from advertising or offering promotional rates for its service inside its app. It’s also repeatedly delayed approving Spotify’s app updates, the streaming music service said.

And for years, according to Spotify, Apple blocked it from offering an Apple Watch app and still bars it from offering an app for Apple’s HomePod smart speaker.

But the bigger danger to Apple from the Spotify complaint could be the public-relations hit. In the mid-1990s, before the antitrust trial, Microsoft was one of the most respected companies in the US and Bill Gates was one of the most widely admired business leaders.

But that case, which brought to light Microsoft’s cutthroat tactics and Gates’ seemingly disdainful attitude toward government oversight, damaged the reputations of both.

How Americans View Tech Companies
By Aaron Smith

In the midst of an ongoing debate over the power of digital technology companies and the way they do business, sizable shares of Americans believe these companies privilege the views of certain groups over others. Some 43% of Americans think major technology firms support the views of liberals over conservatives, while 33% believe these companies support the views of men over women, a new Pew Research Center survey finds. In addition, 72% of the public thinks it likely that social media platforms actively censor political views that those companies find objectionable.

The belief that technology companies are politically biased and/or engaged in suppression of political speech is especially widespread among Republicans. Fully 85% of Republicans and Republican-leaning independents think it likely that social media sites intentionally censor political viewpoints, with 54% saying this is very likely. And a majority of Republicans (64%) think major technology companies as a whole support the views of liberals over conservatives.

On a personal level, 74% of Americans say major technology companies and their products and services have had more of a positive than a negative impact on their own lives. And a slightly smaller majority of Americans (63%) think the impact of these companies on society as a whole has been more good than bad. At the same time, their responses highlight an undercurrent of public unease about the technology industry and its broader role in society. When presented with several statements that might describe these firms, a 65% majority of Americans feel the statement “they often fail to anticipate how their products and services will impact society” describes them well – while just 24% think these firms “do enough to protect the personal data of their users.” Meanwhile, a minority of Americans think these companies can be trusted to do the right thing just about always (3%) or most of the time (25%), and roughly half the public (51%) thinks they should be regulated more than they are now.

Americans have become much less positive about tech companies’ impact on the U.S.
By Carroll Doherty and Jocelyn Kiley

Four years ago, technology companies were widely seen as having a positive impact on the United States. But the share of Americans who hold this view has tumbled 21 percentage points since then, from 71% to 50%.

We won’t know if screen time is a hazard until Facebook comes clean
By Andy Przybylski and Peter Etchells

There’s no point in introducing any sort of legislation when nobody’s clear on what, precisely, needs regulating, or what it is that we’re actually worried about. The scientific evidence needs to come from expert sources independent of Silicon Valley. Facebook’s Project Atlas, which paid people (including children) to collect research data on their digital behaviour, has once again made it clear that large social media companies don’t have the best track record when it comes to research on their own users (and Atlas wasn’t an isolated case).

In 2017, Yvette Cooper, the chair of the Home Affairs Committee, chastised representatives from YouTube, Twitter and Facebook, arguing that their platforms were being used to facilitate various forms of extremism and radicalisation. It’s an understandable concern, but not only do we actually know very little about how such online grooming happens, we also have little insight into how best to stop it.

Currently, companies such as YouTube are essentially playing a glorified game of high-tech whack-a-mole, with content moderators (both artificial and real) in a perpetual race to flag and bin the worst offenders. It’s an essential job – but it’s a tactical solution to a strategic problem. If online social platforms opened their doors to teams of trained social scientists, the possibility of developing lasting long-term solutions to these problems becomes more of an achievable reality.

It’s not all about the dark side either: we could also learn more about what brings out the best in us. Online social platforms aren’t just restricted to Facebook, Snapchat or Instagram – they also include platforms like GoFundMe or JustGiving. Participation by these platforms could improve our understanding of human altruism, and how to build better support for the causes we care about. Psychologists already study these sorts of phenomena, but are traditionally restricted to more artificial forms of data, such as lab-based economic games.

Facebook’s ex-security chief on disinformation campaigns: ‘The sexiest explanation is usually not true’
By Victoria Kwan

What more can journalists do to make sure they’re reporting responsibly on these stories, even if they don’t have access to platform information, or a technical forensics background?

It is incumbent on platforms to find ways to share more data — that is their responsibility to figure out. There are interesting problems with privacy laws, but if the company doesn’t normally want to talk about the details of that, then [journalists should] push back.

A big thing I always tell all journalists is: look, it’s probably not Russia. The truth is, the vast majority of political disinformation is coming from semi-professionals who are making money pushing disinformation, who are also politically motivated and have some kind of relationship to the political actors themselves. The vast majority of the time, it is not a foreign influence campaign. And that should be the automatic assumption: it is not James Bond.

Journalists are there on all kinds of other things. If you read a local newspaper story about a woman disappearing in the middle of the night, it’s probably not a human trafficking ring. It’s probably the husband. Local crime reporters understand this, so they don’t write, ‘This is probably a Ukrainian human trafficking ring’, as the first assumption in the story. They write: Scott Peterson seemed very sketchy.

At some point, you start to realise it’s mostly scammers. This is the truth on the internet: there are tens of thousands of people whose entire job it is to push spam on Facebook. It’s their career. There are hundreds of times more people doing that than there are working in professional disinformation campaigns for governments. So they have to fundamentally accept that the sexiest explanation is usually not true.

This is something that companies go through, too. They’ll hire new analysts, and they jump to wild conclusions. ‘I found a Chinese IP, maybe it’s MSS [Ministry of State Security].’ It’s probably not MSS; it’s probably unpatched Windows bugs in China. This is also why you do the red-teaming, and why you have disinterested parties whose job it is to question the conclusions.

When a journalist actually gets on a call with a technology company, like Facebook or YouTube, to discuss disinformation campaigns on that platform, what kinds of questions should she be asking?

There are two different scenarios: when the company provides attribution, and when it doesn’t. The platforms are going to be the most reluctant to provide attribution, because if they get it wrong, it’s a huge deal. It’s a lot of downsides, and not a lot of upsides, for doing attribution.

The media also needs to understand that attribution [to] actors like Russia is relatively easy for these companies. None of them have offices in Russia anymore. Russia is economically irrelevant to all the major tech companies, whereas that is not true for China or India. So the first thing the media needs to consider is whether or not the platforms are motivated by their financial ties or by the safety of their employees in not making the attribution.

If the companies don’t provide attribution, then the other thing the media should ask is: are you providing the raw data to anybody who can? Are you providing it to any NGOs or academics, or are you providing it to a trustworthy law enforcement source? This is the key thing.

But this is only going to get harder because the truth is, if you look globally at disinformation campaigns, the median victim of a professional disinformation campaign is a victim of a campaign being run by their own government against the domestic audience. If you look at India, the disinformation is not being driven by foreign adversaries, it’s being driven by the Indian political parties. That makes the attribution question very complicated for the tech companies. So that’s something the media needs to keep in mind.

This is also why the media needs to be consistent about calling for regulation of the [technology] companies. When you call for global regulation, you’re asking to switch power over to the governments who are perhaps not totally democratically accountable; or are democracies, but democracies that think disinformation is totally fine.

That’s the kind of stuff that drives me nuts on the big-picture issues, when large media organisations call for these companies to be subservient to governments, and then also believe that the companies should protect people from their own governments.

An economist explains what happens if there’s another financial crisis
By Kenneth Rogoff and Ross Chainey

How optimistic are you for the future?

My children and their friends are very optimistic about technology for the future. It’s interesting the contrast: if you speak to economists, to central banks, to Wall Street, they say we’re done inventing anything, we’ve had 250 great years, now it’s going to slow down. I think that’s wrong; that actually we’re likely to see an acceleration in technology.

Does that make me optimistic for society? I’m not sure because society has a very difficult problem handling rapid change, handling innovation. In some ways, it might be easier if we settle down to a slower pace of technological growth.

That’s not what’s going to happen. Mankind’s innate optimism, innate curiosity is going to be producing new ideas. Artificial intelligence is here; it’s coming very rapidly. Whether it’s 10, 20, 30 years, there’s no question that its imprint will be very great.

So, I’m impressed that technology will improve very quickly. My biggest worry is that society and politics will not progress at a similar pace and that disconnect between the fast pace of technology and the slow pace at which societies and politics change could bring many problems.

H. G. Wells and the Uncertainties of Progress
By Peter J. Bowler

The historian Philip Blom calls the early twentieth century the “vertigo years”, when everyday life was transformed by a bewildering array of new technologies. … Wells realized that this state of uncertainty would continue indefinitely, making it virtually impossible even for the enthusiasts to predict what would emerge. The technophiles hail their innovations as the driving force of progress, but they do not always foresee what will be invented — or what the ultimate effects on society will be. This is a situation we are acutely aware of today: few, if any, could have anticipated the impact of computers and the digital revolution, and we are only gradually becoming aware that these innovations have not brought us unalloyed benefits.

The Revolution Need Not Be Automated
By Daron Acemoglu and Pascual Restrepo

… the field is dominated by a handful of large tech companies with business models closely linked to automation. These firms account for the bulk of investments in AI research, and they have created a business environment in which the removal of fallible humans from the production processes is regarded as a technological and business imperative. To top it off, governments are subsidizing corporations through accelerated amortization, tax breaks, and interest deductions – all while taxing labor.

No wonder adopting new automation technologies has become profitable even when the technologies themselves are not particularly productive. Such failures in the market for innovation and technology seem to be promoting precisely the wrong kind of AI. A single-minded focus on automating more and more tasks is translating into low productivity and wage growth and a declining labor share of value added.

This doesn’t have to be the case. By recognizing an obvious market failure and redirecting AI development toward the creation of new productivity-enhancing tasks for people, we can achieve shared prosperity once again.

Big Tech’s Harvest of Sorrow?
By Daron Acemoglu

Applying science to social problems has brought huge dividends in the past. Long before the invention of the silicon chip, medical and technological innovations had already made our lives far more comfortable – and longer. But history is also replete with disasters caused by the power of science and the zeal to improve the human condition.

For example, efforts to boost agricultural yields through scientific or technological augmentation in the context of collectivization in the Soviet Union or Tanzania backfired spectacularly. Sometimes, plans to remake cities through modern urban planning all but destroyed them. The political scientist James Scott has dubbed such efforts to transform others’ lives through science instances of “high modernism.”

An ideology as dangerous as it is dogmatically overconfident, high modernism refuses to recognize that many human practices and behaviors have an inherent logic that is adapted to the complex environment in which they have evolved. When high modernists dismiss such practices in order to institute a more scientific and rational approach, they almost always fail.

Historically, high-modernist schemes have been most damaging in the hands of an authoritarian state seeking to transform a prostrate, weak society. In the case of Soviet collectivization, state authoritarianism originated from the self-proclaimed “leading role” of the Communist Party, and pursued its schemes in the absence of any organizations that could effectively resist them or provide protection to peasants crushed by them.

Yet authoritarianism is not solely the preserve of states. It can also originate from any claim to unbridled superior knowledge or ability. Consider contemporary efforts by corporations, entrepreneurs, and others who want to improve our world through digital technologies. Recent innovations have vastly increased productivity in manufacturing, improved communication, and enriched the lives of billions of people. But they could easily devolve into a high-modernist fiasco.

What Clausewitz Can Teach Us About War on Social Media
By P. W. Singer and Emerson T. Brooking

Each new technology was used to wage information wars that ran alongside the physical fighting. Yet propaganda was almost universally ineffective. During the Blitz, one of the most popular radio stations in the United Kingdom was an English-language station produced by the Nazis — because the British loved to laugh at it. In the 1960s and 1970s, alongside the millions of tons of bombs US forces dropped on North Vietnam were tens of millions of leaflets, which the North Vietnamese promptly used as toilet paper.

The Internet has changed all that. In the space of a decade, social media has turned almost everyone into a collector and distributor of information. Attacking an adversary’s centre of gravity — the minds and spirits of its people — no longer requires massive bombing runs or reams of ineffective propaganda. All it takes is a smartphone and a few idle seconds. Anyone can do it.

The Internet has brought one other unprecedented change that would have stumped even Clausewitz: its laws of war are set by a mere handful of people. On networks of billions of people, a tiny number of individuals can instantly turn the tide of an information war one way or another. What Mark Zuckerberg and Twitter CEO Jack Dorsey allow (or ban) in their digital kingdoms can make or break entire companies and change the course of international conflicts.

Unfortunately, as these social media companies belatedly begin to reckon with their growing political power (power they never asked for and have often proved ill equipped to wield), they are repeating past mistakes. Time and again, they have failed to prepare for the political, legal, and moral dimensions of their world-changing technologies, failed to plan for how bad actors might abuse them and good actors might misuse them. At each foreseeable surprise they turn to technology as the answer. This cycle is about to repeat itself as companies develop new forms of artificial intelligence. They believe this might solve their problems of censorship and content moderation, but it is easy to foresee how AI systems will also be weaponised against their users.

If you are online, your attention is like a piece of contested territory. States, companies, and people you may never have heard of are fighting for it in conflicts that you may or may not realise are unfolding around you. Everything you watch, like, or share makes a tiny ripple on the information battlefield, offering an infinitesimal advantage to one side or another.

Those who can direct the flow of this swirling tide can accomplish incredible good. They can free people, expose crimes, save lives, and prompt far-reaching reforms. But they can also accomplish astonishing evil. They can foment violence, stoke hate, spread lies, spark wars, and even erode democracy itself.

Twitter won’t ruin the world. But constraining democracy would
By Kenan Malik

There are real issues to be addressed about how information sharing can lead to Twitter mobs, fake news and online hate. But the debate also expresses deeper anxieties about allowing people too much sway. “I remember specifically one day thinking of that phrase: we put power in the hands of people,” observes Chris Wetherell, the developer who built Twitter’s retweet button. “But now, what if you just say it slightly differently: oh no, we put power into the hands of people.”

It’s a fear that has leached from politics into technology. Where once we were enamoured of democracy, now many panic about its consequences. And where once many lauded technology as empowering people, now that is precisely what many fear.

“Audiences that regularly amplify awful posts,” Wetherell suggests, should be suspended or banned from platforms. This is the technological equivalent of restricting the franchise. And it’s already happening. From laws enforcing takedowns of fake news to bans on those promoting unpalatable ideas, such restrictions have become the norm.

The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do.

Business Leaders Set the A.I. Agenda
By The New York Times

Dov Seidman

Founder and chief executive, LRN

The business of business is no longer just business. The business of business is now society. The world is fused, and we can no longer maintain neutrality. Taking responsibility for what technology enables, and for how it’s used, is therefore inescapable. Restoring trust will take more than software.

We need to scale “moralware” through leadership that is guided by our deepest shared values and ensures that technology lives up to its promise: enhancing our capabilities, enriching lives, truly bringing us together, and making the world more open and equal. This means seeing more than “users” and “clicks” but real people, who are worthy of the truth and can be trusted to make their own informed choices.

Algorithmic Governance and Political Legitimacy
By Matthew B. Crawford

“Technology” is a slippery term. We don’t use it to name a toothbrush or a screwdriver, or even things like capacitors and diodes. Rather, use of the word usually indicates that a speaker is referring to a tool or an underlying operation that he does not understand (or does not wish to explain). Essentially, we use it to mean “magic.” In this obscurity lies great opportunity to “exert more power with less effort,” to use Philip Hamburger’s formula.

To grasp the affinities between administrative governance and algorithmic governance, one must first get over that intellectually debilitating article of libertarian faith, namely that “the government” poses the only real threat to liberty. For what does Silicon Valley represent, if not a locus of quasi-governmental power untouched by either the democratic process or by those hard-won procedural liberties that are meant to secure us against abuses by the (actual, elected) government? If the governmental quality of rule by algorithms remains obscure to us, that is because we actively welcome it into our own lives under the rubric of convenience, the myth of free services, and ersatz forms of human connection—the new opiates of the masses.

To characterize this as the operation of “the free market” (as its spokespersons do) requires a display of intellectual agility that might be admirable if otherwise employed. The reality is that what has emerged is a new form of monopoly power made possible by the “network effect” of those platforms through which everyone must pass to conduct the business of life. These firms sit at informational bottlenecks, collecting data and then renting it out, whether for the purpose of targeted ads or for modeling the electoral success of a political platform. Mark Zuckerberg has said frankly that “In a lot of ways Facebook is more like a government than a traditional company. . . . We have this large community of people, and more than other technology companies we’re really setting policies.”

One reason why algorithms have become attractive to elites is that they can be used to install the automated enforcement of cutting-edge social norms. In the last few months there have been some news items along these lines: Zuckerberg assured Congress that Facebook is developing AI that will detect and delete what progressives like to call “hate speech.” You don’t have to be a free speech absolutist to recognize how tendentiously that label often gets applied, and be concerned accordingly. The New York Times ran a story about new software being pitched in Hollywood that will determine if a script has “equitable gender roles.” The author of a forthcoming French book on artificial intelligence, herself an AI researcher, told me that she got a pitch recently from a start-up “whose aim was ‘to report workplace harassment and discrimination without talking to a human.’ They claim to be able to ‘use scientific memory and interview techniques to capture secure records of highly emotional events.’”

Locating the authority of evolving social norms in a computer will serve to provide a sheen of objectivity, such that any reluctance to embrace newly announced norms appears, not as dissent, but as something irrational—as a psychological defect that requires some kind of therapeutic intervention. So the effect will be to gather yet more power to what Michel Foucault called “the minor civil servants of moral orthopedics.” (Note that Harvard University has over fifty Title IX administrators on staff.) And there will be no satisfying this beast, because it isn’t in fact “social norms” that will be enforced (that term suggests something settled and agreed-upon); rather it will be a state of permanent revolution in social norms. Whatever else it is, wokeness is a competitive status game played in the institutions that serve as gatekeepers of the meritocracy. The flanking maneuvers of institutional actors against one another, and the competition among victim groups for relative standing on the intersectional totem pole, make the bounds of acceptable opinion highly unstable. This very unsettledness, quite apart from the specific content of the norm of the month, makes for pliable subjects of power: one is not allowed to develop confidence in the rightness of one’s own judgments.

The conflicts created by identity politics become occasions to extend administrative authority into previously autonomous domains of activity. This would be to take a more Marxian line, treating PC as “superstructure” that serves mainly to grease the wheels for the interests of a distinct class—not of capitalists, but of managers. …

The incentive to technologize the whole drama enters thus: managers are answerable (sometimes legally) for the conflict that they also feed on. In a corporate setting, especially, some kind of ass‑covering becomes necessary. Judgments made by an algorithm (ideally one supplied by a third-party vendor) are ones that nobody has to take responsibility for. The more contentious the social and political landscape, the bigger the institutional taste for automated decision-making is likely to be.

Persuasion is what you try to do if you are engaged in politics. Curating information is what you do if you believe your outlook is one from which dissent can only be due to a failure to properly process the relevant information. This is an anti-political form of politics. If politics is essentially fighting (toward compromise or stalemate, if all goes well, but fighting nonetheless), technocratic rule is essentially helping, as in “the helping professions.” It extends compassion to human beings, based on an awareness of their cognitive limitations and their tendency to act out.

In the Founders Letter that accompanied Google’s 2004 initial public offering, Larry Page and Sergey Brin said their goal is “getting you exactly what you want, even when you aren’t sure what you need.” The perfect search engine would do this “with almost no effort” on the part of the user. In a 2013 update to the Founders Letter, Page said that “the search engine of my dreams provides information without you even having to ask.” Adam J. White glosses these statements: “To say that the perfect search engine is one that minimizes the user’s effort is effectively to say that it minimizes the user’s active input. Google’s aim is to provide perfect search results for what users ‘truly’ want—even if the users themselves don’t yet realize what that is. Put another way, the ultimate aspiration is not to answer the user’s questions but the question Google believes she should have asked.” As Eric Schmidt told the Wall Street Journal, “[O]ne idea is that more and more searches are done on your behalf without you having to type. . . . I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.”

The ideal being articulated in Mountain View is that we will integrate Google’s services into our lives so effortlessly, and the guiding presence of this beneficent entity in our lives will be so pervasive and unobtrusive, that the boundary between self and Google will blur. The firm will provide a kind of mental scaffold for us, guiding our intentions by shaping our informational context. This is to take the idea of trusteeship and install it in the infrastructure of thought.

When the internal culture at Google spills out into the headlines, we are offered a glimpse of the moral universe that stands behind the “objective” algorithms. Recall the Googlers’ reaction, which can only be called hysterical, to the internal memo by James Damore. He offered rival explanations, other than sexism, for the relative scarcity of women programmers at the firm (and in tech generally). The memo was written in the language of rational argumentation, and adduced plenty of facts, but the wrong kind. For this to occur within the firm was deeply threatening to its self-understanding as being at once a mere conduit for information and a force for progress. Damore had to be quarantined in the most decisive manner possible. His dissent was viewed not as presenting arguments that must be met, but rather facts that must be morally disqualified.

On one hand, facilitating the free flow of information was Silicon Valley’s original ideal. But on the other hand, the control of information has become indispensable to prosecuting the forward march of history. This, in a nutshell, would seem to be the predicament that the platform firms of Silicon Valley find themselves in. The incoherence of their double mandate accounts for their stumbling, incoherent moves to suppress the kinds of speech that cultural progressives find threatening. …

This conflict is most acute in the United States, where the legal and political tradition protecting free speech is most robust. In Europe, the alliance between social media companies and state actors to root out and punish whatever they deem “hate” (some of which others deem dissent) is currently being formalized. This has become especially urgent ahead of the European Parliament elections scheduled for May 2019, which various EU figures have characterized as the last chance to quarantine the populist threat. Mounir Mahjoubi, France’s secretary of state for digital affairs, explained in February 2019 that, by the time of the election, “it will be possible to formally file a complaint online for hateful content.” … In particular, Twitter and Facebook have agreed to immediately transmit the IP addresses of those denounced for such behavior to a special cell of the French police “so that the individual may be rapidly identified, rapidly prosecuted and sentenced.” He did not explain how “hateful content” is to be defined, or who gets to do it.

Among those ensconced in powerful institutions, the view seems to be that the breakdown of trust in establishment voices is caused by the proliferation of unauthorized voices on the internet. But the causal arrow surely goes the other way as well: our highly fragmented casting-about for alternative narratives that can make better sense of the world as we experience it is a response to the felt lack of fit between experience and what we are offered by the official organs, and a corollary lack of trust in them. For progressives to now seek to police discourse from behind an algorithm is to double down on the political epistemology that has gotten us to this point. The algorithm’s role is to preserve the appearance of liberal proceduralism, that austerely fair-minded ideal, the spirit of which is long dead.

Such a project reveals a lack of confidence in one’s arguments—or a conviction about the impotence of argument in politics, due to the irrationality of one’s opponents. In that case we have a simple contest for power, to be won and held onto by whatever means necessary.

