
Culture war games: a perfect storm of temptations

Why Fiction Trumps Truth
By Yuval Noah Harari

It is sobering to realize that the Scientific Revolution began in the most fanatical culture in the world. Europe in the days of Columbus, Copernicus and Newton had one of the highest concentrations of religious extremists in history, and the lowest level of tolerance.

Newton himself apparently spent more time looking for secret messages in the Bible than deciphering the laws of physics. The luminaries of the Scientific Revolution lived in a society that expelled Jews and Muslims, burned heretics wholesale, saw a witch in every cat-loving elderly lady and started a new religious war every full moon.

If you had traveled to Cairo or Istanbul around 400 years ago, you would have found a multicultural and tolerant metropolis where Sunnis, Shiites, Orthodox Christians, Catholics, Armenians, Copts, Jews and even the occasional Hindu lived side by side in relative harmony. Though they had their share of disagreements and riots — and though the Ottoman Empire routinely discriminated against people on religious grounds — it was a liberal paradise compared with Western Europe. If you had then sailed on to contemporary Paris or London, you would have found cities awash with religious bigotry, in which only those belonging to the dominant sect could live. In London they killed Catholics; in Paris they killed Protestants; the Jews had long been driven out; and nobody even entertained the thought of letting any Muslims in. And yet the Scientific Revolution began in London and Paris rather than in Cairo or Istanbul.

The ability to compartmentalize rationality probably has a lot to do with the structure of our brain. Different parts of the brain are responsible for different modes of thinking. Humans can subconsciously deactivate and reactivate those parts of the brain that are crucial for skeptical thinking. Thus Adolf Eichmann could shut down his prefrontal cortex while listening to Hitler give an impassioned speech, and then reboot it while poring over the Auschwitz train schedule.

Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why scholars often had to decide whether they served the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity? Socrates chose the truth and was executed. The most powerful scholarly establishments in history — whether of Christian priests, Confucian mandarins or Communist ideologues — placed unity above truth. That’s why they were so powerful.

Yuval Noah Harari lets Russians delete Putin’s lies from translation of his book
By Haaretz

In a section of the book about the “post-truth” era, the Russian translation omitted paragraphs relating to the 2014 occupation of the Crimean Peninsula and the initial claims made by Putin and other officials that no Russian soldiers had been sent there. The paragraphs were replaced by a reference to Trump, who Harari claims has made 6,000 false public statements since assuming the presidency.

In the chapter “The View from the Kremlin,” a number of sentences were changed, also relating to the Crimean occupation, with the war and the annexation described in much milder terms in Russian. For example, the Russian edition states that the Russians “don’t consider the annexation of Crimea as the invasion of a foreign country,” and that as a result of the annexation, “Russia accumulated important strategic assets.”

“My objective is to bring the fundamental ideas about the dangers of dictatorship, extremism and fanaticism to the broadest possible audience, and that includes people who live in countries that aren’t democratic,” he said. “A few examples in the book could put off that audience or lead to censorship of the book by the authorities. For this reason, I will occasionally permit changes in a few examples, but not to the general idea.”

Pamela Meyer: How to spot a liar
By Pamela Meyer

Lying is a cooperative act. Think about it, a lie has no power whatsoever by its mere utterance. Its power emerges when someone else agrees to believe the lie.

So I know it may sound like tough love, but look, if at some point you got lied to, it’s because you agreed to get lied to. Truth number one about lying: Lying’s a cooperative act. Now not all lies are harmful. Sometimes we’re willing participants in deception for the sake of social dignity, maybe to keep a secret that should be kept secret, secret. We say, “Nice song.” “Honey, you don’t look fat in that, no.” Or we say, favorite of the digerati, “You know, I just fished that email out of my Spam folder. So sorry.”

But there are times when we are unwilling participants in deception. And that can have dramatic costs for us. Last year saw 997 billion dollars in corporate fraud alone in the United States. That’s an eyelash under a trillion dollars. That’s seven percent of revenues. Deception can cost billions. Think Enron, Madoff, the mortgage crisis. Or in the case of double agents and traitors, like Robert Hanssen or Aldrich Ames, lies can betray our country, they can compromise our security, they can undermine democracy, they can cause the deaths of those that defend us.

Deception is actually serious business. This con man, Henry Oberlander, he was such an effective con man, British authorities say he could have undermined the entire banking system of the Western world. And you can’t find this guy on Google; you can’t find him anywhere. He was interviewed once, and he said the following. He said, “Look, I’ve got one rule.” And this was Henry’s rule, he said, “Look, everyone is willing to give you something. They’re ready to give you something for whatever it is they’re hungry for.” And that’s the crux of it. If you don’t want to be deceived, you have to know, what is it that you’re hungry for?

Gold standard for entrepreneurs
By The Herald

We feel that respectful mention is also due to Sulun Osman of Istanbul, whose genius lay in his ability to sell famous buildings and public property to members of the public. Among his transactions during a career spanning 25 years was the sale of the bridge of the Golden Horn, and the disposal of the Simplon-Orient Express to an eager purchaser. He also sold the clocks in the city squares of Istanbul. “I waited under one of the clocks until someone stopped to correct his watch with the time shown. I then asked him for 2.50 lire, and when he asked me why I told him I owned the clock. While we were arguing an accomplice of mine came along, looked at the clock, set his watch by it, and paid me 2.50 lire.

“The stranger was a trader from Anatolia. He asked me how much I was making from the business. I told him and in the end I sold him the two clocks in Beyazit Square for £100.”

‘Nigerian prince’ email scams still rake in over $700,000 a year—here’s how to protect yourself
By Megan Leonhardt

The reason these scams are so effective is that they present victims with a “perfect storm of temptations,” Dr. Frank McAndrew, a social psychologist and professor at Illinois-based Knox College, tells CNBC Make It.

First, these scams play on people’s greed. Many times, the scam is set up in a way where victims are promised that they’ll make a hefty financial profit without much effort, McAndrew says. In most successful scams, the fraudsters also prey on your desire to be a hero.

“We get the opportunity to feel good about ourselves by helping another person in need,” McAndrew says. “After all, what could be more noble than helping an orphan in need or helping some poor soul recover money that rightfully belongs to them in the first place?”

The Email Scam with Centuries of History
By Rosie Cima

This kind of scam is called advance fee fraud, because the victim is asked to pay a small fee in advance of receiving a large payment (which never comes). It’s also called a “Nigerian money offer” because many of these email scams — although certainly not all of them — are operated out of Nigeria, and Nigerian criminals led the wave to revive these scams in the Internet era. In the 1980s, Nigeria was home to a bunch of mail and fax-based advance fee scams. It’s also called a 419 scam, which is the section of the Nigerian criminal code dealing with fraud. This scam’s history reaches much farther back than that, though.

The 1832 memoirs of Eugène François Vidocq, a French criminal-turned-private-investigator, detail a swindle executed by prisoners. “[Prisoners] obtained the address of certain rich persons living in the province,” Vidocq writes, “which was easy from the number of prisoners who were constantly arriving. They then wrote letters to them, colloquially referred to as ‘letters of Jerusalem.’” The memoir includes an example, reproduced below:

“Sir,–You will doubtlessly be astonished at receiving a letter from a person unknown to you, who is about to ask a favour from you; but from the sad condition in which I am placed, I am lost if some honourable person will not lend me succour: that is the reason of my addressing you, of whom I have heard so much that I cannot for a moment hesitate to confide all my affairs to your kindness. As valet-de-chambre to the marquis de ——, I emigrated with my master, and that we might avoid suspicion we travelled on foot and I carried the luggage, consisting of a casket containing 16,000 francs in gold and the diamonds of the late marchioness.”

You probably know what’s coming next. In order to conceal their identities in transit, the writer says he and the “marquis” had to abandon and hide their gold-and-diamond casket somewhere. When the writer was sent back for the money, he was apprehended before retrieving it, and imprisoned for not having a passport.

“In this cruel situation, having heard mention of you by a relation of my master’s […] I beg to know if I cannot, through your aid, obtain the casket in question and get a portion of the money which it contains. I could then supply my immediate necessities and pay my counsel, who dictates this, and assures me that by some presents, I could extricate myself from this affair.

“Receive, sir, &c.


The letter had a 20% response rate, according to Vidocq.

Why Do Nigerian Scammers Say They are From Nigeria?
By Cormac Herley

Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims, the Nigerian scammer has an overriding need to reduce false positives. By sending an email that repels all but the most gullible, the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.
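Herley’s point is a base-rate argument, and it can be made concrete with a quick back-of-the-envelope calculation. The sketch below uses entirely invented numbers (the population size, victim count, and reply rates are assumptions for illustration, not figures from the paper): an email so outlandish that only the most gullible answer it draws far fewer replies overall, but a much higher fraction of those replies are worth the scammer’s costly follow-up time.

```python
# Toy illustration of Herley's false-positive argument.
# All figures below are invented for illustration only.

population = 1_000_000
viable_victims = 100  # people who would ultimately pay (a tiny base rate)

def expected_outcomes(reply_rate_victims, reply_rate_others):
    """Expected true/false positives among repliers, and the resulting precision."""
    true_pos = viable_victims * reply_rate_victims
    false_pos = (population - viable_victims) * reply_rate_others
    precision = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, precision

# A plausible-sounding pitch draws replies from many skeptics who later balk,
# each costing the scammer correspondence time for nothing.
tp_a, fp_a, prec_a = expected_outcomes(0.5, 0.001)

# An outlandish "Nigerian prince" pitch repels almost everyone except the
# most gullible, so nearly every reply is worth pursuing.
tp_b, fp_b, prec_b = expected_outcomes(0.4, 0.00001)

print(f"plausible email:  {tp_a:.0f} victims among {tp_a + fp_a:.0f} replies "
      f"(precision {prec_a:.1%})")
print(f"outlandish email: {tp_b:.0f} victims among {tp_b + fp_b:.0f} replies "
      f"(precision {prec_b:.1%})")
```

With these made-up rates the absurd email sacrifices a few true victims but cuts false positives by orders of magnitude, which is exactly the trade Herley argues a victim-scarce attacker should make.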

Facebook Connected Her to a Tattooed Soldier in Iraq. Or So She Thought.
By Jack Nicas

The F.B.I. said it received nearly 18,500 complaints from victims of romance or similar internet scams last year, with reported losses exceeding $362 million, up 71 percent from 2017.

The F.B.I. investigates a fraction of those reports, said James Barnacle, head of the F.B.I.’s money laundering unit. Many victims lose a few thousand dollars, and “it’s really hard for an agency like the F.B.I. to work something that low, just because there’s so many cases that come in our door,” he said.

Facebook said it constantly removes impostor accounts with the help of software, human reviewers and user reports. Its software also scans for scammers and locks accounts until owners can provide proof of identity. It has a video warning people of scams.

Nigeria has become synonymous with online scams, fairly or not. Internet access is easy, poverty is widespread and English is commonly spoken. And those who learn the trade pass it to others, said the men who talked to The Times about the internet schemes.

They called their victims “clients” and themselves “Yahoo Boys,” a nod to the online chat service Yahoo Messenger where love scams gained traction nearly 20 years ago. Now they ply their trade on Facebook and Instagram, they said, because that’s where the people are.

Three Nigerian men, age 25, who spoke on the condition of anonymity, said they conned people on Facebook to pay for their education at Lagos State University.

They said they previously made $28 to $42 a month in administrative jobs or pressing shirts. With love hoaxes, the money was inconsistent but more plentiful. One estimated he made about $14,000 in two years; another took in $28,000 in three years.

Nigerian authorities have publicized raids on Yahoo Boys, but the scammers said they do not worry. Some said they paid bribes to evade arrest.

Facebook was becoming tougher on their profession, they added, but Instagram was easy to elude. If their accounts were blocked, they bought new ones; a six-month-old profile cost about $14, one said.

Harvard grad studies cons and how to avoid them
By Christina Pazzanese

GAZETTE: Are there more instances of elaborate frauds and scams these days or does it just seem like there are because of social media?

KONNIKOVA: It just seems like it because of social media. People are drawing more attention to it. Cons have always existed; they will always exist. Social media lowers the barrier of entry. I think there are more small-time cons because it’s become easier, but overall, there’s nothing, to me, that says there’s a rise in big cons right now. We’ve become more susceptible and you don’t have to be quite as good to be a con artist. The bad ones are the ones getting caught. The truly good ones, the ones we don’t know about because they’ve never gotten caught, those people were able to operate without technology. Now, there’s just more small fish who are able to do things that they wouldn’t have been able to do before because they weren’t talented enough. Social media makes it so much easier both in terms of crafting a false persona and also in terms of finding victims because we are just so incredibly stupid about what we share online.

The Ms.Scribe Story
By The Duchess of Richmond

The primary purpose of this fandom biography, as with any story, is to provide entertainment and amusement for its readers. Msscribe is a fascinating person and I believe that many people will find her eventful fandom life interesting. My secondary purpose is to preserve fandom history that is in danger of being lost — for instance, I rescued the Fermatojam pages originally captured and posted by anatsuno from Google cache. Msscribe’s story includes or touches upon many scandals and conflicts of the past five or six years in a certain sector of the Harry Potter fandom.

We Are All MsScribe
By Scott Alexander

But if you insist on skipping the (admittedly super-long) link above, here is what happens:

In the early 2000s, Harry Potter fanfiction authors and readers get embroiled in an apocalyptic feud between people who think that Harry should be in a relationship with Ginny vs. people who think Harry should be in a relationship with Hermione. This devolves from debate to personal attacks to real world stalking and harassment to legal cases to them splitting the community into different sites that pretty much refuse to talk to each other and ban stories with their nonpreferred relationship.

These sites then sort themselves out into a status hierarchy with a few people called Big Name Fans at the top and everyone else competing to get their attention and affection, whether by praising them slavishly or by striking out in particularly cruel ways at people in the “enemy” relationship community.

A young woman named MsScribe joins the Harry/Hermione community. She proceeds to make herself popular and famous by use of sock-puppet accounts (a sockpuppet is when someone uses multiple internet nicknames to pretend to be multiple different people) that all praise her and talk about how great she is. Then she moves on to racist and sexist sockpuppet accounts who launch lots of slurs at her, so that everyone feels very sorry for her.

At the height of her power, she controls a small army of religious trolls who go around talking about the sinfulness of Harry Potter fanfiction authors and especially MsScribe and how much they hate gay people. All of these trolls drop hints about how they are supported by the Harry/Ginny community, and MsScribe leads the campaign to paint everyone who wants Harry and Ginny to be in a relationship as vile bigots and/or Christians. She classily cements her position by convincing everyone to call them “cockroaches” and post pictures of cockroaches whenever they make comments.

Throughout all this, a bunch of people are coming up with ironclad evidence that she is the one behind all of this (this is the Internet! They can just trace IPs!) Throughout all of it, MsScribe makes increasingly implausible denials. And throughout all of it, everyone supports MsScribe and ridicules her accusers. Because really, do you want to be on the side of a confirmed popular person, or a bunch of suspected racists who we know are racist because they deny racism, which is exactly what we would expect racists to do?

MsScribe writes negatively about a fan with cancer asking for money, and her comments get interpreted as being needlessly cruel to a cancer patient. Her popularity drops and everyone takes a second look at the evidence and realizes hey, she was obviously manipulating everyone all along. There is slight sheepishness but few apologies, because hey, we honestly thought the people we were bullying were unpopular.

MsScribe later ended up switching from Harry Potter fandom to blogging about social justice issues, which does not surprise me one bit.

A Report on Damage Done by One Individual Under Several Names
By Laura J. Mixon

To give readers a sense of BS/RH’s actions, I’m interspersing a few screencaps from her prior verbal attacks. I think it’s important for people to see her own words, in order to avoid the risk of having people gloss over just how harmful her words have been.

BS/RH’s online attacks against others in the SFF community extend far beyond simply a few youthful indiscretions in her past, foul language, or having posted harsh reviews of some people’s books. Her assaults, using multiple identities, are repeated, vicious, and energetic. They have spilled out across the years, well beyond the edges of fannish and writing communities online. BS/RH’s attacks have destroyed communities and harmed careers and lives in the real world.

  1. She has been involved in efforts to suppress the publication of fiction and reviews for those works that in her sole opinion should not be published.
  2. She and her associates have pressured con-runners to disinvite speakers from panels and readings, constraining their ability to do business.
  3. She routinely accuses people of doing the very harm to her that she is in fact doing to them—of stalking, threatening, and harassing—when they have done nothing except try to get as far away from her as they can.
  4. At least one of her targets was goaded into a suicide attempt.
  5. She has issued extremely explicit death, rape, and maiming threats against a wide variety of people across the color, gender, sexual-orientation, and dis/ability spectrum.
  6. She and her supporters argue that she punches up, but the truth is that she punches in all directions. The bulk of her targets—despite her progressively-slanted rhetoric—have been women, people of color, and other marginalized or vulnerable people.
  7. She has single-handedly destroyed several online SFF, fanfic, and videogaming communities with her negative, hostile comments and attacks.
  8. After an attack, she deletes her most inflammatory posts and accounts and departs, leaving her targets reeling and others who come later scratching their heads, unable to find evidence and wondering what all the fuss was about.
  9. She has stalked SFF fans online for months and years, simply for posting that they liked an author’s book that she did not, or for speaking up against her when she called their favorite author (often a POC) epithets like “stupid fuck,” and calling them “morons” for liking that author.
  10. She has chased down positive reviews of authors’ works, to appear there and frighten reviewers and fans away from promoting the writers’ works, interfering with their ability to get publicity for their publications. Of the most extreme cases, lasting at least a year, two were launched against women writers of color.
  11. Her attacks have not diminished over time; they have simply become more skilled and difficult to deflect. As recently as three weeks ago as I write this, she was lying to her supporters to manipulate them into attacking one of her latest victims.
  12. She excels at shifting her tone and her strategy, seeming friendly and helpful one moment and vicious and harsh the next. She has mastered the crafting and dissemination of false narratives that seem persuasive to observers who are not familiar with the harm she has done in the past.
  13. In light of the harm she has done, her apologies do not even come close to addressing the damage she has done, much less undoing it.

I know the above facts to be true because I directly witnessed them myself; researched the evidence still available from online forums; or received information from people who have been harmed by her, who have entrusted me with evidence (screencaps, copies of incriminating emails, web archives, and witness accounts) of the actions I describe above.

Some reading this will note that few people have come forward with their stories as a result of the recent dust-up, and most of those who have spoken out have done so anonymously.

They have not come forward because they are afraid.

They are afraid they will not be believed. They are afraid that their experiences will be discounted or minimized. That people will make excuses for her, or believe her when she tells the world that they are the villains and she is the victim. Their run-ins with her were in many cases among the worst experiences in their lives. When they resurface on the web, she often finds them again and re-launches her vitriolic attacks.

RIP Culture War Thread
By Scott Alexander

People settled on a narrative. The Culture War thread was made up entirely of homophobic transphobic alt-right neo-Nazis. I freely admit there were people who were against homosexuality in the thread (according to my survey, 13%), people who opposed using trans people’s preferred pronouns (according to my survey, 9%), people who identified as alt-right (7%), and a single person who identified as a neo-Nazi (who as far as I know never posted about it). Less outrageous ideas were proportionally more popular: people who were mostly feminists but thought there were differences between male and female brains, people who supported the fight against racial discrimination but thought there could be genetic differences between races. All these people definitely existed, some of them in droves. All of them had the right to speak; sometimes I sympathized with some of their points. If this had been the complaint, I would have admitted to it right away. If the New York Times can’t avoid attracting these people to its comment section, no way r/ssc is going to manage it.

But instead it was always that the thread was “dominated by” or “only had” or “was an echo chamber for” homophobic transphobic alt-right neo-Nazis, which always grew into the claim that the subreddit was dominated by homophobic etc neo-Nazis, which always grew into the claim that the SSC community was dominated by homophobic etc neo-Nazis, which always grew into the claim that I personally was the most homophobic etc neo-Nazi of them all. I am a pro-gay Jew who has dated trans people and votes pretty much straight Democrat. I lost distant family in the Holocaust. You can imagine how much fun this was for me.

People would message me on Twitter to shame me for my Nazism. People who linked my blog on social media would get replies from people “educating” them that they were supporting Nazism, or asking them to justify why they thought it was appropriate to share Nazi sites. I wrote a silly blog post about mathematics and corn-eating. It reached the front page of a math subreddit and got a lot of upvotes. Somebody found it, asked if people knew that the blog post about corn was from a pro-alt-right neo-Nazi site that tolerated racists and sexists. There was a big argument in the comments about whether it should ever be acceptable to link to or read my website. Any further conversation about math and corn was abandoned. This kept happening, to the point where I wouldn’t even read Reddit discussions of my work anymore. The New York Times already has a reputation, but for some people this was all they’d heard about me.

Some people started an article about me on a left-wing wiki that listed the most offensive things I have ever said, and the most offensive things that have ever been said by anyone on the SSC subreddit and CW thread over its three years of activity, all presented in the most damning context possible; it started steadily rising in the Google search results for my name. A subreddit devoted to insulting and mocking me personally and Culture War thread participants in general got started; it now has over 2,000 readers. People started threatening to use my bad reputation to discredit the communities I was in and the causes I cared about most.

Some people found my real name and started posting it on Twitter. Some people made entire accounts devoted to doxxing me in Twitter discussions whenever an opportunity came up. A few people just messaged me letting me know they knew my real name and reminding me that they could do this if they wanted to.

Some people started messaging my real-life friends, telling them to stop being friends with me because I supported racists and sexists and Nazis. Somebody posted a monetary reward for information that could be used to discredit me.

One person called the clinic where I worked, pretended to be a patient, and tried to get me fired.

At Psychiatric Emergency Rooms, Fake Patients Take a Heavy Toll
By Jacob Appel

Malingering — the act of faking illness for personal gain — is far more widespread than the public might suspect. (It is different from Munchausen syndrome, in which the tendency to feign illness is caused by a genuine psychiatric disorder.) In my decade of experience at several psychiatric emergency rooms around New York City, I’ve rarely worked a 12-hour shift without confronting at least one, and often several, patients seeking hospitalization under false pretenses. A recent study that a colleague and I published in Psychiatric Services found that one in every five patients evaluated at a psychiatric emergency room in lower Manhattan over the course of a month was strongly suspected to be malingering.

The motivations of malingerers vary considerably. Three years ago, I published a rudimentary nosology in the newsletter of The American Academy of Psychiatry and the Law that categorized malingerers into three types. The first type simply seeks “three hots and a cot” — three warm meals and a place to sleep — in hopes of avoiding homeless shelters and food pantries. These men and women, some of whom do suffer from underlying psychiatric illnesses, reflect social service failures on the part of society. A second type of malingerer arrives at emergency rooms in search of prescriptions for opiates or benzodiazepines. While some of these patients may plan to resell their medications, the vast majority do suffer from a severe illness or addiction — though they may exaggerate the extent of their pain and anxiety.

These first two species of malingerers can be thought of as “soft” malingerers. They have genuine and legitimate needs that should not be dismissed merely because they present to hospitals on false pretenses. At the same time, it makes little sense to offer a woman a $750 clinical workup when all she wants is a $5 sandwich, or to house a man on a thousands-per-night psychiatric ward when he could stay at a luxury hotel for far less. Simply having a hospital operate its own safe, clean, and easily accessible homeless shelter adjacent to the psychiatric ER could conserve vast resources.

A third type of malingerer is rarer, yet far more pernicious. These individuals can be thought of as “hard” malingerers, and they seek ends that are nefarious to various degrees: avoiding a court date, convincing a judge to suspend child support payments, hiding from a loan shark or drug dealer, and so forth. I once encountered a patient in an ER who appeared to be seeking an alibi for his extramarital affair. (One diagnostic clue for pernicious malingering is that the patient wishes neither to be admitted to the hospital nor discharged from the emergency room, but expresses a desire to stay for a precisely enumerated period of time.)

A Suspense Novelist’s Trail of Deceptions
By Ian Parker

A seductive man lies about a fatal disease, then defends the lie by pretending to be his brother. The brother’s name is Blake. When I asked Hannah if the plot was inspired by real events, she was evasive, and more than once she said, “I really like Dan, and he’s only ever been good to me.” She also noted that, before starting to write “Closed Casket,” she described its plot to Mallory: “He said, ‘Yes, that sounds amazing!’ ” Hannah, then, can’t be accused of discourtesy.

But she acknowledged that there were “obvious parallels” between “Closed Casket” and “rumors that circulated” about Mallory. She also admitted that the character of Kimpton, the American doctor, owes something to her former editor. I had noticed that Kimpton speaks with an affected English accent and—in what works as a fine portrait of Mallory, mid-flow—has eyes that “seemed to flare and subside as his lips moved.” The passage continues, “These wide-eyed flares were only seconds apart, and appeared to want to convey enthusiastic emphasis. One was left with the impression that every third or fourth word he uttered was a source of delight to him.” (Chris Parris-Lamb, shown these sentences, said, “My God! That’s so good.”) While Hannah was writing “Closed Casket,” her private working title for the novel was “You’re So Vain, You Probably Think This Poirot’s About You.”

A publishing employee in New York told me that, in 2013, Hannah had become suspicious that Mallory wasn’t telling the truth when he spoke of making a trip to the U.K. for cancer treatment, and had hired a detective to investigate. This suggestion seemed to be supported by an account, on Hannah’s blog, of hiring a private detective that summer. Hannah wrote that she had called him to describe a “weird conundrum.” Later, during a vacation with her husband in Agatha Christie’s country house, in Devon, she called to check on the detective’s progress; he told her that “there was a rumor going round that X is the case.”

“You’re supposed to be finding out if X is true,” Hannah told the detective.

“I’m not sure how we could really do that,” he replied. “Not without hacking e-mail accounts and things like that—and that’s illegal.”

Asked about the blog post, Hannah told me that she had thought of hiring a detective to check on Mallory, and had discussed the idea with friends, but hadn’t followed through. She had, however, hired a detective to investigate a graffiti problem in Cambridge. I said that I found this hard to believe. She went on to say that she had forgotten the detective’s name, she had deleted all her old e-mails, and she didn’t want to bother her husband and ask him to confirm the graffiti story. All this encouraged the thought that the novelist now writing as Agatha Christie had hired a detective to investigate her editor, whom she suspected of lying about a fatal disease.

The Internet Has a Cancer-Faking Problem
By Róisín Lanigan

This condition of faking illness online has a name: “Munchausen by internet,” or MBI. It’s a form of factitious disorder, the mental disorder formerly known as Munchausen syndrome, in which people feign illness or actually make themselves sick for sympathy and attention. According to Marc Feldman, the psychiatrist at the University of Alabama at Tuscaloosa who coined the term MBI back in 2000, people with the condition are often motivated to lie by a need to control the reactions of others, particularly if they feel out of control in their own lives. He believes that the veil of the internet makes MBI much more common than traditional factitious disorder, which is estimated to affect about 1 percent of hospital patients.

Dawn Branley-Bell, a psychologist at Northumbria University who studies extreme online behaviors, agrees that digital life can encourage deceptive behavior. “The internet makes it easier to portray ourselves as something we are not,” she says. “Trolls often justify their actions by saying the online world is not ‘real life,’ so it doesn’t matter what they do or say online. It is possible some users refuse to believe their [actions] online have real, psychological effects upon others.” Once the lie is told, she notes, it can be difficult to backtrack.

Why Some Doctors Purposely Misdiagnose Patients
By Olga Khazan

In 2013, nearly 400 people sued a hospital and doctor in London, Kentucky, for needlessly performing heart procedures to “unjustly enrich themselves,” as the Courier-Journal in Louisville reported. Last year, a Texas doctor was accused of “falsely diagnosing patients with various degenerative diseases including rheumatoid arthritis,” according to CNN. And a different Kentucky doctor was sentenced to 60 months in federal prison for, among other things, implanting medically unnecessary stents in his patients.

Saccoccio told me that while it’s hard to determine how common the intentional-misdiagnosis style of fraud is, the more typical variety is called “upcoding”: doing a cheaper procedure but billing for a more expensive one. (Awaad is accused of doing this, as well.) U.S. government audits suggest that about 10 percent of all Medicare claims are not accurate, though Malcolm Sparrow, a Harvard professor of public management, told me that’s likely an underestimate. He added that it’s not possible to know how many of these inaccuracies are false diagnoses, rather than other kinds of errors.

Sparrow speculated that doctors cheat the system because “they believe they won’t get caught, and mostly they don’t get caught.” There’s also the fact that doctors often do know more than their patients about various diseases. Sometimes, fraudulent doctors lord that knowledge over patients who get suspicious. In 2015, Farid Fata was sentenced to 45 years in prison for administering unnecessary chemotherapy to 553 patients. “Several times when I had researched and questioned his treatment, he asked if I had fellowshipped at Sloan Kettering like he had,” one of his patients, Michelle Mannarino, told Healthcare Finance.

Some lawyers argue that many of the doctors who get swept up in these kinds of cases are doing honest work: These doctors simply have a different opinion than another doctor who is later asked to review their diagnoses. Writing in The Wall Street Journal last year, the lawyers Kyle Clark and Andrew George pointed out that a decade ago, most health-care fraud centered on something the doctor failed to do, such as neglecting to treat a patient who was actually sick. Now prosecutors are bringing more and more so-called medical-necessity cases, which focus on a test or procedure doctors did do that they shouldn’t have. “Doctors can, and do, honestly disagree by wide margins,” Clark and George wrote. “Show two doctors the same image, and you may get wildly varying—yet highly confident—opinions of what it shows.”

The Fake Sex Doctor Who Conned the Media Into Publicizing His Bizarre Research on Suicide, Butt-Fisting, and Bestiality
By Jennings Brown

When I asked Sendler during our main interview, at the Gizmodo office, if he saw these falsifications as marketing or misrepresentation, his demeanor shifted, and he finally spoke to me in a way that seemed earnest: “That’s sort of subjective, right? Isn’t everyone sort of misrepresenting themselves in every way?” he said.

Then he seemed to threaten me. “Like I don’t know you, right? You might be working here today but you might not work here tomorrow, right? And I might feel intimidated by you today but tomorrow you don’t have a job, right? And I still have mine, right?”

As his diatribe continued, Sendler helped me realize why people like him believe they can get away with falsifying their entire career and lying to vulnerable people.

“You have to understand that in the world where people use—even the President of this country uses Twitter and creates falsehoods every day,” Sendler said. “How do we then quantify the degree of guilt that you can do, right? Because, you see, if the most powerful man can do this eight, nine thousand times… and he doesn’t care. He still does his thing, and people still support him because they believe in the agenda that he executes.”

He is right, in this case. If someone can inflate their business accomplishments for years, then become a world leader who rules by sowing chaos with constant distortion—what’s to stop a confident, charismatic serial liar from manifesting a psychological career and being treated like a medical luminary?

“Sometimes it really matters how you can sell things and convince people,” Sendler told me, moments before he left my office. “Reality is inflatable and everything is part of the game.”

A chemistry is performed
By Deborah Friedell

Nothing that Carreyrou has uncovered about Holmes’s life before Theranos suggests that she had the makings of a world-class scam artist. The best he can come up with is that, as a child, she was too competitive at Monopoly: ‘When she occasionally lost, she stormed off in a fury.’ She told Auletta that she had kept ‘a notebook with a complete design for a time machine that I designed when I must have been, like, seven. The wonderful thing about the way I was raised is that no one ever told me that I couldn’t do those things.’ The old anecdotes, meant to tell one kind of story, now coldly service a different one. She thought she could do anything. ‘I think the minute that you have a backup plan, you’ve admitted that you’re not going to succeed.’ She figured that once she got the Edison to work, her lies wouldn’t matter; she’d probably win the Nobel Prize. On her desk, the one custom-designed to look like the president’s, she kept a paperweight that said: ‘What would you attempt to do if you knew you could not fail?’ She gave copies of Paulo Coelho’s novel The Alchemist to her employees, because it had taught her that ‘when you really want something, all the universe conspires in helping you to achieve it.’ Bad Blood wasn’t written to be a parable for our current moment, but it may as well have been.

How an Aspiring ‘It’ Girl Tricked New York’s Party People — and Its Banks
By Jessica Pressler

She wasn’t superhot, they pointed out, or super-charming; she wasn’t even very nice. How did she manage to convince an enormous number of cool, successful people that she was something she clearly was not? Watching the Rikers guard shove Fast Company into a manila envelope, I realized what Anna had in common with the people she’d been studying in the pages of that magazine: She saw something others didn’t. Anna looked at the soul of New York and recognized that if you distract people with shiny objects, with large wads of cash, with the indicia of wealth, if you show them the money, they will be virtually unable to see anything else. And the thing was: It was so easy.

‘Anna Delvey,’ Fake Heiress Who Swindled N.Y.’s Elite, Is Sentenced to 4 to 12 Years in Prison
By Jan Ransom

… the judge said Ms. Sorokin showed no remorse for her actions throughout the trial, and seemed more concerned about her clothing and which actress would play her in an upcoming Netflix series about the case, set to be produced by Shonda Rhimes.

A juror who visited the courtroom to watch the sentencing, and asked to remain anonymous, said she had been annoyed by Ms. Sorokin’s apparent self-centeredness. Her concern about her wardrobe led to a two-hour delay in the trial one day.

“She was interested in the designer clothes, the champagne, the private jets, the boutique hotel experience and the exotic travel that went along with it — everything that big money could buy,” Justice Kiesel said. “But she didn’t have big money. All she had was a big scam.”

Jurors agreed with prosecutors that her gilt-edged life was an elaborate ruse financed by lies.

Ms. Sorokin stiffed hotels, persuaded a bank employee to give her a $100,000 line of credit, swayed a private jet company to let her fly on credit and tried to secure a $25 million loan from a hedge fund. In all, she stole about $213,000 in cash and services.

Still, the jury found her not guilty of the most serious offense — faking records in an attempt to obtain a $22 million loan. She was also acquitted of stealing from a friend who said Ms. Sorokin duped her into covering the cost of a $60,000 vacation to Morocco.

To many friends, there was every reason to believe that Ms. Sorokin was a wealthy German heiress with so much money that she frequently doled out $100 tips and flew on a private jet to Berkshire Hathaway’s annual investment conference.

Mr. Spodek, her lawyer, said during the trial that people believed what they wanted about Ms. Sorokin. She was enabled, he said, by a system “seduced by glamour and glitz.” She intended to pay back her creditors, he said.

“Through her sheer ingenuity, she created the life that she wanted for herself,” he said during the trial. “Anna was not content with being a spectator, but wanted to be a participant.”

A revenge-seeking fitness expert created 369 fake Instagram accounts and staged a kidnapping
By Brittany Shammas

The threats came from 369 Instagram accounts and 18 different email addresses. But they were all controlled by one woman.

FBI agents say fitness trainer and mother of four Tammy Steffen used those accounts to unleash a torrent of harassment on her ex-business partner at a Tampa gym and on her competitors in the bodybuilding world.

“I plan to slice you up into little pieces,” read one message sent from an innocuously named Instagram account, catloverexpress. “Your blood shall I taste.”

On Friday, the 37-year-old woman was sentenced to almost five years in prison after pleading guilty to federal charges of cyberstalking and sending threatening communications online.

“The extent of her crime is astounding,” FBI special agent Kristin Rehler told WFLA.

But the cyberstalking, which targeted five victims in three states, is just one aspect of a strange saga. Authorities say Steffen also deployed a headless baby doll and a fake kidnapping scheme to try to exact revenge on her former business partner. The reason? She believed he had sabotaged her chances of winning an online fitness competition — an allegation authorities said is untrue.

It’s been two weeks since TwoX became a default…
By Deimorz

I mostly just want to urge people to not take everything at face value. There are a lot of people that seem quite invested in trying to get the mods to remove this subreddit from the defaults, and unfortunately that means that they’re willing to try to cheat, lie, and do various other unsavory things to influence this decision.

For example, the OP of this thread was using at least 5 alternate accounts to attempt to tilt things in here, including upvoting their own submission and supportive comments (and they’ve now been banned from the site for that). There’s generally just a great deal of attempted manipulation going on around the topic of 2XC being a default, between people attempting to manipulate votes, using multiple accounts to post comments supportive of their side, organized groups brigading relevant posts, etc. Some people have even been performing what’s often referred to as a “false flag”, where even though they’re actually normally a contributing member of the subreddit, they’ve been creating alt accounts to make or upvote harassing comments/messages in order to make that issue seem more prevalent than it actually is.

And on the topic of harassing PMs, one of the most frustrating aspects of the situation from our perspective is that there’s been a significant amount of lying on this end. We’ve received quite a few reports about users who have claimed to have received a large amount of harassment, but when we investigate we find that they’ve often never received any PMs at all, or only one message when they claim to have received many. Some people have even gone so far as creating alts to PM themselves with, so that they can take screenshots for “proof”.

I’m certainly not trying to say that there hasn’t been any harassment, because some definitely has actually occurred (and please report it to us by sending a modmail to /r/ if it happens to you). But between the various outside groups trying quite hard to push 2XC out, the false flags, and the lying, please take all claims about it with a large grain of salt.

Hate Crime Hoaxes Are Rare, but Can Be ‘Devastating’
By Audra D. S. Burch

Hoaxes are not tracked formally, but the Center for the Study of Hate and Extremism at California State University, San Bernardino, said that of an estimated 21,000 hate crime cases between 2016 and 2018, fewer than 50 reports were found to be false. The center believes that less than 1 percent of all reported hate crimes are false.

But such false reports can play an outsize role in undermining the credibility of real bias victims and anti-hate efforts. In the aftermath of Mr. Smollett’s arrest, one lawmaker has even promised to draft a bill increasing the penalty for filing false hate crime reports.

“Devastating is how I would describe this Smollett story, especially during this legislative season when some states are trying to pass hate crime reform bills,” said Brian Levin, a national hate crime expert and the California center’s director. “This has the potential to eclipse the real facts about hate crimes.”

Hate crime hoaxes, like Jussie Smollett’s alleged attack, are more common than you think
By Wilfred Reilly

A great many hate crime stories turn out to be hoaxes. Simply looking at what happened to the most widely reported hate crime stories over the past 4-5 years illustrates this: not only the Smollett case but also the Yasmin Seweid, Air Force Academy, Eastern Michigan, Wisconsin-Parkside, Kean College, Covington Catholic, and “Hopewell Baptist burning” racial scandals all turned out to be fakes. And these cases are not isolated outliers.

Doing research for a book, Hate Crime Hoax, I was able to easily put together a data set of 409 confirmed hate hoaxes. An overlapping but substantially different list of 348 hoaxes exists as well, and researcher Laird Wilcox put together another list of at least 300 in his still-contemporary book Crying Wolf. To put these numbers in context, a little over 7,000 hate crimes were reported by the FBI in 2017 and perhaps 8-10% of these are widely reported enough to catch the eye of a national researcher.

There is very little brutally violent racism in the modern USA. Fewer than 7,000 real hate crimes are reported in a typical year. Inter-racial crime is quite rare; 84% of white murder victims and 93% of Black murder victims are killed by criminals of their own race, and the person most likely to kill you is your ex-wife or husband. When violent inter-racial crimes do occur, whites are at least as likely to be the targets as are minorities. Simply put, Klansmen armed with nooses are not lurking on Chicago street corners.

In this context, what hate hoaxers actually do is worsen generally good race relations, and distract attention from real problems. As Chicago’s disgusted top cop, Police Superintendent Eddie Johnson, pointed out yesterday, skilled police officers spent four weeks tracking down Smollett’s imaginary attackers — in a city that has seen 28 murders as of Feb. 9th, according to The Chicago Tribune. We all, media and citizens alike, would be better served to focus on real issues like gun violence and the opiate epidemic than on fairy tales like Jussie’s.

Hate Crime Hoaxes and Why They Happen
By Wilfred Reilly

It is a tragic truth of human history that fake hate crimes have, on more than one occasion, been the precursor to real atrocities. The best-known example is probably the “blood libel” against the Jews. Throughout medieval Europe, Christians started rumors that Christian children were being killed and their blood used in Jewish religious rituals. These stories were, invariably, complete canards. But the false belief that the Jewish people were perpetrating violence against Christians became the inspiration and excuse for the Christians to commit real violence against the Jews—vicious pogroms in which whole Jewish communities were driven out of their homes and many of them killed horribly.

While the current epidemic of hate-based violence in the United States is mostly an epidemic of hoaxes, and any “race war” going on today exists only in the minds of a few radicals, there are disturbing signs that the fakes are fostering real hostility among the races, which could lead to real violence in the future. Consider, for example, the fact that hate-crime hoaxes are increasingly being perpetrated by white members of the alt-right, with the explicit goal of making black people and leftist causes look bad.

Meet the GOP operatives who aim to smear the 2020 Democrats — but keep bungling it
By Manuel Roig-Franzia and Beth Reinhard

The spectacle that is the Burkman-Wohl partnership launched late last year. The duo hyped a news conference promising to introduce a woman who allegedly claimed to have been raped by Mueller, the special counsel investigating Russian interference in the 2016 presidential election. The woman was a no-show.

Mueller asked the FBI to investigate allegations that women were offered money to make sexual assault claims against him. Burkman and Wohl no longer want to say much about their Mueller probe but have denied offering money for testimony.

A few months later, on April 29, a shocking post went up on Medium, the self-publishing website. It was purportedly written by a 21-year-old gay college student named Hunter Kelly who claimed Democratic presidential candidate Pete Buttigieg had sexually assaulted him.

Just hours later, the story started to unravel.

“I WAS NOT SEXUALLY ASSAULTED,” Kelly posted on Facebook.

In an interview with The Washington Post, Kelly said he met Wohl via Instagram and got a message from him asking, “Do you want to be part of a political operation?”

Wohl’s pitch, according to Kelly, was to work on a Trump-backed project scrutinizing Buttigieg’s record on race relations. Burkman booked a plane ticket for Kelly, a student at Ferris State University in Big Rapids, Mich.

After they got to Burkman’s home, Kelly said, under pressure from Wohl and Burkman he reluctantly signed a statement alleging that Buttigieg had assaulted him in a room at Washington’s Mayflower Hotel in February. Kelly claimed that he had not seen the Medium post before it went online and had not approved its publication. The whole story, he said, was entirely made up.

“Those two are willing to do whatever it takes and to hurt whomever they have to hurt so they can keep the spotlight on them and get what they want,” Kelly told The Post.

Anti-Trump Krassenstein Brothers Claim Jacob Wohl Called Them After Twitter Ban to Fight Internet Censorship Together
By Shane Croucher

Ed and Brian Krassenstein, the brothers best known for their relentless use of Twitter to attack President Donald Trump, have denied breaking the social media platform’s rules after their accounts—followed by hundreds of thousands of users—were permanently suspended.

They also claimed the notorious pro-Trump conspiracy theorist Jacob Wohl, himself banned permanently from the site for using fake accounts and other rule violations, called them shortly after their ban to say they should all join together and fight Twitter and internet censorship.

Wohl, who has a history of making false and outlandish claims, later appeared to take credit for the Krassenteins’ ban in an Instagram post. “Another set of enemies have been vanquished. And all it took was $1,000 on and some bogus emails to twitter execs. #Don’tFuckWithJacobWohl,” posted Wohl.

Critics accuse the Krassensteins of grifting, exploiting the genuine anger at the Trump administration purely for their own commercial gain through the online “resistance” movement. However, the brothers say they are sincere in their beliefs.

Their business history is also chequered: they previously came under scrutiny from federal investigators amid suspicions of fraud relating to their past ventures, The Daily Beast reported.

The feds reportedly seized half a million dollars from the brothers, who were suspected of aiding financial scams, though they were never arrested or charged. They deny any wrongdoing.

Avenatti, Wohl and the Krassensteins Prove Political Media Is a Hucksters’ Paradise
By Matt Taibbi

Michael Avenatti isn’t Icarus, or any other Greek mythical figure. He’s just a jerk. The quote is the self-promoting sleaze-dog lawyer version of Alex Rodriguez owning two portraits of himself in the form of a centaur.

Already charged for attempting to extort Nike and for embezzling $12 million from a batch of clients, he’s been hit with a new indictment. He’s accused of blowing the proceeds of porn star Stormy Daniels’ book deal on things like his monthly $3,900 Ferrari payment, while stalling her with excuses that the publisher was late or “resisting… due to poor sales of [Daniels’s] book.”

The fate of Avenatti-Icarus feels intertwined with Ed and Brian Krassenstein of #Resistance fame. The flying Krassensteins have just been removed from Twitter, allegedly for using fake accounts and “purchasing fake interactions.”

This comes three years after their home was raided by federal agents, and nearly two after a forfeiture complaint made public the Krassensteins’ 13-year history of owning and operating sites pushing Ponzi-like “High-Yield Investment Plans” or HYIPs. Authorities said the pair “generated tens of thousands of complaints by victims of fraudulent HYIPs.” (Emphasis mine)

After their Twitter ban this week, in one of the most perfect details you’ll ever find in a news story, the Krassensteins were contacted by Jacob Wohl, the infamous pro-Trump conspiracy peddler who is himself banned. Wohl reportedly proposed they all band together to “fight Twitter and internet censorship.”

Confused? You shouldn’t be. The Krassensteins and Wohl are just two sides of the same coin, just as Avenatti is a more transparently pathetic version of the man he claimed to oppose, Donald Trump.

The Trump era has seen the rapid proliferation of a new type of political grifter. He or she often builds huge Twitter followings with hyper-partisan content, dishing out relentless aggression in the form of dunks and hot takes while promising big, Kaboom-y revelations that may or (more often) may not be factual. They often amplify their presence using vast networks of sock-puppet accounts.

Avenatti had 254 cable appearances last year, including 147 on MSNBC and CNN alone in a 10-week period. Cable news bookers fell so madly in love they nearly propelled him into the presidential race, during a time when, among other things, he was allegedly bilking $1.6 million from a paraplegic.

Waytago, cable! Congratulations for giving air time to any slimeball who throws enough coal on your ratings furnace, beginning of course with the president.

How Alex Jones and Infowars Helped a Florida Man Torment Sandy Hook Families
By Elizabeth Williamson

Soon after the Dec. 14, 2012, mass shooting at Sandy Hook Elementary School in Newtown, Conn., Mr. Jones, the right-wing provocateur, began spreading outlandish theories that the killing of 20 first graders and six educators was staged by the government and victims’ families as part of an elaborate plot to confiscate Americans’ firearms.

Many of the most noxious claims originated in the mind of Mr. Halbig, a retired Florida public school official who became fixated on what he called “this supposed tragedy” at Sandy Hook. Court records and a previously unreleased deposition given by Mr. Jones in one of a set of defamation lawsuits brought against him by the families of 10 Sandy Hook victims show how he and Mr. Halbig used each other to pursue their obsession and promote it across the internet.

Over several years, Mr. Jones gave Mr. Halbig’s views an audience by inviting him to be a guest on Infowars, his radio and online show. Infowars gave Mr. Halbig a camera crew and a platform for fund-raising, even as Mr. Halbig repeatedly visited Newtown, demanding thousands of pages of public records, including photos of the murder scene, the children’s bodies and receipts for the cleanup of “bodily fluids, brain matter, skull fragments and around 45 to 60 gallons of blood.”

Given practical support and visibility by Mr. Jones, Mr. Halbig hounded families of the victims and other residents of Newtown, and promoted a baseless tale that Avielle Richman, a first grader killed at Sandy Hook, was still alive.

The deposition and its details about Mr. Jones’s operation and his interactions with Mr. Halbig were made public on Friday, days after Avielle Richman’s father, Jeremy Richman, killed himself in Newtown’s Edmond Town Hall, where the Avielle Foundation, a nonprofit organization dedicated to brain science that the family established in their daughter’s name, had an office.

Beware the Jabberwock
By This American Life

Jon Ronson Mark said he remembered Alex beating Jared unconscious in geography. And then he said this.

Mark Milton We all ended up going to a party, and then some of Bubba’s friends jumped him that night.

Lina Misitzis This was at a party after Alex beat up Jared?

Mark Milton Yeah. Yeah, this was at Mark’s house, out there in McLendon-Chisholm.

Lina Misitzis So he moved to Austin after that party?

Mark Milton Yeah, they sold the condo over in Lakeside Village. And he couldn’t take it from him being jumped by a couple of boys that was Bubba’s friends, and he moved out of there.

Jon Ronson This is the story Josh had told me in the hotel lobby, and that Jared told me. It’s the story Alex swears is untrue. Alex’s family wouldn’t talk to us, so I can’t say exactly why they finally moved out of Rockwall. In our interview, Alex retold the story of the school assembly, but he also named other reasons for their move. Like how his Austin-born mother was homesick, and fed up with all the fighting. In the end, what seems clear is that this fight happened, and the assembly where Alex outed the drug dealing cops probably never did.

If I had to guess, I would say that Alex has replaced a true story, where he’s humiliated at a party, with a different story where he’s a hero, standing up to corrupt cops and getting beaten up for his bravery. In a way, it’s the same story as the one where the whole football team came at him in a phalanx. It’s the character he plays on Infowars, the beleaguered hero attacked from all sides, bloodied but undaunted, and emerging the victor. It’s like stories little kids tell about themselves.

The Many Lies of Carl Beech
By Matthew Scott

No-one should be better at spotting liars than senior detectives and journalists, yet many of these people were taken in by Beech. Perhaps his stories played to their prejudices, or encouraged them to adopt positions they wished to be seen to hold: that “virtually no one lies about sexual abuse,” or that “the establishment” is made up of wicked people who, as a class, are capable of just about anything. In the police videos, Beech sobbed, murmured, appeared to struggle over the more traumatic aspects of his story, and generally behaved the way one might expect were he the victim of appalling abuse. His internet research enabled him to “remember” seemingly telling details, and to draw—as if from memory—the places where he said abuse had taken place. It was only after meticulous investigation of Beech’s story, interviews with his schoolmates and family, a review of his surviving school records, forensic examination of his computers, and even a medical examination for signs of past injuries or broken bones (there were none), that it could be conclusively proved that Carl Beech was a liar.

Exclusive: We Found Archie Carter
By Aaron Freedman

Through emails and documents provided to Jacobin, Carter shows that Quillette was not only negligent in their fact-checking of his fabrication, they actually embellished his story with their own ideological fables.

Yes, Carter did mislead Quillette about his identity, lying to editors that “Archie Carter” was his real name and the Mets were his favorite team. But from then on, the errors were Quillette’s. On Tuesday — two full days before running the piece — an editor asked if Carter could provide proof of ID. He never did, and Quillette inexplicably ran the piece anyway.

Carter, who had only written about NYC-DSA meetings that were publicly reported on, even gave his Quillette editor an out: “If you really want, I could offer one or two more stories.”

Quillette didn’t even acknowledge that bluff by Carter. Instead, an editor replied, “Thanks for the redraft. I’ve given it a polish to bring it more into line with our house style.”

That was a bit of an understatement. A comparison with the original draft Carter had written (verified through a Google Doc link included in his email correspondence with Quillette) makes clear that the publication made an extra effort to add embellishing details to the story — separate from Carter’s original fabrication — in order to advance a right-wing narrative of DSA as hopeless, dithering, anti-working class snowflakes.

For example, it was Quillette, not Carter, that included the line, “My union friends were horrified. While these people spend hours reproaching themselves and each other, real people in America are suffering.”

Quillette also suggested that DSA meetings “would drag on forever in order to accommodate the neuroses of the participants and to ensure that the proceedings observed the norms of ‘inclusivity.’”

“I included this as fish bait,” Carter said. “They took it.”

Quillette editor-in-chief Claire Lehmann didn’t respond to requests for comment, but Carter discussed what he thought he had accomplished with today’s saga.

“My hope is that it does damage right-wing credibility,” he said confidently.

By This American Life

Mike Daisey: And everything I have done in making this monologue for the theater has been toward that end – to make people care. I’m not going to say that I didn’t take a few shortcuts in my passion to be heard. But I stand behind the work. My mistake, the mistake that I truly regret, is that I had it on your show as journalism and it’s not journalism. It’s theater. I use the tools of theater and memoir to achieve its dramatic arc and of that arc and of that work I am very proud because I think it made you care, Ira, and I think it made you want to delve. And my hope is that it makes – has made – other people delve.

Ira Glass: So you’re saying the story isn’t true in the journalistic sense?

Mike Daisey: I am agreeing it is not up to the standards of journalism and that’s why it was completely wrong for me to have it on your show. And that’s something I deeply regret. And I regret that the people who are listening, the audience of This American Life who know that it is a journalistic enterprise, if they feel misled or betrayed, I regret to them as well.

Ira Glass: Right but you’re saying that the only way you can get through emotionally to people is to mess around with the facts, but that isn’t so.

Theater, Disguised as Real Journalism
By David Carr

The easy lesson might be that journalism is not a game of bean bag, and it would be best left to professionals. But we are in a pro-am informational world where news comes from all directions. Traditional media still originate big stories, but many others come from all corners — books, cellphone videos, blogs and, yes, radio shows built on storytelling.

But there is another word for news and information that comes from advocates with a vested interest: propaganda.

It is worth mentioning that professional credentials are not insurance against journalistic scandal. “Marketplace,” the highly regarded business show from American Public Media that uncovered Mr. Daisey’s untruths, recently had to retract a first-person account from Leo Webb, who portrayed himself as an out-of-work former Army sniper who was also a minor league baseball player. Turned out he was neither a veteran nor a ballplayer.

There is nothing in the journalism playbook to prevent a determined liar from getting one over now and again. It is partly because seekers of truth expect the same from others. On the broadcast this weekend, Mr. Glass seemed stunned by Mr. Daisey’s ability to look him in the eye and dissemble.

“I have such a weird mix of feelings about this because I simultaneously feel terrible for you, and also I feel lied to,” Mr. Glass said. “And also, I stuck my neck out for you.”

I sent an e-mail to someone I know who is an expert on journalistic malfeasance to ask if, in a complicated informational age, there was a way to make sure that someone telling an important story had the actual goods.

“All the good editing, fact-checking and plagiarism-detection software in the world is not going to change the fact that anyone is, under the right circumstances, capable of anything and that journalism is essentially built on trust.”

I think Jayson Blair, who responded to my e-mail query, may be on to something.

The End of Detecting Deception
By Joe Navarro M.A.

In 2016, I wrote an article for readers of Psychology Today looking at over two hundred DNA exonerations: people on death row exonerated after definitive DNA tests confirmed they were not the culprits; it was not their saliva, blood, sweat, or semen found at the crime scene. What was startling when I burrowed deep into all these cases was that, in each and every instance, the law enforcement officers were sure the suspect was lying, but not one officer could detect the truth. Not one officer believed the suspect when they claimed they did not do it. In other words, and I repeat, they could not detect the truth, but they were certain they could detect deception. This wasn’t just embarrassing—lives were at stake—it was shameful. Shameful that anyone should be falsely accused, but also shameful that not one officer in those 261 cases could differentiate the truthful from the deceptive. Why? Because for decades, into the present, law enforcement officers have been taught that they can detect deception through nonverbals, when in fact we humans are no better than a coin toss at detecting deception—a mere fifty-fifty chance. And that is one way you wind up with the innocent on death row.

But it is not just law enforcement. After the popular TV show Lie to Me premiered on the Fox network, all of a sudden there were hundreds of aficionados teaching others how to detect deception, ignoring or twisting what science actually supported and unfortunately further mucking up the field with simplistic assertions. Too often a veneer of science was wrapped around one or two examples for general public consumption, giving the misleading impression that detecting deception is not just easy but assured. That is fallacious and wrong.

If detecting deception were just a parlor game, it would not be an issue, but claiming to detect deception, and teaching as much, has real-life consequences. Those men on death row I spoke of earlier were going to be executed because of law enforcement officers’ false belief that they could detect deception. People have been fired from their jobs because, when questioned, they showed signs of nervousness or stress. Relationships have been strained or ruined by similar false assumptions. The public and law enforcement have been fed a lot of nonsense about detecting deception, and it’s time to stop.

An Epidemic of Disbelief
By Barbara Bradley Hagerty

What recourse does a victim have when police or prosecutors refuse to take her seriously? Virtually none, it seems. She can’t force the police to investigate, and she can’t make prosecutors try her case, because the state has vast discretion in how it handles criminal cases. Some women—in San Francisco, Houston, and Memphis—have tried to sue in federal court. They claimed that the state violated their due-process rights by failing to test their rape kits and fully investigate their claims, and that government policies discriminated against women by giving rape cases a lower priority than violent crimes more commonly committed against men, such as aggravated assault and robbery. Those lawsuits have been dismissed or withdrawn, although a federal appeals court recently ruled that the Memphis lawsuit was incorrectly dismissed and should be reinstated. A class-action suit in Austin, Texas, may have a better chance of showing gender discrimination based on one striking fact: Of the more than 200 sexual-assault cases police referred to prosecutors from July 2016 to June 2017, eight resulted in plea agreements, but only one case went to trial. The victim was a man.

Yet even as police and prosecutors seem stuck in time, our culture is moving forward. This moment feels fundamentally different from previous decades, when a sensational rape trial would trigger a surge of outrage and promises of reform, only to see the scandal ebb from consciousness. Too many women have disclosed their #MeToo moments; too many rape kits have been pulled out of storage rooms. And if the success of the Cleveland task force proves anything, it is this: Rape cases are winnable. Serial rapists could be swept from the streets, and untold numbers of women could escape the worst moments of their lives, if police and prosecutors would suspend their disbelief.

The Case of Al Franken
By Jane Mayer

Franken, his wife, his children, and a group of staff and advisers argued late into the night about what to do. Shaken, Franken had asked his chief of staff, “Do you think I’m this terrible person?” His wife wanted to fight on, but his children worried about his well-being, and everyone’s biggest concern was that, if he remained a pariah, he couldn’t represent Minnesota effectively. Franken could have toughed it out like New Jersey’s Democratic senator Bob Menendez, who hung on despite having been indicted on federal corruption charges, in 2015. (Democrats hadn’t demanded Menendez’s resignation, largely because New Jersey’s governor at the time was a Republican and would have appointed a Republican replacement; in Franken’s case, the Minnesota governor was a Democrat.) But Franken decided he had to resign.

Drew Littman, Franken’s first chief of staff, told me, “People said he didn’t have to do it, but he’s so social—his nerves are exposed all the time. It was like going to school and thinking these people are your friends and they really like you, and then one day they all get together and beat you up. You don’t want to go back to that school after that.” Norman Ornstein, Franken’s friend, said, “It was no more a choice than jumping after they make you walk the plank.”

The next day, Franken gave a short resignation speech. Gillibrand and other Senate colleagues flocked to hug him afterward. But Franken told me, “I’m angry at my colleagues who did this. I think they were just trying to get past one bad news cycle.” For months, he ignored phone calls and cancelled dates with friends. “It got pretty dark,” he said. “I became clinically depressed. I wasn’t a hundred per cent cognitively. I needed medication.”

Franken feels deeply sorry that he made women uncomfortable, and is still trying to understand and learn from what he did wrong. But he told me that “differentiating different kinds of behavior is important.” He also argued, “The idea that anybody who accuses someone of something is always right—that’s not the case. That isn’t reality.”

‘I’m Radioactive’
By Emily Yoffe

Given the millennia during which women have had to take male abuse and suffer under institutionalized denial of and indifference to it, it is perhaps understandable that there is a willingness to shrug off the prospect that some unfairly accused men will become roadkill on the way to a more equitable future. A common feminist dictum holds there are no innocent men, as per the slogans #YesAllMen and #KillAllMen. We are now in a time when a sexual encounter can be recast in a malevolent light, no matter whether the participants all appeared to consider it consensual at the time and no matter how long ago it took place. Looking back, it can be even harder—perhaps impossible—to know what really happened in a private sexual encounter.

But creating injustice today does not undo the harms of the past; instead it undermines the integrity of the necessary effort to address sexual misconduct. When we endlessly expand the categories of victim and perpetrator, we let loose forces that will not stay contained. Anyone, regardless of innocence, can be targeted and found worthy of destruction. And long after the headlines have faded, the damage continues to accrue.

When We Kill
By Nicholas Kristof

Defense lawyers grimly joke that if you’re falsely convicted of a crime, it’s best to be sentenced to death — because then at least you will get pro bono lawyers and media scrutiny that may increase the prospect of exoneration. Researchers find that an exoneration is 130 times more likely for a death sentence than for other sentences.

Yet if death penalties get unusual scrutiny, there are countervailing forces. Researchers find that juries are more likely to recommend the death penalty for defendants who are perceived as showing a lack of remorse — and innocent people don’t display remorse. A second factor is that death sentences are often sought after particularly brutal crimes that create great pressure on the police to find the culprits.

In 1989, for example, after five black teenagers in New York City were arrested in the rape and beating of a white investment banker who became known as the Central Park Jogger, Donald Trump bought full-page newspaper ads calling for the death penalty. The teenagers were later exonerated when DNA evidence and a confession by another man showed that they were innocent of that crime.

Trial by Fire
By David Grann

Short, with a paunch, Vasquez had investigated more than twelve hundred fires. Arson investigators have always been considered a special breed of detective. In the 1991 movie “Backdraft,” a heroic arson investigator says of fire, “It breathes, it eats, and it hates. The only way to beat it is to think like it. To know that this flame will spread this way across the door and up across the ceiling.” Vasquez, who had previously worked in Army intelligence, had several maxims of his own. One was “Fire does not destroy evidence—it creates it.” Another was “The fire tells the story. I am just the interpreter.” He cultivated a Sherlock Holmes-like aura of invincibility. Once, he was asked under oath whether he had ever been mistaken in a case. “If I have, sir, I don’t know,” he responded. “It’s never been pointed out.”

Many arson investigators, it turned out, had only a high-school education. In most states, in order to be certified, investigators had to take a forty-hour course on fire investigation, and pass a written exam. Often, the bulk of an investigator’s training came on the job, learning from “old-timers” in the field, who passed down a body of wisdom about the telltale signs of arson, even though a study in 1977 warned that there was nothing in “the scientific literature to substantiate their validity.”

In 1992, the National Fire Protection Association, which promotes fire prevention and safety, published its first scientifically based guidelines to arson investigation. Still, many arson investigators believed that what they did was more an art than a science—a blend of experience and intuition. In 1997, the International Association of Arson Investigators filed a legal brief arguing that arson sleuths should not be bound by a 1993 Supreme Court decision requiring experts who testified at trials to adhere to the scientific method. What arson sleuths did, the brief claimed, was “less scientific.” By 2000, after the courts had rejected such claims, arson investigators increasingly recognized the scientific method, but there remained great variance in the field, with many practitioners still relying on the unverified techniques that had been used for generations. “People investigated fire largely with a flat-earth approach,” Hurst told me. “It looks like arson—therefore, it’s arson.” He went on, “My view is you have to have a scientific basis. Otherwise, it’s no different than witch-hunting.”

In December, 2004, questions about the scientific evidence in the Willingham case began to surface. Maurice Possley and Steve Mills, of the Chicago Tribune, had published an investigative series on flaws in forensic science; upon learning of Hurst’s report, Possley and Mills asked three fire experts, including John Lentini, to examine the original investigation. The experts concurred with Hurst’s report. Nearly two years later, the Innocence Project commissioned Lentini and three other top fire investigators to conduct an independent review of the arson evidence in the Willingham case. The panel concluded that “each and every one” of the indicators of arson had been “scientifically proven to be invalid.”

In 2005, Texas established a government commission to investigate allegations of error and misconduct by forensic scientists. The first cases that are being reviewed by the commission are those of Willingham and Willis. In mid-August, the noted fire scientist Craig Beyler, who was hired by the commission, completed his investigation. In a scathing report, he concluded that investigators in the Willingham case had no scientific basis for claiming that the fire was arson, ignored evidence that contradicted their theory, had no comprehension of flashover and fire dynamics, relied on discredited folklore, and failed to eliminate potential accidental or alternative causes of the fire. He said that Vasquez’s approach seemed to deny “rational reasoning” and was more “characteristic of mystics or psychics.”

Inside the Secret Sting Operations to Expose Celebrity Psychics
By Jack Hitt

Busting psychics has a history almost as rich as the rise of modern psychic belief, somewhere around the mid-19th century. Of course, there has always been a general sense that there exists a supernatural gift for seeing into the future, the present (sometimes called “remote viewing”) or the past (that is, communicating with the dead). The hope that this power exists reaches back to some of the earliest civilizations — the court seers of the Egyptians, the Oracle of Delphi just north of Athens, the bone-reading shamans of ancient China. One of the more recent big shifts in how we conceive of supernatural communication occurred around the time of Charles Darwin, when there was an explosion of secular interest in the numinous, called spiritualism.

This new popular pursuit found an audience across all classes and denominations. While the poor sought out their corner soothsayer, the smart set was happy to ponder the works of a Russian mystic named Madame Blavatsky, whose “theosophies” were a kind of modern mash-up of religion, science and philosophy. In fact, a lot of this interest took the form of science — people trying to measure these various powers or to discuss the supernatural in sober, logical tomes. These were the days when you might read the other books written by Sherlock Holmes’s creator, Arthur Conan Doyle, a member of the London-based Society for Psychical Research who died insisting that he would rather be remembered for his paranormal works such as “The New Revelation” and “The Vital Message” than for his Sherlock Holmes detective stories.

And throughout this rise of interest, there was a parallel rise in debunkers. The poet Robert Browning once exposed the mid-19th-century Scottish psychic Daniel Home, who claimed to conjure the spirit of Browning’s infant son, who died young. Except Browning hadn’t lost a son. Worse, the poet lunged at the apparition to unmask it and found himself clutching Home’s bare foot. Helen Duncan was found to have swallowed a length of cheesecloth, which she could produce dramatically from her mouth as ectoplasm. During an early-20th-century séance, Frederick Munnings deployed a long voice-altering trumpet across the room — a clever tactic undermined one night when someone accidentally switched on the light.

The idea of talking with the dead is one of those stubborn hopes that’s difficult for a culture to move beyond. Even the famous skeptic Harry Houdini left precise instructions with his wife and friends as to just how he would reach out, if it were possible, after his death. Stanley Kubrick, in talking about his movie “The Shining” with Stephen King, confessed that he found optimism in stories about the supernatural. “If there are ghosts, that means we survived death,” he explained.

From Astrology to Cult Politics—the Many Ways We Try (and Fail) to Replace Religion
By Clay Routledge

Nearly one third of Americans report having felt in contact with someone who has died, feel that they have been in the presence of a ghost, and believe ghosts can interact with and harm humans. These numbers are going up, not down, as more people seek something to fill the religion-shaped hole in their lives. By no coincidence, infrequent church attendees are roughly twice as likely to believe in ghosts as regular churchgoers.

Americans are abandoning the pews, but are increasingly fascinated by astrology, “spiritual” healing practices, and fringe media sources that purport to describe the powers of the supernatural realm. The number of claimed “haunted houses” in the United States is growing. And paranormal tourism centered on such allegedly haunted locales has become a booming business, now accounting for over half a billion dollars in revenue annually.

This trend can be observed on the basis of age cohort: Young adults, being less religious, are more inclined to believe in ghosts, astrology, clairvoyance and spiritual energy. But it also can be observed geographically: The parts of the United States where secular liberals are predominant tend to be the same areas where the market for alternative spiritual experiences and products is most lucrative. Even prominent media outlets such as The New York Times and (in Britain) The Guardian, whose readership consists primarily of secular liberals, frequently publish articles about topics such as witchcraft and astrology—even if they are careful not to legitimize the claims made by proponents of these beliefs.

How millennials replaced religion with astrology and crystals
By Jessica Roy

“I think that it’s a yearning to return to something. There’s a rejection of things that don’t work,” Nicholas said. “Socialism isn’t new, and astrology definitely isn’t new, and earthly spirituality or living in accordance with the earth’s rituals isn’t new, it’s ancient. I think we’re yearning for something that technology cannot give us, that capitalism cannot give us.”

But capitalism is certainly trying.

The astrology-and-crystals trend is one of those things that, once you start noticing it, is suddenly everywhere. Raw crystal and astrology-inspired jewelry and decor dominate Instagram. At a fashion show in L.A. for Mother Denim’s new capsule collection, Mystical, attendees received a velvet pouch packed with crystals, with accompanying cards indicating their meaning. Retired L.A. Laker Dennis Rodman was accused in May by a Newport Beach yoga studio of helping to steal a 400-pound amethyst crystal. The New Yorker published a satirical piece titled “Healing Crystals and How to Shoplift Them.”

Looking to the stars has made landfall in the tech world as well: Facebook recently announced its new cryptocurrency, Libra.

The astrology iOS app Co-Star, which recently raised a $5.2 million seed round to launch an Android version, sends users push alerts with fun, social-media-friendly daily horoscopes ranging from the innocuous (“It’s going to be OK.” “Drink water.”) to the lightly deranged (“Be someone’s service animal today.” “Start a cult.”).

A Quest for the Holy Grail: On D. W. Pasulka’s “American Cosmic: UFOs, Religion, Technology”
By Samuel Loncar

For almost all of human history people believed in powers in the earth and sky that appeared to them. Outside of modern Westernized contexts, people still believe this. Our culture is different because — we like to think — we have eliminated such fairy tales (never mind that we are chock-full of our own fairy tales). Except we have not, which is one of the extraordinary things Pasulka shows in her book.

UFO appearances, when one brackets out any question of their ontological status (whether they are real, or are as they appear to people), show patterns very similar to miracle reports throughout history. These similarities are so striking, in fact, that they raise serious questions about whether UFOs are somehow a version of whatever has caused people, for all of history, to perceive events in the external world that can only be described as wonders or miracles. The UFO phenomenon thus belongs as much to the history of religions as it does to science.

When You’ll Believe Anything
By Morgan Housel

Chronicling the Great Plague of London, Daniel Defoe wrote in 1722:

The people were more addicted to prophecies and astrological conjurations, dreams, and old wives’ tales than ever they were before or since … almanacs frighted them terribly … the posts of houses and corners of streets were plastered over with doctors’ bills and papers of ignorant fellows, quacking and inviting the people to come to them for remedies, which was generally set off with such flourishes as these: ‘Infallible preventive pills against the plague.’ ‘Neverfailing preservatives against the infection.’ ‘Sovereign cordials against the corruption of the air.’

The plague killed a quarter of Londoners in 18 months. You’ll believe just about anything when the stakes are that high.

That’s the trigger of misbelief: high stakes and limited options. A good way to think about beliefs is that they’re rarely just a calculation of something’s cold utility. They’re always formed within the context of how badly you want and need that thing to be true.

Ministry of Apparitions
By Malcolm Gaskill

In 2001 an architect called Danny Sullivan claimed to have found cine film of an angel while rooting around in a Monmouth junk shop. This was, unsurprisingly, a hoax, as were claims that Marlon Brando had paid £350,000 for the footage. But the alleged provenance was intriguing. Sullivan invented a psychical researcher called William Doidge, who had, he said, fought with the Scots Guards at the Battle of Mons in August 1914. The angel had been caught on camera much later, in the Cotswolds in 1952. It’s a well-known story that British soldiers at Mons claimed they really did see angels – but that story, too, turns out to be unfounded. In September 1914 the Welsh writer Arthur Machen published a story called ‘The Bowmen’ in the London Evening News. In it, spectral archers from the Battle of Agincourt come to the aid of the British Expeditionary Force at Mons. To his dismay the story was widely taken as truth, and a ‘snowball of rumour’ hardened into ‘the solidest of facts’. ‘If I had failed in the art of letters,’ Machen wrote later, ‘I had succeeded, unwittingly, in the art of deceit.’

From the outbreak of war, tales of the uncanny abounded. French Catholic troops at the Battle of the Marne and Russian Orthodox troops at the Battle of Augustov reported seeing the Virgin Mary. ‘Sport is more in my line than Spiritualism,’ a British officer wounded at Mons remarked, ‘but when you have experiences brought under your very nose again and again, you cannot help thinking that there must be something in such things.’ Battlefield apparitions were nothing new: Roman legionaries saw Castor and Pollux; troops fighting in the Crimean War saw saints. Walter Scott thought this perfectly natural: ‘One warrior catches the idea from another; all are alike eager to acknowledge the present miracle, and the battle is won before the mistake is discovered.’

The Depression added economic uncertainty to postwar political anxiety, reflected in the growth of horoscopes and almanacs. In 1930 the Sunday Express launched Britain’s first astrology column, and within a few years, according to a Mass Observation survey, two-thirds of women in London believed the future was foretold in the stars (only one in five men admitted the same). By 1942 Mass Observation estimated that around half of the British population believed in some kind of supernatural agency. The Second World War inevitably brought more stories of visions and prophecies, which in Germany were suppressed as inimical to the state’s monopoly on truth and power. Allied troops again took talismans to war (but not the swastikas). Eisenhower carried seven lucky coins in his pocket.

Westerners are today shy of admitting how often magic trumps logic in their thinking. But the trauma of war lays bare essential human truths. Public discourse during the Great War – in books, newspapers, magazines, pamphlets, letters, manifestos and almanacs – was merely the visible expression of fear, anxiety, horror, rage and grief.

Knock It Off, Lazy News Outlets. ‘Momo’ Isn’t Telling Kids To Hurt Themselves.
By Scott Shackford

The Momo Challenge coverage is a ghost story disguised as news. It’s presented as though evil hackers are able to put it in front of children when parents aren’t looking. That’s not how any of this works, and it’s just remarkably irresponsible for media outlets to sell a fearful story without an ounce of skepticism.

But I suppose we shouldn’t be surprised. Just a couple of weeks ago, media outlets across the country “warned” parents about an alleged social media “challenge” telling children to run away from home and hide for 48 hours without telling anybody. This challenge does not exist. It was debunked by Snopes when reports first emerged in 2015. Again, this was all based on an incident in another country (France) that, on further inspection, had nothing to do with any sort of online “challenge.”

So why are media outlets and cops warning parents about a trend that doesn’t actually exist? This story from NBC News (the main media outlet—not some local affiliate) says police departments haven’t actually had any cases, but want to warn folks because they themselves were contacted by the media. This is a great example of media people conjuring fake news.

If you type “48 Hour Challenge” into Google News, you’ll get numerous pieces in this vein. There are plenty of “Momo Challenge” stories, too, but at least in this case, it looks like people are waking up to the obvious fakeness of it.

Momo challenge: The anatomy of a hoax
By BBC News

Police have suggested that rather than focusing on the specific Momo meme, parents could use the opportunity to educate children about internet safety, as well as having an open conversation about what children are accessing.

“This is merely a current, attention-grabbing example of the minefield that is online communication for kids,” wrote the Police Service of Northern Ireland, in a Facebook post.

Broadcaster Andy Robertson, who creates videos online as Geek Dad, said in a podcast that parents should not “share warnings that perpetuate and mythologise the story”.

“A better focus is good positive advice for children, setting up technology appropriately and taking an interest in their online interactions,” he said.

To avoid causing unnecessary alarm, parents should also be careful about sharing news articles with other adults that perpetuate the myth.

Viral ‘Momo challenge’ is a malicious hoax, say charities
By Jim Waterson

While some concerned members of the public have rushed to share posts warning of the suicide risk, there are fears that they have exacerbated the situation by scaring children and spreading the images and the association with self-harm.

“Even though it’s done with best intentions, publicising this issue has only piqued curiosity among young people,” said Kat Tremlett, harmful content manager at the UK Safer Internet Centre.

The rumour mill appears to have created a feedback loop, where news coverage of the Momo challenge is prompting schools or the police to warn about the supposed risks posed by the Momo challenge, which has in turn produced more news stories warning about the challenge.

Tremlett said she was now hearing of children who are “white with worry” as a result of media coverage about a supposed threat that did not previously exist.

“It’s a myth that is perpetuated into being some kind of reality,” she said.

Researchers say fears about ‘fake news’ are exaggerated
By Mathew Ingram

It’s so widely accepted that it’s verging on conventional wisdom: misinformation, or “fake news,” spread primarily by Facebook to hundreds of millions of people (and created by Russian agents), helped distort the political landscape before and during the 2016 US presidential election, and this resulted in Donald Trump becoming president. But is it really that cut and dried? Not according to Brendan Nyhan, a political scientist and professor of public policy at the University of Michigan. He and several colleagues have been researching this question since the election, and have come to a very different conclusion. Fears about the spread and influence of fake news have been over-hyped, Nyhan says, and many of the initial conclusions about the scope of the problem and its effect on US politics were exaggerated or just plain wrong.

Nyhan says his data shows so-called “fake news” reached only a tiny proportion of the population before and during the 2016 election. In most cases, misinformation from a range of fake news sites made up just 2 percent or less of the average person’s online news consumption, and even among the group of older conservatives who were most likely to consume fake news, it only made up about 8 percent. Not only that, but the University of Michigan researcher says a new paper he and his colleagues recently published shows the reach of fake news actually fell significantly between the 2016 election and the midterm elections last year, which suggests Facebook has cracked down on the problem. Nyhan also says “no credible evidence exists that exposure to fake news changed the outcome of the 2016 election.”

This might come as a surprise to Kathleen Hall Jamieson. She’s a veteran public policy researcher who published a book last year entitled Cyberwar: How Russian Hackers and Trolls Helped Elect a President. Jamieson, whose colleagues call her “the Drill Sergeant” for her no-nonsense attitude, has more than 40 years of studying human behavior under her belt. In the book, she says the evidence suggests misinformation propagated by Russian trolls likely influenced the outcome of the election, in part because of the number of “swing” or undecided voters who were susceptible to those kinds of tactics. Jamieson also notes that the traditional news media played a key role in spreading this fake news and propaganda, by writing innumerable articles about Hillary Clinton’s emails. And she argues fake news wouldn’t have had to make much of an impact to influence the election, since a fairly small number of votes gave Trump the electoral college wins he needed.

Nyhan and his fellow researchers, however, including Princeton political scientist Andrew Guess, say their study looked at the actual behavior of a large sample of users who consented to have their online activity tracked and recorded in real time, and then followed up with interviews about their perceptions of the content. Not only was the amount of actual fake news they encountered incredibly tiny, Guess told CJR this past fall, but the idea that this would influence their behavior is also a bit of a stretch (something Nyhan wrote about for The New York Times last year). “It’s predominantly people who are inclined to believe the conclusions that are being made in this content, not so much swaying them to believe something,” Guess said. “In other words, it’s more or less just confirmation bias.”

So why has this myth of fake news swinging the election persisted despite a lack of evidence to support it? Nyhan’s theory is that it’s a little like the myth that Orson Welles’s radio play “War of the Worlds” caused widespread panic among the US population when it aired in 1938. The play was likely heard by only a tiny number of people, and there’s no actual evidence that it caused any kind of panic; yet the myth persists, in part because newspapers at the time played up the idea as a way of discrediting radio (a relatively new competitor) as a source of news. In the same way, Nyhan argues, concerns about fake news being spread by Russian agents on Facebook are fueled by broader concerns about the influence of social networks on society.

Why Fears of Fake News Are Overhyped
By Brendan Nyhan

We find that only 27 percent of Americans visited fake news websites (which we define as recently created sites that frequently published false or misleading claims overwhelmingly favoring one of the presidential candidates) in the weeks before the 2016 election. These visits show the expected political skew — Clinton and Trump supporters tended to prefer pro-Clinton and pro-Trump sites, respectively — but made up only about 2 percent of the information people consumed from websites focusing on hard news topics. Consistent with behavioral evidence showing that online echo chambers are relatively rare, fake news consumption was concentrated among the 10 percent of Americans with the most conservative news diets, who were responsible for approximately six in 10 visits to fake news websites during this period. Even in that group, however, fake news made up less than 8 percent of their total news diet. Finally, people ages 60 and over consumed more fake news than other cohorts, which may reflect a lack of digital literacy or simply having more time to read news. (Other scholars have found similar patterns in Facebook sharing and Twitter sharing and consumption of fake news.)

… even if relatively few people consume fake news, those consumers may be especially politically active and thus disproportionately influential in our politics. In particular, fake news consumers may be especially important in party politics, which is highly responsive to people with intense preferences who vote in primaries. Fake news readers are also likely to disseminate the information they encounter from fake news websites via online and social networks, indirectly exposing many more people than would consume it directly.

Researchers Retract Widely Cited Fake-News Study
By Lilly Dancyger

“For me it’s very embarrassing, but errors occur and of course when we find them we have to correct them,” Menczer tells Rolling Stone. “The results of our paper show that in fact the low attention span does play a role in the spread of low quality information, but to say that something plays a role is not the same as saying that it’s enough to fully explain why something happens. It’s one of many factors.”

He also said that his team is working on a new study that will account for other factors, specifically bots.

“We found that bots can play a very important role in amplifying the spread,” he said, since they’re there “specifically to spread low-quality information.”

He also pointed out that various researchers are coming up with different results, depending on what metrics they use to measure the spread of misinformation — some have found that fake news actually spreads faster and farther than real news, while a new study that came out just this week determined, by looking at individual habits rather than the larger social media landscape, that sharing fake news wasn’t as common a behavior as it may seem. These independent researchers, as well as the social media companies themselves, are still trying to figure out how and why it spreads, and what could be done to stop it.

Letter from the Editor
By Barbara Allen

On Tuesday, April 30, Poynter posted a list of 515 “unreliable” news websites, built from pre-existing databases compiled by journalists, fact-checkers and researchers around the country. Our aim was to provide a useful tool for readers to gauge the legitimacy of the information they were consuming.

Soon after we published, we received complaints from those on the list and readers who objected to the inclusion of certain sites, and the exclusion of others. We began an audit to test the accuracy and veracity of the list, and while we feel that many of the sites did have a track record of publishing unreliable information, our review found weaknesses in the methodology. We detected inconsistencies between the findings of the original databases that were the sources for the list and our own rendering of the final report.

Therefore, we are removing this unreliable sites list until we are able to provide our audience a more consistent and rigorous set of criteria.

The ‘Liar’s Dividend’ is dangerous for journalists. Here’s how to fight it.
By Kelly McBride

As editors and reporters for many of the world’s leading news organizations sat in a room last week at Columbia University talking about the “Information Wars,” Yasmin Green, the director of research at Jigsaw, a Google subsidiary focused on digital threats, introduced the unsettling concept of the “Liar’s Dividend.”

You can hear Green’s entire explanation from the Craig Newmark Center for Journalism Ethics and Security Symposium in the video below. Here’s the concept in a nutshell: debunking fake or manipulated material (videos, audio or documents) ultimately could stoke belief in the fakery. As a result, even after the fake is exposed, it will be harder for the public to trust any information on that particular topic.

This is a bigger problem than the Oxygen Theory, which argues that by debunking a falsehood, journalists give the claim a longer life. The Liar’s Dividend suggests that in addition to fanning the flames of falsehoods, the debunking efforts actually legitimize debate over the claim’s veracity. This creates smoke and fans suspicions among at least some in the audience that there might well be something true about the claim. That’s the “dividend” paid to the perpetrator of the lie.

To dig back into ancient history for an example: in 2010, after robust reporting by almost every American news outlet had established that Barack Obama was born in Hawaii, the intense debunking could not erase the doubt in the minds of a significant segment of the American public. At that point, 25% of Americans still thought it likely or probable that Obama was born overseas. Well under half (only 42% of poll respondents) believed the facts as they had been conclusively demonstrated: that Obama was certainly born in the U.S. Another 29% said they believed the president was probably born in the U.S. Certainly, political predisposition contributes to the Liar’s Dividend; in a polarized society, its role can’t be minimized.

This is problematic for reporters and fact-checkers, and it buoys purveyors of misinformation. As NPR’s Media Correspondent David Folkenflik suggested at the symposium, “The idea is there’s just enough chum in water, it distracts people, nobody knows which to believe and they move on.”

Arguably, we can trace the concept of the Liar’s Dividend to a strategy employed by big tobacco in the 1980s. Faced with mounting research that cigarettes cause cancer, the Big Tobacco Playbook was employed to plant doubt in the public’s mind as a means to dispute the emerging science.

That strategy took advantage of a tendency within the press to look for opposing sides to duel in any story, a flawed reporting technique that eventually came to be known as false equivalency.

Alphabet-Owned Jigsaw Bought a Russian Troll Campaign as an Experiment
By Andy Greenberg

As part of research into state-sponsored disinformation that it undertook in the spring of 2018, Jigsaw set out to test just how easily and cheaply social media disinformation campaigns, or “influence operations,” could be bought in the shadier corners of the Russian-speaking web. In March 2018, after negotiating with several underground disinformation vendors, Jigsaw analysts went so far as to hire one to carry out an actual disinformation operation, assigning the paid troll service to attack a political activism website Jigsaw had itself created as a target.

In doing so, Jigsaw demonstrated just how low the barrier to entry for organized, online disinformation has become. It’s easily within the reach of not just governments but private individuals. Critics, though, say that the company took its trolling research a step too far, and further polluted social media’s political discourse in the process.

To attack the site it had created, Jigsaw settled on a service called SEOTweet, a fake follower and retweet seller that also offered the researchers a two-week disinformation campaign for the bargain price of $250. Jigsaw, posing as political adversaries of the “Down with Stalin” site, agreed to that price and tasked SEOTweet with attacking the site. In fact, SEOTweet first offered to remove the site from the web altogether through fraudulent complaints that the site hosted abusive content, which it would ostensibly send to the site’s web host. The cost: $500. Jigsaw declined that more aggressive offer, but greenlit its third-party security firm to pay SEOTweet $250 to carry out its social media campaign, providing no further instructions.

“Buying and engaging in a disinformation operation in Russia, even if it’s very small, that in the first place is an extremely controversial and risky thing to do,” says Johns Hopkins University political scientist Thomas Rid, the author of a forthcoming book on disinformation titled Active Measures.

Even worse may be the potential for how Russians and the Russia media could perceive—or spin—the experiment, Rid says. The subject is especially fraught given Jigsaw’s ties to Alphabet and Google. “The biggest risk is that this experiment could be spun as ‘Google meddles in Russian culture and politics.’ It fits anti-American clichés perfectly,” Rid says. “Didn’t they see they were tapping right into that narrative?”

Jigsaw wouldn’t be the first to court controversy for flirting with the disinformation dark arts. Last year, the consultancy New Knowledge acknowledged that it had experimented with disinformation targeted at conservative voters ahead of Alabama’s special election to fill an open Senate seat. Eventually, internet billionaire Reid Hoffman apologized for funding the group that had hired New Knowledge and sponsored its influence operation test.

The Jigsaw case study has at least proven one point: The incendiary power of a disinformation campaign is now accessible to anyone with a few hundred dollars to spare, from a government to a tech company to a random individual with a grudge. That means you can expect those campaigns to grow in number, along with the toxic fallout for their intended victims—and in some cases, to the actors caught carrying them out, too.

Mueller Witness’ Team Gamed Out Russian Meddling … in 2015
By Betsy Swan and Erin Banco

In a discussion of how a hostile foreign government could weaponize social media against an adversary, one Wikistrat analyst put it this way:

“Cyber-mercenaries are mainly hired by governments as online counter-information and counter-counter-information officers. Disguised as ordinary citizens, these cyber-mercenaries are experts at sensationalizing and distorting political issues in a manner that appeals to common sense. Their objectives are not to convince explicitly, but rather subconsciously, by inserting a seed of doubt that leads to confusion and encourages fact-skepticism. Their ultimate targets are foreign governments, but their attacks are launched on proxy targets, ordinary citizens, chiefly ignorant, vulnerable, and uneducated populaces of a particular nation-state.”

The analyst then noted that entities like “Russia’s ‘Internet Research Group’”—likely a misnomer for the country’s Internet Research Agency, which Mueller indicted in Feb. 2018—already weaponize social media to shape their countries’ domestic politics.

“As a foreign policy tool, misinformation can be used to spread fear, uncertainty and doubt among the population of antagonist countries, therefore furthering the instigator’s own agenda,” the analyst added. “Instead of direct government involvement, using cyber-mercenaries to enact these operations would create a degree of indirection and a veneer of plausible deniability that would make it harder to clearly separate propaganda from facts.”

Another analyst then sounded off on what makes trolling effective.

“[T]hese ‘cyber-trolls’ are trained controversialists: they openly engage in public controversy…People are drawn to the excitement of controversy and these cyber-trolls are experts in sensationalizing a political issue. The objective of cyber-trolls is not to convince explicitly, but rather subconsciously, by inserting a seed of doubt that leads to confusion and encourages fact-skepticism. They are not afraid to use provocative and confrontational language, as it is to their advantage if it leads to an emotional rise in the reader because the reader is then more likely to engage in debate, which, in turn, creates more buzz and attracts a greater audience, increasing the potential number of people exposed to this misinformation campaign.”

The analysis was written after the St. Petersburg-based Internet Research Agency had begun its U.S. election interference campaign, but well before the American public knew about it. In the end the troll factory controlled thousands of fake organizations and personas on Facebook, Twitter and Instagram, which it used to push out divisive rhetoric and fake news—overwhelmingly in support of Donald Trump’s candidacy.

Facebook busts Israel-based campaign to disrupt elections
By Isabel Debre and Raphael Satter

Facebook said Thursday it banned an Israeli company that ran an influence campaign aimed at disrupting elections in various countries and has canceled dozens of accounts engaged in spreading disinformation.

Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told reporters that the tech giant had purged 65 Israeli accounts, 161 pages, dozens of groups and four Instagram accounts.

Although Facebook said the individuals behind the network attempted to conceal their identities, it discovered that many were linked to the Archimedes Group, a Tel Aviv-based political consulting and lobbying firm that publicly boasts of its social media skills and ability to “change reality.”

“It’s a real communications firm making money through the dissemination of fake news,” said Graham Brookie, director of the Digital Forensic Research Lab at the Atlantic Council, a think tank collaborating with Facebook to expose and explain disinformation campaigns.

He called Archimedes’ commercialization of tactics more commonly tied to governments such as Russia an emerging, and worrying, trend in the global spread of social media disinformation. “These efforts go well beyond what is acceptable in free and democratic societies,” Brookie said.

Gleicher described the pages as conducting “coordinated inauthentic behavior,” with accounts posting on behalf of certain political candidates, smearing their opponents and presenting as legitimate local news organizations peddling supposedly leaked information.

Troll Watch: Study Shows Older Americans Share The Most Fake News

MCCAMMON: So you found that the older person was, the more likely they were to share fake news. And you said this is true regardless of partisan identity or ideological affiliation. But you also did look at partisan affiliation, right? What did you find there?

GUESS: We found a relationship between being more conservative or more Republican and also on average sharing more fake news articles. And that was a clear relationship as well and one that we weren’t necessarily as surprised by because we know that most of the false content that was being disseminated during this period was strongly pro-Trump and anti-Clinton in orientation. And it makes sense that people would be more likely to share and engage with content that they’re predisposed to agree with.

MCCAMMON: After the 2016 election, there was a lot of finger-pointing at Facebook, at Facebook users sharing fake news. From your findings, do you think older Facebook users helped elect Donald Trump?

GUESS: No, I don’t think the evidence that we’ve shown here, you know, supports such an interpretation. I think that connecting, you know, fake news on social media with the election outcome is highly speculative. And, personally, I find that to be pretty implausible given the weight of the evidence that we’ve seen so far.

MCCAMMON: Your study did find that a relatively small percentage of people shared fake news. I mean, is there – are there things to feel good about here?

GUESS: So it’s true that most people, including, you know, most people over 65 were not doing this. It’s also the case that plenty of people were sharing corrective information. So we also find that lots of people were sharing links to fact checks of fake news. So, you know, while it’s easy to sort of focus on the problem, and I think the problem is real, it’s also the case that potentially more people than the ones who are sharing fake news are also sort of vigilant and paying attention to the possible problems with the information that they’re encountering online.

This former Google exec talked to the social media trolls the Russians paid to influence elections
By Kate Fazzini

The troll farm operation was “not unlike a very top-down, controlled social media strategy” of a large company, she said. “The idea is to mimic the diversity of a crowd of people who go onto social media. The manuals and guidelines they would receive — they would say, ‘today on this topic, you are going to post the following rebuttals and use the following codes in your comments.'”

But mimicking a genuine, organic social movement is more challenging when researchers look at the broader, full scope of data, François said. So the lessons from this research campaign are likely to help identify fake, foreign-influenced social campaigns in the future.

Facebook and Twitter remove thousands of fake accounts tied to Russia, Venezuela and Iran
By Donie O’Sullivan

Russian-linked trolls attempting to influence the US through social media have often played to both left and right, and that appears to be the case with this batch of accounts as well. However, there does appear to have been a concerted attempt using the accounts to push some right-wing hashtags. Twitter said that the accounts had sent almost 40,000 tweets with the hashtag #ReleaseTheMemo, another 40,000 tweets with the hashtag #MAGA, and 18,000 tweets using the hashtag #IslamIsTheProblem. It is not clear whether the accounts’ tweets had any significant impact on the hashtags’ popularity.

The #ReleaseTheMemo hashtag emerged in January 2018 when Republican Rep. Devin Nunes, the then chairman of the House Intelligence Committee, touted a document that alleged FBI surveillance abuses during the 2016 election.

Twitter also announced it removed a network of accounts run from Venezuela which “appear to be engaged in a state-backed influence campaign” targeting Venezuelan audiences on behalf of the country’s president.

The company said it had also removed a network of accounts from Iran that were pretending to be US people or news outlets.

Facebook said it had taken down 783 pages, groups, and accounts that were run from Iran. The pages were primarily aimed at the Middle East and South Asia but also targeted the US, Facebook said in a blog post.

The Daily 202: Russian efforts to manipulate African Americans show sophistication of disinformation campaign
By James Hohmann

Researchers at Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm, spent seven months analyzing millions of social media posts that major technology firms turned over to congressional investigators. Their goal was to understand the inner workings of the Internet Research Agency, which the U.S. government has charged with criminal offenses for interfering in the 2016 election.

It turns out that African Americans were targeted with more Facebook ads than any other group, including conservatives.

Three of the four most-liked Facebook posts put up by the Russian influence effort came from an account called Blacktivist that urged the community to be more cynical about politics. African Americans were urged to vote for Green Party candidate Jill Stein throughout the month before the 2016 election. A post on Oct. 29 that year declared: “NO LIVES MATTER TO HILLARY CLINTON. ONLY VOTES MATTER TO HILLARY CLINTON.” A message on Nov. 3 added: “NOT VOTING is a way to exercise our rights.”

On Twitter, four of the Russian agency’s five most‐retweeted accounts catered exclusively to African Americans.

On Instagram, all five of the most-liked posts created by the Russians were aimed at African American women. They included the hashtags #blackpower, #blackpride, #unapologeticallyblack, #blacklivesmatter, #icantbreathe, #riot and #blackgirlskillingit.

The influence operation — run out of St. Petersburg — was sophisticated, relentless and became more effective with time. Its goal was to manipulate identity politics to tear America apart. The Soviet Union had also tried to heighten racial divisions during the Cold War, but its operatives lacked access to the technology platforms that now make it so easy.

“Messaging to African Americans sought to divert their political energy away from established political institutions by preying on anger with structural inequalities … including police violence, poverty, and disproportionate levels of incarceration,” the report says. “These campaigns pushed a message that the best way to advance the cause of the African American community was to boycott the election and focus on other issues instead.”

Russian trolls used campus rows to ‘push partisan hot buttons’
By John Morgan

Dr Linvill said that one motivation behind the higher education-related activity was that these IRA accounts wanted to “look like the thing they are trying to mimic”, so seized on issues favoured by US right-wing social media accounts.

But controversies over professors’ political views and free speech were also seen as “wedge issues” that can divide left and right, he said.

Others have seen a wider Russian disinformation aim of “weakening Western democracies by undermining trust in institutions”.

“They attack higher education from the right and that is how they connect with an audience, push partisan hot buttons, and appear authentic. They also want to attack established institutions. They similarly attack science, the media, and the electoral process,” Dr Linvill said.

Johan Farkas, a doctoral student at Malmö University, who co-authored a paper on Russian IRA strategy, agreed that the agency “specifically targeted ‘hot button’ issues to destabilise public debates”.

“The issue of free speech…at universities fits well with this aim, especially considering how they operated numerous accounts claiming to belong to white conservative hardliners. Through these fake profiles, the Internet Research Agency amplified polarising and antagonistic viewpoints on a range of issues, including free speech,” he said.

Twitter and Facebook take first actions against China for using fake accounts to sow discord in Hong Kong
By Marie C. Baca and Tony Romm

Twitter said it was suspending nearly a thousand Chinese accounts and banning advertising from state-owned media companies, citing a “significant state-backed information operation” related to protests in Hong Kong. Meanwhile, Facebook said it was removing five Facebook accounts, seven pages and three groups after being tipped off to the use of “a number of deceptive tactics, including the use of fake accounts.”

The new takedowns by Facebook and Twitter reflect the extent to which disinformation has become a global scourge, far surpassing the once-secret efforts of Russian agents to stoke social unrest in the United States during the 2016 presidential election. Researchers recently have pointed to similar campaigns linked to Saudi Arabia, Israel, China, the United Arab Emirates and Venezuela, efforts aimed at shaping discussions on social media beyond their borders.

YouTube Channels Are Yanked for Alleged Disinformation Campaigns in Hong Kong
By Robert McMillan

Google pulled 210 YouTube channels from its platform, saying that they appeared to be part of a coordinated disinformation campaign in response to pro-democracy protests in Hong Kong.

Twitter Inc. and Facebook Inc. made similar moves earlier this week, citing evidence that the Chinese government was behind efforts to discredit the protesters.

“This discovery was consistent with recent observations and actions related to China announced by Facebook and Twitter,” Google wrote in a blog post Thursday. It didn’t specifically blame Beijing for the campaign.

Russia Deployed Its Trolls to Cover Up the Murder of 298 People on MH17
By Amy Knight

Van der Noordaa and van de Ven analyzed 9 million IRA tweets covering the period 2014-2017 that were released by Twitter in October 2018 as part of an effort to elucidate the Russian role in the U.S. presidential election.

They report that in the 24 hours after the MH17 crash the IRA posted at least 65,000 tweets, mainly in Russian, that blamed the Ukrainian government in Kiev for the disaster. Altogether, 111,486 tweets about MH17 were posted by the IRA in just three days, from July 17 through 19. (By comparison, in the 10-week period leading up to the November 2016 elections, the IRA accounts posted 175,993 tweets.) According to the two journalists: “Never before or after did the trolls tweet so much in such a short period of time.”

At the beginning, there was confusion among the trolls: An early tweet claimed that a Ukrainian plane had been shot down and that the rebels were responsible, which would “trigger a new series of sanctions against Russia.” But the blame was quickly switched to Kiev, with the hashtag “Poroshenko [the Ukrainian president] we want an answer!”

By the next morning, July 18, all the tweets were accusing Kiev, with three hashtags: #КиевСбилБоинг (“Kiev shot Boeing”), #ПровокацияКиева (“KievProvocation”) and #КиевСкажиПравду (“KievTelltheTruth”). The onslaught of tweets ended abruptly on the morning of July 19, after which the trolls continued to write about MH17, but with much less frequency and without the hashtags.

What is remarkable about the three-day tweetstorm is that the trolls actually wrote their own tweets instead of limiting themselves to retweeting or copying other extremist tweets, as was the case with other international incidents. They also composed their own stories on the LiveJournal platform, a popular Russian blog website, and then shared them on Twitter.

Iranians tried to hack U.S. presidential campaign in effort that targeted hundreds, Microsoft says
By Jay Greene, Tony Romm and Ellen Nakashima

Since then, other countries have come to adopt more of Russia’s playbook. Iran, for instance, for years had targeted U.S. officials through “large-scale intrusion attempts,” said John Hultquist, the director of intelligence analysis at the cybersecurity firm FireEye. But it has become more aggressive recently in response to President Trump, who has imposed massive sanctions and pulled out of an international deal over the country’s nuclear program, Hultquist said.

“The Iranians are very aggressive, and they could leverage whatever access they get for an upper hand in any kind of negotiations,” Hultquist added. “They could cause a lot of mayhem.”

Other tech companies also have been warning about the rising Iranian threat, largely out of concern that malicious actors originating in the country were spreading disinformation online. In May, for example, Facebook and Twitter said they had removed a sprawling Iranian-based propaganda operation, including accounts that mimicked Republican congressional candidates and appeared to try to push pro-Iranian political messages on social media. Some of those accounts similarly took aim at U.S. policymakers and journalists, researchers said at the time.

An Iranian Activist Wrote Dozens of Articles for Right-Wing Outlets. But Is He a Real Person?
By Murtaza Hussain

Alavi, whose contributor biography on the Forbes website identifies him as “an Iranian activist with a passion for equal rights,” has published scores of articles on Iran over the past few years at Forbes, The Hill, the Daily Caller, The Federalist, Saudi-owned al-Arabiya English, and other outlets. (Alavi did not respond to The Intercept’s requests for comment by Twitter direct messages or at the Gmail address he used to correspond with news outlets.)

The articles published under Alavi’s name, as well as his social media presence, appear to have been a boon for the MEK. An opposition group deeply unpopular in Iran and known for its sophisticated propaganda, the MEK has over the past decade turned its attention to English-language audiences — especially in countries like the U.S., Canada, and the United Kingdom, whose foreign policies are crucial nodes in the MEK’s central goal of overthrowing the Iranian regime.

Online Influencers Tell You What to Buy, Advertisers Wonder Who’s Listening
By Suzanne Kapner and Sharon Terlep

The singer Ariana Grande sued Forever 21 Inc. for allegedly stealing her likeness after she rejected an endorsement deal with the clothing retailer.

Ms. Grande, who has 165 million Instagram followers, accused the company of hiring a look-alike model for its Instagram posts and website. The model wore a hairstyle and clothing similar to what the pop star wore in her “7 rings” music video, which has more than half a billion YouTube views.

“The market value for even a single Instagram post by Ms. Grande is well into the six figures,” said the lawsuit, which seeks at least $10 million in damages. Forever 21, in a statement, disputed the allegations.

A Good Company, the online retailer, worked with 4,000 influencers to promote its eco-friendly stationery and other office supplies. It paid them cash or gift cards for their social-media posts.

The company, which didn’t get its expected sales boost, sent an anonymous survey to its influencers, asking if they had ever paid for followers, likes or comments. Nearly two-thirds of respondents said yes, Mr. Ankarlid, the CEO, said.

HypeAuditor, an analytics firm, investigated 1.84 million Instagram accounts and found more than half used fraud to inflate the number of followers.

Some influencers had large numbers of followers who weren’t real people, meaning the accounts had been bought or were inactive, according to Anna Komok, HypeAuditor’s marketing manager. Clues include large numbers of followers outside the influencer’s home country.

The scams cost little. Enterprises known as click farms employ people to inflate online traffic. They sell 1,000 bogus YouTube followers for as little as $49. On Facebook, the same number of followers costs $34, and on Instagram they cost $16, according to Masarah Paquet-Clouston, a researcher at cybersecurity firm GoSecure, who surveyed the prices with collaborators.
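The quoted per-thousand rates make the economics easy to work out. A quick sketch, using the prices as reported (the helper name is mine):

```python
# Per-1,000-follower prices reported in the article, in U.S. dollars.
PRICE_PER_1000 = {"youtube": 49, "facebook": 34, "instagram": 16}

def cost_of_fake_followers(platform, count):
    """Linear extrapolation from the quoted per-1,000 rates."""
    return PRICE_PER_1000[platform] * count / 1000

print(cost_of_fake_followers("instagram", 100_000))  # 1600.0
```

At these rates, an audience of 100,000 fake Instagram followers costs about $1,600 — a rounding error next to the six-figure value the Grande lawsuit puts on a single post.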

Facebook and Instagram, which is owned by Facebook Inc., have policies against such deceptions, a company spokesperson said. The company said Instagram has an initiative to remove phony likes, follows and comments from accounts that use third-party apps to boost popularity. YouTube also prohibits such deceptions.

Influencer deception will cost advertisers $1.3 billion this year, estimated Roberto Cavazos, a statistics professor at the University of Baltimore.

These Influencers Aren’t Flesh and Blood, Yet Millions Follow Them
By Tiffany Hsu

Lil Miquela operated for two years before it was revealed that she was the product of a secretive company, Brud. Its California business registration lists an address in Silver Lake blocked by thick vegetation, but workers, who must sign nondisclosure agreements, said the company actually operates out of downtown Los Angeles. Brud’s public relations firm, Huxley, declined multiple interview requests.

On a public Google Doc that functions as the company’s website, Brud bills itself as “a transmedia studio that creates digital character driven story worlds” and says Lil Miquela is “as real as Rihanna.” Its “head of compassion,” in Brud-speak, is Trevor McFedries, whom Lil Miquela has referred to in several posts as a father figure.

Before co-founding Brud, Mr. McFedries was known as Yung Skeeter, a D.J., producer, director and musician who has worked with Katy Perry, Steve Aoki, Bad Robot Productions and Spotify. He has helped raise millions of dollars in financing from heavyweights like Spark Capital, Sequoia Capital and Founders Fund, according to TechCrunch.

Last summer, Lil Miquela’s Instagram account appeared to be hacked by a woman named Bermuda, a Trump supporter who accused Lil Miquela of “running from the truth.” A wild narrative emerged on social media: Lil Miquela was a robot built to serve a “literal genius” named Daniel Cain before Brud reprogrammed her. “My identity was a choice Brud made in order to sell me to brands, to appear ‘woke,’” she wrote in one post. The character vowed never to forgive Brud. A few months later, she forgave.

Fans followed along, rapt.

The online drama was as engineered as Lil Miquela herself, part of a “story line written by Brud,” according to Huxley. It echoed “S1m0ne,” a 2002 film starring Al Pacino as a film director who replaces an uncooperative actress with a digital ingénue.

Will Smith, Robert De Niro and the Rise of the All-Digital Actor
By Carolyn Giardina

A believable, fully digital human is still considered among the most difficult tasks in visual effects. “Digital humans are still very hard, but it’s not unachievable. You only see that level of success at the top-level companies,” explains Chris Nichols, a director at Chaos Group Labs and key member of the Digital Human League, a research and development group. He adds that this approach can be “extraordinarily expensive. It involves teams of people and months of work, research and development and a lot of revisions. They can look excellent if you involve the right talent.”

The VFX team must first create the “asset,” effectively a movable model of the human. Darren Hendler, head of VFX house Digital Domain’s digital human group, estimates that this could cost from $500,000 to $1 million to create. Then, he suggests, producers could expect to pay anywhere from $30,000 to $100,000 per shot, depending on the individual requirements of the performance in the scene.

More often, filmmakers use what has been broadly described as “digital cosmetics,” which could be thought of as a digital makeup application — for instance, removing wrinkles for smoother skin. This means that age is becoming less of an issue when casting an actor. “It’s safer and cheaper than plastic surgery,” notes Nichols. Marvel’s Avengers: Endgame involved the creation of roughly 200 such de-aging shots, with work on actors such as Robert Downey Jr. and Chris Evans, to enable its time-traveling story.

AI and machine learning, and the related category known as generative adversarial networks (GANs), which involve neural networks, could advance this area even further. “I wouldn’t be surprised if The Irishman and Gemini Man are the last fully digital human versions that don’t use some sort of GANs as part of the process,” Hendler says, adding that de-aging techniques and digital humans could start to appear in more films, and not just those with Marvel-size budgets. “I think we’ll start to see some of this used on smaller-budget shows.”

Adds Guy Williams, Weta’s VFX supervisor on Gemini Man: “Once Gemini Man and The Irishman come out, you’ll [have several] successful films showing how it can be done. When you give that possibility to directors, they will find new ways to use it.”

‘Deepfakes’ Trigger a Race to Fight Manipulated Photos and Videos
By Abigail Summerville

People who create deepfakes are constantly adapting to attempts to detect the manipulations, said Mr. Farid. Some combine the work of two different computer systems, one of which alters the images while the other tries to determine if it can be distinguished from authentic content, he said.
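The two-system setup Mr. Farid describes is essentially the generator-discriminator dynamic behind GANs: one system forges, the other critiques, and each improves against the other. A deliberately tiny sketch of that cat-and-mouse loop, with all numbers invented:

```python
def adversarial_loop(real_mean=10.0, g=0.0, steps=50, lr=0.5):
    """Toy sketch of the adversarial dynamic described above: a 'forger'
    (generator) and a 'detector' take turns. The detector separates real
    from fake with a threshold; the forger adjusts to slip past it.
    Everything here is illustrative, not a real training procedure."""
    for _ in range(steps):
        # Detector: place the decision threshold halfway between what
        # real samples look like and what the forger currently produces.
        threshold = (real_mean + g) / 2.0
        # Forger: nudge output toward the real statistic so it lands on
        # the "real" side of the detector's threshold next round.
        g += lr * (real_mean - g)
    return g

print(round(adversarial_loop(), 2))  # 10.0 — the forgery matches the real statistic
```

The point of the toy: as the detector sharpens, the forger converges on output indistinguishable from the real thing, which is why detection alone is a losing race.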

The stakes are high. In extreme cases, Mr. Farid said, a deepfake could trigger a military conflict or other real-life turmoil. “A fake video of Jeff Bezos secretly saying that Amazon’s profits are down leads to a massive stock manipulation,” he said, citing one possible scenario.

Mr. Farid said it is worrying that social-media companies aren’t doing more to combat deepfakes, particularly in the wake of Russian interference in the 2016 presidential election, which Moscow has denied.

“These platforms have been weaponized, and these aren’t hypothetical threats,” he said.

Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case
By Catherine Stupp

Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March, in what cybercrime experts described as an unusual case of AI being used in hacking.

The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm, Euler Hermes Group SA.

Euler Hermes declined to name the victim companies.

Law enforcement authorities and AI experts have predicted that criminals would use AI to automate cyberattacks. Whoever was behind this incident appears to have used AI-based software to successfully mimic the German executive’s voice by phone. The U.K. CEO recognized his boss’ slight German accent and the melody of his voice on the phone, said Rüdiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Munich-based financial services company Allianz SE.

Mr. Kirsch believes hackers used commercial voice-generating software to carry out the attack. He recorded his own voice using one such product and said the reproduced version sounded real.

A few software companies offer services that can quickly impersonate voices, said Bobby Filar, director of data science at Endgame, a cybersecurity company. “You don’t need to be a Ph.D. in mathematics to use it,” he said.

Another tactic hackers could use would be to stitch together audio samples to mimic a person’s voice, which would likely require many hours of recordings. Security researchers demonstrated this technique at the Black Hat conference last year.

Applying machine-learning technology to spoof voices makes cybercrime easier, said Irakli Beridze, head of the Centre on AI and Robotics at the United Nations Interregional Crime and Justice Research Institute.

The U.N. center is researching technologies to detect fake videos, which Mr. Beridze said could be an even more useful tool for hackers. In the case at the U.K. energy firm, an unfamiliar phone number finally aroused suspicions. “Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts at all,” he said.

What do we do about deepfake video?
By Tom Chivers

“There’s some way to go before the fakes are undetectable,” says Hitrova. “For instance, with CGI faces, they haven’t quite perfected the generation of teeth or eyes that look natural. But this is changing, and I think it’s important that we explore solutions – technological solutions, and digital literacy solutions, as well as policy solutions.”

Education – critical thinking and digital literacy – will be important too. Finnish children score highly on their ability to spot fake news, a trait that is credited to the country’s policy of teaching critical thinking skills at school. But that can only be part of the solution. For one thing, most of us are not at school. Even if the current generation of schoolchildren becomes more wary – as they naturally are anyway, having grown up with digital technology – their elders will remain less so, as can be seen in the case of British MPs being fooled by obvious fake tweets. “Older people are much less tech-savvy,” says Hitrova. “They’re much more likely to share something without fact-checking it.”

A European Commission report two weeks ago found that digital disinformation was rife in the recent European elections, and that platforms are failing to take steps to reduce it. Facebook, for instance, has entirely washed its hands of responsibility for fact-checking, saying that it will only take down fake videos after a third-party fact-checker has declared them to be false.

Britain, though, is taking a more active role, says Hitrova. “The EU is using the threat of regulation to force platforms to self-regulate, which so far they have not,” she says. “But the UK’s recent online harms white paper and the Department for Digital, Culture, Media and Sport subcommittee [on disinformation, which has not yet reported but is expected to recommend regulation] show that the UK is really planning to regulate. It’s an important moment; they’ll be the first country in the world to do so, they’ll have a lot of work – it’s no simple task to balance fake news against the rights to parody and art and political commentary – but it’s truly important work.” Wachter agrees: “The sophistication of the technology calls for new types of law.”

In the past, as new forms of information and disinformation have arisen, society has developed antibodies to deal with them: few people would be fooled by first world war propaganda now. But, says Wachter, the world is changing so fast that we may not be able to develop those antibodies this time around – and even if we do, it could take years, and we have a real problem to sort out right now. “Maybe in 10 years’ time we’ll look back at this stuff and wonder how anyone took it seriously, but we’re not there now.”

Fact-Checking the President in Real Time
By Jonathan Rauch

Before the public sees robo-checking, the software needs to become more sophisticated, the database of fact-checks needs to grow larger, and information providers need to adopt and refine the concept. Still, live, automated fact-checking is now demonstrably possible. In principle, it could be applied by web browsers, YouTube, cable TV, and even old-fashioned broadcast TV. Checker bots could also prowl the places where trollbots go and stay just a few seconds behind them. Imagine setting your browser to enable pop-ups that provide evaluations, context, additional information—all at the moment when your brain first encounters a new factual or pseudo-factual morsel.
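At its very simplest, the matching step behind this kind of “robo-checking” is a lookup of a normalized claim against a database of prior fact-checks. The sketch below is a bare-bones illustration — real systems use semantic matching rather than exact strings, and the database entry is invented:

```python
import re

# Tiny invented database mapping a normalized claim to a prior fact-check.
FACT_CHECK_DB = {
    "the moon landing was faked": "False - extensively debunked.",
}

def normalize(text):
    """Lowercase and strip punctuation so trivial variants still match."""
    return re.sub(r"[^a-z ]", "", text.lower()).strip()

def robo_check(statement):
    """Return the stored fact-check for a claim, if one is on file."""
    return FACT_CHECK_DB.get(normalize(statement), "No fact-check on file.")

print(robo_check("The Moon landing was FAKED!"))  # False - extensively debunked.
```

The hard parts the article alludes to — recognizing paraphrases, growing the database, deciding when a match is close enough to surface — are exactly what separates this toy from a deployable checker.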

Of course, outrage addicts and trolls and hyper-partisans will continue to seek out fake news and conspiracy theories, and some of them will dismiss the whole idea of fact-checking as spurious. The disinformation industry will try to trick and evade the checkers. Charlatans will continue to say whatever they please, foreign meddlers will continue trying to flood the information space with junk, and hackers of our brains will continue to innovate. The age-old race between disinformation and truth will continue. But disinfotech will never again have the field to itself. Little by little, yet faster than you might expect, digital technology is learning to tell the truth.

Was Shakespeare a woman?
By Dominic Green

The ‘case’ for anyone but Shakespeare is always a fantasy in pursuit of facts. Winkler’s article, like every case for Shakespeare not having been Shakespeare, repeatedly commits the elementary error of historical writing: treating absence of evidence as evidence of absence. It is strange that Shakespeare doesn’t refer to books in his will. But it doesn’t mean that he didn’t read. Hitler, after all, did not attend the Wannsee Conference. But that doesn’t mean he didn’t order the Holocaust.

‘Everything in here was rigorously fact-checked by The Atlantic,’ Winkler insists. In which case, The Atlantic has further reason to be embarrassed.

Conspiracism at the Atlantic
By Oliver Kamm

The Atlantic may think that the anti-Shakespeare campaigners offer an entertaining diversion. Its editors certainly failed to pick up the failings of Winkler’s research, yet I believe the issue is a lot more serious than that. These are dark times for liberal values of critical inquiry, reason and science. The magazine has given vent to an entirely worthless conspiracy theory without checking its provenance or veracity. As its editors know well, conspiracy theories have an ineluctable tendency to expand their horizons.

Exclusive: The true origins of the Seth Rich conspiracy theory. A Yahoo News investigation.
By Michael Isikoff

The conspiracy claims reached their zenith in May 2017 — the same week as Mueller’s appointment as special counsel in the Russia probe — when Fox News’ website posted a sensational story claiming that an FBI forensic report had discovered evidence on Rich’s laptop that he had been in communication with WikiLeaks prior to his death. Sean Hannity, the network’s primetime star, treated the account as major news on his nightly broadcast, calling it “explosive” and proclaiming it “might expose the single biggest fraud, lies, perpetrated on the American people by the media and the Democrats in our history.”

Among Hannity’s guests that week who echoed his version of events was conservative lawyer Jay Sekulow. Although neither he nor Hannity mentioned it, Sekulow had just been hired as one of Trump’s lead lawyers in the Russia investigation. “It sure doesn’t look like a robbery,” said Sekulow on Hannity’s show on May 18, 2017, during a segment devoted to the Rich case. “There’s one thing this thing undercuts is this whole Russia argument, [which] is such subterfuge,” he added.

In fact, the Fox story was a “complete fabrication,” said Sines, who consulted with the FBI about the Fox News claims. There was “no connection between Seth and WikiLeaks. And there was no evidence on his work computer of him downloading and disseminating things from the DNC.” (A spokeswoman for the FBI’s Washington field office said the office had never opened an investigation into Rich’s murder, considering it a local crime for which the Washington Metropolitan Police Department had jurisdiction. Andrew McCabe, the FBI’s acting director at the time, said in an interview that he reached out to his agents after he heard about the conspiracy stories about Rich and was told, “There’s no there there.”)

After eight days of controversy, Fox News was forced to retract the story after one of its two key sources, former Washington, D.C., homicide detective Rod Wheeler, backed away from comments he had given the Fox News website reporter Malia Zimmerman and a local Fox affiliate reporter confirming the account. The article, the network said in a statement at the time, “was not initially subjected to the high degree of editorial scrutiny we require for all our reporting.” Fox News later announced it was conducting an internal investigation into how the story came to be posted on its website. The results have never been disclosed, and a spokeswoman for Fox News declined to comment, citing ongoing litigation against the news network brought by the Rich family.

How conspiracy theories followed man to the Moon
By Frédéric Pouchot

Academic Didier Desormeaux, who has written widely on conspiracy theories, said the more important an event, the more likely it is to attract outrageous counter-narratives.

“Conquering space was a major event for humanity. Undermining that can shake the very foundations of science and man’s mastery of nature,” he told AFP, making it a huge target for conspiracists.

While earlier conspiracy theories also involved images — such as the assassination of US president John F Kennedy in 1963, and the so-called Roswell UFO incident — “what is new about these rumours is that they are based on a minute deconstruction of the images sent back by NASA,” the French specialist insisted.

For Desormeaux it is the first time a “conspiracy theory was built entirely around the visual interpretation of a media event — which they denounce entirely as a set-up.”

The same logic has been used repeatedly to dismiss school massacres in the US as fake, he added, with hardcore conspiracists claiming that the dead “are played by actors”.

“Images can anaesthetise our capacity to think” when deployed with ever more twisted leaps of logic, Desormeaux warned.

“The power of such theories is that no matter what they survive, because they become a belief which comes with a kind of evangelism and so they can go on forever,” he added.

How a James Comey Tweet Upended a Small California Town
By Niraj Chokshi

Two days earlier, on April 27, Mr. Comey had shared a tweet listing a handful of jobs he had held in the past alongside the hashtag #FiveJobsIveHad.

Hundreds of others had done the same before and since, but a small fringe group of conspiracy theorists seized on the tweet, claiming that it contained a coded message.

By removing letters, the hashtag could be shortened to “Five Jihad,” they argued. And a search for the abbreviation formed by the first letters of the jobs he listed, G.V.C.S.F., led to the Grass Valley Charter School Foundation, whose fund-raiser was scheduled for this weekend.

Mr. Comey, they concluded, was broadcasting an attack, perhaps as a distraction from other pending news.

The Grass Valley police quickly determined that the theory was baseless and that the school, with about 500 students from prekindergarten through eighth grade, was under no threat.

“We definitely did our due diligence,” said Alex Gammelgard, the chief of police. “Every single potential piece we did pointed to the same thing: that it was not credible.”

Meanwhile, the online community that spread the false conspiracy theory to begin with has already found a new subject to investigate, according to Mr. Rothschild.

“This stuff moves so fast,” he said. “They’ve already moved on.”

How the anti-vaccine movement crept into the GOP mainstream
By Arthur Allen

Vaccination was not a partisan issue in the past and even today, in states where vaccination hasn’t become politicized, GOP governments are sometimes as likely as Democratic ones to tighten vaccine requirements. Wyoming, for instance, is deeply conservative, but its state health department in a little-noticed decision last year created an immunization registry, added two vaccines to a list of school-entry requirements, and required home-schooled children to be vaccinated if they want to participate in sports or theater.

In neighboring Colorado, though, opposition to vaccine requirements became an attractive issue for conservatives, a minority in the state Legislature. Colorado has one of the country’s lowest rates of vaccinated kindergartners, but when Democrats tried to pass a modest bill requiring parents to take their vaccination exemption forms to the health department, hundreds came out to testify against it. The witnesses ranged from conservative Christians to parents with children they think were hurt by vaccines, to “natural living” types who don’t want vaccines to muck around with the immune system. But with a few exceptions, it was Republicans who helped stall and kill the bill.

Bastion of Anti-Vaccine Fervor: Progressive Waldorf Schools
By Kimiko de Freytas-Tamura

Ed Day, the Rockland County executive, has lashed out at the Green Meadow parents and other opponents of vaccine mandates, calling them “mentally nebulous.”

“The anti-vaccination movement is a serious threat to public health,” he said.

The Green Meadow Waldorf School in Rockland County, about 25 miles northwest of New York City, costs roughly $25,000 a year in tuition and is grounded in an educational philosophy that frowns upon rote learning.

The Waldorf method encourages children to learn at their own pace — textbooks are banned until the sixth grade, and technology and smartphones are prohibited altogether. Dance and arts are emphasized as tools to learn, for example, the alphabet.

The Waldorf schools were founded in the early 20th century on the teachings of Rudolf Steiner, a charismatic Austrian educator who preached a philosophy called “anthroposophy” that included eccentric medical theory.

He taught that diseases were influenced by “astral bodies” and that humans can also breathe through their skin. While he did not completely reject the vaccines against smallpox and diphtheria used in his day, he said rosemary baths were better for diphtheria and that smallpox could be avoided by being mentally prepared to confront it.

There are about 150 Waldorf schools in North America, including several in the New York region.

Over the last two decades, Waldorf schools across the country have had a spate of disease outbreaks, which is why they are the focus of concern in the measles epidemic.

A Waldorf school in North Carolina had an outbreak of chickenpox in November, the worst the state had seen since 1995. In 2016, a Waldorf school in Calgary, Canada, was affected by an outbreak of highly contagious whooping cough.

Vicki Larson, a spokeswoman for Green Meadow, said the school followed the law and that it was up to parents to decide whether to vaccinate their children.

In interviews, parents at Green Meadow said their skepticism about vaccines was not rooted in ignorance but in their own research. They said they scoured the internet and public libraries for vaccine findings, then shared their conclusions with one another.

Elizabeth, who said she has a master’s degree in public health policy, called for comparative studies between vaccinated and unvaccinated children over a prolonged period of time.

“Let’s put it on and see how healthy everyone is,” she said.

Vaccine advocates noted that such studies had already been done.

Doctors in Denmark and other countries with national health systems and central medical records have followed hundreds of thousands of children for decades and concluded that vaccines are safe, prevent diseases and do not cause autism.

Some of the parents said they distrusted the medical profession and the Centers for Disease Control and Prevention.

The World’s Many Measles Conspiracies Are All the Same
By Laurie Garrett

Most vaccine refusal worldwide goes hand in hand with public distrust in government. The nature of the anti-vaccination fury differs from one place to another and even within communities. False rumors that pig tissues might have been used in the production of a vaccine are enough to bring parental acceptance to a halt in Muslim countries. Since 2001, the rate of vaccine refusal in the United States has climbed fourfold. Declining to vaccinate their children, parents cite everything from excessive drug company profits and hidden mercury contamination to fear of their youngsters developing autism and general opposition to being told by the government to have their children poked with needles. Some dog owners even refuse to have their pets vaccinated, fearing they will have an autistic pooch on their hands.

Since the earliest 18th-century days of smallpox variolation—a crude form of immunization that preceded vaccine invention—there have been opponents and refuseniks. Many religious groups in the United States, such as the Amish, some Orthodox Jewish sects, and Christian Scientists, have long opposed vaccination. New York is currently in the grips of a measles outbreak, now totaling 133 cases, spreading primarily within a Hasidic Jewish community that rejects vaccines. Japan is also battling a measles epidemic, with 167 cases as of Feb. 10; nearly a third of those cases stemmed from the Miroku Community Kyusei Shinkyo, a religious group that opposed many aspects of modern medicine. (The group has issued an apology, vowing to cease opposition to vaccination.)

But the global anti-vaccination movement that predominantly confronts public health advocates today is dominated by highly educated, typically well-heeled individuals, such as the wealthiest residents of posh West Los Angeles communities like Santa Monica, Brentwood, and Beverly Hills, where rates of child vaccination are as low as those seen in civil war-torn South Sudan. Or consider the residents of Clark County, Washington, where only 78 percent of children are fully immunized. Thousands of parents have taken advantage of a state law allowing “philosophical or personal objection to the immunization of the child.” Clark County has had 70 confirmed measles cases this year, prompting the governor of Washington to declare a public health emergency and the state legislature to now consider a bill that would eliminate philosophical objections as grounds for refusing vaccination.

Among the affluent and poorly vaccinated Silicon Valley crowd, an absolutely false myth keeps employees of Amazon, Microsoft, Google, and the like from immunizing their children: that the measles vaccine causes autism. A newly published study of 657,461 children born in Denmark from 1999 to 2010, with follow-up through late 2013, found that the vaccine “does not trigger autism in susceptible children, and is not associated with clustering of autism cases after vaccination.” The Danish study is merely the latest of a long list of research efforts that, according to the American Academy of Pediatrics, “find vaccines to be a safe and effective way to prevent serious disease.”

Vaccine Misinformation vs. Tighter State Laws: Guess What Wins
By Brendan Nyhan

A teenager testified before Congress on Tuesday that he got vaccinated in defiance of his mother, who he said got her anti-vaccination views from social media. Although this kind of misinformation can endanger public health, it’s not obvious that social media is substantially increasing overall vaccine hesitancy. Despite rapid growth in the proportion of Americans using social media sites, flu vaccination rates and infant immunization levels have largely remained stable in recent years. Moreover, fears about and resistance to vaccination are not new; they date to the late 18th century, when the first vaccine was developed.

Social media may simply provide a new pretext for hesitant parents who would otherwise cite a different reason for their decision. In other words, we may be mistakenly treating what is largely a symptom of vaccine hesitancy as its cause — an example of a recurring pattern in which we fault social media for causing problems that it is merely making more visible. (For instance, the internet and social media are often blamed for fueling political polarization, but the trend toward greater polarization long predates social media and is sharpest among older people, the group least likely to use new technology.)

In Mod We Trust
By Scott Alexander

The Verge writes a story (an exposé?) on the Facebook-moderation industry.

It goes through the standard ways it maltreats its employees: low pay, limited bathroom breaks, awful managers – and then into some not-so-standard ones. Mods have to read (or watch) all of the worst things people post on Facebook, from conspiracy theories to snuff videos. The story talks about the psychological trauma this inflicts:

It’s an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions…where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers…

It’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

While I was writing the article on the Culture War Thread, several of the CW moderators told me that the hard part of their job wasn’t keeping the Thread up and running and well-moderated; it was dealing with the constant hectoring that they had made the wrong decision. If they banned someone, people would say the ban was unfair and they were tyrants and they hated freedom of speech. If they didn’t ban someone, people would say they tolerated racism and bullying and abuse, or that they were biased and would have banned the person if they’d been on the other side.

If I sound a little bitter about this, it’s because I spent four years working at a psychiatric hospital, helping create the most dehumanizing and totalitarian environment possible. It wasn’t a lot of fun. But you could trace every single rule to somebody’s lawsuit or investigative report, and to some judge or jury or news-reading public that decided it was outrageous that a psychiatric hospital hadn’t had a procedure in place to prevent whatever occurred from occurring. Some patient in Florida hit another patient with their book and it caused brain damage? Well, that’s it, nobody in a psych hospital can ever have a hardcover book again. Some patient in Delaware used a computer to send a threatening email to his wife? That’s it, psych patients can never use the Internet unless supervised one-on-one by a trained Internet supervisor with a college degree in Psychiatric Internet Supervision, which your institution cannot afford. Some patient in Oregon managed to hang herself in the fifteen minute interval between nurses opening her door at night to ask “ARE YOU REALLY ASLEEP OR ARE YOU TRYING TO COMMIT SUICIDE IN THERE?” Guess nurses will have to do that every ten minutes now. It was all awful, and it all created a climate of constant misery, and it was all 100% understandable under the circumstances.

I’m not saying nobody should ever be allowed to do investigative reporting or complain about problems. But I would support some kind of anti-irony rule, where you’re not allowed to make extra money writing another outrage-bait article about the outrages your first outrage-bait article caused.

… I find this article interesting because it presents a pessimistic view of information spread. Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little. If it’s true, how come more people haven’t been infected? How come Facebook moderators don’t believe the debunking of the conspiracy theories instead? Is it just that nobody ever reports those for mod review? Or is this whole phenomenon just an artifact of every large workplace (the article says “hundreds” of people work at Cognizant) having one or two conspiracy buffs, and in this case the reporter hunted them down because it made a better story?

Five myths about conspiracy theories
By Rob Brotherton

In the first few days of August 2018, mainstream news headlines described an emerging conspiracy theory as “bizarre,” “dangerous,” “terrifying” and a “deranged conspiracy cult.” The movement, one Post columnist wrote, “is scary because it’s getting bigger, it’s scary because we don’t know how to stop it, and it’s scary because the people behind it won’t be stopped.” Yikes.

The catastrophizing headlines were part of a broader tendency to paint conspiracism as a creeping contagion that “manages to insinuate itself in the most alert and intelligent minds,” as historian Daniel Pipes charitably put it, or as “mumbo jumbo” that has already “conquered the world,” as journalist Francis Wheen bluntly asserted.

Those articles last summer were about QAnon, a loose collection of cryptic nonsense that started online and manifested as a handful of people showing up at President Trump’s rallies with homemade signs and shirts. Though the articles implied a substantial number of believers, none reported any data. Subsequent polling showed that many people — 4 in 10 — hadn’t heard of QAnon or didn’t know enough to have an opinion. Among those who knew about it, it was viewed overwhelmingly unfavorably (by both Democrats and Republicans). An analysis of the QAnon subreddit showed that a tiny but vocal contingent of boosters was making almost all the noise about it on the forum. Most people who engage with ideas like this just sit back and watch, probably treating the theories as a curiosity or entertainment.

Of course, ideas can have consequences. Timothy McVeigh, the Oklahoma City bomber, saw himself as resisting a government conspiracy to grab the citizenry’s guns; a man who believed in Pizzagate, a conspiracy theory about a D.C. pizzeria, fired a gun in the restaurant and terrorized its patrons, who included children. But these men are the exception, not the rule. Hyping up every conspiracy theory into an existential threat without any evidence is, ironically, the same habit of mind that produces conspiracy theories in the first place.

Conspiracy Theories Can’t Be Stopped
By Maggie Koerth-Baker

… this is where conspiracy beliefs start to get tangled up with truth. Because history does contain real examples of conspiracy. Pizzagate was a dangerous lie that led an armed man to walk into a family restaurant, convinced he was there to rescue children from pedophilic members of the Democratic Party. But that incident also exists in the same universe as the Tuskegee experiments, redlining and the Iran-Contra Affair. “I have this conspiracy that Western governments are involved in an international spying ring,” Wood said. “Before about 2014 that would have made you a conspiracy theorist. Now we know it’s true.”

Summoning — and demonizing — the belief in conspiracies can also have political consequences. “During the Bush Administration, the left was going fucking bonkers … about 9/11 and Halliburton and Cheney and Blackwater and all this stuff,” Uscinski said. “As soon as Obama won they didn’t give a shit about any of that stuff anymore. They did not care. It was politically and socially inert.” In turn, conspiracy theories about Obama flourished on the right. Uscinski said he is frustrated by this tendency for partisans to build up massive conspiracy infrastructures when they are out of power, only to develop a sudden amnesia and deep concern about the conspiracy-mongering behavior of the other side once power is restored. It’s a cycle, he said, that threatened to make social science a tool of partisan slapfights more than a standard of truth. And in a 2017 paper, he argued that conspiracy beliefs could even be useful parts of the democratic process, calling them “tools for dissent used by the weak to balance against power.”

These issues have led more scientists to question what the goals of conspiracy-belief research should actually be. Do we want an entire field of study aimed at preventing conspiracy theories from forming and dispelling the ones that do?

“I don’t think so,” Wood said. “I’m sure some people would disagree with me on that. But the objective shouldn’t be nobody speculates about people in power abusing power. That’s a terrible outcome for the world.”

Liberals and Conservatives Are Both Susceptible to Fake News, but for Different Reasons
By Scott Barry Kaufman

The researchers found some asymmetries, however. Conservatives who scored high in faith in intuition (i.e., those who tend to think with their gut instincts) had higher perceptions of the legitimacy of fake news, although this variable had little effect on the judgments of liberals. The researchers suggest that conservatives may, on average, be the most susceptible to falling prey to fake news stories, considering that they are the group most likely to be exposed to such material online, and they are also the group with the highest average levels of faith in intuition.

However, liberals aren’t off the hook: they are statistically more likely to draw on their investment in the righteousness of their political viewpoints to believe politically consistent news stories, and on their higher need for cognition to delegitimize politically inconsistent ones. The researchers found that liberals who scored higher in a measure of “collective narcissism” — which measures a tendency to invest in, and perceive superiority of, your political views — showed exaggerated legitimacy judgments for the politically consistent (e.g., anti-Trump) fake news stories. This finding is interesting because it suggests that collective narcissism is not only a right-wing populist phenomenon.

Taken together, all of these findings are consistent with an identity-based approach to the understanding of politically and ideologically motivated engagement with “fake news.” It’s clear that we must view fake news engagement through a motivated reasoning lens, and that both conservatives and liberals can fall prey to fake news, even though the underlying motives may differ within each group.

These findings further emphasize the importance of really thinking through how the spread of political misinformation at a societal level can impact the political landscape. As the researchers note, “it might not be enough to ask people to think more critically about political views. Instead, we might look to reduce the effects of online echo chambers and facilitate greater levels of communication between those with opposing political outlooks.”

While social media has the potential to divide, we must not forget that it also has the potential to expose people to ideologically diverse viewpoints.

How we made a monster
By Tom Chivers

The whole incentive structure of social media rewards bullshit and partisanship and tribalism. Exciting, interesting stories get upvoted or retweeted or liked, and it’s much easier to come up with exciting, interesting stories if you’re not constrained by whether or not they actually happened.

More subtly, controversy is incentivised. Stuff that everyone agrees with, people read, say “obviously”, and move on. Stuff that’s controversial gets one lot of people sharing it to signal that they are members of the tribe that believes that stuff, and the other lot hate-sharing it to show that they aren’t. You don’t get big fights on Twitter any more about gay marriage, because even Conservatives are in favour of gay marriage. Instead you get big fights about statements like “biological sex isn’t real” or “we should arm primary school teachers”, because a strident opinion on that marks out tribal allegiances much more clearly.

Why Your Opinions Usually Aren’t Your Own (And How You Can Fix That)
By Clay Skipper

We used to have trusted sources of information, and now, with the democratization of voices and platforms, it feels like we don’t necessarily know which sources we can trust anymore. If it only takes one person who’s confident to sway public opinion, and everyone is loud and confident, aren’t we at greater risk of conforming to something that’s just wrong?
Here’s one way to think about it. When you go outside and ask somebody what time it is, you expect them to tell you the truth. If they say it’s 3:00, you’ll go on thinking it’s 3:00. So we are wired, either by evolution or by culture, to credit what people say. Discounting what people say — because it may be driven by economic self-interest, political motives, or rage — doesn’t come easily to us. But if you’re wired to believe members of the human species, then in the new circumstances you described, you can get into a lot of trouble.

I do constitutional law. And if I read something in a very good newspaper about a Supreme Court decision, I’ll often think, “No, they got that wrong.” Just because they’re doing a million other things. But then I’ll be reading something on occupational safety or environmental protection [in the same newspaper], and I’ll just be nodding and saying, “Oh, yeah. That’s right.” It’s a tough mental operation. I think you’d get vertigo if you felt all the time that what you’re hearing from others is not reliable.

Generally, if you choose the right sources or reasonable sources, it’s going to be okay. But there is that risk that you’ll end up believing things that just aren’t so. For example, I heard from a friend who worked in the White House, a staunch Democrat and a very strong opponent of President Bush. He said, “After working here for six months, I think that two-thirds of the things that I hate most about the Bush Administration aren’t true. I was just conforming to other people.” He had heard things reported about his own work that were off.

Think Republicans are disconnected from reality? It’s even worse among liberals
By Arlie Hochschild

In a surprising new national survey, members of each major American political party were asked what they imagined to be the beliefs held by members of the other. The survey asked Democrats: “How many Republicans believe that racism is still a problem in America today?” Democrats guessed 50%. It’s actually 79%. The survey asked Republicans how many Democrats believe “most police are bad people”. Republicans estimated half; it’s really 15%.

The survey, published by the thinktank More in Common as part of its Hidden Tribes of America project, was based on a sample of more than 2,000 people. One of the study’s findings: the wilder a person’s guess as to what the other party is thinking, the more likely they are to also personally disparage members of the opposite party as mean, selfish or bad. Not only do the two parties diverge on a great many issues, they also disagree on what they disagree on.

You’re probably making incorrect assumptions about your opposing political party
By Arthur C. Brooks

Let’s take a specific example of what this means in the case of a contentious issue like immigration, which continues to roil American politics. Despite a recent ugly rally and series of tweets from the president, the data show that, in fact, a strong majority of Republicans believe that properly controlled immigration can be good for the country. They also show that a strong majority of Democrats disagree that the United States should have completely open borders. In other words, while left and right differ on immigration, those holding extreme views are a minority in both parties. However, Republicans think a majority of Democrats believe in open borders while Democrats think a majority of Republicans believe immigration is bad for the United States. The perception gap is 33 percentage points on each side.

And that is the perception gap for the average Democrat or Republican. Strong partisans — progressive activists and devoted conservatives — are most inaccurate in their perceptions of the other side, reaching more than 45 percentage points on extremely divisive issues.

But aren’t strong partisans the most informed, consuming a lot of media about politics? And shouldn’t all that information, well, inform? Maybe not so much. People who consume news media “most of the time” are almost three times as inaccurate in their understanding of others’ views as those who consume news “only now and then,” the study found. This is almost certainly a function of partisans’ compulsive consumption of media sources that support their existing biases. Your political IQ is probably higher after watching reruns of “Full House” than hour after hour of political TV shows.

Heavy social-media use has the same negative effect on viewpoint accuracy. The perception gap is about 10 percentage points higher for those who have shared political content on social media in the past year than those who haven’t. That isn’t much of a shock. Consider, for example, that only about 22 percent of U.S. adults are on Twitter, and 80 percent of the tweets come from 10 percent of users. If you rely on Twitter for political information, you are being informed by ersatz pundits (and propaganda bots) residing within 2.2 percent of the population.
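That 2.2 percent figure isn’t stated in the survey itself; it’s just the product of the two statistics cited, which a two-line check confirms:

```python
adults_on_twitter = 0.22  # share of U.S. adults with a Twitter account
top_tweeters = 0.10       # share of Twitter users producing 80 percent of tweets

# The loud minority, as a share of the whole adult population.
share_of_population = adults_on_twitter * top_tweeters
print(f"{share_of_population:.1%}")  # prints 2.2%
```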

Democrats and Republicans are very bad at guessing each other’s beliefs
By Amanda Ripley

Some amount of the time, we are fighting ghosts, not real people. And the more inaccurate our perceptions, the more likely we are to describe our opponents as “hateful” and “brainwashed,” the study found.

Meanwhile, two major institutions meant to help us become more informed — the news media and college — seem to be having the opposite effect. The more time Americans spend consuming news from a range of sources, the more inaccurate their views of the other side. Of the 13 kinds of news media considered in the study, the worst offenders, correlating with the most skewed perceptions, were Breitbart News, the Drudge Report and conservative talk radio. Nine others were also associated with inaccurate perceptions — including the New York Times, The Post, Slate, BuzzFeed and Fox News. Only one — network news — seemed to have a positive effect.

Additionally, the more education that Democrats, in particular, acquire, the more ignorant they seem to be about Republicans. Democrats with a postgraduate degree are three times as inaccurate in their perceptions of Republicans as Democrats who dropped out of high school. Interestingly, education does not seem to have this effect on Republicans.

Is America Hopelessly Polarized, or Just Allergic to Politics?
By Samara Klar, Yanna Krupnikov and John Barry Ryan

True polarization is when you dislike the other party and really like your own party. Most people do not care enough about politics to say they are “happy” simply because their child is marrying someone from their political party. So when people in our studies were asked to consider a future in-law who rarely discussed politics, only 15 percent could be considered truly polarized. While this number grew to 25 percent among people who have strong connections to their party, it shrank to 10 percent of weak partisans.

Polarization is also low among weak partisans when they are told that the in-law will frequently talk about politics. Weak partisans aren’t happy with an in-law from the opposing party discussing politics, but many are just as unhappy with an in-law from their own party who insists on political conversation.

Other studies suggest, much like ours, that somewhere between 15 percent and 20 percent of Americans are truly polarized. In the 2018 American Family Survey, for example, only 21 percent of participants reported that it was important for a married couple to be of the same party. Meanwhile, 81 percent said agreement in feelings about children was pivotal to a marriage.

Looking beyond marriage, a survey conducted by The New York Times found that only 16 percent of people placed their political party membership among the top three terms they used to describe themselves. When the political scientists James Druckman and Matthew Levendusky asked Americans to rank six identities in the order of personal importance, partisanship tied for last (alongside class).

Some people might genuinely hate the other party. These people may get the most attention, but they are also outnumbered by the majority who just want to discuss things other than politics.

The Battle for American Minds
By Amy Zegart

For years, studies have found that individuals come to faulty conclusions even when the facts are staring them in the face. “Availability bias” leads people to believe that a given future event is more likely if they can easily and vividly remember a past occurrence, which is why many Americans say they are more worried about dying in a shark attack than in a car crash, even though car crashes are about 60,000 times as likely. “Optimism bias” explains why NFL fans are bad at predicting the wins and losses of their favorite teams. It also helps explain why so many experts, and the online betting markets, were surprised by Brexit even though 35 polls taken weeks before the referendum were about evenly split (17 showed “Leave” ahead, while 15 showed the “Remain” side ahead) and found the referendum outcome too close to call. “Attribution error” clarifies why foreign-policy leaders so often attribute an adversary’s behavior to dispositional variables (they meant to do that!) rather than situational variables (they had no choice, given the circumstances). It’s also why college students tend to think “I got an A” if they did well in a course, but “The professor gave me a C” if they did poorly.

In an information-warfare context, we need to understand not just why people are blind to the facts, but why they get duped into believing falsehoods. Two findings seem especially relevant.

The first is that humans are generally poor deception detectors. A meta-study examining hundreds of experiments found that humans are terrible at figuring out whether someone is lying based on verbal or nonverbal cues. In Hollywood, good-guy interrogators always seem to spot the shifty eye or subtle tell that gives away the bad guy. In reality, people detect deception only about 54 percent of the time, not much better than a coin toss.

The second finding is that older Americans are far more likely to share fake news online than are younger Americans. A study published in January found that Facebook users 65 and older shared nearly seven times as many fake-news articles as 18-to-29-year-olds—even when researchers controlled for other factors, such as education, party, ideology, and overall posting activity. One possible explanation for this age disparity is that older Facebook users just aren’t as media savvy as their grandkids. Another is that memory and cognitive function generally decline with age. It’s early days, and this is just one study. But if this research is right, it suggests that much of the hue and cry about Millennials is off the mark. Digital natives may be far more savvy than older and wiser adults when it comes to spotting and spreading fake news.

Younger Americans are better than older Americans at telling factual news statements from opinions
By Jeffrey Gottfried and Elizabeth Grieco

This stronger ability to classify statements regardless of their ideological appeal may well be tied to the fact that younger adults – especially Millennials – are less likely to strongly identify with either political party. Younger Americans also are more “digitally savvy” than their elders, a characteristic that is also tied to greater success at classifying news statements. But even when accounting for levels of digital savviness and party affiliation, the differences by age persist: younger adults are still better than their elders at distinguishing factual news statements from opinion ones.

Beyond digital savviness, the original study found that two other factors have a strong relationship with being able to correctly classify factual and opinion statements: having higher political awareness and more trust in the information from the national news media. Despite the fact that younger adults tend to be less politically aware and trusting of the news media than their elders, they still performed better at this task.

When age is further broken down into four groups, the two youngest age groups – 18- to 29-year-olds and 30- to 49-year-olds – are almost matched in their ability to correctly categorize all five factual and all five opinion statements, and both outpaced those in the two older age groups – 50- to 64-year-olds and those ages 65 and older.

Millennials fall for financial scams more than any other age group
By Zak Guzman

Millennials are more prone to lose money in financial scams than their elders, according to newly released government data.

The Federal Trade Commission reported last week in its annual data summary of consumer complaints that 40 percent of Americans in their 20s who reported fraud in 2017 also said they lost money. By contrast, only 18 percent of victims aged 70 or older reported losing money.

The dollar value associated with the fraud complaints was much higher for those aged 70 and older, however. Those in their 20s reported a median loss of $400, compared with $621 for those in their 70s and $1,092 for those 80 and up.

In all, the data collected by the federal watchdog includes 2.7 million complaints, down slightly from the 2016 total. Dollar losses reported from fraud, however, increased by $63 million from the previous year, to almost $905 million.

Cyber fraud techniques evolve into confidence trick arms race
by Siddharth Venkataramakrishnan

Marcel Carlsson, a security consultant and researcher, says that the amount of information available online has made planning spear phishing attacks easier. “Back in the day people would go through garbage to find letters,” he explains. “Now you have Google Maps, companies post their information on Facebook and people put personal stuff on social media.”

There are a number of options to mitigate the risk from social engineering attacks. At a basic level, email providers have long attempted to filter potentially harmful messages.

Another approach comes from Prof Harris and Mr Carlsson, who have developed a tool to fight fraudsters. It runs on the premise that every example of social engineering has one of two “punchlines”.

“You either have to ask someone a question about private info, or you tell them to do something,” says Prof Harris. The algorithm has been trained on 100,000 phishing emails to highlight the most common examples of attacks.

“It takes each sentence and detects, if it is a question, whether it has a private answer. If it is a command, it looks it up against its blacklist,” Prof Harris says. Mr Carlsson adds that the system can be customised for different sectors with varying buzzwords to maximise its efficiency.
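As described, the algorithm reduces to a per-sentence check against the two punchlines. A minimal sketch of that premise (with hypothetical hand-written keyword lists standing in for the model trained on 100,000 phishing emails) might look like:

```python
import re

# Toy stand-ins for the trained components: a lexicon of private-info topics
# and a blacklist of risky imperatives. These lists are illustrative only.
PRIVATE_INFO_TERMS = {"password", "ssn", "pin", "account number", "credit card"}
COMMAND_BLACKLIST = {"click", "wire", "transfer", "download", "verify"}

def flag_sentences(text):
    """Flag each sentence that matches a social-engineering 'punchline':
    a question probing for private info, or a blacklisted command."""
    flagged = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.?!])\s+", text.strip()):
        lowered = sentence.lower()
        if sentence.endswith("?"):
            # Punchline 1: a question whose answer would be private.
            if any(term in lowered for term in PRIVATE_INFO_TERMS):
                flagged.append(("private-question", sentence))
        elif any(cmd in lowered for cmd in COMMAND_BLACKLIST):
            # Punchline 2: an instruction that appears on the blacklist.
            flagged.append(("blacklisted-command", sentence))
    return flagged
```

Swapping the keyword sets per sector mirrors the customisation Mr Carlsson mentions; the real tool presumably learns these signals from the email corpus rather than hard-coding them.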

Mr Carlsson says that while there is a growing awareness of the technical component of cyber security, there is also a need for businesses to improve the human aspect. “Businesspeople think [cyber security] is about IT, but [social engineering] is a risk to a business like any other.”

Prof Harris agrees, pointing out how social engineering often exploits the way society works. “It will always be there, the human element: people are usually eager to help you in their jobs.” One way to mitigate the risk is through “red teaming” — having cyber security professionals pretend to be attackers and test how prepared employees are to repel fraudsters.

The internet is not the only avenue for social engineering, with individuals also facing attacks over the phone and in person. Prof Harris found in his experiments that about one-quarter of his students fell for social engineering attacks over the phone.

The Truth About Pinocchio’s Nose
By John Hooper and Anna Kraczyna

Ask people the moral of the Pinocchio fable and doubtless most will say it is a cautionary tale about lying. Yet the puppet’s famously extending nose does not feature as a lie detector at any point in the original series, which ended in grim fashion with two villains hanging Pinocchio from a tree to die. Such was the popularity of the puppet’s story that Lorenzini was asked to resume the series. It was only in the second run that Pinocchio’s nose grew when he told a lie — and not always then.

In fact, the driving theme of the story is the importance of education, for which Lorenzini was a passionate advocate. What leads Pinocchio from one misadventure to the next is his reluctance to go to school. The consequences of not getting an education in late-19th-century Italy are shockingly exemplified in one of the most sinister episodes of “The Adventures”: Pinocchio and a friend go to Toyland, thinking of it as a kind of paradise. But once there, they are turned into donkeys. Pinocchio narrowly escapes being slaughtered for his hide, but his friend is worked to death — the fate that, in a less dramatic form, awaited many unskilled laborers in Lorenzini’s day.

In Italian, the word for donkey is applied both to those who are worked to the point of exhaustion, or, indeed, death, and those who don’t do well at school — not necessarily because they are stupid, but because they refuse to study. Lorenzini’s point is that being a donkey at school leads to working like a donkey afterward. The only way to avoid living the life (and maybe dying the death) of a donkey is to get an education.

Education is also fundamental to the story’s fairy-tale conclusion, in which Pinocchio ceases to be a puppet and becomes a boy. Seven chapters from the end, he goes to school, excels at his studies and is promised his humanity. But it’s then that he makes his near fatal mistake: choosing to go to Toyland, where he is turned not into a person but a donkey. After a further series of terrifying misadventures, he begins studying again, but it is only when he starts to take responsibility for himself and those he loves that he earns the right to become a human being.

The moral of the story, then, is not that children should always tell the truth, but that education is paramount, enabling both liberation from a life of brutal toil, and, more important, self-awareness and a sense of duty to others. The true message of “The Adventures” is that, until you open yourself to knowledge and your fellow human beings, you will remain a puppet forever — other people will continue to pull your strings. And what, in these increasingly authoritarian times, could be more ardently relevant than that?

The Peculiar Blindness of Experts
By David Epstein

The idea for the most important study ever conducted of expert predictions was sparked in 1984, at a meeting of a National Research Council committee on American-Soviet relations. The psychologist and political scientist Philip E. Tetlock was 30 years old, by far the most junior committee member. He listened intently as other members discussed Soviet intentions and American policies. Renowned experts delivered authoritative predictions, and Tetlock was struck by how many perfectly contradicted one another and were impervious to counterarguments.

Tetlock decided to put expert political and economic predictions to the test. With the Cold War in full swing, he collected forecasts from 284 highly educated experts who averaged more than 12 years of experience in their specialties. To ensure that the predictions were concrete, experts had to give specific probabilities of future events. Tetlock had to collect enough predictions that he could separate lucky and unlucky streaks from true skill. The project lasted 20 years, and comprised 82,361 probability estimates about the future.

The result: The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 percent of them occurred nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire. As the Danish proverb warns, “It is difficult to make predictions, especially about the future.”

Even faced with their results, many experts never admitted systematic flaws in their judgment. When they missed wildly, it was a near miss; if just one little thing had gone differently, they would have nailed it. “There is often a curiously inverse relationship,” Tetlock concluded, “between how well forecasters thought they were doing and how well they did.”

We Are All Confident Idiots
By David Dunning

Some of our most stubborn misbeliefs arise not from primitive childlike intuitions or careless category errors, but from the very values and philosophies that define who we are as individuals. Each of us possesses certain foundational beliefs—narratives about the self, ideas about the social order—that essentially cannot be violated: To contradict them would call into question our very self-worth. As such, these views demand fealty from other opinions. And any information that we glean from the world is amended, distorted, diminished, or forgotten in order to make sure that these sacrosanct beliefs remain whole and unharmed.

One very commonly held sacrosanct belief, for example, goes something like this: I am a capable, good, and caring person. Any information that contradicts this premise is liable to meet serious mental resistance. Political and ideological beliefs, too, often cross over into the realm of the sacrosanct. The anthropological theory of cultural cognition suggests that people everywhere tend to sort ideologically into cultural worldviews diverging along a couple of axes: They are either individualist (favoring autonomy, freedom, and self-reliance) or communitarian (giving more weight to benefits and costs borne by the entire community); and they are either hierarchist (favoring the distribution of social duties and resources along a fixed ranking of status) or egalitarian (dismissing the very idea of ranking people according to status). According to the theory of cultural cognition, humans process information in a way that not only reflects these organizing principles, but also reinforces them. These ideological anchor points can have a profound and wide-ranging impact on what people believe, and even on what they “know” to be true.

In 2006, Daniel Kahan, a professor at Yale Law School, performed a study together with some colleagues on public perceptions of nanotechnology. They found, as other surveys had before, that most people knew little to nothing about the field. They also found that ignorance didn’t stop people from opining about whether nanotechnology’s risks outweighed its benefits.

Why would this be so? Because of underlying beliefs. Hierarchists, who are favorably disposed to people in authority, may respect industry and scientific leaders who trumpet the unproven promise of nanotechnology. Egalitarians, on the other hand, may fear that the new technology could present an advantage that accrues to only a few people. And collectivists might worry that nanotechnology firms will pay insufficient heed to their industry’s effects on the environment and public health. Kahan’s conclusion: If two paragraphs of text are enough to send people on a glide path to polarization, simply giving members of the public more information probably won’t help them arrive at a shared, neutral understanding of the facts; it will just reinforce their biased views.

Then, of course, there is the problem of rampant misinformation in places that, unlike classrooms, are hard to control—like the Internet and news media. In these Wild West settings, it’s best not to repeat common misbeliefs at all. Telling people that Barack Obama is not a Muslim fails to change many people’s minds, because they frequently remember everything that was said—except for the crucial qualifier “not.” Rather, to successfully eradicate a misbelief requires not only removing the misbelief, but filling the void left behind (“Obama was baptized in 1988 as a member of the United Church of Christ”). If repeating the misbelief is absolutely necessary, researchers have found it helps to provide clear and repeated warnings that the misbelief is false. I repeat, false.

The most difficult misconceptions to dispel, of course, are those that reflect sacrosanct beliefs. And the truth is that often these notions can’t be changed. Calling a sacrosanct belief into question calls the entire self into question, and people will actively defend views they hold dear. This kind of threat to a core belief, however, can sometimes be alleviated by giving people the chance to shore up their identity elsewhere. Researchers have found that asking people to describe aspects of themselves that make them proud, or report on values they hold dear, can make any incoming threat seem, well, less threatening.

For example, in a study conducted by Geoffrey Cohen, David Sherman, and other colleagues, self-described American patriots were more receptive to the claims of a report critical of U.S. foreign policy if, beforehand, they wrote an essay about an important aspect of themselves, such as their creativity, sense of humor, or family, and explained why this aspect was particularly meaningful to them. In a second study, in which pro-choice college students negotiated over what federal abortion policy should look like, participants made more concessions to restrictions on abortion after writing similar self-affirmative essays.

Sometimes, too, researchers have found that sacrosanct beliefs themselves can be harnessed to persuade a subject to reconsider a set of facts with less prejudice. For example, conservatives tend to endorse environmental policies less strongly than liberals do. But conservatives do care about issues that involve “purity” in thought, deed, and reality. Casting environmental protection as a chance to preserve the purity of the Earth makes conservatives favor such policies much more, as research by Matthew Feinberg and Robb Willer of Stanford University suggests. In a similar vein, liberals can be persuaded to support increased military spending if such a policy is linked beforehand to progressive values like fairness and equity—by, for instance, noting that the military offers recruits a way out of poverty, or that military promotion standards apply equally to all.

But here is the real challenge: How can we learn to recognize our own ignorance and misbeliefs? To begin with, imagine that you are part of a small group that needs to make a decision about some matter of importance. Behavioral scientists often recommend that small groups appoint someone to serve as a devil’s advocate—a person whose job is to question and criticize the group’s logic. While this approach can prolong group discussions, irritate the group, and be uncomfortable, the decisions that groups ultimately reach are usually more accurate and more solidly grounded than they otherwise would be.

For individuals, the trick is to be your own devil’s advocate: to think through how your favored conclusions might be misguided; to ask yourself how you might be wrong, or how things might turn out differently from what you expect. It helps to practice what the psychologist Charles Lord calls “considering the opposite.” To do this, I often imagine a future in which I have turned out to be wrong in a decision, and then consider the likeliest path that led to my failure. And lastly: seek advice. Other people may have their own misbeliefs, but a discussion can often be enough to rid a serious person of his or her most egregious misconceptions.

“My-side bias” makes it difficult for us to see the logic in arguments we disagree with
By Christian Jarrett

“Our results show why debates about controversial issues often seem so futile,” the researchers said. “Our values can blind us to acknowledging the same logic in our opponent’s arguments if the values underlying these arguments offend our own.”

This is just the latest study to illustrate the difficulty we have in assessing evidence and arguments objectively. Related research that we’ve covered recently has also shown that our brains treat opinions we agree with as facts; that many of us overestimate our knowledge; that we’re biased to see our own theories as accurate; and that when the facts appear to contradict our beliefs, we turn to unfalsifiable arguments. These findings and others show that thinking objectively does not come easily to most people.

Why Putting Yourself in Their Shoes Might Backfire
By Sachin Waikar

“When you play the role of the opposition and advocate for the opposing view, you can actually be the one who changes the most in response to that advocacy,” Tormala says.

But taking the perspective of the opposition backfires when we perceive opponents as having value systems different from our own. Instead, thinking about people who are like us in general but disagree on a specific issue should drive the most change in our attitudes toward that issue.

This holds true across domains from politics to business. “Say a company has a hiring debate about filling a vacant leadership position and different groups have different opinions about the best fit,” Tormala says. “Taking the opposing side’s perspective might soften you toward their position and increase your willingness to engage — but only if you don’t see them as fundamentally different from you.”

Tormala offers parting advice on the inherent objective of attitude-change efforts: “People often act as if the goal of persuasion is to ‘flip’ someone to the opposite attitude, from 100 to zero. Efforts to create that kind of change can actually increase people’s conviction about their existing attitudes. It’s often more effective to think of your persuasion goal as trying to move someone a little bit away from the extreme, as a way to get them to open up a bit and potentially change their behavior. An inch of movement can be impactful if it facilitates a more receptive and cooperative mindset.”

The science of influencing people: six ways to win an argument
By David Robson

Here’s a lesson that certain polemicists in the media might do well to remember – people are generally much more rational in their arguments, and more willing to own up to the limits of their knowledge and understanding, if they are treated with respect and compassion. Aggression, by contrast, leads them to feel that their identity is threatened, which in turn can make them closed-minded.

Assuming that the purpose of your argument is to change minds, rather than to signal your own superiority, you are much more likely to achieve your aims by arguing gently and kindly rather than belligerently, and affirming your respect for the person, even if you are telling them some hard truths. As a bonus, you will also come across better to onlookers. “There’s a lot of work showing that third-party observers always attribute high levels of competence when the person is conducting themselves with more civility,” says Dr Joe Vitriol, a psychologist at Lehigh University in Bethlehem, Pennsylvania. As Lady Mary Wortley Montagu put it in the 18th century: “Civility costs nothing and buys everything.”

How to Win an Argument
By Jazmine Hughes

When it’s time to make your own argument, clarity is crucial. It’s easy to talk, Mashwama says, but listening is much harder, so anticipate at least some miscommunication. “You can’t be persuasive if the other person doesn’t understand you,” he says. But avoid flooding the other side with facts: They can overwhelm decision making. Ultimately, you don’t really convince people — people convince themselves. You just give them the means to do that.

Why Facts Don’t Change Our Minds
By Elizabeth Kolbert

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

Caution On Bias Arguments
By Scott Alexander

I think bias arguments can be useful in a few cases.

First, it’s fair to point out a bias if this gives someone surprising new information. For example, if I say “The study proving Panexa works was done by the company producing Panexa”, that might surprise the other person in a way that “You are a straight man” wouldn’t. It carries factual information in a way that “You’re a product of a society laden with anti-tech populism” doesn’t.

Second, it’s fair to point out a bias if you can quantify it. For example, if 90% of social scientists are registered Democrats, that gets beyond the whole “I can name one bias predisposing scientists to be more liberal, you can name one bias predisposing scientists to be more conservative” arms race. Or if you did some kind of study, and X% of social scientists said something like “I feel uncomfortable expressing conservative views in my institution”, I think that’s fair to mention.

Third, it’s fair to point out a bias if there’s some unbiased alternative. If you argue I should stop trusting economists because “they’re naturally all biased towards capitalism”, I don’t know what to tell you, but if you argue I should stop trusting studies done by pharmaceutical companies, in favor of studies done by non-pharma-linked research labs, that’s a nice actionable suggestion. Sometimes this requires some kind of position on the A vs. B questions mentioned above: is a non-Jew a less biased source for Israel opinions than a Jew? Tough question.

Fourth, none of this should apply in private conversations between two people who trust each other. If a well-intentioned, smart friend who understands all the points above brings up a possible bias of mine in a spirit of mutual truth-seeking, I’ll take it seriously. I don’t think this contradicts the general argument, or is any different from other domains. I don’t want random members of the public shaming me for my degenerate lifestyle, but if a close friend thinks I’m harming myself, then I want them to let me know.

Most important, I think first-person bias arguments are valuable. You should always be attentive to your own biases. First, because it’s easier for you; a rando on Twitter may not know how my whiteness or my Jewishness affects my thought processes, but I might have some idea. Second, because you’re more likely to be honest: you’re less likely to invent random biases to accuse yourself of, and more likely to focus on things that really worry you. Third, you have an option besides just shrugging or counterarguing. You can approach your potential biases in a spirit of curiosity and try to explore them.

Why Do People Fall for Fake News?
By Gordon Pennycook and David Rand

… we recently ran a set of studies in which participants of various political persuasions indicated whether they believed a series of news stories. We showed them real headlines taken from social media, some of which were true and some of which were false. We gauged whether our participants would engage in reasoning or “go with their gut” by having them complete the cognitive reflection test, which is widely used in psychology and behavioral economics. It consists of questions with intuitively compelling but incorrect answers, which can easily be shown to be wrong with a modicum of reasoning. (For example: “If you’re running a race and you pass the person in second place, what place are you in?” If you’re not thinking, you might say “first place,” when of course the answer is second place.)

We found that people who engaged in more reflective reasoning were better at telling true from false, regardless of whether the headlines aligned with their political views. (We controlled for demographic factors such as level of education as well as political leaning.) In follow-up studies yet to be published, we have shown that this finding was replicated using a pool of participants that was nationally representative with respect to age, gender, ethnicity and region of residence, and that it applies not just to the ability to discern true claims from false ones but also to the ability to identify excessively partisan coverage of true events.

Our results strongly suggest that somehow cultivating or promoting our reasoning abilities should be part of the solution to the kinds of partisan misinformation that circulate on social media. And other new research provides evidence that even in highly political contexts, people are not as irrational as the rationalization camp contends. Recent studies have shown, for instance, that correcting partisan misperceptions does not backfire most of the time — contrary to the results of Professors Nyhan and Reifler described above — but instead leads to more accurate beliefs.

We are not arguing that findings such as Professor Kahan’s that support the rationalization theory are unreliable. Our argument is that cases in which our reasoning goes awry — which are surprising and attention-grabbing — seem to be exceptions rather than the rule. Reason is not always, or even typically, held captive by our partisan biases. In many and perhaps most cases, it seems, reason does promote the formation of accurate beliefs.

This is not just an academic debate; it has real implications for public policy. Our research suggests that the solution to politically charged misinformation should involve devoting resources to the spread of accurate information and to training or encouraging people to think more critically. You aren’t doomed to be unreasonable, even in highly politicized times. Just remember that this is also true of people you disagree with.

The Problem With Believing What We’re Told
By Gary Marcus and Annie Duke

In a study posted in November on the research network SSRN, Patricia Moravec of Indiana University’s Kelley School of Business and others looked at whether they could improve people’s ability to spot fake news. When first asked to assess the believability of true and false headlines posted on social media, the 68 participants—a mix of Democrats, Republicans and independents—were more likely to believe stories that confirmed their own prior views. But a simple intervention had an effect: asking participants to rate the truthfulness of each headline. That tiny bit of critical reflection mattered, and its effect even extended to other articles that the participants hadn’t been asked to rate. The results suggest that just asking yourself, “Is what I just learned true?” could be a valuable habit.

Similar research has shown that just prompting people to consider why their beliefs might not be true leads them to think more accurately. Even young children can learn to be more critical in their assessments of what’s truthful, through curricula such as Philosophy for Children and other programs that emphasize the value of careful questioning and interactive dialogue. Ask students to ponder Plato, and they just might grow up to be more thoughtful and reflective citizens.

Rather than holding our collective breath waiting for social media companies to magically resolve the problem with yet-to-be invented algorithms for filtering out fake news, we need to promote information literacy. Nudging people into critical reflection is becoming ever more important, as malicious actors find more potent ways to use technology and social media to leverage the frailties of the human mind. We can start by recognizing our own cognitive weaknesses and taking responsibility for overcoming them.

Degrees of Maybe: How We Can All Make Better Predictions

VEDANTAM: We look at how some people actually are better than others at predicting what’s going to happen in the future. Ironically, these aren’t the people you usually find on television bloviating about what’s going to happen next week. They’re ordinary people who happen to know a very important secret – predicting the future isn’t about being unusually smart or especially knowledgeable. It’s about understanding the pitfalls in the way we think and practicing better habits of mind.

Phil Tetlock is a psychologist at the University of Pennsylvania. Over several decades, he’s shown that the predictions of so-called experts are often no better than what he calls dart-throwing chimpanzees. After spending years criticizing forecasts and forecasters, Phil decided to look at people who were very good at predicting the future. In his book “Superforecasting: The Art And Science Of Prediction,” Phil explores how we can learn from these people to become better forecasters ourselves.

Phil, welcome to HIDDEN BRAIN. So Phil, lots of people watch television at night, and millions of people feel like throwing things at their television set each evening as they listen to pundits and prognosticators explain the day’s news and predict what’s going to happen next. Of all the people in the country, you probably have more cause than most to hurl your coffee cup at the television set because starting in 1984, you conducted a study that analyzed the predictions of experts in various fields. What did you find?

TETLOCK: Well, we found that pundits didn’t know as much about the future as they thought they did. But it might be useful, before we start throwing things at the poor pundits on the TV screen, to consider their predicament. They’re under pressure to say something interesting, so they resort to interesting linguistic gambits.

They say things like, well, I think there’s a distinct possibility that Putin’s next move will be on Estonia. Now, that’s a wonderful phrase – distinct possibility. It’s wonderfully elastic because if Putin does move into Estonia, they can say hey, I told you there was a distinct possibility he was going to do that. And if he doesn’t, they can say hey, I just said it was possible. So they’re very well-positioned.

Now, if you play the game the way it really should be played – the forecasting game – and use actual probabilities, say you play it the way Nate Silver plays it, and you wind up with, say, a 70 percent probability that Hillary will win the election a few days before the election in November ’16, you’re much more subject to embarrassment. If he had said there was a distinct possibility that Hillary would win, he would have been very safely covered. Because when you ask people to translate distinct possibility into numbers, it means anything from about 20 percent to about 80 percent.

VEDANTAM: I’m going to ask you one final question. And this is also, I think, a potential critique of superforecasting, but it comes in the form of a forecast that I’m going to make. The reason I think many of us make forecasts or look to prognosticators and pundits to make forecasts is that it gives us a feeling like we have a handle on the future. It gives us a sense of reassurance.

And this is why liberals like to watch the pundits on MSNBC and conservatives like to watch the pundits on Fox. You know, a more cautious style that sort of says, you know, the chance that Donald Trump is going to be impeached is, you know, 11.3 percent, or the chance that you’re going to die from cancer is 65.3 percent, these estimates run up against a very powerful psychological impulse we have for certainty, that we actually want someone to hold our hand and tell us, you’re not going to die. We don’t want a probability estimate. We want actually an assurance that things are going to turn out the way we hope.

So here’s my last question for you. If someone advises people to do something that runs against their emotional need for well-being and reassurance, I am going to forecast that that advice, however well-intentioned, however accurate, is likely not going to be followed by most people. What do you make of my forecast, Phil?

TETLOCK: (Laughter) Well, I think there’s a lot of truth to what you say. I think people – when people think about the future, they have a lot of goals. And they want to affirm their loyalty to their ideological tribe. They want to feel good about their past commitments. They want to reinforce their preconceptions. So those are all social and psychological goals that people have when they do forecasting.

And forecasting tournaments are very unusual worlds. We create a world in which only one thing matters. It’s pure accuracy. So it’s somewhat analogous to the sort of world that’s created in financial markets or London bookies or Las Vegas bookies. All that matters is the accuracy of the odds. I would say this. I would say people would be better off if they were more honest with themselves about the functions that their beliefs serve.

Do I believe this because it helps me get along with my friends or my boss, helps me fit in, helps me feel good about myself? Or do I believe this because it really is the best synthesis of the best available evidence? If you’re playing in a forecasting tournament, it’s only the latter thing that matters.

But you’re right. When people sit down in their living room and they’re watching their favorite pundits, they’re cheering for their team. It’s a different kind of psychology. They’re playing a different kind of game. So all I’m saying is you’re better off if you’re honest with yourself about what game you’re playing.

Accurately predicting the future is central to everything. Here’s how you can do it better.
By Robert Wiblin and Keiran Harris

Robert Wiblin: When I was reading the stylized fact that people are drawn to probabilities of 0%, 50%, or 100%, I was wondering whether that tendency might explain some weird behavior I observe in people. One is that it seems quite common for people to have a relatively uninformed view about something but become extremely confident in their quick judgments about it, even though if they really sat down and thought about it, they’d realize there’s so much they don’t know. They might have evidence that gets them to 80% or 90% confidence, and then they just push it up to 100 because they can’t be bothered thinking about it anymore. And then you’ve got the people who are very underconfident about their ability to draw distinctions between, say, 40% likely and 60% likely, who get stuck in maybes: it’s unknowable, it might happen, it might not happen. They miss out on the opportunity to draw most of those distinctions between likelihoods.

Philip Tetlock: Exactly. And take an issue that is politically polarizing in the United States, such as climate change, and forecasts of how rapidly the climate is changing as a function of greenhouse gases and perhaps other factors. Am I a believer in climate change or a disbeliever, a denialist as it were, if I say to you, “Well, when I think about the UN Intergovernmental Panel on Climate Change forecast for the year 2100, the global surface temperature forecasts, I’m 72% confident that they’re within plus or minus 0.3 degrees centigrade in their projections”? You might look at me and say, “Well, that’s kind of precise and odd,” but I’ve just acknowledged I think there’s a 28% chance they could be wrong. Now, they could be wrong on the upside or the downside, but let’s say the error bars are symmetric, so there’s a 14% chance that they could be-

Robert Wiblin: Underestimating.

Philip Tetlock: Could be overestimating as well as underestimating. So I’m flirting with the idea that they might be wrong, right? So if you’re living in a polarized political world, expressions of political views are symbols of tribal identification; they’re not statements that, oh, this is my best good-faith effort to understand the world: I’ve thought about this, I’ve read these reports, I’m not a climate expert, but here’s my best guesstimate. And by the way, I haven’t gone through all the work of doing that; this is a hypothetical person, and I don’t have the cognitive energy to do it. But if someone had gone to all the cognitive effort of reading all these reports and trying to get up to speed, and concluded, say, 72%, what would the reward be? They wouldn’t really belong in any camp, would they?

Philip Tetlock: The climate proponents would kind of roll their eyes and say, “Get on board. You’re slowing down the momentum for the cause by giving succor and emotional support to the denialists,” and the denialists will say, “Well, you’ve kind of been suckered by the believers.” You’re not going to please anybody very much. You’re not going to have a community of co-believers with whom you can comfortably talk about climate change in the bar. You’re going to be weird, you’re going to be an outlier.

Robert Wiblin: Might be able to cobble together kind of four economists or something to have a beer with.

Philip Tetlock: Could be something like that, but there’s not a good intellectual home for you. And if you think that the major function of your beliefs is to help you fit into the social world, not to help you make sense of the world itself, then why go to all the bother of participating in forecasting tournaments? I think that’s one of the key reasons forecasting tournaments are a hard sell. Forecasts don’t just serve an accuracy function; people aren’t just interested in accuracy. They’re interested in fitting in. They want to avoid embarrassment; they don’t want their friends to call them names. I don’t want to be called a denialist or a racist or whatever epithet I might incur by assigning a probability on the wrong side of maybe.

Robert Wiblin: Speaking of climate change, it seems like almost every week there are new wild predictions in the media about how bad climate change could be, which sometimes sound suspect to me. But I’m not a climate scientist, and I don’t really have time in my day-to-day work to look into how scientifically grounded these forecasts are. You’ve probably encountered these forecasts as well, along with people who claim it’s not going to be a problem at all. How do you disentangle a problem like that in real life?

Philip Tetlock: Well, one of the things I’ve learned to do over all this work is never pretend to be a subject matter expert in anything my people are forecasting on. So I’m not an expert on North Korea, I’m not an expert on the euro, I’m not an expert on Colombian narcoterrorism or the Syrian Civil War, and I’m not an expert on the climate either. Now, I think there is an issue of people feeling, especially on the climate-activist side, that the only way to build up political momentum for long-term sacrifices is to get people to believe that things are going to hell in a handbasket right now, and that floods and tornadoes and hurricanes and whatnot are unprecedented. I know that there are other people who say, “Oh, that doesn’t look so unprecedented to us.”

Philip Tetlock: And that’s a debate that’s quite separate from the larger question of the long-term warming trend as a function of greenhouse gases. How rapidly have hurricanes increased, or have they increased at all over the last 150 years? I don’t know the answer to that; I know that there are people who disagree about it. I’m a process guy; it’s not useful for me to have too many opinions, and why should anyone care what my opinion on this is? But I do know that there are incentives for people to exaggerate, and that has happened over and over. You’re much more likely to exaggerate when you’re not in a forecasting tournament, when you’re not playing an accuracy game. When you’re playing a political power-maximization, media-exposure game, exaggeration is the way to go. If you’re in a forecasting tournament and you play that way, you’re going to get creamed.

Robert Wiblin: Yeah. I guess I wasn’t so much asking about climate change specifically. I suppose I have a rule of thumb that when advocates on a topic are speaking, I’m a lot more cautious about believing anything they say, and that’s as true of climate change as of many other issues. But it means they could be right and it’s very hard for me to figure out; they could also be making some big mistakes.

Philip Tetlock: I think exaggeration adds to the noise and I think it’s probably shortsighted for advocates, for activists to exaggerate, but I understand the temptation.

Straw Men and Viewpoint Manicheanism
By Rick Repetti

The difference between uttering a false statement one believes is true and uttering a statement one knows to be false is crucial; only the latter is lying. Likewise, the difference between engaging in faulty reasoning one thinks is rational, and using reasoning one knows is fallacious is crucial; only the latter is sophistry. Absent good evidence that someone is lying or engaging in bad faith, it is better to explain why one doubts the veracity of a statement or the validity of its reasoning.

If, however, all you care about is promoting the narrative of your political tribe, then you have already chosen to treat the matter as a zero-sum game; you’re likely already infected with Viewpoint Manicheanism. Consequently, honesty, truth, and fair debate are seen as naive in the grab for power, persuasion, and minds. All is fair in love and war. We’re the good guys, they’re the bad guys. The ends justify the means.
