

Culture war games: of humane impulses and correct conduct

Adults are the only ones who fell for the Momo hoax
By Andrew Tarantola

I wish I could tell you that moral panics were something new but, as Chris Ferguson, professor and co-chair of psychology at Florida’s Stetson University, explains to Engadget, they’ve been around for millennia.

“I mean, you can see narratives in Plato’s dialogues where Athenians are talking about Greek plays — that they’re going to be morally corrupting, that they’re going to cause delinquency in kids,” Ferguson points out. “That’s why Socrates was killed, right? Essentially, that his ideas were going to corrupt the youth of Athens. Socrates was the Momo challenge of his day.”

Unfortunately, humanity appears to still be roughly as gullible as we were in the 5th century BC as new moral panics crop up with uncanny regularity. In recent decades we’ve seen panics about Dungeons and Dragons leading to Satanism, hidden messages in Beatles songs, killer forest clowns, the Blue Whale, the Knockout Game, and the Tide Pod Challenge.

Despite the unique nature of the threat presented in each panic, this phenomenon follows a pair of basic motifs, Ferguson explained.

“There’s this inherent protectiveness of kids,” he said. “There’s also the sense of like, kids are idiots and therefore adults have to step in and ‘do something’ — hence the idea that your teenager can simply watch a YouTube video and then suddenly want to kill themselves. It’s ridiculous if you think about it for 30 seconds but, nonetheless, this is an appealing sort of narrative.”

“There’s the general sense of teens behaving badly and technology oftentimes being the culprit in some way or another,” Ferguson continued. “It just seems that we’re kind of wired, particularly as we get older, to be more and more suspicious of technology and popular culture.”

That is, in part, because the popular culture right now isn’t the popular culture that the people in power grew up with. It’s a “kids today with their music and their hair” situation, Ferguson argues. He points out that “Mid-adult mammals tend to be the most dominant in social species,” but as they age, their power erodes until they are forced out of their position by a younger, fitter rival. “As we get older, eventually we’re going to become less and less relevant,” he said. Faced with that prospect, older members of society may begin to view fresh ideas and new technologies as evidence of society’s overall moral decline.

Fortnite: Is Prince Harry right to want game banned?
By Jane Wakefield

Prince Harry has called for a ban on Fortnite, saying the survival game, beloved by teenagers around the world, was “created to addict”.

His words add to a growing debate among health workers, governments and lobby groups about whether gaming can be harmful to health.

The remarks come just before the Gaming Bafta Awards, one of the biggest nights in the UK’s gaming calendar, which take place in London on Thursday evening.

But is he right?

At an event at a YMCA in west London, the Duke of Sussex launched a scathing attack on social media and gaming.

Of Fortnite, he said: “That game shouldn’t be allowed. Where is the benefit of having it in your household?

“It’s created to addict, an addiction to keep you in front of a computer for as long as possible. It’s so irresponsible.

“It’s like waiting for the damage to be done and kids turning up on your doorsteps and families being broken down.”

He added that social media was “more addictive than alcohol and drugs”.

Lunacy in England.
By The New York Times

Does the report mean that idiocy is increasing in England? If so, the explanation is simple. There has, within the last three or four years, been let loose on the British public a swarm of penny papers of a semi-humorous type. That the habitual perusal of these papers must tend to develop idiocy there cannot be the slightest doubt. Then, too, Punch has during the last year issued a series of cheap reprints of its earlier jokes. The low price at which these reprints are sold tempts the British buyer who is suffering from the depression caused by the fogs and rains, and who, like most persons in such a condition, feels a morbid desire to increase his gloom. If we add to these causes the strain of the British intellect resulting from the effort to understand the policy of Lord Rosebery, it is easy to understand why idiocy is increasing.

It is probable, however, that the increase shown by the report of the Commissioners in Lunacy is chiefly in the category of “persons of unsound mind.” Now this points directly to bicycle riders. Not that every bicycle rider is a person of unsound mind, that is to say, at the time of beginning the fatal practice. Still, there is not the slightest doubt that bicycle riding, if persisted in, leads to weakness of mind, general lunacy, and homicidal mania. In the opinion of the ablest and most experienced of British lunatics, the habit of watching the revolution of the forward wheel develops in the mind of the bicycle rider a tendency to reason in a circle. This is followed by a general weakening of the mental faculties, which in many cases goes so far that the unhappy victim fully believes every promise made in the advertisements of bicycle manufacturers. Then, again, the pneumatic tire, with its exasperating habit of bursting at unexpected moments and its practice of piercing itself with a concealed pin or a sharp stone whenever the rider is far from home, is little short of maddening in its effects. That the bicycle rider often becomes a prey to homicidal mania is painfully evident to every one who is familiar with the London streets. Men who are apparently mild clergymen or placid physicians will, when mounted on the bicycle, run down other men and women without distinction, and leave them dead or dying on the pavement, without waiting either to help the wounded or to remove the bodies of the dead. There are probably hundreds of bicycle riders in England to-day each one of whom has slain from ten to twenty persons and wounded double that number by driving bicycles over their bodies. These bicyclists are on ordinary occasions men of humane impulses and correct conduct, but they are a prey to that thirst for homicide which habitual bicycling so commonly produces, and it would be absurd to speak of them as persons of sound mind.

When Dungeons & Dragons Set Off a ‘Moral Panic’
By Clyde Haberman

Determining cause and effect is often not simple, and that has been the case throughout the history of games.

A century ago, H. G. Wells, the English titan of science fiction, invented a tabletop game called Little Wars with a friend, Jerome K. Jerome. Though a pacifist, Wells was intrigued by war games. He wrote a handbook for his creation, filled with clear rules of combat for opposing infantry, cavalry and artillery. That was in 1913. A year later, World War I broke out.

You see the connection, don’t you?

Protruding Breasts! Acidic Pulp! #*@&!$% Senators! McCarthyism! Commies! Crime! And Punishment!
By R.C. Baker

In his medical practice, Wertham saw some hard cases—juvenile muggers, murderers, rapists. In Seduction, he begins with a gardening metaphor for the relationship between children and society: “If a plant fails to grow properly because attacked by a pest, only a poor gardener would look for the cause in that plant alone.” He then observes, “To send a child to a reformatory is a serious step. But many children’s-court judges do it with a light heart and a heavy calendar.” Wertham advocated a holistic approach to juvenile delinquency, but then attacked comic books as its major cause. “All comics with their words and expletives in balloons are bad for reading.” “What is the social meaning of these supermen, super women … super-ducks, super-mice, super-magicians, super-safecrackers? How did Nietzsche get into the nursery?” And although the superhero, Western, and romance comics were easily distinguishable from the crime and horror genres that emerged in the late 1940s, Wertham viewed all comics as police blotters. “[Children] know a crime comic when they see one, whatever the disguise”; Wonder Woman is a “crime comic which we have found to be one of the most harmful”; “Western comics are mostly just crime comic books in a Western setting”; “children have received a false concept of ‘love’ … they lump together ‘love, murder, and robbery.’” Some crimes are said to directly imitate scenes from comics. Many are guilty by association—millions of children read comics, ergo, criminal children are likely to have read comics. When listing brutalities, Wertham throws in such asides as, “Incidentally, I have seen children vomit over comic books.” Such anecdotes illuminate a pattern of observation without sourcing that becomes increasingly irritating. “There are quite a number of obscure stores where children congregate, often in back rooms, to read and buy secondhand comic books … in some parts of cities, men hang around these stores which sometimes are foci of childhood prostitution. Evidently comic books prepare the little girls well.” Are these stores located in New York? Chicago? Sheboygan? Wertham leaves us in the dark. He also claimed that powerful forces were arrayed against him because the sheer number of comic books was essential to the health of the pulp-paper manufacturers, forcing him on a “Don Quixotic enterprise … fighting not windmills, but paper mills.”

When the Public Feared That Library Books Could Spread Deadly Diseases
By Joseph Hayes

Books were viewed as possible vehicles of disease transmission for several reasons. At a time when public libraries were relatively new, it was easy to worry about who had last touched a book and whether they might have been sick. Books that appeared to be benign might conceal diseases that could be unleashed “in the act of opening them,” Mann says. People were concerned about health conditions caused by “inhaling book dust,” Greenberg writes, and the possibility of “contracting cancer by coming in contact with malignant tissue expectorated upon the pages.”

The great book scare reached fever pitch in the summer of 1879, Mann says. That year, a librarian in Chicago named W.F. Poole reported that he had been asked whether books could transmit disease. Upon further investigation, Poole located several doctors who claimed to have knowledge of disease-spreading books. People in England started asking the same question, and concerns about diseased books developed “roughly contemporaneously” in the United States and Britain, Mann says.

The “great book scare” rose from a combination of new theories about infection and a distaste for the concept of public libraries themselves. Many Americans and Britons feared the library because it provided easy access to what they saw as obscene or subversive books, argues Mann. And while fears of disease were distinct from fears of seditious content, “opponents of the public library system” helped stoke the fires of the book scare, Greenberg writes.

Penny dreadfuls: the Victorian equivalent of video games
By Kate Summerscale

The prevalence of penny dreadfuls (as they were known in the press) or penny bloods (as they were known to shopkeepers and schoolboys) had by 1895 become a subject of great public concern. More than a million boys’ periodicals were being sold a week, most of them to working-class lads who had been taught to read in the state-funded schools set up over the previous two decades. “Tons of this trash is vomited forth from Fleet Street every day,” observed the Motherwell Times. The new wave of literate children sought out cheap magazines as a diversion from the rote-learning and drill of the school curriculum, and then from the repetitive tasks of mechanised industry. Penny fiction was Britain’s first taste of mass-produced popular culture for the young, and – like movies, comics, video games and computer games in the century that followed – was held responsible for anything from petty theft to homicide.

The dreadfuls were also implicated in social unrest. Since 1884, when the vote had been extended to most British men, the press had often pointed out that children raised on such literature would grow up to elect the rulers of the nation. Penny dreadfuls were “the poison which is threatening to destroy the manhood of the democracy”, announced the Pall Mall Gazette in 1886. The Quarterly Review went a step further, warning its readers in 1890 that “the class we have made our masters” might be transformed by these publications into “agents for the overthrow of society”. The dreadfuls gave a frightening intimation of the uses to which the labourers of Britain could put their literacy and newly won power: these fantasies of wealth and adventure might foster ambition, discontent, defiance, a spirit of insurgency. There was no knowing the consequences of enlarging the minds and dreams of the lower orders.

The shocking tale of the penny dreadful
By Hephzibah Anderson

According to the moralists, young errand boys, sailors and textile workers all were susceptible to stories that left them dissatisfied with their own small lives, making them yearn for wealth and adventure beyond their station and glamorising the criminal life. As journalist James Greenwood put it in 1874, they were nothing less than “penny packets of poison”. The great unwashed had been taught how to read, the argument went, but not what to read.

Eventually, the debate evolved to question the extent to which literature can shape character. When 13-year-old Robert Coombes, the subject of Kate Summerscale’s new book, The Wicked Boy, was arrested for murdering his mother in London in 1895, the prosecution naturally sought to scapegoat penny dreadfuls. But this time most of the media agreed that they played little part in his matricidal actions. As the Pall Mall Gazette noted: “The truth is that in respect to the effect of reading in boys of the poorer class the world has got into one of those queer illogical stupidities that so easily beset it. In every other age and class man is held responsible for his reading, and not reading responsible for man. The books a man or woman reads are less the making of character than the expression of it”.

Perhaps it wasn’t what was being read so much as who was reading it that lay at the root of society’s unease. Penny dreadfuls may not have explicitly promoted bloodthirsty deeds but their tattered pages did showcase a certain giddy disregard for authority. How else to explain the campaign against The Wild Boys? Yes, it contained violence, a smattering of nudity and some flagellation. And yes, its boy heroes were petty criminals. But those same boys also helped others dodge child-stealers and rescued drowning women – they were not without a moral code, it just wasn’t the same as Victorian society’s.

A menace to society: the war on pinball in America
By Hadley Meares

Games like pinball have been played since ancient times. During the decadent reign of Louis XIV, restless courtiers at Versailles became enchanted with a game they called ‘bagatelle’ which means a ‘trifle’ in French. This game was played on a slanted felt board. A wooden cue was used to hit balls into numbered depressions in the board – usually guarded by metal pins. The game arrived in America in the 19th century, and by the turn of the 20th century attempts were being made to commercialise the game. According to Edward Trapunski, author of the invaluable pinball history Special When Lit (1979), the first successful coin operated bagatelle game, Baffle Ball, was produced by the D Gottlieb Company at the end of 1931.

Soon the metal plunger took the place of the wooden cue stick, and lights, bumpers and elaborate artwork appeared on the machines. The game had arrived at the right time – the Depression had just hit America hard, and the one-nickel amusement helped entertain many struggling citizens. It also kept many small businesses afloat, since the operator and location owner usually split the profits 50/50. The game was particularly popular with youngsters in claustrophobic cities like New York, which boasted an estimated 20,000 machines by 1941. That year, one local judge who was confronted with a pinball machine during a case voiced the complaint of many older citizens when he whined: ‘Will you please take this thing away tonight. I can’t get away from these infernal things. They have them wherever I go.’

Although pinball was quickly vilified in many parts of America, the poster child for the vilification was none other than ‘the little flower’ himself: the pugnacious, all-powerful Fiorello H La Guardia, mayor of New York City from 1934 to 1945. La Guardia argued that pinball was a ‘racket dominated by interests heavily tainted with criminality’, which took money from the ‘pockets of school children’. In one rant, he fumed:

The main [pinball] distributors and wholesale manufacturers are slimy crews of tinhorns. Well-dressed and living in luxury on penny thievery…I mean the manufacturers in Illinois and Michigan. They are the chief offenders. They are down in the same gutter level of the tinhorn.

How One Perfect Shot Saved Pinball From Being Illegal
By Matt Blitz

Which brings us back to the May morning in 1976 with Mr. Sharpe waiting patiently to enter the courtroom. He had been hired by the Music & Amusement Association (MAA, for short) to be their star witness in their pursuit to overturn the ban on pinball in New York City. Roger Sharpe, besides being a writer on the subject matter, was also a superb player himself, widely considered to be the best in the country. He had been provided with two machines to prove his case, with one being a backup in the event that the first machine broke. While the MAA had been granted this hearing due to one committee member’s sponsored bill to overturn the ban, the other committee members were known to be against lifting the ban on pinball. The MAA, the bill, and Mr. Sharpe were underdogs in this fight.

Upon entering the courtroom, Sharpe began eloquently to argue why the ban should be overturned, stating that while in the past it may have been associated with gambling, this was no longer the case. It was a game that tested your patience, hand-eye coordination, and reflexes. Quite simply, it was a game of skill, not chance.

As expected, Mr. Sharpe was asked to prove this assertion. Thus, he began to play one of the machines in the pinball game of his life. But he was soon stopped by one particularly grumpy councilman. Afraid that the “pinballers” had tampered with the machine, he demanded that Mr. Sharpe use the backup. Sharpe agreed, but this added another degree of difficulty. You see, Mr. Sharpe was extremely familiar with the first machine, having practiced on it a great deal in preparation for this hearing. He was not nearly as experienced with the backup machine.

Nonetheless, he agreed and began playing on the backup. Although Sharpe played well with the weight of a giant silver ball on him, the grumpy council member was not impressed. With the ban on the verge of not being overturned, Sharpe pulled a move that has become pinball legend.

Reminiscent of another New York sporting legend, he declared that if he could make the ball go through the middle lane on his next turn, then he would have proven that pinball is a game of skill. Essentially, he was calling his shot, and staking the future of pinball on it. Pulling back the plunger, he let that silver ball fly. Upon contact with a flipper, the ball zoomed up and down, through the middle lane, just as Sharpe had said it would. He had become the Babe Ruth of pinball and, with that, proved that there was indeed skill to the game of pinball. The council immediately overturned the ban on pinball. By playing a “mean pinball,” Roger Sharpe had saved the game.

What technology are we addicted to this time?
By Louis Anslow

In 1936, prominent education author Azriel L. Eisenberg argued that radio had “brought many a disturbing influence in its wake,” and that parents “cannot lock out this intruder because it has gained an invincible hold of their children.”

Radio listening habits were also lamented in 1939 as replacing more traditional childhood pastimes, like playing cops and robbers.

A 1946 Parent Teacher Association report stated that radio could be used as a “means of emotional overstimulation or as a retreat into a shadow world of reality.” Excessive listening was, no surprise, also partly blamed for a murder committed by a 14-year-old in 1957.

When Jazz Was a Public Health Crisis
By Jessie Wright-Mendoza

Theories about the connection between music and health have persisted throughout history. Instances of music being used to soothe maladies feature in the Bible and in Greek and Roman mythology. Johnson writes that Beethoven was criticized for the “deafness and madness” contained in his compositions. The recurring theme is generally that the “positive vibrations” produced by rhythmic sounds are associated with nature and good health, while discordant sounds like jazz music are antithetical to nature and could, therefore, have a negative effect on health.

For critics, jazz symbolized the chaos of a society that was changing and modernizing at warp speed. American life was literally becoming noisier, as electric appliances moved into homes and automobiles took over the roads.

What’s more, young people, disillusioned by war, were turning their backs on the uptight social mores of the Victorian Era. They gathered in speakeasies where people of different genders, races, and social classes mixed freely, and they danced the Charleston, itself described as resembling an epileptic fit.

All this external stimulation was thought to cause neurasthenia, a neurological condition that caused headaches, agitation, and depression. Extreme cases could manifest physically: doctors described jazz enthusiasts that were “nervous and fidgety,” with “perpetually jerking jaws.” Milwaukee’s public health commissioner claimed that the music damaged the nervous system, and a Ladies’ Home Journal article reported that it caused brain cells to atrophy. In Cincinnati, a maternity hospital successfully petitioned to have a nearby jazz club shut down, arguing that exposing newborns to the offending music would have the effect of “imperiling the happiness of future generations.”

BlackBerry addiction cure is in your hands
By Richard Waters

Members of the Knights of Columbus Adult Education Committee, agonising nearly 80 years ago about the socially disruptive power of technology, would surely have had a thing or two to say about the BlackBerry.

These worthies, part of a Catholic fraternity in the US dedicated to self-help, were alert to the dark side of the supposed progress of their day. As recounted by Claude Fischer …, the committee posed some intriguing questions about the spread of new technology. For instance: “Does the telephone make men more active or more lazy?”

The concerns of today’s BlackBerry spouses, however, tap into a deeper strand of technophobia than simply watching their other half lost in mindless scrolling. Part of the angst is probably simply a reflection of how fast the technology has spread.

The first telephone exchange in the world was set up in 1878, in New Haven, Connecticut: by the turn of the century, only 15 per cent of American homes had telephones and it was not until after the second world war that more than half were connected, says Sheldon Hochheiser, a former corporate historian at AT&T.

The cellular phone, by contrast, was introduced in 1983 and had become ubiquitous in less than two decades. E-mail has arrived, for most people, only within the last decade, but it is already finding its way into the bedroom. That is not much time for new social norms to be established.

Another troubling aspect of the BlackBerry is the way it shuts out the casual bystander. Communications technology has always had the power to intrude into everyday life, leaving a person who is present but not part of the conversation feeling like an outsider. The BlackBerry takes this feeling of alienation to a new level: hidden under the desk but glanced at constantly, it sends a message that the user has his or her mind on more interesting things.

Perhaps most disturbing of all is the blurring of work and home life. Portable e-mail lets work flow into non-work time in a way that even the mobile phone cannot match. Steve Barley, a professor at Stanford University, says Silicon Valley executives have added nearly an hour to their average working day by staying connected outside the office – yet most of this time is probably wasted and merely reflects a fear of what will happen if they lose touch.

The Knights of Columbus had it all figured out nearly a century ago. “Unless [people] individually master these things, the things will weaken them,” they concluded.

How Viewers Grow Addicted To Television
By Daniel Goleman

Recent studies have found that 2 to 12 percent of viewers see themselves as addicted to television: they feel unhappy watching as much as they do, yet seem powerless to stop themselves.

Portraits of those who admit to being television addicts are emerging from the research. For instance, a study of 491 men and women reported this year by Robin Smith Jacobvitz of the University of New Mexico offers these character sketches:

A 32-year-old police officer has three sets in his home. Although he is married with two children and has a full-time job, he manages to watch 71 hours of television a week. He says, “I rarely go out anymore.”

A 33-year-old woman who has three children, is divorced and has no job reports watching television 69 hours a week. She says, “Television can easily become like a companion if you’re not careful.”

A housewife who is 50, with no children, watches 90 hours of television a week. She says, “I’m home almost every day and my TV is my way of enjoying my day.”

India treats ‘Netflix addiction’ as internet use surges
By Swati Gupta

“Addiction to internet is a big thing. It is a reality. It happens a lot with young people, and we see it often,” said Dr. Amit Sen, a child psychiatrist with Children First, a practice in Delhi.

“There are reward systems in the brain. It is a dopamine kick. When you are winning in a game, you get a dopamine kick. If you are doing cocaine, you get the same kick,” Sen said.

The same brain centers stimulated by substance abuse are stimulated by internet addiction, he added.

Sen sees one to two new patients every month who are severely addicted to internet surfing, online gaming or streaming sites, like Netflix and YouTube.

“Sometimes, the parents switch off the internet or disconnect the Wi-Fi, and there is a violent reaction,” he said. “All hell breaks loose.”

Sharma’s mid-20s patient had been coping with the stress of unemployment and other personal issues, he said.

“That was contributing to mild distress in this person,” said Sharma, noting this is the first case he’s seen of “Netflix addiction.” “These shows/series used to help him out to overcome this mild stress.”

How to cure a Netflix addiction: An expert reveals the remedy for excessive binge watching
By Brent McCluskey

Is Netflix addiction becoming more prevalent? According to Cash, “definitely.” With the increase of smartphones and other devices that connect to Netflix, those addicted to binge watching, or screens in general, face a serious problem: Unlimited access to content.

“I’ve been watching this develop over the years and what’s happened is these devices have gone into everybody’s home,” Cash said. “The creation of smartphones allows people to be carrying internet access with them all the time, 24/7. Parents themselves are often pretty screen-addicted and are handing these devices to their kids at younger and younger ages. It’s impacting the culture and … the problem is absolutely growing.”

What happens in your brain when you binge-watch a TV show
By Danielle Page

Watching episode after episode of a show feels good — but why is that? Dr. Renee Carr, Psy.D, a clinical psychologist, says it’s due to the chemicals being released in our brain. “When engaged in an activity that’s enjoyable such as binge watching, your brain produces dopamine,” she explains. “This chemical gives the body a natural, internal reward of pleasure that reinforces continued engagement in that activity. It is the brain’s signal that communicates to the body, ‘This feels good. You should keep doing this!’ When binge watching your favorite show, your brain is continually producing dopamine, and your body experiences a drug-like high. You experience a pseudo-addiction to the show because you develop cravings for dopamine.”

According to Dr. Carr, the process we experience while binge watching is the same one that occurs when a drug or other type of addiction begins. “The neuronal pathways that cause heroin and sex addictions are the same as an addiction to binge watching,” Carr explains. “Your body does not discriminate against pleasure. It can become addicted to any activity or substance that consistently produces dopamine.”

Debunking the 6 biggest myths about ‘technology addiction’
By Christopher J. Ferguson

Anything fun results in an increased dopamine release in the “pleasure circuits” of the brain – whether it’s going for a swim, reading a good book, having a good conversation, eating or having sex. Technology use causes dopamine release similar to other normal, fun activities: about 50 to 100 percent above normal levels.

Cocaine, by contrast, increases dopamine 350 percent, and methamphetamine a whopping 1,200 percent. In addition, recent evidence has found significant differences in how dopamine receptors work among people whose computer use has caused problems in their daily lives, compared to substance abusers. But I believe people who claim brain responses to video games and drugs are similar are trying to liken the drip of a faucet to a waterfall.

Comparisons between technology addictions and substance abuse are also often based on brain imaging studies, which themselves have at times proven unreliable at documenting what their authors claim. Other recent imaging studies have also disproved past claims that violent games desensitized young brains, leading children to show less emotional connection with others’ suffering.

A study finds Netflix is right — videogames are a competitor to TV watching and streaming
By Steve Goldstein

Netflix raised eyebrows Thursday night when the streaming service said the popular videogame “Fortnite” is more of a competitor than HBO.

But an academic study finds Netflix is absolutely right.

Gray Kimbrough has studied what’s called the American Time Use Survey from the Labor Department for what it says about videogame use.

Increased gaming is offset by decreasing time spent watching television, movies and streaming video, he found in a study presented at the American Economic Association annual meeting.

The increased prevalence is, perhaps not surprisingly, focused on young men.

Between 2015 and 2017 for men aged between 21 and 30, time spent on gaming rose to 4 hours from 2.3 hours, while time spent on watching TV, movies or streaming fell to 14.9 hours from 16.9 hours. Young women also are spending more time playing games, though not as much: Their time rose to 1.4 hours from 0.8 hours, while TV, movies and streaming time fell to 13.6 hours from 15.3 hours.

There are other differences between the genders. Young men living with their parents spent markedly more time playing games, a phenomenon not shared by women living with their parents.

All that said, these young men are not leaving the labor force to play games, as another academic study has suggested.

“What I do see is evidence that men who have just left the labor market are gaming more than those staying,” he said.

Gap in NHS provision forcing gaming addicts to seek help abroad
By Sarah Marsh

Henrietta Bowden-Jones, the director of the Centre for Internet Disorders, set up by Central and North West London NHS foundation trust and opening in October, said there were about 45 people on a waiting list to be treated.

“What we don’t have is a good idea of the prevalence of the problem among the population as at the moment there are no high-quality prevalence surveys for this illness in the UK. I hope to see a well funded independent prevalence survey over the course of the next year or so,” Bowden-Jones said.

“You need to hear of the issues at ground level from the people destroying their lives with one activity or another. Then you report the issues to the people who can, if they so wish, implement preventive measures to protect the population.”

Gaming disorder is defined by the World Health Organization as a pattern of persistent or recurrent gaming behaviour so severe that it takes “precedence over other life interests”. Symptoms include impaired control over gaming and continuation or escalation of gaming despite negative consequences.

“Gaming addiction is becoming more of an issue,” said Jeff van Reenen, an addiction treatment programme manager at the Priory’s hospital in Chelmsford. “Even in our experience, more among a younger target group … we are seeing it presenting or co-presenting with other addictive behaviours. I saw a patient who came here for gaming addiction as well as addiction to porn and sex. Addictive behaviours are always about escapism.”

Study suggests pathological gaming is a symptom of bigger problems — rather than a unique mental disease
By Eric W. Dolan

The researchers surveyed 477 boys and 491 girls once per year for four years regarding their relationships with their parents, their social support, their academic stress, their self-control, and their gaming behaviors.

Ferguson and his colleagues found that self-control had a stronger relationship with pathological gaming than time spent playing video games.

In addition, participants who felt subjected to more overprotective parental behaviors and had less parental communication tended to have higher levels of academic stress, which in turn predicted a lack of self-control and an increase in daily gaming hours.

“Our study was conducted with Korean youth. In South Korea, there is particular pressure socially to succeed academically. Our evidence suggests that pathological gaming doesn’t originate so much from exposure to games, but through a combination of academic pressure and parental pressure,” Ferguson told PsyPost.

“This causes stress and a loss of self-control, wherein youth use games as an escape from their stress. Rather than thinking of pathological gaming as a disease caused by video games, we might be better to think about it as symptomatic of a larger structural, social and family problem within a person’s life.”

But the study — like all research — includes limitations. In particular, it is unclear how well the results generalize to other countries and cultures.

“I think the main caveat is, of course, this is a sample of Korean youth and we can’t be sure that the patterns of pressure necessarily apply to youth from other countries such as the U.S. or U.K. For instance, within U.S. samples I’ve worked with, evidence suggests pathological gaming results from other mental disorders such as ADHD, but does not cause them in return,” Ferguson said.

“This is a tricky topic because we have a historical pattern of people (particularly older adults) reflexively blaming technology and media for perceived social problems. Our data suggests we have to be cautious in blaming technology for behavior problems — often the picture is much more complicated than that.”

Escape to another world
By Ryan Avent

A life spent buried in video games, scraping by on meagre pay from irregular work or dependent on others, might seem empty and sad. Whether it is emptier and sadder than one spent buried in finance, accumulating points during long hours at the office while neglecting other aspects of life, is a matter of perspective. But what does seem clear is that the choices we make in life are shaped by the options available to us. A society that dislikes the idea of young men gaming their days away should perhaps invest in more dynamic difficulty adjustment in real life. And a society which regards such adjustments as fundamentally unfair should be more tolerant of those who choose to spend their time in an alternate reality, enjoying the distractions and the succour it provides to those who feel that the outside world is more rigged than the game.

How Top Gamers Earn Up to $15,000 an Hour
By Patrick Shanley

Over the past five years, the gaming industry has more than doubled, rocketing to $43.8 billion in revenue in 2018, according to the NPD Group. Skilled gamers — buoyed by the rise of streaming platforms like Google’s YouTube and Amazon’s Twitch — have turned into stars who can not only attract millions of fans but also earn millions of dollars. Top Twitch streamer Tyler “Ninja” Blevins, for example, has said he made $10 million in 2018 playing online game Fortnite.

“There’s been incredible [revenue] growth across the board,” says Mike Aragon, who oversees Twitch’s partnerships with streamers as senior vp content. “The entire ecosystem has become more mainstream.”

Being a professional video gamer has become so lucrative, in fact, that disputes are arising about who has the right to the advertising revenue and brand endorsements that have started to roll in for top streamers. On May 20, esports player Turner “Tfue” Tenney became the first major player to sue his team, FaZe Clan, alleging that it has limited his business opportunities and pocketed 80 percent of his earnings in violation of California’s Talent Agencies Act. FaZe Clan responded claiming that it has collected “a total of $60,000” of the “millions” Tenney has earned since signing with the team.

Streaming personalities regularly appear live on camera for more than eight hours a day, responding to fan comments and questions as they play. Twitch, the largest live-streaming platform, averages nearly 1.3 million concurrent viewers daily. Streamers monetize those viewers in a variety of ways, through in-stream ads, donations and paid subscriptions to their channel.

At the heart of this exploding new business is a wave of interest from major brands — Bud Light, Coca-Cola, Intel, Toyota and T-Mobile among them — that are drawn to the sizeable young audience tuning in for live streams from Blevins, Lupo, Tenney and others. “I get approached for endorsements multiple times a day,” says Lupo, 32, who broadcasts to his 2.8 million Twitch followers for upward of 10 hours a day, averaging 4,000 concurrent viewers. He says he’s been contacted by potential sponsors from the luxury, alcohol, insurance, entertainment and home decor industries.

Multiple sources at the Hollywood agencies tell THR that per-hour rates for endorsing a company during a live stream can reach as high as five figures for the most popular gamers. On average, a gamer can make anywhere from a couple thousand to $15,000 per hour. A brand’s overall commitment to a single streamer could total as much as $500,000.

“It’s become something that nobody predicted,” says Steven Ekstract, brand director of Global Licensing Group. “Traditional brands had no idea. Now they’re all getting into it.”

Money may be rolling in faster than ever before, but many predict it’s just the tip of the iceberg. Goldman Sachs has estimated that esports and online game streaming viewership will reach 300 million people by 2022, surpassing the audience for Major League Baseball.

This esports giant draws in more viewers than the Super Bowl, and it’s expected to get even bigger
By Annie Pei

Over 10,000 “League of Legends” fans descended upon St. Louis, Missouri this weekend for one of the biggest annual esports events in North America: The North American League of Legends Championship Series Spring Split Finals.

Though still a far cry from the stadium attendance numbers hit by many traditional sports leagues, online viewership for the NALCS finals brought in a total of 600,000 concurrent viewers on Twitch and YouTube combined during the final game, which saw esports team Team Liquid take home the title after over four hours of competitive play.

Go back to November, and viewership numbers from the “League of Legends” World Championship finals — held in South Korea and also hosted by the game’s publisher, Riot Games — showed that almost 100 million unique viewers tuned in to the event online. For comparison, last year’s Super Bowl had just over 98 million viewers, the smallest viewership number for the event since 2008. This was after viewership for 2017’s Super Bowl LI had dipped to 103 million from just over 111 million the year prior.

While esports have long been popular in many Asian countries, the space has grown worldwide over the past few years, including in North America. “League of Legends” is just one game driving the esports industry, which will top $1 billion in revenue this year, according to research from Newzoo.

US teenager wins £2.4m playing computer game Fortnite
By Joe Tidy

The event is seen as a major moment in e-sports, which is estimated to be a billion-dollar industry in 2019.

However, its record for the biggest prize pool is already set to be broken by another event called The International, taking place in August.

The Fortnite finals saw 100 players battling on giant computer screens.

Forty million players attempted to qualify over 10 weeks of online competition.

More than 30 nations were represented with 70 players coming from the US, 14 from France and 11 from the UK.

US teenager becomes first Fortnite World Cup champion, winning $3m
By Jay Castello

Sixteen-year-old Bugha represents the average age of a competitor, while others, including fifth-place finalist Thiago “King” Lapp from Argentina, were as young as 13. They were competing for a slice of the World Cup’s $30m (£24m) prize pool, currently the biggest in esports history – and the same amount awarded to teams in the recent women’s football World Cup.

One 15-year-old British player, Jaden Ashman, took home over £1m by placing second with his partner in the duos version of the competition on Saturday. He told the BBC that he would probably save half of it and put “quite a lot of it into a house and my family”.

His mother admitted that she had been “quite against his gaming”. But with Ashman, King, Bugha, and others taking home life-changing amounts of money, and every competitor in the final 100 earning at least $50,000, it’s clear that professional gaming can be an incredibly lucrative career for those few who are lucky, talented, and hardworking enough to make it.

With Dad’s support, one teen is playing ‘Fortnite’ instead of going to high school
By Dugan Arnett

As venture capitalists began pouring money into gaming — seeing big potential in leagues and massive online audiences — the influx of cash created ample opportunity for talented gamers.

Today, weekly online tournaments yield thousands of dollars in payouts. Professional eSports organizations, backed by billionaire owners from Mark Cuban to Robert Kraft, dole out hefty contracts to the country’s top players. And this is to say nothing of streaming revenue, in which the world’s best — or most charismatic — gamers can make large sums simply by playing online and allowing others to watch live via sites like YouTube and Twitch.

Last year, the world’s most famous gamer, a pink-haired 28-year-old from Illinois who goes by the tag “Ninja,” told ESPN that he earns close to seven figures a month from gaming.

Despite the recent popularity of eSports, the industry’s long-term stability remains a question. Games come and go. Professional teams fold. And while it’s true that the industry has seen an influx of investment, the ability to carve out a living, experts say, remains exceedingly rare.

“In the same way as traditional sports, there’s a thin layer at the top who makes a living at it,” says T.L. Taylor, a professor at MIT who has written extensively about eSports. “But there’s a mass of folks who are aspirational and want to make it and never will.”

Five damaging myths about video games – let’s shoot ’em up
By Pete Etchells

In the summer of 2018, the World Health Organization formally included “gaming disorder” in its diagnostic manual, the International Classification of Diseases, for the first time. It was a decision that ignited a furious debate in the academic community. One group of scholars argued that such a diagnostic label will provide greater access to treatment and financial help for those experiencing genuine harm from playing video games. Others (myself included) argued that the decision was premature; that the scientific evidence for gaming addiction simply wasn’t accurate or meaningful enough (yet).

Part of the problem lies in the checklists used to determine whether a disorder exists. Historically, the criteria for gaming addiction were derived from those used for other sorts of addiction. While that might be a reasonable place to start, it might not tell us the whole story about what the unique aspects of gaming addiction look like. For example, one of the standard criteria is that people become preoccupied with games, or start playing them exclusively, instead of engaging in other hobbies. However, these don’t sit very well as a benchmark for what you might consider to be “harmful” engagement, because games themselves (unlike abused drugs, say) are not inherently harmful.

Also, using this as a criterion has the potential to inflate the prevalence of addiction. While there will be people out there for whom gaming can become problematic, the chances are that this is a small group.

Moreover, some research suggests that gaming addiction is fairly short-lived. Data looking at players over a six-month period has shown that of those who initially exhibited the diagnostic criteria for addiction, none met the threshold at the end of the study.

This is not to say there isn’t anything about games to be worried about. Increasingly, and particularly in the case of mobile games, gambling-like mechanisms in the form of in-app purchases and loot boxes are being used as sources of income.

Video game ‘loot boxes’ would be outlawed in many games under forthcoming federal bill
By Tony Romm and Craig Timberg

Hawley’s Protecting Children From Abusive Games Act takes aim at a growing industry revenue stream that analysts say could be worth more than $50 billion — but one that increasingly has triggered worldwide scrutiny out of fear it fosters addictive behaviors and entices kids to gamble.

Hawley’s proposed bill, outlined Wednesday, covers games explicitly targeted to players younger than 18 as well as those for broader audiences where developers are aware that kids are making in-game purchases. Along with outlawing loot boxes, these video games also would be banned from offering “pay to win” schemes, where players must spend money to access additional content or gain digital advantages over rival players.

“Social media and video games prey on user addiction, siphoning our kids’ attention from the real world and extracting profits from fostering compulsive habits,” Hawley said in a statement. “No matter this business model’s advantages to the tech industry, one thing is clear: There is no excuse for exploiting children through such practices.”

Offering one “notorious example,” Hawley’s office pointed to Candy Crush, a popular, free smartphone puzzle app that allows users to spend $149.99 on a bundle of goods that include virtual currency and other items that make the game easier to play.

A spokesman for the game’s publisher, Activision Blizzard, declined to comment.

“When a game is designed for kids, game developers shouldn’t be allowed to monetize addiction,” Hawley said. “And when kids play games designed for adults, they should be walled off from compulsive microtransactions.”

Purchases made within games — often called “micropayments” or “in-app purchases” — have come under scrutiny in recent years, in part, because children often use their parents’ credit cards or other payment methods to rack up charges that can run into the hundreds or thousands of dollars.

Parents have complained to the Federal Trade Commission that such charges often happen without their permission or end up being much larger than they expect. A federal court in 2016 found that Amazon unfairly charged parents for purchases their children made while using apps that were marketed as “free.” (Amazon chief executive Jeff Bezos owns The Washington Post.)

Video-Game Policies Change After ‘Loot Box’ Criticism
By Christopher Palmeri and Ben Brody

The video-game industry — responding to criticism that in-game purchases can amount to gambling or tempt kids into overspending — is changing its policies.

Gaming-device makers Sony Corp., Microsoft Corp. and Nintendo Co. will now disclose the probability that a buyer will obtain a desired item after a purchase. Game publishers such as Activision Blizzard Inc. and Electronic Arts Inc. will make the disclosures as well. Companies had already agreed to include labeling that told consumers there would be opportunities to buy things within games, according to the Entertainment Software Association, a trade group.

Console makers are looking to adopt the new policies in 2020.

In-game purchases have been a huge source of revenue for the industry, which frequently charges extra for new characters, missions or weapons. Often the content is hidden inside what’s called a “loot box,” so buyers don’t know exactly which items they’ll be getting when they purchase the box.

Regulators around the world have grappled with whether that constitutes gambling. The approach also has led kids to spend more on games than their parents intended.

Oxford Researcher Blames ESA Reaction for Prolonged Gaming Addiction Crisis
By Brian Crecente

“It’s very easy to think of something like the World Health Organization as an organized body that really has its ducks in a row,” Przybylski said during his talk. “But they really didn’t, they were really quite soft about this gaming disorder thing. The people who were working on the ICD letter, they’d really gone out on a limb and there was a lot of debate internally and then the ESA decided to publish a rebuttal.”

The problem, Przybylski said, was that the rebuttal “cherry-picked” bits of different research — including his — that showed games weren’t so bad or that games are great for cognitive development, but excluded parts of the same research that showed that the good found in games is often overblown.

That rebuttal, Przybylski said, forced the WHO’s hand and led to what he called a “circle the wagons moment” within the United Nations.

“So now they’re much more convinced that they were right the whole time and you’re all evil,” he told the room of game developers. “I would have warned you not to do this.

“My piece of advice here is I think you probably all should consider bracing for impact.”

Przybylski said that there are a lot of bad things that can come out of the sort of reactive stance the ESA took about addiction and video games.

“We’re going to stigmatize the hobby of more than a billion people on the planet,” he said. “There is going to be a lot of really dumb regulations coming down the pipe. Depending on the patchwork of markets and regulations, in some places more aggressive regulators are going to fragment the market. There are going to be some kind of labeling rules, there’s going to be sin taxes. And there are going to be fines.”

When the fun stops: ICD-11’s new gaming disorder
By Sachin Shah and Stephen Kaar

The media has run full force with sensational stories of children enraptured by virtual worlds. Such panics are nothing new; two years ago it was Pokémon Go that had everybody worried. But it goes back even further. Prof. Przybylski directed us towards a clinical report (Schink, 1991) featuring cases of “Nintendo enuresis” – boys so transfixed by Super Mario games they would wet themselves rather than visit the bathroom. The problem resolved, according to the report, once the boys “learned to use the pause button.”

Delving further, Prof. Przybylski told us about a phenomenon Wallis (1997) documented during the relative youth of the internet. A psychiatrist, Dr Ivan K Goldberg, had effectively dreamed up a diagnosis called “Internet Addiction Disorder” to parody the complexity and rigidity of the DSM. He was alarmed, however, to find people were identifying with the hoax disorder, one symptom of which was that “important social or occupational activities are given up or reduced because of Internet use”. He then renamed his disorder “pathological Internet-use disorder”, to remove any notion that the internet was an addictive substance. “If you expand the concept of addiction to include everything people can overdo,” Dr Goldberg had said, “then you must talk about people being addicted to books, etc.”

Could it be, then, that we are seeing cases that are more related to media panic than genuine pathology? “You can’t add to the prevalence just because people are worried,” Dr Bowden-Jones stressed. “You have to have cases. The cases need to have had the problem for at least a year, which is a long time.” Gaming disorder isn’t about people who dedicate themselves to a game for a few days in order to beat it. “I would not include those,” she said. “I don’t think any professional would do that. I would accept that some people need that challenge. So if the prevalence is high, it will be high because we’re seeing people who are struggling because of an activity.” She acknowledged that the ICD-11 criteria do allow diagnosis of gaming disorder if the problem has persisted for less than a year, if the problems caused are severe enough (though we note that what qualifies as “severe” isn’t defined).

Perhaps the key question to this whole issue is whether or not a video game can be addictive in the pathological sense. Prof. Przybylski does not see convincing evidence for this, but stresses that absence of evidence is not evidence of absence. Whatever evidence currently exists does not meet the threshold he would set for formulation of a disorder, and he feels psychiatrists are jumping the gun. Whatever their stance on gaming disorder, all experts seem united in pushing for better research. Prof. Przybylski’s camp in particular have stressed the importance of transparency in studies, including pre-registration of hypotheses and plans prior to data collection. “It’s the Texas Sharpshooter fallacy,” Prof. Przybylski said of gaming disorder research, referring to the process of forming a hypothesis after the results are known. “They draw the target on the side of the barn only after they’ve sprayed it with the machine gun.” Additionally, he feels a lot of valuable data is held by video games companies about how players engage with their games, but companies don’t share this data. “These companies need to do transparent, open, and reproducible science. I think that if they plan on surviving the next 20 years, it will happen.”

Reevaluating Internet Gaming Disorder
By Christopher J. Ferguson

Usually, claims about games are made by using pseudo-scientific brain-related claims, but such claims can be applied to almost anything. Consider cat addicts (and one should do an internet search for “cat hoarder” if there is any doubt such people exist). Cats have mechanisms to keep people engaged with them, such as running between one’s feet or purring loudly. Stroking a cat surely “releases” dopamine in the brain (if one is to put things in such a simplistic way). And cats work on variable reinforcement schedules . . . sometimes they love us, sometimes they hate us, and we never know when. Do not the cat addicts of the world deserve a DSM diagnosis just as much as the game addicts?

Ultimately, we must ask ourselves, “Why games?” It’s probably true that a small number of individuals overdo a wide variety of ego-syntonic activities. But there’s little evidence for either the APA or WHO to single out video games. It thus becomes an inescapable conclusion that the focus on games has little to do with science or clinical practice, and more to do with the societal moral panic around games and other technology.

Violent video games ‘may have been a factor’ in Alesha MacPhail murder
By Dave Burke

In a 2015 report, the APA ruled: “The research demonstrates a consistent relation between violent video game use and increases in aggressive behaviour, aggressive cognitions and aggressive affect, and decreases in pro-social behaviour, empathy and sensitivity to aggression.

“It is the accumulation of risk factors that tends to lead to aggressive or violent behaviour. The research reviewed here demonstrates that violent video game use is one such risk factor.”

These Violent Delights Don’t Have Violent Ends: Study Finds No Link Between Violent Video Games And Teen Aggression
By Matthew Warren

In the new study, Andrew Przybylski and Netta Weinstein sought to examine the link between violent video games and aggression more rigorously, avoiding the methodological pitfalls of past work and sticking to an analysis plan they described before starting the study.

The pair asked just over one thousand British 14- and 15-year-olds to list the games they had played in the past month, and how long they played them for. Rather than relying on the teenagers to describe whether these games were violent, the researchers checked whether or not each game contained violent content according to the European video game rating system, PEGI (of the more than 1,500 games participants listed, nearly two in three, such as Grand Theft Auto V, were rated as violent).

To measure aggression, the researchers again avoided questioning the gamers directly. Instead, they asked the teenagers’ caregivers to complete a survey on the aggressive behaviours their child had shown over the past month, such as whether they had fought or bullied other children. They also directly measured the teens’ “trait” levels of aggression, asking them the extent to which they felt that various statements characterised them (for example, “Given enough provocation, I may hit another person”). The researchers used this measure to control for baseline levels of aggression in the analysis.

Przybylski and Weinstein found that their participants played video games for two hours per day, on average, and almost 49 per cent of the girls and 68 per cent of the boys had played one or more violent games in the past month. But the amount of time they’d spent playing violent video games did not predict how aggressive they had been. These results remained the same when the researchers classified the games’ content on the American rather than European video game rating scale.
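To make the design concrete, here is a minimal sketch, in Python, of the kind of model described above: caregiver-rated aggression regressed on time spent with violent games while controlling for the teen’s self-reported trait aggression. This is not the authors’ code; the data and column names are hypothetical, invented purely for illustration.

    # Minimal sketch (not the study's actual analysis): regress caregiver-rated
    # aggression on violent-game play time, holding trait aggression constant.
    # All values and column names below are invented for illustration.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "caregiver_aggression": [2, 5, 3, 7, 1, 4, 6, 2],    # past-month aggressive behaviours
        "violent_game_hours":   [0, 14, 3, 20, 0, 7, 25, 1], # hours of PEGI-rated violent games
        "trait_aggression":     [1.5, 3.2, 2.1, 3.8, 1.2, 2.6, 3.0, 1.8],
    })

    # The coefficient on violent_game_hours is the association of interest once
    # baseline (trait) aggressiveness is held constant.
    model = smf.ols("caregiver_aggression ~ violent_game_hours + trait_aggression",
                    data=df).fit()
    print(model.params)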

New bill targeting video games won’t reduce violence
By Chris Ferguson and Patrick Markey

If there is a link between video games and school shootings, we’ve found it to be in the opposite direction from what’s implied by the proposed tax. Our research shows that from 1996 to 2011, as video game sales soared, youth violence dropped by over 80 percent. Countries that consume more video games per capita than the U.S., such as South Korea, Japan, and the Netherlands, have among the lowest violence rates on the planet. The finding that violent video games are related to decreases in violent crime is extremely robust and has been reached multiple times by different scholars. Long-term studies of youth have not found that playing action games predicts later bullying, violent crime, or conduct disorder. As a result, many scientists in this area no longer believe violent media cause societal violence.

Put simply, if we’re serious about tackling school violence, there is little evidence that doing anything related to violent games will help.

Video Games Aren’t Why Shootings Happen. Politicians Still Blame Them.
By Kevin Draper

People who commit mass shootings sometimes identify as video gamers, but James Ivory, who studies media and video games at Virginia Tech, cautioned that observers should be aware of the base rate effect. Of course some mass shooters will have played violent video games, he said — video games are ubiquitous in society, especially among men, who are much more likely to commit mass shootings.

“It is very similar to saying the perpetrator wears shoes,” Dr. Ivory explained. “They do, but so do their peers in the general population.”

Researchers have some good data on what causes people to commit violent crime, but much less data on what causes them to commit mass shootings, in large part because they happen relatively infrequently.

There is no universally accepted definition for what constitutes a mass shooting. For a long time, the F.B.I. considered it to be a single shooting in which four or more people were killed. By that definition, a handful occur in the United States each year. Using a definition with fewer victims, or including those injured but not killed, a few hundred occur each year.

Either count pales in comparison to the one million other violent crimes reported each year.

While cautioning that he was hesitant to imply that most mass shooters fit a specific profile, Dr. Ferguson listed some commonalities. They tend to have mental health problems (sometimes undiagnosed) and a history of antisocial behavior, have often come to the attention of law enforcement or other authorities, and are what criminologists call “injustice collectors,” he said.

“The problem is, you could take that profile and collect 500,000 people that fit,” he said. “There are a lot of angry jerks out there that don’t go on to commit mass shootings.”

Mass shootings aren’t growing more common – and evidence contradicts common stereotypes about the killers
By Christopher J. Ferguson

Mass homicides get a lot of news coverage, which keeps our focus on the frequency of their occurrence. Just how frequent is sometimes muddled by shifting definitions of mass homicide, and by confusion with other terms such as active shooter.

But using standard definitions, most data suggest that the prevalence of mass shootings has stayed fairly consistent over the past few decades.

To be sure, the U.S. has experienced many mass homicides. Even stability might be depressing given that rates of other violent crimes have declined precipitously in the U.S. over the past 25 years. Why mass homicides have stayed stagnant while other homicides have plummeted in frequency is a question worth asking.

Nonetheless, it does not appear that the U.S. is awash in an epidemic of such crimes, at least compared with previous decades going back to the 1970s.

Mass homicides are horrific tragedies and society must do whatever is possible to understand them fully in order to prevent them. But people also need to separate the data from the myths and the social, political and moral narratives that often form around crime.

Kids who play violent videogames may be more likely to pick up a gun and pull the trigger
By Linda Carroll

Among the 76 children who played videogames that included guns, 61.8% handled the weapon, as compared with 56.8% of the 74 who played a game including sword violence and 44.3% of the 70 who played a non-violent game.

Children who played violent videogames were also more likely to pull the trigger, researchers found.

How many times children pulled the trigger depended on the videogame they watched.

It was a median of “10.1 times if they played the version of Minecraft where the monsters could be killed with guns, 3.6 times if they played the version of Minecraft where the monsters could be killed with swords and 3.0 times if they played the version of Minecraft without weapons and monsters,” Bushman said in an email.

“The more important outcome, though, is pulling the trigger of a gun while pointing that gun at oneself or one’s partner (children were tested in pairs),” Bushman said. There, the median was 3.4 times for the game with gun violence, 1.5 times for the game with swords and 0.2 times for non-violent games.

The new study “is the most rigorous design that can be conducted,” said Cassandra Crifasi, deputy director of the Johns Hopkins Center for Gun Policy and Research.

expert reaction to study on violent video games and behaviour with real guns
By The Science Media Centre

Prof Andrew Przybylski, Associate Professor and Director of Research at the Oxford Internet Institute, University of Oxford, said:

“This is a very curious study which I believe should be viewed with high skepticism, given the combination of its attention-grabbing claims and the actual details of how the study was conducted.

“Quite worryingly, the study claims that it is a clinical trial, but it does not meet this standard. There is an entry on clinicaltrials.gov, but discrepancies between the information there and what is presented in the paper lead me to worry about the peer review process. Many measures present in the paper are absent in the trial plan, the numbers of participants vary between the plan and the paper, and many essential details, such as the statistical analysis plan, are entirely missing. Taking these together, I’m quite concerned that the public or journalists might conclude this study was rigorously done… it was not.

“Further, the authors are non-committal on whether it complied with best research practices. Many indicators relating to well-done science, such as data sharing, are hinted at but not followed through on. The registry shows the study is ongoing, and the researchers have repeatedly, on three occasions, once as recently as last month, delayed submitting their results for quality control checks on clinicaltrials.gov.

“The paper has numerous other obvious problems, such as low statistical power, an unrealistic scenario involving how a 12-year-old would view a decoy weapon in a university psychology laboratory, and its casting of Minecraft as a violent video game, but two statistical anomalies stick out. First, the results and outcomes mentioned in the plan registered on the trials page are either not statistically significant or only just barely so; because there is not enough detail in the analysis plan, this pattern is suspect given that the researchers stopped data collection early. Second, nearly 30 additional models are tested, giving the hypothesis a large number of chances to ‘work’, but the non-violent version of Minecraft only looks like it has a smaller effect on handling a gun in one of these cases. We would expect at least one or two of these to be different on the basis of chance.

“All in all, the concerns I highlight should underline any doubts readers might have, given that the authors are essentially claiming that playing Minecraft for 20 minutes could convince an adolescent that playing with a mysterious gun in a university laboratory is a good idea. It’s far more likely that these findings and the resulting press release are tapping into something less newsworthy: statistical noise.”
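Przybylski’s point about the roughly 30 additional models is the familiar multiple-comparisons problem. The following small simulation is offered as an illustration rather than a re-analysis of the study: it assumes 30 independent tests at the conventional .05 threshold and no real effects anywhere, and shows how often at least one result still comes out “significant” by chance.

    # Toy simulation of the multiple-comparisons point: 30 tests, all nulls true.
    # Assumes independent tests, which is a simplification; correlated models
    # change the exact number but not the lesson.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_tests, n_per_group, alpha = 2_000, 30, 40, 0.05

    false_alarm = 0
    for _ in range(n_sims):
        # Two groups drawn from the same distribution, so every null is true.
        p_values = [
            stats.ttest_ind(rng.normal(size=n_per_group),
                            rng.normal(size=n_per_group)).pvalue
            for _ in range(n_tests)
        ]
        false_alarm += any(p < alpha for p in p_values)

    print(f"At least one 'significant' result in {false_alarm / n_sims:.0%} of simulated studies")
    # Analytic answer for comparison: 1 - 0.95**30, roughly 79%.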

Two Researchers Challenged a Scientific Study About Violent Video Games—and Took a Hit for Being Right
By Alison McCook, Retraction Watch

Others have questioned Bushman’s work without incident—since 2016, Joseph Hilgard has approached Bushman and his colleagues regarding two papers about the real-life impacts of violent video games. The authors shared the data, and the journals took action. One journal flagged a 2017 paper showing the presence of weapons increased aggressive thoughts with a note warning readers of errors; another retracted a 2016 paper claiming that gifted children underperform on verbal tasks after watching violent cartoons. Hilgard, now an assistant professor at Illinois State University, told Retraction Watch in 2017 (when the paper was retracted) that the authors were “quite helpful.”

In December 2017, RUB “closed the case,” Ombudsman Eysel said. “We came to the well-grounded conclusion to apply the rule of protection of the honest whistleblower,” he told me. Markey and Elson have reposted the timeline of events to their website, without attaching the documents they received from OSU.

Although the anxiety of the OSU complaint has now dissipated, Elson is left with another worry: If this paper by such a prominent lab about a topic of major public importance is flawed, what else is? “This was one paper—How many other papers are there out there that you can’t trust?” he told me. “What about all of these labs that are not as well known? Where not as many prying eyes are looking on? That worries me.”

Bushman is still frequently quoted in the news about the roots of teen violence—which is awkward for Markey, who says he gets upset when any researchers use data gathered in a controlled lab environment to try to explain something as multifactorial as school shootings. “Any scholar who makes that leap is making errors,” Markey told me.

The whole experience has been a bit surreal, Markey said. Though they were eventually vindicated, Markey and Elson paid a price for telling the world what had happened.

“At the end of the day, we weren’t saying anything that wasn’t true,” Elson told me. But simply ridding the literature of one problematic paper took countless hours of emails and discussions, not to mention the anxiety of dealing with the OSU complaint. Bringing out the truth about the shoddy study “took more time than actual research projects that we had,” he said. “It probably took more time than the actual study.”

For Markey, watching his friend and colleague live under the shadow of a complaint for a year was difficult, and not worth it. “At the end of the day, our goal was to remove bad science. And we achieved that goal. But I have no desire to ever do this again,” Markey said. “Because all it got me was grief.”

The Pulling Report
By Michael A. Stackpole

One of the most dangerous aspects of a magical world view is that it repopulates our world with demons that can force us to do things we do not want to do. As a result, adults no longer have to accept responsibility for themselves or their unruly children. Whereas the line, “The devil made me do it,” brought laughs twenty years ago, now it is seen as a defense for murder, an excuse for suicide and a shelter from blame for a host of other crimes.

Worst of all, this magical world view brings with it a fanatical self-righteousness that slops over into accusations of diabolical duplicity when it is questioned. Doubting the existence of Satanism and a conspiracy is not just doubting the evidence for the same. It is not just doubting the word of a witness concerning sacrifices of which one can find no trace. Within the magical world view, the mere act of doubting becomes an act of treason against God. To question the existence of a worldwide Satanic conspiracy means the skeptic is either a high ranking member of that conspiracy out to spread disinformation, or a poor, pitiful, ignorant dupe of that conspiracy.

A magical world view enables a person to see relationships between things that do not exist. It invests power in things that cannot be controlled and, therefore, responsibility for actions does not have to be accepted. It creates around a believer a smug cocoon that insulates him from any fragment of reality that might disturb him. Finally, it puts everyone who dares challenge their beliefs in the camp of the Enemy in some cosmic struggle between good and evil.

In reality, a person questioning the existence of the Satanic conspiracy is merely pointing out that the emperor is wearing no clothes. In that case, one can understand why the emperor’s tailors get upset and suggest the person doing the pointing is a tool of the devil. Then the question comes down to one of whether the crowd will believe the evidence they have before them, or if they will buy into the tailors’ fantasies.

The Rise of the Professional Dungeon Master
By Mary Pilon

To meet the rising D&D demand, some professional DMs now train other professionals. Rory Philstrom, a pastor with the Evangelical Lutheran Church in America, based in Bloomington, Minn., is also a self-described “semi-professional” DM. His website features a photoshopped image of Jesus holding a D-20 die and reads: “As baptized children of God, we can do all things through Christ who strengthens us. One of those things happens to be: play Dungeons & Dragons like a boss.” He hosts a four-day retreat, Pastors and Dragons ($440 to $675, depending on your lodging preference), at which you’ll learn how to use D&D as a tool to explore Christian teachings and enhance the work of your ministry. He’s also run an eight-session campaign for his confirmation class, and his church hosts a game night twice a month. “It’s been a blast,” he says.

Philstrom partially credits the game with helping him build collegial relationships and navigate the “realities of ministry” since his ordination in 2012. “When you’re at the table, people are putting themselves out there. They take the contents of their imaginations and translate it into story. You can ask people what they want to be, and people show all sorts of facets of their personalities,” he says. His fantastical style is particularly gutsy considering that Dungeons & Dragons was once the focus of a moral panic among far-right Christian groups. The 1980s, in particular, saw scores of accusations that the game endorsed witchcraft and satanic worship. Opponents tied it to murders and suicides, claims that have long since been debunked.

Netflix Has Their “Han Shot First” Moment with 13 Reasons Why
By Christopher Ferguson

This year, a pair of widely publicized studies claimed to link the show to an increase in teen suicides. One of these did no such thing. If 13 Reasons Why caused suicide among viewers, we’d expect suicides to increase among teen girls and young adult women: the demographic most similar to that of the show’s protagonist. But no effects were found for young adults, and suicides among teen girls actually decreased for one month after the show’s release. Only suicides among teen boys increased and only some months after the show’s debut. Suicides among teen boys were already increasing before the show was released, which suggests that the show’s timing coincided with a trend, not that it was a causal agent of that trend. The suicides of several male celebrities—including Aaron Hernandez, Chris Cornell and Chester Bennington of Linkin Park—at around the same time are more likely to have influenced the male suicides than the female-focused 13 Reasons Why. Indeed, many people who read this study were unimpressed.

A second study appeared to show clearer correlations, but didn’t control for seasonal patterns in suicide. April (when the first season was released) tends to be a high suicide month, and suicides have been increasing across age groups for several years. Thus, a peak in April 2017 was to be expected. This doesn’t mean 13 Reasons Why caused it.

Curiously, these two studies use the same Centers for Disease Control dataset. By running the data in different ways, they get different results. This is concerning. I have asked both groups for the data files used to calculate their results. Neither has complied. This means we have to take their word for it that their analyses are sound. Major decisions about artistic integrity should not be based on non-transparent science.

Blaming social media for youth suicide trends is misguided and dangerous
By Christopher J. Ferguson and Patrick Markey

In the past 10 years, teen suicides have increased, and this is certainly something we should pay attention to. But this likely is not a new trend brought on by smartphones or social media. Teen (meaning 15 to 19 years old) suicide rates today are about the same as they were in the early 1990s, well before anyone held an iPhone.

One large-scale data set of more than 1 million teens even found that teens are slightly happier today than they were in the 1990s. Even if you only examine data from the past decade, it becomes clear that both the number of suicides and the raw increase in suicides are higher among middle-aged adults, who use less tech. While there were 1,151 more teen and preteen (aged 10 to 19) suicides in 2017 than a decade ago, the number of suicides among adults aged 45 to 54 increased by 3,480 over the same period.

What these statistics suggest is that the recent increase in suicides is not unique to young people and, more importantly, youth suicides rates today are not much different than they were in the recent past, when there was no Facebook, Twitter, or Snapchat to blame.

Unfortunately, a few recent irresponsible headlines and claims by scholars continue to perpetuate the notion that social media increases risk for suicide. One recent study suggested a rise in depression and suicide among teens is linked to this technology, but didn’t bother to actually measure technology use among teens. The study offered no data, for instance, to suggest that teens who use more screens experience greater depression or commit suicide more often than those using fewer screens.

Nor is it clear that annual changes in depression or suicide correspond with annual changes in screen use. Another study implicated technology use in teen depression among girls but not boys. However, a close read suggested that technology use might account for less than a third of one percent of the risk for depression. A more recent study unpacking the same data debunked the technology and depression claims, noting that the association of eating potatoes or wearing eyeglasses with depression is about the same magnitude. Nobody warns parents of the dangers of eyeglasses.

Do we really have a ‘suicidal generation’?
By Tom Chivers

There’s this trick that climate deniers used to use. They used to say “there’s been no warming since 1998”. And in a weird way they were right: looking at global atmospheric surface temperatures, none of the years that followed was as hot as 1998.

But they were cheating. They picked 1998 deliberately since it was an outlier – an El Niño year much hotter than the years around it. If you were, on the other hand, to measure from 1997 or 1999, then there were lots of much hotter years on record; and the clear trend was that later years, on average, were hotter than earlier ones. It was a wobbly, noisy line, with some outliers, but the average temperature really was going up, and the only way you could hide that trend was by cherry-picking statistics.

I was thinking about this as I read the Sunday Times splash this week, which (using as-yet unavailable data from the Office for National Statistics) claimed that the “suicide rate among teenagers has nearly doubled in eight years”. It expressed concerns that we are raising “a suicidal generation”.

They said that the suicide rate among 15- to 19-year-olds in 2010 was just over three per 100,000. The ONS figures, due out in September, will (apparently) show that it is now over five per 100,000. Inevitably enough, the piece links the purported rise to the growth of social media since 2010.

But this is – and I don’t want to get too technical here, but bear with me – absolute bollocks from top to bottom. It’s a masterclass in what scientists call “hypothesising after results are known”, or HARKing. If you have the data in front of you, then you can make it say almost anything you like.

First, it’s worth noting that very few teenagers kill themselves. The total number of suicide deaths among 15- to 19-year-olds in 2017 in England and Wales was 177, out of about 3.25 million. That means that small changes can look like big percentage swings. More important, though, the Sunday Times story did exactly what the climate deniers did. The year 2010 had the lowest rate of teen suicides of any year since at least 1981, when the ONS records begin. You could compare it with literally any other year and you’d see a rise.

Added to which, picking social media as your reason for the rise is completely arbitrary. Social media did not start in 2010. The BBC TV series Sherlock, starring Benedict Cumberbatch and Martin Freeman, did, though. Maybe we should blame that.

You could, if you wanted to, use the same trick to tell the exact opposite story. Facebook was first released in 2004, when the suicide rate among 15- to 19-year-olds in England and Wales was 4.7. But after six years of social media being available, it had dropped to 3.1! It’s a life-saver, no?
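The arithmetic behind that reversal is worth spelling out. Using only the rates quoted in the piece (roughly 3.1 per 100,000 in 2010, 4.7 in 2004, and an illustrative 5.1 standing in for the “over five” figure), a few lines of Python show how the apparent trend flips depending on the baseline year:

    # Rates per 100,000 among 15- to 19-year-olds in England and Wales, as quoted
    # above. "latest" uses 5.1 as a stand-in for "over five"; it is illustrative,
    # not an official ONS number.
    rates = {"2004": 4.7, "2010": 3.1, "latest": 5.1}

    def percent_change(start, end):
        return 100 * (rates[end] - rates[start]) / rates[start]

    print(f"2010 -> latest: {percent_change('2010', 'latest'):+.0f}%")  # a big apparent rise
    print(f"2004 -> 2010:   {percent_change('2004', '2010'):+.0f}%")    # an equally dramatic fall
    print(f"2004 -> latest: {percent_change('2004', 'latest'):+.0f}%")  # far less alarming

The numbers themselves never change; only the choice of starting point does, which is exactly the cherry-picking the column describes.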

We’re told that too much screen time hurts our kids. Where’s the evidence?
By Andrew Przybylski and Amy Orben

Where do we go from here? Well, it’s probably best to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing. There are many good reasons to be sceptical of the role of Facebook, Snapchat and TikTok in our society but it would be a mistake to assume science supports fears that every minute online compromises mental health. In fact, this idea risks trivialising and stigmatising those who struggle with mental health on a daily basis.

Moving beyond screen time to explain the interplay between technology and the wellbeing of our adolescent population requires us to face some tough questions. It’s all well and good to remember “neurotransmitter deposits” aren’t a thing, and this goldfish nonsense has been repeatedly debunked. But it remains the case that we don’t understand fully the impact of big tech on our society.

The fact is that much of the data that would enable scientists to uncover the nuanced and complex effects of technology is locked behind closed doors in Silicon Valley. Until Google, Facebook and the large gaming companies share the data being saved on to their servers with every click, tap or swipe on their products, we will be in the dark about the effects of these products on mental health. Until then, we’ll all be dancing to the steady drumbeat of monetised fear sold by the moral entrepreneurs.

Screen Time Might Not Be As Bad For Mental Health As We Thought
By Roni Dengler

Other scientists, in a response published in PNAS, argue the study is flawed. Joshua Foster, a personality and social psychologist at the University of South Alabama and M. Hope Jackson, a psychologist in the Mobile area, point out Przybylski’s study focuses on a single question about social media use, one that doesn’t cover the weekend, when tweens have the most free time.

The psychologists further draw attention to the fact that the survey question only asks about “chatting or interacting with friends.” This leaves out how many hours the tweens spend on the platforms consuming content but not socializing. That might explain why most adolescents claimed the number of hours they used social media in a day was “none” or “less than an hour” on average.

In response, Przybylski writes, “No self-report measurement is perfect.” It’s a point he and colleagues brought up in their study. The team was “dissatisfied” with having to rely on questionnaires but point out that there’s no evidence to suggest that the survey they used is any better or worse than other available tools.

‘Mischievous Responders’ Confound Research On Teens
By Anya Kamenetz

Teenagers face some serious issues: drugs, bullying, sexual violence, depression, gangs. They don’t always like to talk about these things with adults.

One way that researchers and educators can get around that is to give teens a survey — a simple, anonymous questionnaire they can fill out by themselves without any grown-ups hovering over them. Hundreds of thousands of students take such surveys every year. School districts use them to gather data; so do the federal government, states and independent researchers.

But a new research paper points out one huge potential flaw in all this research: kids who skew the results by making stuff up for a giggle. “Mischievous Responders,” they’re called.

They may say they’re 7 feet tall, or weigh 400 pounds, or have three children. They may exaggerate their sexual experiences, or lie about their supposed criminal activities. In other words, kids will be kids, especially when you ask them about sensitive issues.

In a 2003 study, 19 percent of teens who claimed to be adopted actually weren’t, according to follow-up interviews with their parents. When you excluded these kids (who also gave extreme responses on other items), the study no longer found a significant difference between adopted children and those who weren’t on behaviors like drug use, drinking and skipping school. The paper had to be retracted. In yet another survey, fully 99 percent of 253 students who claimed to use an artificial limb were just kidding.

“Part of you laughs about it, and the researcher side is terrified,” says Robinson-Cimpian. “We have to do something about this. We can’t base research and policy and beliefs about these kids on faulty data.”
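The adoption example hints at the fix the researchers describe: screen for respondents whose answers to several unrelated, low-base-rate items are implausible, then check whether the headline result survives without them. The sketch below is purely illustrative; the column names, cut-offs and data are invented, not taken from any of the surveys mentioned.

    # Illustrative screen for "mischievous responders" (hypothetical data and
    # thresholds): flag surveys with several implausible answers, then compare
    # the result with and without the flagged respondents.
    import pandas as pd

    df = pd.DataFrame({
        "height_inches":   [65, 84, 62, 70, 90, 66],
        "num_children":    [0, 3, 0, 0, 5, 0],
        "artificial_limb": [0, 1, 0, 0, 1, 0],   # low-base-rate item
        "drug_use":        [0, 1, 0, 1, 1, 0],
    })

    flags = (
        (df["height_inches"] >= 80).astype(int)   # claims to be about 7 feet tall
        + (df["num_children"] >= 3).astype(int)   # claims several children
        + df["artificial_limb"]                   # claims an artificial limb
    )

    # Respondents with two or more implausible answers are treated as suspect.
    suspect = flags >= 2
    print("Drug-use rate, everyone:       ", df["drug_use"].mean())
    print("Drug-use rate, screened sample:", df.loc[~suspect, "drug_use"].mean())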

Embrace the unknown
By Chris Ferguson

Science laundering is the washing away of inconvenient data, methodological weaknesses, failed replications, weak effect sizes, and between-study inconsistencies. The cleaned-up results of a research field are then presented as more solid, consistent and generalisable to real-world concerns than they are. Individual studies can be irresponsibly promoted by press release, or entire research fields summarised in policy statements in ways that cherry-pick data to support a particular narrative. Such promotions are undoubtedly satisfying and easier to digest in the short term, but they are fundamentally deceitful, and they cast psychology as a dishonest science.

Why do this? Why not change course and release honest statements for research fields that are messy, inconsistent, have systematic methodological weaknesses or that may be outright unreproducible? Incentive structures. Individual scholars are likely seduced by their own hypotheses for a multitude of reasons, both good and bad. Big claims get grants, headlines, book sales and personal prestige. I note this not to imply wrongdoing, but to acknowledge we are all human and respond to incentives.

These incentive structures have been well documented in science more widely, and psychology specifically, in recent years. Unfortunately, the public remains largely unaware of such debates, and ill-equipped to critically evaluate research. As one recent example, Jean Twenge and colleagues (2018) released a study, covered widely in the press, linking screen use to youth suicides. However, another scholar with access to the same dataset noted in an interview that the magnitude of effect is about the same as for eating potatoes on suicide (see Gonzalez, 2018: effect sizes ranged from r = .01 to .11 depending on outcome). Such correlations are likely within Meehl’s ‘crud factor’ for psychological science, wherein everything tends to correlate with everything else, to a small but meaningless degree.
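A quick calculation shows why correlations of that size are plausibly “crud”. Squaring r gives the share of variance in the outcome the predictor accounts for, and for the values cited above (r = .01 to .11) that share is tiny:

    # Variance explained by the correlations cited above (r = .01 to .11).
    for r in (0.01, 0.05, 0.11):
        print(f"r = {r:.2f}  ->  variance explained = {r**2:.4f} ({r**2 * 100:.2f}%)")
    # Even the largest value, r = 0.11, accounts for only about 1.2% of the variance;
    # r = 0.01 accounts for about 0.01%.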

Does psychology have a conflict-of-interest problem?
By Tom Chivers

It is only in the past two decades that many disciplines, led by the medical journals, have codified rules requiring full transparency about payments to researchers. The ICMJE issued its guidelines in 2009; and in 2013, a US law called the Sunshine Act came into force that requires pharmaceutical companies to declare their payments to doctors and hospitals. These rules were introduced as researchers became aware that COIs can colour scientific objectivity. Meta-analyses looking at the work of scientists with COIs have found that their work is consistently more likely to return positive results … and that research funded by for-profit organizations is more likely to find benefits from interventions than is non-profit-funded research …

The COIs in these kinds of study generally relate to companies directly funding relevant research or paying scientists, rather than to fees for speaking engagements or consulting. But the ICMJE guidelines say that researchers should declare “all monies from sources with relevance to the submitted work”, including personal fees, defined as “monies paid to you for services rendered, generally honoraria, royalties, or fees for consulting, lectures, speakers bureaus, expert testimony, employment, or other affiliations”. Reimbursement for speaking engagements or consultancy “fits quite clearly with what [the ICMJE guidelines] call personal fees”, says Adam Dunn, who studies COIs in pharmaceutical research at Macquarie University in Sydney, Australia.

Most COI declarations in research papers run on an honour system: scientists are expected to declare, but there is little actual checking. Last year, for instance, a well-known cancer researcher, José Baselga at the Memorial Sloan Kettering Cancer Center in New York City, resigned after failing to declare millions of dollars he had received from various pharmaceutical companies. Journalists found the payments in a federal database related to the Sunshine Act. COI problems have affected psychology, too: this year, a PLoS ONE paper about mindfulness was retracted over methodological concerns …, but its editors also noted that the authors had failed to disclose their employment at an institute that sold related mindfulness products.

Fraud Ain’t The Game
By James Heathers

Basically, scientists are allowed to make mistakes without the assumption that they have done something untoward — no matter how their work looks — and I am allowed to ask if I have identified something which constitutes a mistake.

The only important thing to do when you have found an error (or a bunch of them) is to tell other people. Preferably the author, who should take responsibility for the problems identified if doing so would be justified.

But in the absence of that, telling everyone else works too.

The scientific record is important. Even for research you might think is deeply silly, even when it’s the Southern Maine Journal of Basketweaving, even when it’s not your field, even when you think what’s been written is so facile and arse-backwards that no ‘reasonable’ person would ever believe it (so why get involved?)

Because far more important than the life and times of any individual paper is building a scientific environment where mistakes are located, publicly identified, and corrected. You’ll never know whose time or money you are saving. But money and time is saved. It’s no more abstract than the ‘good’ the work can do if it exists.

So: where the errors are from, I have no opinion.

People have told me previously that this is terribly wishy-washy, which I consider to be bollocks. My suggestion to illuminate this opinion for you is: try it. Put yourself in the public domain and accuse people of bad faith. You will be wrong a lot, and then you will get into trouble. It’s much easier to support the argument that someone can’t add than litigate their intentions when the cat ate their calculator.

And wrong is wrong.

And if wrong enough, should get fixed, or get gone.

How often do authors with retractions for misconduct continue to publish?
By Retraction Watch

RW: You found that while publication rates may decline after the first retraction, “There was no difference in the decline in publication rates between authors associated with a retraction for misconduct and those not associated with such a retraction.” Does this suggest that the cause of retraction, as well as the number, has less effect than just having a retraction at all?

MB: It might do, but the overwhelming majority of individuals were associated with a retraction for misconduct. (That doesn’t mean that the author committed misconduct, but that the retraction notice listed a reason for retraction which falls under the category of misconduct). So it might be that there were too few individuals who were not associated with a retraction for misconduct to detect differences between the groups.

RW: Other studies have found that authors who have retracted papers see a much higher rate of decline in citations of their work overall if the retractions are for misconduct than if they are for honest error. You found that “After the first retraction, citation rates of retracted papers declined whereas those of unretracted papers by the same authors remained unchanged.” What do you make of that finding?

MB: Based on our experience with the Sato/Iwamoto case, when we looked closely at unretracted papers, we commonly found issues that raised concerns about the integrity of the papers. We have notified these to the journals affected but many are yet to take action even after a few years. If our experience applies to other authors with multiple retractions, it seems likely that many of their unretracted papers would also have issues identified if the paper is closely examined. However, there doesn’t seem to be a process for systematically examining the entire body of research of individuals with multiple retractions. Institutional investigations mostly seem to be quite narrow in scope, only investigating papers about which concerns have been raised.

So there are potentially several explanations, including that the unretracted papers might have no issues with their integrity. Alternately, the papers might have integrity issues that are not detected by people who cite them, especially if they are unaware of concerns raised about the author’s retracted papers.

A Credibility Crisis in Food Science
By James Hamblin

The Wansink saga has forced reflection on my own lack of skepticism toward research that confirms what I already believe, in this case that food environments shape our eating behaviors. For example, among his other retracted studies are those finding that we buy more groceries when we shop hungry and order healthier food when we preorder lunch. All of this seems intuitive. I have used the phrase health halo in my own writing, and am still inclined to think it’s a valid idea.

It’s easy to let down one’s skepticism toward apparently virtuous work. Studies are manipulated and buried and disingenuously designed or executed all the time for commercial reasons, notoriously in domains like pharmaceuticals, where there is a clear incentive to prove that a product is safe and effective—that the years and millions of dollars that went into developing a drug were not wasted, and rather that they were in service of a safe and effective billion-dollar product. But a bulk of the inquiry into Wansink’s research practices centered on a study about getting children to choose fruits and vegetables as snacks if they were marked with stickers bearing popular cartoon characters. Why would someone fabricate a study about how to get kids to eat more fruits and vegetables?

This Ivy League food scientist was a media darling. He just submitted his resignation, the school says.
By Eli Rosenberg and Herman Wong

For years, Wansink enjoyed a level of prominence that many academics would strive for, his work spawning countless news stories. He published a study showing that people who ate from “bottomless” bowls of soup continue to eat as their bowls are refilled, as a parable about the potential health effects of large portion sizes. Another, with the title “Bad popcorn in big buckets,” similarly warned about the perils of presenting food in big quantities, according to Vox.

He was given an appointment at the Department of Agriculture’s Center for Nutrition Policy and Promotion and helped oversee the shaping of federal dietary guidelines, according to Vox. He was cited in popular media outlets such as O, the Oprah Magazine and the “Today” show and featured in newspapers such as the New York Times and The Washington Post. According to the blog the Skeptical Scientist, which is run by PhD student Tim van der Zee, the hundreds of papers Wansink published drew so much attention that they were cited some 20,000 times.

But problems started to bubble up in 2016, after Wansink wrote a blog post about his research that drew wide criticism, according to BuzzFeed. Other researchers began investigating his studies and raised questions about his methodology. In 2017, Cornell undertook a review of four of his papers that found “numerous instances of inappropriate data handling and statistical analysis,” but said the errors “did not constitute scientific misconduct.”

Smartphones aren’t making millennials grow horns. Here’s how to spot a bad study
By Nsikan Akpan

When The Washington Post published its story early in the morning on June 20, the original version did not include an interview with a researcher who was not involved in the study. The BBC story also lacks outside commentary on Shahar and Sayers’ study.

The Washington Post updated its story with additional context, more than eight hours after publication, according to the Wayback Machine. By then the story had received massive news coverage, much of which cited The Washington Post and BBC as sources.

Molly Gannon, a communications manager for The Washington Post, shared this statement in response to our questions:

Our story reports on the findings of studies that ran in multiple peer-reviewed journals and includes interviews with the scientists who conducted the research. It also includes an interview with an outside expert, which reflects our standard practice, and which was moved higher in the story to make it more prominent. The word ‘horns’ was used by one of the scientists.

On Tuesday, The Washington Post also updated its story to include Shahar’s possible conflict of interest regarding his posture pillows company.

A BBC spokesperson said “This is an article about osteobiography, of which Dr Shahar’s study was one example of many.”

The weaknesses in the “horns” study are opaque to most readers. But there are things you can look for when you’re trying to suss out whether a science story holds up:

  1. outside commentary on the study at hand
  2. clues about whether the research was peer-reviewed, and by whom
  3. what data the study uses as a source
  4. and finally: does the study claim more than it proves?

This last point and this episode offer a reminder about the modern news cycle.

History shows us — such as with Andrew Wakefield’s retracted study on measles and autism — that the stakes are high when reporting on science and health. Such misinformation erodes the public’s ability to comprehend what is empirically right and backed by facts versus what is fiction.

‘Text neck’ — aka ‘horns’ — paper earns corrections
By Retraction Watch

A spokesperson for Scientific Reports told us:

When we became aware of criticisms of this paper, we carefully investigated the concerns raised following an established process. This further assessment of the manuscript, which took additional information into account, revealed that the methodology and data remained valid. It was determined, however, that the paper should be corrected to more accurately represent the study and the conclusions that could be drawn from it. We have also updated the competing interests statement for the paper.

The journal has had some eye-catching episodes recently, including the retraction of a paper that claimed a homeopathic remedy can treat pain in rats, the removal of a cartoon that appeared to include U.S. President Donald Trump’s face in feces, and mass resignations from its editorial board over concerns about how it handled allegations of plagiarism.

The Rise of Junk Science
By Alex Gillis

One of many junk studies that still disturbs Franco appeared in 2016 in Scientific Reports, an open-access journal from Springer, a reputable publisher, that accepts a range of high- and low-quality papers. The study suggested that the vaccine for the human papillomavirus (HPV) can cause neurological damage: scientists had injected the vaccine into twenty-four mice and found changes in two parts of the mice’s brains. Franco is an expert in cancer epidemiology, including that of cancers associated with HPV, and he’s familiar with the HPV vaccine, which has been proven to prevent cervical cancer in women. He spied the flaws in the paper immediately—though a casual reader might never have noticed them.

The mice in the experiment had been given 1,000 times more vaccine than the maximum allowable for a child. Even when factoring in a mouse’s faster metabolism, the doses were at least eighty times more powerful than what a typical vaccine would contain for a human, explains Dave Hawkes, a virologist in Australia. And, in spite of the high doses, the effects on the mice were still unclear due to other problems in the study, including ambiguity about what additional substances the mice were given and why the thickness of one tissue section was measured as a concentration (similar to saying a piece of wood is 6 percent thick).

Sharon Hanley, a professor at the Hokkaido University Graduate School of Medicine, in Japan, demonstrated the power of bad science and antivaccine sentiment and lobbying in the country, where an antivaccination movement has bloomed. Because people with difficult-to-diagnose disorders blamed the HPV vaccine for their illnesses—and because the media quoted from flimsy research—Japan stopped recommending the vaccine in 2013. Two years later, Hanley and three colleagues published a paper in The Lancet, a reputable journal, showing that new HPV vaccinations in Sapporo, Japan, had plummeted from 70 percent of girls in 2013 to less than 1 percent in 2014.

Franco had collaborated with Hanley and others on the situation there. After reading the HPV study, he alerted many of his colleagues to the outlandishness of the content, and he was part of an international group of epidemiologists that asked Scientific Reports to retract the paper—which it did, seventeen months later. “The problem is, the paper will likely find another home somewhere,” Franco says. “We’ll need to play Whac-a-Mole again.”

Jeffrey Beall, a former librarian at the University of Colorado Denver, was one of the first people to show the bizarre and disturbing relationships between academics and junk publishers, coining the term “predatory publishers.” A few years ago, Franco invited him to speak in Montreal, where Beall told his audience that the situation was “a recipe for corruption.” He’d been warning academics since 2012 via Beall’s List, an online blacklist, which he says received more than 15,000 daily page views before he deleted it last year.

Beall was particularly successful in revealing how companies and academics support the “garbage,” as he puts it. In 2013, for example, Springer formed an alliance with Frontiers, a publisher on Beall’s List. And, in 2017, Kamla-Raj Enterprises, also on Beall’s List, announced that it had signed an agreement with Taylor & Francis, a real publisher, to copublish fifteen of Kamla’s journals. One of those fifteen, Studies of Tribes and Tribals, states its aim as “understanding human beings especially aboriginals, backwards and minorities.”

Revealing the corruption of junk, mediocre, and real publishers created a great deal of stress for Beall. His own university launched a misconduct case against him in early 2017, after an angry publisher complained that he’d fabricated information about it (the publisher had appeared on his blacklist). Beall won the case—it was “baseless,” he says—but he retired early and deleted his blog in part because of personal attacks in academic circles. “I couldn’t take it anymore,” he says.

An economics professor at Thompson Rivers University, in BC, has experienced similar stress. In 2017, Derek Pyne, whom Beall calls “another hero,” published a study about the university’s School of Business and Economics indicating that many of his colleagues (thirty-eight in total) wrote for junk journals that were on Beall’s List. Pyne wrote that “the school has adopted a research metric that counts predatory publications equally with real publications.” He found a direct connection between padded CVs and higher salaries and promotions. Mike Henry, the school’s dean, says that a committee is updating the school’s promotion and tenure standards. Meanwhile, Pyne, whose story was widely covered in the media, says the entire experience was bizarre.

Pyne says Thompson Rivers banished him from the campus in May 2018, suspended him in July, and allowed him back on campus in December. He says he was threatened with medical leave if he didn’t agree to a “psychological evaluation.” When I asked the associate dean for a comment, he said the issue was a “personnel matter” and hung up on me. The Canadian Association of University Teachers is now investigating whether Pyne’s academic freedom was violated.

Bad Data Analysis and Psychology’s Replication Crisis
By Christopher J. Ferguson

For almost a decade now, psychological science has been undergoing a significant replication crisis wherein many previously held truisms are proving to be false. Put simply, many psychological findings that make it into headlines, or even policy statements issued by professional guilds like the American Psychological Association, are false and rely upon flimsy science. This happens for two reasons. The first is publication bias, wherein studies that find novel, exciting things are preferred to those that find nothing at all. The second is death by press release, the tendency of psychology to market trivial and unreliable outcomes as having more impact than they actually do.

The tendency to publish only or mainly statistically significant results and not null findings is due to a perverse incentive structure within academia. Most of our academic jobs depend more on producing research and getting grants than on teaching or committee work. This creates the infamous publish-or-perish structure, in which we either publish lots of science articles or we don’t get tenure, promotions, raises, prestige, etc. Scientific studies typically require an investment of months or even years. As academics, if we invest years in a study, we must get it published or we will lose our jobs or funding.

Reproducibility meets accountability: introducing the replications initiative at Royal Society Open Science
By Chris Chambers

Replication – it’s the quiet achiever of science, making sure previous findings stand the test of time. If the scientific process were a steam ship, innovation would be sipping cognac in the captain’s chair while replication is down in the furnaces shoveling coal and maintaining the turbines. Innovation gets all the glory but without replication the ship is going nowhere.

In the social and life sciences, especially, replication is terminally neglected. A retrospective analysis of over a hundred years of published articles in psychology estimated that just 1 in 1000 reported a close replication by independent researchers.

What explains this unhappy state of affairs? In a word, incentives. Researchers have little reason to conduct replication studies within fields that place a premium on novelty and innovation. Also, because psychology and neuroscience are endemically underpowered, a high quality replication study usually requires a substantially larger sample size than the original work, calling on a much greater resource investment from the researchers.
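The sample-size point can be made concrete with a standard power calculation. The sketch below uses illustrative effect sizes, not figures from any particular study: if an original estimate of d = 0.5 is inflated and the true effect is closer to d = 0.2, the replication needs roughly six times as many participants per group to reach 80% power.

    # Illustrative power calculation (effect sizes are made up) showing why
    # replications of underpowered work need much larger samples.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()

    # Per-group n for 80% power at alpha = .05, two-sided independent-samples t-test.
    for label, d in [("original (likely inflated) effect, d = 0.5", 0.5),
                     ("more realistic effect, d = 0.2", 0.2)]:
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8,
                              alternative="two-sided")
        print(f"{label}: ~{n:.0f} participants per group")
    # d = 0.5 needs roughly 64 per group; d = 0.2 needs roughly 394 per group.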

Even if researchers summon the necessary motivation and resources to conduct a replication, getting their completed studies published can be a thankless and frustrating task. Many journal editors – especially those who prize novelty – will desk-reject replication studies without them ever seeing an expert reviewer. And those replications that go further face a grueling journey through traditional peer review. Where the replication study fails to reproduce the original findings, proponents of the original work will point to some difference in methods, however trivial, as the cause for the non-replication. And where the replication succeeds, the reviewers and editor are likely to conclude that “we knew this already, so what does the study add”? All paths veer toward the same destination: rejection and the file drawer.

In the long term, our vision is broader. As Sanjay Srivastava pointed out over five years ago, we must realign the incentives in publishing to make journals accountable for the reproducibility of the work they disseminate. If a journal achieves fame for publishing a large quantity of impactful, novel studies with positive results, but is then bound to publishing attempted replications of those studies, then the journal has a reputational incentive to ensure that the original work is as robust as possible.

We hope to see this model normalise replication studies and give the scientists who conduct them the prominence they deserve. Replication has spent long enough shoveling coal in obscurity. It’s time for a taste of the captain’s chair.

Where there’s smoke, there’s no fire: Don’t blame movies for getting young people hooked on cigarettes
By Chris Ferguson

One of the big casualties of this replication crisis is the very idea of automaticity, or the notion that people are mechanically and unconsciously primed to change their behavior due to subtle environmental cues. It turns out it’s not really that easy to influence people’s behavior.

Social priming is similar to theories of media effects, beliefs that media acts as a kind of hypodermic needle inserting behaviors into viewers. The fact that social priming doesn’t seem to work well probably explains why fictional media doesn’t appear to be a powerful influence on behavior.

Gaming Disorder: The World Health Organisation Jumps the Shark
By Christopher Ferguson

Some people have asked me how long we’re likely to be stuck with gaming disorder. It’s hard to say. Some disorders, such as Dissociative Identity Disorder (formerly Multiple Personality Disorder), have remained official diagnoses, despite decades of controversy as to whether they describe real phenomena. By contrast, homosexuality was removed from the DSM—but only because social attitudes towards homosexuality had finally changed. As the social perception of homosexuality evolved from deviance to a perfectly acceptable sexual orientation, the DSM was forced to follow suit. This points to the alarming degree to which social narratives help define mental illness—something which does not happen with, for example, influenza or colon cancer. As attitudes toward gaming change, so too will perceptions of the validity of gaming disorder.

Prince Harry Wants to Ban Fortnite? Here’s What He’s Missing
By Jennifer Senior

Why — and how — had it so quickly become the rabid preoccupation of so many?

A great deal of the answer is that Fortnite is social. More than social, actually: It is, as the tech writer and developer Owen Williams has written, a destination, an actual place. “It’s like going to church, or the mall,” Williams explained on his blog, Charged, late last year, “except there’s an entire universe to mess around in together.”

Which explains a certain wisecrack my son likes to make when he peels off to play. “I’m going to see my friends now,” he says, though he’s in fact joining them on his headset. Jumping into a game of Fortnite is paying a social call, the equivalent of dropping in on a cocktail party.

That Fortnite is its own place — specifically “a third place,” or lively harbor for communities outside of home and work — matters quite a lot. Middle-class children today don’t have much freedom to find such places. They’re rigidly scheduled and aggressively sheltered — parents of my generation are more inclined to roll their children in bubble wrap and tuck them on a high shelf for storage than allow them to wander off to parks or shopping malls on their own. Gaming is their form of self-determination, a means to take control of their constricted, highly regimented lives.

I’ll toss in at least one paradoxical, unanticipated benefit of this socializing, at least in my house: My son now demands to see far more of his friends in real life. All that socializing via headset has whetted his appetite for embodied interaction. (Perhaps he’s an outlier. But Williams says the same thing has happened to him.) Sometimes those play dates don’t even involve Fortnite. But when they do, they’re far more social than meets the eye. The kids aren’t just plugged into their devices, but to one another — barking orders, exchanging intel, passing joysticks, cracking jokes.

The benefits of video games: why screen time isn’t always bad
By Dr Pete Etchells

Take Minecraft for example. It may seem like a fairly isolating, single-player experience to the outsider, but it brings people together in all sorts of ways. Some play to connect with their friends, others share in the creative experience of building something monumental, and it’s even been used as an interactive tool to teach students basic chemistry (see the University of Hull’s MolCraft project).

Elsewhere, studies have shown that video games can be used as therapeutic interventions to help soldiers overcome PTSD, and to help children with cancer stick to treatment regimes.

My disabled son’s amazing gaming life in the World of Warcraft
By Vicky Schaubert

At the end of the blog post about Mats’s death, Robert posted an email address for anyone wanting to get in touch.

“I wrote and cried. Then I hit publish. I didn’t know if any replies would come… and then the first email arrived – a heartfelt condolence from one of the players from Starlight.

“I read the email aloud: ‘It is with heavy heart I write this post for a man I never met, but knew so well.’ It made such an impression.”

Then came more messages of condolence – more stories of Mats’s gaming life.

“He transcended his physical boundaries and enriched the lives of people all over the world,” read one. “Mats’s passing has hit me very hard. I can’t put into words how much I’ll miss him,” said the next. “I don’t believe that one single person is the heart of Starlight. But if one was, it would have been him.”

Robert says: “An entire society, a tiny nation of people began to take shape.

“And it was on a scale that we had no idea existed. More and more emails arrived that testified about the kind of significance Mats had.”

Violent video games are not to blame for our own failings
By Tom Chatfield

Many things can become harmful in the absence of mechanisms for safely exploring and understanding them; and the most important mechanisms we possess for keeping others and ourselves safe online are about time, attention and sharing. In our digital age, danger lies more than ever in the gaps in our knowledge and care; while the steps towards becoming a discerning digital citizen equally entail boundaries, permissions, conversations — and love and attention unstintingly given.

Like other media, games can be a refuge, a distraction, an experiment and a joy. But they should never become a scapegoat for society’s inadequacies, or something we abandon our children to in the absence of attention and care.

Why Videogames Trigger the Nightly Meltdown—and How to Help Your Child Cope
By Julie Jargon

* Institute rules around game play—and follow them consistently. “Don’t let yourself fall into these traps, like, ‘Well, he didn’t play last week so we let him play an extra three hours this week,’” said Michael Milham, vice president of research and founding director of the Center for the Developing Brain at the Child Mind Institute.

Some experts advise parents to warn kids about 20 minutes before it’s time to shut down so they know what to expect and don’t begin a new level or mission. Dr. Milham suggests not letting kids play videogames right up until bedtime, because some have trouble settling down to sleep.

* Give your children a role in creating the rules. Susan Groner, a parenting coach and author, said children are more invested in following guidelines when they have a hand in developing them. If you and your child can come to an agreement on when videogames can be played and for how long, you can try it for a week and then revise if it’s not working. She also suggests having kids set a timer so they can monitor their own game-playing.

* It’s never too late to establish rules. A few weeks ago, Ms. Counts decided to do a “detox” with her son. In a written agreement, he said he’d try to whittle down game-playing to an hour a day and that the two would do more activities together, including reading books. While they haven’t reached their goal yet, she said it’s helped by giving her son something else to look forward to.

* If a serious problem develops, seek treatment for a possible underlying condition. Chris Ferguson, the Stetson University professor, said concerned parents should seek treatment from professionals who specialize in kids and teens.

“I would not take kids to a place that specializes in gaming addiction. A lot of them are capitalizing on this moral panic and they don’t have empirically valid treatments. They may treat the gaming addiction and give you your depressed kid back,” he said.

The “Real” Harm of Screen Time?
By Mike Brooks Ph.D.

When I see my kids in the summer preferring to stay indoors and play games on the iPad or Xbox to going to the pool or hiking in the woods, it bothers me. I can’t blame them though. I know the truth is that if I’d had their screen options when I was a kid/teen, I would have spent a lot more time with screens and less time outdoors and hanging out with my friends. There are only so many hours in a day. All the time we are spending on screens must mean that other activities that we engaged in prior to the smartphone are being displaced.

Perhaps therein lies one of the “real” harms of screen time: the harm to our values as parents. We value:

  1. Creative play with toys for younger kids
  2. Outdoor fun & games
  3. In-person social interactions
  4. Reading books and novels for fun (crazy idea, right?)
  5. Family time uninterrupted by screens
  6. Focused work time uninterrupted by texting, social media, and push notifications
  7. Being bored and making up engaging things to do
  8. Car trips in which people talk to one another

Now, we might argue that screens infringing on the above do cause measurable harm in terms of well-being and relationship satisfaction. Let’s forget all that for a moment though. When I’m being honest with myself, I can say that regardless of how much quantifiable harm screens are causing, I believe that there are beneficial things being lost or missed when kids are on the screen too much. I feel like my values, and the values that I’m trying to instill in my kids, are being jeopardized.

I daresay that we would get widespread agreement among parents if we were to create some type of values survey asking parents to rate statements such as:

  1. I believe it is important for my child to leisure read.
  2. I believe it is important for my child to play outdoors.
  3. I believe it is important for my child to spend time with friends in person on a regular basis.

‘There are wolves in the forest…’
By Professor Andrew Przybylski

At best, screen time is an umbrella term, and not a particularly useful one. It’s a fuzzy term encompassing rich worlds of social interaction, argument, content consumption, and production. Finishing your online shop is miles away from swiping through Tinder, but they, and thousands of other uses, fall under this heading. An hour of your son or daughter playing Guitar Hero on Xbox counts the same as an hour practicing the guitar by following lessons on YouTube.

Still, this shorthand does grab your attention. You can remember instances of screen time that bothered you, perhaps an inattentive spouse or a child absorbed in a screen. But this isn’t a good representation of what’s going on when you’re using tech. A single term is an oversimplification of this kind of digital life. It’s as if you wanted to understand your nutrition in terms of food time.

None of this is to say that digital technology and the digital world don’t impact us. It’s just that we’re asking the wrong questions and barking up the wrong tree when we pay attention to screen time. It’s a poor stand-in for a rich digital world that has a lot of ups and downs. There is a cottage industry of fear merchants. They sell us on claims that screen time changes the brain, destroys generations, or is more addictive than narcotics. One might commend their entrepreneurial spirit, but lending them credence runs the very real risk of distracting us. There are wolves in the forest, but if we listen to these screen time claims, we risk missing the real challenges and opportunities of the digital age.

REDEF ORIGINAL: Fortnite Is the Future, but Probably Not for the Reasons You Think
By Matthew Ball

The term “Metaverse” stems from Neal Stephenson’s 1992 novel Snow Crash, and describes a collective virtual shared space created by the convergence of virtually enhanced physical reality and persistent virtual space. In its fullest form, the Metaverse experience would span most, if not all, virtual worlds, be foundational to real-world AR experiences and interactions, and serve as an equivalent “digital” reality in which all “physical” humans would simultaneously co-exist. It is an evolution of the Internet. More commonly, the Metaverse is understood to resemble the world described in Ernest Cline’s Ready Player One (brought to film by Steven Spielberg in 2018).

Of course, early versions of the Metaverse will be far simpler – but the foundational elements will go well beyond “gaming”. Specifically, we’d see in-game economies (e.g. trading, bartering and buying items) become more of an industry in which humans would literally “work”.

“If you look at why people are paid to do things, it’s because they’re creating a good or delivering a service that’s valuable to somebody,” Sweeney told Venturebeat in 2017. “There’s just as much potential for that in these virtual environments as there is in the real world. If, by playing a game or doing something in a virtual world, you’re making someone else’s life better, then you can be paid for that.”

To this end, a crucial difference between a vibrant game, Fortnite included, and the Metaverse is that the latter “should not simply be a means for the developer to suck money out of the users. It should be a bi-directional thing where users participate. Some pay, some sell, some buy, and there’s a real economy … in which everybody can be rewarded for participating in many different ways”, to further quote Sweeney. (A semblance of this has existed for more than twenty years in so-called “gold farming”, where players, often employed by a larger company and typically in lower-income countries, spend a workday collecting digital resources for sale inside or outside of a game.)

To Sweeney, the Metaverse represents the “next version” of the Internet – a matter of when, not if. Furthermore, he believes the core technology will soon be available: “The big thing we are lacking now,” he said, “are the ‘deep inputs’ that come from inward- and outward-looking cameras that can capture our facial expressions as well as the environment around us… That technology has already been proven to work at a high-end commercial level costing tens of thousands of dollars. It’s probably as little as three years away.”

The impending possibility (and broader inevitability) of the Metaverse is separate from whether Epic can, should or will pursue it. But it’s clear that Sweeney wants to build an open Metaverse before someone else builds a closed one. Many are trying.

Sweeney speaks of the Metaverse in terms of its capabilities to connect humans in new ways. Mark Zuckerberg has often said the same, which was why he acquired Oculus: “Strategically we want to start building the next major computing platform that will come after mobile. There are not many things that are candidates to be the next major computing platform… [Oculus is a] long-term bet on the future of computing…. Immersive virtual and augmented reality will become a part of people’s everyday life.”

Zuckerberg, of course, wants that platform to be controlled by Facebook, noting, “history suggests there will be more platforms to come, and whoever builds and defines these [will shape the future and reap the benefits].”

This is what Sweeney fears, and what motivates him to have Epic lead as quickly as possible. “As we build up these platforms toward the Metaverse, if these platforms are locked down and controlled by these proprietary companies, they are going to have far more power over our lives, our private data, and our private interactions with other people than any platform in previous history,” Sweeney said in May 2017, two months before declaring: “The amount of power possessed by Google and Facebook. President Eisenhower said it about the military-industrial complex. They pose a grave threat to our democracy.” (Sweeney has also said that as “founder and controlling shareholder of Epic”, he “would never allow” Epic to “share user data…with any other company. We [won’t] share it, sell it, or broker access to it for advertising like so many other companies do.”)

To be successful, any social network needs to start from a place of value or utility to its users – rather than the goal of being a social network. Similarly, Fortnite’s great advantage isn’t that it was built to be the Metaverse, but that it’s already a massive social square that’s gradually taking on the qualities of one.

The proof of Fortnite’s unique potential was demonstrated live on February 1, 2019. At 2pm Eastern, the DJ Marshmello (who ranks #10 in DJ Magazine’s Top 100) held a live concert exclusively inside Fortnite. The event, which was synced live to the real Marshmello, was attended by 11 million players in the game – with millions more watching live via Twitch and YouTube – many of whom used their characters’ user-specific dance moves to join in. The event was stunning. And it showcased the potential of the Metaverse (including payment for performances, music rights, etc.), wherein a user can have potentially unlimited experiences inside a single medium.

Let’s Play War
By Jonathon Keats

The number of people who participate in virtual worlds and MMOs is staggering. At its peak, Second Life hosted 800,000 inhabitants—nearly the number of people living in San Francisco—and World of Warcraft reached a peak population of 12 million. Another massively popular genre—one more pertinent to promoting peace—is the God game. (Wright’s titles alone have sold 180 million copies.) …

But God games have never fit the massive multiplayer format, since the premise of a God game is omnipotence, which logically cannot be shared. Electronic Arts, the publisher of SimCity, tried to split the difference with an online multiplayer re-release in 2013. (Cities remained autonomous, but could trade and collaborate on “great works.”) The awkward combination of antithetical genres quite naturally provoked a backlash. SimCity cannot become what it was never meant to be. What’s needed instead are games designed from the start to allow a massive multiplicity of players to interact in open-ended possibility spaces.

Crucially, these virtual worlds would not be neutral backdrops in the vein of Second Life. Like SimCity and war games, they’d be logically rigorous and internally consistent. There’d be causality and consequences, and there’d be tension, drawn out by constraints such as limited resources and time pressure. Also like SimCity and war games, these virtual worlds would be simplified, model worlds with deliberate and explicit compromises tailored to the topics being gamed. There could be many permutations, so that none inadvertently becomes authoritative. The only real guideline for setting variables would be to adjust them to breed what Wright has described as “life at the edge of chaos.”

Within these worlds, scenarios could be played out by the massive multiplicity of globally networked gamers. Players wouldn’t need to be designated red or blue, but could simply be themselves, self-organizing into larger factions as happens in many MMOs. Scenarios could be crises and opportunities. Imagine a global financial meltdown that destroys the value of all government-issued currencies, provoking the United Nations to issue a “globo” as an emergency unit of exchange. Would the globo be adopted, or would private currencies quash it? And what would be the consequences as the economy got rebuilt? A single universal currency might be a stabilizing force, binding the economic interests of people and nations, or it could be destabilizing on account of its scale and complexity. It could promote peace or provoke war. Games allowing players to collaborate and compete their way out of crisis would serve as crowd-sourced simulations, each different, none decisive, all informative.

As the number of players increased through the evolution of world gaming, the outcomes of these games would inform an increasingly large proportion of the planet. At a certain stage, if the numbers became great enough, gameplay would verge on reality—and even merge into reality—because players would collectively accumulate sufficient anticipatory experience to play their part in the real world more wisely. Whole aspects of game-generated infrastructure—such as in-game non-governmental organizations and businesses—could be readily exported since the essential relationships would have already been built. Games would also serve as richly informative polls, revealing public opinion to politicians.

To save democracy, we must disrupt it
By Carl Miller

The first stage was to lay out the basic facts about Uber, which were put onto a Wikipedia timeline, and independently validated. The next stage, the most difficult, was to bring people together from all sides to share their feelings. To do this vTaiwan used a platform called pol.is. Taipei taxi drivers, representatives of Uber, members of the government, business leaders, trade unions and taxi users were all asked to log on. People were asked to draft statements beginning with “My feeling is . . .” and everyone else was asked to abstain, agree, or disagree with them.

As they did so, each person’s little avatar bounced around the map, staying close to the people they kept agreeing with, and moving away from others when disagreements emerged. The software created and analysed a matrix comprising what each person thought about every comment. “The aim,” Colin, one of the inventors of pol.is, told me, “was to give the agenda-setting power to the people. In voting, the cake is baked. The goal is to engage citizens far earlier, when everyone is arguing over the ingredients.”

Over the first few days, pol.is kept visualising how opinions emerged, clustered, responded, divided and recombined. Eventually two groups emerged. Group One clustered around a statement in support of banning Uber. Group Two clustered around a statement expressing a preference for using Uber.

This, of course, is the opposite of a consensus – it is polarisation. And if we were talking about Twitter or Facebook, we’d see echo chambers, spats, competing online petitions and massively contradictory information flowing to politicians. But pol.is produced something more useful than just feedback: “We found that it became a consensus-generating mechanism,” said Colin. People were asked to continue to draft statements, but the ones that were given visibility were those that garnered support from both sides.
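To make that mechanism concrete, here is a minimal, hypothetical sketch of the general technique the passage describes, not pol.is’s actual code: build an agree/disagree matrix, project participants onto a two-dimensional opinion map, split them into clusters, and give visibility to the statements with the strongest support across both clusters. The data, the library choices (numpy and scikit-learn) and the scoring rule are illustrative assumptions.

```python
# Illustrative sketch only; this is not pol.is's implementation. It mimics the
# idea of an agree/disagree matrix, an "opinion map" of participants, and
# surfacing statements that earn support in every cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# votes[i, j]: participant i's reaction to statement j
# (+1 agree, -1 disagree, 0 abstain). Invented data for the sketch.
n_people, n_statements = 200, 30
votes = rng.choice([-1, 0, 1], size=(n_people, n_statements))

# Project participants into two dimensions (the bouncing-avatar map),
# then split them into two opinion groups.
coords = PCA(n_components=2).fit_transform(votes.astype(float))
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

def agreement_rate(group_id: int, statement: int) -> float:
    """Share of a group's members who agreed with a statement."""
    member_votes = votes[groups == group_id, statement]
    return float((member_votes == 1).mean())

# Score each statement by its weakest agreement rate across the two clusters,
# so only statements supported in *both* groups rise to the top.
consensus_scores = [
    min(agreement_rate(0, j), agreement_rate(1, j)) for j in range(n_statements)
]
best = int(np.argmax(consensus_scores))
print(f"Most bridging statement: #{best} (min agreement {consensus_scores[best]:.2f})")
```

On real votes rather than random ones, a bridging statement like the 95%-agreement one quoted below is exactly the kind of item this sort of scoring rewards.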

The process itself encouraged people to start posting more nuanced statements, and by the fourth week a consensus statement had emerged: “The government should leverage this opportunity to challenge the taxi industry to improve their management and quality control systems so that drivers and riders would enjoy the same quality service as Uber” (95% of participants agreed).

On 23 May 2016, the Taiwanese government pledged to ratify all the pol.is consensus items: taxis no longer needed to be painted yellow, app-based taxis were free to operate as long as they didn’t undercut existing meters, and so on. vTaiwan had succeeded in putting the people at the heart of decision-making.

Just two months later, Taiwan’s new premier declared that “all substantial national issues should go through a vTaiwan-like process”. It was used to break a six-year deadlock over the sale of alcohol online, and has now been applied to problems as diverse as cyber-bullying, telemedicine, tax and information security. In all, 19 topics have gone through the process, largely relating to online and digital regulation, and 16 have resulted in decisive Government action.

Inside Taiwan’s new digital democracy
By Audrey Tang

People often ask me about the future of democracy. To me, democracy’s future is based on a culture of listening. Taiwan has no “legacy systems” of representative democracy (to use the language of technology) and the internet is highly developed. This means we can experiment with new modes of democracy. As President Tsai Ing-wen said at her inauguration three years ago: “Before, democracy was a showdown between two opposing values. Now, democracy is a conversation between many diverse values.”

At a time when the world is rethinking basic elements of governance, Taiwan’s digital democracy—in which the people take the initiative, and the government responds in the here and now—can serve as a demonstration of new forms of citizen and state co-operation and dialogue for the 21st century.

‘The New Childhood’ and How Games, Social Media Are Good for Kids
By Asi Burak

Some of the thoughts and concepts you present feel almost like a framework that is meant to “wake up” decision-makers and thought leaders — educators, government, nonprofits, and other people with influence.

What I find interesting is that, on the one hand, everyone — parents, educators, policy makers, etc. — seems worried about how to prepare kids for the so-called fourth industrial revolution. They’re asking: How do we make sure that people can still live a meaningful, productive life after A.I., automation, the internet of things, and bioengineering completely transform our society, our economy, our culture? And then, on the other hand, folks are also panicked about the way kids are playing video games or using social media all the time.

Well, it seems to me that there’s a fundamental paradox here. See, digital play is the best possible way to prepare kids for what’s coming. How do I know that? Because play has always been the best way to prepare kids for the future. The research is clear about this; the science is conclusive. Through play, kids learn key social and emotional skills. It’s how they cultivate self-regulation and executive function skills. And so much more.

But a lot of people seem to make the mistake of thinking that play is a neutral thing, that there is such a thing as “pure” play. That’s not true. You can’t separate play from the context, or the zeitgeist of a particular era. That’s one of the ideas I explain in the book: so many of the things we think of as the sacred components of the childhood experience—the sandbox, the family dinner, the Teddy bear—were actually developed during the industrial era. Why? To prepare kids with the skills for the economic and technological realities of the 20th century.

So, it’s not just that kids need to play. It’s also that they need to play with toys and games that fit the contexts in which they live. Today’s kids live in a connected world. So, they need connected play. They need to participate in activities that prepare them to navigate a networked world with ease. And video games are already doing just that. If parents, teachers, and caretakers get involved—if they start playing video games with their kids—well, then, I’m certain that everything will be alright.
