MISINFORMATION

INFODEMIC: Misinformation | Disinformation | Grey Information

“The Coronavirus disease (COVID-19) is the first pandemic in history in which technology and social media are being used on a massive scale to keep people safe, informed, productive and connected. At the same time, the technology we rely on to keep connected and informed is enabling and amplifying an infodemic that continues to undermine the global response and jeopardizes measures to control the pandemic.” –World Health Organization, September 2020

In response to the WHO's warning, we are collecting information and research not only to better understand disinformation from an ethnographic and behavioural standpoint, but also to begin an extensive cross-disciplinary collaboration with experts on measures to counteract it.

ARTICLES

Misinformation about risks, prevention, and treatment of COVID-19 has cost lives. Misinformation comes from many sources, with many motives for spreading and believing it. In caring capably and compassionately for patients, a substantial majority of health professionals and health care organizations have vigorously defended the standards of medical science and public health practice. However, a vocal minority and their sponsors or allies have exploited their medical credentials to the detriment of the public. They have understated known risks of severe illness, challenged the safety and effectiveness of vaccines without evidence, touted unproved and risky treatments, and amplified conspiracy theories about science and scientists. These activities have compounded the ethical stress and moral injury the health care workforce has experienced during repeated pandemic surges.1

Passing laws to stop physicians and others from sharing unsound medical advice has proved challenging. In the US, the government (including professional licensing boards) may not infringe the right to free speech guaranteed by the First Amendment of the US Constitution. Free speech rights can have counter-intuitive effects on government regulation. The public might expect that prohibiting something would be more difficult than regulating how it is described. However, a law banning tobacco is only a political challenge, whereas a law banning the advertising of tobacco would violate the First Amendment. Advertising tobacco to children can be prohibited only because children are not lawful users of tobacco products.

READ: FULL RESEARCH PAPER

A century after the world’s worst flu epidemic, rapid spread of misinformation is undermining trust in vaccines crucial to public health, warns Heidi Larson.

A hundred years ago this month, the death rate from the 1918 influenza was at its peak. An estimated 500 million people were infected over the course of the pandemic; between 50 million and 100 million died, around 3% of the global population at the time.

A century on, advances in vaccines have made massive outbreaks of flu — and measles, rubella, diphtheria and polio — rare. But people still discount their risks of disease. Few realize that flu and its complications caused an estimated 80,000 deaths in the United States alone this past winter, mainly in the elderly and infirm. Of the 183 children whose deaths were confirmed as flu-related, 80% had not been vaccinated that season, according to the US Centers for Disease Control and Prevention.

I predict that the next major outbreak — whether of a highly fatal strain of influenza or something else — will not be due to a lack of preventive technologies. Instead, emotional contagion, digitally enabled, could erode trust in vaccines so much as to render them moot. The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat. 

READ: The biggest pandemic risk? Viral misinformation 

Many psychiatrists conceptualize mental illnesses, including psychotic disorders, across a continuum where their borders can be ambiguous. The same can be said of individual symptoms such as delusions, where the line separating clear-cut pathology from nonpathological or subclinical “delusion-like beliefs” is often blurred. However, the categorical distinction between mental illness and normality is fundamental to diagnostic reliability and crucial to clinical decisions about whether and how to intervene.

Conspiracy theory beliefs are delusion-like beliefs that are commonly encountered within today’s political landscape. Surveys have consistently revealed that approximately one-half of the population believes in at least 1 conspiracy theory, highlighting the normality of such beliefs despite their potential outlandishness.

Here are 3 questions you can ask to help differentiate conspiracy theory beliefs from delusions.

READ: FULL ARTICLE

Fact or fiction: it’s a distinction we learn as kids. But it turns out judging facts isn’t nearly as black-and-white as your third-grade teacher might have had you believe.

In reality, we rely on a biased set of cognitive processes to arrive at a given conclusion or belief. This natural tendency to cherry-pick and twist the facts to fit with our existing beliefs is known as motivated reasoning—and we all do it.

READ: Why we believe alternative facts 

Although conspiracy theories are endorsed by about half the population and occasionally turn out to be true, they are more typically false beliefs that, by definition, have a paranoid theme. Consequently, psychological research to date has focused on determining whether there are traits that account for belief in conspiracy theories (BCT) within a deficit model.

Alternatively, a two-component, socio-epistemic model of BCT is proposed that seeks to account for the ubiquity of conspiracy theories, their variance along a continuum, and the inconsistency of research findings likening them to psychopathology.

Within this model, epistemic mistrust is the core component underlying conspiracist ideation that manifests as the rejection of authoritative information, focuses the specificity of conspiracy theory beliefs, and can sometimes be understood as a sociocultural response to breaches of trust, inequities of power, and existing racial prejudices.

  • Once voices of authority are negated due to mistrust, the resulting epistemic vacuum can send individuals “down the rabbit hole” looking for answers where they are vulnerable to the biased processing of information and misinformation within an increasingly “post-truth” world.

The two-component, socio-epistemic model of BCT argues for mitigation strategies that address both mistrust and misinformation processing, with interventions for individuals, institutions of authority, and society as a whole.

READ: FULL ARTICLE

The pandemic has thrust many mainstream journalists into unfamiliar ground, including coverage of expert opinion that is not backed up by peer-reviewed content, reporting on preprints, and assessing high-complexity instant-response science. How did they manage? We asked five journalists from mainstream media about their experience.

 

  • Apoorva Mandavilli, reporter on science and global health for The New York Times, USA. 
  • Chloé Hecketsweiler covers health, pharmacy and biotechnology for Le Monde, France. 
  • Rema Nagarajan is a journalist writing about public health for the Times of India, India. 
  • Sabine Righetti writes about science and innovation for Folha de S. Paulo, Brazil. 
  • Tamar Kahn is a science and health journalist with Business Day, South Africa.

READ: What do journalists say about covering science during the COVID-19 pandemic?

Private firms, straddling traditional marketing and the shadow world of geopolitical influence operations, are selling services once conducted principally by intelligence agencies.

They sow discord, meddle in elections, seed false narratives and push viral conspiracies, mostly on social media. And they offer clients something precious: deniability.

“Disinfo-for-hire actors being employed by government or government-adjacent actors is growing and serious,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, calling it “a boom industry.”

READ: FULL ARTICLE

Bogus remedies, myths and fake news about COVID-19 can cost lives.

Here’s how some scientists are fighting back. Eating sea lettuce or injecting disinfectant won’t prevent you from getting COVID-19. Holding your breath for ten seconds is not a test for SARS-CoV-2. And Russian President Vladimir Putin did not release 500 lions in Moscow to persuade the city’s residents to stay indoors as part of the efforts to fight the pandemic. The rapid global spread of COVID-19 has been accompanied by what the World Health Organization has described as a “massive infodemic”.

Huge demand for information on the disease, its toll on health-care systems and lives and the many unanswered questions about a virus that was discovered only in December have created the perfect breeding ground for myths, fake news and conspiracy theories (see ‘Eight ways to spot misinformation’). Some can be dismissed as ludicrous and largely harmless, but others are life-threatening.

READ: Coronavirus misinformation, and how scientists can help to fight it

Anti-vaccination groups are projected to dominate social media in the next decade if left unchallenged. To counter their viral misinformation at a time when COVID-19 vaccines are being rolled out, our research team has produced a “psychological vaccine” that helps people detect and resist the lies and hoaxes they encounter online.

The World Health Organization (WHO) expressed concern about a global misinformation “infodemic” in February 2020, recognising that the COVID-19 pandemic would be fought both on the ground and on social media. That’s because an effective vaccine rollout will rely on high vaccine confidence, and viral misinformation can adversely affect that confidence, leading to vaccine hesitancy.

The spread of false information about COVID-19 poses a serious risk to not only the success of vaccination campaigns but to public health in general. Our solution is to inoculate people against false information – and we’ve borrowed from the logic of real-life vaccines to inform our approach.

READ: COVID-19 misinformation: scientists create a ‘psychological vaccine’

 

It is a little-known fact that humans have not one but two immune systems. The first, the biophysical immune system – the one we’ve all heard much about – responds to infections as they enter the body, detecting and eliminating intruders such as the coronavirus.

The second is the behavioural immune system, which adapts our behaviour to preemptively avoid potentially infectious people, places and things. The behavioural immune system is the first line of defence against infectious disease. It prompts people to socially conform with known traditions and to avoid foreign, dissimilar and potentially infectious groups.

In a recently published study, my colleagues and I at the University of Cambridge examined the impact of the behavioural immune system on our attitudes towards obedience and authority. We found that high rates of infectious diseases – and the disease-avoidance they promote – may fundamentally shape political opinions and social institutions.

READ: COVID-19 could nudge minds

As part of a joint research project by the DFRLab and the Associated Press, this report examines the information environments of four countries – China, the United States, Russia and Iran – during the first six months of the COVID-19 outbreak and the false narratives that took hold there. The report focuses on how false narratives about the origins of the virus spread globally on social media and beyond, and the geopolitical consequences of those narratives. 


As became clear in the report’s analysis, these competing narratives resulted in an escalatory series of finger-pointing, with countries blaming one another as the source of the virus, including accusations it was developed in a lab and spread intentionally. While these narratives lacked any evidence, they took on a life of their own, becoming part of the global informational chaos surrounding the pandemic – what the World Health Organization has termed an infodemic.

READ: FULL REPORT

Misinformation on COVID-19 is so pervasive that even some patients dying from the disease still say it’s a hoax. In March 2020, nearly 30% of U.S. adults believed the Chinese government created the coronavirus as a bioweapon (Social Science & Medicine, Vol. 263, 2020) and in June, a quarter believed the outbreak was intentionally planned by people in power (Pew Research Center, 2020).

Such falsehoods, which research shows have influenced attitudes and behaviors around protective measures such as mask-wearing, are an ongoing hurdle as countries around the world struggle to get the virus under control.

Psychological studies of both misinformation (also called fake news), which refers to any claims or depictions that are inaccurate, and disinformation, a subset of misinformation intended to mislead, are helping expose the harmful impact of fake news—and offering potential remedies. But psychologists who study fake news warn that it’s an uphill battle, one that will ultimately require a global cooperative effort among researchers, governments, and social media platforms. 

READ: Controlling the spread of misinformation

From outlandish suggestions that people can beat COVID-19 by drinking bleach to conspiracy theories that vaccines can alter a person’s DNA, the COVID-19 pandemic has made clear the challenges medical misinformation poses in the digital age. In a recent interview, Kasisomayajula “Vish” Viswanath, Lee Kum Kee Professor of Health Communication in the Department of Social and Behavioral Sciences at Harvard T. H. Chan School of Public Health, discussed the threat of misinformation and disinformation and how we can fight it.

READ: Fighting the spread of COVID-19 misinformation 

The difference between a disinformation attack and a cyberattack is the target. Cyberattacks are aimed at computer infrastructure, while disinformation exploits our inherent cognitive biases and logical fallacies. In traditional cybersecurity attacks, the tools are malware, viruses, trojans, botnets, and social engineering. Disinformation attacks use manipulated, mis-contextualized, misappropriated information, deepfakes, cheapfakes, and so on.

There is a lot of similarity in the actions, strategies, tactics, and harms of cybersecurity and disinformation attacks. In fact, we see nefarious actors using both types of attack for disruption, and combining information operations with cyberattacks creates even more havoc. They may lead with a disinformation campaign as reconnaissance for a cyberattack, or use data exfiltrated in a cyberattack to launch a targeted disinformation campaign.

READ: Disinformation Is a Cybersecurity Threat

In 2019, and again in 2020, Facebook removed covert social media influence operations that targeted Libya and were linked to the Russian businessman Yevgeny Prigozhin. The campaigns—the first exposed in October 2019, the second in December 2020—shared several tactics: Both created Facebook pages masquerading as independent media outlets and posted political cartoons. But by December 2020, the operatives linked to Prigozhin had updated their toolkit: This time, one media outlet involved in the operation had an on-the-ground presence, with branded merchandise and a daily podcast.

Between 2018 and 2020, Facebook and Twitter announced that they had taken down 147 influence operations in total, according to our examination of their public announcements of disinformation takedowns during that time period. Facebook describes such operations as “coordinated inauthentic behavior,” and Twitter dubs them “state-backed information operations.” Our investigation of these takedowns revealed that in 2020 disinformation actors stuck with some tried and true strategies, but also evolved in important ways, often in response to social media platform detection strategies. Political actors are increasingly outsourcing their disinformation work to third-party PR and marketing firms and using AI-generated profile pictures. Platforms have changed too, more frequently attributing takedowns to specific actors.

Here are our five takeaways on how online disinformation campaigns and platform responses changed in 2020, and how they didn’t.

READ: How disinformation evolved in 2020  

This is the age of partisanship. As our beliefs become increasingly polarised and digital echo chambers begin to dictate our realities, many of us are finding ourselves inadvertent partisans. In this time of filter bubbles, we have been taught to rely on the left-right political distinction as an essential tool for measuring who is likely to think like us and with whom we should bond.

But partisanship isn’t just a matter of direction – that is, whether one’s beliefs and identity lean politically left or right. Partisanship also has a second, often overlooked, dimension captured by the intensity or extremity of one’s beliefs and identity.

READ: The partisan brain: cognitive study

The UK has approved giving one dose of the Pfizer/BioNTech COVID-19 vaccine to all children aged 12-15, with vaccines largely being given within the education system. Schools are helping coordinate the rollout, including the consent process. Under-16s need parental consent to have the vaccine.

Unfortunately, schools, parents and teenagers have also become the victims of anti-vaccination misinformation campaigns. For example, a fake vaccine consent form was sent to many UK schools in late September 2021. It reportedly arrived in an email disguised as being from the NHS, and a few schools believed it to be genuine and sent it to parents and guardians.

The form contained a lot of misinformation and was evidently designed to dissuade parents from giving their consent by depicting vaccines as less safe and effective than they are. If you’re a parent whose child is being offered a COVID-19 vaccine, here’s what you should know.

READ: New wave of misinformation  

Nearly every type of media — newspapers, social media, websites, apps, online stores and television — shares some blame for the proliferation of misinformation influencing vaccine hesitancy in the U.S.

Why it matters: Several recent studies and reports suggest that the COVID-19 infodemic has less to do with the failure of one medium than the lack of societal trust in key institutions that are struggling to deliver a clear and consistent message.

READ: Vaccine misinformation spreads to every kind of media 

Misinformation has caused confusion and led people to decline Covid-19 vaccines, reject public health measures such as masking and physical distancing, and use unproven treatments, according to a new advisory from the U.S. Surgeon General. Misinformation can also be spread intentionally to serve a malicious purpose, such as tricking people for financial gain or political advantage. Called “disinformation,” these organized campaigns can cause real harm in communities by sowing distrust in the medical system.

The transformative and ground-breaking work of doctors, nurses, and scientists working to stop the outbreak has been stymied because of mis- and disinformation that minimize the threat of the disease, manufacture fears about the vaccine, and polarize simple interventions that keep people safe.  These efforts are both widespread and cleverly focused to reach those most vulnerable to Covid-19.

An analysis of over 4.5 million social media posts found that false news stories were 70 percent more likely to be shared than true stories. This failure is causing significant harm and undermining our Covid-19 response and recovery efforts.

READ: Misinformation Is the Biggest Threat to Ending This Pandemic

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills.

READ: 5 ways to spot misinformation and stop sharing it online

The confluence of misinformation and infectious disease isn’t unique to COVID-19. Misinformation contributed to the spread of the Ebola epidemic in West Africa, and it plagues efforts to educate the public on the importance of vaccinating against measles. But when it comes to COVID-19, the pandemic has come to be defined by a tsunami of persistent misinformation to the public on everything from the utility of masks and the efficacy of school closures, to the wisdom behind social distancing, and even the promise of untested remedies. According to a study published by the National Bureau of Economic Research, areas of the country exposed to television programming that downplayed the severity of the pandemic saw greater numbers of cases and deaths—because people didn’t follow public health precautions.

READ: COVID Misinformation Is Killing People 

The fake news items recorded up to 30 June 2020 on two websites (the Globo Corporation website G1 and the Ministry of Health website) were collected and categorized according to their content. From each item, the following information was extracted: publication date, title, channel (e.g., WhatsApp), format (text, photo, video), and the website where it was recorded. Terms were selected from the fake-news titles and analyzed in Google Trends to determine whether the number of searches using those terms increased after the fake news appeared. The Brazilian regions with the highest percent increase in searches using the terms were also identified.
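
As a rough illustration of the Google Trends step, the sketch below uses the open-source pytrends client to compare average search interest for a term before and after a fake-news item appeared. The term, dates and geography are hypothetical placeholders, not values from the study.

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="pt-BR", tz=180)

term = "alho cura covid"          # hypothetical term taken from a fake-news title
pytrends.build_payload([term], timeframe="2020-01-01 2020-06-30", geo="BR")
interest = pytrends.interest_over_time()   # 0-100 relative search interest per week

appeared = "2020-03-15"           # hypothetical publication date of the item
before = interest.loc[:appeared, term].mean()
after = interest.loc[appeared:, term].mean()

if before > 0:
    print(f"Search interest changed by {100 * (after - before) / before:+.1f}%")
else:
    print("No baseline searches before publication")
```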

From the two websites, 329 fake news items about COVID-19 were retrieved. Most were spread through WhatsApp and Facebook. The most frequent thematic categories were politics (20.1%); epidemiology and statistics, such as proportions of cases and deaths (19.5%); and prevention (16.1%). According to Google Trends, the number of searches using the terms retrieved from the fake news increased 34.3% during the period studied. The largest increases were recorded in the Southeast (45.1%) and the Northeast (27.8%).

READ: Analysis of fake news disseminated during the COVID-19 pandemic in Brazil

Widespread acceptance of a vaccine for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) will be the next major step in fighting the coronavirus disease 2019 (COVID-19) pandemic, but achieving high uptake will be a challenge and may be impeded by online misinformation. To inform successful vaccination campaigns, we conducted a randomized controlled trial in the UK and the USA to quantify how exposure to online misinformation around COVID-19 vaccines affects intent to vaccinate to protect oneself or others. Here we show that in both countries—as of September 2020—fewer people would ‘definitely’ take a vaccine than is likely required for herd immunity, and that, relative to factual information, recent misinformation induced a decline in intent of 6.2 percentage points (95th percentile interval 3.9 to 8.5) in the UK and 6.4 percentage points (95th percentile interval 4.0 to 8.8) in the USA among those who stated that they would definitely accept a vaccine. We also find that some sociodemographic groups are differentially impacted by exposure to misinformation.

Finally, we show that scientific-sounding misinformation is more strongly associated with declines in vaccination intent.
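
A minimal sketch of how a decline in intent and a percentile interval of this kind can be estimated by bootstrapping, assuming invented response rates and sample sizes rather than the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical arm-level outcomes: 1 = "definitely will vaccinate".
control = rng.binomial(1, 0.54, size=3000)   # shown factual information
treated = rng.binomial(1, 0.48, size=3000)   # shown recent misinformation

def drop_pp(a, b):
    """Decline in intent, in percentage points."""
    return 100 * (a.mean() - b.mean())

boot = np.array([
    drop_pp(rng.choice(control, control.size),   # resample with replacement
            rng.choice(treated, treated.size))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"decline: {drop_pp(control, treated):.1f} pp "
      f"(95% percentile interval {lo:.1f} to {hi:.1f})")
```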

READ: Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA

Political polarisation in the USA is impeding vaccination of the population against SARS-CoV-2. Today, the lowest COVID-19 vaccination rates in the USA are overwhelmingly in Republican-leaning states and counties.1 At a time when the delta variant is spreading, these are also the areas experiencing surges in admissions to hospital and intensive care.1 If political divides on COVID-19 vaccination become ingrained, the consequences could include greater resistance to all vaccination and outbreaks of other vaccine-preventable diseases. Understanding and countering this trend are urgent public health priorities. 

READ: Uncoupling vaccination from politics: a call to action 

The COVID-19 crisis has contributed to the development of an ‘infodemic’ that hinders an adequate public health approach to managing the crisis. The WHO defines an infodemic as an ‘overabundance of information—some accurate and some not—that occurs during an epidemic’. Fake news, propaganda and conspiracy theories, ubiquitous in the era of social media, have spread since the beginning of the COVID-19 pandemic. Reports suggest that a rumour is three times more likely to be spread in social media than accurate information. This is of great concern as it often leads to reduced trust in health institutions and services and impedes the evidence-informed approach in managing the pandemic. The infodemic may also promote hate speech and associated stigmatisation which may contribute to exclusions of vulnerable sections of society.

  • The COVID-19 crisis has contributed to the development of an ‘infodemic’ that hinders an adequate public health approach to managing the crisis.
  • The adverse effects of the infodemic may be exacerbated in low and middle-income countries (LMICs) where low health literacy levels, poor health infrastructure and poor resource settings exist.
  • In order to manage the COVID-19 infodemic in LMICs, a three-level approach is suggested in the context of countering fake news.
  • Public health communicators may benefit from using unique methods such as story-telling to understand the underlying factors that promote infodemics as well as explore practical tools that can counter the tide of misinformation.
  • Laws in LMICs to address privacy concerns in the use of data and personal health information are weak and concerted efforts need to be made to strengthen them.

READ: Combating the COVID-19 infodemic

Bringing increased attention to vaccination now is especially important as the world continues to combat COVID-19, a novel infectious disease that is easily transmissible and has an uncertain disease course, disproportionately affects elderly and ethnic/racial minority populations, provides fertile ground for misinformation, and has become politicized. As an uncertain population anticipates the development of a COVID-19 vaccine to help return society to some semblance of normalcy, priming the public for vaccine acceptance is an urgent public health priority. With daily lives interrupted and vaccine discussions dominating news headlines, government hearings, and social media discourse, this urgency should be used as a teachable moment to promote vaccine literacy, address hesitancy, and build resilience to misinformation specific to a COVID-19 vaccine and about vaccination more generally. These efforts require us to re-engage the public, community leaders, health care providers, public health practitioners, policymakers, and health agencies in addressing the challenges associated with bolstering vaccine-related knowledge, attitudes, and behaviors.

READ: Using a Global Pandemic as a Teachable Moment  

False statements — about Covid-19 and so much else — spread like a virus online. Scientists should study them like one.

The Internet is full of misinformation — that is, inaccurate statements — including the sinister, intentionally misleading subset known as disinformation. Both are spreading, a contagion that imperils society just as surely as the coronavirus itself. Those who spread it run the gamut of society; a new study by Cornell University researchers concludes that President Trump has been the leading source of Covid-19 misinformation reported by news media, who often repeat the information “without question or correction.”

READ: The pernicious contagion of misinformation   

Last February, Cadbury chocolate fell victim to a hoax: an image went viral in an Indonesian WhatsApp group called “Viral Media Johor”, and later in a Nigerian group.

Obviously, the post was fake news. The man in the image is Aminu Ogwuche, who was arrested on suspicion of involvement with the bombing of a Nigerian bus station back in 2014. He has never worked for Mondelez, the company that makes Cadbury chocolate, and their products are not infected with HIV. Indeed, it’s not even possible to contract HIV from eating food contaminated with HIV-positive blood.

But the problem that this story and others like it pose is real. Rumours, hoaxes and misinformation find fertile breeding ground on social media. But as Google, Facebook, Twitter and other social media platforms increasingly crack down on misinformation, the purveyors of false stories are seeking refuge on direct messaging apps such as WhatsApp.

READ: FULL ARTICLE 

The conspiracy theory video “Plandemic” recently went viral. Despite being taken down by YouTube and Facebook, it continues to get uploaded and viewed millions of times. The video is an interview with conspiracy theorist Judy Mikovits, a disgraced former virology researcher who believes the COVID-19 pandemic is based on vast deception, with the purpose of profiting from selling vaccinations.

The video is rife with misinformation and conspiracy theories. Many high-quality fact-checks and debunkings have been published by reputable outlets such as Science, Politifact and FactCheck.

As scholars who research how to counter science misinformation and conspiracy theories, we believe there is also value in exposing the rhetorical techniques used in “Plandemic.” As we outline in our Conspiracy Theory Handbook and How to Spot COVID-19 Conspiracy Theories, there are seven distinctive traits of conspiratorial thinking. “Plandemic” offers textbook examples of them all.

READ: FULL ARTICLE 

RESEARCH

We examined how age and exposure to different types of COVID-19 (mis)information affect misinformation beliefs, perceived credibility of the message and intention-to-share it on WhatsApp.

Through two mixed-design online experiments in the UK and Brazil (total N = 1454) we first randomly exposed adult WhatsApp users to full misinformation, partial misinformation, or full truth about the therapeutic powers of garlic to cure COVID-19.

We then exposed all participants to corrective information from the World Health Organisation debunking this claim. We found stronger misinformation beliefs among younger adults (18–54) in both the UK and Brazil and possible backfire effects of corrective information among older adults (55+) in the UK. Corrective information from the WHO was effective in enhancing perceived credibility and intention-to-share of accurate information across all groups in both countries.

Our findings call for evidence-based infodemic interventions by health agencies, with greater engagement of younger adults in pandemic misinformation management efforts.

READ: FULL STUDY

Key Findings

During the COVID-19 pandemic, both Russia and China engaged in news manipulation that served their political goals

  • Russian media advanced anti-U.S. conspiracy theories about the virus.
  • Chinese media advanced pro-China news that laundered their reputation in terms of COVID-19 response. Russian media also supported this laundering effort.
  • Russia and China directly threatened global health and well-being by using authoritarian power over the media to oppose public health measures.

An enduring collection capability would enable real-time detection and analysis of state-actor propaganda

  • Existing natural language processing methods can be combined to make sense of news reporting by nation, on a global scale (a minimal sketch follows this list).
  • A public system for monitoring global news that detects and describes global news themes by nation is plausible.
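
A toy sketch of the per-nation theme detection these bullets envision. The headlines and keyword lexicons below are invented; a real system would substitute large multilingual news feeds and trained NLP classifiers.

```python
from collections import Counter, defaultdict

# Invented mini-feed: (country, headline) pairs stand in for global news data.
FEED = [
    ("RU", "US lab behind coronavirus, experts claim"),
    ("CN", "China's swift response praised by WHO"),
    ("US", "New COVID-19 cases rise in several states"),
]

# Hand-built theme lexicons; a production system would use trained classifiers.
THEMES = {
    "origin-conspiracy": {"lab", "bioweapon", "behind"},
    "response-praise": {"praised", "swift", "success"},
}

theme_counts = defaultdict(Counter)
for country, headline in FEED:
    tokens = set(headline.lower().replace(",", "").split())
    for theme, lexicon in THEMES.items():
        if tokens & lexicon:                 # any lexicon word present
            theme_counts[country][theme] += 1

for country, counts in sorted(theme_counts.items()):
    print(country, dict(counts))
```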

READ: FULL REPORT

Highlights

  • We investigate the conditions under which disinformation gains social influence
  • Game theory shows that disinformation pays for advisers when they are ignored
  • Such a strategic adviser outcompeted an honest one in swaying human participants
  • Individuals, communicating dyads, and majority groups followed a strategic adviser

Competition for social influence is a major force shaping societies, from baboons guiding their troop in different directions, to politicians competing for voters, to influencers competing for attention on social media.

Social influence is invariably a competitive exercise with multiple influencers competing for it. We study which strategy maximizes social influence under competition.

Applying game theory to a scenario where two advisers compete for the attention of a client, we find that the rational solution for advisers is to communicate truthfully when favored by the client, but to lie when ignored. Across seven pre-registered studies, testing 802 participants, such a strategic adviser consistently outcompeted an honest adviser.

Strategic dishonesty outperformed truth-telling in swaying individual voters, the majority vote in anonymously voting groups, and the consensus vote in communicating groups.

Our findings help explain the success of political movements that thrive on disinformation, and vocal underdog politicians with no credible program.
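
The dynamic the authors describe is easy to reproduce in a toy simulation (our own simplification, not the paper's code): the client follows whichever adviser last called a disagreement correctly, the honest adviser always reports the likely outcome, and the strategic adviser lies only while ignored.

```python
import random

random.seed(1)

def simulate(rounds=10_000, p_consensus=0.7):
    favored = "honest"                    # whom the client currently follows
    tally = {"honest": 0, "strategic": 0}
    for _ in range(rounds):
        consensus_happens = random.random() < p_consensus
        honest_call = True                # honest adviser reports the likely outcome
        # Strategic adviser: truthful while favored, contrarian while ignored.
        strategic_call = (favored == "strategic")
        tally[favored] += 1
        if strategic_call == consensus_happens and honest_call != consensus_happens:
            favored = "strategic"
        elif honest_call == consensus_happens and strategic_call != consensus_happens:
            favored = "honest"
    return tally

# The strategic adviser captures the client at the first upset and,
# by turning truthful once favored, never loses them again.
print(simulate())
```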

READ: FULL RESEARCH PAPER

Background

There is a deluge of information and misinformation about COVID-19. The present survey was conducted to explore the sources of information/misinformation for healthcare professionals from India.

Methods

A cross-sectional online survey using a snowballing technique was conducted from 24 March to 10 April 2020. The questionnaire was developed and pretested using standard techniques, then circulated among medical students and physicians. Data were analysed using STATA software.

Results

Data from 758 participants were analysed: 255 (33.6%) medical students, 335 (44.2%) nonspecialists and 168 (22.1%) specialists. The most common sources of formal and informal information were official government websites and online news, respectively. A total of 517 (68.2%) participants reported having received misinformation, with social media and family and friends the most common sources. Seventy-two percent of participants agreed that the spread of information helped to contain COVID-19, yet even more (75%) agreed that they had received inaccurate information. Seventy-four percent of respondents felt the need for regulation of information during such times; 26% and 33% felt that information about COVID-19 made them uncomfortable or distracted routine decision-making, respectively; and 50% found it difficult to differentiate correct from incorrect information about COVID-19.

Conclusion

The study explored the sources of information and misinformation and found a high prevalence of misinformation, especially from social media. We suggest the need to better manage the flow of information so that it can be an effective weapon against SARS-CoV-2. Doctors, too, need to adapt to the infodemics that now accompany pandemics.

READ: FULL STUDY

Background

The COVID-19 pandemic fueled one of the most rapid vaccine developments in history. However, misinformation spread through online social media often leads to negative vaccine sentiment and hesitancy.

Methods

To investigate COVID-19 vaccine-related discussion on social media, we conducted sentiment analysis and Latent Dirichlet Allocation (LDA) topic modelling on textual data collected from 13 Reddit communities focusing on the COVID-19 vaccine from Dec 1, 2020, to May 15, 2021. Data were aggregated and analyzed by month to detect changes in sentiment and latent topics.
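
A compressed sketch of this pipeline, using scikit-learn for LDA and TextBlob for sentiment polarity; the four comments are invented stand-ins for the scraped Reddit data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from textblob import TextBlob

# Invented stand-ins for comments scraped from vaccine-focused subreddits.
comments = [
    "got my second dose, arm is sore but otherwise fine",
    "worried about side effects after the first shot",
    "booked my vaccine appointment for next week",
    "fever and chills the day after vaccination",
]

# Polarity in [-1, 1]: >0 leans positive, <0 leans negative.
print([round(TextBlob(c).sentiment.polarity, 2) for c in comments])

# LDA over a bag-of-words matrix to surface latent discussion topics.
vec = CountVectorizer(stop_words="english")
doc_term = vec.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in weights.argsort()[-4:][::-1]])
```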

Results

Polarity analysis suggested that these communities expressed more positive than negative sentiment in vaccine-related discussions, and that this balance remained static over time. Topic modelling revealed that community members mainly focused on side effects rather than on outlandish conspiracy theories.

Conclusion

COVID-19 vaccine-related content from the 13 subreddits shows that the sentiment expressed in these communities is overall more positive than negative and has not meaningfully changed since December 2020. Keywords indicating vaccine hesitancy were detected throughout the LDA topic modelling. Public sentiment and topic-modelling analysis regarding vaccines could facilitate the implementation of appropriate messaging, digital interventions, and new policies to promote vaccine confidence.

READ: FULL STUDY 

The global spread of coronavirus disease 2019 (COVID-19) created a fertile ground for attempts to influence and destabilize different populations and countries. Both Russia and China appear to have employed information manipulation during the COVID-19 pandemic in service to their respective global agendas.

This report uses exploratory qualitative analysis to systematically describe the types of COVID-19-related malign and subversive information efforts with which Russia- and China-associated outlets appear to have targeted U.S. audiences from January 2020 to July 2020 and organizes them into a framework.

This work lays the foundation for a better understanding of how and whether Russia and China might act and coordinate in the domain of malign and subversive information efforts in the future. This report is the first in a series that will use big data, computational linguistics, and machine learning to test findings and hypotheses generated by the initial analysis.

READ: FULL STUDY

The coronavirus pandemic has seen a marked rise in medical disinformation across social media. A variety of claims have garnered considerable traction, including the assertion that COVID is a hoax or deliberately manufactured, that 5G frequency radiation causes coronavirus, and that the pandemic is a ruse by big pharmaceutical companies to profiteer off a vaccine.

An estimated 30% of some populations subscribe to some form of COVID medico-scientific conspiracy narrative, with detrimental impacts for themselves and others. Consequently, exposing the lack of veracity of these claims is of considerable importance.

Previous work has demonstrated that historical medical and scientific conspiracies are highly unlikely to be sustainable. In this article, an expanded model for a hypothetical en masse COVID conspiracy is derived.

Analysis suggests that even under ideal circumstances for conspirators, commonly encountered conspiratorial claims are highly unlikely to endure, and would quickly be exposed.
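
The flavour of such an analysis can be captured with a much simpler leak model than the paper's expanded one (our illustration, with a guessed per-person leak rate): if each of N conspirators independently exposes the plot with a small annual probability, the chance of the secret surviving collapses as N grows.

```python
def p_still_secret(n_conspirators: int, years: float, p_leak: float = 5e-5) -> float:
    """Chance that no one has exposed the plot, assuming independent
    conspirators and a constant per-person annual leak probability
    (p_leak is an illustrative guess, not the paper's fitted value)."""
    return (1 - p_leak) ** (n_conspirators * years)

# A "manufactured pandemic" claim would implicate health agencies and
# researchers worldwide: hundreds of thousands of people.
for n in (1_000, 100_000, 1_000_000):
    print(f"N={n:>9,}: P(still secret after 1 year) = {p_still_secret(n, 1):.3f}")
```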

This work also explores the spectrum of medico-scientific acceptance, motivations behind the propagation of falsehoods, and the urgent need for the medical and scientific community to anticipate and counter the emergence of falsehoods.

READ: FULL STUDY

Advanced persistent threat (APT) is widely acknowledged to be the most sophisticated and potent class of security threat.

APT refers to knowledgeable human attackers that are organized, highly sophisticated and motivated to achieve their objectives against a targeted organization(s) over a prolonged period.

Strategically-motivated APTs or S-APTs are distinct in that they draw their objectives from the broader strategic agenda of third parties such as criminal syndicates, nation-states, and rival corporations.

In this paper, we review the use of the term “advanced persistent threat,” and present a formal definition. We then draw on military science, the science of organized conflict, for a theoretical basis to develop a rigorous and holistic model of the stages of an APT operation which we subsequently use to explain how S-APTs execute their strategically motivated operations using tactics, techniques and procedures.

Finally, we present a general disinformation model, derived from situation awareness theory, and explain how disinformation can be used to attack the situation awareness and decision making of not only S-APT operators, but also the entities that back them.

READ: FULL PAPER 

 

The duration and impact of the COVID-19 pandemic depends largely on individual and societal actions which are influenced by the quality and salience of the information to which they are exposed. Unfortunately, COVID-19 misinformation has proliferated. Despite growing attempts to mitigate COVID-19 misinformation, there is still uncertainty regarding the best way to ameliorate the impact of COVID-19 misinformation. To address this gap, the current study uses a meta-analysis to evaluate the relative impact of interventions designed to mitigate COVID-19-related misinformation.

We searched multiple databases and gray literature from January 2020 to September 2021. The primary outcome was COVID-19 misinformation belief.

We examined study quality, and meta-analysis was used to pool data from studies with similar interventions and outcomes. Sixteen studies were analyzed in the meta-analysis, including data from 33,378 individuals. The mean effect size of interventions to mitigate COVID-19 misinformation was positive, but not statistically significant [d = 2.018, 95% CI (-0.14, 4.18), p = .065, k = 16].
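
For readers unfamiliar with the pooling step, here is a minimal DerSimonian-Laird random-effects sketch; the five effect sizes and variances are invented and do not reproduce the paper's numbers.

```python
import numpy as np

# Hypothetical per-study effect sizes (Cohen's d) and variances.
d = np.array([0.35, 0.10, 0.52, -0.05, 0.20])
v = np.array([0.02, 0.05, 0.04, 0.03, 0.06])

# Fixed-effect weights, then the DerSimonian-Laird tau^2 estimate.
w = 1 / v
q = np.sum(w * (d - np.sum(w * d) / w.sum()) ** 2)   # heterogeneity statistic Q
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(d) - 1)) / c)              # between-study variance

# Random-effects pooling with the inflated variances.
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * d) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled d = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```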

We found evidence of publication bias. Interventions were more effective in cases where participants were involved with the topic, and where text-only mitigation was used. The limited focus on non-U.S. studies and marginalized populations is concerning given the greater COVID-19 mortality burden on vulnerable communities globally.

The findings of this meta-analysis describe the current state of the literature and prescribe specific recommendations to better address the proliferation of COVID-19 misinformation, providing insights helpful to mitigating pandemic outcomes.

READ: FULL STUDY

Introduction

Infectious disease misinformation is widespread and poses challenges to disease control. There is limited evidence on how to effectively counter health misinformation in a community setting, particularly in low-income regions, and unsettled scientific debate about whether misinformation should be directly discussed and debunked, or implicitly countered by providing scientifically correct information.

Methods

The Contagious Misinformation Trial developed and tested interventions designed to counter highly prevalent infectious disease misinformation in Sierra Leone, namely the beliefs that (1) mosquitoes cause typhoid and (2) typhoid co-occurs with malaria. The information intervention for group A (n=246) explicitly discussed misinformation and explained why it was incorrect and then provided the scientifically correct information. The intervention for group B (n=245) only focused on providing correct information, without directly discussing related misinformation. Both interventions were delivered via audio dramas on WhatsApp that incorporated local cultural understandings of typhoid. Participants were randomised 1:1:1 to the intervention groups or the control group (n=245), who received two episodes about breast feeding.
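
A minimal sketch of a 1:1:1 allocation like the trial's, assuming simple shuffle-and-deal randomisation (the trial's actual procedure may differ); the participant IDs and arm labels are placeholders.

```python
import random

random.seed(7)

participants = [f"P{i:03d}" for i in range(736)]   # hypothetical IDs
random.shuffle(participants)

# Deal the shuffled list round-robin into three arms (1:1:1 allocation).
arms = {
    "A_debunk_plus_facts": participants[0::3],
    "B_facts_only": participants[1::3],
    "C_control": participants[2::3],
}
print({arm: len(ids) for arm, ids in arms.items()})
```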

READ: Debunking highly prevalent health misinformation

Individuals who encounter false information on social media may actively spread it further, by sharing or otherwise engaging with it. Much of the spread of disinformation can thus be attributed to human action. Four studies (total N = 2,634) explored the effect of message attributes (authoritativeness of source, consensus indicators), viewer characteristics (digital literacy, personality, and demographic variables) and their interaction (consistency between message and recipient beliefs) on self-reported likelihood of spreading examples of disinformation.

Participants also reported whether they had shared real-world disinformation in the past. Reported likelihood of sharing was not influenced by authoritativeness of the source of the material, nor indicators of how many other people had previously engaged with it.

Participants’ level of digital literacy had little effect on their responses. The people reporting the greatest likelihood of sharing disinformation were those who thought it likely to be true, or who had pre-existing attitudes consistent with it. They were likely to have previous familiarity with the materials. Across the four studies, personality (lower Agreeableness and Conscientiousness, higher Extraversion and Neuroticism) and demographic variables (male gender, lower age and lower education) were weakly and inconsistently associated with self-reported likelihood of sharing.

These findings have implications for strategies more or less likely to work in countering disinformation in social media.

READ: Why do people spread false information online?

A descriptive ecological study explored the percentage of the population that is unable to recognize fake news, the percentage who trust social network content, and the percentage who use it as their sole news source in Argentina, Brazil, Chile, Colombia, Mexico, and Peru, up to 29 November 2020. Internet penetration rate, Facebook penetration rate, and COVID-19 mortality were calculated for each country. Information was obtained from literature searches and government and news portals in the selected countries, according to the World Health Organization’s five proposed action areas: identifying evidence, translating knowledge and science, amplifying action, quantifying impact, and coordination and governance.

READ: Infodemic: fake news and COVID-19 mortality  

Previous research has argued that fake news may have grave consequences for health behavior, but surprisingly, no empirical data have been provided to support this assumption. This issue takes on new urgency in the context of the coronavirus pandemic and the accompanying wave of online misinformation. In this large preregistered study (N = 3,746), we investigated the effect of a single exposure to fabricated news stories about COVID-19 on related behavioral intentions.

We observed small but measurable effects on some behavioral intentions but not others. For example, participants who read a story about problems with a forthcoming contact-tracing app reported a 5% reduction in willingness to download the app. These data suggest that one-off fake news exposure may have behavioral consequences, though the effects are not large. We also found no effect of providing a general warning about the dangers of online misinformation on responses to the fake stories, regardless of whether the warning was framed in positive or negative terms. This suggests that generic warnings about online misinformation, such as those used by governments and social media companies, are unlikely to be effective.

We conclude with a call for more empirical research on the real-world consequences of fake news.

READ: Quantifying the Effects of Fake News on Behavior

The COVID-19 pandemic presents multifaceted challenges for the US health care system. One such challenge is in delivering vital health information to the public—a task made harder by the scourge of health misinformation across the information ecosystem (Southwell et al., p. S288 in this issue of AJPH, and Southwell et al.1). I offer concrete recommendations for public health information officers and communication professionals drafting communication campaigns for health agencies and health organizations to maximize the chance that timely health advisories reach the public.

READ: Concrete Recommendations for Cutting Through Misinformation 

We present the results of an international study, integrating previous research about predictors of belief in misinformation (both in general and specifically about COVID-19), and, in turn, how susceptibility to misinformation about the virus affects key self-reported health behaviours.

In summary, while belief in misinformation about COVID-19 is not held by a majority of people in any country that we examined, specific misinformation claims are consistently deemed reliable by a substantial segment of the public and pose a potential risk to public health.

Crucially, we demonstrate a clear link, replicated internationally, between susceptibility to misinformation and vaccine hesitancy and a reduced likelihood of complying with public health guidance. We highlight the key role that scientists play as disseminators of factual and reliable information, as well as the potential importance of fostering numeracy and critical thinking skills as a way to reduce susceptibility to misinformation. Further research should explore how digital media and risk literacy interventions may impact how (mis)information is received, processed and shared, and how they can be leveraged to improve resilience against misinformation on a societal level.

READ: Susceptibility to misinformation  

A Surgeon General’s Advisory is a public statement that calls the American people’s attention to a public health issue and provides recommendations for how that issue should be addressed. Advisories are reserved for significant public health challenges that need the American people’s immediate awareness.

READ: SG Misinformation Advisory  

Rumors and conspiracy theories can contribute to vaccine hesitancy. Monitoring online data related to COVID-19 vaccine candidates can track vaccine misinformation in real time and assist in negating its impact. This study aimed to examine COVID-19 vaccine rumors and conspiracy theories circulating on online platforms, understand their context, and then review interventions to manage this misinformation and increase vaccine acceptance.

READ: COVID-19 vaccine rumors and conspiracy theories

The COVID-19 pandemic has shown that false or misleading health-related information can dangerously undermine the response to a public health crisis. These messages include both the inadvertent spread of erroneous information (misinformation) and deliberately created and propagated false or misleading information (disinformation). Misinformation and disinformation have contributed to reduced trust in medical professionals and public health responders, increased belief in false medical cures, politicized public health countermeasures aimed at curbing transmission of the disease, and increased loss of life.

READ: Misinformation/Disinformation Costs an Estimated $50 to $300 Million Each Day

Kathleen Hall Jamieson is director of the Annenberg Public Policy Center of the University of Pennsylvania and co-founder of FactCheck.org.

I have spent much of my career studying ways to blunt the effects of disinformation and help the public make sense of the complexities of politics and science. When my colleagues and I probed the relation between the consumption of misinformation and the embrace, or dismissal, of protective behaviors that will ultimately stop the coronavirus’s spread, the results were clear: Those who believe false ideas and conspiracy theories about COVID-19 and vaccines are less likely to engage in mask wearing, social distancing, hand washing and vaccination.

In the midst of a raging pandemic, the importance of science communication is indisputable. Mention “science communication,” though, and what comes to mind in this context are public service announcements touting the 3 Ws (Wear a mask, Watch your distance, Wash your hands) or the FAQ pages of the Centers for Disease Control and Prevention. Ask someone what “science communicator” evokes, and responses might include a family physician and experts such as Anthony S. Fauci, director of the National Institute of Allergy and Infectious Diseases, and CNN’s Sanjay Gupta, who appear so regularly on our screens that we think of them as friends. But Fauci isn’t on your family Zoom call when a cousin mistakenly asserts that the CDC has found that wearing a mask makes you more likely to get COVID-19. Nor is Gupta at the ready when your friend’s daughter wonders whether the COVID vaccine contains microchips designed to track us.

READ: How to Debunk Misinformation about COVID, Vaccines and Masks 

We address the diffusion of information about COVID-19 with a massive data analysis on Twitter, Instagram, YouTube, Reddit and Gab. We analyze engagement and interest in the COVID-19 topic and provide a differential assessment of the evolution of the discourse on a global scale for each platform and its users. We fit information spreading with epidemic models, characterizing the basic reproduction number R0 for each social media platform. Moreover, we identify information spreading from questionable sources, finding different volumes of misinformation on each platform. However, information from reliable and questionable sources does not present different spreading patterns. Finally, we provide platform-dependent numerical estimates of rumor amplification.
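The epidemic-model fit can be made concrete with a small sketch. Assuming a daily series of new engagements with the topic on one platform (the counts and the two-day mean engagement period below are illustrative assumptions, not figures from the paper), the early exponential growth rate can be converted into a rough R0 via the linearized SIR relation R0 = 1 + r/γ:

```python
import numpy as np

# Hypothetical daily counts of new engagements with the COVID-19 topic
# on one platform (illustrative numbers, not data from the paper).
new_engagements = np.array([12, 18, 30, 44, 70, 101, 160, 235, 370, 540])
days = np.arange(len(new_engagements))

# Early-phase exponential growth: log(new engagements) ~ log(c) + r * t.
r, log_c = np.polyfit(days, np.log(new_engagements), 1)

# Linearized SIR relation: R0 = 1 + r / gamma, where 1/gamma is the mean
# period a user keeps spreading the topic (a two-day assumption here).
gamma = 1.0 / 2.0
R0 = 1.0 + r / gamma

print(f"growth rate r = {r:.3f}/day, estimated R0 = {R0:.2f}")
```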

READ: The COVID-19 social media infodemic

Understanding the threat posed by anti-vaccination efforts on social media is critically important with the forthcoming need for worldwide COVID-19 vaccination programs. We globally evaluate the effect of social media and online foreign disinformation campaigns on vaccination rates and attitudes towards vaccine safety.

We found the use of social media to organise offline action to be highly predictive of the belief that vaccinations are unsafe, with such beliefs mounting as more organisation occurs on social media. In addition, the prevalence of foreign disinformation is highly statistically and substantively significant in predicting a drop in mean vaccination coverage over time. A 1-point shift upwards in the 5-point disinformation scale is associated with a 2-percentage point drop in mean vaccination coverage year over year. We also found support for the connection of foreign disinformation with negative social media activity about vaccination. The substantive effect of foreign disinformation is to increase the number of negative vaccine tweets by 15% for the median country.

READ: Social media and vaccine hesitancy

Misinformation proliferates on social media faster than coronavirus disease (COVID-19) itself spreads, and it can have severely deleterious consequences for health during a disaster such as this pandemic. Drawing on stimulus-response (hypodermic needle) theory and resilience theory, this study tested a conceptual framework in which general misinformation beliefs, conspiracy beliefs, and religious misinformation beliefs act as the stimulus; credibility evaluation acts as the resilience strategy; and both shape individual responses to COVID-19. Using a self-administered online survey conducted during the pandemic, the study obtained 483 usable responses and found that, overall, the propagation of misinformation on social media undermines individual responses to COVID-19. In particular, credibility evaluation of misinformation is a strong positive predictor of individual responses, while religious misinformation beliefs, conspiracy beliefs, and general misinformation beliefs exert negative influences. The findings and general recommendations should help the public to be cautious about misinformation, and the relevant authorities to initiate proper safety measures against damaging misinformation so that public health is not exploited.

READ: Effects of misinformation on COVID-19 

Focusing on the dynamics between governments and big tech, on cybercrime, and on disinformation and fake news, this paper examines some of the risks that have been highlighted and aggravated as societies have transitioned at speed to a more virtual way of living.

The COVID-19 pandemic has been called the ‘great accelerator’ of digital transformation, with technology at the forefront of countries’ response to the crisis. The experience of the past year has underscored that tech governance must be based on human-centric values that protect the rights of individuals but also work towards a public good.

In the case of the development of track-and-trace apps, for instance, a successful solution should be respectful of individual privacy and robust from a cybersecurity perspective, while also effectively serving essential epidemiological goals.

READ: The COVID-19 pandemic and trends in technology

The 2013–2016 West Africa Ebola virus disease epidemic was the largest, longest, deadliest, and most geographically expansive outbreak in the 40-year interval since Ebola was first identified. Fear-related behaviors (FRBs) played an important role in shaping the outbreak. FRBs are defined as “individual or collective behaviors and actions initiated in response to fear reactions that are triggered by a perceived threat or actual exposure to a potentially traumatizing event. FRBs modify the future risk of harm.” Particularly notable are behaviors such as treating Ebola patients in home or private-clinic settings, the “laying of hands” on Ebola-infected individuals to perform faith-based healing, observing hands-on funeral and burial customs, foregoing available life-saving treatment, and stigmatizing Ebola survivors and health professionals. Future directions include modelling the onset, operation, and perpetuation of fear-related behaviors and devising strategies to redirect behavioral responses to mass threats in a manner that reduces risks and promotes resilience.

READ: The Role of Fear-Related Behaviors

Background on the “mark of the beast” narrative with vaccines

This recent comment from Congresswoman Greene is not the first association to be made between vaccines and the “mark of the beast.” The “mark of the beast” narrative originates from a Bible passage, which describes a sign or mark to distinguish people who choose to worship the beast/the antichrist. Some passages claim that the mark will dictate one’s ability to buy and sell in the end times. Greene’s statement linking the “mark of the beast” to vaccine passports is the latest in a series of recent arguments against vaccine passports. The Virality Project covered some of these narratives in our recent post on vaccine passports.

Associating vaccination with this religious concept of a “mark” is a long-running theme in online anti-vaccination groups with highly religious members. The content pre-dates the COVID-19 pandemic: the film “Anthrax, Smallpox vaccinations and the mark of the beast” made this connection as early as 2005 (and was still available on Amazon as of the writing of this post). The COVID-19 vaccine is the most recent vaccine to be linked to “mark of the beast” content, joining smallpox, MMR, polio, and a long list of earlier vaccines.

The “mark” is closely connected to the conspiratorial idea that microchips will be injected through the COVID-19 vaccine. This narrative also pre-dates the pandemic: after an emergency order to get people vaccinated was issued during the 2019 measles outbreak in New York, conspiracy and religious groups shared the idea that the MMR vaccine would contain a microchip with the “mark of the beast.” This is false. 

READ: Mark of the Beast meets Vaccine Passports

Because the novel coronavirus is highly contagious, the widespread use of preventive measures such as masking, physical distancing, and eventually vaccination is needed to bring it under control. We hypothesized that accepting conspiracy theories that were circulating in mainstream and social media early in the COVID-19 pandemic in the US would be negatively related to the uptake of preventive behaviors and also of vaccination when a vaccine becomes available.

Highlights

  • Belief in conspiracies related to COVID-19 in the US is prevalent and stable across time.

  • Conspiracy beliefs prospectively predict resistance to preventive action and vaccination.

  • Perceptions of the harms of the MMR vaccine partially account for vaccine hesitancy.

  • Conservative ideology and media use are positively related to conspiracy beliefs.

  • Conspiracy beliefs pose challenges in obtaining public support to prevent coronavirus infection.

READ: Conspiracy theories as barriers to controlling the spread of COVID-19 in the U.S.

We present a model of online content sharing where agents sequentially observe an article and must decide whether to share it with others. This content may or may not contain misinformation.

Agents gain utility from positive social media interactions but do not want to be called out for propagating misinformation.

We characterize the (Bayesian-Nash) equilibria of this social media game and show sharing exhibits strategic complementarity.

Our first main result establishes that the impact of homophily on content virality is non-monotone: homophily reduces the broader circulation of an article, but it creates echo chambers that impose less discipline on the sharing of low-reliability content.

This insight underpins our second main result, which demonstrates that social media platforms interested in maximizing engagement tend to design their algorithms to create more homophilic communication patterns (“filter bubbles”).

We show that platform incentives to amplify misinformation are particularly pronounced for low-reliability content likely to contain misinformation and when there is greater polarization and more divisive content.

Finally, we discuss various regulatory solutions to such platform-manufactured misinformation.
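The authors’ formal model is not reproduced here, but a stylized version of the sharing decision they describe may help fix ideas. In the illustrative payoff below (our notation, not the paper’s specification), agent i weighs the social benefit of sharing article a against the reputational cost of being called out, given a private signal s_i about the article’s reliability:

```latex
% Illustrative sharing rule (our notation, not the paper's specification).
% Agent i shares article a iff the expected utility of sharing is positive:
\[
  \mathbb{E}[u_i(\text{share})]
    = b \cdot \Pr(\text{positive interaction} \mid \text{share})
    - c \cdot \Pr(a \text{ is misinformation} \mid s_i)
        \cdot \Pr(\text{called out} \mid \text{share})
\]
\[
  \text{share } a \iff \mathbb{E}[u_i(\text{share})] > 0
\]
```

Under this reading, homophily lowers the call-out probability (like-minded neighbors rarely object), which is one way to see why echo chambers impose less discipline on the sharing of low-reliability content.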

READ: FULL STUDY 

Humans learn about the world by collectively acquiring information, filtering it, and sharing what we know. Misinformation undermines this process. The repercussions are extensive. Without reliable and accurate sources of information, we cannot hope to halt climate change, make reasoned democratic decisions, or control a global pandemic. Most analyses of misinformation focus on popular and social media, but the scientific enterprise faces a parallel set of problems—from hype and hyperbole to publication bias and citation misdirection, predatory publishing, and filter bubbles. In this perspective, we highlight these parallels and discuss future research directions and interventions.

Misinformation has reached crisis proportions. It poses a risk to international peace (1), interferes with democratic decision making (2), endangers the well-being of the planet (3), and threatens public health (4, 5). Public support for policies to control the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is being undercut by misinformation, leading to the World Health Organization’s “infodemic” declaration (6). Ultimately, misinformation undermines collective sense making and collective action. We cannot solve problems of public health, social inequity, or climate change without also addressing the growing problem of misinformation.

READ: Peer-Reviewed Paper  

 

It’s January. Armed supporters of President Trump have just stormed the U.S. Capitol; a deadly and dramatic siege unfolds on national TV and grips the nation. In this emotionally charged moment, a screenshot of an email from University of Washington President Ana Mari Cauce emerges on Facebook. In it, she announces the UW will remove all American flags from the Tacoma campus for the safety of staff and students.

The message has the earmarks of spoofing, an Internet scam. For example, it incorrectly identifies Cauce in her signature as “Interim President of UW Tacoma.” But that doesn’t stop this fabricated story from being shared, mostly in Facebook groups outside Seattle and the University community.

Emails from concerned alumni and community members trickle in, some requesting verification, others expressing anger and disgust. The Tacoma campus issues a disclaimer on its official Facebook page to squash the story before it spreads to a wider audience. Eventually, the rumor loses traction.

READ: FULL ARTICLE

A major factor in vaccine dissent is the proliferation of vaccine-opposed content online (Kata, 2012). The affordances of the internet have contributed to the rise of alleged conspiracies (Fahnestock, 2016), like the link between autism and vaccines, which is further amplified through social media and bots (Broniatowski et al., 2018). The widespread availability of misinformation has been an integral part of the vaccine-opposed movement’s success. The movement has been highly effective in spreading its messages and arguments through social media and content-sharing platforms (Wilson & Keelan, 2013). People also seek out and join digital communities to gain support and health advice from others similar to them (Zhang, He, & Sang, 2013).

READ: Peer-Reviewed Paper

During a May 4 keynote presentation at a virtual conference hosted by the George Washington University Institute for Data, Democracy & Politics, Kate Starbird, a University of Washington Center for an Informed Public cofounder and Human Centered Design & Engineering associate professor, discussed “the notion of participatory disinformation that connects the behaviors of political elites, media figures, grassroots activists, and online audiences to the violence at the Capitol,” as Justin Hendrix of Tech Policy Press wrote in a May 13 analysis of Starbird’s research.

But what exactly is participatory disinformation, how does it work and what does it look like?

READ: Full Article  

There is a growing concern that e-commerce platforms are amplifying vaccine misinformation. To investigate, we conduct two sets of algorithmic audits for vaccine misinformation on the search and recommendation algorithms of Amazon, the world’s leading e-retailer.

First, we systematically audit the search results returned for vaccine-related queries without logging into the platform (unpersonalized audits). We find that 10.47% of search results promote misinformative health products. We also observe ranking bias: Amazon ranks misinformative search results higher than debunking ones. Next, we analyze the effects of personalization due to account history, where history is built progressively by performing various real-world user actions, such as clicking a product. We find evidence of a filter-bubble effect in Amazon’s recommendations; accounts performing actions on misinformative products are presented with more misinformation than accounts performing actions on neutral or debunking products.

Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated than when the user merely shows an intention to buy that product.
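A minimal sketch of how an unpersonalized audit of this kind might be organized follows; `fetch_search_results` and `label_product` are hypothetical stand-ins for the scraping and annotation steps, which the paper does not specify in this form:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical stand-ins for the audit's two real steps: scraping search
# results for a query and labeling each returned product.
def fetch_search_results(query, n=20):
    return [f"{query}-product-{i}" for i in range(n)]

def label_product(product_id):
    return random.choice(["misinformative", "debunking", "neutral"])

queries = ["vaccine", "vaccination", "mmr vaccine"]  # illustrative queries

labels = Counter()
ranks = {"misinformative": [], "debunking": []}
for q in queries:
    for rank, product in enumerate(fetch_search_results(q), start=1):
        label = label_product(product)
        labels[label] += 1
        if label in ranks:
            ranks[label].append(rank)

total = sum(labels.values())
print(f"misinformative share of results: {labels['misinformative'] / total:.2%}")
# Ranking bias: a lower mean rank means results appear higher on the page.
for label, rs in ranks.items():
    print(f"mean rank, {label}: {sum(rs) / len(rs):.1f}")
```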

READ: Peer-Reviewed Paper

The Covid-19 pandemic has been accompanied by a parallel “infodemic” (Rothkopf 2003; WHO 2020a), a term used by the World Health Organization (WHO) to describe the widespread sharing of false and misleading information about the novel coronavirus. Misleading information about the disease has been a problem in diverse societies around the globe. It has been blamed for fatal poisonings in Iran (Forrest 2020), racial hatred and violence against people of Asian descent (Kozlowska 2020), and the use of unproven and potentially dangerous drugs (Rogers et al. 2020). A video promoting a range of false claims and conspiracy theories about the disease, including an antivaccine message, spread widely (Alba 2020) across social media platforms and around the world. Those spreading misinformation include friends and relatives with the best intentions, opportunists with books and nutritional supplements to sell, and world leaders trying to consolidate political power.

  • The Covid-19 pandemic comes at a time when we were already grappling with information overload and pervasive misinformation.
  • In a crisis, humans communicate in a process called collective sensemaking in order to understand uncertain and dynamic circumstances.
  • Collective sensemaking is a vital process, but we can make mistakes—or the process can be manipulated and exploited.

READ: Peer-Reviewed Paper

 

Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.

READ: Peer-Reviewed Paper

Recent studies have documented a shift from moderate political attitudes to more extreme attitudes at the ends of the political spectrum. This can be seen in Political Correctness (PC) on the left, and white identitarian (WI) attitudes on the ‘Alt-Right’ (AR). While highly covered in mainstream media, limited academic research has investigated their possible antecedents and psychological correlates.

The current study investigated the prevalence and psychological predictors of these attitudes. Utilising a quota-based sample of 512 U.S. participants, we found that extreme political attitudes were associated with various personality traits, social media use, and upbringing. PC attitudes were associated with agreeableness, black-white thinking, social-media use, and perceived overprotective parenting. WI attitudes were associated with low agreeableness and openness, and high black-white thinking.

Our results show that extreme left and right attitudes are separated by individual differences, and that authoritarianism can be seen on both the left and the right.

READ: FULL RESEARCH ARTICLE

It is seemingly impossible to avoid hearing news about the coronavirus pandemic and about the various COVID-19 vaccines on the market. Vaccine misinformation is a common occurrence and I don’t expect it to go away any time soon.

Since you, and those around you, must make the decision to vaccinate yourselves, the information you encounter online and that you might choose to share plays an important role in the decision to vaccinate (or not). I study misinformation online, specifically around vaccines, and I want to caution you about the kind of misinformation you may see in the coming months and offer you tools to stop its spread.

Vaccine misinformation is not new. Claims like “vaccines cause autism” have lingered since the early 1990s despite numerous scientific studies that show there is no link between the MMR vaccine and Autism Spectrum Disorder. What we are seeing now is classic vaccine-opposed narratives and misinformation applied to the COVID vaccine. False reports claim that the COVID vaccine causes side effects like Bell’s palsy or even death. These stories are built around bits of true information presented in a misleading way. For example, while it is true that six people died during the Pfizer-BioNTech trials, only two were given the vaccine. The other four received a placebo. There is no evidence to suggest the deaths are connected to the vaccine.

READ: Full Article  

In late July, a conspiracy theory that over 45,000 people in the US have died from COVID-19 vaccines spread online after a lawsuit was filed in federal court by America’s Frontline Doctors. The group, which has filed similar lawsuits before, regularly spreads anti-vaccine and COVID-19 misinformation. The claims made in the filing are based on written testimony from an anonymous whistleblower, who claimed to have proved, using data from the Vaccine Adverse Event Reporting System (VAERS), that COVID-19 vaccine deaths are underreported. This lawsuit and its accompanying testimony, built on faulty analysis, exemplify a common method used to spread misleading anti-vaccine content: misrepresentation of data from self-reporting databases. Here, we discuss this method alongside others that present debunked or decontextualized scientific studies and analyses as factual.

In a recent Virality Project blog post, we discussed how anti-vaccine advocates attempt to undermine established medical and public health voices as they spread their own views. These repeated attacks not only aim to delegitimize medical experts or the institutions they represent, but are also intentionally applied to shift credibility and authority toward anti-vaccine advocates and influencers.

READ: Academic Paper

In recent months, social media sites like Facebook, Instagram, and Twitter have taken significant steps to reduce the amount of vaccine-related misinformation on their platforms through the banning of anti-vaccination groups, take-downs of the accounts of prominent anti-vaccine advocates, and the moderation of vaccine-opposed misinformation. While these efforts are vital, misinformation around the COVID-19 vaccine is still abundant, and anti-vaccine advocates are still able to use these social media platforms to spread vaccine hesitancy.

In order to avoid moderation, vaccine-opposed communities have developed a range of tactics that they believe reduce the chances of their content being flagged by automated content moderation practices. The most common of these is lexical variation: the practice of changing how a keyword or phrase is spelled while still conveying its original meaning, without triggering keyword moderation. Commonly used variations include “V@ccine,” “Vak-Seen,” and “V@X.” Not only does the proliferation of these terms assist in evading moderation, but it also makes it difficult for researchers and public health communicators to understand the full picture of vaccine-related misinformation being shared; one countermeasure is to normalize such variants before analysis, as sketched below.
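A minimal sketch of that normalization idea follows; the substitution table and alias list are illustrative examples, not an exhaustive inventory of the variants in circulation:

```python
import re

# Illustrative normalization for moderation-evading spellings.
CHAR_SUBS = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})
ALIASES = {"vakseen": "vaccine", "vax": "vaccine"}  # phonetic respellings

def canonical(term):
    t = term.lower().translate(CHAR_SUBS)   # "V@X" -> "vax"
    t = re.sub(r"[-_.\s]+", "", t)          # "vak-seen" -> "vakseen"
    return ALIASES.get(t, t)

for variant in ["V@ccine", "Vak-Seen", "V@X"]:
    print(variant, "->", canonical(variant))
# V@ccine -> vaccine, Vak-Seen -> vaccine, V@X -> vaccine
```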

READ: Academic Paper

On May 9, a TikTok video went viral that purported to show a refrigerator magnet sticking to the arm of someone who had recently received a COVID-19 vaccine. Between the time when the video was posted and when TikTok removed it on May 13, the video was viewed over 9.6 million times. While a number of the top comments remarked on the outlandishness of the claim that vaccines make people magnetic, in the past month the effects of this online narrative have had a real-world impact. On June 8, just a month after the TikTok video went viral, a prominent anti-vaccine activist attempted to demonstrate this “magnetism” in the Ohio state legislature as evidence that the vaccines are unsafe.

The May 9 TikTok video became part of a proliferation of videos in the so-called “Magnet Challenge.” These videos vary from satirical takes on the conspiracy theory to steadfast assertions that the vaccines make people magnetic. While some of the Magnet Challenge content may have appeared to poke fun at the ridiculousness of anti-vaccine claims, the conspiracy theory has ultimately had harmful effects online, such as encouraging others not to get the vaccine. Looking at the origin and spread of the magnet videos, we discuss how the range of responses to the challenge, including videos that debunk it, raises difficult moderation questions about how to handle health misinformation that may be considered satirical.

READ: Academic Paper

One of the overarching dynamics of online misinformation about the COVID-19 vaccines is that very little of it is new. Many of the tropes about efficacy, ingredients, safety, and other topics are quite old. However, recent claims alleging that women’s menstrual cycles change if the women are in the presence of others who have recently gotten the shot are novel. This narrative bubbled up in anti-vaccine communities, which have long discussed “vaccine shedding,” and was amplified in an April 20 tweet by Naomi Wolf, an anti-vaccine activist and conspiracy theorist who has repeatedly spread vaccine misinformation. Wolf tweeted that unvaccinated women were experiencing menstruation issues after being around vaccinated women.

This is just one of several posts by accounts with large followings in the past week who have raised concerns around women’s health after exposure to the COVID-19 vaccine.

READ: Academic Paper

Social media platforms rarely provide data to misinformation researchers. This is problematic as platforms play a major role in the diffusion and amplification of mis- and disinformation narratives. Scientists are often left working with partial or biased data and must rush to archive relevant data as soon as it appears on the platforms, before it is suddenly and permanently removed by deplatforming operations. Alternatively, scientists have conducted off-platform laboratory research that approximates social media use. While this can provide useful insights, this approach can have severely limited external validity (though see Munger, 2017; Pennycook et al. 2020). 

For researchers in the field of misinformation, emphasizing the necessity of establishing better collaborations with social media platforms has become routine. In-lab studies and off-platform investigations can only take us so far. Increased data access would enable researchers to perform studies on a broader scale, allow for improved characterization of misinformation in real-world contexts, and facilitate the testing of interventions to prevent the spread of misinformation.

The current paper highlights 15 opinions from researchers detailing these possibilities and describes research that could hypothetically be conducted if social media data were more readily available.

READ: Journal Article

Research at the intersection of machine learning and the social sciences has provided critical new insights into social behavior. At the same time, a variety of issues have been identified with the machine learning models used to analyze social data. These issues range from technical problems with the data used and features constructed, to problematic modeling assumptions, to limited interpretability, to the models’ contributions to bias and inequality. Computational researchers have sought out technical solutions to these problems. The primary contribution of the present work is to argue that there is a limit to these technical solutions. At this limit, we must instead turn to social theory.

We show how social theory can be used to answer basic methodological and interpretive questions that technical solutions cannot when building machine learning models, and when assessing, comparing, and using those models. In both cases, we draw on related existing critiques, provide examples of how social theory has already been used constructively in existing work, and discuss where other existing work may have benefited from the use of specific social theories. We believe this paper can act as a guide for computer and social scientists alike to navigate the substantive questions involved in applying the tools of machine learning to social data.

READ: Journal Article  

The backfire effect occurs when a correction increases belief in the very misconception it is attempting to correct, and it is often cited as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation increases belief more than a no-correction control. Furthermore, we aimed to examine whether item-level differences in backfire rates were associated with test-retest reliability or with theoretically meaningful factors.

These factors included worldview-related attributes, namely perceived importance and strength of pre-correction belief, and familiarity-related attributes, namely perceived novelty and the illusory truth effect. In two nearly identical experiments, we conducted a longitudinal pre/post design with N = 388 and 532 participants. Participants rated 21 misinformation items and were assigned to a correction condition or test-retest control.

We found that no items backfired more in the correction condition compared to the test-retest control or initial belief ratings. Item backfire rates were strongly negatively correlated with item reliability (ρ = -.61 / -.73) and did not correlate with worldview-related attributes.

Familiarity-related attributes were significantly correlated with backfire rate, though they did not consistently account for unique variance beyond reliability. While there have been previous papers highlighting the non-replicable nature of backfire effects, the current findings provide a potential mechanism for this poor replicability. It is crucial for future research into backfire effects to use reliable measures, report the reliability of their measures, and take reliability into account in analyses. Furthermore, fact-checkers and communicators should not avoid giving corrective information due to backfire concerns.
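To make the item-level analysis concrete, here is a sketch of a backfire-rate computation and its correlation with reliability; the ratings and reliabilities are simulated for illustration, and only the item and sample counts echo the study:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, n_participants = 21, 388  # counts from the study; ratings simulated

# Simulated pre- and post-correction belief ratings per participant/item.
pre = rng.normal(0.0, 1.0, (n_participants, n_items))
post = pre - 0.5 + rng.normal(0.0, 0.8, (n_participants, n_items))

# Item-level backfire rate: share of participants whose belief increased
# after the correction.
backfire_rate = (post > pre).mean(axis=0)

# Stand-in for each item's test-retest reliability (simulated here).
reliability = rng.uniform(0.3, 0.9, n_items)

rho, p = spearmanr(backfire_rate, reliability)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```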

READ: Academic Paper

The internet has become a popular resource to learn about health and to investigate one’s own health condition. However, given the large amount of inaccurate information online, people can easily become misinformed. Individuals have always obtained information from outside the formal health care system, so how has the internet changed people’s engagement with health information? This review explores how individuals interact with health misinformation online, whether it be through search, user-generated content, or mobile apps. We discuss whether personal access to information is helping or hindering health outcomes and how the perceived trustworthiness of the institutions communicating health has changed over time. To conclude, we propose several constructive strategies for improving the online information ecosystem. Misinformation concerning health has particularly severe consequences with regard to people’s quality of life and even their risk of mortality; therefore, understanding it within today’s modern context is an extremely important task.

READ: Academic Paper

Prior work established the benefits of server-recorded user engagement measures (e.g. clickthrough rates) for improving the results of search engines and recommendation systems. Client-side measures of post-click behavior have received relatively little attention, despite the fact that publishers now have the ability to measure how millions of people interact with their content at a fine resolution using client-side logging. In this study, we examine patterns of user engagement in a large, client-side log dataset of over 7.7 million page views (including both mobile and non-mobile devices) of 66,821 news articles from seven popular news publishers.

For each page view we use three summary statistics: dwell time, the furthest position the user reached on the page, and the amount of interaction with the page through any form of input (touch, mouse move, etc.). We show that simple transformations on these summary statistics reveal six prototypical modes of reading that range from scanning to extensive reading and persist across sites. Furthermore, we develop a novel measure of information gain in text to capture the development of ideas within the body of articles and investigate how information gain relates to the engagement with articles.

Finally, we show that our new measure of information gain is particularly useful for predicting reading of news articles before publication, and that the measure captures unique information not available otherwise.
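A minimal sketch of how the three per-pageview summary statistics might be transformed and clustered into reading modes is below; the simulated logs, the log-transform, and the use of k-means with k = 6 are illustrative assumptions that merely echo the description above:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated per-pageview logs: dwell time (s), furthest scroll position
# (fraction of the page), and number of input events (touch, mouse, ...).
views = np.column_stack([
    rng.lognormal(3.0, 1.0, 5000),  # dwell time
    rng.uniform(0.0, 1.0, 5000),    # max scroll depth
    rng.poisson(20, 5000),          # interaction events
]).astype(float)

# Simple transformations: tame heavy tails, then standardize.
X = StandardScaler().fit_transform(np.log1p(views))

# Six clusters, matching the six prototypical reading modes reported.
modes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(modes))  # pageviews per reading mode
```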

READ: Academic Paper

PODCAST

On hot button topics such as vaccines, climate change, and genetically modified foods, people across the political spectrum are susceptible to the psychological forces that lead them to disbelieve what scientists are telling them and to seek out information that confirms rather than challenges their biases.

So what are these psychological forces? Why do people doubt and deny scientific findings? Is science denial worse now than it’s ever been? Does the American public trust scientists or look upon them with suspicion? How have the internet and social media magnified science skepticism? And what can scientists, science communicators, and science educators do to help people gain a more accurate understanding of how science works?

Psychologists Barbara Hofer of Middlebury College and Gale Sinatra of the University of Southern California, authors of the book Science Denial: Why it Happens and What to Do About it, discuss these and other questions.

LISTEN: The psychology of science denial, doubt, and disbelief

 

The BBC’s media editor Amol Rajan asks James Ball, special correspondent at BuzzFeed News, and Mark Frankel, social media editor at BBC News, about the different meanings of ‘fake news’ and how journalists should respond to it.

LISTEN: The truth about fake news 

Heidi J. Larson is the author of Stuck: How Vaccine Rumors Start — and Why They Don’t Go Away and the director of the Vaccine Confidence Project. Larson’s work on vaccine hesitancy traces its root causes to a lack of trust in, and anxieties about, our institutions. That distrust is shared and amplified by the social media platforms we have available today.

LISTEN: On the Root Causes of Vaccine Hesitancy

VIDEO

Online health misinformation poses a major threat to efforts to prevent the spread of Covid and other infectious diseases. Understanding how misleading claims can be bolstered through data visualizations and quantitative statements is crucial to developing effective countermeasures.

 

PANELISTS

Carl Bergstrom is a professor of biology at the University of Washington and a faculty member at the UW Center for an Informed Public who studies the flow of information. He is the co-author of Calling Bullshit: The Art of Skepticism in a Data-Driven World and works to prevent the spread of quantitative misinformation and promote data literacy.

Crystal Lee is a PhD candidate at MIT and a Fellow at the Berkman Klein Center at Harvard University. She works broadly on topics related to the social and political dimensions of computing, data visualization, and disability.

Chase Small is a researcher at the Stanford Internet Observatory. His current work focuses on the Virality Project, a coalition of research entities focused on supporting real-time information exchange between the misinformation research community, public health officials, government agencies, civil society organizations, and social media platforms. This fall, Chase conducted research for the Election Integrity Partnership, which had a similar mission for the 2020 US election.

VIEW: Webinar

 

Sander van der Linden explains his radical idea: that people can be “inoculated” against falling for fake news.

Sander van der Linden is Cambridge’s professor of “defence against the dark arts”. His team works with governments and organisations such as Google to find ways to fight misinformation, disinformation, and conspiracy theories.

VIEW: The Vaccine for Fake News

Conspiracy theories played a role in the insurrection at the US Capitol in January and continue to fuel resistance, in some circles, to getting vaccinated for Covid-19. It may seem like conspiracy theories are more prevalent now than ever, and more common on the political right than on the left — but are they really? Recent research suggests that no one is totally immune. That’s because conspiracy theories tap into fundamental aspects of human psychology, helping to explain why they’re so alluring — and so hard to dispel once they take hold. Researchers are trying to develop ways to disrupt their influence on our minds and our society. This discussion brings together two experts, a social psychologist and a political theorist, to examine the psychological underpinnings and political consequences of conspiracy theories.

Speakers: Nancy Rosenblum, Harvard University; Sander van der Linden, University of Cambridge
 

Dr. Peter Hotez, co-director of the Center for Vaccine Development at Texas Children’s Hospital, warns of the dangers of vaccine disinformation and describes how it has cost many lives.

VIEW: ‘Death By Anti-Science’

In the digital age, governments are increasingly dependent on the internet and social media for disseminating information to citizens. But in countries like the UK, US, and Canada, where nearly half of the population graduated high school before the web was invented, how much do citizens understand about how information is processed, manipulated, and presented to them through internet-based communications?

Especially as the spread of disinformation on these technologies has ripped at the fabric of democracies around the world, are citizens equipped with the media literacy skills to consume, understand, and react to misleading or outright false information populating their news feeds?

VIEW: Disinformation in the Digital Age

Twitter, Facebook, WhatsApp and YouTube have all seen a host of misinformation being spread on their platforms, with fake news sources amassing over 50 million engagements.

Facebook, which has over 2 billion accounts worldwide, has announced that it is busy removing false claims from its site and will promote the guidance of the World Health Organisation.

Despite this push from Facebook, online analytics show that in the UK the NHS website is still seeing fewer engagements than fake news sources from the US.

Katie Razzall is joined by Anna-Sophie Harling from NewsGuard, an online browser extension that reports on digital misinformation, and Victoria Baines, a former Facebook Trust and Safety Manager, to discuss what needs to be done to protect ourselves from digital misinformation.

VIEW: Coronavirus: The conspiracy theories spreading fake news  

In this presentation, recorded in May 2021, University of Washington Center for an Informed Public postdoctoral fellow Kolina Koltai, who studies vaccine misinformation, discusses vaccine hesitancy and COVID-19 vaccine misinformation narratives on social media.

VIEW: COVID-19 vaccine misinformation and narratives

Among the conspiracy theories circulating about the coronavirus pandemic, one claim is that Covid-19 vaccines contain microchips that the government or global elites like Bill Gates would use to track citizens. Despite viral videos claiming a chip in the vaccines makes people’s arms magnetic, the conspiracy is false.

“That’s just not possible as far as the size that would be required for that microchip,” said Dr. Matt Laurens, a pediatric infectious disease specialist at the University of Maryland School of Medicine who also serves as a co-investigator on the phase three trials of the Moderna and Novavax Covid vaccines. “Second, that microchip would have to have an associated power source, and then in addition, that power source would have to transmit a signal through at least an inch of muscle and fat and skin to a remote device, which again, just doesn’t make sense.”

VIEW: CNBC segment debunking vaccine tracking chip conspiracies