The Infodemic Metaphor

Apr 8, 2021

Where did the term “infodemic” come from?

From early in the outbreak, medical officials warned about an epidemic of misinformation, which would make the task of tackling the spread of the virus much harder. In its Situation Report of early February 2020, the World Health Organization (WHO) warned that “the 2019-nCoV outbreak and response has been accompanied by a massive ‘infodemic,’” which it defined as “an over-abundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it.” The report clarified that the WHO was combatting not merely an overabundance of information, but the rise of particular kinds of misinformation: “Due to the high demand for timely and trustworthy information about 2019-nCoV, WHO technical risk communication and social media teams have been working closely to track and respond to myths and rumours.” In a speech on 15 February 2020, WHO Director-General Tedros Adhanom Ghebreyesus made the point more forcefully. “We are not just fighting an epidemic,” he explained. “We’re fighting an infodemic. Fake news spreads faster and more easily than this virus, and is just as dangerous.” UN Secretary-General António Guterres offered a similar assessment, arguing that the spread of the COVID-19 pandemic “has also given rise to a second pandemic of misinformation, from harmful health advice to wild conspiracy theories.”

The term “infodemic” was in fact first coined in an article by David Rothkopf in the Washington Post during the SARS outbreak of 2003. Rothkopf argued that, as with other prominent events such as terrorist attacks, the fevered media response to the epidemic was out of proportion to the reality: “a few facts, mixed with fear, speculation and rumor, amplified and relayed swiftly worldwide by modern information technologies have affected national and international economies, politics and even security in ways that are utterly disproportionate with the root realities.” Rothkopf’s neologism did not gain much traction at the time, but it has become widespread during the coronavirus pandemic, quickly losing its quotation marks as it became accepted as an instantly recognizable term. For example, both the New York Times and The Lancet headlined the word in their reports on the WHO’s creation of a new information platform, part of its collaboration with social media companies to amplify accurate health information. The term infodemic is often combined, however, with other, older metaphors used to describe the spread of problematic information, drawn from the worlds of military, espionage, and propaganda (“disinformation”) or from the weather (floods, torrents, etc.). For example, Sylvie Briand, director of Infectious Hazards Management at the WHO’s Health Emergencies Programme, explained that “we know that every outbreak will be accompanied by a kind of tsunami of information” (emphasis added). The difference, Briand continued as she switched metaphors, is that “now with social media … this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further.” The idea of an infodemic has become a compelling shorthand for alerting audiences to the problem of online misinformation during the pandemic: a search for the term returns over 4,000 articles in the Factiva database of news reports, and a similar number of academic publications in Google Scholar. In this blog post we will address two questions. First, has there actually been an overabundance of information, misinformation, and/or disinformation surrounding COVID-19? Second, how accurate is the metaphor of an infodemic anyway?

Is there an epidemic of information?

The WHO’s initial warning was as much about a glut of potentially accurate information as it was about the spread of rumours, myths, and conspiracy theories surrounding the new disease. The problem was partly caused by the rapid production and widespread promotion by scientists, journalists, and the public of preprints: academic articles that have not yet undergone peer review, but which are available in open-access versions online. In their study of the role of preprints in the early months of the pandemic, Gazendam et al. (2020) found that there had been an exponential increase in scientific publications relating to COVID-19, often with a quick turnaround time from submission to publication, and with many of the articles taking the form of commentaries and opinion pieces rather than original research findings. The danger of this publishing process—not entirely new, albeit on a far greater scale than anything seen before—is that intriguing yet unconfirmed preliminary results can gain wide coverage, while any subsequent clarifications, criticisms, or retractions tend to receive less notice. For example, Gazendam et al. draw attention to a preprint that claimed to have found similarities between SARS-CoV-2 and HIV, a finding which unsurprisingly fuelled conspiracy theories about bioengineering. Although the article was later withdrawn, it quickly became one of the most widely shared scientific papers of the last decade, circulating far beyond medical circles.

However, most commentators use the term “infodemic” to mean not merely an overload of well-meaning information, but a potentially catastrophic explosion—to switch metaphors again—of either accidentally or intentionally misleading information. The suggestion is usually that the spread of misinformation and disinformation has reached an unprecedented level. But is that true? Has the epidemic of information reached pandemic proportions? This is a very hard question to answer. The feeling of being overwhelmed by information is not new, but, as Chico Camargo and Felix Simon of the Oxford Internet Institute (OII) point out (drawing on Hugo Mercier’s recent book Not Born Yesterday), many people have learned to navigate the contemporary mediascape by developing cognitive strategies such as selective attention. Some early studies in the pandemic indicated that many people had a good idea of where to find reliable information about COVID-19, even if some of them chose to ignore scientific experts, national health institutions, and the mainstream media. Determining whether we are currently living through an epidemic of online misinformation is also difficult because it relies, first, on there being a clear, agreed-upon distinction between high- and low-quality information, and, second, on knowing precisely what information individuals encounter online, how they feel about it, and what they do with it. Although social media researchers can identify, for example, which posts have gone “viral” by tracking both volume and engagement metrics, the platforms do not provide transparent access to the data that would show what users actually see on their feeds.
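To make the measurement problem concrete, the following is a minimal sketch of how “virality” might be operationalized from public metrics. The Post structure and the percentile threshold are our own hypothetical choices for illustration, not any platform’s actual API, and the limitation flagged above still applies: such counts measure engagement, not exposure.

```python
# A minimal sketch (not any platform's actual API) of flagging "viral" posts
# from public volume and engagement metrics. The Post fields and the
# 99th-percentile threshold are hypothetical choices for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares: int
    likes: int
    comments: int

def engagement(post: Post) -> int:
    """Total public engagement for a post (the metric such studies typically sum)."""
    return post.shares + post.likes + post.comments

def flag_viral(posts: list[Post], percentile: float = 0.99) -> list[Post]:
    """Return posts whose engagement falls in the top tail of the sample.

    Note: this measures engagement, not exposure. It cannot tell us what
    users actually saw on their feeds, which is the caveat discussed above.
    """
    if not posts:
        return []
    scores = sorted(engagement(p) for p in posts)
    cutoff = scores[int(percentile * (len(scores) - 1))]
    return [p for p in posts if engagement(p) >= cutoff]
```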

Setting aside these important caveats, it is nevertheless possible to produce some approximations of the scale of the problem of online mal-information during the pandemic. A report from the OII in April 2020, for example, found that only a minority of respondents in its survey had come across a lot of misinformation concerning COVID-19. However, the OII study, and a similar one from January 2021 by KCL/Ipsos, found that young people and the less well educated are more likely to get their news and information from social media, and to come across a significant amount of misinformation. Some studies that focused on individual platforms understandably came to more pessimistic conclusions, because they were not looking at the overall media diet of individuals. Yang et al. (2020), for example, found that low-credibility information about COVID-19 circulates on Twitter at about the same volume as information from the New York Times—but that does not necessarily imply that misinformation during the pandemic has outgunned reliable sources. A study by BBC Monitoring of the most popular, conspiracy-minded anti-vaccination accounts on Instagram, for instance, found that the number of followers of these accounts increased five-fold during 2020. In contrast, the Covaxxy Dashboard from Indiana University’s Observatory on Social Media shows that the New York Times was shared on Twitter ten times more often than the conspiracy-leaning Zero Hedge website in the first week of April 2021. Yet this seemingly optimistic finding does not tell the full story of the relative importance of good vs. bad information. On the one hand, conspiracy theorists will often link to an article from a reliable source such as the New York Times to back up their alternative interpretation of events (or to provide proof of what they perceive as the bias of the mainstream media). On the other hand, conspiracist articles from sites like Zero Hedge are shared by those debunking myths and rumours.
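As an illustration of the kind of tally that lies behind comparisons such as the New York Times vs. Zero Hedge one, consider the following sketch. The credibility labels and URL sample here are assumptions for the example (real studies draw on large samples of posts and curated lists such as NewsGuard’s), and, as just noted, a raw share count cannot distinguish endorsement from debunking.

```python
# A toy domain-level tally of shared links, of the kind behind claims like
# "the New York Times was shared ten times more often than Zero Hedge".
# The credibility labels below are assumptions for illustration only.
from collections import Counter
from urllib.parse import urlparse

HIGH_CREDIBILITY = {"nytimes.com"}      # assumed label for this example
LOW_CREDIBILITY = {"zerohedge.com"}     # assumed label for this example

def domain_counts(shared_urls: list[str]) -> Counter:
    """Tally how often each domain appears among the shared links."""
    domains = (urlparse(u).netloc.removeprefix("www.") for u in shared_urls)
    return Counter(domains)

def share_ratio(counts: Counter) -> float:
    """Ratio of high- to low-credibility shares in the sample. Note that a
    share is not an endorsement: debunkers also link to Zero Hedge."""
    high = sum(counts[d] for d in HIGH_CREDIBILITY)
    low = sum(counts[d] for d in LOW_CREDIBILITY)
    return high / max(low, 1)
```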

The scorecard is mixed, but many studies nevertheless concur that, while the spread of low-quality information during the pandemic is a serious problem, it has not drowned out high-quality sources of news and health information. Broniatowski et al. (2021), for example, argue that there might be an excess of information about COVID-19 circulating online, but that misinformation and disinformation are not winning out. Nor is the problem significantly worse than before the pandemic. In fact, their study found that links to high-quality information sources are more common in COVID-related online interactions than in those relating to topics other than health, or in previous episodes of disease outbreaks. The reason, Broniatowski et al. suggest, is that global health experts, in conjunction with the social media platforms, have done a comparatively good job of promoting authoritative sources of information. These findings about the limits of the “infodemic” hypothesis are corroborated by the many opinion polls during the pandemic showing that belief in particular conspiracy theories (e.g. that the virus was deliberately created in a lab) is held by about 25% of respondents in the US and 20% in the UK. While overall 15% of British people think that the authorities are covering up important information about the coronavirus, among those who get their news mainly from social media (mainly the young) the figure is as high as 40%. Conspiracist vaccine hesitancy ranges from roughly 10% in the UK to 40% in France. In a literal sense, then, the idea that the online viral spread of misinformation has reached overwhelming, pandemic proportions is undoubtedly exaggerated. Nevertheless, it is still possible to be alarmed at the prevalence of untruths about COVID-19, ranging from the harmless and bizarre to the downright damaging, among particular communities and sections of the population in individual countries.

Just as an aside—and this is a question we will look at in more detail in a future blog post—it is also worth considering how conspiracy theories stack up against other kinds of misinformation in the datasphere. In their study of anti-vaccination discourse, for example, First Draft found that conspiracy theories make up only 29% of vaccine-hesitancy discourse in English-language online spaces, although the figure is 59% in French-language ones. In a similar vein, Islam et al. (2020) found that, in their dataset of the “COVID-19 infodemic in 25 languages from 87 countries,” 89% of the reports were classified as rumours, 7.8% as conspiracy theories, and 3.5% as cases of stigma. Conspiracy theories might have attracted considerable media attention, but they usually make up a comparatively small part of misinformation, which in turn, as we have seen, itself forms only a minor component of the overall mediascape. However, as we will explain in a future blog post, conspiracy theories may well have an outsized influence as a particularly appealing and intractable kind of misinformation.

Is the infodemic metaphor accurate?

It is sometimes unclear whether commentators are using the term “infodemic” metaphorically or literally. Most use it as a convenient shorthand to suggest parallels between the way the virus spreads and the way misinformation about the virus spreads. In his original article, Rothkopf insisted that the similarities between infodemics and epidemics are remarkably close: “In virtually every respect they behave just like any other disease, with an epidemiology all their own, identifiable symptoms, well-known carriers, even straightforward cures.” Some researchers have focused on reconstructing the pathways of transmission of particular pieces of COVID-19 misinformation, such as the Plandemic documentary, which was shared on social media platforms 8 million times within its first week, attracting 2.5 million likes, shares, and comments on Facebook alone. Other researchers have concentrated on the role that “superspreaders” have played in the infodemic. A study by a team of researchers at Cornell, for example, concluded that President Trump was by far the biggest driver of coronavirus misinformation. In addition to considering the mechanisms of spread, researchers have also looked into the possibility of “inoculation” against the threat of misinformation. This is meant as a metaphor, but it is a particularly appealing one because it suggests that, like actual vaccines, there might be a miracle cure for our current plight.

Although most of these studies rely on an analogy between the virus and misinformation, some have taken the comparison more literally. Some researchers have started to explore possible correlations between the prevalence of low-credibility news and low vaccine uptake, with the Covaxxy Dashboard, for example, providing an intriguing parallel set of maps in which US states are colour-coded according to twin measures of misinformation and vaccination adoption. Others have pushed the parallel further still. Cinelli et al. (2020), for instance, start from the premise that “models to forecast virus spreading … account for the behavioral response of the population with respect to public health interventions and the communication dynamics behind content consumption.” They explain how they “model the spread of information with epidemic models, characterizing for each platform its basic reproduction number (R0), i.e. the average number of secondary cases (users that start posting about COVID-19) an ‘infectious’ individual (an individual already posting on COVID-19) will create.” Ligot et al. (2021) take the parallel between epidemiology and infodemiology even further, examining the incubation period and spread over time of misinformation, and trying to identify whether there is a similar time-lag between the initial emergence of a piece of misinformation and its subsequent transmission. They then engage in a form of “contact tracing” of a particular set of misinformation URLs circulating on social media, in order to identify both the “multiple carriers” (i.e. repeat offenders) and the individual superspreaders of online misinformation. Next, they examine what they consider to be “mutations” in the cultural DNA of misinformation topics (from 5G and bioweapons to anti-lockdown and anti-vaxx), which they characterise as equivalent to new strains of the virus adapting to new environments. Finally, their hope is that, by identifying topic mutations in real time, they will in future be able to inoculate against the infodemic by providing relevant counter-messaging.
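To see what the epidemic framing amounts to in practice, here is a stripped-down sketch of how a basic reproduction number might be estimated for posting behaviour. The cascade representation (pairs attributing each new poster to the already-posting user who exposed them) is our hypothetical simplification of the definition quoted above, not Cinelli et al.’s actual estimation procedure.

```python
# A stripped-down sketch of the epidemic framing: treat users already posting
# about COVID-19 as "infectious", and estimate R0 as the average number of
# new posters each infectious user recruits. The cascade format is a
# hypothetical simplification, not Cinelli et al.'s actual pipeline.
from collections import defaultdict

def estimate_r0(cascade: list[tuple[str, str]]) -> float:
    """cascade: (exposing_user, newly_posting_user) pairs, attributing each
    new poster to the already-posting user who exposed them."""
    secondary = defaultdict(int)   # secondary cases per infectious user
    infectious = set()             # everyone who ends up posting
    for exposer, new_poster in cascade:
        secondary[exposer] += 1
        infectious.update((exposer, new_poster))
    # Average over all infectious users, including those who recruit nobody.
    return sum(secondary.values()) / len(infectious) if infectious else 0.0

# As in epidemiology, an estimate above 1 implies that posting about the
# topic keeps spreading; below 1, the "outbreak" of posts dies away.
```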

Where many commentators have used the infodemic metaphor without much reflection, Ligot et al. self-consciously adopt the analogy in order to see, in a spirit of pragmatism, what research insights it might generate. And their work indeed opens up some suggestive lines of inquiry, such as the idea of performing an equivalent of genome sequencing to see how particular narrative strands of conspiracy theories are recombined into new variants, all the while leaving tell-tale traces of their original form and content visible to the microscopic attention of cultural research. Although Ligot et al. are keen to emphasize the benefits of the comparison, they do not address its limitations, and in places suggest that the same mechanisms might underpin both realms.

The infodemic metaphor has become widely adopted, and it does capture in a striking way some of the dangers of the spread of mal-information online. However, it is worth thinking about some of the implications of the metaphor, and the ways in which the comparison does not work. First of all, information does not literally spread like a virus. Cells have no conscious ability to resist infection by a bacterium or a virus, but people do have some choice in whether to accept and pass on a particular piece of online content; at the very least, it is not inevitable that a recipient of online misinformation will succumb to its truth-altering message. Discussions of the infodemic usually imply that social media is a particularly dangerous space of transmission, with viral memes—especially conspiracist ones—able to bypass a user’s rational defence mechanisms. However, some recent research suggests that a great deal of misinformation is spread not through malicious intent or gullibility, but through a simple lack of conscious attention. In contrast to both these positions, cultural studies research has shown that, compared to the traditional model of one-to-many broadcasting, the internet can enable more active and participatory forms of media engagement in realms such as conspiracy theorising—even if that more utopian model of Web 2.0 does not always hold true. Indeed, there is a long tradition of work in cultural studies and communications that argues against the “hypodermic needle” theory of media influence. Even the metaphor of the viral transmission of memes is not new: writing in the 1990s, Douglas Rushkoff invoked the idea that ideas and images circulate virally in the mediascape, beyond the conscious control of manipulating content producers, precisely in order to argue against familiar accounts of the entertainment industry as a conspiracy. While some conspiracy entrepreneurs like Alex Jones and David Icke act as misinformation “superspreaders,” their communities of followers are not merely passive recipients of their messages, for better or worse.

Instead of an analogy with disease infection, it is more helpful to think in terms of supply and demand (another metaphor, of course). We need to consider the ideological and emotional investment of those who consume conspiracy narratives, as well as the financial and political incentives of those who produce them, not to mention the infrastructural logics of the platforms that host such narratives and promote them via recommendation algorithms. The infodemic metaphor also suggests that the solution to the problem will come from tracking, tracing, and quarantining “diseased” pieces of information. Like other social media platforms during the pandemic, Facebook has engaged in “performative transparency,” proudly announcing, for example, that by March 2021 it had removed 12 million pieces of misinformation related to COVID-19 and vaccines. Independent researchers are unable to verify that claim, quite apart from the fact that, as studies have shown, a great deal of deplatformed content continues to circulate elsewhere online. But, as we are finding in our research, fact-checking and the removal of harmful misinformation at best only scratch the surface of the problem, and at worst can exacerbate it. After all, unless we address why people are drawn to conspiracy narratives—why they find a sense of identity and community in these stories—we will always be playing whack-a-mole.

The analogy between viruses and information also falsely suggests that there is a single point of origin of the “disease” that can be clearly identified, allowing a targeted intervention in the form of a vaccine. Although social psychologists have found evidence of the success of some forms of “inoculation” that build up “resistance” and even “immunity” (pre-bunking, digital media literacy training, etc.), the effects tend to be comparatively short-lived. COVID-19 is caused by the SARS-CoV-2 virus (with its many emerging variants), but the “disease” of disinformation has no such clear causal counterpart. Research into the viral spread of misinformation online has to rely on a classificatory system of low- vs. high-quality sources, sites, and content. Sometimes this takes the form of a list of unreliable news websites maintained by organisations such as NewsGuard, or a database of false claims identified by fact-checking groups such as Full Fact or COVID19Misinformation.org. These taxonomic activities by think tanks, campaigning charities, and other civil society organisations are very useful—and it is frustrating that we have to rely so heavily on a piecemeal network of under-funded voluntary organisations for this important work. Nevertheless, the infodemic metaphor assumes that there is a clear-cut distinction between good information and bad information, and that our task is to identify and neutralise the threat from the latter. Yet the notion of a binary division between healthy and unhealthy information has much in common with the conspiracist mindset that sees the unfolding of history in Manichean terms, as an ultimate and apocalyptic struggle between good and evil. As Boris Noordenbos has suggested, the characterization of harmful misinformation as contaminating the body politic recalls the kind of Cold War paranoia that still structures much thinking in Russia and other authoritarian states, which view internal dissent as necessarily the result of clandestine Western influence. Likewise in the West, many commentators at first explained the spread of harmful information online during the coronavirus pandemic through the dominant paradigm—one that had coalesced around the 2016 US elections—of disinformation as the product of foreign interference. However, the pandemic (along with the 2020 US elections) has made clear that the source of misinformation is just as likely to be home-grown.

So, what should we do about the term “infodemic”? Simon and Camargo have argued that the metaphor is dangerous because it can push policy in ill-thought-out directions (e.g. by making illiberal counter-measures against disinformation seem a matter of vital public health, beyond discussion). Although we agree with many of their concerns, it is probably too late to put the genie back in the bottle. The term has gained too much traction, and, despite its flaws, it has the potential to suggest fruitful lines of inquiry. We can still insist, however, that people think about the implications of the metaphor, paying attention to when and why the parallel does not fit. More generally, there needs to be greater consideration of the range of metaphors used to describe how ideas, images, and narratives spread online, and of the ideological baggage that each metaphor brings with it. The idea of escaping figuration altogether by using a scientifically objective language is naïve. Instead, we need to be alert to both the insights and the blind spots that different analogies generate. In addition to the medical, economic, meteorological, and military figurative language we have mentioned here, there is increasing interest among digital media researchers in applying ecological metaphors to information dysfunction. Ecology provides a potentially productive way of thinking about the complex interactions between content, users, technological infrastructure, and the social dynamics of different digital platforms, but it is not free of its own unspoken assumptions. Rather than simply replacing “infodemic” with a different coinage, researchers should make sure that they remain aware that all explanatory models come with complex figurative entanglements.

This post is published under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence.