I just realized that I haven’t shared my slides from LOEX a few weeks ago. So, scroll to the bottom for the complete slide deck. Of course, it would help to have some context to understand what was going on. So here are my presentation notes, slightly reworked for this post. Also, there were a couple of caveats I made during the presentation:
- I’m focusing on teaching information evaluation during a one-shot library session in a first-year course: the English 101 writing courses that dominate library instruction. Higher-level classes in other subjects get treated differently.
- The pedagogical tactics I’m recommending have been thoroughly researched and shown to be effective…just not by librarians. This is stuff coming out of cognitive psychology, education, political science, and other fields. So, while I’m not going to include assessment data, please know that there is a ton of assessment data out there from earlier studies.
Anyways. Let’s begin…
Introduction
Everyone is talking about fake news these days when, in fact, fake news isn’t really new at all. Fake news is simply a means (among many) of monetizing confirmation bias. And it’s been with us for as long as there’s been news. Cicero complains about fake news in De Oratore. The yellow journalism of the 1890s started a major war. The tabloids in the supermarket won’t leave Bat Boy alone. My contention is that the existence of fake news is not the problem we should be focusing on. I mean, it is a problem, but the more salient problem we’re dealing with is the rhetoric of fake news. The spread of a deep mistrust of traditional media coupled with the valorization of motivated reasoning. Otherwise known as post-truth.
Post-truth
We know that trust in the media is at an all-time low. And the current rhetorical climate is to blame. When the President of the United States dismisses every criticism as “fake news” and urges his followers to reject any news that doesn’t glorify his (objectively terrible) regime, we’re looking at post-truth. The massive deregulation of the 1990s (cf. the Telecommunications Act of 1996) exacerbated nascent market forces in the media industry, leading media companies to slowly trade journalistic integrity for shareholder value. The distinction between professional journalist and partisan hack slowly disintegrated, the amateur enthusiast was elevated to “speak truth to power,” and readers were sorted into filter bubble after filter bubble. Remember how participatory journalism was supposed to save the media? Fifteen years later, the new lingo is “don’t read the comments.” Journalism is incorporated, information is democratized, the expert is dead. It’s all post-truth. And if it feels liberating, it’s not. As Habermas warned, the existence of an independent, trustworthy news media is essential to the proper functioning of a democratic society (assuming that’s what you’re into). We’ve only got two ways out: (1) make the news media more trustworthy and (2) help people understand the social nature of that trustworthiness. I’ll set aside the first, only because it’s a different conversation; however, I do want to argue that librarians can help with the second…but it’s not going to be with LibGuides. We can only help if we pay attention to the underlying cognitive processes that lead to the post-truth mindset.
Post-Truth Psychology
There are dozens of cognitive biases that contribute to the post-truth mindset. I’ll just focus on a few:
- Directional reasoning: All reasoning is either directional (aimed at a specific result) or accuracy-based (aimed at a correct result). When we fall into directional reasoning, our critical thinking skills are diminished. (cf. Kunda, 1990; Lenker, 2016)
- Hostile media effect: The tendency of partisans to see the media as biased against their side on an issue. A by-product of confirmation bias and selective reading. (cf. Gunther & Liebhart, 2006; Gunther & Schmitt, 2004)
- Dunning-Kruger effect: Lack of knowledge about a subject yields overconfidence in the ability to evaluate information about that subject. Sort of like, the less you know, the more you think you know. (cf. Kruger & Dunning, 1999)
- Defensive rationalization: We construct our sense of self out of our beliefs, and those beliefs we feel most passionate about are inevitably the beliefs that form the core of our identity. So, a challenge to a strongly held belief can feel like an attack on the self. Often, when a strongly held belief is criticized, we end up clinging to that belief even more.
These and other cognitive biases show up in the library classroom. Students are motivated to find articles that back up their thesis statement rather than articles that may help guide and refine their thesis. Students are more skeptical towards the traditional news sources that their professors may prefer (e.g., the NYT or WaPo). Students sometimes lack the domain knowledge needed to make accurate judgments about an information source. Students may react negatively when exposed to potentially controversial topics. All this and more. So what should we do?
Post-Truth Pedagogy
If we want to help students learn to evaluate popular sources (i.e., news media), then we have to do our best to avoid triggering the cognitive biases that contribute to the post-truth mindset. Surprisingly, there is virtually nothing in the library literature on avoiding confirmation bias in the classroom. Thankfully, there are hundreds, if not thousands, of articles in other disciplines, going back decades in some cases. Here are a few ideas that have been proven to work:
Timing matters
Cognitive biases set in very quickly and the effectiveness of an instruction intervention decreases over time (Tetlock & Kim, 1987). When working with novice learners, try to get instruction on information evaluation as early as possible, preferably before they have settled on a research question.
Accountability matters
Students show more integrative, complex reasoning when asked to justify their decisions to someone else (Druckman, 2012; Taber & Lodge, 2006; Tetlock, 1983). So, give them as much opportunity as possible to explore their decisions with their peers. Develop activities and mechanisms for peer instruction. Put another way, lecturing about how to evaluate information is less successful than asking students to explain their search decisions to a peer. When students feel empowered and take ownership over their own search and evaluation behaviors, they are more receptive to suggestions for improvement.
Google is not our enemy
Okay, raise your hand if you use a large, multi-subject database to introduce students to searching for popular sources. I think lots of us do. I used to. And then one day I realized something. When I wake up in the morning, head downstairs, make breakfast for the kids, and then sit down to read the morning news, I never say to myself, “all right, let’s fire up ProQuest.” Hell no. I go to Facebook. Google News. Washington Post. Local paper. RSS feeds. Twitter. I never get my news from a database and neither do our students. Nor should they. I want to teach students how to evaluate information in a natural setting, not the artificial setting of a database. So many markers of credibility are stripped out when news gets repackaged for databases: size and placement on the original page, comments, images, related articles, and so on. So, we examine Google, talk about algorithms, etc.
Avoid controversial topics
Polarizing examples encourage defensive rationalization and directional reasoning. This has been proven over and over again. It’s the way we are. Just in terms of the neuroscience, the way I get agitated when I read something praising Trump is the same way a Trump supporter gets agitated when they see criticism of Trump. The amygdala lights up, adrenaline spikes, and it’s fight-or-flight. When that happens, it can be harder for some students to focus on the lesson’s objectives. Things like curiosity, openness to new ideas, empathy, critical thinking…these are all affected by strong emotional responses. This is why I don’t stand in front of a class and say “all right, let’s all Google abortion” or “how credible is this article on rape culture” or “let’s come up with synonyms for white supremacy.” These implicitly force students to take a side and focus on the issue rather than on the broader critical thinking skills that are the actual focus of the lesson. This isn’t to say that students shouldn’t explore complex social issues; it’s just to say that a 50-minute library session for a freshman composition class isn’t the best place to do it. Or at least, it’s not the place to do it so blatantly. Save it for a class that’s explicitly addressing those topics. Otherwise it comes across like crass, #critlib virtue-signaling.

Speaking of which: while I think we ought to avoid using polarizing topics directly, there is a place for indirect engagement with complex social issues. When I teach students about Google, I use an activity almost identical to this one by Jacob Berg, and we talk about AdWords and PageRank and SEO and bias and the under-representation of marginalized voices and so on. It’s an approach built upon leading students to uncover these things organically. Like, giving them the cognitive tools to uncover biases on their own. Berg uses Safiya Noble’s example of searching Google Images for ‘beautiful women’ and asking students to discuss why all of the photos are of white women. I sometimes do the same thing, only with the phrase ‘successful person,’ and students discuss why it’s mostly photos of corporate white guys in suits. Again, it’s a Socratic, indirect approach (“Why are there no people of color in the results?”) rather than a top-down, direct approach (“Let’s evaluate these results for #blacklivesmatter”).
Rethink reliability
But wait! You’re still talking about searching. When do you get to the information evaluation part? When do we get to use the CRAAP test? The answers are: “you already have” and “never.” Rather than leave information evaluation to a silly mnemonic that gets slapped on after you find an article, I think evaluation should be built into the entire search process. And it all comes down to distinguishing between reliability and usefulness. Let’s take a look at the CRAAP test. Here’s an article I used as an example in my presentation:
Is it any good? Let’s CRAAP test it.
- Currency: Well, it’s from February 2015, so it’s a little over two years old. Is that current enough? Has anything changed in horse racing? It’s more complicated than just checking the date; you kind of have to know quite a bit about horse racing.
- Relevance: If I’m trying to prove that steroids are a problem in horse racing, then yes, it’s relevant. But, wait, that’s directional reasoning that’s going to lead me to discount as irrelevant any articles that do not support my thesis. That’s not good. The only cognitively warranted way to establish relevance is to gather a large number of articles on steroids in horse racing and see how they cohere. So, relevance really needs to be understood in context, not as applied to a single article.
- Authority: Who is Frank Angst? Granted, that’s a pretty sweet name; I can’t decide if he was in a mid-’70s New York proto-punk band or a mid-’80s DC hardcore band. Either way, I’m going to have to research this guy to figure out if he can be trusted. And that takes a little while.
- Accuracy: Okay. Here’s a fun paradox that goes all the way back to Plato’s Charmides; I call it the “paradox of expertise.” How can a non-expert evaluate the claims made by an expert? If we just blindly trust, then we’re being gullible. If we want to independently verify the accuracy of what they say, then it would seem that we need to become experts ourselves. Like, how can I know if Frank Angst is telling the truth if I don’t know the truth to begin with? (I’ll get to the answer in a minute)
- Purpose: This seems pretty straightforward. Unless you’ve been reading too much Barthes or Derrida and you think that it’s impossible to uncover authorial intent. In which case, purpose is irrelevant. (Shoot, information evaluation in general becomes irrelevant for people like Derrida.)
The idea here is that the CRAAP test makes a lot of epistemological assumptions that obscure just how difficult it really is. Another idea here is that the CRAAP test has nothing to do with reliability. It’s a test for the usefulness of an article. Reliability comes from somewhere else. Importantly, reliability is a property of information sources, not of information itself. Authors, publishers, news outlets, etc. can be reliable, not single articles.
Basically, an information source is reliable (credible) to the extent that it tends to produce true beliefs. I highly recommend checking out Alvin Goldman’s work on reliabilism for the full details. But the short version is that, while no information source is perfect, some information sources are more likely to lead to true beliefs. Take a look at this 2012 poll from Fairleigh Dickinson University:
Sure, all media are biased. But some media are more biased than others. And some media are more likely to yield true beliefs. You can measure it. People who watch FOX News score lower on current-events knowledge than people who listen to NPR. It’s not that FOX News is unreliable and NPR is reliable; it’s a matter of degree. Just like a broken clock is right twice a day, FOX News has some factual reporting (but only outside of their morning and evening “entertainment” programming). But, on balance, FOX News has way less factual reporting than the Washington Post or NPR. And this gets back to the question of accuracy. We can sidestep having to become experts ourselves if we allow that we have better epistemic grounds to trust articles from some publications than from others. I didn’t have time in my presentation, and it gets sort of technical, but if you look at reliability through a Bayesian lens, you can start to really dig into related issues like consistency across multiple sources, the likelihood that a source would report something (e.g., how likely is it for FOX to report something that makes Trump look bad), and other things.
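If you’re curious what that Bayesian framing looks like, here’s a minimal sketch (a toy example of my own, not something from the presentation, and the numbers are made up purely for illustration): treat a source’s reliability as the chance it reports a claim when the claim is true versus when the claim is false, then use Bayes’ rule to update your belief in the claim once the source reports it.

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: updated belief that a claim is true, given that a
    particular source has reported it.

    prior             -- belief in the claim before seeing the report
    p_report_if_true  -- chance this source reports the claim if it's true
    p_report_if_false -- chance this source reports the claim if it's false
    """
    numerator = p_report_if_true * prior
    return numerator / (numerator + p_report_if_false * (1 - prior))

# Start agnostic about some claim.
prior = 0.5

# A source that reports most true claims and few false ones moves us a lot...
print(round(posterior(prior, p_report_if_true=0.9, p_report_if_false=0.1), 2))  # 0.9

# ...while a source that reports whatever fits its slant barely moves us at all.
print(round(posterior(prior, p_report_if_true=0.6, p_report_if_false=0.5), 2))  # 0.55

# Corroboration is just the same update applied again with a second,
# independent source, so agreement across sources keeps raising our belief.
belief = posterior(prior, 0.9, 0.1)
belief = posterior(belief, 0.6, 0.5)
print(round(belief, 2))  # 0.92
```

The point of the toy example is simply that corroboration by additional, independent sources keeps nudging belief upward, which is why consistency across multiple sources matters so much.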
Focus on the process
Anyways, the approach I recommend to teaching information evaluation is to get students to the point that they are thinking about evaluation before they even begin searching. Focus on the search process; don’t leave evaluation as an afterthought. Think about it this way. Cognitive biases are often subtle and sub-conscious. Likewise, information evaluation is occurring during the search process, but it, too, is often subtle and sub-conscious. And that’s where cognitive biases are going to have the greatest effect. So turn searching into evaluating. Show them how Google works and how results are manipulated. Show them things like the ‘site:’ search operator. Talk about truth and fairness in reporting. Talk about media ethics. Talk about consistency across multiple sources. Examine how different sources cover the same event. Again, it’s not about asking “is this article credible?” It’s about getting students to ask “given the source of this article, and given the way the issue is reported elsewhere, can I trust what I’m reading?”
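As one concrete way to run that kind of comparison (a hypothetical exercise; the topic and outlet list below are just placeholders), you can have students generate site-restricted searches for the same story across several outlets and compare the coverage side by side:

```python
# Generate site-restricted Google searches so students can compare how
# different outlets cover the same story. The topic and outlet list are
# placeholders; swap in whatever the class is actually working on.
from urllib.parse import quote_plus

topic = "steroids in horse racing"
outlets = ["washingtonpost.com", "nytimes.com", "foxnews.com", "npr.org"]

for outlet in outlets:
    query = f"{topic} site:{outlet}"
    print(f"https://www.google.com/search?q={quote_plus(query)}")
```

Comparing the result sets shifts the question from “is this article credible?” to “how is this story being covered across sources?”, which is exactly the framing I’m after.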
Conclusion
Wow. This turned into a 2,500-word rant. Sorry about that. The nutshell is this: simply giving students a bullet-pointed list of “ways to spot fake news” isn’t sufficient; you need to teach in a way that avoids triggering poor cognitive processes. Time instruction correctly. Include accountability; have students justify their search behaviors to their peers. Avoid emotionally-charged examples. Move beyond acronyms. Focus on the search process, not the search results. Teaching information evaluation is as much about how you teach as it is about what you teach.
Slides
Sources
Cassino, D., Woolley, P., & Jenkins, K. (2012). What you know depends on what you watch: Current events knowledge across popular news sources. Fairleigh Dickinson University’s Public Mind Poll. Retrieved from publicmind.fdu.edu/2012/confirmed/final.pdf
Druckman, J. N. (2012). The politics of motivation. Critical Review, 24(2), 199-216. doi:10.1080/08913811.2012.711022
Goldman, A. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Gunther, A. C., & Liebhart, J. L. (2006). Broad reach or biased source? Decomposing the hostile media effect. Journal of Communication, 56(3), 449-466. doi:10.1111/j.1460-2466.2006.00295.x
Gunther, A. C., & Schmitt, K. (2004). Mapping boundaries of the hostile media effect. Journal of Communication, 54(1), 55-70.
Habermas, J. (2006). Political communication in media society: Does democracy still enjoy an epistemic dimension? The impact of normative theory on empirical research. Communication Theory, 16(4), 411-426. doi:10.1111/j.1468-2885.2006.00280.x
Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger, Trans.). Cambridge, MA: MIT Press.
Jonas, E., Schulz-Hardt, S., Frey, D., & Thelen, N. (2001). Confirmation bias in sequential information search after preliminary decisions: An expansion of dissonance theoretical research on selective exposure to information. Journal of Personality and Social Psychology, 80(4), 557-571. doi:10.1037/0022-3514.80.4.557
Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Scientific Reports, 6.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134. doi:10.1037/0022-3514.77.6.1121
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.
Kuran, T., & Sunstein, C. R. (1999). Availability cascades and risk regulation. Stanford Law Review, 683-768.
Lavine, H. G., Johnston, C. D., & Steenbergen, M. R. (2012). The ambivalent partisan: How critical loyalty promotes democracy. New York: Oxford University Press.
Lenker, M. (2016). Motivated reasoning, political information, and information literacy education. portal: Libraries and the Academy, 16(3), 511-528.
Schaffner, B., & Luks, S. (2017). This is what Trump voters said when asked to compare his inauguration crowd with Obama’s. The Washington Post. Retrieved from https://www.washingtonpost.com/news/monkey-cage/wp/2017/01/25/we-asked-people-which-inauguration-crowd-was-bigger-heres-what-they-said/?utm_term=.db79a652500f
Seeber, K. (2017). Wiretaps and CRAAP. Retrieved from http://kevinseeber.com/blog/wiretaps-and-craap/
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755-769. doi:10.1111/j.1540-5907.2006.00214.x
Tetlock, P. E. (1983). Accountability and complexity of thought. Journal of Personality and Social Psychology: Attitudes and Social Cognition, 45(1), 74-83. doi:10.1037/0022-3514.45.1.74
Tetlock, P. E., & Kim, J. I. (1987). Accountability and judgment processes in a personality prediction task. Journal of Personality and Social Psychology: Attitudes and Social Cognition, 52(4), 700-709. doi:10.1037/0022-3514.52.4.700
Vraga, E. K., Tully, M., Akin, H., & Rojas, H. (2012). Modifying perceptions of hostility and credibility of news coverage of an environmental controversy through media literacy. Journalism, 13(7), 942-959. doi:10.1177/1464884912455906
Hi, Lane:
Interesting and thoughtful discussion, as usual. I agree with your point that the CRAAP test includes some stuff about usefulness, like relevance. (And that “accuracy” thing I could never figure out.) But the currency and authority stuff seems to have something to do with more than just usefulness and goes in the direction of reliability, contrary to your claim that the CRAAP test “has nothing to do with reliability.” For example, a number of interpretations of the CRAAP test include reliability of the information’s source (like the publisher) under “authority.” I agree with you that “reliability” is not a good term to use for individual items of information; reliability involves a track record and applies more to information sources like authors and journals than to individual articles or monographs. But we still need some term of approbation for individual items of information, perhaps “likelihood of being true” or “probability of being true,” and the criteria of currency and authority seem relevant here. I think part of what the CRAAP test is doing is checking the likelihood of a piece of information (like an article or monograph) being true. Most of this is stuffed into the “authority” criterion, though currency also contributes to a piece of information’s likelihood of being true.
Do you think this idea that individual information items can be considered to have a probability of being true is accurate, and that the CRAAP test at least partly is intended to roughly determine this probability? (Also, why couldn’t they call it the “CAARP test”? It would have been classier and could have led to some cool fish-related graphics.)