information literacy, writing instruction, and the problem of stochastic parrots

Old postcard showing three parrots

I was invited to speak to writing instructors at the University of Minnesota, Duluth, who are in the process of thinking through the information literacy portion of their learning outcomes. Always a pleasure to connect with a discipline that seems so closely aligned with academic library work.

Abstract: For more than three decades, my job was to help students learn how information works. Though information literacy, as we call it, matters to me because inquiry is ideally a form of education that Paulo Freire called “the practice of freedom,” the students I worked with were understandably focused on formulating questions and selecting the kinds of sources that would satisfy their teacher rather than engaging in genuine curiosity. Tellingly, a Project Information Literacy study of recent college graduates found that fewer than a third felt college prepared them to ask questions of their own. Librarians and writing instructors both face a fundamental tension between our higher goals and the reality of our service roles to other disciplines. Two concepts that seem important but are too often overlooked are, first, understanding the underlying ethical moves and commitments that characterize good honest work, whether it’s science, journalism, or an informative TikTok, and second, understanding how information systems shape our experiences, especially now that we no longer simply seek information; it seeks us. Today we’ll explore ways these concepts could be addressed without losing sight of the practical needs of writing instructors and their students to satisfy disciplinary expectations.

Thanks so much for inviting me to be part of your colloquium. I’m speaking to you from southern Minnesota, the unceded territory of the Dakota people of the Očhéthi Šakówin. I am currently just a couple of miles from the site of the largest mass execution in US history, when 38 Dakota men were hanged on the day after Christmas, 1862. That site is now called Reconciliation Park, a space for an annual memorial gathering. We’re still working on reconciliation, and will probably be at it for many generations.

To prepare for this discussion, I decided to ask the notorious ChatGPT to analyze the value of research papers in undergraduate courses. It did a pretty credible job, though it seemed a little confused about whether the question referred to writing papers or reading them – the wording of my question, I realized, was ambiguous. What was remarkable, though, was the speed with which it produced an answer in competent, error-free prose, and in a classic five-paragraph essay format, at that. This is why there was so much hype around this large language model and its simple user interface: it seems to do something that, for mere humans, takes time and effort and appears to do it faster and better.

I also asked it to tell me about criticisms of the research paper assignment in the style of Barbara Fister. It didn’t attempt my snarky style – probably didn’t have enough training data – but it did summarize a handful of the arguments I’ve made in the past, presented in three bland, error-free paragraphs.

It was, in short, quite ready to tell me both why research paper assignments are good, and why they are bad, because it doesn’t care about being right. It’s just responding to customer prompts, rather like Google Search, which presents results based on what you ask, not what is true.

For example, if you search for “Russia collusion 2016 election” you get very different results than if you search for “Russia collusion hoax.” And media manipulators know how to put these keywords and phrases to use even as most users of search engines assume they’re simply getting links to the most relevant sources.

Thanks to scholars like Safiya Noble, we’ve become a little more aware of bias in search engine results, but we’re not necessarily as wary of it in language models like GPT. To avoid scandal, OpenAI, which created ChatGPT, has taken steps to make sure it doesn’t include offensive output scooped up in its training data by paying Kenyans a pittance to label toxic sludge to be fed back into the system to teach it what not to say. Imagine spending hours a day labeling horrific stuff so the machine knows not to use it. This is how the seeming magic of AI happens: through a massive appropriation of content and hundreds of thousands of human judgments about nasty stuff, with the labor largely outsourced to low-paid workers who are invisible to the end user. A chat program is much more likely to appear magical and unbiased if we think no humans are involved.

Why have I gone off on this tangent? It struck me that I was giving ChatGPT an assignment not unlike what we ask of students. Sure, assigning research papers is supposed to help students learn new things, practice critical thinking, and develop their writing skills, as ChatGPT told me. But in practice, students think we’re asking them to respond to instructor-defined questions by scraping some information from Google and library databases, mashing it up, and presenting it in error-free prose that is bland and pseudo-academic. We’re asking them to become, like ChatGPT, stochastic parrots.


This is a phrase coined by Timnit Gebru and her coauthors in their paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” This fairly technical paper, which actually got Gebru and another author fired from their positions at Google, argues that there are a number of problems with large natural language models like ChatGPT. A language model is stochastic, in that it’s based on probabilities but has a certain amount of unpredictability. And it’s like a parrot in that it isn’t concerned with the meaning of words, but only fluency based on statistical likelihood of words being next to other words. They say a language model like the one underlying ChatGPT is

a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.
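That description can be made concrete with a toy model. The sketch below is my own illustration, not anything from the paper or the talk: it trains a bigram model on a tiny corpus, counting which words follow which, and then generates text by sampling each next word from those observed continuations. The result can look fluent while having no reference to meaning at all, which is the point of the metaphor. Real language models are vastly larger and use neural networks rather than raw counts, but the stochastic stitching-together is the same in spirit.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def parrot(follows, start, length=10, seed=None):
    """Generate text by repeatedly sampling a statistically likely next word.

    No grammar, no facts, no meaning: just probabilities over adjacency.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:          # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("students write papers because instructors assign papers "
          "and instructors assign papers because students write papers")
model = train_bigrams(corpus)
print(parrot(model, "students", length=8, seed=1))
```

Words that appear more often after a given word are proportionally more likely to be chosen, which is all the “stochastic” in “stochastic parrot” amounts to here.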

Is learning to write in college like being trained to put words next to each other in a predictable way, more concerned with correctness and mimicry than with creativity or curiosity, or even factual truth?

Of course, it’s not that bad, but I’d like to suggest that we can avoid making college writing tasks seem both tedious and meaningless while also honoring the need students have to produce the kind of “academic” prose in the genres their instructors in the disciplines will assign.

The term paper or research paper has been criticized since at least the early 1980s, when Richard Larson famously called it “a non-form of writing.” He argued that it’s a mistake to think of the “research paper” as a genre distinct from other forms of writing.  As he put it, “the so-called ‘research paper,’ . . . implicitly equates ‘research’ with looking up books in the library and taking down information from those books” (pp. 812, 813).

Of course, now students are more likely to look up articles in a library database (or on Google) and copy from them rather than books, but the basic problem hasn’t changed. A few years ago, the Citation Project collected 174 first-year writing samples from sixteen colleges and universities and found that student writers grabbed a sentence or two from the sources they cited, and most of those sentences were from the first or second page of the source. In other words, they assumed their job was to mine sources for usable quotes, but it wasn’t necessary to read and understand them.

Larson criticized the “research paper” as an artificial genre, arguing instead that research should be a feature of all kinds of writing, and I still find this persuasive. He wrote:

in order to function as educated, informed men and women they have to engage in research, from the beginning of and throughout their work as writers. I think that they should know what research can embrace, and I think they should be encouraged to view research as broadly, and conduct it as imaginatively, as they can. I think they should be held accountable for their opinions and should be required to say from evidence, why they believe what they assert. I think that they should be able to recognize that data from “research” will affect their entire lives, and that they should know how to evaluate such data as well as to gather them. And I think they should know their responsibilities for telling their listeners and readers where their data came from. (p. 816)

I love this. I love the idea that research should be part of our lives, not something done only for a peculiar kind of school-based writing that erases the self and derives authority from other people – or, more commonly, not people but things: sources.

And yet, despite decades of critique, the “research paper” is a stubborn thing. A recent book that studied writing programs at comprehensive public universities found 97% of instructors still assign the kind of paper Larson found problematic. The reason for this may be something Larson pointed out: it’s seen as service to other departments. In the first year, especially, it’s practice for writing in other courses, assuming that this practice will transfer to any disciplinary situation. In your program you do address the differences in those discourse communities in the advanced courses, but it’s hard to see how the practice provided in most first-year writing courses readily transfers to writing expectations in history, economics, or biology. And, of course, undergraduate students migrate across multiple disciplines. We’re asking them to appear fluent in a variety of discourses in ways that few of their professors would be comfortable attempting.

So, we often feel stuck. Librarians and writing instructors have quite a lot in common. Our work tends to be highly student-centric and our teaching is largely in service to other departments, whose faculty don’t want to do it themselves. We see a lot of students, but we don’t produce majors so much as help others produce majors. And while we can’t always control the kinds of writing and research situations our students will encounter elsewhere, we’re expected to prepare students for those situations. So choosing what to teach and how is complicated. And students’ lives are involved.

Another thing we have in common is that we believe passionately in the importance of what we do. Being able to inquire and compose meaning matters in the world, not just in college. To quote from your department’s mission statement, students are learning so they can “engage meaningfully with the complex societies in which they live,” but in practice writing instruction tends to focus on making sure students can survive the writing demands they will face in college but, possibly, never again after graduation. Think of the time spent on documentation, those weird hand-coded hyperlinks that are supposed to send readers to related texts. Good writers in any situation draw on other people’s ideas and find ways to make their sources clear, yet how many writing occasions outside academia require carefully formatted endnotes in MLA or APA format? Still, documentation takes up an awful lot of time and produces a lot of anxiety – which actually can interfere with understanding what citations are for.

Likewise, librarians have expansive definitions of information literacy – here’s the one used by the Association of College and Research Libraries:

Information literacy is the set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning.

It’s not just how to find and evaluate sources; it speaks to a broader understanding of how information is created and circulated in society. And it implies, as your statement does, that all of this is so we can engage meaningfully in the world. But in practice we spend a lot of time introducing students to local library systems and resources they will use for the next four years and probably never access again after graduation. In fact, most of our database contracts forbid graduates to use them. Yet we sure spend a lot of our time explaining how these tools work.

Student success matters. We want our students to survive and graduate. But student success isn’t the same as lifelong learning, which we all also claim to care about.  Sure, some of what students learn as writers will transfer to post-graduate situations, but a lot of it is really just . . . academic.

What can we do? I’m going to throw out a couple of big ideas and follow up with some examples that may or may not seem practical in your teaching. And hopefully in the Q&A we can kick around more ideas.

Two big ideas that have been on my mind for a long time are first, helping students understand how information flows through our information infrastructures – how it’s made and how it gets to us – and second, how the decisions people make as they use those infrastructures to create and share information are informed by the ethical stances they may, or may not, take toward the creation of knowledge.

I think this grows out of my impatience with the ways information provided by libraries (as well as on the open web) presume we are all consumers shopping for stuff. Google Search and library databases alike are kind of like shopping platforms. We plug in words and then choose the products that suit us, although we still have the problem of finding the right match in a world of abundance.

For example, in your library students might type in “information overload” and have over 26,000 possible peer-reviewed sources to choose from. So much consumer choice! But I’m guessing many students would become frustrated that none of them seem to be exactly what they’re looking for. Perhaps it’s not surprising if they randomly download some sources and cherry-pick a quote or two from the first or second pages.

So how do we make research something other than a frustrating shopping expedition that positions you as a consumer and a performer, not someone actively creating meaning? First, I think we have to reframe the activity as one that isn’t testing how well students can mimic experts (who would be unlikely to embark on a research project knowing nothing about the topic) but rather to help them see themselves as curious people who have a role in making meaning for themselves and others.

One simple way of making that distinction is to talk about finding out rather than finding sources.[i] Another is to talk about developing a research question rather than choosing and narrowing a topic. Of course, students will want to know how many sources, and what kinds, count. They’ll want to know how many pages. But try to keep the focus on open-ended curiosity and process rather than signaling what a finished product should look like.

To do this, give them practice asking questions — not just critiquing someone else’s ideas, looking for faults, but open-minded and sometimes simple questions. What is this? Why is this here? Where did it come from? How is it connected to other things? What are the implications? This practice of intellectual curiosity could start with course readings or events on campus or something in the news, and could take the form of crowd-sourced curiosity, building on one another’s questions, moving out from a source rather than breaking it down. This models curiosity as community-based questioning and promotes a sense that inquiry is a conversation and they are part of it. It also helps students think about research as a process of constructing understanding of how things fit together; it’s not a matter of shopping for the right brand of authority that you can clothe yourself in, as if for a costume party. It’s rather a communal activity of sense-making.

One possible way to stage this process (which, full disclosure, I haven’t tried): have students start with wide-open questions about an issue, delve into finding out what they can, work toward crafting a researchable question through a process of generating and revising their ideas, and in the end turn in a progress report rather than a final paper. A big part of the process that we tend to overlook is that much of an undergraduate’s time will be spent establishing context, seeing the general landscape of the issue they’re dealing with, before they can settle on a research question. This would channel their curiosity from the work of establishing a basic understanding of an issue – finding out – toward a more specific and complex question. Rather than pretend to have answers, have them make a case for why that question is worth asking, how others have begun to approach it, and what questions remain. It would keep the focus on process rather than product, and would reduce the stress of having to produce a high-stakes final paper, full of quotes and citations, while worrying about plagiarism and final exams.

Sometimes, of course, writing may not be an open-ended discovery task, it may take the form of developing an argument, with sources brought in to provide evidence and persuade, and with a definite end product. This is where the ethical practice piece can come in. How can you make your case with integrity? Integrity means avoiding adopting or rejecting evidence merely to strengthen an argument. It means being willing to change your mind rather than clinging to a predetermined answer. It means seeking out a diversity of approaches and voices and treating them fairly. It means using your skill as a writer to inform and engage, not to destroy your enemies.

I noticed that your admittedly incredibly ambitious writing program outcomes statement emphasizes rhetorical moves and the use of sources, but doesn’t include anything explicit about making ethical choices.  We tend to assume these ethical moves are obvious, but they aren’t, not in a world where arguments are things you set out to win and no holds are barred – and skillful use of rhetorical moves and selective deployment of evidence can be destructive.

I think it would be useful to tie the kinds of ethical moves and commitments that make for strong, data-informed arguments to the evaluation of sources. Making good choices isn’t so much a question of branding or appearance – does this count as a scholarly article? Does it have a killer quote I can use? – as it is a matter of what practices and principles underlie the creation of that source. Was it created by people who follow traditions of integrity? Don’t trust, say, The New York Times simply because it has a respectable brand; ask whether the journalists there adhere to journalistic standards. Likewise, don’t trust science; trust the ways scientists go about discovery, the ways they avoid tainting their findings. Don’t think about what to trust, think about why to trust. And since trust is at a very low point these days, it may be necessary to take the time to discuss what underlies good information, whether it’s the practice of peer review or a tradition of confirming sources and checking facts in news reporting.

Traditionally, both media and information literacy instruction have highlighted examining texts in isolation, whether they are videos or images or peer-reviewed articles. I think it’s important to pull back and also examine the technological systems we use as we tune into digital streams of information. How did that information find me? Is it playing to my confirmation biases? What may I be missing?

This may seem far too complex for undergraduates who are just trying to get their assignment done so they can study for the biology test tomorrow. But this is a world they are familiar with, much more so than the library databases we throw them into. You could start the discussion of source evaluation with their own lived experience of sorting through information and deciding what to share. They often have a sophisticated understanding of how rhetorical moves and platform infrastructures are shaped to influence audiences. They may well have developed their own sense of what meaning-making behavior online is ethical or not. Have them discuss how they create, share, and process information in their own lives and how they make decisions about the truth-value of the things they encounter or seek in everyday life. Then you can begin to talk about how knowledge institutions – science, scholarship, journalism – make those moves.

In practical terms, you may find the SIFT method useful as a quick way of sorting through the information that inundates us online; it is particularly useful when choosing among non-scholarly sources. Mike Caulfield developed it as a more effective alternative to lengthy checklists for making judgments about digital information.

It has four steps, and you can stop at any point in the process.

  • First, Stop: check how this information is affecting your emotions. Is it manipulating your feelings? Is it tweaking your confirmation biases?
  • Second: Investigate the source. Who published this? Wikipedia can be your friend. Maybe you can make a decision at this point about whether to invest time in the source or whether to move on.
  • If you’re still unsure, go to the third step: Find better coverage. Before you do a deep dive, see how others have addressed the issue. Is this an outlier? Does it do a good job or is there something better out there?
  • If you haven’t made a final decision yet, or want to be absolutely sure, try the fourth step: trace the sources to see if they have been used fairly and accurately.

The point is to make decisions at the speed of the internet. Nobody has time to carefully vet every piece of information they come across. In those situations, we need ways to make good choices efficiently. It gets at that need for context that we all have when faced with the unfamiliar, and it scales, given how many decisions we make about information every day.

Interestingly, Caulfield found that if students spend more time examining a source after using SIFT, they tend to second-guess themselves and go down misleading rabbit holes. If we put too much of a premium on deep critique and skepticism in situations where students don’t already have a well-established knowledge base, they run the risk of not trusting anything. I find students to be already highly skeptical and likely to think all information is produced for personal gain and is therefore suspect, whether it’s a Tweet or a scientific paper or a news article. Yet being able to outsource the work of understanding some things to people who have already done the work of investigating it in depth and with expertise is absolutely essential.

I realize my big issues – understanding what it takes to inquire with integrity and how to understand how information systems influence what we see – are not easy to address in courses that already have far too much to cover. And they may seem too complex for anxious students who come out of a school system that has trained them to think correct answers matter more than questions. But if we believe what we teach matters in the world, we should aspire to do more than train them to become stochastic parrots.

We should respect their own experience and agency as they go about the process of making meaning and encourage their own creativity and curiosity. We should give them authentic occasions for expressing themselves and make them feel safe doing so. We should convey the sense that writing and research are, at heart, the practice of freedom. And we should resist, as much as possible, the notion that the purpose of writing instruction is to train students to write fluent and error-free prose. After all, that’s what ChatGPT is for.

Illustration of a woman in a yellow striped outfit looking at a blue macaw from the 1920s.

[i] Wendy Holliday and Jim Rogers (2013), “Talking about information literacy: The mediating role of discourse in a college writing classroom,” portal: Libraries and the Academy 13(3), 257–271.

Images courtesy of NYPL and Public Domain Pictures.

5 thoughts on “information literacy, writing instruction, and the problem of stochastic parrots”

  1. Thank you for this very thoughtful and revealing talk. We are in the midst of talking about ChatGPT and AI in general and how to respond, pedagogically speaking, to its ramifications. We’ve considered some of the things you mention in your article including, of course, the importance of agency in developing intellectually. But one thing we haven’t touched on yet is the social justice issues inherent in these tools, which can remain hidden (e.g., the underpaid labor that tags inappropriate content). I have forwarded the transcript of your talk to our ad hoc ChatGPT committee, which is meeting weekly to have targeted discussions regarding the implications of ChatGPT and other AI tools.

    Thank you,
    Pamela M. Salela, Associate Professor
    University of Illinois Springfield

    1. Good to hear from you, Pamela! The Stochastic Parrots article by Gebru et al. is very good at summarizing the many social justice issues with large language models, which I suspect are common in processing-heavy AI. That sounds like a good ad hoc committee for the present moment. Best of luck with it.
