
The College Essay Is Still Very Much Alive

The German philosopher Martin Heidegger, in his work The Question Concerning Technology, describes the ways in which technology shapes and orders the world through the example of a hydroelectric plant on the Rhine River. In the example, the beauty of the river is overshadowed by its modern role as a “water-power supplier,” which “derives from the essence of the power station.” The river thus becomes valuable only for the sake of its role in production—its output—rather than its inherent beauty.

The image of the power plant on the Rhine demonstrates the ways in which technology can shift from being a helpful tool to an ordering principle of human life. The tension between these two ways of approaching technology crops up with almost every significant innovation, and the release of OpenAI’s ChatGPT is no exception. The unveiling of the software—an uncannily sophisticated large language model—was met with particular consternation (or, in some cases, celebration) for its potential impact on higher education. Stephen Marche’s December 6th article for The Atlantic is a prominent example of the sensationalized response to ChatGPT, in which Marche boldly argues that with the innovations in AI, “the college essay is dead.” However, rather than proclaiming the merits of AI as an educational tool (as he seems to have intended), Marche reduces students’ work to technological output, valuable for its product rather than its process.

Marche argues that the increasing sophistication of AI will not only create new opportunities for students to cheat or use language-generating software to write their essays, but that it will make the college essay obsolete altogether. The article takes particular aim at the unwillingness of the humanities to adapt to groundbreaking technologies. He writes that “In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?” Couching sweeping statements such as these in a broader argument about the lack of collaboration and mutual understanding between the fields of humanities and technology, Marche misrepresents the value of the humanities as a discipline and therefore the role that new technologies can and do play in humanities classrooms.

While critiques of Marche’s article have focused primarily on the role that AI will play in the changing landscape of academia, little attention has been paid to the article’s fundamental misunderstanding of the value of the college essay—and of liberal arts education more broadly. If we cannot articulate why the college essay has been the cornerstone of education in the humanities, we cannot determine whether the essay will be helped or hurt as we reach new horizons of technological advancement.

This is where Marche’s article falls short. He writes: “Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?”

Ultimately, while an essay or dissertation is often the product of learning in the humanities, it is not the core substance. Seminar discussions, theoretical inquiries, stages of peer reviewing, oral defenses—these are the foundation upon which essays are constructed. And, during a time in which calls for diversity in higher academia have reached a fever pitch, these elements of an education in the humanities require students to investigate their unique identities and the role of those identities in the theoretical conversations into which they intend to enter. Because of this, the humanities are not simply an education in what to think, but how to think. As Plutarch famously stated: “[T]he mind does not require filling like a bottle, but rather, like wood, it only requires kindling to create in it an impulse to think independently and an ardent desire for the truth.”

Perhaps, as Daniel Lametti argues in his own response to Marche’s article, ChatGPT can be used like other tools such as Grammarly or EasyBib. However, all of these automated tools are only as good as their input—bibliographic software will replicate whatever errors or stylistic quirks are present in the article cited (e.g., if a journal stylizes an author’s name in all capital letters or renders the volume and issue number in a way that the automated system cannot cleanly parse). If students trust the software without ever bothering to learn the intricacies of Chicago or MLA style, they can produce woefully incorrect citations without the ability to identify and correct the errors. ChatGPT, likewise, produces language by replicating linguistic patterns found in the millions of pages of data fed into the system—so, while it can produce a whole host of materials, from essays to poems to music, it cannot (to quell any fears of a technological dystopia) think for itself. A simple Twitter search for “AI malapropism” reveals hundreds of anecdotal examples of AI systems regurgitating text based on its probability of occurrence rather than the contextually appropriate wording intended by the author. Thus, if students do not understand what makes for good prose or stylistic academic writing, or if they cannot formulate a nuanced and original argument based on primary source material, how are they to judge whether the output of ChatGPT offers a more compelling paper than what they have (or could have) written themselves?

Heidegger concludes his reflections on technology not by villainizing or valorizing technology, but instead by arguing that technology is valuable insofar as it is a tool to be used—not a manner of being in the world. Distilling the value of a student’s knowledge into simple output indicates that we have not made technology more human, but that we have come to regard humans as mere technology. ChatGPT and other AI innovations can certainly be tools in the arsenal of the humanist. But the humanist’s value, which Marche so adamantly questions, is in their ability to help students use the tools at their disposal rather than become merely machines of informational output.