DH Makes Explicit


“I’m still looking for that nugget, that thing I can take away from DH and say ‘here’s the contribution; this is how it relates to someone like me.’”

After a digitally inclined guest lecture on campus last week, a fellow grad student pressed me with this basic question about what Digital Humanities brings to the scholarly table. I understood her mild bewilderment. For those who are not technologically inclined, DH in practice can seem like a heap of technobabble and gimmickry haphazardly tossed over, at best, quasi-scholarly inquiry. Likewise, for those who are not humanistically inclined, DH in practice can seem like a misguided use of technological equipment and computational methods.

It’s difficult to even label these reactions as misconceptions. The [intellectual, monetary] hype surrounding DH unfortunately tends to connote lofty ideals, revolutionary ontologies, the latest tools, as well as unnerving intimations of “progress” and the future of the humanities. Furthermore, Matthew Kirschenbaum has reminded readers on several occasions about the formation of the term “digital humanities,” its specific relationship with “marketing and uptake” (1), and its current use as a “tactical” term to “get things done” in the academy, be it obtaining funding or promoting a career (2).

In short, there’s a need for both reconciliation and promotion of the term’s more meaningful usages, particularly as a label that describes new practices in the humanities for curious and skeptical onlookers alike. If DH is to be inclusive, its practitioners should take care to articulate clearly their goals and methods to colleagues in all humanities disciplines, not only those who are digitally literate. If DH is to be an advocate for the humanities to the public–as Alan Liu thinks it can be (3)–clear articulation becomes more important still.

DH Makes Explicit

In a 2013 interview, Johanna Drucker recollected that the “mantra of 1990s Digital Humanities” required “making everything explicit we, as humanists, have long left implicit” (4). Her comments refer to the logic of programming–“coding” as a structure for inquiry–but this sentiment also offers an attractive and powerful model for an inclusive DH, and for its full partnership in the humanities in general.

Quickly, let’s apply that framework to a variety of examples:

Screenshot of XML markup.
While hierarchical markup languages like XML make texts machine-readable, their use first requires that textual scholars consistently analyze and describe their texts’ discrete physical characteristics.
  • Textual Studies: TEI, markup, editing. Like its analog counterpart, digital editing requires its practitioners to throw into relief bibliographic data embedded in physical texts. Markup languages like XML require the attribution of values to textual data. In a simplistic view, explaining a text to a computer requires us to explain it first to ourselves; a minimal encoding sketch follows this list. (We’re having quite a time with William Blake’s Four Zoas manuscript over in the Blake Archive.)
  • Literary History: or, the Moretti movement. When I first read Moretti’s now-mandatory Graphs, Maps, Trees (5), I found the middle “Maps” section to be the least provocative. Perhaps that initial reading holds up, but only because the methods described are also the most familiar. When asking “Do maps add anything to our knowledge of literature?”, Moretti illustrates the “centric composition” of Mary Mitford’s Our Village. The map is not an answer to anything, but rather evidence of a narrative feature that requires explanation. In other words, the map makes explicit what our brains are already doing in constructing the narrative.
  • Collaboration: DH in practice, in theory. At a recent inter-departmental panel on “Evaluating Digital Projects as Scholarship,” I was stunned to see a senior faculty member cling so tightly to the image of the isolated scholar, the sole author. Yet the incident is also evidence of the disarming effectiveness of DH in explicating, even exaggerating, the collaborative nature of scholarly inquiry, of “work,” of language. While the monograph remains the normative argumentative structure in the humanities, DH has the ability to critique these modes of production through a kind of processional remediation. In other words, while DH often remediates a variety of texts, it also remediates a variety of roles in the production of those texts and the production of knowledge. Publishers and editors give way to IT directors and programmers; grad seminars give way to graduate research assistantships. This collaborative stance also makes DH an exceptionally natural partner for critical theorists from a variety of backgrounds, whether poststructuralists, McGannian editors, or feminists. The “digital” is so inherently problematic for hegemonic, centralized [hermeneutic] authority that its position as a polemic is limited only by its increasing prevalence as common practice. New endeavors like “open peer review,” crowdsourcing, and collaborative authorship represent only the tip of the iceberg.
  • The University: departments and disciplines. The idea of “interdisciplinary study” has long been used as shorthand for “diverse research interests,” but how diverse has it usually been? Maybe an English prof who crosses the quad to visit the history department. DH has proven to be an effective identifier of false boundaries within the university structure, particularly the “big one” between sciences and the humanities. Increasingly common vocabularies and technologies have made it possible for humanists to approach the sciences, and vice versa, with informed critical perspectives. It’s happening at the undergraduate level, too, with examples like Stanford’s new CS+English dual major or U of R’s newly revised Digital Media Studies major.
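As promised above, here is a minimal, hypothetical sketch of the kind of explicitness markup demands. It uses Python’s standard xml.etree.ElementTree to encode a single manuscript line containing a deletion and an insertion; the element and attribute names loosely follow TEI conventions (l, del, add), but the example is an illustration, not actual Blake Archive practice, and the words are placeholders.

```python
# A minimal, hypothetical sketch of what TEI-style markup forces us to
# make explicit: a manuscript line with one word struck out and another
# written above it. Element/attribute choices loosely follow TEI
# conventions (<l>, <del>, <add>); the content is illustrative only.
import xml.etree.ElementTree as ET

line = ET.Element("l", n="1")            # a verse line, explicitly numbered
line.text = "The song of "
deleted = ET.SubElement(line, "del", rend="strikethrough")
deleted.text = "Urizen"                  # the scribe struck this word out
deleted.tail = " "
added = ET.SubElement(line, "add", place="above")
added.text = "Los"                       # and wrote this one above the line
added.tail = " begins"

print(ET.tostring(line, encoding="unicode"))
# <l n="1">The song of <del rend="strikethrough">Urizen</del> <add place="above">Los</add> begins</l>
```

Every judgment a reader makes silently, that a word was struck out, that another was added above the line, must here be named, placed, and given a value before the computer can see it at all.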

Where Have All the Computers Gone?

For the most part, computer technology is de-emphasized in this outward-facing characterization of DH, and yet it’s because of this de-emphasis that I believe this strategy to be the most advantageous for communicating with peers–and the public–outside of our communities of digital scholars. Coding and building are important for practitioners of DH, but the mere use of technology can’t be why DH is important for the humanities. Instead, when we use DH to “make explicit,” we appeal to a common method of all critical inquiry: to identify and articulate underlying ideological operations, whether they exist in cultural structures, like gender, or cultural artifacts, like literature.

DH’s unique contribution, then, comes with the specific manifestation of this “classic” line of inquiry through new technologies that help us ask, ideally, better questions. And, as we can see with even a cursory list of examples, it’s not simply the “products” of DH that make explicit, but the practice as well.

—————

Eric Loy is a PhD student in the Dept. of English at the University of Rochester.

—————

1. Kirschenbaum, Matthew. “What is Digital Humanities and What’s It Doing in English Departments?” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: GC CUNY, 2013. Web. <http://dhdebates.gc.cuny.edu/debates/text/38>

2. Kirschenbaum, Matthew. “Digital Humanities As/Is a Tactical Term.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: GC CUNY, 2013. Web. <http://dhdebates.gc.cuny.edu/debates/text/48>

3. Liu, Alan. “Where is Cultural Criticism in the Digital Humanities?” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: GC CUNY, 2013. Web. <http://dhdebates.gc.cuny.edu/debates/text/20>

4. Berdan, Jennifer. “The Emerging Field of Digital Humanities: An Interview with Johanna Drucker.” InterActions: UCLA Journal of Education and Information Studies 9.2 (2013). Web. <https://escholarship.org/uc/item/1355x2bn>

5. Moretti, Franco. Graphs, Maps, Trees: Abstract Models for Literary History. New York: Verso, 2007. Print.

In “The Shape of the Civil War,” the Heritage of DH

“Every epoch, in fact, not only dreams the one to follow, but in dreaming, precipitates its awakening. It bears its end within itself and unfolds it cunningly.” – Walter Benjamin, The Arcades Project

The early twentieth century German philosopher and critic Walter Benjamin was convinced that the popular architecture and cultural technologies of the nineteenth century—iron-and-glass arcades, panoramas, and exhibition halls—were seedlings of modern ways of thinking about and interacting with the world. The increasingly virtual, dreamlike, and commercial culture of the twentieth century was not at all novel, he thought, nor did it represent a sudden break with the traditions of the past. Rather, the “mass culture” of the 1920s and 30s was simply the convergence of a number of cultural trends that had developed decades, even a century, earlier in the form of Victorian escapism, alienation, and hyperconsumption—which were themselves a deferred outgrowth of Enlightenment thought, and so on. “They were destined for this end,” he writes, “from the beginning.”

It is all too easy, and self-congratulatory, to privilege the present—to think of it as new and unprecedented, as an always-peaking wave that the well-prepared can ride confidently. There is an undoubtedly euphoric feeling associated with participating in a moment of innovational upheaval, and of being the “ideal” customer or user of a new product—think of the social reward system built around purchasing new technology, or rapid consumption of today’s (but certainly not yesterday’s!) viral trend. But as Benjamin suggests, we live in a present assembled out of the materials of the past, rather than one that willed itself into existence ex nihilo.

This is the theme of a wealth of contemporary scholarship on technology. It is now a truism—even a traditionalist like Simon Schama promotes this idea—that the 1,500-year-old Talmud, with its endlessly cross-referenced, hyper-embedded page layout, is a direct ancestor of and perhaps even model for the World Wide Web. In his recent books Writing on the Wall and The Victorian Internet, journalist Tom Standage presents a convincing case that Martin Luther’s hammered theses and nineteenth century telecommunication not only resemble social media and the Internet, respectively; they also established the intellectual and social conditions necessary for their creation. These arguments are not reducible to pattern-finding, nor do they simply hinge on visual or structural coincidences, or on modern biases projected into the past. Rather, they show that the needs addressed by contemporary technology are deeply, even primordially embedded in human thought and desire, and that they find a proper expression in each successive phase of cultural development. In other words, the telegraph is not the cause of the Internet. Rather, the telegraph and the Internet arise from the same cause.

At a recent talk at the University of Rochester, Civil War historian (and University of Richmond president) Ed Ayers, a pioneer in digital humanities research and infrastructure-building, made a similar and compelling case about the genealogy of DH. Confronting the popular claim that DH is simply a new coat of (bureaucratic and distracting) paint on traditional humanistic methods, Ayers discussed at length the “History of the Civil War in the United States,” a most unusual visual timeline from late nineteenth century historian Arthur Hodgkin Scaife’s “Comparative and Synoptical System of History Applied to All Countries.”

Civil War Chart

Most simply described as a geographic chronology, Scaife’s chart attempts, both ingeniously and awkwardly, to illustrate the “shape” of the Civil War. It tracks Union and Confederate troop movements over time and through space in parallel bands, one for each state where hostilities took place, while bar graphs to either side allow comparative readings of two dubiously interrelated statistics: the manpower of each army and the value of each side’s respective currency. It is unclear whether Scaife actually thought these things were connected, or whether he was simply demonstrating that any two reasonably derived statistics can be molded into an apparent correlation by juxtaposition alone.

Scaife’s chart, Ayers argues, is a significant early expression of the quantificational impulse that drives digital humanities. It is easy to imagine this work—which is at once a graph, map, and tree, in Franco Moretti’s terms—making the rounds in the DH community, even earning grant funding for more complex or interactive implementations. DH scholars, Ayers suggested, would do well to embrace this material as evidence that the urge to represent complex sets of social, cultural, and historical information in visual form precedes computers and does not at all replace scholarly historical research. Scaife read several dozen volumes of the best Civil War history available just thirty years after the war in order to create this one page. The effort undertaken is undeniable.
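Indeed, to see how directly Scaife’s visual grammar maps onto present-day tools, here is a minimal sketch in Python using matplotlib. The numbers below are illustrative placeholders, not Scaife’s historical figures; what the sketch reproduces is only the chart’s basic grammar: parallel state bands marking spans of campaign activity over time, with a side panel for a comparative statistic.

```python
# A minimal sketch (not Scaife's actual data) of his chart's grammar:
# parallel horizontal bands, one per state, marking spans of hostilities
# over time, plus a side panel comparing an army-strength statistic.
# All values below are illustrative placeholders, not historical figures.
import matplotlib.pyplot as plt

states = ["Virginia", "Tennessee", "Georgia"]
# (start_year, duration) spans of fighting within each state band -- hypothetical
campaign_spans = {
    "Virginia": [(1861.3, 0.5), (1862.2, 1.1), (1864.0, 1.2)],
    "Tennessee": [(1862.0, 0.8), (1863.5, 0.6)],
    "Georgia": [(1864.3, 0.7)],
}
troop_strength = [120, 80, 60]  # thousands, per state band -- hypothetical

fig, (bands, bars) = plt.subplots(
    1, 2, figsize=(10, 3), gridspec_kw={"width_ratios": [4, 1]}, sharey=True
)
for i, state in enumerate(states):
    # one horizontal band per state; each tuple is a span of hostilities
    bands.broken_barh(campaign_spans[state], (i - 0.3, 0.6))
bands.set_yticks(range(len(states)))
bands.set_yticklabels(states)
bands.set_xlim(1861, 1865.5)
bands.set_xlabel("Year")
bands.set_title("Campaign activity by state (parallel bands)")

bars.barh(range(len(states)), troop_strength)
bars.set_xlabel("Troops (thousands)")
bars.set_title("Comparative statistic")

plt.tight_layout()
plt.show()
```

That a one-page visual program devised in the 1890s can be restated in thirty-odd lines of a modern plotting library is, in miniature, Ayers’s point about the chart’s untimely ingenuity.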

As a historian and a progenitor of data visualization, Scaife seems to have been done in by his belief in completism. It is impossible to convey the social complexity, the political causes, and the human cost of a tragedy like the Civil War in a single chart, or even in a single volume; few events in human history are as well-documented and as bottomlessly analyzable. His claim that his synoptic method could be “applied to all countries” is equally tough to swallow. According to the Slate magazine article in which Ayers first learned about Scaife, only a few other charts were ever published, including ones documenting the “‘Cuban Question,’ English history, and the life of William Gladstone.” In its failure to live up to its lofty ambitions, Ayers noted, the chart works equally as a warning and as an heirloom: even as we celebrate it for its untimely ingenuity, we must also recognize in it the folly of expecting new methods to “solve” the problems of humanities research.

Eitan Freedenberg is an Andrew W. Mellon Fellow in Digital Humanities and a PhD student in the Graduate Program in Visual and Cultural Studies at the University of Rochester.