NYCDH: Community and methodology

This weekend saw the first meeting of NYCDH, an exciting initiative to join the various digital humanities efforts on campuses across the city under one umbrella group. The event, which was moderated by Ray Siemens and Lynne Siemens of the University of Victoria and DHSI, was meant to build community among organizations that can often feel disparate despite our physical proximity. Just as Digital Experiments, the NYU DH working group, tries to toggle between members’ individual research interests and an encompassing collaborative research project, NYCDH faces the challenge not only of getting Columbia students to go downtown or NYU students to go to New Jersey, but also of representing the wide range of projects taking place under the “digital humanities” banner.

One important way to draw together this disparate body—as with any subfield—is through discussions of methodology, as Dennis Tenen pointed out at the meeting. Digital humanists, especially those in literature, are sometimes accused of practicing a “new formalism,” breaking texts and languages down into their constituent parts and acting as if those parts are finite or inherently meaningful. While both the applicability and usefulness of the critique can be debated, it’s a stumbling block that’s at the heart of DH methodology: how do we quantify, in order to encode or decode, certain aesthetic or interpretive qualities? And at the same time, if you want to count the number of, say, letter pamphlets printed in the eighteenth century, don’t you first have to come up with a formal definition of a letter and a pamphlet? It seems that the tools of the digital humanities tend to push inquiry toward considerations of form. The concern should be less whether that is a legitimate scholarly stance—I think it is—and more how we can mediate the relationship between form and content.

In Digital Experiments, we’ve started thinking about these questions through a long-term collaborative project on epigraphs. From early modern plays to scholarly articles (to blog sidebars), epigraphs are exceedingly common units of text, but they’re often discarded in analysis, whether we’re using digital or more traditional methods. Text-mining programs often strip away paratextual materials like epigraphs to get to the “actual” texts, while scholars rarely quote them as evidence. As a starting point, we’re trying to think about ways to define our objects of inquiry, both empirically and formally: we’re building a relational database in which we’ll enter many different examples of epigraphs, and we’re also working with Python to attempt to identify the epigraphs within texts (perhaps based on the use of white space). Just as the epigraphs project allows members of Digital Experiments to transition between their own research interests and the group project, these methods bring together formal and more interpretive techniques. We’ll post updates on the project as it progresses on our website.
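
For the white-space idea, here is a minimal Python sketch of the kind of heuristic we have in mind (the function name and thresholds are hypothetical, not our working code): it flags short, blank-line-delimited blocks near the top of a text that end in a dash-style attribution line.

```python
import re

def find_epigraph_candidates(text, max_lines=40, max_line_length=60):
    """Flag short, isolated blocks near the top of a text that look like
    epigraphs. Purely heuristic; the name and thresholds are illustrative."""
    lines = text.splitlines()[:max_lines]
    candidates, block = [], []
    for line in lines:
        if line.strip():
            block.append(line)
            continue
        # A blank line closes the current block; keep the block if its lines
        # are short and it ends with a dash-style attribution.
        if block and all(len(l) <= max_line_length for l in block):
            if re.match(r"\s*[\u2014-]", block[-1]):
                candidates.append("\n".join(block))
        block = []
    return candidates

sample = """The Epigraphs Project

    What does big data have to say about little texts?
    --Collin Jennings

Chapter One begins here."""
print(find_epigraph_candidates(sample))
```

A heuristic this crude will miss unattributed epigraphs and catch things like dedications, which is precisely the kind of definitional problem the empirical and formal sides of the project are meant to confront together.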

Epigraph for The Epigraphs Project: “What does big data have to say about little texts?” —Collin Jennings

Teaching with DH

There’s been a lot of recent discussion about how to incorporate tools of the digital humanities into teaching, with a panel on the topic proposed for the 2014 ASECS and a book, Brett Hirsch’s Digital Humanities Pedagogy, just out from Open Book Publishers. This question goes to the heart of how to define DH—as a field, subfield, specialty, discipline, etc.—and of its role in the humanities writ large. There are a number of ways to think about using DH in teaching: we can present students with our own projects and research, teach them particular tools or programs, have them read the emerging body of theory for the topic, and ask them to produce work in digital formats, among other methods. This semester, I’m continuing my effort to use digital tools to integrate the study of material texts into the classroom. I require students to sign up for Tumblr and contribute to a collaborative class blog, in which they identify archival materials associated with our weekly readings and post commentary on them. Tumblr is supposed to be a relatively easy blogging platform, but it often proves challenging to my “digital native” students, who have to learn how to navigate a web-publishing program. Part of the goal of the exercise, then, is to teach them how to use this technology and others like it, which are becoming elements of many writing-oriented jobs. At the same time, I encourage them to open up their writing styles, taking on a “blog voice” rather than the often-stilted prose of an academic essay.

Another goal is to get them to think about the material forms in which writing takes shape, and about how those forms influence the content of the texts we are studying. In my Cultural History of Media course at Marymount Manhattan College, for example, we are studying the history of writing technologies from the invention of the alphabet through digital media. Just in the first week of class, students have produced some fabulous blog posts enriching our discussion, analyzing materials from a papyrus scroll of Plato’s Phaedrus to one of Emily Dickinson’s pressed flowers. In BritLit I, which covers Beowulf to Milton, I’m tackling the question from the other direction, asking students to find new-media adaptations of our canonical works—using a broad definition of “new media” that could include print, photography, film, etc. The idea is to track how these works have been remediated in the cultural imagination over time, on a Tumblr I’m calling “BritLit 2.0.”

While these students (and those in a previous course, BritLit II at NYU) have found some fascinating items, I sometimes wonder whether the fact that everything is presented in the digital medium—with the same layout of digital image and text—actually flattens rather than highlights the material differences between these media moments. Again, this connection between the digital and the material seems to be a common point of interest right now, with another ASECS 2014 panel addressing the issue.

These individual blogs will continue to evolve over the course of the semester, as will my thinking about the role of DH in the classroom. I think it’s important to conceptualize the ways in which we are not just transferring existing forms of writing or analysis onto our computers: new genres like blogs and tweets should allow us to conduct and present research in new ways. You can take a look at the Cultural History of Media and BritLit 2.0 blogs here and here.

Postscript

Well, it looks like the government reading our letters isn’t only a historical analogy: in the wake of the NSA leaks, the Times reveals that the government records metadata for all paper mail, too. It should remind us once again that what we think of as the quintessentially private form of communication—the sealed letter—in fact travels through many hands and many levels of government bureaucracy before reaching its recipient. This ongoing debate about publicity and privacy in a digital world becomes increasingly complicated, and revealing, when we put it in a long history of communications media and the relationship between the domestic sphere and political exigency.

Close-reading code

I spent the past week at the Digital Humanities Summer Institute at the University of Victoria in British Columbia, learning (among other things) how to use the Text Encoding Initiative guidelines to create digital editions of manuscript documents. I practiced on a set of seventeenth-century manuscript newsletters from the Bodleian Library’s Carte collection, which I photographed during a research trip last summer. I’m hoping to use these samples to develop a grant proposal for a larger-scale project for archiving newsletters, which are totally unavailable in digital form. Most researchers of early news circulation misunderstand these incredibly rich historical sources because they are so hard to access—located in often distant library archives and sometimes misclassified as personal letters.

In addition to this exciting future project (add it to the Future Projects List!), the course got me thinking about the role those in the digital humanities should play in changing definitions of reading and literacy within English departments. The documents I produced are supposed to be digital transcriptions of an existing hard-copy text, and I did my best to encode all the features of my source, from alternate spellings to shifts in handwriting. At the same time, I was both adding significant information to the document and providing my own interpretation of it. TEI allows users to create lists of people and places (to take two major categories) associated with texts. Therefore, my digital edition of just two four-page newsletters includes biographies of people from Thomas Carte, the eighteenth-century collector of the documents, to King Charles II, who appears as a topic of news. I also encoded a list of places mentioned in the news articles, providing coordinate points for the cities. This information clearly offers the researcher much more than does the original document.
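
To give a rough sense of how that added apparatus becomes machine-readable, here is a short Python sketch using the standard library’s ElementTree. The TEI fragment is a simplified, hypothetical stand-in for my actual files, which are much fuller:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical TEI fragment standing in for the lists of
# people and places described above.
TEI = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <listPerson>
    <person xml:id="carte"><persName>Thomas Carte</persName></person>
    <person xml:id="charles2"><persName>King Charles II</persName></person>
  </listPerson>
  <listPlace>
    <place xml:id="london">
      <placeName>London</placeName>
      <location><geo>51.5074 -0.1278</geo></location>
    </place>
  </listPlace>
</TEI>
"""

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(TEI)

# The encoder's added metadata is queryable in ways the manuscript alone is not.
for person in root.findall(".//tei:person", NS):
    print("Person:", person.findtext("tei:persName", namespaces=NS))

for place in root.findall(".//tei:place", NS):
    name = place.findtext("tei:placeName", namespaces=NS)
    coords = place.findtext("tei:location/tei:geo", namespaces=NS)
    print("Place:", name, coords)
```

From even two four-page newsletters, that kind of structured markup yields a small prosopography and a mappable gazetteer without anyone having to return to the photographs.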

It seems that code literacy—whether in TEI (which is based on XML), HTML, Java, or other coding languages—will increasingly be accepted as a new form of linguistic competency. Already, some graduate programs are allowing these skills to take the place of the traditional foreign-language requirement. And we could do more to connect our existing pedagogical technique of close reading with such new literacies. An insightful investigator could draw out many of the interpretive choices I made in marking up this document just from carefully reading the code. For example, I decided that it was important to correct archaic spellings—“haue” to “have,” “kingdome” to “kingdom”—but not to regularize capitalization. A student might ask: Why did I make this decision? What does it say about my goals for the markup and its application? (Answer: I would want the digital edition to be easily searchable, which is generally not affected by capitalization). As literature departments become more interdisciplinary and linguistically diverse, it should be a priority to marry our traditional pedagogical and methodological techniques with these emerging forms of literacy.
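
To make the searchability point concrete, here is a hedged Python sketch (hypothetical code, not part of my edition) of how a search-oriented view of the text might prefer the regularized readings recorded with TEI’s choice/orig/reg elements, while capitalization is simply handled at query time rather than in the markup:

```python
import xml.etree.ElementTree as ET

NS = {"tei": "http://www.tei-c.org/ns/1.0"}

# A hypothetical line of transcription: archaic spellings are corrected with
# <choice>, but the original capitalization is left alone.
LINE = """
<l xmlns="http://www.tei-c.org/ns/1.0">The King of that
  <choice><orig>kingdome</orig><reg>kingdom</reg></choice>
  shall <choice><orig>haue</orig><reg>have</reg></choice> notice</l>
"""

def searchable_text(xml_string):
    """Flatten a TEI fragment, preferring <reg> readings over <orig>."""
    elem = ET.fromstring(xml_string)
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "{http://www.tei-c.org/ns/1.0}choice":
            parts.append(child.findtext("tei:reg", default="", namespaces=NS))
        parts.append(child.tail or "")
    return " ".join("".join(parts).split())

text = searchable_text(LINE)
print(text)                           # The King of that kingdom shall have notice
print("the king of" in text.lower())  # capitalization handled at query time: True
```

A student reading this markup (or this code) could reconstruct exactly the choices described above, which is part of what makes close-reading code a plausible classroom exercise.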

In defense of recaps

David Simon demonstrated a pretty fundamental misunderstanding of the narrative form in which he operates when he came out against the now-ubiquitous phenomenon of weekly TV recapping. Simon says of recappers, “They don’t know what we’re building. And by the way, that’s true for the people who say we’re great. They don’t know. It doesn’t matter whether they love it or they hate it. It doesn’t mean anything until there’s a beginning, middle and an end.” He is, simply, wrong. TV is a serial medium, and it makes perfect sense to respond to shows on a serial and ongoing basis. Sure, the best shows have carefully thought-out dramatic arcs (which recappers follow more closely than anyone else), but those arcs are made up of individual episodes that the viewer, at least at first, can only watch one at a time. Over the course of a single season, let alone four or five, a show can change in any number of ways: it can get cancelled, actors can quit or die, or viewers can get bored and stop watching. Simon’s complaint, aside from its needless jab at his most dedicated fans, showed a surprising obtuseness about the genre in which he works.

What are … our new robot overlords?

I had a pretty exciting Wednesday this week – helping IBM make computers smarter than humans. Researchers at IBM’s Watson research facility in Yorktown Heights, N.Y., are designing a computer to play Jeopardy, and since I’m a former contestant living in the New York metro area, they asked me to come “spar” with the machine. Turns out they were looking for average Jeopardy contestants like me – those who had won no more than two games (I, sadly, placed second in my only game). Unfortunately, I signed something prohibiting me from saying how the computer did, but let’s just say it was competitive. More than competitive.

Apart from the cool factor of having a computer answer trivia in the form of a question, the underlying software could eventually help computers understand human language better and solve problems we didn’t think machines could grasp. Just another step on the way to making ourselves obsolete, I guess.

Here’s the NY Times article on the overall project from a few months ago.

Disclosure: IBM paid me a (small) consulting fee for participating in the sparring matches. I’m eager to think and write more about the effort as soon as they tell me I won’t get sued for doing so!

The language of crisis

Fascinating conference today and yesterday at Yale Law School on new media, journalism and economic models for the future. A combination of the usual suspects (Clay Shirky, Jeff Jarvis, Steve Brill) and a variety of journalists, law professors, economists, sociologists … One of the great things the conference is doing is enacting its precepts with a live video stream, live blog and constant Twitter updates – it’s even creating a feedback loop between David Carr tweeting in the audience, panelists reading the tweets and responding during their talks, and Carr asking questions in person.

One disappointing thing, though: about 80 percent of the panelists are middle-aged white men. What does it say about this language of crisis that these are the people having the discussions?