Virtual and Augmented Reality Digital (and/or Deformative?) Humanities Institute at Duke

I am excited to be participating in the NEH-funded Virtual and Augmented Reality Digital Humanities Institute — or V/AR-DHI — next month (July 23 – August 3, 2018) at Duke University. I am hoping to adapt “deformative” methods (as described by Mark Sample following a provocation from Lisa Samuels and Jerome McGann) as a means of transformatively interrogating audiovisual media such as film and digital video in the spaces opened up by virtual and augmented reality technologies. In preparation, I have been experimenting with photogrammetric methods to reconstruct the three-dimensional spaces depicted on two-dimensional screens. The results, so far, have been … modest — nothing yet in comparison to artist Claire Hentschker’s excellent Shining360 (2016) or Gregory Chatonsky’s The Kiss (2015). There is something interesting, though, about the dispersal of the character Neo’s body into an amorphous blob and the disappearance of bullet time’s eponymous bullet in this scene from The Matrix, and there’s something incredibly eerie about the hidden image behind the image in this famous scene from Frankenstein, where the monster’s face is first revealed and his head is made virtually to protrude from the screen through a series of jump cuts. Certainly, these tests stand in an intriguing (if uncertain) deformative relation to these iconic moments. In any case, I look forward to seeing where (if anywhere) this leads, and to experimenting further at the Institute next month.

The Video Essay: Writing with Video About Film and Media

This fall, I am excited to teach a new course, “The Video Essay: Writing with Video About Film and Media,” as a part of Stanford’s Introductory Seminars program. Geared towards sophomores from any major, this small class will combine practical instruction in video editing, analysis and discussion of exemplary video essays, hands-on lab sessions, and group critique of student work.

The course draws essential inspiration from the NEH-funded “Scholarship in Sound & Image” workshop, organized by Christian Keathley and Jason Mittell at Middlebury College, which I participated in back in 2015.

More info about the course can be found on Stanford’s Introductory Seminars website.

#SCMS17 Conference Program

The conference program for the 2017 SCMS conference in Chicago is now available online (opens as PDF). As I mentioned recently, I will be participating in panel K3 (Friday, March 24, 2017, 9:00-10:45am) — a workshop dedicated to “Deformative Criticism and Digital Experimentations in Film & Media Studies.”

Speculative Data: Full Text, MLA 2016 #WeirdDH

[Slide 1]

Below you’ll find the full text of my talk from the Weird DH panel organized by Mark Sample at the 2016 MLA conference in Austin, Texas. Other speakers on the panel included Jeremy Justus, Micki Kaufman, and Kim Knight.

***

Speculative Data: Post-Empirical Approaches to the “Datafication” of Affect and Activity

Shane Denson, Duke University

A common critique of the digital humanities questions the relevance (or propriety) of quantitative, data-based methods for the study of literature and culture; in its most extreme form, this type of criticism insinuates a complicity between DH and the neoliberal techno-culture that turns all human activity, if not all of life itself, into “big data” to be mined for profit. Now, it may sound from this description that I am simply setting up a strawman to knock down, so I should admit up front that I am not wholly unsympathetic to the critique of datafication. But I do want to complicate things a bit. Specifically, I want to draw on recent reconceptions of DH as “deformed humanities” – as an aesthetically and politically invested field of “deformance”-based practice – and describe some ways in which a decidedly “weird” DH can avail itself of data collection in order to interrogate and critique “datafication” itself.

[Slide 2]

My focus is on work conducted in and around Duke University’s S-1: Speculative Sensation Lab, where literary scholars, media theorists, artists, and “makers” of all sorts collaborate on projects that blur the boundaries between art and digital scholarship. The S-1 Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”

[Slide 3]

One such project, Manifest Data, uses a piece of “benevolent spyware” that collects and parses data about personal Internet usage in such a way as to produce 3D-printable sculptural objects, thus giving form to data and reclaiming its personal value from corporate cooptation. In a way that is both symbolic and material, this project counters the invisibility and “naturalness” of mechanisms by which companies like Google and Facebook expropriate value from the data we produce. Through a series of translations between the digital and the physical—through a multi-stage process of collecting, sculpting, resculpting, and manifesting data in virtual, physical, and augmented spaces—the project highlights the materiality of the interface between human and nonhuman agencies in an increasingly datafied field of activity. (If you’re interested in this project, which involves “data portraits” based on users’ online activity and even some weird data-driven garden gnomes designed to dispel the bad spirits of digital capital, you can read more about it in the latest issue of Hyperrhiz.)

[Slide 4]

Another ongoing project, about which I will say more in a moment, uses data collected through (scientifically questionable) biofeedback devices to perform realtime collective transformations of audiovisual materials, opening theoretical notions of what Steven Shaviro calls “post-cinematic affect” to robustly material, media-archaeological, and aesthetic investigations.

[Slide 5]

These and other projects, I contend, point the way towards a truly “weird DH” that is reflexive enough to suspect its own data-driven methods but not paralyzed into inactivity.

Weird DH and/as Digital Critical (Media) Studies:

So I’m trying to position these projects as a form of weird digital critical (media) studies, designed to enact, and to reflect on (in increasingly self-reflexive ways), the use of digital tools and processes for the interrogation of the material, cultural, and medial parameters of life in digital environments.

[Slide 6]

Using digital techniques to reflect on the affordances and limitations of digital media and interfaces, these projects are close in spirit to new media art, but they also resonate with practices and theories of “digital rhetoric,” as described by Doug Eyman, with Gregory Ulmer’s “electracy,” or with Casey Boyle’s posthuman rhetoric of multistability, which celebrates the rhetorical power of digital glitches to expose the affordances and limitations of computational media within a broader interagential, relational field that includes both humans and nonhumans. In short, these projects enact what we might call, following Stanley Cavell, the “automatisms” of digital media – the generative affordances and limitations that are constantly produced, reproduced, and potentially transformed or “deformed” in creative engagements with media. Digital tools are used in such a way as to problematize their very instrumentality, hence moving towards a post-empirical or post-positivistic form of datafication as much as towards a post-instrumental digitality.

[Slide 7]

Algorithmic Nickelodeon / Datafied Attention:

My key example is a project tentatively called the “algorithmic nickelodeon.” Here we use consumer-grade EEG headsets to interrogate the media-technical construction and capture of human attention, and thus to complicate datafication by subjecting it to self-reflexive, speculative, and media-archaeological operations. The devices in question cost about $100 and are marketed as tools for improving concentration, attention, and memory. The headset measures a variety of brainwave activity and, by means of a proprietary algorithm, computes values for “attention” and “meditation” that can be tracked and, with the help of software applications, trained and supposedly optimized. In the S-1 Lab, we have sought to tap into these processes not just to criticize the scientifically dubious nature of these claims but to probe and better understand the nature of the automatisms and interfaces at work here and in media of attention more generally. Specifically, we have designed a film- and media-theoretical application of the apparatus, which allows us to think early and contemporary moving images together, to conceive pre- and post-cinema in terms of their common deviations from the attention economy of classical cinema, and to reflect more broadly on the technological-material reorganizations of attention involved in media change. This is an emphatically experimental (that is, speculative, post-positivistic) application, and it involves a sort of post-cinematic reenactment of early film’s viewing situations in the context of traveling shows, vaudeville theaters, and nickelodeons. With the help of a Python script written by lab member Luke Caldwell, a group of viewers wearing the Neurosky EEG devices influence the playback of video clips in real time, for example changing the speed of a video or the size of the projected image in response to changes in attention as registered through brain-wave activity.
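To make the mechanics a bit more concrete, here is a minimal sketch of the kind of feedback loop involved. This is not Luke Caldwell’s script: it assumes a placeholder read_attention() function standing in for the headset’s serial interface, uses OpenCV for playback, and chooses all names and parameters purely for illustration.

# Minimal sketch of an attention-modulated playback loop (illustrative only;
# not the S-1 Lab's actual script). read_attention() is a stand-in for the
# EEG headset interface and here just returns noise.
import random
import cv2

def read_attention():
    """Return a 0-100 'attention' value (placeholder for the EEG reading)."""
    return random.randint(0, 100)

def play(path, base_delay_ms=40):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        attention = read_attention()
        # Lower attention shortens the inter-frame delay, i.e. speeds the clip up.
        delay = max(1, int(base_delay_ms * attention / 100))
        # Higher attention enlarges the projected image; lower attention shrinks it.
        scale = 0.5 + attention / 100
        frame = cv2.resize(frame, None, fx=scale, fy=scale)
        cv2.imshow("algorithmic nickelodeon (sketch)", frame)
        if cv2.waitKey(delay) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    play("clip.mp4")

In the installation itself the values come from a group of viewers wearing the headsets rather than from a single (simulated) reading, but the basic loop is the same.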

At the center of the experimentation is the fact of “time-axis manipulation,” which Friedrich Kittler highlights as one of the truly novel affordances of technical media, like the phonograph and cinema, that arose around 1900 and marked, for him, a radical departure from the symbolic realms of pre-technical arts and literature. Now it became possible to inscribe “reality itself,” or to record a spectrum of frequencies (like sound and light) directly, unfiltered through alphabetic writing; and it became possible as well to manipulate the speed or even playback direction of this reality.

[Slide 9]

Recall that the cinema’s standard of 24 fps only solidified and became obligatory with the introduction of sound, as a solution to a concrete problem introduced by the addition of a sonic register to filmic images. Before the late 1920s, and especially in the first two decades of film, there was a great deal of variability in projection speed, and this was “a feature, not a bug” of the early cinematic setup. Kittler writes: “standardization is always upper management’s escape from technological possibilities. In serious matters such as test procedures or mass entertainment, TAM [time-axis manipulation] remains triumphant. [….] frequency modulation is indeed the technological correlative of attention” (Gramophone Film Typewriter 34-35). Kittler’s pomp aside, his statement highlights a significant fact about the early film experience: Early projectionists, who were simultaneously film editors and entertainers in their own right, would modulate the speed of their hand-cranked apparatuses in response to their audience’s interest and attention. If the audience was bored by a plodding bit of exposition, the projectionist could speed it up to get to a more exciting part of the movie, for example. Crucially, though: the early projectionist could only respond to the outward signs of the audience’s interest, excitement, or attention – as embodied, for example, in a yawn, a boo, or a cheer.

[Slide 10]

But with the help of an EEG, we can read human attention – or some construction of “attention” – directly, even in cases where there is no outward or voluntary expression of it, and even without its conscious registration. By correlating the speed of projection to these inward and involuntary movements of the audience’s neurological apparatus, such that low attention levels cause the images to speed up or slow down, attention is rendered visible and, to a certain extent, opened to conscious and collective efforts to manipulate it and the frequency of images now indexed to it.

According to Hugo Münsterberg, who wrote one of the first book-length works of film theory in 1916, cinema’s images already embody, externalize, and make visible the faculties of human psychology; “attention,” for example, is said to be embodied by the close-up. With our EEG setup, we can literalize Münsterberg’s claim by correlating higher attention levels with a greater zoom factor applied to the projected image. If the audience pays attention, the image grows; if attention flags, the image shrinks. But this literalization raises more questions than it answers, it would seem. On the one hand, it participates in a process of “datafication,” turning brain wave patterns into a stream of data called “attention,” whose relation to attention in the ordinary sense is altogether unclear. On the other hand, this datafication simultaneously opens up a space of affective or aesthetic experience in which the problematic nature of the experimental “set-up” announces itself to us in a self-reflexive doubling: we realize suddenly that “it’s a setup”; “we’ve been framed” – first by the cinema’s construction of attentive spectators and now by this digital apparatus that treats attention as an algorithmically computed value.

So in a way, the apparatus is a pedagogical/didactic tool: it not only allows us to reenact (in a highly transformed manner) the experience of early cinema, but it also helps us to think about the construction of “attention” itself in technical apparatuses both then and now. In addition to this function, it also generates a lot of data that can indeed be subjected to statistical analysis, correlation, and visualization, and that might be marshaled in arguments about the comparative medial impacts or effects of various media regimes. Our point, however, remains more critical, and highly dubious of any positivistic understanding of this data. The technocrats of the advertising industry, the true inheritors of Münsterberg the industrial psychologist, are in any case much more effective at instrumentalizing attention and reducing it to a psychotechnical variable. With a sufficiently “weird” DH approach, we hope to stimulate a more speculative, non-positivistic, and hence post-empirical relation to such datafication. Remitting contemporary attention procedures to the early establishment of what Kittler refers to as the “link between physiology and technology” (73) upon which modern entertainment media are built, this weird DH aims not only to explore the current transformations of affect, attention, and agency – that is, to study their reconfigurations – but also potentially to empower media users to influence such configurations, if only on a small scale, rather than leave it completely up to the technocrats.

Speculative Data #WeirdDH #MLA16 #S107

[Image: Storify of the #WeirdDH Twitter discussion]

The “Weird DH” panel at MLA 2016 in Austin, chaired by Mark Sample, was great fun, and it generated quite a bit of discussion, both online and off. Luckily, Eileen Clancy preserved the Twitter discussion in a Storify (https://storify.com/clancynewyork/weird-dh), so in case you couldn’t make it out last week, you can still catch up on some of the topics and reactions to the talks by Jeremy Justus, Micki Kaufman, Kim Knight, and myself.

(If I get around to it, I will also be posting the full text of my talk very soon.)

Conversations in the Digital Humanities at Duke

Today, Oct. 2, 2015, the Franklin Humanities Institute, the Wired! Lab, the PhD Lab in Digital Knowledge, and HASTAC@Duke will be presenting “Conversations in the Digital Humanities,” the inaugural event of the new Digital Humanities Initiative at Duke University. More information about the event, in which I will be participating alongside colleagues from the S-1: Speculative Sensation Lab, can be found on the FHI website.

Also, all of the 10-minute “lightning talks” will be live-streamed. The first block of sessions, from 2:15-3:45pm EST, will be streamed here, and the second block, from 4:00-5:40pm, will be viewable here. (Apparently, the videos will be archived and available after the fact as well.)

Here is the complete schedule:

2:00 – 2:15
Welcome and Introduction to Digital Humanities Initiative

2:15 – 3:45 
Session 1 (10 minutes per talk)

  1. Project Vox (Andrew Janiak, and Liz Milewicz)
  2. NC Jukebox (Trudi Abel, Victoria Szabo)
  3. Visualizing Cultures: The Shiseido Project (Gennifer Weisenfeld)
  4. Going Global in Mughal India (Sumathi Ramaswamy)
  5. Israel’s Occupation in the Digital Age (Rebecca Stein)
  6. Digital Athens: Archaeology meets ArcGIS (Tim Shea, Sheila Dillon)
  7. Early Medieval Networks (J. Clare Woods)

3:45 – 4:00
Coffee Break

4:00 – 5:40 
Session 2 (10 minutes per talk)

  1. Painting the Apostles – A Case Study in “The Lives of Things” (Mark Olson, Mariano Tepper, and Caroline Bruzelius)
  2. Digital Archaeology: From the Field to Virtual Reality (Maurizio Forte)
  3. The Memory Project (Luo Zhou)
  4. Veoveo, children at play (Raquel Salvatella de Prada)
  5. “Things to Think With”: Weird DH, Data, and Experimental Media Theory (S-1 Lab)
  6. s_traits, Generative Authorship and the Emergence Lab (Bill Seaman and John Supko)
  7. Found Objects and Fireflies (Scott Lindroth)
  8. Project Provoke (Mary Caton Lingold and others)

5:40 – 6:00 
Reception

Things to Think With

[Image: NeuroSky MindWave EEG headset]

As a late addition to the program, the Duke S-1 Speculative Sensation Lab will be participating in “Conversations in the Digital Humanities” this coming Friday, October 2, 2015, at the Franklin Humanities Institute at Duke. The event, which will consist of a series of brief “lightning talks” on a range of topics that run the gamut of current DH work, will take place from 2:00-6:00pm in the FHI Garage in Smith Warehouse, Bay 4. More info here: Conversations in the Digital Humanities.

Here is the abstract for the S-1 Lab’s presentation, which I will be participating in along with Lab co-director Mark Olson and our resident programmer Luke Caldwell:

“Things to Think With”: Weird DH, Data, and Experimental Media Theory

S-1 Speculative Sensation Lab

The S-1 Speculative Sensation Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”

In this presentation, we discuss a current project that utilizes consumer-grade EEG headsets, in conjunction with a custom Python script by lab member Luke Caldwell, to reflect on the contemporary shape of “attention,” as it is constructed and addressed in individual and networked forms across media ranging from early cinema to “post-cinema.”

Sculpting Data (and Painting Networks)

[Animation: Manifest Data gnome]

On March 28, 2015, members of the Duke S-1 Speculative Sensation Lab will take over a panel at the 2015 AEGS Conference <how do you do Digital Humanities?>. (See here for the conference website, which includes the full program.) General conference info:

The conference will be held in Tompkins Hall on the NC State University campus in Raleigh, NC, on Friday, March 27th and Saturday, March 28th. Friday evening we will host a keynote panel of Digital Humanities scholars. These scholars will discuss how they “do” Digital Humanities in their research and pedagogy. On Saturday, participants will present their research in 15-minute presentations.

Again, the final panel of the conference, Session IV (1:55 – 3:10pm on Saturday, March 28), will be devoted to the S-1 Lab’s recent work, especially the Manifest Data project that I have been posting about here. Titled “Digital Metabolisms: Manifesting Data as a Collaborative Research Process,” the panel consists of the following presentations:

Amanda Starling Gould, Duke University, “Digital Metabolism: Using Digital Tools to Hack Humanities Research”

Luke Caldwell, Duke University, “Leveraging Benevolent Spyware for Humanities Research”

Libi Striegl, Duke University, “3D Printing as Artistic Research Intervention”

Karin & Shane Denson, Duke University, “Sculpting Data”

David Rambo, Duke University, “Manifest Data as Digital Manifest Destiny”

(Observant readers of this blog will notice that I am to give two presentations on March 28: both at NC State and at the SCMS conference in Montreal. In fact, Karin will be representing the two of us in Raleigh, but we’re putting together some presentation materials that we’re quite proud of — and that we think will creatively solve the logistical problems of being in two places at once! More soon!)

Mario Modding Madness

[Video still from the split-screen presentation]

In case you missed it: you can watch a split-screen video presentation of my digital humanities-oriented talk, “Visualizing Digital Seriality,” which I gave last Friday, January 30, 2015, at Duke University — here (or click the image above).

More about the project can be found here.

Visualizing Digital Seriality, Or: All Your Mods Are Belong to Us!

In this post, I want to outline some ongoing work in progress that I’ve been pursuing as part of my postdoctoral research project on seriality as an aesthetic form and as a process of collectivization in digital games and gaming communities. The larger context, as readers of this blog will know, is a collaborative project I am conducting with Andreas Jahn-Sudmann of the Freie Universität Berlin, titled “Digital Seriality” — which in turn is part of an even larger research network, the DFG Research Unit “Popular Seriality–Aesthetics and Practice.” I’ll touch on this bigger picture here and there as necessary, but I want to concentrate more specifically in the following on some thoughts and research techniques that I’ve been developing in the context of Victoria Szabo’s “Historical & Cultural Visualization” course, which I audited this semester at Duke University. In this hands-on course, we looked at a number of techniques and technologies for conducting digital humanities-type research, including web-based and cartographic research and presentation, augmented and virtual reality, and data-intensive research and visualization. We engaged with a great variety of tools and applications, approaching them experimentally in order to evaluate their particular affordances and limitations with respect to humanities work. My own engagements were guided by the following questions: How might the tools and methods of digital humanities be adapted for my research on seriality in digital games, and to what end? What, more specifically, can visualization techniques add to the study of digital seriality?

I’ll try to offer some answers to these questions in what follows, but let me indicate briefly why I decided to pursue them in the first place. To begin with, seriality challenges single-author and oeuvre- or work-centric approaches, as serialization processes unfold across oftentimes long temporal frames and involve collaborative production processes — including not only team-based authorship in industrial contexts but also feedback loops between producers and their audiences, which can exert considerable influence on the ongoing serial development. Moreover, such tendencies are exacerbated with the advent of digital platforms, in which these feedback loops multiply and accelerate (e.g. in Internet forums established or monitored by serial content producers and, perhaps more significantly, in real-time algorithmic monitoring of serialized consumption on platforms like Netflix), while the contents of serial media are themselves subject to unprecedented degrees of proliferation, reproduction, and remix under conditions of digitalization. Accordingly, an incredible amount of data is generated, so that it is natural to wonder whether any of the methods developed in the digital humanities might help us to approach phenomena of serialization in the digital era. In the context of digital games and game series, the objects of study — both the games themselves and the online channels of communication around which gaming communities form — are digital from the start, but there is such an overwhelming amount of data to sort through that it can be hard to see the forest for the trees. As a result, visualization techniques in particular seem like a promising route to gaining some perspective, or (to mix metaphors a bit) for establishing a first foothold in order to begin climbing what appears to be an insurmountable mountain of data. Of particular interest here are: 1) “distant reading” techniques (as famously elaborated by Franco Moretti), which might be adapted to the objects of digital games, and 2) tools for network analysis, which might be applied in order to visualize and investigate social formations that emerge around games and game series.

Inter-Ludic Seriality

Before elaborating on how I have undertaken to employ these approaches, let me say a bit more about the framework of my project and the theoretical perspective on digital seriality that Andreas Jahn-Sudmann and I have developed at greater length in our jointly authored paper “Digital Seriality.” Our starting point for investigating serial forms and processes in games and gaming communities is what we call “inter-ludic seriality” — that is, the serialization processes that take place between games, establishing series such as Super Mario Bros. 1, 2, 3 etc. or Pokemon Red and Blue, Gold and Silver, Ruby and Sapphire, Black and White etc. For the most part, such inter-ludic series are constituted by fairly standard, commercially motivated practices of serialization, expressed in sequels, spin-offs, and the like; accordingly, they are a familiar part of the popular culture that has developed under capitalist modernity since the time of industrialization. Thus, there is lots of continuity with pre-digital seriality, but there are other forms of seriality involved as well.

Intra-Ludic Seriality

“Intra-ludic seriality” refers to processes of repetition and variation that take place within games themselves, for example in the 8 “worlds” and 32 “levels” of Super Mario Bros. Here, a general framework is basically repeated while varying (and in some cases increasingly difficult) tasks and obstacles are introduced as Mario searches for the lost princess. Following cues from Umberto Eco and others, this formula of “repetition + variation” is taken here as the formal core of seriality; games can therefore be seen to involve an operational form of seriality that is in many ways more basic than, while often foundational to, the narrative serialization processes that they also display.

Para-Ludic Seriality

Indeed, this low-level seriality is matched by higher-level processes that encompass but go beyond the realm of narrative — beyond even the games themselves. What we call “para-ludic seriality” involves tie-ins and cross-overs with other media, including the increasingly dominant trend towards transmedia storytelling, aggressive merchandising, and the like. Clearly, this is part of an expanding commercial realm, but it is also the basis for more.

Serial Superstructure

There is a social superstructure, itself highly serialized, that forms around or atop these serialized media, as fans take to the Internet to discuss (and play) their favorite games. In itself, this type of series-based community-building is nothing new. In fact, it may just be a niche form of a much more general phenomenon that is characteristic for modernity at large. Benedict Anderson and Jean-Paul Sartre before him have described modern forms of collectivity in terms of “seriality,” and they have linked these formations to serialized media consumption and those media’s serial forms — newspapers, novels, photography, and radio have effectively “serialized” community and identity throughout the nineteenth and twentieth centuries.

Infra-Ludic Seriality

Interestingly, though, in the digital era, this high-level community-building seriality is sometimes folded into an ultra low-level, “infra-ludic” seriality — one that is generally invisible and that takes place at the level of code. (I have discussed this level before, with reference to the BASIC game Super Star Trek, but I have never explicitly identified it as “infra-ludic seriality” before.) This enfolding of community into code, broadly speaking, is what motivates the enterprise of critical code studies, when it is defined (for example, by Mark Marino) as

an approach that applies critical hermeneutics to the interpretation of computer code, program architecture, and documentation within a socio-historical context. CCS holds that lines of code are not value-neutral and can be analyzed using the theoretical approaches applied to other semiotic systems in addition to particular interpretive methods developed particularly for the discussions of programs. Critical Code Studies follows the work of Critical Legal Studies, in that its practitioners apply critical theory to a functional document (legal document or computer program) to explicate meaning in excess of the document’s functionality, critiquing more than merely aesthetics and efficiency. Meaning grows out of the functioning of the code but is not limited to the literal processes the code enacts. Through CCS, practitioners may critique the larger human and computer systems, from the level of the computer to the level of the society in which these code objects circulate and exert influence.

Basically, then, the questions that I am here pursuing are concerned with the possibilities of crossing CCS with DH — and with observing the consequences for a critical investigation of digital game-based seriality. My goal in this undertaking is to find a means of correlating formations in the high-level superstructure with the infra-ludic serialization at the level of code — not only through close readings of individual texts but by way of large collections of data produced by online collectives.

ROMhacking.net

As a case study, I have been looking at ROMhacking.net, a website devoted to the community of hackers and modders of games for (mostly) older platforms and consoles. “Community” is an important notion in the site’s conception of itself and its relation to its users, as evidenced in the site’s “about” page:

ROMhacking.net is the innovative new community site that aggressively aims to bring several different areas of the community together. First, it serves as a successor to, and merges content from, ROMhacking.com and The Whirlpool. Besides being a simple archive site, ROMhacking.net’s purpose is to bring the ROMhacking Community to the next level. We want to put the word ‘community’ back into the ROMhacking community.

The ROMhacking community in recent years has been scattered and stagnant. It is our goal and hope to bring people back together and breathe some new life into the community. We want to encourage new people to join the hobby and make it easier than ever for them to do so.

Among other things, the site includes a vast collection of Super Mario Bros. mods (at the time of writing, 205 different hacks, some of which include several variations). These are fan-based modifications of Nintendo’s iconic game from 1985, which substitute different characters, add new levels, change the game’s graphics, sound, or thematic elements, etc. — hence perpetuating an unofficial serialization process that runs parallel to Nintendo’s own official game series, and forming the basis of communal formations through more or less direct manipulation of computer code (in the form of assembly language, hex code, or mediated through specialized software platforms, including emulators and tools for altering the game). In other words, the social superstructure of serial collectivity gets inscribed directly into the infra-ludic level of code, leaving traces that can be studied for a better understanding of digital seriality.

But how should we study them? Even this relatively small sample is still quite large by the standards of a traditional, close reading-based criticism. What would we be looking for anyway? The various mods are distributed as patches (.ips files), which have to be applied to a ROM file of the original game; the patches are just instruction files indicating how the game’s code is to be modified by the computer. As such, the patch files can be seen, rather abstractly, as crystallizations of the serialization process: if repetition + variation is the formal core of seriality, the patches are the records of pure variation, waiting to be plugged back into the framework of the game (the repeating element). But when we do plug them back in, then what? We can play the game in an emulator, and certainly it would be interesting — but extremely time-consuming — to compare them all in terms of visual appearance, gameplay, and interface. Or we can open the modified game file in a hex editor, in which case we might get lucky and find an interesting trace of the serialization process, such as the following:

[Hex editor screenshot: the embedded infratext in “Millennium Mario”]

Similar to Super Star Trek with its REM comments documenting its own serial and collective genesis, here we find an embedded infratext in the hexcode of “Millennium Mario,” a mod by an unknown hacker reportedly dating back to January 1, 2000. Note, in particular, the reference to a fellow modder, “toma,” the self-glorifying “1337” comment, and the skewed ASCII art — all signs of a community of serialization operating at a level subterranean to gameplay. But this example also demonstrates the need for a more systematic approach — as well as the obstacles to systematicity, for at stake here is not just code but also the software we use to access it and other “parergodic” elements, including even the display window size or “view” settings of the hex editor:

[Hex editor screenshot: the same infratext at a different window width]

In a sense, this might be seen as a first demonstration of the importance of visualization not only in the communication of results but in the constitution of research objects! In any case, it clearly establishes the need to think carefully about what it is, precisely, that we are studying: serialization is not imprinted clearly and legibly in the code, but is distributed in the interfaces of software and hardware, gameplay and modification, code and community.

Again, I follow Mark Marino’s conception of critical code studies, particularly with respect to his broad understanding of the object of study:

What can be interpreted?

Everything. The code, the documentation, the comments, the structures — all will be open to interpretation. Greater understanding of (and access to) these elements will help critics build complex readings. In “A Box Darkly,” discussed below, Nick Montfort and Michael Mateas counter Cayley’s claim of the necessity for executability, by acknowledging that code can be written for programs that will never be executed. Within CCS, if code is part of the program or a paratext (understood broadly), it contributes to meaning. I would also include interpretations of markup languages and scripts, as extensions of code. Within the code, there will be the actual symbols but also, more broadly, procedures, structures, and gestures. There will be paradigmatic choices made in the construction of the program, methods chosen over others and connotations.

In addition to symbols and characters in the program files themselves, paratextual features will also be important for informed readers. The history of the program, the author, the programming language, the genre, the funding source for the research and development (be it military, industrial, entertainment, or other), all shape meaning, although any one reading might emphasize just a few of these aspects. The goal need not be code analysis for code’s sake, but analyzing code to better understand programs and the networks of other programs and humans they interact with, organize, represent, manipulate, transform, and otherwise engage.

But, especially when we’re dealing with a large set of serialized texts and paratexts, this expansion of code and the attendant proliferation of data exacerbate our methodological problems. How are we to conduct a “critical hermeneutics” of the binary files, their accompanying README files, the ROMhacking website, and its extensive database — all of which contain information relevant to an assessment of the multi-layered processes of digital seriality? It is here, I suggest, that CCS can profit from combination with DH methods.

The first step in my attempt to do so was to mine data from the ROMhacking website and paratexts distributed with the patches and to create a spreadsheet with relevant metadata (you can download the Excel file here: SMB-Hacks-Dec1). On this basis, I began trying to analyze and visualize the data with Tableau. But while this yielded some basic information that might be relevant for assessing the serial community (e.g. the number of mods produced each year, including upward and downward trends; a list of the top modders in the community; and a look at trends in the types of mods/hacks being produced), the visualizations themselves were not very interesting or informative on their own (click on the image below for an interactive version):

[Tableau visualization of the mod metadata (interactive version linked)]
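For anyone who prefers to work outside Tableau, the same first-pass summaries can be pulled from the spreadsheet with a few lines of pandas. This is a rough sketch: the file extension and the column names (“Year,” “Author,” “Category”) are assumptions about the spreadsheet layout, not a description of the actual file.

# Rough sketch: first-pass summaries of the mod metadata with pandas instead
# of Tableau. Filename and column names are illustrative assumptions.
import pandas as pd

hacks = pd.read_excel("SMB-Hacks-Dec1.xlsx")

mods_per_year = hacks.groupby("Year").size()            # upward/downward trends
top_modders = hacks["Author"].value_counts().head(10)   # most prolific modders
hack_types = hacks["Category"].value_counts()           # trends in types of hacks

print(mods_per_year, top_modders, hack_types, sep="\n\n")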

How could this high-level metadata be coordinated with and brought to bear on the code-level serialization processes that we saw in the hexcode above? In looking for an answer, it became clear that I would have to find a way to collect some data about the code. The mods, themselves basically just “diff” files, could be opened and compared with the “diff” function that powers a lot of DH-based textual analysis (for example, with juxta), but the hexadecimal code that we can access here — and the sheer amount of it in each modded game, which consists of over 42000 bytes — is not particularly conducive to analysis with such tools. Many existing hex editors also include a “diff” analysis, but it occurred to me that it would be more desirable to have a graphical display of differences between the files in order to see the changes at a glance. My thinking here was inspired by hexcompare, a Linux-based visual diff program for quickly visualizing the differences between two binary programs:

[Screenshot: hexcompare]

However, the comparison here is restricted to local use on a Linux machine, and it only considers two files at a time. If this type of analysis is to be of any use for seriality studies, it will have to assess a much larger set of files and/or automate the comparison process. This is where Eric Monson and Angela Zoss from Visualization & Information Services at Duke University came in and helped me to develop an alternative approach. Eric Monson wrote a script that analyzes the mod patch files and records the basic “diff” information they contain: the address or offset at which they instruct the computer to modify the game file, as well as the number of bytes that they instruct it to write. With this information (also recorded in the Excel file linked to above), a much more useful and interactive visualization can be created with Tableau (click for an interactive version):

[Tableau Gantt-chart visualization of the patch “diff” data (interactive version linked)]

Here, Gantt charts are used to represent the size and location of changes that a given mod makes to the original Mario game; it is possible to see a large number of these mods at a single glance, to filter them by year, by modder, by title, or even by size (some mods expand the original code), etc., and in this way we can begin to see patterns emerging. Thus, we bring a sort of “distant reading” to the level of code, combining DH and CCS. (Contrast this approach with Marino’s 2006 call to “make the code the text,” which despite his broad understanding of code and acknowledgement that software/hardware and text/paratext distinctions are non-absolute, was still basically geared towards a conception of CCS that encouraged critical engagements of the “close-reading” type. As I have argued, however, researching seriality in particular requires that we oscillate between big-picture and micro-level analyses, between distant readings of larger trends and developments and detailed comparisons between individual elements or episodes in the serial chain.)
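For readers curious about what such a script has to do, here is a minimal sketch (not Eric Monson’s actual script) of how the relevant records can be read out of an .ips patch: the IPS format begins with a “PATCH” header and then lists records consisting of a 3-byte offset and a 2-byte length (a zero length marks a run-length-encoded record), terminated by “EOF”. The file and directory names below are illustrative.

# Minimal sketch (not the script used above): extract (offset, size) records
# from .ips patch files and write them to a CSV for visualization.
import csv
import glob
import os

def ips_records(path):
    """Yield (offset, size) for each record in an IPS patch file."""
    with open(path, "rb") as f:
        assert f.read(5) == b"PATCH", "not an IPS file"
        while True:
            chunk = f.read(3)
            if chunk == b"EOF" or len(chunk) < 3:
                return
            offset = int.from_bytes(chunk, "big")
            size = int.from_bytes(f.read(2), "big")
            if size == 0:                  # RLE record: 2-byte run length + 1-byte value
                size = int.from_bytes(f.read(2), "big")
                f.read(1)
            else:
                f.read(size)               # skip the payload itself
            yield offset, size

with open("smb-patch-diffs.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["patch", "offset", "size"])
    for patch in sorted(glob.glob("patches/*.ips")):
        for offset, size in ips_records(patch):
            writer.writerow([os.path.basename(patch), offset, size])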

But to complete this approach, we still need to correlate this code-based data with the social level of online modding communities. For this purpose, I used Palladio (a tool explicitly designed for DH work by the Humanities + Design lab at Stanford) to graph networks on the basis of metadata contained in Readme.txt files.

[Palladio network graph: shout-outs between modders]

Here, I have mapped the references (“shout-outs,” etc.) that modders made to one another in these paratexts, thus revealing a picture of digital seriality as an imagined community of modders.
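The graph itself was laid out in Palladio, but the underlying edge list is easy to reconstruct in principle: scan each mod’s paratexts for mentions of other modders’ handles. A rough sketch, assuming a directory of extracted README files named after their authors and a hand-curated list of handles (both assumptions, not a description of my actual pipeline):

# Rough sketch of building a "shout-out" edge list for Palladio: look for
# mentions of known modder handles in each mod's README. Paths, file naming,
# and the handle list are illustrative assumptions.
import csv
import glob
import os
import re

known_modders = ["Googie", "insectduel", "flamepanther", "Raysyde", "Krillian", "toma"]

edges = []
for readme in glob.glob("readmes/*.txt"):
    author = os.path.splitext(os.path.basename(readme))[0]
    with open(readme, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    for other in known_modders:
        if other.lower() == author.lower():
            continue
        if re.search(r"\b" + re.escape(other) + r"\b", text, re.IGNORECASE):
            edges.append((author, other))   # author gives a shout-out to other

with open("shout-outs.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["source", "target"])
    writer.writerows(edges)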

[Palladio network graph: references to modding community sites]

Here, on the other hand, I have mapped references from paratextual materials associated with individual mods to various online communities that have come and gone over the years. We see early references to the defunct TEKhacks, by way of Zophar’s Domain, Acmlm’s and Insectduel’s boards, with more recent references to ROMhacking.net, the most recent community site and the one that I am studying here.

[Palladio network graph of modders referencing Super Mario Bros. 3, with Tableau screenshot on the right]

As an example of how the social network and code-level analyses might be correlated, here I’ve filtered the network graph to show only those modders who refer in their paratexts to Super Mario Bros. 3 (hence bringing inter-ludic seriality to bear on their para- and infra-ludic interventions). The resulting graph reveals a small network of actors whose serializing activity involves mixing and referencing between SMB1 and SMB3, as well as between each other. The Tableau screenshot on the right then selects just these modders and reveals possible similarities and sites of serialization (for closer scrutiny with hexcompare or tools derived from the modding community itself). For example, we find that the modder AP’s SMB3-inspired patches from September 2005 and flamepanther’s SMB DX patches from October 2005 exhibit traces of possible overlap that deserve to be looked at in detail. The modder insectduel’s After World 8 (a mod that is referenced by many in the scene) from February 2006 shares large blocks around bytes 31,000-32,000 with many of the prolific modder Googie’s mods (which themselves seem to exhibit a characteristic signature) from 2004-2006. Of course, recognizing these patterns is just the beginning of inquiry, but at least it is a beginning. From here, we still have to resort to “close reading” techniques and to tools that are not conducive to a broad view; more integrated toolsets remain to be developed. Nevertheless, these methods do seem promising as a way of directing research, showing us where to look in greater depth, and revealing trends and points of contact that would otherwise remain invisible.

Finally, by way of conclusion and to demonstrate what some of this more detailed work looks like, I’d like to return to the “Millennium Mario” mod I considered briefly above. As we saw, there was an interesting infratextual shoutout and some ASCII art in the opening section of the hexcode. With Tableau, we can filter the “diff” view to display only those mods that exhibit changes in the first 500 bytes of code, and map that section of code in greater resolution (this is done with the slider in the bottom right corner, marked “Start” — referring to the byte count at which a change in the game starts):

[Tableau view filtered to mods with changes in the first 500 bytes]

Here we find two distinctive (visual) matches: viz. between “Millennium Mario” and Raysyde’s “Super Mario Bros. – Remix 2” from 1999, and between ATA’s “Super Mario Bros. – Yoshi’s Quest” and Krillian’s “Mario Adventure 2,” both from 2000. The latter two mods, while clearly different from the former two, also exhibit some overlap in the changes made to the first 20 or so bytes, so it will be interesting to compare them as well.

[hexcompare: “Super Mario Bros. – Remix 2” vs. “Millennium Mario”]

Now we can use hexcompare for finer analysis — i.e. to determine if the content of the changed addresses is also identical (the visual match only tells us that something has been changed in the same place, not whether the same change has been made there).
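hexcompare does this interactively; for a scripted version of the same check, a few lines of Python suffice. A sketch, assuming the patches have already been applied to produce two ROM files (the filenames are illustrative):

# Sketch: do two patched ROMs contain the *same* bytes in a given range
# (here, the opening bytes where the infratext lives)? Filenames illustrative.
def same_bytes(path_a, path_b, start=0, end=500):
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        return a.read(end)[start:] == b.read(end)[start:]

print(same_bytes("millennium_mario.nes", "smb_remix2.nes"))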

[hexcompare: “Super Mario Bros. – Remix 2” vs. “Millennium Mario”, opening bytes]

Here we find that Raysyde’s “Super Mario Bros. – Remix 2” does in fact display the same changes in the opening bytes, including the reference to “toma” and the ASCII art. This then is a clear indication of infra-ludic serialization: the borrowing, repetition, and variation of code-level work between members of the modding community. This essentially serial connection (an infra-serial link) would hardly be apparent from the level of the mods’ respective interfaces, though:

[In-game screenshots: “Millennium Mario” (2000) and “Super Mario Bros. – Remix 2” (1999)]

When we compare “Millennium Mario” with ATA’s “Super Mario Bros. – Yoshi’s Quest,” we find the ASCII art gone, despite the visual match in Tableau’s mapping of their “diff” indications for the opening bytes:

[hexcompare: “Millennium Mario” vs. “Super Mario Bros. – Yoshi’s Quest”]

“Yoshi’s Quest” corresponds in this respect to Krillian’s “Mario Adventure 2”:

[hexcompare: “Super Mario Bros. – Yoshi’s Quest” vs. “Mario Adventure 2”]

Thus we have another clear indication of infra-ludic serialization, which would hardly have been evident other than by means of a directed filtering of the large dataset, in conjunction with a close analysis of the underlying code.

Again, however, this is just the beginning of the analysis — or more broadly of an encounter between DH and CCS. Ideally, the dataset would be expanded beyond ROMhacking.net’s database; other online communities would be mined for data; and, above all, more integrative tools would be developed for correlating social network graphs and diff-maps, for correlating community and code. Perhaps a crowdsourced approach to some of this would be appropriate; for what it’s worth, and in case anyone is inclined to contribute, my data and the interactive Tableau charts are linked above. But the real work, I suspect, lies in building the right tools for the job, and this will clearly not be an easy task. Alas, like digital seriality itself, this is work in progress, and thus it remains work “to be continued”…

Thanks finally to Eric Monson, Angela Zoss, Victoria Szabo, Patrick LeMieux, Max Symuleski, and the participants in the Fall 2014 “Historical & Cultural Visualization Proseminar 1” at Duke University for the various sorts of help, feedback, and useful tips they offered on this project!