Conversations in the Digital Humanities at Duke


Today, Oct. 2, 2015, the Franklin Humanities Institute, the Wired! Lab, the PhD Lab in Digital Knowledge, and HASTAC@Duke will be presenting “Conversations in the Digital Humanities,” the inaugural event of the new Digital Humanities Initiative at Duke University. More information about the event, in which I will be participating alongside colleagues from the S-1: Speculative Sensation Lab, can be found on the FHI website.

Also, all of the 10-minute “lightning talks” will be live-streamed. The first block of sessions, from 2:15-3:45pm EST, will be streamed here, and the second block, from 4:00-5:40pm, will be viewable here. (Apparently, the videos will be archived and available after the fact as well.)

Here is the complete schedule:

2:00 – 2:15
Welcome and Introduction to Digital Humanities Initiative

2:15 – 3:45 
Session 1 (10 minutes per talk)

  1. Project Vox (Andrew Janiak and Liz Milewicz)
  2. NC Jukebox (Trudi Abel, Victoria Szabo)
  3. Visualizing Cultures: The Shiseido Project (Gennifer Weisenfeld)
  4. Going Global in Mughal India (Sumathi Ramaswamy)
  5. Israel’s Occupation in the Digital Age (Rebecca Stein)
  6. Digital Athens: Archaeology meets ArcGIS (Tim Shea, Sheila Dillon)
  7. Early Medieval Networks (J. Clare Woods)

3:45 – 4:00
Coffee Break

4:00 – 5:40 
Session 2 (10 minutes per talk)

  1. Painting the Apostles – A Case Study in “The Lives of Things” (Mark Olson, Mariano Tepper, and Caroline Bruzelius)
  2. Digital Archaeology: From the Field to Virtual Reality (Maurizio Forte)
  3. The Memory Project (Luo Zhou)
  4. Veoveo, children at play (Raquel Salvatella de Prada)
  5. “Things to Think With”: Weird DH, Data, and Experimental Media Theory (S-1 Lab)
  6. s_traits, Generative Authorship and the Emergence Lab (Bill Seaman and John Supko)
  7. Found Objects and Fireflies (Scott Lindroth)
  8. Project Provoke (Mary Caton Lingold and others)

5:40 – 6:00 

Things to Think With


As a late addition to the program, the Duke S-1 Speculative Sensation Lab will be participating in “Conversations in the Digital Humanities” this coming Friday, October 2, 2015, at the Franklin Humanities Institute at Duke. The event, which will consist of a series of brief “lightning talks” on a range of topics that run the gamut of current DH work, will take place from 2:00-6:00pm in the FHI Garage in Smith Warehouse, Bay 4. More info here: Conversations in the Digital Humanities.

Here is the abstract for the S-1 Lab’s presentation, which I will be participating in along with Lab co-director Mark Olson and our resident programmer Luke Caldwell:

“Things to Think With”: Weird DH, Data, and Experimental Media Theory

S-1 Speculative Sensation Lab

The S-1 Speculative Sensation Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”

In this presentation, we discuss a current project that utilizes consumer-grade EEG headsets, in conjunction with a custom Python script by lab member Luke Caldwell, to reflect on the contemporary shape of “attention,” as it is constructed and addressed in individual and networked forms across media ranging from early cinema to “post-cinema.”

Sculpting Data (& Painting Networks) — Full Video

Above, a video explaining the collaborative art/theory work that my wife Karin and I have been doing lately — both as a part of the Duke S-1 Speculative Sensation Lab’s Manifest Data project and in a spin-off project that will be going on display at Duke University next month. The video is being shown right now (at the time of this posting) at North Carolina State University — at the 6th annual AEGS conference “How do you do humanities?,” where Karin is representing the two of us and presenting alongside Amanda Starling Gould, Luke Caldwell, Libi Striegl, and David Rambo.

Wish I could be there, but I’ve got another panel here at SCMS in Montreal today…

Sculpting Data (and Painting Networks)


On March 28, 2015, members of the Duke S-1 Speculative Sensation Lab will take over a panel at the 2015 AEGS Conference <how do you do Digital Humanities?>. (See here for the conference website, which includes the full program.) General conference info:

The conference will be held in Tompkins Hall on the NC State University campus in Raleigh, NC, on Friday, March 27th and Saturday, March 28th. Friday evening we will host a keynote panel of Digital Humanities scholars. These scholars will discuss how they “do” Digital Humanities in their research and pedagogy. On Saturday, participants will present their research in 15-minute presentations.
Again, the final panel of the conference, Session IV (1:55 – 3:10pm on Saturday, March 28), will be devoted to the S-1 Lab’s recent work, especially the Manifest Data project that I have been posting about here. Titled “Digital Metabolisms: Manifesting Data as a Collaborative Research Process,” the panel consists of the following presentations:

Amanda Starling Gould, Duke University, “Digital Metabolism: Using Digital Tools to Hack Humanities Research”

Luke Caldwell, Duke University, “Leveraging Benevolent Spyware for Humanities Research”

Libi Striegl, Duke University, “3D Printing as Artistic Research Intervention”

Karin & Shane Denson, Duke University, “Sculpting Data”

David Rambo, Duke University, “Manifest Data as Digital Manifest Destiny”

(Observant readers of this blog will notice that I am to give two presentations on March 28: both at NC State and at the SCMS conference in Montreal. In fact, Karin will be representing the two of us in Raleigh, but we’re putting together some presentation materials that we’re quite proud of — and that we think will creatively solve the logistical problems of being in two places at once! More soon!)

Mario Modding Madness

[Screenshot: still from the split-screen talk video]

In case you missed it: you can watch a split-screen video presentation of my digital humanities-oriented talk, “Visualizing Digital Seriality,” which I gave last Friday, January 30, 2015, at Duke University — here (or click the image above).

More about the project can be found here.

Livestream: Visualizing Digital Seriality

[Screenshot: Duke Visualization Friday Forum live-stream page]

According to the Duke Visualization Friday Forum website, my talk this Friday — “Visualizing Digital Seriality: Correlating Code and Community in the Super Mario Modding Scene” — will be streamed live: here.

The talk will take place at 12:00 Eastern time, Jan. 30, 2015.

Visualizing Digital Seriality / Duke Visualization Friday Forum


On January 30, 2015 (12:00-1:00pm), I will be speaking about visualization techniques and game-related serialization processes at the Duke Visualization Friday Forum. Organized by Eric Monson and Angela Zoss, this is a very exciting and robustly interdisciplinary venue, as the long list of sponsors for the weekly forum indicates: Information Science + Information Studies, the Duke immersive Virtual Environment (or DiVE), Media Arts + Sciences, Data and Visualization Services, the Department of Computer Science, Research Computing at the Office of Information Technology, and Visualization & Interactive Systems.

As this list indicates, the Visualization Friday Forum has the potential to take just about anyone — but especially humanities-types like me — out of their comfort zone; but it does so in the most comfortable way possible: the informal setting of a lunchtime chat fosters a type of exchange that is interdisciplinary in the best sense. Artists, computer scientists, media scholars, digital humanists, historians, literary critics, mathematicians, and researchers in the natural sciences, among others, make a genuine effort to understand one another. And, to judge from the times I have been present or watched a live-stream of the Forum, this effort is usually quite successful.

So here’s hoping that my own effort at interdisciplinary dialogue will be as successful! In my talk, I will discuss an ongoing project, some preliminary findings of which I posted not too long ago. Here’s the abstract:

Visualizing Digital Seriality: Correlating Code and Community in the Super Mario Modding Scene

Shane Denson (DAAD Postdoctoral Fellow, Duke Literature)

Seriality is a common feature of game franchises, with their various sequels, spin-offs, and other forms of continuation; such serialization informs social processes of community-building among fans, while it also takes place at much lower levels in the repetition and variation that characterizes a series of game levels, for example, or in the modularized and recycled code of game engines. This presentation considers how tools and methods of digital humanities – including “distant reading” and visualization techniques – can shed light on serialization processes in digital games and gaming communities. The vibrant “modding” scene that has arisen around the classic Nintendo game Super Mario Bros. (1985) serves as a case study. Automated “reading” techniques allow us to survey a large collection of fan-based game modifications, while visualization software such as Tableau and Palladio help to bridge the gap between code and community, revealing otherwise invisible connections and patterns of seriality.

Visualizing Digital Seriality, Or: All Your Mods Are Belong to Us!


In this post, I want to outline some ongoing work in progress that I’ve been pursuing as part of my postdoctoral research project on seriality as an aesthetic form and as a process of collectivization in digital games and gaming communities. The larger context, as readers of this blog will know, is a collaborative project I am conducting with Andreas Jahn-Sudmann of the Freie Universität Berlin, titled “Digital Seriality” — which in turn is part of an even larger research network, the DFG Research Unit “Popular Seriality–Aesthetics and Practice.” I’ll touch on this bigger picture here and there as necessary, but I want to concentrate more specifically in the following on some thoughts and research techniques that I’ve been developing in the context of Victoria Szabo’s “Historical & Cultural Visualization” course, which I audited this semester at Duke University. In this hands-on course, we looked at a number of techniques and technologies for conducting digital humanities-type research, including web-based and cartographic research and presentation, augmented and virtual reality, and data-intensive research and visualization. We engaged with a great variety of tools and applications, approaching them experimentally in order to evaluate their particular affordances and limitations with respect to humanities work. My own engagements were guided by the following questions: How might the tools and methods of digital humanities be adapted for my research on seriality in digital games, and to what end? What, more specifically, can visualization techniques add to the study of digital seriality?

I’ll try to offer some answers to these questions in what follows, but let me indicate briefly why I decided to pursue them in the first place. To begin with, seriality challenges single-author and oeuvre- or work-centric approaches, as serialization processes unfold across oftentimes long temporal frames and involve collaborative production processes — including not only team-based authorship in industrial contexts but also feedback loops between producers and their audiences, which can exert considerable influence on the ongoing serial development. Moreover, such tendencies are exacerbated with the advent of digital platforms, in which these feedback loops multiply and accelerate (e.g. in Internet forums established or monitored by serial content producers and, perhaps more significantly, in real-time algorithmic monitoring of serialized consumption on platforms like Netflix), while the contents of serial media are themselves subject to unprecedented degrees of proliferation, reproduction, and remix under conditions of digitalization. Accordingly, an incredible amount of data is generated, so that it is natural to wonder whether any of the methods developed in the digital humanities might help us to approach phenomena of serialization in the digital era. In the context of digital games and game series, the objects of study — both the games themselves and the online channels of communication around which gaming communities form — are digital from the start, but there is such an overwhelming amount of data to sort through that it can be hard to see the forest for the trees. As a result, visualization techniques in particular seem like a promising route to gaining some perspective, or (to mix metaphors a bit) for establishing a first foothold in order to begin climbing what appears to be an insurmountable mountain of data.
Of particular interest here are: 1) “distant reading” techniques (as famously elaborated by Franco Moretti), which might be adapted to the objects of digital games, and 2) tools for network analysis, which might be applied in order to visualize and investigate social formations that emerge around games and game series.


Before elaborating on how I have undertaken to employ these approaches, let me say a bit more about the framework of my project and the theoretical perspective on digital seriality that Andreas Jahn-Sudmann and I have developed at greater length in our jointly authored paper “Digital Seriality.” Our starting point for investigating serial forms and processes in games and gaming communities is what we call “inter-ludic seriality” — that is, the serialization processes that take place between games, establishing series such as Super Mario Bros. 1, 2, 3 etc. or Pokemon Red and Blue, Gold and Silver, Ruby and Sapphire, Black and White etc. For the most part, such inter-ludic series are constituted by fairly standard, commercially motivated practices of serialization, expressed in sequels, spin-offs, and the like; accordingly, they are a familiar part of the popular culture that has developed under capitalist modernity since the time of industrialization. Thus, there is lots of continuity with pre-digital seriality, but there are other forms of seriality involved as well.


“Intra-ludic seriality” refers to processes of repetition and variation that take place within games themselves, for example in the 8 “worlds” and 32 “levels” of Super Mario Bros. Here, a general framework is repeated while varied and, in some cases, increasingly difficult tasks and obstacles are introduced as Mario searches for the lost princess. Following cues from Umberto Eco and others, this formula of “repetition + variation” is taken here as the formal core of seriality; games can therefore be seen to involve an operational form of seriality that is in many ways more basic than, while often foundational to, the narrative serialization processes that they also display.


Indeed, this low-level seriality is matched by higher-level processes that encompass but go beyond the realm of narrative — beyond even the games themselves. What we call “para-ludic seriality” involves tie-ins and cross-overs with other media, including the increasingly dominant trend towards transmedia storytelling, aggressive merchandising, and the like. Clearly, this is part of an expanding commercial realm, but it is also the basis for more.


There is a social superstructure, itself highly serialized, that forms around or atop these serialized media, as fans take to the Internet to discuss (and play) their favorite games. In itself, this type of series-based community-building is nothing new. In fact, it may just be a niche form of a much more general phenomenon that is characteristic of modernity at large. Benedict Anderson and Jean-Paul Sartre before him have described modern forms of collectivity in terms of “seriality,” and they have linked these formations to serialized media consumption and those media’s serial forms — newspapers, novels, photography, and radio have effectively “serialized” community and identity throughout the nineteenth and twentieth centuries.


Interestingly, though, in the digital era, this high-level community-building seriality is sometimes folded into an ultra low-level, “infra-ludic” level of seriality — a level that is generally invisible and that takes place at the level of code. (I have discussed this level before, with reference to the BASIC game Super Star Trek, but I have never explicitly identified it as “infra-ludic seriality” before.) This enfolding of community into code, broadly speaking, is what motivates the enterprise of critical code studies, when it is defined (for example, by Mark Marino) as

an approach that applies critical hermeneutics to the interpretation of computer code, program architecture, and documentation within a socio-historical context. CCS holds that lines of code are not value-neutral and can be analyzed using the theoretical approaches applied to other semiotic systems in addition to particular interpretive methods developed particularly for the discussions of programs. Critical Code Studies follows the work of Critical Legal Studies, in that its practitioners apply critical theory to a functional document (legal document or computer program) to explicate meaning in excess of the document’s functionality, critiquing more than merely aesthetics and efficiency. Meaning grows out of the functioning of the code but is not limited to the literal processes the code enacts. Through CCS, practitioners may critique the larger human and computer systems, from the level of the computer to the level of the society in which these code objects circulate and exert influence.

Basically, then, the questions that I am here pursuing are concerned with the possibilities of crossing CCS with DH — and with observing the consequences for a critical investigation of digital game-based seriality. My goal in this undertaking is to find a means of correlating formations in the high-level superstructure with the infra-ludic serialization at the level of code — not only through close readings of individual texts but by way of large collections of data produced by online collectives.


As a case study, I have been looking at ROMhacking.net, a website devoted to the community of hackers and modders of games for (mostly) older platforms and consoles. “Community” is an important notion in the site’s conception of itself and its relation to its users, as evidenced in the site’s “about” page: ROMhacking.net is the innovative new community site that aggressively aims to bring several different areas of the community together. First, it serves as a successor to, and merges content from, ROMhacking.com and The Whirlpool. Besides being a simple archive site, ROMhacking.net’s purpose is to bring the ROMhacking Community to the next level. We want to put the word ‘community’ back into the ROMhacking community.

The ROMhacking community in recent years has been scattered and stagnant. It is our goal and hope to bring people back together and breathe some new life into the community. We want to encourage new people to join the hobby and make it easier than ever for them to do so.

Among other things, the site includes a vast collection of Super Mario Bros. mods (at the time of writing, 205 different hacks, some of which include several variations). These are fan-based modifications of Nintendo’s iconic game from 1985, which substitute different characters, add new levels, change the game’s graphics, sound, or thematic elements, etc. — hence perpetuating an unofficial serialization process that runs parallel to Nintendo’s own official game series, and forming the basis of communal formations through more or less direct manipulation of computer code (in the form of assembly language, hex code, or mediated through specialized software platforms, including emulators and tools for altering the game). In other words, the social superstructure of serial collectivity gets inscribed directly into the infra-ludic level of code, leaving traces that can be studied for a better understanding of digital seriality.

But how should we study them? Even this relatively small sample is still quite large by the standards of a traditional, close reading-based criticism. What would we be looking for anyway? The various mods are distributed as patches (.ips files) which have to be applied to a ROM file of the original game; the patches are just instruction files indicating how the game’s code is to be modified by the computer. As such, the patch files can be seen, rather abstractly, as crystallizations of the serialization process: if repetition + variation is the formal core of seriality, the patches are the records of pure variation, waiting to be plugged back into the framework of the game (the repeating element). But when we do plug them back in, then what? We can play the games in an emulator, and certainly it would be interesting — but extremely time-consuming — to compare them all in terms of visual appearance, gameplay, and interface. Or we can open the modified game file in a hex editor, in which case we might get lucky and find an interesting trace of the serialization process, such as the following:

[Screenshot: hex editor view of the “Millennium Mario” mod]

Similar to Super Star Trek with its REM comments documenting its own serial and collective genesis, here we find an embedded infratext in the hexcode of “Millennium Mario,” a mod by an unknown hacker reportedly dating back to January 1, 2000. Note, in particular, the reference to a fellow modder, “toma,” the self-glorifying “1337” comment, and the skewed ASCII art — all signs of a community of serialization operating at a level subterranean to gameplay. But this example also demonstrates the need for a more systematic approach — as well as the obstacles to systematicity, for at stake here is not just code but also the software we use to access it and other “parergodic” elements, including even the display window size or “view” settings of the hex editor:


In a sense, this might be seen as a first demonstration of the importance of visualization not only in the communication of results but in the constitution of research objects! In any case, it clearly establishes the need to think carefully about what it is, precisely, that we are studying: serialization is not imprinted clearly and legibly in the code, but is distributed in the interfaces of software and hardware, gameplay and modification, code and community.
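Concretely, the .ips patch files mentioned above are simple enough to decode mechanically. The following is a minimal sketch, not the project's actual tooling, assuming the standard IPS layout: an ASCII "PATCH" header, then records of 3-byte offset, 2-byte size, and data (size 0 signals a run-length-encoded record), terminated by "EOF".

```python
def apply_ips(rom: bytes, patch: bytes) -> bytes:
    """Apply an IPS patch to a ROM image and return the modified bytes."""
    assert patch[:5] == b"PATCH", "not an IPS file"
    out = bytearray(rom)
    i = 5
    while patch[i:i+3] != b"EOF":
        offset = int.from_bytes(patch[i:i+3], "big")   # where to write
        size = int.from_bytes(patch[i+3:i+5], "big")   # how many bytes
        i += 5
        if size == 0:  # RLE record: 2-byte run length, then 1 repeated byte
            run = int.from_bytes(patch[i:i+2], "big")
            data = patch[i+2:i+3] * run
            i += 3
        else:
            data = patch[i:i+size]
            i += size
        if offset + len(data) > len(out):  # some mods expand the original code
            out.extend(b"\x00" * (offset + len(data) - len(out)))
        out[offset:offset+len(data)] = data
    return bytes(out)
```

Seen this way, each patch really is a record of pure variation: a bare list of writes against the repeating element of the original ROM.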

Again, I follow Mark Marino’s conception of critical code studies, particularly with respect to his broad understanding of the object of study:

What can be interpreted?

Everything. The code, the documentation, the comments, the structures — all will be open to interpretation. Greater understanding of (and access to) these elements will help critics build complex readings. In “A Box Darkly,” discussed below, Nick Montfort and Michael Mateas counter Cayley’s claim of the necessity for executability, by acknowledging that code can be written for programs that will never be executed. Within CCS, if code is part of the program or a paratext (understood broadly), it contributes to meaning. I would also include interpretations of markup languages and scripts, as extensions of code. Within the code, there will be the actual symbols but also, more broadly, procedures, structures, and gestures. There will be paradigmatic choices made in the construction of the program, methods chosen over others and connotations.

In addition to symbols and characters in the program files themselves, paratextual features will also be important for informed readers. The history of the program, the author, the programming language, the genre, the funding source for the research and development (be it military, industrial, entertainment, or other), all shape meaning, although any one reading might emphasize just a few of these aspects. The goal need not be code analysis for code’s sake, but analyzing code to better understand programs and the networks of other programs and humans they interact with, organize, represent, manipulate, transform, and otherwise engage.

But, especially when we’re dealing with a large set of serialized texts and paratexts, this expansion of code and the attendant proliferation of data exacerbate our methodological problems. How are we to conduct a “critical hermeneutics” of the binary files, their accompanying README files, the ROMhacking website, and its extensive database — all of which contain information relevant to an assessment of the multi-layered processes of digital seriality? It is here, I suggest, that CCS can profit from combination with DH methods.

[Screenshot: spreadsheet of SMB mod metadata]

The first step in my attempt to do so was to mine data from the ROMhacking website and paratexts distributed with the patches and to create a spreadsheet with relevant metadata (you can download the Excel file here: SMB-Hacks-Dec1). On this basis, I began trying to analyze and visualize the data with Tableau. But while this yielded some basic information that might be relevant for assessing the serial community (e.g. the number of mods produced each year, including upward and downward trends; a list of the top modders in the community; and a look at trends in the types of mods/hacks being produced), the visualizations themselves were not very interesting or informative on their own (click on the image below for an interactive version):

[Screenshot: Tableau visualizations of the mod metadata]
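For what it's worth, the high-level tallies mentioned above (mods per year, the most active modders) can also be computed directly from the metadata spreadsheet once it is exported to CSV. A sketch, in which the column names "Year" and "Modder" are my assumptions rather than the file's actual headers:

```python
import csv
from collections import Counter

def tally(csv_path: str):
    """Count mods per year and rank the most prolific modders."""
    years, modders = Counter(), Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            years[row["Year"]] += 1
            modders[row["Modder"]] += 1
    return years, modders.most_common(10)  # per-year counts, top 10 modders
```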

How could this high-level metadata be coordinated with and brought to bear on the code-level serialization processes that we saw in the hexcode above? In looking for an answer, it became clear that I would have to find a way to collect some data about the code. The mods, themselves basically just “diff” files, could be opened and compared with the “diff” function that powers a lot of DH-based textual analysis (for example, with juxta), but the hexadecimal code that we can access here — and the sheer amount of it in each modded game, which consists of over 42000 bytes — is not particularly conducive to analysis with such tools. Many existing hex editors also include a “diff” analysis, but it occurred to me that it would be more desirable to have a graphical display of differences between the files in order to see the changes at a glance. My thinking here was inspired by hexcompare, a Linux-based visual diff program for quickly visualizing the differences between two binary programs:


However, the comparison here is restricted to local use on a Linux machine, and it only considers two files at a time. If this type of analysis is to be of any use for seriality studies, it will have to assess a much larger set of files and/or automate the comparison process. This is where Eric Monson and Angela Zoss from Visualization & Information Services at Duke University came in and helped me to develop an alternative approach. Eric Monson wrote a script that analyzes the mod patch files and records the basic “diff” information they contain: the address or offset at which they instruct the computer to modify the game file, as well as the number of bytes that they instruct it to write. With this information (also recorded in the Excel file linked to above), a much more useful and interactive visualization can be created with Tableau (click for an interactive version):

[Screenshot: Tableau Gantt chart of patch offsets and sizes]

Here, Gantt charts are used to represent the size and location of changes that a given mod makes to the original Mario game; it is possible to see a large number of these mods at a single glance, to filter them by year, by modder, by title, or even by size (some mods expand the original code), etc., and in this way we can begin to see patterns emerging. Thus, we bring a sort of “distant reading” to the level of code, combining DH and CCS. (Contrast this approach with Marino’s 2006 call to “make the code the text,” which despite his broad understanding of code and acknowledgement that software/hardware and text/paratext distinctions are non-absolute, was still basically geared towards a conception of CCS that encouraged critical engagements of the “close-reading” type. As I have argued, however, researching seriality in particular requires that we oscillate between big-picture and micro-level analyses, between distant readings of larger trends and developments and detailed comparisons between individual elements or episodes in the serial chain.)
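Eric Monson's script itself is not reproduced here, but the underlying idea can be sketched: walk each patch's IPS records and emit one row per record, recording the start offset and the number of bytes written, which is exactly the start-plus-length information a Gantt bar needs.

```python
import csv, pathlib

def ips_records(patch: bytes):
    """Yield (offset, size) for every record in an IPS patch."""
    assert patch[:5] == b"PATCH"
    i = 5
    while patch[i:i+3] != b"EOF":
        offset = int.from_bytes(patch[i:i+3], "big")
        size = int.from_bytes(patch[i+3:i+5], "big")
        i += 5
        if size == 0:  # RLE record: the 2-byte run length follows
            size = int.from_bytes(patch[i:i+2], "big")
            i += 3
        else:
            i += size
        yield offset, size

def dump_csv(patch_dir: str, out_csv: str) -> None:
    """Write one row per patch record for charting in Tableau."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["mod", "start", "bytes"])
        for path in sorted(pathlib.Path(patch_dir).glob("*.ips")):
            for offset, size in ips_records(path.read_bytes()):
                writer.writerow([path.stem, offset, size])
```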

But to complete this approach, we still need to correlate this code-based data with the social level of online modding communities. For this purpose, I used Palladio (a tool explicitly designed for DH work by the Humanities + Design lab at Stanford) to graph networks on the basis of metadata contained in Readme.txt files.
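The edge list behind such a network graph can be harvested mechanically. A sketch, in which the list of community handles and the one-folder-per-mod layout are illustrative assumptions:

```python
import csv, pathlib

def shoutout_edges(mod_dir: str, handles: list[str], out_csv: str) -> None:
    """Record a directed edge whenever a mod's readme mentions a known handle."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "target"])  # Palladio-friendly edge list
        for readme in pathlib.Path(mod_dir).glob("*/Readme.txt"):
            author = readme.parent.name        # assume one folder per mod
            text = readme.read_text(errors="ignore").lower()
            for handle in handles:
                if handle != author and handle.lower() in text:
                    writer.writerow([author, handle])
```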


Here, I have mapped the references (“shout-outs,” etc) that modders made to one another in these paratexts, thus revealing a picture of digital seriality as an imagined community of modders.


Here, on the other hand, I have mapped references from paratextual materials associated with individual mods to various online communities that have come and gone over the years. We see early references to the defunct TEKhacks, by way of Zophar’s Domain, Acmlm’s and Insectduel’s boards, with more recent references to ROMhacking.net, the most recent community site and the one that I am studying here.


As an example of how the social network and code-level analyses might be correlated, here I’ve filtered the network graph to show only those modders who refer in their paratexts to Super Mario Bros. 3 (hence bringing inter-ludic seriality to bear on their para- and infra-ludic interventions). The resulting graph reveals a small network of actors whose serializing activity involves mixing and referencing between SMB1 and SMB3, as well as between each other. The Tableau screenshot on the right then selects just these modders and reveals possible similarities and sites of serialization (for closer scrutiny with hexcompare or tools derived from the modding community itself). For example, we find that the modder AP’s SMB3-inspired patches from September 2005 and flamepanther’s SMB DX patches from Oct 2005 exhibit traces of possible overlap that deserve to be looked at in detail. The modder insectduel’s After World 8 (a mod that is referenced by many in the scene) from February 2006 shares large blocks around 31000-32000 bytes with many of the prolific modder Googie’s mods (which themselves seem to exhibit a characteristic signature) from 2004-2006. Of course, recognizing these patterns is just the beginning of inquiry, but at least it is a beginning. From here, we still have to resort to “close reading” techniques and to tools that are not conducive to a broad view; more integrated toolsets remain to be developed. Nevertheless, these methods do seem promising as a way of directing research, showing us where to look in greater depth, and revealing trends and points of contact that would otherwise remain invisible.

Finally, by way of conclusion and to demonstrate what some of this more detailed work looks like, I’d like to return to the “Millennium Mario” mod I considered briefly above. As we saw, there was an interesting infratextual shoutout and some ASCII art in the opening section of the hexcode. With Tableau, we can filter the “diff” view to display only those mods that exhibit changes in the first 500 bytes of code, and to map that section of code in greater resolution (this is done with the slider in the bottom right corner, marked “Start” — referring to the byte count at which a change in the game starts):


Here we find two distinctive (visual) matches: viz. between “Millennium Mario” and Raysyde’s “Super Mario Bros. – Remix 2” from 1999, and between ATA’s “Super Mario Bros. – Yoshi’s Quest” and Krillian’s “Mario Adventure 2,” both from 2000. The latter two mods, while clearly different from the former two, also exhibit some overlap in the changes made to the first 20 or so bytes, so it will be interesting to compare them as well.


Now we can use hexcompare for finer analysis — i.e. to determine if the content of the changed addresses is also identical (the visual match only tells us that something has been changed in the same place, not whether the same change has been made there).


Here we find that Raysyde’s “Super Mario Bros. – Remix 2” does in fact display the same changes in the opening bytes, including the reference to “toma” and the ASCII art. This then is a clear indication of infra-ludic serialization: the borrowing, repetition, and variation of code-level work between members of the modding community. This essentially serial connection (an infra-serial link) would hardly be apparent from the level of the mods’ respective interfaces, though:


When we compare “Millennium Mario” with ATA’s “Super Mario Bros. – Yoshi’s Quest,” we find the ASCII art gone, despite the visual match in Tableau’s mapping of their “diff” indications for the opening bytes:


“Yoshi’s Quest” corresponds in this respect to Krillian’s “Mario Adventure 2”:


Thus we have another clear indication of infra-ludic serialization, which would hardly have been evident other than by means of a directed filtering of the large dataset, in conjunction with a close analysis of the underlying code.
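This kind of content check, asking whether two patches write the same bytes where they overlap rather than merely touching the same addresses, can also be scripted. A rough sketch: expand each IPS patch into a map from offset to written byte, then compare the maps on their shared addresses.

```python
def written_bytes(patch: bytes) -> dict:
    """Expand an IPS patch into an {offset: byte} map of everything it writes."""
    assert patch[:5] == b"PATCH"
    out, i = {}, 5
    while patch[i:i+3] != b"EOF":
        offset = int.from_bytes(patch[i:i+3], "big")
        size = int.from_bytes(patch[i+3:i+5], "big")
        i += 5
        if size == 0:  # RLE record: 2-byte run length, then 1 repeated byte
            run = int.from_bytes(patch[i:i+2], "big")
            data = patch[i+2:i+3] * run
            i += 3
        else:
            data, i = patch[i:i+size], i + size
        for k, b in enumerate(data):
            out[offset + k] = b
    return out

def same_content(p1: bytes, p2: bytes) -> float:
    """Share of jointly patched addresses where both patches write the same byte."""
    a, b = written_bytes(p1), written_bytes(p2)
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    return sum(a[o] == b[o] for o in shared) / len(shared)
```

A score near 1.0 would flag pairs like “Millennium Mario” and “Super Mario Bros. – Remix 2” for closer scrutiny; a visual match with a low score would indicate changes in the same place but not the same changes.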

Again, however, this is just the beginning of the analysis — or more broadly of an encounter between DH and CCS. Ideally, the dataset would be expanded beyond ROMhacking.net’s database; other online communities would be mined for data; and, above all, more integrative tools would be developed for correlating social network graphs and diff-maps, for correlating community and code. Perhaps a crowdsourced approach to some of this would be appropriate; for what it’s worth, and in case anyone is inclined to contribute, my data and the interactive Tableau charts are linked above. But the real work, I suspect, lies in building the right tools for the job, and this will clearly not be an easy task. Alas, like digital seriality itself, this is work in progress, and thus it remains work “to be continued”…

Thanks finally to Eric Monson, Angela Zoss, Victoria Szabo, Patrick LeMieux, Max Symuleski, and the participants in the Fall 2014 “Historical & Cultural Visualization Proseminar 1” at Duke University for the various sorts of help, feedback, and useful tips they offered on this project!

Profanity TV / Digital Humanities: Independent Studies Final Presentation

On Tuesday, July 17 (6:00 pm, room 615 of the “Conti-Hochhaus” at Königsworther Platz 1), the participants in my independent studies course on “Digital Media and Humanities Research” will present their projects at an event that is open to all interested parties. Linda Kötteritzsch, Julia Schmedes, and Mandy Schwarze will discuss their blog, “Bonfire of the Televised Profanities,” and its significance at the intersection of TV studies and digital media, while Urthe Rehmstedt and Maren Sonnenberg will approach questions around the digital humanities through the medium of the video essay. Spread the word, and check it out if you can!