Out Now: ETC Media 110


I am proud to have a piece on “Pre-Sponsive Gestures” and the work of French media artist Grégory Chatonsky included in the new issue of the Montreal-based ETC Media. Looks like a great issue, and happy to be in such good company!

CURRENT ISSUE // 110
GRÉGORY CHATONSKY: APRÈS LE RÉSEAU / AFTER THE NETWORK

Issue 110 of ETC MEDIA is dedicated to Grégory Chatonsky, who has curated the form and content of this special issue. A Montreal resident for the last ten years, the artist is a pioneer of net art, founding Incident.net in 1994, and an unflagging explorer of the relationships between technology and anonymous existence. In this issue, the artist and a few friends (artists, philosophers, art historians, and art critics) reconsider the last two decades of experimentation, a time in which the world drastically changed through the widespread use of the Internet to reach a digital omnipresence that heralds a near extinction. Divided into three sections—“infinitude,” “hyperproduction,” “without ourselves”—ETC MEDIA becomes a platform for navigating our era and gaining a better understanding of a future whose portents remain deeply ambivalent—promising and threatening all at once. Rather than being reduced to trendy notions often misunderstood by the contemporary art milieu, the concepts of the post-digital, accelerationism, and speculative materialism constellate a world in the process of perishing and being born.

Collaborators

Grégory Chatonsky
Eve K. Tremblay
Pau Waelder
Bertrand Gervais and Arnaud Regnauld
Shane Denson
DeForrest Brown Jr.
Goliath Dyèvre
Pierre Cassou-Noguès
Erik Bordeleau
Nora N. Khan
Dylan Trigg
Pierre-Alexandre Fradet
Jussi Parikka

Frankenstein@200


Happy to be on the steering committee for Frankenstein@200 — a year-long series of events taking place at Stanford in 2018. I’ll be participating in a number of ways, including talks and several courses related to Frankenstein. I’ll post details here in due time. Also be sure to check out the project website, which is still under construction but is already chock-full of announcements and constantly being updated.

The year 2018 marks the 200th anniversary of the publication of Mary Shelley’s novel Frankenstein. The novel is eerily relevant today as we face ethical dilemmas around the appropriate use of stem cells, questions about organ donation and organ harvesting, and animal-to-human transplants. Additionally, the rise of artificial intelligence portends an uncertain future for the boundaries between machines and humans. Frankenstein@200 will be a year-long series of academic courses and programs, including a film festival, a play, a lecture series, and an international Health Humanities Conference, that will examine the numerous moral, scientific, sociological, ethical, and spiritual dimensions of the work, and why Dr. Frankenstein and his monster still capture the moral imagination today. This project will be sponsored by the Stanford Medicine & the Muse Program in partnership with the Stanford Humanities Center, the Stanford Arts Institute, the Office of Religious Life, the Vice Provost for Teaching and Learning, Stanford Continuing Studies, the Cantor Arts Center, the Department of Art & Art History, and the Center for Biomedical Ethics.

Out Now: Network Ecologies


Network Ecologies is a great new open-access collection edited by Amanda Starling Gould and Florian Wiencek and published by the Duke Franklin Humanities Institute. The collection takes advantage of the Scalar publishing platform to include a variety of media alongside scholarly texts. Among other things, it includes a collection of artworks by Karin Denson and myself, which we developed for an exhibit at Duke in 2015 (also organized by Amanda Starling Gould) and which grew out of a collaboration with the Duke S-1: Speculative Sensation Lab. There is also an archive of videos from a 2013 symposium, including contributions from Jussi Parikka, Mark Hansen, Stephanie Boluk, Patrick LeMieux, and many others. Lots of great things to discover here–check it out!


Post-Cinema AR


The augmented reality piece featured on the cover of Post-Cinema: Theorizing 21st-Century Film (http://reframe.sussex.ac.uk/post-cinema/), a collaborative piece made by Karin Denson and me, was displayed recently at a glitch-oriented gallery show organized by some nice people associated with Savannah College of Art and Design.

Try it out for yourself here: http://reframe.sussex.ac.uk/post-cinema/artwork/.

After.Video at Libre Graphics 2016 in London


Recently, I posted about a project called after.video, which contains an augmented reality (AR) glitch/video/image-based theory piece that Karin Denson and I collaborated on. It has now been announced that the official launch of after.video, Volume 1: Assemblages — a “video book” consisting of a paperback book and video elements stored on a Raspberry Pi computer packaged in a VHS case, which will also be available online — will take place at the Libre Graphics Meeting 2016 in London (Sunday, April 17th, at 4:20pm).

Coming Soon: after.video


I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).

after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Here’s some more info about the book from the OpenMute Press site:

Theorising a World of Video

after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Table of Contents

Control Societies 
Peter Woodbridge + Gary Hall + Clare Birchall
Scannable images: materialities of Post-Cinema after Video 
Karin + Shane Denson
Isistanbul 
Serhat Köksal
The Crying Selfie
Rózsa Zita Farkas
Guided Meditation 
Deborah Ligotrio
Contingent Feminist Tacticks for Working with Machines 
Lucia Egaña Rojas
Capturing the Ephemeral and Contestational 
Eric Kluitenberg
Surveillance Assemblies 
Adnan Hadzi
You Spin me Round – Full Circle 
Andreas Treske

Editorial Collective

Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)

Tech Team

Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer

Producers

Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3
http://www.gnu.org/licenses/gpl-3.0.html
Language: English
Assembly On-demand
OpenMute Press

Acknowledgements

Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Thanks to

Joscha Jaeger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute

http://www.metamute.org/shop/openmute-press/after.video

Speculative Data: Full Text, MLA 2016 #WeirdDH


Below you’ll find the full text of my talk from the Weird DH panel organized by Mark Sample at the 2016 MLA conference in Austin, Texas. Other speakers on the panel included Jeremy Justus, Micki Kaufman, and Kim Knight.

***

Speculative Data: Post-Empirical Approaches to the “Datafication” of Affect and Activity

Shane Denson, Duke University

A common critique of the digital humanities questions the relevance (or propriety) of quantitative, data-based methods for the study of literature and culture; in its most extreme form, this type of criticism insinuates a complicity between DH and the neoliberal techno-culture that turns all human activity, if not all of life itself, into “big data” to be mined for profit. Now, it may sound from this description that I am simply setting up a strawman to knock down, so I should admit up front that I am not wholly unsympathetic to the critique of datafication. But I do want to complicate things a bit. Specifically, I want to draw on recent reconceptions of DH as “deformed humanities” – as an aesthetically and politically invested field of “deformance”-based practice – and describe some ways in which a decidedly “weird” DH can avail itself of data collection in order to interrogate and critique “datafication” itself.


My focus is on work conducted in and around Duke University’s S-1: Speculative Sensation Lab, where literary scholars, media theorists, artists, and “makers” of all sorts collaborate on projects that blur the boundaries between art and digital scholarship. The S-1 Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”


One such project, Manifest Data, uses a piece of “benevolent spyware” that collects and parses data about personal Internet usage in such a way as to produce 3D-printable sculptural objects, thus giving form to data and reclaiming its personal value from corporate cooptation. In a way that is both symbolic and material, this project counters the invisibility and “naturalness” of mechanisms by which companies like Google and Facebook expropriate value from the data we produce. Through a series of translations between the digital and the physical—through a multi-stage process of collecting, sculpting, resculpting, and manifesting data in virtual, physical, and augmented spaces—the project highlights the materiality of the interface between human and nonhuman agencies in an increasingly datafied field of activity. (If you’re interested in this project, which involves “data portraits” based on users’ online activity and even some weird data-driven garden gnomes designed to dispel the bad spirits of digital capital, you can read more about it in the latest issue of Hyperrhiz.)
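The multi-stage translation described above (collecting usage data, then giving it sculptural form) can be sketched in miniature. This is a hypothetical illustration, not the S-1 Lab's actual pipeline: the visit counts are fabricated, and the `data_star` mapping from activity shares to spoke lengths is an assumption standing in for the project's real data-to-mesh process.

```python
# Hypothetical sketch of a data-to-form translation in the spirit of
# Manifest Data: per-domain visit counts (fabricated here) become spoke
# lengths of a star polygon that could seed a 3D-printable outline.
# All names and mappings are illustrative assumptions.

import math

visits = {"email": 120, "news": 45, "social": 300, "search": 80}

def data_star(counts, base=1.0, scale=2.0):
    """Return (x, y) spoke tips: each domain's share of total activity
    sets the length of one spoke of the sculptural outline."""
    total = sum(counts.values())
    points = []
    for i, (domain, n) in enumerate(sorted(counts.items())):
        angle = 2 * math.pi * i / len(counts)  # spokes evenly spaced
        r = base + scale * (n / total)         # more activity, longer spoke
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

for (x, y) in data_star(visits):
    print(f"{x:.3f} {y:.3f}")
```

The point of such a mapping, as in the project itself, is that the resulting form is legible as a portrait of one's own activity rather than as an abstract corporate asset.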


Another ongoing project, about which I will say more in a moment, uses data collected through (scientifically questionable) biofeedback devices to perform realtime collective transformations of audiovisual materials, opening theoretical notions of what Steven Shaviro calls “post-cinematic affect” to robustly material, media-archaeological, and aesthetic investigations.


These and other projects, I contend, point the way towards a truly “weird DH” that is reflexive enough to suspect its own data-driven methods but not paralyzed into inactivity.

Weird DH and/as Digital Critical (Media) Studies:

So I’m trying to position these projects as a form of weird digital critical (media) studies, designed to enact and to reflect on, in increasingly self-reflexive ways, the use of digital tools and processes for interrogating the material, cultural, and medial parameters of life in digital environments.


Using digital techniques to reflect on the affordances and limitations of digital media and interfaces, these projects are close in spirit to new media art, but they are also aligned with practices and theories of “digital rhetoric,” as described by Doug Eyman, with Gregory Ulmer’s “electracy,” or with Casey Boyle’s posthuman rhetoric of multistability, which celebrates the rhetorical affordances of digital glitches in exposing the affordances and limitations of computational media in the broader realm of an interagential relational field that includes both humans and nonhumans. In short, these projects enact what we might call, following Stanley Cavell, the “automatisms” of digital media – the generative affordances and limitations that are constantly produced, reproduced, and potentially transformed or “deformed” in creative engagements with media. Digital tools are used in such a way as to problematize their very instrumentality, hence moving towards a post-empirical or post-positivistic form of datafication as much as towards a post-instrumental digitality.


Algorithmic Nickelodeon / Datafied Attention:

My key example is a project tentatively called the “algorithmic nickelodeon.” Here we use consumer-grade EEG headsets to interrogate the media-technical construction and capture of human attention, and thus to complicate datafication by subjecting it to self-reflexive, speculative, and media-archaeological operations. The devices in question cost about $100 and are marketed as tools for improving concentration, attention, and memory. The headset measures a variety of brainwave activity and, by means of a proprietary algorithm, computes values for “attention” and “meditation” that can be tracked and, with the help of software applications, trained and supposedly optimized. In the S-1 Lab, we have sought to tap into these processes in order not just to criticize the scientifically dubious nature of these claims but rather to probe and better understand the nature of the automatisms and interfaces taking place here and in media of attention more generally. Specifically, we have designed a film- and media-theoretical application of the apparatus, which allows us to think early and contemporary moving images together, to conceive pre- and post-cinema in terms of their common deviations from the attention economy of classical cinema, and to reflect more broadly on the technological-material reorganizations of attention involved in media change. This is an emphatically experimental (that is, speculative, post-positivistic) application, and it involves a sort of post-cinematic reenactment of early film’s viewing situations in the context of traveling shows, vaudeville theaters, and nickelodeons. With the help of a Python script written by lab member Luke Caldwell, a group of viewers wearing the Neurosky EEG devices influence the playback of video clips in real time, for example changing the speed of a video or the size of the projected image in response to changes in attention as registered through brain-wave activity.
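The core of the setup described above can be sketched in a few lines of Python. To be clear, this is a hypothetical reconstruction, not Luke Caldwell's actual script: `read_attention()` simulates the headset's algorithmically computed 0–100 "attention" value rather than reading live Neurosky hardware, and the mapping from attention to playback speed is an illustrative assumption.

```python
# Hypothetical sketch of the EEG-to-playback mapping described above.
# The real script reads live Neurosky headsets; here the headset is
# simulated, and the attention-to-speed curve is an assumption.

import random

def read_attention():
    # Stand-in for one headset's computed "attention" value (0-100).
    return random.randint(0, 100)

def playback_speed(attention, baseline=1.0):
    # The projectionist's trick, automated: full attention (100) plays
    # the clip at baseline speed; flagging attention (0) speeds it to 2x.
    return baseline * (2.0 - attention / 100.0)

# Average across a group of viewers, as in the collective reenactment,
# then update the projection each cycle.
readings = [read_attention() for _ in range(4)]
mean_attention = sum(readings) / len(readings)
print(f"attention={mean_attention:.0f}, speed={playback_speed(mean_attention):.2f}x")
```

Averaging across headsets is one plausible way to make the manipulation collective; a real implementation would poll the devices continuously and feed the value to the video player in real time.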

At the center of the experimentation is the fact of “time-axis manipulation,” which Friedrich Kittler highlights as one of the truly novel affordances of technical media, like the phonograph and cinema, that arose around 1900 and marked, for him, a radical departure from the symbolic realms of pre-technical arts and literature. Now it became possible to inscribe “reality itself,” or to record a spectrum of frequencies (like sound and light) directly, unfiltered through alphabetic writing; and it became possible as well to manipulate the speed or even playback direction of this reality.


Recall that the cinema’s standard of 24 fps only solidified and became obligatory with the introduction of sound, as a solution to a concrete problem introduced by the addition of a sonic register to filmic images. Before the late 1920s, and especially in the first two decades of film, there was a great deal of variability in projection speed, and this was “a feature, not a bug” of the early cinematic setup. Kittler writes: “standardization is always upper management’s escape from technological possibilities. In serious matters such as test procedures or mass entertainment, TAM [time-axis manipulation] remains triumphant. […] frequency modulation is indeed the technological correlative of attention” (Gramophone Film Typewriter 34-35). Kittler’s pomp aside, his statement highlights a significant fact about the early film experience: Early projectionists, who were simultaneously film editors and entertainers in their own right, would modulate the speed of their hand-cranked apparatuses in response to their audience’s interest and attention. If the audience was bored by a plodding bit of exposition, the projectionist could speed it up to get to a more exciting part of the movie, for example. Crucially, though: the early projectionist could only respond to the outward signs of the audience’s interest, excitement, or attention – as embodied, for example, in a yawn, a boo, or a cheer.


But with the help of an EEG, we can read human attention – or some construction of “attention” – directly, even in cases where there is no outward or voluntary expression of it, and even without its conscious registration. By correlating the speed of projection to these inward and involuntary movements of the audience’s neurological apparatus, such that low attention levels cause the images to speed up or slow down, attention is rendered visible and, to a certain extent, opened to conscious and collective efforts to manipulate it and the frequency of images now indexed to it.

According to Hugo Münsterberg, who wrote one of the first book-length works of film theory in 1916, cinema’s images anyway embody, externalize, and make visible the faculties of human psychology; “attention,” for example, is said to be embodied by the close-up. With our EEG setup, we can literalize Münsterberg’s claim by correlating higher attention levels with a greater zoom factor applied to the projected image. If the audience pays attention, the image grows; if attention flags, the image shrinks. But this literalization raises more questions than it answers, it would seem. On the one hand, it participates in a process of “datafication,” turning brain wave patterns into a stream of data called “attention,” but whose relation to attention in ordinary senses is altogether unclear. But this datafication simultaneously opens up a space of affective or aesthetic experience in which the problematic nature of the experimental “set-up” announces itself to us in a self-reflexive doubling: we realize suddenly that “it’s a setup”; “we’ve been framed” – first by the cinema’s construction of attentive spectators and now by this digital apparatus that treats attention as an algorithmically computed value.
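The zoom literalization reduces to a simple linear mapping. A minimal sketch, with an illustrative zoom range (the actual scaling used in the lab setup is not specified here, and the function name is an assumption):

```python
# Münsterberg literalized: the headset's "attention" value (0-100)
# scales the projected image. Zoom range is an illustrative assumption.

def zoom_factor(attention, min_zoom=0.5, max_zoom=1.5):
    # Clamp to the device's nominal range, then interpolate linearly:
    # full attention fills out the image, flagging attention shrinks it.
    a = max(0, min(100, attention))
    return min_zoom + (max_zoom - min_zoom) * (a / 100.0)

for a in (0, 50, 100):
    print(f"attention={a} -> zoom={zoom_factor(a):.2f}")
```

The choice of a linear ramp is itself part of the "setup": any such curve is an algorithmic construction of "attention," which is precisely what the experiment puts on display.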

So in a way, the apparatus is a pedagogical/didactic tool: it not only allows us to reenact (in a highly transformed manner) the experience of early cinema, but it also helps us to think about the construction of “attention” itself in technical apparatuses both then and now. In addition to this function, it also generates a lot of data that can indeed be subjected to statistical analysis, correlation, and visualization, and that might be marshaled in arguments about the comparative medial impacts or effects of various media regimes. Our point, however, remains more critical, and highly dubious of any positivistic understanding of this data. The technocrats of the advertising industry, the true inheritors of Münsterberg the industrial psychologist, are anyway much more effective at instrumentalizing attention and reducing it to a psychotechnical variable. With a sufficiently “weird” DH approach, we hope to stimulate a more speculative, non-positivistic, and hence post-empirical relation to such datafication. Remitting contemporary attention procedures to the early establishment of what Kittler refers to as the “link between physiology and technology” (73) upon which modern entertainment media are built, this weird DH aims not only to explore the current transformations of affect, attention, and agency – that is, to study their reconfigurations – but also potentially to empower media users to influence such configuration, if only on a small scale, rather than leave it completely up to the technocrats.