Creation scene and aftermath, as described in Mary Shelley’s Frankenstein (Chapter 5, 1831 edition) and interpreted by Cris Valenzuela’s text-to-image machine-learning demo, which utilizes AttnGAN (Attentional Generative Adversarial Networks).

Made for the upcoming Videographic Frankenstein exhibition at the Department of Art & Art History, Stanford University (Sept. 26 – Oct. 26, 2018).

I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson). Volume 1: Assemblages is a “video book” — a paperback book plus video stored on a Raspberry Pi computer, packaged in a VHS case. It will also be available as online video and as a book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Here’s some more info about the book from the OpenMute Press site:

Theorising a World of Video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Table of Contents

Control Societies 
Peter Woodbridge + Gary Hall + Clare Birchall
Scannable Images: Materialities of Post-Cinema after Video 
Karin + Shane Denson
Serhat Köksal
The Crying Selfie
Rózsa Zita Farkas
Guided Meditation 
Deborah Ligotrio
Contingent Feminist Tacticks for Working with Machines 
Lucia Egaña Rojas
Capturing the Ephemeral and Contestational 
Eric Kluitenberg
Surveillance Assemblies 
Adnan Hadzi
You Spin me Round – Full Circle 
Andreas Treske

Editorial Collective

Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)

Tech Team

Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer


Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: dual licensed under the terms of the MIT license and the GPL3
Language: English
Assembly On-demand
OpenMute Press


Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Thanks to

Joscha Jäger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute

The Gnomes Are Back: Business cARd 2.0


Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally took the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

[Image: front of the business card, with QR code]

The QR code on the front now redirects the browser to, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

[Image: back of the business card, the AR target]

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)
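The Manifest Data kit’s actual script isn’t reproduced here, but the general idea — deterministically mapping browsing data to visual features — might be sketched like this. The hashing scheme and the point format are my own illustrative assumptions, not the S-1 Lab’s code:

```python
import hashlib

def portrait_points(domains):
    """Map each visited domain to a deterministic (x, y, intensity)
    triple by hashing its name -- a stand-in for the real kit's logic."""
    points = []
    for domain in domains:
        digest = hashlib.sha256(domain.encode("utf-8")).digest()
        x = digest[0] / 255          # horizontal position, 0..1
        y = digest[1] / 255          # vertical position, 0..1
        intensity = digest[2]        # gray value, 0..255
        points.append((round(x, 3), round(y, 3), intensity))
    return points

history = ["duke.edu", "wikitude.com", "wordpress.com"]
for x, y, intensity in portrait_points(history):
    print(f"({x}, {y}) -> {intensity}")
```

Because the mapping is a pure hash of the input, the same browsing history always yields the same “portrait” — which is what lets a painted or printed version remain a stable AR target.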

“Portrait of the Artist as a Data Cloud I + II” and “The 9”

The New Krass


Portrait of the Artist
as a Data Cloud I + II

Karin + Shane Denson
20″ x 20″
Acrylic on canvas

These “Data Portraits” are data-generated objects based on personal Internet usage, processed with a custom Python script written by Luke Caldwell and hand-painted by Karin Denson. Scanning one of the nine QR codes on the right will unlock augmented reality (AR) scenarios that will be superimposed on the Data Portraits. The scenarios, some of which are interactive, explore various facets of contemporary interactions between physical, virtual, and augmented realities.

The 9

Karin + Shane Denson
20″ x 20″
Acrylic on canvas

Scan one of the 9 QR codes and point your device at the two “Data Portraits” on the left. Each of the QR codes triggers a different set of augmented reality (AR) contents on the Data Portraits. Experiment: try touching, listening to, or moving the objects on your screen.

View original post

Audiovisualities Lab — Film Screening and Project Showcase


On April 8, 2015, I will be participating in this event, hosted by the Duke Audiovisualities Lab. During the “project showcase” portion of the event, several of the people involved in Bill Seaman and John Supko‘s Generative Media Authorship seminar — including Eren Gumrukcuoglu, Aaron Kutnick, and myself — will be presenting generative works. I will be showing some of the databending/glitch-video work I’ve been doing lately (see, for example, here and here). Refreshments and drinks will be served!

Manifest Data @ Media Arts + Sciences Rendez-Vous


This Thursday, March 5, 2015 (4:15pm, Bay 10, Smith Warehouse at Duke University), members of the S-1 Speculative Sensation Lab, including Amanda Starling Gould, Luke Caldwell, David Rambo, and myself, will be presenting our collaborative art/theory project Manifest Data. As usual, there will be drinks and light refreshments!

Emergence Lab at Duke Media Arts + Sciences Rendezvous


This Thursday, February 26, 2015, the Emergence Lab (headed by media artist Bill Seaman and composer John Supko) will be taking over the Duke Media Arts + Sciences Rendezvous. If you don’t know their work already, be sure to check out Seaman and Supko’s collaborative album s_traits (also available on iTunes and elsewhere), which has been getting a lot of attention in the media lately — including a mention in the New York Times list of top classical recordings of 2014:

‘S_TRAITS’ Bill Seaman, media artist; John Supko, composer (Cotton Goods). This hypnotic disc is derived from more than 110 hours of audio sourced from field recordings, digital noise, documentaries and piano music. A software program developed by the composer John Supko juxtaposed samples from the audio database into multitrack compositions; he and the media artist Bill Seaman then finessed the computer’s handiwork into these often eerily beautiful tracks. VIVIEN SCHWEITZER
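Their actual software is far more sophisticated, but the basic gesture the review describes — drawing samples from a large audio database and juxtaposing them in a multitrack layout — can be caricatured in a few lines. The track/slot grid and the sample names here are my own invention, purely illustrative:

```python
import random

def juxtapose(samples, tracks=4, slots=8, seed=None):
    """Fill a tracks x slots grid with randomly chosen sample names,
    loosely mimicking a generative multitrack layout."""
    rng = random.Random(seed)
    return [[rng.choice(samples) for _ in range(slots)] for _ in range(tracks)]

pool = ["field_rec_017", "piano_loop_03", "doc_voice_88", "noise_burst_2"]
for track in juxtapose(pool, tracks=2, slots=4, seed=42):
    print(" | ".join(track))
```

The human step the review mentions — Supko and Seaman “finessing the computer’s handiwork” — is exactly what a toy like this leaves out: the grid is raw material, not the finished track.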

In their Generative Media Authorship seminar, which I have been auditing this semester, we have been exploring similar (and wildly different) methods for creating generative artworks and systems in a variety of media, including text, audio, and images in both analog and digital forms. The techniques and ideas we’ve been developing there have dovetailed nicely with the work that Karin Denson and I have been doing lately with the S-1 Lab as well (in particular, the generative sculpture and augmented reality pieces we’ve been making for the lab’s collaborative Manifest Data project). I have experimented with writing Markov chains in Python and JavaScript, turning text into sound, making sound out of images, and making movies out of all of the above — and I have witnessed people with far greater skills than mine do some amazing things with computers, cameras, numbers, books, and fishtanks!
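For the curious, a word-level Markov chain of the kind I’ve been experimenting with fits in a few lines of Python. This is a generic textbook sketch, not the exact code from the seminar:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random starting key, emitting up to `length` words."""
    random.seed(seed)
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed successor
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Fed a source text, `generate(build_chain(text))` produces strings that are locally plausible but globally unmoored — the same quality that makes glitched video feel at once familiar and alien.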

On Thursday (at 4:15pm) several of us will be speaking about our generative experiments and works-in-progress. I will be talking about video glitches and post-cinema, as discussed in my two previous blog posts (here and here), while I am especially excited to see S-1 collaborator Aaron Kutnick‘s demonstration of his raspberry pi-based eidetic camera and to hear composer Eren Gumrukcuoglu‘s machine-based music. I also look forward to meeting Duke biology professor Sönke Johnsen and composer Vladimir Smirnov. All around, this promises to be a great event, so check it out if you’re in the area!