FrankensteinsDeepDream

Creation scene and aftermath, as described in Mary Shelley’s Frankenstein (Chapter 5, 1831 edition) and interpreted by Cristóbal Valenzuela’s text-to-image machine-learning demo (http://t2i.cvalenzuelab.com) utilizing AttnGAN (Attentional Generative Adversarial Networks).

Made for the upcoming Videographic Frankenstein exhibition at the Department of Art & Art History, Stanford University (Sept. 26 – Oct. 26, 2018). More info here: https://art.stanford.edu/exhibitions/videographic-frankenstein


Coming Soon: after.video


I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).

after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Here’s some more info about the book from the OpenMute Press site:

Theorising a World of Video

after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Table of Contents

Control Societies 
Peter Woodbridge + Gary Hall + Clare Birchall
Scannable Images: Materialities of Post-Cinema after Video 
Karin + Shane Denson
Isistanbul 
Serhat Köksal
The Crying Selfie
Rózsa Zita Farkas
Guided Meditation 
Deborah Ligorio
Contingent Feminist Tactics for Working with Machines 
Lucia Egaña Rojas
Capturing the Ephemeral and Contestational 
Eric Kluitenberg
Surveillance Assemblies 
Adnan Hadzi
You Spin me Round – Full Circle 
Andreas Treske

Editorial Collective

Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)

Tech Team

Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer

Producers

Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3
http://www.gnu.org/licenses/gpl-3.0.html
Language: English
Assembly On-demand
OpenMute Press

Acknowledgements

Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Thanks to

Joscha Jäger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute

http://www.metamute.org/shop/openmute-press/after.video

The Gnomes Are Back: Business cARd 2.0


Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally took the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

[Image: business card, front]

The QR code on the front now redirects the browser to shanedenson.com, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

[Image: business card, back]

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)

“Portrait of the Artist as a Data Cloud I + II” and “The 9”

The New Krass


Portrait of the Artist
as a Data Cloud I +II

Karin + Shane Denson
20″ x 20″
Acrylic on canvas

These “Data Portraits” are data-generated objects based on personal Internet usage, processed with a custom Python script written by Luke Caldwell and hand-painted by Karin Denson. Scanning one of the nine QR codes on the right will unlock augmented reality (AR) scenarios that will be superimposed on the Data Portraits. The scenarios, some of which are interactive, explore various facets of contemporary interactions between physical, virtual, and augmented realities.

The 9

Karin + Shane Denson
20″ x 20″
Acrylic on canvas

Scan one of the 9 QR codes and point your device at the two “Data Portraits” on the left. Each of the QR codes triggers a different set of augmented reality (AR) contents on the Data Portraits. Experiment: try touching, listening to, or moving the objects on your screen.


Audiovisualities Lab — Film Screening and Project Showcase


On April 8, 2015, I will be participating in this event, hosted by the Duke Audiovisualities Lab. During the “project showcase” portion of the event, several of the people involved in Bill Seaman and John Supko‘s Generative Media Authorship seminar — including Eren Gumrukcuoglu, Aaron Kutnick, and myself — will be presenting generative works. I will be showing some of the databending/glitch-video work I’ve been doing lately (see, for example, here and here). Refreshments and drinks will be served!

Manifest Data @ Media Arts + Sciences Rendez-Vous


This Thursday, March 5, 2015 (4:15pm, Bay 10, Smith Warehouse at Duke University), members of the S-1 Speculative Sensation Lab, including Amanda Starling Gould, Luke Caldwell, David Rambo, and myself, will be presenting our collaborative art/theory project Manifest Data. As usual, there will be drinks and light refreshments!