Band releases album as Linux kernel module…Wait…What?

A band called netcat has released its new album, Cycles Per Instruction, in a number of formats, including what looks to be a first: the album can be compiled as a Linux kernel module. Their story goes something like this: “This repository contains the album’s track data in source files, that (for complexity’s sake) came from .ogg files that were encoded from .wav files that were created from .mp3 files that were encoded from the mastered .wav files which were generated from ProTools final mix .wav files that were created from 24-track analog tape.”
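The repository doesn't say exactly how the build embeds the audio, but the classic way to turn "track data" into compilable source is an xxd-style byte array. A minimal sketch of that idea (the function and filenames here are hypothetical, not netcat's actual build):

```python
# Sketch: render a binary audio file's bytes as a compilable C source file,
# the usual xxd-style trick for embedding "track data in source files".
# Names are hypothetical; netcat's real build system is more involved.

def bytes_to_c_array(data: bytes, name: str) -> str:
    """Render raw bytes as a C unsigned-char array definition."""
    rows = []
    for i in range(0, len(data), 12):
        chunk = data[i:i + 12]
        rows.append("    " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(rows)
    return (
        f"const unsigned char {name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

if __name__ == "__main__":
    # A few fake "audio" bytes stand in for an .ogg file's contents.
    print(bytes_to_c_array(b"OggS\x00\x01", "track_01_ogg"))
```

The resulting array can then be compiled into a kernel module and read back out of a device node, which is the stunt the band is pulling off.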

The band writes:

Cycles Per Instruction combines free improvised performance with software programs that the band wrote to create sound in novel ways. When all is said and done, the computer ends up being a 4th improvising member of the band, fed by data and expressing sound via algorithms.

“Listening to netcat is a stimulating reminder that your brain organically creates electricity” – Loren Chambers

The Internet is an Apt Motherfucker

This piece combines improvisational playing on cello, synth, and drums with three main technological components. The first component is a purpose-built synthesis/sequencer program*. The piece opens with this program layering a base motif 64 times with a random time offset, creating a blurred, textural reference to the original motif that varies with each performance. The second component is a generative Markov model of phoneme sequences derived from Wikipedia and a collection of scientific papers*. We use the model to generate novel, incoherent speech sounds. The third component is a sentiment-aware model of statements of preference derived from people's actual statements of preference on the internet*. We use the model to generate positive/negative sentiment couplets, recited in synthesized speech.
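The liner notes don't include the program's source, but the 64-layer offsetting described for the opening can be sketched quite simply. A minimal version, assuming the motif is a mono buffer of float samples and the offset range is arbitrary:

```python
import random

def layer_motif(motif, layers=64, max_offset=4000, seed=None):
    """Mix `layers` copies of a mono sample buffer, each shifted by a
    random time offset, into one blurred texture of the original motif.
    `max_offset` (in samples) and the 1/layers scaling are assumptions."""
    rng = random.Random(seed)
    out = [0.0] * (len(motif) + max_offset)
    for _ in range(layers):
        off = rng.randrange(max_offset)
        for i, s in enumerate(motif):
            out[off + i] += s / layers   # scale so the mix stays bounded
    return out

if __name__ == "__main__":
    motif = [1.0, 0.5, -0.5, -1.0] * 100   # stand-in for the base motif
    texture = layer_motif(motif, seed=42)
    print(len(texture))
```

Because the offsets are drawn fresh each run, the resulting texture differs with every performance, which matches the band's description.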


This piece combines human improvisers with custom software* that generates sound by analyzing real-time internet traffic. Our goal is to fuse computer network communication with human communication. We capture network traffic and send it to custom software that converts it into MIDI command messages. These MIDI messages then drive software synthesizers that create the sounds on the recording. Each synthesizer has a specific, human-configured sound and set of tunings, but the timing and individual note selection are dictated by the timing and trajectory of packets moving through the network. We improvise with the computers by live-mixing the network synthesizers, as well as on our acoustic/electric instruments.
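The packet-to-MIDI conversion could take many forms; the band doesn't publish theirs. As a rough illustration, here is one way to map a packet's size to a note-on message (the pentatonic scale and the size-to-pitch rule are assumptions, not netcat's actual tuning; real capture would come from a pcap library):

```python
# Sketch: map captured packet sizes to 3-byte MIDI note-on messages.
# In the band's setup the human-configured tunings live in the synths;
# here a major pentatonic scale stands in for one such tuning.

PENTATONIC = [0, 2, 4, 7, 9]  # intervals of a major pentatonic scale

def packet_to_note_on(size, base_note=60, channel=0, velocity=100):
    """Pick a pitch from the packet size and build a MIDI note-on message."""
    degree = PENTATONIC[size % len(PENTATONIC)]
    octave = (size // 512) % 3               # bigger packets -> higher octaves
    note = base_note + 12 * octave + degree
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

if __name__ == "__main__":
    for size in (64, 571, 1500):             # typical packet sizes
        print(packet_to_note_on(size).hex())
```

Timing falls out for free: emit each message the moment its packet arrives, and the network's rhythm becomes the music's rhythm.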

Approximating the Circumference of the Earth

This piece is a structured improvisation for cello, synth, and chango. The chango* is a novel computer musical instrument that uses computer vision to convert patterns of light into patterns of sound. The chango player associates a different tone with each different part of a frame of video and the light intensity in a tone’s region of the frame dictates its volume. Selectively illuminating parts of the frame plays tones and tone clusters with sound intensity proportional to the light intensity.
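The chango's mapping can be sketched in a few lines: split each video frame into regions, assign one tone per region, and let mean brightness set that tone's volume. The grid size and the 0-255 pixel range below are assumptions; the instrument's real region layout isn't described:

```python
# Sketch of the chango's light-to-sound mapping: one tone per grid region,
# with the region's mean brightness dictating the tone's amplitude.

def region_volumes(frame, rows=2, cols=2):
    """Return per-region amplitudes in [0, 1] from a 2-D list of 0-255 pixels."""
    h, w = len(frame), len(frame[0])
    rh, cw = h // rows, w // cols
    volumes = []
    for r in range(rows):
        for c in range(cols):
            cells = [frame[y][x]
                     for y in range(r * rh, (r + 1) * rh)
                     for x in range(c * cw, (c + 1) * cw)]
            volumes.append(sum(cells) / len(cells) / 255.0)
    return volumes

if __name__ == "__main__":
    # Illuminate only the top-left quarter of a 4x4 frame: only the
    # first region's tone sounds, at full volume.
    frame = [[255, 255, 0, 0],
             [255, 255, 0, 0],
             [0,   0,   0, 0],
             [0,   0,   0, 0]]
    print(region_volumes(frame))
```

Shining a light across the camera's field of view then sweeps through tone clusters, with loudness tracking the light's intensity, as the description above says.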