Euroshield 1 with Teensy 3.6 and cli builds on Arch Linux
These are basically notes for future me, but maybe they will help someone else doing the same thing.
I just got my lovely new euroshield 1 today and pulled out a teensy 3.6 that has been sitting in a drawer for far too long to pop into it.
The first thing to note: the USB port faces down – just line the pins up with the bottom and let it hang weirdly off the top. The extra pins on the 3.6 aren’t used by the euroshield, but it supports the teensy 3.6 just fine.
I’m skipping all the misdirections and false starts here of course!
The first thing to do is basically follow the arch linux arduino setup notes as written.
That meant for me installing the packages from the repos – arduino-builder is enough for cli builds.
Then add yourself to the uucp and lock groups (lock I think is only needed if you want to run the GUI – I wanted to do this to test so I did!).
Make sure the cdc_acm kernel module is loaded. (It already was for me.)
As I write, the latest version of the IDE is 1.8.10 but teensyduino only supports 1.8.9 so that’s the version I installed.
Follow the instructions in this guide to install the stock arduino and then teensyduino.
That meant for me downloading 1.8.9 from the arduino site and extracting it (tar xvf), downloading the udev rules file and copying it to /etc/udev/rules.d/, and then downloading the linux x64 installer for teensyduino, making it executable with chmod +x and running it.
The installer will ask you to find the arduino directory – point it to the stock arduino you just unpacked.
The reason you can’t just point it at the /usr/share/arduino/ dir and install things directly is that teensyduino is really strict about the installation you’re modifying. The file size of one of the java jars in the arch package is slightly different from the stock build, so it refused to continue. More from the man himself.
After installation is complete copy the teensy directories manually:
hardware/teensy -> /usr/share/arduino/hardware/teensy
hardware/tools -> /usr/share/arduino/hardware/tools
examples/Teensy -> /usr/share/arduino/examples/Teensy
You’ll also need to manually alter the /usr/share/arduino/hardware/teensy/avr/boards.txt file and set defaults for certain options usually selected in the menu of the IDE. Check out this wonderful blog post for more troubleshooting info.
I had to set these values for the teensy 3.6:
teensy36.build.fcpu=240000000
teensy36.build.keylayout=US_ENGLISH
teensy36.build.usbtype=USB_SERIAL
teensy36.build.flags.ldspecs=
But check out the hardware/teensy/avr/boards.txt file and look for the values corresponding to the board you want.
After that, I copied the VCO example from the euroshield downloads on the forum and renamed it vco.ino – to finally build and upload it:
arduino-builder -fqbn teensy:avr:teensy36 -hardware /usr/share/arduino/hardware -tools /usr/share/arduino/tools -tools /usr/share/arduino/tools-builder vco.ino
fqbn is going to be teensy:avr: plus the name of your teensy as found in the boards.txt file.
And, there were sinewaves and triangle waves and everything was good!
I want to port my pulsar synthesis implementation from pippi to this module – with a complex profile of params controlled by the logistic equation, whose seed values are CV controlled from the euroshield inputs.
But that’s for another post!
GUIs and Pippi
One of the projects I’m working on now involves a lot of what I guess you’d call traditional sequencing: rhythms better expressed as pattern fragments than algorithms, pitches and other shapes that are more comfortable expressed on some kind of pianoroll-style grid than typed in a lilypond-style text format.
I used to love the reason UIs for this. The piano roll and drum sequencing GUIs had their limitations (mostly I wanted a fluid grid and more flexibility for working with polyrhythms) but they were really useful UIs.
Last year I decided OK, a piano roll GUI would be a really useful component to have in the projects I’m working on with pippi. Working with MIDI as an intermediary format wasn’t very attractive, so I decided to start working on my own.
It’s not as polished as the reason GUI (for example I have yet to implement dragging a phantom box to select a group of events – what’s that called?) but I can draw in a complex set of events, even snap them to a reconfigurable grid, and then render a block of audio out by running every event through a given pippi instrument, just as if I’d played the sequence with a MIDI controller into astrid directly. Well, better since astrid does on-demand rendering per event, so the result of rendering a sequence with the piano roll has sample-accurate timing. Something I never cared too much about when performing with astrid, but is very nice to have for offline / non-realtime work where I often care very much about perfectly aligning events and segments with each other.
Aside: pippi is a python library for composing computer music. Astrid is the interactive component which supports writing pippi scripts as instruments, and then performing with them via a command interface, MIDI I/O or through a custom zmq message protocol. Years ago it was just part of pippi itself, but when I threw out the python 2 version of pippi based around byte strings as buffers to write the current python 3 version with SoundBuffer classes that wrap memoryviews (among a lot of other improvements and additions), I also threw out the old fragile performance code. Astrid is still fundamentally a just-in-time rather than a hard realtime system – meaning all renders (unless they are scheduled) are done on demand, and bring the latency of the render overhead along with them. There’s a normal inner DSP loop – I’m basing things around JACK now, so the usual JACK callback is where buffers that have been queued to play get mixed together block-by-block, and that’s all the callback does. It actually ends up being a pretty stable approach, and once the render is complete playback is very deterministic – a tight stream through all the buffers in motion at the rate of the current JACK block size. In practice the latency has never been an issue for me, and my approach to performance has long been more that of a conductor than a haptic instrumentalist, so I’m not bothered by the lack of tight sync to external I/O like sweeping a MIDI knob over a filter. It’s quite possible to play a normal synth piano with a MIDI controller without any noticeable latency on a pretty old thinkpad, and if you are manipulating a stream of small grains, you can filter-sweep in realtime to your heart’s content… but it’s not for everyone. It really shines when you want to develop systems which play themselves, helped along through a command interface or maybe a MIDI knob here or there, which is what I’m most interested in.
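That block-by-block mixing idea can be sketched in a few lines of Python. This is a toy mono model – the buffer layout and names are my own, not astrid’s actual JACK callback:

```python
def mix_block(buffers, blocksize):
    """Mix the next `blocksize` frames from every queued buffer into one
    output block, advance each buffer's position, and drop any buffer
    that has been fully played."""
    out = [0.0] * blocksize
    for buf in buffers:
        chunk = buf['frames'][buf['pos']:buf['pos'] + blocksize]
        for i, sample in enumerate(chunk):
            out[i] += sample
        buf['pos'] += blocksize
    # keep only buffers that still have frames left to play
    buffers[:] = [b for b in buffers if b['pos'] < len(b['frames'])]
    return out
```

The callback does nothing else – all the expensive rendering happens outside it, which is what keeps playback deterministic once a render lands on the queue.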
My workflow for non-realtime pieces is basically to do everything with a series of pippi scripts though. The structure of the program isn’t really standard from project to project but there are some patterns I’ve noticed that I’ve started to repeat between projects.
Working backward, there’s almost always some kind of mixture.py script which does the final assembly of all the intermediate sections so far, probably some additional processing to each layer as a whole, and then finally to the mixed output as a whole. (Just minor mastering-type stuff like compression or limiting, or larger scale mix automation on the tracks, etc.)
I tend to build the piece vertically in layers, a lot like you’d do in a traditional DAW – these channels for the drum tracks, these for the bass, etc. Except the channels are scripts which render intermediate WAV files into a stems directory and later get assembled by the mixture script into their final sequence & mixture. I’ll generally have some numbering or naming scheme for the variations that get rendered and through each layer mix and match my favorites – unless I’m working on something which is meant to be run from top to bottom for each render like Amber or any of the Unattended Computer pieces, etc.
Beyond that the specifics get tied very closely to the needs of whatever I’m working on.
My dream is to be able to coordinate much more granular blocks on a traditional DAW timeline, where I can choose from a pool of instrument scripts with the same interfaces and affordances as the current Astrid implementation, but optionally pin renders (probably with seed values) as blocks, just like a normal audio segment in a DAW, or compose sequences at the block level by diving into a piano roll GUI or a rhythm sequencing GUI just like reason. The major difference being of course that all of the blocks would be outputs of fluid scripts which can be regenerated on demand, or on every run, etc.
I started trying to think through a lot of this over the weekend – how it might look, what affordances it might have. Some thoughts I’ve arrived at so far:
The project format should be simple and easily allow its elements to be manipulated externally. A simple directory structure with human-readable text files for sequences and metadata, and clearly labeled PCM audio files for all intermediate blocks and renders.
This could maybe look something like:
/orc – containing individual instrument scripts whose filenames map to the names they can be called by in the GUI, just like the astrid command interface.
/scores – obviously I’m stealing from csound with these names, but I don’t feel this division is restrictive. Score files here are the text versions of fragments which can be edited with the GUIs, but also referred to by name from any instrument script and used for further processing internally.
/blocks – individual renders of segments/blocks could be organized into sub-directories by instrument name, and contain basic timing / position data in the filename itself.
/stems – I’d also like to support a processing pipeline where each channel in the GUI can have a script callback to do processing on a full sequence of blocks. These would be cached by name here.
/stems/drums1.py – could for example be the post-processing script for an individual channel or channel group. I think it makes sense for these to have a different interface and location than the core instruments, but I’m not totally sure.
mix.py – could be the final output script in the pipeline, which would be fed all the stems for one last (optional) processing pass.
I want to approach the GUIs as individual pieces which can be composed together in the main DAW timeline GUI, or used ad hoc without having to create a full blown session. Just want to draw out a chorale passage that can be easily fed back into some arbitrary pippi script? Just fire up the piano roll and do it. Want a graphical interface for composing a short rhythm sequence that’s a little too complex for the built-in ASCII rhythms pippi uses to do cleanly? Fire up the rhythm sequencer GUI, etc.
On the same token I doubt I’ll have any integrated text editor GUI – I have no desire to reimplement vim, and probably other users will prefer to bring their own editors as well. So the GUIs should be able to easily find scripts, and watch them for changes – just like astrid does right now with its command interface.
Still, making a full project should probably look something like astrid new project <project-name> on the command line, and launching GUIs something like astrid daw for the main timeline, or astrid pianoroll for the pianoroll alone, etc. Skeletons for new instruments would be nice too – maybe astrid new instrument <instrument name>, which could create a simple instrument template like:
from pippi import dsp

def play(ctx):
    out = dsp.buffer()
    yield out
The pianoroll is a natural place to start, since I have the GUI begun already, and it’s the most desirable interface for the thing I’m working on now. I’m really excited to tackle the DAW part of this though, which I think will lead to some interesting possibilities on the macro level that I wouldn’t otherwise think of just working with my usual scripts, or a set of fixed-render blocks in Ardour.
Technically There's a Computer Inside
Taking my first steps with the Starling Via (Scanner) today. Whatever the modular version of button mashing is called, that’s what I’m doing so far. Got the 2hp MIDI module spitting out signals coming from a python script running on my synth computer and using that to modulate… something on the Via.
The Koma Field Kit is sending its FM radio out into the Via, and its LFO out pitched up into audio range. The other main source of pitch is the DC motor, which is driven by the output from the Via and mic’d with an induction mic and a contact mic, all of it sitting on top of a speaker which is outputting the main signal from the field kit.
The envelope follower is tracking the aux out on the field kit channels and is patched into the search input of the radio.
The overall rhythm in the shifting timbre is coming from the Via being modulated with the MIDI script. I’m just throwing notes at it willy-nilly, I have no idea what I’m actually telling it to do but this is a fun first step!
The most difficult part of this project so far has been trying to reconcile my desire to keep the entire system fluid and surprising while maintaining a coherent and compelling narrative.
What exactly constitutes a coherent narrative though?
Dangling above the stove, an unseen spider sways on the path of the heated air.
Is this compelling? Is this coherent?
Just next door there was Gary. Below the ground floor unit, his front door was nearly buried, out of sight down a cleanly finished flight of concrete steps hidden behind a pair of overgrown bushes. The warm light from a small cluster of glass bricks that served as his only window would glow in the darker months; the exhaust from a free-standing AC choking out a familiar gust in the brighter months. We all knew Gary somehow. He was gone twice a year for several weeks at a time -- doing research, he said -- but mostly there he was. Buying some lightbulbs at Grace-Ann's, or dottering along the street like a cat on a leash, stopping and talking or watching or listening. More research, probably. Charlie once saw him carry two overfull bags of unshucked corn down those well-kept steps to his basement home. He wasn't seen for three days.
Is this compelling? Is this coherent?
The crow didn't like the way the squirrel ran and ran. The squirrel would run up a tree, and across a roof, and through the bushes and along a wire. Always running and chasing. One day, the crow stopped the squirrel and said "you shouldn't run so much, you will never see where you're going, you can never appreciate where you are." The squirrel didn't listen, and ran off up a tree, along a branch and onto a high wire. The crow flew up above the wire and watched the squirrel run, until he reached an exposed patch of wire, just touching a nearby branch. Later, another squirrel ran past. "You shouldn't run so much" said the crow.
Is this compelling? Is this coherent?
While these little stories may or may not feel coherent or be compelling, they are all fixed. None of them have any fluidity on the page – the worlds they might create are foregone, past-tense – they are set. Read them again and there they are.
We could point to a degree of fluidity that’s always present in these fixed narratives – especially good ones. A good story invites re-reading, and the best always seem to offer up something new despite their ultimate fixity. Most good fixed-media pieces offer this: a musical recording, a film, a TV show, a painting. We can come back to the good stuff. Is that necessarily a quality of the work, or just as much how we change, and the way reading, listening and viewing are creative acts?
Generative works are another thing. The work emerges from the rules that govern it, and those rules could change over time, or modify themselves. A complex structure can emerge from incredibly simple rules. The apparent structure itself can be modulated – rewritten – as simple rules unfold and interact. Retelling a story is a fluid process, the common thread from telling to telling comes from the underlying structure, and the gamut of materials & material processes being worked with. The elements of the story might stay the same, but the retelling can change the story in fundamental and (satisfyingly) surprising ways.
Take this short render of one of the cues from South Seas as a simple example. There is an apparent structure that emerges from this basic rule-set: Given a sequence of events, the events are most likely short (around a second or less) but there is a small chance the event will be long (several seconds or more) – and the next event always begins around halfway through the length of the current event. It’s almost nothing, as far as algorithms go, but it creates a simple phrasing that adds a palatable structure to what would otherwise sound like a monotonous stream of events. It creates a little story, clustering the events into phrases and adding a breath which acts like the consequent of a call-and-response for each phrase. Suddenly out of almost nothing, there’s a little story being told.
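That rule set is small enough to sketch in Python. The parameter values below are my guesses for illustration, not the actual cue script:

```python
import random

def event_lengths(n, short=1.0, long_=6.0, p_long=0.1, seed=None):
    """Mostly short events (around a second or less), with a small
    chance of a long one (several seconds or more)."""
    r = random.Random(seed)
    return [long_ if r.random() < p_long else short * r.uniform(0.5, 1.0)
            for _ in range(n)]

def onsets(lengths):
    """The next event always begins around halfway through the
    length of the current event."""
    out, pos = [], 0.0
    for length in lengths:
        out.append(pos)
        pos += length / 2
    return out
```

A rare long event pushes the next onset far out, which is exactly the "breath" that clusters the short events into phrases.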
Here’s another short render, the only change to the cue script being that the lengths of each event are uniform and much longer. All of the details are different, but the difference we really hear is in that tiny tweak to the pacing of events. This is a simple example but this sort of simple modulation of structure can be incredibly effective, and completely change the way something feels. You could say it tells a different story with the same characters, the same locations etc.
And finally just for comparison, here’s another render using the same event length calculations from the original, but discarding the long events for a uniform set of short events. It has a kind of plodding, relentless feeling that the others don’t have.
Coherency emerges from the structure in part as well as from the act of listening – sound has a capacity for the superposition of coherency that I think language just can’t support. Basically it’s much easier to roll the dice with sound without arriving at incoherent mush. Almost any structure imposed on the sound gets promoted to narrative. This is a beautiful thing, and one of the most compelling reasons for working with generative processes in my opinion. It’s much more of a straightforward process to erase yourself from the whole thing – let the process recompose the story in surprising ways, to take on a life of its own without having to coax it here or there. The whole process feels playful.
So how do you generate a coherent story? Let’s take a couple of famous examples of generative storytelling. Look at madlibs – the approach is to use the grammar of a sentence as a structure to find spots for erasure. You scramble up the particulars: names, places, descriptive modifiers, within an otherwise simple & coherent framework. Even this process erodes coherency – the simplest story can become totally absurd just by shuffling these static elements around. The process actually relies on the erosion of coherency – that’s why the result is funny. That’s what makes it fun.
Another famous example that comes to mind is the “choose your own adventure” stories I grew up with. These aren’t really generative stories though – they’re multiple fixed stories, a series of pre-determined alternate paths. Once you take every path, the potential for variation is completely consumed. There isn’t any real affordance of agency here, just a set of fixed variations that lead to a finite number of linear stories. They’re just a compressed way of writing out several variations of a story, not a real platform for fluid storytelling. Still, that’s not to say they aren’t fun, but they don’t have the endless potential of even a madlibs approach – though they do maintain coherency by lacking true freedom in variation.
Video games seem to take a sort of hybrid approach. There is the total freedom of navigating in an open world, but stories are always told in a choose your own adventure style. There are a fixed number of predetermined narrative paths that can be explored. They may be far more numerous than the traditional choose your own adventure, but there’s still always the hand-off moment where the fixed story takes over – you have the agency to choose when you experience each part of it, and which part of which story you might want to participate in, but the story itself unfolds in a goal-oriented choose your own adventure style. Open this door and you get eaten by the bear! Open the other door and you find a great treasure! That said, I have extremely limited experience with games and I suspect there are much more creative approaches to storytelling in practice out there.
My working theory at the moment is that the success of a generative story lies somewhere in its capacity for interaction – that is, the observer-participant needs real agency for it to have freedom and fluidity without totally destroying coherency (This almost comes for free with a piece of generative music.)
I keep coming back to the dynamic I remember from my middle school days of playing Dungeons and Dragons. The dungeon-master isn’t really there to tell you a story, their primary role is to facilitate the agency of the players, and to illustrate the (dice-roll-driven but also pre-composed in part) reactions of the story-world to their decisions. This feels like an appropriate role for a story-telling-system that doesn’t want to dictate the story per se, but create the potential for a story – which is ultimately constructed in its reading through the choices and observations of its readers.
This all sort of begs the question: why not just let the story become incoherent? Why not embrace the incoherency and drop the mitigating strategies – isn’t that in the end, freer? More fluid? My feeling is that allowing incoherency to bloom is a valid strategy, but just a strategy, not really a workable approach in general. If you throw away all the signposts of structure and self-logic then that lack of structure eventually becomes the foremost element, and you’re left with a set of endless variations that amount to the feeling of a single state: incoherence. This is a useful place to visit – something like Stockhausen’s listening to “the beauty as it flies” – living entirely in the moment, almost static in its constant change. A useful place to visit, and re-visit, but just one potential form in a universe of possible forms.
While the musical cues for this project are implemented as sets of generative scripts, another component I have been working on for the audio side of things is based around the idea of a synthetic environment, inspired by other hyperreal field recording projects like Luc Ferrari’s Presque Rien series, Michael Pisaro’s July Mountain or Francis Dhomont’s Signe Dionysos, which (more or less) don’t immediately reveal themselves as synthetic environments even though they might be composed of impossible or at least unlikely components. (How did that train get inside the frog pond??)
In 2016 I started collecting field recordings in multiples to use in constructing new synthetic environments – each one based on a real environment. I only have two so far: ~30 hours of recordings on my porch made at the same time each day for a month, and a handful of recordings of muffin baking in my kitchen. I started cataloging interesting moments in the recordings in a notebook – at 23 seconds a short bird chirp, 32 seconds a distant metallic clang, etc. It was a great way to spend a few days; field recordings cranked up on the stereo just listening and writing, but after doing about 5 hours worth I realized I really needed to automate the process somehow.
Last year I finally picked up a machine learning book, hoping to be able to train an algorithm on the recordings and have it classify them based on low level features extracted from the audio. The classic example of this is a dataset of Iris petal/sepal lengths & widths used to predict the species. Given a fixed set of labels (one per species) a collection of measurements can be used to predict which species it best matches. This is basically what I was looking for, but would require a training dataset with human-provided labels to learn from. Rather than try to do a supervised process where I’d take my original notebooks and use them to come up with the labels for the classifier (this is a bird, this is a car engine, this is a distant metallic rattle…) it seemed more interesting (and probably less tedious) to take an unsupervised approach and try to have the algorithm infer classifications and groups from the data itself.
I decided to start by focusing on the spectral centroid of these recordings because of this really cool study by Beau Sievers et al on the correlation between emotional arousal and the spectral centroid. The spectral centroid is the mean frequency in a set of frequencies – a sound with lots of high frequency energy and low frequency energy could have a centroid somewhere in the middle of the spectrum, while a pure sinewave at 200hz would have a spectral centroid of 200hz.
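For reference, the calculation itself is just a magnitude-weighted average over the spectrum. A minimal sketch with numpy (in practice a feature-extraction library will do this for you):

```python
import numpy as np

def spectral_centroid(block, samplerate=44100):
    """Magnitude-weighted mean frequency of a block of samples."""
    mags = np.abs(np.fft.rfft(block))
    if mags.sum() == 0:
        return 0.0  # silence has no meaningful centroid
    freqs = np.fft.rfftfreq(len(block), d=1 / samplerate)
    return float((freqs * mags).sum() / mags.sum())
```

Feed it a pure 200hz sinewave and it reports a centroid of (about) 200hz, as described above.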
An initial experiment doing analysis on fixed-length overlapping grains didn’t go very far. I segmented the field recordings into small overlapping grains, found the spectral centroid for each grain, and then reconstructed the sound by shuffling the grains around so they would go in order of highest spectral centroid to lowest. Instead of the smooth sweep from high-energy sounds to low-energy sounds that I imagined, the result was basically noise. I was bummed out and left things there.
The wonderful folks at Starling Labs are working on a cool project that involves doing analysis on a set of field recordings of train whistles. On Monday I had a long conversation about their process so far and the analysis approaches they’d been trying. It got me excited to pick up on this project again and find a better approach to segmenting the field recordings for analysis – instead of just cutting them into fixed sized grains which seemed to produce mush.
This weekend I updated the script I was working with to do segmentation using the aubio library’s onset detection, breaking the field recordings up into segments between onsets instead of arbitrary fixed-length slices. The script does an analysis pass on each sound file (usually about an hour of audio or so per file) – finding segments, doing feature extraction (spectral centroid, flatness, contrast, bandwidth and rolloff) on each segment and storing the results in an sqlite3 database to use for later processing.
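The storage side of that pass is simple. Here’s a sketch of the sqlite3 part – the table layout and helper name are my own, and the onset sample positions would come from aubio’s onset detector (only the centroid feature is shown):

```python
import sqlite3

def store_segments(db_path, filename, onsets, centroids):
    """Store the segments between consecutive onsets, each paired
    with a feature value, in an sqlite3 database."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS segments
                   (filename TEXT, start INTEGER, end INTEGER, centroid REAL)""")
    # each segment runs from one onset to the next
    rows = [(filename, start, end, c)
            for (start, end), c in zip(zip(onsets, onsets[1:]), centroids)]
    con.executemany("INSERT INTO segments VALUES (?, ?, ?, ?)", rows)
    con.commit()
    con.close()
    return len(rows)
```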
That’s pretty much as far as I got this weekend! Doing one pass of analysis on the entire dataset takes about 3 hours so the only tuning to the analysis stage I’ve done so far is to low pass the audio before doing analysis (at 80hz) which I hope compensates a bit for all the low wind noise rumbling in the porch recordings.
Below is a new test reconstruction, doing the same type of sorting on the spectral centroid – highest to lowest – but placing each segment on an equally spaced 10ms grid, cutting down any segments longer than 1 second, then applying a little amplitude envelope (3ms taper plus a hanning fadeout) and stopping after accumulating 5 minutes of output. (Which means this places roughly 30,000 variable-sized overlapping audio segments at 10ms intervals in order of highest spectral centroid to lowest in the space of 5 minutes.)
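The placement logic in that reconstruction boils down to something like this sketch (field names here are mine, not the actual script’s, and the amplitude envelope is left out):

```python
def schedule(segments, grid=0.01, maxlen=1.0, total=300.0):
    """Sort segments by spectral centroid (highest first) and place one
    every `grid` seconds, truncating anything longer than `maxlen`,
    stopping once `total` seconds of output have been accumulated."""
    ordered = sorted(segments, key=lambda s: s['centroid'], reverse=True)
    placed, pos = [], 0.0
    for seg in ordered:
        if pos >= total:
            break
        placed.append((pos, min(seg['length'], maxlen), seg['centroid']))
        pos += grid
    return placed
```

At a 10ms grid, five minutes of output works out to roughly 30,000 overlapping placements, as noted above.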
Segmenting the sounds based on onset detection is already producing way more interesting results! I’m looking forward to studying the data and tuning the approach – and, one day trying to wrap my head around the machine learning component of this to do unsupervised classification of the sounds into a 2d space, so instead of simply moving from highest to lowest across a single feature dimension (the centroid) I can play with moving through a parameter space that hopefully has a meaningful correlation to the content of each sound segment. I love the idea of being able to move slowly from the region of the birds into the region of the revving of the car engines and so on.
Taking this approach it would be possible to match environments to locations in a story, and move through the environment’s sound-space in some meaningful-sounding way that correlates to the generative action in the story. If Pippi is at home in Villa Villakula and is visited by an annoying fancy gentleman, the environment could shift positions in the parameter space along with the mood of the characters or the intensity of the action etc – and allowing for that to be controlled by an automated process would let the environment change with the story even though the story itself may be indeterminate.
Anyway, here’s the most recent test render from this afternoon – things begin in muffin-baking world and slide off into the sound-world of my porch pretty fast. The church bells really start to clang by the end!
South Seas Development So Far
South Seas is an interactive retelling of a Pippi Longstocking story by Astrid Lindgren. This blog is a journal of its development.
I’m still working out the final form of the project. I’m aiming for something somewhere between a role playing game, drum circle, campfire story, and a movie. I guess.
In this first post, I’ll describe the basic architecture of the website portion of the project. (It’s the core of the project.)
Here is a simplified diagram of the processes running on the server:
Sounds are structured in the code as cues – python scripts that use my pippi computer music library – and the RENDERER process keeps a queue filled for each musical cue. Yes, cue queues. The renderer process is actually quite a number of processes. Each cue gets its own process, which has a main loop that checks the (estimated) size of the queue for the given cue. If the queue is full, it waits a second and then checks the size again. If the queue is not full, the primary cue renderer process will tell a pool of render processes to do the actual rendering work, and they will push the new renders onto the appropriate queue. Each cue has its own pool of worker processes that actually run the python scripts that constitute the cues, use the wonderful sox program to encode them as mp3 files, and then convert those files into base64-encoded strings suitable for stuffing into the src attribute of an html audio tag as a data-URI. Those audio-tag-ready strings are what is actually pushed into the redis queue.
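The base64 step at the end is just a wrapper like this (I’m assuming the audio/mpeg mime type; the helper name is mine):

```python
import base64

def to_data_uri(mp3_bytes):
    """Wrap encoded audio as a data: URI suitable for the src
    attribute of an html audio tag."""
    b64 = base64.b64encode(mp3_bytes).decode('ascii')
    return 'data:audio/mpeg;base64,' + b64
```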
The conductor process orchestrates switching between sections – just a slug value stored in redis – by way of a main loop that has a few different behaviors. If the timer is enabled (again this is just a flag in the redis db, checked periodically by the conductor) then the conductor keeps track of the (approx) elapsed time the current section has been set. There is a hard-coded lookup table that maps sections to lengths in seconds. If the elapsed time has exceeded the length for the current section, the conductor looks into another hard-coded table that defines the order of sections to see which section is next, and switches to that section. It also broadcasts a message to all connected clients to tell them that the section has changed. If there is a title card associated with that section, it sends another broadcast message with that info. (Again stored just as a fixed value in a lookup table.) If there is a voiceover cue associated with that section (another lookup table – mapping to a special cue script) it will broadcast the playback of that cue to all clients.
When the conductor reaches the end of the list of sections, if looping is enabled, it will return to the start of the section list.
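The conductor’s section-advancing logic reduces to something like this sketch. The section names and lengths below are made up for illustration – the real lookup tables are hard-coded in the conductor:

```python
# Hypothetical stand-ins for the conductor's hard-coded lookup tables.
SECTION_LENGTHS = {'intro': 120, 'storm': 300, 'island': 240}
SECTION_ORDER = ['intro', 'storm', 'island']

def next_section(current, elapsed, looping=True):
    """Stay on the current section until its time is up, then advance
    through the fixed order, wrapping around to the start if looping."""
    if elapsed < SECTION_LENGTHS[current]:
        return current
    i = SECTION_ORDER.index(current) + 1
    if i >= len(SECTION_ORDER):
        return SECTION_ORDER[0] if looping else current
    return SECTION_ORDER[i]
```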
Looping may be disabled, and the timer may be disabled, to pause on a section, or to allow messages from the CONSOLE process to flip to an arbitrary section. Mostly that’s just to make development easier, and to be able to hang out on one section while I test it out.
There are a number of
spink playback processes watching the value of
current_section in redis. Each process represents a unique channel – there are 12 in total now. When the section changes, the
spink creates a new playback process for each musical cue associated with that section. A section can have any number of musical cues associated with it. Each musical cue is a python script that defines a
play function which returns a tuple containing a delay time and sound data. (Recall the
play functions in these cue scripts are only actually executed by the
RENDERER when the cue queue is below some threshold.) Each playback process pops a fresh render off the render queue, which is bundled with a delay time. The process broadcasts the sound to every client listening on this channel, and then sleeps for the given delay time.
The result is that each channel gets pushed a unique render and delay time for each cue.
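One such playback loop might look like the sketch below. Again, all names are hypothetical, and plain callables stand in for the redis pop, the broadcast to the channel's clients, and the shutdown check.

```python
import time

def playback_loop(pop_render, broadcast, running):
    # One playback process for one cue on one channel: pop a
    # pre-rendered (delay, data_uri) pair off the render queue,
    # broadcast the sound to every client on this channel, then
    # sleep for the delay bundled with the render.
    while running():
        delay, data_uri = pop_render()
        broadcast(data_uri)
        time.sleep(delay)
```

Because each channel's process pops its own render, every channel hears a unique take with its own timing, which is the effect described above.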
I’ll describe what happens in the client (a web browser on a laptop or a phone or something) in a future blog post.
Finally, there is a simple curses
INFO client which I just leave running in a tmux session on the server (along with the
CONSOLE command client and a tail of the system log) which displays a count of the number of currently connected clients per channel, the name of the current section, the elapsed time in that section, whether looping is set in the conductor, if the timer is running, etc.
I suppose that just leaves the
WEBSERVER-WORKER processes listed in the diagram – these are just simple
flask endpoints which can subscribe to the redis broadcast channels via server-sent events, and which deliver the various templates – the outer frame for the page & the inner section templates – as well as SVG templates, etc. All the requests from the clients pass through these flask controllers.
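The server-sent-events half of that can be sketched without flask itself: a frame formatter plus a generator that a flask view could return as a `text/event-stream` response. The function names and the `section` event name are illustrative assumptions, not the actual code.

```python
def sse_format(data, event=None):
    # Frame one message for a text/event-stream response: an optional
    # "event:" line, a "data:" line, and a blank line as terminator.
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"

def sse_stream(messages, event="section"):
    # Generator for a streaming response body: each message pulled off
    # a redis pubsub subscription becomes one SSE frame.
    for msg in messages:
        yield sse_format(msg, event=event)
```

In flask this generator would be wrapped in `Response(sse_stream(...), mimetype="text/event-stream")`, with the `messages` iterable fed by the redis subscription.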
I’ll describe the HTML/CSS/SVG frontend in more detail in a later post.
Eric Lyon - Red Velvet
I’m very curious, and interested to know, ah, your ideas…
In a 1971 lecture on Moment-Forming and Integration, (later published in Stockhausen on Music) Karlheinz Stockhausen summarized his musical system, moment form, by reading from a poem by William Blake: “He who kisses the Joy as it flies / Lives in Eternity’s Sunrise.” Stockhausen’s moment form is a psychological sum of “beauty and shit” as AGF might put it. He was interested in a textural, compositional, and contextual soup of Joycean abrupt changes, where the disparate elements in the work don’t move progressively; they jump, skitter, and blend in multiplicity and aesthetic diversity.
Eric Lyon’s Red Velvet takes this Zen-like mode of listening – “the Joy as it flies” – and stretches it by stitching humor and calculated abandon into works that leap happily across stylistic divides, while still managing a surprising and compelling capacity for narrative. This music, however synthetic, marks its discourse with realism. This is music for our times, really: self-aware postmodern commentary, scatterbrain tangents of haunting millennial choirs bleeding into 80s dance-pop, beauty, silence, computer chip dissonance, and the acute cynicism of a terrified but enamored, global, and modern people – a people who are culturally connected for better or worse in a continuum somewhere between McDonalds homogeneity and Zen detachment.
But I just said to myself, ‘why not?’
Butter, from Red Velvet, begins like some grand dive off a 28th century Tokyo skyscraper with a whirl of twisted synthesis and hyperactive textural modulations, jumping and sliding from hectic to serene and back again before offering the simple rationale: “why not?” So what’s the overall effect? Simply put, Red Velvet compels many modes of listening. Lyon’s music all but requires an active and studied listen, not unlike the concentration involved in soaking in all the layers of a four-voice Bach fugue, for example. Although Red Velvet sometimes engages the sort of polyphonic vertical listening Bach’s music is best suited for, the music overall is a lateral experience. One moment may stimulate a mode of listening usually associated with the extreme gestural minimalism of Bernhard Günter. After a short time in that sound world – usually just enough to establish the musical setting – a typically graceful transition will then, for example, spring the music into something requiring a mode of listening usually associated with the noise-metal band Black Dice.
The real magic of Lyon’s compositions lies in these transitions: the juxtapositions inform and quite radically transform what might otherwise be a comfortable or traditional listening experience. And over time an overall aesthetic impression of the piece emerges, as in Stockhausen’s moment form. That overall impression has, for me, continually proved to be restless, unfixed, and hard to qualify. In American Pioneers: Ives to Cage and Beyond Alan Rich describes the experience of listening to La Monte Young’s The Well Tuned Piano as “a continuous meditation across a flood of images. Hearing the work properly,” he continues, “is possible only by disconnecting oneself with the expectations of classical harmonies such as Imperial Bösendorfers [the piano Young’s piece was conceived on] are wont to produce. Freely associating, one hears instead virtually the entire range of worldwide musical experience.” Red Velvet clearly encourages this sort of free association, and the spectrum of musical experience Lyon pans across during the tenure of his recording is dramatically far-reaching and rewarding in its many facets.
Blue Sky Research - Inshore Waters EP
Jonathan Fisher - also known as Blue Sky Research and the admin of Hippocamp.net, a Manchester, England-based netlabel - has taken a slight departure on Inshore Waters from his previous beat-driven synth pastorals. The very brief EP - it clocks in at under ten minutes with four tracks - was inspired by the “changing weather” he encountered on a trip to the English coast. The pieces are steeped in gentle acoustic guitar sound-phantoms alongside touches of harmonica, a disaffected weather forecast, and sounds of the coast.
The album’s opener, Firth of Tay, glides in with the ease of an Ahmad Szabo or Apollo-era Brian Eno - tactfully coaxing the character of a windchime from delay-drenched guitar and harmonica samples and then gliding as effortlessly back into the misty silences it casually shuffled from in just over 60 seconds. This, and the two closing pieces - Firth of Tay Forcast (which draws on just the guitar sounds from the opening piece) and North Foreland (a distant reverberation of sitar-like harmonica and the sighing crashing of waves) - offer themselves as fleeting textural Haiku. Each briefly explores the particular cadences of a chosen set of sounds much as the eye might passingly move from rock to rock and crest to crest in viewing a scene like the one depicted in the album’s accompanying photograph.
Bristol Channel, however, settles down for a few moments to take in the entire scene. Plucked guitar tones ebb like rolling waves around a subtle drone melody one might expect to be trumpeted from the blowhole of an Atlantic-dwelling whale who had done a bit of study in Buddhism and the pronouncement of “Om.” Rhodes-like guitars eventually chime gentle harmonic patterns over top until slipping away again a minute later and leaving only the swiftly retreating decay of muted guitar.
Like Szabo’s This Book is About Words, Fisher’s Inshore Waters fragments warm samples of acoustic guitar into ambient chamber works - but Fisher doesn’t make his edits hard. The allure of the glitch is sacrificed for small moments that bleed seamlessly into each other. DSP is largely ignored for the simple transformations of delay, reverse, and reverb - and the guitar is left to its natural texture.
This EP is a lovely aural snapshot of a pleasant trip not far from home, handed discreetly to the listener for an impressionistic recap of the journey.
(hippocamp is currently moving its mp3 archives to another server - the album is down at time of writing, but should be back up soon if Jon gets scene.org to host the files. it’s worth the wait.)
Erik Bünger - Variations on a Theme by Lou Reed
When I found Erik Bünger’s Variations on a Theme by Lou Reed at iDEAL Recordings I had never heard of him - I loved the recording, so the next day (today) I decided to try to find out a little more about him. A quick google search first revealed that he was a member of both Cycling 74’s Max/MSP mailing list as well as the Yahoo! based plunderphonia mailing list dedicated to the discussion of John Oswald and other plunderphonics related subjects.
These pieces of information fit nicely into the puzzle I had already been presented. Variations on a Theme by Lou Reed is - technically speaking - simply a gradual granular deconstruction and time-stretching of an enchanting loop from The Velvet Underground & Nico’s famous Verve album produced by Andy Warhol. Bünger chooses a sublime moment with chiming guitars and Lou Reed passionately crooning the word “heroin.” Already he’s set up a situation pregnant with possible meaning. Is he doing violence to Lou Reed’s passion for heroin? The glam-rock and drug-soaked lifestyle both Reed and Warhol led? Or is the fracturing of Reed’s voice and guitars a method of zooming in past the surface of meaning to raw aesthetics & skittering molecules of sound - maybe Bünger is attempting to get past the rock and roll, sex and drugs casing of Reed’s music to the sounds themselves. Is he liberating Reed’s own music from himself?
Bünger manages to engage this dialog in the simplest of possible methods. He simply slows the sample (pitch intact) down. The end result is a shattered and stuttering sitar-like drone, with the individual phonemes in Reed’s voice stretched into a sort of raga.
Further digging reveals records of several solo and ensemble performances as well as affiliation with a Swedish arts collective called The Nursery. A realaudio recording of a February performance of the Erik Bünger ensemble, where the group deconstructs the sounds and images of instrument instruction tapes from the 80s and reforms them into a virtuosic improvised set of laptop-powered joystick wiggling, is paired with a few words that I can only assume Bünger has penned:
- For me it’s all about revenge, to take control over those who have controlled me. Those people who think control is the first criteria on musical ability lose the control in our hands.
I’ll leave this review on that positive note - it’s refreshing to see someone who clearly has a virtuosic control of his technique acknowledge the hollowness of it.
Update: Bünger has a new project online for the Swedish Radio called “let them sing it for you.” It’s an “interactive, ever-growing sound art project” which lets you enter text into a flash interface that plays the text back using plundered clips from pop songs. It’s never been more fun to reappropriate the plastic, commercial history of pop music for your own personal creativity. :-) Here’s the link.
Allegorical Power Series VI
Since June 2003 Antiopic records has been publishing a series of monthly mp3 compilations intended to provoke a socially conscious dialog through “abstract or experimental art and music” - discussion of which they consider to be too often limited to matters of aesthetics. Some of the most creative talents in new music have contributed mp3s (and in one case a quicktime movie) to what is turning into some of the most consistently high-caliber online releases around to date - on par certainly with the likes of Fallt’s popular invalidObject series.
November’s chapter is smaller than previous releases and unfortunately contains some weaker material than has been presented before. Even so, Volume VI has many high points - such as duul_drv’s moving combination of drones and a field recording of an arrest under words, there is something hidden, as well as Julien Ottavi and Dion Workman’s beautiful Beginning Again, Desist’s lovely and understated Untitled, and We’re Breaking Up’s well balanced microsound collage of radio static Receiver 1. The two remaining contributions from Presocratics and Lovely Midget aren’t failures by any means - they’re simply overshadowed by the strengths of the other releases, not to mention the incredible back-catalog (all of which is still available for download) of the previous volumes in the series.
Not every piece of music immediately inspires a socially informed critique, and certainly none is simply an exercise in propaganda - but the very act of enclosing these works in the casing of a suggestion to mindful political consideration gives them another dimension to be unraveled. It’s usually a fun and sometimes challenging task that tends to inform the possible aesthetic readings of the works rather than negating them.