July

double densed uncounthest hour of allbleakest age with a bad of wind and a barrel of rain

double densed uncounthest hour of allbleakest age with a bad of wind and a barrel of rain is an in-progress piece for resonators and brass. I’m keeping a composition log here as I work on it.

There are sure to be many detours on the way to getting it into shape.

Wednesday July 17th

It only took like a week to get the new libpippi microsound module fixed up enough to drive pippi’s grain clouds. They’re much faster! Shaved about 20% off the runtime for the test suite, too. :)

The new “formation” engine still lacks support for grain masking and burst tables. I’ll probably just port that pretty directly over from the pulsar osc implementation. Also on the shortlist: basic morphing between stacks of grains, like the 2D pulsar osc does. I’m not sure whether it’s better to support that in the “tape” oscs which drive the grains, or at the level of the formation. I like the idea of tape oscs being general purpose grain/cutup/microsound engines with a bring-your-own-orchestrator philosophy. The formation engine is the first orchestrator, but I’d like to explore others oriented more around grain introspection, with some kind of callback-based orchestration.
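For context, a burst table (in the pulsar synthesis sense) is just an on/off pattern that gates successive grain onsets. Here’s a minimal sketch of the idea in C, with hypothetical names rather than anything actually in libpippi:

```c
#include <stddef.h>

/* A minimal sketch of burst-table gating: an on/off pattern consumed
 * once per grain onset. Hypothetical types, not libpippi's actual API. */
typedef struct burst_t {
    int * pattern;  /* eg {1, 1, 0, 1}: play, play, skip, play */
    size_t length;  /* number of steps in the pattern */
    size_t pos;     /* current position in the pattern */
} burst_t;

/* Return nonzero if the next grain should sound, then advance the table */
int burst_next(burst_t * b) {
    int gate = b->pattern[b->pos];
    b->pos = (b->pos + 1) % b->length;
    return gate;
}
```

Masking at the per-grain level like this means the orchestrator can simply skip scheduling a grain whenever the gate is closed.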

In the meantime, while I’d planned to move on (finally!) to refactoring the Wishart waveset segmentation stuff in libpippi next, I decided instead to take a detour and work on astrid quantization grids and sync first.

In previous versions of astrid, instruments could register themselves to be quantized against a set of subdivisions (and multiples) of one global shared grid. That grid had a single tempo and all quantized events followed it, basically.

This time around I’m still orienting everything around a global grid, but now I’m using a monotonic system clock and crudely smashing that into ticks. Eventually I suppose I could replace this with a true monotonic tick stream – one already exists in the scheduler, but each instrument has its own scheduler, so they would need to be locked together somehow to be usable for this. Having a single (probably optional) external clock source for instruments to latch onto seems easier to implement and potentially more flexible.

Meanwhile, good old CLOCK_MONOTONIC (actually CLOCK_MONOTONIC_RAW on Linux) seems to be working well enough to synchronize events in musical time. This approach is not sample-accurate (though it could be, if I eliminated the variability in the processing between calculating the delay and shuffling the buffer stacks around on playback) but it also won’t drift, since every process shares the same monotonic clock.
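A sketch of what “crudely smashing” the clock into ticks can look like, assuming a fixed tick resolution (the names and the 10ms tick size are placeholders, not astrid’s actual values):

```c
#include <stdint.h>
#include <time.h>

#define TICK_NS 10000000ULL  /* placeholder resolution: 10ms ticks */

/* Read the shared monotonic clock in nanoseconds */
uint64_t monotonic_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Every process derives its tick count from the same clock,
 * so ticks agree across processes and can't drift apart. */
uint64_t current_tick(void) {
    return monotonic_ns() / TICK_NS;
}
```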

Otherwise the basic idea is that the instrument passes a quantization grid interval along with the render, plus an optional offset value. The interval sets the grid quantization and the offset can be used to introduce a phase rotation.
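Given the tick clock sketched above, the delay until the next grid point is just a modular distance – again a sketch with made-up names, not astrid’s implementation:

```c
/* Ticks until the next grid point, where grid points fall at
 * offset + n * interval on the shared tick clock. */
uint64_t ticks_until_next_grid_point(uint64_t interval, uint64_t offset) {
    uint64_t now = current_tick();
    return ((offset % interval) + interval - (now % interval)) % interval;
}
```

An offset of half the interval rotates the phase 180 degrees, so events land on the off-beats of the same grid.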

Later on it would be cool to support mutators of some kind (ideally shareable between instruments) which could act like the ones built into pippi’s rhythm module.

Here are two (python) astrid instruments synced to a 1-second grid. They’re audibly misaligned sometimes. I’m curious whether it’s possible to make the alignment worse than this in some scenarios (we’ll see!) but otherwise I’m totally down with a little alignment slop on quantization. Freebie humanization. ;-)

Sunday July 7th

It’s July! I’m back from Texas. Somehow the littlefield.c instrument actually worked out. I had some head-scratcher moments trying to get the Butterworth filters I’d hastily ported from Soundpipe to work with the pulsar osc bank (oddly, the same filters worked fine on mic input…) and the python script relaying MIDI input to LMDB params crashed during the first 10 minutes of our set…

Otherwise though, it was all very encouraging.

Before I dive into various cleanup projects, I’m trying to get libpippi’s microsound library stable this month. Doing microsound stuff in python is still pretty slow and clunky, and it doesn’t work very well with the new stream callbacks. It’s time to try to make the switch from the old cython graincloud to the new approach I’ve been tinkering with in libpippi.

Trouble is, I haven’t really finished figuring out how to approach the new engine. I’d been working on basically a stream-based rewrite of the old engine, but this weekend I started shuffling things around.

In the old engine, grains were coupled tightly to the orchestrator routines and were all wavetable-based. Now I’m experimenting with grains that have a unit generator “source” and a unit generator window function.

It would be better for the source to be a full ugen graph, but I’m starting with a single unit generator. I don’t have many ugens in libpippi yet, but the ugen wrapper around the tape osc seems like a good place to start.
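Roughly the shape I’m imagining, sketched with hypothetical types (this is not libpippi’s actual interface): each grain holds a source ugen and a window ugen, and rendering a sample is just source times window.

```c
#include <stddef.h>

/* Hypothetical minimal ugen interface: anything that can produce
 * the next sample when asked. */
typedef struct ugen_t {
    void * params;                        /* ugen-specific state */
    float (*process)(struct ugen_t * u);  /* render the next sample */
} ugen_t;

/* A grain with a ugen source (eg a tape osc) and a ugen window */
typedef struct grain_t {
    ugen_t * source;  /* where the grain's samples come from */
    ugen_t * window;  /* amplitude envelope as a ugen */
    size_t length;    /* grain length in frames */
    size_t pos;       /* playhead position within the grain */
} grain_t;

/* Render one frame of the grain: source output shaped by the window */
float grain_process(grain_t * g) {
    if(g->pos >= g->length) return 0.f;
    g->pos += 1;
    return g->source->process(g->source) * g->window->process(g->window);
}
```

The nice part is that swapping the source for a different ugen (or eventually a whole graph) wouldn’t change the grain or the orchestrator at all.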

There’s still plenty I’m thinking through.

The next opportunities to put astrid to work are coming up in August: some shows with A Name For Tomorrow, and a weird theatrical event my friend Paul is doing. In both cases I’d like to focus on timbre and microsound.

First up: get some basic graincloud fun happening with the new ugen-based grains, at least functional enough to replace the current (slow, but working) cython grainclouds. If there’s time this month to make a first pass at bringing some of the waveset segmentation and processing routines over as well, that would be great.

