double densed uncounthest hour of allbleakest age with a bad of wind and a barrel of rain
double densed uncounthest hour of allbleakest age with a bad of wind and a barrel of rain is an in-progress piece for resonators and brass. I’m keeping a composition log here as I work on it.
There are sure to be many detours. Getting it in shape might involve:
- more work on libpippi unit generators & integrating streams into astrid
- testing serial triggers with the solenoid tree & relay system built into the passive mixer
- finishing the passive mixer / relay system and firmware (what to do about the enclosure!?)
- general astrid debugging and quality-of-life improvements…
- composing maybe?
Monday June 24th
I took an extra day off work and I still feel woefully behind. I guess this is the way it goes; maybe I'll never really feel ready unless I take the safest possible path?
I really don’t want to be programming at all in Littlefield if I can help it. Tonight is a last ditch attempt to prepare some more small instrument scripts alongside the main one I’ve been working on this year.
- A basic interface on top of the sampler that feels nice to play, hopefully
- A sketch of something for the Mari Rubio score so I don’t have to program it all once I’m in TX
Hopefully, like last year, I'll just run all the instruments at once and map the controls well enough that everything useful is within reach… I'm running low on controls though. :-)
The pulsar osc bank controls were spread over 2/3rds of my controller, I’m trying to cut that down to 1/3 by making controls that map to multiple params.
The other 2/3rds I’d like to use for the looper/cutup thing, and possibly a combination of global FX control (I really need a global high/lowpass…) and controls for the sequencer…
Oy, I donno.
Sunday June 23rd
Oh boy, more leak fixes this weekend!
- Forgetting to `sem_close` some of the semaphore guards on sampler memory was leaving file descriptors behind in a weird zombie state (when looking at them in `lsof`) and eventually the hard limit of per-process file descriptors would get hit and everything would grind to a mysterious error-free halt.
- I wasn't actually acquiring / releasing the semaphore when creating new shared memory blocks in the sampler! That's fine for long-lived memory like the instrument ringbuffers, but after a while (between ~2 and ~10 minutes depending on the number of python render processes) luck would align the `lpsampler_create` calls and blow the process up.
My initial fix was to add a foreground cleanup routine: using the async API for linenoise with a `select()` timeout yields the main console thread back to loop over the python processes and replace any that have become zombies or otherwise failed.
The real fix was to just acquire the semaphore lock as soon as it's opened in the `lpsampler_create` routine, to synchronize with any other processes that are still in the middle of reading from it.
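The shape of that fix, roughly (a simplified sketch rather than the actual astrid source; `SAMPLER_SEM_NAME`, `SAMPLER_SHM_NAME` and `create_sampler_block` are made up here):

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SAMPLER_SEM_NAME "/sampler-guard"
#define SAMPLER_SHM_NAME "/sampler-block"

int create_sampler_block(size_t size) {
    int fd = -1;

    /* open (or create) the named guard semaphore... */
    sem_t * guard = sem_open(SAMPLER_SEM_NAME, O_CREAT, 0644, 1);
    if(guard == SEM_FAILED) return -1;

    /* ...and take the lock immediately, before touching the shared
     * memory, so concurrent create calls can't interleave */
    if(sem_wait(guard) < 0) goto done;

    fd = shm_open(SAMPLER_SHM_NAME, O_CREAT | O_RDWR, 0644);
    if(fd >= 0) ftruncate(fd, size);

    sem_post(guard);

done:
    /* close the handle when done: forgetting this was the fd leak! */
    sem_close(guard);
    return fd;
}
```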
I won’t claim there are no more leaks… (AddressSanitizer complains of some on shutdown, at least, so there must be some cleanup issues still) but running the python and C instruments for a very long time using the sampler & resampler interfaces is actually stable!
I was getting a bit worried at all the mysterious crashes. Debugging concurrent python programs is just as annoying as microcontroller debugging without any serial or hardware debugger interfaces: things just stop working silently and mysteriously…
Using `coredumpctl list` after a crash seems like a much easier way to get some clue than trying to dig through the system logs. I'm dumping way too many messages into the system log haha. Anyway, after seeing a new crash in the `coredumpctl list` output (which lists the PID for the crashed process too) it's possible to get a backtrace with actually useful info, even after the program has been killed, by using `coredumpctl gdb <PID>`.
I’ll be able to spend tonight and tomorrow (I decided to take an extra day of vacation for prep) on instrument design and starting on a realization score for one of the more complex pieces we’ll be trying in Littlefield next week…
I wish I was further along in instrument building, but I’m (knock on wood!) feeling pretty good about being able to rely on these new astrid interfaces to refine instruments during recording.
I’m going to do my best to get a set of base instruments carved out from all the tests, though.
There’s one more bit of plumbing to test, which is to have instruments sample from external resampler ring buffers. I’d like to have one sampler instrument which acts a bit more like a loop pedal of some kind, and can sample from the outputs of arbitrary instruments…
Running multiple instances of instruments is also untested after all the messaging changes. I’m expecting weird behavior so the plan is to just run one instance of any given instrument, and rely on the (now more robust) render pools for polyphony, or build it directly into the stream callback for C instruments.
Tuesday June 18th
When I was five or six years old I had this Bugs Bunny read-a-long 7” storybook. Something about Bugs going to space with a rocket fuel made from carrots, of course. I don’t remember the story very well, but I do remember how magical the experience felt.
I remember lots of really awesome sound effects, and getting lost in my imagination looking at the illustrations on each page. Somehow that felt like a much richer media experience than watching a cartoon on TV, which I assume by then I must have been doing, too.
The interactivity was simple, but still present: turning each page at the sound of the tone, flipping the record on my little Fisher Price turntable. It was an engaging way to tell a story.
Sometime in 2010 or 2011 I started working on Pippi – which at first was just a file called `dsp.py` that used the python standard library to do some basic cutup and granular tricks. It took a few iterations, but after a while I started to get into the idea of being able to publish a physical read-along score. I made some early experiments after developing Pippi enough (at that point I think I was calling it Fabric thanks to the suggestion of a friend) to make a standalone piece with it. One of those was called Amber, another called Pocket Suite, and another was called Williams, which was a re-write of an earlier pre-python piece written in ChucK.
I’ve kept doing these studies, working toward a stable API for Pippi that I could feel comfortable publishing in book form as a score.
The early idea was to try to tell the story through the score itself somehow, with the score also generating the layout for the book & inserting illustrations, etc. Each iteration unique. I’m not as hooked on the idea of the score being so present in the telling of the story, but I do still like the idea of publishing the full source code along with the book: everything needed to make new variations, or even something completely different if the reader feels inspired to do so.
What I’m still very much after capturing & expanding on is that experience I had with the Bugs Bunny read-along records: holding the book in your hand, having some control over the playback of the audio, and getting that magical (to me anyway) intermedia experience of reading and listening at once in the service of a single story.
I’m nearly 15 years into this journey, but the studies I’m doing now are starting to get a little closer to this. (Go here if you’d like to get them in the mail.)
I’ve been working on this all for so long I don’t even really think about the book project as such anymore, but libpippi is getting close enough to a point where I feel I’ll be able to remove the numpy-backed buffers from pippi and stamp the API as frozen on 2.0 soon enough. Which I guess is the thing that’s been keeping me from considering going as far as publishing source code in print: I’d like it to be stable enough that years after publishing, Pippi will still run the scores without issue.
There’s more to say, another time. I just started thinking about it all again I guess because I’m about to play music with some friends I haven’t seen since I started working on all this. Feels a bit nuts how long this project has dragged on, but satisfying to feel like I’m actually getting somewhere, slow as I’m moving.
Wednesday June 12th
I took astrid out for a test run with the a name for tomorrow fellers this weekend while I was in Milwaukee. It all actually kinda worked out! I have lots of small (and some large) tweaks I’d like to make.
In the engine, I’d like to expose the shared memory sampler to python instruments – probably via the `ctx` that gets passed to every callback. Then, I need to make sure play command params are still being passed in properly (I think I broke it a while back, but I should be able to reuse the parser for update messages) so I can use them to trigger the sampler… but I’ll need to play around with it in the instrument script a bit to know what feels right.
I was also craving some way to store and recall snapshots of parameter states and that turned out to be pretty straightforward to implement. Params are just stored with integer keys that correspond to an enum of all the params the instrument knows about, so storing the snapshot just loops over every param from 0 to N and (if the param exists in the LMDB session) writes the bytes of its value into a shared memory blob. Recall loops over the blob and writes the values back into the LMDB session.
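Roughly, the scheme looks like this (a simplified sketch, not the actual astrid code; `session_read` / `session_write` stand in for the LMDB get/put wrappers, and `PARAM_COUNT` / `PARAM_SIZE` are assumptions):

```c
#include <stddef.h>
#include <string.h>

#define PARAM_COUNT 64             /* size of the instrument's param enum */
#define PARAM_SIZE  sizeof(double) /* assume fixed-size param values */

/* hypothetical wrappers around the LMDB session get/put calls */
int session_read(int key, void * out, size_t size);
int session_write(int key, const void * in, size_t size);

/* store: loop over every param key and copy its raw bytes into the blob */
void snapshot_store(unsigned char * blob) {
    memset(blob, 0, PARAM_COUNT * PARAM_SIZE);
    for(int key = 0; key < PARAM_COUNT; key++) {
        /* params missing from the session are just left zeroed */
        session_read(key, blob + key * PARAM_SIZE, PARAM_SIZE);
    }
}

/* recall: write the blob values back into the session */
void snapshot_recall(const unsigned char * blob) {
    for(int key = 0; key < PARAM_COUNT; key++) {
        session_write(key, blob + key * PARAM_SIZE, PARAM_SIZE);
    }
}
```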
Being able to store & recall the param state of the instrument(s) is pretty exciting. J and I were talking about the freedom that would come from being able to dial in to a nice place, snapshot it, and feel no anxiety about taking it somewhere totally far away since the previous (or any) state is always just a recall command away.
I don’t think it’s worth trying to finish before the session in Texas, but it would be nice also to eventually implement a sampler / recording feature for param changes over time – and internal commands, too. Being able to store and replay some gesture coming in from the external controller, or a sequence of commands on the console could be very useful.
I also fixed the last memory leaks! Feels great to watch memory get reclaimed while I play. I was a bit worried that would turn into a giant project, but the problem ended up being exactly what I suspected: I just wasn’t munmapping some mmapped shared memory when sending buffers off to the mixer, so the calls to `shm_unlink` weren’t doing anything since the kernel thought they were still being actively used.
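For future me, the ordering that matters, as a tiny sketch with a made-up function name:

```c
#include <stddef.h>
#include <sys/mman.h>

/* shm_unlink() only removes the name; the kernel won't reclaim the
 * memory until every mapping is gone, so munmap() was the missing piece */
void release_buffer(const char * name, void * buf, size_t size) {
    munmap(buf, size);  /* drop this process's mapping first */
    shm_unlink(name);   /* now the segment can actually be reclaimed */
}
```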
I’ve got another week here in Madison (I’m cat-sitting) to practice and tune the instrument scripts, then just under a week at home again to make any modifications to the hardware side of things before heading off to Texas…
I’m hoping Andrew has an acoustic guitar I can use as a resonator – that ended up working out well. I also kind of like the idea of not really fixing on one resonator, but trying out whatever’s around. Might grab some backup transducers and even see if I can fit a second amp in my bag when I’m home again…
Wednesday June 5th
I’m starting to count the days… I was hoping to be done with the plumbing-type instrument building by this weekend, and spend the next couple weeks before I go to Texas just practicing and developing the instrument script. I’m not too far off, but there are still some wildcards:
- I haven’t dealt with the remaining memory leaks. I’m back on the framework laptop which has enough RAM to let me play for quite a long time before I need to shut down the instruments and reclaim shared memory… It would still make me feel better to get that properly sorted.
- There have been some odd shutdown bugs I haven’t bothered to look into. I realized this morning when adding a new message type that I’d forgotten to update the cython header to match the enum of message types, which could have easily caused problems on shutdown since the shutdown message had the wrong value. Hopefully that’s all it was, but I guess it’s not a big deal if I have to kill the process manually on shutdown sometimes for now.
- I’ve been happily using the new shared memory sampler for both relaying renders from python to the audio thread, and as a shared memory circular buffer for sampling from the ADC… but I’d like to build out the python interfaces to it so that I can add some more resampling capabilities, capture internal buffers and stash them for use later, and so on from within instrument scripts.
- It’s still easy enough to get serial communication to break mysteriously that I’ve decided to abandon it again for now. I love the feel of the daisy controls but I don’t want my main interface for control to be unstable and I’ve spent enough time on it at this point.
That said, thankfully I got MIDI control working again after work today! This morning I was struggling to get the python rtmidi callback to behave inside of instrument scripts. Adapting one of the many python implementations for MIDI handling I’d already written seemed like the simplest path while I think about future adaptations, but callback messages were getting backed up somewhere in python, likely due to a threading problem. Python concurrency still confuses me. Maybe eventually I’ll spend enough time with cpython internals and the standard library source to understand the magic, but in the meantime I decided to try using the ALSA API for the first time to add MIDI support in C, and it turned out to be super easy! No mysteries: I just added a new MIDI listener thread to astrid instruments and passed a pointer to the instrument struct into it. No crazy scoping issues or mysterious silences and throttled logging etc etc – it more or less just worked on the first try.
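The listener thread boils down to something like this (a simplified sketch rather than the actual code; `instrument_t`, `instrument_handle_midi` and the `"hw:1,0,0"` device name are placeholders):

```c
#include <alsa/asoundlib.h>
#include <pthread.h>

typedef struct instrument_t instrument_t; /* stand-in for astrid's struct */
void instrument_handle_midi(instrument_t * inst, unsigned char * msg, size_t size);

void * midi_listener(void * arg) {
    instrument_t * inst = (instrument_t *)arg;
    snd_rawmidi_t * in = NULL;
    unsigned char buf[3];

    /* open the MIDI input in blocking mode */
    if(snd_rawmidi_open(&in, NULL, "hw:1,0,0", 0) < 0) return NULL;

    while(1) {
        /* block until bytes arrive, then hand them to the instrument */
        ssize_t bytes = snd_rawmidi_read(in, buf, sizeof(buf));
        if(bytes < 0) break;
        instrument_handle_midi(inst, buf, (size_t)bytes);
    }

    snd_rawmidi_close(in);
    return NULL;
}

/* spawned with a pointer to the instrument struct, as described above:
 *   pthread_t t;
 *   pthread_create(&t, NULL, midi_listener, (void *)instrument);
 */
```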
This also means I’m switching back to my bigger MIDI controller (the faderfox MX12 which I love – so many controls!) and that means I get to map waaaaay more params of the littlefield instruments to direct control. :-)
Tuesday June 4th
Almost something!
More of the pitch controls are wired up now, but I’m still finding my way into interfacing with them. In this recording the parameters of the littlefield instrument are being sequenced by littleseq, and I’m just toggling littleseq on and off and issuing a console command here & there.
One such command is `mtrak` (which is short for microphone pitch tracking, but chosen because it’s one letter off from Amtrak and I’m a dork) which toggles on a pitch tracker that follows mic input and maps the (slewed) frequency to half of the osc bank. I added the barebones port of librosa’s `yin` implementation to libpippi for just such an occasion a couple years ago so it’s fun to actually be using the thing with a realtime instrument finally!
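The slew is essentially just smoothing; a one-pole ease toward the target frequency captures the idea (an illustration only, with an arbitrary coefficient):

```c
/* ease the osc bank toward the tracked frequency instead of jumping;
 * call once per block with the current and target values */
float slew(float current, float target) {
    return current + 0.001f * (target - current);
}
```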
Other observations:
- It’s easy to break things hard by putting garbage into the LMDB session: for example changing the session keys without resetting the database. Really weird stuff happens. I think for now I’m just going to destroy the sessions on close, but it would be nice to fix that because when it’s working it’s cool to have the state be persistent between runs: fire up an instrument the next day and everything is where you’d left it.
- I think I need to add support for python param update callbacks, because I want to map at least one of the daisy controls to something whose mapping I can change while playing. If I use the python instrument as an intermediary I can just relay whatever mapped update from python to C and switch, let’s say, a knob controlling the speed of a gate to one that’s controlling the shape of the osc bank and a filter on the mic input, or whatever seems useful.
- I actually like this kind of compiled instrument + scripted sequencer (with optional async rendering) combination. I would still like to add support for unit generator graphs so python can get into the realtime stream fun, but I will probably continue to write C instruments for more than just debugging.
- Serial control to/from the daisy is pretty unstable. If it works, it keeps on working, but sometimes I seem to be running into communication issues and something gets the data misaligned so that none of the headers arrive on boundaries and everything blows up. (Well, daisy controls stop being interpreted at all.) That needs review. I’d like for it to be able to recover from misalignments at runtime too (something like the sketch after this list), so hopefully that won’t become some giant rabbithole…
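Something like this could handle the runtime resync (just a sketch of the idea, with a made-up frame format, not what the firmware currently does):

```c
#include <stddef.h>

#define FRAME_MAGIC 0xA5 /* assumed sync byte at the start of every frame */
#define FRAME_SIZE  8    /* assumed fixed-size frames */

/* feed bytes in one at a time as they arrive from the serial port;
 * returns 1 when a complete, aligned frame is sitting in `frame` */
int frame_feed(unsigned char byte, unsigned char * frame, size_t * pos) {
    /* misaligned or garbage input gets skipped until the magic
     * byte shows up again, which re-establishes framing */
    if(*pos == 0 && byte != FRAME_MAGIC) return 0;
    frame[(*pos)++] = byte;
    if(*pos < FRAME_SIZE) return 0;
    *pos = 0;
    return 1;
}
```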
Speaking of rabbits, the baby bunnies around here are already looking like teenagers. One of them hopped right up to me while I was working on astrid in the park this morning! Cute, lanky little survivors.
Sunday June 2nd
Oops, it’s June already!
A couple days ago I said:
[Sending params as strings] simplifies the daisy firmware concerns a bit, too. (Even tho it’s more annoying to work with strings than just `memcpy` some bytes into a field, that’s OK.)
Which made me feel sheepish today since I could not figure out what was going wrong with the daisy firmware when I adjusted it to send strings with printf encoded floats instead of writing the bytes of the float into a buffer… I’m not the only one who lost half a day to this, it seems! :-)
Anyway, after flailing around I started to wonder if printf had some special behavior for floats when running on an stm32. Floats aren’t always super well supported on microcontrollers… but in this case the reason for the different behavior in printf was just to keep the firmware blob sizes down, so it makes sense that the default configuration strips this support out. Seems like a good way to slim down most firmwares since it’s not a super common need, I’d imagine. I ended up finding that post linked above which shared that updating the linker flags with `LDFLAGS += -u _printf_float` re-enables printf float support!
It’s pretty exciting to have a few controls mapped out, running alongside the `littleseq` python instrument which is also sequencing the parameters of the `littlefield` C instrument. (Not the most original names, they’re named after the town in Texas where I plan to use them in an ensemble context for the first time.) It’s fun to have a workable – how long has it been this time? – combination of command inputs, live coding, microphones and knobs to twiddle going again. Interacting with `littleseq` feels good, but I also need to figure out how to make good use of the realtime controls I have available via the daisy petal I’m using for that purpose. It has:
- 6 knobs
- 1 encoder with a push-button center
- 3 wonderfully solid feeling mechanical switches
- 4 momentary footswitch style buttons which also feel nice to mash on
- 1 expression pedal
And of course audio inputs and outputs I don’t plan to use for this… though maybe some audio-reactive controls like piezo triggers would be cool to try to sort out if there’s time?
I’m coming around to the idea of trying to keep all the realtime controls to the microphone/exciter feedback pairs, and the various controls available on the daisy petal. I want to map every parameter to physical controls! There are so many parameters though… (LMDB is also still showing no signs at all of causing problems handling them in the audio thread!) and while I don’t really love live-coding in performance, I don’t really mind live-tweaking… so I think if I build up `littleseq` more so that I can essentially enable and disable features and groups of things easily, and tweak the algorithms for controlling them now and then… that opens up being able to work with modulating a lot more aspects of the sound in different configurations.
Control mapping and parametrization is always tough.