Sostenuto pedal using Elk Pi hat

Hi all,

Let me take advantage of the “Elk Showcase” forum category and show you a bit of the work I’m doing with the Elk Pi hat.

As some of you certainly know, I spent some time last year researching sustainer / freeze / extrapolation / droning / call-them-what-you-like algorithms, and eventually came up with a really simple and efficient time-domain algorithm for the task that works on all sorts of input signals (well, almost), guarantees no tonal coloration (on average), and produces non-static, evolving output tones. I also wrote a scientific paper about it with the help of Dr. Leonardo Gabrielli from Università Politecnica delle Marche, Ancona, Italy, which was presented at the DAFx18 conference in Aveiro, Portugal, last year. Take a look at the paper if you wish: http://dafx2018.web.ua.pt/papers/DAFx2018_paper_11.pdf

The Elk guys contacted me a few months back and asked whether I wanted to prototype a device using their upcoming Elk Pi hat, and making a guitar pedal out of this algorithm was the obvious choice for me. At the moment I’ve completed the software part, and I believe the hardware side of affairs is mostly done too. In order not to forget about this journey, and to provide a nice log for whoever cares enough, I’ll post about the development process in this thread from time to time.

So, one morning I received a shiny new Raspberry Pi 3, with power supply, SD card (which I soon discovered was broken), and the Elk Pi hat board. Here’s the system assembled as per the instructions that you can now find on GitHub.


Once I had downloaded the Elk Audio OS image and flashed it onto a working SD card, I could easily set up a shared Ethernet connection and log in via SSH. The version of the image “at the time” (a few weeks ago) offered a standard login prompt, but the latest iterations greet you with this nicer “textual logo”.

I already had the plugin written in VST3 format (without JUCE and the like; I started from the helloworld example in the VST SDK itself). It’s a pretty simple thing: you have two “continuous” parameters (dry and wet gains) and a “sustain” button that “stops” the sound. That’s all. Recompiling it for Elk was a breeze: I just needed to download the cross-compilation toolchain that the Elk guys gave me. The only minor “oddity” was that I also had to apply a tiny patch (also supplied by them) to the VST SDK to allow ARM64 builds - I’ve been assured that it will already be included in the next official VST SDK version. Finally, I transferred the files to the SD card via SSH without problems.
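For the curious, and to give an idea of how little is involved without JUCE: parameter registration happens in the edit controller, roughly along these lines. This is only a minimal sketch in the spirit of the SDK’s helloworld example; the class name, parameter IDs and titles are made up for illustration and are not necessarily what freeze.vst3 actually uses.

```cpp
// Minimal sketch of the controller side (VST3 SDK, no JUCE).
// FreezeController, kDryId, kWetId and kSustainId are illustrative names only.
#include "public.sdk/source/vst/vsteditcontroller.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

enum : ParamID { kDryId = 0, kWetId = 1, kSustainId = 2 };

class FreezeController : public EditController
{
public:
    tresult PLUGIN_API initialize (FUnknown* context) SMTG_OVERRIDE
    {
        tresult result = EditController::initialize (context);
        if (result != kResultOk)
            return result;

        // Two "continuous" parameters, normalized 0..1 (dry and wet gains)...
        parameters.addParameter (STR16 ("Dry"), nullptr, 0, 1.0,
                                 ParameterInfo::kCanAutomate, kDryId);
        parameters.addParameter (STR16 ("Wet"), nullptr, 0, 1.0,
                                 ParameterInfo::kCanAutomate, kWetId);
        // ...and one "sustain" switch (stepCount = 1 makes it an on/off toggle).
        parameters.addParameter (STR16 ("Sustain"), nullptr, 1, 0.0,
                                 ParameterInfo::kCanAutomate, kSustainId);
        return kResultOk;
    }
};
```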

Next time I’ll talk about how I managed to run and control the plugin on the board.


Even though it’s not strictly necessary, I kept the same directory layout as the image for plugins and configuration files: I created a freeze-vst3 folder inside /home/mind/plugins and put my freeze.vst3 bundle inside it, and I put all configuration files in /home/mind/config_files.

Ah, BTW, the “working name” of the project is “freeze”, but I need to pick another one, as there’s already a similar product with the same name on the market. I think “sostenuto” would be a good one, but if you happen to have any ideas I’m all ears.

Now, in order to run Sushi, i.e. the plugin host, I wrote a rather trivial configuration file, following the examples I found in the config_files directory itself and the Sushi documentation in the official GitHub repo. Honestly, a little bit of guesswork was involved, but not much.
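To give an idea, the config boils down to something like the sketch below (more or less its final form). I’m reconstructing it from memory, so take the exact field names with a grain of salt; the examples shipped in config_files and the Sushi docs are the authoritative reference. Paths and names are of course the ones from my setup.

```json
{
    "host_config": { "samplerate": 48000 },
    "tracks": [
        {
            "name": "main",
            "mode": "stereo",
            "inputs":  [ { "engine_bus": 0, "track_bus": 0 } ],
            "outputs": [ { "engine_bus": 0, "track_bus": 0 } ],
            "plugins": [
                {
                    "name": "freeze",
                    "type": "vst3x",
                    "uid": "Freeze",
                    "path": "/home/mind/plugins/freeze-vst3/freeze.vst3"
                }
            ]
        }
    ]
}
```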

At first things didn’t work: I had segfaults using the RASPA backend and other errors using the JACK and dummy backends. With a little bit of patience I found out that Sushi was calling setActive() before setupProcessing(). I therefore reported the bug – which is maybe fixed by now? – and added a simple couple-of-lines workaround in my code (sketched below). That was the only time I ever touched the plugin source.
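The workaround itself is nothing fancy; conceptually it looks like this, where allocateIfNeeded() is a hypothetical stand-in for whatever the plugin needs to do to get its internal state ready, and processSetup is the protected member that the SDK’s AudioEffect base class keeps with the current (or default) process setup.

```cpp
// Sketch of the couple-of-lines defensive workaround in the processor.
// allocateIfNeeded() is a hypothetical helper, not the actual freeze.vst3 code.
#include "public.sdk/source/vst/vstaudioeffect.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

class FreezeProcessor : public AudioEffect
{
public:
    tresult PLUGIN_API setActive (TBool state) SMTG_OVERRIDE
    {
        if (state)
        {
            // If the host activates us before calling setupProcessing() (the
            // ordering issue described above), fall back to whatever defaults
            // processSetup currently holds instead of crashing.
            allocateIfNeeded (processSetup.sampleRate, processSetup.maxSamplesPerBlock);
        }
        return AudioEffect::setActive (state);
    }

private:
    // Must be safe to call more than once; a no-op if already allocated.
    void allocateIfNeeded (double sampleRate, int32 maxBlockSize) { /* ... */ }
};
```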

Another important note: while trying to track down the problem, I set the track mode to stereo in the configuration file and haven’t touched it since (the right channel is never going to be used and the CPU usage is low anyway, so… I will check when I have time :smiley:).

The plugin does almost nothing in its default state (it just lets the dry signal pass through), so at this point I could only check that the signal was indeed going through as expected while Sushi was running, which was the case. In order to “see it working” I used Open Stage Control, again as suggested in the official docs. I could put together a small GUI in no time, although IIRC I had to look at the Sushi log file in /tmp/sushi.log to find out the parameter names.

Now the plugin was running and working as it should. Performance-wise, these are the numbers I got (measured as explained here):

  • an extremely stable 4.6/4.7% CPU usage when not “freezing”
  • a much less stable circa 13% average and 15.6% maximum CPU usage when “freezing” – this was no surprise, as the algorithm uses a random (yet controlled) number of simultaneous “playback voices”

Keep in mind that this was an unoptimized debug build, and that on my main desktop PC the numbers are about 3%/8% (and with worse latency). Not bad, not bad at all! I’m curious to know if anybody has other interesting performance measurements to share.
