DPF Distrho plugin collection built for Elk

The DPF Distrho plugin collection is built for Elk!

The repository with the build setup for Elk is available here:

Binaries for Elk Pi (32-bit) are available to download here, built as VST2 plugins.

Post here to discuss this collection on Elk!

I’m very interested in the glBars plug-in in this collection!
I assume it is an audio visualizer drawing bars in OpenGL… anyway, that is what the screenshot looks like!

Can anyone confirm it working on Elk? I can’t currently get the HDMI out to work on my Pi3 (it won’t boot with HDMI plugged in…), but I’d like to try with the Pi4, which is working for video out for me…


All I know is that all plugins we distribute for Elk are built to run headless, with no graphical display of any kind. In that case, this plugin would just be a pass-through.

While it is possible to use the display to draw OpenGL graphics, I am not aware of a current example on the Elk Pi that does so.

Ilias of Elk

@Ilias thanks for your help. I realize that the ProjectM/ProM plug-in is even closer to what I need to start with. It seems it would in fact be possible to build this plug-in for Elk, but only if I can figure out how to get the OpenGL headers/libraries into the cross-compilation sysroot folder tree. It seems what I would need is mesa-common-dev (for GL/gl.h).

A related issue is not knowing how to work around the seemingly limited environment within Elk Audio OS, as I would need OpenGL set up there as well for any such plug-in to run. I have not found familiar tools like apt-get or dpkg. I have found opkg in /usr/share, but there doesn’t seem to be much available in the way of packages via this tool. Why aren’t any package management tools available as they are with Raspbian etc?

Perhaps I should instead focus on building a modified variation of the Elk image suited to my purposes, so that I have the environment I need from the moment my modified image is flashed?

There’s not really anything I need other than access to the framebuffer for visualization, so perhaps someone at Elk can show me how to get this set up?

Thanks again for your help.


I should add that the plugin will always have to be headless if it is to run within Sushi on Elk.

The way we handle user interfaces on the platform is to have the GUI be an entirely separate process, communicating with Sushi and the plugins it hosts over gRPC or OSC, optionally involving Sensei too.

Just in case you’ve missed them, you may want to take a look at the developer-targeted articles on our homepage, specifically those on OSC, gRPC and not least Sensei.
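To make the separate-process idea concrete, here is a minimal sketch that hand-encodes an OSC message with Python's standard library and fires it at Sushi over UDP. The port 24024 is what I believe to be Sushi's default OSC server port, and the address pattern in the usage example is hypothetical; check the OSC article for the exact address scheme of your setup.

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message carrying a single float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated, then zero-padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    # address pattern + type tag string ",f" + big-endian float payload
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

def send_osc(address: str, value: float,
             host: str = "localhost", port: int = 24024) -> None:
    """Fire-and-forget the message over UDP (24024 assumed as Sushi's OSC port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))
```

A call would then look like `send_osc("/parameter/synth/cutoff", 0.5)`, with the path adapted to however Sushi exposes the processor and parameter names.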

As for using OpenGL, we currently do not have a released image which includes Qt or the OpenGL dependencies. We have had the functionality in the past and know it works, just not specifically on the Raspberry Pi 4 image.

Do get back to us if you want to test writing a GUI which uses Qt and OpenGL, and I will bring it up with the team towards including it in an upcoming image.

Edit: meanwhile, the display and examples we do have are for the Elk Pi Blackboard, all available on our GitHub page!

Ilias Bergström

Thanks, I should clarify that rather than wanting to write a GUI using Qt and OpenGL, I instead want the image to support a plug-in which acts as an audio visualizer, and which can ideally take additional control signals from MIDI/OSC/gRPC etc.

In other words, the framebuffer output that normally shows the console would be replaced by this visual output, the same as when launching an openFrameworks program and so forth.

It would be much appreciated if you could get it included in an upcoming image!



If I understand you correctly, you are asking then for the original display output of the plugin to be maintained and be rendered to an off-screen buffer?

If so, I am afraid that is unlikely to be implemented for Elk OS / Sushi.

One of the several reasons our operating system is so performant with audio is that we have stripped away the processes and libraries needed to support a heavyweight desktop GUI, irrespective of whether it is rendered to a screen or to an off-screen buffer.

If we were to support the many different GUI and rendering libraries used for native GUI rendering across all available plugins, it would be counterproductive to the goal of an operating system optimized for audio.

Ilias of Elk


No, this is not quite what I meant. As is usual for Elk, I am completely fine with there being absolutely no GUI for the plug-in (or the OS), and with it continuing to run in a headless manner.

What I am looking for is the most efficient way to add visual output in the sense of audio visualization, which can be extremely performant when using low-level graphics, as doing so bypasses any kind of windowing system and writes pixels directly to a framebuffer. Since there is already a framebuffer available on Raspberry Pi 4-based Elk, it is really not much more expensive than drawing the console, which Elk already does.

I mentioned openFrameworks because it is an example of a framework that can run even on Raspbian Lite, which also has no windowing system, and the behavior is identical to what I’m looking for: when an openFrameworks app starts running, it takes over the framebuffer output so that the EGL surface is displayed instead of the console. After quitting, the console can be seen again. With OpenGL included in the Elk image, I think this should be fairly easy for visualization plugins to do as well.

Since most of us are SSHing into Elk and controlling it remotely in some manner, the main micro-HDMI output of the RPi 4 is not really being used anyway. So for my application, it would be great to get one or even both micro-HDMI ports running very low-level visualization output on /dev/fb0 (and ideally /dev/fb1 too).
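For reference, writing directly to /dev/fb0 really is as simple as mapping the device and writing pixels. The sketch below assumes a linear XRGB8888 framebuffer; the width/height defaults are placeholders, and a real program should query the actual geometry via the FBIOGET_VSCREENINFO ioctl (or read /sys/class/graphics/fb0/virtual_size).

```python
import mmap
import struct

def pixel_offset(x: int, y: int, line_length: int, bytes_per_pixel: int = 4) -> int:
    """Byte offset of pixel (x, y) in a linear framebuffer."""
    return y * line_length + x * bytes_per_pixel

def fill(fb_path: str = "/dev/fb0", width: int = 1920, height: int = 1080) -> None:
    """Flood the framebuffer with one colour, bypassing any windowing system.
    Width/height are assumed here; query the real values via ioctl or sysfs."""
    bpp = 4                                # XRGB8888 assumed
    line_length = width * bpp              # assumes no padding at line ends
    pixel = struct.pack("<I", 0x00FF8000)  # orange in XRGB8888
    with open(fb_path, "r+b") as f:
        fb = mmap.mmap(f.fileno(), line_length * height)
        fb.write(pixel * (width * height))
        fb.close()
```

A visualizer would keep the mapping open and rewrite only the pixels that change each frame; double-buffering via panning is also possible but needs the ioctl interface.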

PS: I would consider such a possibility helpful even for a pure audio OS, as users may want to build things like tuner plugins, FFT-based spectral displays, and other things related exclusively to the audio. To be clearer, my application is to do extremely detailed visualization of prominent elements/tracks of a multitrack Sushi session, rather than being forced to visualize the master audio output, by which point the summing of all tracks makes it very difficult to visualize, say, just the lead synthesizer.
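For the spectral-display case, the core math is small enough to sketch with the standard library alone. This is a naive DFT, O(N²) and for illustration only; a real-time plugin would of course use an FFT over sample blocks grabbed from the track in question:

```python
import cmath
import math

def dft_magnitudes(samples, n_bins=None):
    """Magnitude spectrum of a block of samples via a naive DFT.
    Illustrative only: O(N^2), so keep the block size small."""
    n = len(samples)
    n_bins = n_bins or n // 2  # bins above n/2 just mirror the lower half
    return [
        abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                for t, s in enumerate(samples))) / n
        for k in range(n_bins)
    ]
```

Feeding it one track's samples rather than the master mix gives exactly the per-element data a bar display or tuner would render, e.g. a 64-sample sine at bin 4 yields a clear peak at index 4 of the result.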


Thank you for the clarification, I understand now. I am not the right developer to speak to regarding direct framebuffer access, but I will ask my relevant colleagues to get back to you on that.

Of course a rich visualization is very relevant indeed! In the cases we have needed a rich colour display, we have used Qt embedded which could then use OpenGL, as I’ve already mentioned.

Meanwhile, the way forward would probably still be for the visualization software to be its own separate process, which communicates either with/through Sushi, or directly with your plugins: over gRPC/OSC, or in the direct case, any method you choose.

Ilias of Elk

Thanks for that!

Yes, your idea sounds good. I could run another process on the RPi 4 and have it communicate with Sushi, do I understand correctly? It is very important to have everything running on the RPi itself.

I think in that case I will still need to get OpenGL support onto the board, but maybe I can work out how to do this myself… maybe I will ask over at the openFrameworks community; they probably know what is needed here. I suspect it means building Mesa. This would be good, as I believe there are some Vulkan APIs available for the RPi now:

But yes, going through Qt embedded seems like it could be nearly as efficient, as I assume it is much more stripped down than full Qt…

Anyhow, perhaps I will follow up over email so I can describe in more detail what I’m trying to develop. There’s a good chance I can give back some useful framework-level code in return for this help, helping other Elk users develop things like audio meters, tuners and spectral displays with what I am building!


Indeed, GUI in a separate process with communication between that and Sushi as you say!

I am actually a computer graphics programmer by training and have worked several jobs where OpenGL was an important part, albeit never for embedded platforms, always on the desktop.

I have always used a framework through which I request an OpenGL context, and then drawn to it directly using “raw” OpenGL API commands. Be it Qt, JUCE, openFrameworks, Processing or Python, the steps to set up and get a context running are all I use from the framework; after that it is always raw, or in the case of managed languages thinly wrapped, OpenGL commands.

I also checked with a colleague more knowledgeable about embedded platforms, and he agrees that there is no drawback to using Qt to create a context for drawing.

So I think the best way forward is indeed for us to make a build of Elk Audio OS with Qt embedded, and using that you can experiment with creating your accelerated display using EGL.

Let us know when you want to go ahead with that and we will put it in the pipeline - always assuming the RPi4 is the platform.

Ilias of Elk.

Yes, that is perfect! It will only be for the RPi4; I briefly had Elk on an RPi3, but I don’t think that board is sufficiently powerful for my purposes.

I understand from this that although glBars and ProM had plug-in wrappers for regular Linux, in the embedded context it makes more sense not to have this graphics context live inside a plugin wrapper.

I have also learned that Qt embedded sounds like it has extremely low overhead while adding some conveniences compared to doing it in a truly bare-metal manner, so it is definitely the way to go.