MIDI editor / sequencer

Not sure this is the correct category, please move it if necessary.

I have built and used a couple of DIY ARM music devices, including the Axoloti and the Zynthian. They are very different but both very cool. The Zynthian is the closest to the intent of Elk, but is not quite as thorough in its development philosophy. They don't feel like they can make requirements of plugins like "be headless", because they just want to use whatever plugins they can find.

I have a particular project in mind, and if I was doing this as my job I think I would find ELK a very good fit. I would like to create a dedicated box that is a MIDI sequencer, and if CV is available, that is even better.

I could just take an existing piano-roll MIDI DAW like Reaper, which is already cross-compiled for ARM, and run that on a Raspberry Pi 3 B+, and for MIDI it would probably do the job, but I would like something a bit more purpose-built.

It is more pragmatic to integrate existing code instead of writing it oneself. Does anyone know of an existing project or tool that has either the headless MIDI sequencer part, or the GUI piano-roll part?

Also, what is the expected latency for MIDI on ELK? I saw the estimates for audio in another topic.

Also, I ordered the dev kit so I am not just kicking tires.

Hi, our apologies for the late reply!

In its current form, Elk is created for ultra-low latency audio on Linux, and to achieve this, you need to have your code in the form of a headless plugin (VST2/VST3/LV2) for our host Sushi.

The most complete code for creating a DAW is JUCE’s Tracktion engine, which another member of this forum is currently working on getting to run as a plugin on Linux.

While there are several open-source applications with a timeline, I am not aware of one which can also be embedded into a plugin, or that has an architecture where the core and the GUI are completely separated, other than the abovementioned Tracktion library.

In any case, as you are aware, the GUI will not be able to run on the Elk Pi unless you specifically port it to use Qt through the EGLFS plugin, and run it as a separate process.

Currently, MIDI processing for Elk runs in the normal Linux kernel, not the real-time Xenomai kernel, so if all you do is MIDI processing, you will not get a performance advantage from Elk as it is.
For Control Voltage processing, however, Elk's advantages do stand!

Ilias Bergström


Thank you for that information, a lot of it is new to me. The detail about MIDI not being processed by the Elk kernel is interesting and I didn't know it. Also, porting Tracktion to run as a VST is a really valuable project, and also seems a bit ambitious.

I have this long-standing obsession with separating the editing of MIDI sequences from the execution of those sequences at runtime. I have an Axoloti, which is a nice DIY Cortex-M4-based project built around a patch editor, where the patch is converted to DSP code via a code-generation step.
I would like to borrow this idea, but instead of a patch editor it would be a piano roll, written in some language that is nice for UI, and the UI could run separately from the sequencer, which runs on a processor with the possibility of very tight timing and extensible I/O.
I am a developer but have resisted diving into this project because I always felt that I might happen upon a project like this and not have to try it myself.
Also I have zero hardware knowledge.

I certainly see the merit in separating the GUI from the core so that they can run on separate devices!

First, the Tracktion engine in a plugin is actually quite realistic: it already works to a minimal extent (it currently has a single MIDI input) and is being actively worked on as we speak; see this thread on the JUCE forum.

You may be aware that we encourage having the software running on the Elk Pi be remote-controlled from GUIs on separate devices, over Open Sound Control messages and/or gRPC.
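To make that concrete: a GUI on a separate device can talk to the box with plain OSC datagrams, and the OSC 1.0 wire format is simple enough to encode with just the standard library. A minimal sketch follows; note the address path and port below are illustrative placeholders, not Sushi's actual parameter paths.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, ",f" type tag,
    then a big-endian float32 argument."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# A GUI process could then fire the datagram at the board, e.g.:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/parameter/sequencer/tempo", 120.0),
#             ("elk-pi.local", 24024))  # hypothetical address and port
```

Since OSC over UDP is fire-and-forget, the GUI never blocks on the audio device, which is exactly the decoupling being described.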

Since the Elk Pi uses our own Linux distribution optimized for audio, even though MIDI is not handled in the real-time Xenomai kernel, MIDI performance will be much less obstructed than it would be on a device running a full-fledged general-purpose operating system, i.e. a Win/Mac desktop or a tablet.

And since you are also interested in CV input and output, the Elk Pi is definitely still a good solution for your idea - even more so because, if you wished, you could run both your MIDI sequencing and audio synthesis on the same device, as we did for our example instrument at the ADC '19 conference.

Ilias B.

The analysis of this use case seems to expose the "sequencer" as possibly just a MIDI file player. I found an open-source project called FluidSynth that seems well suited to be the headless MIDI or audio player, and another open-source project called simply Midi Editor, at midieditor.org, which could be the GUI part. They could pretty easily communicate over Elk's control RPC API, and the MIDI sequences could be streamed as text or loaded through SysEx, I don't know. You could bundle MIDI editing events in the GUI, and every n edits could get sent to the "server" as OSC or other simple messages. This would create the illusion for the user, within reason, that their edits are being reflected in close to real time on the server.
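The "every n edits" bundling could be as small as the sketch below. The class and event tuples are made up for illustration, and the transport is left as a plain callback so the idea stays independent of whether the batch travels as OSC, gRPC, or something else:

```python
class EditBatcher:
    """Accumulate piano-roll edits and flush them every `batch_size` edits.

    `send` is whatever delivers a batch to the sequencer box; here it is
    just a callback, so the sketch stays transport-agnostic.
    """

    def __init__(self, send, batch_size=8):
        self.send = send
        self.batch_size = batch_size
        self.pending = []

    def edit(self, event):
        """Record one edit; flush automatically when the batch is full."""
        self.pending.append(event)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send all pending edits as one message and clear the buffer."""
        if self.pending:
            self.send(list(self.pending))
            self.pending.clear()

# Usage: every 2 note edits in the GUI become one message to the box.
sent = []
batcher = EditBatcher(sent.append, batch_size=2)
batcher.edit(("note_on", 60))
batcher.edit(("note_off", 60))  # second edit triggers a flush
```

In practice you would probably also flush on a short timer, so a lone edit does not sit in the buffer indefinitely.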

Hi Gminorcoles. I just want to add to Ilias' answer above. If all you require is a MIDI sequencer, i.e. dispatching MIDI messages according to a timeline, Elk would probably be overkill for your needs. Though if you want to output (or input) CV/Gate in sync with the MIDI messages in order to build a hardware sequencer box, then I'd say that Elk is a good fit, even though there are also plenty of hardware resources to do audio processing as well.

At the moment, Sushi, the plugin host in Elk, does not really have a timeline. It has a concept of tempo, time signatures and bars, but that's about it. The reason is that we see it mostly being used in a live context rather than as a full-fledged recording tool.

I think your best bet would be to either roll your own sequencer, or take an existing sequencer with more advanced timeline features, like the Tracktion engine mentioned above, wrap it in a plugin, and load that plugin in Sushi. The plugin would then get timing information (tempo, position, start/stop) from Sushi itself. From the plugin you could then output MIDI messages, CV and gate information, and Elk would make sure it is all tightly synchronised.
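The core of such a plugin is a small timing calculation: each audio callback, map the beat positions of sequenced notes onto sample offsets within the current block, using the tempo and playhead position the host hands you. A rough sketch in plain Python (the function and its parameters are illustrative, not a real plugin API):

```python
def events_in_block(notes, block_start_beat, tempo_bpm, sample_rate, block_size):
    """Return (sample_offset, message) pairs for notes falling in this block.

    `notes` is a list of (beat, midi_message) pairs. In a real plugin the
    host (Sushi, in Elk's case) supplies tempo and the block's starting
    beat on every audio callback.
    """
    samples_per_beat = sample_rate * 60.0 / tempo_bpm
    block_beats = block_size / samples_per_beat
    out = []
    for beat, msg in notes:
        if block_start_beat <= beat < block_start_beat + block_beats:
            # Offset within the block, in samples, at which to emit the event.
            offset = round((beat - block_start_beat) * samples_per_beat)
            out.append((offset, msg))
    return out
```

Because the offsets are computed in samples rather than wall-clock time, the resulting MIDI, CV and gate events stay sample-accurate relative to the audio stream, which is the tight synchronisation mentioned above.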

Funny you mention FluidSynth 🙂 we have already wrapped the SoundFont player part of FluidSynth in a plugin and made that work in Elk. See this thread. I don't know much about the MIDI-playing part of FluidSynth, but my guess is that it could be a good place to start.

Moved to general discussion category.