Dependency for Projucer

I tried building JUCE on the Pi, and it looks like a header file is missing. Googling suggests we need the right version of libfreetype installed.

$ make
Package x11 was not found in the pkg-config search path.
Perhaps you should add the directory containing `x11.pc'
to the PKG_CONFIG_PATH environment variable
No package 'x11' found
Package xext was not found in the pkg-config search path.
Perhaps you should add the directory containing `xext.pc'
to the PKG_CONFIG_PATH environment variable
No package 'xext' found
Package xinerama was not found in the pkg-config search path.
Perhaps you should add the directory containing `xinerama.pc'
to the PKG_CONFIG_PATH environment variable
No package 'xinerama' found
Package webkit2gtk-4.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `webkit2gtk-4.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'webkit2gtk-4.0' found
Package gtk+-x11-3.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtk+-x11-3.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gtk+-x11-3.0' found
...
Compiling include_juce_graphics.cpp
In file included from ../../JuceLibraryCode/include_juce_graphics.cpp:9:
../../../../modules/juce_graphics/juce_graphics.cpp:98:12: fatal error: ft2build.h: No such file or directory
 #include <ft2build.h>
          ^~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:439: build/intermediate/Debug/include_juce_graphics_f817e147.o] Error 1
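
For what it's worth, on a stock Debian-based Raspberry Pi OS these errors would normally point at missing dev packages; something like the following would presumably pull them in (package names assumed from the standard Debian repositories, and not applicable to the Elk image, which ships without X11):

# Typical JUCE Linux build dependencies on a Debian-based system:
sudo apt-get install libfreetype6-dev libx11-dev libxext-dev \
    libxinerama-dev libwebkit2gtk-4.0-dev libgtk-3-dev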

Hi @Dauq,

building JUCE and/or plugins directly on the Pi is not recommended, nor something that we support. Besides, it will be terribly slow!

The distro does not include an Xorg server or any other desktop manager; we believe those are too heavy and not suitable for embedded products. The latest image uses Qt directly on the framebuffer through EGLFS.

However, you can easily cross-compile JUCE plugins on your computer following the instructions in this document. The cross-compiling toolchain can be installed on pretty much any modern Linux distribution. There are no dependencies on a JACK server etc., so it should be easy to use from any Linux VM, too.

I was hoping to cut out the cross-compile-in-a-VM-and-copy step. I know the Pi is slow, but a Linux VM is even slower under macOS.

Thanks for the very quick reply.

Some of us are using a VM under macOS; I did myself before switching to a native Linux computer one year ago. It's slower than the host, but not by much, and still way faster than building on the Pi, unless you have a very old Mac.

Also, you can use literally any Linux VM image for this; you don't need audio or even a graphical desktop. The performance bottleneck is usually just disk access, which is slower in a VM.

It is on our list to Dockerize the Yocto cross-compiling toolchain; that way it will be possible to run it directly from a macOS terminal.


So, together with the partial release of the docs & SDK on GitHub, we have prepared some instructions to run the toolchain on macOS with Docker.

@Dauq, I guess this should cover your use case as well.

The only difference compared with a VM environment is that you won't be able to run the Projucer inside it, unless you modify the Docker containers to set up an X11 server etc. At the moment, you will have to manually edit the generated Linux Makefile and change the paths to the global JUCE modules and the VST 2.x SDK so that they refer to directories in the Docker volume.


Very nice, thanks.

Any reason not to bind-mount a local directory in Docker in place of Samba?

Hi @Dauq,
I'm not much of a Docker expert. Just to be sure that I understand your question: if what you are asking is "can I install the toolchain on a macOS local directory mounted through Docker?",

then the answer is probably NO. The toolchain needs a Linux-like filesystem for things like case sensitivity, symbolic links, etc.

But you can probably use osxfs instead of Samba; it might be more efficient.
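
For example (just a sketch, not something we have tested, and assuming the SDK is already installed in the workdir): Docker for Mac uses osxfs for its bind mounts, and a consistency hint such as delegated may speed up container-side writes:

# Bind-mount a host directory through osxfs; 'delegated' relaxes
# host/container consistency in favour of container performance:
docker run -it --rm -v "$HOME/elkbuild/workdir":/workdir:delegated crops/extsdk-container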

Thanks. Will try osxfs.

Samba isn't working for me (I can't get to it in the Finder and have tried all the workarounds).

osxfs is working!

~/s/l/dockerelk: docker run -it --rm -v /Volumes/CaseSensitive/elkbuild/workdir:/workdir busybox chown -R 1000:1000 /workdir

~/s/l/dockerelk: docker run -it --rm -v /Volumes/CaseSensitive/elkbuild/workdir:/workdir crops/extsdk-container --url /workdir/elk-glibc-x86_64-elk-sika-image-dev-aarch64-raspberrypi3-64-toolchain-1.0.sh
ELK Poky based distribution for Yocto Project SDK installer version 1.0
=======================================================================
You are about to install the SDK to "/workdir". Proceed [Y/n]? Y
Extracting SDK................

Still running, but I’m optimistic.

I created the CaseSensitive volume via Disk Utility:

[screenshot: Disk Utility showing the case-sensitive APFS volume]
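
If you prefer the command line, something like this diskutil sketch should be equivalent (assuming your APFS container is disk1; check with diskutil list):

# Add a case-sensitive APFS volume named "CaseSensitive"
# to the existing APFS container on disk1:
diskutil apfs addVolume disk1 "Case-sensitive APFS" CaseSensitive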

I skipped the docker volume create --name elkvolume step and am referencing the volume as -v /Volumes/CaseSensitive/elkbuild/workdir:/workdir instead.

FYI, Samba just didn't work on Mojave (macOS would not let me see the share even with the workarounds for the IP alias, etc.). And mounting a non-case-sensitive volume failed with errors while trying to extract the SDK (which is a shame, because avoiding case sensitivity in apps is pretty easy and a good idea).

@Stefano Do you want me to submit a PR to update the Docker instructions in https://github.com/elk-audio/elkpi-sdk, or do you want to copy/paste the above?

If PR, let me know if you want to give me commit access to a dev branch or if you want me to fork.

P.S. The install of the SDK in Docker is pretty slow and is still running (last message: "Setting it up…"). Even the extraction was pretty slow.

The SDK build is done.

The following message is a bit odd:

$ . /workdir/environment-setup-aarch64-elk-linux

Is there any reason why this is not part of .bashrc (or equivalent)?
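
e.g. I would have expected the image to ship with something like this one-liner (an untested sketch):

# Source the cross-compilation environment in every new shell:
echo '. /workdir/environment-setup-aarch64-elk-linux' >> ~/.bashrc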

I cloned Elk's JUCE master branch and ran make in /workdir/JUCE/extras/Projucer/Builds/LinuxMakefile:

Error: unknown architecture `native'

Error: unrecognized option -march=native
cc1plus: error: unknown value 'native' for -march
cc1plus: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a

It's working now that I set export TARGET_ARCH="-march=armv8-a". Is that the right choice, and shouldn't this have a sensible default in environment-setup-aarch64-elk-linux?

Also, if I want to install my own tools in the Docker image, can I use sudo apt? It's asking for a password for sdkuser. I did a quick search in the forum and on Git for sdkuser and found nothing. Should I just set things up with a Dockerfile of my own?


The build of Projucer failed. Did I miss a step in the config? This and the arch issue suggest that…

Compiling include_juce_graphics.cpp
In file included from ../../JuceLibraryCode/include_juce_graphics.cpp:9:
../../../../modules/juce_graphics/juce_graphics.cpp:98:12: fatal error: ft2build.h: No such file or directory
 #include <ft2build.h>
          ^~~~~~~~~~~~
compilation terminated.
Makefile:437: recipe for target 'build/intermediate/Debug/include_juce_graphics_f817e147.o' failed
make: *** [build/intermediate/Debug/include_juce_graphics_f817e147.o] Error 1

Hi @Dauq,
are you following the instructions in the doc here?

It seems to me that both errors you are experiencing should not appear if you use the suggested make invocation, which addresses exactly those two problems, i.e. you should run:

AR=aarch64-elk-linux-gcc-ar make -j`nproc` CONFIG=Release CFLAGS="-DJUCE_HEADLESS_PLUGIN_CLIENT=1" TARGET_ARCH="-march=armv8-a -mtune=cortex-a72"

instead of simply running make. Of course, you can prepare a script to wrap this and anything else.
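
For instance, a minimal wrapper sketch (the script name and location are arbitrary):

#!/bin/bash
# build-elk.sh -- hypothetical wrapper around the suggested invocation
set -e
source /workdir/environment-setup-aarch64-elk-linux
AR=aarch64-elk-linux-gcc-ar make -j"$(nproc)" CONFIG=Release \
    CFLAGS="-DJUCE_HEADLESS_PLUGIN_CLIENT=1" \
    TARGET_ARCH="-march=armv8-a -mtune=cortex-a72"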

I think I’m doing effectively the same thing.

As a test, I just ran:

  • source environment-setup-aarch64-elk-linux
  • cd JUCE/extras/Projucer/Builds/LinuxMakefile/
  • make clean
  • AR=aarch64-elk-linux-gcc-ar make -j`nproc` CONFIG=Release CFLAGS="-DJUCE_HEADLESS_PLUGIN_CLIENT=1" TARGET_ARCH="-march=armv8-a -mtune=cortex-a72"

Not sure there's much benefit in getting Projucer working inside Docker (and in that case we should be using the native target, not ARM). There's no reason I can't run Projucer natively (on the non-Docker side) and then cross-compile the plugin via Docker.

The ARCH is correctly set in the SDK; the problem is in the Makefile generated by Projucer. TARGET_ARCH is a Projucer variable, not anything standard system-wide. If it is not set, JUCE Makefiles default it to native, which is why you have to override it.
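
For reference, the generated Makefile contains something along these lines (the exact form may vary between JUCE versions):

# Excerpt (approximate) from a Projucer-generated LinuxMakefile;
# a variable passed on the make command line overrides this assignment,
# which is why TARGET_ARCH="-march=armv8-a ..." works:
TARGET_ARCH := -march=native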

I'm not expert enough with Docker to answer you on this; maybe you can look at the CROPS eSDK Dockerfiles and find info about the password there?

You can run Projucer natively and then cross-compile. The only issue is that some of the paths generated in the Makefile by Projucer will not be correct, so you'll have to fix the generated Makefile manually (or prepare e.g. a sed script or similar).
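
A hypothetical example (the host-side path is made up; adjust it to wherever your JUCE modules actually live):

# Rewrite host-side JUCE module paths to their Docker-volume equivalents
# in the generated Makefile (keeps a Makefile.bak backup):
sed -i.bak 's|/Users/yourname/dev/JUCE/modules|/workdir/JUCE/modules|g' Makefile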

Still, you shouldn't be getting that missing ft2build.h… we are on our way to the ADC conference, so it will be hard to answer over the next few days; I'll take a look as soon as we are back next week.

Hi @Dauq,
reviving this thread since @frederic was working on a similar setup and found what the issue was.

It seems related to the pkg-config installation in Docker; strangely, in this context the proper paths are not returned. The workaround, quoting Frederic verbatim, is:

I had to manually add an extra include path in the Makefile: "-I/workdir/sysroots/aarch64-elk-linux/usr/include/freetype2/"
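
If you prefer not to edit the Makefile by hand, it should also be possible to append the include path to the CFLAGS of the invocation suggested above, e.g.:

AR=aarch64-elk-linux-gcc-ar make -j`nproc` CONFIG=Release \
    CFLAGS="-DJUCE_HEADLESS_PLUGIN_CLIENT=1 -I/workdir/sysroots/aarch64-elk-linux/usr/include/freetype2/" \
    TARGET_ARCH="-march=armv8-a -mtune=cortex-a72"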

Hi,
About the password issue: I was not able to find info about it, but I could modify the Dockerfile of the base image of the cross-compilation toolchain (https://github.com/crops/extsdk-container) to give root permissions (without a password) to the sdkuser. You just need to add this line to the sudoers.usersetup file in the repo:

sdkuser ALL=NOPASSWD: ALL

Then you rebuild the image and use that one for running the cross-compilation toolchain.
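
Concretely, the steps might look like this (the image tag is arbitrary):

# Clone the CROPS eSDK container sources, grant sdkuser passwordless sudo,
# and rebuild the image under a local tag:
git clone https://github.com/crops/extsdk-container
cd extsdk-container
echo 'sdkuser ALL=NOPASSWD: ALL' >> sudoers.usersetup
docker build -t extsdk-container-sudo .

# Then run the toolchain with the patched image:
docker run -it --rm -v /Volumes/CaseSensitive/elkbuild/workdir:/workdir extsdk-container-sudo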

This will allow you to install dependencies and do whatever you want inside the running Docker container, but in fact it was useless to me, because it is still complicated to install dependencies for the ARM subsystem in the Docker image, which is the one used for the cross-compilation. As @Stefano reported above, I fixed the dependency issue by adding an extra include path in the Makefile generated by Projucer.
