EB corbos Linux SDK

Elektrobit

Overview

EB corbos Linux built on Ubuntu is a long-term maintained embedded Linux distribution focused on automotive ECUs. Elektrobit can provide security patches for a frozen package set for up to 15 years at a comparatively low price. To realize this, Elektrobit partners with Canonical. EB corbos Linux uses many Ubuntu packages, qualifies these packages for automotive embedded use-cases in reference images, and adds embedded-optimized components to create an industry-grade embedded Linux build toolkit.

In contrast to general purpose Linux distributions, EB corbos Linux allows a user to create a completely customer specific image from scratch in a reproducible way. This is realized using this SDK. A free variant of EB corbos Linux is available at the Elektrobit homepage. To kick-start the development of new ECUs, a full EB corbos Linux release also contains pre-qualified reference images which already implement typical automotive use-cases. The free variant doesn’t contain proprietary hardware drivers or pre-qualified reference images. Please contact Elektrobit sales to get a full evaluation package of EB corbos Linux.

Embedded Systems

The image above shows a range of embedded system architectures.

Very simple embedded systems run only a bare-metal Linux. An example of such a system is the Raspberry Pi running Raspberry Pi OS. Such images can easily be generated directly with tools like elbe, kiwi-ng or debos, but this architecture doesn’t fit industrial needs.

For real-world industrial solutions, at least secure boot is required, and typically a Trusted Execution Environment (TEE) is involved in the secure boot process. This is depicted above as a simple embedded system. Such images may already require a more complex partition layout, depending on the bootloader and SoC requirements.

In the automotive world, in addition to a Posix OS, typically a safety-certified realtime OS like classic Autosar is also involved. This is depicted above as an embedded system. If this is combined with an A/B schema for updating the Linux and the classic Autosar, the storage layout gets quite complex and can hardly be created directly with the tools mentioned above.

Our day-to-day business at Elektrobit is automotive high-performance controllers (HPCs). HPCs extend the embedded system architecture with a hypervisor and multiple virtual machines (VMs), like an additional Android VM for infotainment solutions. The target of EB corbos Linux, and this SDK, is to fully support such embedded high-performance controller system architectures, and allow development and maintenance of such systems in an easy, efficient and reliable way.

This repository provides a template workspace to start developing your own Linux images and applications. It’s based on a dev container to provide a consistent build environment. This dev container can also be used stand-alone with other IDEs or in CI environments. For more details about the container and stand-alone usage look at the dev container repository.

Setup

The EB corbos Linux template workspace is tested using Ubuntu 22.04 and Ubuntu 24.04 host environments on x86_64 machines. It is not compatible with other host CPU architectures, but arm64 host support is planned for a future release.

The build host needs to provide a Docker installation and a Python 3 installation, including Python3 venv. Docker needs support for running privileged containers.

The EB corbos Linux template workspace is based on a dev container, and is not using VMs for cross-building. This simplifies the setup and provides good build speed, but it requires support for executing non-native binaries if images for foreign architectures shall be built. To make this work, the host needs to support binfmt. On Ubuntu hosts, binfmt can be enabled by installing the packages binfmt-support and qemu-user-static. To allow mount operations which are required during image build, a privileged execution of the container is necessary, and the /dev folder needs to be bind-mounted into the container to allow access to newly created losetup devices. Running other workloads on the build host may cause issues, since binfmt and losetup configure the kernel and therefore modify the host environment for all running processes and containers.
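
For example, on an Ubuntu host the binfmt setup can be installed and verified like this (the container image name is just an example, any arm64 image works):

sudo apt install binfmt-support qemu-user-static
# Should print "aarch64" if binfmt is set up correctly
docker run --rm --platform linux/arm64 ubuntu:22.04 uname -m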

The following sections assume that you don’t have an Ubuntu 22.04 or 24.04 host OS and use the Remote SSH feature of Visual Studio Code to connect to a remote environment as build host. This works as long as you can SSH into the build host, and it doesn’t require UI support on the build host. On Windows, WSL2 should also work.

Optional: Prepare VirtualBox VM

If you don’t already have an Ubuntu development host, you can create a new one using VirtualBox, a free hypervisor available for many operating systems.

First download an Ubuntu ISO image. For preparing this section, I used an Ubuntu 24.04 server ISO, since a desktop UI is not needed. Then download and install VirtualBox, and create a new virtual machine with the following options:

  • RAM: 8192 MB (less should also work)
  • CPU: 3 cores (more is better, less will also work)
  • Disc: 100 GB (more is better, less will also work)
  • A second, host-only network interface.

Skipping automatic installation will allow you to change the hardware settings before the installation. If you add the second interface after the installation, you must configure it manually.

Boot the VM with the Ubuntu ISO image and follow the installation wizard. I have chosen the minimal server variant.

After installation, log in to the VM and install openssh-server, docker and git: sudo apt install openssh-server docker.io git. Get the IP address of the VM by running the command ip addr. The address starting with 192.168. is the address of the host-only interface. For me, the address was 192.168.56.106.

Enabling nested virtualization for KVM support

The Linux KVM technology allows running virtual machines for the same CPU architecture as the host at almost native speed. To make use of this in VirtualBox, you need to disable the Windows Hypervisor. Please be aware that this may affect other virtualization tooling like Windows WSL. To disable the Windows Hypervisor, open a PowerShell as Administrator and run bcdedit /set hypervisorlaunchtype off. Afterwards, you need to reboot your Windows machine.

After the reboot, you can enable nested virtualization for your VirtualBox VM by editing the machine, choosing System > CPU and enabling the checkbox for nested VT-x/AMD-V.

Setup Visual Studio Code

Install Visual Studio Code on your local machine. It’s available for free for all major operating systems.

Run Visual Studio Code (VS Code) and open the extensions view (CTRL + SHIFT + X). Now install the Remote SSH and the Dev Containers extensions.

If you are not using a remote development host, you can skip the next two sections and start with installing the required tools.

Prepare SSH connection

Let’s try to connect to the Ubuntu remote development host. Open a new terminal in VS Code and type ssh <your user>@<IP of the host>. In my case it is: ssh ebcl@192.168.56.106. If it works, you are asked to accept the host key, and then you can log in with your password. This will give you a shell on the remote development host.

If you are on Windows, and you get an error that ssh is not available, you can install Git for Windows, which also provides an SSH client.

To avoid typing your password all the time, you can authenticate with a key. To use key authentication, disconnect from the remote host by typing exit, and then run ssh-copy-id <your user>@<IP of the host> in the VS Code shell. If you are on Windows and get the error that the command ssh-copy-id is not known, you can run type $env:USERPROFILE\.ssh\id_rsa.pub | ssh <your user>@<IP of the host> "cat >> .ssh/authorized_keys" instead. If you don’t have an SSH authentication key, you can create one using the ssh-keygen command.
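
As an example, the whole key setup, using the IP address from above:

# Create a key pair, if you don’t have one yet (stored under ~/.ssh)
ssh-keygen -t ed25519
# Copy the public key to the development host
ssh-copy-id ebcl@192.168.56.106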

Connect using VS Code Remote SSH plugin

Now you are ready to use Remote SSH. Open VS Code, then open the command palette (Ctrl + Shift + P) and choose Remote SSH: Connect to host. Select Add new host and enter <your user>@<IP of the host>. In my case, I entered ebcl@192.168.56.106. Then select Linux as the host OS. VS Code will install the remote VS Code server on the remote host, and open a window connected to this server. If it works, you should see SSH: <IP of the host> in the lower left corner. Clicking this element will bring up the connection menu.

Install required tools and clone ebcl_template repository

If you start from a plain Ubuntu 22.04 installation, you can install the needed dependencies using the following command: sudo apt install docker.io python3 python3-venv python-is-python3 binfmt-support qemu-user-static

To use dev containers, your user (on the remote machine) needs to be able to create local Docker containers. To give your user these rights, you need to add the user to the docker group with the command: sudo usermod -aG docker $USER. The changes become active after a new login. Close the remote connection using the menu in the lower left corner of your VS Code window and reopen the connection using the command palette. If you are not using a remote machine, simply log out and log in again.

To use the SDK, we need git to clone the remote repository (or you download it otherwise), and we need Docker to run the dev container. All other required tools come as part of the container.

Open again a shell on the remote machine, change to your preferred storage location, and clone the ebcl_template repository by running: git clone https://github.com/Elektrobit/ebcl_template.git. This will give you a new folder ebcl_template.

In VS Code, open “File > Open Workspace from File…”, navigate to the ebcl_template folder and select ebcl_sdk.code-workspace. Now you can enter the dev container by opening the remote menu in the lower left corner and selecting “Reopen in Container”. This will automatically run a local build of the EB corbos Linux dev container. The first time, when the container is built completely from scratch, may take quite some time. On my machine it takes about 30 minutes. When you open the workspace a second time, it will be a matter of seconds.

Now you are ready to start development!

Using the EBcL SDK VS Code integration

To use VS Code for developing with the EBcL SDK, choose File > Open Workspace from File and navigate to the ebcl_template location. Select the ebcl_sdk.code-workspace file. This will open the folder bind-mounted in the docker dev container environment.

Now you can use the VS Code build tasks (Ctrl + Shift + B) to build the example images and build and package the example applications.

Using the EBcL SDK container stand-alone

If you don’t want to use VS Code, or you want to integrate the EBcL SDK in your CI workflows, you can use the dev container stand-alone. For more details on how to do this, take a look at dev container.
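
As a rough sketch, a stand-alone invocation could look like the following. The image name ebcl_dev_container is a placeholder for whatever tag you built or pulled the container with; the privileged flag and the /dev bind-mount reflect the requirements described in the setup section:

docker run --rm -it --privileged -v /dev:/dev \
    -v $(pwd):/workspace -w /workspace \
    ebcl_dev_container /bin/bash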

Developing images

EB corbos Linux is intended as an embedded Linux distribution build kit, like Yocto. Instead of starting from a pre-defined, pre-configured and already integrated image, the user can describe the image needed to solve the problem in an easy, clean and maintainable way, and the EB corbos Linux SDK will build exactly this image. In comparison to Yocto, where all packages are built from scratch, EB corbos Linux uses the packages from the Canonical Ubuntu distribution. This has the advantage that the same binaries are used which run on millions of servers in the cloud, and millions of single board computers. The effort to qualify and security-maintain these packages is shared with all these users. To keep all these advantages, it is mandatory to use the pre-built binaries, and accept the limitations caused by this.

We also know from our experience with automotive ECUs that embedded solutions often have very special needs, and that it may not be possible to stick with the defaults set by Canonical Ubuntu in all cases. For such edge cases, the EBcL SDK provides everything needed to modify a package and use the modified variant instead of the default package. If this way is chosen, large parts of the benefits of the Canonical packages are dropped, and a solution-specific maintenance and qualification is needed, causing effort over the whole lifetime of the embedded solution.

Customers of EB corbos Linux can order such adaptations, including the qualification and maintenance of the customer-specific package, as an add-on to the EB corbos Linux base offer. Using the defaults where possible, and adapting only where really needed, delivers the needed flexibility for complex embedded solutions, while minimizing the development, qualification and maintenance efforts.

Image concept

Embedded Systems

EB corbos Linux is designed to build embedded high-performance controllers. Such systems typically use quite powerful and complex arm64 SoCs, and involve hypervisors, real-time operating systems, trusted execution environments, and a non-trivial boot process involving secure boot. The requirements of the SoC and the bootloaders regarding the eMMC storage layout are often complex and differ significantly between SoCs. To tackle this challenge, EB corbos Linux considers all the different boxes in the diagram above as separate build artifacts, which can be binary-integrated into an overall image as the last build step. These build steps are organized using make, and the EB corbos Linux SDK provides small helper tools to create these artifacts. The integration, if needed, is done using Embdgen, an Elektrobit-launched open-source tool, which is able to create binary images in various formats from different binary artifacts.

Let’s take a closer look at this build for the very simple QEMU build target. Typically QEMU gets a disc image, a Linux kernel binary and optionally an initrd.img, together with some configuration parameters.

Embedded Systems

From a run-time point of view, there are dependencies between these three artifacts caused by the used kernel version. The kernel modules need to match the used kernel, and the C library used in the root filesystem must match the used kernel interface. From a build-time point of view, and also from our QEMU target’s point of view, these are three different artifacts. This has an important impact on the development process and workflow: if the initrd behavior shall change, only the initrd image needs to be rebuilt.

EB corbos Linux makes use of a set of small helper tools to support a flexible build flow and optimized build speed and development experience. These tools read yaml configuration files to specify the generated artifacts. To avoid redundant configuration, these configuration files support hierarchical includes. For the QEMU example the full build flow is:

Embedded Systems

The image.yaml defines the storage layout, and is used as input for the embdgen integration step. The base.yaml contains the common configuration, like used apt repositories, and is included by the specifications of the different artifacts. The root.yaml describes the root filesystem of the Linux VM. This file system is generated using debootstrap, installing additional required packages, and finally applying solution specific configuration, given as overlay files or scripts. The debootstrap and package installation step is handled by the root generator. This quite time-consuming step only needs to be repeated when the package selection is changed. The root configurator applies the solution specific configuration. The output of these two steps is a tarball of the root filesystem content. Embdgen is used to convert this tarball into a disc image. The initrd.yaml specifies the content of the initrd.img. For QEMU, we need to load the virt-IO block driver, to be able to mount the root filesystem. The boot.yaml specifies the kernel which shall be used, and the boot generator is used to download the right Debian packages and extract the kernel binary. The chaining of these tools is done using a good old makefile.
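
Expressed as plain commands, this build flow corresponds to the following sequence (a sketch, assuming the yaml files are in the current folder and build is used as output folder):

initrd_generator initrd.yaml ./build
boot_generator boot.yaml ./build
root_generator root.yaml ./build
embdgen -o ./build/image.raw image.yaml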

Image specification

Let’s take a look at this QEMU build flow example in detail, to see how the details of this solution are specified, and what the roles of the different build helper tools are.

Embedded Systems

Let’s look at it from left to right. The base.yaml specifies the common aspects of all the generated artifacts. It configures the kernel package, the used apt repositories and the target CPU architecture.

# Kernel package to use
kernel: linux-generic
# Apt repositories to use
apt_repos:
  - apt_repo: http://ports.ubuntu.com/ubuntu-ports
    distro: jammy
    components:
      - main
      - universe
  - apt_repo: http://ports.ubuntu.com/ubuntu-ports
    distro: jammy-security
    components:
      - main
      - universe
# CPU architecture
arch: arm64

The boot.yaml builds on top of the base.yaml. It specifies to download the dependencies of the used kernel package, which is necessary if a meta-package is used, and it specifies that the config* and vmlinuz* files from the boot folder shall be used as results. The tar flag specifies that the results shall not be bundled as a tarball, but instead directly copied to the output folder.

# Derive values from base.yaml - relative path
base: base.yaml
# Download dependencies of the kernel package - necessary if meta-package is specified
download_deps: true
# Files to copy from the packages
files:
  - boot/vmlinuz*
  - boot/config*
# Do not pack the files as tar - we need to provide the kernel binary to QEMU
tar: false

The boot generator reads this configuration, and the base.yaml, downloads and extracts the package linux-generic and its dependencies to a temporary folder, and copies the kernel binary and kernel configuration to the given output folder. In general, the boot generator is the tool to automate the build steps of the boot artifacts, like kernel collection and generation of SoC specific binary artifacts.

Let’s now take a look at the initrd.img generation. The initrd images created by the tooling from the server and desktop world are very flexible and complete from a feature point of view, but completely bloated from an embedded point of view. Since we know our target hardware and software in detail, we don’t need this flexibility; instead, we typically want the best startup performance we can squeeze out of the used hardware. The initrd generator is a small helper tool to build a minimal initrd.img, to get the best possible startup performance. It also makes it fast and easy to customize the initrd content, e.g. for implementing a secure boot solution.

# Derive values from base.yaml - relative path
base: base.yaml
# Root device to mount
root_device: /dev/vda1
# List of kernel modules
modules:
  - kernel/drivers/block/virtio_blk.ko # virtio_blk is needed for QEMU

The initrd specification also derives its values from the base.yaml, and specifies that /dev/vda1 shall be used as the device for the root filesystem. Since the Canonical default kernel has no built-in support for virt-IO block devices, we have to load this driver in the initrd.img to be able to mount the root filesystem. This is done by specifying the kernel module in the modules list. Because of this line, the initrd generator downloads and extracts the specified kernel package and its dependencies, detects the kernel version, gets the right module, adds it to the initrd.img, and loads it before mounting the root filesystem. How this works in detail is described in later chapters.
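
Conceptually, the init script of such a minimal initrd.img boils down to only a few lines. The following is an illustrative sketch, not the actually generated script:

#!/bin/sh
# Mount the kernel filesystems
mount -t proc proc /proc
mount -t sysfs sysfs /sys
# Load the required driver
insmod /lib/modules/<kernel version>/kernel/drivers/block/virtio_blk.ko
# Mount the root filesystem and hand over to the init manager
mount /dev/vda1 /sysroot
exec switch_root /sysroot /sbin/init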

# Derive the base configuration
base: base.yaml
# Reset the kernel - should not be installed
kernel: null
# Name of the archive.
name: ubuntu
# Packages to install in the root tarball
packages:
  - systemd
  - udev        # udev will create the device node for ttyS0
  - util-linux
# Scripts to configure the root tarball
scripts:
  - name: config_root.sh # Name of the script, relative path to this file
    env: chroot # Type of execution environment

The last missing artifact is our root filesystem. The root.yaml describes the used root filesystem. It doesn’t need to contain a kernel, since the kernel is provided separately to QEMU. For Debian-based distributions, a minimal set of required packages is defined by the used base distribution, in our case Ubuntu Jammy. These packages are installed automatically, and we only need to specify what we want to have on top. In this case, it is systemd as init manager, udev to create the device nodes, and util-linux to provide the basic CLI tools. In addition, a config script is specified which adapts the configuration to our needs. This script is executed in a chroot environment. The name is used as the name for the resulting tarball of the root filesystem.

The build flow is using the root generator and the root configurator to separate the installation and configuration steps. The installation step takes much longer than the configuration step, and it only needs to be repeated when the package selection was adapted. This separation allows a fast iterative configuration of the root filesystem.
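
In terms of commands, this split looks like the following sketch. The tarball names follow from the name parameter in the root.yaml:

# Slow step: debootstrap and package installation, configuration is skipped
root_generator --no-config root.yaml ./build
# Fast step: apply the solution specific configuration
root_configurator root.yaml ./build/ubuntu.tar ./build/ubuntu.config.tar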

The last step is to convert the configured root tarball into a disc image. The storage layout is specified in the image.yaml, and is picked up by embdgen. For the QEMU image we use a simple gpt partition table based image with only one partition. This partition is using the ext4 file format, has a size of 2 GB, and is filled with the contents of our root tarball.

# Partition layout of the image
# For more details see https://elektrobit.github.io/embdgen/index.html
image:
  type: gpt
  boot_partition: root

  parts:
    - name: root
      type: partition
      fstype: ext4
      size: 2 GB
      content:
        type: ext4
        content:
          type: archive
          archive: build/ubuntu.config.tar

All together, we have a complete specification of our embedded solution, targeting QEMU as our virtual hardware.
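
With all artifacts in place, the result can be booted with QEMU. The exact command line depends on your QEMU version and artifact names; a sketch for this arm64 example could look like:

qemu-system-aarch64 -machine virt -cpu cortex-a72 -m 4096 -nographic \
    -kernel build/vmlinuz -initrd build/initrd.img \
    -drive format=raw,file=build/image.raw,if=virtio \
    -append "console=ttyAMA0 root=/dev/vda1 rw"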

Configuration parameters

The following list gives an overview of the supported configuration parameters for the EB corbos Linux build helper tools. The round brackets note the configuration files for which each option is applicable. Embdgen is developed separately, and the details and options for the storage specification are documented in the embdgen documentation.

  • base (boot/initrd/root/config) [default: None ]: Parent configuration file. If specified, the values from the parent file will be used if not otherwise specified in the current file.

  • arch (boot/initrd/root) [default: arm64 ]: The CPU architecture of the target hardware. The supported values are arm64, amd64 and armhf.

  • use_fakeroot (boot/initrd/root/config) [default: False ]: Use fakeroot in the generator tools where possible, instead of sudo and chroot. This may cause issues for edge-cases.

  • apt_repos (boot/initrd/root) [default: None ]: A list of apt repositories to download the required Debian packages. Example:

apt_repos:
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy
    components:
      - main
      - universe
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy-security
    components:
      - main
      - universe

In addition, an armored public key file or URL can be given as “key”, and an unarmored gpg file can be given as “gpg”, to authenticate the package sources.
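
For example, the NXP RDB2 configuration later in this document authenticates the EB corbos Linux repository like this:

apt_repos:
  - apt_repo: http://linux.elektrobit.com/eb-corbos-linux/1.2
    distro: ebcl_nxp_public
    components:
      - nxp_public
    key: file:///build/keys/elektrobit.pub
    gpg: /etc/berrymill/keyrings.d/elektrobit.gpg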

  • use_ebcl_apt (boot/initrd/root) [default: No ]: If yes, the public apt repository of EB corbos Linux will be added. By default, the latest release will be used if the ebcl_version parameter is not given. This is a convenience feature, but be aware that this public apt repository doesn’t provide customer-specific or proprietary packages.

  • ebcl_version (boot/initrd/root) [default: latest release ]: EB corbos Linux release version, for the automatically generated apt repository.

  • host_files (boot/initrd/root) [default: None ]: Files to include from the host or container environment. Example:

host_files:
  - source: bootargs-overlay.dts
    destination: boot
  - source: bootargs.its
    destination: boot

The destination is the path in the target root filesystem or chroot environment. In addition, the parameters “mode”, to specify the mode of the file, “uid”, to specify the owner of the file, and “gid”, to specify the owning group of the file, can be used.
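
A sketch of these optional parameters, where the file name and the exact value formats are assumptions:

host_files:
  - source: id_rsa        # hypothetical file
    destination: root/.ssh
    mode: 600
    uid: 0
    gid: 0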

  • files (boot) [default: None ]: Files to get as result from the chroot environment. Example:
files:
  - boot/vmlinuz*
  - boot/config*

These files can be part of an extracted Debian package, or the result of a script executed in the chroot environment.

  • scripts (boot/initrd/root/config) [default: None ]: The scripts which shall be executed.
scripts:
  - name: config_root.sh
    env: chroot

The supported environments are “chroot”, to run the script in a chroot environment, “fake”, to run the script in a fakeroot environment, “sudo” to run the script with root privileges, or “shell” to run the script in a plain shell environment. For “chroot” the script will be placed at “/” and executed from this folder. For all other environments, the current work directory will be the folder containing the target environment. In addition, parameters which are forwarded to the script can be provided as “params”.

  • template (initrd/root) [default: None ]: A Jinja2 template to create a configuration. In case of the initrd generator, a template for the init script can be provided. In case of the root generator, a template for the kiwi-ng XML image specification can be provided.

  • name (boot/initrd/root) [default: None ]: A name which is used in the filenames of the generated artifacts.

  • download_deps (boot/initrd) [default: True ]: Download the dependencies of the specified packages. This parameter must be True, to use e.g. a meta-package for the kernel binary and modules.

  • base_tarball (boot/initrd) [default: None ]: A base chroot environment for generating the boot artifacts and for the initrd.img. If no base chroot environment is given, a minimal busybox based environment will be used.

  • packages (boot/initrd/root/config) [default: None ]: A list of packages. For the root generator, these packages are installed in the base debootstrap environment. For the initrd generator, these packages will be downloaded, extracted and integrated into the resulting initrd.img. For the boot generator, these packages will be downloaded and extracted to get the kernel binary.

  • kernel (boot/initrd/root) [default: None ]: Name of the kernel package. For the initrd generator, this package and its dependencies will be downloaded and extracted to a temporary folder to get the required kernel modules.

  • tar (boot) [default: True ]: Flag for packing the boot artifacts as a tarball. If embdgen is used to write the artifacts to an image, this will preserve the owner and mode of the artifacts.

  • busybox (initrd) [default: busybox-static ]: Name of the busybox package for the minimal busybox environment.

  • modules (initrd) [default: None ]: List of kernel modules to add and load from the initrd.img. Example:

modules:
  - kernel/drivers/virtio/virtio.ko 
  - kernel/drivers/virtio/virtio_ring.ko 
  - kernel/drivers/block/virtio_blk.ko 
  - kernel/net/core/failover.ko 
  - kernel/drivers/net/net_failover.ko 
  - kernel/drivers/net/virtio_net.ko

  • root_device (initrd) [default: None ]: Name of the root device to mount.

  • devices (initrd) [default: None ]: List of device nodes to add. Example:

devices:
  - name: mmcblk1
    type: block
    major: 8
    minor: 0
  - name: console
    type: char
    major: 5
    minor: 1

In addition, the parameters “mode”, to specify the mode of the device node, “uid”, to specify the owner of the device node, and “gid”, to specify the owning group of the device node, can be used.

  • kernel_version (initrd) [default: auto detected ]: The kernel version of the copied modules.

  • modules_folder (initrd) [default: None ]: A folder in the host or container environment containing the kernel modules. This can be used to provide modules from a local kernel build. Example:

modules_folder: $$RESULTS$$

The string “$$RESULTS$$” will be replaced with the path to the output folder, for all paths given in yaml config files of the build tools.

  • result_pattern (root) [default: auto detected ]: A name pattern to match the build result, e.g. *.tar.xz for kiwi-ng tbz builds.

  • image (boot/initrd/root/config) [default: None ]: A kiwi-ng XML image description. This parameter can be used to integrate old image descriptions into new build flows.

  • berrymill_conf (root) [default: None ]: A berrymill.conf used for berrymill build. If none is given, the configuration will be automatically generated using the provided apt repositories. This parameter can be used to integrate old image descriptions into new build flows.

  • use_berrymill (root) [default: True ]: Flag to use berrymill for kiwi-ng build. If this flag is set to false, kiwi-ng will be called without the berrymill wrapper.

  • use_bootstrap_package (root) [default: True ]: Flag if a bootstrap package shall be used for kiwi-ng builds. If this flag is set to True, one of the specified repositories needs to provide the bootstrap package.

  • bootstrap_package (root) [default: bootstrap-root-ubuntu-jammy ]: Name of the bootstrap package for the kiwi-ng build.

  • bootstrap (root) [default: None ]: List of additional bootstrap packages for the kiwi-ng build.

  • kiwi_root_overlays (root) [default: None ]: List of root overlay folders for the kiwi-ng build.

  • use_kiwi_defaults (root) [default: True ]: If this flag is true, the “root” folder and the kiwi-ng config scripts next to the appliance.kiwi will be provided to kiwi-ng.

  • kiwi_scripts (root) [default: None ]: List of additional scripts which will be provided to kiwi-ng during the build.

  • kvm (root) [default: True ]: Flag if KVM acceleration shall be used for kiwi-ng builds.

  • image_version (root) [default: 1.0.0 ]: Image version for the generated kiwi-ng image description.

  • type (root) [default: debootstrap ]: Type of the root filesystem generator to use. The supported generators are “debootstrap” and “kiwi”.

  • primary_repo (root) [default: auto selected Ubuntu Jammy repository ]: The primary apt repository for the debootstrap or kiwi-ng build. The main component of this repository is used for debootstrap.

  • primary_distro (root) [default: jammy ]: The name of the distribution used for debootstrap.

  • root_password (root) [default: linux ]: The root password of the generated root filesystem.

  • hostname (root) [default: ebcl ]: The hostname of the generated root filesystem.

  • domain (root) [default: elektrobit.com ]: The domain name of the generated root filesystem.

  • console (root) [default: auto configured ]: The console parameter of the generated root filesystem. If none is given, “ttyS0,115200” is used for amd64, and “ttyAMA0,115200” is used for arm64.

  • sysroot_packages (boot/initrd/root/config) [default: None ]: List of additional packages which shall be installed for sysroot builds. This can be used to add additional development headers.

  • sysroot_defaults (boot/initrd/root/config) [default: True ]: Flag if the default additional packages for sysroot builds shall be added. If yes, in addition to the specified packages the packages “build-essential” and “g++” will be added.

Building an image from scratch

Let’s develop a new EB corbos Linux image step by step, for the NXP RDB2 board using the NXP S32G2 SoC. According to the NXP S32G2 user manual, the following bootloader layout is required:

The space between 0x0 and 0x1d_3000 is occupied by some or all of the following components: IVT, QSPI Parameters, DCD, HSE_FW, SYS_IMG, Application Boot Code Header, TF-A FIP image. The actual layout is determined at boot time and can be obtained from the arm-trusted-firmware.

  • IVT: offset 0x1000, size 0x100
  • AppBootCode Header: offset 0x1200, size 0x40
  • U-Boot/FIP: offset 0x1240, size 0x3d400
  • U-Boot Environment: offset 0x1e0000, size 0x2000

For SD/eMMC the partitioned space begins at 0x1d_3000.

For our SD card image this means that the first 256 B of the FIP image, containing the ATF and the U-Boot, need to be written to block 0. Then a gap of 0x2000 B is required at position 0x1e0000 B for the U-Boot environment, and afterwards the remaining part of the ATF and U-Boot image can be written. The partition table and partitions come after that.

Further, the user manual describes that the kernel can be provided as a FIT image, and that one way to provide this FIT image is to put it on the first partition, which has to be FAT32, using the name fitimage.

All these requirements can be fulfilled with the following embdgen image description:

# Partition layout of the image
# For more details see https://elektrobit.github.io/embdgen/index.html
image:
  type: mbr
  boot_partition: boot

  parts:
    - name: u-boot part 1
      type: raw
      start: 0
      size:  256 B
      content:
        type:  raw
        file:  out/fip.s32
  
    - name: u-boot part 2
      type: raw
      start:  512 B
      content:
        type:   raw
        file:   out/fip.s32
        offset: 512 B

    - name: uboot.env
      type:  empty
      start: 0x1e0000 B
      size:  0x2000 B

    - name: boot
      type: partition
      fstype: fat32
      content:
        type: fat32
        content:
          type: files
          files:
            - out/fitimage
      size: 100 MB

    - name: root
      type: partition
      fstype: ext4
      size: 2 GB
      content:
        type: ext4
        content:
          type: archive
          archive: out/ebcl_rdb2.config.tar

You may notice that this image description requires three artifacts:

  • fip.s32: This is the binary image containing the arm trusted firmware (ATF) and the U-Boot bootloader.

  • fitimage: This is the binary flattened image tree (FIT) containing the kernel and device tree.

  • ebcl_rdb2.config.tar: This is a tarball containing the contents of our Linux root filesystem.

Since the NXP S32G2 SoC is supported by EB corbos Linux, a FIP image and a kernel binary are provided as part of the releases and the free download. The fip.s32 image is contained in the Debian package arm-trusted-firmware-s32g, and provided on https://linux.elektrobit.com/eb-corbos-linux/1.2 as part of the distribution ebcl_nxp_public in the component nxp_public. The kernel binary and modules are provided by the same distro and component, packaged as linux-image-unsigned-5.15.0-1023-s32-eb, linux-modules-5.15.0-1023-s32-eb and linux-modules-extra-5.15.0-1023-s32-eb.

The tooling to build the fitimage is contained in the packages u-boot-s32-tools, arm-trusted-firmware-s32g, device-tree-compiler, and nautilos-uboot-tools. We need to install these tools in some environment to be able to build the fitimage. Adding them to the root filesystem would be a possibility, but not a good one, since this would bloat the root filesystem and would also hand very useful tools to an attacker trying to hack our embedded solution. Since the tooling is only needed at build time, a better approach is to install it in a separate environment. This could be our build host, but since we want reproducible builds, the better solution is to use the root generator to define and create a well-specified chroot build environment.

Let’s first define some common settings used by our image overall, as base.yaml:

# Kernel package to use
kernel: linux-image-unsigned-5.15.0-1023-s32-eb
# CPU architecture
arch: arm64
# Add the EB corbos Linux apt repo
use_ebcl_apt: true
# Add repo with NXP RDB2 packages
apt_repos:
  - apt_repo: http://linux.elektrobit.com/eb-corbos-linux/1.2
    distro: ebcl_nxp_public
    components:
      - nxp_public
    key: file:///build/keys/elektrobit.pub
    gpg: /etc/berrymill/keyrings.d/elektrobit.gpg

This base.yaml states that we want to use the kernel package linux-image-unsigned-5.15.0-1023-s32-eb, build an arm64 image, and make use of the default EBcL apt repository and the EBcL NXP additions. Now we can build on this file and define our fitimage build environment as boot_root.yaml:

# Derive values from base.yaml - relative path
base: base.yaml
# Name of the boot root archive
name: boot_root
# Packages for boot_root.tar
packages:
  - linux-image-unsigned-5.15.0-1023-s32-eb
  - linux-modules-5.15.0-1023-s32-eb
  - linux-modules-extra-5.15.0-1023-s32-eb
  - u-boot-s32-tools
  - arm-trusted-firmware-s32g
  - device-tree-compiler
  - nautilos-uboot-tools

We install all the above-mentioned packages into this environment. For building the fitimage, and for extracting the fip.s32, we can make use of the boot generator:

# Derive values from base.yaml - relative path
base: base.yaml
# Reset the kernel value - we don't want to download and extract it
kernel: null
# Do not pack the files as tar
tar: false
# do not download and extract these packages, they are already installed in the boot_root.tar
use_packages: false
# Name of the boot root archive
base_tarball: $$RESULTS$$/boot_root.tar
# Files to copy from the host environment
host_files:
  - source: bootargs-overlay.dts
    destination: boot
  - source: bootargs.its
    destination: boot
  - source: $$RESULTS$$/initrd.img
    destination: boot
    base_dir: .
# Scripts to build the fitimage and fip.s32
scripts:
  - name: build_fitimage.sh # Build the fitimage in the boot_root.tar environment
    env: chroot
# Files to copy to the build folder
files:
  - boot/fip.s32
  - boot/fitimage

The kernel is already part of the chroot tarball environment, and we don’t need to download it again. We need to provide the fitimage and fip.s32 binaries directly to embdgen, so we don’t want to pack them. The tarball created by the root generator will be named “boot_root.tar”, because of the name given in the boot_root.yaml. Because of the “base_tarball” parameter, the boot generator will pick up the tarball, extract it and chroot into this environment. The boot generator will also replace the string “$$RESULTS$$” with the path to the given output folder. In addition, we need the files “bootargs-overlay.dts”, “bootargs.its” and “$$RESULTS$$/initrd.img” in the host environment. These files will be copied into the chroot environment and used for building the fitimage. The script build_fitimage.sh implements the fitimage building. When this script has done its job, the files fip.s32 and fitimage will be copied to the output folder.

To use this recipe, we first need the input artifacts. The bootargs.its is the fitimage description we need to provide. The following description will do the job:

/dts-v1/;
/ {
  description = "BaseOS Boot Image";
  #address-cells = <1>;
  images {
    kernel-1 {
      description = "Linux kernel image";
      data = /incbin/("Image");
      type = "kernel";
      arch = "arm64";
      os = "linux";
      compression = "none";
      load = <0x80000000>;
      entry = <0x80000000>;
      hash-1 { algo = "md5"; };
    };
    fdt-1 {
      description = "Flattened device tree blob";
      data = /incbin/("target.dtb");
      type = "flat_dt";
      arch = "arm64";
      compression = "none";
      hash-1 { algo = "md5"; };
    };
    ramdisk-1 {
      description = "Initial ramdisk image";
      data = /incbin/("initrd");
      type = "ramdisk";
      arch = "arm64";
      os = "linux";
      compression = "gzip";
      load =  <0x90000000>;
      entry = <0x90000000>;
      hash-1 { algo = "md5"; };
    };
  };
  configurations {
    default = "config-1";
    config-1 {
      description = "Default boot config";
      kernel = "kernel-1";
      ramdisk = "ramdisk-1";
      fdt = "fdt-1";
    };
  };
};

It describes a fitimage consisting of a kernel binary, a device tree and an initrd.img.

The bootargs-overlay.dts is the U-Boot configuration:

/dts-v1/;
/plugin/;

&{/chosen} {
    bootargs = "console=ttyLF0,115200 earlycon nohz=off coherent_pool=64M root=/dev/mmcblk0p2 selinux=0 rw";
};

The initrd.img is the initial ramdisk we want to use. We can use the initrd generator to create such an initrd.img which fits our needs. As long as we don’t want to implement secure boot, our needs are quite small: we just want to use /dev/mmcblk0p2 as root partition, which is partition two of the internal eMMC storage.

# Derive values from base.yaml - relative path
base: base.yaml
# Reset the kernel value - we don't want to download and extract it
kernel: null
# Root device to mount
root_device: /dev/mmcblk0p2

Running the initrd generator with this spec will create a minimal initrd.img for us.

The final missing input is the script to generate the fitimage. We can use the following script:

#!/bin/sh

set -e

#======================
# Get NXP S32G ATF (secure boot image)
#---------------------------------------
cp /usr/lib/arm-trusted-firmware-s32g/s32g274ardb2/fip.s32 \
    /boot/fip.s32

#======================
# Rename kernel
#---------------------------------------
echo "Rename kernel..."
if [ -f /boot/vmlinuz ]; then
    mv /boot/vmlinuz /boot/Image
else
    mv /boot/vmlinuz-* /boot/Image
fi
mv /boot/initrd.img /boot/initrd

#======================
# Get NXP S32G device tree
#---------------------------------------
cp /lib/firmware/*/device-tree/freescale/s32g274a-rdb2.dtb \
    /boot/fsl-s32g274a-rdb2.dtb

#======================
# Create fit image
#---------------------------------------
cd /boot

dtc -I dts -O dtb -o bootargs-overlay.dtbo bootargs-overlay.dts

fdtoverlay -i fsl-s32g274a-rdb2.dtb -o target.dtb bootargs-overlay.dtbo
ls -lah bootargs-overlay.dtbo

mkimage -f bootargs.its fitimage

Now we are prepared to build our fitimage, and get the fip.s32 binary.

S32G2

We can build the boot_root.tar using the command root_generator boot_root.yaml ./out, then we can build the initrd.img using the command initrd_generator initrd.yaml ./out, and finally we can build the fitimage using the command boot_generator boot.yaml ./out.

To avoid typing all these commands by hand, we can use make. The following Makefile will do the job:

#--------------
# Result folder
#--------------

result_folder ?= ./out

#---------------------
# Select bash as shell
#---------------------

SHELL := /bin/bash

#---------------------
# Image specifications
#---------------------

partition_layout ?= image.yaml

initrd_spec ?= initrd.yaml
boot_root_spec ?= boot_root.yaml
boot_spec ?= boot.yaml

#-------------------------
# Additional configuration
#-------------------------

# Build script for the fitimage
build_fitimage ?= build_fitimage.sh

# Layout of the fitimage
fitimage_config ?= bootargs.its

# NXP bootloader config
bootloader_config ?= bootargs-overlay.dts

#--------------------
# Generated artifacts
#--------------------

# Disc image
disc_image ?= $(result_folder)/image.raw

# Boot root tarball
boot_root ?= $(result_folder)/boot_root.tar

# Disc image
fitimage ?= $(result_folder)/fitimage

# Generated initrd.img
initrd_img ?= $(result_folder)/initrd.img

#--------------------------
# Image build configuration
#--------------------------

# The initrd image is built using the initrd generator.
# initrd_spec: specification of the initrd image.
$(initrd_img): $(initrd_spec)
	@echo "Build initrd.img..."
	mkdir -p $(result_folder)
	set -o pipefail && initrd_generator $(initrd_spec) $(result_folder) 2>&1 | tee $(initrd_img).log

# The root generator is used to build a sysroot variant of the root filesystem.
# root_filesystem_spec: specification of the root filesystem
#
# --no-config means that the configuration step is skipped
$(sysroot_tarball): $(root_filesystem_spec)
	@echo "Build sysroot.tar..."
	mkdir -p $(result_folder)
	set -o pipefail && root_generator --sysroot --no-config $(root_filesystem_spec) $(result_folder) 2>&1 | tee $(sysroot_tarball).log


# The root generator is used to build a chroot environment which contains all tools for building the fitimage.
# boot_root_spec: specification of the fitimage build environment
$(boot_root): $(boot_root_spec)
	@echo "Build $(boot_root) from $(boot_root_spec)..."
	mkdir -p $(result_folder)
	set -o pipefail && root_generator --no-config $(boot_root_spec) $(result_folder) 2>&1 | tee $(boot_root).log

# The boot generator is used to run the fitimage build in a chroot environment.
# boot_spec: spec of the fitimage build environment
# boot_root: tarball of the fitimage build environment
# build_fitimage: build script for the fitimage
# fitimage_config: fitimage layout configuration
# bootloader_config: bootloader configuration
# initrd_img: the initrd.img which is embedded in the fitimage
# initrd_spec: the initrd.img specification
$(fitimage): $(boot_spec) $(boot_root) $(build_fitimage) $(fitimage_config) $(bootloader_config) $(initrd_img)
	@echo "Build $(fitimage)..."
	mkdir -p $(result_folder)
	set -o pipefail && boot_generator $(boot_spec) $(result_folder) 2>&1 | tee $(fitimage).log

#--------------------------------
# Default make targets for images
#--------------------------------

# build of the initrd.img(s)
.PHONY: initrd
initrd: $(initrd_img)

# build of the fitimage
.PHONY: boot
boot: $(fitimage)

# build of the fitimage build env
.PHONY: boot_root
boot_root: $(boot_root)

# clean - delete the generated artifacts
.PHONY: clean
clean:
	rm -rf $(result_folder)

Now the board specific parts are done, and the only missing piece to build the image is the root filesystem. A minimal root filesystem making use of the systemd init manager can be specified as:

base: base.yaml
name: ebcl_rdb2
type: debootstrap
packages:
  - systemd
  - udev
  - util-linux
# Scripts to configure the root tarball
scripts:
  - name: config_root.sh # Name of the script, relative path to this file
    env: fake

The config_root.sh script is needed to link systemd as /sbin/init.

#!/bin/sh

# Link systemd as init
ln -s /usr/lib/systemd/systemd ./sbin/init

To build the root filesystem tarball, we can run root_generator root.yaml ./out, or we extend our Makefile.

#---------------------
# Image specifications
#---------------------

# Specification of the root filesystem content and configuration
root_filesystem_spec ?= root.yaml

#-------------------------
# Additional configuration
#-------------------------

# Config script for root filesystem
config_root ?= config_root.sh

#--------------------
# Generated artifacts
#--------------------

# Base root tarball
base_tarball ?= $(result_folder)/ebcl_rdb2.tar

# Configured root tarball
root_tarball ?= $(result_folder)/ebcl_rdb2.config.tar

#--------------------------
# Image build configuration
#--------------------------

# The root generator is used to build the base root filesystem tarball.
# root_filesystem_spec: specification of the root filesystem packages.
#
# This first step only installs the specified packages. User configuration
# is done as a second step, because the build of this tarball is quite
# time consuming and configuration is fast. This is an optimization for
# the image development process.
$(base_tarball): $(root_filesystem_spec)
	@echo "Build root.tar..."
	mkdir -p $(result_folder)
	set -o pipefail && root_generator --no-config $(root_filesystem_spec) $(result_folder) 2>&1 | tee $(base_tarball).log

# The root configurator is used to run the user configuration scripts
# as a separate step in the build process.
# base_tarball: tarball which is configured
# config_root: the used configuration script
$(root_tarball): $(base_tarball) $(config_root)
	@echo "Configuring ${base_tarball} as ${root_tarball}..."
	mkdir -p $(result_folder)
	set -o pipefail && root_configurator $(root_filesystem_spec) $(base_tarball) $(root_tarball) 2>&1 | tee $(root_tarball).log

The above makefile splits the image installation and the configuration step of building the root tarball. This is useful if you expect changes for the configuration, because the installation step is quite time consuming, and the configuration step is quite fast. This optimization can save you a lot of build time.

Finally we need to run embdgen to build our binary image. This can be done manually by running embdgen -o ./out/image.raw image.yaml, but we can also add it to our Makefile.

#---------------------
# Image specifications
#---------------------

# Specification of the partition layout of the image.raw
partition_layout ?= image.yaml

#--------------------
# Generated artifacts
#--------------------

# Disc image
disc_image ?= $(result_folder)/image.raw

#--------------------------
# Image build configuration
#--------------------------

# Embdgen is used to build the SD card image.
# fitimage: the fitimage containing the kernel, the device tree and the initrd.img
# root_tarball: the contents of the root filesystem
# partition_layout: the partition layout of the SD card image
#
# The bootloader fip.s32 is not explicitly mentioned, since it is built in one step
# with the fitimage.
$(disc_image): $(fitimage) $(root_tarball) $(partition_layout)
	@echo "Build image..."
	mkdir -p $(result_folder)
	set -o pipefail && embdgen -o ./$(disc_image) $(partition_layout) 2>&1 | tee $(disc_image).log

Now you have an image which you can flash to your NXP RDB2 board. The overall build flow with the changes above is:

S32G2

As described in the previous chapters, an EB corbos Linux image typically consists of a Makefile and specification yaml files. The Makefile is SoC-specific, should only be changed during SoC and board bring-up, and can be shared between all images for the SoC. If different image variants are used, parts of the image configuration can also be shared. To avoid redundancy, the example images make use of this sharing. The example images are contained in the images folder of the EB corbos Linux template workspace, and are structured by CPU architecture, distribution, init-manager and further variant descriptions. The example image for amd64 and the QEMU target, using the EBcL distribution, the crinit init-manager, and the debootstrap root filesystem builder, is contained in images/amd64/qemu/ebcl/crinit/debootstrap, and you can build and run it by executing make in this folder.
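
For example, to build and run this image, assuming the workspace root as the current directory:

cd images/amd64/qemu/ebcl/crinit/debootstrap
make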

Please be aware that the example images are only considered for educational purposes. These images are not pre-qualified. If you are an EB corbos Linux customer, and want to start a new industrial embedded Linux project which requires qualification and maintenance, please choose one of the provided reference images as a starting point. These images are already pre-qualified and get up to 15 years of maintenance.

The amd64 images

EB corbos Linux doesn’t support any amd64 hardware at the moment, but we provide some QEMU amd64 images. Using amd64 for development can make your development flow much smoother, since you don’t have to handle the tricky aspects of cross-building.

For amd64/qemu we provide example images for EB corbos Linux (EBcL) and for Ubuntu Jammy. The difference between EBcL and Jammy is that EBcL provides some additional components, like the crinit init-manager and the elos logging and event framework, and that EBcL provides a qualified security maintenance release every three months, while Jammy provides updates continuously, with less strict qualification and documentation.

The amd64 Jammy images

In images/amd64/qemu/jammy you can find five basic example images demonstrating how to use the EB corbos Linux SDK. This folder contains the common configuration shared by all the examples, and makes use of the QEMU images/qemu*.mk include makefiles.

# Kernel package to use
kernel: linux-image-generic
# Apt repositories to use
apt_repos:
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy
    components:
      - main
      - universe
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy-security
    components:
      - main
      - universe
# CPU architecture
arch: 'amd64'

All examples make use of the kernel “linux-image-generic”. This is a meta-package and always takes the latest available Ubuntu Jammy package. The Canonical Ubuntu apt repositories are used to build the examples.

# Partition layout of the image
# For more details see https://elektrobit.github.io/embdgen/index.html
image:
  type: gpt
  boot_partition: root

  parts:
    - name: root
      type: partition
      fstype: ext4
      size: 2 GB
      content:
        type: ext4
        content:
          type: archive
          archive: build/ubuntu.config.tar

All examples make use of a very simple image consisting of a gpt partition table and a single ext4 root partition with a size of 2 GB.

# Derive values from base.yaml - relative path
base: base.yaml
# Root device to mount
root_device: /dev/vda1
# List of kernel modules
modules:
  - kernel/drivers/block/virtio_blk.ko # virtio_blk is needed for QEMU

Also the initrd.img is shared by all examples. It first loads the virt-IO block driver and then mounts /dev/vda1 as the root filesystem.

# Derive values from base.yaml - relative path
base: base.yaml
# Download dependencies of the kernel package - necessary if meta-package is specified
download_deps: true
# Files to copy from the packages
files:
  - boot/vmlinuz*
  - boot/config*
# Do not pack the files as tar - we need to provide the kernel binary to QEMU
tar: false

The boot.yaml is also not image specific. It’s used to download and extract the kernel binary. In addition, the kernel config is extracted.

# Derive the base configuration
base: base.yaml
# Reset the kernel - should not be installed
kernel: null
# Name of the archive.
name: ubuntu
# Packages to install in the root tarball
packages:
  - systemd
  - udev        # udev will create the device node for ttyS0
  - util-linux
# Scripts to configure the root tarball
scripts:
  - name: config_root.sh # Name of the script, relative path to this file
    env: chroot # Type of execution environment

The root.yaml shares the common parts of the root filesystem configuration of all these example images. All examples use “ubuntu” as name, by default have a minimal root filesystem consisting of the debootstrap package set with systemd, udev, and util-linux additionally installed, and use config_root.sh for configuration, which links systemd as /sbin/init.

The amd64 Jammy berrymill image

At the moment, the EBcL SDK makes use of two more generic Linux root filesystem builders, debootstrap and kiwi-ng. The default is debootstrap, because it provides a much better build speed, but the previously used kiwi-ng is also still supported. Future EBcL major release lines may drop kiwi-ng and come with a more embedded-optimized solution, so ideally you make use of the root.yaml instead of using your own kiwi-ng XML image description.

The amd64/qemu/jammy/berrymill image makes use of the above mentioned configurations, and extends them with its own root.yaml and a specific Makefile.

# Config to use as a base
base: ../root.yaml
# Add the EB corbos Linux apt repo to provide the bootstrap package
use_ebcl_apt: true
# Overwrite the image builder type - ensure kiwi is used
type: kiwi
# Pattern to match the result file
result_pattern: '*.tar.xz'

This root.yaml inherits the root.yaml from the parent folder, described above, adds the EBcL apt repository, which provides the required kiwi-ng bootstrap package, sets the build type to “kiwi”, and updates the build result search pattern to “*.tar.xz”, since there is no way to disable the result compression with kiwi-ng.

# Makefile for Jammy QEMU amd64 image using kiwi

# Arch for sysroot extraction
arch = x86_64

#---------------------
# Image specifications
#---------------------

# Specification of the partition layout of the image.raw
partition_layout = ../image.yaml
# Specification of the root filesystem content and configuration
root_filesystem_spec = root.yaml
# Specification of the initrd.img
initrd_spec = ../initrd.yaml
# Specification of the kernel
boot_spec = ../boot.yaml

#-------------------------
# Additional configuration
#-------------------------

# Config script for root filesystem
config_root = ../config_root.sh


#--------------------
# Generated artifacts
#--------------------

# Disc image
disc_image = $(result_folder)/image.raw

# Base root tarball
base_tarball = $(result_folder)/ubuntu.tar.xz

# Configured root tarball
root_tarball = $(result_folder)/ubuntu.config.tar

# Generated initrd.img
initrd_img = $(result_folder)/initrd.img

# Kernel image
kernel = $(result_folder)/vmlinuz

# Sysroot tarball
sysroot_tarball = $(result_folder)/ubuntu_sysroot.tar


#-------------------
# Run the QEMU image
#-------------------

# QEMU kernel command line
kernel_cmdline_append = "rw"


# for building
include ../../../../qemu.mk

# for running QEMU
include ../../../../qemu_x86_64.mk

The Makefile points make to the right specification files, sets the flag to mount the root filesystem as writable, and includes the base makefiles describing how to build a QEMU image and how to run the build results using QEMU.

The amd64 Jammy kiwi-ng debootstrap image

In general, kiwi-ng can also build images using debootstrap instead of a pre-built bootstrap package. This brings the limitations that only one apt repository is supported, which needs to provide a proper main component, and that a debootstrap script must be available in the build VM for the selected distribution. The EBcL SDK can make use of this for Ubuntu Jammy builds, and the image amd64/qemu/jammy/debootstrap is a proof of concept showing how to do it.

# CPU architecture
arch: amd64
# Name of tarball
name: ubuntu
# APT repo for kiwi build
apt_repos:
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy
    components:
      - main
      - universe
# Use debootstrap instead of bootstrap package
# This allows us to use only one apt repo.
use_bootstrap_package: false
# Select required bootstrap packages
bootstrap:
  - apt
# Packages to install in the root tarball
packages:
  - systemd
  - udev
  - util-linux
# Overwrite the image builder type - ensure kiwi is used
type: kiwi
# Pattern to match the result file
result_pattern: '*.tar.xz'
# Scripts to configure the root tarball
scripts:
  - name: ../config_root.sh # Name of the script, relative path to this file
    env: chroot # Type of execution environment

The root.yaml configures the Ubuntu Jammy apt repository as the single apt repository to use, and sets "use_bootstrap_package" to false, which results in a kiwi-ng build that doesn't rely on the EBcL bootstrap package.

The amd64 Jammy debootstrap image

The images/amd64/qemu/jammy/debootstrap image makes use of the debootstrap root filesystem builder. The only difference to the shared configuration is that debootstrap is explicitly selected.

# Config to use as a base
base: ../root.yaml
# Overwrite the image builder type - ensure debootstrap is used
type: debootstrap

The Makefile is similar to the one above.

The amd64 Jammy kernel source image

The amd64/qemu/jammy/kernel_src image is a proof of concept showing how to make use of locally compiled kernels in EBcL builds. The boot.yaml is used to get the kernel configuration of the Ubuntu Jammy kernel. The initrd.yaml extends the shared initrd.yaml with the line "modules_folder: $$RESULTS$$". The parameter "modules_folder" can be used to provide kernel modules from the host environment, and the string "$$RESULTS$$" will be replaced with the path to the build folder.
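
The resulting initrd.yaml is tiny; a sketch based on the parameters mentioned above:

# Derive the shared initrd specification
base: ../initrd.yaml
# Pick up the kernel modules from the build results folder,
# i.e. from the locally compiled kernel
modules_folder: $$RESULTS$$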

The Makefile extends the default QEMU makefile with a bunch of new make targets.

#--------------------------
# Image build configuration
#--------------------------

$(source):
	@echo "Get kernel sources..."
	mkdir -p kernel
	cd kernel && apt -y source linux
	sudo apt -y build-dep linux
	cd $(kernel_dir) && chmod +x scripts/*.sh

$(kconfig): $(boot_spec) $(source)
	@echo "Get kernel config as $(kconfig)..."
	mkdir -p $(result_folder)
	set -o pipefail && boot_generator $(boot_spec) $(result_folder) 2>&1 | tee $(kconfig).log
	@echo "Renaming $(result_folder)/config-* as $(kconfig)..."
	mv $(result_folder)/config-* $(kconfig)
	@echo "Copying $(kconfig) to $(kernel_dir)..."
	cp $(kconfig) $(kernel_dir)/.config
	@echo "Set all not defined values of the kernel config to defaults..."
	cd $(kernel_dir) && yes "" | $(MAKE) $(kernel_make_args) olddefconfig
	@echo "Copying modified config as olddefconfig..."
	cp $(kernel_dir)/.config $(result_folder)/olddefconfig

$(kernel): $(kconfig) $(source)
	@echo "Get kernel binary..."
	cd $(kernel_dir) && yes "" | $(MAKE) -j 16 bzImage
	cd $(kernel_dir) && INSTALL_PATH=../../$(result_folder) $(MAKE) install
	cp -v $(result_folder)/vmlinuz-* $(kernel)
	@echo "Results were written to $(kernel)"

$(modules): $(kernel)
	@echo "Get virtio driver..."
	cd $(kernel_dir) && $(MAKE) -j 16 modules
	cd $(kernel_dir) && chmod +x debian/scripts/sign-module
	mkdir -p $(result_folder)
	cd $(kernel_dir) && INSTALL_MOD_PATH=../../$(result_folder) $(MAKE) modules_install

$(initrd_img): $(initrd_spec) $(modules)
	@echo "Build initrd.img..."
	mkdir -p $(result_folder)
	set -o pipefail && initrd_generator $(initrd_spec) $(result_folder) 2>&1 | tee $(initrd_img).log

#--------------------
# Helper make targets
#--------------------

# Rebuild the kernel binary
.PHONY: rebuild_kernel
rebuild_kernel:
	mkdir -p $(result_folder)
	cd $(kernel_dir) && yes "" | $(MAKE) -j 16 bzImage
	cd $(kernel_dir) && INSTALL_PATH=../../$(result_folder) $(MAKE) install
	cp -v $(result_folder)/vmlinuz-* $(kernel)
	@echo "Results were written to $(kernel)"

# Rebuild the kernel modules
.PHONY: rebuild_modules 
rebuild_modules: kernel
	mkdir -p $(result_folder)
	cd $(kernel_dir) && $(MAKE) modules -j 16
	cd $(kernel_dir) && chmod +x debian/scripts/sign-module
	rm -rf build/lib
	cd $(kernel_dir) && INSTALL_MOD_PATH=../../$(result_folder) $(MAKE) modules_install

The "$(source)" target is responsible for fetching the kernel sources using apt and installing the kernel build dependencies. The "$(kconfig)" target gets the default config of the used kernel package and adds it to the kernel source tree. The "$(kernel)" target describes how to compile the kernel and get the kernel binary. The "$(modules)" target describes how to build the modules and install them into the results folder. The new make rule for the initrd.img adds the dependency on the locally built kernel modules.

Overall, these new rules describe how to fetch the kernel sources and build the kernel binary and modules. These binaries are then picked up by the default QEMU build flow and make rules.

The amd64 Jammy kiwi image

The EBcL SDK makes use of berrymill for kiwi-ng builds by default, but it also supports using kiwi-ng directly. The image description in amd64/qemu/jammy/kiwi is a proof of concept of how to use kiwi-ng without berrymill. Setting the flag "use_berrymill" to false does the trick. This build variant has some limitations compared to the berrymill build: derived images are not supported, and the current implementation doesn't use apt repository authentication.
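
A sketch of the corresponding root.yaml; apart from the use_berrymill flag it matches the berrymill variant described above:

# Config to use as a base
base: ../root.yaml
# Add the EB corbos Linux apt repo to provide the bootstrap package
use_ebcl_apt: true
# Use kiwi-ng directly, without the berrymill wrapper
use_berrymill: false
# Overwrite the image builder type - ensure kiwi is used
type: kiwi
# Pattern to match the result file
result_pattern: '*.tar.xz'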

The amd64 EB corbos Linux images

EB corbos Linux (EBcL) is an embedded Linux distribution targeting automotive and other industrial embedded Linux solutions. The main differences between EBcL and Ubuntu are the release and qualification handling, and some additional components added by EBcL which allow building more lightweight and better performing embedded images.

# Kernel package to use
kernel: linux-image-generic
use_ebcl_apt: true
# Additional apt repos
apt_repos:
  # Get latest security fixes
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy-security
    components:
      - main
      - universe
# CPU architecture
arch: 'amd64'

Again, the base.yaml is used to define the kernel package, the apt repos and the CPU architecture. The EBcL repo can be added using the "use_ebcl_apt" flag. For experimenting, and if we want the latest security patches without qualification, we can add the Ubuntu Jammy repositories.

The boot.yaml is no different from the one used for the Jammy images, and just extracts the kernel binary and configuration from the given kernel package. The image.yaml and the initrd.yaml are also identical to the ones used with the Jammy images.

The amd64 EB corbos Linux systemd images

EBcL supports the systemd init-manager, and if startup time and resource footprint are not too critical, it's quite a good choice, because all of the Ubuntu packages are fully compatible with it, and all services come with their configs for systemd. To run systemd without passing the init-manager on the kernel command line, we can link it as /sbin/init. This is done using the config_root.sh script.

The amd64 EB corbos Linux systemd berrymill image

The amd64/qemu/ebcl/systemd/berrymill defines a QEMU image using berrymill and kiwi-ng for building the root filesystem. This root filesystem is a very minimal one, only providing systemd, udev and the default command line tools.

The amd64 EB corbos Linux systemd debootstrap image

The amd64/qemu/ebcl/systemd/debootstrap defines a QEMU image using debootstrap for building the root filesystem. This root filesystem is a very minimal one, only providing systemd, udev and the default command line tools.

The amd64 EB corbos Linux crinit images

EBcL adds the crinit init-manager as an alternative to systemd. Crinit is a much more lightweight init-manager compared with systemd, and tailored to embedded use. Since all the hardware and use-cases are very well known in advance for an embedded system, many dynamic configuration and detection features of systemd can be skipped, which results in a faster and much more lightweight solution. The drawback of using crinit is that the Ubuntu packages are not prepared for crinit, and all service and startup configuration needs to be done by the user.

The necessary minimal configuration to use crinit is contained in images/amd64/qemu/ebcl/crinit/crinit_config, and this folder is copied as an overlay to the root filesystem using the root.yaml. The script config_root.sh ensures that the sbin/init script provided in the overlay is executable. Instead of systemd, crinit and its command line client crinit-ctl are installed.

Let's take a closer look at the crinit_config overlay. The sbin/init script mounts the /proc filesystem and then runs the crinit init-manager. The /etc folder contains a minimal crinit configuration. The file /etc/crinit/default.series is the main configuration file, and the folder /etc/crinit/crinit.d contains the services we want to run. The task /etc/crinit/crinit.d/agetty-ttyS0.crinit runs agetty on the serial console ttyS0, so that we can log in using the QEMU serial console. The task /etc/crinit/crinit.d/earlysetup.crinit sets the hostname, so that we get proper logs. The task /etc/crinit/crinit.d/mount.crinit takes care of mounting the additional filesystems.
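
As a sketch, such a sbin/init can be as simple as the following; the path of the crinit binary is an assumption and may differ in the actual overlay:

#!/bin/sh
# Mount /proc, which crinit and most other tools rely on
mount -t proc proc /proc
# Hand over PID 1 to the crinit init-manager
# (binary path is an assumption, check the installed package)
exec /usr/bin/crinit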

The amd64 EB corbos Linux crinit berrymill image

The amd64/qemu/ebcl/crinit/berrymill defines a QEMU image using berrymill and kiwi-ng for building the root filesystem. This root filesystem is a very minimal one, only providing crinit.

The amd64 EB corbos Linux crinit debootstrap image

The amd64/qemu/ebcl/crinit/debootstrap defines a QEMU image using debootstrap for building the root filesystem. This root filesystem is a very minimal one, only providing crinit.

The amd64 EB corbos Linux server images

The previous images were all very minimal images, providing only enough to boot and log in to the system. For developing an embedded system this is the right place to start, but for exploring and playing with the system it's not enough. The server images provide a more complete user experience and add logging, network, apt and ssh.

The amd64 EB corbos Linux server crinit image

The crinit variant of the server image is contained in images/amd64/qemu/ebcl/server. In addition to crinit, it provides the elos logging and event manager, which is a lightweight replacement for journald and dbus that allows automatic log evaluation and event handling. To manage the network interfaces, netifd from the OpenWRT world is used. It's a very powerful and nevertheless lightweight network manager used in many router solutions. As NTP client, ntpdate is used. To allow remote login, openssh-server is added. The image also contains apt to allow easy installation of additional packages, and the typical Linux tools and editors for playing and exploring.

The root_common.yaml is the shared root specification of all the EBcL server variants. It defines the name, the architecture and the common tools and services, like openssh-server. The root.yaml extends the package list with the crinit and elos specific packages, and defines the overlay for the crinit configuration and the config script for the crinit variant. This config_root.sh sets a machine ID, required by elos, and generates a /etc/hosts file.

Let's take a look at the server configuration. In addition to the /usr/sbin/init, which runs crinit, a ntp_time.sh is provided. This ntp_time.sh does a one-shot NTP time update as soon as the network is up, to avoid issues with apt and other time-sensitive services. The /etc/apt folder provides the apt repository configuration for EBcL and Ubuntu Jammy. The file /etc/config/network/network is evaluated by netifd to bring up the network interfaces. This configuration makes use of a static IPv6 and a dynamic IPv4 configuration. The crinit tasks are extended with tasks to run elos, bring up the network, run the SSH service, and trigger the NTP time update. The file /etc/elos/elosd.json contains some basic elos configuration to use it as syslog daemon. The config /etc/ssh/sshd_config.d/10-root-login.conf enables SSH login as root. The config /etc/gai.conf ensures that IPv4 DNS is preferred over IPv6. The other config files just set some reasonable defaults.
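
For illustration, the two smallest of these configs could look like the following sketch; the exact contents in the workspace may differ:

# /etc/ssh/sshd_config.d/10-root-login.conf - allow root login via SSH
PermitRootLogin yes

# /etc/gai.conf - prefer IPv4 addresses over IPv6 for outgoing connections
precedence ::ffff:0:0/96  100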

The amd64 EB corbos Linux server systemd image

The folder images/amd64/qemu/ebcl/server/systemd contains a variant of the EBcL server image using systemd as init manager. It’s mainly provided as a reference, to compare the configuration and performance.

The arm64 images

EB corbos Linux comes with arm64 example images for the Raspberry Pi 4 and NXP S32G boards at the moment. To ease development and testing, we also provide QEMU arm64 images.

For arm64/qemu we provide example images for EB corbos Linux (EBcL) and for Ubuntu Jammy. The difference between EBcL and Jammy is that EBcL provides some additional components, like the crinit init-manager and the elos logging and event framework, and that EBcL provides a qualified security maintenance release every three months, while Jammy is providing updates continuously, using less strict qualification and documentation.

The arm64 Jammy images

In images/arm64/qemu/jammy you can find two basic example images demonstrating how to use the EB corbos Linux SDK. This folder contains the common configuration shared by all the examples, and makes use of the QEMU images/qemu*.mk include makefiles.

# Kernel package to use
kernel: linux-image-generic
# Apt repositories to use
apt_repos:
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy
    components:
      - main
      - universe
  - apt_repo: http://archive.ubuntu.com/ubuntu
    distro: jammy-security
    components:
      - main
      - universe
# CPU architecture
arch: 'arm64'

All examples make use of the kernel “linux-image-generic”. This is a meta-package and always takes the latest available Ubuntu Jammy package. The Canonical Ubuntu apt repositories are used to build the examples.

Note that the only difference to the corresponding amd64 image is the arch specification in the last line. All further shared yaml files for the arm64 Jammy images with berrymill and debootstrap are identical to the amd64 QEMU Jammy images, and hence already documented in the previous section.

The arm64 Jammy berrymill image

At the moment, the EBcL SDK makes use of two generic Linux root filesystem builders, debootstrap and kiwi-ng. The default is debootstrap, because it provides a much better build speed, but the previously used kiwi-ng is still supported. Note that kiwi-ng is wrapped by berrymill to provide additional features like derived images. Future EBcL major release lines may drop kiwi-ng and come with a more embedded-optimized solution, so ideally you make use of the root.yaml instead of writing your own kiwi-ng XML image description.

The arm64/qemu/jammy/berrymill image makes use of the above-mentioned configurations, and extends them with its own root.yaml and a specific Makefile.

# Config to use as a base
base: ../root.yaml
# Add the EB corbos Linux apt repo to provide the bootstrap package
use_ebcl_apt: true
# Overwrite the image builder type - ensure kiwi is used
type: kiwi
# Pattern to match the result file
result_pattern: '*.tar.xz'

This root.yaml inherits the root.yaml from the parent folder, described above, adds the EBcL apt repository, which provides the required kiwi-ng bootstrap package, sets the build type to "kiwi", and updates the build result search pattern to "*.tar.xz", since there is no way to disable the result compression with kiwi-ng.

# Makefile for any QEMU arm64 image using kiwi

# Arch for sysroot extraction
arch = aarch64

#---------------------
# Image specifications
#---------------------

# Specification of the partition layout of the image.raw
partition_layout = ../image.yaml
# Specification of the root filesystem content and configuration
root_filesystem_spec = root.yaml
# Specification of the initrd.img
initrd_spec = ../initrd.yaml
# Specification of the kernel
boot_spec = ../boot.yaml

#-------------------------
# Additional configuration
#-------------------------

# Config script for root filesystem
config_root = ../config_root.sh


#--------------------
# Generated artifacts
#--------------------

# Disc image
disc_image = $(result_folder)/image.raw

# Base root tarball
base_tarball = $(result_folder)/ubuntu.tar.xz

# Configured root tarball
root_tarball = $(result_folder)/ubuntu.config.tar

# Generated initrd.img
initrd_img = $(result_folder)/initrd.img

# Kernel image
kernel = $(result_folder)/vmlinuz

# Sysroot tarball
sysroot_tarball = $(result_folder)/ubuntu_sysroot.tar


#-------------------
# Run the QEMU image
#-------------------

# QEMU kernel command line
kernel_cmdline_append = "rw"


# for building
include ../../../../qemu.mk

# for running QEMU
include ../../../../qemu_aarch64.mk

The Makefile points make to the right specification files, sets the flag to mount the root filesystem as writable, and includes the base makefiles describing how to build a QEMU image and how to run the build results using QEMU.

The arm64 EB corbos Linux images

EB corbos Linux (EBcL) is an embedded Linux distribution targeting automotive and other industrial embedded Linux solutions. The main differences between EBcL and Ubuntu are the release and qualification handling, and some additional components added by EBcL which allow building more lightweight and better performing embedded images. The code is again very similar to the amd64 QEMU images.

The differences for aarch64 are the adaptation of the architecture in base.yaml and in the *.mk files.

Supported images

The following images are supported:

  • aarch64 EB corbos Linux systemd berrymill image
  • aarch64 EB corbos Linux systemd debootstrap image
  • aarch64 EB corbos Linux crinit debootstrap image
  • aarch64 EB corbos Linux crinit berrymill image

Their functionality and implementation are analogous to the corresponding amd64 images.

EB corbos Linux example images for the Raspberry Pi 4

EB corbos Linux comes with development support for the Raspberry Pi 4. This means you can use a Raspberry Pi 4 board for early development and demos, and you get support, but it's not qualified for mass production. The Raspberry Pi 4 example images make use of the kernel and firmware packages provided by Ubuntu Ports.

# Kernel package to use
kernel: linux-image-raspi
use_ebcl_apt: true
# Additional apt repos
apt_repos:
  # Get Ubuntu Raspberry Pi packages
  - apt_repo: http://ports.ubuntu.com/ubuntu-ports
    distro: jammy
    components:
      - main
      - universe
      - multiverse
      - restricted
  # Get latest security fixes
  - apt_repo: http://ports.ubuntu.com/ubuntu-ports
    distro: jammy-security
    components:
      - main
      - universe
      - multiverse
      - restricted
# CPU architecture
arch: arm64

For booting, the Raspberry Pi expects to find a fat32 partition as the first partition on the SD card, and this partition is expected to contain the firmware and kernel binaries, the devicetrees, and some configuration files. For this image, we make use of the split archive feature of embdgen. This feature allows distributing the content of one tarball to multiple partitions. The following image.yaml takes the content of build/ebcl_pi4.config.tar, puts the content of the /boot folder on the boot partition, and puts the remaining content on the root partition.

# Partition layout of the image
# For more details see https://elektrobit.github.io/embdgen/index.html
contents:
  - name: archive
    type: split_archive
    archive: build/ebcl_pi4.config.tar
    splits:
      - name: boot
        root: boot
    remaining: root

image:
  type: mbr
  boot_partition: boot

  parts:
    - name: boot
      type: partition
      fstype: fat32
      size: 200 MB
      content:
        type: fat32
        content: archive.boot

    - name: root
      type: partition
      fstype: ext4
      size: 4 GB
      content:
        type: ext4
        content: archive.root

The cmdline.txt and config.txt are just taken from a prebuilt Raspberry Pi OS image.

#!/bin/sh

# Create a hostname file
echo "ebcl-pi4" > ./etc/hostname

# Create /etc/hosts
cat > ./etc/hosts <<- EOF
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
EOF

# Copy Raspi device trees
cp ./usr/lib/firmware/5.15.0-1060-raspi/device-tree/broadcom/bcm2711*.dtb ./boot/
# Copy device tree overlays
cp -R ./usr/lib/firmware/5.15.0-1060-raspi/device-tree/overlays ./boot/
# Copy raspi firmware
cp ./usr/lib/linux-firmware-raspi/* ./boot/

# Copy kernel as the expected name
cp ./boot/vmlinuz-* ./boot/kernel8.img || true
# Copy initrd as the expected name
cp ./boot/initrd.img-* ./boot/initramfs8 || true

# Delete the symlinks
rm ./boot/vmlinuz || true
rm ./boot/initrd.img || true

The shared config_root.sh creates a hostname and hosts file, and makes sure the kernel, bootloader and device trees are available at the expected location and name.

EBcL Raspberry Pi 4 systemd image

The folder images/arm64/raspberry/pi4/systemd contains the systemd variant of the Raspberry Pi 4 image. This image is not a minimal one, but brings what you expect to find in a Raspberry Pi server image. Since we use the split archive feature, we also install the kernel and bootloader packages into the root filesystem. This is a bit simpler, and we don't need to care about the needed kernel modules, but it also gives a somewhat more bloated and less secure root filesystem.

base: ../base.yaml
name: ebcl_pi4
packages:
  - linux-firmware-raspi
  - linux-raspi
  - u-boot-rpi
  - flash-kernel
  - systemd
  - systemd-coredump
  - systemd-timesyncd
  - udev
  - util-linux
  - netbase
  - locales
  - file
  - findutils
  - kmod
  - iproute2
  - iptables
  - iputils-ping
  - vim
  - nano
  - strace
  - apt
  - openssh-server
  - openssh-client
# Scripts to configure the root tarball
scripts:
  - name: ../config_root.sh # Name of the script, relative path to this file
    env: fake
  - name: config_systemd.sh # Name of the script, relative path to this file
    env: chroot
host_files:
  - source: ../cmdline.txt
    destination: boot
  - source: ../config.txt
    destination: boot
  - source: systemd_config/* # systemd configuration

The common config_root.sh is extended with a second, systemd-specific config_systemd.sh configuration script. This script links systemd as init-manager and enables the basic system services for network, NTP and DNS. The systemd_config overlay folder provides a basic system configuration, including apt and SSH.
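
A sketch of what this config_systemd.sh can do; the script runs with env: chroot, so systemctl can be used directly, and the enabled service names are assumptions based on the description above:

#!/bin/sh
# Link systemd as init-manager
ln -sf /usr/lib/systemd/systemd /sbin/init
# Enable the basic services for network, NTP and DNS
# (service names are assumptions, check the workspace script)
systemctl enable systemd-networkd
systemctl enable systemd-timesyncd
systemctl enable systemd-resolved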

EBcL Raspberry Pi 4 crinit image

The crinit variant of the Raspberry Pi 4 image, contained in images/arm64/raspberry/pi4/crinit, makes use of the crinit init-manager, elos for logging, and netifd for the network configuration. It also comes with apt and SSH support, and provides typical tools like vim or strace. The script config_crinit.sh takes care of creating the machine ID, needed by elos, and makes sure DNS is working. All other configuration is provided as an overlay in the crinit_config folder.

EB corbos Linux example images for the NXP RDB2 board

The folder images/arm64/nxp/rdb2 contains the EB corbos Linux (EBcL) example images for the NXP RDB2 development board, which is equipped with an NXP S32G2 SoC. The S32G2 has very specific storage layout requirements; if you are interested in more details about the required base configuration, take a look at Building an image from scratch.

# Kernel package to use
kernel: linux-image-unsigned-5.15.0-1023-s32-eb
# CPU architecture
arch: arm64
# Add the EB corbos Linux apt repo
use_ebcl_apt: true
# Add repo with NXP RDB2 packages
apt_repos:
  - apt_repo: http://linux.elektrobit.com/eb-corbos-linux/1.2
    distro: ebcl_nxp_public
    components:
      - nxp_public
    key: file:///build/keys/elektrobit.pub
    gpg: /etc/berrymill/keyrings.d/elektrobit.gpg

The packages for the NXP RDB2 board are provided as a separate distribution, called ebcl_nxp_public, as component nxp_public. For the RDB2 images, the kernel package linux-image-unsigned-5.15.0-1023-s32-eb is used. This package contains a Linux kernel image for the S32G2, and has a slightly different kernel configuration than the Ubuntu Jammy default.

The image.yaml describes the required storage layout, the initrd.yaml is a very minimal initrd.img specification just defining the root partition, and the boot_root.yaml defines the environment used for building the fitimage. The bootargs* files are the S32G2-specific configurations for the fitimage layout and kernel command line. The script build_fitimage.sh is executed in the mentioned chroot environment and automates the fitimage building. The boot.yaml wraps all of the aforementioned, and is used together with the boot generator to automatically build the fitimage. The rdb2.mk defines the default build flow for the RDB2 images, and is included, and extended if needed, by the makefiles of the different image variants.

EBcL RDB2 systemd image

The minimal RDB2 systemd example image is contained in the folder images/arm64/nxp/rdb2/systemd. This image defines a minimal working RDB2 image, and provides only systemd, udev and util-linux in the userland.

EBcL RDB2 systemd server image

The folder images/arm64/nxp/rdb2/systemd/server contains a Raspberry Pi server-like image for the RDB2 board. This image comes with an SSH server, apt, and mtd-utils.

EBcL RDB2 crinit image

The minimal RDB2 crinit example image is contained in the folder images/arm64/nxp/rdb2/crinit. This image defines a minimal working RDB2 image, and provides only crinit in the userland.

EBcL RDB2 network image

The RDB2 network example image is contained in the folder images/arm64/nxp/rdb2/network. This image contains crinit, elos, and netifd to provide a minimal Linux image with network support and logging. This image also shows how to use the boot generator and the root generator to add modules to a root filesystem. The boot.yaml defines that the lib/modules folder shall be extracted to the build results folder, and the root.yaml picks this result up and includes it into the root filesystem using the following yaml lines:

host_files:
  - source: $$RESULTS$$/modules
    destination: lib

The crinit_config folder contains a small implementation to make use of these modules. The crinit task crinit_config/etc/crinit/crinit.d/modprobe.crinit runs the script crinit_config/usr/sbin/load_modules.sh, which loads all modules stated in crinit_config/etc/kernel/runtime_modules.conf. This is a bit more involved, and requires more knowledge about the used hardware than udev would, but it is also much more lightweight and faster than udev.
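
A minimal sketch of what load_modules.sh can look like, assuming runtime_modules.conf lists one module name per line:

#!/bin/sh
# Load all modules listed in the runtime modules config,
# skipping empty lines and comments
while read -r module; do
    case "$module" in
        ''|'#'*) continue ;;
    esac
    modprobe "$module"
done < /etc/kernel/runtime_modules.conf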

EBcL RDB2 kernel_src image

The folder images/arm64/nxp/rdb2/kernel_src contains a proof of concept on how to use a locally built kernel for an RDB2 image. The kernel_config.yaml is used to extract the default kernel config, the Makefile downloads and builds the kernel, and the boot.yaml picks the kernel binary up and adds it to the fitimage. More details are described in the chapter Kernel development.

Kernel development

If you do the bring-up for a new board, you may need to adapt the kernel configuration. This section continues where “Building an image from scratch” ended.

Please be aware that EB corbos Linux allows you to "open the box", but if you modify the provided binary packages, the support and maintenance for these packages is not covered by the base offer. You can get support, qualification and long-term maintenance as an add-on to the base offer, for a yearly fee per package.

Nevertheless, let’s see how we can build our own kernel. To build a custom kernel package we need the kernel sources and the base kernel config. We can get the kernel sources and build dependencies using apt:

mkdir -p kernel
cd kernel
apt -y source linux-buildinfo-5.15.0-1034-s32-eb
sudo apt -y build-dep linux-buildinfo-5.15.0-1034-s32-eb

For extracting the kernel config, we can again make use of the boot generator:

# Derive values from base.yaml - relative path
base: base.yaml
# Do not pack the files as tar
tar: false
# download and extract the kernel package incl. depends
use_packages: true
# Files to copy to the build folder
files:
  - boot/config*

We can copy this config as .config into the kernel source tree and build the kernel using make.
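
Using the folder names and make arguments from the Makefile below, this boils down to the following commands (a sketch; the build folder name is an assumption):

# Copy the extracted kernel config into the source tree
cp build/config-* kernel/linux-s32-eb-5.15.0/.config
cd kernel/linux-s32-eb-5.15.0
# Set all undefined config values to their defaults
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig
# Cross-compile the kernel image
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j 16 Image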

To make use of our locally built kernel binary, we need an adapted boot.yaml:

# Derive values from base.yaml - relative path
base: base.yaml
# Reset the kernel value - we don't want to download and extract it
kernel: null
# Do not pack the files as tar
tar: false
# do not download and extract these packages, they are already installed in the boot_root.tar
use_packages: false
# Name of the boot root archive
base_tarball: $$RESULTS$$/boot_root.tar
# Files to copy from the host environment
host_files:
  - source: ../bootargs-overlay.dts
    destination: boot
  - source: ../bootargs.its
    destination: boot
  - source: $$RESULTS$$/initrd.img
    destination: boot
  - source: $$RESULTS$$/vmlinuz
    destination: boot
# Scripts to build the fitimage and fip.s32
scripts:
  - name: ../build_fitimage.sh # Build the fitimage in the boot_root.tar environment
    env: chroot
# Files to copy to the build folder
files:
  - boot/fip.s32
  - boot/fitimage

The only change compared to the old boot.yaml is that we add "$$RESULTS$$/vmlinuz" to the host_files. This means our kernel binary is copied to the /boot folder of the fitimage build environment, and will overwrite the one from the kernel Debian package. This gives us the following build flow:

(Figure: S32G2 build flow with the locally built kernel)

We can add this to our Makefile with the following changes:

#---------------------
# Image specifications
#---------------------

# Specification how to get the kernel config
kernel_config = kernel_config.yaml
# Kernel source package name
kernel_package = linux-buildinfo-5.15.0-1034-s32-eb

#--------------------
# Generated artifacts
#--------------------

# Kernel image
kernel = $(result_folder)/vmlinuz
# Kernel modules
modules = $(result_folder)/lib
# Kernel config
kconfig = $(result_folder)/config
# Kernel source
source = kernel
# Path of the kernel sources
kernel_dir = $(source)/linux-s32-eb-5.15.0
# Kernel make arguments
kernel_make_args = ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-

#--------------------------
# Image build configuration
#--------------------------

# Download the kernel source from the apt repo
# and install the build dependencies.
$(source):
    @echo "Get kernel sources..."
    mkdir -p $(source)
    cd $(source) && apt -y source $(kernel_package)
    sudo apt -y build-dep $(kernel_package)
    cd $(kernel_dir) && chmod +x scripts/*.sh

# Get the kernel config from the configured kernel binary package.
$(kconfig): $(kernel_config) $(source)
    @echo "Get kernel config..."
    mkdir -p $(result_folder)
    set -o pipefail && boot_generator $(kernel_config) $(result_folder) 2>&1 | tee $(kconfig).log
    @echo "Renaming $(result_folder)/config-* as $(kconfig)..."
    mv $(result_folder)/config-* $(kconfig)
    @echo "Copying $(kconfig) to $(kernel_dir)..."
    cp $(result_folder)/config $(kernel_dir)/.config
    @echo "Set all not defined values of the kernel config to defaults..."
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) olddefconfig
    @echo "Copying modified config as olddefconfig..."
    cp $(kernel_dir)/.config $(result_folder)/olddefconfig

# Build the kernel binary
$(kernel): $(kconfig) $(source)
    @echo "Compile kernel..."
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) -j 16 Image
    @echo "Get kernel binary..."
    cp $(kernel_dir)/arch/arm64/boot/Image $(kernel)
    @echo "Results were written to $(kernel)"

# Adapt build spec for the fitimage
# Additional dependency to the kernel binary
# Please note that another boot_spec is used, see boot.yaml.
$(fitimage): $(boot_spec) $(boot_root) $(build_fitimage) $(fitimage_config) $(initrd_img) $(kernel)
    @echo "Build $(fitimage)..."
    mkdir -p $(result_folder)
    set -o pipefail && boot_generator $(boot_spec) $(result_folder) 2>&1 | tee $(fitimage).log

# Make the modules and install them in the results folder
$(modules): $(kernel)
    @echo "Get virtio driver..."
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) modules -j 16
    cd $(kernel_dir) && chmod +x debian/scripts/sign-module
    mkdir -p $(result_folder)
    cd $(kernel_dir) && INSTALL_MOD_PATH=../../$(result_folder) $(MAKE) $(kernel_make_args) modules_install

#--------------------
# Helper make targets
#--------------------

# Configure the kernel binary
.PHONY: config_kernel
config_kernel:
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) menuconfig

# Rebuild the kernel binary
.PHONY: rebuild_kernel
rebuild_kernel:
    mkdir -p $(result_folder)
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) -j 16 Image
    @echo "Delete the old kernel binary..."
    rm -f $(kernel)
    @echo "Get the new kernel binary..."
    cp $(kernel_dir)/arch/arm64/boot/Image $(kernel)

# Rebuild the kernel modules
.PHONY: rebuild_modules 
rebuild_modules: kernel
    mkdir -p $(result_folder)
    cd $(kernel_dir) && $(MAKE) $(kernel_make_args) modules -j 16
    cd $(kernel_dir) && chmod +x debian/scripts/sign-module
    @echo "Delete the old kernel modules..."
    rm -rf $(modules)
    @echo "Install the new kernel modules..."
    cd $(kernel_dir) && INSTALL_MOD_PATH=../../$(result_folder) $(MAKE) $(kernel_make_args) modules_install

# clean - delete the generated artifacts
.PHONY: clean
clean:
    rm -rf $(source)
    rm -rf $(result_folder)

Building old Berrymill images

The previous versions of the EBcL SDK used kiwi-ng for building images. Using kiwi-ng was quite a pragmatic choice, since it's an established tool to build images from binary distribution packages. Nevertheless, it turned out that kiwi-ng is not flexible enough to build typical embedded images. Starting from EBcL SDK 1.3, the new make and generator based builds are used. This approach has the advantage that it's flexible enough for any imaginable build flow, and that the builds are much more efficient. Nevertheless, at least for EBcL 1.x, kiwi-ng is still provided and supported. If you are at the very beginning of your development, we recommend switching to the new build flow, since it is more efficient, and kiwi-ng support will most likely be dropped with the EBcL 2.x line.

For a stepwise transition, you can use the new build tools to build your existing kiwi-ng images. The folder images/example-old-images/qemu/berrymill contains an example showing how to build an old berrymill and kiwi-ng image using the new tools.

# Use Berrymill and Kiwi as image builder
type: kiwi
# Use KVM accelerated building - turn off if not supported on your host
kvm: true
# CPU architecture of the target
arch: 'amd64'
# Relative path to the kiwi appliance
image: appliance.kiwi
# Relative path to the berrymill.conf
berrymill_conf: berrymill.conf
# Result file name pattern
result_pattern: '*.qcow2'

The root generator supports a parameter called "image", with the path to the appliance.kiwi as value. An existing berrymill.conf can be provided using the parameter "berrymill_conf". The old images typically used qcow2 results, and the parameter "result_pattern" can be used to tell the root generator to search for such files. Using this root.yaml, the root generator will run a berrymill build for the given appliance.kiwi, and the results will be placed in the given output folder. This call of the root generator, and the instructions to run the resulting image with QEMU, are contained in the example Makefile. The "--no-config" flag is needed to tell the root generator to not extract the result and apply the configuration, since this is already done as part of the kiwi-ng build.
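
Put together, the corresponding root generator call may look like this sketch (the output folder name is an assumption):

# Run the berrymill build for appliance.kiwi,
# skipping extraction and configuration of the result
root_generator --no-config root.yaml ./build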

Developing apps

The workspace provides two simple applications to explain the development workflow and interactions with the operating system.

For application development, interaction with the different target types is handled via a set of predefined, generic Visual Studio Code tasks. Currently, the example workspace provides four different CMake presets for building, corresponding to four possible deployment targets. The supported presets are the following:

Preset        Arch     Sysroot           Image
qemu-x86_64   x86_64   sysroot_x86_64    /workspace/images/amd64/appdev/qemu/[crinit|systemd]
qemu-aarch64  aarch64  sysroot_aarch64   /workspace/images/arm64/appdev/qemu/[crinit|systemd]
hardware      aarch64  sysroot_aarch64   /workspace/images/arm64/appdev/[rdb2|pi4]/[crinit|systemd]

The columns give the name of the preset, the CPU architecture, the sysroot the application is built against, and the path to a possible image configuration with either crinit or systemd as init daemon.

For each preset, a configuration entry in /workspace/apps/common/deployment.targets is present, defining the target address, ssh settings and user credentials, as well as the gdb port used for debugging. Target access via ssh is based on TARGET_IP, SSH_USER, SSH_PORT and SSH_PREFIX. In the current example configurations for remote targets, the SSH_PREFIX is used to hand over the login password via sshpass -p {password}.

Build, execute and debug demo applications

The following section explains how to build, execute and debug the included example applications. All required steps are handled by Visual Studio Code tasks. Some of the mentioned tasks reference an "active build preset". This means that Visual Studio Code will determine the application for which the task will be executed. For this mechanism to work properly, make sure that the focused editor window shows a file of the application you want the task to run for. This file may belong to the application folder or any of its subfolders.

Build

Before you can build any of the example applications, please make sure to run make sysroot_install for the used image configuration. As an example, for the amd64 qemu image with crinit as init daemon, building the sysroot would be done like this:

cd /workspace/images/amd64/appdev/qemu/crinit
make sysroot_install

In order to build the applications, use the Visual Studio Code CMake extension on the Visual Studio Code bottom ribbon:

  • Choose my-json-app or meminfo-reader as the active project
  • Choose qemu-x86_64, qemu-aarch64 or hardware as the active configure preset
  • Click on Build

As an alternative to the Visual Studio Code tasks, building can also be done directly via cmake. The following commands will configure, build and install the my-json-app for the qemu-x86_64 preset.

cd /workspace/apps/my-json-app/
cmake . --preset qemu-x86_64
cmake --build --preset qemu-x86_64

After building is done, the artifacts will be available in /build/results/apps/{application}/{preset}/build/install/.

Pre-execute steps

Before you can start the application for any of the available presets, you need to start the corresponding image. Again, we take the amd64 qemu crinit image as an example. The following command will start the QEMU instance, and builds the image beforehand if needed:

cd /workspace/images/amd64/appdev/qemu/crinit
make

Afterwards, you can run task Deploy app from active build preset to deploy the required artifacts for the currently active build preset.

Run demo applications

The applications can be started with the task Run app from active build preset. Based on the build preset, the ssh connection parameters are derived from /workspace/apps/common/deployment.targets, and the application is called on the target via an ssh session. All output messages of the application will be displayed in a terminal window associated with the used run task. Alternatively, you could also log in to the target via ssh and call the application directly from there.

Post-execute steps

This step is not required for the provided example applications, since both terminate directly and don't include any continuous loops. Nevertheless, your own applications may behave differently. In order to stop the execution of an application, you can either press CTRL-C in the corresponding terminal window, or stop the parent task by clicking on "■" (stop icon) of the task Run app from active build preset.

Debugging demo applications

Visual Studio Code can be used as a gdb frontend for debugging. In order to debug the application from the currently active preset, press "F5". Before Visual Studio Code starts gdb, and after debugging, the following tasks are executed automatically.

Pre debug:
  • Build and check target connection
    • Trigger incremental application build
    • Perform ssh connection test and update ssh keys, if needed
  • Update application deployment
  • Prepare application specific gdbinit file
  • Start gdbserver on remote target

Post debug:
  • Stop gdbserver on remote target

Packaging applications for EB corbos Linux

EB corbos Linux makes use of Debian packages, which are described in full detail in the Debian Policy Manual. On a high level, there are two types of packages, source packages and binary packages.

The source packages consist of a dsc file, containing the metadata of the package and typically referencing the source tarballs belonging to this package. As an example, you can take a look at the dsc of the nano editor. The source tarball contains a debian subfolder, and this subfolder contains all the metadata of this package, the binary packages which can be built from this source package, and the build instructions. Debian tooling like pbuilder can be used to build the binary packages out of a source package, for all supported platforms and variants.

The binary packages are Unix AR archives containing a file debian-binary, giving the version number of the used Debian binary package format, a control.tar.gz, containing the metadata of the package, a data.tar.gz, which is extracted to the filesystem when the package is installed, and potentially further metadata files. The advantage of using Debian binary packages and apt repositories is that you have a signature chain from your copy of the public key of the apt repository to the Debian binary package you download, which ensures that the package was provided by the right vendor, and was not manipulated on the way. Bundling the metadata with the software allows apt to ensure that the package is compatible with the target environment, and that all dependencies are fulfilled. For all installed packages, the package information is installed into /usr/share/doc. You can consult this folder to get the changelog and license information for your installed packages.

If you develop applications for EB corbos Linux which shall be installed in the root filesystem, especially during build time, we recommend packaging these applications, since this ensures that the right version is installed and all dependencies are available, and it allows easy reuse. If you develop applications which shall not be part of the root filesystem, e.g. to update them separately from the root image, bundling according to the needs of the update solution is necessary, and Debian packaging is not required.

We don’t recommend apt as an update tool for embedded solutions, since it doesn’t support an A/B update schema, and it’s not prepared to be used together with a read-only and dm-verity protected root filesystem, which you may use if you implement a secure boot solution. For such scenarios, the existing embedded update solutions, and containers are much better solutions. If you need a customized update solution, or consulting for building online updateable HPC platforms, please contact us.

Preparing the Debian package metadata

The first step to create a Debian package is to add the required metadata. You don't need to do this by hand; there are various tools which will generate template metadata. We recommend dh_make to generate the metadata. If you want to explore other tooling for creating packages from a source, refer to https://wiki.debian.org/Packaging/SourcePackage.

The dh_make tool has some expectations about the folder name, and as a comfort feature the EBcL SDK provides a helper script prepare_deb_metadata to generate the metadata for an app. To generate the Debian metadata for an app contained in the apps folder of the workspace, you can run prepare_deb_metadata [name of the app], e.g. prepare_deb_metadata my-json-app. For the example applications, you can also make use of the corresponding build task EBcL: Generate Debian metadata for app, which shows up in the build tasks menu (Ctrl + Shift + B). This will add a new subfolder debian to the app folder.

The generated metadata is just a template, and needs to be adjusted to successfully build a package. Open the new debian/control and complete it. At minimum, you need to change the value of Section to misc or another valid value, and fill out the Description. If your app has build-time dependencies, you also need to add them to the Build-Depends list. For the my-json-app, the dependencies are:

Build-Depends: debhelper-compat (= 13), cmake, pkg-config, libjsoncpp-dev
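
For orientation, a completed debian/control for this example could look roughly like the following sketch; maintainer, version and description values are placeholders:

Source: my-json-app
Section: misc
Priority: optional
Maintainer: Jane Doe <jane.doe@example.com>
Build-Depends: debhelper-compat (= 13), cmake, pkg-config, libjsoncpp-dev
Standards-Version: 4.6.2

Package: my-json-app
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Small JSON demo application
 Example application demonstrating the EB corbos Linux app development workflow.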

Debian packages use several different metadata files. The most important ones are:

  • control: This file contains the details of the source and binary packages. For more details, refer to https://www.debian.org/doc/debian-policy/ch-controlfields.html.

  • rules: This file contains the package build rules that will be used to create the package. This file is a kind of Makefile. For more details, refer to https://www.debian.org/doc/debian-policy/ch-source.html#main-building-script-debian-rules.

  • copyright: This file contains the copyright of the package in a machine-readable format. For more details, refer to https://www.debian.org/doc/debian-policy/ch-archive.html#copyright-considerations.

  • changelog: The changelog of the package itself. It contains version number, revision, distribution, and urgency of the package. For more details, refer to https://www.debian.org/doc/debian-policy/ch-source.html#debian-changelog-debian-changelog.

  • patches: This folder can contain patches that are applied on top of the original source. For more details, refer to https://www.debian.org/doc/debian-policy/ch-source.html#vendor-specific-patch-series.

Packaging the application

Once the package metadata is prepared, you can build the Debian packages, e.g. for amd64, using pbuilder. The EBcL SDK also provides a comfort script for building application packages with pbuilder. You can run build_package [name of the app] [architecture], e.g. build_package my-json-app amd64, to build the Debian binary package for your application. The results will be written to results/packages. For packaging the example applications, you can also make use of the corresponding build task EBcL: Package app, which shows up in the build tasks menu (Ctrl + Shift + B).

Adding the package to an image

To make the new Debian package available for image builds, we need to provide it as part of an apt repository. An apt repository can simply be a folder or a static server directory on the web, containing a Release and a Packages.gz file describing the contained packages. When we have an apt repository containing our new package, we can add this repository to the image specification, and then add the package to the list of installed packages.
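
For background, such index files can also be created manually with standard Debian tooling; a sketch, assuming the packages are located in the current folder:

# Generate the package index
dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz
# Generate the Release file
apt-ftparchive release . > Release
# Sign the Release file with the repository GPG key
gpg --armor --detach-sign --output Release.gpg Release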

As mentioned before, apt repositories are signed, so we need a GPG key to sign the metadata of the local apt repository which we will set up to provide our locally built packages. There is again a comfort script and a VS Code build task to generate the key, but before generating the key, you should update the identity information contained in identity/env. When you have entered your contact data, you can generate the GPG key by running the task EBcL: Generate signing key, or by running gen_sign_key in a shell. To use an existing key, you can copy the keyring into the workspace folder gpg-keys/.gnupg.

When the key is available, you can generate the apt repository metadata by running the VS Code build task EBcL: Prepare local repository, or the command prepare_repo_config. The command adds the needed index files and signatures to the folder results/packages. Please be aware that all found packages are added to the apt index, and if you have multiple builds of the same package in the folder, it's somewhat random which package is picked. It's best to delete the old build and re-run prepare_repo_config to ensure the expected package is used.

To be able to use the repository in an image build, you need to serve it. You can do this by running the VS Code task EBcL: Serve local repository or the command serve_packages. Then you can add the apt repository to your image configuration, using the IP address of the container, which you can get with ip addr:

apt_repos:
  - apt_repo: http://<Container IP>
    distro: local
    components:
      - main

The build tools

In the example below of the build flow for the S32G2, you can see all currently supported tools marked in color. An explanation of the flow is already given in the section "Building an image from scratch".

(Figure: the build tools in the S32G2 image build flow)

The idea behind the set of build tools in this SDK follows the UNIX philosophy to make each program do one and only one thing well and by adding new features through connecting the output from one program to another program that again does one job well. This modular approach offers high flexibility and is easy to maintain.

Initrd generator

An initrd can be generated with the initrd generator.

initrd_generator — Build an initrd

Description

Creates a custom initial RAM disk (initrd) based on busybox, using a YAML configuration file. The synopsis is initrd_generator <initrd>.yaml <output_path>.


The internal steps are:

  1. Read in the YAML configuration file
  2. Add the BusyBox binary
  3. Download and extract the additional packages
  4. Add the kernel modules, extracting the specified modules
  5. Create the device nodes for the initrd image based on the configuration
  6. Copy all specified files and directories into the initrd image
  7. Generate the init script
  8. Generate the initrd from all these files using cpio

Configuration options

# Derive values from base.yaml - relative path
base: <base.yaml>
# Download dependencies of the kernel package - necessary if meta-package is specified
download_deps: <true|false>
# Files to copy from the packages
files:
  - boot/vmlinuz*
  - boot/config*
# Do not pack the files as tar - we need to provide the kernel binary to QEMU
tar: false
# Add kernel modules
modules:
  - <path_to_module_1>
  - ...
modules_urls: [<module.deb>]
# Url to download the busybox from
# if none given busybox-static from mirrors defined in base.yaml will be used
busybox_url: '<url.deb>'
# If not using the kernel meta-package specify a concrete version
kversion: <version>
# Root device to mount
root_device: dev/<root_device>
# devices to be available in init, type can be block or char
devices:
  - name: <name>
    type: <block|char>
    major: <major_number>
    minor: <minor_number>
  - ...
# Packages to add, e.g. e2fstools
packages:
  - <package name>
  - ...

Root generator

A root tarball can be generated with the root generator.

root_generator — Build a root tarball

Description

Creates a custom root tarball using a YAML configuration file. It can be used, for example, to build a normal rootfs, but also to build chroot build environments containing the tooling needed for building other artifacts. The synopsis is root_generator <root>.yaml <output_path> [--no-config] [--sysroot].


The internal steps are:

  1. Read in the YAML configuration file
  2. If a sysroot is configured, add generic sysroot packages like g++
  3. Depending on the configuration, build the image with kiwi or debootstrap
  4. If configuration is not skipped, run the config.sh script if present
  5. Copy the image tarball to the output folder

Configuration

Parameters

--no-config, -n: Skip the root filesystem configuration step

--sysroot, -s: Build a sysroot instead of a normal root tarball

Yaml file

Potential configuration parameters are documented in the section "Configuration parameters", and examples are given in the section "The example images".

Root configurator

A root tarball can be configured with user scripts using the root configurator.

root_configurator — Configure a root tarball

Description

Configures a tarball with user-provided scripts. Splitting the root tarball generation from the configuration allows for fast configuration adaptations. The synopsis is root_configurator <root>.yaml <input>.tar <output>.tar.


The internal steps are:

  1. Read in the YAML configuration file
  2. Extract the input tarball to a temporary folder
  3. Run the configured user scripts in the chroot or fakechroot environment
  4. Pack the result as the output tarball

Configuration

# You can define multiple configuration scripts that will run "in" the tarball
scripts:
  - name: <name.sh>
    env: <chroot|chfake>
  - name: ...

Boot generator

Boot artifacts, like a fitimage, can be generated with the boot generator.

boot_generator — Build boot artifacts

Description

Creates custom boot artifacts, like an extracted kernel binary or a fitimage, using a YAML configuration file. The synopsis is boot_generator <boot>.yaml <output_path>.


The internal steps are:

  1. Read in the YAML configuration file
  2. Download the Debian packages
  3. In a temporary folder:
    1. Extract the Debian packages into the folder
    2. Copy all specified (host) files and directories into the folder
    3. Run the config scripts in the folder
  4. Generate the boot tarball from the temporary folder, if configured

Configuration options

# Derive values from base.yaml - relative path
base: <base.yaml>
kernel: <package> # as in base.yaml; if built locally, set to null
tar: <true|false>
use_packages: <true|false>
# Name of the boot root archive, if given will be used as initial tarball base
base_tarball: $$RESULTS$$/boot_root.tar
# Files to copy from the host environment
host_files:
  - source: <file>
    destination: <folder>
# Files to copy to the build folder
files:
  - <file_path>
# You can define multiple configuration scripts that will run "in" the tarball
scripts:
  - name: <name.sh>
    env: <chroot|chfake>

Embdgen

Creates a disk image for embedded devices

embdgen — embedded disk generator.

Description

Creates disk images, mainly for embedded devices. The configuration is given in a YAML file that describes the image declaratively in a hierarchical way. It is used to combine all artifacts from the previous build steps - for example a disk image, initrd and kernel binary - into the final image. The synopsis is embdgen <config>.yaml.


Please also refer to the upstream documentation: https://elektrobit.github.io/embdgen/index.html

Using EB corbos Linux SDK on arm64

The EB corbos Linux SDK is primarily developed and tested on amd64 Linux hosts, but it is also possible to use it on arm64 hosts. It was successfully tested on a Raspberry Pi 5 with 8 GB RAM.

Prepare the host

Please set up Visual Studio Code and Docker as described in Setup.

Prepare the dev container

The pre-built dev container is only available for amd64 hosts, so on arm64 the container needs to be built locally first. Clone the EB corbos Linux dev container repository from https://github.com/Elektrobit/ebcl_dev_container. Then build the container for arm64 by running the builder/build_container script.

Prepare the workspace

Next you need the EB corbos Linux template workspace. Clone the template git repository from https://github.com/Elektrobit/ebcl_template. Then open Visual Studio Code and install the dev container extension. Open the workspace file ebcl_sdk.code-workspace, and press the "Reopen in container" button in the popup, or open the VS Code command palette by pressing Ctrl + Shift + P and select the "Reopen in container" command.

Build an arm64 image

Now you can build arm64 images. Open the folder containing the image description you want to build, e.g. /workspace/images/arm64/qemu/ebcl/crinit/debootstrap, in the integrated terminal of the dev container. Then run make to build the image. The build results are stored in a newly created build subfolder. In the case of QEMU images, the QEMU VM will be started automatically.

Cross-building

Cross-building is supported when the host allows the execution of binaries of the target architecture. To allow executing binaries for different architectures, install binfmt support. On Ubuntu, you can install it by running: sudo apt install binfmt-support qemu qemu-user-static.
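
To verify the setup, you can check that the binfmt handler for the target architecture is registered; a sketch for an amd64 host that shall run arm64 binaries:

# Install binfmt support and the QEMU user-mode emulators
sudo apt install binfmt-support qemu-user-static
# Check that the aarch64 handler is registered and enabled
update-binfmts --display qemu-aarch64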

Then you can build the images the same way as the arm64 images. Open the image folder in the terminal in the dev container, e.g. /workspace/images/amd64/qemu/ebcl/crinit/debootstrap. Then run the image build by executing make in the folder.