Firmware-Action
Firmware-action is a tool to simplify building firmware. Think of it as Makefile or Taskfile, but specifically for firmware. The tool itself is written entirely in Go.
The motivation behind its creation is to unify firmware builds across development environments. The goal of firmware-action is to run on your local machine as well as in your CI/CD pipeline, with the same configuration producing the same output.
There is also an independent Python tool to prepare the Docker containers used with firmware-action. These are hosted on GitHub and are freely available (no need to build any Docker containers yourself).
There is also a GitHub Action integration, allowing you to use firmware-action in your GitHub CI/CD.
At the moment firmware-action has modules to build:
- coreboot
- linux
- tianocore / edk2
- firmware stitching (populating IFD regions with binaries)
- u-root
- u-boot
- universal module (run arbitrary commands in arbitrary Docker container)
This list should expand in the future (see issue 56).
Firmware-action uses dagger under the hood, which makes it a rather versatile tool. When firmware-action is run, it automatically downloads the user-specified Docker containers in which it will attempt to build the firmware.
If your firmware consists of multiple components, such as coreboot with linux as the payload, you can simply define each as a module and declare the dependency between them. Each module is built separately, but can use the output of another module as its input. In the coreboot + linux example, you can call firmware-action to build coreboot recursively, which will also build linux thanks to the dependency definition. This way, you can build complex stacks of firmware in a single call.
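A minimal sketch of what such a dependency looks like in the JSON configuration (module names are illustrative and most required fields are omitted; the full format is described in the Configuration section), together with the recursive build call:

{
  "coreboot": {
    "coreboot-example": { "depends": ["linux-example"] }
  },
  "linux": {
    "linux-example": { "depends": null }
  }
}

firmware-action build --config=firmware-action.json --target=coreboot-example --recursive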
Documentation is hosted in pages.
There is a standalone repository with usage examples at firmware-action-example.
Pre-compiled coreboot toolchains for the Docker containers are stored separately in firmware-action-toolchains.
Containers
We maintain multiple containers that can be freely used with firmware-action. They are hosted both in the GitHub registry and on DockerHub.
The DockerHub registry contains only releases, while the GitHub registry also contains containers built on the main branch.
Here is a list of all containers:
Container | Maintained | Note |
---|---|---|
coreboot_4.19 | [x] | |
coreboot_4.20 | [ ] | discontinued in favor of 4.20.1 |
coreboot_4.20.1 | [x] | |
coreboot_4.21 | [x] | |
coreboot_4.22 | [ ] | discontinued in favor of 4.22.1 |
coreboot_4.22.01 | [x] | |
coreboot_24.02 | [ ] | discontinued in favor of 24.02.01 |
coreboot_24.02.01 | [x] | |
coreboot_24.05 | [x] | |
udk2017 | [x] | |
edk2-stable202008 | [x] | |
edk2-stable202105 | [x] | |
edk2-stable202111 | [x] | |
edk2-stable202205 | [x] | |
edk2-stable202208 | [x] | |
edk2-stable202211 | [x] | |
edk2-stable202302 | [x] | |
edk2-stable202305 | [x] | |
edk2-stable202308 | [x] | |
edk2-stable202311 | [x] | |
edk2-stable202402 | [x] | |
edk2-stable202405 | [x] | |
edk2-stable202408 | [ ] | discontinued in favor of stable202408.01 |
edk2-stable202408.01 | [x] | |
edk2-stable202411 | [x] | |
linux_6.1.111 | [ ] | discontinued in favor of linux_6.1 |
linux_6.1.45 | [ ] | discontinued in favor of linux_6.1 |
linux_6.1 | [x] | |
linux_6.6.52 | [ ] | discontinued in favor of linux_6.6 |
linux_6.6 | [x] | |
linux_6.9.9 | [ ] | discontinued because not LTS |
linux_6.11 | [ ] | discontinued because not LTS |
linux_6.12 | [x] | |
uroot_0.14.0 | [x] | |
uboot_2025.01 | [x] |
Legacy containers
These were created by hand a long time ago and have since been replaced.
CONTRIBUTING
We use GitHub to host code, to track issues and feature requests, and to accept pull requests.
For coding guidelines and commit message conventions, please look into CONVENTIONS.md.
Issues
As usual, check whether the issue already exists in the GitHub issue tracker.
Please use the issue template if applicable.
Pull Requests / Merge Requests
We accept GitHub pull requests.
Fork the project on GitHub, work in your fork and in branches, push these to your GitHub fork, and when ready, open a GitHub pull request against https://github.com/9elements/firmware-action.
Organize your changes in small and meaningful commits which are easy to review. Not every commit in your pull request needs to build and pass the CI tests, but the whole PR must build and pass all CI tests.
If the pull request closes an issue, please note it as "Fixes #NNN".
Code Reviews
It is not necessary to tag anyone for review; we use a CODEOWNERS file to define the individuals or teams that are responsible. They will be tagged automatically.
Get started
This guide provides step-by-step instructions on how to get started with firmware-action, demonstrated on a coreboot example.
In this guide we will:
- start a new repository (hosted on GitHub)
- build a simple coreboot for QEMU
- build coreboot both in a GitHub Action and locally
The code from this example is available in firmware-action-example.
Prerequisites
- installed Docker
- installed git
- installed dagger (optional, needed for interactive debugging)
- installed taskfile (optional)
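You can quickly verify that the tools are available (the optional ones only if you installed them):

docker --version
git --version
dagger version
task --version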
Start a new git repository
Start a new repository in GitHub and then clone it.
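For example (the repository URL is a placeholder for your own):

git clone https://github.com/<your-username>/my-firmware-example.git
cd my-firmware-example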
Add coreboot as git submodule
Add coreboot repository as a submodule:
git submodule add --depth=1 "https://review.coreboot.org/coreboot" coreboot
In this example we will work with the coreboot 4.19 release (a somewhat older release from January 2023, but it suffices for demonstration):
( cd coreboot; git fetch origin tag "4.19"; git checkout "4.19" )
Recursively initialize the submodules:
git submodule update --init --recursive --checkout
Create a coreboot configuration file
Now we need to create a configuration file for coreboot.
Either follow a coreboot guide to get a bare-bones configuration, or just copy-paste the following text into a file called seabios_defconfig.
CONFIG_CBFS_SIZE=0x00400000
CONFIG_CONSOLE_CBMEM_BUFFER_SIZE=0x20000
CONFIG_SUBSYSTEM_VENDOR_ID=0x0000
CONFIG_SUBSYSTEM_DEVICE_ID=0x0000
CONFIG_I2C_TRANSFER_TIMEOUT_US=500000
CONFIG_CONSOLE_QEMU_DEBUGCON_PORT=0x402
CONFIG_POST_IO_PORT=0x80
CONFIG_SEABIOS_DEBUG_LEVEL=-1
Create a JSON configuration file
This configuration file is for firmware-action, so that it knows what to do and where to find things. Let's call it firmware-action.json.
{
"coreboot": {
"coreboot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_4.19:main",
"repo_path": "coreboot-example/coreboot/",
"defconfig_path": "coreboot-example/coreboot_seabios_defconfig",
"output_dir": "output-coreboot-example/",
"container_output_dirs": null,
"container_output_files": [
"build/coreboot.rom",
"defconfig"
],
"blobs": {},
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
The field repo_path points to the location of our coreboot submodule, which we added in the previous step (Repository).
The field defconfig_path points to the location of coreboot's configuration file, which we created in the previous step (coreboot configuration).
Firmware-action can be used to compile other firmware too, and even to combine multiple firmware projects (to a certain degree).
For this reason the JSON configuration file is divided into categories (coreboot, edk2, etc.). Each category can contain multiple entries.
Entries can depend on each other, which allows you to combine them - for example, you can have coreboot firmware with an edk2 payload.
Get firmware-action
Firstly, you will need to install and setup Docker.
Then you can get firmware-action multiple ways:
Build from source
Git clone and build. We use Taskfile as the build system, but you can also use plain go build.
git clone https://github.com/9elements/firmware-action.git
cd firmware-action
task build-go-binary
Download executable
Download pre-compiled executable from releases.
Arch Linux
There is an AUR package available.
go install
go install -v github.com/9elements/firmware-action/cmd/firmware-action@latest
Run firmware-action locally
./firmware-action build --config=firmware-action.json --target=coreboot-example
firmware-action will first download the registry.dagger.io/engine container needed for dagger and start it.
It will then proceed to download a coreboot container [1], copy the specified files into it and start the compilation.
If the compilation is successful, a new directory output-coreboot/ will be created [2], which will contain files [3] and possibly also directories [4].
Your working directory should look something like this:
.
|-- coreboot/
| `-- ...
|-- firmware-action.json
|-- output-coreboot/
| |-- coreboot.rom
| `-- defconfig
`-- seabios_defconfig
container_output_dirs and container_output_files are lists of directories and files to be extracted from the container once the compilation has finished successfully.
These are then placed into output_dir.
[1]: The container to use is specified by sdk_url in the firmware-action configuration file.
[2]: The output directory is specified by output_dir in the firmware-action configuration file.
[3]: The output files are specified by container_output_files in the firmware-action configuration file.
[4]: The directories to output are specified by container_output_dirs in the firmware-action configuration file.
Run firmware-action in GitHub CI
Now that we have firmware-action working on the local system, let's set up CI.
---
name: coreboot build
on:
push:
permissions:
contents: read
jobs:
# Example of building coreboot
build-coreboot-example:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: 'recursive'
- name: firmware-action
uses: 9elements/firmware-action@v0.14.1
with:
          config: 'firmware-action.json'
target: 'coreboot-example'
recursive: 'false'
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: coreboot-4.19
          path: output-coreboot
retention-days: 30
Commit, push and watch. And that is it.
Now you should be able to build coreboot in CI and on your local machine.
TL;DR Usage
firmware-action was created with the intention of unifying firmware builds across environments.
As such, there are multiple ways to use it.
Local system
To get firmware-action, look into the Get firmware-action section.
Run
firmware-action build --config=<path-to-JSON-config> --target=<my-target>
Help
Usage: firmware-action --config="firmware-action.json" <command> [flags]
Utility to create firmware images for several open source firmware solutions
Flags:
-h, --help Show context-sensitive help.
--json switch to JSON stdout and stderr output
--indent enable indentation for JSON output
--debug increase verbosity
--config="firmware-action.json" Path to configuration file
Commands:
build --config="firmware-action.json" --target=STRING [flags]
Build a target defined in configuration file
generate-config --config="firmware-action.json" [flags]
Generate empty configuration file
version --config="firmware-action.json" [flags]
Print version and exit
Run "firmware-action <command> --help" for more information on a command.
Github CI
You can use firmware-action like any other action.
name: Firmware example build
jobs:
firmware_build:
runs-on: ubuntu-latest
steps:
- name: Build coreboot with firmware-action
uses: 9elements/firmware-action@main
with:
config: '<path to firmware-action JSON config>'
target: '<name of the target from JSON config>'
recursive: 'false'
Parametric builds with environment variables
To take advantage of matrix builds in GitHub, it is possible to use environment variables inside the JSON configuration file.
For example, let's create a COREBOOT_VERSION environment variable which will hold the coreboot version.
The JSON would look like this:
...
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
...
"defconfig_path": "tests/coreboot_${COREBOOT_VERSION}/seabios.defconfig",
...
YAML would look like this:
name: Firmware example build
jobs:
firmware_build:
runs-on: ubuntu-latest
strategy:
matrix:
coreboot_version: ["4.19", "4.20"]
steps:
- name: Build coreboot with firmware-action
uses: 9elements/firmware-action@main
with:
config: '<path to firmware-action JSON config>'
target: '<name of the target from JSON config>'
recursive: 'false'
env:
COREBOOT_VERSION: ${{ matrix.coreboot_version }}
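The same environment variable substitution also works locally, assuming the variable is set in your shell before firmware-action is invoked:

COREBOOT_VERSION=4.19 firmware-action build --config=<path-to-JSON-config> --target=<my-target>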
Examples
In our repository we have multiple examples (albeit rather simple ones) defined in .github/workflows/example.yml.
Coreboot
.github/workflows/example.yml:
build-coreboot:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
coreboot-version: ['4.19', '4.20.1', '4.21', '24.02']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached coreboot repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./my_super_dooper_awesome_coreboot
key: coreboot-${{ matrix.coreboot-version }}-example
- name: Clone coreboot repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --branch "${{ matrix.coreboot-version }}" --depth 1 https://review.coreboot.org/coreboot my_super_dooper_awesome_coreboot
- name: Store coreboot repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./my_super_dooper_awesome_coreboot
key: coreboot-${{ matrix.coreboot-version }}-example
- name: Move my defconfig into place (filename must not contain '.defconfig')
run: |
mv "tests/coreboot_${{ matrix.coreboot-version }}/seabios.defconfig" "seabios_defconfig"
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: |-
tests/example_config__coreboot.json
tests/example_config__uroot.json
target: 'coreboot-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
COREBOOT_VERSION: ${{ matrix.coreboot-version }}
UROOT_VERSION: "dummy"
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: coreboot-${{ matrix.coreboot-version }}-${{ matrix.arch }}
path: output-coreboot
retention-days: 14
tests/example_config__coreboot.json:
{
"coreboot": {
"coreboot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "my_super_dooper_awesome_coreboot/",
"defconfig_path": "seabios_defconfig",
"output_dir": "output-coreboot/",
"container_output_dirs": null,
"container_output_files": ["build/coreboot.rom", "defconfig"],
"blobs": {},
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Linux Kernel
.github/workflows/example.yml:
build-linux:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
linux-version: ['6.1', '6.6', '6.12']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached linux source
id: cache-repo
uses: actions/cache/restore@v4
with:
path: ./linux-${{ matrix.linux-version }}.tar.xz
key: linux-${{ matrix.linux-version }}-example
- name: Prepare linux kernel
run: |
# Download source files
wget --quiet --continue "https://cdn.kernel.org/pub/linux/kernel/v${LINUX_MAJOR_VERSION}.x/linux-${{ matrix.linux-version }}.tar.xz"
wget --quiet "https://cdn.kernel.org/pub/linux/kernel/v${LINUX_MAJOR_VERSION}.x/linux-${{ matrix.linux-version }}.tar.sign"
unxz --keep "linux-${{ matrix.linux-version }}.tar.xz" >/dev/null
# Verify GPG signature
gpg2 --locate-keys torvalds@kernel.org gregkh@kernel.org
gpg2 --verify "linux-${{ matrix.linux-version }}.tar.sign"
# Extract
tar -xvf "linux-${{ matrix.linux-version }}.tar"
env:
LINUX_MAJOR_VERSION: 6
- name: Store linux source in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./linux-${{ matrix.linux-version }}.tar.xz
key: linux-${{ matrix.linux-version }}-example
- name: Move my defconfig into place (filename must not contain '.defconfig')
run: |
mv "tests/linux_${{ matrix.linux-version }}/linux.defconfig" "ci_defconfig"
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: |-
tests/example_config__uroot.json
tests/example_config__linux.json
target: 'linux-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
LINUX_VERSION: ${{ matrix.linux-version }}
SYSTEM_ARCH: ${{ matrix.arch }}
UROOT_VERSION: "dummy"
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: linux-${{ matrix.linux-version }}-${{ matrix.arch }}
path: output-linux
retention-days: 14
tests/example_config__linux.json:
{
"linux": {
"linux-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/linux_${LINUX_VERSION}:main",
"arch": "${SYSTEM_ARCH}",
"repo_path": "linux-${LINUX_VERSION}/",
"defconfig_path": "ci_defconfig",
"output_dir": "output-linux/",
"container_output_dirs": null,
"container_output_files": ["vmlinux", "defconfig"],
"gcc_version": "",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Edk2
.github/workflows/example.yml:
build-edk2:
runs-on: ubuntu-latest
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
edk2-version: ['edk2-stable202208', 'edk2-stable202211']
# TODO
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached edk2 repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./Edk2
key: edk2-${{ matrix.edk2-version }}-example
- name: Clone edk2 repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --recurse-submodules --branch "${{ matrix.edk2-version }}" --depth 1 https://github.com/tianocore/edk2.git Edk2
- name: Prepare file with build arguments
run: |
echo "-D BOOTLOADER=COREBOOT -D TPM_ENABLE=TRUE -D NETWORK_IPXE=TRUE" > "edk2_config.cfg"
- name: Store edk2 repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./Edk2
key: edk2-${{ matrix.edk2-version }}-example
- name: Get versions of edk2
id: edk2_versions
run: |
echo "ver_current=$( echo ${{ matrix.edk2-version }} | tr -cd '0-9' )" >> "${GITHUB_OUTPUT}"
echo "ver_breaking=$( echo 'edk2-stable202305' | tr -cd '0-9' )" >> "${GITHUB_OUTPUT}"
- name: Use GCC5 for old edk2
id: gcc_toolchain
# GCC5 is deprecated since edk2-stable202305
# For more information see https://github.com/9elements/firmware-action/issues/340
run: |
if [[ ! ${{ steps.edk2_versions.outputs.ver_current }} < ${{ steps.edk2_versions.outputs.ver_breaking }} ]]; then
echo "gcc_toolchain_version=GCC" >> "${GITHUB_OUTPUT}"
else
echo "gcc_toolchain_version=GCC5" >> "${GITHUB_OUTPUT}"
fi
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__edk2.json'
target: 'edk2-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
EDK2_VERSION: ${{ matrix.edk2-version }}
GCC_TOOLCHAIN_VERSION: ${{ steps.gcc_toolchain.outputs.gcc_toolchain_version }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.edk2-version }}
path: output-edk2
retention-days: 14
tests/example_config__edk2.json:
{
"edk2": {
"edk2-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/${EDK2_VERSION}:main",
"arch": "X64",
"repo_path": "Edk2/",
"defconfig_path": "edk2_config.cfg",
"output_dir": "output-edk2/",
"container_output_dirs": ["Build/"],
"container_output_files": null,
"build_command": "source ./edksetup.sh; build -a X64 -p UefiPayloadPkg/UefiPayloadPkg.dsc -b DEBUG -t ${GCC_TOOLCHAIN_VERSION}",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Firmware Stitching
.github/workflows/example.yml:
build-stitching:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
coreboot-version: ['4.19']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached coreboot-blobs repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./stitch
key: coreboot-blobs-${{ matrix.coreboot-version }}-example
- name: Clone blobs repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --depth 1 https://review.coreboot.org/blobs stitch
- name: Store coreboot-blobs repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./stitch
key: coreboot-blobs-${{ matrix.coreboot-version }}-example
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__firmware_stitching.json'
target: 'stitching-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
COREBOOT_VERSION: ${{ matrix.coreboot-version }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: stitch-${{ matrix.coreboot-version }}-${{ matrix.arch }}
path: output-stitch
retention-days: 14
tests/example_config__firmware_stitching.json:
{
"firmware_stitching": {
"stitching-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "stitch/",
"container_output_dirs": null,
"container_output_files": ["new_descriptor.bin"],
"output_dir": "output-stitch/",
"base_file_path": "stitch/mainboard/intel/emeraldlake2/descriptor.bin",
"platform": "",
"ifdtool_entries": [
{
"path": "stitch/mainboard/intel/emeraldlake2/me.bin",
"target_region": "ME",
"optional_arguments": null
}
],
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Configuration
Example of JSON configuration file
{
"coreboot": {
"coreboot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "my_super_dooper_awesome_coreboot/",
"defconfig_path": "seabios_defconfig",
"output_dir": "output-coreboot/",
"container_output_dirs": null,
"container_output_files": [
"build/coreboot.rom",
"defconfig"
],
"blobs": {},
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"edk2": {
"edk2-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/${EDK2_VERSION}:main",
"arch": "X64",
"repo_path": "Edk2/",
"defconfig_path": "edk2_config.cfg",
"output_dir": "output-edk2/",
"container_output_dirs": [
"Build/"
],
"container_output_files": null,
"build_command": "source ./edksetup.sh; build -a X64 -p UefiPayloadPkg/UefiPayloadPkg.dsc -b DEBUG -t ${GCC_TOOLCHAIN_VERSION}",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"firmware_stitching": {
"stitching-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "stitch/",
"container_output_dirs": null,
"container_output_files": [
"new_descriptor.bin"
],
"output_dir": "output-stitch/",
"base_file_path": "stitch/mainboard/intel/emeraldlake2/descriptor.bin",
"platform": "",
"ifdtool_entries": [
{
"path": "stitch/mainboard/intel/emeraldlake2/me.bin",
"target_region": "ME",
"optional_arguments": null
}
],
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"linux": {
"linux-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/linux_${LINUX_VERSION}:main",
"arch": "${SYSTEM_ARCH}",
"repo_path": "linux-${LINUX_VERSION}/",
"defconfig_path": "ci_defconfig",
"output_dir": "output-linux/",
"container_output_dirs": null,
"container_output_files": [
"vmlinux",
"defconfig"
],
"gcc_version": "",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"u-boot": {
"u-boot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/uboot_${UBOOT_VERSION}:main",
"arch": "arm64",
"repo_path": "u-boot/",
"defconfig_path": "uboot_defconfig",
"output_dir": "output-uboot/",
"container_output_dirs": null,
"container_output_files": [
"u-boot",
"u-boot.cfg",
"u-boot.elf"
],
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"u-root": {
"u-root-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/uroot_${UROOT_VERSION}:main",
"repo_path": "u-root/",
"output_dir": "output-uroot/",
"container_output_dirs": null,
"container_output_files": [
"initramfs.cpio"
],
"build_command": "go build; GOARCH=amd64 ./u-root -defaultsh gosh -o initramfs.cpio boot coreboot-app ./cmds/core/* ./cmds/boot/*",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Multiple configuration files can be supplied to firmware-action. Dependencies also work across files.
firmware-action build --config=config-01.json --config=config-02.json ...
Beware that modules with identical names are permitted, as long as they are not in the same configuration file.
firmware-action processes the files in the order in which they were supplied, and in case of a name collision, the configuration in the last file takes precedence.
The configuration is split by type (coreboot, linux, edk2, ...).
Each type can contain any number of modules.
Each module has a name, which can be anything as long as it is unique (a unique string across all modules of all types). The example above contains, among others, the modules coreboot-example, linux-example and edk2-example.
The configuration above can be simplified to this:
/
├── coreboot/
│ └── coreboot-example
├── edk2/
│ └── edk2-example
├── firmware_stitching/
│ └── stitching-example
└── linux/
└── linux-example
Not all types need to be present or defined. If you are building coreboot and coreboot only, you can have only coreboot present.
/
└── coreboot/
└── coreboot_example
You can have multiple modules of each type, as long as their names are unique.
/
├── coreboot/
│ ├── coreboot_example
│ ├── coreboot_A
│ └── my_little_firmware
├── linux/
│ ├── linux_example
│ ├── linux_B
│ ├── asdf
│ └── asdf2
└── edk2/
├── edk2_example
└── edk2_C
Modules
Each module has sections:
depends
common
specific
// CorebootOpts is used to store all data needed to build coreboot.
type CorebootOpts struct {
// List of IDs this instance depends on
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
// Blobs
// The blobs will be copied into the container into directory:
// 3rdparty/blobs/mainboard/${CONFIG_MAINBOARD_DIR}/
// And the blobs will remain their name
// NOTE: The blobs may not be added to the ROM, depends on provided defconfig.
// Example:
// Config:
// "CONFIG_PAYLOAD_FILE": "./my-payload.bin"
// Will result in blob "my-payload.bin" at
// "3rdparty/blobs/mainboard/${CONFIG_MAINBOARD_DIR}/my-payload.bin"
Blobs map[string]string `json:"blobs"`
}
common & specific are identical in function. There is no real difference between the two; they are split only to simplify the code. They define things like the path to the source code, the version and source of the SDK to use, and so on.
depends, on the other hand, allows you to specify a dependency (or relation) between modules. For example, your coreboot uses edk2 as the payload. You can specify this dependency by listing the name of the edk2 module in the depends field of your coreboot module.
{
"coreboot": {
"coreboot-example": {
"depends": ["edk2-example"],
...
}
},
"edk2": {
"edk2-example": {
"depends": null,
...
}
}
}
With such a configuration, you can then run firmware-action recursively, and it will build all of the modules in the proper order.
./firmware-action build --config=./my-config.json --target=coreboot-example --recursive
In this case firmware-action would build edk2-example first and then coreboot-example.
By changing inputs and outputs, you can then feed the output of one module into the input of another module.
This way you can build the entire firmware stack in a single step.
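A hedged sketch of the idea (entries are abbreviated and paths are illustrative; for coreboot the payload typically enters via blobs, and the exact payload file name depends on your edk2 build output):

{
  "edk2": {
    "edk2-example": {
      "depends": null,
      "output_dir": "output-edk2/"
    }
  },
  "coreboot": {
    "coreboot-example": {
      "depends": ["edk2-example"],
      "blobs": {
        "CONFIG_PAYLOAD_FILE": "output-edk2/<path-to-payload>"
      }
    }
  }
}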
Common and Specific
To explain each and every entry in the configuration, here are snippets of the source code with comments.
In the code below, the json tag (for example json:"sdk_url") specifies what the field is called in the JSON file.
The tag validate:"required" means that the field is required and must not be empty. An empty required field will fail validation and terminate the program with an error.
The tag validate:"dirpath" means that the field must contain a valid path to a directory. The path or directory does not have to exist, but it must be a valid path. Be warned - that means the string must end with /, for example output-coreboot/.
The tag validate:"filepath" means that the field must contain a valid path to a file. The file does not have to exist.
For more details see the go-playground/validator package.
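As a minimal illustration with repo_path (validated as dirpath), the trailing slash is what matters:

"repo_path": "coreboot/"      (valid - ends with /)
"repo_path": "coreboot"       (fails dirpath validation - no trailing /)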
Common
type CommonOpts struct {
// Specifies the container toolchain tag to use when building the image.
// This has an influence on the IASL, GCC and host GCC version that is used to build
// the target. You must match the source level and sdk_version.
// Can also be a absolute or relative path to Dockerfile to build the image on the fly.
// NOTE: Updating the sdk_version might result in different binaries using the
// same source code.
// Examples:
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:main
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:latest
// https://ghcr.io/9elements/firmware-action/edk2-stable202111:latest
// file://./my-image/Dockerfile
// file://./my-image/
// file://my-image/Dockerfile
// file:///home/user/my-image/Dockerfile
// file:///home/user/my-image/
// NOTE:
// 'file://' path cannot contain '..'
// See https://github.com/orgs/9elements/packages
SdkURL string `json:"sdk_url" validate:"required"`
// Gives the (relative) path to the target (firmware) repository.
// If the current repository contains the selected target, specify: '.'
// Otherwise the path should point to the target (firmware) repository submodule that
// had been previously checked out.
RepoPath string `json:"repo_path" validate:"required,dirpath"`
// Specifies the (relative) paths to directories where are produced files (inside Container).
ContainerOutputDirs []string `json:"container_output_dirs" validate:"dive,filepath|dirpath"`
// Specifies the (relative) paths to produced files (inside Container).
ContainerOutputFiles []string `json:"container_output_files" validate:"dive,filepath|dirpath"`
// Specifies the (relative) path to directory into which place the produced files.
// Directories listed in ContainerOutputDirs and files listed in ContainerOutputFiles
// will be exported here.
// Example:
// Following setting:
// ContainerOutputDirs = []string{"Build/"}
// ContainerOutputFiles = []string{"coreboot.rom", "defconfig"}
// OutputDir = "myOutput"
// Will result in following structure being copied out of the container:
// myOutput/
// ├── Build/
// ├── coreboot.rom
// └── defconfig
OutputDir string `json:"output_dir" validate:"required,filepath|dirpath"`
// Specifies the (relative) paths to directories which should be copied into the container.
InputDirs []string `json:"input_dirs" validate:"dive,filepath|dirpath"`
// Specifies the (relative) paths to file which should be copied into the container.
InputFiles []string `json:"input_files" validate:"dive,filepath|dirpath"`
// Specifies the path to directory where to place input files and directories inside container.
// Directories listed in ContainerInputDirs and files listed in ContainerInputFiles
// will be copied there.
// Example:
// Following setting:
// InputDirs = []string{"config-files/"}
// InputFiles = []string{"README.md", "Taskfile.yml"}
// ContainerInputDir = "myInput"
// Will result in following structure being copied into the container:
// myInput/
// ├── config-files/
// ├── README.md
// └── Taskfile.yml
ContainerInputDir string `json:"container_input_dir" validate:"filepath|dirpath"`
// Overview:
//
// | Configuration option | Host side | Direction | Container side |
// |:-----------------------|:-----------------------|:--------------------:|:-------------------------------|
// | RepoPath | $RepoPath | Host --> Container | $(pwd) |
// | | | | |
// | OutputDir | $(pwd)/$OutputDir | Host <-- Container | N/A |
// | ContainerOutputDirs | $(pwd)/$OutputDir/... | Host <-- Container | $ContainerOutputDirs |
// | ContainerOutputFiles | $(pwd)/$OutputDir/... | Host <-- Container | $ContainerOutputFiles |
// | | | | |
// | ContainerInputDir | N/A | Host --> Container | $(pwd)/$ContainerInputDir |
// | InputDirs | $InputDirs | Host --> Container | $(pwd)/$ContainerInputDir/... |
// | InputFiles | $InputFiles | Host --> Container | $(pwd)/$ContainerInputDir/... |
}
Specific / coreboot
// CorebootOpts is used to store all data needed to build coreboot.
type CorebootOpts struct {
// List of IDs this instance depends on
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
// Blobs
// The blobs will be copied into the container into directory:
// 3rdparty/blobs/mainboard/${CONFIG_MAINBOARD_DIR}/
// And the blobs will remain their name
// NOTE: The blobs may not be added to the ROM, depends on provided defconfig.
// Example:
// Config:
// "CONFIG_PAYLOAD_FILE": "./my-payload.bin"
// Will result in blob "my-payload.bin" at
// "3rdparty/blobs/mainboard/${CONFIG_MAINBOARD_DIR}/my-payload.bin"
Blobs map[string]string `json:"blobs"`
}
Specific / Linux
// LinuxOpts is used to store all data needed to build linux
type LinuxOpts struct {
// List of IDs this instance depends on
// Example: [ "MyLittleCoreboot", "MyLittleEdk2"]
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Specifies target architecture, such as 'x86' or 'arm64'.
// Supported options:
// - 'i386'
// - 'amd64'
// - 'arm'
// - 'arm64'
Arch string `json:"arch"`
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
// Linux specific options
LinuxSpecific
}
// LinuxSpecific is used to store data specific to linux
type LinuxSpecific struct {
// TODO: either use or remove
GccVersion string `json:"gcc_version"`
}
Specific / Edk2
// Edk2Opts is used to store all data needed to build edk2.
type Edk2Opts struct {
// List of IDs this instance depends on
// Example: [ "MyLittleCoreboot", "MyLittleLinux"]
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Specifies target architecture, such as 'x86' or 'arm64'. Currently unused for coreboot.
// Supported options:
// - 'AARCH64'
// - 'ARM'
// - 'IA32'
// - 'IA32X64'
// - 'X64'
Arch string `json:"arch"`
// Gives the (relative) path to the defconfig that should be used to build the target.
// For EDK2 this is a one-line file containing the build arguments such as
// '-D BOOTLOADER=COREBOOT -D TPM_ENABLE=TRUE -D NETWORK_IPXE=TRUE'.
DefconfigPath string `json:"defconfig_path" validate:"filepath"`
	// Edk2 specific options
Edk2Specific `validate:"required"`
}
// Edk2Specific is used to store data specific to edk2.
//
// simplified because of issue #92
type Edk2Specific struct {
// Specifies which build command to use
	// GCC version is exposed in the container as USE_GCC_VERSION environment variable
// Examples:
// "source ./edksetup.sh; build -t GCC5 -a IA32 -p UefiPayloadPkg/UefiPayloadPkg.dsc"
// "python UefiPayloadPkg/UniversalPayloadBuild.py"
// "Intel/AlderLakeFspPkg/BuildFv.sh"
BuildCommand string `json:"build_command" validate:"required"`
}
Specific / Firmware stitching
// FirmwareStitchingOpts is used to store all data needed to stitch firmware
type FirmwareStitchingOpts struct {
// List of IDs this instance depends on
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// BaseFile into which inject files.
// !!! Must contain IFD !!!
// Examples:
// - coreboot.rom
// - ifd.bin
BaseFilePath string `json:"base_file_path" validate:"required,filepath"`
// Platform - passed to all `ifdtool` calls with `--platform`
Platform string `json:"platform"`
// List of instructions for ifdtool
IfdtoolEntries []IfdtoolEntry `json:"ifdtool_entries"`
// List of instructions for cbfstool
// TODO ???
}
// IfdtoolEntry is for injecting a file at `path` into region `TargetRegion`
type IfdtoolEntry struct {
// Gives the (relative) path to the binary blob
Path string `json:"path" validate:"required,filepath"`
// Region where to inject the file
// For supported options see `ifdtool --help`
TargetRegion string `json:"target_region" validate:"required"`
// Additional (optional) arguments and flags
// For example:
// `--platform adl`
// For supported options see `ifdtool --help`
OptionalArguments []string `json:"optional_arguments"`
// Ignore entry if the file is missing
IgnoreIfMissing bool `json:"ignore_if_missing" type:"boolean"`
// For internal use only - whether or not the blob should be injected
	// First it is checked whether the blob file exists; if not, and if `IgnoreIfMissing` is set to `true`,
	// then `Skip` is set to `true` to remove the need for additional repetitive checks later in the program
Skip bool
}
Specific / u-root
// URootOpts is used to store all data needed to build u-root
type URootOpts struct {
// List of IDs this instance depends on
// Example: [ "MyLittleCoreboot", "MyLittleEdk2"]
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// u-root specific options
URootSpecific
}
type URootSpecific struct {
// Specifies build command to use
BuildCommand string `json:"build_command" validate:"required"`
}
Specific / Universal module
// UniversalOpts is used to store all data needed to run universal commands
type UniversalOpts struct {
// List of IDs this instance depends on
// Example: [ "MyLittleCoreboot", "MyLittleEdk2"]
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Universal specific options
UniversalSpecific
}
type UniversalSpecific struct {
// Specifies build commands to execute inside container
BuildCommands []string `json:"build_commands" validate:"required"`
}
Specific / u-boot module
// UBootOpts is used to store all data needed to build u-boot
type UBootOpts struct {
// List of IDs this instance depends on
// Example: [ "MyLittleCoreboot", "MyLittleEdk2"]
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Specifies target architecture, such as 'x86' or 'arm64'
Arch string `json:"arch"`
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
}
Troubleshooting common problems
Many firmware-action errors and warnings come with a suggestion on how to fix them.
Other than that, here are some common problems and solutions.
The first thing to do when troubleshooting is to look through the output for errors and warnings. Many of these messages come with a suggestion containing instructions on possible solutions.
For example a warning message:
[WARN ] Git submodule seems to be uninitialized
- time: 2024-12-02T12:42:33.31416978+01:00
- suggestion: run 'git submodule update --depth 0 --init --recursive --checkout'
- offending_submodule: coreboot-linuxboot-example/linux
- origin of this message: main.run
Missing submodules / missing files
The problem can manifest in multiple ways, most commonly with error messages about missing files.
make: *** BaseTools: No such file or directory. Stop.
The solution is to initialize all git submodules:
git submodule update --depth 1 --init --recursive --checkout
Coreboot blob not found
Blobs are copied into the container separately from input_files and input_dirs; the paths should point to files on your host.
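For example (the blob key and path are illustrative), the value is a path on your host:

"blobs": {
  "CONFIG_PAYLOAD_FILE": "payloads/my-payload.bin"
}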
Dagger problems
To troubleshoot dagger, please see dagger documentation.
Tips and Tricks
Python2 for building Intel FSP
Intel FSP can be built in the EDK2 containers. However, the containers often have Python3 as the default.
Most EDK2 containers have Python2 installed and contain the script switch-to-python2 (/bin/switch-to-python2), which lets you easily switch to Python2 as the default.
To see which Python versions are installed and which one is used as the default, look into our compose.yaml. Specifically, look at the edk2 containers and their arguments PYTHON_PACKAGES (which Python versions are installed) and PYTHON_VERSION (which Python version is the default).
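A hedged sketch of how this could be used from a firmware-action configuration, assuming your FSP build script expects Python2 (the build script path is taken from the build_command examples above and will differ for your project):

"build_command": "switch-to-python2; Intel/AlderLakeFspPkg/BuildFv.sh"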
Features
- Build Docker container on the fly
- Environment variables in JSON configuration
- Interactive mode
- Offline usage
- Recursive builds
Build Docker container on the fly
As already mentioned in the Configuration/Common section, firmware-action can build a Docker container on the fly when provided with a Dockerfile.
The sdk_url field in the configuration file accepts both a URL and a file path. If a file path is provided, the container will be built and used (subsequent runs will not rebuild the container unless changes were made to the Dockerfile).
The file path can be an absolute or relative path to the Dockerfile (or to the directory in which the Dockerfile is located) to build the image on the fly.
// Examples:
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:main
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:latest
// https://ghcr.io/9elements/firmware-action/edk2-stable202111:latest
// file://./my-image/Dockerfile
// file://./my-image/
// file://my-image/Dockerfile
// file:///home/user/my-image/Dockerfile
// file:///home/user/my-image/
The Docker engine assumes a single Dockerfile per directory, hence it requires the path to the parent directory in which the Dockerfile resides (not to the file itself). For user comfort, firmware-action accepts both the path to the parent directory and the path to the file.
If the path contains Dockerfile as its last element, it will be removed before being passed over to the Docker engine.
This means that if the user provides file:///home/user/my-image/Dockerfile, the Docker engine will receive file:///home/user/my-image/.
Interactive debugging
While I was playing around with firmware-action, I found early on that debugging what is going on inside the Docker container is a rather lengthy and annoying process. This was the moment when the idea of some interactive option was born.
Dropping the SSH feature in favor of Dagger build-in debugging
Since v0.12, Dagger supports new built-in interactive debugging.
We are already planning to rewrite firmware-action to use this new feature instead of the ssh solution we are currently using. For more details see issue 269.
UPDATE: It is now possible to use the new and shiny dagger feature for interactive debugging! As a result, we have dropped the SSH feature.
Related:
Supplementary dagger documentation:
- Blog post Dagger 0.12: interactive debugging
- Documentation for Interactive Debugging
- Documentation for Custom applications
- Documentation for Interactive Terminal
To leverage the use of interactive debugging, you have to install dagger CLI.
Then, when using firmware-action, simply prepend the command with dagger run --interactive.
Instead of:
firmware-action build --config=firmware-action.json --target=coreboot-example
Execute this:
dagger run --interactive firmware-action build --config=firmware-action.json --target=coreboot-example
If you are using Taskfile to abstract away some of the complexity that comes with larger projects, simply prepend the whole Taskfile command.
Instead of:
task build:coreboot-example
Execute this:
dagger run --interactive task build:coreboot-example
On build failure you will be dropped into the container and can debug the problem.
To exit the container, run the exit command or press CTRL+D.
Offline usage
firmware-action uses dagger / Docker under the hood. As such, the configuration contains the entry sdk_url, which specifies the Docker image / container to use.
However, with such a configuration firmware-action (or rather dagger) will always connect to the internet and download the manifest to see if a new container needs to be downloaded. This applies to all tags (main, latest, v0.8.0, and so on).
If you need to use firmware-action offline, you first have to acquire the container, either by running firmware-action at least once online, or by other means provided by Docker.
Then you need to change the firmware-action configuration to include the image reference (digest hash).
"sdk_url": "http://ghcr.io/9elements/firmware-action/coreboot_4.19:main@sha256:25b4f859e26f84a276fe0c4395a4f0c713f5b564679fbff51a621903712a695b"
The digest hash can be found in the container registry. For firmware-action containers see GitHub.
It is also displayed as an INFO message near the start every time firmware-action is executed:
[INFO ] Container information
- time: 2024-12-01T12:09:43.62620859+01:00
- Image reference: ghcr.io/9elements/firmware-action/coreboot_4.19:main@sha256:25b4f859e26f84a276fe0c4395a4f0c713f5b564679fbff51a621903712a695b
- origin of this message: container.Setup
Simply copy-paste the digest (or image reference) into your configuration file, and firmware-action will not connect to the internet to fetch a container if a matching one is already present.
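If the container is already present locally, one way to look up its digest (plain Docker, not specific to firmware-action) is:

docker images --digests ghcr.io/9elements/firmware-action/coreboot_4.19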
Migration guide from v0.13.x to v0.14.0
The handling of coreboot blobs has been refactored.
coreboot can have far more blobs than we supported, and the setup we had would not scale. We removed all of the hard-coded entries and replaced them with a much more flexible setup where the user has to define a key-value map for the blobs.
The old way of defining blobs:
"blobs": {
"payload_file_path": "./my-payload.bin"
}
The new way:
"blobs": {
"CONFIG_PAYLOAD_FILE": "./my-payload.bin"
}
We have made a script that will allow you to migrate:
#!/usr/bin/env bash
set -Eeuo pipefail
export CONFIG_FILE="firmware-action.json"
sed -i 's/payload_file_path/CONFIG_PAYLOAD_FILE/g' "${CONFIG_FILE}"
sed -i 's/intel_ifd_path/CONFIG_IFD_BIN_PATH/g' "${CONFIG_FILE}"
sed -i 's/intel_me_path/CONFIG_ME_BIN_PATH/g' "${CONFIG_FILE}"
sed -i 's/intel_gbe_path/CONFIG_GBE_BIN_PATH/g' "${CONFIG_FILE}"
sed -i 's/intel_10gbe0_path/CONFIG_10GBE_0_BIN_PATH/g' "${CONFIG_FILE}"
sed -i 's/fsp_binary_path/CONFIG_FSP_FD_PATH/g' "${CONFIG_FILE}"
sed -i 's/fsp_header_path/CONFIG_FSP_HEADER_PATH/g' "${CONFIG_FILE}"
sed -i 's/vbt_path/CONFIG_INTEL_GMA_VBT_FILE/g' "${CONFIG_FILE}"
sed -i 's/ec_path/CONFIG_EC_BIN_PATH/g' "${CONFIG_FILE}"
Docker containers
Docker is used to build the firmware stacks. To do this efficiently, purpose-specific Docker containers are pre-built and published as packages in the GitHub repository.
However, there was a problem: too many Dockerfiles with practically identical content, differing only in the versions of the software installed inside.
To simplify this, we needed some master configuration on top of our Dockerfiles. Instead of making up a custom configuration solution, we decided to reuse the existing, well-defined docker-compose YAML config structure, with a custom parser (because there is no existing parser out there).
Docker-compose
The compose file is not currently used by actual docker-compose; it is manually parsed and then fed to dagger.
dagger does not support docker-compose at the time of writing.
There is also no existing docker-compose parser out there that we could use as an off-the-shelf solution.
The custom parser implements only a limited feature set of the compose-file spec, just the bare minimum needed to build the containers:
- services
The other top-level sections (networks, volumes, configs, secrets) are not implemented.
This way, we can have a single parametric Dockerfile for each item (coreboot, linux, edk2, ...) and introduce variation with a scalable and maintainable single-file config.
Example of a compose.yaml file to build 2 versions of the coreboot Docker image:
services:
coreboot_4.19:
build:
context: coreboot
args:
- COREBOOT_VERSION=4.19
coreboot_4.20:
build:
context: coreboot
args:
- COREBOOT_VERSION=4.20
Multi-stage builds
We use multi-stage builds to minimize the final container / image.
Environment variables
In the Dockerfiles, we rely heavily on environment variables and build arguments.
This is what allows them to be parametric.
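A minimal, hypothetical sketch of the pattern (not the actual Dockerfile from this repository):

FROM debian:stable AS build
# Version is passed in as a build argument, e.g. from compose.yaml
ARG COREBOOT_VERSION
# Fetch the matching coreboot release
RUN git clone --branch "${COREBOOT_VERSION}" --depth 1 https://review.coreboot.org/coreboot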
Testing
Containers are also tested to verify that they were built successfully.
The tests are rather simple, consisting solely of happy-path tests. This might change in the future.
The test is done by executing a shell script which builds firmware in some hello-world example configuration. Nothing too fancy.
The path to said shell script is stored in the environment variable VERIFICATION_TEST.
Example of coreboot test
#!/usr/bin/env bash
set -Eeuo pipefail
# Environment variables
export BUILD_TIMELESS=1
declare -a PAYLOADS=(
"seabios"
"seabios_coreinfo"
"seabios_nvramcui"
)
# Clone repo
git clone --branch "${VERIFICATION_TEST_COREBOOT_VERSION}" --depth 1 https://review.coreboot.org/coreboot
cd coreboot
# Make
for PAYLOAD in "${PAYLOADS[@]}"; do
echo "TESTING: ${PAYLOAD}"
make clean
cp "/tests/coreboot_${VERIFICATION_TEST_COREBOOT_VERSION}/${PAYLOAD}.defconfig" "./${PAYLOAD}.defconfig"
make defconfig KBUILD_DEFCONFIG="./${PAYLOAD}.defconfig"
make -j "$(nproc)" || make
done
In addition, there might be VERIFICATION_TEST_* variables. These are used inside the test script and are rather use-case specific; they are often used to store which version of the firmware is being tested.
Adding new container
- (optional) Add a new Dockerfile into the docker directory
- Add a new entry in docker/compose.yaml
- Add a new entry into the strategy matrix in .github/workflows/docker-build-and-test.yml
- (optional) Add a new strategy matrix in the .github/workflows/example.yml examples
  - this requires adding a new configuration file in the tests directory
- Add an entry into the list of containers in README.md
Discontinuing container
- Update the entry in the list of containers in README.md
- Add a new regex entry into the Setup() function in cmd/firmware-action/container/container.go to warn about discontinued containers