Firmware-Action
Firmware-Action is a tool to simplify building firmware. Think of it as Makefile or Taskfile, but specifically for firmware. The tool itself is written entirely in Go.
The motivation behind its creation is to unify firmware builds across development environments. The goal of Firmware-Action is to run on your local machine but also in your CI/CD pipeline, with the same configuration producing the same output.
There is also an independent Python tool to prepare Docker containers for use with Firmware-Action. These are hosted on GitHub and are freely available (no need to build any Docker containers yourself).
There is also a GitHub Action integration, allowing you to use Firmware-Action in your GitHub CI/CD.
At the moment Firmware-Action supports:
- coreboot
- linux
- tianocore / edk2
- firmware stitching (populating IFD regions with binaries)
- u-root
This list should expand in the future (see issue 56).
Firmware-Action uses Dagger under the hood, which makes it a rather versatile tool. When Firmware-Action is used, it automatically downloads the user-specified Docker containers in which it will attempt to build the firmware.
If your firmware consists of multiple components, such as coreboot with Linux as the payload, you can simply define each as a module and declare the dependency between them. Each module is built separately, but can use the output of another module as input. In the coreboot + Linux example, you can call Firmware-Action to build coreboot recursively, which will also build Linux thanks to the dependency definition. This way, you can build complex firmware stacks in a single call.
Documentation is hosted on GitHub Pages.
CONTRIBUTING
We use GitHub to host code, to track issues and feature requests, as well as accept pull requests.
Issues
As usual, check if the issue already exists in the GitHub issue tracker before opening a new one.
Commit messages
Please conform to the Conventional Commits specification.
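As a quick illustration of the `type(scope): subject` shape that the specification prescribes, the snippet below creates a throwaway repository and makes an empty commit with an example message (the message itself is hypothetical, not from this project's history):

```shell
# Demonstrate the Conventional Commits message shape in a throwaway repository.
# The "feat(coreboot): ..." subject is purely illustrative.
tmpdir="$(mktemp -d)"
cd "$tmpdir"
git init --quiet
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty --quiet -m "feat(coreboot): add blob stitching example"
git log -1 --format=%s
```

Common types include `feat`, `fix`, `docs`, `refactor`, `test` and `ci`; the scope in parentheses is optional.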
Used tools
Linters
MegaLinter
MegaLinter is used in the GitHub workflow to check all of the source code.
It can be run locally, but it is a pain. See MegaLinter Runner.
commitlint (soon to be deprecated)
commitlint checks if the commit message conforms to the Conventional Commits specification.
bugbundle/commits (coming soon)
bugbundle/commits checks if the commit message conforms to the Conventional Commits specification.
To run the checks locally, see Tooling for Conventional Commits.
editorconfig
Please make sure to conform to the provided editorconfig.
Get started
This guide provides step-by-step instructions on how to get started with firmware-action, demonstrated on a coreboot example.
In this guide we will:
- start a new repository
- in this guide it will be hosted on GitHub
- build a simple coreboot for QEMU
- build coreboot both in a GitHub action and locally
The code from this example is available in firmware-action-example.
Prerequisites
Start a new git repository
Start a new repository on GitHub and then clone it.
Add coreboot as git submodule
Add the coreboot repository as a submodule:
git submodule add --depth=1 "https://review.coreboot.org/coreboot" coreboot
In this example we will work with the coreboot 4.19 release (a slightly older release from January 2023, but sufficient for demonstration):
( cd coreboot; git fetch origin tag "4.19"; git checkout "4.19" )
Recursively initialize the submodules:
git submodule update --init --recursive
Create a coreboot configuration file
Now we need to create a configuration file for coreboot. Either follow a coreboot guide to create a bare-bones configuration, or copy a known-good configuration into a file named seabios_defconfig.
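If you just want something that boots in QEMU, a bare-bones configuration might look like the following. The Kconfig symbols are the usual coreboot options for the QEMU i440FX board with a SeaBIOS payload, but treat this as an illustrative sketch rather than the exact file from the example repository:

```shell
# Write a minimal, illustrative coreboot defconfig for QEMU (i440FX) with
# SeaBIOS as payload. Symbols are standard coreboot Kconfig options, but this
# is a sketch, not necessarily the file used by firmware-action-example.
cat > seabios_defconfig <<'EOF'
CONFIG_VENDOR_EMULATION=y
CONFIG_BOARD_EMULATION_QEMU_X86_I440FX=y
CONFIG_PAYLOAD_SEABIOS=y
EOF
```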
Create a JSON configuration file
This configuration file is for firmware-action, so that it knows what to do and where to find things. Let's call it firmware-action.json.
The repo_path field points to the location of the coreboot submodule we added in the previous step Repository.
The defconfig_path field points to the location of coreboot's configuration file which we created in the previous step coreboot configuration.
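A minimal configuration for this guide, modeled on the full example shown later in this document, could look roughly like this (repo_path points at the coreboot submodule and defconfig_path at the file created in the previous step; treat it as a sketch):

```shell
# Sketch of firmware-action.json for this guide, modeled on the example
# configuration shown later in this document.
cat > firmware-action.json <<'EOF'
{
  "coreboot": {
    "coreboot-example": {
      "depends": null,
      "sdk_url": "ghcr.io/9elements/firmware-action/coreboot_4.19:main",
      "repo_path": "coreboot/",
      "defconfig_path": "seabios_defconfig",
      "output_dir": "output-coreboot/",
      "container_output_dirs": null,
      "container_output_files": ["build/coreboot.rom", "defconfig"],
      "container_input_dir": "inputs/",
      "input_dirs": null,
      "input_files": null
    }
  }
}
EOF
```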
Firmware-Action can be used to compile other firmware too, and can even combine multiple firmware projects (to a certain degree).
For this reason the JSON configuration file is divided into categories (coreboot, edk2, etc.). Each category can contain multiple entries.
Entries can depend on each other, which allows you to combine them - for example, you can have coreboot firmware with an edk2 payload.
Get firmware-action
You can get Firmware-Action in multiple ways:
- clone the repository and build the executable yourself
- download a pre-compiled executable from releases
- install the Arch Linux AUR package
Run firmware-action locally
./firmware-action build --config=firmware-action.json --target=coreboot-example
firmware-action will first download the registry.dagger.io/engine container needed for Dagger and start it.
It will then download a coreboot container [1], copy the specified files into it, and start the compilation.
If compilation succeeds, a new directory output-coreboot/ will be created [2], which will contain files [3] and possibly also directories [4].
Your working directory should look something like this:
.
|-- coreboot/
| `-- ...
|-- firmware-action.json
|-- output-coreboot/
| |-- coreboot.rom
| `-- defconfig
`-- seabios_defconfig
container_output_dirs and container_output_files are lists of directories and files to be extracted from the container once compilation finishes successfully.
These are then placed into output_dir.
[1]: The container used is specified by sdk_url in the firmware-action configuration file.
[2]: The output directory is specified by output_dir in the firmware-action configuration file.
[3]: Output files are specified by container_output_files in the firmware-action configuration file.
[4]: Directories to output are specified by container_output_dirs in the firmware-action configuration file.
Run firmware-action in GitHub CI
Now that we have firmware-action working on the local system, let's set up CI.
{{#include ../../firmware-action-example/.github/workflows/example.yml}}
Commit, push and watch. And that is it.
Now you should be able to build coreboot in CI and on your local machine.
Interactive mode
While playing around with firmware-action, I found early on that debugging what is going on inside the Docker container is a rather lengthy and annoying process. This was the moment the idea of an interactive option was born.
Issue #269
Since v0.12, Dagger supports built-in interactive debugging.
We are already planning to rewrite firmware-action to use this new feature instead of the ssh solution we currently use. For more details see issue 269.
On build failure, firmware-action opens an ssh server in the container and lets you connect into it to debug the problem. To enable this feature, pass the --interactive argument. You can then ssh into the container with a randomly generated password.
The container will keep running until you press the ENTER key in the terminal with firmware-action running.
The container is launched in interactive mode in the state just before the failing command was started.
This reverting behavior is a technical necessity.
Containers in Dagger (at the time of writing) are single-use, non-interactive containers. Dagger maintains a pipeline (a command queue for each container) which starts executing only when specific functions such as Sync() are called, triggering evaluation of the pipeline inside the container.
To start an ssh service and wait for the user to log in, the container has to be converted into a service, which also forces evaluation of the pipeline. If any of the queued commands failed, starting the service container would fail too.
As a workaround, when evaluation of the pipeline fails, the container from the previous step is converted into a service container with everything as it was just before the failing command was executed. In essence, when you connect, you end up in a pristine environment.
// Convert container to service with exposed SSH port
const sshPort = 22
sshServiceDoc := container.WithExposedPort(sshPort).AsService()

// Expose the SSH server to the host
sshServiceTunnel, err := client.Host().Tunnel(sshServiceDoc).Start(ctx)
if err != nil {
    fmt.Println("Problem getting tunnel up")
    return err
}
defer sshServiceTunnel.Stop(ctx) // nolint:errcheck
TL;DR: Usage
firmware-action was created with the intention of unifying firmware builds across environments.
As such, there are multiple ways to use it.
Local system
To get firmware-action, there are a few options:
- download a compiled binary executable from releases
- build from source with Taskfile
- install the firmware-action Arch Linux AUR package
Build from source
git clone https://github.com/9elements/firmware-action.git
cd firmware-action
task build-go-binary
Arch Linux AUR
For Arch Linux there is also an AUR package available.
Run
./firmware-action build --config=<path-to-JSON-config> --target=<my-target>
Help
Usage: firmware-action --config="firmware-action.json" <command> [flags]
Utility to create firmware images for several open source firmware solutions
Flags:
-h, --help Show context-sensitive help.
--json switch to JSON stdout and stderr output
--indent enable indentation for JSON output
--debug increase verbosity
--config="firmware-action.json" Path to configuration file
Commands:
build --config="firmware-action.json" --target=STRING [flags]
Build a target defined in configuration file
generate-config --config="firmware-action.json" [flags]
Generate empty configuration file
version --config="firmware-action.json" [flags]
Print version and exit
Run "firmware-action <command> --help" for more information on a command.
GitHub CI
You can use firmware-action like any other GitHub action.
name: Firmware example build
jobs:
firmware_build:
runs-on: ubuntu-latest
steps:
- name: Build coreboot with firmware-action
uses: 9elements/firmware-action@main
with:
config: '<path to firmware-action JSON config>'
target: '<name of the target from JSON config>'
recursive: 'false'
Parametric builds with environment variables
To take advantage of matrix builds in GitHub, you can use environment variables inside the JSON configuration file.
For example, let's create a COREBOOT_VERSION environment variable which will hold the version of coreboot.
The JSON would look like this:
...
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
...
"defconfig_path": "tests/coreboot_${COREBOOT_VERSION}/seabios.defconfig",
...
The YAML would look like this:
name: Firmware example build
jobs:
firmware_build:
runs-on: ubuntu-latest
strategy:
matrix:
coreboot_version: ["4.19", "4.20"]
steps:
- name: Build coreboot with firmware-action
uses: 9elements/firmware-action@main
with:
config: '<path to firmware-action JSON config>'
target: '<name of the target from JSON config>'
recursive: 'false'
env:
COREBOOT_VERSION: ${{ matrix.coreboot_version }}
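The substitution itself is ordinary environment-variable expansion; the effect of the `${COREBOOT_VERSION}` placeholder from the JSON above can be shown with plain shell expansion:

```shell
# Show what "${COREBOOT_VERSION}" in the config resolves to once the variable
# is set - the same expansion firmware-action performs when reading the file.
export COREBOOT_VERSION="4.19"
sdk_url="ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main"
echo "${sdk_url}"
# prints: ghcr.io/9elements/firmware-action/coreboot_4.19:main
```

Locally you get the same parametrization by exporting the variable before running firmware-action.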
Examples
In our repository we have multiple examples (though rather simple ones) defined in .github/workflows/example.yml.
Coreboot
.github/workflows/example.yml:
build-coreboot:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
coreboot-version: ['4.19', '4.20.1', '4.21', '24.02']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached coreboot repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./my_super_dooper_awesome_coreboot
key: coreboot-${{ matrix.coreboot-version }}
- name: Clone coreboot repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --branch "${{ matrix.coreboot-version }}" --depth 1 https://review.coreboot.org/coreboot my_super_dooper_awesome_coreboot
- name: Store coreboot repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./my_super_dooper_awesome_coreboot
key: coreboot-${{ matrix.coreboot-version }}
- name: Move my defconfig into place (filename must not contain '.defconfig')
run: |
mv "tests/coreboot_${{ matrix.coreboot-version }}/seabios.defconfig" "seabios_defconfig"
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__coreboot.json'
target: 'coreboot-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
COREBOOT_VERSION: ${{ matrix.coreboot-version }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: coreboot-${{ matrix.coreboot-version }}-${{ matrix.arch }}
path: output-coreboot
retention-days: 14
tests/example_config__coreboot.json:
{
"coreboot": {
"coreboot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "my_super_dooper_awesome_coreboot/",
"defconfig_path": "seabios_defconfig",
"output_dir": "output-coreboot/",
"container_output_dirs": null,
"container_output_files": ["build/coreboot.rom", "defconfig"],
"blobs": {
"payload_file_path": "",
"intel_ifd_path": "",
"intel_me_path": "",
"intel_gbe_path": "",
"fsp_binary_path": "",
"fsp_header_path": "",
"vbt_path": "",
"ec_path": ""
},
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Linux Kernel
.github/workflows/example.yml:
build-linux:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
linux-version: ['6.1.45', '6.1.111', '6.6.52', '6.9.9', '6.11']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached linux source
id: cache-repo
uses: actions/cache/restore@v4
with:
path: ./linux-${{ matrix.linux-version }}.tar.xz
key: linux-${{ matrix.linux-version }}
- name: Prepare linux kernel
run: |
# Download source files
wget --quiet --continue "https://cdn.kernel.org/pub/linux/kernel/v${LINUX_MAJOR_VERSION}.x/linux-${{ matrix.linux-version }}.tar.xz"
wget --quiet "https://cdn.kernel.org/pub/linux/kernel/v${LINUX_MAJOR_VERSION}.x/linux-${{ matrix.linux-version }}.tar.sign"
unxz --keep "linux-${{ matrix.linux-version }}.tar.xz" >/dev/null
# Verify GPG signature
gpg2 --locate-keys torvalds@kernel.org gregkh@kernel.org
gpg2 --verify "linux-${{ matrix.linux-version }}.tar.sign"
# Extract
tar -xvf "linux-${{ matrix.linux-version }}.tar"
env:
LINUX_MAJOR_VERSION: 6
- name: Store linux source in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./linux-${{ matrix.linux-version }}.tar.xz
key: linux-${{ matrix.linux-version }}
- name: Move my defconfig into place (filename must not contain '.defconfig')
run: |
mv "tests/linux_${{ matrix.linux-version }}/linux.defconfig" "ci_defconfig"
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__linux.json'
target: 'linux-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
LINUX_VERSION: ${{ matrix.linux-version }}
SYSTEM_ARCH: ${{ matrix.arch }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: linux-${{ matrix.linux-version }}-${{ matrix.arch }}
path: output-linux
retention-days: 14
tests/example_config__linux.json:
{
"linux": {
"linux-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/linux_${LINUX_VERSION}:main",
"arch": "${SYSTEM_ARCH}",
"repo_path": "linux-${LINUX_VERSION}/",
"defconfig_path": "ci_defconfig",
"output_dir": "output-linux/",
"container_output_dirs": null,
"container_output_files": ["vmlinux", "defconfig"],
"gcc_version": "",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Edk2
.github/workflows/example.yml:
build-edk2:
runs-on: ubuntu-latest
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
edk2-version: ['edk2-stable202208', 'edk2-stable202211']
# TODO
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached edk2 repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./Edk2
key: edk2-${{ matrix.edk2-version }}
- name: Clone edk2 repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --recurse-submodules --branch "${{ matrix.edk2-version }}" --depth 1 https://github.com/tianocore/edk2.git Edk2
- name: Prepare file with build arguments
run: |
echo "-D BOOTLOADER=COREBOOT -D TPM_ENABLE=TRUE -D NETWORK_IPXE=TRUE" > "edk2_config.cfg"
- name: Store edk2 repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./Edk2
key: edk2-${{ matrix.edk2-version }}
- name: Get versions of edk2
id: edk2_versions
run: |
echo "ver_current=$( echo ${{ matrix.edk2-version }} | tr -cd '0-9' )" >> "${GITHUB_OUTPUT}"
echo "ver_breaking=$( echo 'edk2-stable202305' | tr -cd '0-9' )" >> "${GITHUB_OUTPUT}"
- name: Use GCC5 for old edk2
id: gcc_toolchain
# GCC5 is deprecated since edk2-stable202305
# For more information see https://github.com/9elements/firmware-action/issues/340
run: |
if [[ ! ${{ steps.edk2_versions.outputs.ver_current }} < ${{ steps.edk2_versions.outputs.ver_breaking }} ]]; then
echo "gcc_toolchain_version=GCC" >> "${GITHUB_OUTPUT}"
else
echo "gcc_toolchain_version=GCC5" >> "${GITHUB_OUTPUT}"
fi
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__edk2.json'
target: 'edk2-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
EDK2_VERSION: ${{ matrix.edk2-version }}
GCC_TOOLCHAIN_VERSION: ${{ steps.gcc_toolchain.outputs.gcc_toolchain_version }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.edk2-version }}
path: output-edk2
retention-days: 14
tests/example_config__edk2.json:
{
"edk2": {
"edk2-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/${EDK2_VERSION}:main",
"arch": "X64",
"repo_path": "Edk2/",
"defconfig_path": "edk2_config.cfg",
"output_dir": "output-edk2/",
"container_output_dirs": ["Build/"],
"container_output_files": null,
"build_command": "source ./edksetup.sh; build -a X64 -p UefiPayloadPkg/UefiPayloadPkg.dsc -b DEBUG -t ${GCC_TOOLCHAIN_VERSION}",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Firmware Stitching
.github/workflows/example.yml:
build-stitching:
needs:
- changes
- skip-check
strategy:
fail-fast: false
matrix:
coreboot-version: ['4.19']
arch: ['amd64', 'arm64']
runs-on: ${{ matrix.arch == 'arm64' && 'ARM64' || 'ubuntu-latest' }}
if: ${{ ! (github.event_name == 'pull_request_review' && github.actor != 'github-actions[bot]') && needs.skip-check.outputs.changes == 'true' }}
# Skip if pull_request_review on PR not made by a bot
steps:
- name: Cleanup
run: |
rm -rf ./* || true
rm -rf ./.??* || true
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Restore cached coreboot-blobs repo
uses: actions/cache/restore@v4
id: cache-repo
with:
path: ./stitch
key: coreboot-blobs-${{ matrix.coreboot-version }}
- name: Clone blobs repo
if: steps.cache-repo.outputs.cache-hit != 'true'
run: |
git clone --depth 1 https://review.coreboot.org/blobs stitch
- name: Store coreboot-blobs repo in cache
uses: actions/cache/save@v4
if: steps.cache-repo.outputs.cache-hit != 'true'
with:
path: ./stitch
key: coreboot-blobs-${{ matrix.coreboot-version }}
- name: firmware-action
uses: ./
# uses: 9elements/firmware-action
with:
config: 'tests/example_config__firmware_stitching.json'
target: 'stitching-example'
recursive: 'false'
compile: ${{ needs.changes.outputs.compile }}
env:
COREBOOT_VERSION: ${{ matrix.coreboot-version }}
- name: Get artifacts
uses: actions/upload-artifact@v4
with:
name: stitch-${{ matrix.coreboot-version }}-${{ matrix.arch }}
path: output-stitch
retention-days: 14
tests/example_config__firmware_stitching.json:
{
"firmware_stitching": {
"stitching-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "stitch/",
"container_output_dirs": null,
"container_output_files": ["new_descriptor.bin"],
"output_dir": "output-stitch/",
"base_file_path": "stitch/mainboard/intel/emeraldlake2/descriptor.bin",
"platform": "",
"ifdtool_entries": [
{
"path": "stitch/mainboard/intel/emeraldlake2/me.bin",
"target_region": "ME",
"optional_arguments": null
}
],
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
Configuration
Example of JSON configuration file
{
"coreboot": {
"coreboot-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "my_super_dooper_awesome_coreboot/",
"defconfig_path": "seabios_defconfig",
"output_dir": "output-coreboot/",
"container_output_dirs": null,
"container_output_files": [
"build/coreboot.rom",
"defconfig"
],
"blobs": {
"payload_file_path": "",
"intel_ifd_path": "",
"intel_me_path": "",
"intel_gbe_path": "",
"fsp_binary_path": "",
"fsp_header_path": "",
"vbt_path": "",
"ec_path": ""
},
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"edk2": {
"edk2-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/${EDK2_VERSION}:main",
"arch": "X64",
"repo_path": "Edk2/",
"defconfig_path": "edk2_config.cfg",
"output_dir": "output-edk2/",
"container_output_dirs": [
"Build/"
],
"container_output_files": null,
"build_command": "source ./edksetup.sh; build -a X64 -p UefiPayloadPkg/UefiPayloadPkg.dsc -b DEBUG -t ${GCC_TOOLCHAIN_VERSION}",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"firmware_stitching": {
"stitching-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/coreboot_${COREBOOT_VERSION}:main",
"repo_path": "stitch/",
"container_output_dirs": null,
"container_output_files": [
"new_descriptor.bin"
],
"output_dir": "output-stitch/",
"base_file_path": "stitch/mainboard/intel/emeraldlake2/descriptor.bin",
"platform": "",
"ifdtool_entries": [
{
"path": "stitch/mainboard/intel/emeraldlake2/me.bin",
"target_region": "ME",
"optional_arguments": null
}
],
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"linux": {
"linux-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/linux_${LINUX_VERSION}:main",
"arch": "${SYSTEM_ARCH}",
"repo_path": "linux-${LINUX_VERSION}/",
"defconfig_path": "ci_defconfig",
"output_dir": "output-linux/",
"container_output_dirs": null,
"container_output_files": [
"vmlinux",
"defconfig"
],
"gcc_version": "",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
},
"u-root": {
"u-root-example": {
"depends": null,
"sdk_url": "ghcr.io/9elements/firmware-action/uroot_${UROOT_VERSION}:main",
"repo_path": "u-root/",
"output_dir": "output-uroot/",
"container_output_dirs": null,
"container_output_files": [
"initramfs.cpio"
],
"build_command": "go build; GOARCH=amd64 ./u-root -defaultsh gosh -o initramfs.cpio boot coreboot-app ./cmds/core/* ./cmds/boot/*",
"container_input_dir": "inputs/",
"input_dirs": null,
"input_files": null
}
}
}
The config is split by type (coreboot, linux, edk2, ...).
Each type can contain any number of modules.
Each module has a name, which can be anything as long as it is unique across all modules of all types. In the example above there are five modules (coreboot-example, edk2-example, stitching-example, linux-example, u-root-example).
The configuration above can be simplified to this:
/
├── coreboot/
│   └── coreboot-example
├── edk2/
│   └── edk2-example
├── firmware_stitching/
│   └── stitching-example
├── linux/
│   └── linux-example
└── u-root/
    └── u-root-example
Not all types must be present or defined. If you are building coreboot and only coreboot, you can have only the coreboot type present.
/
└── coreboot/
└── coreboot_example
You can have multiple modules of each type, as long as their names are unique.
/
├── coreboot/
│ ├── coreboot_example
│ ├── coreboot_A
│ └── my_little_firmware
├── linux/
│ ├── linux_example
│ ├── linux_B
│ ├── asdf
│ └── asdf2
└── edk2/
├── edk2_example
└── edk2_C
Modules
Each module has the following sections:
depends
common
specific
// CorebootOpts is used to store all data needed to build coreboot.
type CorebootOpts struct {
// List of IDs this instance depends on
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
// Coreboot specific options
Blobs CorebootBlobs `json:"blobs"`
}
common & specific are identical in function; there is no real difference between the two. They are split only to simplify the code. They define things like the path to the source code, the version and source of the SDK to use, and so on.
depends, on the other hand, allows you to specify a dependency (or relation) between modules. For example, your coreboot uses edk2 as payload. You can specify this dependency by listing the name of the edk2 module in the depends list of your coreboot module.
{
"coreboot": {
"coreboot-example": {
"depends": ["edk2-example"],
...
}
},
"edk2": {
"edk2-example": {
"depends": null,
...
}
}
}
With such a configuration, you can run firmware-action recursively, and it will build all of the modules in the proper order.
./firmware-action build --config=./my-config.json --target=coreboot-example --recursive
In this case firmware-action would build edk2-example first and then coreboot-example.
By changing inputs and outputs, you can feed the output of one module into the input of another.
This way you can build the entire firmware stack in a single step.
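As a sketch of that chaining: coreboot can pick up a payload file out of edk2's output_dir. The config below is truncated and illustrative only; the payload file name is hypothetical and a real entry would need all required fields (sdk_url, repo_path, and so on):

```shell
# Truncated, illustrative config: coreboot-example depends on edk2-example and
# reads a payload file from edk2's output_dir. The payload file name is
# hypothetical; a real config needs all required fields for each module.
cat > chained-config.json <<'EOF'
{
  "edk2": {
    "edk2-example": {
      "depends": null,
      "output_dir": "output-edk2/"
    }
  },
  "coreboot": {
    "coreboot-example": {
      "depends": ["edk2-example"],
      "blobs": {
        "payload_file_path": "output-edk2/UEFIPAYLOAD.fd"
      }
    }
  }
}
EOF
```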
Common and Specific
To explain each entry in the configuration, here are snippets of the source code with comments.
In the code below, the json tag (for example json:"sdk_url") specifies what the field is called in the JSON file.
The validate:"required" tag means that the field is required and must not be empty. An empty required field will fail validation and terminate the program with an error.
The validate:"dirpath" tag means that the field must contain a valid path to a directory. The path or directory does not need to exist, but it must be a valid path. Be warned - this means the string must end with /, for example output-coreboot/.
The validate:"filepath" tag means that the field must contain a valid path to a file. The file does not need to exist.
For more details see the go-playground/validator package.
Common
type CommonOpts struct {
// Specifies the container toolchain tag to use when building the image.
// This has an influence on the IASL, GCC and host GCC version that is used to build
// the target. You must match the source level and sdk_version.
// NOTE: Updating the sdk_version might result in different binaries using the
// same source code.
// Examples:
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:main
// https://ghcr.io/9elements/firmware-action/coreboot_4.19:latest
// https://ghcr.io/9elements/firmware-action/edk2-stable202111:latest
// See https://github.com/orgs/9elements/packages
SdkURL string `json:"sdk_url" validate:"required"`
// Gives the (relative) path to the target (firmware) repository.
// If the current repository contains the selected target, specify: '.'
// Otherwise the path should point to the target (firmware) repository submodule that
// had been previously checked out.
RepoPath string `json:"repo_path" validate:"required,dirpath"`
// Specifies the (relative) paths to directories where are produced files (inside Container).
ContainerOutputDirs []string `json:"container_output_dirs" validate:"dive,dirpath"`
// Specifies the (relative) paths to produced files (inside Container).
ContainerOutputFiles []string `json:"container_output_files" validate:"dive,filepath"`
// Specifies the (relative) path to directory into which place the produced files.
// Directories listed in ContainerOutputDirs and files listed in ContainerOutputFiles
// will be exported here.
// Example:
// Following setting:
// ContainerOutputDirs = []string{"Build/"}
// ContainerOutputFiles = []string{"coreboot.rom", "defconfig"}
// OutputDir = "myOutput"
// Will result in following structure being copied out of the container:
// myOutput/
// ├── Build/
// ├── coreboot.rom
// └── defconfig
OutputDir string `json:"output_dir" validate:"required,dirpath"`
// Specifies the (relative) paths to directories which should be copied into the container.
InputDirs []string `json:"input_dirs" validate:"dive,dirpath"`
// Specifies the (relative) paths to file which should be copied into the container.
InputFiles []string `json:"input_files" validate:"dive,filepath"`
// Specifies the path to directory where to place input files and directories inside container.
// Directories listed in ContainerInputDirs and files listed in ContainerInputFiles
// will be copied there.
// Example:
// Following setting:
// InputDirs = []string{"config-files/"}
// InputFiles = []string{"README.md", "Taskfile.yml"}
// ContainerInputDir = "myInput"
// Will result in following structure being copied into the container:
// myInput/
// ├── config-files/
// ├── README.md
// └── Taskfile.yml
ContainerInputDir string `json:"container_input_dir" validate:"dirpath"`
// Overview:
//
// | Configuration option | Direction |
// |:-----------------------|:--------------------:|
// | RepoPath | Host --> Container |
// | | |
// | ContainerOutputDirs | Host <-- Container |
// | ContainerOutputFiles | Host <-- Container |
// | OutputDir | Host <-- Container |
// | | |
// | InputDirs | Host --> Container |
// | InputFiles | Host --> Container |
// | ContainerInputDir | Host --> Container |
}
Specific / coreboot
// CorebootOpts is used to store all data needed to build coreboot.
type CorebootOpts struct {
// List of IDs this instance depends on
Depends []string `json:"depends"`
// Common options like paths etc.
CommonOpts
// Gives the (relative) path to the defconfig that should be used to build the target.
DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
// Coreboot specific options
Blobs CorebootBlobs `json:"blobs"`
}
// CorebootBlobs is used to store data specific to coreboot.
type CorebootBlobs struct {
    // ** List of supported blobs **
    // NOTE: The blobs might not be added to the ROM; it depends on the provided defconfig.
    //
    // Gives the (relative) path to the payload.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(MAINBOARDDIR)/payload`.
    // The Kconfig `CONFIG_PAYLOAD_FILE` will be changed to point to the same path.
    PayloadFilePath string `json:"payload_file_path" type:"blob"`
    // Gives the (relative) path to the Intel Flash descriptor binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/descriptor.bin`.
    // The Kconfig `CONFIG_IFD_BIN_PATH` will be changed to point to the same path.
    IntelIfdPath string `json:"intel_ifd_path" type:"blob"`
    // Gives the (relative) path to the Intel Management Engine binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/me.bin`.
    // The Kconfig `CONFIG_ME_BIN_PATH` will be changed to point to the same path.
    IntelMePath string `json:"intel_me_path" type:"blob"`
    // Gives the (relative) path to the Intel Gigabit Ethernet engine binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/gbe.bin`.
    // The Kconfig `CONFIG_GBE_BIN_PATH` will be changed to point to the same path.
    IntelGbePath string `json:"intel_gbe_path" type:"blob"`
    // Gives the (relative) path to the Intel FSP binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/Fsp.fd`.
    // The Kconfig `CONFIG_FSP_FD_PATH` will be changed to point to the same path.
    FspBinaryPath string `json:"fsp_binary_path" type:"blob"`
    // Gives the (relative) path to the Intel FSP header folder.
    // In a 'coreboot' build, the files will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/Include`.
    // The Kconfig `CONFIG_FSP_HEADER_PATH` will be changed to point to the same path.
    FspHeaderPath string `json:"fsp_header_path" type:"blob"`
    // Gives the (relative) path to the Video BIOS Table binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/vbt.bin`.
    // The Kconfig `CONFIG_INTEL_GMA_VBT_FILE` will be changed to point to the same path.
    VbtPath string `json:"vbt_path" type:"blob"`
    // Gives the (relative) path to the Embedded Controller binary.
    // In a 'coreboot' build, the file will be placed at
    // `3rdparty/blobs/mainboard/$(CONFIG_MAINBOARD_DIR)/ec.bin`.
    // The Kconfig `CONFIG_EC_BIN_PATH` will be changed to point to the same path.
    EcPath string `json:"ec_path" type:"blob"`
}
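As a sketch, a blobs section using several of these keys could look like this. The keys come from the struct tags above; all paths are hypothetical examples relative to the repository:

```json
{
  "payload_file_path": "payload/vmlinuz",
  "intel_ifd_path": "blobs/descriptor.bin",
  "intel_me_path": "blobs/me.bin",
  "vbt_path": "blobs/vbt.bin"
}
```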
Specific / Linux
// LinuxOpts is used to store all data needed to build linux
type LinuxOpts struct {
    // List of IDs this instance depends on
    // Example: [ "MyLittleCoreboot", "MyLittleEdk2"]
    Depends []string `json:"depends"`
    // Common options like paths etc.
    CommonOpts
    // Specifies target architecture, such as 'amd64' or 'arm64'.
    // Supported options:
    // - 'i386'
    // - 'amd64'
    // - 'arm'
    // - 'arm64'
    Arch string `json:"arch"`
    // Gives the (relative) path to the defconfig that should be used to build the target.
    DefconfigPath string `json:"defconfig_path" validate:"required,filepath"`
    // Linux specific options
    LinuxSpecific
}
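A linux module's options might then look like this in the JSON configuration. The field names come from the struct tags above; the values are hypothetical:

```json
{
  "depends": [],
  "arch": "amd64",
  "defconfig_path": "linux-defconfig"
}
```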
// LinuxSpecific is used to store data specific to linux
type LinuxSpecific struct {
    // TODO: either use or remove
    GccVersion string `json:"gcc_version"`
}
Specific / Edk2
// Edk2Opts is used to store all data needed to build edk2.
type Edk2Opts struct {
    // List of IDs this instance depends on
    // Example: [ "MyLittleCoreboot", "MyLittleLinux"]
    Depends []string `json:"depends"`
    // Common options like paths etc.
    CommonOpts
    // Specifies target architecture, such as 'X64' or 'AARCH64'.
    // Supported options:
    // - 'AARCH64'
    // - 'ARM'
    // - 'IA32'
    // - 'IA32X64'
    // - 'X64'
    Arch string `json:"arch"`
    // Gives the (relative) path to the defconfig that should be used to build the target.
    // For EDK2 this is a one-line file containing the build arguments such as
    // '-D BOOTLOADER=COREBOOT -D TPM_ENABLE=TRUE -D NETWORK_IPXE=TRUE'.
    DefconfigPath string `json:"defconfig_path" validate:"filepath"`
    // Edk2 specific options
    Edk2Specific `validate:"required"`
}
// Edk2Specific is used to store data specific to edk2.
//
// Simplified because of issue #92.
type Edk2Specific struct {
    // Specifies which build command to use.
    // The GCC version is exposed in the container as the USE_GCC_VERSION environment variable.
    // Examples:
    //   "source ./edksetup.sh; build -t GCC5 -a IA32 -p UefiPayloadPkg/UefiPayloadPkg.dsc"
    //   "python UefiPayloadPkg/UniversalPayloadBuild.py"
    //   "Intel/AlderLakeFspPkg/BuildFv.sh"
    BuildCommand string `json:"build_command" validate:"required"`
}
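Putting the two structs together, an edk2 module's options might look like the sketch below. Because Edk2Specific is embedded, its build_command field appears at the top level of the JSON object. The field names come from the struct tags; the module name and file paths are hypothetical:

```json
{
  "depends": ["MyLittleCoreboot"],
  "arch": "IA32",
  "defconfig_path": "edk2-defconfig",
  "build_command": "source ./edksetup.sh; build -t GCC5 -a IA32 -p UefiPayloadPkg/UefiPayloadPkg.dsc"
}
```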
Specific / Firmware stitching
// FirmwareStitchingOpts is used to store all data needed to stitch firmware
type FirmwareStitchingOpts struct {
    // List of IDs this instance depends on
    Depends []string `json:"depends"`
    // Common options like paths etc.
    CommonOpts
    // Base file into which to inject files.
    // !!! Must contain IFD !!!
    // Examples:
    //   - coreboot.rom
    //   - ifd.bin
    BaseFilePath string `json:"base_file_path" validate:"required,filepath"`
    // Platform - passed to all `ifdtool` calls with `--platform`
    Platform string `json:"platform"`
    // List of instructions for ifdtool
    IfdtoolEntries []IfdtoolEntry `json:"ifdtool_entries"`
    // List of instructions for cbfstool
    // TODO ???
}
// IfdtoolEntry is for injecting a file at `path` into region `TargetRegion`
type IfdtoolEntry struct {
    // Gives the (relative) path to the binary blob
    Path string `json:"path" validate:"required,filepath"`
    // Region where to inject the file
    // For supported options see `ifdtool --help`
    TargetRegion string `json:"target_region" validate:"required"`
    // Additional (optional) arguments and flags
    // For example:
    //   `--platform adl`
    // For supported options see `ifdtool --help`
    OptionalArguments []string `json:"optional_arguments"`
    // Ignore this entry if the file is missing
    IgnoreIfMissing bool `json:"ignore_if_missing" type:"boolean"`
    // For internal use only - whether or not the blob should be injected.
    // First it is checked whether the blob file exists; if it does not and `IgnoreIfMissing`
    // is set to `true`, then `Skip` is set to `true` to remove the need for additional
    // repetitive checks later in the program.
    Skip bool
}
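A firmware-stitching module combining the two structs above might be configured like this. The field names come from the struct tags; the module name, file paths, and the region name are hypothetical examples (see `ifdtool --help` for the regions your platform actually supports):

```json
{
  "depends": ["MyLittleCoreboot"],
  "base_file_path": "coreboot.rom",
  "platform": "adl",
  "ifdtool_entries": [
    {
      "path": "blobs/me.bin",
      "target_region": "ME",
      "ignore_if_missing": true
    }
  ]
}
```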
Features
Docker containers
Docker is used to build the firmware stacks. To do this efficiently, purpose-specific Docker containers are pre-built and published as packages in the GitHub repository.
However, this approach led to many Dockerfiles with practically identical content that differed only in the versions of the software installed inside. To simplify this, we needed a master configuration on top of the Dockerfiles. Instead of inventing a custom configuration format, we decided to reuse the existing, well-defined docker-compose YAML structure, with a custom parser (because no suitable off-the-shelf parser exists).
Docker-compose
The compose file is not currently used by actual docker-compose; it is parsed manually and then fed to dagger, since dagger does not support docker-compose at the time of writing.
There is also no existing docker-compose parser out there that we could use as an off-the-shelf solution.
The custom parser implements only a limited feature-set of the compose-file spec, just the bare minimum needed to build the containers:
- services

Other top-level keys, such as networks, volumes, configs and secrets, are not supported.
This way, we can have a single parametric Dockerfile for each item (coreboot, linux, edk2, ...) and introduce variation through a scalable and maintainable single-file config.
Example of a compose.yaml file to build two versions of the coreboot Docker image:

services:
  coreboot_4.19:
    build:
      context: coreboot
      args:
        - COREBOOT_VERSION=4.19
  coreboot_4.20:
    build:
      context: coreboot
      args:
        - COREBOOT_VERSION=4.20
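As a sketch of what such a limited parser amounts to, the snippet below maps the same structure onto Go types. It is illustrative only, not the project's actual parser, and it decodes a JSON rendering of the compose data with the standard library's encoding/json, since Go has no YAML support in the standard library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BuildSpec mirrors the `build` key of a compose service:
// the build context directory and a list of KEY=VALUE build args.
// (The compose spec also allows a map form of `args`; this sketch
// supports only the list form shown in the example above.)
type BuildSpec struct {
	Context string   `json:"context"`
	Args    []string `json:"args"`
}

// Service mirrors one entry under the top-level `services` key.
type Service struct {
	Build BuildSpec `json:"build"`
}

// Compose mirrors the only part of the compose-file spec needed
// to build the containers: the `services` map.
type Compose struct {
	Services map[string]Service `json:"services"`
}

// parseCompose decodes the supported subset of a compose document.
func parseCompose(data []byte) (Compose, error) {
	var c Compose
	err := json.Unmarshal(data, &c)
	return c, err
}

func main() {
	// JSON equivalent of one service from the compose.yaml example above.
	data := []byte(`{
		"services": {
			"coreboot_4.19": {
				"build": {"context": "coreboot", "args": ["COREBOOT_VERSION=4.19"]}
			}
		}
	}`)
	c, err := parseCompose(data)
	if err != nil {
		panic(err)
	}
	for name, svc := range c.Services {
		fmt.Println(name, svc.Build.Context, svc.Build.Args)
	}
}
```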
Multi-stage builds
We use multi-stage builds to minimize the final container / image.
Environment variables
In the Dockerfiles, we rely heavily on environment variables and build arguments.
This is what makes them parametric.
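As an illustrative sketch only (not one of the project's actual Dockerfiles), combining build arguments with a multi-stage build might look like this; the base images, package names, and build steps are all assumptions:

```dockerfile
# ARG makes the Dockerfile parametric: one file serves many
# compose services that differ only in COREBOOT_VERSION.
ARG COREBOOT_VERSION

# Stage 1: heavyweight build stage, discarded afterwards.
FROM debian:stable AS builder
# ARG must be re-declared inside each stage that uses it.
ARG COREBOOT_VERSION
RUN apt-get update && apt-get install -y git build-essential
RUN git clone --branch "${COREBOOT_VERSION}" --depth 1 \
        https://review.coreboot.org/coreboot /coreboot

# Stage 2: only the results are copied into the final image,
# keeping it small.
FROM debian:stable-slim
ARG COREBOOT_VERSION
COPY --from=builder /coreboot /coreboot
ENV COREBOOT_VERSION=${COREBOOT_VERSION}
```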
Testing
Containers are also tested to verify that they were built successfully.
The tests are rather simple, consisting solely of happy-path tests. This might change in the future.
A test is done by executing a shell script which builds firmware in some hello-world example configuration. Nothing too fancy.
The path to said shell script is stored in the environment variable VERIFICATION_TEST.
Example of coreboot test
#!/usr/bin/env bash
set -Eeuo pipefail

# Environment variables
export BUILD_TIMELESS=1
declare -a PAYLOADS=(
    "seabios"
    "seabios_coreinfo"
    "seabios_nvramcui"
)

# Clone repo
git clone --branch "${VERIFICATION_TEST_COREBOOT_VERSION}" --depth 1 https://review.coreboot.org/coreboot
cd coreboot

# Make
for PAYLOAD in "${PAYLOADS[@]}"; do
    echo "TESTING: ${PAYLOAD}"
    make clean
    cp "/tests/coreboot_${VERIFICATION_TEST_COREBOOT_VERSION}/${PAYLOAD}.defconfig" "./${PAYLOAD}.defconfig"
    make defconfig KBUILD_DEFCONFIG="./${PAYLOAD}.defconfig"
    make -j "$(nproc)" || make
done
In addition, there might be VERIFICATION_TEST_* variables. These are used inside the test script and are rather use-case specific; they are often used to store which version of the firmware is being tested.
Adding new container
- (optional) Add a new Dockerfile into the docker directory
- Add a new entry in docker/compose.yaml
- Add a new entry into the strategy matrix in .github/workflows/docker-build-and-test.yml
- (optional) Add a new strategy matrix entry to the examples in .github/workflows/example.yml
  - this requires adding a new configuration file in the tests directory