Linux From Scratch
This tutorial shows how to run a simple cross-compiled kernel on
the QEMU emulator. We will target the ARM Cortex-A9 processor
with hardware floating-point support and use the musl libc
implementation. The cross-compiler will be configured
and built with crosstool-ng.
All operations will be executed inside Docker to take advantage of layer caching. Cross-compilation can take up to an hour, so having a reproducible environment with cached layers significantly speeds up repeated runs.
This tutorial walks through a Dockerfile assembled from steps that other guides perform directly on the host. Docker is expected to be installed and working on your system.
[base] Preparing environment
Before we start, choose the operating system for the Docker image that will run the cross-compilation process. I will use the recent Ubuntu 22.04 image as the base.
In the base stage, install the packages that will be used for cross-compilation.
If you are not familiar with build stages in Docker,
see the Docker multi-stage builds documentation.
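A useful property of multi-stage builds for this tutorial: once the full Dockerfile is assembled, any intermediate stage can be built and cached on its own. A hypothetical invocation (the stage name follows the FROM ... AS names used below):

```shell
# Hypothetical: build and tag only the toolchain stage, reusing
# cached layers from previous runs. Stage names come from the
# `FROM ... AS <name>` lines in the Dockerfile below.
docker build --target build-crosstng-env -t emb-toolchain .
```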
To avoid permission issues when accessing artifacts in a mounted volume, add a user with UID 1000. The UID should match your host user’s UID; check it with:
$ id -u
1000
The base stage should produce the following code:
# Download the base for the container
FROM ubuntu:22.04 AS base
# Update and download necessary tooling
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y qemu-system-arm git build-essential gcc g++ gawk \
bison flex texinfo help2man make libncurses5-dev python3-dev \
autoconf automake libtool libtool-bin cpio unzip rsync bc device-tree-compiler wget curl
# Add an unprivileged user, since crosstool-ng refuses to run as root
RUN useradd -m -u 1000 -s /bin/bash builder && \
echo "builder ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
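If your host UID is not 1000, a hypothetical variant is to forward it as a build argument (this assumes you change the Dockerfile to declare `ARG UID=1000` and use `useradd -m -u $UID ...`):

```shell
# Hypothetical: forward the host UID into the build so the
# in-container user always matches your host user.
HOST_UID=$(id -u)
echo "docker build --build-arg UID=$HOST_UID -t emb ."
```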
[build-crosstng-env] Building cross-compiler
This step follows the official crosstool-ng manual.
The first build will produce the crosstool-ng host tool. All
sources are available on GitHub: https://github.com/crosstool-ng/crosstool-ng.
To ensure a reproducible build we select a specific crosstool-ng tag
— here it is 1.26.0. Before building, crosstool-ng must
be “bootstrapped” to generate the necessary sources and
configuration files for the host platform. The bootstrapping step
runs from the root of the cloned repository by executing
./bootstrap. During this step the package generates
menuconfig entries, package version descriptions, and other files
required to build the ct-ng tool. The
ct-ng program manages the cross-compiler configuration
and orchestrates building the final binutils and toolchain.
$ ./bootstrap
INFO :: *** Generating package version descriptions
INFO :: Master packages: autoconf ... zlib zstd
INFO :: Generating 'config/versions/autoconf.in'
...
INFO :: *** Generating menu/choice selections
INFO :: Generating arch.in (choice)
INFO :: Generating kernel.in (choice)
...
INFO :: *** Gathering the list of data files to install
INFO :: *** Running autoreconf
...
INFO :: *** Done!
Next, provide configuration for the build tools using the
./configure script.
In Embedded Linux we discussed different libc implementations. As an example, you can change the default newlib to musl. To do that through the TUI:
- Enable experimental features in crosstool-ng (Paths and misc options > Try features marked as EXPERIMENTAL)
- Change the target OS to Linux (Operating System > Target OS > linux)
- Set the C-library to musl (C-library > C-library > musl)
Use search (press /) and look for MUSL.
You’ll see something like:
| Symbol: LIBC_MUSL [=y] │
│ Type : bool │
│ Defined at config/gen/libc.in:55 │
│ Prompt: musl │
│ Depends on: GEN_CHOICE_LIBC [=y] && !WINDOWS [=n] && !BARE_METAL [=n] && EXPERIMENTA │
│ Location: │
│ > C-library │
│ (1) > C library (GEN_CHOICE_LIBC [=y]) │
│ Selects: LIBC_SUPPORT_THREADS_NATIVE [=y] && CC_CORE_NEEDED [=y] │
TIP: Search for “Depends on” and inspect the prompts to find prerequisites that must be adjusted.
You can also provide a defconfig file inside the
crosstool-ng directory with contents like:
CT_CONFIG_VERSION="4"
CT_EXPERIMENTAL=y
CT_KERNEL_LINUX=y
CT_LIBC_MUSL=y
CT_GETTEXT_NEEDED=y
Running ./ct-ng defconfig shows which config is loaded.
Passing --enable-local to ./configure
keeps crosstool-ng local to the source tree: the ct-ng
tool is then run from the build directory instead of being installed
system-wide.
After the previous steps, run make to build the host
tool that will then build the cross-compiler.
Once the tool is built, select the target cross-compiler to build
with ./ct-ng <target>. To list available sample
configurations use ./ct-ng list-samples.
$ ./ct-ng list-samples
Status Sample name
[L...] aarch64-ol7u9-linux-gnu
[L...] aarch64-ol8u6-linux-gnu
[L...] aarch64-ol8u7-linux-gnu
...
[L...] arm-bare_newlib_cortex_m3_nommu-eabi
[L...] arm-cortex_a15-linux-gnueabihf
[L...] arm-cortex_a8-linux-gnueabi
[L..X] arm-cortexa5-linux-uclibcgnueabihf
[L..X] arm-cortexa9_neon-linux-gnueabihf
[L..X] x86_64-w64-mingw32,arm-cortexa9_neon-linux-gnueabihf
[L...] arm-multilib-linux-uclibcgnueabi
[L...] arm-nano-eabi
[L...] arm-none-eabi
...
[L..X] arm-picolibc-default
[L..X] arm-picolibc-eabi
...
L (Local) : sample was found in current directory
G (Global) : sample was installed with crosstool-NG
X (EXPERIMENTAL): sample may use EXPERIMENTAL features
B (BROKEN) : sample is currently broken
O (OBSOLETE) : sample needs to be upgraded
To build and install the fully configured cross-compiler run
./ct-ng build.
# Download crosstool-ng
FROM base AS build-crosstng-env
USER builder
WORKDIR /home/builder
RUN git clone https://github.com/crosstool-ng/crosstool-ng ./crosstool-ng
RUN git -C ./crosstool-ng checkout crosstool-ng-1.26.0
# Bootstrap the crosstool-ng
WORKDIR /home/builder/crosstool-ng
RUN ./bootstrap
RUN ./configure --enable-local
RUN make -j$(nproc)
COPY defconfig .
RUN ./ct-ng defconfig
RUN ./ct-ng arm-cortexa9_neon-linux-gnueabihf
RUN ./ct-ng build
[build-kernel-env] Building kernel
At the kernel build stage we use the prepared cross-toolchain.
First, add the toolchain’s bin directory to PATH. Tell the kernel
build system which compiler and architecture to use by setting the
CROSS_COMPILE and ARCH environment
variables.
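On a host shell (outside the Dockerfile below), the equivalent environment would look like this, assuming the toolchain was installed under ~/x-tools, the crosstool-ng default:

```shell
# Same environment as the Dockerfile's ENV lines, expressed as
# shell exports for an interactive session.
export PATH="$HOME/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin:$PATH"
export CROSS_COMPILE=arm-cortexa9_neon-linux-gnueabihf-
export ARCH=arm
```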
Next select the device configuration to use for the kernel. For this tutorial we will use “vexpress”.
After that you can build the kernel image and device tree.
FROM build-crosstng-env AS build-kernel-env
WORKDIR /home/builder
# Set cross-compiler path and target architecture
ENV PATH="/home/builder/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin:$PATH"
ENV CROSS_COMPILE=arm-cortexa9_neon-linux-gnueabihf-
ENV ARCH=arm
# Download Linux kernel
RUN wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.17.2.tar.xz
RUN tar xf linux-6.17.2.tar.xz
# Build kernel
# Set default config
RUN make -C linux-6.17.2 vexpress_defconfig
# Build kernel as zImage
RUN make -C linux-6.17.2 zImage -j$(nproc)
# Build device tree file
RUN make -C linux-6.17.2 dtbs -j$(nproc)
Running image on QEMU
Now we can run the kernel under QEMU. Running the produced kernel image without a rootfs will result in a kernel panic.
We are still missing a few parts, but you can prepare an interactive session that copies necessary artifacts from previous stages and exports the required environment variables.
# Prepare for interactive run
FROM base AS runtime
USER builder
WORKDIR /home/builder
ENV PATH="/home/builder/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin:$PATH"
ENV CROSS_COMPILE=arm-cortexa9_neon-linux-gnueabihf-
ENV ARCH=arm
# We need to copy artifacts from previous stages
COPY --chown=builder:builder --from=build-crosstng-env /home/builder/crosstool-ng /home/builder/crosstool-ng
COPY --chown=builder:builder --from=build-crosstng-env /home/builder/x-tools /home/builder/x-tools
COPY --chown=builder:builder --from=build-kernel-env /home/builder/linux-6.17.2 /home/builder/linux-6.17.2
CMD ["/bin/bash"]
First build the Docker image:
docker build -t emb .
Next, run an interactive container:
docker run -it emb:latest
Inside the container execute QEMU. The command options specify:
- -M: selects the emulated machine
- -m: guest RAM size
- -kernel: the kernel image
- -dtb: the device tree blob
- -append: kernel command line (here we set the console)
- -nographic: disable graphical output and redirect I/O to the console
cd /home/builder/linux-6.17.2 && qemu-system-arm \
-M vexpress-a9 \
-m 256M \
-kernel arch/arm/boot/zImage \
-dtb arch/arm/boot/dts/arm/vexpress-v2p-ca9.dtb \
-append "console=ttyAMA0" \
-nographic
The kernel will panic:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.17.2 #1 NONE
Hardware name: ARM-Versatile Express
Call trace:
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x54/0x68
dump_stack_lvl from vpanic+0xf4/0x2e4
vpanic from __do_trace_suspend_resume+0x0/0x4c
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
To exit QEMU press the key combination CTRL+A X.
[build-rootfs] Building root filesystem
To finish our custom Linux build we need to produce a root
filesystem (rootfs) that will be loaded into RAM. Userspace is based
on BusyBox. BusyBox contains only the most essential tools. The
project reduced the codebase so it could run from very small media;
tools are compiled into a single binary. Programs normally found
under /bin are symbolic links to
/bin/busybox and are dispatched by that binary, so
invoking cat actually calls
busybox cat.
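This argv[0] dispatch can be sketched with a tiny stand-in script; `mybox` and its `hello` applet are made up for the demonstration and are not part of BusyBox:

```shell
# Stand-in sketch of BusyBox-style argv[0] dispatch: one executable,
# many names, behavior chosen by the name it was invoked under.
cat > mybox <<'EOF'
#!/bin/sh
applet=$(basename "$0")
case "$applet" in
  hello) echo "hello applet" ;;
  *)     echo "unknown applet: $applet" ;;
esac
EOF
chmod +x mybox
ln -sf mybox hello
./hello   # prints "hello applet"
```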
$ ls -la /bin/
total 1028
drwxr-xr-x 2 0 0 1900 Dec 5 18:42 .
drwxr-xr-x 14 0 0 320 Dec 6 11:27 ..
lrwxrwxrwx 1 0 0 7 Dec 5 18:42 arch -> busybox
lrwxrwxrwx 1 0 0 7 Dec 5 18:42 ash -> busybox
lrwxrwxrwx 1 0 0 7 Dec 5 18:42 base32 -> busybox
lrwxrwxrwx 1 0 0 7 Dec 5 18:42 base64 -> busybox
-rwxr-xr-x 1 0 0 1051780 Dec 5 18:42 busybox
lrwxrwxrwx 1 0 0 7 Dec 5 18:42 cat -> busybox
...
Preparing the rootfs folder structure
Before building BusyBox, prepare the rootfs directory and create required folders:
mkdir rootfs && cd rootfs && \
mkdir -p bin dev etc home lib proc sbin sys tmp usr var \
usr/bin usr/lib usr/sbin var/log
Building BusyBox
BusyBox can be cloned from the official repository
(git://busybox.net/busybox.git). We will use stable version 1.36.
One configuration you must change is CONFIG_PREFIX to
set the installation destination to your rootfs. If you don’t
provide a .config, run make defconfig. To
further customize options run make menuconfig. In the
menu, under “Settings”, search for “Destination path for ‘make
install’” and set it to our ../rootfs directory.
Before building export the CROSS_COMPILE variable
(as in previous steps). After configuration build with
make -j$(nproc) and then install with
make install into the rootfs.
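A non-interactive alternative to menuconfig for this one option is to patch CONFIG_PREFIX in the .config directly. A one-line stand-in .config is created here for illustration; in the real build you would run this after `make defconfig`:

```shell
# Patch CONFIG_PREFIX in place instead of navigating menuconfig.
# The stand-in .config mimics the line defconfig would generate.
echo 'CONFIG_PREFIX="./_install"' > .config
sed -i 's|^CONFIG_PREFIX=.*|CONFIG_PREFIX="../rootfs"|' .config
grep '^CONFIG_PREFIX' .config   # CONFIG_PREFIX="../rootfs"
```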
The bin/busybox binary uses shared libraries. You
can copy all produced .so files from the BusyBox build
into rootfs/lib, or minimize the rootfs by copying
only the required libraries. To inspect the shared objects used by the
binary, use the cross-toolchain readelf:
$ arm-cortexa9_neon-linux-gnueabihf-readelf -a bin/busybox | grep "\.so.[0-9]"
[Requesting program interpreter: /lib/ld-linux-armhf.so.3]
0x00000001 (NEEDED) Shared library: [libm.so.6]
0x00000001 (NEEDED) Shared library: [libresolv.so.2]
0x00000001 (NEEDED) Shared library: [libc.so.6]
000000: Version: 1 File: libresolv.so.2 Cnt: 1
0x0020: Version: 1 File: libm.so.6 Cnt: 2
0x0050: Version: 1 File: libc.so.6 Cnt: 10
Copying shared libraries
All listed .so* files must be present in the rootfs
so the dynamic loader can find them at runtime. Shared libraries and headers
produced during the cross-toolchain build are placed in the
toolchain sysroot. To find the sysroot, ask the cross-compiler:
$ arm-cortexa9_neon-linux-gnueabihf-gcc -print-sysroot
/home/builder/x-tools/arm-cortexa9_neon-linux-gnueabihf/
arm-cortexa9_neon-linux-gnueabihf/sysroot
Then copy the required shared libraries into
rootfs/lib:
export SYSROOT=$(arm-cortexa9_neon-linux-gnueabihf-gcc -print-sysroot) && \
cp $SYSROOT/lib/ld-linux-armhf.so.3 rootfs/lib/ && \
cp $SYSROOT/lib/libm.so.6 rootfs/lib/ && \
cp $SYSROOT/lib/libresolv.so.2 rootfs/lib/ && \
cp $SYSROOT/lib/libc.so.6 rootfs/lib/
Preparing the initramfs image
The final step before running the kernel is packing the rootfs
into an archive using the cpio
tool.
In copy-out mode, cpio copies files into an archive. It reads a list of filenames, one per line, on the standard input, and writes the archive onto the standard output. A typical way to generate the list of filenames is with the find command.
Important: run the find command inside
the rootfs directory; otherwise the archived paths will include an
unwanted rootfs/ prefix.
We use the portable “newc” archive format, which supports large inode counts.
The new (SVR4) portable format, which supports file systems having more than 65536 i-nodes.
The --owner root:root option stores UID:GID 0:0 in the
archive, so the files will have root:root ownership when the
initramfs is unpacked.
Finally, gzip the archive:
cd rootfs && find . | cpio -H newc -o --owner root:root > ~/initramfs.cpio \
&& gzip -k ~/initramfs.cpio
The final Dockerfile stage for building rootfs:
# Build rootfs ---------------------------------------------------------------------
FROM base AS build-rootfs-env
USER builder
WORKDIR /home/builder
COPY --chown=builder:builder --from=build-crosstng-env /home/builder/x-tools /home/builder/x-tools
ENV PATH="/home/builder/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin:$PATH"
ENV CROSS_COMPILE=arm-cortexa9_neon-linux-gnueabihf-
ENV ARCH=arm
RUN mkdir rootfs && cd rootfs && \
mkdir -p bin dev etc home lib proc sbin sys tmp usr var \
usr/bin usr/lib usr/sbin var/log
# Clone, configure and build busybox
RUN git clone git://busybox.net/busybox.git
RUN git -C ./busybox checkout tags/1_36_0
COPY busybox.config busybox/.config
RUN make -C busybox -j$(nproc)
RUN make -C busybox install
# Copy required libraries
RUN export SYSROOT=$(arm-cortexa9_neon-linux-gnueabihf-gcc -print-sysroot) && \
cp $SYSROOT/lib/ld-linux-armhf.so.3 rootfs/lib/ && \
cp $SYSROOT/lib/libm.so.6 rootfs/lib/ && \
cp $SYSROOT/lib/libresolv.so.2 rootfs/lib/ && \
cp $SYSROOT/lib/libc.so.6 rootfs/lib/
# Pack the rootfs
RUN cd rootfs && find . | cpio -H newc -o --owner root:root > ~/initramfs.cpio
RUN gzip -k initramfs.cpio
[runtime] Prepare interactive session
To make the container more convenient, prepare an interactive runtime image that already includes environment variables and the artifacts copied from build stages. This simplifies iterative testing.
# (INTERACTIVE) RUNTIME STAGE ------------------------------------------------------
# Prepare for interactive run
FROM base AS runtime
USER builder
WORKDIR /home/builder
ENV PATH="/home/builder/x-tools/arm-cortexa9_neon-linux-gnueabihf/bin:$PATH"
ENV CROSS_COMPILE=arm-cortexa9_neon-linux-gnueabihf-
ENV ARCH=arm
# We need to copy artifacts from previous stages
COPY --chown=builder:builder --from=build-crosstng-env /home/builder/crosstool-ng /home/builder/crosstool-ng
COPY --chown=builder:builder --from=build-crosstng-env /home/builder/x-tools /home/builder/x-tools
COPY --chown=builder:builder --from=build-kernel-env /home/builder/linux-6.17.2 /home/builder/linux-6.17.2
COPY --chown=builder:builder --from=build-rootfs-env /home/builder/busybox /home/builder/busybox
COPY --chown=builder:builder --from=build-rootfs-env /home/builder/rootfs /home/builder/rootfs
COPY --chown=builder:builder --from=build-rootfs-env /home/builder/initramfs.cpio /home/builder/initramfs.cpio
COPY --chown=builder:builder --from=build-rootfs-env /home/builder/initramfs.cpio.gz /home/builder/initramfs.cpio.gz
COPY ./boot.sh /home/builder/boot.sh
CMD ["/bin/bash"]
Running QEMU
To build the Docker image use:
docker build -t emb .
To run a container interactively and mount a shared host directory (so you can exchange files between host and container) use:
docker run -v ./shared:/shared -it emb:latest
Inside the container you can finally boot Linux; once the kernel starts you should see a shell prompt ready for your commands.
$ cd linux-6.17.2 && qemu-system-arm -m 256M \
-nographic \
-M vexpress-a9 \
-kernel arch/arm/boot/zImage \
-append "console=ttyAMA0 rdinit=/bin/sh" \
-dtb arch/arm/boot/dts/arm/vexpress-v2p-ca9.dtb \
-initrd ~/initramfs.cpio.gz
Full Dockerfile
The complete Dockerfile can be found in my public GitHub repository. Note that the repository is under active development; to get the exact version described in this article, use the stable-version tag.
Resources
- Simmonds, C. Mastering Embedded Linux Programming (3rd ed.)
- Karol Przybylski - Linuxdev