RHEL on Raspberry Pi 4 and provisioning with Satellite
Introduction
I need a few more machines in my lab (who doesn’t?), and normally I’d just create more virtual machines on one of my big servers. But those servers are at capacity, and the prices on new servers (and especially RAM) are really high.
Also, those large servers tend to use a lot of energy and have heavy cooling requirements. This means lots of heat and noisy fans, and I want to minimize the heat and noise I add. The room I have dedicated to servers is generally 5-10 degrees Fahrenheit warmer than the rest of my house, and I can hear the fan noise from adjacent rooms.
I’m wondering if, instead of a big server, having a fleet of very inexpensive small machines could work better?
I experimented with silent, low-heat systems long ago, when VIA’s C3 and Intel’s Atom in Mini-ITX form factors were in vogue. At the time, those systems disappointed me in terms of performance. Now, almost two decades later, I figure it’s worth another try.
In 2026, it seems like the way forward for inexpensive low-power, low-heat, low-noise systems might be ARM processors. There’s a lot of variety out there, but Raspberry Pis seem like an obvious starting point since they have a well-established ecosystem and a lot of community support.
At between USD$70-90 (Feb 2026 prices) for a Raspberry Pi 4 with 8 GB of RAM, a fleet of Pis may well work out cheaper than buying a large server, but it also has to work and fulfill my requirements!
Companion Videos:
To go with this blog post, I’ve created some videos. At time of writing, the video series is still in development, but links will be added here as they are released.
Personal Requirements
I have several end-requirements:
- Must be able to mount in a standard 19” rack.
- Must be able to remotely manage the power state
- Must be able to remotely manage the console
- Must run Red Hat Enterprise Linux (and ideally CoreOS, too)
- Must be able to provision over the network and ideally through Red Hat Satellite
The last two requirements are probably going to be the hardest to fulfill. Raspberry Pis are not actually supported by Red Hat, but there do appear to be workarounds posted on the internet.
Hardware
For the first requirement, there are several Raspberry Pi rack mounting solutions available, from simple brackets to full, modular enclosures. I decided on a more complicated solution that supports Power-over-Ethernet (PoE) hats and 2.5” SATA drives: the UCTronics Pi Rack Pro.
This chassis allows for 4 Raspberry Pis per rack unit. Some of the more basic brackets allow for much higher density, but at the expense of supporting add-ons like the PoE hat. I think 4 Pis per rack unit is a good compromise for me.
The chassis is going for a “blade module” vibe, but it falls short of that. Really, it’s just 4 simplistic trays with some add-on boards. The build quality is OK, but there are too many screws and there are some cable routing issues, notably around the HDMI cable. I wish that, like the SD card, they had contrived some way to bring the HDMI port to the front of the unit. The LED panel is a neat addition, but the power button is kind of useless. Unfortunately, the LED screen and power button add-ons are not well supported in any Linux distribution, and drivers are only available as source code.
That being said, I’m reasonably happy with this enclosure, but I’m not sure the almost $300 price point is justified.
Still, this checks off requirement #1: Must be able to mount in a standard 19” rack.
Using standard Raspberry Pi PoE Hats, I’m able to power these machines via PoE. In addition, with the right switching hardware, I can remotely control the power to each port to provide remote power management.
This checks off requirement #2: Must be able to remotely manage the power state.
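For switches that support it, the per-port PoE state can be toggled remotely over SNMP using the standard POWER-ETHERNET-MIB (RFC 3621). The helper below is my own illustration of how that would look, assuming a switch with SNMP write access enabled; the switch address and community string are placeholders:

```shell
# Build the OID for pethPsePortAdminEnable (POWER-ETHERNET-MIB) for a given
# PSE group and port. Setting the value to 1 enables PoE, 2 disables it.
poe_admin_oid() {
  local group=$1 port=$2
  echo "1.3.6.1.2.1.105.1.1.1.3.${group}.${port}"
}

# Hypothetical usage against a switch at 192.0.2.10 (requires net-snmp):
#   snmpset -v2c -c private 192.0.2.10 "$(poe_admin_oid 1 4)" i 2   # port 4 off
#   snmpset -v2c -c private 192.0.2.10 "$(poe_admin_oid 1 4)" i 1   # port 4 on
poe_admin_oid 1 4
```

Power-cycling a Pi is then just a disable/sleep/enable sequence against its port.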
For remotely viewing the console, there are various options, including projects like PiKVM and products like NanoKVM. These make what was once a pricey endeavor, adding out-of-band management to a computer, relatively inexpensive.
Despite that, I still don’t want to add a PiKVM or NanoKVM per Raspberry Pi. While these solutions are cheap individually, the cost still adds up when you have to buy them in bulk!
So, I decided to buy a TESmart 16-port rack mount KVM switch. This does not allow remote viewing over the network; it simply switches 16 different KVM inputs to a single output. However, it can be controlled remotely, either via the network or RS-232. The thought here is to connect a SINGLE PiKVM, NanoKVM, or similar device to the KVM switch’s output.
I can then hook up to 16 Raspberry Pis (4 UCTronics chassis) to the KVM switch, and through a combination of the remote control on the KVM and a PiKVM or NanoKVM, I can interact with all 16 Raspberry Pi consoles over the network, albeit not at the same time. I think this is a reasonable compromise.
Cost-wise, I think this makes sense. A NanoKVM is roughly USD$55, so sixteen of them would cost USD$880! That’s not including the cost of cables and extra ports on a switch. By contrast, this KVM switch cost me about USD$300, and I’ve seen similar models for anywhere between USD$250-600.
For this to be truly seamless, I’ll likely have to modify the software of the PiKVM or NanoKVM so it can also control the KVM switch. I’m not ready to take on that project yet, but there is a clear path forward. So, I consider this checking off requirement #3: “Must be able to remotely manage the console”.
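To sketch what that eventual integration might look like: TESmart switches can be driven over TCP with short binary command frames. The frame layout and port number below reflect what is commonly reported for these units, but treat both as assumptions and verify them against the protocol manual for your specific model:

```shell
# Build the (assumed) "switch to input N" frame as a printf-escaped string:
# AA BB 03 01 <input> EE. Verify against your model's protocol documentation.
kvm_switch_frame() {
  printf '\\xAA\\xBB\\x03\\x01\\x%02x\\xEE' "$1"
}

# Hypothetical usage, piping the raw bytes to the switch with netcat
# (the hostname and port are placeholders):
#   printf "$(kvm_switch_frame 4)" | nc -w1 kvm.example.lab 5000
kvm_switch_frame 4
```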
With the UCTronics chassis, PoE hats, TESmart KVM switch, and a remote KVM solution (like PiKVM or NanoKVM), I think I have all my hardware requirements satisfied, and I can move on to RHEL and Satellite.
Red Hat Enterprise Linux
Since I’m planning to use these Raspberry Pis to replace x86_64 servers, I need to be able to run RHEL or RHEL-adjacent operating systems like CentOS or Fedora.
This is a hurdle, because Raspberry Pis are not a supported hardware platform for RHEL!
That being said, it is possible to run RHEL on Raspberry Pi, but it requires some special handling. Most notably, RHEL for ARM requires that the hardware have UEFI firmware.
UEFI Background
UEFI originated with Intel Itanium (ia64). The processor architecture required new firmware, and Intel/HP decided to create EFI (Extensible Firmware Interface) rather than leverage the Open Firmware standard used on many other workstation/server platforms. EFI was later ported to x86 and x86_64 as a replacement for the legacy IBM-compatible BIOS firmware and has become the standard for that architecture over the past few years. Along the way, EFI was renamed UEFI (Unified Extensible Firmware Interface) and was ported to ARM, but uptake in the ARM ecosystem has been spotty at best. RHEL on ARM requires UEFI, and all supported ARM machines include UEFI in non-volatile storage (ROM, flash, et cetera).
Raspberry Pis do not include UEFI and have their own unique firmware. However, the community has ported an open-source UEFI implementation to the Raspberry Pi 4 and 5. The executable binaries can be stored on an SD card and then chain-loaded by the Raspberry Pi’s native firmware. Since the UEFI binaries live on an ordinary, easily rewritten SD card rather than in a dedicated ROM or flash chip, I’m going to have to be careful when partitioning and installing an operating system. A wrong move and I could overwrite or delete the partition holding the UEFI binaries, thus preventing the system from booting.
There are two current repositories for different versions of the Raspberry Pi:
NB: at time of writing, Google does not bring up the RPi 5 repo in search; instead it links to an abandoned effort. The above link comes from a recent FreeBSD wiki page. I’ve not tested the RPi 5 at all; YMMV.
Installing the firmware
Getting the UEFI firmware installed is a fairly straightforward exercise. I’m going to be using the SD card to hold the UEFI firmware and boot from it.
First, wipe any existing partition table and create a new GPT partition table on the SD card. GPT, or GUID Partition Table, is a newer format that replaces the older DOS/MBR-style partition tables. A DOS/MBR-style partition table will work, but seems to cause issues with the RHEL installer.
Then, create a smallish partition of type EFI System. 100 MB should be sufficient.
## My device is mmcblk0, yours may be different. Proceed with caution or you may have data loss!
# # Clear any existing partition table:
# wipefs -a /dev/mmcblk0
/dev/mmcblk0: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/mmcblk0: 8 bytes were erased at offset 0x76e47fe00 (gpt): 45 46 49 20 50 41 52 54
/dev/mmcblk0: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/mmcblk0: calling ioctl to re-read partition table: Success
# fdisk /dev/mmcblk0
Welcome to fdisk (util-linux 2.41.3).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS (MBR) disklabel with disk identifier 0x4f27bc74.
###
### fdisk still defaults to DOS/MBR-style partition tables
### the 'g' command will force it into GPT mode.
###
Command (m for help): g
Created a new GPT disklabel (GUID: 714CC8E9-2EE4-4E1D-80B2-F088DE7F3D35).
Command (m for help): n
Partition number (1-128, default 1): 1
First sector (2048-62333918, default 2048): 2048
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-62333918, default 62332927): +100M
###
### There may still be existing data and filesystem headers on partitions
### they'll be erased later, but it's no harm to remove it now
###
Created a new partition 1 of type 'Linux filesystem' and of size 100 MiB.
Partition #1 contains a vfat signature.
Do you want to remove the signature? [Y]es/[N]o: y
The signature will be removed by a write command.
###
### Setting the partition type to EFI System (Type 1) is
### absolutely required. Otherwise things won't boot!
###
Command (m for help): t
Selected partition 1
Partition type or alias (type L to list all): 01
Changed type of partition 'Linux filesystem' to 'EFI System'.
Command (m for help): p
Disk /dev/mmcblk0: 29.72 GiB, 31914983424 bytes, 62333952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 714CC8E9-2EE4-4E1D-80B2-F088DE7F3D35
Device Start End Sectors Size Type
/dev/mmcblk0p1 2048 206847 204800 100M EFI System
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
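The same partitioning can be done non-interactively, which will come in handy when preparing many SD cards. Here is a sketch using sfdisk (also part of util-linux); the device name carries the same data-loss warning as above:

```shell
# Non-interactive equivalent of the fdisk session above: a fresh GPT label and
# a single 100 MiB partition of type "U" (sfdisk's shortcut for EFI System).
# DOUBLE-CHECK the target device -- this destroys the partition table!
DEV=/dev/mmcblk0   # my SD card; yours may differ

make_esp_table() {
  sfdisk "$1" <<'EOF'
label: gpt
size=100MiB, type=U
EOF
}

# Real run (destructive!):
#   make_esp_table "$DEV"
```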
Next, make a VFAT filesystem on that partition and extract the EFI archive at the root of that filesystem.
# mkfs.vfat /dev/mmcblk0p1
mkfs.fat 4.2 (2021-01-31)
# mount /dev/mmcblk0p1 /mnt
# pushd /mnt
# unzip /path/to/RPi4_UEFI_Firmware_v1.50.zip
Archive: /path/to/RPi4_UEFI_Firmware_v1.50.zip
inflating: RPI_EFI.fd
inflating: bcm2711-rpi-4-b.dtb
inflating: bcm2711-rpi-400.dtb
inflating: bcm2711-rpi-cm4.dtb
inflating: config.txt
inflating: fixup4.dat
inflating: start4.elf
creating: overlays/
inflating: overlays/miniuart-bt.dtbo
inflating: overlays/upstream-pi4.dtbo
inflating: Readme.md
creating: firmware/
inflating: firmware/LICENCE.txt
creating: firmware/brcm/
inflating: firmware/brcm/brcmfmac43455-sdio.clm_blob
inflating: firmware/brcm/brcmfmac43455-sdio.bin
inflating: firmware/brcm/brcmfmac43455-sdio.txt
inflating: firmware/brcm/brcmfmac43455-sdio.Raspberry
inflating: firmware/Readme.txt
# popd
# umount /mnt
# sync
Now the SD card should be ready. The Pi should boot into the UEFI firmware:
Creating RHEL Installation Media
RHEL can be downloaded from Red Hat’s portal. Since I work for Red Hat, I have an employee subscription, but no-cost RHEL subscriptions are available for selected use cases.
I always use the Boot ISO image. This ISO doesn’t contain packages and needs network connectivity to either Satellite, a repository server, or the Red Hat CDN to complete an install. The Full ISO images now exceed 10 GiB and no longer fit on my USB flash drives.
In Fedora, the Fedora Media Writer can be used to write an ISO image to a USB drive.
There are so many tools out there, so use whatever is available/convenient. As a last resort, a simple dd command can be used:
# dd if=/path/to/rhel.iso of=/dev/myusbdrive
Just be sure the ISO is the ARM64 (aarch64) version! I’ve wasted several hours of my life by not paying close attention and trying to use the x86_64 version.
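One cheap guard against that mistake: RHEL install ISOs carry a .treeinfo file at their root, and its arch key names the target architecture. The helper below is my own convenience (not a Red Hat tool) for checking a mounted ISO before writing it out:

```shell
# Return success only if the given .treeinfo declares the expected arch.
check_iso_arch() {
  local treeinfo=$1 want=$2
  grep -Eq "^arch *= *${want}\$" "$treeinfo"
}

# Hypothetical usage after "mount -o loop,ro /path/to/rhel.iso /mnt":
#   check_iso_arch /mnt/.treeinfo aarch64 || echo "Wrong architecture ISO!"
```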
Pre-requisite: Enable more than 3 GiB of RAM
In the RPi4 UEFI firmware, there is a default setting that restricts the usable amount of RAM to 3 GiB. This works around a bug present in some older versions of the Linux kernel. This default seems, quite frankly, out of date, since the affected kernel versions are several years old.
RHEL 9 has a relatively recent kernel and does not suffer from this bug, so we should disable this limitation and use the full amount of RAM.
I forgot to change this setting at first, and noticed a high incidence of failures during installation due to RAM disk corruption. Once I enabled the full amount of RAM, the issue disappeared completely.
To change this setting and enable the full amount of RAM:
- On boot, press Esc to enter the UEFI settings menu:
- Select Device Manager:
- Select Raspberry Pi Configuration:
- Select Advanced Configuration:
- Change Limit RAM to 3 GB to Disabled:
- Press F10 to save:
- Either hard reset the machine or navigate to the main menu and select Reset:
- On subsequent boots, the UEFI settings menu should list 8 GiB of RAM:
Booting/Installing RHEL
To boot from the USB thumb drive, access the EFI settings menu by pressing Esc.
Then navigate to the Boot Manager menu:
And select the USB device that has the installation media. In my case, I used a SanDisk Cruzer USB drive, so it’s relatively easy to find.
After this, RHEL boots and the installation process is basically the same as on x86_64!
ONE MAJOR EXCEPTION IS THAT YOU MUST PROTECT THE EFI SYSTEM PARTITION FROM BEING OVERWRITTEN!
If the RHEL installer reformats the EFI partition, or wipes the partition table completely, then the UEFI binaries that were installed will, of course, go away. This is the trouble with having the EFI binaries on an SD card as opposed to in ROM or on-board flash memory.
Manual partitioning is required to guarantee the EFI System partition is protected, but it was a bit tricky. There seem to be some bugs in Anaconda (the RHEL installer) that cause it to freeze or to incorrectly think the EFI partition is on LVM.
In the end, this is the partition table I settled on:
Again, it’s very important not to reformat the EFI System Partition!
Once the install is successful, Anaconda will generate a kickstart file that can be used to repeat the installation. This kickstart file can be used as a template for other machines, so that we can avoid using the installer GUI in the future.
After installation, the generated kickstart should be located in /root/anaconda-ks.cfg.
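Reusing that kickstart amounts to pointing the installer at it with the inst.ks boot option, appended to the installer’s kernel command line. The URL below is a placeholder for wherever the file ends up hosted:

```
inst.ks=http://192.0.2.1/rpi/anaconda-ks.cfg
```

Satellite automates exactly this later, generating the kickstart and boot configuration itself.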
The kickstart is pretty standard, with the major customization being in the disk partitioning section:
# Generated using Blivet version 3.6.0
ignoredisk --only-use=mmcblk1,sda
# System bootloader configuration
bootloader --append="crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M" --location=mbr --boot-drive=mmcblk1
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part /boot/efi --fstype="efi" --noformat --onpart=mmcblk1p1 --fsoptions="umask=0077,shortname=winnt"
part pv.2270 --fstype="lvmpv" --ondisk=sda --size=4095
part /boot --fstype="xfs" --ondisk=mmcblk1 --size=1024
part / --fstype="xfs" --ondisk=mmcblk1 --size=20480
volgroup rhel_rpi1 --pesize=4096 pv.2270
logvol swap --fstype="swap" --size=4092 --name=swap --vgname=rhel_rpi1
This is not the exact partition scheme I wanted, but it’s close enough, and is a good starting point for a more refined configuration.
Since I’m able to get a basic RHEL installation working, this checks off my requirement #4 “Must run Red Hat Enterprise Linux”!
Now, hopefully, after rebooting at the end of installation, RHEL comes up without any additional effort. This does seem to be the case with the latest version of the UEFI firmware (1.50) and RHEL 9.7. Earlier versions of either RHEL or the UEFI firmware had issues with configuring a boot entry for RHEL, and that configuration needed to be done manually after installation.
Configuring the EFI boot menu
This section is probably no longer needed, but may be useful in case the installer doesn’t create a boot entry, or if there is a desire to create a custom one for some reason.
First, enter the EFI settings menu by pressing Esc during the boot sequence:
Next, navigate to Boot Maintenance Manager:
Next, navigate to Boot Options:
Next, navigate to Add Boot Option:
Select the SD Card:
I still have my USB drive plugged in at this point, so it shows up as a volume labeled ANACONDA. The SD Card has no volume label, and will have HD(1,GPT) somewhere in the middle if my instructions were followed.
At this point, we are now navigating the filesystem inside the EFI System volume.
Navigate to <EFI>:
Navigate to <redhat>:
Select shim.efi:
Add a description:
Select Commit Changes and Exit:
This newly created boot option will be placed last in the boot order. This is likely not desired, but is easy to change using the Change Boot Order screen:
This screen can be a bit confusing. It shows the current boot order. The top-most item will be attempted first, and then if that fails, each subsequent item will be attempted.
To change the order, press Enter, then use the + and - keys to move items up and down until the desired order is displayed, and then hit Esc. Now select Commit Changes and Exit to save the boot order.
Now, on reboot, provided our Red Hat Enterprise Linux option is the default, the Raspberry Pi should boot directly into RHEL!
We can always manually select it by going into the Boot Manager:
and selecting Red Hat Enterprise Linux:
Provisioning over the Network with Satellite
So, the installation procedure above works, but it’s a bit labor-intensive, and I’d rather not repeat it by hand, especially if I build this out to a full 16 Raspberry Pis!
At this juncture, I can’t automate everything. Notably, installing the UEFI firmware and partitioning the SD card will have to be done by hand.
However, I can install and reinstall RHEL over and over using a very similar process to how I provision x86_64 RHEL Machines.
The provisioning part of this setup is very likely not supported by Red Hat. Other portions, like hosting ARM64 content in Satellite, are supported. Since this is my home lab, I don’t care about support, but still: caveat emptor.
This section assumes an already working Satellite provisioning setup for x86_64 servers!
Importing the RHEL Content into Satellite
The first step here is getting our content into Satellite. This is relatively straightforward, since the ARM architecture follows the standard RHEL repository layout. We have AppStream and BaseOS repositories, as well as Kickstart repositories.
Kickstart repositories are designed for provisioning and are tied to a specific RHEL release, e.g. 9.4, 10.1, et cetera.
The normal AppStream and BaseOS repositories are rolling releases, and are used post-provisioning for patching and installing new software.
These repositories need to be synced to the Satellite, added to a Content View, and then, optionally, that Content View promoted to a Lifecycle Environment.
All this setup is really beyond the scope of this post, but below are the hammer commands to create a Content View. In my environment, I have one Lifecycle Environment, named Production, and the commands below will promote the new Content View into that Lifecycle Environment and create a corresponding Activation Key.
### Setting a shell variable for the organization ID. This will
### likely be 1, but depends on your installation.
# ORG_ID=1
### Enable the various Repository sets for RHEL. Not all of these are
### strictly needed.
###
### Note the Kickstart repositories need to be a specific version,
### 9.7, whereas most other repositories just use the rolling 9
### release.
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux 9 for ARM 64 - BaseOS (Kickstart)" --basearch aarch64 --releasever 9.7
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux 9 for ARM 64 - AppStream (Kickstart)" --basearch aarch64 --releasever 9.7
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux 9 for ARM 64 - AppStream (RPMs)" --basearch aarch64 --releasever 9
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux 9 for ARM 64 - BaseOS (RPMs)" --basearch aarch64 --releasever 9
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux 9 for ARM 64 - Supplementary (RPMs)" --basearch aarch64 --releasever 9
# hammer repository-set enable --organization-id ${ORG_ID} --name "Red Hat Satellite Client 6 for RHEL 9 aarch64 (RPMs)" --basearch aarch64
### Synchronize all this content down to the Satellite server.
# hammer product synchronize --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux for ARM 64"
### Optionally add this product to the daily sync plan. I think this
### sync plan is the default now for Satellite, but I'm not sure.
# hammer product set-sync-plan --organization-id ${ORG_ID} --name "Red Hat Enterprise Linux for ARM 64" --sync-plan "Daily Sync"
### Generate a list of repositories. This command will generate a
### correct list ONLY if this is the first time adding any ARM64
### products/repositories to the Satellite.
# REPOS=$(hammer repository list --organization-id ${ORG_ID} | egrep 'aarch64' | awk '{print $1","}' | tr -d '\n' | sed -e 's/,$//')
### Create the Content View
# hammer content-view create --organization-id ${ORG_ID} --name "RHEL 9 ARM64" --repository-ids ${REPOS}
### Publish the Content View
# hammer content-view publish --organization-id ${ORG_ID} --name "RHEL 9 ARM64"
### Promote the Content View
# hammer content-view version promote --content-view "RHEL 9 ARM64" --organization-id ${ORG_ID} --to-lifecycle-environment "Production"
### Create Activation Key. Since I'm using Simple Content Access
### mode, I don't have to attach a subscription.
# hammer activation-key create --name RHEL9_ARM64 --organization-id ${ORG_ID} --content-view "RHEL 9 ARM64" --lifecycle-environment "Production"
After all those hammer commands, I have all the content-related parts of the Satellite configuration done.
Satellite Provisioning Flow
In Satellite, host provisioning is generally “network-driven”. What that means is that the host always boots from the network and Satellite serves different network boot files to influence the behavior of the host.
So, when Satellite is provisioning the host, i.e. it’s listed in build status in Satellite, the network boot files are generated so that they instruct the host to run the installer with a Satellite-generated kickstart file.
Once provisioning is finished, then those network boot files are modified to instruct the host to boot off local media (e.g. the SD Card or attached USB or SATA drive).
By managing hosts this way, I can re-provision RHEL by just clicking a button in Satellite and rebooting the host.
There are several challenges to getting this to work.
First Challenge: DHCP Booting
The first challenge is getting the DHCP server to send ARM-specific boot files to ARM servers.
By default, the configuration for Satellite’s built-in ISC-DHCP server only includes logic for handling x86_64 and x86 machines.
When a DHCP client requests and receives a lease, there are a large number of DHCP options that can be passed to augment the base protocol. These options are defined by RFC2939 and the IANA maintains a list of DHCP Options. Option 93, Client System Architecture, is the relevant option for booting. Option 93 is defined by RFC4578 and, again, the IANA maintains a current list of Processor Architecture Types. Interestingly, this list is for DHCPv6 and I could not find one for IPv4 DHCP, but this list appears to be accurate for both variants of DHCP.
The IANA Processor Architecture Types list defines 0x06 for x86/i386, 0x07 for x86_64/amd64, 0x09 for EBC (EFI Byte Code, a generic EFI type), and 0x0b for ARM64. These values are for traditional TFTP-based netbooting, which I will be using. There is a separate set of codes for HTTP-based netbooting.
In the default /etc/dhcp/dhcpd.conf file, there is an if/else block that controls this:
option architecture code 93 = unsigned integer 16 ;
if option architecture = 00:06 {
filename "grub2/shim.efi";
} elsif option architecture = 00:07 {
filename "grub2/shim.efi";
} elsif option architecture = 00:09 {
filename "grub2/shim.efi";
} else {
filename "pxelinux.0";
}
These filenames are relative to the TFTP root directory. In the case of RHEL/Satellite, the default is /var/lib/tftpboot.
Based on this, clients with architecture code 0x0b fall through to the last else block and will be sent /var/lib/tftpboot/pxelinux.0, which is wrong: pxelinux.0 is an x86 executable for traditional BIOS-style PXE boot.
I’ll need to add a stanza for the ARM64 architecture. But before I can do that, I need to get the appropriate files and place them in /var/lib/tftpboot.
Second Challenge: Getting Grub2 and UEFI Shim for ARM64
For UEFI booting on i386 and x86_64, there are two main boot executables that are used:
- shim.efi
- grub$(arch).efi
shim.efi is a signed boot executable for use with computers requiring Secure Boot. These machines require that the first boot executable be cryptographically signed.
grub$(arch).efi is a copy of the Grand Unified Boot Loader (GRUB). GRUB is responsible for loading the Linux kernel and actually starting the Linux boot process.
shim.efi is loaded first, and it can then load subsequent executables. It immediately tries to find and load a corresponding GRUB executable, attempting a list of potential file names including grub.efi, grubx64.efi, and others.
The UEFI firmware on Raspberry Pis does not require Secure Boot, so an equivalent shim.efi is not needed, but I’ll use it anyway just to keep things consistent. Since I’m hosting both x86_64 and ARM64 on the same Satellite server, the architecture code (aa64 for ARM, x64 for x86_64) needs to be included in the filename. Thus, the files for ARM will be named shimaa64.efi and grubaa64.efi.
Where does one obtain these files? They are contained in 3 packages, and are also installed by default in any running ARM installation. In the list below, grub2-efi-aa64-modules is not needed for initial boot and install, but will become important later.
- shim-aa64
- grub2-efi-aa64
- grub2-efi-aa64-modules
However, if I try to install these on my Satellite Server:
# dnf install -y grub2-efi-aa64
Updating Subscription Management repositories.
Last metadata expiration check: 2:01:45 ago on Mon 08 Dec 2025 02:31:14 PM EST.
No match for argument: grub2-efi-aa64
Error: Unable to find a match: grub2-efi-aa64
These packages (except grub2-efi-aa64-modules) are not available in the x86_64 RHEL repositories.
So, that leaves a few methods to get them:
- Download the RPMs from access.redhat.com and extract what we need
- Get the RPMs from an installation ISO
- Get the files from a running system
Since I had already done a test install of RHEL from a USB stick, I chose option #3. I’ll leave the other two as an exercise for the reader.
My test host has a hostname of rpi1.private.opequon.net. First, I ensure that all three of the above packages are installed, and then the following commands synchronize all the appropriate files:
# # ON SATELLITE SERVER
# cd /var/lib/tftpboot/
# # Copy files from grub2-efi-aa64-modules
# rsync -avz --delete root@rpi1.private.opequon.net:/usr/lib/grub/arm64-efi .
# chown -R foreman-proxy:root arm64-efi
# # Copy files from grub2-efi-aa64 and shim-aa64
# scp root@rpi1.private.opequon.net:/boot/efi/EFI/redhat/*aa64*.efi .
root@rpi1.private.opequon.net's password:
grubaa64.efi 100% 2622KB 20.0MB/s 00:00
mmaa64.efi 100% 873KB 9.4MB/s 00:00
shimaa64-redhat.efi 100% 956KB 10.0MB/s 00:00
shimaa64.efi 100% 956KB 9.9MB/s 00:00
# chown foreman-proxy:root *aa64*.efi
# chmod 0644 *aa64*.efi
# ls -la *aa64*.efi
-rw-r--r--. 1 foreman-proxy root 2684536 Dec 8 16:43 grubaa64.efi
-rw-r--r--. 1 foreman-proxy root 893760 Dec 8 16:43 mmaa64.efi
-rw-r--r--. 1 foreman-proxy root 978528 Dec 8 16:43 shimaa64.efi
-rw-r--r--. 1 foreman-proxy root 978528 Dec 8 16:43 shimaa64-redhat.efi
This copies a few unnecessary files, like mmaa64.efi, which is the MOK (Machine Owner Key) manager used with Secure Boot, and shimaa64-redhat.efi, which has the same contents as shimaa64.efi, but that’s not a big deal.
Now with these in place, we can update /etc/dhcp/dhcpd.conf to include a reference to these executables:
option architecture code 93 = unsigned integer 16 ;
if option architecture = 00:06 {
filename "grub2/shim.efi";
} elsif option architecture = 00:07 {
filename "grub2/shim.efi";
} elsif option architecture = 00:09 {
filename "grub2/shim.efi";
} elsif option architecture = 00:0b {
filename "grub2/shimaa64.efi";
} else {
filename "pxelinux.0";
}
Unfortunately, this configuration is NOT permanent! dhcpd.conf is under the control of Satellite’s Puppet modules and will get rewritten the next time satellite-maintain is run. However, fixing that is a tomorrow problem.
Third Challenge: Loading GRUB Modules
With the DHCP changes and efi files in place, I can run an installation. However, once the installation is complete, the system doesn’t come up on reboot! There are errors about chainloading.
Now, this doesn’t mean that the installation wasn’t successful! If I were to navigate through the UEFI console and boot from the Red Hat Enterprise Linux boot menu entry, the machine comes up successfully.
So, what’s going on here?
Recall that Satellite is controlling the boot process through network booting configuration files. At the end of the install process, there is a call-back to the Satellite server that indicates the installation has completed. This causes the Satellite server to change the net boot files to instruct the host to boot from local disk.
The DHCP server tells the host to load shimaa64.efi, which then loads grubaa64.efi. GRUB looks for configuration and, by default, it will look in the same place that grubaa64.efi came from: the TFTP server.
On the TFTP server, we can find our GRUB configuration under /var/lib/tftpboot/grub2/:
# cd /var/lib/tftpboot/grub2
# ls -la
###
### I've cut out a lot of files here for clarity/brevity
###
-rw-r--r--. 1 foreman-proxy root 320 Jan 20 2021 grub.cfg
-rw-r--r--. 1 foreman-proxy foreman-proxy 1771 Jan 20 23:30 grub.cfg-01-d8-3a-dd-f2-ee-50
-rw-r--r--. 1 foreman-proxy foreman-proxy 1771 Jan 20 23:30 grub.cfg-d8:3a:dd:f2:ee:50
GRUB will go through different file patterns to find the correct one. In Satellite, the Ethernet MAC address is used to tie a specific file to a specific host. In the above example, there are two different filename variations for a machine with MAC address d8:3a:dd:f2:ee:50. I’m not sure which one GRUB uses, but Satellite generates them both.
After GRUB runs through all its filename patterns, it will default to grub.cfg, but in a Satellite provisioning situation, that should never happen.
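For poking around the TFTP tree, it’s handy to derive those filenames from a MAC address. The 01- prefix is the ARP hardware type for Ethernet. A small helper (my own convenience, not a Satellite tool):

```shell
# Print both grub.cfg filename variants Satellite generates for a MAC address.
grub_cfg_names() {
  local mac=$1                      # e.g. d8:3a:dd:f2:ee:50
  echo "grub.cfg-01-${mac//:/-}"    # "01" = ARP hardware type for Ethernet
  echo "grub.cfg-${mac}"
}
grub_cfg_names d8:3a:dd:f2:ee:50
```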
These files tend to be fairly complicated scripts, and the ones Satellite provides come from templates that are heavily generalized to work with RHEL, Fedora, and several other distributions.
There are, of course different templates depending on what action is
expected from the host. In this case, since the installation was
completed, Satellite is expecting the host to boot from local boot, so
the PXEGrub2 default local boot template was used.
While I’m troubleshooting, I can directly modify these files in
/var/lib/tftpboot.
A simplified version looks like this:
# Loading modules for GPT Partitions, FAT Filesystem and Chainloading
insmod part_gpt
insmod fat
insmod chain
# Menu Entry to display
menuentry 'Chainload Grub2 EFI from ESP' --id local_chain_hd0 {
# Search for the file to chainload
unset chroot
search --file --no-floppy --set=chroot /EFI/fedora/shim.efi
# If this file is found, chainload it
if [ -f ($chroot)/EFI/fedora/shim.efi ]; then
chainloader ($chroot)/EFI/fedora/shim.efi
echo "Found /EFI/fedora/shim.efi at $chroot, attempting to chainboot it..."
sleep 2
boot
fi
}
“Chainloading” in this case means using the copy of GRUB we pulled down from the network to load another copy of GRUB (or, if it were a different operating system, that operating system’s bootloader). This second copy of GRUB is then used to load the RHEL installation on the local storage.
Chainloading (and the chainloader command) is NOT part of the basic
grub.efi executable, at least so far as ARM is concerned.
Therefore, that functionality needs to be loaded in at runtime from a
module. This is the intent of the insmod chain command.
For some reason, the insmod command fails to find the module to
load when running on ARM. I’m not sure exactly how it works on
x86_64. Debugging here is difficult. The chainload module is either
part of the base grub.efi for x86_64 or the insmod is somehow able
to figure out how to load it. I’m not sure.
Regardless, I was able to figure out how to get insmod to work
correctly and load the ARM version of modules from Satellite’s tftp
server! This requires changing the insmod command to include some
hints on where to find the actual module.
From:
insmod chain
change to:
insmod (tftp)grub2/arm64-efi/chain.mod
Similar changes can be made to the other insmod statements, but
chain.mod was the one causing the issue.
These hints tell GRUB to load this module from the TFTP server from a specific directory. Now that it can successfully get this module, the boot can continue!
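For reference, staging the needed modules onto the TFTP server can be sketched like this. The source path and package name (grub2-efi-aa64-modules) are assumptions based on my setup, so verify the locations on your own Satellite server:

```shell
# Copy the GRUB arm64-efi modules into the TFTP tree so the
# "(tftp)grub2/arm64-efi/..." insmod hints can find them.
stage_grub_modules() {
  src="$1"    # e.g. /usr/lib/grub/arm64-efi (from grub2-efi-aa64-modules)
  dst="$2"    # e.g. /var/lib/tftpboot/grub2/arm64-efi
  mkdir -p "$dst" || return 1
  for m in part_gpt fat chain; do
    cp "$src/$m.mod" "$dst/" || return 1
  done
}

# Usage (paths are examples):
# stage_grub_modules /usr/lib/grub/arm64-efi /var/lib/tftpboot/grub2/arm64-efi
```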
Fourth Challenge: Post-Install Template
Modifying GRUB files in the /var/lib/tftpboot directory works, but
Satellite owns these files and will rewrite these for various reasons.
I need to modify the Satellite template used to generate this file.
Satellite uses the template referenced by the localboot template
parameter to generate the post installation version of this GRUB
configuration. Unfortunately, this parameter is not exposed in the UI
except at a global level. Since this variant is ONLY applicable
to ARM, I don’t want to override this template FOR EVERY MACHINE.
To solve this, I can use Satellite Host Groups functionality to create a Host Group just for Raspberry Pi hardware. The Host Group will have a parameter set to reference the updated templates for localboot.
Putting it all Together: Satellite Configuration
Having worked through those challenges, I now have all the information I need to configure Satellite to provision a host.
- Create new templates for ARM, including a new localboot template and a partition table
- Create a hostgroup using the Content View, Partition Table, and Local Boot template
- Ensure my Raspberry Pis are set to boot from the network first
- Create a host in that host group to provision
- Reboot and watch it run!
Putting it all Together: Satellite Templates
To complete the configuration, I had to create two provisioning templates and a partition table in Satellite.
- Chainload Template
I cloned the pxegrub2_chainload template into pxegrub2_chainload_RPI
and modified it to include the insmod changes.
<%#
kind: snippet
name: pxegrub2_chainload
model: ProvisioningTemplate
snippet: true
description: |
In Foreman's typical PXE workflow, managed hosts are configured to always boot from network and inventory build flag dictates if they should boot into installer (build is on) or boot from local drive (build is off). This template is used to chainload from EFI ESP for systems which booted from network. It is not as straightforward as in BIOS and EFI boot file must be found on an ESP partition.
This will only be needed when provisioned hosts are set to boot from network, typically EFI firmware implementations overrides boot order after new OS installation. This behavior can be set in EFI, or "efi_bootentry" host parameter can be set to "previous" to override boot order back to previous (network) setting. See efibootmgr_netboot snippet for more info.
-%>
<%
paths = [
'/EFI/redhat/shimaa64.efi',
'/EFI/redhat/grubaa64.efi'
]
config_paths = [
'/EFI/fedora/grub.cfg',
'/EFI/redhat/grub.cfg',
'/EFI/centos/grub.cfg',
'/EFI/rocky/grub.cfg',
'/EFI/almalinux/grub.cfg',
'/EFI/debian/grub.cfg',
'/EFI/ubuntu/grub.cfg',
'/EFI/sles/grub.cfg',
'/EFI/opensuse/grub.cfg',
]
-%>
insmod (tftp)grub2/arm64-efi/part_gpt.mod
insmod (tftp)grub2/arm64-efi/fat.mod
insmod (tftp)grub2/arm64-efi/chain.mod
<%=
default_connectefi_option = 'scsi'
connectefi_option = @host ? host_param('grub2-connectefi', default_connectefi_option) : default_connectefi_option
connectefi_option = nil if connectefi_option == 'false'
"connectefi #{connectefi_option}" if connectefi_option
%>
menuentry 'Chainload Grub2 EFI from ESP' --id local_chain_hd0 {
echo "Chainloading Grub2 EFI from ESP, enabled devices for booting:"
ls
<%
paths.each do |path|
-%>
echo "Trying <%= path %> "
unset chroot
# add --efidisk-only when using Software RAID
search --file --no-floppy --set=chroot <%= path %>
if [ -f ($chroot)<%= path %> ]; then
chainloader ($chroot)<%= path %>
echo "Found <%= path %> at $chroot, attempting to chainboot it..."
sleep 2
boot
fi
<%
end
-%>
echo "Partition with known EFI file not found, you may want to drop to grub shell"
echo "and investigate available files updating 'pxegrub2_chainload' template and"
echo "the list of known filepaths for probing. Available devices are:"
echo
ls
echo
echo "If you cannot see the HDD, make sure the drive is marked as bootable in EFI and"
echo "not hidden. Boot order must be the following:"
echo "1) NETWORK"
echo "2) HDD"
echo
echo "The system will poweroff in 2 minutes or press ESC to poweroff immediately."
sleep -i 120
halt
}
- Local boot template
The pxegrub2_chainload_RPI template above is a snippet; the original
snippet was used by the PXEGrub2 default local boot template.
I cloned the PXEGrub2 default local boot template into PXEGrub2
default local boot clone RPI and modified it to use my newly created
pxegrub2_chainload_RPI snippet.
<%#
kind: PXEGrub2
name: PXEGrub2 default local boot
model: ProvisioningTemplate
description: |
The template to render Grub2 bootloader configuration for provisioned hosts,
that still boot from the network.
Hosts are instructed to boot from the first local medium.
Do not associate or change the name.
-%>
set default=<%= global_setting("default_pxe_item_local", "local") %>
set timeout=20
echo Default PXE local template entry is set to '<%= global_setting("default_pxe_item_local", "local") %>'
<%= snippet "pxegrub2_chainload_RPI" %>
- Partition table
Using the kickstart file generated when I installed with a USB drive,
I created a partition table template named RPi SD-Card and SATA Drive
Default.
In order for the Partition Table to show up in Create Host or
Create Host Group screens, it must be associated with an individual
Operating System version.
Since I added the RHEL 9.7 Kickstart, I need to add this template specifically to the Red Hat 9.7/RHEL 9.7 Operating System.
This is found under the Hosts->Provisioning Setup->Operating System
menu item:
In Partition Table tab for the specific RHEL Version (in our case RedHat 9.7):
Make sure the partition table template is in the selected items column!
The contents of the partition table template are:
# Default RPi to use only attached SD Card
# Generated using Blivet version 3.6.0
ignoredisk --only-use=mmcblk1,sda
# System bootloader configuration
bootloader --append="crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M" --location=mbr --boot-drive=mmcblk1
# Partition clearing information
zerombr
clearpart --linux
# Disk partitioning information
part /boot/efi --fstype="efi" --noformat --onpart=mmcblk1p1 --fsoptions="umask=0077,shortname=winnt"
part /boot --fstype="xfs" --ondisk=mmcblk1 --size=1024
part / --fstype="xfs" --ondisk=mmcblk1 --size=20480 --grow
part pv.1 --fstype="lvmpv" --ondisk=sda --grow
volgroup rhel_rpi --pesize=4096 pv.1
logvol swap --fstype="swap" --size=4092 --name=swap --vgname=rhel_rpi
I decided on putting /boot and / on the SD Card as normal
partitions, rather than Logical Volumes. Then on the SATA drive, I
put that drive’s entire contents into a volume group called
rhel_rpi. In the above table, I only put a single logical volume,
one for swap. However, in future, I’ll likely add one for /var and
other locations.
I’m not sure whether this layout is optimal, but it does seem to cause the fewest issues on provisioning/re-provisioning.
Normally, I would just have a kickstart that would wipe ALL the drives and go from there. This is not possible here, because the EFI system partition must be preserved.
I had three scenarios I needed to cover:
- Only the EFI partition on the SD Card and no partition table on the SATA drive. This is the “brand new” state.
- Multiple partitions on the SD Card, and no partition table on the SATA drive. This is unlikely to happen realistically, but could, if I replaced the SATA drive.
- Multiple partitions on the SD Card, and partitions on the SATA drive. This is the normal re-provisioning state.
In the kickstart, the clearpart --linux directive was very useful.
It clears ONLY Linux-related partitions, so it leaves the EFI
System Partition unmodified and intact, but will delete any other
partitions.
This solves scenario #3 above, which is likely the most common, but partitioning would still fail on #1 and #2 because there wasn’t a partition table on the SATA drive.
I tried multiple arguments to clearpart to solve #1 and #2, but
ended up realizing that zerombr was what I actually needed.
zerombr initializes a partition table only on disks that need it.
So it effectively ignores the SD Card, since it will always have a
partition table, but will, if needed, put a partition table on the
SATA drive.
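Condensed, the two kickstart directives that together cover all three scenarios are:

```conf
zerombr              # write a partition table to any disk that lacks one (the new SATA drive)
clearpart --linux    # remove only Linux partitions; the EFI System Partition survives
```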
Putting it all together: Host Group
With those templates and provisioning templates, I can now create the Host Group:
In the host group tab, I set the Lifecycle Environment, Content View,
and Deploy On. In Content Source, yavanna.private.opequon.net is my
Satellite host.
In the Operating System tab, I select the RHEL Release that corresponds with our Kickstart repository, update the Partition Table to the template I created, and then set the PXE loader to Grub2 UEFI.
In the Parameters tab, we set kt_activation_keys to the Activation
Key that was created and local_boot_PXEGrub2 to the template we
created for local boot.
Putting it all together: Changing Boot Order
For this to work, a host must have UEFI PXEv4 set as the default
boot option. This is the default unless one has installed via a USB
drive or manually modified the boot order.
Putting it all together: Creating Hosts
Now that I’ve got everything in place, I can create hosts.
Navigate to Create Host in the Satellite Menu:
Fill out a hostname and select the Host Group for the Raspberry Pis:
The Host Group contains almost all the basic parameters needed to provision this host, so there’s not much else to enter:
The Network Interface still needs to be defined:
Like so:
With the Host defined in Satellite, I can boot the Raspberry Pi and it should start to PXE boot:
And then continue onto GRUB:
And after some time, it should continue to installing:
Summary
After doing all this configuration in Satellite, I now have a very similar process to provisioning these Raspberry Pis as I do my x86_64 machines.
This technically checks off my requirement #5 “Must be able to provision over the network and ideally through Red Hat Satellite”.
However, I fear this setup is a bit fragile. I know that the DHCPd configuration will be overwritten next time I upgrade Satellite. I’ve started to look into patching Satellite, but it may be easier to run a separate DHCP server that is not entirely managed by Satellite.
In the same vein, I’m not sure if the UEFI binaries put in
/var/lib/tftpboot will stay in place or not. Upgrading Satellite
may wipe them out, but I’m not 100% sure yet. Regardless, those
were set up by hand, not in any supported way, so they won’t be
updated and could potentially become out of date.
That being said, I’m pleased with how this came together, especially for something unsupported.
Cost Calculations
Prices are going to fluctuate, so this section will likely be out of date very quickly. This analysis is current as of Jan 2026.
Each Raspberry Pi was configured identically:
| Item | Price |
|---|---|
| Raspberry Pi | $74.99 |
| SD Card (32 GiB) | $11.49 |
| SSD (120GiB) | $23.99 |
| PoE Hat | $24.99 |
| HDMI Cable | $10.99 |
| Total | $146.45 |
The UCTronics Chassis cost $269.99; I divided by 4 to get a per-Pi price of $67.50.
The TESMart KVM switch cost $349.99; I divided that by 16 to get a per-Pi price of $21.87.
I haven’t bought a NanoKVM or equivalent for this system yet, but I put in a price of $100 or $6.25 per Pi.
| Item | Price |
|---|---|
| Raspberry Pi + accessories | $146.45 |
| Chassis (per Pi cost) | $67.50 |
| KVM Switch (per Pi Cost) | $21.87 |
| IP KVM (per Pi Cost) | $6.25 |
| TOTAL per Pi Cost | $242.07 |
Maxing this configuration out with 16 total Pis yields an estimated cost of $3873.12.
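The arithmetic above can be double-checked with a quick awk one-liner. Prices are copied from the tables, and the per-Pi figure is rounded before multiplying, matching how the table totals were computed:

```shell
# Recompute the per-Pi and 16-Pi fleet costs from the component prices.
summary=$(awk 'BEGIN {
  pi      = 74.99 + 11.49 + 23.99 + 24.99 + 10.99  # Pi + SD + SSD + PoE HAT + HDMI
  chassis = 269.99 / 4                             # UCTronics chassis holds 4 Pis
  kvm     = 349.99 / 16                            # 16-port TESMart KVM switch
  ipkvm   = 100.00 / 16                            # estimated NanoKVM share
  per_pi  = sprintf("%.2f", pi + chassis + kvm + ipkvm) + 0
  printf "per-Pi: %.2f  fleet of 16: %.2f", per_pi, per_pi * 16
}')
echo "$summary"   # per-Pi: 242.07  fleet of 16: 3873.12
```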
This is not including the price of a PoE-compatible switch!
Yikes!
If one is comparing with NEW x86_64 servers, or even refurbished recent models (e.g. an HPE Gen 10 Plus), then this is cheaper. Gathering quotes on Server Monkey, a refurbished HPE DL360 Gen 10 Plus chassis (without processors, RAM, or disk) starts at $3,100.
However, if I go back to slightly older models, an HPE DL360 Gen 10 with a 16-core processor, 128 GiB RAM, and a 4 TB SATA SSD can be priced out in the upper $2,000s to low $3,000s. I feel pretty confident that a single server configured like that could run at least 16 virtual machines, if not more, and be similarly performant, if not better.
On the other hand, 16 independent hosts have a lower blast radius for failure compared to a single machine, so that price for the HPE server may be an underestimate. I’d want more than one drive for redundancy, since a single drive failure would wipe out ALL my virtual machines, and I’d probably want more than 128 GiB of RAM to account for operating system overhead.
Server Monkey is also not the cheapest vendor. I would likely be able to find a better deal on eBay or somewhere else, eventually. This is how I’ve acquired most of my server hardware anyhow, but it requires patience and diligence.
That being said, there’s also some flexibility in the price of the Pi setup. I think I overpaid for the HDMI cables and SD Cards, and I certainly feel the UCTronics chassis is overpriced.
I also may be able to get more value, though not necessarily a lower price, through buying more capable single board computers, like Raspberry Pi 5s or Orange Pis. There’s also potential experiments I could do with even more unusual architectures like RISC V.
I’ll leave it up to the reader to determine if this is valuable or not, but it’s certainly no slam dunk.
Problems with RHEL
During this project, I encountered two bugs in RHEL itself that hampered progress.
The first occurred between RHEL 9.2 and 9.4, where a bug caused the frame buffer to fail. So the host would boot successfully, but I’d have no video from the console.
This bug was fixed somewhere between 9.4 and 9.6.
The second bug involves the UEFI shim and other EFI modules. The versions from 9.7 work great; however, the updates released after 9.7 cause booting to fail when more than 3 GiB of RAM is enabled.
I can copy the UEFI boot programs from 9.7 and they work as expected. I’m sure I could track down why in the RPM, but I haven’t yet.
In the interim, I’ll likely have to blacklist or version-restrict the
UEFI packages. Until I figure out which specific package, I can set
the package_upgrade parameter to false on the Host or the Host
Group in Satellite, which will prevent a dnf update -y from running
during the kickstart install. Hopefully this is fixed when 9.8 is
released.
Conclusions and next steps
I am fairly happy with how this turned out…all my requirements were fulfilled. However, I’m not sure of the value here. Whether a setup like this is more cost effective than used x86_64 hardware is not clear, but I’m leaning towards no.
That being said, initial cost isn’t the only factor. I do believe this setup is less noisy just from my own perception, though real measurements are necessary. I’ve also yet to do power consumption tests, which may prove that TCO over a long period is better.
There are also some serious problems with this setup:
First, this isn’t really supported by Red Hat, neither the Satellite procedure nor simply running RHEL on Raspberry Pis. I don’t care too much, as this is just lab hardware, but it is annoying that anything could break with any update.
Second, while modifications to files like /etc/dhcp/dhcpd.conf are
functional, these files are controlled by Puppet modules. Thus, the
changes will be overwritten when doing patching, upgrading, or
reconfiguration via satellite-maintain.
This is solvable, but may lead me to running a DHCP server separate from Satellite.
Third, I’ve not been able to get RHEL 10 working…yet. While RHEL 9 will remain current for quite some time, it is no longer the latest release, and I like to stay current if I can.
As for next steps, I’d like to do a few different things:
- Perform some power usage comparisons between this total setup and some of my x86_64 servers.
- Integrate a NanoKVM or similar with the KVM switch. This will require some custom development work, but it would be nice to be able to switch between Raspberry Pis in the same interface.
- Integrate Power Controls into Satellite. I can control things via my PoE Switch, but it would be nice to take advantage of the Satellite UI for that.
- Test this out with other RHEL variants, like CoreOS for OpenShift Nodes. I don’t think these would be powerful enough to run as Control/Master nodes of a K8s cluster…I suspect the disk latency is not good enough to run etcd, but it would be interesting to see these Raspberry Pis made part of a multi-architecture OpenShift cluster.
- Test out Raspberry Pi 5s and potentially other single-board computers.
We’ll see if I have time for any of it!