Lab prerequisites

TL/DR

This lab has been created trying to minimize the prerequisites. You should be able to run it using just your own (modern) laptop with any hypervisor installed on it (the lab has been tested with libvirt). The lab has been tested on both x86 and ARM systems.

Summary of minimum system prerequisites:

  • Virtualization hypervisor with at least 6 vCPUs (threads), 6 GB memory, and 100 GB of free disk space.

  • Internet connection

  • Connectivity Host <-> VM and VM <-> VM

  • A valid Red Hat account with RHEL subscriptions

Prepare your virtualization environment

Although you can use the steps shown in this tutorial to onboard RHEL on bare-metal devices, for simplicity we will be using virtual machines in this lab, so you will need a pre-configured hypervisor where the required servers and edge devices will run.

If you are using Linux, you can install and enable libvirt with these commands:

sudo dnf -y install libguestfs-tools-c libvirt-daemon-config-network libvirt-daemon-kvm libvirt qemu-kvm virt-install virt-manager virt-viewer

sudo systemctl enable --now libvirtd

After installing those packages and downloading a RHEL 9.0 or 9.1 ISO, you can create VMs using virt-manager or virt-install, running a command similar to the one shown below:

virt-install --name=image-builder-fdo \
--vcpus=2 \
--memory=4096 \
--cdrom=<PATH TO RHEL ISO> \
--disk size=50 \
--os-variant=rhel9.1

Depending on the use case or configuration that you want to follow during this lab, you may need to add "special" devices to your edge VM. For example, you will probably want to use a virtual TPM to manage keys, just as you would do with physical devices in real edge environments.
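
One way to attach such a device with libvirt is shown in the sketch below. This is an illustrative example, not a required lab step: it assumes a reasonably recent virt-install/virt-xml, a Linux host, and the swtpm emulator packages; adjust it to your hypervisor and VM name.

# Install the software TPM emulator on the hypervisor host
sudo dnf -y install swtpm swtpm-tools

# Attach an emulated TPM 2.0 device to an existing (defined) edge VM
virt-xml <EDGE VM NAME> --add-device \
--tpm model=tpm-tis,backend.type=emulator,backend.version=2.0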

This lab could be run completely offline, but in order to avoid creating local repositories we will be connecting to the public ones, so you will need Internet access from your VMs.

Make sure that your host firewall and virtual bridge permit connectivity between the host and the VMs, between the VMs themselves, and from the VMs to the Internet.
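
If you are running libvirt on a Linux host with firewalld, the sketch below can help you review what traffic is allowed for the VM bridges. It assumes the default "libvirt" zone created by the libvirt daemon, which may differ on your system.

# List the active firewalld zones and the interfaces assigned to them
sudo firewall-cmd --get-active-zones

# Show the rules applied to the zone that libvirt uses for its virtual bridges
sudo firewall-cmd --zone=libvirt --list-all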

Notes for MacBook M1 chip users

If you are using a Mac with the M1 chip, this lab can be done using the aarch64 version of RHEL 9.0 or newer (9.1 recommended).

For the virtualization environment, QEMU on macOS with libvirt 8.0 or newer is required for the OSTree images to start.
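
You can quickly confirm the installed libvirt version from a terminal, for example:

# Print the libvirt client version (it should report 8.0.0 or newer)
virsh --version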

It is possible to run the VMs on Apple Virtualization, but there is currently a blocking issue regarding communication between different VMs on the same host.

Requirements:

  • Brew

  • Python3 - part of the Xcode installation, or install using: brew install python

Steps to install virt-manager:

brew tap arthurk/homebrew-virt-manager
brew install virt-manager virt-viewer

Usage:

brew services start libvirt
virt-manager -c "qemu:///session" --no-fork

Leave the terminal open for virt-manager to remain running, or run it in the background and disown it so that it keeps running after you close the terminal.
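
For example (a small shell sketch):

# Start virt-manager in the background and detach it from the current shell,
# so it keeps running after the terminal session ends
virt-manager -c "qemu:///session" --no-fork &
disown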

Choose your lab architecture

The lab environment will be composed of an edge device and one or multiple servers (image-builder and FDO services). You can deploy all servers in a single VM (Option 1) if you are short on CPU/memory resources on your hypervisor (laptop?), or you can split the services into dedicated VMs (Option 2) to try a more "realistic" architecture.

Option 1: All-in-one server

For this option, you just need the minimum system requirements described in the TL/DR section.

You will also need to download a RHEL ISO (RHEL x86 or RHEL aarch64 for ARM). At the time of writing, the latest release is RHEL 9.1.

Once you have libvirt or any other hypervisor installed and your RHEL ISO downloaded, create a VM with 2 vCPUs, 4 GB memory, and a 50 GB disk, and install the operating system using the RHEL ISO.

During the RHEL 9 install wizard you can enter your Red Hat account to register your system, so when the installation finishes your RHEL will already be subscribed. If you decide to use RHEL 8, you will need to subscribe your VM manually using subscription-manager after the first install.
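
If you need to register manually, a typical invocation looks like the sketch below (you will be prompted for your Red Hat account password; depending on your account settings, the auto-attach step may not be needed):

# Register the VM with your Red Hat account
sudo subscription-manager register --username <YOUR RED HAT USERNAME>

# Attach an available subscription (skip if your account uses Simple Content Access)
sudo subscription-manager attach --auto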

Select the minimal install to save some disk space.

If you want to be sure that your VM has access to the Internet, check that you get an HTTP 200 status code when you request a web page, for example by running this command in your RHEL VM:

curl -LsI redhat.com | grep HTTP | tail -n 1

In addition to that RHEL VM, you will need to create one more VM (minimum 2 vCPUs, 1.5 GB memory, 20 GB disk) during the lab steps. Please do not create it yet; you will do so as part of the lab steps. Just take into account that you will need those system resources available.

In summary, for Option 1 you will need to have this VM installed and prepared:

  • 1x Red Hat Enterprise Linux 8/9 VM minimal-install subscribed (2 vCPUs, 4 GB memory, 50 GB disk)

and you will need enough resources to create the following VM during the lab steps:

  • 1x Empty VM with UEFI boot enabled (2 vCPUs, 1.5 GB memory, 20 GB disk), as sketched below
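
For reference only (do not create it yet), spinning up such a VM with virt-install could look similar to the sketch below. The VM name and the installer ISO path are placeholders; follow the exact options given in the lab steps.

# Do not run this yet: example of a VM that boots with UEFI firmware
virt-install --name=edge-device \
--vcpus=2 \
--memory=1536 \
--disk size=20 \
--boot uefi \
--os-variant=rhel9.1 \
--cdrom=<PATH TO THE INSTALLER ISO CREATED DURING THE LAB>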

Option 2: Dedicated server VMs

If your hypervisor system has enough CPU and RAM, you will probably want to use a more realistic architecture, where the FDO services run on dedicated servers. In this case, you will still need the VMs shown in "Option 1", but you will also need to run three additional VMs for the FDO servers.

For Option 2 you will need to have these VMs installed and prepared:

  • 1x Red Hat Enterprise Linux 8/9 VM minimal-install subscribed (2 vCPUs, 4 GB memory, 50 GB disk)

  • 3x Red Hat Enterprise Linux 8/9 VM minimal-install subscribed (2 vCPUs, 1.5 GB memory, 20 GB disk)

and you will need enough resources to create the following VM during the lab steps:

  • 1x Empty VM with UEFI boot enabled (2 vCPUs, 1.5 GB memory, 20 GB disk)

NOTE: If you want to save some resources, you might combine one of the FDO servers (for example, the manufacturing server) with the Image Builder VM, so you can remove one of the three additional FDO servers.

Create your use case

During the edge device onboarding, you will be able to run associated automation as part of that first boot. Although in this lab we present an example where the VM is configured with a random hostname, registered in Insights, and configured with several custom scripts, the idea is that you do not simply reuse that example.

This tutorial shouldn’t be a copy/paste lab. The idea is to enable you to create your own device onboarding use case, so you will need to have your onboarding automation (Bash scripts, Ansible playbooks, etc.) already prepared, which you will "inject" as part of the onboarding process (thanks to FDO).

You will probably need to adapt your automation to the way FDO works, and that is the main goal of this tutorial: being able to take any automation and know how to adapt it so that it is "processed" during device onboarding when using FDO.
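
As a starting point, even a trivial Bash script like the sketch below (purely illustrative, not part of the lab content) is enough to confirm that your automation runs at onboarding time; you can replace it later with your real scripts or playbooks:

#!/bin/bash
# Illustrative placeholder onboarding automation: set a hostname derived from
# the machine ID and leave a marker file proving that the script was executed.
set -euo pipefail

hostnamectl set-hostname "edge-$(cut -c1-8 /etc/machine-id)"
echo "onboarded on $(date -Is)" > /var/log/onboarding-done.log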

If you don’t already have any automation that you would like to use, think in advance about something you would like to test during the edge device onboarding (much better if it involves certificates, keys, or password configuration). Some examples could be:

  • Include the edge device in Ansible Automation Platform and run a job right after onboarding

  • Install MicroShift and import it into ACM

  • Run OpenSCAP automatically or tasks associated with system hardening

  • Launch an Ansible playbook locally to configure additional external systems

  • Run the first boot configuration (maybe in external systems?) of any service running on the edge system (e.g. observability, monitoring, logging, …)

Be creative! But please prepare your automation before the lab, so you can focus on the FDO concepts instead of debugging Ansible or Bash scripts.

In case you don’t have enough imagination or time, you can proceed with the example that is presented during the lab steps. You won’t get all the fun, but you will understand the steps and the moving parts involved in the onboarding process.