RHEL OSTree image - Lab

In this lab you are going to:

  1. Install the Image Builder service in RHEL

  2. Create an OSTree RHEL image by:

    1. Creating a blueprint file to customize the setup

    2. Create a new OSTree repo

    3. Publish the OSTree repo

    4. Create an ISO where you inject that OSTree repository

As explained in the RHEL for Edge introduction, in order to use an OSTree RHEL system you need to create an image. You have two options for creating these images: you can use the Red Hat Hybrid Cloud Console or install and use Image Builder on a RHEL system.

In this lab we are going to use the second option, since at this moment the FDO integration is not yet ready in the Red Hat Hybrid Cloud Console.

Once Image Builder is installed, you will need to create a RHEL "edge" image. In this lab we will perform the steps manually so you better understand the procedure, but Red Hat is developing an Ansible collection to automate RHEL for Edge image creation and life-cycle management.

The image that you are going to create should include all RPMs needed by your onboarding automation. Since you had two options while creating the onboarding automation with FDO (using your own use case or the provided example), in this section you will have to follow the same approach.

If you prepared your own automation, you will need to be sure about the software dependencies that it has. For example, in the provided onboarding automation the system uses the insights client, which is not included by default in the RHEL for Edge OSTree image, so we need to specify that the image we are going to create must also include that RPM (we will see that this is done with the blueprint file).

But first, let’s start from the beginning and let’s install the Image Builder.

Install Image Builder

The first step is to install Image Builder in one of the RHEL VMs that you prepared (in the case of the FDO all-in-one deployment, you could use the same RHEL VM where you installed the FDO services).

sudo dnf install -y osbuild-composer composer-cli bash-completion

Remember to enable the service:

sudo systemctl enable osbuild-composer.socket --now

Load the shell configuration script so that the autocomplete feature for the composer-cli command starts working immediately without reboot:

source /etc/bash_completion.d/composer-cli
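If you want to double-check that the service is up before moving on, you can query its status (it should print the API status without errors):

sudo composer-cli status show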

We will be using the CLI to create and manage the RHEL images, but you can also use Cockpit. You can install and enable it by running these commands (including opening the TCP port if you have a firewall enabled on your system):

sudo dnf install -y cockpit-composer

sudo systemctl enable cockpit.socket --now

sudo firewall-cmd --add-service=cockpit --permanent
sudo firewall-cmd --reload

If you want to take a look at Cockpit, just go to https://<image-builder IP>:9090

Create the RHEL OSTree image

You are going to follow three steps:

  1. Generate a new OSTree RHEL repository

  2. Download the repository and publish it on a Web server

  3. Use Image Builder to create an ISO image using the repository published on the Web server

Let’s begin with the first step.

Generating the repo image

You first need to create the blueprint file that includes the image customizations, and then use that file to create the actual OSTree repository.

When it comes to creating the blueprint file for this lab, you need to follow the same option that you selected during the FDO lab: are you going to use your own automation, or would you like to use the proposed example?

Option A: Blueprint file for your own FDO onboarding automation

You can customize the image created by Image Builder through a blueprint.toml file. You can check how to use blueprints in Image Builder, but take into account that not all options apply when creating OSTree images, so it’s better to review the supported blueprint configurations for RHEL for Edge documentation.

You can find a blueprint example below:

name = "factory-edge"
description = "Sample blueprint"
version = "0.0.1"
modules = [ ]
groups = [ ]

[[packages]]
name = "insights-client"
version = "*"

[[packages]]
name = "cockpit"
version = "*"

[customizations]
hostname = "edge-node"

[customizations.services]
enabled = ["cockpit","insights-client"]

[customizations.firewall]
ports = ["9090:tcp"]


[[customizations.sshkey]]
user = "root"
key = "ssh-rsa AAAA...."


[[customizations.user]]
name = "admin"
description = "Admin user"
password = '$6$kJ5K....CmUFIm4KduMkm9/'
key = "ssh-rsa AAAA...."
home = "/home/admin/"
shell = "/usr/bin/bash"
groups = ["users", "wheel"]

In the first section you need to specify the blueprint name and a description, along with the blueprint version.

name = "factory-edge"
description = "Sample blueprint"

After that, you can include modules and groups of packages in the image using the modules = [ ] and groups = [ ] sections, for example:

[[groups]]
name = "Desktop"

If you want to include individual packages, you can use the [[packages]] section.

You can use this section multiple times to include additional single RPM packages that are not included by default in the RHEL OSTree images (e.g. insights-client), indicating which version you would like to install (or any/latest with *).

[[packages]]
name = "tree"
version = "*"

Bear in mind that in order to install those RPM packages in the image, the Image Builder needs to have access to them. There might be cases where the RPM is not present in the Image Builder repositories. In that case, you could configure additional repositories that can be used by the Image Builder servers.
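For reference, this is a minimal sketch of how an additional repository could be made available to Image Builder (the repository id, name and URL below are placeholders; adjust them to your environment):

cat <<EOF > custom-repo.toml
id = "custom-repo"
name = "Custom RPM repository"
type = "yum-baseurl"
url = "https://<repo server>/<path>/"
check_gpg = false
check_ssl = true
system = false
EOF

sudo composer-cli sources add custom-repo.toml
sudo composer-cli sources list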

It could happen that one of the installed packages contains a systemd service that you probably want to enable by default. You can use the [customizations.services] section to do it:

[customizations.services]
enabled = ["cockpit","insights-client"]

You can also manage the firewalld configuration, for example, you can open ports:

[customizations.firewall]
ports = ["8080:tcp"]

Or directly enable or disable firewalld services by name:

[customizations.firewall.services]
enabled = ["SERVICES"]
disabled = ["SERVICES"]

Regarding system users, by default the image only has the root user configured, but with no password, so in order to have access to the system (as root) you will need to include a public SSH key that will be injected into the image:

[[customizations.sshkey]]
user = "root"
key = "<SSH PUB KEY>"

But what if you want one or multiple "initial users"? You can add more "initial" users to the system by using the [[customizations.user]] section (you can include more than one), where you add the user name, password, public SSH key, home directory, shell to be used, and user groups.

[[customizations.user]]
name = "admin"
description = "Admin user"
password = '$6$kJ5KDNibL76RnLzW$CQp6UL936Buu8RDtrCKtL0jpDDO.oTQjyzb.GkDL2b6wiOGoO8K/TWhTnggvJn4bMHt5yw3CmUFIm4KduMkm9/'
key = "ssh-rsa AAAA...."
home = "/home/admin/"
shell = "/usr/bin/bash"
groups = ["users", "wheel"]

The password needs to be encrypted; you cannot write the user password in plain text in the blueprint file. You can obtain the encrypted password string by running this command:

python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())'

Remember what was mentioned in the initial_user section of the FDO lab: if you configured a user name there, be sure that you also create it in the blueprint with the same name, otherwise you will get this error while onboarding the device:

0: Error installing SSH key
1: User admin for SSH key installation missing

Additional comment about the users in OSTree RHEL: if you went through the OSTree-based Operating Systems article shared in the previous module, you will have noticed that in OSTree RHEL there are two "kinds" of users, the ones created on the first boot of the image and the ones generated by day-2 operations. The users created by Image Builder could be described as these "initial" users that are created during the first boot of the device, and whose purpose is to perform the onboarding configuration and provide administrative access to the devices.

Besides these common sections, in the blueprints you can also configure other system parameters such as the timezone, NTP server or locale:

[customizations.timezone]
timezone = "TIMEZONE"
ntpservers = "NTP_SERVER"


[customizations.locale]
languages = ["LANGUAGE"]
keyboard = "KEYBOARD"

By default, Image Builder includes a default kernel in the image, but you can also customize the kernel, for example appending a kernel boot parameter to the defaults:

[customizations.kernel]
append = "KERNEL-OPTION"

Using a Real-Time Kernel:

[customizations.kernel]
name = "KERNEL-rt"

Or define the kernel name to use in the image:

[customizations.kernel]
name = "KERNEL-NAME"

Lastly, you can specify a custom filesystem configuration in your blueprints and therefore create images with a specific disk layout, for example:

[[customizations.filesystem]]
mountpoint = "/opt"
size = "20 GiB"

The blueprint supports the following mountpoints and their sub-directories:

/
/var
/home
/opt
/srv
/usr
/app
/data
/boot

If you want more than one partition, you can create images with a customized file system partition on LVM and resize those partitions at runtime.
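For example, a blueprint snippet with more than one custom mountpoint could look like this (the sizes below are arbitrary examples):

[[customizations.filesystem]]
mountpoint = "/opt"
size = "20 GiB"

[[customizations.filesystem]]
mountpoint = "/data"
size = "10 GiB"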

After completing the Blueprint file preparation, move on to the Creating the OSTree repository section.

Option B: Blueprint file for the provided FDO automation example

If you plan to use the provided example, you just need to download the Blueprint template and adapt it to your environment by changing certain values in it.

Start by downloading the blueprint:

Then set up some variables with your values. First, your public SSH key:

SSH_PUB_KEY="ssh-rsa AAAAB3......am8= user@machine"

And the password for the initial user (which is pre-configured to be admin):

ADMIN_PASSW=$(python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())')

Then you need to apply those variables into the Blueprint template:

sed -i "s|<SSH PUB KEY>|$(echo ${SSH_PUB_KEY})|g" ~/blueprint-insights.toml
sed -i "s|<ADMIN PASSW>|${ADMIN_PASSW}|g" ~/blueprint-insights.toml
And you are done with the Blueprint.

Creating the OSTree repository

Once you have your Blueprint, you are ready to create the OSTree repository/image with Image Builder.

The first step is to "push" the Blueprint into Image Builder pointing to the file that you created:

sudo composer-cli blueprints push <blueprint_file>

For example, if you used the provided blueprint file, the exact command would be:

sudo composer-cli blueprints push ~/blueprint-insights.toml

You can double-check that the Blueprint is ready to be used in the Image Builder with this command:

sudo composer-cli blueprints list

Now it is time to create the OSTree image. You will need to point to the name that you gave to the Blueprint (the one that you used in the name parameter in the Blueprint file), not the file name as you did in the previous step:

sudo composer-cli compose start-ostree <blueprint_name> edge-commit

If you used the provided Blueprint, whose name is kvm-insights, the command will be:

sudo composer-cli compose start-ostree kvm-insights edge-commit

NOTE: In this lab we are using the edge-commit image type. You could also use the edge-container type, as explained in the publishing the repo image section.

The command will start creating the repo. You can check the status of the build by running the following command (remember to use Ctrl+C to exit the watch process):

watch sudo composer-cli compose status
2d1030cb-6900-4e0b-994b-248c5bb55f7c RUNNING  Wed Nov 30 10:11:30 2022 kvm-insights    0.0.1 edge-commit

You will need to wait until the status of your image changes to FINISHED (which takes more or less 10 min in my VM with 2 cores and 4 GB of memory); then the image will be ready in Image Builder.
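If you prefer not to keep the watch running, a simple polling loop also works (a sketch that assumes this is the only compose on this Image Builder, so the first FINISHED entry is yours):

until sudo composer-cli compose status | grep -q FINISHED; do sleep 30; done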

Publishing the repo image

There are multiple ways of distributing the RHEL OSTree images. If you take a look at this RHEL for Edge quickstart GitHub repository or at the Red Hat official documentation you will learn more about the network and non-network based deployments, but you will find that both have something in common: you need to have the OSTree image/repo "published" on an HTTP server.

When you created the image in the previous section, you used the edge-commit image type, which means that Image Builder creates the OSTree image contents and puts them into a TAR file. There is another image type that you might use here, the edge-container type, which creates the OSTree image contents and directly builds a container image that publishes them using NGINX. In this lab we are going to use the first approach (although, as you will see, it requires some additional steps) because this procedure can be used for both network and non-network based deployments, because you can include additional information in the container image (e.g. a kickstart file), and because you will be able to manage the lifecycle of this container image (e.g. image patching), which is convenient for production environments.

Since we used the edge-commit image type, you have your OSTree image in Image Builder as a TAR file, but in order to use it you will need to "download" it using the following command, where the IMAGE ID is the ID generated by Image Builder, which you can find using the command sudo composer-cli compose status.
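If you prefer not to copy the ID by hand, you can store it in a variable (a sketch that assumes the edge-commit compose you just built is the only FINISHED compose) and use ${IMAGE_ID} in place of <IMAGE ID> in the commands below:

IMAGE_ID=$(sudo composer-cli compose status | grep FINISHED | head -1 | awk '{print $1}')
echo ${IMAGE_ID}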

sudo composer-cli compose image <IMAGE ID>

This command will download a TAR file with all the OSTree image contents. If you are not using the root user, remember to change the file owner:

sudo chown $(whoami) <IMAGE TAR FILE>

You are going to use a container image to publish the OSTree image (as the edge-container image type does), so you need to have podman installed on your system (you probably want to use the same VM where Image Builder is installed):

sudo dnf install -y podman

The idea is to create a container image with NGINX, so you can customize even the NGINX configuration as part of this procedure. Create the NGINX config file:

cat <<EOF > nginx.conf
events {
}
http {
    server {
        listen 8080;
        root /usr/share/nginx/html;
        location / {
            autoindex on;
        }
    }
}
pid /run/nginx.pid;
daemon off;
EOF

Create the Dockerfile that includes this configuration file (exposing the same port that you configured in your NGINX config file, 8080 in this example) and the OSTree image TAR file. You can use a Dockerfile "argument" to point to the OSTree image file:

cat <<EOF > Dockerfile
FROM registry.access.redhat.com/ubi8/ubi
RUN yum -y install nginx && yum clean all
ARG commit
ADD \$commit /usr/share/nginx/html/
ADD nginx.conf /etc/
EXPOSE 8080
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx.conf"]
EOF

Now you can create your container image using this Dockerfile.

NOTE: Do not forget the trailing . (the build context) at the end of the podman build command.

NOTE: You might find this error while building your container image: /usr/bin/crun: undefined symbol: criu_feature_check.

If you face this error you can try the published workaround: downgrade the crun version on the system that will run the container (the Image Builder VM in this case), for example in ARM RHEL 9.0:

sudo yum -y install crun-1.4.5-2.el9_0

You can get the list of the different available crun packages versions with this command:

sudo yum --showduplicates list crun

If you used an argument for the OSTree image, you need to include the full path to your OSTree repo TAR file:

sudo podman build -t <CONTAINER IMAGE NAME> --build-arg commit=<IMAGE TAR FILE> .

An idea for the container image name could be a mix of the Blueprint name and the OSTree image ID, something like:

<blueprint_name>:<image id>

For example:

sudo podman build -t kvm-insights:ebcfe8b5-5aed-4d9b-a948-d7ddb9b51150 --build-arg commit=ebcfe8b5-5aed-4d9b-a948-d7ddb9b51150-commit.tar .

It is also a good idea to tag the most recent container image as "latest":

sudo podman tag <blueprint_name>:<image id> <blueprint_name>:latest
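Following the example above, that would be:

sudo podman tag kvm-insights:ebcfe8b5-5aed-4d9b-a948-d7ddb9b51150 kvm-insights:latest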

Once you have the container image ready you can run a container based on it, but before that, be sure that you opened the required port. You will need to bind a port in your VM to the port configured in your container image (tcp/8080 in this example).

NOTE: Remember that if you are using this VM to run other services, for example the FDO services, you need to be sure that you don’t use the same ports (use tcp/8090 to be sure that it does not interfere with the FDO default ports).

IMAGE_PUBLISH_PORT=8090

Use the variable to open the port. If you are using Firewalld the command is:

sudo firewall-cmd --add-port=${IMAGE_PUBLISH_PORT}/tcp --permanent
sudo firewall-cmd --reload

Now it is time to run the container. You probably want to give this container the same name as your container image, but remember that you cannot use special characters such as : in a container name, so you will need to replace the : with a -.

sudo podman run --name <blueprint_name>-<image id> -d -p ${IMAGE_PUBLISH_PORT}:8080 <blueprint_name>:<image id>
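For example, with the container image built above and the tcp/8090 publishing port, the command would be:

sudo podman run --name kvm-insights-ebcfe8b5-5aed-4d9b-a948-d7ddb9b51150 -d -p ${IMAGE_PUBLISH_PORT}:8080 kvm-insights:ebcfe8b5-5aed-4d9b-a948-d7ddb9b51150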

You can double-check that your container is running and publishing the contents using curl. If you execute this command from the same VM that is publishing the OSTree repo you could use localhost as the IP, but it is always better to specify the actual IP address of the system:

IMAGE_BUILDER_IP=<IP>

NOTE: If you didn’t use the Image Builder VM to run the container that publishes the OSTree repo, you will need to configure here the IP address of that system, not the Image Builder IP.

curl http://${IMAGE_BUILDER_IP}:${IMAGE_PUBLISH_PORT}/repo/

<html>
<head><title>Index of /repo/</title></head>
<body bgcolor="white">
<h1>Index of /repo/</h1><hr><pre><a href="../">../</a>
<a href="extensions/">extensions/</a>                                 30-Nov-2022 09:17                   -
<a href="objects/">objects/</a>                                       30-Nov-2022 09:19                   -
<a href="refs/">refs/</a>                                             30-Nov-2022 09:17                   -
<a href="state/">state/</a>                                           30-Nov-2022 09:17                   -
<a href="tmp/">tmp/</a>                                               30-Nov-2022 09:19                   -
<a href="config">config</a>                                           30-Nov-2022 09:17                  38
</pre><hr></body>
</html>

And if you want to be completely sure that the contents are the right ones, you can review the commit id, for example by fetching the compose.json file that was extracted next to the repo:

curl http://${IMAGE_BUILDER_IP}:${IMAGE_PUBLISH_PORT}/compose.json

{"ref":"rhel/9/x86_64/edge","ostree-n-metadata-total":13593,"ostree-n-metadata-written":4573,"ostree-n-content-total":42606,"ostree-n-content-written":36966,"ostree-n-cache-hits":0,"ostree-content-bytes-written":2059379595,"ostree-commit":"a05cce6c8e34a609a484889b1e494484c4bc24b9c8a7374c2ae5573a1a15f1cc","ostree-content-checksum":"2fd316eacbe7b4ea42f24d97958290459c44294301f57638cc40c62b354bd847","ostree-version":"9.0","ostree-timestamp":"2022-11-30T09:19:31Z","rpm-ostree-inputhash":"cdae77f984cc529f21a7cb0d4ad80e9422b180140891fd42238f0b3606a203fa"}

Done! Your OSTree repo is now published and ready to be used.

Create the RHEL ISO

If you are wondering when we include FDO in the RHEL for Edge image creation… it’s now. So far you have created an OSTree image and published it on an HTTP server. You could already use it following the standard network based deployment (remember the different deployment options), but in that case you wouldn’t be using FDO. If you want to make use of FIDO Device Onboarding, you will need to create an ISO image containing the OSTree repository and pointing to the FDO Manufacturing server, so the required certificates and keys can be embedded.

Image Builder can also create an OSTree RHEL ISO based on an already published OSTree repository, and it is as simple as creating a new Blueprint file (as you did to create the initial OSTree image).

This new Blueprint file will need, at least, info about the disk that will be used to install the system and the FDO manufacturing URL, so let’s create a couple of variables.

Regarding the disk, if you plan to use physical devices for the edge systems, you probably want to use something like sda, but if you are going to use VMs also for the edge devices (as you are doing with the Image Builder and FDO servers), use vda:

DISK_DEVICE=vda

For the FDO manufacturing server URL, you will need the IP address and the port (if you used the defaults, the manufacturing server port is 8080). Remember that the VM where the FDO manufacturing server runs will depend on the option that you chose to deploy the FDO servers: all-in-one vs distributed services.

FDO_MANUFACTURING_URL="http://<MANUFACTURING SERVER IP>:<MANUFACTURING SERVER PORT>"

Create the new Blueprint file with these two values:

cat <<EOF > blueprint-fdo.toml
name = "blueprint-fdo"
description = "Blueprint for FDO"
version = "0.0.1"
packages = []
modules = []
groups = []
distro = ""
[customizations]
installation_device = "/dev/${DISK_DEVICE}"
[customizations.fdo]
manufacturing_server_url = "${FDO_MANUFACTURING_URL}"
diun_pub_key_insecure = "true"
EOF

Push this new Blueprint file into Image Builder as well:

sudo composer-cli blueprints push blueprint-fdo.toml

And check that the new Blueprint (the name that we used is blueprint-fdo) is ready to be used in the Image Builder (you will also find the Blueprint used to create the OSTree image):

sudo composer-cli blueprints list
blueprint-fdo
kvm-insights

Same as you did while creating the OSTree image, you will need to use this Blueprint to create the new image, but in this case, you will also need to include information about where the OSTree repo is published.

You should have already configured at this point two variables with the IP address and the TCP port where the repo is published: ${IMAGE_BUILDER_IP} and ${IMAGE_PUBLISH_PORT}, but you will also need to include the path (URL) to the right content, which depends on the RHEL release and architecture used to generate the OSTree image.

The release and the architecture should match those of the Image Builder, so you can get those values by running these commands in that VM:

baserelease=$(cat /etc/redhat-release | awk '{print $6}' | awk -F . '{print $1}')
basearch=$(arch)
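You can quickly confirm the values (in this lab’s RHEL 9 x86_64 environment they should be 9 and x86_64, matching the rhel/9/x86_64/edge ref shown earlier):

echo ${baserelease} ${basearch}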

Now that you have all the required values, run the actual command in the Image Builder that will create the ISO file with the embedded OSTree repository:

NOTE: It is a good idea to double-check that the FDO manufacturing service is up and reachable from the Image Builder before creating the ISO; otherwise you will find the No Usable device credential located, skipping Device Onboarding error in the log when booting your device, which means that the FDO client does not have the required keys to share with the Rendezvous and Owner servers, preventing the FDO process from starting.
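For example, a quick reachability check from the Image Builder VM could be the following (curl exits successfully as long as an HTTP exchange happens, regardless of the response code):

curl -s -o /dev/null ${FDO_MANUFACTURING_URL} && echo "FDO manufacturing server reachable" || echo "FDO manufacturing server NOT reachable"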

sudo composer-cli compose start-ostree blueprint-fdo edge-simplified-installer --ref rhel/${baserelease}/${basearch}/edge --url http://${IMAGE_BUILDER_IP}:${IMAGE_PUBLISH_PORT}/repo/

Review the status while the ISO is being created:

sudo watch composer-cli compose status
51494368-35f7-471f-b0fa-c19b21709c8f RUNNING  Wed Nov 30 12:12:23 2022 blueprint-fdo   0.0.1 edge-simplified-installer
2d1030cb-6900-4e0b-994b-248c5bb55f7c FINISHED Wed Nov 30 10:19:41 2022 kvm-insights    0.0.1 edge-commit

NOTE: In my Image Builder VM with 2 vCPUs and 4 GB of memory this takes around 10 minutes.

Once the status changes to FINISHED, the ISO image will be ready in Image Builder; you just need to download it.

In order to select the right image to download, you will need to include the generated ID that you found in the previous command. You can create a variable with it:

FDO_ISO_IMAGE_ID=<IMAGE ID>

Then, download the ISO file from the Image Builder using the ID and change the owner if you are not using the root user:

sudo composer-cli compose image ${FDO_ISO_IMAGE_ID}
sudo chown $(whoami) ${FDO_ISO_IMAGE_ID}-simplified-installer.iso

At this point you should have an ISO file (downloaded on the Image Builder VM) that you can copy to your laptop/KVM system to be used to deploy the edge device in the next chapter.
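For example, you could pull the ISO from your laptop/KVM host over SSH (a sketch that assumes the ISO was downloaded into that user’s home directory on the Image Builder VM):

scp <user>@<image builder IP>:/home/<user>/<FDO ISO IMAGE ID>-simplified-installer.iso .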

If you feel that using an ISO file doesn’t scale in production, you might be right. There is also a solution for that: you could publish the ISO image on an additional HTTP server and use UEFI HTTP boot on the end edge device to boot the ISO over the network. This deployment is not part of the lab, so I will leave it as homework. If you need some hints, you can dig into this script that creates the additional HTTP server, includes the ISO, and also gives you some comments about how to configure the KVM network if you are using VMs, or the DHCP server if you are using physical devices, in order to use UEFI HTTP boot.

RHEL OSTree image Lab summary

In this lab you have:

  1. Installed the Image Builder service in RHEL

  2. Created an OSTree RHEL image by:

    1. Creating a blueprint file to customize the setup

    2. Creating a new OSTree repo

    3. Publishing the OSTree repo

    4. Creating an ISO where you inject that OSTree repository

You should have the created ISO image on your KVM host to use it as the installation method for the edge device VM in the next lab.