đź“– Manual

Introduction

This manual contains useful code snippets, both personal and collected from the internet. I try to keep the repository somewhat structured; by design, however, there is no single rigid structure, but rather a loosely organised collection of things I think might be helpful. Enjoy figuring it out.

How to Create Bootable Installation Media

First, download the installation ISO file from the official website [1, 2, 3]. Then, write (or "flash") the ISO file onto a USB flash drive [1] using a tool like Rufus [1] if you are on Windows. If you are using Linux or macOS, you can flash the ISO file using the dd command in the terminal. Be very careful with dd, as it can overwrite any drive:

sudo dd if=/path/to/your.iso of=/dev/sdX bs=4M status=progress oflag=sync
  • Replace /path/to/your.iso with the path to the ISO file you downloaded.
  • Replace /dev/sdX with your USB device (e.g., /dev/sdb).
    Important: Do not write to a partition like /dev/sdb1, only to the whole device /dev/sdb.

Make sure your USB flash drive is large enough; for example, if the ISO file is 1.2 GB, it is sensible to use a USB drive with at least 4 GB of storage. Once the ISO is written, plug the USB drive into the target computer. To boot from the USB, you may need to press a special key during startup — check the motherboard manual to find out which key to use (often F12, ESC, or F10).
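Before running dd, it helps to double-check the target. The sketch below (device names hypothetical, not from the original text) lists candidate drives and defines two small helpers: one flags names that look like partitions rather than whole devices, and one compares sizes in bytes.

```shell
# Reject names that look like partitions (sdb1, nvme0n1p1) rather than
# whole devices (sdb, nvme0n1). A rough heuristic, not exhaustive.
is_whole_disk() {
  case "$1" in
    *nvme*p[0-9]*) return 1 ;;  # NVMe partition, e.g. /dev/nvme0n1p1
    *nvme*)        return 0 ;;  # NVMe whole disk, e.g. /dev/nvme0n1
    *[0-9])        return 1 ;;  # SCSI/SATA partition, e.g. /dev/sdb1
    *)             return 0 ;;
  esac
}

# True when the ISO size (bytes) fits on the drive (bytes).
fits_on_drive() { [ "$1" -le "$2" ]; }

# On a real system, gather the inputs before calling dd:
#   lsblk -bdno NAME,SIZE,MODEL   # whole disks with sizes in bytes
#   is_whole_disk /dev/sdb && echo "OK to flash /dev/sdb"
```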

How to Install Arch

Once the installation has started and the system has loaded, refer to the instructions outlined in section 1. You should see a zsh shell where you are logged in as root@archiso. From this point onwards, the next step, according to the official documentation [1], is to set the keyboard layout:

loadkeys uk

In situations where the layout name is unknown, the localectl list-keymaps command lists all available layouts; the output can be scrolled with the regular arrow keys in the terminal.

localectl list-keymaps
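When you roughly know the name, it can be quicker to filter the list with grep than to scroll it. The live command is shown as a comment (localectl assumed available on the ISO); the same filter is then applied to a tiny stand-in list so the effect is visible.

```shell
# On the live ISO:
#   localectl list-keymaps | grep -i uk
# The same filter on a stand-in list, to show what it does:
printf '%s\n' de uk uk-extd us | grep -i '^uk'
```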

Based on a quick Google search, good UK fonts are ter-v32b, Uni2-Terminus16, Lat2-Terminus32x16, and Lat2-Terminus16. The ter-v20b font is a smaller variant that seems to be a good fit for my monitor size, i.e. 24″ at 1080p. Notably, these fonts won't be available in the installed system, perhaps because I'm missing some crucial step that copies the fonts over to the installed OS.

setfont ter-v20b

Similar to the layout name, it is possible to preview the fonts by listing all files in the fonts folder. The --color=always parameter preserves colour in the output, and -R makes less render it.

ls -lash --color=always /usr/share/kbd/consolefonts/ | less -R

Verify the boot mode [1]. The output should be either 64 or 32; in my case, it was 64. If the file is missing, the system was booted in BIOS mode.

cat /sys/firmware/efi/fw_platform_size
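The interpretation described above can be sketched as a small helper that treats a missing file (empty input) as a BIOS boot:

```shell
# Map the contents of fw_platform_size to a human-readable boot mode.
boot_mode() {
  case "$1" in
    64) echo "UEFI 64-bit" ;;
    32) echo "UEFI 32-bit" ;;
    *)  echo "BIOS (legacy)" ;;
  esac
}

# An empty result (file missing) falls through to the BIOS case.
boot_mode "$(cat /sys/firmware/efi/fw_platform_size 2>/dev/null)"
```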

Verify the networking status by first checking if the network interface is up. Importantly, the UP flag appears inside angle brackets. In my case, it was easy to mistake this for state DOWN, especially since the latter was highlighted in red due to the --color parameter.

ip --color link

If the connection is down and Wi-Fi is the only option, use iwctl. From an extensive search on the internet, this seems to be the easiest and most commonly recommended way of connecting to Wi-Fi.

iwctl

There could be a special situation where we are bound to the notorious eduroam. Then we have to create a configuration file and let iwd pick it up. An example file is stored at project/archlinux/var/lib/iwd/eduroam.8021x; it is a configuration for the eduroam network using a login and password, as required by iwctl, and it also lets the built-in DHCP daemon resolve an IP address for the host. If the installed OS plans to use iwctl to connect to eduroam later, install iwd and dhcpcd, and then run systemctl enable iwd and systemctl enable dhcpcd so the daemons run on system start-up. Then, inside the iwctl tool:

device list
station wlan0 scan
station wlan0 get-networks
station wlan0 connect [network_name]
exit
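As a hedged illustration (the identities, domain, and password below are placeholders, and the exact EAP settings depend on the institution), an iwd 802.1X profile for eduroam typically looks like this:

```ini
[Security]
EAP-Method=PEAP
EAP-Identity=anonymous@example.ac.uk
EAP-PEAP-Phase2-Method=MSCHAPV2
EAP-PEAP-Phase2-Identity=username@example.ac.uk
EAP-PEAP-Phase2-Password=changeme

[Settings]
AutoConnect=true
```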

Check the internet connection:

ping archlinux.org

Manually check and sync the time and date:

timedatectl

List the available storage devices and their locations:

lsblk

In my case, it listed /dev/nvme0n1. Then, I used the fdisk utility to partition the drive:

fdisk /dev/nvme0n1

Next, you are presented with a somewhat interactive text-based interface (TUI), although, in my opinion, it operates in a rather cryptic way. For example, it uses the single letter p to print information about the current partitioning, if any. I would personally prefer to type out full commands, but we have to work with what is provided. At this stage, I created an EFI partition, a swap partition, and a Linux filesystem partition. This part is not detailed in the official guide, as partitioning schemes are use-case specific. Fortunately, the author in [1] offers some helpful hints.

g # Create GPT.
ENTER
n # Create new partition.
ENTER
ENTER
ENTER
+512M
ENTER
t # Change type.
ENTER
ENTER
1 # To EFI.
ENTER
n # Create new partition.
ENTER
ENTER
ENTER
+32G
ENTER
t # Change type.
ENTER
ENTER
swap # To swap.
ENTER
n # Create new partition.
ENTER
ENTER
ENTER
ENTER
p # View the results so far.
ENTER
w # Write it.
ENTER
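For reference, the same three-partition layout can be expressed non-interactively as an sfdisk script (a sketch, assuming util-linux's sfdisk and its type aliases; it would be fed in with sfdisk /dev/nvme0n1 < layout.sfdisk):

```text
label: gpt
size=512MiB, type=uefi
size=32GiB, type=swap
type=linux
```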

Format the partitions:

mkfs.fat -F 32 /dev/nvme0n1p1
mkswap /dev/nvme0n1p2
mkfs.ext4 /dev/nvme0n1p3

Now, mount the file systems. Be careful with the order here: first mount the main filesystem, and then mount the EFI system partition. If you mount them in the wrong order, the EFI mount point will be hidden by the root mount and fail to appear correctly in /etc/fstab later.

mount /dev/nvme0n1p3 /mnt
mount --mkdir /dev/nvme0n1p1 /mnt/boot
swapon /dev/nvme0n1p2

Install the essential packages: base, linux, and linux-firmware. These form the minimum needed to have a functional operating system.

The efibootmgr and grub packages are required to pass control from the UEFI firmware to the Linux kernel. There are other bootloaders available, but I am most familiar with these. Without these packages, the rest of the installation would not work.

It is possible to install amd-ucode after booting into Arch Linux, but it is best to install it during setup so that it can be loaded early through the bootloader for full effect. If you are using an Intel CPU, use intel-ucode instead. If you are in a VM then the microcode packages should be omitted.

The networkmanager package is needed to connect to the internet after the installation, in order to complete the post-installation setup. For most users, NetworkManager is the better choice due to its ease of use, GUI support, and good compatibility with desktop environments. iwd is lighter and faster, but it is better suited to minimal or embedded setups.

The neovim package provides a text editor, needed, for instance, to edit the /etc/locale.gen file later on. It so happens that my editor of choice is neovim.

pacstrap -K /mnt base linux linux-firmware efibootmgr grub networkmanager neovim amd-ucode

Generate an fstab file.

genfstab -U /mnt >> /mnt/etc/fstab
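For orientation, the -U flag makes genfstab reference filesystems by UUID. A sketch of what the generated entries may look like for the layout above (the UUIDs and option strings are hypothetical; the real file will differ):

```text
# /dev/nvme0n1p3
UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa  /      ext4  rw,relatime  0 1
# /dev/nvme0n1p1
UUID=AAAA-AAAA                             /boot  vfat  rw,relatime  0 2
# /dev/nvme0n1p2
UUID=bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb  none   swap  defaults     0 0
```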

Verify /etc/fstab.

vim /mnt/etc/fstab

Change root into the new system.

arch-chroot /mnt

Set the time zone.

ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime

Run hwclock to generate /etc/adjtime.

hwclock --systohc

Edit /etc/locale.gen and uncomment en_GB.UTF-8 UTF-8 and other needed UTF-8 locales. Then, generate the locales.

locale-gen

Create the /etc/locale.conf file, and set the LANG variable accordingly.

LANG=en_GB.UTF-8

If you set the console keyboard layout, make the changes persistent in /etc/vconsole.conf. Later on, we can also change the font of the console [1].

KEYMAP=uk

Create the /etc/hostname file.

Standard-PC

Set the root password.

passwd

Add user.

useradd -mG wheel user
passwd user

Deploy grub.

grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB

Generate the grub configuration (it will include the microcode installed with pacstrap earlier).

grub-mkconfig -o /boot/grub/grub.cfg
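An optional, quick way to confirm the microcode made it into the generated config is to grep for the ucode initrd. The check is written as a helper (the live path is shown as a comment) so it can be demonstrated on a sample file:

```shell
# Returns success when the given grub.cfg references a microcode image.
has_microcode() { grep -q 'ucode' "$1"; }

# On the installed system:
#   has_microcode /boot/grub/grub.cfg && echo "microcode initrd present"
```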

Enable NetworkManager before rebooting. This is considered good practice within the community. The main reason is to ensure network connectivity after rebooting, especially in situations where an SSH service is set up and there is no input or output available, such as no keyboard, mouse, or monitor connected to the system.

systemctl enable NetworkManager

Exit from chroot.

exit

Unmount everything; this also checks whether the drive is still busy.

umount -R /mnt

Reboot the system and unplug the installation media.

Note

Instead, I prefer to shut down the PC, unplug the USB drive, and boot back manually.

shutdown -h now

How to Post Install Arch

After the installation of barebones Arch Linux [1], a basic terminal login should be presented to the user. Log in with your user credentials.

Once logged in, ensure that the internet connection is working. Refer to the ip --color link command mentioned in [1] to verify the networking status. In situations where there is only Wi-Fi, use nmcli; otherwise, a wired connection is usually picked up automatically.

nmcli device wifi list
nmcli device wifi connect *SSID* password *your_wifi_password*
nmcli connection show

Enable and start the time synchronisation service. It requires an active internet connection to synchronise the system clock with online NTP servers.

timedatectl set-ntp true

Configure pacman and the mirrors. Everything was okay for me, so I just skimmed through and left it as is.

nvim /etc/pacman.d/mirrorlist
nvim /etc/pacman.conf

Update and upgrade the system. If not root, first execute su -.

pacman -Syu

To proceed further, it is paramount to install custom packages that help better manage hardware resources and improve the overall UX.

  • sudo is important: it allows us to run commands as the superuser.
  • The base-devel package group provides essential development tools like make, gcc, and binutils, which are required to compile software, especially when using the AUR (Arch User Repository) [1].
  • openssh is used to ssh into machines and manage keys.
  • man lets us "read the f-ing manual" [1].
  • git and git-lfs are there because of VCS [1].
  • curl and wget download files from the internet.
  • xorg and xorg-xinit enable a GUI through the X Window System, as I consider Wayland too early to adopt.
  • xf86-video-amdgpu is optional; it was included because the target computer's iGPU is AMD-based.
  • polkit-gnome is optional. Some software requires privileges and is capable of elevating them itself if polkit is available; polkit comes as a dependency of polkit-gnome.
  • alacritty is the default terminal emulator of choice.
  • firefox is a compromise between being widely available and providing the features I like.
  • keepassxc is a password manager.
  • rclone is what I use to connect to my Google Drive.
  • redshift makes colours warmer during late hours.
  • p7zip is used to extract archives.
  • pipewire, pipewire-alsa, pipewire-pulse, pipewire-jack, and wireplumber manage audio. PipeWire is the new audio framework replacing PulseAudio and JACK, and wireplumber is the PipeWire session manager.
  • bluez and bluez-utils provide bluetoothctl.
  • Finally, terminus-font, nerd-fonts, ttf-terminus-nerd, noto-fonts, noto-fonts-cjk, noto-fonts-emoji, noto-fonts-extra, and ttf-font-awesome install a bunch of fancy fonts, at the cost of a large amount of storage, i.e. approximately 8-9 GB. On the bright side, they should cover almost any character, including various emojis.

If not root, first execute su -.

pacman --sync sudo base-devel openssh man git git-lfs curl wget xorg xorg-xinit xf86-video-amdgpu polkit-gnome alacritty firefox keepassxc rclone redshift p7zip pipewire pipewire-alsa pipewire-pulse pipewire-jack wireplumber bluez bluez-utils terminus-font nerd-fonts ttf-terminus-nerd noto-fonts noto-fonts-cjk noto-fonts-emoji noto-fonts-extra ttf-font-awesome

Set up sudo privileges for the wheel group by uncommenting the %wheel ALL=(ALL:ALL) ALL line.

Note

You don't need to reboot the PC for the changes to take effect.

su -
EDITOR=nvim visudo

Here’s how you can set EDITOR=nvim system-wide on Arch Linux or similar systems using /etc/environment. Open /etc/environment with root privileges:

sudo nvim /etc/environment

Add the line below (or modify it if EDITOR already exists), then save and exit. This change applies to all users at the next login.

EDITOR=nvim

As in [1, 2], to find which package provides a command, run the following.

clear ; pacman -Fy bluetoothctl

I wanted to keep the barebones Arch Linux installation as minimal as possible, so I moved the console font configuration here as a post-installation step. Append the line below to /etc/vconsole.conf. The FONT parameter depends on the terminus-font package and, while optional, setting it resolved a warning raised when rebuilding the initramfs with mkinitcpio [1]. This font is used system-wide when booting into the system.

FONT=ter-v20b

After the modification, the initial RAM file system needs to be rebuilt. After a reboot, the console font should change to the one specified in the configuration file.

sudo mkinitcpio -p linux

To enable Bluetooth, install bluez and bluez-utils (already included in the package list above). The utilities package contains the bluetoothctl command [1]. Enable and start the service.

clear ; sudo systemctl status bluetooth
clear ; sudo systemctl enable bluetooth
clear ; sudo systemctl start bluetooth
clear ; sudo systemctl status bluetooth

To connect a device via bluetoothctl.

bluetoothctl
power on
agent on
scan on
pair <MAC>
connect <MAC>
trust <MAC>

The most up-to-date approach to managing audio seems to be pipewire, but it comes with its own quirks, like rtkit warnings [1]. On the bright side, it seems to work out of the box.

At this point, it might be a good idea to install a GUI like dwm. Refer to the instructions in [1] to install dwm.

Since we installed polkit and the agent, let's add it to .xinitrc. This is optional, as it requires polkit and implies that we use a WM instead of a DE.

exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &

At this point, the customised part of ~/.xinitrc should look as follows. To find LATITUDE:LONGITUDE for Redshift, follow the instructions at [1]; for example, London is 51.50853:-0.12574.

# my rules
setxkbmap gb
status &
redshift -P -O 2700 -l LATITUDE:LONGITUDE &
exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
exec dwm

And, ~/.bashrc should have the following.

alias ls='LC_COLLATE=C ls --color=auto --group-directories-first --sort=version'
alias ll='ls -lashF'

PATH=~/.local/bin:$PATH

# By default, Arch would use `vi` as the default editor.
export EDITOR=nvim

There was a situation where the PC used the iGPU to render graphics but the dGPU to output them. In situations where the PC should be forced to use xf86-video-amdgpu, modify /etc/X11/xorg.conf.d/xorg.conf. Bear in mind that "PCI:19:0:0" is taken from the output of lspci -nnk, where the numbers are in hexadecimal, but the BusID must be in decimal. I didn't need to prepend zeros, e.g. PCI:0:19:0 is invalid.

Section "Device"
    Identifier "iGPU"
    Driver "amdgpu"
    BusID "PCI:19:0:0"
EndSection
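Since the hex-to-decimal detail is easy to trip over, here is a small hedged helper (not from the original text) that turns an lspci slot address such as 13:00.0 into the decimal BusID form Xorg expects:

```shell
# Convert an lspci slot like "13:00.0" (hex) to "PCI:19:0:0" (decimal).
lspci_to_busid() {
  IFS=':.' read -r bus dev fn <<EOF
$1
EOF
  # printf interprets the 0x-prefixed values as hex and prints decimal.
  printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

lspci_to_busid 13:00.0   # the example from the text
```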

At this point, the installation of Arch Linux should be somewhat usable. Further post-installation advice is available at [1].

How to Install Ubuntu

Once the installation has started and the system has loaded, as described in section 1, follow the on-screen instructions.

How to Post Install Ubuntu

The post-installation phase involves setting up programs that are useful regardless of the system’s purpose — whether for computer vision tasks, game development, or general use. The programs listed in this section are valuable in any setup.

Note

Some packages, like build-essential, may already be installed by default on certain systems, such as Ubuntu 20.04.5.

The build-essential package installs important development tools, including a compiler, linker, libraries, and headers used during software compilation. Many third-party programs rely on it, and errors can occur if it is missing. It often comes pre-installed on some distributions.

sudo apt install build-essential

The ubuntu-restricted-extras package installs software not included in the Ubuntu installation ISO due to copyright restrictions. It includes programs such as:

  • ttf-mscorefonts-installer
  • libavcodec-extra
  • libavcodec-extra58
  • libmspack0
  • libvo-amrwbenc0
  • unrar
  • cabextract
  • libaribb24-0

Install it with:

sudo apt install ubuntu-restricted-extras

The gstreamer1.0-libav, gstreamer1.0-plugins-ugly, and gstreamer1.0-vaapi packages provide additional multimedia support, similar to what ubuntu-restricted-extras offers. However, unlike ubuntu-restricted-extras, these packages are available on many Debian-based distributions, not just Ubuntu.

Install them with:

sudo apt install gstreamer1.0-libav \
                 gstreamer1.0-plugins-ugly \
                 gstreamer1.0-vaapi

The fonts-powerline package installs special fonts which make the BASH shell easier on the eyes, particularly when using tools like oh-my-bash.

sudo apt install fonts-powerline

The p7zip-full package installs a utility to archive and unarchive files. This is an important tool to have.

Note

A small reminder: when extracting files, the -o option in the 7z command must not have a space between -o and the output path.

Example:

7z x "archive.zip" -ooutput/path
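Because the missing space is easy to get wrong, a tiny helper that only builds and prints the command string (names hypothetical) makes the glued -o explicit:

```shell
# Build (not run) a 7z extraction command; note -o glued to its path.
extract_cmd() { printf '7z x "%s" -o%s\n' "$1" "$2"; }

extract_cmd archive.zip output/path
```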

Install p7zip-full with:

sudo apt install p7zip-full

The curl and wget packages install utilities that allow downloading files from remote computers via URLs. Typically, wget comes pre-installed with Ubuntu, but curl may not. I prefer to have both available.

sudo apt install curl wget

The ffmpeg package installs a utility for working with media files, such as MPEG-4 videos and MP3 audio. However, I have found it better to install ffmpeg inside a conda environment to keep the system-level environment clean.

sudo apt install ffmpeg

The git package is the most common source code management tool. Although there are other tools like Plastic SCM and Perforce, most teams use git. It is also useful to install git-lfs for handling large files.

sudo apt install git git-lfs

Sometimes, Git Large File Storage (LFS) can cause issues. I usually resolve them with:

git lfs install --skip-repo

How to Troubleshoot gdm3 on Ubuntu with Wayland

On Ubuntu 24.04, the gdm3 display manager runs on Wayland by default. However, some drivers, e.g. the ast driver, don't work with Wayland. To switch to Xorg, uncomment the WaylandEnable=false line in the /etc/gdm3/custom.conf configuration file. Save the file and reboot. This is the only working solution I have found so far.

I have tried many other things, such as reinstalling gdm3 by removing it. I switched to tty3 using Ctrl+Alt+F3 and stopped the display manager service, i.e. gdm3, using the commands below. gdm and display-manager.service act as aliases, and, in my opinion, it is better to stop the original service. I tried removing and then rebooting, as well as removing without rebooting; I also tried with and without stopping the gdm3 service. Unfortunately, it didn't help.

sudo systemctl stop gdm3
sudo apt remove gdm3
sudo apt install gdm3

I have also tried reinstalling by purging gdm3, running the same experiments as with removing. Initially, when I stopped the service, purged it, and installed it without rebooting, the display manager started to work. Happily, I logged into my user account and all seemed to be working. However, when I rebooted, the problem came back.

sudo systemctl stop gdm3
sudo apt purge gdm3
sudo apt install gdm3

How to Install Windows

Once the installation has started and the system has loaded, as described in section 1, follow the on-screen instructions.

How to Post Install Windows

  1. Windows 11 After installation, the B650I AX prompts with GIGABYTE Control Center. My advice is to avoid it; the software is heavily underdeveloped. I started by installing the chipset driver, mb_driver_597_chipset_7.01.08.129. Then the LAN driver, mb_driver_654_w11_11.21.0903.2024; the audio driver, mb_driver_612_realtekdch_6.0.9733.1; the APU video driver, mb_driver_2689_apu5_24.30.18.241224; and the Bluetooth driver, mb_driver_675_realtek8852_1.1068.2401.1903. Interestingly, it asked whether it should be re-installed using compatibility settings, which I actually did, but only once.

  2. Windows 10

    Windows 10 is a major release of Microsoft's Windows NT operating system. It is the direct successor to Windows 8.1, which was released nearly two years earlier. It was released to manufacturing on July 15, 2015, and later to retail on July 29, 2015. Windows 10 was made available for download via MSDN and TechNet, as a free upgrade for retail copies of Windows 8 and Windows 8.1 users via the Microsoft Store, and to Windows 7 users via Windows Update. Windows 10 receives new builds on an ongoing basis, which are available at no additional cost to users, in addition to additional test builds of Windows 10, which are available to Windows Insiders. Devices in enterprise environments can receive these updates at a slower pace, or use long-term support milestones that only receive critical updates, such as security patches, over their ten-year lifespan of extended support. In June 2021, Microsoft announced that support for Windows 10 editions which are not in the Long-Term Servicing Channel (LTSC) will end on October 14, 2025.

    • To display Russian characters instead of squares.
      1. Open Language settings.
      2. Open Administrative language settings.
      3. Open Language for non-Unicode programs.
      4. Click on the "Change system locale..." button.
      5. Under Current system locale, select Russian (Russia).
        1. Don't check "Beta: Use Unicode UTF-8 for worldwide language support".
      6. Restart the PC.
    • List of common applications I need.
      • 7-zip
      • miniconda3 (more details here)
      • FastStone Image Viewer
      • Git
      • KeePass
      • Substituted KeePass with KeePassX because it is more modern.
      • VLC
      • WSL (more details here)
      • VirtualBox required Microsoft Redistributable 2019
      • Docker (more details here)

How to Bypass the Networking Requirement during the Windows 11 Installation

Shift + F10 to open up Windows Command Prompt.

oobe\bypassnro

The system should reboot and, at the step where it prompts the user to connect to the internet, the I don't have internet button should appear.

What to Install

  1. Timeshift to backup & restore on Linux.
  2. Remmina to connect to a remote desktop on Linux. Remote Desktop Connection on Windows.
  3. Firefox to access the internet.
  4. KeePassXC to manage passwords.
  5. rclone to use Google Drive to share files.
  6. mc, nnn to manage files.
  7. nvim, VS Code to edit text.

Remote Desktop

Remotely accessing desktops is a simple concept, but very cumbersome in practice. There are a lot of paid solutions such as AnyDesk, TeamViewer, ZeroTier, etc. However, ideally, we want a self-hosted solution, so that we can run it indefinitely without heavy reliance on third-party developers. There we could consider open-source solutions like the RustDesk & RustDesk Server software, xrdp, freerdp, etc. Furthermore, we can consider software such as Guacamole. Unfortunately, all of this software has its strong and not-so-strong sides. In fact, practically, I find Google Chrome Remote Desktop to be the easiest software to deal with, although, personally, I am not a big fan of it. Historically, Google Chrome Remote Desktop was the easiest to install, and when I came to my work, everyone would understand it. For example, I could say: "Oh, just use Chrome Remote Desktop", and my colleagues would get it without any further explanation. Convenient, isn't it?

Anyhow, it is time to step forward: we are acquiring new equipment, and I am going to set up a complex system with the ability to give remote access to our equipment on demand, whenever and wherever I am situated. For this, I have continued my experiments with various remote desktop software, and here I will document some of my findings, starting with RustDesk. First of all, I found that for self-hosting we need a pair of programs: RustDesk as the client, and RustDesk Server as the server application. I set up the server application on an AWS EC2 instance, which was relatively easy. And, following guidelines I found on the internet, I shared the ID, Relay Server, and Key with the client app. It all seemed to be working on the first day, but on the second day it would fail to connect to the server from my Windows machine, while not failing from my Linux machine. Also, I had to use xrdp instead of the built-in GNOME remote desktop on Ubuntu 24.04, because the latter wouldn't work the way I wanted, giving me black screens.

Eventually, I stumbled upon Guacamole and, following guidelines on the internet, installed it natively on Ubuntu 24.04. Notably, Ubuntu 24.04 doesn't provide Tomcat 9, because they moved forward to Tomcat 10. However, Guacamole doesn't like Tomcat 10, while it has no problems with Tomcat 9. To solve this problem, I had to download files directly from the Tomcat website. Luckily, the installation was straightforward. The reason I didn't go with version 10 was that Guacamole didn't support it at the time; I'm not sure what it is like now. Furthermore, marrying Guacamole version 1.5.5 with GNOME remote desktop was impossible: apparently, there is a bug related to that version of Guacamole. Again, to solve this problem, I had to clone the official mirror repository of Guacamole on GitHub. After this, it seemed to be working, fingers crossed...

To install Guacamole server-side service, start by installing dependencies:

sudo apt install build-essential libcairo2-dev libjpeg-turbo8-dev \
    libpng-dev libtool-bin libossp-uuid-dev libvncserver-dev \
    freerdp2-dev libssh2-1-dev libtelnet-dev libwebsockets-dev \
    libpulse-dev libvorbis-dev libwebp-dev libssl-dev \
    libpango1.0-dev libswscale-dev libavcodec-dev libavutil-dev \
    libavformat-dev

Then, install server-side service.

git clone https://github.com/apache/guacamole-server
cd guacamole-server
autoreconf -fi
sudo ./configure --with-init-dir=/etc/init.d --enable-allow-freerdp-snapshots
sudo make
sudo make install

After installing the service, I continued with configuring it.

sudo ldconfig
sudo systemctl daemon-reload
sudo systemctl start guacd
sudo systemctl enable guacd

As a rule of thumb, I also checked the status of the service.

sudo systemctl status guacd

Here I just followed the guide, as I had no problems creating folders at this stage either.

sudo mkdir -p /etc/guacamole/{extensions,lib}

From now on, I focused on Tomcat 9.

sudo apt install openjdk-17-jdk
sudo useradd -m -U -d /opt/tomcat -s /bin/false tomcat
sudo wget https://downloads.apache.org/tomcat/tomcat-9/v9.0.96/bin/apache-tomcat-9.0.96.tar.gz -P /tmp
sudo tar -xvf /tmp/apache-tomcat-9.0.96.tar.gz -C /opt/tomcat
sudo chown -R tomcat:tomcat /opt/tomcat

Tomcat is installed; we proceed to configure it.

cd /etc/systemd/system
sudo nvim tomcat.service

Once I created the file, I pasted in the following configuration.

[Unit]
Description=Tomcat Server
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64"
WorkingDirectory=/opt/tomcat/apache-tomcat-9.0.96
ExecStart=/opt/tomcat/apache-tomcat-9.0.96/bin/startup.sh

[Install]
WantedBy=multi-user.target

Restarting the services so they pick up the changes won't hurt at this stage.

sudo systemctl daemon-reload
sudo systemctl start tomcat
sudo systemctl enable tomcat

At this point, accessing the service at port 8080 should render a page saying all is working. Now I can focus my attention on the Guacamole web app client.

sudo wget https://downloads.apache.org/guacamole/1.5.5/binary/guacamole-1.5.5.war
sudo mv guacamole-1.5.5.war /opt/tomcat/apache-tomcat-9.0.96/webapps/guacamole.war
sudo systemctl restart tomcat guacd

This is a "production-ready" approach, so we use a database for user authentication instead of a simple XML-file as was documented in the official manual.

sudo apt install mariadb-server -y
sudo mysql_secure_installation

As a note, I didn't use unix_socket authentication, but a regular password; the rest I left at the defaults. Next, the guide suggested downloading an older version of the MySQL Connector/J (v8.0.26), but I felt brave and downloaded the latest version, 9.1.0. The download procedure was different for me for some reason: I had to manually download the file from the website.

sudo apt install ./mysql-connector-j_9.1.0-1ubuntu24.04_all.deb
sudo cp /usr/share/java/mysql-connector-j-9.1.0.jar /etc/guacamole/lib/

Next, I focused on the Apache Guacamole JDBC AUTH plugin.

sudo wget https://downloads.apache.org/guacamole/1.5.5/binary/guacamole-auth-jdbc-1.5.5.tar.gz
sudo tar -xf guacamole-auth-jdbc-1.5.5.tar.gz
sudo mv guacamole-auth-jdbc-1.5.5/mysql/guacamole-auth-jdbc-mysql-1.5.5.jar /etc/guacamole/extensions/

Now, it is time to configure our database.

sudo mysql -u root -p

These are the commands inside MariaDB.

MariaDB [(none)]> CREATE DATABASE guac_db;
MariaDB [(none)]> CREATE USER 'guac_user'@'localhost' IDENTIFIED BY 'password';
MariaDB [(none)]> GRANT SELECT,INSERT,UPDATE,DELETE ON guac_db.* TO 'guac_user'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT;

After that, we want to apply the schema.

cd guacamole-auth-jdbc-1.5.5/mysql/schema
cat *.sql | mysql -u root -p guac_db

Next, we should tell Guacamole how it should handle user data. We create a simple properties file.

sudo nvim /etc/guacamole/guacamole.properties

And, populate it with the configuration below.

# MySQL properties
mysql-hostname: 127.0.0.1
mysql-port: 3306
mysql-database: guac_db
mysql-username: guac_user
mysql-password: password

Restart all relevant services.

sudo systemctl restart tomcat guacd mysql

Now, the service should be accessible at port 8080 under /guacamole, and the default login and password should both be guacadmin. At this point, it is strongly recommended to create a new admin user and password and delete the default credentials. To do this, from the guacadmin profile, click on Settings. Under the Edit User section, enter your new username and password. Then, under the Permissions section, check all the boxes. When you are done, click Save. Now log out from the default user and log back into Apache Guacamole with the newly created user. Then navigate to the Settings and Users tab, and delete the guacadmin user. That's it, you are done. From there, you can securely access your servers and computers.

Virtualisation

When working with VirtualBox, enabling a bridged network is a simple matter of changing the networking adapter in the program's settings to a bridged adapter, with the real Ethernet adapter selected below it. However, VirtualBox doesn't let us do hardware passthrough, or at least I didn't find a definite answer to this on the internet. Otherwise, VirtualBox is a very useful piece of software that I use from time to time to experiment with various OSes and new software.

Moving to QEMU/KVM, to allow virtual machines to connect to the outside world, the host's network should be reconfigured. The swtpm package is responsible for TPM emulation; I used it to satisfy the Windows 11 installation requirement. The virtiofsd package is needed to share folders between the host and guests. The guides at [1, 2, 3] suggested installing WinFSP on Windows and enabling the Virtio FS service. However, I had compatibility issues installing virtio-win-tools: it kept disabling my mouse. Instead, I manually applied the driver in Device Manager by right-clicking on it and selecting the Update driver option. As for the service, I manually recreated the path to the executable and copied it from the virtio-win CD-ROM, e.g. virtiofsd.exe and virtiofsd.pdb. The following commands were used to create the service.

sc create VirtioFsSvc binpath= "C:\Program Files\Virtio-Win\VioFS\virtiofs.exe" start= auto depend= "WinFsp.Launcher/VirtioFsDrv" DisplayName= "Virtio FS Service"
sc start VirtioFsSvc

Experimentally, I have found that I don't have to install edk2-ovmf on Ubuntu because I could choose Secure Boot in virt-manager without it. However, on Arch Linux, it seems necessary. To install QEMU/KVM with virt-manager, do the following:

sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system bridge-utils virt-manager swtpm virtiofsd

Usually, on Ubuntu 24.04.1, the user is already part of the libvirt group, so we just need to add the user to the kvm group.

sudo adduser $USER kvm

Once the user is added to the groups, I would suggest rebooting. After the reboot, we can check that the installation is correct by listing the virtual machines on the system.

sudo virsh list --all

Or, we can check the status of the libvirt daemon.

sudo systemctl status libvirtd

Then, we can create a bridge interface, although the default NAT network already allows VMs to communicate with the host and each other; through port forwarding on the host, we can also expose the VMs to the outside world.

nmcli con show
nmcli con add ifname br0 type bridge con-name br0
nmcli con add type bridge-slave ifname eth0 master br0
nmcli con mod br0 bridge.stp no
nmcli con down eth0
nmcli con up br0
nmcli device show
sudo systemctl restart NetworkManager.service

Although creating bridges seems to be the most logical approach, sometimes we can't use them, and port forwarding is a viable alternative. Here is how to forward ports from the host's physical interface to the virtual interface where the guest VMs reside.

iptables -t nat -I PREROUTING -p tcp -d HOST_IP --dport HOST_PORT -j DNAT --to-destination GUEST_IP:GUEST_PORT
iptables -I FORWARD -m state -d GUEST_IP/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT
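To double-check the substitution before touching the firewall, the rule can be produced by a small helper that only prints the command; the addresses below are hypothetical.

```shell
#!/bin/bash

# Print (do not execute) the DNAT rule for a given host->guest mapping,
# so the substituted command can be reviewed before it is applied.
dnat_rule() {
    local host_ip=$1 host_port=$2 guest_ip=$3 guest_port=$4
    echo "iptables -t nat -I PREROUTING -p tcp -d ${host_ip} --dport ${host_port} -j DNAT --to-destination ${guest_ip}:${guest_port}"
}

# Hypothetical example: forward host port 2222 to the guest's SSH port.
dnat_rule 192.168.1.10 2222 192.168.122.100 22
```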

For now, I am using iptables-persistent to save and reload the port forwarding rules between reboots.

sudo apt install iptables-persistent

It saves the rules in /etc/iptables/rules.v4 and /etc/iptables/rules.v6. To save the current rules, I use the following command. I had to wrap it in a bash -c call because elevated privileges are needed for both the command and the redirection.

sudo bash -c "iptables-save > /etc/iptables/rules.v4"

To set up a static IP inside the guest VM, we need to modify the netplan configuration file for the NetworkManager service.

network:
    version: 2
    renderer: NetworkManager
    ethernets:
        INTERFACE_NAME:
            dhcp4: false
            addresses:
                - STATIC_IP/24
            routes:
                - to: default
                  via: HOST_IP
            nameservers:
                addresses: [HOST_IP]

For example, INTERFACE_NAME=eth0, STATIC_IP=192.168.1.100, HOST_IP=192.168.1.1. Save the above in the /etc/netplan/01-network-manager-all.yaml file and apply the plan.
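With those example values substituted, the complete file would read as follows (the addresses are illustrative):

```yaml
network:
    version: 2
    renderer: NetworkManager
    ethernets:
        eth0:
            dhcp4: false
            addresses:
                - 192.168.1.100/24
            routes:
                - to: default
                  via: 192.168.1.1
            nameservers:
                addresses: [192.168.1.1]
```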

sudo chmod 600 /etc/netplan/01-network-manager-all.yaml
sudo netplan try

Then, we set up KVM to allow guest VMs to use the bridge interface. Start by creating a file in an arbitrary location on the computer and name it host-bridge.xml.

<network>
    <name>host-bridge</name>
    <forward mode="bridge"/>
    <bridge name="br0"/>
</network>

Then execute these commands (as a user in the libvirt group):

virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge

We also need a mechanism to allow connections from outside: load the br_netfilter module now and make it load on boot.

sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf

Create /etc/sysctl.d/10-bridge.conf.

# Do not filter packets crossing a bridge
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0

Apply and check the config.

sudo sysctl -p /etc/sysctl.d/10-bridge.conf
sudo sysctl -a | grep "bridge-nf-call"

Configure the guest to use host-bridge. Open Virtual Machine Manager and select the target guest. Go to the NIC device. The drop-down for "Network Source" should now include a device called "Virtual network 'host-bridge'". The "Bridge network device model" will be "virtio" if that is your KVM configuration's default. Select the "host-bridge" device.

Useful commands:

A command to set automatic start of VMs when host boots up.

virsh autostart <vm-name>

A command to undo automatic start of VMs when host boots up.

virsh autostart --disable <vm-name>

A command to set automatic start of the Docker containers when host boots up.

docker update --restart unless-stopped <container-name>

A command to undo automatic start of the Docker containers when host boots up.

docker update --restart no <container_name_or_id>

An alternative to iptables could be nginx

It is possible to configure nginx as a reverse proxy server on the host machine. That way, all traffic could be forwarded to the appropriate guest machine. Furthermore, using OpenResty with Lua, it is possible to pull forwarding rules from a database like MySQL, PostgreSQL, etc. Building on top of that, a FastAPI server could update the rules via a simple REST API endpoint. Finally, a user-oriented web application could be developed using React.js, Vue.js, or AngularJS. That would constitute a fully automated, user-friendly system for host-guest VM management.

References:

  • Guide to add a bridge interface to the Ubuntu desktop using nmcli
  • Another guide to add a bridge interface that helped to understand how to set dynamic IP instead of static
  • Guide to setup a bridged network for KVM guests
  • Another useful guide, this is where I started my research from
  • 1/3 of QEMU/KVM Ubuntu 24.04 installation instructions I used
  • 2/3 of QEMU/KVM Ubuntu 24.04 installation instructions I used
  • 3/3 of QEMU/KVM Ubuntu 24.04 installation instructions I used
  • 1/3 of port forwarding instructions I used
  • 2/3 of port forwarding instructions I used
  • 3/3 of port forwarding instructions I used
  • 1/2 of setting static IP using netplan and NetworkManager
  • 2/2 of setting static IP using netplan and NetworkManager

GPU Passthrough

I decided to make this a separate section because the task is complex enough on its own.

The following command shows which kernel driver each GPU is currently using.

lspci | grep ' VGA ' | cut -d" " -f 1 | xargs -i lspci -v -s {}

The output should look something like this.

bd:00.0 VGA compatible controller: NVIDIA Corporation GA102GL [RTX A6000] (rev a1) (prog-if 00 [VGA controller])
    Subsystem: NVIDIA Corporation GA102GL [RTX A6000]
    Physical Slot: 7
    Flags: bus master, fast devsel, latency 0, IRQ 439, NUMA node 1, IOMMU group 30
    Memory at e9000000 (32-bit, non-prefetchable) [size=16M]
    Memory at 22bfe0000000 (64-bit, prefetchable) [size=256M]
    Memory at 22bff0000000 (64-bit, prefetchable) [size=32M]
    I/O ports at d000 [size=128]
    Expansion ROM at ea000000 [virtual] [disabled] [size=512K]
    Capabilities: <access denied>
    Kernel driver in use: nvidia
    Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
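To script the same check, the driver name can be pulled out of that output with sed; the sample line below stands in for the real lspci output.

```shell
#!/bin/bash

# Sample line captured from `lspci -v` output (hardware-dependent); on a
# real system, pipe the lspci command above through the same sed filter.
sample='    Kernel driver in use: nvidia'

# Keep only the driver name after the label.
driver=$(printf '%s\n' "$sample" | sed -n 's/.*Kernel driver in use: //p')
echo "$driver"
```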

For a successful GPU passthrough, I installed Ubuntu 24.04.1 without NVIDIA drivers using only nouveau.

Modify grub.

sudo nvim /etc/default/grub

Make it look like this.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"

Save the changes, update grub to apply them, and then reboot.

sudo update-grub

Blacklist the GPUs by creating a configuration file.

sudo nvim /etc/modprobe.d/gpu-passthrough-blacklist.conf

Make it look like this.

blacklist nouveau
blacklist snd_hda_intel

Bind GPUs to VFIO by creating another configuration file.

sudo nvim /etc/modprobe.d/vfio.conf

Make it look like this.

options vfio-pci ids=XXXX:XXXX,YYYY:YYYY

XXXX:XXXX,YYYY:YYYY are the vendor:device IDs found using the lspci -nnk | grep -e NVIDIA command; the IDs are located at the end of each line. Then, save and apply the changes.

sudo update-initramfs -u
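The ID pairs can also be extracted mechanically from lspci -nn output; the sample line below is hypothetical, and on a real system the printf would be replaced with the actual lspci command.

```shell
#!/bin/bash

# Hypothetical line of `lspci -nn` output; on a real system, replace the
# printf with: lspci -nn | grep NVIDIA
sample='2d:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1650 SUPER] [10de:2187] (rev a1)'

# The vendor:device pair is the bracketed xxxx:xxxx token; the [0300]
# class code has no colon and therefore does not match.
ids=$(printf '%s\n' "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$ids"
```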

Example

Installing Ubuntu 24.04.1 from a USB flash drive is a standard procedure; however, on the step where it asks to install third-party drivers, we leave the checkbox empty. The "Download and install support for additional media formats" option was greyed out because the machine didn't have access to the internet. The goal is to ensure that the GPUs aren't used by the host system but rather on-demand by guest VMs.

The system failed to boot on the first try. Rebooted, but the screen was black. Switched to tty3 and enabled Xorg support in the gdm3 configuration file. Rebooted and logged in. Clicked through the welcome pages and connected to the internet via Wi-Fi.

Given access to the internet, the system was updated and upgraded. Notably, the connection was very slow, i.e. it took around 30 minutes to complete the task. When prompted to restart, I chose "Restart later" and then manually restarted. Enabled RDP and SSH.

Followed the instructions to perform the initial configuration. Skipped build-essential, ubuntu-restricted-extras, ffmpeg, fonts-powerline, gstreamer, and wget. Proceeded with the Virtualisation instructions. It is very important to skip sudo apt upgrade because it changes some of the packages, which makes gdm3 fail to start when blacklisting nouveau! That was a big issue, and it took me two days to narrow it down to this point. As it wasn't necessary to upgrade, I left it at this stage.

gpu0/gpuall SSH will be available at 192.168.122.100:221.
            RDP will be available at 192.168.122.100:33891.

gpu1/gpuduo1 SSH will be available at 192.168.122.101:221.
             RDP will be available at 192.168.122.101:33891.

gpu2/gputrio SSH will be available at 192.168.122.102:221.
             RDP will be available at 192.168.122.102:33891.

gpu3/gpuduo2 SSH will be available at 192.168.122.103:221.
             RDP will be available at 192.168.122.103:33891.

fileserver SSH will be available at 192.168.122.104:221.
           RDP will be available at 192.168.122.104:33891.
           MinIO will be available at 192.168.122.104:9000.
           MinIO Dashboard will be available at 192.168.122.104:9001.

References:

  • A guide to setup GPU passthrough
  • A guide to setup GPU passthrough
  • A guide to setup GPU passthrough
  • How to lspci to check GPU kernel driver in use
  • The comment in this link was intriguing
  • 1/2 of the guide the comment was referring to
  • 2/2 of the guide the comment was referring to

GPU Passthrough for Arch

A simple Bash script that will give you all the valuable IOMMU information, e.g. 10de:2882 or 10de:22be. The script is available at project/dotfiles/.local/bin/iommu and I usually install it at ~/.local/bin/iommu.

    #!/bin/bash

    for d in /sys/kernel/iommu_groups/*/devices/*; do
        n=${d#*/iommu_groups/*}; n=${n%%/*}
        printf 'IOMMU Group %s ' "$n"
        lspci -nns "${d##*/}"
    done

The guides [1, 2, 3] mention the ACS kernel patch, which allows the system to split hardware devices into separate IOMMU groups, which in turn should provide higher granularity over the devices we want to pass through. I skipped this step.

Modified /etc/default/grub by adding the following to GRUB_CMDLINE_LINUX_DEFAULT.

amd_iommu=on iommu=pt vfio-pci.ids=10de:21c4,10de:1aeb

Once you've got all these options set, go ahead and commit the changes, then reboot.

sudo grub-mkconfig -o /boot/grub/grub.cfg

After the reboot, the output of dmesg | grep vfio indicated that the vfio module wasn't loaded during boot; consequently, the selected PCI devices weren't bound to the vfio drivers. To work around this behaviour, I had to modify the /etc/mkinitcpio.conf file.

UPDATE: it seems the vfio_virqfd module is built into vfio [1], so I removed it from the MODULES list.

MODULES=(vfio_pci vfio vfio_iommu_type1)

I saved the file, and ran the following.

sudo mkinitcpio -p linux

The following command installs the software required to create a VM. The core packages are qemu and libvirt; however, I use the virt-manager GUI for easier management. The dmidecode package is a dependency needed by virt-manager, I think. Somewhere along the journey, virt-manager will try to create a network which uses dnsmasq. The edk2-ovmf package is a UEFI firmware for virtual machines based on EDK II (EFI Development Kit). The swtpm package is needed to satisfy the TPM 2.0 requirement of Windows 10/11. When prompted, I selected the third option: qemu-full.

sudo pacman --sync qemu libvirt virt-manager edk2-ovmf dnsmasq dmidecode swtpm

Start the libvirt daemon.

sudo systemctl enable --now libvirtd
sudo systemctl enable --now virtlogd.socket

Activate the default network and make it persistent between boots.

sudo virsh net-start default
sudo virsh net-autostart default

Either use polkit with polkit-gnome or add the user to the libvirt group.

sudo usermod -aG libvirt $USER

Software RAID

From an internet search, the suggested tool is mdadm.

To install mdadm, run the following command in terminal.

sudo apt install mdadm

The following command will create the RAID 5 array.

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

Format the RAID array with a filesystem, create a mount point, and mount it.

sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0/
df -h -x devtmpfs -x tmpfs

Then, still on the host machine, make the array and the mount persistent between reboots.

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab

References:

  • The 1/2 guide
  • The 2/2 guide

BIOS RAID

Set the SATA mode from AHCI to RAID.

Created a volume under Advanced/Intel(R) VROC sSATA Controller/Create RAID Volume.

Unfortunately, my configuration doesn't support the Intel(R) VROC sSATA Controller, so I will have to use software RAID.

CPU: Intel(R) Xeon(R) Silver 4410Y
Generation: 4th "Sapphire Rapids"
Chipset: Intel® C621A Chipset

References:

  • Intel® Virtual RAID on CPU (Intel® VROC) Operating Systems Support List
  • Release Notes Intel® Virtual RAID on CPU (Intel® VROC) for Linux*
  • Intel® Xeon® Silver 4410Y Processor
  • Intel® Virtual RAID on CPU (Intel® VROC) Linux* Driver for Intel® Server Boards and Systems Based on Intel® 621A Chipset
  • GPU SuperServer SYS-740GP-TNRT

VM for MinIO

To set up a file storage server for research work, we will set up MinIO on Ubuntu 24.04 running in a QEMU/KVM virtual machine. We have already set up RAID 5 on the host machine and simply created a virtual disk using the entire available space.

Waiting for Ubuntu to install inside VM...

Used the parted command line tool to partition and format the hard drive.

Modified fstab to add the automount option, using the UUID instead of a path because it is more reliable.

Below are commands to run a Docker container of MinIO.

mkdir -p ${HOME}/minio/data

docker run \
-p 9000:9000 \
-p 9001:9001 \
--user $(id -u):$(id -g) \
--name minio1 \
-e "MINIO_ROOT_USER=ROOTUSER" \
-e "MINIO_ROOT_PASSWORD=CHANGEME123" \
-v ${HOME}/minio/data:/data \
quay.io/minio/minio server /data --console-address ":9001"
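The same container can be captured in a Docker Compose file for easier restarts. This is a sketch mirroring the run command above; the user mapping "1000:1000" is an assumption and should match the output of id -u and id -g.

```yaml
services:
  minio1:
    image: quay.io/minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: ROOTUSER
      MINIO_ROOT_PASSWORD: CHANGEME123
    user: "1000:1000"
    volumes:
      - ${HOME}/minio/data:/data
```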

References:

  • The tutorial I used to set up the virtual disk in the guest OS

What are base-devel Dependencies

List of dependencies that base-devel brings with itself. To highlight that base-devel is optional, at least in theory, I put it in the post-installation section [1].

  • archlinux-keyring
  • autoconf
  • automake
  • binutils
  • bison
  • debugedit
  • fakeroot
  • file
  • findutils
  • flex
  • gawk
  • gcc
  • gettext
  • grep
  • groff
  • gzip
  • libtool
  • m4
  • make
  • pacman
  • patch
  • pkgconf
  • sed
  • sudo
  • texinfo
  • which

How to Install dwm and dmenu from Source Code

The dwm and dmenu packages have the following dependencies on Arch Linux: base-devel git libx11 libxft xorg-server xorg-xinit terminus-font [1]. The fonts are optional. Also, the aforementioned guide used st, but I opted for alacritty.

mkdir -p ~/.local/src
git clone git://git.suckless.org/dmenu ~/.local/src/dmenu
git clone git://git.suckless.org/dwm ~/.local/src/dwm

To install dmenu.

cd ~/.local/src/dmenu
nvim config.mk

To avoid needing sudo in make install, change the PREFIX in config.mk to ~/.local [1]. While there, the Xinerama lines can be commented out if multi-monitor support isn't needed:

# XINERAMALIBS  = -lXinerama
# XINERAMAFLAGS = -DXINERAMA

Compile and install.

make clean
make install

To install dwm.

cd ~/.local/src/dwm
nvim config.mk

Edit config.mk to achieve the same as earlier.

# XINERAMALIBS  = -lXinerama
# XINERAMAFLAGS = -DXINERAMA

Edit config.def.h to set the alacritty terminal instead of st.

static const char *termcmd[] = { "alacritty", NULL };

Compile and install.

make clean
make install

Make the dwm executable available on the user's PATH.

nvim ~/.bashrc

Edit .bashrc file.

PATH=~/.local/bin:$PATH

Copy .xinitrc from default location to home folder for customisation [1].

cp /etc/X11/xinit/xinitrc ~/.xinitrc
nvim ~/.xinitrc

Edit .xinitrc file.

exec dwm

Then, when ready to switch to the GUI, run the following.

clear ; startx

There are many shortcuts to remember [1].

How to Customise dwm and dmenu

The suckless software is customised primarily through patches [1, 2, 3]. These patches modify the source code, which is written in C. To apply a patch, issue a command such as: patch < path/to/patch.diff. I would recommend issuing the command from the root folder of the project. Any *.rej files contain changes that were rejected due to conflicts; these changes must be applied manually [1].
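The whole cycle can be rehearsed safely in a scratch directory before touching the real source; the file and diff below are made up purely for the demonstration.

```shell
#!/bin/bash

# Rehearse the patch workflow on a throwaway file.
set -e
dir=$(mktemp -d)
cd "$dir"

printf 'line one\nline two\n' > config.def.h

# A tiny unified diff standing in for a real suckless patch.
cat > demo.diff <<'EOF'
--- config.def.h
+++ config.def.h
@@ -1,2 +1,2 @@
 line one
-line two
+line two, patched
EOF

# Apply it from the project root, exactly as with a real patch.
patch < demo.diff
cat config.def.h
```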

Applied the patches for dwm in order of appearance: bar height, barpadding, vanitygaps.

The config.h file is generated and, in my case, was owned by root. Therefore, it was easier to modify the config.def.h file and remove config.h when rebuilding.

To set a custom font, e.g. Terminus, Tamzen, JetBrains Mono, Hack, or Source Code Pro, change the arguments from "monospace:size=10" to "xos4 Terminus:pixelsize=14" in config.def.h.

    static const unsigned int borderpx  = 2;  /* border pixel of windows */
    static const unsigned int gappiv    = 10; /* vert inner gap between windows */
    static const int user_bh            = 10; /* 2 is the default spacing around the bar's font */
    static const int vertpad            = 10; /* vertical padding of bar */
    static const int sidepad            = 10; /* horizontal padding of bar */
    static const char *fonts[]          = { "xos4 Terminus:pixelsize=14" };
    static const char dmenufont[]       = "xos4 Terminus:pixelsize=14";

The patches for dmenu: bar height, border, case-insensitive, center.

    static const int user_bh   = 10; /* add an defined amount of pixels to the bar height */
    static const char *fonts[] = { "xos4 Terminus:pixelsize=14" };
    /* -l option; if nonzero, dmenu uses vertical list with given number of lines */
    static unsigned int lines  = 5;
    /* Size of the window border */
    static unsigned int border_width = 2;

Things I need in a status bar:

  1. Time & date
  2. Volume level
  3. Wi-Fi status
  4. Bluetooth status
  5. TBD

A script that achieves this is available at project/dotfiles/.local/bin/status. I usually copy it over to ~/.local/bin/status. An example is displayed below. If there are squares instead of icons, then the system is either missing fonts that support icons or the text editor is misconfigured.

    #!/bin/bash

    while true; do
        # Volume
        VOL=$(wpctl get-volume @DEFAULT_AUDIO_SINK@ | awk '{printf("%d", $2 * 100)}')

        # Time
        TIME=$(date '+%Y/%m/%d %H:%M:%S')

        # Wi-Fi
        WIFI=$(nmcli -t -f active,ssid dev wifi | grep '^yes' | cut -d: -f2)
        SIGNAL=$(nmcli -t -f active,ssid,signal dev wifi | grep '^yes' | cut -d: -f3)

        # Bluetooth
        BT_INFO=$(bluetoothctl info)
        BT_CONNECTED=$(echo "$BT_INFO" | grep "Connected:" | awk '{print $2}')
        if [ "$BT_CONNECTED" = "yes" ]; then
            BT_NAME=$(echo "$BT_INFO" | grep "Name:" | cut -d ' ' -f2-)
            BT_RAW_RSSI=$(echo "$BT_INFO" | grep "RSSI:" | awk '{print $2}')
            BT_DEC_RSSI=$(printf "%d" "$BT_RAW_RSSI")

            # Signal quality based on dBm.
            if [ "$BT_DEC_RSSI" -ge -60 ]; then
                BT_QUALITY="Excellent"
            elif [ "$BT_DEC_RSSI" -ge -70 ]; then
                BT_QUALITY="Good"
            elif [ "$BT_DEC_RSSI" -ge -80 ]; then
                BT_QUALITY="Fair"
            else
                BT_QUALITY="Poor"
            fi

            BT_STATUS=" ${BT_NAME} (${BT_DEC_RSSI} dBm, ${BT_QUALITY})"
        else
            BT_STATUS=" Disconnected"
        fi

        # Set status bar
        xsetroot -name "  ${VOL}% |   ${WIFI:-Disconnected} (${SIGNAL:-0}%) | ${BT_STATUS} | ${TIME}"

        sleep 1
    done
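The RSSI bucketing in the script can be checked in isolation with a small function; the thresholds below match the ones used in the script.

```shell
#!/bin/bash

# Map a Bluetooth RSSI reading (in dBm) to the quality label used in the
# status bar script above.
bt_quality() {
    local rssi=$1
    if [ "$rssi" -ge -60 ]; then
        echo "Excellent"
    elif [ "$rssi" -ge -70 ]; then
        echo "Good"
    elif [ "$rssi" -ge -80 ]; then
        echo "Fair"
    else
        echo "Poor"
    fi
}

bt_quality -55 # Excellent
bt_quality -75 # Fair
bt_quality -90 # Poor
```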

New Way to Pass Through a GPU

Enable IOMMU [1] in /etc/default/grub. Both Arch and Ubuntu will have this file if the installed OS uses GRUB to boot.

Method 1

For Intel CPUs:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"

For AMD CPUs:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt"

Method 2

This tells the system: "Any device with a PCI vendor:device ID matching 10de:2882 or 10de:22be should be bound to vfio-pci."

For Intel CPUs:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt vfio-pci.ids=10de:2882,10de:22be"

For AMD CPUs:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt vfio-pci.ids=10de:2882,10de:22be"

Why is the first method better than the second? Well, imagine the following situation.

  • gpu1: 10de:1111,10de:1112
  • gpu2: 10de:1111,10de:1112
  • gpu3: 10de:2221,10de:1112

We would like to pass gpu1 and gpu2, which share the same IDs, to vfio-pci, while gpu3 should be left untouched. Here's where it gets tricky: since gpu3 also includes a component (its audio controller) with a PCI ID on the list (10de:1112), that component will be bound to vfio-pci too, even though gpu3 itself shouldn't be passed through.

So, instead of using vfio-pci.ids, which matches IDs globally, use device-specific binding based on PCI addresses (Bus:Device.Function). Identify the full PCI addresses of the devices you want to pass through (and only those). Create a script to bind those specific devices, /usr/local/bin/vfio-bind.sh. Make sure the script runs early enough in the boot process to grab the devices before any other driver claims them. There are a few ways to do this on Ubuntu, but the cleanest and most robust is a systemd service that runs at the right time: before device drivers initialise but after the necessary sysfs paths exist. Substitute 0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:02:00.1 with your PCI addresses.

#!/bin/bash

# Load vfio-pci.
modprobe vfio-pci

# Use driver_override (cleaner way).
for dev in 0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:02:00.1; do
  echo "vfio-pci" > /sys/bus/pci/devices/$dev/driver_override
done

# Bind the devices.
for dev in 0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:02:00.1; do
  echo $dev > /sys/bus/pci/drivers/vfio-pci/bind
done

Make it executable:

sudo chmod +x /usr/local/bin/vfio-bind.sh

Create the systemd service at /etc/systemd/system/vfio-bind.service.

[Unit]
Description=Bind GPUs to vfio-pci
Before=basic.target
After=systemd-modules-load.service
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/usr/local/bin/vfio-bind.sh

[Install]
WantedBy=basic.target

🔍 DefaultDependencies=no ensures it runs early, and basic.target is reached before most services or drivers load.

Enable and rebuild initramfs (optional but recommended).

sudo systemctl enable vfio-bind.service

If you're not using vfio-pci.ids in your GRUB anymore (good!), you don’t have to regenerate initramfs, but if you're still using any early-loading module configs, you might:

sudo update-initramfs -u

After rebooting, confirm your devices are bound:

lspci -nnk | grep -A 3 -E 'VGA|Audio'

Look for your passthrough GPUs showing Kernel driver in use: vfio-pci.

How to Reject DHCP Service using NetworkManager

Create /etc/NetworkManager/dispatcher.d/10-block-bad-dhcp (and make it executable) to reject a rogue secondary DHCP service on a Linux system that uses NetworkManager. Change the hardcoded eno1 interface name according to the available adapter.

#!/bin/bash

IFACE="$1"
STATUS="$2"

logger "NetworkManager: Triggered on $IFACE with status $STATUS"

if [ "$IFACE" = "eno1" ] && [[ "$STATUS" == "up" || "$STATUS" == "dhcp4-change" ]]; then
    IP=$(ip -4 addr show "$IFACE" | awk '/ inet / {print $2}' | cut -d/ -f1)
    if [[ "$IP" == 192.168.* ]]; then
        logger "NetworkManager: Blocking DHCP lease from $IP on $IFACE"
        ip addr flush dev "$IFACE"
        nmcli device disconnect "$IFACE"
        sleep 2
        nmcli device connect "$IFACE"
    else
        logger "NetworkManager: Accepted $IP on $IFACE"
    fi
fi

exit 0

How to develop the hardcode project

git clone git@github.com:couper64/hardcode.git
cd hardcode

conda create -yn hardcode python=3
conda activate hardcode
pip install -r requirements.txt

Finally, after the environment is set up and the Python packages are installed for local development, the documentation can be served. To run the public documentation:

mkdocs serve -f public/mkdocs.yml

And, to run the private documentation.

mkdocs serve -f private/mkdocs.yml

From this point on, the rendered pages should update immediately every time the changes are saved.

Note

For completeness' sake, below are the commands to build the static website, but they aren't needed in our use case.

mkdocs build -f public/mkdocs.yml
mkdocs build -f private/mkdocs.yml

How to resize a KVM virtual disk

First, get the path to the *.qcow2 file; it is used as an argument to the resizing commands.

sudo virsh domblklist <vm_name>

Make sure that there are no snapshots, otherwise resizing won't work.

sudo virsh snapshot-list <vm_name>

Remove any snapshots using this command.

sudo virsh snapshot-delete <vm_name> <snapshot_name>

Resize the virtual disk.

sudo qemu-img resize /data/kvm/<vm_name> +10G

Check that the virtual disk has been resized.

sudo qemu-img info /data/kvm/<vm_name>

Instructions are based on this guide.

How to set default text editor

Set it either temporarily in a TTY or shell session, or persistently in .bashrc [1].

export EDITOR=nano

How to enable TPM on KVM

sudo pacman -S swtpm

Then in virt-manager [1, 2]:

  1. Allow customization before installation by checking the box. You can also configure the VM network; for this guide I used a bridged network.
  2. On the overview window, select Add Hardware.
  3. Add TPM 2.0 with the following settings, then click Finish to apply the changes.
    1. Model: TIS
    2. Backend: Emulated device
    3. Version: 2.0
  4. Just before you begin the installation, remember to change the firmware. Apply the changes and begin the installation.
    1. Firmware: *_secboot.fd

How to install surf on Arch Linux

Install dependencies [1, 2].

sudo pacman -S gcr webkit2gtk-4.1
git clone git://git.suckless.org/surf ~/.local/src/surf

Use the following to open a webpage.

surf http://your-url

During operation, use ctrl-g to enter a new URL [1]. Further documentation can be found at [1] as well.

How to Install and Use rclone

To install rclone, use the command sudo pacman --sync rclone on Arch.

To mount a Google Drive remote with rclone, use the command rclone mount remote_name: /path/to/mountpoint replacing remote_name with your configured remote and /path/to/mountpoint with the local folder.

How to Use Timeshift

sudo blkid
lsblk
sudo nano /etc/fstab
/dev/disk/by-uuid/your-partition-uuid /mnt/your-mount-point ext4 defaults 0 0

Or,

/dev/disk/by-uuid/your-partition-uuid /mnt/your-mount-point ext4 noauto 0 0

Then create, list, or restore snapshots.

sudo timeshift --create
sudo timeshift --list
sudo timeshift --restore

I consider tools like Back In Time, Pika Backup, or Déjà Dup not user friendly because I keep facing permissions-related errors with my home directory. Therefore, I have ditched the idea of using two tools to back up my stuff and focus on Timeshift only for now.

How to Clean Wipe a Drive

sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress

Appendix A

To hide the unmounted volumes, the setting is located at Settings/Ubuntu Desktop/Dock/Configure Dock Behavior/Include Unmounted Volumes.

Command to recursively delete files.

find . -type f -name '*.o' -exec rm {} +

Command to recursively clone a repository.

git clone --recurse-submodules git://github.com/foo/bar.git

Command to kill Ngrok process.

kill -9 "$(pgrep ngrok)"

Commands to run Ngrok in the background.

clear ; ngrok http http://localhost:8080 > /dev/null &
clear ; export WEBHOOK_URL="$(curl -s http://localhost:4040/api/tunnels | jq -r ".tunnels[0].public_url")"
clear ; echo $WEBHOOK_URL
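What jq extracts can be verified offline against a canned response; the JSON below only mimics the shape of the ngrok API reply, and the URL is made up.

```shell
#!/bin/bash

# Canned stand-in for the http://localhost:4040/api/tunnels response.
json='{"tunnels":[{"public_url":"https://example.ngrok.io"}]}'

# -r strips the JSON quotes so the value is usable as a plain URL.
url=$(printf '%s' "$json" | jq -r '.tunnels[0].public_url')
echo "$url"
```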

Commands to install and set up Ngrok. First, sign up (or sign in) and retrieve the authorisation token.

snap install ngrok
ngrok config add-authtoken <token>

Another installation method: when I installed ngrok through snap, it couldn't start a service, but when installed through apt, it worked.

curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | \
  sudo gpg --dearmor -o /etc/apt/keyrings/ngrok.gpg && \
  echo "deb [signed-by=/etc/apt/keyrings/ngrok.gpg] https://ngrok-agent.s3.amazonaws.com buster main" | \
  sudo tee /etc/apt/sources.list.d/ngrok.list && \
  sudo apt update && sudo apt install ngrok
sudo ngrok service install --config /path/to/config.yml
sudo ngrok service start

Although all the messages were indicating "ok", it didn't work for me. Here is the config file.

authtoken: <your-auth-token>
tunnels:
    default:
        proto: http
        addr: 8080

Appendix B

How to Rofi

Rofi: https://github.com/davatorium/rofi
Themes: https://github.com/adi1090x/rofi
