
Arm64 Proxmox node deployment

These are the steps to set up a new node with Proxmox on arm64.

Install pxvirt (Proxmox arm64 port)

  • Download the latest pxvirt ISO
  • Write it to your favorite USB key, and boot into Graphical UI mode
  • After agreeing to the EULA, make sure you target the right hard disk.
  • We want a ZFS installation, so click Options, set the filesystem to zfs (RAID0), and set Harddisk 1 as -- do not use --. This sets up ZFS in single-disk mode. If installing to an SSD, set ashift to 13 under Advanced options. Click Next.
  • Set your country and timezone as desired (in our case Canada and America/Toronto), leaving the keyboard layout as U.S. English.
  • Set the root password.
  • Set the network configuration, making sure that your Management Interface is connected to the internal network, not the DMZ. The hostname should follow convention, that is to say pve-(node name).ilot.io.
  • After installation, remove the installation medium and reboot.

Use faster mirrors

By default, pxvirt uses Chinese Debian mirrors. These are slow, so let's switch to other ones:

sed -i 's|https://mirrors.ustc.edu.cn/debian|http://ftp.ca.debian.org/debian|' /etc/apt/sources.list
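
After switching mirrors, refresh the package index to confirm the new mirror is reachable:

apt update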

Use homebrewed kernel

On ilot, we use our own homebrewed kernel, as pxvirt is based on the openEuler kernel while Proxmox is based on Ubuntu's kernel. Ubuntu's kernel also tends to better support Ampere. The kernel, built in this repo, can be installed by adding this repository:

sudo curl https://ayakael.net/api/packages/forge/debian/repository.key -o /etc/apt/keyrings/forgejo-forge.asc
echo "deb [signed-by=/etc/apt/keyrings/forgejo-forge.asc] https://ayakael.net/api/packages/forge/debian bookworm main" | sudo tee -a /etc/apt/sources.list.d/forgejo.list
sudo apt update

Afterwards, install it using apt:

sudo apt install pve-kernel-6.8-generic

Make sure the new kernel is pinned using proxmox-boot-tool
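
A minimal sketch of pinning, assuming the kernel installs as version 6.8.12-1-generic (the exact version string will differ; check with kernel list):

proxmox-boot-tool kernel list                  # list installed kernels
proxmox-boot-tool kernel pin 6.8.12-1-generic  # hypothetical version string; use the one listed above
proxmox-boot-tool refresh                      # update the boot entries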

Set up Let's Encrypt for HTTPS access to the Proxmox interface and services

Install certbot:

apt update
apt install certbot

Create an API key at https://ca.ovh.com/auth/api/createToken, and store it in /etc/letsencrypt/ovh/ayakael.conf as:

# OVH API credentials used by Certbot
dns_ovh_endpoint = ovh-ca
dns_ovh_application_key = MDAwMDAwMDAwMDAw
dns_ovh_application_secret = MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAw
dns_ovh_consumer_key = MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAw

With permissions:

  • GET /domain/zone/*
  • PUT /domain/zone/*
  • POST /domain/zone/*
  • DELETE /domain/zone/*
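
Note that certbot does not create the /etc/letsencrypt/ovh directory, so create it before saving the file:

mkdir -p /etc/letsencrypt/ovh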

Make sure to limit access to this file to root only:

chmod 600 /etc/letsencrypt/ovh/ayakael.conf

Debian trixie does not include the python3-certbot-dns-ovh package like Debian bookworm did. We must thus set up a virtual environment for it:

apt-get install python3-venv
python3 -m venv /usr/local/lib/python3.13/virtual-env
/usr/local/lib/python3.13/virtual-env/bin/python3.13 -m pip install certbot-dns-ovh

To use the virtual environment, you need to change the shebang in /usr/bin/certbot by applying the following diff:

diff --git a/usr/bin/certbot.orig b/usr/bin/certbot
index bff3ceb..8674ee2 100755
--- a/usr/bin/certbot.orig
+++ b/usr/bin/certbot
@@ -1,4 +1,4 @@
-#! /usr/bin/python3
+#! /usr/local/lib/python3.13/virtual-env/bin/python3.13
 # EASY-INSTALL-ENTRY-SCRIPT: 'certbot==4.0.0','console_scripts','certbot'
 import re
 import sys
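
You can then confirm the plugin is visible to certbot (dns-ovh should appear in the output):

certbot plugins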

To facilitate deployment of SSL certificates to VMs after automatic renewals, a hook is created under /etc/letsencrypt/renewal-hooks/deploy which copies certificates to a shared folder (/var/lib/vz/ssl) and reloads HTTP servers. Place an ssl-deploy.sh script in that folder with the following content:

#!/bin/bash
PVE_DOMAIN=ayakael.net

cp /etc/letsencrypt/live/$PVE_DOMAIN/fullchain.pem /etc/pve/local/pveproxy-ssl.pem
cp /etc/letsencrypt/live/$PVE_DOMAIN/privkey.pem /etc/pve/local/pveproxy-ssl.key
systemctl restart pveproxy 
systemctl restart nginx

[ ! -d /var/lib/vz/ssl ] && mkdir /var/lib/vz/ssl
cp -RL /etc/letsencrypt/live/. /var/lib/vz/ssl/

find /var/lib/vz/ssl/ -maxdepth 1 -mindepth 1 -type d -exec openssl pkcs12 -export -out '{}'/certificate.p12 -inkey '{}'/privkey.pem -in '{}'/cert.pem -certfile '{}'/chain.pem -passout pass: \;

chown 00:82 /var/lib/vz/ssl -R
find  /var/lib/vz/ssl -type d -exec chmod 775 '{}' \;
find  /var/lib/vz/ssl -type f -exec chmod 640 '{}' \;

#lxc=($(grep -l '/var/lib/ssl' /etc/pve/lxc/* | sed -e 's|.*\/||' -e 's|.conf||'))
#for i in ${lxc[@]}; do
#        lxc-attach -n ${i} -- service nginx reload
#done
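
Certbot only runs hooks that are executable, so mark the script accordingly (path assuming the deploy hook directory above):

chmod +x /etc/letsencrypt/renewal-hooks/deploy/ssl-deploy.sh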

Afterwards, create the certificate:

certbot certonly --dns-ovh \
    --dns-ovh-credentials /etc/letsencrypt/ovh/ayakael.conf \
    -d ayakael.net -d '*.ayakael.net'
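
You can then confirm that unattended renewal will work with a dry run:

certbot renew --dry-run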

To facilitate SSL settings management, we add an nginx config snippet under /var/lib/vz/ssl/choq.ca/nginx.conf with this content (the paths below are guest-side, since guests mount the share at /var/lib/ssl):

ssl_certificate /var/lib/ssl/choq.ca/fullchain.pem;
ssl_certificate_key /var/lib/ssl/choq.ca/privkey.pem;
ssl_trusted_certificate /var/lib/ssl/choq.ca/chain.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; ";
ssl_stapling on;
ssl_stapling_verify on;
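
A guest's nginx vhost can then pull in these settings with a single include; a minimal sketch, with hypothetical server name and backend:

server {
    listen 443 ssl;
    server_name cloud.choq.ca;                # hypothetical vhost
    include /var/lib/ssl/choq.ca/nginx.conf;  # shared SSL settings from above

    location / {
        proxy_pass http://127.0.0.1:8080;     # hypothetical backend
    }
}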

The SSL folder is shared with containers using a mount point. You can easily do this by adding this line to the container config (/etc/pve/lxc/<vmid>.conf):

mp1: /var/lib/ssl/ilot.io,mp=/var/lib/ssl/ilot.io,size=512M,shared=1
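
Equivalently, a sketch of adding the same mount point from the shell, assuming a hypothetical container with vmid 100:

pct set 100 -mp1 /var/lib/ssl/ilot.io,mp=/var/lib/ssl/ilot.io,size=512M,shared=1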

The SSL folder is shared with the VMs using virtiofs. To make the directory available, you must add the /var/lib/vz/ssl folder to Directory Mappings and mount it in the VM, following these steps:

  • Create a directory mapping via Datacenter -> Directory Mappings -> Add with the name certbot-ssl
  • On the VM, in the Hardware section, add the directory mapping by creating a virtiofs device
  • Boot the VM, and create the mount directory with mkdir /var/lib/ssl
  • Add the virtiofs mount to /etc/fstab by adding this line: certbot-ssl /var/lib/ssl virtiofs rw,relatime 0 0
  • Mount it via mount /var/lib/ssl

Repeat these steps on each VM.

User setup

For auditing and security, casual usage of root should be avoided. Instead, each person needing access gets their own user with sudo privileges. Since CHOQ doesn't have more than one person needing access, we will be keeping the infra user created in the previous step.

To allow sudo privileges, edit /etc/sudoers with visudo and uncomment the wheel group privileges, applying this diff:

diff --git a/etc/sudoers.orig b/etc/sudoers
index 8fad9a1..6d4e762 100644
--- a/etc/sudoers.orig
+++ b/etc/sudoers
@@ -122,7 +122,7 @@ Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
 root ALL=(ALL:ALL) ALL
 
 ## Uncomment to allow members of group wheel to execute any command
-# %wheel ALL=(ALL:ALL) ALL
+%wheel ALL=(ALL:ALL) ALL
 
 ## Same thing without a password
 # %wheel ALL=(ALL:ALL) NOPASSWD: ALL
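
The privileges only apply to members of the wheel group, so add the user to it (assuming the infra user from above):

usermod -aG wheel infra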

We also need to disable root login via SSH by applying this diff to /etc/ssh/sshd_config:

diff --git a/etc/ssh/sshd_config.orig b/etc/ssh/sshd_config
index c5be6fb..aae22ee 100644
--- a/etc/ssh/sshd_config.orig
+++ b/etc/ssh/sshd_config
@@ -33,7 +33,7 @@ Include /etc/ssh/sshd_config.d/*.conf
 # Authentication:
 
 #LoginGraceTime 2m
-PermitRootLogin yes
+PermitRootLogin no
 #StrictModes yes
 #MaxAuthTries 6
 #MaxSessions 10
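
After editing, reload the SSH daemon so the change takes effect (the service is named ssh on Debian):

systemctl reload ssh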

For a nice-looking prompt, we use zsh and ayakael's zshrc. Make sure to change the default shell to zsh in /etc/passwd:

apt install zsh ncurses-term  # ncurses-term ships the rxvt-unicode terminfo on Debian
mkdir -p /etc/zshrc
cd /etc/zshrc
git clone https://ayakael.net/forge/zshrc zshrc.git
cp -a zshrc.git/. .  # copies hidden files as well
rm -R zshrc.git
cd /home/infra
touch .zshrc
chsh -s /bin/zsh infra  # updates the default shell in /etc/passwd

Finally, add your ssh keys for access to infra in /home/infra/.ssh/authorized_keys.

Network setup

The network on the node is separated in two. One interface is dedicated to Proxmox (a "green" interface) and another is dedicated to the guests (an "orange" interface). If the orange interface is set up in a separate DMZ subnet, the guests will not be able to access the host, which improves security.

In this setup, the green interface is called vmbr11 while the orange one is vmbr10. The diff looks something like this:

diff --git a/etc/network/interfaces.orig b/etc/network/interfaces
index fbb85cc..357f667 100644
--- a/etc/network/interfaces.orig
+++ b/etc/network/interfaces
@@ -7,12 +7,20 @@ iface enP3p3s0f0 inet manual
 
 iface enP3p3s0f1 inet manual
 
-auto vmbr0
-iface vmbr0 inet static
+auto vmbr10
+iface vmbr10 inet manual
+       bridge-ports enP3p3s0f1
+       bridge-stp off
+       bridge-fd 0
+#orange
+
+auto vmbr11
+iface vmbr11 inet static
        address 10.10.1.11/24
        gateway 10.10.1.1
        bridge-ports enP3p3s0f0
        bridge-stp off
        bridge-fd 0
+#green
 
 source /etc/network/interfaces.d/*

Notice that no address or gateway is set for vmbr10. This is what makes it so that the guests cannot access the host through that bridge.

Cluster setup

To join an existing cluster, you simply need to execute, from the new node: pvecm add IP-ADDRESS-CLUSTER
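
For example, assuming an existing cluster node reachable at 10.10.1.10 on the management network (hypothetical address):

pvecm add 10.10.1.10
pvecm status  # verify the new node appears in the member list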