Setting up an AREDN supernode + local node with `systemd-nspawn`
News: If you are near the Pittsburgh area, contact me to join the local AREDN mesh!
Warning
Containers are officially unsupported by OpenWrt, so this setup will likely never be officially supported by AREDN either.
Background
systemd-nspawn is a containerization tool built into pretty much all modern Linux distributions.
It compartmentalizes applications using native Linux kernel features such as namespaces.
Currently, AREDN supports running on physical devices and virtual machines.
This is an experiment to see if we can configure working AREDN nodes inside lightweight containers, like systemd-nspawn.
In this guide, we will set up two containers: a supernode and a local node. The resulting network topology will look like this:
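Roughly, as an ASCII sketch reconstructed from the interfaces we configure later in this guide:

```
                      internet
                         │  (NAT on the host)
          ┌──────────────┴────────────────┐
          │ host      br-aredn.wan        │  VLAN 1, 192.168.10.1/24
          │                │              │
          │            br-aredn (bridge)  │
          │            /         \        │
          │      veth-super   veth-local  │
          └───────────│────────────│──────┘
                      │            │
            ┌─────────┴──┐   ┌─────┴──────┐
            │ supernode  │   │ localnode  │
            │ eth0   LAN │   │ eth0   LAN │
            │ eth0.1 WAN │   │ eth0.1 WAN │
            │ eth0.2 DtD │   │ eth0.2 DtD │
            └────────────┘   └────────────┘
```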
The advantages of this approach are:
- Very lightweight compared to a VM-based solution.
- Potentially higher performance, since packets stay inside the host kernel's network stack instead of crossing a virtualized NIC.
- Availability of the full `iproute2` toolset on the host: just use `ip netns exec <namespace> <command>` to run commands inside the container's network namespace (see the example after this list).
Disadvantages:
- Not beginner-friendly at all. There are many manual network configuration steps.
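For example, once the `supernode` namespace from the steps below exists, you can inspect and debug the container's networking directly from the host (a quick illustration; it assumes `tcpdump` is installed on the host):

```
# Show interfaces and addresses inside the supernode container's namespace
ip netns exec supernode ip -br address show

# Capture DtD traffic on the container's VLAN 2 interface, from the host
ip netns exec supernode tcpdump -ni eth0.2
```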
Warning
Be aware that AREDN runs everything as root inside the container. This means that container breakout is totally possible.
I developed this setup knowing that I can mostly trust the AREDN firmware, and that the host machine is dedicated to this purpose.
You should evaluate your own threat model. If in doubt, use a VM instead.
Prerequisites
- A Linux host using `systemd`. You might need to install the `systemd-container` or `systemd-nspawn` package, depending on your distribution.
- Some time.
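For example, on Debian-based hosts (an assumption about your distribution), the tool lives in the `systemd-container` package:

```
apt install systemd-container
```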
Steps
I will skip all the basic OS installation steps and assume your host machine
- boots and runs a `systemd`-based Linux distribution (most do since 2015),
- runs on a kernel that supports network namespaces (likely the case),
- has the `systemd-nspawn` and `ip` commands and other common Unix utilities installed,
- has a working internet connection.
I will also omit `sudo` from the commands, since pretty much every step in this guide requires root privileges.
Prepare the container filesystems
1. Download the AREDN firmware image: choose the “x86_64 Generic” version from the firmware selection page. It should give you a file ending with `.img.gz`. Unpack it with `gunzip` to get a `.img` file. I will be referring to this file as `aredn-x86-64-generic-ext4-combined.img`.

2. Create two directories to hold the container filesystems. I used:

   ```
   mkdir -p /srv/supernode /srv/localnode
   ```

3. Unpack the AREDN image:

   ```
   losetup -P loop0 aredn-x86-64-generic-ext4-combined.img
   mount /dev/loop0p2 /mnt
   rsync -aP /mnt/ /srv/localnode/
   rsync -aP /mnt/ /srv/supernode/
   umount /mnt
   losetup -d loop0
   ```

   Note that the filesystem is located on the second partition of the image, hence `loop0p2`.

4. Work around a `systemd-nspawn` limitation: `systemd-nspawn` resets its own set of `sysctl` values and mounts `/proc/sys` as read-only inside the container.

   (2025-01-17) The best solution I’ve found so far is to replace `/sbin/init` with a wrapper:

   ```
   mv /srv/supernode/sbin/init /srv/supernode/sbin/init.inner
   ```

   Then create the wrapper `/srv/supernode/sbin/init`:

   ```
   #!/bin/sh
   for dir in /sys /proc/sys
   do
       /bin/mount -o remount,rw "$dir"
   done
   exec /sbin/init.inner
   ```

   and mark it executable:

   ```
   chmod 0755 /srv/supernode/sbin/init
   ```

   This will remount `/sys` and `/proc/sys` as read-write on every boot. Do the same for `/srv/localnode`.

   Note that using `/etc/rc.local`, or any other `/etc/rc.d` script, is too late.

   The `sysctl` settings inside the containers are isolated from the host OS, so allowing write access is mostly acceptable.
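As an optional sanity check (assuming the image ships the usual OpenWrt `/etc/banner`), you can confirm that both root filesystems were copied and that the wrapper is in place:

```
# Both trees should contain an OpenWrt/AREDN root filesystem...
head -n 4 /srv/supernode/etc/banner /srv/localnode/etc/banner

# ...and the wrapper should be executable, with the real init next to it
ls -l /srv/supernode/sbin/init /srv/supernode/sbin/init.inner
```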
Configure network namespaces
We will use the network-namespace mode of systemd-nspawn to give each container its own network stack.
We will set up two pairs of veth interfaces to connect the host OS to the two containers.
Then, we will set up a bridge interface on the host OS between the two veth interfaces, and add a VLAN-tagged interface on top of the bridge for the WAN connection.
At the time of writing, AREDN configures the following interfaces by default during its first boot:
- `eth0`: LAN, 192.168.1.1
- `eth0.1` (VLAN 1): WAN, DHCP
- `eth0.2` (VLAN 2): dtdlink, 192.168.2.1
To minimize the amount of dirty work we need to do, we will try to stick to this scheme as much as possible.
1. Create the AREDN bridge and the WAN VLAN interface:

   ```
   ip link add br-aredn type bridge
   ip link add link br-aredn name br-aredn.wan type vlan id 1
   ip address add 192.168.10.1/24 dev br-aredn.wan
   ```

   The names `br-aredn` and `br-aredn.wan` are arbitrary.

   The IP address `192.168.10.1/24` is also arbitrary but should be in a different subnet from your host’s main network. I will be assigning `192.168.10.2` and `192.168.10.3` to the supernode and local node later.

2. Set up network namespaces for the two containers:

   ```
   ip netns add supernode
   ip netns add localnode
   ```

   You can name them whatever you want as long as they are distinct from each other and from existing namespaces (if any).

3. Create the host-to-container `veth` pairs (which will also be the untagged LAN interfaces):

   ```
   ip link add veth-super address 36:eb:70:b2:80:1a master br-aredn type veth peer name eth0 netns supernode address 36:eb:70:b2:80:1b
   ip link add veth-local address 36:eb:70:b2:80:1c master br-aredn type veth peer name eth0 netns localnode address 36:eb:70:b2:80:1d
   ```

   The `address xx:xx:xx:xx:xx:xx` parts are optional; I generated them randomly. You can omit them (the kernel will pick random addresses), and they only need to be unique within their own veth pair. I made them all distinct and consecutive for cognitive convenience.

   The host interface names (the fourth word on each line) can be whatever. For the container interface names (the `peer name`), please stick to `eth0` so we don’t confuse the AREDN firmware. The part after `netns` must match the namespace names in the previous step.

4. Bring up the host-side interfaces:

   ```
   ip link set br-aredn up
   ip link set br-aredn.wan up
   ip link set veth-super up
   ip link set veth-local up
   ```
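At this point, a quick check from the host should show the bridge and both host-side veth ends up, with `br-aredn` as their master; the container-side `eth0` interfaces already exist in their namespaces but stay down until the containers boot (an optional verification):

```
# Host side: bridge members and link state
ip -br link show master br-aredn
ip -br link show dev br-aredn.wan

# Container side: eth0 should exist (still DOWN for now)
ip netns exec supernode ip -br link show dev eth0
ip netns exec localnode ip -br link show dev eth0
```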
First boot configuration
Since the AREDN firmware is not systemd-aware, we use the `--boot` mode so that the entire OpenWrt init system is brought up.
We configure the supernode first. The steps for the local node are exactly the same.
1. Bring up the supernode container:

   ```
   systemd-nspawn --network-namespace-path=/run/netns/supernode -D /srv/supernode --boot --capability=CAP_NET_ADMIN
   ```

2. Add a LAN IP to the host-side untagged interface:

   ```
   ip address add 192.168.1.2/24 dev br-aredn
   ```

   At this point, we should be able to ping the supernode at `192.168.1.1` from the host.

3. Navigate to the AREDN web interface at `http://192.168.1.1` and complete the initial setup wizard.

4. Find out the newly generated AREDN LAN IP address and prefix length (something like `10.39.62.166/29`):

   ```
   ip netns exec supernode ip address show br-lan
   ```

   Now we change the host LAN IP to be in the same subnet:

   ```
   ip address del 192.168.1.2/24 dev br-aredn
   ip address add <aredn-lan-ip + 1>/29 dev br-aredn
   ```

   `/29` is the default LAN size at the time of writing. Use the `ip netns exec` command to be sure.

5. Navigate to `http://<aredn-lan-ip>` and complete the rest of the setup. In particular, I recommend the following settings:

   - Enable WAN web and SSH access.
   - Disable LAN DHCP for the supernode.
   - Set a static IP for the WAN interface. I used `192.168.10.2` for the supernode and `192.168.10.3` for the local node. This eliminates the need to set up a DHCP server on the host OS.
   - Set the LAN size to `/30`, since neither node will have any clients connected to it.
   - (In Advanced options) Set a unique LAN VLAN tag (I used `12` for the supernode and `13` for the local node).
   - As always, you should disable WAN telnet access and SSH password login for security.

   Note that after changing the LAN size, the LAN IP might change. We should not need it for anything. However, if you do, run the `ip netns exec` command again to find out.
Repeat the above steps for the local node. For the supernode, don’t forget to follow the official supernode setup guide too.
If everything goes well, both nodes should be able to see each other over the DtD link.
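A quick way to confirm the basics from the host, using the static WAN addresses chosen above:

```
# Both containers should answer on their WAN addresses via br-aredn.wan
ping -c 3 192.168.10.2   # supernode
ping -c 3 192.168.10.3   # local node
```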
Host routing configuration
So far, the containers can only communicate with the host OS. We need to set up routing on the host OS so that the containers can reach the internet.
I use nftables for firewalling. Replace `$wan_iface` below with your host machine’s internet-facing interface name.
```
chain forward {
    type filter hook forward priority filter; policy drop;
    ct state { established, related } accept
    ct status dnat accept
    iifname "lo" accept
    iifname "br-aredn.wan" oifname $wan_iface accept
}

chain nat_post {
    type nat hook postrouting priority srcnat; policy accept;
    iifname "br-aredn.wan" oifname $wan_iface meta nfproto ipv4 masquerade
}
```
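For reference, here is a minimal, self-contained sketch of how these chains could be wired into a ruleset. The table name `aredn` and the `define` are illustrative, not part of the original setup; if you already manage an nftables ruleset, merge the chains into it instead:

```
define wan_iface = "eth0"   # replace with your internet-facing interface

table inet aredn {
    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state { established, related } accept
        ct status dnat accept
        iifname "lo" accept
        iifname "br-aredn.wan" oifname $wan_iface accept
    }

    chain nat_post {
        type nat hook postrouting priority srcnat; policy accept;
        iifname "br-aredn.wan" oifname $wan_iface meta nfproto ipv4 masquerade
    }
}
```

Load it with `nft -f <file>`, and keep in mind that a `drop` policy in any forward base chain applies to all forwarded traffic on the host, not just the AREDN bridge.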
Finally, these sysctl settings are needed to enable IP forwarding and make babel happy:
```
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.rp_filter = 0
```
Put them in `/etc/sysctl.d/80-aredn.conf` and run `sysctl -p /etc/sysctl.d/80-aredn.conf` to apply them.
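You can verify that the values took effect with a quick check:

```
sysctl net.ipv4.conf.all.forwarding net.ipv6.conf.all.forwarding \
       net.ipv6.conf.all.accept_redirects net.ipv4.conf.all.rp_filter
```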
Further steps
To bring up the containers after the initial configuration, run steps 1-4 in Configure network namespaces and the first step in First boot configuration.
Some more things you may want to do:
- A `systemd` service to bring the containers up automatically. See the appendix.
- A reverse proxy for the node web interfaces.
- Add other physical interfaces to the `br-aredn` bridge (see the example after this list).
- Firewall!
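For example, to bridge a spare physical NIC (the name `enp4s0` is hypothetical) so that a real AREDN device plugged into that port can reach the containers over the same bridge:

```
# Attach the physical interface to the AREDN bridge and bring it up
ip link set enp4s0 master br-aredn
ip link set enp4s0 up
```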
Appendix
Troubleshooting
- Error `aredn-XXXXXnode.service: Failed to spawn executor: Device or resource busy`:

  Starting multiple `systemd-nspawn` containers simultaneously is somehow buggy. To resolve the issue, add `aredn-localnode.service` to the `After=` list in `aredn-supernode.service` (or vice versa); see the drop-in example after this list.

- WireGuard nodes fail to show up:

  Add these lines to `/srv/(super|local)node/etc/rc.local`:

  ```
  if ps | grep -v grep | grep netifd; then
      service network restart
  fi
  ```

  to reconfigure networking after everything starts up.
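A drop-in is a clean way to add that ordering without editing the unit file itself (the drop-in file name below is arbitrary):

```
# /etc/systemd/system/aredn-supernode.service.d/10-serialize.conf
[Unit]
After=aredn-localnode.service
```

Run `systemctl daemon-reload` afterwards.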
systemd unit files
`/etc/systemd/system/aredn-network-setup.service`:

```
[Unit]
Description=Configure AREDN container networking
Wants=network.target
After=local-fs.target network-pre.target systemd-sysctl.service systemd-modules-load.service
Before=network.target shutdown.target network-online.target
Conflicts=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/ip link add br-aredn type bridge
ExecStart=/usr/bin/ip link add link br-aredn name br-aredn.wan type vlan id 1
# supernode LAN, optional
ExecStart=/usr/bin/ip link add link br-aredn name br-aredn.slan type vlan id 12
# localnode LAN, optional
ExecStart=/usr/bin/ip link add link br-aredn name br-aredn.llan type vlan id 13
ExecStart=/usr/bin/ip address add 192.168.10.1/24 dev br-aredn.wan
ExecStart=/usr/bin/ip link set br-aredn up
ExecStart=/usr/bin/ip link set br-aredn.wan up
ExecStop=/usr/bin/ip link set br-aredn.wan down
ExecStop=/usr/bin/ip link set br-aredn down
ExecStop=/usr/bin/ip link delete br-aredn.wan
ExecStop=/usr/bin/ip link delete br-aredn
```

Note that the optional VLAN lines have their comments on separate lines: systemd only treats `#` as a comment at the start of a line, so an inline comment would be passed to `ip` as extra arguments.
`/etc/systemd/system/aredn-supernode.service`:

```
[Unit]
Description=AREDN supernode
Wants=network-online.target aredn-network-setup.service
After=network-online.target aredn-network-setup.service

[Service]
Type=simple
ExecStartPre=-/usr/bin/ip link delete veth-super
ExecStartPre=-/usr/bin/ip netns delete supernode
ExecStartPre=/usr/bin/ip netns add supernode
ExecStartPre=/usr/bin/ip link add veth-super master br-aredn type veth peer name eth0 netns supernode
ExecStartPre=/usr/bin/ip link set veth-super up
ExecStart=/usr/bin/systemd-nspawn --network-namespace-path=/run/netns/supernode -D /srv/supernode --keep-unit --boot --kill-signal=SIGPWR --resolv-conf=no --timezone=no --link-journal=no --capability=CAP_NET_ADMIN
ExecStopPost=/usr/bin/ip link set veth-super down
ExecStopPost=/usr/bin/ip link delete veth-super
ExecStopPost=/usr/bin/ip netns delete supernode
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Generate the local node unit from the supernode unit, then enable both services:

```
sed s/super/local/g /etc/systemd/system/aredn-supernode.service > /etc/systemd/system/aredn-localnode.service
systemctl enable --now aredn-supernode.service aredn-localnode.service
```
AREDN® is a registered trademark of Amateur Radio Emergency Data Network, Inc.