
# Terraform in the time of quarantine

For reasons, my current ISP has a funny hard limit of 2000 connections per device. When you hit the limit you get locked out of the network and, fittingly enough, you’re presented with a page stating that “you’ve been quarantined” and need to “re-authenticate” to get back on, which typically involves logging into their portal from a different device, removing the offending device, and reconnecting.

This has generally not been an issue for me. I first hit the limit one day when I was messing around and running `terraform plan` in multiple (fairly large) environments, and didn’t think too much about it.

However, since I started working from home come March of the fine year 2020 (pandemic, anyone?), I’m doing a lot of terraform’ing daily and have some environments that are quite large, which initially pushed me over this limit multiple times due to the number of connections made to AWS. It’s not that each individual `terraform plan`/`apply` creates 2000 connections on its own, so I assume the ISP is measuring the number of connections created during some time period, or something like that. Contacting their support did nothing; they closed the case after two weeks without any reply whatsoever.

As I have a subscription with the fine folks over at ovpn, and they (at the time) had started their beta for wireguard, I figured I could just connect to their London endpoint and be done with it. This worked perfectly fine initially, but it was just a bit too easy, and sometimes I don’t want to be connected to a VPN.

Linux network namespaces are pretty cool though, so why not create a namespace, wire it up with wireguard and exec into it whenever I need to run Terraform?

So off we go with two systemd service units, inspired by a blog post, using veth pairs and a configuration file used in said units:


```ini
# /etc/systemd/system/systemd-netns@.service
[Unit]
Description=Named network namespace %i

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/bin/ip netns add %I
ExecStart=/usr/bin/ip netns exec %I ip link set lo up
ExecStop=-/usr/bin/ip netns delete %I
```

```ini
# the wireguard unit, e.g. /etc/systemd/system/wireguard-netns@.service
[Unit]
Description=WireGuard in named network namespace %I
Requires=systemd-netns@%i.service
After=systemd-netns@%i.service

[Service]
Type=oneshot
RemainAfterExit=true
# path assumed; defines GATEWAY_DEVICE, GATEWAY_IP, NETWORK_IP, NETWORK_NETMASK
EnvironmentFile=/etc/systemd/netns.conf
ExecStartPre=/usr/bin/ip link add veth-${GATEWAY_DEVICE} type veth peer name veth-%I
ExecStartPre=/usr/bin/ip addr add ${GATEWAY_IP}/${NETWORK_NETMASK} dev veth-${GATEWAY_DEVICE}
ExecStartPre=/usr/bin/ip link set veth-%I netns %I
ExecStartPre=/usr/bin/ip netns exec %I ip addr add ${NETWORK_IP}/${NETWORK_NETMASK} dev veth-%I
ExecStartPre=/usr/bin/ip link set veth-${GATEWAY_DEVICE} up
ExecStartPre=/usr/bin/ip netns exec %I ip link set veth-%I up
ExecStartPre=/usr/bin/ip netns exec %I ip route add default via ${GATEWAY_IP}
ExecStartPre=/usr/bin/iptables -A FORWARD -o ${GATEWAY_DEVICE} -i veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStartPre=/usr/bin/iptables -A FORWARD -i ${GATEWAY_DEVICE} -o veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStartPre=/usr/bin/iptables -t nat -A POSTROUTING -s ${NETWORK_IP}/${NETWORK_NETMASK} -o ${GATEWAY_DEVICE} -j MASQUERADE
ExecStartPre=/usr/bin/sysctl -w net.ipv4.ip_forward=1
ExecStart=/usr/bin/ip netns exec %I wg-quick up /etc/wireguard/wg0-uk.conf
ExecStop=/usr/bin/ip netns exec %I wg-quick down /etc/wireguard/wg0-uk.conf
ExecStopPost=-/usr/bin/ip netns del %I
ExecStopPost=-/usr/bin/iptables -D FORWARD -o ${GATEWAY_DEVICE} -i veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStopPost=-/usr/bin/iptables -D FORWARD -i ${GATEWAY_DEVICE} -o veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStopPost=-/usr/bin/iptables -D POSTROUTING -t nat -s ${NETWORK_IP}/${NETWORK_NETMASK} -o ${GATEWAY_DEVICE} -j MASQUERADE
```
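The configuration file the units read isn’t shown above; a minimal sketch of what it might contain, with an assumed path (`/etc/systemd/netns.conf`) and purely illustrative addresses:

```shell
# /etc/systemd/netns.conf -- path and values are assumptions, adjust to taste
GATEWAY_DEVICE=eth0        # the host's physical uplink interface
GATEWAY_IP=10.200.0.1      # host end of the veth pair
NETWORK_IP=10.200.0.2      # namespace end of the veth pair
NETWORK_NETMASK=24
```

systemd would read this via `EnvironmentFile=` and substitute the `${VAR}` references in the `Exec*` lines above.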




After that I was able to run this monstrosity:

```shell
$ sudo systemctl enable systemd-netns@wireguard.service --now
$ curl
$ sudo -E ip netns exec wireguard sudo -E -u dist curl
```

That’s all well and good, but I don’t really fancy running sudo (twice!) to run Terraform. Instead, enter firejail, which is a SUID program that can (among other things) run programs in different network namespaces. Perfect.

So rather than the sudo madness, I created a wrapper script for Terraform:

```shell
#!/usr/bin/env bash

if ip netns | grep --quiet wireguard; then
  exec firejail \
    --noprofile \
    --netns=wireguard \
    --writable-var \
    --writable-run-user \
    --quiet \
    /usr/bin/terraform "$@"
fi

echo "\`wireguard\` network namespace doesn't exist, running in \`init\` namespace" >&2
exec /usr/bin/terraform "$@"
```

Et voilà, it runs like a dream. However, as systemd has native support for wireguard, I might look into changing the setup to use that instead.
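For the curious, the native route would presumably go through systemd-networkd; a hypothetical sketch (interface name, key path, endpoint, and peer key are all placeholders, not from the actual setup):

```ini
# /etc/systemd/network/wg0.netdev -- hypothetical example
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0.key

[WireGuardPeer]
PublicKey=<peer-public-key>
Endpoint=<server>:51820
AllowedIPs=0.0.0.0/0
```

A matching `wg0.network` file would assign the tunnel address. Note that networkd on its own doesn’t place the interface in a named namespace, so the firejail trick above would need rethinking.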

# bash: abort script on any error

For the longest time it’s been my understanding that `set -e` in Bash causes a script to terminate if any single command of the script fails, which is also exactly what it does, in most cases.

However, when you start using functions things change a bit. Consider the following example:

```shell
$ cat basherr
#!/usr/bin/env bash
set -e

function run() {
  false # <- this should cause the script to fail
  echo ", world!"
}

output=$(run)
echo "Hello$output" # <- this should never be executed

$ basherr ; echo $?
Hello, world!
0
```

I was recently working on a script that had a couple of functions and I wanted the script to abort if any single command failed. After some digging I found the -E option to set, which according to the documentation does the following:

If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.

Armed with this newfound knowledge I replaced `set -e` with `set -E` and a custom trap handler for `ERR` (the single quotes matter: they defer the expansion of `$?` until the trap actually fires, rather than when it’s set):

```shell
set -E
trap 'exit $?' ERR
```

and lo and behold:

```shell
$ basherr ; echo $?
1
```
I could have saved myself some headache by reading the documentation of `set` more thoroughly, as it does outline the exceptions where `set -e` won’t do what one might think.
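The command substitution in the example above is exactly one of those exceptions: bash unsets errexit in command substitution subshells. A minimal sketch, assuming bash (the `inherit_errexit` escape hatch mentioned below requires bash 4.4 or later):

```shell
#!/usr/bin/env bash
set -e

# Per the bash manual, errexit is unset inside $( ... ) subshells unless
# `shopt -s inherit_errexit` is enabled, so the `false` below aborts nothing:
# the echo still runs, and the assignment succeeds with the echo's status.
result=$(false; echo "subshell kept going")
echo "parent kept going: $result"
```

Adding `shopt -s inherit_errexit` near the top of a script makes the subshell keep errexit, at which point the `false` aborts the substitution and `set -e` takes it from there.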

10 Dec 2016 / bash

# helpers in the kitchen

When writing tests using RSpec, I prefer to use shared_context and shared_examples for integration setup and for testing shared behaviour, respectively.

When I started out writing integration tests for my kafka cookbook using serverspec, I wanted to share tests between suites, as they were testing different init systems: the tests could be heavily refactored to depend on a few shared variables rather than duplicating all of the test cases. It was, however, not immediately clear how one would go about sharing files between different suites.

After quite some digging I found a bit of information (I think in an old issue or pull request though I no longer have any links handy) that mentioned having a helpers directory in test/integration for sharing files between suites.

So it’s just a matter of creating a helpers directory and the necessary busser-specific subdirectory, adding some files, and you’re good to go. They’ll even be available on the `$LOAD_PATH`, so it’s easy to just require a `spec_helper` or the like in the actual spec files.
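In other words, something like this (file names and contents are just an illustration):

```shell
# busser-serverspec makes anything under test/integration/helpers/serverspec
# available to every suite, so a shared spec_helper can live there
mkdir -p test/integration/helpers/serverspec/support
cat > test/integration/helpers/serverspec/spec_helper.rb <<'EOF'
require 'serverspec'
set :backend, :exec
EOF
```

Each suite’s spec files can then simply `require 'spec_helper'`.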

Since v1.2.1 of test-kitchen it’s also possible to create directories in the helpers directory (I tend to keep shared code in a support directory for example).

For reference my kafka cookbook is over here, and more specifically the helpers directory (and serverspec subdirectory) is over here.

19 Feb 2016 / test-kitchen chef rspec