mths : sdrbrg ;

# fun with kubectl and local current-context

I use kubectl on a daily basis and it’s a great tool. However, something that has always bothered me is that setting the current context updates its configuration file, and thus affects all terminal sessions. This can be surprising if you, like me, switch between different contexts and terminal sessions: run a command in shell A, then switch context in shell B and run a command there, and the context in shell A will have changed!

I’ve previously looked at something like kubie, but it doesn’t (yet) have support for fish, and my Rust is pretty bad at the moment, although adding fish support is probably a good way to improve that.

One possible solution would be to use multiple configuration files and set the KUBECONFIG environment variable accordingly. My main issue with that is that credentials (such as refresh tokens from OIDC/OAuth2) are automatically refreshed and persisted to the active configuration file, rendering all of the other configuration files out of date; using a different one eventually triggers another refresh, and so on and so forth.

A simple trick that I’ve found to work around this is to have a “main” configuration file that holds clusters, contexts, users, etc. but does not set the current-context, and then multiple “context” configuration files that only set the current-context and are read-only. This has the nice side-effect that it’s no longer possible to set the current-context (as shown below), while updated credentials are still persisted to the main configuration file automatically!
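The mechanics can be sketched in plain shell; the paths and the test5 context name below are illustrative, not prescriptive:

```shell
# create a read-only "context" file that only pins current-context
mkdir -p "$HOME/.config/kubectx"
cat > "$HOME/.config/kubectx/test5" <<'EOF'
apiVersion: v1
kind: Config
current-context: test5
EOF
chmod 400 "$HOME/.config/kubectx/test5"

# merge it in front of the main config: kubectl reads current-context
# from the read-only file, but persists credential updates to the
# writable ~/.kube/config
export KUBECONFIG="$HOME/.config/kubectx/test5:$HOME/.kube/config"
```

kubectl then resolves the current context from the read-only file while refreshing credentials in ~/.kube/config.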

$ kubectl config current-context
test5
$ kubectl config set current-context test6
error: open /home/dist/.config/kubectx/test5: permission denied

Whether this is intended behaviour or not I’m not sure, but it works wonderfully well. I’ve combined this into a fish function which also overrides Ctrl+D to “exit” from the current context:

function kubectx --argument-names context
  if test -n "$context"
    set --local kubectx_config $XDG_CONFIG_HOME/kubectx/$context

    set --global --export KUBECTX $context
    set --global --export KUBECONFIG $kubectx_config:$HOME/.kube/config

    if not test -f $kubectx_config
      sed -e "s/%CONTEXT%/$context/g" $XDG_CONFIG_HOME/kubectx/template > $kubectx_config
      chmod 400 $kubectx_config
    end

    bind -M insert \cd __kubectx_exit
    bind \cd __kubectx_exit
    bind -M visual \cd __kubectx_exit

    commandline --function repaint
  else
    echo "<context> must not be empty" >&2
    return 1
  end
end

function __kubectx_exit
  set --erase KUBECTX
  set --erase KUBECONFIG
  bind --erase -M insert \cd
  bind --erase \cd
  bind --erase -M visual \cd
  commandline --function repaint
end

# $XDG_CONFIG_HOME/kubectx/template
apiVersion: v1
kind: Config
current-context: "%CONTEXT%"

I then use this together with fzf to more easily switch between contexts:

function __k8s_cluster_search --description 'k8s cluster search'
  set --local selected (kubectl config get-contexts --output name | eval (__fzfcmd) $FZF_DEFAULT_OPTS)
  if string length -q -- $selected
    kubectx $selected
  end
end

Latest version of the above can also be found in my dotfiles repository.

# terraform in the time of quarantine

For reasons, my current ISP has a funny hard limit of 2000 connections per device. When you hit the limit you get locked out of the network and, fittingly enough, are presented with a page stating that “you’ve been quarantined” and need to “re-authenticate” to get back on, which typically means logging into their portal from a different device, removing the offending device, and reconnecting.

This has generally not been an issue for me; I first hit the limit one day when I was messing around running terraform plan in multiple (fairly large) environments, and didn’t think much of it.

However, since I started working from home come March of the fine year 2020 (pandemic, anyone?), I’m doing a lot of terraform’ing daily, and some of my environments are large enough that they initially pushed me over this limit multiple times due to the number of connections made to AWS. It’s not that each individual terraform plan/apply creates 2000 connections, so I assume the ISP measures the number of connections created during some time period, or something like that. Contacting their support did nothing; they closed the case after two weeks without any reply whatsoever.
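I never found a way to inspect the ISP-side counter, but the local side can at least be eyeballed; a rough approximation (assuming ss from iproute2 is installed):

```shell
# count currently established TCP connections on this machine;
# only a local approximation of whatever the ISP is counting
ss --tcp --no-header state established | wc -l
```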

As I have a subscription with the fine folks over at ovpn, and they had (at the time) started their wireguard beta, I figured I could just connect to their London endpoint and be done with it. This worked perfectly fine initially, but it was just a bit too easy, and sometimes I don’t want to be connected to a VPN.

Linux network namespaces are pretty cool though, so why not create a namespace, wire it up with wireguard and exec into it whenever I need to run Terraform?

So off we go with two systemd service units, inspired by a blog post I came across, using veth pairs and a configuration file consumed by said units:


[Unit]
Description=Named network namespace %i

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/ip netns add %I
ExecStart=/usr/bin/ip netns exec %I ip link set lo up
ExecStop=-/usr/bin/ip netns delete %I



[Unit]
Description=wireguard in named network namespace %I
Requires=systemd-netns@%i.service
After=systemd-netns@%i.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/usr/bin/ip link add veth-${GATEWAY_DEVICE} type veth peer name veth-%I
ExecStartPre=/usr/bin/ip addr add ${GATEWAY_IP}/${NETWORK_NETMASK} dev veth-${GATEWAY_DEVICE}
ExecStartPre=/usr/bin/ip link set veth-%I netns %I
ExecStartPre=/usr/bin/ip netns exec %I ip addr add ${NETWORK_IP}/${NETWORK_NETMASK} dev veth-%I
ExecStartPre=/usr/bin/ip link set veth-${GATEWAY_DEVICE} up
ExecStartPre=/usr/bin/ip netns exec %I ip link set veth-%I up
ExecStartPre=/usr/bin/ip netns exec %I ip route add default via ${GATEWAY_IP}
ExecStartPre=/usr/bin/iptables -A FORWARD -o ${GATEWAY_DEVICE} -i veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStartPre=/usr/bin/iptables -A FORWARD -i ${GATEWAY_DEVICE} -o veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStartPre=/usr/bin/iptables -t nat -A POSTROUTING -s ${NETWORK_IP}/${NETWORK_NETMASK} -o ${GATEWAY_DEVICE} -j MASQUERADE
ExecStartPre=/usr/bin/sysctl -w net.ipv4.ip_forward=1
ExecStart=/usr/bin/ip netns exec %I wg-quick up /etc/wireguard/wg0-uk.conf
ExecStop=/usr/bin/ip netns exec %I wg-quick down /etc/wireguard/wg0-uk.conf
ExecStopPost=-/usr/bin/ip netns del %I
ExecStopPost=-/usr/bin/iptables -D FORWARD -o ${GATEWAY_DEVICE} -i veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStopPost=-/usr/bin/iptables -D FORWARD -i ${GATEWAY_DEVICE} -o veth-${GATEWAY_DEVICE} -j ACCEPT
ExecStopPost=-/usr/bin/iptables -D POSTROUTING -t nat -s ${NETWORK_IP}/${NETWORK_NETMASK} -o ${GATEWAY_DEVICE} -j MASQUERADE
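The ${...} variables come from the configuration file mentioned above; I’d expect it to be wired in with an EnvironmentFile= line in the [Service] section. An entirely hypothetical example (variable names match the references above, values are made up):

```ini
# hypothetical EnvironmentFile for the unit above
GATEWAY_DEVICE=enp0s31f6
GATEWAY_IP=10.200.0.1
NETWORK_IP=10.200.0.2
NETWORK_NETMASK=24
```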




After that I was able to run this monstrosity:

$ sudo systemctl enable systemd-netns@wireguard.service --now
$ curl
$ sudo -E ip netns exec wireguard sudo -E -u dist curl

That’s all well and good, but I don’t really fancy running sudo (twice!) to run Terraform. Instead, enter firejail, a SUID program that can (among other things) run programs in different network namespaces. Perfect.

So rather than the sudo madness, I created a wrapper script for Terraform:

#!/usr/bin/env bash

if ip netns | grep --quiet wireguard; then
  exec firejail \
    --noprofile \
    --netns=wireguard \
    --writable-var \
    --writable-run-user \
    --quiet \
    /usr/bin/terraform "$@"
else
  echo "\`wireguard\` network namespace doesn't exist, running in \`init\` namespace" >&2
  exec /usr/bin/terraform "$@"
fi

Et voilà, it runs like a dream. However, as systemd has native support for wireguard, I might look into changing it up to use that instead.
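For reference, the networkd flavour would look roughly like the sketch below; the file name, key path, and peer values are all placeholders rather than anything from this setup:

```ini
# /etc/systemd/network/wg0-uk.netdev (hypothetical)
[NetDev]
Name=wg0-uk
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0-uk.key

[WireGuardPeer]
PublicKey=<peer public key>
Endpoint=<ovpn london endpoint>:51820
AllowedIPs=0.0.0.0/0
```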

# bash: abort script on any error

For the longest time it’s been my understanding that set -e in Bash causes a script to terminate if any single command fails, which is also exactly what it does, in most cases.
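A minimal illustration of the usual behaviour:

```shell
# set -e aborts at the first failing command, so "not reached" never prints
bash -c 'set -e; false; echo "not reached"' || echo "exit: $?"
# -> exit: 1
```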

However, when you start using functions things change a bit. Consider the following example:

$ cat basherr
#!/usr/bin/env bash

set -e

function run() {
  false # <- this should cause the script to fail
  echo ", world!"
}

output="$(run)"
echo "Hello$output" # <- this should never be executed

$ basherr ; echo $?
Hello, world!
0

I was recently working on a script that had a couple of functions and I wanted the script to abort if any single command failed. After some digging I found the -E option to set, which according to the documentation does the following:

If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
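The difference is easy to demonstrate with a failing command inside a function; the trailing `:` keeps the function’s own exit status at zero, so only the inherited trap can reveal the failure:

```shell
# without -E the ERR trap is not inherited into the function body,
# and f itself returns 0, so nothing is printed
bash -c 'trap "echo trapped" ERR; f() { false; :; }; f'

# with -E (a.k.a. set -o errtrace) the failing `false` inside f
# fires the trap and prints "trapped"
bash -c 'set -E; trap "echo trapped" ERR; f() { false; :; }; f'
```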

Armed with this newfound knowledge I replaced set -e with set -E plus a custom trap handler for ERR (note the single quotes, so that $? is expanded when the trap fires rather than when it is set):

set -E
trap 'exit $?' ERR

and lo and behold:

$ basherr ; echo $?
1

I could have saved myself some headache by reading the documentation of set more thoroughly, as it outlines the exceptions where set -e won’t do what one might think.

10 Dec 2016 / bash shell