Infinite localhost tunnels

When I'm playing with web development I typically run a local web server. This is fine until I want to test on a mobile device or show it to someone else. There are plenty of off-the-shelf options, but they either cost money or don't offer a stable URL. I don't feel like paying since I'm already paying for a server and my need is very sporadic. Stable URLs are the bigger problem because I often want to integrate with third-party APIs that are authorized by URL.

The simple solution is to use ssh port forwarding to forward a localhost port on my web server to a localhost port on my laptop, plus an Nginx proxy_pass rule that forwards a specific named virtual host to the server-side port. That means either using a single, fixed name for everything I might want to tunnel (which started causing problems once I was playing with service workers) or editing and reloading the web server configuration and provisioning a new TLS certificate for each new name.

Approach

I've settled on a new approach that's simple and scalable using wildcard certificates and a fairly new, infrequently used feature of ssh port forwarding: UNIX domain sockets.

Normally when we think about port forwarding we're forwarding TCP ports to TCP ports. Passing -R 8000:localhost:1234 to ssh will forward port 8000 on the server to port 1234 on the local machine. If instead of a numeric port we pass a path then ssh will forward a UNIX domain socket. This allows me to have a textual rather than numeric namespace for ports I'm forwarding.
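
For example, the two forms look like this (user@server.mydomain.com and the socket path are just placeholders for my own setup):

# TCP forward: port 8000 on the server is forwarded to port 1234 on my laptop
ssh -N -R 8000:localhost:1234 user@server.mydomain.com

# UNIX domain socket forward: a named socket on the server is forwarded to port 1234
ssh -N -R /tmp/tunnel-myapp:localhost:1234 user@server.mydomain.com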

Conveniently, Nginx's proxy_pass directive also allows forwarding to a UNIX domain socket using the special syntax: proxy_pass http://unix:/path/to/socket:;. Wiring the two together we can forward from an Nginx wildcard server to an SSH port forward based on the server name.

A couple of challenges came up in getting this to actually work. First of all, the UNIX domain sockets created by ssh are given very conservative permissions, and in spite of promising-sounding ssh options these aren't configurable. At least with my server configuration, Nginx wasn't able to connect to those sockets, so I had to follow up creating the socket with a chmod to loosen the permissions. Secondly, the sockets aren't reliably cleaned up, so before creating the forward I had to explicitly remove any old socket that might be left behind.
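
In practice that means wrapping the forward with two extra ssh commands, which is where the rm and chmod in the client script below come from; roughly (host and socket path are placeholders):

# before connecting: clear out any stale socket from a previous tunnel
ssh user@server.mydomain.com rm -f /tmp/tunnel-myapp.tunnel.mydomain.com

# after connecting: loosen the socket permissions so Nginx can connect to it
ssh user@server.mydomain.com chmod a+rw /tmp/tunnel-myapp.tunnel.mydomain.com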

I wasn't entirely confident that a specially crafted URL or Host header couldn't be used to reach parts of the file-system beyond the tunnel sockets, so I applied a fairly conservative filter to the allowed host names in the Nginx config.

Server setup

First I set up a wildcard sub-domain on my web server and provisioned a Let's Encrypt wildcard cert. Depending on how your DNS is set up that can be varying levels of tricky, and it's not really relevant to this story. I also configured http requests to be redirected to https, because it's 2019.
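
For reference, one way to issue the wildcard cert is certbot's manual DNS challenge (wildcards require DNS-01, and the exact flags depend on your DNS provider, so treat this as a sketch):

certbot certonly --manual --preferred-challenges dns \
    -d tunnel.mydomain.com -d '*.tunnel.mydomain.com'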

Then I updated the domain's nginx.conf to proxy_pass to a UNIX domain socket in /tmp. It looks like:

server {
        server_name "~^(?<name>[[:alnum:]_-]+)\.tunnel\.mydomain\.com$";
        root /home/user/tunnels/site;
        error_page 502 /error.html;
        location /error.html {
            internal;
        }
        location / {
                proxy_pass http://unix:/tmp/tunnel-$host:;
        }
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/tunnel.mydomain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/tunnel.mydomain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
        listen 80;
        server_name *.tunnel.mydomain.com;
        location / {
                return 301 https://$host$request_uri;
        }
}

I also created a simple error page that gives basic usage instructions in case I forget them: error.html
I think this makes /error.html behave oddly on all forwarded hosts, but I haven't felt the need to address that yet.

Client setup

On the client side I wrote a little script that I put in ~/bin/tunnel:

#!/bin/sh

SSH_CONNECTION=user@server.mydomain.com
DOMAIN=tunnel.mydomain.com

set -eu

if [ $# -ne 2 ]; then
    echo "$0 <name> <port>"
    echo "To establish a tunnel from https://name.$DOMAIN/ to http://localhost:port/"
    exit 1
fi

TUNNEL_HOSTNAME="$1.$DOMAIN"
TUNNEL_SOCKET="/tmp/tunnel-$TUNNEL_HOSTNAME"
TUNNEL_PORT=$2

# remove the old tunnel socket if any
ssh $SSH_CONNECTION -o RequestTTY=no rm -f $TUNNEL_SOCKET

# connect the tunnel
ssh $SSH_CONNECTION -f -N -R "$TUNNEL_SOCKET:localhost:$TUNNEL_PORT"

# fix the permissions on the tunnel
ssh $SSH_CONNECTION -o RequestTTY=no chmod a+rw $TUNNEL_SOCKET

echo "Connect to: https://$TUNNEL_HOSTNAME/"

I can use this by invoking tunnel mysideproject 1234 and then loading https://mysideproject.tunnel.mydomain.com/ from any device.
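
A successful run looks something like this:

$ tunnel mysideproject 1234
Connect to: https://mysideproject.tunnel.mydomain.com/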

The only real annoyance is that the tunnel won't automatically reconnect after being disconnected. I could solve this with some slightly cleverer scripting but I've never felt the need - disconnects only tend to happen when I've closed my laptop.
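
If I ever need that, one rough, untested sketch would be to drop -f and wrap the forward in a retry loop, re-doing the cleanup and the chmod each time round:

while true; do
    # clear any stale socket left over from the previous connection
    ssh $SSH_CONNECTION -o RequestTTY=no rm -f $TUNNEL_SOCKET
    # re-apply the permission fix shortly after the forward comes up
    (sleep 2 && ssh $SSH_CONNECTION -o RequestTTY=no chmod a+rw $TUNNEL_SOCKET) &
    # hold the forward open; exit (and retry) if it drops or fails to bind
    ssh $SSH_CONNECTION -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 \
        -R "$TUNNEL_SOCKET:localhost:$TUNNEL_PORT"
    sleep 5
done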