Sometimes, for dev or lightweight personal hosting, I want a server process on my workstation to be reachable from the general Internet. Like a lot of workstations, mine is behind a dynamically addressed gateway performing Network Address Translation (NAT), with no simple means of port-forwarding.

There are lots of reasons why you might want to surface something "internal" on the Internet. For me the two main ones of late are:

  • Building bots that use Slack’s Events API, which uses HTTP callbacks (webhooks)
  • Sharing draft blog posts from my local Hugo server

These are common use-cases and there are tools available, like ngrok, that handle this and offer additional application-level features, such as request inspection and replay. My requirements are comparatively simple and Nginx with ssh(1) handles them well.

There is a degree of protection afforded by the typical NAT connectivity pattern used in most networks. The solutions discussed here enable the direct inbound connectivity that NAT breaks, but this also means you lose those protections. Please take a moment to understand the trade-offs before you open anything up to the general Internet.

The Proxy Design

The front end is Nginx on an EC2 instance with an Elastic IP (EIP), acting as an Application Level Gateway, proxying requests to local socket listeners started by ssh. ssh then forwards those incoming connections back to local listening sockets on my workstation - the host that initiated the ssh connection to the proxy - and securely shuttles traffic back and forth. The smallest instance type, a t2.nano, is sufficient for quite a bit of traffic.
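The request path looks like this (the hostnames and port are the ones used in the examples later on):

```
client ──HTTPS(443)──▶ Nginx on proxy0 (EC2 instance, EIP)
                            │ proxy_pass
                            ▼
                  localhost:3000 on proxy0 (listener opened by sshd)
                            │ SSH tunnel (initiated outbound from the workstation)
                            ▼
                  localhost:3000 on buttercup ──▶ dev server process
```

Because the workstation dials out to the proxy, no inbound hole through the NAT gateway is needed.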

I think the cost is reasonable: EC2 on-demand pricing for a t2.nano instance in us-east-1 is $0.0058/hr (~$4.23/mo, ~$51/yr). I know I’ll use the capacity so I went with a Reserved Instance: one year, paid up-front is $29, which works out to ~$2.42/mo ($0.0033/hr).

Setting Up The EC2 Instance

First I set up a new Security Group called reverse-proxy and allowed the needed inbound traffic: ICMP, SSH, HTTP, and HTTPS.

Then I launched a new instance. I use the official Debian AMIs and went with the latest stable release, which was 9.4 “Stretch” at the time. Once the instance was up I allocated an EIP and attached it to the instance.
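I did this through the console, but the same steps can be sketched with the AWS CLI. Every ID, key name, and AMI below is a placeholder:

```shell
# Placeholder IDs throughout; assumes a default VPC.
aws ec2 create-security-group --group-name reverse-proxy \
    --description "nginx reverse proxy front end"

# Open SSH, HTTP, and HTTPS to the world, plus ICMP.
for port in 22 80 443; do
    aws ec2 authorize-security-group-ingress --group-name reverse-proxy \
        --protocol tcp --port "$port" --cidr
done
aws ec2 authorize-security-group-ingress --group-name reverse-proxy \
    --protocol icmp --port -1 --cidr

# Launch the instance, then allocate and attach an EIP.
aws ec2 run-instances --image-id ami-0example --instance-type t2.nano \
    --security-groups reverse-proxy --key-name my-key
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0example --allocation-id eipalloc-0example
```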

Finally, I added a DNS A record for the EIP address. I went with a generic name, figuring I would likely lump a few different services on there. I’ve since added some specific records for endpoints I expect to keep around for a bit.

Setting Up TLS

Next I set up TLS. I chose Let’s Encrypt as the Certificate Authority. The certificates they issue are free and Let’s Encrypt’s CA cert is from Digital Signature Trust Co. (IdenTrust), which is present in the trust stores of all the clients I need to support.

jereme@buttercup $ echo | openssl s_client -servername -connect > /dev/null
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN =
verify return:1

Let’s Encrypt uses the ACME protocol to automate validation of domain control and issuance of subordinate certs. I used the EFF’s ACME client, Certbot, which is available in Debian. This was my first time using Certbot and I found it to be straightforward.
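On Debian the whole exchange boils down to a couple of commands. The domain below is illustrative; Certbot’s nginx plugin proves control of the name via ACME and wires the resulting cert paths into the matching server block:

```shell
# The nginx plugin ships separately from the certbot core package.
sudo apt-get install certbot python-certbot-nginx

# Obtain a cert for the (hypothetical) name and update the Nginx config.
# Issued material lands under /etc/letsencrypt/live/<domain>/
sudo certbot --nginx -d proxy.example.com
```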

Configuring Nginx as a Reverse Proxy Server

Nginx’s ngx_http_proxy_module module provides the proxy_pass directive, which does the main work for us. Additionally, we pass along the original Host header and set the traditional X-Forwarded-For header so the backend sees the real client address.

Here is the minimal config I’m using:

server {
        listen 80 default_server;
        listen 443 ssl default_server;
        server_name _;
        include snippets/snakeoil.conf;
        return 404;
}

server {
        listen 443 ssl;

        location / {
                proxy_pass http://localhost:3000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        ssl_certificate /etc/letsencrypt/live/;
        ssl_certificate_key /etc/letsencrypt/live/;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
        listen 80;
        return 301 https://$host$request_uri;
}
SSH Remote Port Forwarding

The last part is the per-listening-socket tunnel managed by ssh, established with the -R option.

Here is the relevant part of the man page:

-R [bind_address:]port:host:hostport
-R [bind_address:]port:local_socket
-R remote_socket:host:hostport
-R remote_socket:local_socket

Specifies that connections to the given TCP port or Unix socket on the
remote (server) host are to be forwarded to the given host and  port, or
Unix socket, on the local side.  This works by allocating a socket to
listen to either a TCP port or to a Unix socket on the remote side. 
Whenever a connection is made to this port or Unix socket, the connection
is forwarded over the secure channel, and a connection is made to either
host port hostport, or local_socket, from the local machine.
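Multiple -R options can ride a single connection, which suits my two use-cases nicely. The hostname here is hypothetical, and 1313 is Hugo’s default dev-server port:

```shell
# One SSH connection carrying two reverse forwards:
#   proxy0:3000 -> buttercup:3000  (bot endpoint)
#   proxy0:1313 -> buttercup:1313  (Hugo draft server)
ssh -N \
    -R 3000:localhost:3000 \
    -R 1313:localhost:1313 \
    jereme@proxy.example.com
```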

Using autossh

As a final step, I use autossh(1) to manage the SSH connection, restarting as needed. With key-based authentication and ssh-agent(1) this can reliably run unattended. In practice, I use a dedicated account with its own credentials.

jereme@buttercup $ autossh -R 3000:localhost:3000 -i $tunnel_key -N 
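For a fully unattended setup, one option is to run autossh from a systemd user unit. This is a sketch with hypothetical paths and hostname; -M 0 disables autossh’s monitor port in favor of SSH’s own keepalives, which the ServerAlive options enable:

```ini
# ~/.config/systemd/user/reverse-tunnel.service (hypothetical)
[Unit]
Description=Reverse tunnel to proxy0

[Service]
ExecStart=/usr/bin/autossh -M 0 -N \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -R 3000:localhost:3000 \
    -i %h/.ssh/tunnel_key tunnel@proxy.example.com
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```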

Example Setup

We use netcat(1) as a simple HTTP responder, to print socket traffic:

jereme@buttercup $ echo -e 'HTTP/1.1 200 OK\n\n hello world' | nc -l 3000

…and here is the listening socket bound by that netcat process:

jereme@buttercup $ sudo netstat -tlnp | grep :3000
tcp        0      0*               LISTEN      12629/nc

We use ssh to forward the listening socket we configured for Nginx, via proxy_pass, back to the listening socket on my workstation, buttercup:

jereme@buttercup $ ssh -R 3000:localhost:3000 -N

…and here’s that listener:

root@proxy0 # netstat -4tnlp | grep :3000
tcp        0      0*               LISTEN      27289/sshd: jereme

Request Walk Through

A request from wherever (my work machine, elder-whale, in this example) and the response - served from buttercup.

 jereme@elder-whale $ curl -v
 * Rebuilt URL to:
 *   Trying
 * Connected to ( port 443 (#0)
 * ALPN, offering h2
 * ALPN, offering http/1.1
 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
 * successfully set certificate verify locations:
 *   CAfile: /etc/ssl/certs/ca-certificates.crt
   CApath: /etc/ssl/certs
 * TLSv1.2 (OUT), TLS header, Certificate Status (22):
 * TLSv1.2 (OUT), TLS handshake, Client hello (1):
 * TLSv1.2 (IN), TLS handshake, Server hello (2):
 * TLSv1.2 (IN), TLS handshake, Certificate (11):
 * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
 * TLSv1.2 (IN), TLS handshake, Server finished (14):
 * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
 * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
 * TLSv1.2 (OUT), TLS handshake, Finished (20):
 * TLSv1.2 (IN), TLS change cipher, Client hello (1):
 * TLSv1.2 (IN), TLS handshake, Finished (20):
 * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
 * ALPN, server accepted to use http/1.1
 * Server certificate:
 *  subject:
 *  start date: Jun 24 16:21:20 2018 GMT
 *  expire date: Sep 22 16:21:20 2018 GMT
 *  subjectAltName: host "" matched cert's ""
 *  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
 *  SSL certificate verify ok.
 > GET / HTTP/1.1
 > Host:
 > User-Agent: curl/7.52.1
 > Accept: */*
 < HTTP/1.1 200 OK
 < Server: nginx/1.10.3
 < Date: Tue, 03 Jul 2018 11:05:36 GMT
 < Transfer-Encoding: chunked
 < Connection: keep-alive
  hello world
 * Curl_http_done: called premature == 0
 * Connection #0 to host left intact

Meanwhile, the server on buttercup prints the incoming request, which reached it from Nginx on proxy0 over the SSH tunnel, and sends its response back the same way. Note the X-Forwarded-For header we configured Nginx to supply (redacted here).

jereme@buttercup $ echo -e 'HTTP/1.1 200 OK\n\n hello world' | nc -lp 3000
GET / HTTP/1.0
X-Forwarded-For: 63.115.x.y
Connection: close
User-Agent: curl/7.52.1
Accept: */*


I’ve found this solution to be reliable and extensible. It’s easy to add multiple domains to the single EIP, thanks to the now widespread support for SNI. With Nginx’s flexible Layer 7 routing you have a powerful solution.
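Adding a second name is just another server block keyed by server_name. A sketch, with a hypothetical domain, pairing the Hugo forward from earlier with its own cert:

```nginx
server {
        listen 443 ssl;
        server_name hugo.example.com;   # hypothetical second name on the same EIP

        location / {
                proxy_pass http://localhost:1313;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        ssl_certificate /etc/letsencrypt/live/hugo.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/hugo.example.com/privkey.pem;
}
```

Nginx selects the block by the SNI name the client presents during the TLS handshake, so each service gets its own certificate and backend port.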

Cover photo by Bill Anderson