If you've ever used CircleCI you've probably found yourself employing their excellent "Rerun job with SSH" feature, which gives you a shell to poke around your build environment and debug failures. What's particularly well done here is that your build container doesn't run sshd(8); instead, SSH access is proxied by CircleCI, which also coordinates key-based authentication. If you're using GitHub as your CircleCI authentication provider, Circle pulls the public key you configured in your GitHub account and everything is pretty seamless.
The problem with this proxy approach is that you're no longer able to use scp(1) to transfer files to and from your build container. You can get by for a while working with ssh(1) and making clever use of file descriptors, but it becomes tedious and doesn't integrate well with other tools, like remote editing (for example, TRAMP in Emacs).
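For context, this is the sort of ssh(1)-plus-redirection workaround I mean (host, port, and paths below are illustrative, matching the examples later in this post):

```shell
# Pull a remote file down by cat-ing it over the SSH channel...
ssh -p 64535 3.90.x.y 'cat /var/log/foo/error.log' > error.log

# ...and push one up by feeding stdin to a remote cat.
ssh -p 64535 3.90.x.y 'cat > /root/the_iceman_cometh.txt' < the_iceman_cometh.txt
```

It works, but every transfer is a one-off incantation, and nothing scp-aware (editors, deploy scripts) can use it.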
Fortunately, you can use a combination of local port forwarding over SSH, which still works with Circle's proxy-based solution, and an instance of sshd(8) in your container, to provide SCP services.
SSH Access To Your Container
I'm assuming you're already familiar with the "Rerun job with SSH" feature; this is the first step. You can refer to the docs at Circle.
Circle gives you the address and port to connect to, and notes the host key for you to verify.
You can now SSH into this box if your SSH public key is added:

$ ssh -p 64535 3.90.x.y

Use the same SSH public key that you use for your VCS provider (e.g., GitHub). The RSA key fingerprint of the host is:

SHA256:tLEutvzq75zbI80hM7cEqH8hX3fKO56CXmhdami3v18
MD5:7d:b8:82:74:13:4f:f0:6d:76:53:73:a9:76:a1:03:cd
SSH To Your Container
When you SSH to your build container you will need to forward a local port. I use 2222/tcp for both the local and remote sides of the tunnel ssh(1) will set up, but you can grab any port that's not in use. You can also use different ports for the local side (specified first) and the remote side of the tunnel.
jereme@buttercup $ ssh -p 64535 -L 2222:localhost:2222 3.90.x.y
root@bced994fe620:~#
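If 2222/tcp is already taken on your workstation, the two sides of the tunnel can differ. For instance (port choices illustrative), forwarding local 3333 to the container's 2222:

```shell
# Local port 3333 forwards to port 2222 on the container's loopback;
# scp on your workstation would then use -P 3333 instead of -P 2222.
ssh -p 64535 -L 3333:localhost:2222 3.90.x.y
```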
Set SSH Authentication Credentials
You will need to set up your SSH credentials unless your container image already embeds them (yuck), or your Circle job deploys them as part of the build. In general practice neither of these is likely.
With that being the case, you will need to manually populate your ~/.ssh/authorized_keys file - without the help of scp(1), of course. This is pretty straightforward: a simple copy-and-paste of your public key, or retrieval from some network endpoint, are both reasonable strategies. Here I'm using a private gist. sshd(8) is appropriately picky about the ownership and permissions of your credentials; here's how I handled it:
root@bced994fe620:~# umask 0077
root@bced994fe620:~# mkdir ~/.ssh
root@bced994fe620:~# curl -s https://gist.githubusercontent.com/jcorrado/foo/raw/bar/my-ssh-pubkey.txt > ~/.ssh/authorized_keys
root@bced994fe620:~# ls -al .ssh/
total 12
drwx------ 2 root root 4096 Mar 14 18:15 .
drwx------ 4 root root 4096 Mar 14 18:13 ..
-rw------- 1 root root  398 Mar 14 18:16 authorized_keys
Do it in subshells, or reset your umask, if needed.
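A subshell version of the same sequence might look like this (the $PUBKEY_URL variable is a stand-in for your own gist or other raw-key URL):

```shell
# The parentheses run everything in a subshell, so the restrictive
# umask disappears when the subshell exits and your login shell
# keeps its original umask. 0077 yields mode 700 on the directory
# and 600 on the file, which satisfies sshd(8).
(
  umask 0077
  mkdir -p ~/.ssh
  curl -s "$PUBKEY_URL" > ~/.ssh/authorized_keys
)
```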
Run an instance of sshd(8) listening on the loopback interface, using port 2222/tcp or whatever you specified as the remote port in your ssh(1) -L option. You can verify the listener with netstat(8).
root@bced994fe620:~# /usr/sbin/sshd -p 2222 -o 'ListenAddress 127.0.0.1'
root@bced994fe620:~# netstat -4tnl | grep :2222
tcp        0      0 127.0.0.1:2222          0.0.0.0:*               LISTEN
Copy To-and-fro Via The Forwarded Local Listener
jereme@buttercup $ scp -P 2222 the_iceman_cometh.txt root@localhost:~/
the_iceman_cometh.txt                         100%  320KB   2.9MB/s   00:00
jereme@buttercup $ scp -P 2222 root@localhost:/var/log/foo/error.log .
error.log                                     100%  100     4.5KB/s   00:00
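The same forwarded listener makes remote editing workable too. For example, assuming Emacs with TRAMP on your workstation (path illustrative), a TRAMP file name can pin the forwarded port:

```shell
# TRAMP's /ssh:user@host#port:path syntax connects through the
# local listener on 2222, just as scp -P 2222 does.
emacs '/ssh:root@localhost#2222:/var/log/foo/error.log'
```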
The usual caveats of stale "[localhost]:2222" host keys apply. ssh will complain appropriately and tell you how to prune the old key.
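ssh's suggestion boils down to one command; for reference, pruning the stale entry looks like this:

```shell
# Remove any stale host key for the forwarded listener from your
# default known_hosts; ssh-keygen prints what it removed and keeps
# a backup in known_hosts.old.
ssh-keygen -R '[localhost]:2222'
```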
I hope this saves you some time and lets you get back to debugging!
Cover photo by Monica Dorame