This is the story of birchbox.pink, a site-wide easter egg we deployed at Birchbox a few years back. It used CSS injection and some creative infrastructure to provide a presentation of the site that was tinted pink when accessed via the .pink domain.

I wrote this up when I first started my blog, back when it was a static site hosted at AWS. I've since moved to Ghost and am much happier for it. But unfortunately Ghost (quite reasonably) doesn't support some of the unusual design that made this work. What follows is a walkthrough of how I recreated the egg on the now-defunct goldandapager.pink.

ICANN started rolling out the new generic Top-Level Domains at the end of 2013. By mid-2015 we had registered hundreds of birchbox.* domains, including birchbox.pink. Some of the registrations were for potential future business models; maybe we would do something interesting with them one day. Mostly they were defensive registrations - namespace land-grabbing.

Sometime in early 2016, looking through the list of domains, birchbox.pink jumped out. A pink-tinted version of Birchbox, living at that URL, seemed like a good easter egg. The interesting part was working out an alternate site presentation, driven by domain, without touching the back end of the site itself. A few days later, birchbox.pink was live. It’s not around anymore - a casualty of a frontend redesign since - but goldandapager.pink works on the same principle.

Like many of the new gTLDs, the intention for .pink isn’t clear to me.

ICANN awarded management of the registry to Afilias. Here is the "Mission/Purpose" excerpted from their application.

".PINK proposes to create an Internet space in which businesses, organizations and individuals can create an Internet identity tied to the color and the concept of pink. This will allow for an explosion of creativity and Internet offerings around this concept."

Looking through nTLDStats for .pink I see 2,390 currently active .pink domains. General availability opened January 18, 2014, so I guess it’s one of those really slow explosions of creativity.

The two deployments vary entirely in the infrastructure design used to connect the second domain. Birchbox is built on our own hardware (we colo), whereas goldandapager.io is deployed on an AWS stack (S3, CloudFront, Route 53).

Here is a walkthrough of how it works and the changes made at AWS to support the additional domain.

Tinting The Site Pink

Three CSS filters, applied in series to the body element, tint the site pink. veinjs provides the CSS injection, executed as soon as /js/auaap_pink.js is loaded. A huge thank-you to my friend Zak for all the frontend magic.

(function () {
  // Only tint when the site is reached via the .pink domain.
  if (!/goldandapager\.pink$/.test(window.location.hostname)) {
    return;
  }

  var dianthus_plumarius = 'sepia(1) hue-rotate(300deg) saturate(3)';

  vein.inject('body', {
    'filter': dianthus_plumarius,
    '-webkit-filter': dianthus_plumarius
  });
})();
With this in hand it’s just a matter of loading the JS and delivering traffic to the site via the domain.

Adding a Second Domain to an Existing CloudFront Deployment

goldandapager.io is a simple S3 static site acting as the origin behind a CloudFront CDN. Adding goldandapager.pink required an updated TLS certificate, with the additional domain added as a SAN (Subject Alternative Name), and an update to the existing CloudFront distribution’s list of Alternate Domain Names.

TLS Configuration with AWS Certificate Manager

You can’t add names to an existing cert, so I tossed the old one and generated a new one. To keep things simple, I added both apex domains and their wildcards to one cert.
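For completeness, here is roughly how the replacement cert could be requested from the CLI. This is a sketch, not the exact command I ran: the domains are this site's, DNS validation is assumed, and certs used by CloudFront must be issued in us-east-1. The command is built as a string and echoed for review rather than executed.

```shell
# Sketch only: ACM certs can't be amended, so request a replacement
# covering all four names in one shot. CloudFront requires the cert to
# live in us-east-1. Echoed for review, not run.
cmd="aws acm request-certificate \
  --region us-east-1 \
  --domain-name goldandapager.io \
  --subject-alternative-names '*.goldandapager.io' goldandapager.pink '*.goldandapager.pink' \
  --validation-method DNS"
echo "$cmd"
```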

jereme@buttercup $ aws acm list-certificates | \
  jq '.CertificateSummaryList[] | select(.DomainName == "goldandapager.io")'
{
  "CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-123456789012",
  "DomainName": "goldandapager.io"
}

jereme@buttercup $ cert_arn=$(aws acm list-certificates | \
  jq -r '.CertificateSummaryList[] | select(.DomainName == "goldandapager.io") | .CertificateArn')

jereme@buttercup $ aws acm describe-certificate --certificate-arn $cert_arn | \
  jq .Certificate.SubjectAlternativeNames
[
  "goldandapager.io",
  "*.goldandapager.io",
  "goldandapager.pink",
  "*.goldandapager.pink"
]
CloudFront CDN Configuration

I updated the existing CloudFront distribution with the new cert and additional name:

Pull a list of the defined CloudFront CDNs (”Distributions”). I find it easy enough to distinguish them by origin. Here are the three I’m currently running; we’re interested in the one serving goldandapager.io.

The other two handle redirects from www to their respective apex domains.

jereme@buttercup $ aws cloudfront list-distributions | \
  jq .DistributionList.Items[].DefaultCacheBehavior.TargetOriginId


jereme@buttercup $ cdn=""

Extract the ID so we can inspect our CDN configuration. Note the DomainName for our DNS configuration later.

jereme@buttercup $ cdn_id=$(aws cloudfront list-distributions | jq -r --arg cdn $cdn \
  '.DistributionList.Items[] | select(.DefaultCacheBehavior.TargetOriginId==$cdn) | .Id')
jereme@buttercup $ aws cloudfront list-distributions | jq --arg cdn $cdn \
  '.DistributionList.Items[] | select(.DefaultCacheBehavior.TargetOriginId==$cdn) | .DomainName'


Here is the updated CloudFront configuration, with our new certificate and an updated Aliases property that now includes our second domain, goldandapager.pink.

jereme@buttercup $ aws cloudfront list-distributions | jq --arg cdn_id $cdn_id \
  '.DistributionList.Items[] | select(.Id==$cdn_id) | .ViewerCertificate,.Aliases.Items'

{
  "SSLSupportMethod": "sni-only",
  "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-123456789012",
  "MinimumProtocolVersion": "TLSv1.1_2016",
  "Certificate": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-123456789012",
  "CertificateSource": "acm"
}
[
  "goldandapager.io",
  "goldandapager.pink"
]
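The commands above only inspect the distribution; the update itself works differently. CloudFront wants the whole configuration back: you fetch the current DistributionConfig along with its ETag, edit it, then push it with --if-match. Here's a sketch of that edit against a stub config (the ETag, ARNs, and distribution ID are placeholders); the aws calls themselves are commented out since they need real credentials.

```shell
# Sketch: CloudFront updates are wholesale. Fetch config + ETag, edit,
# push back with --if-match. The aws calls are commented out; the jq
# edit runs as-is against a stub config. IDs/ARNs are placeholders.
#
#   aws cloudfront get-distribution-config --id "$cdn_id" > dist.json
cat > dist.json <<'EOF'
{
  "ETag": "E2QWRUHEXAMPLE",
  "DistributionConfig": {
    "Aliases": { "Quantity": 1, "Items": ["goldandapager.io"] },
    "ViewerCertificate": { "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/old" }
  }
}
EOF

new_cert_arn="arn:aws:acm:us-east-1:123456789012:certificate/new"
etag=$(jq -r .ETag dist.json)

# Append the new alias, keep Quantity honest, and swap in the new cert.
jq --arg d goldandapager.pink --arg arn "$new_cert_arn" '
  .DistributionConfig
  | .Aliases.Items += [$d]
  | .Aliases.Quantity = (.Aliases.Items | length)
  | .ViewerCertificate.ACMCertificateArn = $arn
' dist.json > dist-config.json

#   aws cloudfront update-distribution --id "$cdn_id" \
#     --if-match "$etag" --distribution-config file://dist-config.json
```

The ETag dance is CloudFront's optimistic locking: if someone else changed the distribution between your get and your update, the push is rejected.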

With the CDN ready to accept traffic for goldandapager.pink, the second and final step was to update our DNS.

Route 53 DNS Configuration

Traditionally, when you set up a CDN, you add a CNAME record pointing your domain at an edge hostname specified by your provider.

For a domain in a Route 53 hosted zone, you instead add an Alias record pointing your domain at the target. Alias records are a Route 53-specific extension. The destination is still specified using the CloudFront DomainName, but the resolution process carried out by AWS’s name servers is different: a set of A records is returned to the client directly, instead of an intermediate CNAME.

First extract the HostedZoneID so we can inspect the zone’s records.

jereme@buttercup $ zone_id=$(aws route53 list-hosted-zones | \
  jq -r '.HostedZones[] | select(.Name=="goldandapager.pink.") | .Id')

Here is the Alias record for goldandapager.pink. Note the AliasTarget property indicating our CloudFront DomainName from above (here referred to as DNSName).

jereme@buttercup $ aws route53 list-resource-record-sets --hosted-zone-id $zone_id | \
  jq '.ResourceRecordSets[] | select(.Name=="goldandapager.pink.") | select(.Type=="A")'

{
  "AliasTarget": {
    "HostedZoneId": "Z2FDTNDATAQYW2",
    "EvaluateTargetHealth": false,
    "DNSName": ""
  },
  "Type": "A",
  "Name": "goldandapager.pink."
}
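For reference, creating (or upserting) an Alias record like that from the CLI takes a change batch like the one below. Z2FDTNDATAQYW2 is the fixed, well-known hosted zone ID shared by every CloudFront distribution; the DNSName here is a placeholder for the distribution's actual domain. The aws call itself is commented out, since it needs real credentials and the real zone ID.

```shell
# Sketch: an UPSERT change batch for the goldandapager.pink Alias record.
# Z2FDTNDATAQYW2 is CloudFront's fixed hosted zone ID; the DNSName is a
# placeholder for the real CloudFront distribution domain.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "goldandapager.pink.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234example.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

#   aws route53 change-resource-record-sets \
#     --hosted-zone-id "$zone_id" --change-batch file://change-batch.json
```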

That’s all the configuration needed to provide the DNS records that connect goldandapager.pink to our existing goldandapager.io CDN.


With our changes in place, a quick curl(1) will let us call it complete.

A few things to note:

  • Our domain resolves
  • We successfully set up a TLS 1.2 connection, matching on a SAN (subjectAltName: matched)
  • The server responds OK with a 200
  • It’s a CloudFront cache hit: X-Cache: Hit from cloudfront
  • To be fancy, we have lynx(1) text-dump the page, and the first few lines definitely look like our site

jereme@buttercup $ curl -vs https://goldandapager.pink | lynx -stdin -dump | head

* Rebuilt URL to:
* Hostname was NOT found in DNS cache
*   Trying
* Connected to ( port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server key exchange (12):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
*        subject:
*        start date: 2018-03-27 00:00:00 GMT
*        expire date: 2019-04-27 12:00:00 GMT
*        subjectAltName: matched
*        issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*        SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.38.0
> Host:
> Accept: */*
< HTTP/1.1 200 OK
< Content-Type: text/html
< Content-Length: 4314
< Connection: keep-alive
< Date: Thu, 05 Apr 2018 14:13:44 GMT
< Last-Modified: Thu, 05 Apr 2018 13:37:59 GMT
< ETag: "c2f4e5d73cd8ebb0d7de7580be153a28"
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
< Age: 274
< X-Cache: Hit from cloudfront
< Via: 1.1 (CloudFront)
< X-Amz-Cf-Id: SdFn4GGFWPvFE-b6kxFogwAPxynzs1KZTJDDalobacOd5_eb--ztag==
{ [data not shown]
* Connection #0 to host left intact
with a little bit of gold and a $PAGER

writings on various technical subjects

     * [1]Mar 30 2018 [2]Table Gender Seating Probabilities

Exploring the likelihoods of different patterns of table seating, with Clojure.

     * [3]Mar 25 2018 [4]A beginning is a very delicate time...

Final Bits

So that’s it. Some powerful CSS transformations, injected via JavaScript, and some nuts-and-bolts Tech Ops work to connect all the pieces.

I’d be remiss in failing to note some potential snags. There will likely be a few other details you’ll need to sort to get this working on your site.

There’s a bit of a metaphysical question hidden in here. What does your site think of as its identity? Where does it live, so to speak? The challenge is that we typically build sites that can live at any one domain, but not usually at multiple domains. When we do have multiple points of entry, like www and the apex domain, we generally normalize with external redirects, landing our users at our choice of the site’s canonical name.

Both times I’ve rolled this out have required a few adjustments. Fully qualified link generation, the browser’s same-origin security model, and vhost routing on a backend web tier are the three main ones I recall. Each required a little consideration and redesign, but nothing major.

Thanks to the Birchbox Technical Operations team for helping with birchbox.pink, and to Zak for the frontend bits that drove the fun.

Cover photo by Max Ostrozhinskiy