Tag Archives: aws

So You Want To Keep Your Cookies Secure

At Social Tables, we have this Koa app that needs to read and set a session cookie. We don't want to send that cookie over an unencrypted connection, though.

We use koa-generic-session for session management. That library uses the cookies library under the hood, and luckily, there's a simple configuration option to avoid sending cookies over an unencrypted connection!
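
Here's a minimal sketch of what that looks like (assuming a Koa 1-style app; the secret is a placeholder, and the important bit is the secure flag in the cookie options, which gets passed down to the cookies library):

var koa = require('koa');
var session = require('koa-generic-session');

var app = koa();

// Session cookies are signed, so the app needs at least one key
app.keys = ['some-secret'];

app.use(session({
  cookie: {
    // The option in question: never send the session cookie over plain http
    secure: true,
    httpOnly: true
  }
}));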

But it's not that simple.

Turns out that the cookies library inspects the request and will throw an error if the app tries to send a secure cookie over an insecure connection.

This is all fine until you start getting fancy. Fancy, as in, the app is behind an SSL-terminating load-balancer, which forwards each request to the app over plain HTTP. Which means the app thinks the connection is an insecure HTTP request.

Now, there is a configuration option for Koa:

app.proxy = true;

This tells Koa that, when determining whether a request is secure, it may trust the X-Forwarded-Proto header that the load-balancer adds to each request.

And, again, this is just fine until you start getting even fancier. Fancier, as in, the load-balancer actually points to an nginx proxy that serves static assets and points other traffic to the Koa app.

Now, you can find pointers for how to configure nginx behind an SSL-terminating load balancer.
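
Those pointers generally boil down to something like this (a sketch -- the port and location are just illustrative): the load-balancer sets X-Forwarded-Proto on each request, and nginx passes it straight through to the app.

server {
  listen 80;

  location / {
    # Example upstream -- point this at wherever your app actually listens
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # The load-balancer set this header; just pass it along to the app
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
  }
}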

And that's fine until you start getting ultra-fancy. Ultra-fancy, as in, the load-balancer is configured to support PROXY protocol. I'm not going to get into the reasons why we ended up being so ultra-fancy that we wanted to enable PROXY protocol on our load-balancer. Truth is, we don't need it. But here's why it causes problems: with PROXY protocol, the load-balancer no longer adds HTTP headers like X-Forwarded-Proto. Instead, it prepends its connection information to the raw TCP stream. And there is literally nothing in that information that indicates whether the client request was made via https vs. http.
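
For the curious, a PROXY protocol v1 preamble is just a single text line stuck onto the front of each TCP connection, something like this (example values):

PROXY TCP4 192.0.2.10 203.0.113.5 51234 80

Protocol family, source and destination addresses, and ports -- the format has no field for the URL scheme.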

So... luckily our app is hosted on Amazon AWS in a VPC that is not reachable from the internet. In other words, there's no way a request could reach our nginx process other than via an https request that hits our load-balancer. Which means, we can just -- gulp -- hard-code it.

The relevant part of the nginx config:

server {
  # ...
  # With PROXY protocol, the load-balancer never sets X-Forwarded-Proto, so this is empty
  if ($http_x_forwarded_proto = '') {
    # So we hard-code the protocol as https, i.e., "secure"
    set $http_x_forwarded_proto https;
  }

  location @node {
    # ...
    # This is the header Koa will rely upon
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
  }
}

By the way -- and I hope this isn't burying the lede too much here -- if your app only relies on reading those secure cookies, you don't need to worry about any of this mishegas. A user's browser doesn't know or care about what's behind your SSL-terminating load-balancer using PROXY protocol to talk to nginx proxying to your node app. All a user's browser knows about is the https request that hits the load-balancer. Only if you need to set a secure cookie do you possibly need to know about this.

Hope it helps someone.

Making the Correct Insanely Difficult

tl;dr

If you’re trying to configure nginx on Elastic Beanstalk to redirect http requests to https, here’s what I learned.

  • During deployment, the nginx configuration for your app is located at /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf.
  • Using a container command, you can edit that nginx configuration file right before it gets deployed.
  • I used a little perl one-liner to insert the redirect (there's a sketch of the whole thing just below).
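
Putting those pieces together, here's a sketch of what that container command can look like in an .ebextensions config file. The file name and command name are just examples, and it assumes the generated server block contains a "listen 8080;" line, which is where the perl one-liner splices in the redirect -- check the staged file on your own environment before trusting the pattern.

# .ebextensions/https-redirect.config  (file name is just an example)
container_commands:
  00_redirect_http_to_https:
    command: |
      perl -0777 -pi -e 's~(listen 8080;)~$1\n\n        if (\$http_x_forwarded_proto = "http") {\n            return 301 https://\$host\$request_uri;\n        }~' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf

Because container commands run after the new nginx config has been staged under /tmp/deployment/config but before it gets copied into place, the redirect ends up in the deployed config without any post-deploy fiddling.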

Background

So... we're using Amazon Web Services Elastic Beanstalk for one of the apps I'm working on. It's pretty easy to get started, but it's also really easy to find yourself fighting Elastic Beanstalk to get it to stop doing something stupid.

I was fighting one of those "stupid" things the other day: http-to-https redirect.

Let's say you have a web application that requires users to log in with a name and a password. You don't want users' passwords getting sent over the internet without being encrypted, of course. So you enable SSL and serve content over https.

But sometimes, users type your domain name (like, “google.com”) into the address bar, which defaults to http. Or they follow a link to your app that mistakenly uses http instead of https. In any event, you don’t want users who are trying to get to your app to get an error message telling them there’s nothing listening on the other end of the line, so you need to be listening for http requests but redirecting them to https for security.

Now, our app is written in Node.js, and we’ve configured Elastic Beanstalk to point internet traffic to an Elastic Load Balancer, which terminates SSL and proxies traffic to the backing servers, which are running our app behind nginx. This might sound like too many levels of indirection, but nginx is optimized for serving static content, while Node.js is optimized for dynamic content, so this is a pretty common setup.

And this is where Elastic Beanstalk gets stupid.

When we configured our app to listen for both http and https traffic, Elastic Beanstalk directed all of that traffic to nginx — and configured nginx to direct all of that traffic to our app — without giving us any way to redirect http traffic to https.

I imagine lots of apps want to respond to both http and https traffic while redirecting insecure http requests to secure https requests. Maybe I’m wrong.

Anyway, I want to do that. And I found it insanely difficult to accomplish.

Easily prune your ssh known_hosts file

At some point, you've probably seen this message when you try to log in to one of your servers:

[image: ssh failure message]

This is really common when you have Amazon EC2 instances behind Elastic IPs because the IP address stays the same (and probably the hostname, too), but as new instances replace old instances, the new instances' ssh host keys are probably different.

But if you look carefully, you'll see that the failure message tells you how to resolve this problem:

Offending RSA key in /Users/[username]/.ssh/known_hosts:5

That means that line no. 5 of the known_hosts file contains the problematic key. So, assuming that you are sure this is NOT in fact a security breach, you can remove that line.

It's a bit of a pain in the butt to manually edit this file, though. You can use sed to do it easily, but if you're like me and you don't use sed all the time, you need to look at the man pages every time you want to use it. That's why I wrote a quick bash script to do it automatically.
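
Here's a minimal sketch of what such a script can look like (it assumes a sed that accepts -i.bak, which both the BSD/macOS and GNU flavors do, so you get a backup copy for free):

#!/usr/bin/env bash
#
# ssh-purge-host: remove a single line from ~/.ssh/known_hosts by line number.
# Usage: ssh-purge-host <line-number>
set -euo pipefail

line="${1:?usage: ssh-purge-host <line-number>}"
known_hosts="$HOME/.ssh/known_hosts"

# Delete the given line in place, keeping a backup alongside the original
sed -i.bak "${line}d" "$known_hosts"

echo "Removed line ${line} from ${known_hosts} (backup saved as known_hosts.bak)"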

Drop that in your PATH and make it executable. Then you can simply type ssh-purge-host 5 to remove line 5 from your known_hosts file.

Hope that's useful!