Tag Archives: node.js

So You Want To Keep Your Cookies Secure

At Social Tables, we have this Koa app that needs to read and set a session cookie. We don't want to send that cookie over an unencrypted connection, though.

We use koa-generic-session for session management. That library uses the cookies library under the hood, and luckily, there's a simple configuration option to avoid sending cookies over an unencrypted connection!

But it's not that simple.

Turns out that the cookies library inspects the request and will throw an error if the app tries to send a secure cookie over an insecure connection.

This is all fine until you start getting fancy. Fancy, as in, the app is behind an SSL-terminating load-balancer. Which means that the app thinks the connection is an insecure HTTP request.

Now, there is a configuration option for Koa:

app.proxy = true;

This tells Koa that, when determining whether a request is secure, it may trust the headers that the load-balancer adds to each request.
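Putting the two pieces together, a minimal sketch -- assuming generation-one Koa and koa-generic-session as in this post, with placeholder secrets and port -- might look like:

```javascript
var koa = require('koa');
var session = require('koa-generic-session');

var app = koa();

// Trust the X-Forwarded-* headers set by the load-balancer / nginx,
// so Koa's notion of "secure" reflects the original client connection
app.proxy = true;

// Placeholder signing keys -- use real secrets in production
app.keys = ['cookie-signing-secret'];

app.use(session({
  cookie: {
    // Never send the session cookie over plain http
    secure: true
  }
}));

app.listen(3000);
```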

And, again, this is just fine until you start getting even fancier. Fancier, as in, the load-balancer actually points to an nginx proxy that serves static assets and points other traffic to the Koa app.

Now, you can find pointers for how to configure nginx behind an SSL-terminating load balancer.

And that's fine until you start getting ultra-fancy. Ultra-fancy, as in, the load-balancer is configured to support PROXY protocol. I'm not going to get into the reasons why we ended up being so ultra-fancy that we wanted to enable PROXY protocol on our load-balancer. Truth is, we don't need it. But the upshot of why this causes problems is that the headers added to each request are different. And not just different. There is literally nothing in the proxy headers that indicates that the client request was made via https vs. http.

So... luckily our app is hosted on Amazon AWS in a VPC that is not reachable from the internet. In other words, there's no way a request could reach our nginx process other than via an https request that hits our load-balancer. Which means, we can just -- gulp -- hard-code it.

The relevant configuration in the nginx config:

server {
  # ...

  # With PROXY protocol, the load-balancer doesn't add an
  # X-Forwarded-Proto header, so this variable is empty
  if ($http_x_forwarded_proto = '') {
    # So we hard-code the protocol as https, i.e., "secure"
    set $http_x_forwarded_proto https;
  }

  location @node {
    # ...
    # This is the header Koa will rely upon
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
  }
}

By the way -- and I hope this isn't burying the lede too much here -- if your app only relies on reading those secure cookies, you don't need to worry about this mishegas. A user's browser doesn't know or care about what's behind your SSL-terminating load-balancer using PROXY protocol to talk to nginx proxying to your node app. All a user's browser knows about is the https request that hits the load-balancer. Only if you need to set a secure cookie do you possibly need to know about this.

Hope it helps someone.

Making the Correct Insanely Difficult


If you’re trying to configure nginx on Elastic Beanstalk to redirect http requests to https, here’s what I learned.

  • During deployment, the nginx configuration for your app is located at /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
  • Using a container command, you can edit that nginx configuration file right before it gets deployed.
  • I used a little perl one-liner to insert the redirect.
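Putting those three bullets together, here's a sketch of what the container command could look like. The .ebextensions filename and the `listen 8080;` insertion point are my assumptions about the stock Elastic Beanstalk nginx config; the perl one-liner itself is illustrative, not the exact one I used.

```yaml
# .ebextensions/https-redirect.config -- hypothetical filename
container_commands:
  00_http_to_https_redirect:
    command: |
      perl -0777 -pi -e 's|listen 8080;|listen 8080;\n\n        if (\$http_x_forwarded_proto = "http") {\n            return 301 https://\$host\$request_uri;\n        }|' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
```

Because container commands run after the deployment bundle is unpacked but before it is installed, the edited file is what actually lands in /etc/nginx/conf.d/.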


So... we're using Amazon Web Services Elastic Beanstalk for one of the apps I'm working on. It's pretty easy to get started, but it's also really easy to find that you’re fighting Elastic Beanstalk to get it to stop doing something stupid.

I was fighting one of those "stupid" things the other day: http-to-https redirect.

Let's say you have a web application that requires users to login with a name and a password. You don't want users' passwords getting sent over the internet without being encrypted, of course. So you enable SSL and serve content over https.

But sometimes, users type your domain name (like, “google.com”) into the address bar, which defaults to http. Or they follow a link to your app that mistakenly uses http instead of https. In any event, you don’t want users who are trying to get to your app to get an error message telling them there’s nothing listening on the other end of the line, so you need to be listening for http requests but redirecting them to https for security.

Now, our app is written in Node.js, and we’ve configured Elastic Beanstalk to point internet traffic to an Elastic Load Balancer, which terminates SSL and proxies traffic to the backing servers, which are running our app behind nginx. This might sound like too many levels of indirection, but nginx is optimized for serving static content, while Node.js is optimized for dynamic content, so this is a pretty common setup.

And this is where Elastic Beanstalk gets stupid.

When we configured our app to listen for both http and https traffic, Elastic Beanstalk directed all of that traffic to nginx — and configured nginx to direct all of that traffic to our app — without giving us any way to redirect http traffic to https.

I imagine lots of apps want to respond to both http and https traffic while redirecting insecure http requests to secure https requests. Maybe I’m wrong.

Anyway, I want to do that. And I found it insanely difficult to accomplish.

npm CLI Quick-Start for Organizations

We have a number of private npm packages, and I needed to create a new user and grant that user read-only access to our private packages. The npm docs are great. Really great. Go there for details. But here are the key commands for this (probably common) series of steps.

Create a new team

$ npm team create <scope:team>

Grant team read-only access to all existing private packages

Get a list of all private packages for your organization (scope)

$ npm access ls-packages <scope>
# Returns json :'(
# Let's use https://github.com/trentm/json to help
# Install: npm install -g json
$ npm access ls-packages <scope> | json -Ma key
# Returns list of package names. Noice.

Tying it all together

$ for PKG in $(npm access ls-packages <scope> | json -Ma key); do \
    npm access grant read-only <scope:team> "${PKG}"; \
  done

Create a new user

  • Backup your existing ~/.npmrc
  • Create the user:

$ npm adduser

  • Save the new user's credentials (the auth token will be in ~/.npmrc)
  • Restore your previous ~/.npmrc

Invite user to organization

Not implemented from the CLI. Use the website: https://www.npmjs.com/org/<scope>/members


Add user to a team

$ npm team add <scope:team> <user>

Remove user from a team

$ npm team rm <scope:team> <user>

Yosemite Upgrade Changes Open File Limit

OSX has a ridiculously low limit on the maximum number of open files. If you use OSX to develop Node applications -- or even if you just use Node tools like grunt or gulp -- you've no doubt run into this issue.

To address this, I have this line in my $HOME/.bash_profile:

ulimit -n 1000000 unlimited

And a corresponding entry in /etc/launchd.conf:

limit maxfiles 1000000

That solved the problem until I upgraded to OSX Yosemite, after which I began seeing the following error every time I opened a terminal window:

bash: ulimit: open files: cannot modify limit: Invalid argument


Luckily, a little Google-fu yielded this Superuser post (and its answer).

So it was a quick fix:

$ echo kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf
$ echo kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -w kern.maxfiles=65536
$ sudo sysctl -w kern.maxfilesperproc=65536
$ ulimit -n 65536 65536    

Then I updated my $HOME/.bash_profile to change the ulimit directive to match that last command, above, and I was back in business.

Could JXCore Be An Awesome Deployment Tool?

JXCore allows you to turn Node.js applications into stand-alone executables. One possible use case would be to package up your entire application in an executable and deploy it to production servers, skipping the usual dance with git and npm. If performance is good, this could make for an interesting deployment tool. Deploy by Dropbox? Yup, you could! It's on my list of things to try with a hobby project.

Fixing Node.js v0.8.2 Build on Linux

There's a nasty gcc bug on RedHat (RHEL 6) and CentOS Linux (and related) that gets triggered when you try to build Node.js v0.8.2: pure virtual method called.

Solution: Run make install CFLAGS+=-O2 CXXFLAGS+=-O2 instead of just make install.


Mongoose Indexes and RAM Usage

If you're using Mongoose, you've changed your indexes, and you're wondering why you've run out of RAM, go into the Mongo shell and manually drop any indexes you are no longer using. Mongoose has no method for deleting indexes you're not using any more, so they accumulate, gobbling up RAM.

Now that you've cleaned out those unused indexes, restart Mongo. After the cache warms up (and depending on how many indexes you deleted), you could see a dramatic decrease in RAM consumption.
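For example, in the mongo shell -- the database, collection, and index names below are made up, so list your own indexes first and drop only what your schemas no longer declare:

```javascript
// Run inside the mongo shell; "mydb" and "users" are hypothetical names
use mydb

// See every index Mongo is currently maintaining for a collection
db.users.getIndexes()

// Drop a stale index by the name getIndexes() reported
db.users.dropIndex("email_1")

// Or drop everything except _id; on restart, Mongoose will recreate
// whatever indexes your schemas still declare
db.users.dropIndexes()
```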

A Gotcha Using Node.js + Request In a Daemon

I have a Node.js program running as a daemon on a Linux VPS. Periodically, it polls a list of URLs using request. When it first starts, everything runs smoothly. But after running for a while, it starts getting 400 errors, and the longer it runs, the more URLs return 400 errors.

I could not understand what was going on. My code was basically structured like this:
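The original snippet didn't survive the archive, but based on the description, the daemon was shaped roughly like this. The URL list, polling interval, and timeout are placeholders, not the real values:

```javascript
// Hedged reconstruction of the polling daemon described above.
var options = { timeout: 10000 };
var urls = ['https://example.com/a', 'https://example.com/b'];

// A fresh req object is built for every single request
function buildReq(url) {
  return { url: url, timeout: options.timeout };
}

function poll(url) {
  var request = require('request'); // request v2.x
  request(buildReq(url), function (err, res, body) {
    if (err) { return console.error(err); }
    console.log(url, res.statusCode);
  });
}

// The daemon calls main() at startup: poll the whole list once a minute
function main() {
  setInterval(function () {
    urls.forEach(poll);
  }, 60 * 1000);
}
```

Note that nothing in buildReq says anything about cookies -- which is why the fix described below amounts to one extra property on the object it returns.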

Given that code, we know the req object is initialized with each function call. So, how could this script degrade over time?

Well, I finally tracked it down: COOKIES!

Yup, request has cookies enabled by default. So, I think what was happening was that cookies were being set (presumably, top domain-level cookies having the same name at different URLs or subdomains on the same domain) but the values in request's cookie jar were not being returned properly. That means the remote host was getting invalid cookies -- hence the 400 response for a "Bad Request."

I haven't yet spent the time to figure out if this is a bug in request. It's on my TODO list.

In the meantime, I've disabled cookies in the req object:
var req = { url: url, timeout: options.timeout, jar: false };

It's now working as expected.