Node Setup for Macs

I use Macs, and I have a lot of Node projects on my Macs. Those projects don't all use the same version of Node, so I need to have multiple versions installed. I also want to have a "system" version of Node installed. I'm really happy with how I have my Macs set up to manage this for me. Here's how my setup works.

Nave

First thing I do, before I have any versions of Node installed, is install nave. Nave is a tool that allows you to install and manage different versions of Node. Nave intends to solve your entire multiple-versions-of-Node problem, but I don't use it for that. I use it solely to manage my system-wide version of Node -- the version that will run when I haven't opted to use a project-specified version (more on that later).

Nave is simply a bash script, and you can install it in one of the recommended ways, but what I do is download the script and save it on my system like this:

curl -s https://raw.githubusercontent.com/isaacs/nave/master/nave.sh > /usr/local/bin/nave
chmod 755 /usr/local/bin/nave

Now I can use nave to install my system-wide version of Node.

System-Wide Node

This is a pair of commands that I can re-run any time I want to update my system-wide version of Node.

# where <version> can be a version like 12.4.1,
# or just 12.4, or just 12
# or it can be a string,
# like lts (meaning long-term support),
# or lts/erbium (code name for v12 lts)

nave install <version>
nave usemain <version>

Node Version Manager

Now that a system-wide Node is installed, I will install my version manager, nvm, and an automatic version switcher that uses it, avn.

Nvm is similar to (and an alternative to) nave. Once installed, nvm is exposed as a shell function (functionally the same as the way nave is a bash script) that allows you to install multiple versions of Node and switch between them by manipulating your shell environment.

You can either explicitly specify a Node version when you run nvm use <version> or you can use an rc file named .nvmrc to specify the version of Node. The contents of .nvmrc can be a version number (full or partial) or a string (like "lts" or "lts/<codename>") -- similar to nave. When you run nvm use, nvm will use the version specified in the .nvmrc file in the current directory or the closest parent directory.
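As a sketch of that lookup behavior (this is just an illustration of the search, not nvm's actual implementation):

```shell
# find_nvmrc: mimic nvm's nearest-.nvmrc search, walking up from a
# directory toward the filesystem root (a simplified sketch, not nvm itself)
find_nvmrc() {
  dir="$1"
  while [ -n "$dir" ]; do
    if [ -f "$dir/.nvmrc" ]; then
      echo "$dir/.nvmrc"
      return 0
    fi
    dir="${dir%/*}"   # strip the last path component
  done
  return 1
}

# Demo with a throwaway project tree
mkdir -p /tmp/demo-project/src/deep
echo "lts/erbium" > /tmp/demo-project/.nvmrc
find_nvmrc /tmp/demo-project/src/deep
# → /tmp/demo-project/.nvmrc
```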

Avn, together with its companion plugin avn-nvm, will automatically look for an .nvmrc file every time you change directories and then change the Node version, if necessary. This is the key to maintaining your sanity when you have tons of Node projects running different versions of Node!

# Install nvm v0.35.2 (current latest)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.2/install.sh | bash
# You'll need to exit and re-open your terminal for the installation to take effect.

# Install avn and avn-nvm
# Make sure we're using the system-wide Node
nvm use system
npm install -g avn@latest avn-nvm@latest
avn setup

.nvmrc

After I have all that tooling set up, it's time to add rc files.

First, I add an .nvmrc in my $HOME directory that will be used to switch to the system-wide version of Node whenever I'm in a directory that doesn't otherwise specify an alternative version.

# contents of $HOME/.nvmrc
system

Then in each project, I add an .nvmrc that specifies the lts or major version of Node that the project requires.

# example project .nvmrc
lts/erbium

This project-level `.nvmrc` should be committed to version control so that everyone on your team can use the same Node version seamlessly.

Before nvm use will work, you need to install the version of Node you want to use: for the above example, nvm install lts/erbium. After installation, you can test that everything is working by changing directories in your terminal and reviewing the terminal messages.

~ $ cd ~/code/test-project
avn activated lts/erbium via .nvmrc (avn-nvm v12.14.0)
~/code/test-project $ cd
avn activated system via .nvmrc (avn-nvm system: v12.14.1)
~ $ 

The Problem with Promises and Domains

FAQ

Q: What should I do?

A: Every Restify 4.x app should use bluebird instead of native Promises.

Q: How do I do that?

A: At the beginning of your app (where you load dotenv) you should add the following:

global.Promise = require("bluebird");

Q: Awesome! 🎉 Now I can just use all the neat-o bluebird promise methods!

A: That is not a question. Also, please don't. Please treat every Promise in your app like a native Promise, and don't invoke bluebird methods without explicitly using bluebird. If you fail to do that, it will be nearly impossible to revert this temporary (albeit clever) hack. So, to be clear:

// bad
return new Promise((resolve, reject) => {
        // ...
        if (err) {
            reject(err);
        }
        else {
            resolve(true);
        }
    })
    .finally(() => {
        // native promise has no method `finally`
        console.log("finished");
    });

// good
const bluebird = require("bluebird");
return new bluebird((resolve, reject) => {
        // ...
        if (err) {
            reject(err);
        }
        else {
            resolve(true);
        }
    })
    .finally(() => {
        // 👌
        console.log("finished");
    });

Pull Requests: Assign or Request Review?

If you're feeling confused about GitHub's new review requests feature (as opposed to the existing ability to assign people to issues and pull requests) here's my take.

When review requests were first rolled out, they were not very useful because there was no way to see a list of pull requests you had been requested to review. Thankfully, that deficiency has been fixed!

Now, review requests unleash a great way to manage workflow on pull requests.

When you create a pull request, you add a review request for one or more reviewers. In addition, assign the pull request to the reviewer to signal that you are awaiting their action.

When the reviewer finishes their review, they should either merge the pull request (LGTM!) or assign it back to you (or someone else) for action -- respond to a comment, answer a question, revise code, etc.

Then when you respond to feedback, you assign the pull request back to the reviewer, and the cycle repeats.

There is additional explicit signaling in the review request workflow that you can use, too.

So, improve your pull request workflow by:

  • requesting a review from each reviewer when you open a pull request;
  • assigning the pull request to whoever needs to act next; and
  • re-assigning it as reviews, responses, and revisions move the work back and forth.

So You Want To Keep Your Cookies Secure

At Social Tables, we have this Koa app that needs to read and set a session cookie. We don't want to send that cookie over an unencrypted connection, though.

We use koa-generic-session for session management. That library uses the cookies library under the hood, and luckily, there's a simple configuration option to avoid sending cookies over an unencrypted connection!

But it's not that simple.

Turns out that the cookies library inspects the request and will throw an error if the app tries to send a secure cookie over an insecure connection.

This is all fine until you start getting fancy. Fancy, as in, the app is behind an SSL-terminating load-balancer. Which means that the app thinks the connection is an insecure HTTP request.

Now, there is a configuration option for Koa:

app.proxy = true;

This tells Koa that when determining whether a request is secure it may trust the headers that the load-balancer adds to each request.

And, again, this is just fine until you start getting even fancier. Fancier, as in, the load-balancer actually points to an nginx proxy that serves static assets and points other traffic to the Koa app.

Now, you can find pointers for how to configure nginx behind an SSL-terminating load balancer.

And that's fine until you start getting ultra-fancy. Ultra-fancy, as in, the load-balancer is configured to support PROXY protocol. I'm not going to get into the reasons why we ended up being so ultra-fancy that we wanted to enable PROXY protocol on our load-balancer. Truth is, we don't need it. But the upshot of why this causes problems is that the headers added to each request are different. And not just different. There is literally nothing in the proxy headers that indicates that the client request was made via https vs. http.

So... luckily our app is hosted on Amazon AWS in a VPC that is not reachable from the internet. In other words, there's no way a request could reach our nginx process other than via an https request that hits our load-balancer. Which means, we can just -- gulp -- hard-code it.

The relevant configuration in the nginx config:

server {
  # ...
  # This is empty because of PROXY protocol
  if ($http_x_forwarded_proto = '') {
    # So we hard-code the protocol as https, i.e., "secure"
    set $http_x_forwarded_proto https;
  }

  location @node {
    # ...
    # This is the header Koa will rely upon
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
  }
}

By the way -- and I hope this isn't burying the lede too much here -- if your app only relies on reading those secure cookies, you don't need to worry about this mishegas. A user's browser doesn't know or care about what's behind your SSL-terminating load-balancer using PROXY protocol to talk to nginx proxying to your node app. All a user's browser knows about is the https request that hits the load-balancer. Only if you need to set a secure cookie do you possibly need to know about this.

Hope it helps someone.

Making the Correct Insanely Difficult

tl;dr

If you’re trying to configure nginx on Elastic Beanstalk to redirect http requests to https, here’s what I learned.

  • During deployment, the nginx configuration for your app is located at this file path: /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
  • Using a container command, you can edit that nginx configuration file right before it gets deployed.
  • I used a little perl one-liner to insert the redirect.
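To make that concrete, here's a simulation of the edit run against a made-up stand-in file under /tmp (the real target is the generated file at the path above, and the exact match pattern and redirect rule are assumptions to adapt for your config):

```shell
# Stand-in for the Elastic Beanstalk-generated nginx config
# (contents are illustrative, not the real generated file)
CONF=/tmp/00_elastic_beanstalk_proxy.conf
cat > "$CONF" <<'EOF'
location / {
    proxy_pass http://nodejs_upstream;
}
EOF

# The perl one-liner: inject an http->https redirect at the top of the
# location block, keyed off the X-Forwarded-Proto header the ELB sets
perl -0777 -pi -e 's|location / \{|location / {\n    if (\$http_x_forwarded_proto = "http") {\n        return 301 https://\$host\$request_uri;\n    }|' "$CONF"

cat "$CONF"
```

In a real .ebextensions container command, the same perl invocation would point at the deployment-time path instead of /tmp.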

Background

So... we're using Amazon Web Services Elastic Beanstalk for one of the apps I'm working on. It's pretty easy to get started, but it's also really easy to find that you’re fighting Elastic Beanstalk to get it to stop doing something stupid.

I was fighting one of those "stupid" things the other day: http-to-https redirect.

Let's say you have a web application that requires users to login with a name and a password. You don't want users' passwords getting sent over the internet without being encrypted, of course. So you enable SSL and serve content over https.

But sometimes, users type your domain name (like, “google.com”) into the address bar, which defaults to http. Or they follow a link to your app that mistakenly uses http instead of https. In any event, you don’t want users who are trying to get to your app to get an error message telling them there’s nothing listening on the other end of the line, so you need to be listening for http requests but redirecting them to https for security.

Now, our app is written in Node.js, and we’ve configured Elastic Beanstalk to point internet traffic to an Elastic Load Balancer, which terminates SSL and proxies traffic to the backing servers, which are running our app behind nginx. This might sound like too many levels of indirection, but nginx is optimized for serving static content, while Node.js is optimized for dynamic content, so this is a pretty common setup.

And this is where Elastic Beanstalk gets stupid.

When we configured our app to listen for both http and https traffic, Elastic Beanstalk directed all of that traffic to nginx — and configured nginx to direct all of that traffic to our app — without giving us any way to redirect http traffic to https.

I imagine lots of apps want to respond to both http and https traffic while redirecting insecure http requests to secure https requests. Maybe I’m wrong.

Anyway, I want to do that. And I found it insanely difficult to accomplish.

npm CLI Quick-Start for Organizations

We have a number of private npm packages, and I needed to create a new user and grant that user read-only access to our private packages. The npm docs are great. Really great. Go there for details. But here are the key commands for this (probably common) series of steps.

Create a new team

$ npm team create <scope:team>

Grant team read-only access to all existing private packages

Get a list of all private packages for your organization (scope)

$ npm access ls-packages <scope>
# Returns json :'(
# Let's use https://github.com/trentm/json to help
# Install: npm install -g json
$ npm access ls-packages <scope> | json -Ma key
# Returns list of package names. Noice.

Tying it all together

$ for PKG in $(npm access ls-packages <scope> | json -Ma key); do \
npm access grant read-only <scope:team> "${PKG}"; \
done
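Since the loop above needs live npm credentials, here's the same shape with stubbed commands (the package and team names are invented) so you can see what it does:

```shell
# Stand-ins for the real npm commands (all names here are invented)
list_packages() { printf '%s\n' "@myscope/pkg-a" "@myscope/pkg-b"; }
grant_read_only() { echo "granted read-only on $1"; }

# Same loop shape as above: one grant per package name
for PKG in $(list_packages); do
  grant_read_only "${PKG}"
done
# → granted read-only on @myscope/pkg-a
# → granted read-only on @myscope/pkg-b
```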

Create a new user

Backup your existing ~/.npmrc

$ npm adduser

Save your credentials (auth token will be in ~/.npmrc)

Restore your previous ~/.npmrc

Invite user to organization

Not implemented from the CLI. Use the website: https://www.npmjs.com/org/<scope>/members


Add user to a team

$ npm team add <scope:team> <user>

Remove user from a team

$ npm team rm <scope:team> <user>

Or: How I Learned to Stop Worrying and Love the Memory Leak

I received a "high memory usage" alert. Already panicking, I logged into New Relic and saw this terrifying graph:

Memory Leak?

That's a graph of memory usage, starting from when the server was created. For the uninitiated, when memory usage grows and grows and grows like that, chances are very, very high that you've got a nasty memory leak on your hands. Eventually, your server is going to run out of memory, and the system will start killing processes to recover some memory and keep running -- or just crash and burn.

The funny thing about this particular server is that I had already identified that this server was leaking resources, and I thought I'd fixed it.

Issue Closed

So, I started to investigate.

Running free -m confirmed that nearly all the memory was in use. But top (sorted by MEM%) indicated that none of the server processes were using much memory. Huh?

After some time on Google and Server Fault, I ran slabtop and saw that nearly all server memory was being cached by the kernel for something called dentry. This server has 16GB of RAM -- I'm no expert, but I'm pretty sure it does not need 14GB of cached directory entries. I know I can free this RAM, and with some more help from Google I find the magic incantation is:

# 2 = free reclaimable slab objects, i.e., cached dentries and inodes
echo 2 > /proc/sys/vm/drop_caches

After 5 terrifying seconds during which the server seemed completely locked up, the memory had been freed! But apparently, something about the way this server was acting was causing the kernel to keep all these directory entries cached. In other words, this was probably going to keep happening. I didn't want to have to create a cron job to manually clear the cache every 4 hours, but I wasn't above it.

More reading told me that maybe I was worried about nothing. Looking closely at the peaks of that graph, I saw that the kernel was freeing up memory.

Not a leak!

So maybe I was worried about nothing! Still, I didn't want New Relic alarms going off all the time. And what if the server needs memory more quickly than the kernel can free it? It seemed like something I shouldn't have to worry about.

Yet more Google-noodling, and I found that you can indeed tell the kernel how aggressively to clear the caches. (That latter post captured practically my thoughts exactly, and seemed to trace my experience tracking down this issue to a tee.)

So, after some tweaking, I settled on setting the following sysctl configuration in /etc/sysctl.conf (edit the file, then load it with sysctl -p):

vm.vfs_cache_pressure=100000
vm.overcommit_ratio=2
vm.dirty_background_ratio=5
vm.dirty_ratio=20

It seemed like the higher I set the vm.vfs_cache_pressure, the earlier (lower memory usage) it would free up the cache.

Here's a sweet graph showing three states:

  • [A] untweaked
  • [B] manually clearing the cache with echo 2 > /proc/sys/vm/drop_caches
  • [C] memory usage using the tweaked sysctl configuration

Slab Annotated

Those saw teeth on the right? That's the kernel freeing memory. Just like it was doing before, but more aggressively. This is a "memory leak" I can live with.

So you want to move your Homebrew folder

By default, Homebrew gets installed under /usr/local. This is great, because it doesn't require you to use sudo to install and upgrade packages. But the downside is that it turns your /usr/local directory into a git repository. If that doesn't cause you any conflicts, then by all means, stick with the default.

I had a conflict. Specifically, I use nave for node version management. Unfortunately, both Homebrew and nave drop a README.md in /usr/local, which means nave frequently modifies a file that's under Homebrew's version control and brew update breaks.

Solution

I decided to "move" my Homebrew directory to ~/Homebrew. Here are the steps I followed:


I didn't document this as I did it. Hopefully, I didn't forget anything.
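The general shape, though, was: copy the Homebrew tree to its new home, then put the new bin directory first on PATH. Here's a rough sketch of that idea, simulated under /tmp so it's safe to run (substitute /usr/local and ~/Homebrew on an actual machine, and persist the PATH change in your shell profile):

```shell
# Simulated paths; on a real machine these would be /usr/local and ~/Homebrew
OLD=/tmp/usr-local
NEW=/tmp/home/Homebrew
mkdir -p "$OLD/bin" "$(dirname "$NEW")"
printf '#!/bin/sh\necho brew-stub\n' > "$OLD/bin/brew"   # stub standing in for brew
chmod +x "$OLD/bin/brew"

# 1. Copy the whole Homebrew tree (git repo and all) to the new home
cp -R "$OLD" "$NEW"
# 2. Put the new bin directory first on PATH
export PATH="$NEW/bin:$PATH"
command -v brew   # → /tmp/home/Homebrew/bin/brew
```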


Managers, Goals, and Performance

Key takeaway for me from "What Great Managers Do to Engage Employees" was this:

Performance management is often a source of great frustration for employees who do not clearly understand their goals or what is expected of them at work. They may feel conflicted about their duties and disconnected from the bigger picture. For these employees, annual reviews and developmental conversations feel forced and superficial, and it is impossible for them to think about next year’s goals when they are not even sure what tomorrow will throw at them. (emphasis mine)

This is something we've been struggling with at my job. We use quarterly OKRs to set goals for all employees, but many of our OKRs (for me and my direct reports) lose relevance after 2 or 3 weeks.

So, I'm still looking for ways to help me (and my team) measure our performance and set clear, relevant goals.