Proxying WebSocket Traffic for Node.js: The Present State of Play
The up-front summary: if you are reading this after Nginx has added support for proxying websocket traffic (expected in version 1.3), then everything is rainbows and unicorns - just use Nginx. If you are reading this before that support arrives, then you will likely have to do more work and investigation to create a good proxy setup for your servers.
Update as of 06/2013: Here are three viable server setups described in detail, including one that uses Nginx and its newly arrived websocket support:
- Using Stunnel and Varnish
- Using HAProxy 1.5-dev14 or later with native SSL support
- Using Nginx 1.3 or later
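For the Nginx 1.3+ option, the essential piece is a proxy_pass location that forwards the Upgrade and Connection headers to the backend. A minimal sketch, in which the upstream name, server name, and port 3000 are illustrative assumptions rather than anything this article prescribes:

```nginx
# Illustrative configuration - names and ports are assumptions.
upstream node_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_app;
        # Websocket proxying requires HTTP/1.1 plus the Upgrade
        # and Connection headers passed through to the backend.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

With this in place, both ordinary HTTP requests and websocket handshakes arriving on port 80 reach the same Node.js backend.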
So let us say that you are developing a Node.js / Socket.IO application that both uses websockets and serves files the old-fashioned way, such as through Express - a fairly common situation. You want an HTTP proxy server, rather than Node.js itself, to field all incoming traffic, so that you can set up load balancing, route requests for static files to some other server process, avoid having to configure SSL in Node.js, and so forth.
Unfortunately at this point in time it isn't completely straightforward to proxy both websocket traffic and ordinary web requests for Node.js through a single port - which would be the ideal situation. It becomes even less straightforward if you want to use SSL. The following is a brief overview of present options as of early Q3 2012.
Nginx Doesn't Yet Support Websocket Traffic
Nginx is a tool of choice for much of the Node.js community when it comes to proxying traffic to Express sites, load balancing, or serving static file requests. So in the ideal world, we could all just use Nginx - but unfortunately, it won't support proxying of websocket traffic until version 1.3, the next major release, with no date set for that functionality. You might look at another review of the options for more on that.
Give Up and Run Websockets on a Different Port
You could run normal web traffic through port 80 or 443 and websocket traffic through port 10080 or 10443. This is somewhat ugly, however, and runs into the major issue that many networks will block outbound traffic to unusual ports. But other than that it will work: you could go back to using Nginx as a proxy, with Node.js serving the higher port numbers directly.
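A minimal sketch of that split, with illustrative ports: Nginx proxies normal traffic on port 80 to an Express process, while the Socket.IO server listens on port 10080 directly, with no proxy involved.

```nginx
# Illustrative server block for ordinary web traffic only.
server {
    listen 80;
    server_name example.com;

    location / {
        # Express listening locally; the port is an assumption.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}

# Node.js / Socket.IO binds port 10080 itself, and clients
# connect with something along the lines of:
#   var socket = io.connect('http://example.com:10080');
```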
Give Up and Run Websockets on a Different IP Address
You can give your server two IP addresses, serve port 80 with Nginx on one address and port 80 with Node.js on the other address. Then point different subdomains of your site to the different addresses, and manage the websocket/non-websocket divide that way. You'll find a brief guide to setting this up at shedshape. This is less ugly than using different ports on the same server, and probably far less work than any of the other solutions outlined here. That said, some applications may need extra configuration and coaxing to work across multiple subdomains, and your SSL certificates will have to cover both subdomains.
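Concretely, the split might look like the following, using illustrative addresses from the documentation range - everything here (addresses, subdomains) is an assumption for the sake of the sketch:

```nginx
# Illustrative: Nginx binds only one of the server's two addresses.
server {
    listen 203.0.113.10:80;
    server_name www.example.com;
    # ... ordinary proxying / static file configuration ...
}

# Node.js binds the second address itself, e.g.:
#   server.listen(80, '203.0.113.20');
# with DNS pointing ws.example.com at 203.0.113.20.
```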
The TCP Proxy Module for Nginx Doesn't Solve the Problem
The excellent nginx_tcp_proxy_module allows Nginx to be used as a proxy for websocket traffic, as outlined in a post from last year. Unfortunately it is really only intended for load balancing - you can't use it to split out websocket versus non-websocket traffic arriving on the same port and send them to different destinations based on URI. Additionally, this requires a custom build to include the module into Nginx - which might be an issue for future support and upgrades.
HAProxy is a Simple Solution Until You Want to Involve SSL
HAProxy can manage websocket traffic, and if you just want to proxy unencrypted traffic on port 80, then the setup is pretty straightforward. Here's a short guide for putting HAProxy in front of Node.js and Apache. Issues start to arise with SSL traffic, however, as HAProxy on its own isn't suitable for handling encrypted web traffic. Thus the necessary setup becomes more complex, involving an additional proxy - such as Stunnel or Pound - to terminate the SSL connection and then pass unencrypted traffic to HAProxy. Fortunately, there is at least one overview that walks through these options and provides some configuration examples. As noted below, however, Pound seems to be ruled out on the basis of not actually working for websocket traffic.
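For the unencrypted case, a minimal HAProxy configuration might split traffic arriving on port 80 by the Upgrade header and by URL, roughly as follows. The backend addresses and the /socket.io path prefix are assumptions appropriate to a Socket.IO setup, not anything mandated by HAProxy:

```haproxy
# Illustrative haproxy.cfg fragment - addresses and paths are assumptions.
frontend public
    bind *:80
    # Detect websocket upgrades and Socket.IO URLs.
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_socketio  path_beg /socket.io
    use_backend node_ws if is_websocket or is_socketio
    default_backend web_static

backend node_ws
    # Long timeouts so idle websocket connections are not cut off.
    timeout server 600s
    server node1 127.0.0.1:3000

backend web_static
    server nginx1 127.0.0.1:8080
```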
Pound May Be a Solution, But it Doesn't Work For Me
Pound may be a solution for load balancing and splitting traffic to a single port by URL, sending some to Node.js and some to Nginx. It handles all that in addition to terminating SSL connections so as to pass on unencrypted traffic to backend servers. Unfortunately examples from the community seem to be thin on the ground, and my efforts to make it work with a Socket.IO-based backend suggest that it doesn't in fact support websocket traffic. (This was using Pound 2.5-1.1, installed as a package on Ubuntu 12.04). This is a pity, as it is very simple to set up and configure.
A Solution With Varnish, Nginx, and Node.js, But Not For SSL
Varnish can also be used to proxy websocket traffic, splitting traffic arriving at a single port between Node.js and Nginx. This solves the basic problem, and gives you the added benefits and functionality offered by Varnish - but Varnish doesn't itself support SSL traffic. So here you're back to a similar situation as for HAProxy: Stunnel or a similar service to terminate SSL connections has to manage incoming HTTPS connections and pass plain HTTP connections to Varnish.
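The relevant Varnish configuration is to pipe websocket connections straight through to Node.js, copying the Upgrade header onto the backend request, along the lines of the VCL below. This is Varnish 3 syntax, and the backend hosts and ports are assumptions:

```vcl
# Illustrative VCL (Varnish 3 syntax) - backend details are assumptions.
backend node {
    .host = "127.0.0.1";
    .port = "3000";
}

backend nginx {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend = node;
        # Websockets must bypass the cache entirely.
        return (pipe);
    }
    set req.backend = nginx;
}

sub vcl_pipe {
    # Copy the Upgrade header onto the backend request, or the
    # handshake will never reach Node.js.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}
```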
So Where Does This Leave the Ideal Solution for SSL?
Here is one present "ideal" solution that (a) supports SSL for both ordinary web traffic and websockets on the same port, and (b) allows for splitting traffic between backends by URL and other forms of load balancing:
- Varnish listening on port 80
- Stunnel listening on port 443, passing all traffic to Varnish
- Nginx and Node.js in some combination as the backend behind Varnish
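The Stunnel piece of that stack can be only a few lines of configuration, roughly as follows - the certificate path is an illustrative assumption:

```ini
; Illustrative /etc/stunnel/stunnel.conf - certificate path is an assumption.
cert = /etc/stunnel/example.com.pem

[https]
accept  = 443
; Pass decrypted traffic straight to Varnish on port 80.
connect = 127.0.0.1:80
```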
This is somewhat more complex than the promised future of being able to just use Nginx, post-1.3 - though no doubt the Varnish advocates would suggest that any serious web application should be using Varnish anyway.
Isn't This Whole Situation Somewhat Ridiculous?