I spent the last few days integrating the SockJS transport library into Ejabberd and Strophe. Here are some thoughts and impressions.
Just in case:
- Ejabberd is an XMPP (Jabber) server written in Erlang;
- BOSH is "Bidirectional-streams Over Synchronous HTTP", basically a custom protocol over HTTP which allows browsers to talk to an XMPP server. Defined here;
- SockJS is a websocket emulation protocol. It works over real websockets if there's browser support, or falls back to one of the fallback transports (long polling, etc).
Actually, it is better to start with a broader question: why websockets at all? Because of latency and the cost of maintaining an active connection: it is much more efficient to use a persistent TCP connection than a bunch of short-lived HTTP requests.
Why SockJS instead of raw websockets with BOSH as a fallback? Three reasons:
- SockJS provides a websocket-like API, so using SockJS on the client is as simple as creating an instance of the SockJS class instead of the WebSocket class;
- No need to hack Strophe to support both BOSH and websockets at the same time: SockJS already provides fallback transports;
- There are ready-to-use server-side websocket libraries for Erlang (like Cowboy). Instead of writing yet another websocket protocol implementation on top of the Ejabberd HTTP framework, I thought it would be easier to run Cowboy in a separate Erlang process and use its websocket module. And with that in mind, why not use sockjs-erlang, which already runs on top of Cowboy?
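To illustrate the first point, here is a minimal client-side sketch. The constructor is passed in as a parameter just to show that SockJS mirrors the WebSocket API; the URL and the function name are made-up examples, not code from my integration:

```javascript
// Sketch: SockJS is a near drop-in replacement for WebSocket on the client.
// SocketImpl is either SockJS or WebSocket; both expose the same handlers.
function openTransport(url, SocketImpl) {
  var sock = new SocketImpl(url); // e.g. new SockJS(url) or new WebSocket(url)
  sock.onopen = function () { console.log('connected'); };
  sock.onmessage = function (e) { console.log('received', e.data); };
  return sock;
}
// openTransport('http://example.com/xmpp', SockJS);  // SockJS uses http(s) URLs
```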
Instead of using a custom handshake (as in BOSH), the client sends and receives a "normal" XMPP stream header, so SockJS is used as a TCP replacement with exactly the same protocol. It is mostly compatible with the xmpp-over-websocket draft, except for the websocket handshake (the Sec-WebSocket-Protocol header) and the use of the SockJS protocol. However, with a minor server-side modification you can connect a websocket-compatible XMPP library to a SockJS server, since SockJS also exposes a "raw" websocket endpoint.
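Concretely, the first thing the client sends over SockJS is the standard XMPP stream open, exactly as it would over raw TCP. A small helper to build it might look like this (the helper name and the domain are illustrative, not part of my integration):

```javascript
// Sketch: the standard XMPP stream header, sent over SockJS as a plain
// message instead of a BOSH-style custom handshake.
function streamOpen(domain) {
  return "<stream:stream to='" + domain + "'" +
         " xmlns='jabber:client'" +
         " xmlns:stream='http://etherx.jabber.org/streams'" +
         " version='1.0'>";
}
// sock.send(streamOpen('example.com'));  // sock is a SockJS instance
```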
Unfortunately, there's no ready-to-use websocket module for Ejabberd. There's a pretty old fork, which you can find here. It is quite hackish (parsing XML with regexps, creating a new process for each incoming stanza, etc) and does not support the latest websocket spec.
There's supposedly a websocket module developed by ProcessOne, but it has not been released, so I can't say anything about it.
The SockJS integration module was developed as an ordinary Ejabberd module which spawns a worker process that hosts Cowboy with a SockJS route. For every incoming SockJS connection, it spawns a child process which holds some state, an xml_stream, and a child c2s connection. Whatever is received from SockJS is fed into the xml_stream; whenever c2s wants to send something, it goes out through the SockJS API, and so on.
Unfortunately, I cannot share the code, but if you want to do the integration yourself, it is fairly easy to do.
I used this gist as the basis for the Strophe.js integration. Essentially, it is the same Strophe.Connection class as in the gist, except that I used a slightly newer Strophe.js version and the SockJS class instead of the WebSocket class.
After trying out the integration, I saw a significant latency improvement when using the websocket transport. But what struck me as well: even with SockJS polling transports, latency seemed to be lower than with BOSH!
So, I created a small HTML page which does the following:
- Gets the current time stamp
- Connects to the jabber server and authenticates
- Sends a ping
- Waits for the pong
- Repeats steps #3 and #4 one hundred times
- Calculates the time delta
Basically, it won't send the next ping before receiving a pong: the higher the latency, the longer it takes to complete.
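The loop above can be sketched like this. The connection object and its sendPing helper are assumptions standing in for the actual Strophe calls; only the structure (next ping strictly after the previous pong) matters:

```javascript
// Sketch of the benchmark loop: send a ping, wait for the pong,
// repeat N times, then report the total elapsed time in milliseconds.
function runBenchmark(connection, rounds, done) {
  var start = Date.now();
  function step(i) {
    if (i === rounds) { done(Date.now() - start); return; }
    // the next ping is sent only after the previous pong arrives
    connection.sendPing(function onPong() { step(i + 1); });
  }
  step(0);
}
```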
- Transport is the SockJS transport name, or BOSH for BOSH;
- Localhost N - tests against Ejabberd running on localhost, in milliseconds;
- Remote N - tests against a US server (average ping at the time of testing: 162 ms), in milliseconds.
| Transport | Localhost 1 | Localhost 2 | Localhost 3 | Remote 1 | Remote 2 | Remote 3 |
|-----------|-------------|-------------|-------------|----------|----------|----------|
- It looks like the Ejabberd BOSH implementation aggressively buffers outgoing messages to send them in one response. SockJS also does this for polling transports, but it doesn't have any internal delays: if there's data in the queue, it will be dumped immediately;
- For the remote Ejabberd instance, with its pretty high network latency, the results are still in favor of SockJS, even though SockJS made 223 requests with the polling transport while BOSH made only 118;
- The SockJS streaming transport worked very well in the remote server test, with almost websocket-like latency;
- The JSONP-polling transport appears to be a decent alternative to BOSH;
- Some transports "scale" better with higher latency. For example, even though xhr-polling and xhr-streaming had the same latency against the local server, against the remote server xhr-streaming was much more efficient.
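The first observation can be illustrated with a toy model. This is assumed behavior, not actual Ejabberd or SockJS code: a zero hold time models SockJS polling (dump the queue as soon as data arrives), while a positive hold time models a BOSH-style window that batches stanzas into one response:

```javascript
// Toy model contrasting the two flush strategies (an assumption for
// illustration, not real server code).
function makeQueue(flushDelayMs, flush) {
  var items = [];
  return {
    push: function (stanza) {
      items.push(stanza);
      if (flushDelayMs === 0) {
        // SockJS-style: flush immediately, no internal delay
        flush(items.splice(0, items.length));
      } else {
        // BOSH-style: hold the response briefly to batch more stanzas
        setTimeout(function () {
          if (items.length) flush(items.splice(0, items.length));
        }, flushDelayMs);
      }
    }
  };
}
```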
I'm pretty happy with the switch. I'm not sure how sockjs-erlang will handle increased load or what its memory footprint is like, but I'm already seeing much better application responsiveness with SockJS.
We'll see how it goes.