
I've read a number of different questions about what Nginx configuration is appropriate for SSE, and came up with some confusing results regarding which settings to use.

So what's the right answer?



Long-running connection

Server-Sent Events (SSE) are a long-running HTTP connection, so for starters we need this:

proxy_http_version 1.1;
proxy_set_header Connection "";

NOTE: TCP connections in HTTP/1.1 are persistent by default[5], so setting the Connection header to an empty value does the right thing and is what Nginx suggests.
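Putting these together, a full proxy block for an SSE endpoint might look something like this (a sketch only; the /events path, the app_server upstream name, and the 24h read timeout are placeholders to adapt to your setup):

```nginx
# Hypothetical location for an SSE endpoint.
location /events {
    proxy_pass http://app_server;

    # SSE needs a long-lived HTTP/1.1 connection to the upstream.
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    # Coarse alternative to the X-Accel-Buffering response header
    # discussed below: disable proxy buffering for this location.
    proxy_buffering off;

    # Give slow event streams time before Nginx times out the read.
    proxy_read_timeout 24h;
}
```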

Chunked Transfer-Encoding

Now, an aside: SSE responses don't set a Content-Length header because they can't know how much data will be sent; instead they need to use the Transfer-Encoding header[0][1], which allows for a streaming connection. Also note: if you don't add a Content-Length, most HTTP servers will set Transfer-Encoding: chunked for you. Strangely, HTTP chunking is warned against and causes confusion.

The confusion stems from a somewhat vague warning in the Notes section of the W3 EventSource description[2]:

Authors are also cautioned that HTTP chunking can have unexpected negative effects on the reliability of this protocol. Where possible, chunking should be disabled for serving event streams unless the rate of messages is high enough for this not to matter.

Which would lead one to believe that Transfer-Encoding: chunked is a bad thing for SSE. However, this isn't necessarily the case: it's only a problem when your webserver does the chunking for you (without knowing anything about your data). So, while most posts will suggest adding chunked_transfer_encoding off;, this isn't necessary in the typical case[3].
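To see why chunking is harmless when the server frames events itself, it helps to look at the wire format. A small illustrative Python sketch of HTTP/1.1 chunk framing (not production code; function names are made up):

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one piece of data as an HTTP/1.1 chunk:
    hex length, CRLF, payload, CRLF."""
    return b"%X\r\n%s\r\n" % (len(data), data)

def last_chunk() -> bytes:
    """The zero-length chunk that terminates a chunked body."""
    return b"0\r\n\r\n"

# An SSE event framed as a single chunk: the chunk boundary falls
# exactly on the event boundary, so nothing is split mid-message.
event = b"data: hello\n\n"
wire = encode_chunk(event)
```

When the application emits one chunk per event like this, the client's EventSource parser sees each event whole; the warning only bites when an intermediary re-chunks the stream at arbitrary byte boundaries.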

Buffering (the real problem)

Where most problems come from is having any type of buffering between the app server and the client. By default[4], Nginx uses proxy_buffering on (also take a look at uwsgi_buffering and fastcgi_buffering, depending on your application) and may choose to buffer the chunks that you want to get out to your client. This is a bad thing because it breaks the realtime nature of SSE.

However, instead of turning proxy_buffering off for everything, it's actually best (if you're able to) to send X-Accel-Buffering: no as a response header from your application server code, which turns buffering off only for the SSE-based response and not for all responses coming from your app server. Bonus: this will also work for uwsgi and fastcgi.

Solution

And so the really important settings are actually the app-server response headers:

Content-Type: text/event-stream
Cache-Control: no-cache
X-Accel-Buffering: no

And potentially the implementation of some ping mechanism so that the connection doesn't stay idle for too long; otherwise Nginx will close idle connections, as set using the keepalive setting.
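One common way to implement such a ping (an illustrative sketch, not prescribed by the answer) is to send an SSE comment line, which starts with a colon and is silently ignored by EventSource clients:

```python
def ping_frame() -> bytes:
    """An SSE comment line. EventSource clients discard it,
    but it keeps bytes flowing so the connection isn't idle."""
    return b": ping\n\n"

# In the server's event loop, emit ping_frame() whenever no real event
# has been sent for some interval (e.g. every 15 seconds -- an assumed
# value; keep it below whatever idle timeout Nginx enforces).
```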


[0] https://www.rfc-editor.org/rfc/rfc2616#section-3.6
[1] https://en.wikipedia.org/wiki/Chunked_transfer_encoding
[2] https://www.w3.org/TR/2009/WD-eventsource-20091029/#text-event-stream
[3] https://github.com/whatwg/html/issues/515
[4] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
[5] https://www.rfc-editor.org/rfc/rfc7230#section-6.3
[6] https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88

  • Can you elaborate on what the ping mechanism is? Is it simply pushing an empty message to the channel? I have set up the nginx and app-level headers but I am still getting a 504 timeout from nginx for any of the event-source endpoints.
    – wgwz, Apr 17, 2017 at 22:09
  • A ping would just be some (bogus) data sent at an interval over the connection; on the client you can handle this ping and ignore it. NOTE: if your connection isn't working at all, pinging won't help; something else is wrong.
    – c4urself, Apr 18, 2017 at 15:25
  • I added the response headers as suggested and it works. I made no changes to the Nginx v1.12 config and so far no problems.
    – Mikkel, Jul 5, 2017 at 12:00
  • Adding the X-Accel-Buffering: no header was key for me, but importantly, I had to do as @c4urself wrote: "add the X-Accel-Buffering: no as a response header in your application server code". Adding this header to a location section in my nginx config did not work -- the whole event stream waited to be sent until after the application finished/terminated.
    – MDMower, Jan 17, 2019 at 15:14
  • Is proxy_http_version 1.1; necessary? I am trying to run more than 6 SSE streams from a browser and hence I need HTTP/2.
    Jul 11, 2019 at 14:18
