BUG/MEDIUM: h2/htx: always fail on too large trailers

In case a header frame carrying trailers just fits into the HTX buffer
but leaves no room for the EOM block, we used to return the same code
as the one indicating we're missing data. This would result in
such frames causing timeouts instead of immediate clean aborts. Now
they are properly reported as stream errors (since the frame was
decoded and the compression context is still synchronized).

This must be backported to 1.9.
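
As background, here is a minimal, self-contained sketch of the caller-side pattern this patch adopts (the names conn_ctx, strm_ctx, decode_trailers() and report_stream_error() are hypothetical stand-ins for h2c, h2s, h2c_decode_headers() and h2s_error(); the real change is in the hunk below): a return of 0 means the frame is incomplete and more data is needed, while a negative return means the frame was fully decoded but could not be stored, so only the stream is reset and the connection stays usable.

#include <stdio.h>

/* Hypothetical, simplified stand-ins for HAProxy's h2c/h2s contexts. */
struct conn_ctx { int fatal; };
struct strm_ctx { int errcode; };

/* Stand-in for h2c_decode_headers(): > 0 means decoded and stored,
 * 0 means the frame is incomplete (need more data), and < 0 means the
 * frame was decoded but could not be stored (e.g. no room left for EOM).
 */
static int decode_trailers(struct conn_ctx *c, struct strm_ctx *s)
{
	(void)c; (void)s;
	return -1; /* pretend the trailers fit but left no room for the EOM block */
}

static void report_stream_error(struct strm_ctx *s, int err)
{
	s->errcode = err; /* in HAProxy, h2s_error() would schedule an RST_STREAM */
}

static int handle_trailers(struct conn_ctx *conn, struct strm_ctx *strm)
{
	int ret = decode_trailers(conn, strm);

	if (conn->fatal)
		return -1;  /* connection-level error: abort everything */

	if (ret == 0)
		return 0;   /* incomplete frame: wait for more data */

	if (ret < 0) {
		/* Frame decoded but too large to store: the HPACK state is
		 * still synchronized, so only this stream needs to be reset.
		 */
		report_stream_error(strm, 2 /* INTERNAL_ERROR */);
		return -1;
	}

	return 1;           /* trailers stored successfully */
}

int main(void)
{
	struct conn_ctx c = { 0 };
	struct strm_ctx s = { 0 };

	printf("handle_trailers() -> %d (stream error code %d)\n",
	       handle_trailers(&c, &s), s.errcode);
	return 0;
}

Resetting only the stream is what keeps the connection alive: because the HPACK decoder consumed the frame, its dynamic table stays in sync with the peer, so other streams on the same connection are unaffected.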
Willy Tarreau 2019-05-06 11:12:18 +02:00
parent 5121e5d750
commit aab1a60977

@@ -1928,8 +1928,22 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
 	if (h2s->st != H2_SS_IDLE) {
 		/* The stream exists/existed, this must be a trailers frame */
 		if (h2s->st != H2_SS_CLOSED) {
-			if (h2c_decode_headers(h2c, &h2s->rxbuf, &h2s->flags, &body_len) <= 0)
-				goto out;
+			error = h2c_decode_headers(h2c, &h2s->rxbuf, &h2s->flags, &body_len);
+			/* unrecoverable error ? */
+			if (h2c->st0 >= H2_CS_ERROR)
+				goto out;
+
+			if (error == 0)
+				goto out; // missing data
+
+			if (error < 0) {
+				/* Failed to decode this frame (e.g. too large request)
+				 * but the HPACK decompressor is still synchronized.
+				 */
+				h2s_error(h2s, H2_ERR_INTERNAL_ERROR);
+				h2c->st0 = H2_CS_FRAME_E;
+				goto out;
+			}
 			goto done;
 		}
 		/* the connection was already killed by an RST, let's consume