BUG/MEDIUM: chunk: fix infinite loop in get_larger_trash_chunk()

When the input chunk is already the large buffer (chk->size ==
large_trash_size), the <= comparison still matches and returns another
large buffer of the same size. Callers that retry as long as the return
value is non-NULL (sample.c:4567 in json_query) then loop forever.
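
Below is a standalone sketch of that pattern. The buffer struct, the
sizes and the first-branch condition are simplified stand-ins for
illustration, not HAProxy's real code; only the <= test matters here.

#include <stdio.h>
#include <stddef.h>

/* Simplified model of get_larger_trash_chunk() with the faulty test. */
struct buffer { size_t size; };

static size_t regular_size     = 16384;    /* made-up value */
static size_t large_trash_size = 65536;    /* made-up value */
static struct buffer regular_buf = { 16384 };
static struct buffer large_buf   = { 65536 };

static struct buffer *get_larger_trash_chunk(struct buffer *chk)
{
	if (!chk || chk->size < regular_size)
		return &regular_buf;
	if (large_trash_size && chk->size <= large_trash_size) /* bug: <= */
		return &large_buf;
	return NULL;
}

int main(void)
{
	struct buffer *out = &large_buf; /* caller already holds the large buffer */
	int attempt;

	for (attempt = 0; attempt < 5; attempt++) {
		out = get_larger_trash_chunk(out);
		if (!out) {
			/* with '<' this branch is taken on the first attempt */
			printf("no larger buffer available, giving up\n");
			return 1;
		}
		printf("attempt %d: got a %zu-byte buffer, retrying\n",
		       attempt, out->size);
	}
	printf("still looping after 5 attempts (a real caller spins forever)\n");
	return 0;
}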

The json_query infinite loop is trivially triggered: mjson_unescape()
returns -1 not only when the output buffer is too small but also for
any \uXXYY escape where XX != "00" (mjson.c:305) and for invalid
escapes like \q. The retry loop assumes -1 always means "grow the
buffer", so a 14-byte JSON body of {"k":"\u0100"} hangs the worker
thread permanently. Sending as many such requests as there are worker
threads exhausts them all.
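
The following standalone sketch models only the behaviour described
above; unescape() is a simplified stand-in for mjson_unescape(), not the
real mjson code. It shows why a retry-on-(-1) loop can never make
progress on \u0100:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* -1 is returned both for a too-small output buffer and for any \uXXYY
 * escape whose high byte is not "00", so the two cases are ambiguous.
 */
static int unescape(const char *s, int len, char *out, int outlen)
{
	int i = 0, o = 0;

	while (i < len) {
		if (o >= outlen)
			return -1;                 /* output too small */
		if (s[i] == '\\' && i + 5 < len && s[i + 1] == 'u') {
			char hex[3] = { s[i + 4], s[i + 5], 0 };

			if (s[i + 2] != '0' || s[i + 3] != '0')
				return -1;         /* \u0100 & co: same error code */
			out[o++] = (char)strtol(hex, NULL, 16);
			i += 6;
		} else {
			out[o++] = s[i++];
		}
	}
	return o;
}

int main(void)
{
	const char *val = "\\u0100";               /* value from {"k":"\u0100"} */
	char buf[64];
	int tries;

	for (tries = 0; tries < 5; tries++) {
		if (unescape(val, (int)strlen(val), buf, (int)sizeof(buf)) >= 0)
			break;
		/* the retry loop wrongly treats -1 as "output buffer too small" */
		printf("try %d: got -1, growing the buffer and retrying\n", tries);
	}
	return 0;
}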

Use < instead of <= so a chunk that is already large yields NULL.
This also fixes the json converter overflow at sample.c:2869, where the
output size is never rechecked after the "growth" returns a same-size
buffer.
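
For illustration only (not part of this patch), here is a minimal sketch
of the kind of caller-side size recheck referred to above; grow(), the
struct and the sizes are made-up stand-ins:

#include <stdio.h>
#include <stddef.h>

struct buffer { size_t size; };

static struct buffer big = { 65536 };

/* Simulates the pre-fix behaviour: "growing" an already-large buffer
 * just hands back another buffer of the same size.
 */
static struct buffer *grow(struct buffer *chk)
{
	(void)chk;
	return &big;
}

int main(void)
{
	struct buffer cur = { 65536 };
	struct buffer *nbuf = grow(&cur);

	if (!nbuf || nbuf->size <= cur.size) {
		/* no real growth happened: fail instead of retrying */
		printf("growth failed, aborting\n");
		return 1;
	}
	printf("retrying with a %zu-byte buffer\n", nbuf->size);
	return 0;
}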

Introduced in commit ce912271db4e ("MEDIUM: chunk: Add support for
large chunks"). No backport needed.

@@ -170,7 +170,7 @@ struct buffer *get_larger_trash_chunk(struct buffer *chk)
 		/* no chunk or a small one, use a regular buffer */
 		chunk = get_trash_chunk();
 	}
-	else if (large_trash_size && chk->size <= large_trash_size) {
+	else if (large_trash_size && chk->size < large_trash_size) {
 		/* a regular buffer, use a large buffer if possible */
 		chunk = get_large_trash_chunk();
 	}