In 1.4-dev3, commit 31971e5 ("[MEDIUM] add support for infinite forwarding") made it possible to configure the lower layer to forward data indefinitely by setting the forward size to CHN_INFINITE_FORWARD (4GB-1). At that time, larger chunk sizes were not supported, so there was no possible confusion in the usage of the function.

Since 1.5 we support 64-bit content-lengths and chunk sizes, and the function has grown to accept 64-bit arguments, though it still limits a single pass to 32-bit quantities (what fits in the channel's to_forward field). The problem is that a content-length of exactly 4GB-1 can now be confused with infinite forwarding (in fact it is 4GB-1 plus whatever was already in the buffer).

This confusion has a visible effect when transferring this exact size: the transfer rate is lower than with other sizes, due in part to the disabling of the Nagle algorithm on the sendto() call. In theory, with keep-alive, it should also prevent a second request from being processed after such a transfer, but since the analysers are still present, the forwarding analyser properly counts down the remaining size to transfer, and the transaction ultimately gets correctly reset, so there is no visible effect there.

Since the root cause of the issue is an API problem (lack of distinction between a real valid length and a magic value), this patch modifies the API to add a new dedicated function, channel_forward_forever(), to program permanent forwarding. The existing function __channel_forward() was modified to properly take care of the requested sizes and to ensure it 1) never overflows and 2) never reaches CHN_INFINITE_FORWARD by accident (a rough sketch of the resulting split between the two functions is appended at the end of this message).

It is worth noting that the function used to have a bug causing a 2GB forward to be scheduled if it was called with less data than what is present in buf->i. Fortunately this bug could not be triggered with existing code.

This fix should be backported to 1.6 and 1.5. While it theoretically affects 1.4 as well, it is better not to backport it there: the risk of breaking large object transfers due to significant API differences is high, compared to the fact that the largest supported objects (4GB-1) are merely slower to transfer.
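For illustration only, here is a rough sketch of the intended split between the two functions. The struct buffer / struct channel definitions and the b_adv() helper below are simplified stand-ins carrying only the fields discussed above, not the real haproxy types, so this approximates the idea rather than reproducing the actual patch:

    #define CHN_INFINITE_FORWARD  0xFFFFFFFFU  /* magic "forward forever" value */
    #define MIN(a, b)             ((a) < (b) ? (a) : (b))

    /* simplified stand-ins for the real haproxy structures */
    struct buffer  { unsigned int i; };               /* i: input bytes not yet forwarded */
    struct channel { struct buffer *buf; unsigned int to_forward; };

    /* simplified stand-in for the buffer helper: mark <adv> input bytes as forwarded */
    static inline void b_adv(struct buffer *buf, unsigned int adv)
    {
            buf->i -= adv;
    }

    /* Schedule permanent forwarding. The magic value is only ever assigned
     * here, so a real 4GB-1 content-length can no longer be mistaken for it.
     */
    static inline void channel_forward_forever(struct channel *chn)
    {
            b_adv(chn->buf, chn->buf->i);
            chn->to_forward = CHN_INFINITE_FORWARD;
    }

    /* Schedule up to <bytes> bytes of forwarding, never overflowing
     * to_forward and never reaching CHN_INFINITE_FORWARD by accident.
     * Returns the amount actually scheduled in this pass.
     */
    static inline unsigned long long __channel_forward(struct channel *chn,
                                                       unsigned long long bytes)
    {
            unsigned int budget;
            unsigned int forwarded;

            /* a single pass is limited to what fits in the 32-bit to_forward
             * field, minus one so the magic value cannot be reached here
             */
            budget = MIN(bytes, CHN_INFINITE_FORWARD - 1);

            /* first consume what is already present in the buffer, bounded by
             * the budget so a request smaller than buf->i cannot mis-schedule
             * anything
             */
            forwarded = MIN(chn->buf->i, budget);
            b_adv(chn->buf, forwarded);
            budget -= forwarded;

            /* then schedule the remainder, saturating below the magic value */
            if (budget) {
                    unsigned long long total;

                    total = (unsigned long long)chn->to_forward + budget;
                    if (total >= CHN_INFINITE_FORWARD)
                            total = CHN_INFINITE_FORWARD - 1;
                    budget = total - chn->to_forward;
                    chn->to_forward += budget;
                    forwarded += budget;
            }
            return forwarded;
    }

The important property is that CHN_INFINITE_FORWARD is only ever set by channel_forward_forever(), so __channel_forward() can saturate just below it and a 4GB-1 content-length is handled like any other size.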