Summary:
This adds the new priority queue implementation and a TransportSetting that controls whether it should be used or not. The default is still the old priority queue, so this diff should not introduce any functional changes in production code.
One key difference is that with the new queue, streams with new data that become connection flow control blocked are *removed* from the queue, and added back once more flow control credit arrives. I think this will make the scheduler slightly more efficient at writing low-priority loss streams when there's high-priority data and no connection flow control, since it doesn't need to skip over those streams when building a packet.
If this diff regresses build size, D72476484 should get it back.
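A minimal sketch of the removal behavior described above, with invented names (`WritableStreamQueue`, `markConnFlowControlBlocked`) standing in for the real implementation: blocked streams leave the queue entirely instead of being skipped on every packet build, and are re-inserted when a connection-level window update arrives.

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <utility>

// Hypothetical sketch: writable streams ordered by (priority, streamId).
// A stream that becomes connection flow control blocked is removed from
// the queue entirely, so the packet builder never has to skip over it;
// it is re-inserted when a window update restores credit.
struct WritableStreamQueue {
  // Lower priority value = more urgent, matching typical scheduler order.
  std::set<std::pair<uint8_t, uint64_t>> writable;
  std::set<std::pair<uint8_t, uint64_t>> connBlocked;

  void markWritable(uint8_t pri, uint64_t id) {
    writable.emplace(pri, id);
  }

  void markConnFlowControlBlocked(uint8_t pri, uint64_t id) {
    writable.erase({pri, id});
    connBlocked.emplace(pri, id);
  }

  // Called when a MAX_DATA update restores connection-level credit.
  void onConnFlowControlUpdate() {
    writable.insert(connBlocked.begin(), connBlocked.end());
    connBlocked.clear();
  }

  // The next stream to write, if any.
  const std::pair<uint8_t, uint64_t>* peek() const {
    return writable.empty() ? nullptr : &*writable.begin();
  }
};
```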
Reviewed By: mjoras
Differential Revision: D72476486
fbshipit-source-id: 9665cf3f66dcdbfd57d2199d5c832529a68cfac0
Summary: I started with the QuicStreamManager, but it turns out that the path from the manager up to the close path touches a LOT, so this is a big diff. The strategy is basically the same everywhere: add a folly::Expected return value, check it in every caller, and enforce that with [[nodiscard]]
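The pattern can be illustrated with a minimal stand-in for folly::Expected (the type, error enum, and `createNextStream` here are invented for illustration, not the real mvfst API): every fallible call returns an Expected, and [[nodiscard]] makes the compiler flag any caller that drops the result.

```cpp
#include <cstdint>

// Minimal stand-in for folly::Expected, for illustration only.
enum class LocalErrorCode { StreamLimitExceeded };

template <class T, class E>
struct Expected {
  bool hasValue;
  T value{};
  E error{};
  bool hasError() const { return !hasValue; }
};

// Hypothetical stream-manager call: instead of throwing, return an
// Expected and force callers to check it via [[nodiscard]].
struct StreamManager {
  uint64_t nextStreamId = 0;
  uint64_t streamLimit = 4;

  [[nodiscard]] Expected<uint64_t, LocalErrorCode> createNextStream() {
    if (nextStreamId >= streamLimit) {
      return {false, 0, LocalErrorCode::StreamLimitExceeded};
    }
    return {true, nextStreamId++, LocalErrorCode::StreamLimitExceeded};
  }
};
```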
Reviewed By: kvtsoy
Differential Revision: D72347215
fbshipit-source-id: 452868b541754d2ecab646d6c3cbd6aacf317d7f
Summary: With reliable resets, we're not going to advance the `currentReadOffset` to the `finalSize` until the application has read all of the reliable data. This is because we could still buffer additional bytes until we get all the reliable data, so we don't want to prematurely send flow control.
Reviewed By: mjoras
Differential Revision: D67681087
fbshipit-source-id: 9b041ce2ae15ccda4b8c6594759fcfe2deb04f64
Summary:
We appropriately adjust the following when we send a reliable reset:
- Write buffers, retransmission buffers, loss buffers, etc.
- The `sumCurStreamBufferLen`, which seems to indicate the number of bytes sent to the transport minus the number of bytes written out to the wire.
Additionally, I'm introducing a `reliableSizeToPeer` variable within the stream state that keeps track of the reliable size we're sending to the peer. In the future, we'll use this variable to ensure that we never increase the reliable size.
Note that none of the added logic is hooked up, so it's not executable.
Reviewed By: mjoras
Differential Revision: D64907994
fbshipit-source-id: 2a372a095c03757949c8866241ba20cabf711333
Summary:
I suspect the intention from the start was for updates to be triggered after windowSize / flowControlWindowFrequency bytes are read.
With the current math this is ONLY true if the factor is two. This is because it subtracts curReadOffset from curAdvertisedOffset, which is the amount of window _remaining_, not the amount read.
Change the math so it works more usefully and e.g. frequencies of 4 work as expected. Conveniently, the new math still matches the old behavior for a factor of 2, which is the only currently usable value without crazy side effects.
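A worked sketch of the two triggers (variable names assumed, not the exact mvfst code). Since the advertised offset is always the read offset at the last update plus windowSize, "window remaining" is curAdvertisedOffset - curReadOffset, and "bytes read since the last update" is curReadOffset - (curAdvertisedOffset - windowSize):

```cpp
#include <cstdint>

// Old math: fires when the REMAINING window drops below window/frequency,
// i.e. only after windowSize * (1 - 1/frequency) bytes are read. That
// coincides with the intended windowSize/frequency only when frequency == 2.
bool oldTrigger(uint64_t curAdvertisedOffset, uint64_t curReadOffset,
                uint64_t windowSize, uint64_t frequency) {
  return (curAdvertisedOffset - curReadOffset) < windowSize / frequency;
}

// New math: fires after windowSize / frequency bytes are read, as intended,
// for any frequency.
bool newTrigger(uint64_t curAdvertisedOffset, uint64_t curReadOffset,
                uint64_t windowSize, uint64_t frequency) {
  uint64_t readSinceUpdate =
      curReadOffset - (curAdvertisedOffset - windowSize);
  return readSinceUpdate >= windowSize / frequency;
}
```

For a window of 100 and frequency 4, the old check only fires after 75+ bytes are read, while the new one fires at 25 as intended; for frequency 2 both fire around the 50-byte mark.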
Reviewed By: jbeshay, kvtsoy
Differential Revision: D66216119
fbshipit-source-id: 6d19855dea1c26440d7cc6f66a8c2a503cae2caf
Summary: That way we can keep the default set to false in the mc. This makes adding params to iOS or other platforms easier.
Reviewed By: NishantNori
Differential Revision: D64791939
fbshipit-source-id: 23cf0e7962619d4754a766537a5cb87e92a22487
Summary:
The idea here is to make it so we can swap out the type we use for optionality. In the near term we are going to try swapping to one that more aggressively tries to save size.
For now there is no functional change and this is just a big aliasing diff.
Reviewed By: sharmafb
Differential Revision: D57633896
fbshipit-source-id: 6eae5953d47395b390016e59cf9d639f3b6c8cfe
Summary:
Double the initial stream flow control window every time we get a stream blocked frame from the peer.
This essentially reduces stream blocked spam in RTT-constrained conditions and when the initial flow control window is suboptimal.
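The growth rule can be sketched in one line (function name and cap are assumptions, not the real code): each blocked signal from the peer doubles the window used for future advertisements, up to a cap.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: on each stream-blocked frame from the peer, double
// the window used for future flow control advertisements, capped so a
// misbehaving peer can't grow it without bound.
uint64_t onStreamBlockedFromPeer(uint64_t currentWindow, uint64_t maxWindow) {
  return std::min(currentWindow * 2, maxWindow);
}
```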
Reviewed By: mjoras
Differential Revision: D55490692
fbshipit-source-id: 16db0ce63e787feba2bbbed4b8bc79b6925480cd
Summary: Until a flow control update arrives
Reviewed By: mjoras
Differential Revision: D55528849
fbshipit-source-id: 904543f3d04aae1cbbceb34f5723039dc5fb94e2
Summary: Proactively send out a blocked stream frame if we've used up all the flow control window on write.
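A sketch of the condition, with assumed names: after a write, if the stream has consumed the entire peer-advertised window and still has data pending, queue a blocked frame immediately instead of waiting for a later write attempt to discover it.

```cpp
#include <cstdint>

// Hypothetical check run after writing stream data: send a blocked frame
// proactively once the peer-advertised limit is fully used and we still
// have more to send.
bool shouldSendStreamBlocked(uint64_t currentWriteOffset,
                             uint64_t peerAdvertisedMaxOffset,
                             bool hasMoreDataToSend) {
  return hasMoreDataToSend && currentWriteOffset >= peerAdvertisedMaxOffset;
}
```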
Reviewed By: mjoras
Differential Revision: D55527833
fbshipit-source-id: 9dd645cde6c99e413272a2c373ce9a691fa34c28
Summary: Update flow control settings names to reflect that these are indeed flow control
Reviewed By: jbeshay
Differential Revision: D48137685
fbshipit-source-id: a48372e21cdd529480e25785a9bd5de456427ef3
Summary:
This allows the connection receive window to be dynamically increased.
The logic is simple: if the time between successive connection flow control updates is less than 2 * SRTT, we are seriously blocking the sender on connection flow control. In this case we double the target window.
This allows the window to grow to accommodate senders that are actually maintaining a high send rate, for data which is actually being consumed.
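The growth step above can be sketched as follows (names and the cap are assumptions for illustration): updates arriving closer together than 2 * SRTT mean the sender is draining the window faster than we replenish it, so the target window doubles up to a cap.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>

// Hypothetical sketch: called each time a connection flow control update
// is sent. If the previous update was less than 2 * SRTT ago, the window
// is throttling the sender, so double the target (bounded by maxWindow).
uint64_t maybeGrowConnWindow(std::chrono::microseconds timeSinceLastUpdate,
                             std::chrono::microseconds srtt,
                             uint64_t targetWindow,
                             uint64_t maxWindow) {
  if (timeSinceLastUpdate < 2 * srtt) {
    targetWindow = std::min(targetWindow * 2, maxWindow);
  }
  return targetWindow;
}
```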
Reviewed By: afrind
Differential Revision: D34289000
fbshipit-source-id: 6c846b26283846c05aab6fe12e4a361f75a1b423
Summary:
This was completely missing previously, which led to the client quickly shutting down the connection with a flow control violation when the server oversends in DSR mode.
Reviewed By: mjoras
Differential Revision: D27940953
fbshipit-source-id: 5644c1a3da5217365df9de33258bb5b071ff8187
Summary:
Clear DSR buffers, release flow control, and call release() on the packetization request sender. For now the stream will own the sender until the stream itself is dead. We need to change this ownership model later to be able to reset the pointer when we reset the stream.
Reviewed By: mjoras
Differential Revision: D27901663
fbshipit-source-id: d9d12ef95ae59c6f0fe7ac1b1589d8527b1bc48d
Summary: Without this, unsent data is permanently discounted from the effective stream flow control. With enough reset streams this can pile up and potentially deadlock connection flow control.
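The accounting fix can be sketched as a credit-back (field names assumed, not the real stream state): bytes that were buffered for the stream but never actually sent were charged against the connection window, and a reset should return them.

```cpp
#include <cstdint>

// Hypothetical sketch: on stream reset, bytes buffered past the current
// write offset were counted against connection flow control but never left
// the host; credit them back so repeated resets can't leak the window.
uint64_t creditBackUnsentBytes(uint64_t connFlowControlUsed,
                               uint64_t streamBufferedOffset,
                               uint64_t streamWrittenOffset) {
  uint64_t unsent = streamBufferedOffset - streamWrittenOffset;
  return connFlowControlUsed - unsent;
}
```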
Reviewed By: yangchi
Differential Revision: D24766394
fbshipit-source-id: 012772f257dcd014bea92e35109e63d30afdc465
Summary: This was probably a premature optimization and introduces complexity for dubious gain. Additionally, a sequence of losses could potentially cause multiple updates to be delayed.
Reviewed By: yangchi
Differential Revision: D23628058
fbshipit-source-id: d6cf70baec8c34f0209ea791dadc724795fe0c21
Summary: Emit a DATA_BLOCKED frame when a QUIC connection is write blocked.
Reviewed By: mjoras
Differential Revision: D23067313
fbshipit-source-id: f80d7425c9a3c4e9b81405716bcd944c83b97ac2
Summary:
All instances of LIKELY and UNLIKELY probably should be removed. We will
add them back in if we see pathologies in performance profiles.
Reviewed By: mjoras
Differential Revision: D19163441
fbshipit-source-id: c4c2494d18ecfd28f00af1e68ecaf1e85c1a2e10
Summary: Add transportStateUpdate event so it can be part of qlog.
Reviewed By: mjoras
Differential Revision: D16342467
fbshipit-source-id: 109189275d44996850b82646bab4a733a3a4c7a1