
Quic pacing refactor

Summary:
(1) The first change simplifies the pacing rate calculation. It removes the
interval calculation and just uses the timer tick as the interval, then
calculates the burst size from there. In most cases the two calculations
should land on the same result, except when `cwnd * tick / RTT < minBurstSize`.
In that case, the current calculation would spread writes evenly across one
RTT, assuming no new ACK arrives during the RTT, while the new calculation
uses the first few ticks to finish the cwnd amount of data.
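
A minimal sketch of the simplified burst-size calculation described above; the
function and parameter names are illustrative, not the actual mvfst API:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>

// Hypothetical sketch: the timer tick itself is the pacing interval, and the
// burst is the share of cwnd that belongs to one tick, floored at a minimum
// burst size.
uint64_t burstSizeForTick(
    uint64_t cwndBytes,
    std::chrono::microseconds rtt,
    std::chrono::microseconds timerTick,
    uint64_t minBurstBytes) {
  if (rtt.count() == 0) {
    return cwndBytes; // no RTT sample yet: allow the whole window
  }
  uint64_t bytesPerTick =
      cwndBytes * static_cast<uint64_t>(timerTick.count()) /
      static_cast<uint64_t>(rtt.count());
  // When cwnd * tick / RTT < minBurstSize, each tick still sends the minimum
  // burst, so the whole cwnd is written within the first few ticks.
  return std::max(bytesPerTick, minBurstBytes);
}
```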

(2) This diff then changes how we compensate for a late timer. The pacer now
maintains a nextWriteTime_ and a lastWriteTime_, which makes it easy to
calculate the time elapsed since the last write. Each time the writer tries to
write, it is allowed to write timeElapsed * pacingRate. This is much more
intuitive than the current logic.
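
A minimal sketch of the elapsed-time budget, assuming a bytes-per-second
pacing rate; the struct and member names are hypothetical, not the real pacer
interface:

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical sketch: the writer may send roughly timeElapsed * pacingRate
// bytes since the last write, so a late timer automatically earns a
// proportionally larger budget.
struct ElapsedTimePacer {
  std::chrono::steady_clock::time_point lastWriteTime_;
  std::chrono::steady_clock::time_point nextWriteTime_;
  uint64_t pacingRateBytesPerSec_{0};

  uint64_t allowedBytes(std::chrono::steady_clock::time_point now) const {
    if (now <= lastWriteTime_) {
      return 0;
    }
    auto elapsedUs = std::chrono::duration_cast<std::chrono::microseconds>(
                         now - lastWriteTime_)
                         .count();
    return pacingRateBytesPerSec_ * static_cast<uint64_t>(elapsedUs) / 1000000;
  }
};
```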

(3) The diff also adds pacing-limited tracking to the pacer. The expected
pacing rate is cached whenever the congestion controller refreshes the pacing
rate. Then, as packets are sent out, the pacer keeps calculating the current
send rate. When the send rate is lower than expected, the pacer sets
pacingLimited_ to true; otherwise it sets it to false.

Only when the connection is not pacing limited is lastWriteTime_ set to the
packet sent time; otherwise it is set to the last nextWriteTime_. In other
words, if the send rate is lower than expected, we use the expected send time
instead of the real send time to calculate the elapsed time, which allows more
late-timer compensation and gives the pacer a chance to catch up.
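
A minimal sketch of the pacing-limited bookkeeping and the lastWriteTime_
choice described above, again with hypothetical names:

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical sketch: compare the achieved send rate against the expected
// (cached) rate, then choose which timestamp becomes lastWriteTime_.
struct PacingLimitedTracker {
  using Clock = std::chrono::steady_clock;
  uint64_t expectedRateBytesPerSec{0}; // cached when the controller refreshes
  Clock::time_point lastWriteTime;
  Clock::time_point nextWriteTime;
  bool pacingLimited{false};

  void onPacketSent(Clock::time_point sentTime, uint64_t bytesSent) {
    auto elapsedUs = std::chrono::duration_cast<std::chrono::microseconds>(
                         sentTime - lastWriteTime)
                         .count();
    uint64_t actualRate = elapsedUs > 0
        ? bytesSent * 1000000 / static_cast<uint64_t>(elapsedUs)
        : expectedRateBytesPerSec;
    pacingLimited = actualRate < expectedRateBytesPerSec;
    // When sending slower than expected, pretend the write happened at the
    // scheduled time; the next elapsed-time budget then grows, letting the
    // pacer catch up.
    lastWriteTime = pacingLimited ? nextWriteTime : sentTime;
  }
};
```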

(4) Finally, this diff removes the token-collecting behavior in the pacer.
Having tokens increased, instead of reset, when an ACK refreshes the pacing
rate or when we compensate for a late timer is quite confusing to some people.
After all the above changes, I found that tperf can still sustain good
throughput without always increasing tokens, and rally actually gives even
better results. So I think we can remove this part of the pacer, which is
potentially very confusing to people who don't know how we got there.

Reviewed By: mjoras

Differential Revision: D19252744

fbshipit-source-id: b83e4a01fc812fc52117f3ec0f5c3be1badf211f
Yang Chi
2020-01-17 10:08:46 -08:00
committed by Facebook Github Bot
parent 88efcd81f6
commit edb5104858
11 changed files with 117 additions and 136 deletions


@@ -765,7 +765,7 @@ TEST_F(QuicTransportFunctionsTest, TestUpdateConnectionWithPureAck) {
   ackFrame.ackBlocks.emplace_back(0, 10);
   packet.packet.frames.push_back(std::move(ackFrame));
   EXPECT_CALL(*rawController, onPacketSent(_)).Times(0);
-  EXPECT_CALL(*rawPacer, onPacketSent()).Times(0);
+  EXPECT_CALL(*rawPacer, onPacketSent(_)).Times(0);
   updateConnection(
       *conn, folly::none, packet.packet, TimePoint(), getEncodedSize(packet));
   EXPECT_EQ(0, conn->outstandingPackets.size());
@@ -1020,7 +1020,7 @@ TEST_F(QuicTransportFunctionsTest, WriteQuicdataToSocketWithPacer) {
       IOBuf::copyBuffer("0123456789012012345678901201234567890120123456789012");
   writeDataToQuicStream(*stream1, buf->clone(), true);
-  EXPECT_CALL(*rawPacer, onPacketSent()).Times(1);
+  EXPECT_CALL(*rawPacer, onPacketSent(_)).Times(1);
   EXPECT_CALL(*transportInfoCb_, onWrite(_));
   writeQuicDataToSocket(
       *rawSocket,