Summary: This is my second attempt at D61871891. This time, I ran `xplat/cross_plat_devx/somerge_maps/compute_merge_maps.py`, which generated quic/somerge_defs.bzl
Reviewed By: kvtsoy
Differential Revision: D61975459
fbshipit-source-id: bec62acb2b400f4a102574e8c882927f41b9330e
Summary: `PacketEvent` is a very inaccurate and misleading name. We're basically using this as an identifier for cloned packets, so `ClonedPacketIdentifier` is a much better name.
Reviewed By: kvtsoy
Differential Revision: D61871891
fbshipit-source-id: f9c626d900c8b7ab7e231c9bad4c1629384ebb77
Summary:
The idea here is to make it easy to swap out the type we use for optionality. In the near term we are going to try swapping to one that more aggressively tries to save size.
For now there is no functional change and this is just a big aliasing diff.
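As a rough illustration of the aliasing approach (the alias name and backing type here are hypothetical, not the actual mvfst definitions):

```cpp
#include <optional>

namespace quic {

// Hypothetical alias: call sites use quic::Optional<T> everywhere, so the
// backing implementation can later be swapped for a more size-conscious
// optional without touching the rest of the codebase.
template <class T>
using Optional = std::optional<T>;

} // namespace quic

// Usage is unchanged no matter what the alias points at:
// quic::Optional<uint64_t> maybePacketNum;
```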
Reviewed By: sharmafb
Differential Revision: D57633896
fbshipit-source-id: 6eae5953d47395b390016e59cf9d639f3b6c8cfe
Summary: This reduces the number of stats callbacks when processing multiple packets in rapid succession.
Reviewed By: mjoras
Differential Revision: D56315022
fbshipit-source-id: 750024301f28b21e3125c144ead6f115706736a4
Summary:
The write loop functions for DSR and non-DSR data are segmented today, and so are the schedulers. Mirroring this, we also currently store the DSR and non-DSR streams in separate write queues. This makes it impossible to effectively balance between the two without potential priority inversions or starvation.
Combining them into a single queue eliminates this possibility, but is not entirely straightforward.
The main difficulty comes from the schedulers. The `StreamFrameScheduler` for non-DSR data essentially loops over the control stream queue and the normal write queue looking for the next stream to write to a given packet. When the queues are segmented, things are nice and easy. When they are combined, we have to deal with the possibility that the non-DSR scheduler hits a stream with only DSR data. Simply bailing isn't quite correct, since it would just cause an empty write loop. To fix that we need to check, after we are finished writing a packet, whether the next scheduled stream only has DSR data. If it does, we need to ensure `hasPendingData()` returns false.
The same needs to be done in reverse for the DSR stream scheduler.
The last major complication is that we need another loop which wraps the two individual write loop functions and calls both until the packet limit is exhausted or there is no more data to write. This handles the case where there are, for example, two active streams with the same incremental priority, one DSR and one not. In this case, on each write loop we want to write `packetLimit` packets, flip-flopping between DSR and non-DSR packets.
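A rough sketch of that outer loop, with the two inner write loops passed in as callables; the function and parameter names here are illustrative, not the actual mvfst write-path API:

```cpp
#include <cstdint>
#include <functional>

// Each inner loop writes up to `limit` packets and returns how many it wrote.
uint64_t writeLoop(
    uint64_t packetLimit,
    const std::function<uint64_t(uint64_t)>& writeNonDsrPackets,
    const std::function<uint64_t(uint64_t)>& writeDsrPackets) {
  uint64_t totalWritten = 0;
  while (packetLimit > 0) {
    // Alternate between the non-DSR and DSR write loops so streams of equal
    // incremental priority can interleave within a single outer loop.
    uint64_t written = writeNonDsrPackets(packetLimit);
    packetLimit -= written;
    if (packetLimit > 0) {
      uint64_t dsrWritten = writeDsrPackets(packetLimit);
      packetLimit -= dsrWritten;
      written += dsrWritten;
    }
    totalWritten += written;
    if (written == 0) {
      break; // neither scheduler made progress: no more data to write
    }
  }
  return totalWritten;
}
```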
This kind of round-robining is pathologically bad for DSR, and a future diff will experiment with changing the round-robin behavior so that we write a minimum number of packets per stream before moving on to the next stream.
This change also contains some other refactors, such as eliminating `updateLossStreams` from the stream manager.
(Note: this ignores all push blocking failures!)
Reviewed By: kvtsoy
Differential Revision: D46249067
fbshipit-source-id: 56a37c02fef51908c1336266ed40ac6d99bd14d4
Summary: Add the new IMMEDIATE_ACK frame from the ack frequency draft.
Reviewed By: mjoras
Differential Revision: D38523438
fbshipit-source-id: 79f4a26160ccf4333fb79897ab4ace2ed262fa01
Summary:
Adds paddingModulo to QuicPacketBuilder, which pads short header packets so that the space remaining in a packet is a multiple of paddingModulo.
I.e. a message of length 25, with a max packet size of 40 and a paddingModulo of 8, will get 7 padding frames for a total length of 32 (and a message of length 26 will get 6 padding frames, also going up to a total length of 32).
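A tiny worked sketch of the arithmetic (hypothetical helper, not the actual QuicPacketBuilder code):

```cpp
#include <cstdint>
#include <iostream>

// Number of single-byte PADDING frames needed so that the space remaining in
// the packet becomes a multiple of paddingModulo (hypothetical helper).
uint64_t paddingFramesNeeded(
    uint64_t bytesWritten,
    uint64_t maxPacketSize,
    uint64_t paddingModulo) {
  if (paddingModulo == 0 || bytesWritten >= maxPacketSize) {
    return 0;
  }
  return (maxPacketSize - bytesWritten) % paddingModulo;
}

int main() {
  std::cout << paddingFramesNeeded(25, 40, 8) << "\n"; // 7 -> total length 32
  std::cout << paddingFramesNeeded(26, 40, 8) << "\n"; // 6 -> total length 32
  return 0;
}
```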
Reviewed By: mjoras
Differential Revision: D32858254
fbshipit-source-id: 99ada01108429db9f9bba1e342daf99e80385179
Summary:
These are either no longer relevant, unlikely to be done, or speculative enough that they don't deserve code space.
The hope here is to make our search for TODOs higher signal.
Reviewed By: lnicco
Differential Revision: D29769792
fbshipit-source-id: 7cfa62cdc15e72d8b7b0cd5dbb5913ea3ca3dc5a
Summary:
When rebuilding outstanding packets, if the packet contains an ACK, use a fresh ACK state instead of the potentially stale one from the outstanding packet.
Collateral changes:
- The AckVisitor logic in processAckFrame now visits AckFrames all the time. This is to guarantee that they are visited even if they belong to cloned packets. The behavior for all other frame types remains unchanged.
- If rebuilding the AckFrame is not successful, it is ignored. The rest of the clone packet is still sent.
I have tried to address all the concerns that were previously raised on D27377752
Reviewed By: yangchi
Differential Revision: D28659967
fbshipit-source-id: fc3c76b234a6e7140dbf038b2a8a44da8fd55bcd
Summary:
Two bugs were introduced in the previous diff that merged the write buffer and
loss buffer of a QUIC stream:
(1) We stopped writing streams' loss buffers if the connection is flow control
blocked.
(2) When a stream write fails because it runs out of packet space, the failure
is mistakenly categorized as having run out of flow control. As a result, the
loop in writeStreamsHelper() continues trying to write loss buffer data,
assuming the loss buffers are still OK to write since they are not limited by
the flow control window. But because the actual problem is that the packet is
full, the loss buffer write fails anyway, so we waste CPU on a write that is
destined to fail.
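A minimal sketch of the distinction the fix relies on; the enum and function names are made up for illustration rather than taken from the mvfst code:

```cpp
// Possible outcomes of a single stream write attempt.
enum class StreamWriteResult {
  Written,
  PacketFull,          // no space left in the current packet
  FlowControlBlocked,  // flow control window exhausted
};

// Loss (retransmission) data is not limited by the flow control window, so it
// is still worth attempting when only flow control ran out, but not when the
// packet itself has no room left.
bool shouldKeepWritingLossBuffers(StreamWriteResult result) {
  return result == StreamWriteResult::FlowControlBlocked;
}
```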
Reviewed By: mjoras
Differential Revision: D27117026
fbshipit-source-id: db508fa101e04fbe4b5fe21c0c7013b0c0b07936
Summary: they need to be prioritized together
Reviewed By: mjoras
Differential Revision: D26918282
fbshipit-source-id: 061a6135fd7d31280dc4897b00a17371044cee60
Summary: As in title. No reason to make a string copy, these always refer to string literals.
Reviewed By: yangchi
Differential Revision: D25288518
fbshipit-source-id: 489f0ac22aa86aa4a4acb562245e98b6d33a10fd
Summary: Adds a top level API to set stream priorities, mirroring what is currently proposed in httpbis. For now, default all streams to the highest urgency, round-robin, which mirrors the current behavior in mvfst.
Reviewed By: mjoras
Differential Revision: D20318260
fbshipit-source-id: eec625e2ab641f7fa6266517776a2ca9798e8f89
Summary:
According to the spec v21
(https://tools.ietf.org/id/draft-ietf-tsvwg-datagram-plpmtud-21.html#name-sending-quic-probe-packets),
a d6d probe should contain only a PING followed by many PADDING frames. As a first step, we fully comply with the spec.
Future optimizations:
- We can potentially put loss data in the probe packet to perform retransmission. This might make the probe packet more useful by increasing goodput.
Reviewed By: mjoras
Differential Revision: D22557962
fbshipit-source-id: 9a6584bc46aeb29981c4e2c4121ded127a7f2f06
Summary: we don't use it in most of our code any more
Reviewed By: sharmafb
Differential Revision: D23378968
fbshipit-source-id: 876c27e8dea0e82f939c5f1b52a1cb16bb13195d
Summary:
The problem with Ping being a simple frame:
(1) All SimpleFrames share the same scheduler, so sending a Ping means we may
also send other frames, which can be problematic if we send them in the Initial
or Handshake space.
(2) Ping isn't retransmittable, but other Simple frames are, so we are
certainly setting this wrong when we send a pure Ping packet today.
That being said, there are cases where we need to treat Ping as retransmittable.
One is updating ack state: if the peer sends us a Ping, we may want to
ack early rather than late, so it makes sense to treat Ping as retransmittable.
Another is insertion into the OutstandingPackets list: when our API user sends a
Ping, they also add a Ping timeout, and without adding pure Ping packets to the
OP list we won't be able to track the acks to our Pings.
Reviewed By: mjoras
Differential Revision: D21763935
fbshipit-source-id: a04e97b50cf4dd4e3974320a4d2cc16eda48eef9
Summary:
The AckScheduler currently has two modes: Immediate mode, which always
writes acks into the current packet, and Pending mode, which only writes if
needsToSendAckImmediately is true. The FrameScheduler chooses Immediate mode
if there is other data to write as well; otherwise, it chooses Pending
mode. But if there is no other data to write and needsToSendAckImmediately is
false, the FrameScheduler will end up writing nothing.
This isn't a problem today, because to be on the write path the shouldWriteData
function makes sure we either have non-ack data to write or
needsToSendAckImmediately is true for a packet number space. But once we allow
packets in the Initial and Handshake spaces to be cloned, we will be on the
write path whenever there is probe quota. The FrameScheduler's hasData function
doesn't check needsToSendAckImmediately: it will think it has data to write as
long as the AckState has changed, but it can end up writing nothing in the
Pending ack mode.
Given that the write looper won't be scheduled to loop with only ack data to
write unless needsToSendAckImmediately is true, I think it's safe to remove the
Pending ack mode from AckScheduler.
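A small sketch of the mismatch (types and names here are illustrative): the scheduler's notion of "has data" and the Pending mode's write condition disagree, which is what produces the empty write.

```cpp
// Illustrative per-packet-number-space ack state.
struct AckState {
  bool ackStateChanged{false};           // there are acks we could schedule
  bool needsToSendAckImmediately{false}; // an immediate ack is required
};

// hasData() only looks at whether acks exist...
bool hasAcksToWrite(const AckState& state) {
  return state.ackStateChanged;
}

// ...while the Pending mode only writes when an immediate ack is needed, so a
// probe/clone write in the Initial or Handshake space could enter the write
// path and then produce nothing. Removing the Pending mode makes the write
// condition match hasAcksToWrite().
bool pendingModeWouldWriteAck(const AckState& state) {
  return state.needsToSendAckImmediately;
}
```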
Reviewed By: mjoras
Differential Revision: D22044741
fbshipit-source-id: 26fcaabdd5c45c1cae12d459ee5924a30936e209
Summary: It may be desirable for some applications to limit the number of streams written to a single packet, for example if they are treating streams like individual messages.
Reviewed By: yangchi
Differential Revision: D21067175
fbshipit-source-id: 75edcb1051bf7f1eb2c9b730806350f9857eabec
Summary:
Now that all scheduling goes through the main FrameScheduler or
CloningScheduler, this API is no longer used anywhere.
Reviewed By: mjoras
Differential Revision: D20899649
fbshipit-source-id: 89b34c6e8178a3abad10c594b24feae8308e3768
Summary:
To prepare for another configuration of this chain. With this, we can now
land all the outstanding GSO optimization diffs without having to worry
about breaking the production code.
Reviewed By: mjoras
Differential Revision: D20838453
fbshipit-source-id: 807a0c546305864e0d70f8989f31d3de3b812278
Summary: For protocols like HTTP/3, in the absence of an actual priority scheme, it's a good idea to write control streams before non-control streams. Implement this in a round-robin fashion, in the same way we do for other streams.
Reviewed By: afrind
Differential Revision: D18236010
fbshipit-source-id: faee9af7fff7736679bfea262ac18d677a7cbf78
Summary: Previously this would set `wrappedAround_` to `true` if the result of `lower_bound` was the container end. This is not actually a useful behavior and results in no streams being written if e.g. there is only one stream in the writable collection. What we actually want is for the iterator to start over from the start of the collection in this case.
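A minimal sketch of the intended behavior, using a plain `std::set` in place of the writable-streams collection (names are illustrative):

```cpp
#include <cstdint>
#include <set>

using StreamId = uint64_t;

// Pick the next stream at or after the last scheduled one; if lower_bound
// runs off the end, wrap to the beginning instead of writing nothing.
std::set<StreamId>::const_iterator nextWritableStream(
    const std::set<StreamId>& writableStreams,
    StreamId lastScheduledStream) {
  auto it = writableStreams.lower_bound(lastScheduledStream);
  if (it == writableStreams.end()) {
    it = writableStreams.begin(); // wrap around; covers the single-stream case
  }
  return it;
}
```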
Reviewed By: siyengar
Differential Revision: D17856193
fbshipit-source-id: d4a285879f16bb6827446c35dcaf6dc48ac03876
Summary:
The intention here was always to write to streams in a round robin fashion. However, this functionality has been effectively broken since introduction as `lastScheduledStream` was never set. We can fix this by having the `StreamFrameScheduler` set `nextScheduledStream` after it has written to the streams. Additionally we need to remove a check that kept us from moving past a stream if it still had data left to write.
In extreme cases this would cause streams to be completely starved, and ruin concurrency.
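A toy sketch of the round-robin behavior being restored; a simple queue stands in for mvfst's writable-stream bookkeeping, and the names are made up:

```cpp
#include <cstdint>
#include <deque>

using StreamId = uint64_t;

struct RoundRobinWriter {
  std::deque<StreamId> writableStreams;

  // Return the stream to write next and advance the schedule. The scheduled
  // stream is rotated to the back even if it still has data to write, so a
  // single large stream cannot starve the others.
  StreamId scheduleNext() {
    StreamId next = writableStreams.front(); // precondition: not empty
    writableStreams.pop_front();
    writableStreams.push_back(next);
    return next;
  }
};
```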
Reviewed By: siyengar
Differential Revision: D17748652
fbshipit-source-id: a3d05c54ee7eaed4d858df9d89035fe8f252c727
Summary: these names are always temporary strings, so we can avoid a copy
Reviewed By: mjoras
Differential Revision: D17477801
fbshipit-source-id: b9a04e4e6df3a2ccd4a45233a7a755f87d16d570
Summary:
Prior to this diff we would clone out an entire flow control window's worth of data from the write buffer and then clone out a smaller portion of that to write into the packet builder. This is extremely wasteful when we have a large flow control window.
Additionally, we would always write the stream data length field, even when we are going to fill the remainder of the packet with the current stream frame. By first calculating the amount of data that can actually be written and then writing the header, we can omit the data length field when the frame fills the whole packet.
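A rough sketch of the ordering described above, with hypothetical names; the real builder logic also accounts for the frame header size:

```cpp
#include <algorithm>
#include <cstdint>

struct StreamFramePlan {
  uint64_t bytesToWrite;
  bool includeLengthField;
};

// Work out how much stream data will actually fit before writing the frame
// header, so the explicit Length field can be omitted whenever the frame
// extends to the end of the packet.
StreamFramePlan planStreamFrame(
    uint64_t flowControlWindow,
    uint64_t bufferedBytes,
    uint64_t spaceLeftInPacket) {
  uint64_t writable =
      std::min({flowControlWindow, bufferedBytes, spaceLeftInPacket});
  bool fillsPacket = (writable == spaceLeftInPacket);
  return {writable, !fillsPacket};
}
```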
Reviewed By: yangchi
Differential Revision: D15769514
fbshipit-source-id: 95ac74eebcde87dd06de54405d7f69c42362e29c
Summary:
This ensures a lot of code no longer depends on fizz.
Pull Request resolved: https://github.com/facebookincubator/mvfst/pull/26
Reviewed By: mjoras, JunqiWang
Differential Revision: D16030663
Pulled By: yangchi
fbshipit-source-id: a3cc34905a6afb657da194e2166434425e7e163c
Summary:
There is a bug in how we decide whether to schedule a write loop for
sending acks. Currently, if one PN space has needsToWriteAckImmediately set to
true and another PN space has hasAcksToSchedule set to true, but no PN space
actually has both set to true, we will still schedule a write loop. That's
wrong: we won't be able to send anything in that case. This diff fixes that.
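A minimal sketch of the corrected check, with hypothetical per-packet-number-space state; the point is that both flags must be true within the same space:

```cpp
#include <array>

struct PnSpaceAckState {
  bool needsToWriteAckImmediately{false};
  bool hasAcksToSchedule{false};
};

// Schedule an ack-driven write loop only if some single packet number space
// both needs an immediate ack and has acks to schedule; flags spread across
// different spaces cannot produce a packet.
bool shouldScheduleAckWrite(const std::array<PnSpaceAckState, 3>& spaces) {
  for (const auto& space : spaces) {
    if (space.needsToWriteAckImmediately && space.hasAcksToSchedule) {
      return true;
    }
  }
  return false;
}
```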
Reviewed By: JunqiWang
Differential Revision: D15446413
fbshipit-source-id: b7e49332dd7ac7f78fc3ea28f83dc49ccc758bb0