Summary: The logic for deciding which ACK type to write was duplicated in QuicPacketScheduler and QuicPacketRebuilder. This refactors the logic into a separate QuicAckScheduler so it can be tested for correctness and reused in both places.
Reviewed By: sharmafb
Differential Revision: D69933311
fbshipit-source-id: e4f45688a5d258dd2a57f9f7844407f3efad5f49
Summary: This adds a knob which puts a fixed amount of padding at the start of short header packets. This is useful for testing the consequences of, e.g., a larger CID or a smaller packet size.
Reviewed By: jbeshay
Differential Revision: D69603113
fbshipit-source-id: b92ba78682eed21b7d75e38c9584a93481e2eb2f
Summary: There was so much boilerplate in these tests that they were very hard to read and reason about.
Reviewed By: mjoras
Differential Revision: D68677041
fbshipit-source-id: 162c80021ee6db5baacd6999a2958242b906e83a
Summary: Restore, as an optional mode of operation, the buggy version of `cloneAllPacketsWithCryptoFrame`, which allowed up to two packets to be cloned in the same PTO flight but would clone them at most once.
Reviewed By: mjoras
Differential Revision: D67994014
fbshipit-source-id: 16f66fd48a9a4410427ba21dea6e7f6d1eee48e6
Summary:
I'm modifying the `QuicPacketScheduler` so that we only send RESET_STREAM_AT frames when we've sent out all of the data up to the reliable offset. While this is not something that's mandated by the spec, it greatly simplifies the accounting of flow control if we do it this way.
In more detail, if we allowed egressing the RESET_STREAM_AT frame before all of the reliable data was sent out, we'd need coordination between the StreamFrameScheduler and the RstStreamScheduler to ensure that we account for flow control correctly. In addition, we'd likely need a "flow control accounted for" offset in each stream state to ensure that we don't double-count flow control between the two schedulers. This adds significant complexity, so we just send RESET_STREAM_AT frames once we've sent out all of the data up to the reliable offset.
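As a rough sketch (the struct and field names below are illustrative, not the actual mvfst types), the gating condition amounts to:
```
#include <cstdint>

// Illustrative stand-in for the per-stream send state; the field names are
// assumptions, not the actual mvfst members.
struct StreamSendState {
  uint64_t currentWriteOffset{0}; // next new byte to be written to the wire
  uint64_t reliableOffset{0};     // offset up to which data must be delivered
};

// Gate RESET_STREAM_AT on having egressed everything below the reliable
// offset, so flow control never has to be split across two schedulers.
bool canScheduleResetStreamAt(const StreamSendState& s) {
  return s.currentWriteOffset >= s.reliableOffset;
}
```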
Reviewed By: mjoras
Differential Revision: D66709346
fbshipit-source-id: 01ca8659ae80fcdbd0e599f480fde10a11f0a56b
Summary:
Concatenate adjacent namespaces + format
This is a codemod. It was automatically generated and will be landed once it is approved and tests are passing in sandcastle.
You have been added as a reviewer by Sentinel or Butterfly.
Autodiff project: nc
Autodiff partition: fbcode.quic
Autodiff bookmark: ad.nc.fbcode.quic
Reviewed By: hanidamlaj
Differential Revision: D65365244
fbshipit-source-id: 0bbaa7684d03caf8fc8eff3439a0865940398220
Summary:
In CloningScheduler, in the loop for selecting a candidate packet for cloning under the "cloneAllPacketsWithCryptoFrame" mode of operation, skip a packet containing a CRYPTO frame only if both it and the most recent outstanding packet are clones of the same packet.
Without this, CRYPTO packets would be cloned at most once.
Reviewed By: mjoras
Differential Revision: D64411438
fbshipit-source-id: eac9e3dbb4c6d2536b1259af08f3c9647ef06ad8
Summary:
A packet in the CloningScheduler can be in three states:
1. Not cloned yet
2. Cloned, not processed yet
3. Cloned and processed
Currently, CloningScheduler will clone a packet in state 1, moving it to state 2, and continue cloning the packet while it is in state 2 without proceeding to subsequent packets.
This change makes it so that a packet in state 2 that has a CRYPTO frame will be skipped over, so that the next packet (which will be in state 1) has a chance to also be cloned.
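A rough sketch of the skip logic (the enum and field names below are illustrative, not the actual mvfst types):
```
// Illustrative sketch of the cloning decision described above.
enum class CloneState {
  NotCloned,          // state 1
  ClonedNotProcessed, // state 2
  ClonedAndProcessed, // state 3
};

struct CandidatePacket {
  CloneState state{CloneState::NotCloned};
  bool hasCryptoFrame{false};
};

// Skip over a cloned-but-not-yet-processed packet carrying a CRYPTO frame so
// that the next not-yet-cloned packet also gets a chance to be cloned.
bool shouldSkipForCloning(const CandidatePacket& pkt) {
  return pkt.state == CloneState::ClonedNotProcessed && pkt.hasCryptoFrame;
}
```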
Extra: Fix `DoNotCloneProcessedClonedPacket` to have the first packet be the already-processed one. If the second packet is the only one already processed, then of course the first one would be scheduled, letting the test speciously pass.
Reviewed By: mjoras
Differential Revision: D62770483
fbshipit-source-id: 5b00958ab4a787615338debacbe9bd2a0f6f74a4
Summary: This is my second attempt at D61871891. This time, I ran `xplat/cross_plat_devx/somerge_maps/compute_merge_maps.py`, which generated `quic/somerge_defs.bzl`.
Reviewed By: kvtsoy
Differential Revision: D61975459
fbshipit-source-id: bec62acb2b400f4a102574e8c882927f41b9330e
Summary: `PacketEvent` is a very inaccurate and misleading name. We're basically using this as an identifier for cloned packets, so `ClonedPacketIdentifier` is a much better name.
Reviewed By: kvtsoy
Differential Revision: D61871891
fbshipit-source-id: f9c626d900c8b7ab7e231c9bad4c1629384ebb77
Summary:
**Context**
The `BufAccessor` is used to access a contiguous section of memory. Right now, it works with a `Buf` under the hood.
**Overall plan**
The plan is to change the `BufAccessor` to use a `uint8_t*` instead. Since we're working with a section of contiguous memory, there's no need for a chained buffer abstraction here. This'll move us closer to deprecating the usage of `folly::IOBuf`.
**What this diff is doing**
Most use cases of the `BufAccessor` look like the following:
```
auto buf = bufAccessor.obtain();
// Do something with buf, like calling trimEnd()
bufAccessor.release(std::move(buf));
```
I'm adding APIs to the `BufAccessor` so that there's no need to `obtain()` and `release()` the `Buf`. We'd instead just call an API on the `BufAccessor`, which would call that same API on the underlying `folly::IOBuf`. Later on, we'll change the `BufAccessor` to use a `uint8_t*` under the hood.
I'm currently leaving in the `obtain()`, `release()`, and `buf()` APIs because Fizz and the AsyncUDPSocket expect `folly::IOBuf` as inputs in many of their APIs. Once those callsites are migrated off `folly::IOBuf`, we can remove these APIs.
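For illustration, a call site like the one above can collapse into a single forwarded call. `trimEnd` here is an assumed example of such a forwarding API (mirroring `folly::IOBuf::trimEnd`), and `BufAccessorT` stands in for the real `BufAccessor`:
```
#include <cstddef>
#include <utility>

// Before: borrow the IOBuf, mutate it, then hand it back.
template <class BufAccessorT>
void shrinkOld(BufAccessorT& bufAccessor, std::size_t n) {
  auto buf = bufAccessor.obtain();
  buf->trimEnd(n);
  bufAccessor.release(std::move(buf));
}

// After: call an assumed forwarding API directly on the accessor.
template <class BufAccessorT>
void shrinkNew(BufAccessorT& bufAccessor, std::size_t n) {
  bufAccessor.trimEnd(n);
}
```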
Reviewed By: mjoras
Differential Revision: D60973166
fbshipit-source-id: 52aa3541d0c4878c7ee8525d70ac280508b61e24
Summary:
The idea here is to make it so we can swap out the type we are using for optionality. In the near term we are going to try swapping to one that more aggressively tries to save size.
For now there is no functional change and this is just a big aliasing diff.
Reviewed By: sharmafb
Differential Revision: D57633896
fbshipit-source-id: 6eae5953d47395b390016e59cf9d639f3b6c8cfe
Summary: This is effectively an unused field encoding duplicated information, but it was widespread.
Reviewed By: kvtsoy
Differential Revision: D57289922
fbshipit-source-id: ca1499e2576e5ae28e3880b865a29c2b8d9a3d1b
Summary: This reduces the number of stats callbacks when processing multiple packets in rapid succession.
Reviewed By: mjoras
Differential Revision: D56315022
fbshipit-source-id: 750024301f28b21e3125c144ead6f115706736a4
Summary: Neither QUIC nor TransportMonitor is using the `packetsInflight` field of the `OutstandingPacketMetadata`.
Reviewed By: hanidamlaj
Differential Revision: D55926288
fbshipit-source-id: 32efd66add1e6374a8d3446ff635fe582b36e644
Summary: Neither QUIC nor TransportMonitor is using the `totalBodyBytesSent` field of the `OutstandingPacketMetadata`.
Reviewed By: hanidamlaj
Differential Revision: D55897240
fbshipit-source-id: 521f8a016f838dd1fe0593daa7a4e45c4fd222cf
Summary: This is similar to the previous commit. We use a stack-based IOBuf instead of allocating one on the heap. This saves CPU because allocations/deallocations on the stack are a lot cheaper.
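A minimal illustration of the difference, using folly's `wrapBuffer` vs. `wrapBufferAsValue`; the surrounding function is just a stand-in for the real call sites:
```
#include <folly/io/IOBuf.h>

void example(const uint8_t* data, size_t len) {
  // Heap-allocating wrapper: returns std::unique_ptr<folly::IOBuf>, costing
  // an allocation/deallocation per call.
  auto heapBuf = folly::IOBuf::wrapBuffer(data, len);

  // Stack-based wrapper: returns a folly::IOBuf by value, avoiding the heap
  // allocation entirely.
  folly::IOBuf stackBuf = folly::IOBuf::wrapBufferAsValue(data, len);

  (void)heapBuf;
  (void)stackBuf;
}
```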
Reviewed By: hanidamlaj
Differential Revision: D53101722
fbshipit-source-id: dd59a7eca6498db19472a62f954db3e2f2f27a42
Summary: Creating an IOBuf on the heap when we use `folly::IOBuf::wrapBuffer` is expensive.
Reviewed By: hanidamlaj
Differential Revision: D52506216
fbshipit-source-id: eed2b77beae0419b542b0461303785cc175e3518
Summary:
The idea here is to allow a way to do incremental stream priorities without switching between streams as aggressively.
Achieving this is somewhat tricky. The easiest place to track this is to make the iterator in QuicPriorityQueue no-op next() until a counter reaches an increment.
This is especially impactful for DSR, where round robining per packet is almost pathologically bad, both for CPU impact and for spurious losses and low bandwidth estimates.
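A rough sketch of the idea (the names `writesPerStream` and `next` are placeholders, not the QuicPriorityQueue API):
```
#include <cstdint>
#include <deque>

// Illustrative sketch of a "sticky" iterator for incremental priority.
struct IncrementalLevel {
  std::deque<uint64_t> streams;  // stream ids at one incremental level
  uint64_t writesPerStream{1};   // writes on a stream before rotating away
  uint64_t writesOnCurrent{0};

  uint64_t current() const {
    return streams.front();
  }

  // next() is a no-op until the counter reaches the configured increment;
  // only then does it rotate to the following stream.
  void next() {
    if (++writesOnCurrent < writesPerStream || streams.size() < 2) {
      return;
    }
    writesOnCurrent = 0;
    streams.push_back(streams.front());
    streams.pop_front();
  }
};
```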
(Note: this ignores all push blocking failures!)
Reviewed By: kvtsoy
Differential Revision: D46268308
fbshipit-source-id: cd5b924141365f61f8a3363bc9cb38a62e5c94cf
Summary:
The write loop functions for DSR and non-DSR data are segmented today, and so are the schedulers. Mirroring this, we also currently store the DSR and non-DSR streams in separate write queues. This makes it impossible to effectively balance between the two without potential priority inversions or starvation.
Combining them into a single queue eliminates this possibility, but is not entirely straightforward.
The main difficulty comes from the schedulers. The `StreamFrameScheduler` for non-DSR data essentially loops over the control stream queue and the normal write queue looking for the next stream to write to a given packet. When the queues are segmented, things are nice and easy. When they are combined, we have to deal with the potential that the non-DSR scheduler will hit a stream with only DSR data. Simply bailing isn't quite correct, since it will just cause an empty write loop. To fix that, we need to check, after we are finished writing a packet, whether the next scheduled stream only has DSR data. If it does, we need to ensure `hasPendingData()` returns false.
The same needs to be done in reverse for the DSR stream scheduler.
The last major complication is that we need another loop which wraps the two individual write loop functions and calls both until the packet limit is exhausted or there's no more data to write. This handles the case where there are, for example, two active streams with the same incremental priority, one DSR and one not. In this case, each write loop we want to write `packetLimit` packets, flip-flopping between DSR and non-DSR packets.
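A hedged sketch of that wrapping loop; the two callbacks stand in for the real DSR and non-DSR write loop functions and return how many packets they wrote:
```
#include <cstdint>
#include <functional>

// Alternate between the non-DSR and DSR write loops until the packet limit
// is exhausted or neither side writes anything.
void combinedWriteLoop(
    uint64_t packetLimit,
    const std::function<uint64_t(uint64_t)>& writeNonDsr,
    const std::function<uint64_t(uint64_t)>& writeDsr) {
  while (packetLimit > 0) {
    uint64_t wrote = writeNonDsr(packetLimit);
    packetLimit -= wrote;
    if (packetLimit == 0) {
      break;
    }
    uint64_t wroteDsr = writeDsr(packetLimit);
    packetLimit -= wroteDsr;
    if (wrote == 0 && wroteDsr == 0) {
      break; // no more data to write on either path
    }
  }
}
```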
This kind of round robining is pathologically bad for DSR, and a future diff will experiment with changing the round robin behavior such that we write a minimum number of packets per stream before moving on to the next stream.
This change also contains some other refactors, such as eliminating `updateLossStreams` from the stream manager.
(Note: this ignores all push blocking failures!)
Reviewed By: kvtsoy
Differential Revision: D46249067
fbshipit-source-id: 56a37c02fef51908c1336266ed40ac6d99bd14d4
Summary: Allows the default stream priority (which also means scheduling policy) to be set via TransportSettings.
Reviewed By: jbeshay, hanidamlaj
Differential Revision: D45881729
fbshipit-source-id: fcb72022cd6eac2f1dafc55173d5a46a72a95dbc
Summary: As in title. I don't think there's a good reason to have this, better to have the caller be explicit.
Reviewed By: afrind, hanidamlaj
Differential Revision: D45875589
fbshipit-source-id: 3958ca10db542923b46ed8511c996ab3b292e9ab
Summary:
## Context
Context: see summary for D43854603.
## In this diff
In this diff, make way for easily adding new elements to the Priority struct.
Reviewed By: afrind
Differential Revision: D43882907
fbshipit-source-id: f74f6ec41162a970dcd666bfa5192676f968d740
Summary:
Fixing dependency loop introduced by D37799050 (96abc8160d)
Running `autodeps` yields the following patch:
```
--- a/xplat/quic/state/TARGETS
+++ b/xplat/quic/state/TARGETS
@@ -43,8 +43,8 @@
exported_deps = [
"//folly:random",
"//quic:constants",
+ "//quic/codec:codec",
"//quic/codec:types",
- "//quic/common:circular_deque",
"//quic/common:interval_set",
],
)
```
If this patch is applied, there is a circular dependency loop between `//quic/codec:codec` and `//quic/state:ack_states` by way of `//quic/codec:types`; this loop was introduced by D37799050 (96abc8160d).
Fixed by separating out header files and targets. In parallel, renamed the structures used for writing ACK frames (which were the reason this loop occurred) to make their role clear.
Differential Revision: D42281359
fbshipit-source-id: 8514c99f3fe72ff1d942d7f303e4a209838c7623
Summary:
We don't need to carry these states after the handshake is confirmed, so make them pointers instead. This will facilitate adding a structure to the AckState for tracking duplicate packets.
(Note: this ignores all push blocking failures!)
Reviewed By: hanidamlaj
Differential Revision: D41626895
fbshipit-source-id: d8ac960b3672b9bb9adaaececa53a1203ec801e0
Summary:
We previously made the decision to disallow the writing of a FIN on a DSR stream from a non-DSR stream frame as there's not really much value in allowing it.
The code that was previously allowing this to work has a bad bug. If the DSR bytes for a stream were written completely before the non-DSR bytes had been written, the scheduling code would set `canSetFin = true`, which would cause a FIN to be written at the `bufMetaStartingOffset`. This would race with the FIN sent by the backend and, if it didn't cause a connection error, would result in a truncated HTTP body of length 0.
We can avoid this by finishing the job of not allowing DSR streams to be FIN'd by non-DSR stream frames.
Reviewed By: hanidamlaj
Differential Revision: D41479274
fbshipit-source-id: 66b321db72fdb4a16378e33048e3fb97133ced92
Summary: We need to do this to maintain consistency with the schedulers. Prior to this fix, when we ran out of connection flow control and had lost DSR data, the non-DSR stream scheduler would do an empty write _and_ the DSR stream scheduler would do an empty write. To fully resolve the issue we need to track the streams separately (note that the same stream can be in both sets).
Reviewed By: jbeshay
Differential Revision: D40003361
fbshipit-source-id: fd3e567bc8a2bc8b71d4c4543894053fa6e24dc4
Summary:
When 0-rtt is used and an initial packet is lost, the retransmission of this packet may end up shorter than allowed because the padding frames are subject to the cwnd.
This change makes the padding of initial packets not subject to the cwnd.
Reviewed By: mjoras
Differential Revision: D39594390
fbshipit-source-id: 2d714c921f243f8a59577a6edaaeaa1c7e2be815
Summary: Add the new IMMEDIATE_ACK frame from the ack frequency draft.
Reviewed By: mjoras
Differential Revision: D38523438
fbshipit-source-id: 79f4a26160ccf4333fb79897ab4ace2ed262fa01
Summary:
We may end up writing ACKs after a length-less stream frame, and the peer will consider them garbage data. The ACKs will be written later as usual, outside of the cloning scheduler.
Reviewed By: mjoras
Differential Revision: D35561912
fbshipit-source-id: 9334523b984870fa525c856a504ae7d2ae4f34c3
Summary:
Adds paddingModulo to QuicPacketBuilder, which pads short header packets so that the length remaining in a packet is a multiple of paddingModulo.
E.g., a message of length 25 with a max packet size of 40 and a paddingModulo of 8 will get 7 padding frames, for a total length of 32 (and a message of length 26 will get 6 padding frames, also reaching a total length of 32).
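A small worked example of that arithmetic (the function name and signature are illustrative, not the QuicPacketBuilder API):
```
#include <cstdint>

// Pad until the space remaining in the packet is a multiple of paddingModulo.
uint64_t paddingBytesNeeded(
    uint64_t packetLength,
    uint64_t maxPacketSize,
    uint64_t paddingModulo) {
  uint64_t remaining = maxPacketSize - packetLength;
  return remaining % paddingModulo;
}

// paddingBytesNeeded(25, 40, 8) == 7 -> padded length 32, 8 bytes remaining
// paddingBytesNeeded(26, 40, 8) == 6 -> padded length 32, 8 bytes remaining
```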
Reviewed By: mjoras
Differential Revision: D32858254
fbshipit-source-id: 99ada01108429db9f9bba1e342daf99e80385179
Summary: When there is not enough space in the QUIC packet to write a datagram frame, we should not dequeue the datagram
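A rough sketch of the intended behavior (the names and the `frameOverhead` parameter are assumptions, not the actual datagram scheduler code):
```
#include <cstdint>
#include <deque>
#include <vector>

using Datagram = std::vector<uint8_t>;

// Peek at the datagram and only dequeue it once we know the frame fits in
// the remaining packet space.
bool maybeWriteDatagram(
    std::deque<Datagram>& queue,
    uint64_t spaceLeftInPacket,
    uint64_t frameOverhead) {
  if (queue.empty()) {
    return false;
  }
  const Datagram& front = queue.front();
  if (front.size() + frameOverhead > spaceLeftInPacket) {
    // Not enough room: leave the datagram in the queue for a later packet.
    return false;
  }
  // ... write the DATAGRAM frame into the packet here ...
  queue.pop_front();
  return true;
}
```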
Reviewed By: mjoras
Differential Revision: D33109603
fbshipit-source-id: 937a5a0ffe55c7f88e39faf224e7ad06ca599708
Summary:
This fixes a bug that could prevent us from writing data if we ran out of connection flow control while we had lost data.
The last attempt missed a mistake in the scheduling of sequential priority streams.
Reviewed By: kvtsoy
Differential Revision: D33030784
fbshipit-source-id: e1b82234346a604875a9ffe9ab7bc5fb398450ed
Summary: As titled. Reduces confusion, as we always expect this to be populated.
Differential Revision: D31887097
fbshipit-source-id: 153b05bae8abd559fe49d2c07c64d2ad0d92a809
Summary: This is causing empty write loops for some reason.
Reviewed By: jbeshay, lnicco
Differential Revision: D32990291
fbshipit-source-id: 2a183d591de54c7fe0ca54aea828a697a4cd9af0
Summary: This is an edge case we weren't handling properly. This can lead to situations where we run out of conn flow control, the peer hasn't received all data, and then we never write the data to trigger a flow control update.
Reviewed By: afrind
Differential Revision: D32887140
fbshipit-source-id: df5dfad8e0775ef43e5ca6ab98b8ca6b5987ce31