Summary:
This is essentially a further extension of the previous idea of having a sequence number per DSR stream.
The rationale is the same -- to reduce spurious losses declared by the reordering threshold.
The primary case this works around is the following:
```
<!DSR><!DSR><!DSR><DSR><DSR><DSR><!DSR>
```
Suppose we get an ACK for [0-1,3-5], leaving only packet number 2 outstanding.
The naive reordering threshold will declare packet number 2 lost. However, this is arguably wrong in principle, since when considered on the timeline of !DSR packets alone, the gap does not exceed the reordering threshold.
To accommodate this, we track a monotonically increasing sequence number for each non-DSR packet written and store it in the packet metadata. We can then use that sequence number for the reordering comparison rather than the packet number itself.
When no DSR packets are ever written on a connection, this devolves to an identical result to using the packet number, as the two increment together.
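A rough sketch of the resulting comparison (the field and function names here are illustrative assumptions, not the actual mvfst code):
```cpp
#include <cstdint>
#include <optional>

// Hypothetical per-packet metadata: each non-DSR packet records a
// monotonically increasing nonDsrSequence assigned at write time; DSR packets
// leave it unset.
struct SentPacketInfo {
  uint64_t packetNumber{0};
  std::optional<uint64_t> nonDsrSequence;
};

// Reordering check: compare on the same timeline as the outstanding packet,
// so interleaved DSR packets do not inflate the apparent reordering gap.
bool declaredLostByReordering(
    const SentPacketInfo& outstanding,
    uint64_t largestAckedPacketNumber,
    uint64_t largestAckedNonDsrSequence,
    uint64_t reorderingThreshold) {
  if (outstanding.nonDsrSequence.has_value()) {
    return largestAckedNonDsrSequence >
        *outstanding.nonDsrSequence + reorderingThreshold;
  }
  // No non-DSR sequence tracked: fall back to packet numbers.
  return largestAckedPacketNumber >
      outstanding.packetNumber + reorderingThreshold;
}
```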
Reviewed By: kvtsoy
Differential Revision: D46742386
fbshipit-source-id: 2983746081c7b6282358416e2bb1bcc80861be58
Summary:
This has been a default behavior for us for a long time. Basically, we will always include ACK frames in a burst if there are new packets to ACK.
This is probably fine overall, and it is tolerant of peer behavior: the peer can get away with basically never sending anything ACK-eliciting.
However, it is not without its issues. In particular, for DSR it means that we write an excessive number of pure ACK packets, since they cannot be coalesced with a DSR burst.
Make this behavior optional, such that we include ACK frames in non-probing schedulers only when we are also writing stream data.
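A minimal sketch of the option, assuming a hypothetical settings flag and helper (names are illustrative, not the actual mvfst API):
```cpp
struct TransportSettings {
  bool opportunisticAcks{true}; // legacy default: always bundle ACKs
};

bool shouldIncludeAckFrame(
    const TransportSettings& settings,
    bool hasAcksToSchedule,
    bool writingStreamData) {
  if (!hasAcksToSchedule) {
    return false;
  }
  // With the option disabled, only piggyback ACKs on packets that also carry
  // stream data, so pure-ACK packets don't break up DSR bursts.
  return settings.opportunisticAcks || writingStreamData;
}
```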
Reviewed By: kvtsoy
Differential Revision: D39479149
fbshipit-source-id: 6e92d45311a3fb23750c93483b94e97dd3fdce26
Summary: Turns out these functions have been returning 0 for bytes written for a while. Nothing was using them (yet), so there wasn't any functional breakage.
Reviewed By: kvtsoy
Differential Revision: D46653336
fbshipit-source-id: 765b3363c1fc0729e8239d1293ddcc4ae4bb9c79
Summary:
As in title. I don't think this was affecting anything but remove it since we are using DSR.
(Note: this ignores all push blocking failures!)
Reviewed By: hanidamlaj
Differential Revision: D46522584
fbshipit-source-id: 34e0ff1f208d00be51963d2081decde5e7acccf7
Summary:
The write loop functions for DSR or non-DSR are segmented today. As such, so are the schedulers. Mirroring this, we also currently store the DSR and non-DSR streams in separate write queues. This makes it impossible to effectively balance between the two without potential priority inversions or starvation.
Combining them into a single queue eliminates this possibility, but is not entirely straightforward.
The main difficulty comes from the schedulers. The `StreamFrameScheduler` for non-DSR data essentially loops over the control stream queue and the normal write queue looking for the next stream to write to a given packet. When the queues are segmented things are nice and easy. When they are combined, we have to deal with the potential that the non-DSR scheduler will hit a stream with only DSR data. Simply bailing isn't quite correct, since it will just cause an empty write loop. To fix that we need to check, after we are finished writing a packet, whether the next scheduled stream only has DSR data. If it does, we need to ensure `hasPendingData()` returns false.
The same needs to be done in reverse for the DSR stream scheduler.
The last major complication is that we need another loop which wraps the two individual write loop functions and calls both until the packet limit is exhausted or there is no more data to write. This handles the case where there are, for example, two active streams with the same incremental priority, one DSR and one not. In that case, each write loop we want to write `packetLimit` packets, flip-flopping between DSR and non-DSR packets.
This kind of round-robining is pathologically bad for DSR, and a future diff will experiment with changing the round-robin behavior such that we write a minimum number of packets per stream before moving on to the next stream.
This change also contains some other refactors, such as eliminating `updateLossStreams` from the stream manager.
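A sketch of that wrapping loop, under the assumption that each inner write function returns the number of packets it wrote (names are illustrative):
```cpp
#include <algorithm>
#include <cstdint>
#include <functional>

// Illustrative outer loop that alternates between the non-DSR and DSR write
// functions until the packet limit is exhausted or neither makes progress.
void writeLoop(
    uint64_t packetLimit,
    const std::function<uint64_t(uint64_t)>& writeNonDsr,
    const std::function<uint64_t(uint64_t)>& writeDsr) {
  while (packetLimit > 0) {
    // Each inner loop writes from the shared priority queue until it reaches
    // a stream whose next data is of the other type, then returns.
    uint64_t nonDsrWritten = writeNonDsr(packetLimit);
    packetLimit -= std::min(packetLimit, nonDsrWritten);
    if (packetLimit == 0) {
      break;
    }
    uint64_t dsrWritten = writeDsr(packetLimit);
    packetLimit -= std::min(packetLimit, dsrWritten);
    if (nonDsrWritten == 0 && dsrWritten == 0) {
      // Neither scheduler made progress: nothing left to write this round.
      break;
    }
  }
}
```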
(Note: this ignores all push blocking failures!)
Reviewed By: kvtsoy
Differential Revision: D46249067
fbshipit-source-id: 56a37c02fef51908c1336266ed40ac6d99bd14d4
Summary:
This has been hardcoded to SRTT/25 for a long time. However, especially when using DSR this might not be the most appropriate, since it can start to get close to real-world SRTTs.
Control it via a knob, and also add the behavior such that setting it to 0 effectively disables the time limit.
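A minimal sketch of how such a knob could gate the loop time limit (the knob name and shape are assumptions; the default divisor of 25 matches the previously hardcoded SRTT/25):
```cpp
#include <chrono>
#include <cstdint>
#include <optional>

// Divisor-style knob: a value of N caps the write loop at SRTT/N, and 0
// disables the time limit entirely.
std::optional<std::chrono::microseconds> writeLoopTimeLimit(
    std::chrono::microseconds srtt,
    uint32_t writeLimitRttFraction = 25) {
  if (writeLimitRttFraction == 0) {
    return std::nullopt; // no time limit on the write loop
  }
  return srtt / writeLimitRttFraction;
}
```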
Reviewed By: jbeshay
Differential Revision: D46084438
fbshipit-source-id: 7dc619fdff1bd5c3f8654c4936200e0340ef94f2
Summary:
When using ACK_FREQUENCY and high reordering thresholds, the normal tricks for eliciting ACKs with PTOs are somewhat defeated.
Thankfully we can get around this by issuing IMMEDIATE_ACKs when probing. The strategy is basically to replace one of the usual probes with an IMMEDIATE_ACK probe.
Also, include ACKs in these probes (and the PING probes), since there's not much reason not to.
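A sketch of the probe selection, with illustrative names (the real probe scheduling code is more involved):
```cpp
#include <cstdint>

enum class ProbeFrame { Ping, ImmediateAck };

// When the peer supports ACK_FREQUENCY, replace one of the PTO probes with an
// IMMEDIATE_ACK so the peer acknowledges right away despite a large ACK delay
// or reordering threshold; the remaining probes stay as PINGs. Both kinds of
// probe also carry an ACK frame if there is anything to acknowledge.
ProbeFrame chooseProbeFrame(bool peerSupportsAckFrequency, uint8_t probeIndex) {
  if (peerSupportsAckFrequency && probeIndex == 0) {
    return ProbeFrame::ImmediateAck;
  }
  return ProbeFrame::Ping;
}
```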
Reviewed By: jbeshay
Differential Revision: D45965703
fbshipit-source-id: 9b98473312a239e17a0b8040f45d197b1a2c8484
Summary:
Adds a prewrite function to packet processors that can be used to set PrewriteRequests that will apply to the next write loop in the QuicSocket.
Currently, the PrewriteRequest only contains a list of cmsgs, but it can be expanded as needed.
This replaces the setAdditionalCmsgs callback that was previously passed down directly to AsyncUDPSocket, allowing multiple processors to affect the additionalCmsgs sent to the AsyncUDPSocket.
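The rough shape of the hook, with assumed type names and a simplified cmsg representation (the real transport uses folly types):
```cpp
#include <map>
#include <optional>
#include <utility>
#include <vector>

using CmsgKey = std::pair<int, int>; // (cmsg_level, cmsg_type)

struct PrewriteRequest {
  std::map<CmsgKey, int> cmsgs;
};

class PacketProcessor {
 public:
  virtual ~PacketProcessor() = default;
  // Called once before each write loop; a processor may return a request.
  virtual std::optional<PrewriteRequest> prewrite() { return std::nullopt; }
};

// The transport merges requests from all processors before the write loop,
// so multiple processors can influence the cmsgs handed to the UDP socket.
PrewriteRequest mergePrewriteRequests(
    const std::vector<PacketProcessor*>& processors) {
  PrewriteRequest merged;
  for (auto* processor : processors) {
    if (auto request = processor->prewrite()) {
      for (const auto& [key, value] : request->cmsgs) {
        merged.cmsgs[key] = value; // later processors win on conflicts
      }
    }
  }
  return merged;
}
```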
Reviewed By: bschlinker
Differential Revision: D44233700
fbshipit-source-id: 799357ee5df89b2a526611bd83d0b4a9a13b71d5
Summary:
This diff adds an OnPacketDestroyed PacketProcessor callback function to be called from an OutstandingPacket destructor. To enable this, a few things had to be done:
1. Inject a generic std::function lambda in `OutstandingPacket` as a PacketProcessor callback which is called on OutstandingPacket wrapper class destruction.
2. Explicitly null the OnPacketDestroyed callback on move construction for "moved from" objects
3. Detect if the move is a "replace" (like during `std::deque::erase`) and execute callback before move.
4. Unit-test
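A sketch of how the callback might be wired in (the hook name comes from the diff description; the surrounding types and helper are assumptions):
```cpp
#include <cstdint>
#include <functional>

// Stand-in for the metadata handed to the callback.
struct OutstandingPacketMetadata {
  uint64_t packetNum{0};
};

class PacketProcessor {
 public:
  virtual ~PacketProcessor() = default;
  virtual void onPacketDestroyed(const OutstandingPacketMetadata& metadata) = 0;
};

// At send time the transport would attach something like this to the packet
// wrapper; the wrapper invokes it from its destructor (and nulls it on move
// so a moved-from shell cannot fire a duplicate event).
std::function<void(const OutstandingPacketMetadata&)> makeDestroyCallback(
    PacketProcessor& processor) {
  return [&processor](const OutstandingPacketMetadata& metadata) {
    processor.onPacketDestroyed(metadata);
  };
}
```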
Reviewed By: bschlinker
Differential Revision: D43896147
fbshipit-source-id: 1b64a9d66ae628d83b6edb65d1bad54b27db7964
Summary:
To get reliable packet destruction events, created an `OutstandingPacketWrapper` that wraps the current `OutstandingPacket` class, so callback functions can be added to the wrapper instead of the underlying object itself.
#### Why do we need this wrapper?
`std::deque::erase` does not guarantee that the destructors of the erased objects themselves will be called (only that the number of destructions equals the number of objects erased). This is a real problem, as packet destruction events are then no longer reliable (the OutstandingPacket object is wrong!). The wrapper class handles this condition by detecting it in the move assignment operator (called during erase) and invoking the appropriate packet destruction callback before packets are moved. If we did the same fix in the old OutstandingPacket, we would have to make sure the callback is called before all the fields of OutstandingPacket are moved - this is not scalable. Hence a wrapper holding the underlying object and a destruction callback function.
I also disabled copy construction for OutstandingPacket (otherwise we would get duplicate OnPacketDestroyed callbacks and could not track packets reliably). Removing packet copies also improves performance. Some code changes (mostly in tests) are in service of this particular change.
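A self-contained sketch of the problem and the fix, with illustrative names: the destruction callback fires from the wrapper's move assignment when a slot is overwritten during erase, not from whichever slot the deque happens to destroy.
```cpp
#include <deque>
#include <functional>
#include <iostream>
#include <utility>

class OutstandingPacketWrapper {
 public:
  OutstandingPacketWrapper(int packetNum, std::function<void(int)> onDestroy)
      : packetNum_(packetNum), onDestroy_(std::move(onDestroy)) {}

  OutstandingPacketWrapper(OutstandingPacketWrapper&& other) noexcept
      : packetNum_(other.packetNum_),
        onDestroy_(std::exchange(other.onDestroy_, nullptr)) {}

  OutstandingPacketWrapper& operator=(OutstandingPacketWrapper&& other) noexcept {
    if (this != &other) {
      // The packet currently held here is being replaced: report it destroyed.
      if (onDestroy_) {
        onDestroy_(packetNum_);
      }
      packetNum_ = other.packetNum_;
      onDestroy_ = std::exchange(other.onDestroy_, nullptr);
    }
    return *this;
  }

  ~OutstandingPacketWrapper() {
    if (onDestroy_) {
      onDestroy_(packetNum_);
    }
  }

  OutstandingPacketWrapper(const OutstandingPacketWrapper&) = delete;
  OutstandingPacketWrapper& operator=(const OutstandingPacketWrapper&) = delete;

 private:
  int packetNum_;
  std::function<void(int)> onDestroy_;
};

int main() {
  auto report = [](int pn) { std::cout << "destroyed packet " << pn << "\n"; };
  std::deque<OutstandingPacketWrapper> packets;
  packets.emplace_back(1, report);
  packets.emplace_back(2, report);
  packets.emplace_back(3, report);
  // Erasing the middle element prints "destroyed packet 2" from the move
  // assignment, not from the destructor of the slot the deque actually frees.
  packets.erase(packets.begin() + 1);
  return 0;
}
```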
Reviewed By: bschlinker
Differential Revision: D43896148
fbshipit-source-id: c295d3c4dba2368aa66f06df5fc82b473a03fb4d
Summary: - .reset() is probably fractionally less costly than the assignment operator?
Reviewed By: sharmafb
Differential Revision: D43579041
fbshipit-source-id: 4838b6c21e94197782cf56866950be1dbf65b106
Summary:
We don't need to carry these states after the handshake is confirmed, so make them pointers instead. This will facilitate adding a structure to the AckState for tracking duplicate packets.
(Note: this ignores all push blocking failures!)
Reviewed By: hanidamlaj
Differential Revision: D41626895
fbshipit-source-id: d8ac960b3672b9bb9adaaececa53a1203ec801e0
Summary:
Track the total (cumulative) amount of time that the connection has been application limited and store this information in `OutstandingPacketMetadata`. If this value is the same for two `OutstandingPacketMetadata` then we know that the transport did not become application limited between when those two packets were sent (or, less likely, the transport was application limited for less than one microsecond given the microsecond resolution of the timestamp).
We store the amount of time spent application limited instead of a count of the number of application limited events because the implications of being application limited are time dependent.
Tests show that we need to be able to inject a mockable clock. That's been an issue for some time; will work on it in a subsequent diff.
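A minimal illustration of the comparison this enables (the field name is an assumption):
```cpp
#include <chrono>

// Each packet's metadata carries the cumulative app-limited duration at send
// time.
struct PacketMetadata {
  std::chrono::microseconds totalAppLimitedTime{0};
};

// True if the transport became application limited at some point between the
// two send times (ignoring intervals below the microsecond resolution).
bool appLimitedBetween(
    const PacketMetadata& earlier, const PacketMetadata& later) {
  return later.totalAppLimitedTime != earlier.totalAppLimitedTime;
}
```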
Differential Revision: D41714879
fbshipit-source-id: 9fd4fe321d85639dc9fb5c2cd51713c481cbeb22
Summary:
Check for GSO support once before the first write, cache the value and use it throughout the conn lifetime.
If GSO is not supported, ensure we use the chained memory write path.
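A minimal sketch of the caching, assuming a hypothetical probe helper rather than the real folly socket API:
```cpp
#include <optional>

class ConnectionWriter {
 public:
  bool gsoSupported() {
    if (!gsoSupported_.has_value()) {
      // Probe once before the first write, then reuse the cached answer for
      // the rest of the connection's lifetime.
      gsoSupported_ = probeSocketForGso();
    }
    return *gsoSupported_;
  }

 private:
  // Placeholder for the actual socket capability check.
  bool probeSocketForGso() { return false; }

  std::optional<bool> gsoSupported_;
};
```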
Reviewed By: mjoras
Differential Revision: D40240513
fbshipit-source-id: b699b4633246f3c15d2be7b39580686e44c2aab3
Summary: Add the new IMMEDIATE_ACK frame from the ack frequency draft.
Reviewed By: mjoras
Differential Revision: D38523438
fbshipit-source-id: 79f4a26160ccf4333fb79897ab4ace2ed262fa01
Summary: Upgrading glog from 0.4.0 to 0.5.0 broke the windows build for some time. This change skips calling LOG_EVERY_N for Windows to restore the build. It is a stop-gap measure until logging is migrated to folly XLOG.
Reviewed By: kvtsoy
Differential Revision: D38371427
fbshipit-source-id: 9711607a348f0473e3e922d7f627217b3948c45d
Summary: PacketProcessors process the packets that we send and their corresponding ack and loss events. We store a vector of packet processors inside `QuicConnectionStateBase` and we call `onPacketSent` and `onPacketAck` whenever their corresponding functions in `CongestionController` class are called. The plan is to make `CongestionController` class extend `PacketProcessor`.
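A rough sketch of that fan-out (type and member names are simplified assumptions):
```cpp
#include <memory>
#include <vector>

struct OutstandingPacket {};
struct AckEvent {};

class PacketProcessor {
 public:
  virtual ~PacketProcessor() = default;
  virtual void onPacketSent(const OutstandingPacket& packet) = 0;
  virtual void onPacketAck(const AckEvent& ackEvent) = 0;
};

// The connection state owns a list of processors and notifies each one
// alongside the corresponding congestion controller callbacks.
struct QuicConnectionStateBase {
  std::vector<std::unique_ptr<PacketProcessor>> packetProcessors;

  void notifyPacketSent(const OutstandingPacket& packet) {
    for (auto& processor : packetProcessors) {
      processor->onPacketSent(packet);
    }
  }

  void notifyPacketAck(const AckEvent& ackEvent) {
    for (auto& processor : packetProcessors) {
      processor->onPacketAck(ackEvent);
    }
  }
};
```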
Reviewed By: bschlinker
Differential Revision: D36404668
fbshipit-source-id: 52d63a6a74bfa2031131d31047da25a5e2be12aa
Summary: This is a pretty obvious thing to do. There's not really any reason to have the data and metadata separately since we don't need to reallocate.
Reviewed By: jbeshay
Differential Revision: D36237370
fbshipit-source-id: 093ad7fb2c54b596ea5cc327ffcc24de1748d362
Summary:
- PTOs should not be subject to congestion control limits
- Quickly recover from PTOs being writableBytesLimited by calling onPTOAlarm() as soon as we become unblocked
Reviewed By: mjoras
Differential Revision: D35480409
fbshipit-source-id: 51500db6fff17a7badefea8bda7f63141e97f746
Summary: - as title; ninja landing this because jf land doesn't work
Reviewed By: mjoras
Differential Revision: D35446781
fbshipit-source-id: 73cfbd19da3922a9b3471a95006b817ea495768b
Summary: Report all bytes written in observer packets written events.
Reviewed By: jbeshay
Differential Revision: D33086404
fbshipit-source-id: 1456cf23a420abd025f047f190cb2b8a9868826e
Summary: Add more instrumentation for the DSR egressed counts and bytes.
Reviewed By: mjoras, shodoco
Differential Revision: D34192441
fbshipit-source-id: 2b3567b7bec0748ab99ad042742d7823a52da568
Summary: - log the number of bytes the server sent during the handshake for insight.
Reviewed By: mjoras
Differential Revision: D33069800
fbshipit-source-id: e7e8f25183ee30de99e2971bae3c6b93882f6e63
Summary:
This fixes a bug that could prevent us from writing data if we ran out of connection flow control while we had lost data.
The last attempt missed a mistake in the scheduling of sequential priority streams.
Reviewed By: kvtsoy
Differential Revision: D33030784
fbshipit-source-id: e1b82234346a604875a9ffe9ab7bc5fb398450ed
Summary: This is causing empty write loops for some reason.
Reviewed By: jbeshay, lnicco
Differential Revision: D32990291
fbshipit-source-id: 2a183d591de54c7fe0ca54aea828a697a4cd9af0
Summary: This is an edge case we weren't handling properly. This can lead to situations where we run out of conn flow control, the peer hasn't received all data, and then we never write the data to trigger a flow control update.
Reviewed By: afrind
Differential Revision: D32887140
fbshipit-source-id: df5dfad8e0775ef43e5ca6ab98b8ca6b5987ce31
Summary: Otherwise we can end up in a situation where the non-DSR scheduler was limited by the loop time while the DSR scheduler is not, leading to an inconsistency.
Reviewed By: lnicco
Differential Revision: D32570027
fbshipit-source-id: f43d2517589c22bac0f2bb76626cc55c2a21fa5d
Summary: This doesn't belong in the generic state. Untangling it is a little difficult, but I think this solution is cleaner than having it in the generic state.
Reviewed By: JunqiWang
Differential Revision: D29856391
fbshipit-source-id: 1042109ed29cd1d20d139e08548d187b469c8398
Summary: As in title, this doesn't need to be in the base state.
Reviewed By: JunqiWang
Differential Revision: D29855140
fbshipit-source-id: 8d3a4b12fd6b93b2277020d56862915e084f1c05
Summary:
These are either no longer relevant, are unlikely to be done, or are speculative enough that they don't deserve code space.
The hope here is to make our search for TODOs higher signal.
Reviewed By: lnicco
Differential Revision: D29769792
fbshipit-source-id: 7cfa62cdc15e72d8b7b0cd5dbb5913ea3ca3dc5a
Summary: We are now adding them to the outstanding packet list, for no good reason.
Reviewed By: mjoras
Differential Revision: D29469128
fbshipit-source-id: b279774b6ef5bbb436b3c9d7bcdad39eea8f1079