
ThreadPool: optional limit for jobs queue (#1741)

For very busy servers, the internal jobs queue where accepted
sockets are enqueued can grow without limit.
This is a problem for two reasons:
 - queueing too much work causes the server to respond with huge latency,
   resulting in repeated timeouts on the clients; it is definitely
   better to reject the connection early, so that the client
   receives the backpressure signal as soon as the queue becomes
   too large (see the sketch after this list)
 - the jobs list can eventually cause an out of memory condition
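
The cap can be pictured as a size check guarding the enqueue path. Below is a minimal sketch of that idea; it is not the library's actual code, and the class and member names are illustrative only:

```cpp
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>

// Illustrative bounded backlog: enqueue() refuses new work instead of
// letting the pending-jobs queue grow without limit. The worker threads
// that pop from jobs_ are omitted for brevity.
class BoundedJobQueue {
public:
  explicit BoundedJobQueue(size_t max_queued) : max_queued_(max_queued) {}

  bool enqueue(std::function<void()> fn) {
    std::lock_guard<std::mutex> lock(mutex_);
    // A limit of 0 means "unlimited", matching the documented default.
    if (max_queued_ > 0 && jobs_.size() >= max_queued_) {
      return false; // reject now, so the client gets backpressure early
    }
    jobs_.push(std::move(fn));
    return true;
  }

private:
  const size_t max_queued_;
  std::queue<std::function<void()>> jobs_;
  std::mutex mutex_;
};
```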
Author: vmaffione
Date: 2023-12-24 14:20:22 +01:00
Committed by: GitHub
Parent: 31cdcc3c3a
Commit: 374d058de7
3 changed files with 118 additions and 10 deletions


@@ -433,6 +433,17 @@ If you want to set the thread count at runtime, there is no convenient way... But
svr.new_task_queue = [] { return new ThreadPool(12); };
```
You can also provide an optional parameter to limit the maximum number
of pending requests, i.e. requests `accept()`ed by the listener but
still waiting to be serviced by worker threads.
```cpp
svr.new_task_queue = [] { return new ThreadPool(/*num_threads=*/12, /*max_queued_requests=*/18); };
```
Default limit is 0 (unlimited). Once the limit is reached, the listener
will shut down the client connection.
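
That shut-down-on-overload behaviour boils down to the accept loop checking the return value of `enqueue()`. Here is a rough sketch of the pattern using plain POSIX sockets; it is not cpp-httplib's actual listener code, and `handle_request` is a hypothetical handler:

```cpp
#include <sys/socket.h>
#include <unistd.h>

void handle_request(int client_fd); // hypothetical handler, defined elsewhere

// TaskQueueLike: anything whose enqueue() returns false once the
// pending-request limit has been reached.
template <typename TaskQueueLike>
void accept_loop(int listen_fd, TaskQueueLike &queue) {
  for (;;) {
    int client_fd = accept(listen_fd, nullptr, nullptr);
    if (client_fd < 0) continue;

    bool queued = queue.enqueue([client_fd] {
      handle_request(client_fd);
      close(client_fd);
    });

    if (!queued) {
      // Queue full: close right away so the client sees backpressure
      // instead of waiting until it times out.
      shutdown(client_fd, SHUT_RDWR);
      close(client_fd);
    }
  }
}
```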
### Override the default thread pool with yours
You can supply your own thread pool implementation according to your need.
@@ -444,8 +455,10 @@ public:
pool_.start_with_thread_count(n);
}
- virtual void enqueue(std::function<void()> fn) override {
-   pool_.enqueue(fn);
+ virtual bool enqueue(std::function<void()> fn) override {
+   /* Return true if the task was actually enqueued, or false
+    * if the caller must drop the corresponding connection. */
+   return pool_.enqueue(fn);
}
virtual void shutdown() override {
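
For reference, the snippet below sketches one possible complete implementation of that interface, using std::thread workers and a bounded backlog. It assumes the `TaskQueue` base class exposes exactly the `enqueue`/`shutdown` virtuals shown in the diff above; the name `BoundedTaskQueue` and its internals are illustrative, not part of the library:

```cpp
#include "httplib.h"

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A self-contained pool honouring the new contract: enqueue() returns
// false when the backlog is full, shutdown() drains and joins workers.
class BoundedTaskQueue : public httplib::TaskQueue {
public:
  BoundedTaskQueue(size_t num_threads, size_t max_queued)
      : max_queued_(max_queued) {
    for (size_t i = 0; i < num_threads; i++) {
      workers_.emplace_back([this] { worker_loop(); });
    }
  }

  bool enqueue(std::function<void()> fn) override {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      // 0 keeps the backlog unbounded, mirroring the documented default.
      if (stop_ || (max_queued_ > 0 && jobs_.size() >= max_queued_)) {
        return false; // caller is expected to drop the connection
      }
      jobs_.push(std::move(fn));
    }
    cond_.notify_one();
    return true;
  }

  void shutdown() override {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      stop_ = true;
    }
    cond_.notify_all();
    for (auto &t : workers_) { t.join(); }
  }

private:
  void worker_loop() {
    for (;;) {
      std::function<void()> job;
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
        if (stop_ && jobs_.empty()) { return; }
        job = std::move(jobs_.front());
        jobs_.pop();
      }
      job(); // run outside the lock
    }
  }

  const size_t max_queued_;
  std::vector<std::thread> workers_;
  std::queue<std::function<void()>> jobs_;
  std::mutex mutex_;
  std::condition_variable cond_;
  bool stop_ = false;
};

// Usage, analogous to the built-in ThreadPool example above:
// svr.new_task_queue = [] { return new BoundedTaskQueue(/*num_threads=*/12, /*max_queued=*/18); };
```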