It's a fairly common pattern:
One part of the system produces work, and that work is then consumed by worker threads.
Or even more simply, you could just spin around doing produce, consume, produce, consume in one thread.
The classic worker thread pool:
A thread blocks in select() or one of its friends, and dispatches each piece of ready work to the pool:

for (SelectionKey key : selector.selectedKeys()) {
    executor.execute(() -> handleSelected(key.channel(), key.readyOps()));
}

Executor implementations often function as queues.
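Filled out, the dispatch loop might look like this minimal sketch (the selector wiring, pool size, and handleSelected body are assumptions for illustration):

import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One producer thread blocks selecting; pool threads consume the work.
class SelectorProducer implements Runnable {
    private final Selector selector;
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    SelectorProducer(Selector selector) {
        this.selector = selector;
    }

    @Override
    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select(); // block until some channels are ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // Queue the work; some pool thread consumes it later.
                    executor.execute(() -> handleSelected(key.channel(), key.readyOps()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void handleSelected(SelectableChannel channel, int readyOps) {
        // Application-specific I/O handling would go here.
    }
}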
Every piece of incoming work is guaranteed to cause a context switch, along with other overhead involved in queueing the consumer. This adds latency to processing the work, and may also require the work context to be loaded into a different CPU's cache.
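A crude way to observe that hand-off cost, if you want to measure it yourself (a sketch; the single-thread pool and iteration count are arbitrary choices, and the no-op task makes the dispatch overhead dominate):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public class HandoffCost {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        int rounds = 100_000;

        // Run a no-op task inline on this thread...
        long inline = time(rounds, Runnable::run);
        // ...versus dispatching it to another thread and waiting for it.
        long dispatched = time(rounds, task -> {
            try {
                Future<?> f = pool.submit(task);
                f.get(); // wait for the hand-off and the run
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        System.out.printf("inline: %d ns/op, dispatched: %d ns/op%n",
                inline / rounds, dispatched / rounds);
        pool.shutdown();
    }

    static long time(int rounds, Consumer<Runnable> how) {
        Runnable noop = () -> { };
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++)
            how.accept(noop);
        return System.nanoTime() - start;
    }
}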
Using more CPUs to handle the work can actually make it slower. Jetty calls this Parallel Slowdown: Jetty-9 was 15% slower than Jetty-8 after it was made fully async-capable.
Understanding and dealing with this effect is what Mechanical Sympathy is about: "how to code sympathetically to and measure the underlying stack/platform so good performance can be extracted".
A general problem with queues: you want to apply back-pressure to the producer rather than just letting the queue grow without bound, but if you simply cap the queue size, everything eventually ends up blocking again.
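For example, java.util.concurrent can express a bounded queue whose rejection policy pushes work back onto the producer; a minimal sketch (pool size and queue capacity are arbitrary choices here), which also shows how the cap turns into producer blocking:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BackPressurePool {
    // When the bounded queue fills up, CallerRunsPolicy makes the producer
    // run the task itself, so production stalls instead of the queue growing.
    static final ExecutorService EXECUTOR = new ThreadPoolExecutor(
            4, 4,                          // fixed pool of 4 consumer threads
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(64),  // at most 64 queued jobs
            new ThreadPoolExecutor.CallerRunsPolicy());
}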
Instead of queueing the work, the producing thread can dispatch a replacement producer and then consume the work it just produced itself:

SelectionKey key = selectedKeys.hasNext() ? selectedKeys.next() : null;
if (key == null)
    return;
executor.execute(this); // hand production to another thread; may take some time to dispatch
do {
    handleSelected(key.channel(), key.readyOps());
    key = selectedKeys.hasNext() ? selectedKeys.next() : null;
} while (key != null);
If the work is processed quickly compared to thread dispatch, this behaves just like the single-thread produce-consume loop. Note that the thread-dispatch delay may include the time it takes the dispatched job to get through the executor's queue.
The additional threads dispatched to take over production may immediately exit if there is nothing left to do by the time they start.
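Putting it together, a fuller self-contained sketch of the pattern, with the selector abstracted into a generic concurrent work queue (the class name, the queue, and the pool size are illustrative assumptions):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Each thread produces one task, dispatches a replacement producer,
// then consumes its own task (and keeps consuming while work remains).
class ExecuteProduceConsume implements Runnable {
    private final Queue<Runnable> work;
    private final Executor executor;

    ExecuteProduceConsume(Queue<Runnable> work, Executor executor) {
        this.work = work;
        this.executor = executor;
    }

    @Override
    public void run() {
        Runnable task = work.poll();   // produce
        if (task == null)
            return;                    // nothing produced: exit immediately
        executor.execute(this);        // replacement producer; may dispatch slowly
        do {
            task.run();                // consume what this thread produced
            task = work.poll();        // keep going while dispatch lags behind
        } while (task != null);
    }

    public static void main(String[] args) throws InterruptedException {
        Queue<Runnable> work = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 10; i++) {
            int n = i;
            work.add(() -> System.out.println("job " + n + " on " + Thread.currentThread().getName()));
        }
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(new ExecuteProduceConsume(work, pool));
        Thread.sleep(500); // demo only: let the chain drain before shutting down
        pool.shutdown();
    }
}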