Also known as Smart Batching. I haven't found much in the academic literature on this topic (perhaps it goes by another name?), but natural batching can be a highly effective mechanism for increasing a system's throughput while keeping average latency low. The way it works is deceptively simple.
Let's consider the following scenario: one thread produces data and another thread consumes it. The rate at which the producer adds data is random (for example, it is driven by an external network process), and the consumer may take a varying amount of time to process each item. There are three mechanisms we can deploy for the consumer to accept data:
- take one item at a time;
- process n items at a time, waiting until either n items are available or a timer expires before processing;
- and natural batching, in which the consumer takes up to n items that are already available, without waiting (see the sketch after this list).
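To make this concrete, here is a minimal sketch of a natural-batching consumer built on a standard `java.util.concurrent.BlockingQueue`. Names such as `MAX_BATCH` and `process()` are illustrative, not from any particular library; the key idea is to block only when the queue is empty, then drain whatever has already accumulated without waiting for more:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Illustrative natural-batching consumer: MAX_BATCH and process()
// are made-up names for this sketch.
final class NaturalBatchingConsumer implements Runnable {
    private static final int MAX_BATCH = 100;

    private final BlockingQueue<String> queue;
    private final List<String> batch = new ArrayList<>(MAX_BATCH);

    NaturalBatchingConsumer(final BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Block only when the queue is empty...
                batch.add(queue.take());
                // ...then take whatever else is already there, up to the
                // cap, without waiting for further items to arrive.
                queue.drainTo(batch, MAX_BATCH - 1);
                process(batch);
                batch.clear();
            }
        } catch (final InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void process(final List<String> items) {
        // Under light load this often sees a single item; under load,
        // batches grow toward MAX_BATCH and amortise per-batch costs.
        System.out.println("processing " + items.size() + " item(s)");
    }
}
```

Note that the `take()`/`drainTo()` pair means the consumer never spins and never waits on a timer: it handles one item immediately when traffic is light, and amortises per-batch costs over larger batches when traffic is heavy.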
What tends to happen with natural batching is that during normal steady-state processing, batches are small, sometimes only a single item. As the producer bursts, or the consumer slows down, batch sizes elastically grow up to the maximum number of items. Once conditions return to normal, batch sizes drop back down. So we get the best of both worlds: no waits, and thus low latency, while the system runs smoothly, and larger batches to increase throughput as conditions demand.
The ring buffer implementations in Agrona, Subscriptions in Aeron, the LMAX Disruptor, and other libraries offer support for natural batching.
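For example, Agrona's `RingBuffer` interface has a `read(MessageHandler, int messageCountLimit)` overload that drains up to the limit of whatever messages are currently available and returns immediately. A minimal sketch, where the capacity and the limit of 100 are arbitrary illustrative choices:

```java
import java.nio.ByteBuffer;

import org.agrona.concurrent.UnsafeBuffer;
import org.agrona.concurrent.ringbuffer.ManyToOneRingBuffer;
import org.agrona.concurrent.ringbuffer.RingBuffer;
import org.agrona.concurrent.ringbuffer.RingBufferDescriptor;

public final class AgronaNaturalBatchExample {
    public static void main(final String[] args) {
        // The data capacity must be a power of two, with extra space
        // for the ring buffer's trailer. 64 KiB is arbitrary here.
        final int capacity = 64 * 1024;
        final RingBuffer ringBuffer = new ManyToOneRingBuffer(new UnsafeBuffer(
            ByteBuffer.allocateDirect(capacity + RingBufferDescriptor.TRAILER_LENGTH)));

        // read() drains whatever messages are currently available, up to
        // the limit, and returns immediately: one message, a full batch,
        // or none at all.
        final int messagesRead = ringBuffer.read(
            (msgTypeId, buffer, index, length) -> {
                // Process a single message from the batch here.
            },
            100);

        System.out.println("drained " + messagesRead + " message(s)");
    }
}
```

Aeron's `Subscription.poll(FragmentHandler, int fragmentLimit)` follows the same pattern: it returns immediately with however many fragments are available, up to the limit, rather than waiting for the limit to be reached.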
Note that back pressure must be taken into account: when the buffer fills, the producer needs a strategy, such as blocking, dropping, or signalling upstream.