Pooled chunk storage for refcounted byte slices?

surdeus

I have a server application that needs to serialize messages into byte slices of different lengths and periodically retry sending those messages until they either expire or are acknowledged. I've identified this area of my code as a performance bottleneck and am looking for ways to do better than a separate allocation from the global allocator for each message slice.

These messages are serialized once and then enqueued to be sent to multiple clients, though not every client gets every message. Ideally, each client's representation on the server would hold a VecDeque of Rcs, or some other kind of refcounted handle, to the serialized messages (each a [u8] of variable length) waiting to be (re)sent.
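
To make that concrete, the per-client shape I have in mind is roughly the following (names are just placeholders; MsgHandle would ideally be something pooled rather than a plain Rc<[u8]>):

```rust
use std::collections::VecDeque;
use std::rc::Rc;
use std::time::Instant;

// Placeholder for the pooled, refcounted handle described below;
// a plain Rc<[u8]> works but allocates per message.
type MsgHandle = Rc<[u8]>;

// One queued (re)send for a particular client.
struct PendingMessage {
    payload: MsgHandle,
    expires_at: Instant,
}

// Each client's representation on the server keeps its own outbox.
struct ClientState {
    outbox: VecDeque<PendingMessage>,
}
```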

My first-pass sketch of a data structure for this would be (rough code sketch after the list):

  • Create a big [u8] page/chunk to push messages into as subslices
  • For each pushed message, provide an Rc or Rc-like handle to that subslice of the chunk
    • This refcount is shared for the entire chunk, not just the subslice
    • When all of the references are dropped (no subslice has an active reference anymore), return that chunk to a free list
  • If there isn't room in the currently active chunk, either pull one off the free list or allocate a new one
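
Roughly what I'm imagining, as an untested sketch in safe Rust (all names here are made up): a Pool hands out MsgHandle values that keep an Rc to their Chunk, and the Chunk's Drop impl returns its buffer to the pool's free list instead of freeing it.

```rust
use std::cell::{Ref, RefCell};
use std::rc::{Rc, Weak};

/// One chunk of pooled storage. When the last `Rc<Chunk>` (held by handles
/// and/or the pool's `active` slot) is dropped, the buffer goes back to the
/// pool's free list instead of being freed.
struct Chunk {
    buf: RefCell<Vec<u8>>,
    pool: Weak<RefCell<PoolInner>>,
}

impl Drop for Chunk {
    fn drop(&mut self) {
        if let Some(pool) = self.pool.upgrade() {
            let buf = std::mem::take(&mut *self.buf.borrow_mut());
            pool.borrow_mut().free.push(buf);
        }
    }
}

struct PoolInner {
    free: Vec<Vec<u8>>,
    /// Currently active chunk plus the number of bytes already used in it.
    active: Option<(Rc<Chunk>, usize)>,
    chunk_size: usize,
}

pub struct Pool(Rc<RefCell<PoolInner>>);

/// Refcounted view of one message inside a chunk. Cloning bumps the
/// chunk-wide refcount; dropping the last clone lets the chunk recycle
/// itself (once the pool has also moved on to a newer active chunk).
#[derive(Clone)]
pub struct MsgHandle {
    chunk: Rc<Chunk>,
    start: usize,
    len: usize,
}

impl MsgHandle {
    pub fn bytes(&self) -> Ref<'_, [u8]> {
        Ref::map(self.chunk.buf.borrow(), |b| {
            &b[self.start..self.start + self.len]
        })
    }
}

impl Pool {
    pub fn new(chunk_size: usize) -> Self {
        Pool(Rc::new(RefCell::new(PoolInner {
            free: Vec::new(),
            active: None,
            chunk_size,
        })))
    }

    /// Copy `msg` into pooled storage and return a handle to that subslice.
    /// Assumes messages never exceed `chunk_size`.
    pub fn push(&self, msg: &[u8]) -> MsgHandle {
        let mut inner = self.0.borrow_mut();
        assert!(msg.len() <= inner.chunk_size);

        let full = match &inner.active {
            Some((_, used)) => used + msg.len() > inner.chunk_size,
            None => true,
        };
        // Keep the retired chunk alive until the pool borrow is released,
        // so its Drop impl can re-borrow the pool to recycle the buffer.
        let mut retired = None;
        if full {
            let mut buf = inner.free.pop().unwrap_or_default();
            buf.resize(inner.chunk_size, 0);
            let chunk = Rc::new(Chunk {
                buf: RefCell::new(buf),
                pool: Rc::downgrade(&self.0),
            });
            retired = inner.active.replace((chunk, 0));
        }

        let (chunk, used) = inner.active.as_mut().unwrap();
        let start = *used;
        chunk.buf.borrow_mut()[start..start + msg.len()].copy_from_slice(msg);
        *used += msg.len();
        let handle = MsgHandle { chunk: Rc::clone(chunk), start, len: msg.len() };

        drop(inner); // release the pool borrow first...
        drop(retired); // ...then let the retired chunk recycle its buffer
        handle
    }
}
```

One wrinkle I noticed while sketching this: the pool itself holds an Rc to the active chunk, so a chunk can only be recycled after the pool has moved on to a newer one.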

With that in mind, some questions:

  1. Does a crate like this already exist? I'm aware of some bump allocators, but they work differently from what I have in mind.
  2. Could I just do this with Rc? I found this thread about creating subslices. Can an Rc also be given custom drop behavior (e.g. returning the chunk to a free list rather than actually freeing it)? A plain-Rc subslice sketch follows below.
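
To illustrate question 2, a subslice handle over a plain Rc<[u8]> might look like this (no pooling, one allocation per message); what I'm unsure about is whether the drop/recycle behavior can be bolted onto something like this:

```rust
use std::ops::Deref;
use std::rc::Rc;

/// A cheaply cloneable view into part of a shared byte buffer.
/// The whole buffer stays alive as long as any view into it does.
#[derive(Clone)]
struct SubSlice {
    buf: Rc<[u8]>,
    start: usize,
    len: usize,
}

impl Deref for SubSlice {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        &self.buf[self.start..self.start + self.len]
    }
}

fn main() {
    let buf: Rc<[u8]> = Rc::from(&b"hello world"[..]);
    let hello = SubSlice { buf: Rc::clone(&buf), start: 0, len: 5 };
    let world = SubSlice { buf, start: 6, len: 5 };
    assert_eq!(&*hello, &b"hello"[..]);
    assert_eq!(&*world, &b"world"[..]);
}
```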
