Unsound arena implementation

⚓ rust    📅 2025-06-10    👤 surdeus    👁️ 1      

surdeus

Hello,

I am working on a small arena implementation with reusable memory. Miri is not happy with my implementation, so I'm here asking for help 🙂

The arena behaves like a bump arena, but when an allocation is dropped, the write index for the backing buffer moves back so that the freed memory can be reused. In principle this works, but Miri complains.

Miri is happy with this test:

#[test]
fn can_allocate_deallocated_memory() {
    let mut backing_storage = AlignedStruct([0u8; 1024]);
    let arena = StackArena::from_slice(&mut backing_storage.0);

    let a = arena.allocate::<u8>(1);
    drop(a);

    let a = arena.allocate::<u8>(1);
    drop(a);
}

but not with this one:

#[test]
fn passing_allocation_into_fn() {
    fn takes_allocation<'a, 'b, 'arena>(arena: &'b StackArena<'arena>, t: Allocation<'a, u8>) -> Allocation<'b, u8> where 'arena: 'b {
        drop(t);
        arena.allocate::<u8>(1)
    }

    let mut backing_storage = AlignedStruct([0u8; 1024]);
    let arena = StackArena::from_slice(&mut backing_storage.0);
    
    let a = arena.allocate::<u8>(1);
    let _b = takes_allocation(&arena, a);
}

In both tests, the arena creates both allocations using the same memory.

I've tried all sorts of lifetime combinations on takes_allocation(), but I haven't been able to make Miri happy.

The code itself compiles and the tests pass when run normally (without Miri).
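If it helps, here is what I believe is the same pattern stripped down to a few lines, with no arena involved (names are made up): a reference stored inside a by-value argument gets a Stacked Borrows protector for the whole call, so even after drop(h) it stays protected, and touching the same byte through a raw pointer inside the call is what Miri objects to. Outside Miri this runs and passes:

```rust
struct Holder<'a> {
    slot: &'a mut u8,
}

// `h` is passed by value; under Stacked Borrows the reference inside it is
// protected for the entire duration of this call, even after `drop(h)`.
fn reuse(h: Holder<'_>, raw: *mut u8) -> u8 {
    let _ = *h.slot; // use the reference once, like an Allocation would
    drop(h);
    unsafe {
        // This write pops the still-protected tag of `h.slot` off the
        // borrow stack; plain execution is fine, Miri flags it.
        *raw = 7;
        *raw
    }
}

fn main() {
    let mut byte = 0u8;
    let raw: *mut u8 = &mut byte;
    // Re-borrow through the raw pointer, like Allocation borrows the
    // arena's backing storage.
    let h = Holder { slot: unsafe { &mut *raw } };
    assert_eq!(reuse(h, raw), 7);
}
```

As far as I can tell, cargo miri run reports the same "strongly protected" error on the write through raw, while cargo run succeeds.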

This is what Miri says:

running 1 test
test arena::stack_arena::tests::passing_allocation_into_fn ... error: Undefined Behavior: not granting access to tag <423014> because that would remove [SharedReadOnly for <423770>] which is strongly protected
   --> /home/andy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/raw.rs:192:9
    |
192 |         &mut *ptr::slice_from_raw_parts_mut(data, len)
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not granting access to tag <423014> because that would remove [SharedReadOnly for <423770>] which is strongly protected
    |
    = help: this indicates a potential bug in the program: it performed an invalid operation, but the Stacked Borrows rules it violated are still experimental
    = help: see https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md for further information
help: <423014> was created by a SharedReadWrite retag at offsets [0x0..0x400]
   --> crates/airten/src/arena/stack_arena.rs:72:22
    |
72  |             storage: slice.as_mut_ptr(),
    |                      ^^^^^^^^^^^^^^^^^^
help: <423770> is this argument
   --> crates/airten/src/arena/stack_arena.rs:545:76
    |
545 |         fn takes_allocation<'a, 'b, 'arena>(arena: &'b StackArena<'arena>, t: Allocation<'a, u8>) -> Allocation<'b, u8> where 'arena: 'b {
    |                                                                            ^
    = note: BACKTRACE (of the first span) on thread `arena::stack_ar`:
    = note: inside `std::slice::from_raw_parts_mut::<'_, u8>` at /home/andy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/raw.rs:192:9: 192:55
note: inside `arena::stack_arena::tests::passing_allocation_into_fn::takes_allocation::<'_, '_>`
   --> crates/airten/src/arena/stack_arena.rs:547:13
    |
547 |             arena.allocate::<u8>(1)
    |             ^^^^^^^^^^^^^^^^^^^^^^^
note: inside `arena::stack_arena::tests::passing_allocation_into_fn`
   --> crates/airten/src/arena/stack_arena.rs:554:18
    |
554 |         let _b = takes_allocation(&arena, a);
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure
   --> crates/airten/src/arena/stack_arena.rs:544:36
    |
543 |     #[test]
    |     ------- in this procedural macro expansion
544 |     fn passing_allocation_into_fn() {

The relevant implementation of the arena is as follows:

pub(crate) struct StackArena<'a> {
    storage: *mut u8,
    size: usize,
    context: RefCell<Context>,
    _phantom: PhantomData<&'a mut ()>,
}

impl<'a> StackArena<'a> {
    /// Creates a new stack arena from a mutable slice of bytes.
    ///
    /// The slice must be aligned to 16 bytes.
    pub fn from_slice(slice: &'a mut [u8]) -> Self {
        assert!(
            slice.as_mut_ptr().align_offset(16) == 0,
            "Backing storage must be aligned to 16 bytes"
        );

        StackArena {
            storage: slice.as_mut_ptr(),
            size: slice.len(),
            context: RefCell::new(Context {
                item_count: 0,
                write_index: 0,
                allocation_map: 0,
                allocation_size: [0; usize::BITS as usize],
                maximum_memory_usage: 0,
            }),
            _phantom: PhantomData,
        }
    }

    #[track_caller]
    fn create_allocation<T: Allocatable>(&self, count: usize) -> (&'a mut [T], u8) {
        let allocation_size;
        let mut context = self.context.borrow_mut();

        assert!(count > 0, "Cannot allocate zero items");
        assert!(
            context.item_count < usize::BITS as usize,
            "Cannot allocate item: stack arena can only hold up to {} items",
            usize::BITS,
        );

        // SAFETY:
        // - The pointer is guaranteed to be aligned to the alignment of T
        // - The slice is guaranteed to be valid for the entire lifetime of the arena
        // - The arena is guaranteed to have enough space for the allocation
        // - The slice is guaranteed to hold valid values of type T
        let slice = unsafe {
            let ptr = self.storage.add(context.write_index);
            let offset = ptr.align_offset(core::mem::align_of::<T>());
            allocation_size = (core::mem::size_of::<T>() * count) + offset;

            assert!(
                self.size - context.write_index >= allocation_size,
                "Allocation does not fit in stack arena: {} bytes requested, {} available, arena size must be {}",
                allocation_size,
                self.size - context.write_index,
                context.write_index + allocation_size
            );

            let ptr = ptr.add(offset);
            core::slice::from_raw_parts_mut(ptr as *mut T, count)
        };

        // Mark the allocation as used and save its size
        let allocation_id = context.item_count;
        context.allocation_map |= 1 << allocation_id;
        context.allocation_size[allocation_id] = allocation_size;

        // Prepare for the next allocation
        context.item_count += 1;
        context.write_index += allocation_size;

        // Keep track of the maximum memory usage
        if context.write_index > context.maximum_memory_usage {
            context.maximum_memory_usage = context.write_index;
        }

        (slice, allocation_id as u8)
    }

    #[track_caller]
    pub fn allocate<T: Allocatable>(&'a self, count: usize) -> Allocation<'a, T> {
        let (slice, allocation_id) = self.create_allocation::<T>(count);
        slice.fill(T::default());
        Allocation {
            memory: slice,
            context: Some(AllocationContext::Stack(StackAllocationContext {
                id: allocation_id,
                context: &self.context,
            })),
        }
    }
}

// This is called from the Drop impl of an `Allocation` object that wraps the allocation
pub(super) fn drop_impl(stack_context: &StackAllocationContext) {
    let mut context = stack_context.context.borrow_mut();
    debug_assert!(context.item_count > 0);

    // Mark the allocation as unused
    context.allocation_map &= !(1 << stack_context.id);

    let free_slots = context.allocation_map.leading_zeros() as usize;
    let last_occupied_slot = usize::BITS as usize - free_slots;

    // Calculate the size of the allocations to free
    let mut size_to_free = 0;
    for i in last_occupied_slot..context.item_count {
        size_to_free += context.allocation_size[i];
        context.allocation_size[i] = 0;
    }

    // Clear all the slots that were freed
    let freed_slots = context.item_count - last_occupied_slot;
    context.item_count -= freed_slots;
    context.write_index -= size_to_free;
}
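For reference, the bookkeeping in drop_impl can be modelled on its own without any unsafe code (standalone sketch with made-up signatures, using a u64 map instead of usize): only a trailing run of freed slots is reclaimed, so freeing a middle allocation keeps its bytes reserved until everything above it is freed too.

```rust
// Standalone model of drop_impl's bookkeeping (hypothetical signature):
// clear the slot's bit, then reclaim every slot above the highest one
// that is still occupied.
fn reclaim(
    map: &mut u64,
    sizes: &mut [usize; 64],
    item_count: &mut usize,
    write_index: &mut usize,
    id: usize,
) {
    *map &= !(1 << id);
    // Index just past the highest occupied slot.
    let last_occupied = 64 - map.leading_zeros() as usize;
    let mut freed = 0;
    for i in last_occupied..*item_count {
        freed += sizes[i];
        sizes[i] = 0;
    }
    *item_count = last_occupied;
    *write_index -= freed;
}

fn main() {
    // Three allocations of 4, 8, and 16 bytes.
    let (mut map, mut sizes) = (0b111u64, [0usize; 64]);
    sizes[..3].copy_from_slice(&[4, 8, 16]);
    let (mut count, mut top) = (3usize, 28usize);

    // Freeing the middle allocation reclaims nothing yet...
    reclaim(&mut map, &mut sizes, &mut count, &mut top, 1);
    assert_eq!(top, 28);

    // ...but freeing the last one reclaims both trailing slots.
    reclaim(&mut map, &mut sizes, &mut count, &mut top, 2);
    assert_eq!((top, count), (4, 1));
}
```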

I've read about using ManuallyDrop, and I've seen several arena implementations use it, but I'm not yet sure how it would help with this particular problem. If anyone can point me in the right direction, I'd be very grateful 🙂
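One direction I've been considering (just a sketch with made-up names, not my actual Allocation type): store a raw pointer plus a length in the handle instead of a &mut [T], so a by-value argument carries no reference for Miri to protect; the &mut [T] only exists briefly inside an accessor.

```rust
use core::marker::PhantomData;
use core::ptr::NonNull;

// Hypothetical protector-free handle (simplified: no arena bookkeeping).
// Raw pointers are not retagged, so passing this by value should give
// Miri nothing to protect for the duration of the call.
struct RawAllocation<'arena, T> {
    ptr: NonNull<T>,
    len: usize,
    // Ties the handle to the arena's lifetime without storing a reference.
    _arena: PhantomData<&'arena mut [T]>,
}

impl<'arena, T> RawAllocation<'arena, T> {
    fn as_slice_mut(&mut self) -> &mut [T] {
        // SAFETY (assumed): ptr/len describe memory that is live and
        // exclusively ours for 'arena.
        unsafe { core::slice::from_raw_parts_mut(self.ptr.as_ptr(), self.len) }
    }
}

fn main() {
    let mut storage = [0u8; 4];
    let mut a = RawAllocation {
        ptr: NonNull::new(storage.as_mut_ptr()).unwrap(),
        len: storage.len(),
        _arena: PhantomData,
    };
    a.as_slice_mut().fill(9);
    assert_eq!(storage, [9u8; 4]);
}
```

I haven't wired this into my real Allocation/drop_impl machinery yet; it's just the shape of the idea.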

12 posts - 4 participants
