What's the cost of allocating a vector vs locking a mutex?
⚓ Rust 📅 2025-11-29 👤 surdeus 👁️ 5

I'm doing a task similar to the one described in this thread: one thread reads chunks of variable length from a file and sends them to a channel, and the worker threads process these items.
I thought of a scheme with a vector of vectors under one or more mutexes, but it seems to have bottlenecks. Instead, maybe I should use a carousel: worker threads read from one channel, then send the vectors back over another channel, and the file-reader thread does the reverse.
I wonder, is this worth the hassle?
In the current scheme, I create 1K vectors per second. I know their size beforehand (the first 4 bytes indicate the size) and don't grow the capacity, so it's 1 allocation per chunk + 1 channel write.
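For context, `u32::read_from_io` in the snippet below is not a std method. A std-only stand-in for reading the 4-byte length prefix might look like this (little-endian byte order is an assumption; `read_len` is a hypothetical helper name):

```rust
use std::io::{self, Read};

// Hypothetical stand-in for `u32::read_from_io`: read a 4-byte
// little-endian length prefix from any reader.
fn read_len(r: &mut impl Read) -> io::Result<u32> {
    let mut buf = [0u8; 4];
    r.read_exact(&mut buf)?;
    Ok(u32::from_le_bytes(buf))
}

fn main() -> io::Result<()> {
    // 4-byte prefix (7) followed by payload bytes.
    let mut cursor = io::Cursor::new(vec![7, 0, 0, 0, 1, 2, 3]);
    let len = read_len(&mut cursor)?;
    assert_eq!(len, 7);
    Ok(())
}
```

An `UnexpectedEof` from the first `read_exact` cleanly signals end of input, which matches the `break` in the reader loop.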
The carousel version will have 2 channel operations (1 read & 1 write) and 0 allocations per chunk.
Current reader code
move || -> Result<(), ThreadCrash> {
loop {
let chunk_len = match u32::read_from_io(&mut file_reader) {
Ok(v) => Ok(v),
Err(e) if e.kind() == std::io::ErrorKind::UnexpectedEof => {
break
},
Err(e) => Err(ThreadCrash::new(&format!("{e}")))
}?;
let mut bytes = vec![0; chunk_len as usize];
file_reader.read_exact(&mut bytes).map_err(|e| ThreadCrash::new(&format!("{e}")))?;
jobs.send(bytes).map_err(|e| ThreadCrash::new(&format!("{e}")))?;
}
Ok(())
}
Reader with carousel
move || -> Result<(), ThreadCrash> {
loop {
let mut bytes = used_jobs.recv().unwrap();
let chunk_len = ...
bytes.resize(chunk_len as usize, 0); // reuses capacity unless the chunk grows
file_reader.read_exact(&mut bytes).unwrap();
jobs.send(bytes).unwrap();
}
}
Worker thread
move || -> Result<(), ThreadCrash> {
loop {
let bytes = jobs.recv().unwrap();
// do the job
used_jobs.send(bytes).unwrap();
}
}
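Putting the pieces together, here is a runnable sketch of the carousel (channel names follow the post; synthetic chunk lengths stand in for file I/O, and the pool size is an assumption). The key detail is pre-seeding the return channel so the reader has buffers to recycle from the start:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    const POOL: usize = 4; // buffers in flight (assumption)

    let (jobs_tx, jobs_rx) = mpsc::channel::<Vec<u8>>();
    let (used_tx, used_rx) = mpsc::channel::<Vec<u8>>();

    // Pre-seed the return channel so the reader never blocks at startup.
    for _ in 0..POOL {
        used_tx.send(Vec::new()).unwrap();
    }

    // "Reader": recycle a buffer, resize it to the chunk length, fill it.
    let reader = thread::spawn(move || {
        for chunk_len in [3usize, 5, 2] { // stand-in for lengths read from the file
            let mut bytes = used_rx.recv().unwrap();
            bytes.resize(chunk_len, 0); // reallocates only when capacity grows
            jobs_tx.send(bytes).unwrap();
        }
        // Dropping jobs_tx here closes the channel and ends the worker loop.
    });

    // "Worker": process each chunk, then return the buffer.
    let mut total = 0usize;
    while let Ok(bytes) = jobs_rx.recv() {
        total += bytes.len(); // do the job
        let _ = used_tx.send(bytes); // Err only after the reader has exited
    }

    reader.join().unwrap();
    assert_eq!(total, 10);
    println!("processed {total} bytes");
}
```

With a bounded pool like this, `resize` touches the allocator only when a chunk exceeds the buffer's current capacity, so steady-state allocations drop to zero as described.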