Question about atomics memory ordering: SeqCst needed or not?
⚓ Rust 📅 2026-02-25 👤 surdeus

I stumbled upon a problem with atomics memory ordering that I can't figure out: is the program actually correct or not, and if not, why? I tried to distill the essence in the code below. (Disclaimer: it's an artificial code example and probably not the best code; I'm still on a learning journey.)
The gist of the program: two threads modify shared data (a simple increment) in a racy manner, but the critical section is protected by a lock built from two counters, one per thread, where an odd value represents the 'locked' state and an even value the 'unlocked' state. From my understanding of the memory ordering docs for atomics [std::memory_order - cppreference.com], the program should be correct and all assertions should succeed.
If I run this program, it succeeds on the playground and on my testing machine. The cargo thread sanitizer (-Zsanitizer=thread) and address sanitizer (-Zsanitizer=address) are also happy. Under Miri, however, the program fails: the interpreter finds some order of execution in which the assertions fail.
Only if I change both the lock operation (vec_lock[myself].fetch_add(1, Ordering::AcqRel);) and the check (vec_lock[other].load(Ordering::Acquire) % 2 == 1) to use SeqCst does Miri succeed as well.
However, I can't help but think that the program as shown below is correct as is (independent of architecture or anything else, strictly according to the spec), because the read/write barriers should be sufficient here to ensure correct ordering and inter-thread visibility of the lock and the check. That would mean Miri reports a false positive here, and that's where I have doubts.
Can anybody help to clarify and correct my mental model of memory ordering? Thank you in advance!
```rust
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use std::thread;
use std::time::Duration;

fn main() {
    let vec_lock = Arc::new(vec![AtomicUsize::new(0), AtomicUsize::new(0)]);
    let data = Arc::new(AtomicUsize::new(0)); // data to be modified with increments by one
    let stop = Arc::new(AtomicBool::new(false));

    let two_threads = (0usize..2)
        .map(|myself| {
            let vec_lock = vec_lock.clone();
            let data = data.clone();
            let stop = stop.clone();
            thread::spawn(move || {
                let mut success: usize = 0;
                let mut failure: usize = 0;
                let other = (myself + 1) % 2;
                while !stop.load(Ordering::Relaxed) {
                    vec_lock[myself].fetch_add(1, Ordering::AcqRel); // lock (even -> odd)
                    if vec_lock[other].load(Ordering::Acquire) % 2 == 1 {
                        // other is already locked (We count this just to see if the threads really interleave.)
                        failure += 1;
                    } else {
                        // critical section: perform a non-atomic increment (to provoke and test for a race condition)
                        let val = data.load(Ordering::Relaxed);
                        // thread::sleep(Duration::from_millis(10)); // sleep, if you want more confidence
                        assert_eq!(val, data.swap(val + 1, Ordering::Relaxed));
                        success += 1;
                    }
                    vec_lock[myself].fetch_add(1, Ordering::Release); // unlock (odd -> even)
                }
                (success, failure)
            })
        })
        .collect::<Vec<_>>();

    thread::sleep(Duration::from_millis(1000));
    stop.store(true, Ordering::Relaxed);

    let sum = two_threads
        .into_iter()
        .enumerate()
        .map(|(id, t)| {
            let (success, failure) = t.join().unwrap();
            println!("{id}: {success} {failure}");
            success
        })
        .sum::<usize>();

    println!("{}", data.load(Ordering::Acquire));
    assert_eq!(sum, data.load(Ordering::Acquire));
}
```