The delicate dance of the sync->async bridge

⚓ Rust    📅 2025-07-08    👤 surdeus    👁️ 2      

I've followed the fairly common pattern in one app of walling off the async portion. That portion is the use of a gRPC service (meaning Tonic), which is async-only.

The service that wraps this API uses a helper like this:

    // Runs an async future from sync code that is already on a Tokio
    // runtime thread. block_in_place hands the worker's other tasks off
    // so that block_on is legal here; note it requires the multi-thread
    // runtime flavor.
    fn run<F, R>(&self, f: F) -> R
    where
        F: Future<Output = R>,
    {
        tokio::task::block_in_place(move || tokio::runtime::Handle::current().block_on(f))
    }

to make calls to that API. (It turns out that this service is also in an async context but ignore that.)

This is - fine. Sort of.

It's not really fine because block_on always carries the risk of deadlock. (run() above is used in the API calls to first obtain a connection, which involves a Tokio Mutex lock.)

If there were no guard, the server being inundated with requests could result in all of the Tokio runtime's worker threads waiting in block_on calls - and we're dead: no thread is left to drive the very futures those calls are waiting on.

I explored using a oneshot channel as an alternative to block_on. It works, but it is really messy and verbose.
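For reference, the shape of the oneshot approach looks like the sketch below. Here a plain std thread and a std channel stand in for the Tokio runtime and its oneshot type, just to show the pattern; in the real code you'd `tokio::spawn` the gRPC future and have it send its output back the same way, so the sync caller blocks on a channel receive rather than on `block_on`.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Stand-in for an async gRPC call: the work runs elsewhere (here a
// thread; in the real code, a future spawned onto the Tokio runtime)
// and the result comes back over a one-shot-style channel that the
// synchronous caller blocks on.
fn call_via_oneshot(arg: u32) -> u32 {
    let (tx, rx) = sync_channel(1);
    thread::spawn(move || {
        // Real version: tokio::spawn(async move { let _ = tx.send(rpc(arg).await); });
        let result = arg * 2; // placeholder for the RPC result
        let _ = tx.send(result);
    });
    // Blocks the sync caller, not a runtime worker thread.
    rx.recv().expect("worker dropped the sender")
}

fn main() {
    println!("got {}", call_via_oneshot(21));
}
```

The pattern works because the blocking happens on the caller's side of a channel, but as noted above, threading the sender and receiver through every call site is where the mess and verbosity come from.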

(The fact that you can't create a Runtime within one is really a frustrating limitation - a transient single-thread Runtime would be a great way to handle this.)

I settled on using a Semaphore to gate the number of active gRPC calls. If the maximum number of concurrent block_on calls is kept below the number of runtime worker threads, we can't get a deadlock.
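The gating idea in miniature - this is my own sketch, not code from the app: a counting semaphore built on a bounded std channel, where acquiring a permit is a (blocking) send and dropping the guard returns the permit. Cap the permit count below the runtime's worker-thread count and wrap each `block_on` call site in an acquire.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::Mutex;

/// Minimal counting semaphore over a bounded channel: `send` blocks
/// once `permits` slots are full, so at most `permits` guards can be
/// live at once. Requires permits >= 1 (a zero-capacity channel would
/// rendezvous instead).
struct Semaphore {
    tx: SyncSender<()>,
    rx: Mutex<Receiver<()>>, // Receiver is !Sync, so wrap it
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        let (tx, rx) = sync_channel(permits);
        Semaphore { tx, rx: Mutex::new(rx) }
    }

    fn acquire(&self) -> Guard<'_> {
        self.tx.send(()).expect("semaphore closed"); // blocks when all permits are out
        Guard { sem: self }
    }
}

struct Guard<'a> {
    sem: &'a Semaphore,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // Return the permit.
        let _ = self.sem.rx.lock().unwrap().recv();
    }
}

fn main() {
    // Illustrative cap: fewer permits than runtime worker threads.
    let sem = Semaphore::new(3);
    let _permit = sem.acquire();
    // ... block_in_place(|| Handle::current().block_on(fut)) would go here ...
    println!("permit held");
}
```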

This is not a bad idea, except I then discovered that Rust's std lib Semaphore is deprecated.

Before I hunt down an alternative (non-async) semaphore implementation - any other ideas?
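(For anyone hunting the same thing: a blocking counting semaphore is only a few lines over std's `Mutex` + `Condvar`. This is a generic sketch of that textbook construction, not any particular crate's API.)

```rust
use std::sync::{Condvar, Mutex};

/// Minimal blocking counting semaphore on Mutex + Condvar.
struct Sem {
    count: Mutex<usize>,
    cvar: Condvar,
}

impl Sem {
    fn new(permits: usize) -> Self {
        Sem {
            count: Mutex::new(permits),
            cvar: Condvar::new(),
        }
    }

    fn acquire(&self) {
        let mut n = self.count.lock().unwrap();
        while *n == 0 {
            // Sleep until release() notifies; loop guards against
            // spurious wakeups.
            n = self.cvar.wait(n).unwrap();
        }
        *n -= 1;
    }

    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

fn main() {
    let sem = Sem::new(1);
    sem.acquire(); // bracket each block_on call site with acquire/release
    // ... the gated block_on call ...
    sem.release();
    println!("ok");
}
```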
