Still Using Synchronous File I/O in Vaults?

I noticed we use std::fs for file I/O, particularly in the state DB (though it may be used elsewhere; I'm not sure as of right now). Tokio offers an async file I/O API behind a feature flag, so I'm wondering whether we could speed up vault startup by switching to tokio's async file I/O, letting a vault's I/O scheduler batch file reads/writes more effectively while execution continues elsewhere.

Is there a reason we currently don't use the async versions, or is it more that the async conversion isn't quite complete yet (part of the "minor issues" mentioned in the September 3 update)? It wouldn't be a massive effort to convert, afaik, but I'm just curious since I stumbled across it earlier.

5 Likes

Good catch. We have not made these async yet, it just came down to time etc. It would be a great PR if you are so inclined! The async work is in branches just now, so I hope it will be in master in a few days.

5 Likes

Sure, I was thinking of submitting a patch for it anyway while I had the files up. It seems dependencies are being monkeyed with in other crates at the moment, but once that settles down and I can compile again, I'll get a PR going :+1:

EDIT: it was a pretty quick fix, so I went ahead and made the PR, though until the dependencies are patched up I'll leave it as a draft.

6 Likes

Thanks, much appreciated.

2 Likes

Just a quick update on this. Looking at the code, we can't currently enable concurrent usage like the following:

// potential concurrency capability
let mut tasks = Vec::new();
for _ in 0..10 {
    tasks.push(chunk_store.do_concurrent_op());
}
futures::future::try_join_all(tasks).await?;

Instead, we can currently only do something like the following (even after the async conversion PR, from what I can tell). We have to await each job serially, which isn't "real" concurrency from the perspective of the ChunkStore struct.

// current capability
for _ in 0..10 {
    chunk_store.do_op().await?;
}

Getting to the properly concurrent version requires each store to first internally synchronize its own record of the UsedSpace for that ChunkStore<T>, not just the global used_space value we currently pass around (right now each ChunkStore<T> instance tracks its own used-space amount, which isn't async-safe). Only after that can the chunk store itself be modified to allow the concurrency.
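To make the synchronization idea concrete, here's a minimal sketch of what an async-safe, shared used-space counter could look like. The names (UsedSpace, increase, decrease, total) are hypothetical, not the actual vault API; the point is that every ChunkStore would hold a clone of one shared atomic counter instead of its own unsynchronized total:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Hypothetical sketch: one counter shared by all stores via Arc,
// safe to update from concurrent tasks without a lock.
#[derive(Clone, Default)]
struct UsedSpace {
    total: Arc<AtomicU64>,
}

impl UsedSpace {
    fn increase(&self, bytes: u64) {
        self.total.fetch_add(bytes, Ordering::SeqCst);
    }
    fn decrease(&self, bytes: u64) {
        self.total.fetch_sub(bytes, Ordering::SeqCst);
    }
    fn total(&self) -> u64 {
        self.total.load(Ordering::SeqCst)
    }
}

fn main() {
    let used = UsedSpace::default();
    let store_view = used.clone(); // each ChunkStore would hold a clone
    store_view.increase(1024);
    used.increase(512);
    used.decrease(512);
    assert_eq!(used.total(), 1024); // all clones see one shared total
}
```

Because the clones all point at the same Arc<AtomicU64>, concurrent ops on different stores can't race the way independent per-store totals can. (A tokio::sync::Mutex or RwLock would also work if the real accounting needs more than a single number.)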

So, to address the first item, I submitted a potential enhancement that I think would simplify the interface with UsedSpace and synchronize it across the vault, as a first step toward the above. Afaik it's an enhancement that hasn't been covered yet, so thoughts are appreciated :slight_smile:

5 Likes