RFC 43 – Async safe_core





I am unsure of two items that perhaps could be clarified.

  1. Use of the (often frowned-upon) observer pattern, and what alternatives were considered (e.g. computing futures or similar).
  2. The use of the throttling methods: is this temporary, to prevent network flooding by clients?

It’s a great RFC though, and well overdue; we have been exposing a blocking & slow interface for too long, and this will prompt us to look at network-speed operations, which will allow caching etc. to show its capabilities. Of course, caching all data types would be excellent; even prefetching some types, like appendable data, would be good, then completing the action with the latest version, etc.

I think network flooding with Gets may deserve a separate thread, so I don’t wish to hijack this one with that conversation.


@dirvine: what do you mean by a computing future?


It should be possible to opt out of or tune MAX_LOAD_PER_AUTHORITY. These constants may change in the future, or the application might implement its own throttling mechanism. The major motivation for this limit is resource exhaustion. Looking at approaches to the C10K problem (C100K if you prefer) and how we approach it, it’s clear that the number of threads we use is not tuned to be proportional to the number of CPUs, as others do, but to network resources. We’re adding these limits mostly because we’re worried about thread exhaustion.
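To make the tuning point concrete, here is a minimal sketch of what a per-authority in-flight limit could look like. `MAX_LOAD_PER_AUTHORITY` is the constant from the RFC; the `Throttle` struct and its methods are illustrative names I made up, not the actual safe_core API.

```rust
use std::collections::HashMap;

// Constant from the RFC; an application could construct the throttle
// with its own limit instead (the "opt out or tune" case above).
const MAX_LOAD_PER_AUTHORITY: usize = 10;

struct Throttle {
    max: usize,
    // Number of requests currently in flight, keyed by authority.
    in_flight: HashMap<String, usize>,
}

impl Throttle {
    fn new(max: usize) -> Self {
        Throttle { max, in_flight: HashMap::new() }
    }

    /// Try to reserve a slot for a request to `authority`.
    /// Returns false if the limit has been reached.
    fn try_acquire(&mut self, authority: &str) -> bool {
        let count = self.in_flight.entry(authority.to_string()).or_insert(0);
        if *count >= self.max {
            false
        } else {
            *count += 1;
            true
        }
    }

    /// Release a slot once the response (or error) arrives.
    fn release(&mut self, authority: &str) {
        if let Some(count) = self.in_flight.get_mut(authority) {
            *count = count.saturating_sub(1);
        }
    }
}
```

An application that wants different behaviour would simply build the throttle with its own limit, e.g. `Throttle::new(MAX_LOAD_PER_AUTHORITY * 4)`, or skip acquiring slots entirely.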

It’s a good solution that can get us a simplified code base to work with while Rust develops good async foundations.

I’ll start implementing this RFC.


Regarding futures vs. callbacks, I’m on the callbacks side. futures-rs seems like a moving target at the moment, and this dependency might result in several rewrites of our async functions.

And I guess it shouldn’t be too hard to refactor from callbacks to futures eventually (as opposed to futures → callbacks or sync → futures). It’s not exposed in the public interface, so the internal async mechanism can be changed when appropriate (e.g. when futures-rs stabilizes at 1.0, or if async/await is incorporated into stable Rust).

Callbacks have their downsides, but in any case it’s the same state machine under the hood. What I mean is that the semantics don’t differ substantially, only the syntax/representation, so callbacks can be used and composed in the same way as futures (with regard to throttling, etc.).

There’s one more thing to it, though: futures have a simpler way to handle errors (with futures-rs it’s essentially an asynchronous Result<R, E>). I haven’t noticed anything about errors in the RFC, but I guess there’d be two callback functions for success/failure, and that might add complexity to the code.


@nbaksalyar: agree with you about callbacks vs. futures for now. futures-rs is too unstable at the moment. To handle errors, I’d suggest passing Result<R, E> to the callback, instead of having two callbacks.
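A minimal sketch of that single-callback style, assuming names of my own invention (`CoreError`, `get_data`) rather than the actual safe_core API: the operation takes one callback that receives the whole Result, so the caller handles success and failure in one place.

```rust
#[derive(Debug, PartialEq)]
enum CoreError {
    RequestTimeout,
}

// One callback receiving Result<R, E>, instead of separate
// success/failure callbacks. Here the callback is invoked
// synchronously for illustration; in the real crate it would fire
// once the network response arrives.
fn get_data<F>(found: bool, callback: F)
where
    F: FnOnce(Result<Vec<u8>, CoreError>),
{
    if found {
        callback(Ok(vec![1, 2, 3]));
    } else {
        callback(Err(CoreError::RequestTimeout));
    }
}
```

The caller then matches once on the Result, mirroring how synchronous error handling already works in Rust.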



  1. Ah, the observer here is not the traditional object-oriented observer pattern (which I think is discouraged in favour of the functional paradigm). So it’s not something like:

trait Observer {
    fn call_me(/* ... */);
}

// And now concrete observers implementing it, etc.

It’s more like registering an mpsc::Sender - this is also how routing integrates with safe_core, and crust with routing.
I should have made this clearer, perhaps, but yes - it’s certainly not the object-oriented observer pattern, just the name “observer” (as in listening to events).
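To illustrate the mpsc::Sender style described above, here is a minimal sketch. `NetworkEvent`, `Core`, and `register_observer` are illustrative names, not the actual safe_core/routing API; the point is only that the "observer" is a channel endpoint, not a trait object.

```rust
use std::sync::mpsc;

#[derive(Debug, Clone, PartialEq)]
enum NetworkEvent {
    Connected,
    Disconnected,
}

struct Core {
    // An "observer" is just the sending half of a channel.
    observers: Vec<mpsc::Sender<NetworkEvent>>,
}

impl Core {
    fn new() -> Self {
        Core { observers: Vec::new() }
    }

    // Registering an observer means keeping its Sender.
    fn register_observer(&mut self, tx: mpsc::Sender<NetworkEvent>) {
        self.observers.push(tx);
    }

    // Notify every registered observer; a dropped Receiver simply
    // makes send() fail, which we ignore here.
    fn notify(&self, event: NetworkEvent) {
        for tx in &self.observers {
            let _ = tx.send(event.clone());
        }
    }
}
```

The listening side then drains its `Receiver` on whatever thread and schedule suits it, which is what distinguishes this from the classic OO pattern where the subject calls directly into the observer.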

  2. I think it should be temporary - whenever routing signals that it’s OK and is able to handle many simultaneous requests under churn etc., the limit will be removed. I can remove it now, if too many simultaneous requests are not something that would trouble the network at this point?


@nbaksalyar @madadam @vinipsmaker: What do you guys think of https://github.com/alexcrichton/futures-rs/issues/121#issuecomment-244974815 in combination with the caveat mentioned in https://github.com/alexcrichton/futures-rs/issues/121#issuecomment-245352325?

Basically, by the time we construct the last future, if the futures are already satisfied then it will fire immediately, in the same thread from which JS called us, and execute the JS callback; otherwise it will fire in the event-loop thread, and thus the callback will be executed in the event-loop thread too. In both cases, however, in my experiment they end up getting posted to the JS main thread by NodeFFI, so everything seems fine that way.

However, if you think anything bad can happen, do write here in advance.


In the case of JavaScript / Node.js, I think it’s fine because, as you mentioned, Node.js takes care of posting the callback to the main event thread. If, however, we want to expose this API to other environments/languages too, then it might be a problem. At a very minimum, I’d suggest mentioning in the docs that the callback might be called from different threads and that the user is responsible for synchronizing. There are existing, widely used libraries that take a similar approach, for example: https://wiki.libsdl.org/SDL_SetEventFilter.
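A minimal sketch of that caveat from the user's side, with invented names (`call_async` stands in for any FFI function that may invoke the callback on a library thread): shared state touched from the callback has to be guarded, here with an `Arc<Mutex<_>>`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for a library call whose callback may run on a different
// thread than the caller's. The join() is only here so the example
// is deterministic; a real library would return immediately.
fn call_async<F>(callback: F)
where
    F: FnOnce(u32) + Send + 'static,
{
    thread::spawn(move || callback(42)).join().unwrap();
}
```

The `Send + 'static` bounds are exactly the contract the docs would need to spell out for other languages: the callback crosses a thread boundary, so anything it captures must be safe to move and share.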