# Async safe_core
- Status: implemented
- Type: feature
- Related components: `safe_core`, `safe_launcher`
- Start Date: 02-September-2016
- Discussion: https://forum.safedev.org/t/rfc-43-async-safe-core/99
## Summary

Making `safe_core` async with respect to FFI and internally.
## Conventions

- The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](http://tools.ietf.org/html/rfc2119).
## Motivation

Currently `safe_core` has a threaded design. Its core objects are properly mutexed, and parallel invocations can be achieved by making calls from different threads. However, the frontend, coded in NodeJS, interfaces via `NodeFFI`, which detects the hardware concurrency and limits the number of threads accordingly. This is not bad in itself, but it leads to underutilisation of `safe_core`. This RFC proposes a design in which the number of invocations is not limited. While this will lead to optimal usage of `safe_core` itself, it can cause very high network traffic, so we will try to address such problems too.
## Detailed design
Let's assume the hardware concurrency of a machine is 2 (dual core). In the current threaded design, if `App_0` makes 2 requests to `Launcher` and `App_1` makes another 2, the `NodeFFI` module in `Launcher` queues 2 of them and sends only 2 to `safe_core`, spawning 2 threads for parallelism. Until at least one thread returns, neither of the 2 queued requests has any chance of getting through to `safe_core`. `safe_core` invariably waits a tremendous amount of time (in terms of CPU speeds) for network responses (_IO_). While it sits idle it could have handled many more requests, if only `NodeFFI` forwarded them. This leads to underutilisation of the core library.

However, there are positives to this. For instance, if `NodeFFI` were to spawn a thread per request, we could go to the other extreme: 30 combined requests from a few apps would result in 30 threads, which is also not good. Further, by restricting the number of simultaneous invocations, the throttling mechanism can be viewed as built in: one would not be allowed to choke the network/bandwidth by making hundreds of concurrent requests. So we need to strike a balance.
To better utilise the library without uncontrolled spawning of resource-hogging threads, this RFC proposes an async design built around a single-threaded event loop. We will use the `futures-rs` crate. There will be a central event loop running on a single thread which registers futures and dispatches them when they are ready.