Adventures in Rust, Node.js and Safe

I’ve been using the Rust Safe APIs for the past few weeks and am steadily moving towards an idea of how all the moving parts come together. As a kind of exercise I’ve worked on improving and updating the bindings to Node.js. This post serves as a memory aid and reference for myself and anyone interested. (I welcome any feedback from someone proficient with Rust!)

A major challenge I’m trying to wrap my head around stems, I assume, from the clash between the Rust ownership model and its counterpart on the other side: the garbage collector of Node.js.

Node.js has an API (N-API) that allows us to translate Rust values and objects into JavaScript ones. For example, one can turn an i32 into a Number. Say we have the following Safe idiom in Rust:

#[tokio::main]
async fn main() {
    // Create a client and connect to the network.
    let mut safe = sn_api::Safe::new(None, std::time::Duration::from_secs(120));
    safe.connect(None, None, None).await.unwrap();
    // Create a key pre-loaded with test coins.
    let (url, kp) = safe.keys_create_preload_test_coins("9").await.unwrap();
}

Then the equivalent in idiomatic Node.js could be this:

const { Safe } = require('sn_api');

(async () => {
    const safe = new Safe(null, 120);
    await safe.connect();
    const [url, kp] = await safe.keys_create_preload_test_coins('9');
})().catch(r => console.dir(r));

With N-API, these kinds of translations can be implemented.

napi-rs — N-API in Rust

napi-rs is a Rust library offering an ergonomic way to use N-API. I’ll use this to translate the Rust API into JS constructs.
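To give a feel for the basic translation layer, here is a minimal sketch (my own example, not taken from the bindings) of a napi-rs function that hands an i32 to JavaScript as a Number:

use napi::{CallContext, JsNumber, Result};
use napi_derive::js_function;

// Exposed to JS as a plain function; calling it returns the Number 42.
#[js_function(0)]
fn forty_two(ctx: CallContext) -> Result<JsNumber> {
    ctx.env.create_int32(42)
}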

N-API has a way to wrap a native (Rust) object (e.g. sn_api::Safe) into a JS object. When a method is called, the native instance can then be borrowed (unwrapped) and used.

Now let me try to explain my problem. If we want to do something asynchronously (e.g. fetch a file from the network), we need a runtime for this. (The Safe APIs mostly consist of async functions that have to be ‘polled to completion’ by a runtime.) Node.js only has a single main (event loop) thread, so any function we create should not block. If it did block, we might block the whole application from executing anything else.

Imagine we create a simple JavaScript class that wraps around an integer (e.g. i32). napi-rs has a convenience function that allows us to execute a future on a Tokio runtime. This function makes sure there is only one runtime instantiated for our addon, runs that runtime on a separate thread (independent of the Node.js main thread), and sends our futures to it, turning each result into a JavaScript Promise. A very convenient function indeed.

Let me illustrate a problem I could bump into:

#[js_function(0)]
fn constructor(ctx: CallContext) -> Result<JsUndefined> {
    let mut this: JsObject = ctx.this_unchecked();
    ctx.env.wrap(&mut this, 572)?; // Wrapping the native Rust value.
    ctx.env.get_undefined()
}

#[js_function(0)]
fn my_method(ctx: CallContext) -> Result<JsObject> {
    let this: JsObject = ctx.this_unchecked();
    let num: &mut i32 = ctx.env.unwrap(&this)?; // Borrowing the value.

    // Convenience method to execute futures, returning a Promise!
    ctx.env.execute_tokio_future(
        async {
            println!("{}", num); // Compile error!!!
            // (because the reference might be invalid by then)
            Ok(())
        },
        |env, _| env.get_undefined(),
    )
}

The native Rust value can be unwrapped/borrowed, but I can’t pass the borrow to the Tokio runtime on the other thread, because the data it references might have been garbage collected (I assume) by Node.js in the meantime. The actual error I am getting is the following:

explicit lifetime required in the type of `ctx`
lifetime `'static` required

Again, I’m assuming the context that provides the borrow might be dropped entirely, which would drop the native Rust value too.


One solution is to clone the value and move it into the async function. With an integer, that seems like a good solution. But with sn_api::Safe I am not so sure. If the object is cloned and we call, for example, connect(), then only the cloned Safe instance will be connected, not the one still wrapped in the JS object.
(I’ve used this solution for now in a draft I’m making here.)
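For the integer case, the clone-and-move workaround looks roughly like this (my own sketch, not the exact draft code): because i32 is Copy, we can copy the wrapped value and move the owned copy into the future, so no JS-owned reference ever crosses the thread boundary.

#[js_function(0)]
fn my_method(ctx: CallContext) -> Result<JsObject> {
    let this: JsObject = ctx.this_unchecked();
    let num: &mut i32 = ctx.env.unwrap(&this)?;
    let owned = *num; // Cheap copy of the primitive.

    ctx.env.execute_tokio_future(
        async move {
            println!("{}", owned); // Compiles: 'owned' is moved, not borrowed.
            Ok(owned)
        },
        |env, val| env.create_int32(val),
    )
}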


To be continued.


PS: I updated the existing bindings (they use `neon`) a bit and described some similar problems [here](https://github.com/maidsafe/sn_nodejs/pull/85#issuecomment-779909946). I'm starting to think cloning is a sensible solution to most problems in Node.js. But that also means the Safe Rust APIs should be up to that task and allow things to work that way.
7 Likes

I’m not sure at all, but just throwing it out here in case it makes any sense, @bzee. IIRC at some point I was imagining that to expose the API with Promises we may need to create Node.js function wrappers like what’s shown in this example as a wrapper for the perform_async_task binding function: https://neon-bindings.com/docs/async#promises …if I’m understanding that correctly…

1 Like

Thanks, @bochaco. A few years ago, that Task API was the first thing I looked at. But it doesn’t solve the problem I’m describing in the OP. You still need to somehow share or control the lifetime of the Safe instance with those thread contexts. If you were to execute the runtime as a task, you would still need to ‘send’ the Safe instance to that task/thread, thus managing its lifetime.

Besides that, the Task API is being deprecated:

The Task API (neon::task) is deprecated, and should in most cases be translated to using the Event Queue API.

Rationale: The Task API was built on top of the low-level libuv thread pool, which manages the concurrency of the Node.js system internals and should rarely be exposed to user-level programs. For most use cases, Neon users took advantage of this API as the only way to implement background, asynchronous computations. The Event Queue API is a more general-purpose, convenient, and safe way of achieving that purpose.

The Event Queue API they are talking about is more promising. This is what I’ve been looking at yesterday. In N-API this presumably corresponds to what they call ‘thread-safe functions’, whereby you can queue a callback into a JavaScript function from another thread.


I’ve come across quite an interesting example using it for a use case similar to ours. In there, they’re using a ‘JsBox’. This is, I assume, what I described as ‘wrapping’, only Neon might have wrapped that in a construct that does indeed make the lifetime ‘indefinite’. Anyhow, I guess I’m onto something here, because that JsBox contains an atomic reference. This all begins to look like something from my problem sphere.

In the end it’s all about this line from that example:

const [a, b] = await Promise.all([counter.incr(), counter.incr()]);

Being able to run two futures simultaneously with the same object.

4 Likes

Seems like wrapping and keeping a reference with the wrapped object could indeed be a solution to this (ensuring the object isn’t deleted in JS), but I’ve no idea what that looks like in code.

Thanks @bzee for answering my question (on your PR) about napi-rs versus neon. I’m trying to understand what ways this might be relevant to what I’m up to with Git Portal.

One obvious difference is that I’m not making a binary but compiling to WASM. At some point, though, I will be using safe_nodejs in the browser again, so I’m wondering how that relates to a native Node module.

Is the browser Safe API based on this and compiled with the browser for each platform? That makes sense, and would answer a curiosity I’ve had lurking for a long time but which didn’t become a question until now!

I guess it would be feasible to compile more of the Safe APIs to WASM, but it would probably be hard to access the low-level system APIs in a browser. I saw a question from someone about this recently (how to access network ports) and think it’s being worked on for WASI, so it may be possible one day.

3 Likes

Yes. The browser is built with Node.js (Electron) and must use an addon to interact with the Safe API. It injects the addon (or wrapper around the addon) in the window object so Safe sites can use this API from their JavaScript context.

This has been a pipe dream of mine. Before I looked at N-API and such, I looked at WASI. That is supposed to allow WASM to interact with low-level APIs like sockets. A Safe client will use a UDP/QUIC socket to connect to the network. But for now, WASI only allows one to inject files (allow the WASM context to open a file at a certain path, with permissions). And sockets are probably a whole new challenge. It’s all about how the host environment will control or allow the WASM process to access these things in a safe way.

2 Likes

I found the short discussion of sockets I mentioned; it’s in the #wasi channel of the WebAssembly Discord chat, on Feb 8th. Someone asks about TCP/IP and the response is that you can only use sockets if you create them outside and pass them in, so you can provide functions to do this and call them from your WASI program. If so, that sounds like it could work.

Also, did you look at Emscripten? I dabbled with it to get wasm-git working for my use case but have set that aside as the Rust support seems to have rotted.

1 Like

Perhaps it’s better to continue on this topic:

Passing sockets doesn’t seem appropriate though. That seems like recreating the interface that WASI is eventually supposed to provide.

From here:

Sockets aren’t naturally hierarchical though, so we’ll need to decide what capabilities look like. This is an area that isn’t yet implemented.

3 Likes

@bzee I think cloning should be fine for now. Most of the client is set up using Arc, so cloning will still reference the original, and changes would be made across the board. Indeed, we can set up the whole thing as Arcs if that makes sense.

3 Likes

Very interesting, thanks for that useful information!

An update with a solution. I’ll try to keep it simple for my future self and anyone interested.

The end-goal of this little project is to be able to do two things at the same time. For example, fetch two files at once.

Mutable call or immutable call

After you instantiate the Safe object, you can connect to the network and then perform various types of requests. Connecting with connect will mutate the Safe instance, meaning it changes the inner representation of the instance: it might store the connection details, the configuration file that was loaded, and the private keys that might be used for accessing the network.

const safe = new Safe(..);
await safe.connect(..); // This will mutate the 'safe' instance.

After connecting, we could fetch a file from the network. Such an operation would not require the Safe instance to be mutated. There is no new information to store in the inner representation of our connection.

const file = await safe.fetch("safe://../file.txt"); // This will not mutate.

Now, why is this mutability aspect important? The answer lies within Rust’s ownership model.

Ownership and references

A core characteristic of Rust is the ownership model. It makes sure values aren’t mutated and accessed haphazardly. Without it you might read a value while it’s being written to, which could cause weird behavior. Rust makes it clear who owns what and whether you can write to or read from it.

In one sentence: there can be multiple readers or a single writer. Reading is harmless, so the application may read a value from multiple places at the same time (e.g. from multiple threads). Writing is different: if one party is writing, no one else should be writing or reading. Writing here refers to mutation, while reading is the immutable aspect. So, in Rust terms, we can have either multiple immutable references or a single mutable reference.
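A tiny stand-alone example (not Safe-specific, just to show the compiler enforcing this rule):

fn main() {
    let mut value = String::from("hello");

    let writer = &mut value;   // single mutable (write) borrow
    // let reader = &value;    // error[E0502]: cannot borrow `value` as immutable
                               // because it is also borrowed as mutable
    writer.push_str(" world");

    let reader_a = &value;     // the write borrow has ended, so...
    let reader_b = &value;     // ...multiple read borrows are fine
    println!("{} {}", reader_a, reader_b);
}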

Back to connect. As I mentioned, connect mutates the Safe instance. In other words, Safe.connect needs a mutable reference to itself. Thus, while we are using Safe.connect we cannot do anything else: we can only have that single mutable reference. This means we can’t fetch a file while connecting, which makes sense: we can’t fetch a file before we have connected…
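In signature terms it boils down to something like this (a simplified sketch; the real sn_api signatures take more parameters and return richer types):

impl Safe {
    // Mutates the instance (stores connection state), so it takes &mut self.
    pub async fn connect(&mut self /* , .. */) -> Result<()> { /* .. */ }

    // Read-only operation, so a shared &self reference is enough.
    pub async fn fetch(&self, url: &str) -> Result<Vec<u8>> { /* .. */ }
}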

Translation to JS

Connecting and fetching files are both examples of asynchronous operations. In JavaScript these are mostly expressed as promises. While in Rust the ownership model makes sure we can’t do anything else while we’re connecting, in Node.js we can’t guarantee that. One could just write this, without any compiler errors:

await Promise.all([safe.connect(..), safe.fetch('safe://..')]);

Both at the same time and there’s nothing we can do about it.

Either one writer (connect) or multiple readers (fetch)

A mechanism in Rust that applies to exactly this challenge is the RwLock:

This type of lock allows a number of readers or at most one writer at any point in time.

Using this type, we can wrap the Safe object in such a lock, and whenever we need a mutable or an immutable reference we can ask the lock whether that’s possible at the time. If it’s not, we can either wait or return an error.
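A minimal stand-alone sketch of tokio::sync::RwLock (the async variant, which fits the futures we hand to the runtime):

use std::sync::Arc;
use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let data = Arc::new(RwLock::new(0u32));

    {
        // Waits until there are no other readers or writers, then gives exclusive access.
        let mut writer = data.write().await;
        *writer += 1;
    } // Write guard dropped here, releasing the lock.

    // Multiple simultaneous readers are fine.
    let reader_a = data.read().await;
    let reader_b = data.read().await;
    println!("{} {}", *reader_a, *reader_b);
}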

Making sure the value isn’t dropped

We need one more mechanism to solve what I described in the original post: the Arc, which stands for ‘atomically reference counted’. We can wrap the Safe instance (or rather the lock around it) in an Arc. When we want to use the instance to execute a future on a separate thread, we clone the Arc, which increases the reference count. Even if Node.js were to drop the original Arc and decrease the count, the cloned Arc ensures the value is not dropped. As long as we are using it, it won’t be dropped.
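The mechanics in isolation (again a generic sketch, not the binding code):

use std::sync::Arc;

fn main() {
    let original = Arc::new(String::from("wrapped value")); // refcount: 1
    let for_the_runtime = Arc::clone(&original);            // refcount: 2

    let handle = std::thread::spawn(move || {
        // Even if 'original' is dropped in the meantime, the String stays alive
        // because this clone still holds a reference.
        println!("{}", for_the_runtime);
    });

    drop(original); // The String itself is freed only once the last clone is gone.
    handle.join().unwrap();
}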

Implementation

Now let me show the (simplified) implementation I’ve constructed:

#[js_function(0)]
fn constructor(ctx: CallContext) -> Result<JsUndefined> {
    let safe = Safe::new(..);
    let safe = Arc::new(RwLock::new(safe)); // Wrap in lock and in Arc.

    let mut this: JsObject = ctx.this_unchecked();
    ctx.env.wrap(&mut this, safe)?; // Attach to the JS object.

    ctx.env.get_undefined()
}

#[js_function(0)]
fn connect(ctx: CallContext) -> Result<JsObject> {
    let this: JsObject = ctx.this_unchecked();

    // Borrow the Arc containing the Safe lock.
    let safe: &Arc<RwLock<Safe>> = ctx.env.unwrap(&this)?;
    let safe = Arc::clone(safe); // Cloned version to send into the runtime.

    ctx.env.execute_tokio_future(
        async move {
            // Wait until we can have sole write access!!!!
            let mut lock = safe.write().await;
            lock.connect(..).await // Connect (taking the mutable reference).
        },
        // Nothing to return, so return undefined.
        |env, _| env.get_undefined(),
    )
}

#[js_function(0)]
fn fetch(ctx: CallContext) -> Result<JsObject> {
    let this: JsObject = ctx.this_unchecked();
    let safe: &Arc<RwLock<Safe>> = ctx.env.unwrap(&this)?;
    let safe = Arc::clone(safe);

    ctx.env.execute_tokio_future(
        async move {
            // We'll get an immutable reference when there is NO writer!!!
            let lock = safe.read().await;
            lock.fetch("safe://..").await // Fetch (taking the immutable reference).
        },
        |env, file| {
            // Process the file, turn it into a JS string for example.
        },
    )
}

#[module_exports]
pub fn init(mut exports: JsObject, env: Env) -> Result<()> {
    let safe = env.define_class(
        "Safe",
        constructor,
        &[
            Property::new(&env, "connect")?.with_method(connect),
            Property::new(&env, "fetch")?.with_method(fetch),
        ],
    )?;
    exports.set_named_property("Safe", safe)?;

    Ok(())
}

And to demonstrate it on the JS side (again simplified):

const { Safe } = require('sn_api');

// Instantiate Safe (putting it in the Arc and lock).
const safe = new Safe(..);

// Wait for us to be the sole writer and connect, mutating the object!
await safe.connect();

// Wait for a possible writer, then borrow immutably, fetching the file.
const file = await safe.fetch('safe://../file.txt');

// We can borrow immutably multiple times simultaneously. Multiple readers!!
const [file1, file2] = await Promise.all([safe.fetch(..), safe.fetch(..)]);

So, why not clone?

The simple answer is that the connect function can’t be implemented with clone. Referring to my original post: if the object is cloned and we call connect(), then only the cloned Safe instance will be connected, not the one still wrapped in the JS object.

But connect is still asynchronous, so I did want to find a way to implement it as such. It should never block the Node.js main thread, so a synchronous implementation isn’t an option.

The complicated answer is that I’ve also used this as a way to better understand Rust and async. There might be other solutions, or drawbacks to the current one; we’ll see. I’m open to alternatives, though what I’ve come up with now seems at least somewhat robust.

9 Likes

Thanks for sharing @bzee! I was wondering what the status was on the browser side and it sounds like you’re on the frontier!

So from looking at the above, wrapping the client calls in this way provides a way to access the Safe Network API directly from the browser?

Does this also mean that each app (‘page’) that is loaded into the browser could have its own app id (and associated permissions)?

Also, as an aside, can a safe client (wrapper?) be passed into a WASM call? I’m just connecting dots and wondering what the capabilities are or could be.

I cloned sn_browser with a view to start looking through some of this stuff. Do you have a branch with any of this stuff in to mess around with too?

1 Like

I have just now started to think about the browser. I’ve kind of focused purely on the client part of the API, ignoring authentication, apps and the browser. The Node addon will be injected into the JS environment of the Safe apps running in the browser.

The injected API will probably be modified a little (i.e. have functions overridden) to make sure Safe sites can’t access powerful parts of the API that might have external consequences (sites should be sandboxed).

If you instantiate a WASM module, you can specify modules that it can import. So, in theory you could create a WASM app that imports the API. This is the most realistic option I’ve come up with if you want to write Rust code that interacts with Safe while running in the browser. The Rust code is compiled to WASM, that WASM runs in a context that has the Node.js addon available, which means it ends up talking to Rust code again. It’s a little weird to imagine and surely not as performant as skipping the round-trip through WASM/JS. From Rust, you can do this with some kind of wrapper that detects it’s being compiled for the WASM target and replaces the Safe API with a shim that imports the equivalent JS functions. Again, it’s hard to imagine, but I guess that’s the most realistic option at this moment. Though the amount of type conversion involved might make it all very complicated.
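To make that a bit more concrete, here is a rough sketch of what such a shim could look like with wasm-bindgen (my assumption for the toolchain; the injected safeFetch function is hypothetical and would be provided by the host page or addon wrapper):

use wasm_bindgen::prelude::*;
use wasm_bindgen_futures::JsFuture;

#[wasm_bindgen]
extern "C" {
    // Hypothetical function injected into the page by the addon wrapper,
    // returning a Promise that resolves to the fetched content.
    #[wasm_bindgen(js_name = safeFetch)]
    fn safe_fetch(url: &str) -> js_sys::Promise;
}

// Rust code compiled to WASM calling back into the JS-side Safe API.
#[wasm_bindgen]
pub async fn fetch_index() -> Result<JsValue, JsValue> {
    JsFuture::from(safe_fetch("safe://../index.html")).await
}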

In the future WASI might be a solution, which would mean the full Rust API could be executed from within WASM, with full access to the file system and networking sockets. This might also replace the Node.js addon, as Node.js should by then also be able to run WASI-targeted WASM.

I’m working on Node.js bindings here (test).

4 Likes

Using wasm-pack takes away a lot of the pain of writing a Rust/WASM module which can call your browser JavaScript (including safe_nodejs) and vice versa.

1 Like

Yes, this is correct, and that’s how it’s been working in the past. This is also where the existence of authd starts making more sense. When a webapp needs write access it needs a keypair with safecoins in its balance so it can pay for storage. Otherwise it can access the network, but with read-only access.

The API exposes auth_app, which allows apps to get such a keypair from authd (this API sends a request to authd). The user can then allow the request (using the CLI auth commands or the SNAP GUI), which results in authd assigning and returning a keypair to the app.

This flow applies when you use authd to give apps/webapps a keypair to work with, but there are countless other ways apps may decide to obtain such a keypair, e.g. a hardware wallet (not possible yet due to a missing feature in the sn_client API), asking the user to enter it, etc…

3 Likes

Let me add one more thing: the browser itself is a Safe app and uses the Safe API with read-only access. It calls fetch to retrieve the content of the website; the website may then be static or a dynamic Safe webapp, which in turn will also make use of the Safe API.

2 Likes

This seems like a relevant development (or maybe not, see 'EDIT’ below): WASIX is a superset of WASI which adds sockets (TCP/UDP) and tokio support, among other things.

Maybe this will enable the Safe API to be compiled to WASM for the browser.

safenode running in a browser anyone? :scream:

Or how about a Safe Browser running inside Chrome/Firefox or :man_shrugging:. I really don’t know what this makes possible, but it seems important.

Anyone want to play?

EDIT

  • WASIX is a non-standard project using a proprietary toolchain, so probably not of much real-world use, but still interesting to play with. FYI I tried compiling safenode, but WASIX requires libc 2.36 which isn’t available until Ubuntu 23 :man_shrugging:. Meanwhile…

  • WASI Preview 2 is coming later in 2023. So far we have been using Preview 1, although it wasn’t called that! Anyway, Preview 2 includes async sockets, but threading won’t come until Preview 3 :cry:. These are steps towards a WASI 1.0 W3C standard.

Good intro to WASI and Preview 2 here: