Safe_app_neon - help with Rust

I'm working on a Neon-based library as an experiment to see how it performs compared with safe_app_nodejs.

  • safe_app_nodejs is working with FFI

  • safe_app_neon is working with Neon, which creates addons in Rust instead of C++

The current stable implementation allows an app to register itself on a system, generate an auth URI, and open a URI on the system, as shown in this example app.

The next step in the implementation, on the update branch, is to allow an app to connect to the network, now that it's able to receive an auth-granted URI.

In order to connect an app to the network, I'll need to process the URI it received back from the authenticator, and this is where I need help:

If I were using the exposed FFI, I'd call decode_ipc_msg to decode the returned URI; it invokes the appropriate callback depending on how the app asked to connect to the network.

However, in my library I have my own function, also named decode_ipc_msg, that I want to return a value instead of calling a callback.
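The difference between the two styles can be sketched in plain Rust. This is a generic illustration with placeholder types, not the real safe_app signatures:

```rust
// Callback style (roughly how the FFI decode_ipc_msg behaves):
// the caller supplies closures and the function returns nothing.
fn decode_with_callbacks<F, G>(granted: bool, on_ok: F, on_err: G)
where
    F: FnOnce(u32),
    G: FnOnce(String),
{
    if granted {
        on_ok(1)
    } else {
        on_err("denied".into())
    }
}

// Value-returning style (what the library wants):
// the same decision surfaces as a Result the caller can match on.
fn decode_to_value(granted: bool) -> Result<u32, String> {
    if granted {
        Ok(1)
    } else {
        Err("denied".into())
    }
}

fn main() {
    decode_with_callbacks(true, |id| println!("ok: {}", id), |e| println!("err: {}", e));
    assert_eq!(decode_to_value(true), Ok(1));
}
```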

Specifically, in the case that my app has been granted authorisation, I want to connect it to the network and simply return the App struct, to be used when the app needs to perform further operations on the network.

What I specifically need help with is understanding how to properly destructure the data returned from decode_msg, and what return type to give decode_ipc_msg.

Right now the return type is Result<???, AppError>.

AppError appears fine for the Err case, but there are multiple possibilities for the Ok portion of the Result type.

IpcMsg::Resp {
    resp: IpcResp::Auth(res),
    req_id,
} => match res {
    Ok(auth_granted) => App::registered(app_id, auth_granted, move |event| {
        println!("Network state: {:?}", event)
    }),
    Err(err) => Err(AppError::from(err)),
},
IpcMsg::Resp {
    resp: IpcResp::Containers(res),
    req_id,
} => match res {
    Ok(()) => req_id,
    Err(err) => AppError::from(err),
},
IpcMsg::Resp {
    resp: IpcResp::Unregistered(res),
    req_id,
} => match res {
    Ok(bootstrap_cfg) => serialise(&bootstrap_cfg)?,
    Err(err) => AppError::from(err),
},
IpcMsg::Resp {
    resp: IpcResp::ShareMData(res),
    req_id,
} => match res {
    Ok(()) => req_id,
    Err(err) => AppError::from(err),
},

Based on this match tree, the Ok value may be an i32, an App, or a Vec<u8>.

I could use a fresh perspective on how to more simply return a desired value from decode_msg.
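One common way to flatten a match tree like this into a single return type is to wrap each success shape in an enum, so decode_ipc_msg can return Result<SomeEnum, AppError>. Below is a minimal, self-contained sketch of the pattern; App, ReqId, DecodeResult, and classify are hypothetical stand-ins, not the real safe_app types:

```rust
// Placeholder stand-ins for the real safe_app types.
struct App {
    id: String,
}
type ReqId = u32;

// One enum covers every Ok shape the match tree can produce,
// so the function has a single, concrete success type.
enum DecodeResult {
    Auth(App),             // registered app, ready for network calls
    Containers(ReqId),     // containers response: just the request id
    Unregistered(Vec<u8>), // serialised bootstrap config
    ShareMData(ReqId),
}

// A stand-in for decode_ipc_msg, returning Result<DecodeResult, _>.
fn classify(kind: &str) -> Result<DecodeResult, String> {
    match kind {
        "auth" => Ok(DecodeResult::Auth(App { id: "app".into() })),
        "containers" => Ok(DecodeResult::Containers(42)),
        "unregistered" => Ok(DecodeResult::Unregistered(vec![1, 2, 3])),
        "share_mdata" => Ok(DecodeResult::ShareMData(42)),
        other => Err(format!("unknown response: {}", other)),
    }
}

fn main() {
    // The caller matches once on the enum to get the value it needs.
    match classify("auth") {
        Ok(DecodeResult::Auth(app)) => println!("connected as {}", app.id),
        _ => unreachable!(),
    }
}
```

The caller then destructures DecodeResult once, keeping the App case separate from the req_id and Vec<u8> cases without resorting to callbacks.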


This sounds very interesting Hunter - can you spell out what it would enable? I see it gives access to the SAFE API in Rust, and you mention addons. Is this for the SAFE Beaker browser, and is it a potential way of overcoming the shortcomings and keeping that route alive?


According to this test, http://programminggiraffe.blogspot.com/2014/10/nodejs-ffi-vs-addon-performance.html, addons can be around 100 times faster than FFI. I'd like to get this library to a point where I can compare it to our FFI layer to see what performance gains we can achieve.

Also, we may be seeing issues related to FFI, https://maidsafe.atlassian.net/browse/MAID-2279, that may or may not become more of a problem as we progress. That's vague, but we're still researching whether FFI is causing this issue.
If it is, and we can get rid of the FFI layer altogether, it would simplify our libraries and perhaps make maintenance simpler in the future.

This is also related to the future of our browser development, which you’ll read about in this week’s dev update. Depending on the decision that’s made, we may not be able to depend on FFI anymore.

Whether this approach proves viable, or WebAssembly does, the browser and external nodejs apps would then depend on one common library instead of a separate beaker-plugin-safe-app and safe_app_nodejs.

That’s only from the front-end perspective however.

There may be problems with FFI that I’m not aware of, like memory safety issues or architectural complexity. I’m not sure.

For example, with Neon, when I want to use a piece of data returned from a Rust function, Neon creates a raw pointer (Box::into_raw(Box::new(<App struct>))) and ties it to a JavaScript object, which drops the allocated memory when the garbage collector cleans up that JS object.
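The raw-pointer round trip described above can be illustrated in plain Rust, without any Neon APIs - just the Box calls involved (the App struct and roundtrip function here are illustrative):

```rust
struct App {
    id: u32,
}

// Hand a struct across a boundary as a raw pointer, then reclaim it.
fn roundtrip(id: u32) -> u32 {
    // Box::into_raw moves the struct to the heap and "leaks" it:
    // Rust will no longer drop it automatically. This is the pointer
    // that gets tied to the JS object on the addon side.
    let raw: *mut App = Box::into_raw(Box::new(App { id }));

    // Later calls can dereference the pointer; this requires unsafe
    // because we must guarantee it is still valid.
    let seen = unsafe { (*raw).id };

    // When the JS object is garbage-collected, the finalizer re-owns
    // the allocation with Box::from_raw, and dropping the Box frees it.
    unsafe {
        drop(Box::from_raw(raw));
    }
    seen
}

fn main() {
    assert_eq!(roundtrip(7), 7);
    println!("pointer round trip ok");
}
```

Every Box::into_raw must eventually be paired with exactly one Box::from_raw, otherwise the allocation leaks or is double-freed.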


As a follow-up, here's an interesting comparison of where native code, N-API addons, and wasm may each be most useful.

> node benchmark.js

Levenshtein Distance:
   Native x 118,191 ops/sec ±0.94% (83 runs sampled)
   N-API Addon x 228,882 ops/sec ±0.89% (89 runs sampled)
   Web Assembly x 139,091 ops/sec ±3.65% (79 runs sampled)
 Fastest is N-API Addon

Fibonacci:
   Native x 3,158,795 ops/sec ±1.81% (81 runs sampled)
   N-API Addon x 2,731,388 ops/sec ±1.67% (83 runs sampled)
   Web Assembly x 6,615,989 ops/sec ±1.78% (81 runs sampled)
 Fastest is Web Assembly

Fermat Primality Test:
   Native x 1,546,993 ops/sec ±1.03% (83 runs sampled)
   N-API Addon x 1,318,161 ops/sec ±2.49% (79 runs sampled)
   Web Assembly x 2,297,521 ops/sec ±2.99% (76 runs sampled)
 Fastest is Web Assembly

Simple Linear Regression:
   Native x 161,016 ops/sec ±3.50% (76 runs sampled)
   N-API Addon x 3,397 ops/sec ±3.71% (72 runs sampled)
   N-API Addon using TypedArrays x 73,713 ops/sec ±2.58% (75 runs sampled)
   Web Assembly x 22,633 ops/sec ±3.35% (78 runs sampled)
   Web Assembly using TypedArrays x 26,032 ops/sec ±2.24% (77 runs sampled)
 Fastest is Native

SHA256:
   Native x 14,166 ops/sec ±3.12% (78 runs sampled)
   N-API Addon x 63,740 ops/sec ±0.81% (84 runs sampled)
   Web Assembly x 32,916 ops/sec ±0.91% (88 runs sampled)
 Fastest is N-API Addon

Interesting to think about all that our APIs currently perform, and what they may perform in the future, to estimate which approach will give the most performance for our needs.

