Hey all again!
So, as per this thread, it seems like the node cleanup policy from the CLI perspective is to just kill the node outright. This isn’t the most elegant, but it works. That said, @bochaco mentioned that, ideally, it might be useful to have some sort of interface over which a node could receive commands (from localhost or otherwise) to perform tasks like shutting down, pinging, etc. They pointed to the current implementation of `sn_authd` as an example of how the `qjsonrpc` library is currently used to control the authenticator daemon in just such a fashion.
This seems to me like a good opportunity to play with, and provide some feedback on, the `qjsonrpc` API in a slightly different context (since `authd` happens to be the only consumer of the library at the moment). I figured I’d write a little program implementing a `qjsonrpc` client/server model for passing messages and report my findings.
The goal of this experiment is mainly to find and iron out any sticking points in the API and maybe end up with a chunk of example code for the library. But because I like to aim high, I’m also writing it so that, if it reaches sufficient maturity, it might be spliced into the existing code with relative ease. Maybe one day I can feel the sweet satisfaction of typing `safe node shutdown` and seeing my node elegantly shut down without being axed by my OS like a blood sacrifice to the computing gods.
Anyway, over the past few days I got a contrived example running, which you can see here. It’s still very much a work in progress, but it’s there if you’re curious.
Since the node has no public API to speak of (e.g. you can’t make a node do much other than `node.run()` even if you do have a reference to `node`), I figured the best approach would be to develop a message-passing system. The program right now is composed of three actors: a client, the RPC interface manager, and a minimal/faked node service. The client sends a message to the manager, which fields the request, wraps it, and places it in a pipe for the faked node service to attend to. Once that query is serviced, the response is placed on a different pipe back to the manager, which fetches the cached connection and forwards the response back to the client. One can imagine replacing the spoofed server/node process with an actual Safe node, and we’d have our desired interface. This approach also wouldn’t require any API changes to the node itself, which is nice and flexible.
Currently, it only works for the single test case of a client sending one ping and waiting for a response. (I haven’t gotten around to implementing the exit logic yet, so any other configuration would probably lead to the manager exiting too early or never shutting down. It shouldn’t take too long to fix, but I digress.)
Update 1: Initial implementation is working (albeit maybe not pretty yet), and it works as follows. In a test case, a client pings the server, receives an ACK, then sends a shutdown, and receives a shutdown ACK, at which point we consider the test passed. It sounds straightforward, but a good bit goes on under the hood as you might imagine. At this stage, I think it’s a pretty nifty PoC!
Notes/Ideas I Had While Using `qjsonrpc`
Here are a few things I noticed along the way. Nothing major, but I figured I’d start a list here so I don’t forget later. After I do a bit of tweaking here and there on the WIP code, I’ll get back to this and take a swing at some of these items.
- Error codes are a bit cumbersome right now, since each one is hardcoded (e.g. my implementation and the current implementation in `authd` use just a single error code to report every error under the sun to the client). It might be cool to have a custom `#[derive]` statement which associates each variant of an `Error` enum with a unique error code, maybe allowing for custom prefixes (e.g. all the `4xx` codes constitute one error category, etc.). That way, when reporting errors to the client, it would be quick to convert your error into a `qjsonrpc` error code. It would also save on a lot of hardcoded typing/maintenance effort.
- Type names are awfully similar sometimes. For example, I personally found `IncomingJsonRpcRequest` and similar formations unintuitive. That said, after reading the source and writing my implementation, I see why it’s called that. I don’t know if this is an “issue” (and I admittedly don’t have an alternative off the top of my head), but it’s worth pointing out.
- `send()` and similar methods seem like they could perhaps be simplified, maybe using some clever templating, but maybe it’s already been considered.
- `Endpoint` objects are still pretty low level. Calling `bind()`, listening for connections, and iterating over requests feels like working with Unix sockets. Since JSON-RPC 2.0 is a client/server model, providing an abstracted `Server` type, similar to `reqwest::Client` or others, would reduce boilerplate in a lot of non-specialized cases.
I’ll keep updating this post as I go. Feel free to offer feedback if you want. I just wanted a place to organize my thoughts a little and share a bit of toy code I wrote.