RFC 48 – Authorise apps on behalf of the owner to mutate data

safe_launcher
routing
safe_core
safe_vault

#1

Discussion topic for RFC 48 – Authorise apps on behalf of the owner to mutate data


New authentication flow + Granular permission control
#3

Two questions:

  • Will the original owner still be able to mutate data? (Not via an app; for example, from a Rust program that uses the safe_core crate directly.)

  • Could MaidManagers limit the PUT usage of each app? (For example, by redefining the auth_keys field as a BTreeMap mapping each public key to its remaining available space.)
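To make the second suggestion concrete, here is a hypothetical sketch of that quota scheme. The actual vault code is Rust (where the suggestion's BTreeMap would live); a Python dict is the closest stand-in, and the class and method names here are made up for illustration, not taken from safe_vault.

```python
class MaidManager:
    """Hypothetical sketch: auth_keys maps each app's public key to its
    remaining PUT allowance instead of being a plain set of keys."""

    def __init__(self):
        self.auth_keys = {}  # app public key -> remaining PUT quota (bytes)

    def authorise(self, app_key, quota):
        """Owner grants an app a bounded amount of storage."""
        self.auth_keys[app_key] = quota

    def handle_put(self, app_key, payload):
        """Accept a PUT only if the app is authorised and under quota."""
        remaining = self.auth_keys.get(app_key)
        if remaining is None:
            return "refused: key not authorised"
        if len(payload) > remaining:
            return "refused: quota exhausted"
        self.auth_keys[app_key] = remaining - len(payload)
        return "accepted"

mm = MaidManager()
mm.authorise("app-pub-key", quota=10)
print(mm.handle_put("app-pub-key", b"12345678"))   # accepted
print(mm.handle_put("app-pub-key", b"12345678"))   # refused: quota exhausted
print(mm.handle_put("unknown-key", b"x"))          # refused: key not authorised
```

An errant app then hits its own ceiling rather than draining the owner's whole account.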


#4

Actually, this would be good to have as a permissions parameter.

It would also help stop errant apps from chewing through one's wealth.

Isn't that just the same as an app, except that you wrote it?


#5

I think I have the answer, at least for SDs. The routing crate checks the signature against any key passed in: it can be the user's key, the key associated with the (user, app) pair, a dynamically generated key, ….
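A minimal sketch of that "valid under any passed-in key" check, with one big caveat: the network uses public-key (ed25519) signatures, which the Python stdlib lacks, so HMAC-SHA256 stands in for the signature here. The function name is invented for illustration, not taken from the routing crate.

```python
import hashlib
import hmac

def is_valid_mutation(payload, tag, authorised_keys):
    """Accept a mutation if its tag verifies under ANY authorised key.
    HMAC-SHA256 stands in for the ed25519 signatures routing actually checks."""
    return any(
        hmac.compare_digest(tag, hmac.new(k, payload, hashlib.sha256).digest())
        for k in authorised_keys
    )

owner_key = b"owner-secret"
app_key = b"app-secret"
payload = b"new SD content"

# The app signs the mutation with its own key...
tag = hmac.new(app_key, payload, hashlib.sha256).digest()

# ...and it is accepted as long as that key is in the authorised set.
print(is_valid_mutation(payload, tag, [owner_key, app_key]))  # True
print(is_valid_mutation(payload, tag, [owner_key]))           # False
```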

For MDs, one thing that bothers me is this sentence in the RFC: "The type MutableData thus does not need to store any signatures." IMO, a static analysis of the data should allow one to conclude whether or not it is valid. Because of the absence of signatures, the envisioned MD structure seems less secure than the current SD structure.


#6

I agree, to an extent. One thing much of this overlooks, though, is data chains. When these are in place, they help a lot. It's a little different, though: a group agrees on the hash of the data, and this group consensus establishes that the data is in fact valid and has a valid owner. So, as you say, routing can tell the mutation is from that owner. So good there.

The weakness (and I agree) is static analysis of a single data item; that is true. With data chains, though, you will not only be able to confirm the data is correct (they use a SHA-3 hash to confirm the data-on-disk part), but you will also be able to see that the data is "network valid", i.e. it was correctly agreed to be network data and not just created.
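The on-disk part of that check is just a hash comparison, which can be sketched in a few lines (Python's stdlib SHA-3 standing in here; the function names are invented for illustration, and real data chains add the group-consensus layer on top of this):

```python
import hashlib

def network_agreed_hash(data):
    """The hash the group reaches consensus on (SHA-3, as mentioned above)."""
    return hashlib.sha3_256(data).hexdigest()

def confirm_on_disk(data_on_disk, agreed_hash):
    """A node (or client) re-hashes the stored bytes and compares them
    against the hash the group agreed on."""
    return hashlib.sha3_256(data_on_disk).hexdigest() == agreed_hash

chunk = b"some mutable data"
agreed = network_agreed_hash(chunk)
print(confirm_on_disk(chunk, agreed))              # True
print(confirm_on_disk(b"tampered bytes", agreed))  # False
```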

So, for the short term, we do lose some static analysis of single data items; but without knowing whether an item is indeed "network valid", static analysis on its own is less powerful. With data chains we will have the best of both worlds: the data will have to hash correctly and have the correct format etc., but it will also be confirmed as agreed by a group that the user can verify was, at some point in time, a network-valid group.

In saying all of that, I do agree there is still an argument over signature validation versus simple hash validation: there is a slightly more open attack vector, in which an attacker uses bad data plus "random filler" to achieve the same hash. It's reduced by the types' size constraints, but it does still exist. It is, though, potentially an extremely difficult attack, and arguably somewhat similar to an attack on signature validation, using similar techniques. I am not fully convinced it loses security, but I am not convinced it doesn't either. We should perhaps do a bit of cryptanalysis on this.

So I think this should not be a closed conversation; we need to measure the impact of that extra 64 bytes (the signature size) and the CPU cost on the network against the possibly reduced attack vector in this case.


#7

While I agree with the obvious benefits that data_chains would bring, I don't think we'd be losing anything here, tbh, especially considering the very specific case being analysed, which is the "client side" alone. I'd actually say the network is being specific about what it ensures with MD.

So, to elaborate: we are talking about signature validation as a static validation. A signature cryptographically tells us that a given payload was signed with a specific key, so of course this check is only meaningful for keys we trust as valid, in this case the owner's. So if I'm the owner and I get some data which holds a signature I can check with my key, then I know the data has not been mutated maliciously. The same certainly can't apply to "public" data, since the sender's keys could be anything and the owner doesn't know the sender. So this signature validation is for owners, or, scaled up, for people whose public signing key I (the validator) know outside the scope of this data itself.

Now, from the network's POV, group consensus is the key part of any trust. We do not trust any single authority, especially a client; even for efficiency, we currently reach consensus on the hash of the expected result before getting the full message from a single node. So the act of getting a blob from a group authority is what the network ensures is genuine and not mutated unexpectedly.

Back to the client-side static analysis: since this is client side, the signature can very well be "expected" as part of the data payload itself. So, in the case of our example above, when I, as the owner, store an MD, I can make the entry value a complex type that includes a signature (it doesn't even have to be complex; just append the signature to the string). Now, when I have an MD owned by me, I can simply say I only trust it when I can decode my entry value and extract the signature, which I expect to be the result of signing the remaining payload with my key. Static analysis achieved :slight_smile:
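That "append the signature to the string" convention can be sketched as below. Same caveat as before: HMAC-SHA256 stands in for the owner's real public-key signature (an ed25519 signature would be 64 bytes, not 32), and these helper names are invented for illustration, not part of any SAFE API.

```python
import hashlib
import hmac

TAG_LEN = 32  # HMAC-SHA256 tag length; a real ed25519 signature is 64 bytes

def make_entry_value(payload, owner_key):
    """Store the payload with the owner's tag appended, as post #7 suggests."""
    return payload + hmac.new(owner_key, payload, hashlib.sha256).digest()

def check_entry_value(entry_value, owner_key):
    """Split off the tag, re-derive it from the remaining payload, and
    return the payload only if the two match; otherwise reject (None)."""
    payload, tag = entry_value[:-TAG_LEN], entry_value[-TAG_LEN:]
    expected = hmac.new(owner_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

key = b"owner-secret-key"
value = make_entry_value(b"hello MD", key)
print(check_entry_value(value, key))           # b'hello MD'
print(check_entry_value(value, b"wrong-key"))  # None
```

The network stores the entry value as an opaque blob either way; the validation is purely a client-side convention between the owner and whoever they share the key material with.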

So, in the specific cases where the owner knows the only senders he expects to mutate the data, and "trusts" them (knows their identity via their signing keys), he can impose a simple client-side requirement and get what he needs. It's not something related to the network, or to the network-enforced type primitives, at that point in my opinion.


#8