RFC 46 – New Auth Flow

Thanks @ben, yes I suspected this could be a problem, and I agree it’s not good to have the authenticator expire keys.

Just in case you are not aware, TV set-top-box devices work like this within a DOCSIS network: they are registered, and a certificate is installed in the device, which it then uses to communicate with the servers (CMTS).


Are there any conditional access mechanisms available or being considered other than using time? Such as, revoke after N-requests, or a general “Revoke all temporary grants to all apps (since login / ever)” etc.

Thinking from the user’s side, I’m wondering what I might want, and some kind of granularity in revocation might be useful. As a user I came up with three broad categories of access scope to help me think this through:

  1. Everything Scope - this scope is always as high value/risk as the most valuable data in the account. Users can know this, and in theory could manage risk and security with different accounts. For example, keeping one for housekeeping, general administration and small change, another account for Safecoin vault earnings or investigative journalism. I think that’s fine in theory but IMO for most people is hard to do, and so it would be better if value/risk could be managed by granularity of access scope within the account, for as many users and use cases as possible. However, as I understand it whatever we do will have a limit that is not going to be very secure, and that the only very secure boundary is going to be account login, because apps can’t be trusted to store credentials as well as “in the user’s head”, and not to secretly steal credentials from each other. This means extra care will be needed with everything scope, unless I’m wrong? This has occurred in the past :slight_smile:. By extra care, I mean to ensure the user understands, knows what alternatives are available and finds them easy to make use of at the point this is realised (which might be long after the app is first installed or used).

  2. App Specific Silo Scope - here value equates to the most valuable data that the app can generate / access / is provided by the user. So value/risk is very variable, but often well defined according to the nature of a given app, and so relatively easy for a user to estimate and understand, even in advance. Examples: password manager v invoice generator v calendar v contacts manager v alarm clock etc. This is not always the case though, see next…

  3. Task or Data Value Scope - covers the case where value/risk relates directly to particular data or the task associated with it. In terms of apps, think of spreadsheet, word processor etc. where the value of data produced by the app, or which you want the app to access, can’t be known in advance, or is simply anything associated with a particular task. For a given task or data silo it will also be impossible to know which apps will need access to it in advance, so here authorisation is at the point of use, and perhaps per use, and not per app. Thinking in terms of tasks, we can have shopping (so shopping list), writing a product review for magazine article (so spreadsheets comparing products, word processor for drafts, article etc), or working on maidsafe’s next acquisition etc. :wink:. For many things default security is adequate, for others not so much, but only for very high value/risk things will the majority of users bother to take even the simplest precautions (look at Hillary Clinton’s email, and others who should know better at the top of government using services like Yahoo! even after they’ve been exposed as insecure…again!). I don’t know if that makes task/data silo access granularity pointless, or just means we need to make using such a facility very intuitive and as effortless as possible. I think it is worth trying though. It is not done well yet only because it is very hard to achieve (because it requires humans to think, and do work!) not because it isn’t very important. It is at the root of much of the surveillance and invasion of privacy we know about, and is what makes it hard for anybody to regain control. So for example we could consider the ability to lock data according to which folder it is in, and for permission to read and/or write to be required according to criteria that I have yet to think about. But for example, on access per session. Or in order to read. Or in order to write. Per folder, or even per file. 
Obviously though, there’s no point providing features if they aren’t going to get used, so I think we need to consider usability almost as much as technical feasibility and theoretical value (by which I mean security achieved). Even if we can’t do this now, it may be worth thinking about in case there are things we’d like to implement in advance that ensure it can be achieved with the least effort and burden for users later.

IMO usability is crucial in all of this. Everyone has to login to their SAFE account, but after that we know hardly anyone will put an ounce more effort into managing their risk and security than we actually enforce, and anything we enforce which causes the least bit of inconvenience will push users towards less secure alternatives which is bad.

In the end everyone wins by keeping as many users as possible more secure, so I think we should work hard to maximise usability, and within that do the best job we can to provide security.


Of course other mechanisms are being considered, but this RFC doesn’t contain anything about automatic revocation, only about authentication and user-triggered revocation. While we might add any of those in the future, we haven’t discussed that yet, nor have we gained enough experience (aka having apps) to know what we’d be doing there. Automatic revocation is out of scope for this one.


Not sure where to put this but rather than start a topic I’ll try here…

Friendly Account Names

Quite a lot of us will have multiple accounts and at the moment we have no way to:

  • know which we’re logged into
  • or refer to them without potentially exposing the Account Secret (e.g. by writing down a list of the accounts we have set up somewhere)

An App could do this separately, but if we provide for it within the standard account metadata, and it is accessible to all apps, I think it would be much more useful - each App would be able to show which account is active, for example.

Example account names might be:

Very Secret Stuff

These would have no meaning in themselves - just labels that the user can set and edit, with the value stored in the metadata for the account itself. So it is not an account “username”, you can’t type in “Daily” to say I’m trying to log into that account.

When SAFE Beaker, or any App, authorises with SN and retrieves account metadata, it would be able to show this somewhere in the UI:

  • “Connected to SAFE Network / Unauthorised”

When you authorise, might become:

  • “Connected to SAFE Network / Daily account”

This would require a way to “name” accounts, but this need not interfere with account creation at all, or it could include an extra input field with a default value of “unnamed account”, for example, which can be overwritten or left as is. SAFE Beaker or other apps could provide UI to edit the name of the currently authorised account.

I doubt any of this needs API changes, but providing for it from the start (e.g. in the Beaker account creation UI, plus an account renaming UI), and specifying the corresponding account metadata value in the API docs, would allow apps to cater for users with multiple accounts in a coherent way.


I think an API modification is necessary to store the friendly name in the account. This is needed to retrieve the name when the user reconnects from another station. And even if the user reconnects from the same station, I would advise against storing this info on the local disk, to avoid leaving traces after visiting the network.


Currently safe_core generates the real locator and credentials from the user-entered secret and password in a 2-step process involving an intermediate (password, keyword, pin) triplet:

        // Derive the (password, keyword, pin) triplet from the two user-entered secrets
        let (password, keyword, pin) = utility::derive_secrets(acc_locator, acc_password);
        // Network location of the session packet, derived from keyword + pin only
        let acc_loc = Account::generate_network_id(&keyword, &pin)?;
        // Credentials used to encrypt/decrypt the session packet
        let user_cred = UserCred::new(password, pin);

I am not able to link this Rust code in safe_core to the JavaScript code in safe_launcher, but I suppose that the acc_locator variable corresponds to the user secret.

If this assumption is correct then I think there is a problem in this code, because the derive_secrets function generates a keyword and a pin that depend only on the secret. This means that 2 users cannot use the same secret (like Mark, for example). IMO this is clearly a bug, because anyone should be able to use the name Mark. The correction is very simple: avoid collisions on the session packet by adding the password as a complementary element in the real locator (the session packet is a MutableData keyed by the acc_loc variable).

If this bug is corrected (or my assumption is not correct, meaning that the acc_loc key already depends on both secret and password), then we wouldn’t need a third element for the friendly name, and the launcher could just display the first element of the credentials. This element should be renamed to something like “account name” instead of “account secret”, and anyone could use common names like Daily or Wallet without colliding with anyone else. The needed personal uniqueness and secrecy would be provided by the second element of the credentials, aptly named “account password”.

No API modification is needed with this solution, and an added advantage is that complexity requirements need to be checked only on the second element of the credentials.
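To make the collision argument concrete, here is a minimal Rust sketch of the two derivation schemes. It uses the standard library’s DefaultHasher purely as a stand-in for safe_core’s real KDF (which goes via scrypt etc.), and the function names are hypothetical, for illustration only:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for safe_core's real KDF (which goes via scrypt etc.).
fn hash64(parts: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts {
        p.hash(&mut h);
    }
    h.finish()
}

// Current scheme: the session-packet location depends only on the secret.
fn acc_loc_current(secret: &str) -> u64 {
    hash64(&[secret])
}

// Proposed scheme: mix the password into the locator, so two users
// can pick the same account name without colliding.
fn acc_loc_proposed(secret: &str, password: &str) -> u64 {
    hash64(&[secret, password])
}

fn main() {
    // Two users both pick the name "Mark": same location today...
    assert_eq!(acc_loc_current("Mark"), acc_loc_current("Mark"));
    // ...but different locations once the password is mixed in.
    assert_ne!(
        acc_loc_proposed("Mark", "pw-alice"),
        acc_loc_proposed("Mark", "pw-bob")
    );
}
```

With the second scheme, uniqueness of the pair (name, password) is what matters, which is exactly why the first element could then double as a friendly display name.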

Not a bug but a feature. The location depends only on the “secret”, but this means that we must generate a unique secret. Otherwise, if the secret is something simple, like Mark, we decrease security and all responsibility rests solely on the password.

The javascript code is here.

You’re right, it’s not a bug, because MaidSafe coded it on purpose.

But initially, on 19 July, they coded a pin depending on both locator and password. See the commit comment:

User supplies 2 secrets: Account locator and Account password. Each is hashed. Keyword is the hash of locator and password is the hash of account password. Pin is derived from the combination of two hashes and acts a salt. These 3 derivatives then work as usual internally (going via Scrypt etc.).

And then they explicitly changed their mind on 26 July with the following commit comment:

Derive PIN exclusively from account locator - do not involve password in this process.

The problem is that I find the first version better:

  • With the second version, the user must enter 2 password fields instead of the traditional name + password fields. This is complex for the user and doesn’t bring any added security. It’s only necessary to prevent the user from choosing common names, to avoid collisions. That’s not a good enough reason for me.

  • With the second version, some changes must be made to the API to implement what @happybeing is asking for. With the first version, the entered name could be the friendly name itself. With the second version, the user must enter a third field during account creation, making a total of 3 fields (2 passwords and a friendly name).

The second commit comment doesn’t explain why the pin was changed and I’m afraid these points were not considered.

Note: I call “password” a field whose value must meet complexity requirements about length and kinds of characters.

Don’t get me wrong, but isn’t this solely a UI feature of the authenticator? Do I understand correctly that you want distinctly separated accounts? Well, right now the authenticator only has single-account (per-session) support, but we are already pondering allowing multi-profile support, where you could store a name for some login credentials and hold them in a master account (maybe?), and the UI would ask you which account it should grant access to when an App asks to connect. This is already possible today, will be possible in the next version, and wouldn’t need any changes on the network itself.

This won’t be a focus of this version, nor is it on the scheduled plans I know of (and therefore, if we want to continue the conversation, I’ll split it out of the RFC convo). However, with the new authenticator, it should be possible for the community to fork it or provide a wrapper, which would allow you to do this already today.

The entire conversation with all reasoning and explanations can be found here. It is a complex issue and I’d rather not open the discussion about it again here. Especially as it has nothing to do with this RFC.

Don’t get me wrong, but isn’t this solely a UI feature of the authenticator?

Quite so (you know better than I do :slight_smile:). I don’t mind how we get it, and your comments about multiple account handling go way beyond this. I’m just highlighting the issue, but as usual MaidSafe are way ahead of me :wink:


Incoming: Updates on the public Containers, Container encryption

While working on the container encryption last week, we realised that the previous usage of Nonces would break our key-lookup system. Thus we created a different scheme as outlined in this PR.

Secondly, the public container and many other details weren’t specified very well yet. This PR restructures some of the information, and defines the Container and a new LinksContainer convention. By using this and other existing conventions, the RFC could be made much clearer and focus on the differences between the parts rather than explaining the entire mutable structure every time.

(quotes are from last version linked by @ben)

rather than having a hierarchy of StructuredData pointing to “subdirectories”, we will flatten the structure into a single key-value-store mapping and emulate a file system like access on top of that.

If I understand correctly this means that there is no explicit notion of a directory, only an implicit one defined by the set of existing file names in a container. For example, a file named like A/B.txt defines an implicit directory A containing a B.txt file.
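Under this reading, listing a “directory” is just a prefix scan over the flat key set. A minimal Rust sketch, assuming a container is simply an in-memory map of full-path keys (the real container API and serialisation will differ):

```rust
use std::collections::BTreeMap;

// Hypothetical flat container: full-path keys -> file contents.
// Directories are implicit: "A/B.txt" implies a virtual directory "A".
fn list_dir<'a>(container: &'a BTreeMap<String, Vec<u8>>, dir: &str) -> Vec<&'a str> {
    let prefix = if dir.is_empty() {
        String::new()
    } else {
        format!("{}/", dir)
    };
    let mut entries: Vec<&str> = container
        .keys()
        // Keep only keys inside the requested virtual directory...
        .filter_map(|k| k.strip_prefix(prefix.as_str()))
        // ...and take just the next path component (file or subdirectory).
        .map(|rest| rest.split('/').next().unwrap())
        .collect();
    // BTreeMap keys are sorted, so duplicates are adjacent.
    entries.dedup();
    entries
}

fn main() {
    let mut c = BTreeMap::new();
    c.insert("A/B.txt".to_string(), Vec::new());
    c.insert("A/C.txt".to_string(), Vec::new());
    c.insert("top.txt".to_string(), Vec::new());
    assert_eq!(list_dir(&c, "A"), vec!["B.txt", "C.txt"]);
    assert_eq!(list_dir(&c, ""), vec!["A", "top.txt"]);
}
```

The point is that no explicit directory object exists anywhere; “A” only appears in the listing because some key happens to start with “A/”.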

The problem I see is that there are only 100 entries in a MD, which means a whole file system cannot have more than 100 files; that’s not enough for me.

Where the key is a UTF-8-String encoded full path filename, mapping to the serialised version of either another Container in the network or a serialised file struct …

The Container should point to another container following the same convention as its parents - so at least NFS - or to a serialised file struct as described before.

These pointers to containers could be used to enlarge a file system, but then how is file-name uniqueness ensured?

In a single container, 2 entries named A/B.txt cannot be created, but that’s impossible to control across several containers, as they are managed by different vaults.

The previous NFS, with each directory stored in an SD, didn’t have this problem.


Where did you read that? I didn’t know there was a 100-entry limit on MD. The only limit I know of is a total of 1MB, and we want to lift that before ending alpha…

Secondly, we are already splitting the “root” up into multiple area containers, in order to be able to give specific access to just a subset of data. The containers spec already mentions the 7 defaults and explicitly leaves open the possibility that the user might create more top-level directories in the future.

Well, one key problem we had with the SD approach was that it required tree traversal if we wanted to change the permissions. Though we explicitly allow containers to link to other containers, we’d still not traverse those to fix permissions, but instead allow that as a usage pattern to “link” a key into a separate area. Think of adding a link to your publicly shared website from the app directly into the _public container, so that you can let others know about it.

Regarding pure NFS, the only reason we even added it was because of the current 1MB limit on MD, but if this becomes an often-required usage pattern, or we can’t lift that limit, we might change the implementation later on to provide for other patterns.

In the first iteration we will probably not make container-pointers transparent; instead you’d have to explicitly ask for a key that exists, and if that key holds a container, we return a container. Then you could do another lookup in there. So you’d have to say whether you want A/b.txt, or b.txt within the container A (by first looking up A and then the key b.txt within it), but you’d have to know where to split it. If you explicitly expected a file at that location, we might also tell you that there isn’t any file there (but a container).
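A rough Rust sketch of that non-transparent, two-step lookup, with a hypothetical Entry type standing in for the real serialised structs (in-memory maps instead of network fetches):

```rust
use std::collections::HashMap;

// Hypothetical entry type: a key maps either to a file or to another container.
enum Entry {
    File(Vec<u8>),
    Container(HashMap<String, Entry>),
}

// Non-transparent lookup: the caller decides where to split the path
// and does one lookup per container level.
fn lookup<'a>(c: &'a HashMap<String, Entry>, key: &str) -> Option<&'a Entry> {
    c.get(key)
}

fn main() {
    let mut inner = HashMap::new();
    inner.insert("b.txt".to_string(), Entry::File(b"hello".to_vec()));
    let mut root = HashMap::new();
    root.insert("A".to_string(), Entry::Container(inner));

    // Step 1: look up "A"; step 2: look up "b.txt" inside the result.
    match lookup(&root, "A") {
        Some(Entry::Container(a)) => match lookup(a, "b.txt") {
            Some(Entry::File(data)) => assert_eq!(&data[..], &b"hello"[..]),
            _ => panic!("expected a file"),
        },
        // If a file was expected at "A", the caller learns it is a container.
        _ => panic!("expected a container"),
    }
}
```

Transparent traversal would fold the two steps into one path-resolving call; as noted, whether the NFS emulation will do that later is undecided.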

Whether we’d do transparent traversal in a later stage within the NFS emulation hasn’t been decided yet. And while I could share what I consider the better approach for this, there will be the appropriate RFC and time to discuss it then.

In the limits section of the RFC


The MutableData data type imposes the following limits:

  • Maximum size for a serialised MutableData structure must be 1MiB;
  • Maximum entries per MutableData must be 100;
  • Not more than 5 simultaneous mutation requests are allowed for a (MutableData data identifier + type tag) pair;
  • Only 1 mutation request is allowed for a single key.

You win by a few seconds…:wink:


Then a directory is a container. That’s perfect for me: uniqueness is solved, and this is analogous to what was done with SDs (but it is not what is described in the text I quoted).

Thanks for the link; I was not aware of that limitation. It must have been added after I last saw the mapped-data draft.

That is troubling indeed. I tried to figure out where it came from and the reasons behind it (as well as whether it might be possible to lift it), because I wholly agree: a hundred files are nothing. I wasn’t able to find much in the internal discussions; I will ask the routing team tomorrow to try to find some answers.

Well, kinda. There are no directories - those are purely virtual. The container could also be stored under a/b/mycontainer.txt. While that’s not recommended, it is not prevented either (at least not as part of this RFC). It would just come back as something you can read or write, because it is a container rather than a file. So I guess in an emulation it would show that entry as a non-file entry, and you could open the address stored there as a new container.

I agree that what you want to achieve is there and can be achieved. I think it is just semantics, meaning and language that get a little in the way at this point (as we removed the notion of a directory in the first place). But essentially yes: putting containers into containers is totally possible and explicitly allowed.


These numbers are just an initial limitation, btw, to start off with in testnets so the approach can first get verified. The RFC probably needs to be edited to make this clearer. They by no means indicate what we aim for permanently. They should currently allow the data type and its associated functionality to be tested, before scaling up the capacities such as any listed in the “Limits” section. Similar points can also be made for the number of parallel mutations a container might accept for a given entry, or …

The main potential concern with having no limit on the entry count to start off with would be data concentration: some groups holding MutableData would get disproportionately large chunk stores and would thereby influence churn duration, as churn operates per entry as detailed in the RFC. While these concerns can be handled in a few ways to spread the data seamlessly to the user, they still need to be analysed and then discussed in an iteration, as they aren’t fleshed out yet.

Also, as far as the type itself goes, it doesn’t necessarily force a “whole filesystem” into a single MutableData, I’d think. A MutableData can still be turned into a linked list if needed, where the last entry points to the next MutableData as a continuation, and so on, even with this size limit; or it can achieve the SD equivalent of nested directories if needed too. This of course isn’t going to give a “lookup” across the entire data set if nested/linked, so a choice depending on use case may apply.
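That linked-list idea can be sketched in Rust. This is a toy model under stated assumptions: the 100-entry limit from the RFC, one slot per container conceptually reserved for the continuation link, in-memory maps in place of real MutableData, and hypothetical names throughout:

```rust
use std::collections::BTreeMap;

const MAX_ENTRIES: usize = 100; // limit from the RFC's "Limits" section

// Toy model of a chain of MutableData containers. In a real chain the
// reserved last slot would hold the address of the next MutableData;
// here we just model that capacity with a Vec of maps.
struct Chained {
    containers: Vec<BTreeMap<String, String>>,
}

impl Chained {
    fn new() -> Self {
        Chained { containers: vec![BTreeMap::new()] }
    }

    fn insert(&mut self, key: String, val: String) {
        // Keep one slot free in each container for the continuation link.
        if self.containers.last().unwrap().len() >= MAX_ENTRIES - 1 {
            self.containers.push(BTreeMap::new());
        }
        self.containers.last_mut().unwrap().insert(key, val);
    }

    // Lookup has to walk the chain: there is no single-container lookup
    // across the whole data set, which is the trade-off mentioned above.
    fn get(&self, key: &str) -> Option<&String> {
        self.containers.iter().find_map(|c| c.get(key))
    }
}

fn main() {
    let mut c = Chained::new();
    for i in 0..250 {
        c.insert(format!("file{}.txt", i), format!("data{}", i));
    }
    // 250 entries at 99 usable slots each -> 3 containers (99 + 99 + 52).
    assert_eq!(c.containers.len(), 3);
    assert_eq!(c.get("file200.txt"), Some(&"data200".to_string()));
}
```

The cost is visible in get(): a key near the end of the chain needs one network fetch per container on the way there, so a chain suits append-heavy data better than random lookup.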


Hi all,
Amendment proposal from @happybeing:


Re-opening this as I was probably too hasty closing it previously… you’re all going to see a lot more notifications from me going forward as we reboot the RFC discussions (#sorrynotsorry).

