I think this was to prevent people from being able to change content and claim it was never changed. Say you have an entry with version N which other people take as valid, perhaps some contractual information. With hard delete you could remove the entry, insert a new one with the same key, and update it until it reached the same version N but with new content. By clearing the entry instead, observers can at least see that the value at version N is no longer valid.
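To make the attack concrete, here is a toy sketch (plain JavaScript, not the real SAFE API — `ToyStore` and its methods are invented for illustration) of a versioned key-value store with hard delete, showing how an entry can be recreated so it reaches the same version N with different content:

```javascript
// Conceptual sketch: a toy versioned key-value store illustrating the
// version-spoofing attack that soft delete (clearing) prevents.
class ToyStore {
  constructor() { this.entries = new Map(); } // key -> { value, version }

  insert(key, value) {
    if (this.entries.has(key)) throw new Error('entry exists');
    this.entries.set(key, { value, version: 0 });
  }

  update(key, value) {
    const e = this.entries.get(key);
    e.value = value;
    e.version += 1;
  }

  // Hard delete: the key AND its version counter are lost entirely.
  hardDelete(key) { this.entries.delete(key); }

  get(key) { return this.entries.get(key); }
}

const store = new ToyStore();
store.insert('contract', 'original terms');
store.update('contract', 'original terms v2'); // now at version 1

// Attack: hard delete, then recreate and update until the version matches.
store.hardDelete('contract');
store.insert('contract', 'forged terms');
store.update('contract', 'forged terms v2'); // version 1 again

// An observer who recorded "version 1 = 'original terms v2'" now sees
// version 1 with different content and cannot tell the entry was ever
// deleted and recreated.
console.log(store.get('contract')); // { value: 'forged terms v2', version: 1 }
```

With soft delete the version counter survives the "delete" (the value is merely cleared and the version bumped), so a recreated history can never land back on the same version with different content.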
This is a whole topic in itself; my understanding is that there are also plans/proposals for having the network keep the entire history of an entry or data.
Thanks Gabriel, that makes sense, but it has some unfortunate consequences.
For example, consider an MD whose keys are regularly invalidated, such as an MD used as a file index as in SAFE-NFS. Every time a filename is changed, an unused key is left behind, which takes up space in the MD and has to be filtered out whenever the index is searched (such as to list a directory).
So I’m wondering if there are plans to mitigate this kind of issue?
Similarly, over time a container will become cluttered with discarded keys.
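To illustrate the clutter, here's a toy sketch (illustrative names only, not the SAFE API) of a file index under soft delete, where each rename leaves a cleared tombstone entry that every listing has to filter out:

```javascript
// Toy sketch of the stale-key problem under soft delete: a rename
// clears the old entry rather than removing it, so the key remains
// and listings must skip the tombstones.
const index = new Map(); // filename -> { value, version }

function insert(name, addr) { index.set(name, { value: addr, version: 0 }); }

function rename(oldName, newName) {
  const e = index.get(oldName);
  insert(newName, e.value);
  // Soft delete: clear the value but keep the key and bump the version.
  index.set(oldName, { value: null, version: e.version + 1 });
}

function listDirectory() {
  // Every listing has to filter out the cleared tombstone entries.
  return [...index.entries()]
    .filter(([, e]) => e.value !== null)
    .map(([name]) => name);
}

insert('draft.txt', 'addr-1');
rename('draft.txt', 'final.txt');
rename('final.txt', 'final-v2.txt');

// Only one live file, but three keys are now stored in the MD.
console.log(listDirectory()); // ['final-v2.txt']
console.log(index.size);      // 3
```

Since an MD has a bounded number of entries, this accumulation isn't just a filtering cost: the container eventually fills up with dead keys.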
I see. But now we have a list of items that can only grow over time. It’s not very practical as a list. For example, the MD for NFS will fill up with empty fields pretty fast.
Sounds like the problem of data disappearing needs to be fixed elsewhere, if at all. Maybe apps that need it should figure out a protocol using immutable data for this kind of problem. Idk.
It’s just that the impact on the list of entries is too big IMHO.
one which exhibits current behaviour, and is useful where keys rarely or never need to be removed
or where keys can be deleted
In the meantime it is possible to simulate the latter if you need it, at the cost of not being able to use the built-in NFS API ‘emulate as’ functionality.
You could do this by storing your index in a single MD key, but since the cost of updating this would likely be the same as for equivalent-size immutable data, it might be more sensible to use an ID for the index rather than an MD. You would then have the option of also storing references to your IDs as a history, giving rollback/archive functionality too.
I can see good reasons for building your own layer rather than using the built in features (such as emulate as NFS). For ‘serious’ applications, you could optimise the storage to minimise PUT costs by moving infrequently used data together, and frequently changed data together for example.
So we can do what we need with the existing MD & ID, which means we don’t have a good case for changing the API for this at least.
That sounds backwards: use immutable data when you need to remove items from a list, and mutable data when you want an immutable list… It’s not an elegant, intuitive solution, and it makes the NFS API unusable in the long term as a file system; that should hint that something is off in the design.
If we need an immutable list of items, create a key/value list within an ID where the values point to MDs that can be modified.
So I don’t see why this should be the default behaviour of MD when it’s doable with an ID in the rare cases where you need it.
My 2c anyway, I understand I might be missing something.
Edit: wait, can’t we create an MD without the delete permission to get an undeletable list?
Just to be sure there is no misunderstanding: the use case for clearing entries instead of deleting them is not for immutable data, but for mutable data for which we want to know if it has been modified. A hard delete implementation doesn’t allow this, because an entry could be recreated with the same version as before but different content.
I’m with you. Could we achieve the same result with an MD that has no owner and no delete permission? Or, not really sure if it’s possible, but an MD set up so that the owner can update and insert but not delete?
This would seem to work for MDs created for arbitrary purposes, but am I right in thinking it won’t help applications which want to use the standard containers (to store files, for example)? Unless those are made to allow delete, which I’m not sure would be acceptable.
I also don’t understand. What is proposed here is possible right now for any MD type, including standard containers: if at some point an MD has no delete permission and no ‘modify permissions’ permission granted, then current entries cannot be modified covertly (meaning they can still be modified, but the entry version will be incremented with no possibility to cheat). If this is confirmed by @maidsafe, I think that delete operations can be implemented as a hard delete.
It is possible that current client tools don’t allow removing these permissions. But that’s only a matter of HMI still to be developed.
My point is not that this couldn’t be applied to the root container. I agree with what you say there.
My point is that MaidSafe maybe arrived at this behaviour because they want the existing soft delete behaviour for those containers. In which case, any app that writes keys to them will not be able to make use of hard delete for those (root) containers. Hence, an app storing files within those root containers won’t benefit from the option to hard delete file entries, since hard deletes won’t be allowed in the root containers. I don’t know that to be the case, so I’m only speculating that this change won’t help in that case, which is probably a very common one.
Maybe a bug, feature or rookie coding error. All help appreciated!
I have some code which looks roughly as below. I iterate over the MD entries of the root container and use safeNfs.fetch() to get info related to those entries, where I’ve used NFS emulation to insert() immutable files. The function _makeFileInfo() uses the fileHandle to get metadata for the file and returns a Promise. The iteration works and a correct value gets inserted at listing[ name ].
But the code does not produce the desired result after the iteration. After the ‘Iterations finished’ statement, listing is empty.
So it looks as if the .then of the forEach() iterator completes before the promises created inside the loop have completed (i.e. the call to fetch() shown, and the one returned by _makeFileInfo()). So I have two questions:
is this a bug in the forEach() implementation, or should the iterator complete regardless of the state of the Promise created by the fetch and _makeFileInfo()?
if it isn’t a bug in the forEach() implementation, what would the correct structure be here (without using await if possible)?
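For what it’s worth, an iterator completing before the promises created in its callback settle is standard JavaScript behaviour, not a bug: forEach() ignores the return value of its callback, so the promises are simply discarded. The usual fix without async/await is to collect the promises and wait on them with Promise.all(). A minimal sketch, where fetchInfo() is a hypothetical stand-in for the safeNfs.fetch() + _makeFileInfo() chain:

```javascript
// Stand-in for the async fetch + metadata chain: resolves after a delay,
// like a network call would.
function fetchInfo(name) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ name, size: name.length }), 10));
}

const names = ['a.txt', 'bb.txt'];
const listing = {};

// Collect the promise from each iteration instead of discarding it.
const pending = [];
names.forEach((name) => {
  pending.push(fetchInfo(name).then((info) => { listing[name] = info; }));
});

// Promise.all() resolves only after every collected promise has settled,
// so listing is fully populated here.
Promise.all(pending).then(() => {
  console.log('Iterations finished', listing);
});
```

The same pattern should apply whether you're iterating a plain array or pushing promises from inside an entries forEach callback: the key is that something has to hold on to the promises so you can wait on them after the loop.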