Local failsafe as a route to confidence in an unstable network?

While thinking about how we can know the network is robust, I wondered whether there might be a relatively simple way to provide confidence in a network that is still untested or unstable.

I’ve not seen anything similar suggested, though I expect it has been considered; so I won’t overthink it just yet…

The idea: if a user had a local-SAFE with a copy of their data, then regardless of how the network performs, their data would be secure.

Having SAFE provide a secure local envelope that stores a copy of the user’s data could solve two problems in one:

  • some users will always want a local copy of their own data
  • it counterbalances any uncertainty about how the current network will perform

Such an approach would also give users a sense of control over what might otherwise appear as uncertainty - the ‘new’.

If the network proves volatile, the user could have a simple one-button option to re-upload their data… at zero cost. That seems like a simple idea, which might serve well for marketing against any suggestion of risk on an unproven network.

Also, perhaps this would allow devs to play rough and discover where the real limits are, using all sorts of different methods, without worrying about real loss of data.

Count it perhaps as a failsafe for all time: at no point can any data that a user has marked as important ever be lost; it might just become inaccessible until they push a button for a free re-upload.

So, the steps I imagine would be:

  1. data sent to network
  2. brought back and validated, so it’s known the network has a copy
  3. put securely into a local cache that is always readable by the user, but is only added to once the network has acknowledged that the PUT cost was paid
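The three steps above can be sketched as a PUT → validate → cache loop. The following is a minimal, self-contained mock - the `network/` directory stands in for the real network and a byte-for-byte comparison stands in for validation; none of it is tied to any actual SAFE API:

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/network" "$work/local_cache"

echo "important notes" > "$work/document.txt"

# 1. data sent to network (mock PUT: a plain copy into the stand-in directory)
cp "$work/document.txt" "$work/network/document.txt"

# 2. brought back and validated, so it's known the network has a copy
if cmp -s "$work/document.txt" "$work/network/document.txt"; then
  # 3. only once the network copy checks out, commit to the local cache
  cp "$work/document.txt" "$work/local_cache/document.txt"
  echo "cached: document.txt"
else
  echo "validation failed: not caching" >&2
fi
```

The point of ordering it this way is that the local cache only ever holds data the network has confirmed it holds too, so cache contents double as a record of what a free re-upload should restore.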

So, like a goldfish bowl, the user can see their data is SAFE locally, and can then play rough with a network they might suspect is unproven… making much more use of it, and sooner, than they otherwise would for worry about the real risks of volatility.

I would suggest an option whereby, if a user or app considers some data expendable, it can opt out; otherwise everything is, erm… made safe by default.

The only real limit then might be that safecoin could not adopt the same approach… or perhaps there could be some wangle to reassure all users…

:bulb: Maybe this would also lend itself to producing real metrics on data loss - the local-SAFE could run a test and report that all was well?


I wonder if a modified launcher could implement something like that?

A bit different maybe, but remoteStorage.js apps effectively do this, storing data locally in the browser and syncing it with SN. Buggy demo at: safe://myfd.rsports (requires Beaker 0.3.6)

There were posts previously that prompted an idea of apps hosting data locally… if it’s user-centric and not critical data that needs to be safely on the network, it could be stored in that goldfish bowl above?.. That would give websites enhanced cookie-like options, without cost to users perhaps… or perhaps your option of leveraging what Beaker can do solves that.

You could do something simpler, but still effective.

  1. Mount safe drive
  2. Rsync between local drive and safe drive

Only changes would be copied to the safe drive, which would then automatically be uploaded to safenet.

Obviously, this would only work for NFS stores.


but you’re missing the whole point…

This is about what SAFE would be doing, not some to-do list for users.

Yes, users could mount and make a backup, but having a local copy necessarily available - at least of the data the user did not opt out of - would allow for confidence by default against all threats.

SAFE would then be obviously useful sooner, because it would be invulnerable to volatility that might lose data, through a range of other events up to the most extreme case where the whole network goes down. It would simply have an option to bounce back, in part or in whole.

If the goldfish bowl held a simple private key as evidence that it belonged to user X, then when the network comes back, or when the user pushes a button, their data can be restored from the archive.
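The bounce-back itself could be as simple as replaying the cache: for every file in the local cache, re-PUT it to the recovered network. A mock sketch, with `network/` again a stand-in directory and each `cp` standing in for a free re-PUT:

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/local_cache" "$work/network"
echo "a" > "$work/local_cache/one.txt"
echo "b" > "$work/local_cache/two.txt"

# One-button restore: replay every cached file back to the network.
# (In a real client, each copy would be a re-PUT already paid for once.)
for f in "$work/local_cache"/*; do
  cp "$f" "$work/network/"
done
echo "restored $(ls "$work/network" | wc -l) files"
```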

This would allow devs the freedom to play rough in finding the limits, while the network would see more real-world use from users more confident of what SAFE can do for them… with zero risk for the user, there is no reason not to jump in and give it a go. The alternative leaves a hurdle to adoption, where users will look to heritage as evidence, or otherwise to their neighbours’ subjective experience.

and, as I suggested, such a local-SAFE could help provide real, trusted metrics… a simple counter like builders have: “SAFE has seen no data loss in 999 days”… and we would know that because the data matches local copies across a billion users participating in the count.

Well, you could integrate something more tightly with the launcher or some such. A wee script using inotifywait and rsync would be pretty handy and easy to set up, though.
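Such a script might look like the following. `SRC` and `DST` are assumed paths (`DST` a hypothetical mount point of the safe drive); since the watch loop blocks forever, the sketch only writes the script out and syntax-checks it rather than running it:

```shell
cat > /tmp/safe-sync.sh <<'EOF'
#!/bin/bash
SRC="$HOME/Documents"     # local folder to protect (assumption)
DST="$HOME/safe-drive"    # assumed mount point of the safe drive

# Re-sync whenever a file is written, moved, or deleted under SRC.
inotifywait -m -r -e close_write,move,delete "$SRC" |
while read -r path event file; do
  rsync --archive --delete "$SRC/" "$DST/"
done
EOF
chmod +x /tmp/safe-sync.sh
bash -n /tmp/safe-sync.sh && echo "script OK"
```

`inotifywait -m` keeps monitoring rather than exiting after the first event, and `-r` watches the folder recursively; each event just triggers a full incremental rsync, which is cheap because only changes are transferred.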

A quick google led me to someone doing just this via ssh, btw: https://techarena51.com/index.php/inotify-tools-example/

As soon as you can mount a safe folder, the other usual backup options will become trivial. Still, the point in the OP is not about backup by the user but assurance to the user by SAFE.
