Upgrade process for the network

The current upgrade mechanism for the safe network (testnets) is to wipe the network and start again.

Are there any proposals for how new features will be released to the live network? I haven’t been able to find any. Obviously the existing method of turning the network off won’t work!

I want to spitball some ideas about how upgrades might happen, which relates to node ageing and node ranking (which I think is completely contained within ageing?).

Upgrade by deranking

If the version (or even better, capabilities) of nodes can be known, old nodes can be deranked. This encourages upgrading.

This may be done naively via a user-agent header, and perhaps evolve to a more complex zero-knowledge proof for determining the availability of new features.

The balance is between making upgrades hard to spoof and avoiding a lot of extra work and complexity.
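As a rough sketch of the deranking idea, a node's rank could be reduced in proportion to how many versions behind it is. All names and penalty values here are hypothetical, just to make the mechanism concrete:

```python
# Hypothetical sketch: derank peers that advertise an old protocol version.
# CURRENT_VERSION, DERANK_PER_VERSION and effective_rank are illustrative
# names, not part of any real SAFE network API.

CURRENT_VERSION = 7          # version the network considers up to date
DERANK_PER_VERSION = 10      # rank penalty per version behind

def effective_rank(base_rank: int, advertised_version: int) -> int:
    """Reduce a node's rank in proportion to how far behind it is."""
    versions_behind = max(0, CURRENT_VERSION - advertised_version)
    return max(0, base_rank - versions_behind * DERANK_PER_VERSION)

# An up-to-date node keeps its rank; stale nodes steadily lose rank.
print(effective_rank(100, 7))  # up to date
print(effective_rank(100, 5))  # two versions behind
```

The penalty curve (linear here) is itself a tuning knob: a steeper curve pushes operators to upgrade faster but punishes slow-but-honest operators more harshly.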

Upgrade automatically

Vaults could upgrade themselves by automatically downloading the new version from the SAFE network and loading the changes on the fly. This is difficult to enforce, since nodes may simply remove the auto-update portion of the code. Still a nice feature to have, but not a secure mechanism for upgrading the network.

Enforce backward compatibility

Another option to handle upgrades is to have all changes be backward compatible. I think this is impossible to achieve.


Upgrade by announcement

If all node operators are active members of the community and behave as good citizens, then all it takes is an announcement and nodes will be upgraded by their operators. This is unreliable, since some nodes will be abandoned, have lazy operators, or be operating under malicious conditions.

Beyond this list of possible mechanisms, there is also the question of how upgrades will be rolled out. When the first ‘upgraded’ node comes onto the network, it will appear to be ‘faulty’ and may be deranked. How can this be solved? How can the network tell the difference between ‘upgraded’ nodes and ‘malicious’ nodes?


Great, this is getting poked around, and nice points. Further considerations are, as you point out, where the upgrades live and how they are trusted. We are lucky we have multi-sig structured data, so we could have the software upgrades pointed to, with the upgrades themselves stored as immutable data. The key holders of the SD item would need to be “trusted”, and this is a place where the Pods would have been great. I think as MaidSafe rolls out joint ventures it also helps: with keyholders in different countries and essentially different companies, we have more trust.
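The multi-sig idea above amounts to an m-of-n quorum over an upgrade descriptor that points at the immutable data. A minimal sketch, using HMACs as stand-in “signatures” (a real system would use public-key signatures, and all the holder names here are invented):

```python
# Illustrative m-of-n signature quorum over an upgrade descriptor.
# HMAC is a stand-in for real public-key signatures; holder names,
# keys and the descriptor format are all hypothetical.

import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def quorum_reached(payload, signatures, holder_keys, threshold):
    """Count valid signatures from known key holders; require >= threshold."""
    valid = sum(
        1 for holder, sig in signatures.items()
        if holder in holder_keys
        and hmac.compare_digest(sig, sign(holder_keys[holder], payload))
    )
    return valid >= threshold

keys = {"maidsafe": b"k1", "pod-eu": b"k2", "pod-us": b"k3"}
descriptor = b"safe_vault v0.8.0 -> immutable-data-address"
# Only two of the three holders have signed so far.
sigs = {h: sign(k, descriptor) for h, k in list(keys.items())[:2]}
print(quorum_reached(descriptor, sigs, keys, threshold=2))  # 2-of-3
print(quorum_reached(descriptor, sigs, keys, threshold=3))  # needs all 3
```

Spreading the n key holders across countries and companies, as suggested above, is what makes a 2-of-3 or 3-of-5 threshold meaningfully harder to coerce than a single upgrade server.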

So reproducible builds that are agreed and signed with revocable keys would be great, and take us a little forward. Then jails, sandboxing, etc. all come into play.

For now I am ignoring the fact that even if we are secure, the OS underneath is probably upgrading itself automatically and may carry complex malware; that is something for another conversation.

Then there is the upgrade itself, from a vault perspective. Data chains allow republish of valuable stuff (data), so we can use this. Even after a full outage, like the one Skype suffered a few years back due to its Win98 servers, SAFE will restart and do so with its data. However, it’s more nuanced, as you know.

A potentially simple solution here is for nodes to create islands of connections to known nodes. These groups all know each other and negotiate connections. If each node can recognise the current version and also one version back, then it can connect to multiple node types.

On connecting, a node can be told: you are old now, so upgrade on next reboot (no need to flood the network). This seems pretty straightforward, but has edge cases (for later).
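The “current version or one back” rule can be sketched in a few lines. Function names and version numbers are assumptions for illustration only:

```python
# Sketch of the "current version or one version back" connection rule.
# can_connect/connect_response are hypothetical names, not vault APIs.

def can_connect(my_version: int, peer_version: int) -> bool:
    """Nodes recognise their own version and one version either side."""
    return abs(my_version - peer_version) <= 1

def connect_response(my_version: int, peer_version: int) -> str:
    """On connect, tell stale-but-compatible peers to upgrade on reboot."""
    if not can_connect(my_version, peer_version):
        return "reject"
    if peer_version < my_version:
        return "accept; you are old now, upgrade on next reboot"
    return "accept"

print(connect_response(7, 6))  # one back: accepted, nudged to upgrade
print(connect_response(7, 5))  # two back: rejected
```

Because the nudge rides on connections that happen anyway, the upgrade signal spreads through the islands of known nodes without any broadcast or flood.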

The other issue is that nodes should download the new software and run it in parallel, measuring how the new code fares and also checking that it does not try to “phone home”. Then, after a random time, upgrade to it.
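A toy harness for that parallel run might feed both versions the same requests, compare answers, and flag any outbound contact the candidate attempts. Everything here is a hypothetical sketch, not vault code:

```python
# Toy shadow-run harness: compare a candidate upgrade against the current
# code on identical requests, and flag any "phone home" attempt. All names
# and the side-channel reporting convention are invented for illustration.

def shadow_run(current, candidate, requests, allowed_hosts=frozenset()):
    """Return True only if the candidate matches and contacts no one new."""
    contacted = set()

    def guarded(req):
        # Candidate reports (result, hosts it wants to contact).
        result, hosts = candidate(req)
        contacted.update(hosts)
        return result

    mismatches = sum(1 for r in requests if current(r) != guarded(r))
    phones_home = bool(contacted - allowed_hosts)
    return mismatches == 0 and not phones_home

def current(r):
    return r * 2

def good_candidate(r):
    return r * 2, set()                     # same answers, no outbound calls

def bad_candidate(r):
    return r * 2, {"evil.example.com"}      # right answers, but phones home

print(shadow_run(current, good_candidate, range(10)))
print(shadow_run(current, bad_candidate, range(10)))
```

The random switchover time mentioned above then matters separately: if every vault upgraded the moment its candidate passed, the whole network would churn at once.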

So these are just points to add to the upgrade discussion. Upgrades will be a big upcoming feature, and a difficult one in decentralised networks. Google and many others use many approaches, but nearly all of them involve upgrade servers (a no-no for us).

Hope this all helps with the discussion anyway.


Would be really cool to see a hot-patch option so vaults don’t need to restart to load new features. Somewhat similar to:

This, combined with some form of rolling updates, seems like a very interesting area for exploration. Excited to see how this evolves, it’s a great area of research.


Agreed, though hot patching is perhaps harder to defend against malicious upgrades. Obviously no decisions yet, but I have always liked the idea of the network doing its own QA process. I mean: run the upgrade in parallel and measure it? Perhaps there is a neat mix of the two we can use, though. Always run two vaults, one hot-upgradable, perhaps :wink:


It’s not too late! In the pod meetings I’ve been to, everyone kept asking about launch, SafeCoin income, etc., so I definitely believe that once these things are available, people will come together and use the funding to create pods all over very rapidly.

I see pods and farming / app incubation as somewhat codependent, or at least very complementary. Then we can enjoy things like these distributed SD update keys being spread all over.

The pod we had with Daniel and Paige in SF would have loved this, and was very ready to thrive on it if it had been around back then, a year or so ago. It was all at the Rackspace coworking space on 2nd St with great internet, and everyone was bringing their Pis / Arduinos and all sorts of cool small machines I’d never heard of.

If this had been possible then, everyone could have had a membership, plugged in their little machine to make a little money, and worked together on SAFE apps etc. I really wanted a hackathon.

All these things can still happen! Coworking spaces are everywhere and would be a great and very attractive way to get a group going! Even here.


I am counting on that in many ways and hope it does work. I will try hard to help out where I can when I get time and, as you rightly say, when we launch and folk can engage.


Are builds deterministic? Would the pods be signing the MaidSafe build, or would they build their own and confirm the hash before signing?

I’ve been interested in developing a tool that tracks the consistency of multiple builds of the same version, signed to confirm that consistency. It feels a bit early yet, but it would be a good thing for the future.
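The core of such a tool is small: several independent parties build the same version, and the release only counts as reproducible when every reported hash matches. Builder names and the artifact bytes below are illustrative:

```python
# Sketch of a multi-builder consistency check for reproducible builds.
# Builder names, the artifact bytes and the >= 2 builders rule are
# assumptions for illustration, not an existing tool's behaviour.

import hashlib

def build_hash(artifact: bytes) -> str:
    """Hash of an independently built binary."""
    return hashlib.sha256(artifact).hexdigest()

def reproducible(reports: dict) -> bool:
    """reports maps builder name -> hash they obtained for this version."""
    return len(reports) >= 2 and len(set(reports.values())) == 1

binary = b"\x7fELF...safe_vault-0.8.0"
h = build_hash(binary)
print(reproducible({"maidsafe": h, "pod-eu": h, "pod-us": h}))
print(reproducible({"maidsafe": h, "pod-eu": build_hash(b"tampered")}))
```

Each builder would then sign their own hash, which dovetails with the multi-sig key-holder idea earlier in the thread: signing a hash you built yourself is a much stronger statement than countersigning someone else’s binary.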

Multibit does something like this: https://multibit.org/help/hd0.3/deterministic-build-process.html


They can be.

Yes, this is how I would see this happening. It may require Docker-type builds, or something similar to the whole Tor/Bitcoin reproducible-build process.