Secure Random Relocation

Poisson in French means fish :slight_smile: Adapted to the SAFE network, what might work is to think about “schooling”: containing the boundaries of the randomness generated by the Poisson distribution with advice (the current and predicted/trending state of areas of the network) gleaned from router information. That advice would consider a larger statistical sample of current network sickness or health across a randomized sampling of areas. For example, a couple of newly defined areas which show up as sick would be included in a bigger school alongside more than a couple of healthy areas, with all of them treated together as the overall collective target (school) of candidates for relocation of the vault. Depending on the degree of sickness or health, the network can randomly skew “the spew” of new vaults toward the newly created school of targets, then shortly thereafter take another random set of area samples of sickness/health (again including nearby sick and healthy areas) to create a new set of targets for random vault relocations.

The key is to keep changing up the schools… at random time intervals
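Hypothetically, the “schooling” could look something like this sketch (all function names, the health-score model, and the sample sizes are my assumptions for illustration, not anything from the RFCs):

```python
import random

def build_school(sections, sick_sample=2, healthy_sample=3, rng=random):
    """Build a 'school' of candidate target sections: a random sample of
    sick areas mixed with a larger random sample of healthy areas.
    `sections` maps section prefix -> health score in [0, 1] (1 = healthy)."""
    sick = [p for p, h in sections.items() if h < 0.5]
    healthy = [p for p, h in sections.items() if h >= 0.5]
    return (rng.sample(sick, min(sick_sample, len(sick))) +
            rng.sample(healthy, min(healthy_sample, len(healthy))))

def pick_relocation_target(sections, rng=random):
    """Skew the random choice toward sicker sections: weight = sickness
    (1 - health) plus a small floor so healthy areas remain possible."""
    school = build_school(sections, rng=rng)
    weights = [1.0 - sections[p] + 0.1 for p in school]
    return rng.choices(school, weights=weights, k=1)[0]
```

Re-running `build_school` on a random timer would give the “keep changing up the schools” behaviour described above.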

We did this to great effect at Cloakware when I worked there: essentially creating new builds for target mobile devices with different hardware- and software-associated locks for every release, randomly distributing the devices to different geo locations and through different channels, all at random intervals, so that even if one device got cracked (and it never did) it would have taken a huge supercomputer. Cloakware is now Irdeto (a Dutch/Chinese firm with the brains in Ottawa, Canada), but they have kept the Cloakware brand name. Back in 2007 and 2008 we were doing this for the biggest company names that start with A and N, for millions of mobile devices, and collecting hack-free bonuses per quarter doing it. The basic premise is the same, only it would be done in real network time for SAFE. So this Poisson approach could definitely work for SAFE, with some clever integration with what the routers know at the time vis-à-vis network sickness/health, to create a random, fair relocation of the vault that keeps anyone from dominating to win more Safecoin. Again, my 2 cents, nice suggestion! It surely would fit with what @mav is suggesting R2


An interesting point for sure.

From my own perspective, I have a notion that this might better suit vaults with a track record rather than newly-joined ones. What I mean is that if we’re able to have the network maintain health metrics on sections of the network, it’d be well placed to identify optimal vaults to relocate so that the best balance is met (i.e. it doesn’t relocate an exceptionally healthy vault to a weaker area if that would cause the source section to become sicker than the target one)

Relocating vaults in any non-random way though always makes me twitchy! (in-house, I’m less a fan of targeted relocation than some of the rest of the team I think) It opens the door to attackers just that little bit wider :slight_smile:
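To make the health-balancing notion above concrete, here is a tiny hypothetical sketch (the additive “health score” model and the function name are my assumptions, not actual network metrics): pick the tracked vault whose relocation leaves the source and target sections closest in health, so a weak source is never drained to feed an already-stronger target.

```python
def best_vault_to_relocate(vault_scores, source_health, target_health):
    """vault_scores: vault id -> its contribution to its section's health.
    Returns the vault whose move minimises the health gap between the
    source and target sections."""
    def imbalance_after(v):
        moved = vault_scores[v]
        return abs((source_health - moved) - (target_health + moved))
    return min(vault_scores, key=imbalance_after)
```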


Am I wrong, or are you proposing to completely eliminate the concept of neighbourhood in the Safe network? (Basically changing an essential part of the Disjoint Sections RFC)

Because the concept of neighbourhood, as I see it, is more than “the right to negotiate”. It is the cornerstone that structures the entire network and, among other things, allows:

- A uniform distribution and a structure that lets us know the distance between groups, the right directions to send messages, and the number of hops.
- Splitting or merging of sections with clear rules.
- Growth of the routing table that is known in advance (one more section each time the network doubles in size).
- An approximate idea of the total size of the network.

I’m concerned about the possible negative consequences of such a fundamental change.
In fact, I still do not understand why we can't use HDKeypairs for relocations (a brilliant idea) while maintaining the concept of neighbourhood.


I saw it as removing the neighbourhood (limitation) for relocating a node. This is what the proposal is about.

I didn’t see any need to remove neighbourhood for the other functionality/workings.


@mav : a superb post again :slight_smile:

Just wondering about the brute-force key generation part.
If generation time is a concern, a vault could generate key-pairs in a background thread during free time and pick one that fits the relocation requirement.
So basically, in a small network the key-pairs are generated on demand; in a large network, key-pairs are prepared in advance.
This way, the storage capacity also gets evaluated, in a sense.

Also, the vault needs to prepare a key-pair each time the prefix length changes, or after a relocation. So the vault is also undertaking a measured performance test that grows with the network, while carrying out its normal network duties?
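A sketch of that background-generation idea (using a SHA3 digest of a random seed as a stand-in for real ed25519 keypairs, since the bucketing logic is the same either way; `KeyCache` and the 32-bit prefix handling are purely illustrative):

```python
import hashlib
import os

def keypair_stub():
    """Stand-in for ed25519 keygen: a random 32-byte 'secret' and its
    SHA3-256 digest as the 'public' key. A real vault would call into an
    ed25519 library here."""
    sk = os.urandom(32)
    return sk, hashlib.sha3_256(sk).digest()

class KeyCache:
    """Pre-compute keypairs during idle time, bucketed by the public key's
    leading bits, so a relocation to a known prefix can be served at once."""
    def __init__(self, prefix_bits):
        self.prefix_bits = prefix_bits
        self.cache = {}

    def _prefix(self, pk):
        return int.from_bytes(pk[:4], "big") >> (32 - self.prefix_bits)

    def idle_tick(self):
        sk, pk = keypair_stub()
        self.cache.setdefault(self._prefix(pk), []).append((sk, pk))

    def take(self, prefix):
        # On-demand fallback: a small network just brute-forces here and now.
        while not self.cache.get(prefix):
            self.idle_tick()
        return self.cache[prefix].pop()
```

Keys generated for non-target prefixes stay in the cache, so no work is ever discarded.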


HDKeypairs is news to me, so I have a bunch of research to do there, but on the face of it sounds like a great solution. I’m personally in favour of random relocation for much the same reasons as @mav (reduced target address space and key generation expense), but as with many topics there’s loads of debate in-house about it. I agree with the keygen calculations in Data Chains - Deeper Dive and actually wrote a quick test to confirm these.

Perhaps as @tfa suggests, a combination of random then targeted would be OK, although any deviation from pure random makes me uneasy.

There was an idea a while back (not even sure if it was discussed in-house, let alone properly documented) to have the section agree a random 256 bit number and this would be XORed with the relocating peer’s public key to provide its new address. This has the drawback of probably requiring the random portion to be part of the peer’s name, doubling the name’s size, but otherwise is conceptually similar in many ways to @mav’s proposal.
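That XOR construction is simple to sketch (the function name is mine):

```python
import os

def relocation_address(public_key: bytes, section_random: bytes) -> bytes:
    """New address = peer's public key XOR a section-agreed 256-bit random
    value. Anyone holding both inputs can verify the address, which is why
    the random part would probably need to travel with the peer's name."""
    assert len(public_key) == 32 and len(section_random) == 32
    return bytes(a ^ b for a, b in zip(public_key, section_random))
```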

I’m less convinced about this though:

I think a single malicious vault would be able to wait for all its peers to present their random bytes, then craft its own bytes to suit whatever target address it wants. As @dirvine mentioned, the hash of a churn event or Put/Edit request is probably a better source.
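Deriving the bytes from an already-agreed event, rather than from peer-submitted randomness, might look like this (SHA3-256 is chosen only for illustration):

```python
import hashlib

def relocation_bytes(churn_event: bytes) -> bytes:
    """The 'random' relocation bytes come from hashing a consensus-agreed
    churn or mutation event. Since the event is fixed before anyone knows
    it will seed a relocation, the last vault to contribute cannot craft
    bytes that steer the resulting target address."""
    return hashlib.sha3_256(churn_event).digest()
```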


We were discussing this on the other forum too, a couple of days back, but that post also meant recursion for balance after the first hop.

Still think this might almost have to be an AND of those factors rather than an OR, personally, to prevent stalling, but I want to see how the simulation results come out to give that some context. Hopefully @bart gets those soon-ish :wink:


No. Sorry I missed a vital bit of context to the post:

“The HDKeypair proposal requires eliminating the idea of neighbours for relocations.”

Thanks for bringing up this ambiguity.

The current secure messaging using neighbours would still apply to all other messaging.

I admit to slightly overreaching on the relocation request idea. Using a different messaging mechanism is probably not necessary and it’s probably better to just use the existing messaging. But I wanted to put the idea out because who knows what it might influence in other ways.

Agreed. Key generation can happen in the background which is a nice property to have, especially since unused keys may be kept and used later if needed. No work need ever be discarded.

But there’s no bound on what keys are needed, since neighbours may be found recursively so it’s not enough just to generate a key for each current neighbour section.

This means there should always be a constant background thread finding keys. That seems a bit ugly.

It eventually leads to the unpleasant idea that if the network becomes very large, a ‘market for keys’ will probably pop up where you’ll be able to buy a key matching the relocation section from someone who has brute forced it on powerful specialized hardware. This breaks security because the private key is no longer secret. Who would buy a key?! The same people who set their bitcoin transaction fees to 0 then wonder why ‘it doesn’t work’… I think someone just wanting to earn safecoin would gladly buy a precomputed key if it means they can remove a roadblock to earning more safecoin. Overall I think the pubkey-brute-force concept potentially comes with some very ugly incentives.

My main point with hdkeypairs is not so much ‘can it be made cheaper’ but ‘we should use constant time instead of linear or exponential time algorithms’ where we can. It’s a scalability thing, not an economics thing.
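The scalability point can be illustrated by contrasting the two approaches (this is only a toy model: a real HDKeypair scheme derives verifiable child keys, not just hashed seeds, and the brute-force loop uses a random draw as a stand-in for actual keygen):

```python
import hashlib
import random

def brute_force_attempts(num_sections, rng=None):
    """Expected-case O(num_sections): keep 'generating' until a stand-in
    key lands in the target section (hit chance 1/num_sections per try)."""
    rng = rng or random.Random()
    attempts = 1
    while rng.randrange(num_sections) != 0:
        attempts += 1
    return attempts

def derive_child_seed(master_seed: bytes, target_prefix: str) -> bytes:
    """O(1) regardless of network size: a single hash derives a seed bound
    to the target section's prefix."""
    return hashlib.sha3_256(master_seed + target_prefix.encode()).digest()
```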

True, but the performance for ‘generate a keypair’ isn’t really meaningful to the day-to-day requirement of ‘managing data’. So I think separating ‘measuring performance’ from ‘network operations’ is a good idea. It’s a bit arbitrary to use keypair generation as measure for performance.

Good point. Agreed.

I have also done a quick test using iancoleman/ed25519_brute_force (which uses rust_sodium).

2014 laptop  i7-4500U @ 1.80GHz - 5851 keys per second

2017 desktop i7-7700  @ 3.60GHz - 9119 keys per second

Might tidy up the reporting a bit and put it on the main forum to crowdsource some data.


I suspect this point is critical in your proposal and not really understood.

We did a bit of testing today and 10,000 keys per second with dalek (nightly) seems reasonable. Be nice to test that on a rpi3, I may run one up.

Also, using the hash of mutation events and churn events will give us the random data for depth. This will be an agreed (consensus) random data element.


It’s all about recycling the randomly generated distribution map for the vaults needing redeployment in a randomly timed way, and randomly determining the geography for the new vault deployment, so the layers of the onion skin are always changing…


This is so important. Also add that a large attacker would have dedicated machines generating key pairs for the attacking nodes.

And I also saw it as logic over PoW. To me, logic is a better method than brute force.

This is very important in my view. Performance should be measured by the activities required to be done, not by something that can be farmed off to specialised hardware or predetermined values (e.g. a file of billions of keypairs or an indexed database somewhere).

I think this is going to be ever so important, because I can see a market for farming boxes which are very cheap SBCs to which the user adds a storage device. The beauty of this is that it allows vaults to be up 24/7 in houses where the (only) PC is turned off for 8-16 hours a day (e.g. 80-90% of households). If brute-force key gen caused these small SBCs to fail the ability testing, then I feel we would have lost big time.

The ability of a node should not be determined by how fast it can brute-force a key pair compared with a 4-core i7 or a GPU; it should rest on its ability to do the work expected of it as a functioning node. Normal key generation is very fast compared to brute-forcing a matching key pair.


I agree with both these points.

I ran your test here (although I changed it to use a newer version of rust_sodium since 0.2 doesn’t build libsodium on Windows by default):

desktop i7-2600K @ 3.40GHz - 21317 keys per second

I ran it with --release. Not sure if you did too (the readme doesn’t mention it), but it shouldn’t make much difference since the actual work is being done by the C lib which is already compiled/installed in release mode. Of course, if you test a pure Rust implementation (e.g. Dalek) this will make a huge difference :smiley:


Yes - I can see the appeal. I think this would be a very gnarly job, both from the perspective of what metrics can/should be used to measure health, and how the network can make use of that information securely. Certainly worth fuller consideration a bit further down the line as far as I’m concerned!


Just for perspective: the creation of keys takes say 30,000 ns, signing 40,000 ns, and verifying circa 100,000 ns. A node will have to sign and verify huge numbers of messages as the network grows.
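As a back-of-envelope check on those figures (the per-op timings are the ones quoted above; the single-core model is my simplification):

```python
def round_trips_per_second(sign_ns=40_000, verify_ns=100_000):
    """One sign plus one verify costs ~140 microseconds, so a single core
    sustains roughly 7,000 signed-and-verified messages per second; key
    creation (~30 microseconds) costs less than a single signature."""
    return 1_000_000_000 / (sign_ns + verify_ns)
```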

The point is still valid though, that constant time for relocation is a neat feature.


Yes thanks for those figures.

As you say, the point is to verify that a node can do these things. But brute-forcing a key pair is not a normal function of the node's operations, is it? And a brute-force requires a lot of key generations just to get the small matching portion. Is there an estimate of how many generations (on average) would be required for a small network and for a large one?


I ran Dalek and got about 30K keypairs / second, so about 3 times faster.


It has been in testnets so far.

In a small network of 2 sections the keygen would take 6 ms; if there were 100,000 sections (say a 2-million-node network) then the keygen could take about 2 minutes. However, if we dictate the range within a section, this could go up to several tens of minutes.

See this link for initial working from Ian as well: Data Chains - Deeper Dive
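In answer to the “how many gens” question, a simple model (my simplification: it assumes uniformly sized sections and a whole-section target, and ignores the narrower in-section range, which is presumably why the figures above are larger) says the expected attempt count equals the number of sections:

```python
def expected_brute_force(num_sections, keys_per_second=10_000):
    """Each fresh keypair lands in a given section with probability
    ~1/num_sections, so the expected number of attempts to hit one target
    section is num_sections. Returns (expected attempts, expected seconds)
    at the given keygen rate."""
    return num_sections, num_sections / keys_per_second
```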


If we use that with the sha3 hash lib we use then it will be faster again, possibly another 2-3 times faster. I may give it a shot.


I was meaning that the brute-force is only there for this relocation function, and I gather it's not used for what we expect of the node's operations in the network and vault.

So about 20 gens (30 µs each)

And I can already hear the cries of wasting energy with the CPU intensive work nodes have to do :stuck_out_tongue:


Interesting. When I ran my test earlier, Dalek was about twice as slow as sodium when I built with stable Rust (1.23.0) and only a little faster (less than 10%) when I used the latest nightly compiler (1.25.0)! When I mentioned a huge difference, I was meaning comparing debug vs release.

If the Dalek code is going to get pushed to your repo, I’ll try it here too?

I tried that earlier too, and it made almost no difference. I expect it’ll come into play more with the signing/verification.