It's all about recycling the randomly generated distribution map for the vaults needing redeployment at randomly chosen times, and randomly determining the geography for each new vault deployment, so the layers of the onion skin are always changing…
This is so important. Also add that a large attacker would have dedicated machines generating key pairs for the attacking nodes.
And I also saw it as logic over PoW. To me, logic is a better method than brute force.
This is very important in my view. Performance should be measured by the activities required to be done, not by something that can be farmed off to specialised hardware or precomputed values (e.g. a file of billions of keypairs or an indexed database somewhere).
I think this is going to be ever so important, because I can see a market for farming boxes which are very cheap SBCs to which the user adds a storage device. The beauty of this is that it allows vaults to be 24/7 in houses where the (only) PC is turned off for 8-16 hours a day (e.g. 80-90% of households). If brute-force key gen caused these small SBCs to fail the ability testing, then I feel we would have lost big time.
The ability of a node should not be determined by whether it can brute-force a key pair as fast as a 4-core i7 or a GPU. The ability of a node should be based on its ability to do the work expected of it as a functioning node. Normal key generation is very fast compared to brute-forcing a key pair.
I agree with both these points.
I ran your test here (although I changed it to use a newer version of rust_sodium since 0.2 doesn’t build libsodium on Windows by default):
Desktop i7-2600K @ 3.40GHz: 21,317 keys per second
I ran it with `--release`. Not sure if you did too (the readme doesn't mention it), but it shouldn't make much difference, since the actual work is being done by the C lib, which is already compiled/installed in release mode. Of course, if you test a pure Rust implementation (e.g. Dalek) this will make a huge difference.
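For reference, a timing loop of the sort used here might look like the following (a minimal sketch, assuming rust_sodium's `crypto::sign::gen_keypair`; run with `cargo run --release`):

```rust
extern crate rust_sodium;

use std::time::Instant;

use rust_sodium::crypto::sign;

fn main() {
    // init()'s return type differs between rust_sodium versions
    // (bool vs Result), so just discard it here.
    let _ = rust_sodium::init();

    const N: u32 = 100_000;
    let start = Instant::now();
    for _ in 0..N {
        // Generate and discard an ed25519 keypair.
        let (_pk, _sk) = sign::gen_keypair();
    }
    let elapsed = start.elapsed();
    let secs = elapsed.as_secs() as f64 + f64::from(elapsed.subsec_nanos()) * 1e-9;
    println!("{:.0} keys per second", f64::from(N) / secs);
}
```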
Yes - I can see the appeal. I think this would be a very gnarly job, both from the perspective of what metrics can/should be used to measure health, and how the network can make use of that information securely. Certainly worth fuller consideration a bit further down the line as far as I’m concerned!
Just for perspective: the creation of keys is say 30,000 ns, signing is 40,000 ns and verifying is circa 100,000 ns. A node will have to sign and verify huge numbers of messages as the network grows.
The point is still valid though: constant time for relocation is a neat feature.
Yes thanks for those figures.
As you say, the point is to verify that a node can do these things. But brute-forcing a key pair is not a normal function of the node's operations, is it? And a brute force requires a lot of key gens just to get the small portion matching. Is there an estimate of how many gens (on average) would be required for a small network and for a large one?
I ran Dalek and got about 30K keypairs / second, so about 3 times faster.
It has been in testnets so far.
In a small network of 2 sections the keygen would take 6 ms. If there were 100,000 sections (say a 2 million node network), then the keygen could take about 2 mins. However, if we dictate the range within a section, this could go up to several tens of minutes.
See this link for initial workings from Ian as well: Data Chains - Deeper Dive
If we use that with the sha3 hash lib we use then it will be faster again, possibly another 2-3 times faster. I may give it a shot.
I was meaning that the brute force is only there for this relocation function, and I gather it's not used for what we expect of the node's operations in the network and vault.
So about 20 gens (30 µs each).
And I can already hear the cries of wasting energy over the CPU-intensive work nodes have to do.
Interesting. When I ran my test earlier, Dalek was about twice as slow as sodium when I built with stable Rust (1.23.0) and only a little faster (less than 10%) when I used the latest nightly compiler (1.25.0)! When I mentioned a huge difference, I was meaning comparing debug vs release.
If the Dalek code is going to get pushed to your repo, I’ll try it here too?
I tried that earlier too, and it made almost no difference. I expect it’ll come into play more with the signing/verification.
No, that was probably a mistake (don't do maths and flu together). It would normally be 1-2 gens, so 30-60 µs.
And for 100,000 sections it would still be pretty fast: 3 or 4 seconds.
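(Sanity check on that figure: 100,000 sections corresponds to roughly a 17-bit prefix, so on average 2^17 ≈ 131,000 keygens to hit a given section; at ~30 µs per keygen that's around 3-4 seconds.)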
My Dalek results are not from my repo…
```
$ git clone https://github.com/isislovecruft/ed25519-dalek.git
$ cargo bench --features="bench"
...
test ed25519::bench::key_generation ... bench:      32,651 ns/iter (+/- 760)
...
```
Yes, this is why I am not sure about the term "brute force", as it gives the impression that it is a significant workload. We also must consider machine capabilities going forward, as they should be increasing quickly, sans Meltdown and Spectre of course.
19,500 ns on my machine (i7 @ 3.4 GHz), just FYI.
Ah - I've not gone through her code, just a quick skim earlier, but I noticed that her keygen benchmark looks a bit suspect. Specifically, the use of `ZeroRng`, which is "A fake RNG which simply returns zeroes."
I see your point about brute force, but it does seem to me like the right term for what we’re doing. We’re not using any sort of algorithm to optimise the number of attempts we make - we’re just taking random shots until one finally scores.
True, but the expected work required is the gen time × the number of sections, if the keys produced are random and equally distributed across the address range. The optimisation is using the fastest method to do that. ;-) So it's not like trying to guess a password.
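To make the "random shots" concrete, a loop like the following is roughly what's being described (a minimal sketch, assuming rust_sodium's `sign::gen_keypair`; the `matches_prefix` helper and the hard-coded 7-bit target are purely illustrative, not the actual vault code):

```rust
extern crate rust_sodium;

use rust_sodium::crypto::sign;

/// True if the first `bits` bits of `key` equal the first `bits` bits of `target`.
fn matches_prefix(key: &[u8], target: &[u8], bits: usize) -> bool {
    let full_bytes = bits / 8;
    if key[..full_bytes] != target[..full_bytes] {
        return false;
    }
    let rem = bits % 8;
    if rem == 0 {
        return true;
    }
    // Compare only the leading `rem` bits of the next byte.
    let mask = 0xffu8 << (8 - rem);
    (key[full_bytes] & mask) == (target[full_bytes] & mask)
}

fn main() {
    let _ = rust_sodium::init();
    // Illustrative 7-bit target prefix (e.g. the destination section's prefix).
    let target = [0b1011_0100u8];
    let bits = 7;
    let mut attempts: u64 = 0;
    loop {
        attempts += 1;
        let (pk, _sk) = sign::gen_keypair();
        if matches_prefix(&pk.0, &target, bits) {
            // On average this takes 2^7 = 128 attempts.
            println!("match after {} attempts", attempts);
            break;
        }
    }
}
```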
This is a table of the number of guesses required to generate a key with a certain prefix length (code here).
So to have a 90% chance of guessing a 7-bit prefix means generating 294 keypairs. This corresponds to a network size of about 6K vaults.
Number of keypairs to generate, by prefix length, for each probability of generating a valid key:

| Prefix length | p=0.1 | p=0.5 | p=0.9 | p=0.99 | p=0.9999 | Sections | Vaults |
|--:|--:|--:|--:|--:|--:|--:|--:|
| 3 | 1 | 5 | 17 | 34 | 69 | 8 | 400 |
| 4 | 2 | 11 | 36 | 71 | 143 | 16 | 800 |
| 5 | 3 | 22 | 73 | 145 | 290 | 32 | 2K |
| 6 | 7 | 44 | 146 | 292 | 585 | 64 | 3K |
| 7 | 13 | 88 | 294 | 587 | 1K | 128 | 6K |
| 8 | 27 | 177 | 588 | 1K | 2K | 256 | 13K |
| 9 | 54 | 355 | 1K | 2K | 5K | 512 | 26K |
| 10 | 108 | 709 | 2K | 5K | 9K | 1K | 51K |
| 11 | 216 | 1K | 5K | 9K | 19K | 2K | 102K |
| 12 | 432 | 3K | 9K | 19K | 38K | 4K | 205K |
| 13 | 863 | 6K | 19K | 38K | 75K | 8K | 410K |
| 14 | 2K | 11K | 38K | 75K | 151K | 16K | 819K |
| 15 | 3K | 23K | 75K | 151K | 302K | 33K | 2M |
| 16 | 7K | 45K | 151K | 302K | 604K | 66K | 3M |
| 17 | 14K | 91K | 302K | 604K | 1M | 131K | 7M |
| 18 | 28K | 182K | 604K | 1M | 2M | 262K | 13M |
| 19 | 55K | 363K | 1M | 2M | 5M | 524K | 26M |
| 20 | 110K | 727K | 2M | 5M | 10M | 1M | 52M |
| 21 | 221K | 1M | 5M | 10M | 19M | 2M | 105M |
| 22 | 442K | 3M | 10M | 19M | 39M | 4M | 210M |
| 23 | 884K | 6M | 19M | 39M | 77M | 8M | 419M |
| 24 | 2M | 12M | 39M | 77M | 155M | 17M | 839M |
| 25 | 4M | 23M | 77M | 155M | 309M | 34M | 2B |
| 26 | 7M | 47M | 155M | 309M | 618M | 67M | 3B |
| 27 | 14M | 93M | 309M | 618M | 1B | 134M | 7B |
| 28 | 28M | 186M | 618M | 1B | 2B | 268M | 13B |
| 29 | 57M | 372M | 1B | 2B | 5B | 537M | 27B |
| 30 | 113M | 744M | 2B | 5B | 10B | 1B | 54B |
| 31 | 226M | 1B | 5B | 10B | 20B | 2B | 107B |
| 32 | 453M | 3B | 10B | 20B | 40B | 4B | 215B |
| 33 | 905M | 6B | 20B | 40B | 79B | 9B | 429B |
| 34 | 2B | 12B | 40B | 79B | 158B | 17B | 859B |
| 35 | 4B | 24B | 79B | 158B | 316B | 34B | 1718B |
| 36 | 7B | 48B | 158B | 316B | 633B | 69B | 3436B |
| 37 | 14B | 95B | 316B | 633B | 1266B | 137B | 6872B |
| 38 | 29B | 191B | 633B | 1266B | 2532B | 275B | 13744B |
| 39 | 58B | 381B | 1266B | 2532B | 5063B | 550B | 27488B |
| 40 | 116B | 762B | 2532B | 5063B | 10127B | 1100B | 54976B |
My logic for this table:

- Define L as the number of bits in the prefix
- Chance of a single key generation being a match: 1/2^L
- Chance of it not being a match: 1 - 1/2^L
- Chance of no match in the first n tries: (1 - 1/2^L)^n
- Chance of at least one match within n tries: p = 1 - (1 - 1/2^L)^n

Rearranging to find n for chance p of a match:

1 - p = (1 - 1/2^L)^n

log(1 - p) = n * log(1 - 1/2^L)

n = log(1 - p) / log(1 - 1/2^L)
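For anyone wanting to reproduce the table, that final formula is a one-liner (a minimal sketch; `guesses_needed` is just an illustrative name):

```rust
/// Number of key generations n needed for probability p of at least
/// one match on an L-bit prefix: n = log(1 - p) / log(1 - 1/2^L).
fn guesses_needed(l: i32, p: f64) -> f64 {
    let q = 1.0 - 1.0 / 2f64.powi(l); // chance a single keygen is not a match
    (1.0 - p).ln() / q.ln()
}

fn main() {
    // Reproduces the 7-bit / p=0.9 cell above: ~294 keypairs.
    println!("{:.0}", guesses_needed(7, 0.9).ceil());
}
```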