Good point. I guess my bias toward the worst-case scenario is coming through.
The bitcoin-style lottery only applies to finding the first key from a fresh key cache, but as you say, previous guesses can be stored so that future relocations are hopefully faster via lookup.
My concern is that if I’m running multiple vaults (which I will), it makes good sense to share the key caches. And then it’s an obvious step to share them with trusted friends. And then share them with other people (maybe for a price). And then message security becomes questionable, but not in any obviously noticeable way. And then a major key provider suddenly realises they control most of the messaging on the network. And then the major key provider gets hacked. And so on down the slippery slope.
Individual vaults will discard duplicate prefixes, but this is a waste. It’s more economical to keep the duplicates, but since the person generating them can’t use them, they must try to sell them so the work isn’t wasted. So there’s a pretty clear incentive to sell keys if you possibly can, to avoid discarding duplicated work.
Because work is not discarded, people who start generating keys early have an advantage over those who start later. That creates an incentive for early participants to establish many vaults, lengthening section prefixes so that newcomers need a lot of initial work, while the accumulated work of early participants gives them an even greater head start. If I had a lot of resources I’d definitely want the network to grow large, to push smaller participants out of viability. Sounds a lot like the direction of bitcoin mining to me.
Back to the optimist angle: this only applies to very large networks, since key generation is viable even for ‘quite large’ networks. But I feel that by the time this becomes a problem, it’ll also be the hardest time to solve it. Best to address it beforehand.
Agreed except for encrypting the cache, since the process managing the key finding presumably runs at the same security level as the vault itself, which has the keys and must be secure in the first place.
I think storing privkey + pubkeyprefix is enough. The pubkey can be derived from the privkey, but there needs to be a way to quickly look up keys by pubkeyprefix.
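To make that concrete, here’s a minimal sketch of the cache shape I have in mind (Python, with random bytes + SHA-256 as a stand-in for real keygen; the names and prefix length are just my illustration, not the actual vault code):

```python
import hashlib
import os
from collections import defaultdict

PREFIX_BITS = 8  # illustrative; the real section prefix would dictate this

def generate_keypair():
    # Stand-in for real keygen: random "privkey", its hash as "pubkey".
    priv = os.urandom(32)
    pub = hashlib.sha256(priv).digest()
    return priv, pub

def prefix_of(pub, bits=PREFIX_BITS):
    # Take the leading `bits` of the pubkey as an integer prefix.
    return int.from_bytes(pub, "big") >> (len(pub) * 8 - bits)

# Cache: pubkey prefix -> privkeys whose pubkey starts with that prefix.
# Pubkeys aren't stored since they can be re-derived from the privkey.
cache = defaultdict(list)
for _ in range(1000):
    priv, pub = generate_keypair()
    cache[prefix_of(pub)].append(priv)

# On relocation to a section with a known prefix, the lookup is O(1):
ready_keys = cache[0x2A]
```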
Doesn’t each block include the hash of the prior block? Doesn’t that ‘chain link’ make every block unique, thus preventing replay?
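For reference, here’s the premise of my question as a tiny sketch (my own illustration, not the actual data chain format):

```python
import hashlib
import json

def block_hash(block):
    # The prev-hash is part of the hashed content, so position in the
    # chain changes the hash even for identical payloads.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev": None, "payload": "genesis"}
b1 = {"prev": block_hash(genesis), "payload": "membership change"}
b2 = {"prev": block_hash(b1), "payload": "membership change"}  # same payload

# Different chain positions give different hashes, so a replayed block
# is detectable even though its payload is identical.
assert block_hash(b1) != block_hash(b2)
```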
I’m almost there, but not quite.
The amount of work, sure, it’s probably doable most of the time by most computers.
But the incentives that arise because of it are terrible. It’s a waste of time and energy on an exercise that’s not productive to end users. It’s inelegant. It’s poor design and engineering. It’s less scalable.
The table of guesses can be used to see how many vaults really struggle to find a specific key.
For example a network size of 10K vaults, using the row for prefix length 8:
The columns under each probability give the number of guesses needed to reach that probability of generating a valid key:

| Prefix Length | p=0.1 | p=0.5 | p=0.9 | p=0.99 | p=0.9999 | Sections | Vaults |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 8 | 27 | 177 | 588 | 1K | 2K | 256 | 13K |
This means 10% of vaults (ie 1000 vaults) will find a key within 27 guesses. Put the other way around:

- 90% of vaults will not find a key within 27 guesses.
- 50% of vaults will not find a key within 177 guesses.
- 10% of vaults will not find a key within 588 guesses.
- 1% of vaults (ie 100 vaults) will not find a key within 1K guesses.
- 0.01% of vaults (ie 1 vault) will not find a key within 2K guesses.
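For anyone wanting to reproduce the row above: assuming each guess independently matches a given 8-bit prefix with probability 2^-8, the number of guesses needed for probability p of at least one match is ln(1-p) / ln(1-2^-8). A quick check:

```python
import math

def guesses_needed(prefix_bits, p):
    # Guesses needed so the chance of at least one key matching a
    # `prefix_bits`-bit prefix reaches p (geometric distribution).
    q = 2.0 ** -prefix_bits  # chance one random key matches the prefix
    return round(math.log(1 - p) / math.log(1 - q))

for p in (0.1, 0.5, 0.9, 0.99, 0.9999):
    print(p, guesses_needed(8, p))
# -> 27, 177, 588, 1177 (~1K), 2353 (~2K), matching the table row
```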
This idea becomes quite significant at higher network sizes, eg at 7B vaults the 0.9999 column means 700K vaults will not find a key within roughly 1B guesses. This growth in the worst-case effort as the network grows is quite daunting.
But by the same token, to be an optimist, 1B guesses is about 10h on today’s hardware, so in less than half a day almost all vaults would be able to find a key and take part in a 7B node network. That’s pretty fine by me.
But a 7T node network? I think problems will start. Do we care about 7T networks? I’d like to think so.
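To put rough numbers on that, here’s my own back-of-envelope (assuming ~50 vaults per section and the ~28K keys/sec implied by ‘1B guesses in about 10h’; both are my assumptions, not measured figures):

```python
import math

KEYS_PER_SEC = 1e9 / (10 * 3600)  # ~28K/s, implied by "1B guesses in ~10h"
SECTION_SIZE = 50                 # assumed vaults per section

def worst_case_hours(network_size, p=0.9999):
    # Prefix bits needed for this many sections, then guesses for the
    # unluckiest vaults (p chance of success), converted to hours.
    prefix_bits = round(math.log2(network_size / SECTION_SIZE))
    q = 2.0 ** -prefix_bits
    guesses = math.log(1 - p) / math.log(1 - q)
    return guesses / KEYS_PER_SEC / 3600

print(worst_case_hours(7e9))   # ~12h
print(worst_case_hours(7e12))  # ~12,700h, ie ~1.4 years
```

~12h to join a 7B node network is survivable, but ~1.4 years for 7T nodes is where I see things breaking down.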
Thinking pragmatically, it’s probably worth considering whether the proposed changes are needed from the start or whether it’s possible to add them later. Does it affect consensus? Can two modes of operation be run side-by-side and one phased out gradually? It’s a complex question which I haven’t thought about yet. But it does affect what work gets done now vs later so I think it matters.
Ultimately I feel it comes down to the economic model for safecoin and how the incentives affect network size. If the network never becomes massive then keygen isn’t a problem. But if it becomes massive then keygen will get ugly. The size of the network imo only depends on the as-yet-undesigned safecoin incentive structure.
Exponential growth is real. We can’t ignore it.