Secure Random Relocation

This is the part that makes me most uncomfortable about the background work - somehow in my mind the pre-generated keys are highly likely not to fit once a range constraint is given.

2 Likes

I am unconvinced that the work required to generate the correct key is beyond what a node should be able to do easily.

4 Likes

Good point. I guess my bias to worst case scenario is coming through.

The bitcoin-style lottery only applies to finding the first key from a fresh key cache, but as you say previous guesses can be stored to make future relocations hopefully faster via lookup.

My concern is that if I’m running multiple vaults (which I will) it makes good sense to share the key caches. And then it’s an obvious step to share them with trusted friends. And then share them with other people (maybe for a price). And then message security becomes questionable but not in any obviously noticeable way. And then a major key provider suddenly realises they control most of the messaging on the network. And then the major key provider gets hacked. And so on down the slippery slope.

Individual vaults will discard duplicate prefixes, but this is a waste. It’s more economical to keep the duplicates, but since the person generating them can’t use them, they must try to sell them so the work isn’t wasted. So there’s a pretty clear incentive to sell keys if you possibly can, to avoid discarding duplicated work.

Because work is not discarded, people who start generating keys early have an advantage over those who start later. It creates an incentive for early participants to establish many vaults to make section prefixes longer, so newcomers require lots of initial work while the accumulated work of early participants gives them an even greater head start. If I had a lot of resources I’d definitely want the network to grow large to push smaller participants out of viability. Sounds a lot like the direction of bitcoin mining to me.

Back to the optimist angle, this only applies to very large networks, since key generation is viable even for ‘quite large’ networks. But I feel by the time this becomes a problem it’ll also be the hardest time to be solving it. Best to address it beforehand.

Agreed except for encrypting the cache, since the process managing the key finding presumably runs at the same security level as the vault itself, which has the keys and must be secure in the first place.

I think storing privkey + pubkeyprefix is enough. Pubkey can be derived from privkey, but there needs to be a way to quickly look up pubkeyprefix.
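
Something like this is what I have in mind - a minimal sketch, assuming the Python `cryptography` package for ed25519, with the prefix length and storage format purely illustrative:

    # Sketch only: brute-force a key whose public key starts with the target bit-prefix,
    # caching every miss under its own prefix so a later relocation can often be answered
    # by lookup instead of fresh work. Prefix handling and storage format are illustrative.
    from collections import defaultdict
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    PREFIX_BITS = 8  # illustrative; in reality set by the destination section

    def pub_prefix(pub: bytes, bits: int = PREFIX_BITS) -> str:
        """Leading `bits` of the public key as a bit string, e.g. '01101100'."""
        return format(int.from_bytes(pub, "big"), f"0{len(pub) * 8}b")[:bits]

    cache: dict[str, list[bytes]] = defaultdict(list)  # pubkey prefix -> raw private keys

    def find_key_for(target_prefix: str) -> bytes:
        """Return a raw private key whose public key falls under target_prefix."""
        if cache[target_prefix]:
            return cache[target_prefix].pop()  # a previous miss becomes today's hit
        while True:
            priv = Ed25519PrivateKey.generate()
            pub = priv.public_key().public_bytes(
                serialization.Encoding.Raw, serialization.PublicFormat.Raw)
            raw = priv.private_bytes(
                serialization.Encoding.Raw, serialization.PrivateFormat.Raw,
                serialization.NoEncryption())
            prefix = pub_prefix(pub)
            if prefix == target_prefix:
                return raw
            cache[prefix].append(raw)  # miss for this relocation, maybe a hit for the next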

Doesn’t the block include the prior block hash, making every block unique? How are they chained? Doesn’t the ‘chain link’ make every block unique, thus preventing replay?

I’m almost there, but not quite.

The amount of work, sure, it’s probably doable most of the time by most computers.

But the incentives that arise because of it are terrible. It’s a waste of time and energy on an exercise that is not productive for end users. It’s inelegant. It’s poor design and engineering. It’s less scalable.


The table of guesses can be used to see how many vaults really struggle to find a specific key.

For example a network size of 10K vaults, using the row for prefix length 8:

    Prefix Length | guesses needed for a given probability of finding a valid key | Sections | Vaults
                  |       0.1      0.5      0.9     0.99   0.9999                 |          |
                8 |        27      177      588       1K       2K                 |      256 |    13K

This means 10% of vaults (ie 1000 vaults) will find a key within 27 guesses, or rather, 90% of vaults will not find a key within 27 guesses.

50% of vaults will not find a key within 177 guesses.

10% of vaults will not find a key within 588 guesses.

1% of vaults (ie 100 vaults) will not find a key within 1K guesses.

0.01% of vaults (ie 1 vault) will not find a key within 2K guesses.
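
To show where those numbers come from: treating each guess as an independent trial that matches an L-bit prefix with probability 2^-L, the guesses needed to reach success probability p are ln(1-p) / ln(1-2^-L). A quick sketch:

    # Guesses needed so that the probability of having found a prefix-matching key is p,
    # assuming each guess independently matches an L-bit prefix with probability 2**-L.
    import math

    def guesses_needed(p: float, prefix_bits: int) -> int:
        q = 2.0 ** -prefix_bits                    # per-guess success probability
        return round(math.log(1 - p) / math.log(1 - q))

    for p in (0.1, 0.5, 0.9, 0.99, 0.9999):
        print(p, guesses_needed(p, 8))             # 27, 177, 588, 1177, 2353 - the row above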

This idea becomes quite significant at higher network sizes, eg 7B vaults means 700K vaults will not find a key within 1B guesses. This growth in failure as the network grows is quite daunting.

But by the same token, to be an optimist, 1B guesses is about 10h on today’s hardware, so in less than half a day almost all vaults would be able to find a key and take part in a 7B node network. That’s pretty fine by me.

But a 7T node network? I think problems will start. Do we care about 7T networks? I’d like to think so :slight_smile:
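
Extending the same arithmetic with two assumptions - roughly 50 vaults per section, as in the table row above, and roughly 3×10^4 keypair guesses per second (back-of-envelope from the ‘1B guesses is about 10h’ figure) - gives a feel for how the worst-case joining time scales:

    # Rough scaling sketch: guesses and wall-clock time for the unluckiest 0.01% of vaults.
    # Assumptions: ~50 vaults per section (as in the 13K vaults / 256 sections row above)
    # and ~3e4 keypair guesses per second (back-of-envelope from "1B guesses in about 10h").
    import math

    VAULTS_PER_SECTION = 50
    GUESSES_PER_SECOND = 3e4

    def worst_case_hours(network_size: float, p: float = 0.9999) -> float:
        sections = network_size / VAULTS_PER_SECTION
        prefix_bits = math.log2(sections)   # treat prefix length as fractional for a smooth estimate
        guesses = math.log(1 - p) / math.log(1 - 2.0 ** -prefix_bits)
        return guesses / GUESSES_PER_SECOND / 3600

    print(worst_case_hours(7e9))    # ~7B vaults: on the order of 12 hours
    print(worst_case_hours(7e12))   # ~7T vaults: on the order of a year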


Thinking pragmatically, it’s probably worth considering whether the proposed changes are needed from the start or whether it’s possible to add them later. Does it affect consensus? Can two modes of operation be run side-by-side and one phased out gradually? It’s a complex question which I haven’t thought about yet. But it does affect what work gets done now vs later so I think it matters.


Ultimately I feel it comes down to the economic model for safecoin and how the incentives affect network size. If the network never becomes massive then keygen isn’t a problem. But if it becomes massive then keygen will get ugly. The size of the network imo only depends on the as-yet-undesigned safecoin incentive structure.

Exponential growth is real. We can’t ignore it.

8 Likes

What do you mean by today’s hardware? RPi3? Or 8th gen i7 with 10+ cores?

That 10 hours might be 100 or more hours on an RPi3. If vaults become commodity items then we may reach more than a billion vaults within a reasonable time. But if it takes days or a week (the equivalent of 10 hours on a PC) to join the network then that is a worry.

If this one piece of work rules out SBCs like the RPi3, which will be with us for at least 5 more years, then we will lose a major source of vaults; and if we limit it to PCs then it’s a shrinking market.

If we want phones then we need to get rid of the “brute-force” method and your HD keys method still seems to be the better way.

We have to think in terms of each >2TB drive produced could eventually have a small SBC attached and be a vault.

It’s not a case of thinking of what size it will in reality be, but planning for the best outcome. The best outcome is that every internet-connected home will have an SBC+drive vault. Just like every home used to have a rotary dial phone and those who didn’t wanted one (at least in the 1st world).


And it’s worth noting that if we are to distinguish ourselves from blockchain then we cannot be wasting energy on a brute-force style method where a logical method does the job more easily.

The cry will be “energy wasters”. It’s just like blockchain mining. And it’s a billion nodes wasting energy, not a few hundred. (Yes, people won’t consider that it’s only 10 to 100 hours during which the energy is used.)


5 Likes

If the keys were held in memory I’d agree, but I believe Qi’s proposing holding the cache as a file on disk. In that case I think it’s safer to just encrypt the cache.
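
Encrypting it needn’t be heavy, something along these lines - a minimal sketch using the Python `cryptography` package’s Fernet, with the key handling deliberately left as a placeholder:

    # Illustrative only: keep the key cache encrypted at rest with an authenticated
    # symmetric cipher. How the Fernet key itself is protected (e.g. derived from the
    # vault's own credentials) is the real question and is not addressed here.
    # Assumes the cache is JSON-serialisable (e.g. hex-encoded keys).
    import json
    from cryptography.fernet import Fernet

    fernet = Fernet(Fernet.generate_key())  # placeholder; the key must be stored/derived securely

    def save_cache(path: str, cache: dict) -> None:
        with open(path, "wb") as f:
            f.write(fernet.encrypt(json.dumps(cache).encode()))

    def load_cache(path: str) -> dict:
        with open(path, "rb") as f:
            return json.loads(fernet.decrypt(f.read()))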

Don’t want to drag us off topic here, but the current design doesn’t include the hash of the previous block. Some brief points which hopefully don’t confuse more than help:

  1. an absolutely strict order isn’t required (say peers A and B both drop at the same time from the same section, then some remaining peers could validly record this as A first and others as B first - both perspectives are valid)
  2. the signers of a block help in identifying where that block can be valid in the chain
  3. the type of the block helps in identifying where it fits in the chain (e.g. if an elder peer leaves the section either via relocation or due to dropping, then the following block should be promoting an adult or infant to replace it)
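
Purely to illustrate points 2 and 3 (this is not the actual data chain code): a block that carries its type and its signer set is already constrained in where it can validly sit, even without a previous-block hash:

    # Illustrative only - not the real data chain types. The point: without a prev-hash,
    # a block's type and its signer set still constrain where it can validly appear.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Block:
        kind: str               # e.g. "ElderDropped", "Promoted" - names made up for illustration
        subject: str            # the peer the event is about
        signers: frozenset      # the elders whose signatures accompany the block

    def could_follow(prev: Block, nxt: Block) -> bool:
        """Toy version of point 3: losing an elder should be followed by promoting a replacement."""
        if prev.kind == "ElderDropped":
            return nxt.kind == "Promoted"
        return True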

+1

Yes, this is so often the case! I feel the network will be most vulnerable to attack while still relatively small. Maybe not in terms of the number of individual attack attempts but in terms of difficulty (given the ability to run x malicious vaults, these will be a higher proportion of a small network) and in terms of the impact of a successful attack. So there’s a strong incentive to get things secured properly from the start.

Optimisation is a different thing entirely. Assuming a given design is sub-optimal but still secure and feasible for a small/medium sized network, then it’s much easier and probably more sensible to postpone optimising.

I feel the issue we’re discussing here though is a bit of both. The brute force keygen is a sub-optimal design in my opinion, but the targeted relocation is a security concern. Even that’s over-simplifying the case - not using targeted relocation could prove to be infeasible if we end up with highly imbalanced sections!

I guess I’d be more comfortable if we even took the middle ground and went for random relocation and stuck with brute force keygen. But I still feel we can do better than that from the outset! I’d need to do a lot more reading about HDKeypairs to be fully convinced, and even then I think I’d still be slightly more in favour of the random data being built into a peer’s ID since it seems to have a couple of benefits over the HDKeypair approach.

5 Likes

While I do sort of agree here, I think this is probably overstating the case, mainly because the brute-force method will only be used relatively infrequently compared to bitcoin’s proof of work, which is applied to every single block in the chain. Bitcoin mining is mostly wasted energy, whereas our brute-force keygen energy sink should only be a relatively small proportion of the overall farming effort.

3 Likes

I am thinking of the naysayers overstating the case, and while 90% of the population do not understand the mechanics, some/many/most? will end up believing the shock-jocks. I personally realise the energy wastage is not much in the scheme of things, but we will end up lone voices against the few who spread that claim of energy wastage.

My major concern is with the time taken for new/restarted nodes being relocated to do this brute-force when the network is of decent size.

I am thinking that for a few years, while the RPi3 (or 4) is a typical cheap SBC, we may be making them unsuitable for connecting to the network, even though they can operate as a node under all other conditions. I am also thinking that if we want smartphones to be capable of being nodes (on charge/wifi) then we need to be very mindful of the time required. No good for smartphones if they cannot even connect (relocate) while on charge overnight.

Also, at the risk of repeating myself, I feel we might want to be able to have “smart drives”, where one puts an RPi3 (or equivalent) on top of a hard drive (or SD card) and these become a major method of getting vaults/nodes into homes. Imagine paying $20 plus the cost of a hard drive or SD card, and the home becomes a node in the network and the homeowner gets paid for it.

If it requires an i7 PC to get connected in reasonable time then adoption is going to be really slow. Remember most people still turn off their home computers/laptops every day. Having an SBC node solves that issue, and it is a very low-energy device, which keeps the ongoing cost down.

5 Likes

Maybe we’ll see USB keygen ASICs, like we saw consumer-grade USB bitcoin miner ASICs. So RPi + HDD + keygen ASIC could be a solution…? Not ideal but definitely possible.

1 Like

Just saying - the less tech-educated the person is, the harder it will be to gain acceptance when adding more parts/steps.

Just imagine the advertising - buy this box (RPi/SBC), add your USB drive or SD card, and plug the box into the HDMI port of the TV.

Setup your wallet address via bluetooth (phone to SBC)

Then optionally unplug from TV and leave running.

I know a USB dongle is easy enough, but I think you’d have to parcel it with the box transparently to the user.

But of course someone has to develop the dongle. The RPi solution is almost off the shelf stuff and minor mods to the OS card. So buy a presetup one or use a hackable guide.

Anyhow this relies on the dongle not having flaws too.

It’s the time factor to (re)join once the network has reached global status - caused by skipping something that should be done, like your HD keys - that seriously worries me about global acceptance. If we do not aim for global acceptance then SAFE is not S.A.F.E., it’s only S.A.

And I seem to be going a little off topic now, so I should leave this to more knowledgeable people and the topic at hand.

Nail -> head (spot on). Yes this is possible. We can ensure the id struct and its associated methods are written in a manner that allows HD keys or similar in the future as a drop-in. There are a few parts of the code that require upgradeability; these include multihash types as well as serialisation identifiers. So this would not rule out HD key type additions later on (perhaps even during testnets/beta).
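
To illustrate the kind of thing I mean (a rough sketch with made-up tag values, not the actual routing types): a self-describing id with an algorithm tag in front of the key bytes keeps the key type swappable later on:

    # Sketch of a self-describing node id: a one-byte algorithm tag in front of the key
    # bytes, so ed25519, an HD-derived variant, or a future (e.g. post-quantum) scheme can
    # be dropped in without changing the wire format. Tag values and names are made up.
    KEY_TAGS = {0x01: "ed25519", 0x02: "ed25519-hd", 0x03: "reserved-future"}

    def encode_id(tag: int, public_key: bytes) -> bytes:
        return bytes([tag]) + public_key

    def decode_id(raw: bytes) -> tuple[str, bytes]:
        return KEY_TAGS.get(raw[0], "unknown"), raw[1:]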

So initially I feel the simplest approach to get the code up and running is the best method. Testing the whole data chain + merge/split and age/relocation is already a large step, so the simpler right now the better. I think we are all on the same page re useless work and not requiring nodes to do stuff that is not immediately valuable to the network.

How does that sound?

9 Likes

Fine to do whichever to get the implementation done and debugged. But personally I think, from experience, that deciding on a non-time-wasting :wink: method should be done before or during beta. Or else it’ll stay that way until the complaints about the time taken to rejoin the network arise.

2 Likes

Agreed, but it needs to be looked at in great detail and we are pushed too far back with the impl of routing right now. We do need to consider potentially too much work for a node to do and how to alleviate it.

So far we have

  1. Ignore it (mostly me for now :wink: )
  2. HD Keys (a fix, if we can validate the security of these in terms of team knowledge as well as code)
  3. Indirection of the id type via xor mangling of random data.

Option 3 we already had in code and it was terrible - it led to more complex code - but it has some good points; the biggest drawback is needing to go and find keys etc.
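
A simplified sketch of the idea behind option 3 (illustrative only, not the code we had):

    # Illustrative only: one reading of "indirection via XOR mangling".
    # routing_id = public_key XOR section_random, so the id can be placed anywhere in XOR
    # space without grinding keys, but anyone verifying then has to go and find the real key
    # (plus the random data) behind the id.
    import os

    def mangle(public_key: bytes, section_random: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(public_key, section_random))

    public_key = os.urandom(32)         # stand-in for a real ed25519 public key
    section_random = os.urandom(32)     # chosen by the destination section at relocation
    routing_id = mangle(public_key, section_random)
    assert mangle(routing_id, section_random) == public_key   # XOR is its own inverse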

So we have an issue if the network becomes huge (hundreds of millions of nodes), which will happen in time, but we do have another problem that may occur before that, and that is the requirement for quantum-resistant asymmetric keypairs. This could be lattice-based or similar and could possibly use the HD key type trick, but using a grid position or something instead of a point. I think this is actually becoming a real concern now; both MS and IBM have quantum playgrounds online to code in qubits, and CES has advances as well.

So we should look at this properly and not code out the ability to alter the id key type, whether from a plain ed25519 key, an HD key variant of that, or a completely new algorithm altogether.

There are a few things to be done during beta like this, as I said above; making the wire format more formal and upgradable is a big one. Upgrades and upgrade tests by nodes are also big. They are all going to take time, but will be worth it. Not simple, but if we keep things less complex as we iterate then we will be golden. Just need to watch it all really.

6 Likes

Yea, lack of time and other tasks are a b.tch. Shame we can never just work on one thing at a time. :frowning:

Anyhow thanks for the insight and information of where the team is at.

I will stop pushing for the moment :slight_smile: and continue enjoying the journey.

3 Likes

No worries Rob, this is probably one of the best threads from a dev point of view, offering a really neat opportunity for a better design. I think we only gain with these threads and we need to thank @mav for this one. HD keys are a neat solution to prevent the unwanted work, and if done securely I feel sure random relocation works even without network-balancing recursive relocation. Small networks though are really helped via the balancing part, so we will get back to this pretty soon I hope. Thanks again @mav

6 Likes

Sounds good. I think for alpha this strategy is fine, probably even for beta. But upgrades to vaults on the live network are going to be very, very interesting, and a hugely difficult but fascinating problem to investigate.

My relatively-uninformed opinion on this is the network will grow huge much sooner than quantum resistant keypairs are a concern. But you’re right that in general there’s a need to be able to change / upgrade. This affects the design work needed to be done now even though it’s for a future problem, so it’s good to consider.

Glad we could all come to a very similar frame of mind on this topic :slight_smile:

7 Likes

Sharing the key caches among your own vaults is acceptable.
However, I have doubts about a third-party service.
People may use such a service when trying to join the network in time to retain their valuable status.
However, when your status becomes that valuable, will you trust a private key not generated by yourself?

Those who don’t know why it’s untrustworthy will do so. In other words, about 70-85% of the population.

So we would likely see people offer self-contained vaults for sale that utilise a centralised key “service” which guarantees quick connections. You know the kind of thing - advertising the quickest-joining vaults on the market, utilising unique key-gen services.

6 Likes

To add, from an architectural point of view: defeating attacks IMO requires not only that it be randomized but also randomizing group transformations, along the lines of microbial activity as it applies to dynamic random distribution.

Curiously, this is the only search-string entry above for “dynamic random distribution”.

So there is ample opportunity to learn from this …

I found this reference taking it one step further:

“Modeling microbial dynamics in heterogeneous environments: growth on soil carbon sources.”

A good reference place to start if we are considering bio-mimicry as a worthy area of research with regard to creating “dynamic random distribution” and ofc its cousin “dynamic random re-distribution” as it may be applied to Secure Random Re-Location.

2 Likes

This may be a stupid question, but does standard practice involving “right to negotiate” actually apply for nodes operating within the regime of SAFE routing protocols? In other words, if a section is unhealthy and a given infant node shows up to help out, behaves well, doesn’t ask too many questions and does what it’s told, isn’t this all that matters? If the infant node ends up being adversarial, won’t other node aging and defense mechanisms kick in to ban the node from the network? Consider a sly adversary that somehow figured out a way to get this right to negotiate, the group still needs to ensure that they are protected from this crafty devil, right?

Yes, filters are good. Perhaps rather than a deterministic filter, a stochastic one will suffice?

Forgive me if the following real-world analogy is too naive: A widget company (section) is overworked and has need for some additional employees (infants). Their HR department (group) can either screen new applicants based on referrals from neighboring businesses in the community whom they trust (targeted neighbor relocation) or screen applicants based on the pedigree of their resume (HDkeys). These methods are both HR-preferred when compared to the effort required for the company to hire every individual that comes along off the street (unfiltered), pay some employment tax, then ban them from the premises for bad behavior and lack of productivity. However, it’s not like this company is sending a rocket to the moon, they just make widgets, so the value in an overly selective screening process may be limited.

Enter candidate pools, where any candidate that shows up will be placed into a pool for possible consideration. Every so often (random accumulation/wait time), HR selects a candidate from the pool at random. As long as there is a majority of decent applicants currently in the pool at that instant, the chances are good that HR will hire a good worker, and the randomly selected worker is still going to need to pass some entrance exams (PoW/PoR).

However, this doesn’t necessarily stop a mass of applicants from swamping the front doors of the company every day, thereby shutting down operations. To guard against this scenario, priority is given for entrance to the facility (message routing) based on seniority (sending/receiving node age). Thus, HR considers new applicants only when most all other meaningful work has been completed. Incidentally, if production rates are falling too quickly due to lack of workers (rapidly declining section health) then finding new applicants may become the highest priority work in the interim.

Job seekers, for their part, may be impatient, so nothing stops them from leaving to look for a different place of employment, or for a message to arrive from management at another factory (group in neighboring section) indicating that a particular applicant is banned. You could also pit applicants against each other to see who is the most trustworthy with the widget inspection (collective data integrity / resource proofs). Since priority is given to seniority (node age) when leaving the parking lot (message/data routing), shipping and receiving is not held up by too much unproductive traffic, so normal company operations go largely unaffected.

Yes, this was a rather long-winded way to suggest a group acceptance strategy based on random acceptance from infant pools that occurs at random time intervals, while also assigning messaging/routing/hopping priority based on nodal age. Some competition within infant pools, or assigning collective/coordinated PoW, may also be a way to minimize resource needs from the more trusted members of a group and to find out which applicants are untruthful. This may be in line with what @rreive was alluding to. I see that in the time since you posted, and since I had some of these thoughts, dirvine gave some additional clarifications here that I missed the first time reading through, so maybe what I’ve said falls into dirvine’s category 3, i.e. “terrible”.

Anyhow, I apologize if this was counterproductive, but I agree that the idea of random relocation is potentially a very good one, since stochastic Monte Carlo-type processes like this have the potential for maximal scaling. However, there is always a fine line between the law of averages vs. using intelligence to improve efficiency, and right now I’m not experienced enough to judge just how time consuming the “brute force” key generation following the standard protocol will really be relative to other network timings. For this reason I’m looking forward to your analysis of the crowd-sourced keypair generation data. Cheers.
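
To make the pool idea a little more concrete (a toy sketch of the suggestion above, not an actual SAFE protocol):

    # Toy sketch of the "candidate pool" suggestion above - not an actual SAFE protocol.
    # Candidates accumulate in a pool; at randomised intervals the section admits one
    # candidate chosen uniformly at random from whoever is waiting at that instant.
    import random
    from typing import Optional

    pool: list[str] = []    # candidate ids waiting for consideration

    def candidate_arrives(candidate_id: str) -> None:
        pool.append(candidate_id)

    def maybe_admit(mean_wait_events: int = 10) -> Optional[str]:
        """Called on each network event; admits a random candidate roughly every mean_wait_events calls."""
        if pool and random.random() < 1.0 / mean_wait_events:
            return pool.pop(random.randrange(len(pool)))
        return None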

3 Likes

BIP32-like mechanisms for ed25519 etc can be derived using SLIP-0010 “Universal private key derivation from master private key”.

For example, Stellar uses ed25519 and has hierarchical derivation described in SEP-0005 “Key Derivation Methods for Stellar Accounts”

Not sure about BLS.
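
For reference, the ed25519 path in SLIP-0010 needs nothing beyond HMAC-SHA512 and hardened indices, so the derivation itself can be sketched with the Python standard library (derivation only; signing still needs an ed25519 implementation):

    # Sketch of SLIP-0010 derivation for the "ed25519 seed" curve (hardened children only),
    # using just the standard library. This yields (private key, chain code) pairs; turning
    # a private key into a public key or signature still needs an ed25519 implementation.
    import hashlib
    import hmac
    import struct

    def master_key(seed: bytes) -> tuple[bytes, bytes]:
        digest = hmac.new(b"ed25519 seed", seed, hashlib.sha512).digest()
        return digest[:32], digest[32:]            # (private key, chain code)

    def derive_child(priv: bytes, chain_code: bytes, index: int) -> tuple[bytes, bytes]:
        # ed25519 in SLIP-0010 only supports hardened derivation (index >= 2**31)
        data = b"\x00" + priv + struct.pack(">L", index | 0x80000000)
        digest = hmac.new(chain_code, data, hashlib.sha512).digest()
        return digest[:32], digest[32:]

    # e.g. path m/0'/1'
    k, c = master_key(bytes.fromhex("000102030405060708090a0b0c0d0e0f"))
    k, c = derive_child(k, c, 0)
    k, c = derive_child(k, c, 1)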

1 Like