Secure Random Relocation

Just saying - the less tech-educated the person is, the harder it will be to gain acceptance when adding more parts/steps.

Just imagine the advertising - buy this box (RPi/SBC), add your USB drive or SD card, and plug the box into the HDMI port of the TV.

Set up your wallet address via Bluetooth (phone to SBC).

Then optionally unplug from TV and leave running.

I know a USB dongle is easy enough, but I think you’d have to bundle it with the box so it’s transparent to the user.

But of course someone has to develop the dongle. The RPi solution is almost off-the-shelf stuff plus minor mods to the OS card, so buy a pre-set-up one or follow a hackable guide.

Anyhow this relies on the dongle not having flaws too.

It’s just the time factor to (re)join once the network has reached global status, due to something that should be addressed (like your HDkeys), that seriously worries me about global acceptance. If we do not aim for global acceptance then SAFE is not S.A.F.E., it’s only S.A.

And I seem to be going a little off topic now, so I should leave this to more knowledgeable people and the topic at hand.

Nail → head (spot on). Yes, this is possible. We can ensure the id struct and its associated methods are written in a manner that allows HD keys or similar to be dropped in in the future. There are a few parts of the code that require upgradeability; these include multihash types as well as serialisation identifiers. So this would not rule out HD key type additions later on (perhaps even during testnets/beta).
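To make the drop-in idea a bit more concrete, here is a minimal sketch (hypothetical names only, not the actual routing types) of how the id struct could hide key generation behind a trait, so the plain random keypair used today could be swapped for an HD-derived one later without touching callers:

```rust
// Minimal sketch only -- hypothetical names, not the real routing types.
// The id struct takes its keypair from a pluggable scheme, so an HD-derived
// variant can be dropped in later without changing any callers.

/// Strategy for producing a node's keypair (stubbed as raw bytes here;
/// a real implementation would hold ed25519 key material).
trait IdScheme {
    fn generate(&mut self) -> ([u8; 32], [u8; 32]); // (public, secret)
}

/// What ships first: fresh random keys on every (re)join.
struct RandomIdScheme;

impl IdScheme for RandomIdScheme {
    fn generate(&mut self) -> ([u8; 32], [u8; 32]) {
        // Placeholder values so the sketch builds without extra crates;
        // real code would use a CSPRNG and ed25519 keygen.
        ([42u8; 32], [7u8; 32])
    }
}

/// What could be dropped in later: HD derivation from a master seed,
/// bumping the child index on every relocation (SLIP-0010 style).
struct HdIdScheme {
    master_seed: [u8; 32],
    next_index: u32,
}

impl IdScheme for HdIdScheme {
    fn generate(&mut self) -> ([u8; 32], [u8; 32]) {
        self.next_index += 1;
        // Placeholder: derive child `next_index` from `master_seed` here.
        (self.master_seed, [0u8; 32])
    }
}

/// The id struct callers see never changes shape.
struct FullId {
    public: [u8; 32],
    _secret: [u8; 32],
}

impl FullId {
    fn new(scheme: &mut dyn IdScheme) -> Self {
        let (public, secret) = scheme.generate();
        FullId { public, _secret: secret }
    }
}

fn main() {
    let mut scheme = RandomIdScheme;
    let id = FullId::new(&mut scheme);
    println!("node public id starts with {:?}", &id.public[..4]);
}
```

None of this is the real code, just the shape that keeps the HD option open later.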

So initially I feel the simplest approach to get the code up and running is the best method. Testing the whole data chain + merge/split and age/relocation is already a large step, so the simpler right now the better. I think we are all on the same page re useless work and not requiring nodes to do stuff that is not immediately valuable to the network.

How does that sound?

9 Likes

Fine to do whichever to get the implementation done and debugged. But personally I think, from experience, that deciding on a non-time-wasting :wink: method should be done before or during beta. Otherwise it’ll stay that way until complaints arise about the time taken to rejoin the network.

2 Likes

Agreed, but it needs to be looked at in great detail and we are pushed too far back with the impl of routing right now. We do need to consider the potential for too much work for a node to do and how to alleviate it.

So far we have

  1. Ignore it (mostly me for now :wink: )
  2. HD Keys (a fix if we can validate the security of these in terms of team knowledge as well as code)
  3. Indirection of the id type via xor mangling of random data.

Option 3 we already had in code and it was terrible; it led to more complex code. It has some good points, but the biggest drawback is needing to go and find keys etc.
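For anyone who didn’t see the old code, option 3 boiled down to something like the sketch below (simplified, hypothetical types): the advertised id is the real public key XORed with a random mask, so relocation only changes the mask, but every verifier then has to go and find the mask/key to do anything useful, which is where the complexity crept in.

```rust
// Simplified sketch of "indirection via xor mangling" (option 3 above).
// Hypothetical types; the removed code was more involved than this.

/// The id a node advertises: real key material hidden behind a random mask.
struct MangledId {
    mangled: [u8; 32], // real_public XOR mask
    mask: [u8; 32],    // must be stored/looked up to recover the key
}

fn xor(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

impl MangledId {
    /// Relocation just picks a new mask; the underlying keypair is reused.
    fn relocate(real_public: &[u8; 32], new_mask: [u8; 32]) -> Self {
        MangledId { mangled: xor(real_public, &new_mask), mask: new_mask }
    }

    /// The drawback: every verifier needs the mask to get back the real key.
    fn recover_public(&self) -> [u8; 32] {
        xor(&self.mangled, &self.mask)
    }
}

fn main() {
    let real_public = [7u8; 32];
    let id = MangledId::relocate(&real_public, [99u8; 32]);
    assert_eq!(id.recover_public(), real_public);
    println!("recovered key matches");
}
```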

So we have an issue if the network becomes huge (hundreds of millions of nodes), which will happen in time, but we have another problem that may occur before that, and that is the requirement for quantum-resistant asymmetric keypairs. These could be lattice-based or similar, and possibly could use the HD key type trick, but using a grid position or something instead of a point. I think this is actually becoming a real concern now: both MS and IBM have quantum playgrounds online to code in qubits, and CES has shown advances as well.

So we should look at this properly and not code out the ability to alter the id key type, whether a plain ed25519 key, an HD key variant of that, or a completely new algorithm altogether.
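One way to keep that ability is to serialise an explicit algorithm tag with every key, multihash-style, so a later version can introduce new key types without breaking old parsers. A rough sketch with made-up variants and values:

```rust
// Sketch of keeping the id key type open to future algorithms (hypothetical).
// An explicit tag travels with every serialised key, so new variants
// (e.g. post-quantum schemes) can be added without breaking old parsers.

#[repr(u8)]
#[derive(Clone, Copy)]
enum KeyAlgorithm {
    Ed25519 = 0x01,
    Ed25519Hd = 0x02, // HD-derived variant of the same curve
    // higher values reserved for e.g. lattice-based schemes
}

struct PublicId {
    algorithm: KeyAlgorithm,
    key_bytes: Vec<u8>, // length depends on the algorithm
}

impl PublicId {
    fn serialise(&self) -> Vec<u8> {
        let mut out = vec![self.algorithm as u8];
        out.extend_from_slice(&self.key_bytes);
        out
    }
}

fn main() {
    let id = PublicId { algorithm: KeyAlgorithm::Ed25519, key_bytes: vec![0u8; 32] };
    println!("wire bytes: {:?}", id.serialise());
}
```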

There are a few things to be done during beta like this, as I said above; making the wire format more formal and upgradable is a big one. Upgrades, and upgrade tests by nodes, are also big. They are all going to take time, but will be worth it. Not simple, but if we keep things less complex as we iterate then we will be golden. Just need to watch it all really.
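On the wire-format point, the usual approach is a small explicit header on every serialised message so the encoding can evolve. A hedged sketch with made-up field names:

```rust
// Hypothetical sketch of an upgradable wire format: a fixed header carries a
// version and payload length, so newer nodes can extend the payload while
// older nodes can still frame (and skip) messages they don't understand.

struct WireHeader {
    version: u16,     // bumped when the payload encoding changes
    msg_type: u16,    // message kind within that version
    payload_len: u32, // lets a peer skip unknown message types safely
}

impl WireHeader {
    fn encode(&self) -> [u8; 8] {
        let mut out = [0u8; 8];
        out[0..2].copy_from_slice(&self.version.to_be_bytes());
        out[2..4].copy_from_slice(&self.msg_type.to_be_bytes());
        out[4..8].copy_from_slice(&self.payload_len.to_be_bytes());
        out
    }

    fn decode(bytes: &[u8; 8]) -> Self {
        WireHeader {
            version: u16::from_be_bytes([bytes[0], bytes[1]]),
            msg_type: u16::from_be_bytes([bytes[2], bytes[3]]),
            payload_len: u32::from_be_bytes([bytes[4], bytes[5], bytes[6], bytes[7]]),
        }
    }
}

fn main() {
    let header = WireHeader { version: 1, msg_type: 7, payload_len: 128 };
    let decoded = WireHeader::decode(&header.encode());
    assert_eq!(decoded.version, 1);
    println!("round-trip ok, payload_len = {}", decoded.payload_len);
}
```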

6 Likes

Yea, lack of time and other tasks are a b.tch. Shame we can never just work on one thing at a time. :frowning:

Anyhow thanks for the insight and information of where the team is at.

I will stop pushing for the moment :slight_smile: and continue enjoying the journey.

3 Likes

No worries Rob, this is probably one of the best threads from a dev point of view and it offers a really neat opportunity for a better design. I think we only gain with these threads and we need to thank @mav for this one. HDkeys are a neat solution to prevent the unwanted work, and if done securely I feel sure random relocation works even without network-balancing recursive relocation. Small networks, though, are really helped via the balancing part, so we will get back to this pretty soon I hope. Thanks again @mav

6 Likes

Sounds good. I think for alpha this strategy is fine, probably even for beta. But live-network upgrades to vaults are going to be very, very interesting and a hugely difficult but fascinating problem to investigate.

My relatively uninformed opinion on this is that the network will grow huge much sooner than quantum-resistant keypairs become a concern. But you’re right that in general there’s a need to be able to change/upgrade. This affects the design work that needs to be done now even though it’s for a future problem, so it’s good to consider.

Glad we could all come to a very similar frame of mind on this topic :slight_smile:

7 Likes

Sharing the key caches among your own vaults is acceptable.
However, I have doubts about a third-party service.
People may use such a service when trying to join the network in time to retain their valuable status.
But when your status becomes that valuable, will you trust a private key not generated by yourself?

Those who don’t know why it’s untrustworthy will do so. In other words, about 70-85% of the population.

So we would likely see people offering self-contained vaults for sale that use a centralised “service” guaranteeing quick connections via this centralised key service. You know, they’d advertise the quickest-joining vaults on the market, utilising unique key-gen services.

6 Likes

To add, from an architectural point of view: defeating attacks IMO requires it to be randomized, but also requires randomizing group transformations along the lines of microbial activity as it applies to dynamic random distribution.

Curiously, this is the only search string entry above for “dynamic random distribution”.

So there is ample opportunity to learn from this …

I found this reference taking it one step further:

“Modeling microbial dynamics in heterogeneous environments: growth on soil carbon sources.”

A good reference place to start if we are considering bio-mimicry as a worthy area of research with regard to creating “dynamic random distribution” and ofc its cousin “dynamic random re-distribution” as it may be applied to Secure Random Re-Location.

2 Likes

This may be a stupid question, but does standard practice involving “right to negotiate” actually apply for nodes operating within the regime of SAFE routing protocols? In other words, if a section is unhealthy and a given infant node shows up to help out, behaves well, doesn’t ask too many questions and does what it’s told, isn’t this all that matters? If the infant node ends up being adversarial, won’t other node aging and defense mechanisms kick in to ban the node from the network? Consider a sly adversary that somehow figured out a way to get this right to negotiate: the group still needs to ensure that they are protected from this crafty devil, right?

Yes, filters are good. Perhaps rather than a deterministic filter, a stochastic one will suffice?

Forgive me if the following real-world analogy is too naive: A widget company (section) is overworked and has need for some additional employees (infants). Their HR department (group) can either screen new applicants based on referrals from neighboring businesses in the community whom they trust (targeted neighbor relocation) or screen applicants based on the pedigree of their resume (HDkeys). Both methods are HR-preferred compared to the effort required for the company to hire every individual that comes along off the street (unfiltered), pay some employment tax, then ban them from the premises for bad behavior and lack of productivity. However, it’s not like this company is sending a rocket to the moon; they just make widgets, so the value of an overly selective screening process may be limited.

Enter candidate pools, where any candidate that shows up is placed into a pool for possible consideration. Every so often (random accumulation/wait time), HR selects a candidate from the pool at random. As long as a majority of the applicants currently in the pool at that instant are decent, the chances are good that HR will hire a good worker, and the randomly selected worker still needs to pass some entrance exams (PoW/PoR). However, this doesn’t necessarily stop a mass of applicants from swamping the front doors of the company every day, thereby shutting down operations. To guard against this scenario, priority for entrance to the facility (message routing) is given based on seniority (sending/receiving node age). Thus, HR considers new applicants only when almost all other meaningful work has been completed. Incidentally, if production rates are falling too quickly due to lack of workers (rapidly declining section health), then finding new applicants may become the highest-priority work in the interim.

Job seekers, for their part, may be impatient, so nothing stops them from leaving to look for a different place of employment, or a message may arrive from management at another factory (group in a neighboring section) indicating that a particular applicant is banned. You could also pit applicants against each other to see who is the most trustworthy with the widget inspection (collective data integrity / resource proofs). Since priority is given to seniority (node age) when leaving the parking lot (message/data routing), shipping and receiving is not held up by too much unproductive traffic, so normal company operations go largely unaffected.
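To make the analogy concrete, here is a toy sketch (hypothetical types, no claim it matches routing internals) of the mechanical core: candidates accumulate in a pool, the section admits one uniformly at random every so often, and queued messages are served oldest-sender-first.

```rust
// Toy sketch of the "infant pool" idea: candidates accumulate, the section
// admits one at random every so often, and queued messages are served
// oldest-sender-first. Hypothetical types, not the routing implementation.

use std::cmp::Ordering;
use std::collections::BinaryHeap;

struct Candidate {
    name: String,
}

struct QueuedMessage {
    sender_age: u32,
    body: String,
}

// Order messages so that the highest sender age pops first.
impl Ord for QueuedMessage {
    fn cmp(&self, other: &Self) -> Ordering {
        self.sender_age.cmp(&other.sender_age)
    }
}
impl PartialOrd for QueuedMessage {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl PartialEq for QueuedMessage {
    fn eq(&self, other: &Self) -> bool {
        self.sender_age == other.sender_age
    }
}
impl Eq for QueuedMessage {}

struct Section {
    infant_pool: Vec<Candidate>,
    queue: BinaryHeap<QueuedMessage>, // age-prioritised routing queue
}

impl Section {
    /// Called at random intervals: admit one candidate chosen at random.
    fn admit_random(&mut self, pseudo_random: usize) -> Option<Candidate> {
        if self.infant_pool.is_empty() {
            return None;
        }
        let index = pseudo_random % self.infant_pool.len();
        Some(self.infant_pool.swap_remove(index))
    }
}

fn main() {
    let mut section = Section {
        infant_pool: vec![
            Candidate { name: "infant-a".into() },
            Candidate { name: "infant-b".into() },
        ],
        queue: BinaryHeap::new(),
    };
    section.queue.push(QueuedMessage { sender_age: 2, body: "low priority".into() });
    section.queue.push(QueuedMessage { sender_age: 9, body: "elder message".into() });

    if let Some(admitted) = section.admit_random(7) {
        println!("admitted {}", admitted.name);
    }
    // The elder (age 9) message is served before the age-2 one.
    println!("next message from age {}", section.queue.pop().unwrap().sender_age);
}
```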

Yes, this was a rather long-winded way to suggest a group acceptance strategy based on random acceptance from infant pools at random time intervals, while also assigning messaging/routing/hopping priority based on node age. Some competition within infant pools, or assigning collective/coordinated PoW, may also be a way to minimize resource needs from the more trusted members of a group and to find out which applicants are untruthful. This may be in line with what @rreive was alluding to. I see that in the time since you posted, and since I had some of these thoughts, dirvine gave some additional clarifications here that I missed the first time reading through, so maybe what I’ve said falls into dirvine’s category 3, i.e. “terrible”.

Anyhow, I apologize if this was counterproductive, but I agree that the idea of random relocation is potentially a very good one, since stochastic Monte Carlo type processes like this have the potential for maximal scaling. However, there is always a fine line between the law of averages and using intelligence to improve efficiency, and right now I’m not experienced enough to judge just how time-consuming the “brute force” key generation following the standard protocol will really be relative to other network timings. For this reason I’m looking forward to your analysis of the crowd-sourced keypair generation data. Cheers.

3 Likes

BIP32-like mechanisms for ed25519 etc. can be derived using SLIP-0010, “Universal private key derivation from master private key”.

For example, Stellar uses ed25519 and has hierarchical derivation described in SEP-0005 “Key Derivation Methods for Stellar Accounts”

Not sure about BLS.
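For reference, SLIP-0010’s ed25519 path only defines hardened derivation and is just HMAC-SHA512 applied twice; a rough sketch of the two steps, assuming the hmac and sha2 crates (not production code: no path parsing, no key zeroisation):

```rust
// Rough sketch of SLIP-0010 hardened derivation for ed25519 (only hardened
// children are defined for this curve). Assumes the `hmac` and `sha2` crates.

use hmac::{Hmac, Mac};
use sha2::Sha512;

type HmacSha512 = Hmac<Sha512>;

/// Master (secret key, chain code) from a seed: HMAC-SHA512("ed25519 seed", seed).
fn master_from_seed(seed: &[u8]) -> ([u8; 32], [u8; 32]) {
    let mut mac = HmacSha512::new_from_slice(b"ed25519 seed").expect("key length ok");
    mac.update(seed);
    split(mac.finalize().into_bytes().as_slice())
}

/// Derive hardened child `index` (the 2^31 offset is applied here).
fn derive_child(parent_key: &[u8; 32], chain_code: &[u8; 32], index: u32) -> ([u8; 32], [u8; 32]) {
    let hardened = 0x8000_0000u32 | index;
    let mut mac = HmacSha512::new_from_slice(chain_code).expect("key length ok");
    mac.update(&[0u8]);                  // 0x00 prefix per SLIP-0010
    mac.update(parent_key);              // ser256(k_par)
    mac.update(&hardened.to_be_bytes()); // ser32(i)
    split(mac.finalize().into_bytes().as_slice())
}

/// Split a 64-byte HMAC output into (key, chain code).
fn split(i: &[u8]) -> ([u8; 32], [u8; 32]) {
    let mut key = [0u8; 32];
    let mut chain = [0u8; 32];
    key.copy_from_slice(&i[..32]);
    chain.copy_from_slice(&i[32..]);
    (key, chain)
}

fn main() {
    let (master_key, master_chain) = master_from_seed(b"some 16+ byte seed value.......");
    // e.g. two hardened steps from the master key (a path like m/44'/0').
    let (k1, c1) = derive_child(&master_key, &master_chain, 44);
    let (k2, _c2) = derive_child(&k1, &c1, 0);
    println!("derived key bytes: {:02x?}", &k2[..8]);
}
```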

1 Like