I suppose the downside of sim 2 and the greater number of sections is the much higher churn (ie network overhead)?
There are more churns in simulation 2 simply because the network is bigger, and it is bigger because there are fewer nodes that cannot connect. So, no downside in this area.
The number of churns is even smaller if we look at the figures when simulation 2 had a similar number of nodes:
| Metrics | Simulation 1 at step 890000 | Simulation 2 at step 260000 |
| --- | --- | --- |
| Adds | 800814 | 234114 |
| Drops | 62523 | 18053 |
| Rejoins | 26663 | 7833 |
| Relocations | 503815 | 376067 |
| Rejections | 578686 | 56978 |
| Churns | 1912122 | 1021403 |
| Sections | 8258 | 8358 |
| Section nodes | 161447 | 163394 |
| Left nodes | 26287 | 3997 |
| Rejection rate | 72% | 27% |
Simulation 2 is much better than simulation 1, except for the number of distinct prefix lengths, which varies from 3 to 4: the values are better (I mean the prefix lengths are greater) but they are slightly more dispersed than in simulation 1.
So, I improved the program to make sure that when a node is relocated to a section it is placed in the section half having fewer adults. The simulation is running right now; results are expected in a few hours.
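As a rough illustration of that balancing rule (not the actual simulation code, and all names are hypothetical): count the adults in each half of the destination section and force the bit just after the section prefix in the relocated node's new name so that it lands in the emptier half. Names are simplified to left-aligned u64 values.

```rust
// Hypothetical sketch: place a relocated node in the half of the destination
// section that currently has fewer adults, by forcing the bit just after the
// section prefix in its new name. Names are simplified to u64 values.
fn balanced_name(prefix_len: u8, adult_names: &[u64], candidate_name: u64) -> u64 {
    // Bit immediately after the section prefix (bit 0 is the most significant).
    let half_bit = 1u64 << (63 - prefix_len as u32);
    let ones = adult_names.iter().filter(|&&name| name & half_bit != 0).count();
    let zeros = adult_names.len() - ones;
    if zeros <= ones {
        candidate_name & !half_bit // place the node in the "0" half
    } else {
        candidate_name | half_bit // place the node in the "1" half
    }
}
```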
Results with the new algorithm to balance the sections surpass all expectations: there is only one prefix length, except during the period of transition from one prefix length to the next, where there are 2.
As a consequence, the number of sections is now a stepped curve equaling exactly a power of 2 between each transition (in green):
The rejection rate has large oscillations with minimum values at transitions:
The number of nodes is slightly better:
But the real gain of the new algorithm is the homogeneity of the network.
Nice work Thierry, as always.
The question I ask myself is whether the network can get more than half a million Vaults in a relatively short time. Which leads me to insist that, beyond the search for technical solutions, socioeconomic factors are critical and must be taken into consideration.
What is the expected number of Vaults at the beginning and what growth is expected?
Do we want a Big Bang type of growth or a slower one?
What type of distribution is going to be chosen? Linear, exponential, logarithmic …? And to what extent will this affect the number of Vaults and their growth?
Who and why would someone spend resources attacking the network?
etc etc…
Yesterday I played with the Adam simulation. These are the results up to 12000000 iterations with standard parameters. Strange behaviour: huge rejections, and from 6000000 iterations onwards there is no change in the metrics.
Iterations: 12000000, nodes: 1183, sections: 68, merges: 78, splits: 1217, relocations: 1143 rejections: 510853029 }, AgeDist { min: 4, max: 1079, avg: 258.13 }, SectionSizeDist { min: 9, max: 27, avg: 17.40 }, PrefixLenDist { min: 1, max: 8, avg: 7.12 }, MaxPrefixLenDiff: 7
And these are the results with relocation strategy youngest-first and 50000000 iterations:
iterations: 50000000, nodes: 1606, sections: 94, merges: 104, splits: 1725, relocations: 2710 rejections: 378513740 }, AgeDist { min: 4, max: 1247, avg: 284.62 }, SectionSizeDist { min: 9, max: 32, avg: 17.09 }, PrefixLenDist { min: 1, max: 9, avg: 7.62 }, MaxPrefixLenDiff: 8
I don’t think we can choose the kind of growth because it will depend on farmers’ willingness to provide vaults and keep them running.
We can help the growth by:

Providing incentives for farming (this is what is intended with safecoin rewards)

Reducing barriers to farmers wanting to participate (by that I mean minimizing the rejection rate)
But apart from these elements, the growth is largely outside our control.
That’s strange. I started with @bart’s code and I didn’t study @madadam’s code, nor did I test it. So, I cannot tell where the problem is, but these results are not the expected ones. Maybe the code isn’t ready yet. It could also be a parameterization problem.
That could be the parameter that causes the problem. @bart’s code implements the oldest-first strategy. It is hard coded, and I left it as is, because @mav made a convincing case that it is the right choice (youngest first is a double discrimination for relocation of old nodes).
This weekend I have added 2 small modifications:

I have implemented my idea to replace relocation to the weakest neighbouring section with a two-step relocation: first to a distant random section and then to the weakest neighbour of this section (a minimal sketch follows these two points). This idea was expressed in this post.

The second one is not an improvement but a simplification: I have used the random() function to compute, well … the random section. But in reality vaults cannot use this function because the result would be different for each vault. The randomness must be computed from common data, typically the hash of an agreed state.
For example, the current routing code relocates a joining node to the hash of its name concatenated with the ids of its 2 closest nodes. I don’t think that this algorithm will be kept, but I don’t need to know what the new algorithm will be. The important characteristic for the simulation is that it is a random address; how it will be computed in the real network doesn’t matter.
The same is true for the age of the node to relocate: in @bart’s code it was computed from the hash of the event, but when we look at @madadam’s repo, it has been changed to something else. I make the same argument here: we don’t need to know the final implementation, we only need to compute a random age.
So, I have replaced this usage of the hash function with the random function as well. The hash has then been completely removed, because it was not being used anywhere else.
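For reference, here is a minimal sketch of that two-step relocation, under the simplification above that the distant section is simply picked at random. All type and function names are hypothetical, not the simulation's actual API.

```rust
type SectionId = u64; // hypothetical section identifier

// Step 1 (done elsewhere): `random_distant` is picked at random.
// Step 2: that section locally hands the node over to its weakest neighbour,
// but only if this neighbour is in worse shape than the section itself.
fn choose_relocation_target(
    random_distant: SectionId,
    node_count: impl Fn(SectionId) -> usize,
    neighbours: impl Fn(SectionId) -> Vec<SectionId>,
) -> SectionId {
    neighbours(random_distant)
        .into_iter()
        .min_by_key(|&s| node_count(s))
        .filter(|&s| node_count(s) < node_count(random_distant))
        .unwrap_or(random_distant)
}
```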
Results are very similar but are slightly less stable (very slightly). I observed two instances with 3 distinct prefix lengths (for example at iteration 310000):
| Prefix len | Count | Average | Min | Max | Standard dev |
| --- | --- | --- | --- | --- | --- |
| 12 | 1 | 17.00 | 17 | 17 | None |
| 13 | 247 | 21.62 | 11 | 25 | 3.35 |
| 14 | 15886 | 12.43 | 8 | 16 | 1.20 |
| All | 16134 | 12.57 | 8 | 25 | 1.69 |
Also, there is sometimes a merge that happens during a stable period with a power of 2 number of sections (for example at iterations 390000, 400000, 410000):
| Prefix len | Count | Average | Min | Max | Standard dev |
| --- | --- | --- | --- | --- | --- |
| 14 | 16384 | 15.38 | 10 | 20 | 1.11 |
| All | 16384 | 15.38 | 10 | 20 | 1.11 |

| Prefix len | Count | Average | Min | Max | Standard dev |
| --- | --- | --- | --- | --- | --- |
| 13 | 1 | 19.00 | 19 | 19 | None |
| 14 | 16382 | 15.77 | 12 | 20 | 1.16 |
| All | 16383 | 15.77 | 12 | 20 | 1.16 |

| Prefix len | Count | Average | Min | Max | Standard dev |
| --- | --- | --- | --- | --- | --- |
| 14 | 16384 | 16.16 | 12 | 21 | 1.18 |
| All | 16384 | 16.16 | 12 | 21 | 1.18 |
Intuitively, I would have thought that relocating nodes at a distance is more secure, but these results do not prove it. Though the difference is tenuous: one section out of 16000, and I don’t show the curves because they are indistinguishable from the previous ones.
Maybe I should add a parameter to allow comparison with more tests: something like RelocationScope with 2 values (Local and Distant).
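A minimal sketch of what that parameter could look like (hypothetical names; the actual simulation structures may differ):

```rust
/// Hypothetical parameter controlling where relocated nodes are sent.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum RelocationScope {
    /// Relocate to the weakest neighbouring section.
    Local,
    /// Relocate to a random distant section, then to its weakest neighbour.
    Distant,
}

/// Simulation parameters (other fields elided).
struct Params {
    relocation_scope: RelocationScope,
}
```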
I think this mainly applies to attack scenarios like the google attack where particular sections are flooded. This is where the relocation distance matters more.
Exactly. The point of distant random relocation is not better metrics but avoiding possible attacks that take advantage of relocation in the neighbourhood. An attacker could focus their efforts on a few sections in whose vicinity they have several nodes and, via DDoS, eliminate certain competitors or even force merges. This would allow them to gain positions of advantage to take control of a group.
This type of attack has similarities to games like Go, where machine learning via artificial neural networks, like AlphaGo, is proving to be extremely effective.
This weekend I greatly optimized the network simulation. For example, for 900000 iterations the duration went from 5h48 to 1mn20.
Here are the optimizations, with the new duration for 100000 iterations displayed at each step (initially 2mn52):

Reverse the field order in the Prefix structure so that entries in BTree maps are ordered by bit value instead of length value. This allows finding the section containing a node by locating the last entry not greater than its name, instead of iterating over all the entries (a sketch follows this list).
=> 2mn48 
To get the neighbouring sections, make a series of targeted tests instead of testing all possible sections (see the sketch after this list). The algorithm is the following: for each position in the prefix:

- flip the corresponding bit of the prefix and test it
- if the test is unsuccessful, shorten the prefix and test again
- shorten until success or until the modified bit is reached

Note: testing longer prefixes is of no use because the corresponding sections are healthier, so we don’t want to relocate nodes to them.
=> 2mn44 

Cache the sections’ drop weights. Recompute them only when a section is split or merged, and update them when a node is added or removed. Use this cache to compute the total network drop weight (a sketch of the cache also follows this list).
=> 2mn 
Use this cache to compute the random node to drop
=> 1mn29 
Capture output at the end of the simulation instead of at every event.
=> 3s
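To make the first two optimizations more concrete, here is a rough, self-contained sketch of an ordered Prefix key and the targeted neighbour search. The struct layout and helper names are hypothetical, not the simulation's actual code; names are simplified to left-aligned u64 values.

```rust
use std::collections::BTreeMap;

// Field order matters: the derived Ord compares `bits` first, so BTreeMap
// entries are ordered by bit value rather than by prefix length.
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Debug)]
struct Prefix {
    bits: u64, // prefix bits, left-aligned in a u64
    len: u8,   // number of significant bits
}

impl Prefix {
    fn matches(&self, name: u64) -> bool {
        self.len == 0 || (name ^ self.bits) >> (64 - self.len as u32) == 0
    }

    // Keep only the first `len` bits of a left-aligned value.
    fn truncated(bits: u64, len: u8) -> Prefix {
        let bits = if len == 0 { 0 } else { bits & (!0u64 << (64 - len as u32)) };
        Prefix { bits, len }
    }
}

type Sections = BTreeMap<Prefix, Vec<u64>>;

// Optimization 1: find the section containing `name` by taking the last entry
// whose key is not greater than the name, instead of scanning all entries.
fn section_of(sections: &Sections, name: u64) -> Option<Prefix> {
    let key = Prefix { bits: name, len: u8::MAX };
    sections
        .range(..=key)
        .next_back()
        .map(|(&prefix, _)| prefix)
        .filter(|prefix| prefix.matches(name))
}

// Optimization 2: targeted neighbour search. For each bit position of the
// prefix, flip that bit and test; if the flipped prefix is not a section,
// shorten it and test again, down to the flipped bit itself. Longer prefixes
// are not tested because those sections are healthier.
fn neighbours(sections: &Sections, prefix: Prefix) -> Vec<Prefix> {
    let mut result = Vec::new();
    for pos in 0..prefix.len {
        let flipped = prefix.bits ^ (1u64 << (63 - pos as u32));
        let mut len = prefix.len;
        while len > pos {
            let candidate = Prefix::truncated(flipped, len);
            if sections.contains_key(&candidate) {
                result.push(candidate);
                break;
            }
            len -= 1;
        }
    }
    result
}
```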
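And a similarly hypothetical sketch of the drop-weight cache: one cached sum per section plus a network total, updated incrementally, used both for the network-wide drop weight and for picking the weighted random node to drop.

```rust
use std::collections::BTreeMap;

// Hypothetical drop-weight cache: rebuilt only when a section splits or
// merges, updated incrementally when a node is added or removed.
struct DropWeights {
    per_section: BTreeMap<u64, f64>, // section id -> sum of its nodes' drop weights
    total: f64,                      // cached network-wide sum
}

impl DropWeights {
    fn new() -> Self {
        DropWeights { per_section: BTreeMap::new(), total: 0.0 }
    }

    // Positive delta when a node is added, negative when it is removed.
    fn update(&mut self, section: u64, delta: f64) {
        *self.per_section.entry(section).or_insert(0.0) += delta;
        self.total += delta;
    }

    // Pick the section that the dropped node comes from; `roll` is a uniform
    // random value in [0, total). The node inside the section is then chosen
    // by the same weighted principle (not shown).
    fn section_to_drop_from(&self, roll: f64) -> Option<u64> {
        let mut acc = 0.0;
        for (&section, &weight) in &self.per_section {
            acc += weight;
            if roll < acc {
                return Some(section);
            }
        }
        None
    }
}
```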
Additionally, I have:

corrected the problem that only neighbouring sections were considered to find an unhealthy section. I was forgetting that the distant section itself could be unhealthy too. The result is slightly improved (3000 more nodes out of a total of 580000 after 900000 iterations).

removed the inc_age parameter because I don’t see why the age of nodes would be incremented at a section merge or split (besides, only those in the first half were incremented in the case of a split).

added a parameter to specify whether node relocations are local or distant

displayed the number of distinct prefix lengths. A homogeneous network should have at most 2 distinct prefix lengths.
With these optimizations simulations with 10000000 iterations become possible (they take 2h52mn on my PC). Here are the results with max_young=4, init_age=4, relocation_rate=aggressive, relocation_scope=local:
With relocation_scope=distant the results are similar except for the discrepancies indicated last week, with appearance of 3 distinct prefix lengths:
Sections with 3 distinct prefix lengths appear in 129 iterations, sometimes with several sections having the shorter prefix, up to 6 sections as at iteration 9670000:
| Prefix len | Count | Average | Min | Max | Standard dev |
| --- | --- | --- | --- | --- | --- |
| 17 | 6 | 16.83 | 14 | 19 | 1.94 |
| 18 | 2560 | 19.01 | 9 | 25 | 4.86 |
| 19 | 519144 | 12.06 | 8 | 16 | 1.16 |
| All | 521710 | 12.09 | 8 | 25 | 1.30 |
I think the explanation is the following: with random relocations applied to thousands of sections, it happens that some of them lose a lot of nodes. With local relocations, this creates an excess of nodes in the sections around such a weak section, so there is a high probability that these nodes will come back to their initial section. With distant relocation the nodes are sent far away and the probability that a node is relocated to the neighbourhood is low, or more exactly standard, without any consideration that there is a weak section there. When this weak section has fewer than 8 nodes, it will merge and create a new section with a shorter prefix length, hence the appearance of 3 distinct prefix lengths.
Distant relocations are more secure because they prevent section targeting, but this instability is harmful. So, to try to solve both problems I have added a third value to relocation_scope: a mixed value which indicates a 50%/50% probability between local and distant relocation.
Simulation with relocation_scope=mixed shows that the instability has not totally disappeared, but the results are better: there are only 9 iterations with 3 distinct prefix lengths and each time there is only one section with the shorter prefix.
Then I wanted to find the right mix of local and distant relocations. So, I renamed this parameter and modified it to take a numeric value for the distant relocation probability. I got the following results with various values for the distant relocation probability:
| Distant relocation probability | Iterations with 3 distinct prefix lengths |
| --- | --- |
| 100% | 129 |
| 70% | 96 |
| 50% | 9 |
| 30% | 29 |
| 0% | 0 |
The result for 30% is disappointing because it means we still get 3 distinct prefix lengths even with a low rate of distant relocations, which is both insecure and unstable.
Then I modified the program to not relocate a node when the relocation would trigger a section merge (meaning its section’s size is GROUP_SIZE before relocation). That was not enough, so I changed the condition to size <= GROUP_SIZE + 1 (meaning that I also don’t want to risk a merge if a node drops from the section afterwards). There is still one iteration with 3 distinct prefix lengths at 100% distant relocation probability, but the end is near.
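Here is a rough sketch of the combined decision, assuming GROUP_SIZE = 8 and hypothetical names: a numeric probability for choosing a distant relocation, plus the guard that skips the relocation when the source section is at or just above GROUP_SIZE.

```rust
const GROUP_SIZE: usize = 8; // minimum section size before a merge is triggered

enum Target {
    Local,   // weakest neighbouring section
    Distant, // random distant section, then its weakest neighbour
}

// `roll` is a uniform random value in [0, 1); `distant_probability` is the
// numeric parameter discussed above (e.g. 0.8 for the 80% run).
fn relocation_decision(
    source_section_size: usize,
    distant_probability: f64,
    roll: f64,
) -> Option<Target> {
    // Skip the relocation when losing a node (plus one possible later drop)
    // could push the source section below GROUP_SIZE and force a merge.
    if source_section_size <= GROUP_SIZE + 1 {
        return None;
    }
    if roll < distant_probability {
        Some(Target::Distant)
    } else {
        Some(Target::Local)
    }
}
```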
The last try was with the same condition but with 80% distant relocation probability, and at last there are at most 2 distinct prefix lengths throughout the simulation. With an 80% probability of distant relocation each time a node is relocated, targeting a section remains impossible.
So, finally I have got a network that is both stable and secure.
Another interesting feature of this algorithm is that the minimum number of nodes in a section is higher (9 instead of 8):
Not sure, but this may be overkill. In the production code, there is no such knowledge of the whole network.
Then trying to figure out which section is the least healthy one will be difficult and will at least involve a certain amount of messages to be exchanged.
There is no need to know the whole network. The distant section is chosen at random. If this section has a neighbour that is in worse shape than itself then the node is relocated to this neighbour. This selection is done locally by the distant section.
I think there is something similar that is already implemented in current routing code for relocation of joining nodes (in handle_expect_candidate).
Yes, you are right.
As long as the least healthy section is chosen only among the neighbours, not among the whole network, there is no need to have knowledge of the whole network.
There might be multiple hops of relocation, and each hop requires a group consensus to be achieved. But I think it is acceptable.
For now there is only one hop in the simulation program.
A recursive approach is an improvement I should consider (looking at the selected neighbour’s neighbours until there isn’t any worse one); see the sketch below.
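A minimal sketch of that extension (written iteratively, with hypothetical names): keep hopping to the weakest neighbour as long as it is in worse shape than the current section.

```rust
type SectionId = u64; // hypothetical section identifier

fn walk_to_weakest(
    start: SectionId,
    node_count: impl Fn(SectionId) -> usize,
    neighbours: impl Fn(SectionId) -> Vec<SectionId>,
) -> SectionId {
    let mut current = start;
    loop {
        // Weakest neighbour of the current section, if any.
        let weakest = neighbours(current)
            .into_iter()
            .min_by_key(|&s| node_count(s));
        match weakest {
            // Hop only if the neighbour is strictly weaker; node counts
            // strictly decrease along the walk, so it terminates.
            Some(next) if node_count(next) < node_count(current) => current = next,
            _ => return current,
        }
    }
}
```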
Just want to confirm: this says a total of 580000 nodes, but the diagrams below show a total of 520000 sections and 25 nodes per section, which means a total of 13m nodes.
which one is correct?
For 900000 iterations, the figures are on the left side of the diagrams, so they are hard to estimate, and I only have the real figures available at home.
At this iteration there are approximately 18 nodes per section on average and exactly 32768 sections (the nearest power of 2), which makes roughly 590000 nodes (18 × 32768 ≈ 590000).
Hey @tfa,
Great work! Don’t know if you are aware, but we’ve been working for some time on a new simulation (hosted at https://github.com/maidsafe/datachains_sim). The main difference from the old one is how Joins and Drops are calculated. In the old sim, there is at most one Join and one Drop event per iteration. In the new sim, the number is proportional to the number of sections in the network. This creates much higher pressure on the network (likely higher than the real network will experience), which should in theory uncover potential problems sooner. The flip side is that the new sim is quite a bit slower than the old one.
We are currently looking into the changes you made to see how we can potentially apply them to the new sim. We’ll report back with our results as soon as we have some. Please let us know if you’re interested in updating your version of the simulator to the rules we’re using in the new sim and we’ll try to help with it.
To take into account that network activity is proportional to its size, can’t we just suppose that one iteration corresponds to a fraction of a unit of time, and that this fraction decreases as the network grows (because the number of events per unit of time increases)? Time is a growing function of iterations, but its growth decreases as time passes, something like a log function.
No flip side in fact: the iterations are slower with the new version, but more iterations are needed in the old version to simulate the same real-time duration.
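A toy sketch of that mapping (purely illustrative, hypothetical names): count each iteration as 1 / (current number of sections) units of simulated time, so simulated time grows slowly, roughly like a log of the iteration count, while the network keeps growing.

```rust
// `sections_at(i)` returns the number of sections at iteration `i`.
// Each iteration then contributes 1 / sections units of simulated time.
fn simulated_time(sections_at: impl Fn(u64) -> u64, iterations: u64) -> f64 {
    (0..iterations)
        .map(|i| 1.0 / sections_at(i).max(1) as f64)
        .sum()
}
```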
Thanks for the offer, but not for now: I am working on a new metric, more precise than the number of distinct prefix lengths, to express network homogeneity. Possibly later.