Data Chains - Deeper Dive


This weekend I greatly optimized the network simulation. For example, for 900000 iterations the duration went from 5h48 to 1mn20.

Here are the optimizations, with the new duration for 100000 iterations displayed at each step (initially 2mn52):

  • Reverse the field order in the Prefix structure so that entries in btree maps are ordered by bits value instead of len value. This allows getting the section containing a node by finding the first entry less than its name, instead of iterating over all the entries.
    => 2mn48

  • To get the neighbouring sections, make a series of targeted tests instead of testing all possible sections. The algorithm is the following: for each position in the prefix:

    • flip the corresponding bit of the prefix and test it

    • if the test is unsuccessful, shorten the prefix and test again

    • shorten until success or until the modified bit is reached

    Note: testing longer prefixes is of no use because the corresponding sections are healthier, and so we don’t want to relocate nodes to them.
    => 2mn44

  • Cache section drop weights. Recompute them only when a section is split or merged, and update them when a node is added or removed. Use this cache to compute the total network drop weight.
    => 2mn

  • Use this cache to pick a random node to drop
    => 1mn29

  • Capture output at the end of the simulation instead of at every event.
    => 3s
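
The first optimization above (field order in Prefix) can be sketched with a simplified prefix type. This is a minimal illustration, not the simulator's actual code: the u64 name, the field layout and find_section are assumptions. Because `bits` comes before `len`, the derived ordering sorts map entries by bit value first, so a range query finds the owning section directly:

```rust
use std::collections::BTreeMap;

// Simplified prefix: `bits` placed BEFORE `len` so the derived
// lexicographic ordering sorts by bit value first (the field-order fix).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct Prefix {
    pub bits: u64, // prefix bits, left-aligned in the u64
    pub len: u8,   // number of significant bits
}

impl Prefix {
    // True if `name` starts with this prefix's bits.
    pub fn matches(&self, name: u64) -> bool {
        if self.len == 0 {
            return true;
        }
        let mask = !0u64 << (64 - self.len);
        (name & mask) == self.bits
    }
}

// With prefixes ordered by bits, the section containing `name` is the
// last entry at or before a probe built from the name, instead of a
// scan over all entries.
pub fn find_section<'a, V>(
    map: &'a BTreeMap<Prefix, V>,
    name: u64,
) -> Option<(&'a Prefix, &'a V)> {
    let probe = Prefix { bits: name, len: 64 };
    map.range(..=probe)
        .next_back()
        .filter(|(p, _)| p.matches(name))
}
```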
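
The targeted neighbour search (second optimization) can be sketched as follows, again with illustrative simplified types rather than the simulator's real ones:

```rust
use std::collections::BTreeSet;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct Prefix {
    pub bits: u64, // prefix bits, left-aligned
    pub len: u8,   // number of significant bits
}

impl Prefix {
    // Drop the last bit of the prefix.
    fn shorten(&self) -> Prefix {
        let len = self.len - 1;
        let mask = if len == 0 { 0 } else { !0u64 << (64 - len) };
        Prefix { bits: self.bits & mask, len }
    }
    // Flip the bit at position `i` (counted from the left).
    fn flip_bit(&self, i: u8) -> Prefix {
        Prefix { bits: self.bits ^ (1u64 << (63 - i)), len: self.len }
    }
}

// For each position: flip the bit, then shorten until a section is
// found or the flipped bit itself would be cut off.
pub fn neighbours(prefix: Prefix, sections: &BTreeSet<Prefix>) -> Vec<Prefix> {
    let mut found = Vec::new();
    for i in 0..prefix.len {
        let mut candidate = prefix.flip_bit(i);
        loop {
            if sections.contains(&candidate) {
                found.push(candidate);
                break;
            }
            if candidate.len == i + 1 {
                break; // modified bit reached, give up on this position
            }
            candidate = candidate.shorten();
        }
    }
    found.sort();
    found.dedup();
    found
}
```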
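
The drop-weight cache (the biggest speedup before the output change) amounts to keeping a per-section running sum instead of recomputing over all nodes. A minimal sketch, with all names and types assumed for illustration:

```rust
// Each section keeps a cached sum of its nodes' drop weights,
// maintained incrementally on add/remove.
pub struct Section {
    drop_weights: Vec<f64>, // per-node drop weights
    cached_total: f64,      // kept in sync with the vector
}

impl Section {
    pub fn new() -> Section {
        Section { drop_weights: Vec::new(), cached_total: 0.0 }
    }
    pub fn add_node(&mut self, weight: f64) {
        self.drop_weights.push(weight);
        self.cached_total += weight; // update instead of recompute
    }
    pub fn remove_node(&mut self, index: usize) {
        let w = self.drop_weights.swap_remove(index);
        self.cached_total -= w;
    }
    pub fn total(&self) -> f64 {
        self.cached_total
    }
}

// Total network drop weight is now O(sections) instead of O(nodes).
pub fn network_total(sections: &[Section]) -> f64 {
    sections.iter().map(|s| s.total()).sum()
}

// Locate the section containing a random drop point `r` in
// [0, network_total) by walking cached totals, not every node.
pub fn pick_section(sections: &[Section], r: f64) -> usize {
    let mut acc = 0.0;
    for (i, s) in sections.iter().enumerate() {
        acc += s.total();
        if r < acc {
            return i;
        }
    }
    sections.len() - 1
}
```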

Additionally, I have:

  • corrected the problem that only neighbouring sections were considered when looking for an unhealthy section. I was forgetting that the distant section itself could be unhealthy too. The result is slightly improved (3000 more nodes out of a total of 580000 after 900000 iterations).

  • removed the inc_age parameter because I don’t see why the age of nodes would be incremented at a section merge or split (besides, only the nodes in the first half were incremented in the case of a split).

  • added a parameter to specify whether node relocations are local or distant

  • displayed the number of distinct prefix lengths. A homogeneous network should have at most 2 distinct prefix lengths.

With these optimizations simulations with 10000000 iterations become possible (they take 2h52mn on my PC). Here are the results with max_young=4, init_age=4, relocation_rate=aggressive, relocation_scope=local:

With relocation_scope=distant the results are similar except for the discrepancies indicated last week, with appearance of 3 distinct prefix lengths:

Sections with 3 distinct prefix lengths appear in 129 iterations, sometimes with several sections having the shorter prefix, up to 6 sections as at iteration 9670000:

Prefix len Count Average Min Max Standard dev
17 6 16.83 14 19 1.94
18 2560 19.01 9 25 4.86
19 519144 12.06 8 16 1.16
All 521710 12.09 8 25 1.30

I think the explanation is the following: with random relocations applied over thousands of sections, it happens that some of them lose a lot of nodes. With local relocations, this creates an excess of nodes in the sections around such a weak section, so there is a high probability that these nodes will come back to their initial section. With distant relocations, the nodes are sent far away and the probability that a node is relocated to the neighbourhood is low, or more exactly just the standard probability, with no bias toward the weak section. When this weak section has fewer than 8 nodes, it merges and creates a new section with a shorter prefix length, hence the appearance of 3 distinct prefix lengths.

Distant relocations are more secure because they prevent section targeting, but this instability is harmful. So, to try to solve both problems, I have added a third value to relocation_scope: mixed, which indicates a 50%/50% probability between local and distant relocation.

Simulation with relocation_scope=mixed shows that the instability has not totally disappeared, but the results are better: there are only 9 iterations with 3 distinct prefix lengths and each time there is only one section with the shorter prefix.

Then I wanted to find the right mix of local and distant relocations. So, I renamed this parameter and modified it to take a numeric value for the distant relocation probability. I got the following results with various values of this probability:

Distant relocation probability Iterations with 3 distinct prefix lengths
100% 129
70% 96
50% 9
30% 29
0% 0
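
Applying the numeric parameter might look like the sketch below. The names, and feeding in a pre-drawn uniform sample instead of wiring up a PRNG, are illustrative assumptions, not the simulator's actual code:

```rust
#[derive(Debug, PartialEq)]
pub enum RelocationScope {
    Local,
    Distant,
}

// `draw` is a uniform random sample in [0, 1); a distant_probability of
// 1.0 reproduces the old relocation_scope=distant, 0.0 the local one,
// and 0.5 the former mixed value.
pub fn choose_scope(distant_probability: f64, draw: f64) -> RelocationScope {
    if draw < distant_probability {
        RelocationScope::Distant
    } else {
        RelocationScope::Local
    }
}
```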

The result for 30% is disappointing because it means we still get 3 distinct prefix lengths with a low rate of distant relocations, which is both insecure and unstable.

Then I modified the program to not relocate a node when the relocation would trigger a section merge (meaning its size is GROUP_SIZE before relocation). That was not enough, so I changed the condition to size <= GROUP_SIZE + 1 (meaning that I also don’t want to risk a merge if a node drops from the section afterwards). There is still one iteration with 3 distinct prefix lengths at 100% distant relocation probability, but the end is near.

The last try was with the same condition but with 80% distant relocation probability, and at last there are at most 2 distinct prefix lengths all along the simulation. With an 80% probability of distant relocation each time a node is relocated, targeting a section remains impossible.

So, finally I have got a network that is both stable and secure.

Another interesting feature of this algorithm is that the minimum number of nodes in a section is higher (9 instead of 8):


The value here is beyond any expectation @tfa Thank you very much indeed.


Not sure, but this may be overkill. In the production code, there is no such knowledge of the whole network.
Trying to figure out which section is the least healthy one will then be difficult, and will at least involve a certain number of messages being exchanged.


There is no need to know the whole network. The distant section is chosen at random. If this section has a neighbour that is in worse shape than itself then the node is relocated to this neighbour. This selection is done locally by the distant section.

I think there is something similar that is already implemented in current routing code for relocation of joining nodes (in handle_expect_candidate).


Yes, you are right.
As long as the unhealthiest section is searched for only among the neighbours, not the whole network, there is no need to have knowledge of the whole network.

There might be multiple hops of relocation, and each hop incurs a group consensus to be achieved. But I think it is acceptable.


For now there is only one hop in the simulation program.

A recursive approach is an improvement I should consider (looking at the selected neighbour’s neighbours until there isn’t any worse one).
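
That recursive improvement could look like this sketch. The health score per section and all names here are illustrative assumptions (lower score = less healthy), not the routing code's actual representation:

```rust
// Starting from the randomly chosen section, keep moving to a
// worse-off neighbour until no neighbour of the current section is
// unhealthier than the current section itself.
pub fn settle(
    start: usize,
    health: &[i32],
    neighbours: &dyn Fn(usize) -> Vec<usize>,
) -> usize {
    let mut current = start;
    loop {
        // Pick the unhealthiest neighbour (lowest health score).
        let worst = neighbours(current).into_iter().min_by_key(|&s| health[s]);
        match worst {
            Some(s) if health[s] < health[current] => current = s,
            _ => return current, // no worse neighbour: relocate here
        }
    }
}
```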


Just want to confirm: this says a total of 580000 nodes, but the diagrams below show a total of 520000 sections and 25 nodes per section, which means a total of 13m nodes.
Which one is correct?


For 900000 iterations, the figures are on the left side of the diagrams, so they are hard to estimate, and I have the real figures available only at home.

At this iteration there are approximately 18 nodes per section on average and exactly 32768 sections (the nearest power of 2), which makes roughly 590000 nodes.


@tfa: ah, never mind, I mis-read the iterations in the diagram.
The numbers are ok.


Hey @tfa,

Great work! Don’t know if you are aware, but we’ve been working for some time on a new simulation (hosted at The main difference from the old one is how Joins and Drops are calculated. In the old sim, there is at most one Join and one Drop event per iteration. In the new sim, the number is proportional to the number of sections in the network. This creates much higher pressure on the network (likely higher than the real network will be experiencing) which should in theory uncover potential problems sooner. The flip side is that the new sim is quite a bit slower than the old one.

We are currently looking into the changes you made to see how we can potentially apply them to the new sim. We’ll report back with our results as soon as we have some. Please let us know if you’re interested in updating your version of the simulator to the rules we’re using in the new sim and we’ll try to help with it.


To take into account that network activity is proportional to its size, can’t we just suppose that one iteration corresponds to a fraction of a unit of time, and that this fraction decreases as the network grows (because the number of events per unit of time increases)? Time is a growing function of iterations, but the growth slows as time passes, something like a log function.

No flip side in fact: iterations are slower with the new version, but more iterations are needed in the old version to simulate the same real-time duration.
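
To illustrate the log-like relationship: assuming one event per iteration and a network that gains one node per iteration (so the event rate after k iterations is k), elapsed time is the harmonic sum, which grows like ln(n). This toy model is my own illustration, not either simulator's code:

```rust
// Elapsed "real" time after n iterations: each iteration k covers
// 1/k of a time unit, since k events now happen per unit of time.
// The sum 1 + 1/2 + ... + 1/n grows like ln(n), so a straight line
// in iterations becomes an exponential in time.
pub fn elapsed_time(iterations: u64) -> f64 {
    (1..=iterations).map(|k| 1.0 / k as f64).sum()
}
```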

Thanks for the offer. But not for now, I am working on a new metric more precise than number of distinct prefix lengths to express network homogeneity. Possibly later.


I have been thinking about sections with distinct prefix lengths: What is the problem when there are 3 distinct values?

The simple answer is that a section with the shorter prefix manages an XOR range 4 times larger than one with the longer prefix. This means it will store 4 times more data on average.

But the complete answer is more complex, because the shorter prefix section possibly comes from a merge (meaning it has more vaults to manage the larger range), whereas the longer one can come from a split (meaning it has fewer vaults but manages a smaller range).

So, the important metric is not the prefix length range alone: the number of vaults in each section must be incorporated into it.

For this purpose I have added a new metric called “density gap” which is the ratio of highest section density over lowest section density where density is:

number of vaults in the section / section size.

and section size is defined in the following table (where Lmax is the greatest prefix length):

Prefix length Section size
Lmax 1
Lmax-1 2
Lmax-2 4
Lmax-n 2^n

When there is only one prefix length, the density gap is simply the vault count of the most populated section divided by the vault count of the least populated one. When there are several distinct prefix lengths the section size divisor takes into account the relative size of the section in the XOR space.

The density gap is a number greater than or equal to 1, and the lower the value, the more homogeneous the network.
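
The metric can be sketched as below. Representing a section as a (prefix length, vault count) pair is an assumption for illustration only:

```rust
// Density gap: max section density / min section density, where
// density = vault count / section size and section size = 2^(Lmax - len)
// accounts for the section's relative share of the XOR space.
pub fn density_gap(sections: &[(u8, u32)]) -> f64 {
    let lmax = sections.iter().map(|&(len, _)| len).max().unwrap();
    let densities: Vec<f64> = sections
        .iter()
        .map(|&(len, count)| {
            let size = (1u64 << (lmax - len)) as f64;
            count as f64 / size
        })
        .collect();
    let max = densities.iter().cloned().fold(f64::MIN, f64::max);
    let min = densities.iter().cloned().fold(f64::MAX, f64::min);
    max / min
}
```

With sections 0 (20 vaults), 11 (10 vaults) and 10 (16 vaults), Lmax is 2, the densities are 10, 10 and 16, and the gap is 16/10 = 1.6, matching the worked example discussed further down the thread.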

Its range is similar to the number of distinct prefix lengths, so the 2 curves can be displayed in the same chart with the same ordinate scale.

I have also added a new parameter (relocation_margin) to control more precisely the inhibition of relocations from a small section. Last week, the formula I came up with was: adult count <= GROUP_SIZE + 1. I have changed it to: adult count <= GROUP_SIZE + relocation_margin, which defines the following behaviors:

  • relocation_margin=-1: no inhibition, this is the original behavior from @bart’s code

  • relocation_margin=0: relocation is inhibited if it would trigger a section merge

  • relocation_margin=1: relocation is inhibited if a subsequent node drop would trigger a section merge (same behavior as last week’s version)

  • relocation_margin=2: relocation is inhibited if 2 subsequent node drops would trigger a section merge
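
The inhibition test then reduces to a single comparison. GROUP_SIZE = 8 here is an assumption matching the merge threshold mentioned earlier in the thread, and the signed margin lets -1 disable inhibition entirely:

```rust
// Assumed group size: sections merge when they fall below this count.
const GROUP_SIZE: u32 = 8;

// A relocation from a section is inhibited when the section could not
// absorb `relocation_margin` further node drops without merging.
pub fn relocation_inhibited(adult_count: u32, relocation_margin: i32) -> bool {
    (adult_count as i32) <= GROUP_SIZE as i32 + relocation_margin
}
```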

I have launched many simulations, all with 10_000_000 iterations, but with various values for the parameters. Here are the results for the 2 main metrics:

  • number of nodes at the end of simulation (which must be maximized)

  • maximum value of density gap all along the simulation (which must be minimized)

Log File Nodes Max Gap
raggressivei4y4m0d100.txt 6536206 4.27
raggressivei4y4m0d80.txt 6537338 4.53
raggressivei4y4m1d100.txt 6595668 4.25
raggressivei4y4m1d80.txt 6583353 3.56
raggressivei4y4m2d100.txt 6327650 3.40
raggressivei4y4m2d80.txt 6308415 3.20
raggressivei4y4m3d100.txt 1911206 4.25
raggressivei4y4m3d80.txt 953994 5.07
rstandardi4y4m1d100.txt 3047053 3.20
rstandardi4y4m1d80.txt 3043159 3.08
rstandardi4y2m1d100.txt 2646656 3.00
rstandardi4y2m1d80.txt 2639692 3.00
raggressivei4y4m1d60.txt 6583749 3.78
raggressivei4y4m2d60.txt 6292031 3.20

In all cases the max gap is greater than 3, which means that at some point some nodes manage an area of XOR space at least 3 times larger than other nodes do. The dispersion of the max gap is very narrow, so we can favor the final number of nodes.

A good compromise seems to be raggressivei4y4m1d80 which corresponds to the following parameters:
init_age: 4, split_strategy: Complete, max_young: 4, iterations: 10000000, summary_intervals: 10000, growth: (90, 7), structure_output_file: None, drop_dist: Exponential, relocation_rate: Aggressive, distant_relocation_probability: 0.8, relocation_margin: 1

Here are the corresponding diagrams:

As I said in my previous post, time is not linear but is more and more stretched as the number of iterations grows, so the horizontal axis should be given a logarithmic scale if we want to show results as a function of time instead of iterations. For example, the straight line representing the total number of nodes is replaced by an exponential curve. Of course, this is true only while the number of node creations is proportional to the network size, which cannot hold indefinitely.

Strangely, Excel doesn’t allow selecting a logarithmic scale on the x axis of a line chart. The workaround to get the chart was the following:

  • transform the line chart to a XY (scatter) chart

  • select logarithmic scale on x axis

  • transform the chart back to a line chart

I have created a github repo that contains all my log files and Excel files. The Excel files were manually edited, but a simple utility ( allows extraction of columns from the log files.

This will conclude my tests on simulations. The parameters I have selected (-raggressive -i4 -y4 -m1 -d80) favor network growth by keeping the rejection rate well under 33%. The density gap is a bit high (3.56), but it cannot go under 3 anyway. An 80% distant relocation percentage is secure enough, lowers the density gap from 4.25 to 3.56, and has minimal impact on the final number of nodes (6583353 instead of 6595668).


The new version has more hash calculations for relocation attempts and more handling procedures to complete a relocation request.
To be more precise, given a network of the same size, the new version takes more time to grow the network to the next stage size.
For example, your version can handle a network growing from 1 million to 10 million within a reasonable time, but the new version could already take hours to grow a network from 1 million to 2 million.

This could be a useful metric to explore.
And just to confirm: say we get sections 0, 11 and 10, with section 0 having 20 nodes, section 11 having 10 nodes and section 10 having 16 nodes.
Then Lmax is 2, and the density for section 0 is 20/2 = 10, for section 11 it is 10/1 = 10, and for section 10 it is 16/1 = 16.
Then the density_gap = 16 / 10 = 1.6, right?


I was referring to the new sim generating more join/drop events in an iteration, proportionally to network size. I don’t do that because it is simpler to take clock time into account by just modifying the x axis. I think this method is also more flexible if we want to simulate other kinds of growth without recomputing everything.


  • I have removed all hash calculations from old sim because, as I said earlier, the simulation doesn’t need to know how random addresses will be computed in the real network.

  • I have brought many optimizations to the old sim and I didn’t check whether any of them are applicable to the new sim.

The 1.6 value for the density gap in your calculation is correct. It means that nodes in the densest section (10) have 1.6 times less work to do to store average data than nodes in the least dense section (0). One can argue that the result should be inverted (computing 10/16 instead), but I have chosen this way so that both curves in the network homogeneity chart (distinct prefix lengths & density gap) must be lowered to get the best possible network.