Data Chains - General rules in an eventually consistent network

Introduction

What follows is a developing picture of group consensus in an eventually consistent network. The style of this document is that of a story that develops as each section is considered. This is not a formal specification, although it makes an attempt to allow the reader to develop one. We will extract rules where we can and develop these throughout the document.

This document will also set out the basic code of conduct for network nodes and their priorities in maintaining the network and, by extension, themselves.
These priorities are

  1. Protect the network.
  2. Protect your group.
  3. Protect self.

These basic tenets mean a node will self-terminate to protect the group, or destroy the group to protect the network.

Chain Observations

  1. The chain on its own must represent a true history of network events, whether these be group membership or data events.
  2. Given a chain we should be able to cryptographically prove it is valid and not forged.
  3. A chain requires a trusted starting point to traverse from.
  4. The tip of the chain on its own cannot show any current status.
  5. A chain that contains only events is limited in its ability to protect against the removal of data events, although section events cannot be removed.

Network Observations (assumes we use data chains)

Given a chain that does represent a true history we can also obtain further information to validate the current status of group membership, data versions etc. This is achieved by noting the following:

  1. A node transmits messages through neighbor groups, therefore it must be a valid network node.
  2. A node presenting a chain must also (by default) be telling us the group membership at the tip of the chain.
  3. Given we can know group membership, we can validate the group by sending a message to each member. This can only be replied to if the node exists on the network (a node in our group or a remote group; either way, the neighbors or we know whether the node is valid). To further strengthen this we can encrypt (thanks to secure name) a challenge to such nodes.
  4. On network restart we can accept a chain as containing valid information from a node, however we will require several nodes (>Q) to trust this is the latest view of the network. Such validation may also require neighbor validation (TODO as currently out of scope).
  5. A chain with missing events will be detectable in cases where multiple copies are requested from different current group members.

Section Block Initial Thoughts

Block Identifiers related to a section contain

// This is a simple identifier, ignoring data; in a real system this would be
// part of an enum that includes data.
pub enum BlockIdentifier {
    Remove(PublicKey),
    Add(PublicKey),
    SplitFrom(Prefix),
    MergeTo(Prefix),
}

As identified here we assume there is no event that has a corresponding negative event; in other words, double voting does not exist in this set of events.

Analysing these events we can make simple initial observations on order.

  1. Remove can only appear after Add.
  2. Add must conversely appear before Remove.
  3. SplitFrom(Prefix) can only appear when we have enough Add and not enough Remove that the section has GroupSize + buffer for two new sections.
  4. MergeTo(Prefix) shall only appear when we have enough Remove but not enough Add that we have only GroupSize nodes left in our section.

These basic observations allow us to now dig a little deeper into a potential ordering. However, we will have to understand the voting process and quorum a little more to allow us to relate this to the ordering of the above events. Although we have established that there are no contradicting events and a basic order of events, we do not yet have enough information to order a randomly given set of the above events!
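
To make these observations concrete, here is a minimal Rust sketch (not part of the original design; the GroupSize and buffer values are chosen purely for illustration) that walks a sequence of these identifiers and checks the four points above:

use std::collections::HashSet;

type PublicKey = [u8; 32];
type Prefix = u8; // placeholder for a real prefix type

const GROUP_SIZE: usize = 8; // assumed values, for illustration only
const BUFFER: usize = 2;

enum BlockIdentifier {
    Remove(PublicKey),
    Add(PublicKey),
    SplitFrom(Prefix),
    MergeTo(Prefix),
}

/// Check a sequence of section events against the ordering observations above.
fn order_is_plausible(events: &[BlockIdentifier]) -> bool {
    let mut members: HashSet<PublicKey> = HashSet::new();
    for event in events {
        match event {
            // Observations 1 & 2: Remove may only follow a matching Add.
            BlockIdentifier::Add(key) => {
                members.insert(*key);
            }
            BlockIdentifier::Remove(key) => {
                if !members.remove(key) {
                    return false;
                }
            }
            // Observation 3: a split needs enough members for two new sections.
            BlockIdentifier::SplitFrom(_) => {
                if members.len() < 2 * (GROUP_SIZE + BUFFER) {
                    return false;
                }
            }
            // Observation 4: a merge only happens at the minimum section size.
            BlockIdentifier::MergeTo(_) => {
                if members.len() > GROUP_SIZE {
                    return false;
                }
            }
        }
    }
    true
}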

Votes

A vote is a digitally signed observation of a network event that a node proposes to its peers in the group. Until the group (or, specifically, a quorum of group members) agrees by voting, the event is not agreed. On agreement from peers a node will send the complete approved Block to all other group members.

A vote is defined in code as:

pub struct Vote {
    identifier: BlockIdentifier,
    proof: Proof,
}

A proof is defined as:

pub struct Proof {
    key: PublicKey, // valid voter's key
    sig: Signature, // voter's signature of the BlockIdentifier
}

A proof is a digitally signed claim by a node that a network event should be registered. This claim is made as a node sees network events that should be recorded and agreed by the group. The node sees something and confirms that it is accurate by requesting group consensus on the event.
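
As a rough illustration of this claim/consensus step, the sketch below shows a node signing the serialised BlockIdentifier to form a Vote, and a peer checking that signature before counting it. The sign, verify and serialise functions are hypothetical stand-ins for whatever signature scheme and encoding an implementation actually uses; the structs from above are repeated so the sketch is self-contained.

type PublicKey = [u8; 32];
type SecretKey = [u8; 32];
type Signature = [u8; 64];

#[derive(Clone)]
enum BlockIdentifier {
    Remove(PublicKey),
    Add(PublicKey),
}

struct Proof {
    key: PublicKey, // valid voter's key
    sig: Signature, // voter's signature over the serialised BlockIdentifier
}

struct Vote {
    identifier: BlockIdentifier,
    proof: Proof,
}

// Hypothetical stand-ins; a real node would call into its signature scheme
// and serialisation format here.
fn sign(_sk: &SecretKey, _msg: &[u8]) -> Signature { [0u8; 64] }
fn verify(_pk: &PublicKey, _msg: &[u8], _sig: &Signature) -> bool { true }
fn serialise(_id: &BlockIdentifier) -> Vec<u8> { Vec::new() }

/// A node observing a network event signs the identifier and proposes it.
fn cast_vote(our_key: PublicKey, our_secret: &SecretKey, id: BlockIdentifier) -> Vote {
    let sig = sign(our_secret, &serialise(&id));
    Vote { proof: Proof { key: our_key, sig }, identifier: id }
}

/// Peers check the signature before counting the vote towards a Block.
fn vote_is_well_formed(vote: &Vote) -> bool {
    verify(&vote.proof.key, &serialise(&vote.identifier), &vote.proof.sig)
}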

As the vote is signed, nodes that act maliciously have little effect; moreover, their actions are recorded by default and these actions are digitally signed. Therefore bad or invalid behaviour is detectable and cryptographically provable. This detection and any punishment are not discussed in this paper.

Block Structure

A Block is defined as follows

pub struct Block {
    identifier: BlockIdentifier,
    proofs: Vec<Proof>,
    pub locally_valid: bool, // may be ignored
}

A block is an agreement of several nodes. This agreement is digitally signed by each node. The BlockIdentifier is the event that is proposed for agreement and signed.

This can be thought of as a proposal until there are enough votes to turn it from a proposal to an agreed network event (in this case the group status at this point in the chain history). It is clear that any node can make any proposal at any time and send it to all group members.
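
A minimal sketch of that proposal-to-agreement step might look as follows, using simplified versions of the types above; the quorum value and the set of valid voters are assumed inputs here rather than anything this document defines:

use std::collections::HashSet;

type PublicKey = [u8; 32];
type Signature = [u8; 64];

#[derive(Clone, PartialEq)]
enum BlockIdentifier { Add(PublicKey), Remove(PublicKey) }

#[derive(Clone)]
struct Proof { key: PublicKey, sig: Signature }

struct Block {
    identifier: BlockIdentifier,
    proofs: Vec<Proof>,
}

impl Block {
    /// Add a proof if this voter has not already signed the proposal.
    fn add_proof(&mut self, proof: Proof) {
        if !self.proofs.iter().any(|p| p.key == proof.key) {
            self.proofs.push(proof);
        }
    }

    /// The proposal becomes an agreed network event once a quorum of the
    /// current valid voters has signed it.
    fn is_agreed(&self, valid_voters: &HashSet<PublicKey>, quorum: usize) -> bool {
        self.proofs
            .iter()
            .filter(|p| valid_voters.contains(&p.key))
            .count()
            >= quorum
    }
}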

Chain Structure

This is a very simple container of Blocks and is defined as such in code:

pub struct DataChain {
    chain: Vec<Block>, // the chain
    group_size: usize, // required for decisions on merge / split
    path: Option<PathBuf>, // used for local non volatile storage
}

To reason about a chain it can be reduced to a simple container of blocks in some order that can be validated. For now we will consider that to be the case.
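
Under that assumption, a rough validation pass over such a container could look like the sketch below (signature checks omitted, membership tracked only through Add and Remove); this illustrates the reasoning, not the real routing implementation:

use std::collections::HashSet;

type PublicKey = [u8; 32];

enum BlockIdentifier { Add(PublicKey), Remove(PublicKey) }

struct Proof { key: PublicKey } // signature itself omitted for brevity
struct Block { identifier: BlockIdentifier, proofs: Vec<Proof> }
struct DataChain { chain: Vec<Block> }

impl DataChain {
    /// Walk from the trusted start, tracking membership, and require each
    /// Block to be signed by a majority of the members known at that point.
    fn is_valid(&self) -> bool {
        let mut members: HashSet<PublicKey> = HashSet::new();
        for (i, block) in self.chain.iter().enumerate() {
            // Rule 1: the first (genesis) Block requires no vote.
            if i > 0 {
                let majority = members.len() / 2 + 1;
                let signed = block
                    .proofs
                    .iter()
                    .filter(|p| members.contains(&p.key))
                    .count();
                if signed < majority {
                    return false;
                }
            }
            match &block.identifier {
                BlockIdentifier::Add(key) => { members.insert(*key); }
                BlockIdentifier::Remove(key) => { members.remove(key); }
            }
        }
        true
    }
}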

Chain formation (ideal case)

On the genesis of a network there is a single initial node. This node will create the very first chain entry and vote for it itself. In this case there is no group vote on the Add event; this is the only time this is allowed to happen.

Node2 then connects to the network and Node1 will see this and vote for Node2 to be added to the chain. At this point we have a majority of nodes from the last Block agreeing and validating this Block (Node1 is 100% of the group). The chain now starts to form into a container we can reason about, and we can firm up the rules the chain must follow.

As Node3 connects to Node1 and Node2, both of these vote for Node3 to be added. As this accumulates in each node as valid (i.e. a majority of existing “valid voters” has agreed via a vote), the Block is added to the chain.

We will pause here for a moment to consider what we have now, in this ideal case (ideal as these nodes connect and stay connected).

We have three blocks in our chain:

  1. Add(1)
  2. Add(2), S1 // signed by 1
  3. Add(3), S1,S2 // signed by 1 & 2

It is clear that these Blocks have an implicit order. If we try and reorder then consensus is broken.

  1. Add(4), S1,S2,S3 // Signed by a majority of previous Block so Ok.
  2. Add(5), S1, S2, S3, S4

As we can see, this ideal case does have an implicit order. Let us develop this a bit further to find the algorithm that allows us to fix this order in place throughout the chain, even in a non-ideal (normal) network.

Rule 1 : First block requires no vote
Rule 2 : Second node only requires first node’s vote
Rule 2 : After first node is in place any further action requires a majority of valid voters to agree
Rule 3 : A genesis of a chain requires a minimum of 4 nodes (3 nodes may only add another node; they cannot afford to lose one).

Non ideal or real world case

In reality any network of nodes will receive messages out of order and may even drop some messages. This is a problem for us and we must consider either correcting this disorder or living with it. This proposal looks at an option to handle out-of-order messages. Obviously there will be a level of disorder that will defeat any system, so the gap between receiving everything in order and not is an area of much confusion. We will try to resolve that here.

Events and corresponding votes, we know, can be dropped or delivered out of sequence, and to add to this conundrum, nodes can simply drop or disconnect with no notice. The number of nodes that can drop is unbounded (or we assume it is). As a node in a group, we take consensus of the group to be “the law” or the agreed and only truth about events, however this is not supportable in a group that can lose a majority of nodes in a single step.

Two truths

The above conundrum introduces two truths,

  1. The truth we see.
  2. The truth the group agrees.

The truth we see can only be proven to us, others will not accept this truth. Only agreed group consensus will satisfy others. This is an important point.

Therefore we can make local observations (such as that we need to vote on Add, Kill, Remove, Split or Merge) based on 1, but need to get agreement from others (by voting) to turn this into 2. The agreement we must seek cannot come only from our group if it has fallen in size below an agreed quorum figure. This assumes that a fixed minimum quorum (majority) size is known and required before a decision is ratified or accepted by others.
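
For illustration, with an assumed agreed group size this fixed quorum is simply a majority of that constant, not of whatever the local group currently happens to hold:

const GROUP_SIZE: usize = 8; // assumed agreed group size

/// The fixed minimum quorum: a simple majority of the agreed group size,
/// known in advance rather than derived from the (possibly shrunken) local group.
fn quorum() -> usize {
    GROUP_SIZE / 2 + 1
}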

Valid voters

It is critical to know our current valid voters, and this is not knowledge we can consider only locally: the set of valid voters must be cryptographically provable. The first mechanism that comes to mind is to include all currently known section members or voters per Block; however, that can lead to all nodes being required to agree on the section membership at every step. This is not in line with the real world, where we already accept that events arrive out of order, and therefore membership is also likely to be out of order. This recent churn in event order is something we would like to be able to live with, but only for very short periods.

When is a Node not a valid voter?

A critical question is what to do with misbehaving nodes. These are nodes that do not vote, or frequently propose (vote for) events that we do not agree with. To maintain integrity, bad nodes will need to be punished and possibly removed from a group. If the punishment (such as killing the node or immediately relocating it with age/2) is too harsh it would destabilise the group quickly; if too lenient it will lead to too high an error rate and affect our decision making.

Measuring a node's validity

To penalise or at least handle a misbehaving node, we need to measure it against an agreed behaviour. As we know nodes will receive messages and events out of order, we need to handle the case where a node is trying to keep up but may be slightly slow. This slow node may be valuable, but if it is too slow then it is a danger.

Mass node loss

As we accumulate proofs for a Block that we have local agreement with (i.e. we have voted for this event), we should expect all group members to react or vote within a period. This period is critical to understand: if it is a time period and we have mass node loss then we may struggle to keep up with events and expect a quorum that simply does not exist. If the network around us can help out then we should use that.

Fortunately with disjoint groups we are connected to neighbour groups. These neighbours can help in this situation.

As our section/group collapses we can Add members from already-connected neighbour groups. This process would recur until we are at a safe minimum number of “valid voters”. During this process we would be required to relocate data [TODO].

Actions for a node to stay valid

As we accumulate a Block there will be Q nodes that have voted. G-Q are still in process, but should vote quickly. We may not have received their votes yet, but other voters may have.

Voting process and block validation

When a node proposes an event (votes), it sends the vote directly to all group members.

When a Block receives Q votes from valid voters, the Block is considered valid. Note that it is then valid for us and for anyone in our group we send the whole Block to. Therefore, as a Block validates we send it via a RoutingMessage; this means the message will swarm around the group and hopefully reach all members.

When we receive a valid Block, we confirm it is signed by at least Q of the valid voters we know of in our chain. We also extract all votes from that Block that we do not have and add them to our own local Block. If our Block has then become valid we will send it via a RoutingMessage to the group.
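
A sketch of that receive path, under the same simplifying assumptions as the earlier snippets (signatures elided, the set of valid voters given as an input), might be:

use std::collections::HashSet;

type PublicKey = [u8; 32];

#[derive(Clone)]
struct Proof { key: PublicKey } // signature itself omitted for brevity

struct Block { proofs: Vec<Proof> }

/// Count only the proofs from voters we currently consider valid.
fn valid_count(block: &Block, valid_voters: &HashSet<PublicKey>) -> usize {
    block.proofs.iter().filter(|p| valid_voters.contains(&p.key)).count()
}

/// Handle a "claimed" valid Block. Returns true when our own copy has just
/// become valid and should be resent to the group via a RoutingMessage.
fn on_block_received(
    incoming: &Block,
    ours: &mut Block,
    valid_voters: &HashSet<PublicKey>,
    q: usize,
) -> bool {
    // 1. At least Q of the voters we consider valid must have signed it.
    if valid_count(incoming, valid_voters) < q {
        return false;
    }
    let was_valid = valid_count(ours, valid_voters) >= q;

    // 2. Extract any votes we do not already hold and add them to our Block.
    for proof in &incoming.proofs {
        if !ours.proofs.iter().any(|p| p.key == proof.key) {
            ours.proofs.push(proof.clone());
        }
    }

    // 3. Report whether our Block has just crossed the quorum threshold.
    !was_valid && valid_count(ours, valid_voters) >= q
}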

We now need to consider if we have received all Blocks and all votes. Therefore we have 2 questions to answer.

  1. What period should be used to confirm that all voters have voted?
  2. What do we do about missing votes?

We also have a little more local information. A node we have seen leaving may not yet have an accumulated Remove Block; however, we would have a self-voted Block for Remove(the node). If this is the case we do not need to wait on the vote from the node we think is missing. However, we can see the opposite situation here, where we have seen a node leave but we receive a Block containing the node's vote. In this case we accept the vote, but will need to relocate our Remove(the node) vote to appear above this Block in the chain. This is because group consensus is always more powerful than local knowledge.

Rule 3: If we locally see a node lost, then we do not need to take action on the non-vote of that node for any Block in process.
Rule 6: If we receive a valid Block with the missing node's vote before the Remove vote for that node accumulates, then we accept the vote, but move our Remove proposal above the current Block.

If, though, we have not seen a node being lost and it has not voted within a period, then we may have issues. This period is problematic as, in mass loss or excessive churn, we can have traffic surges and some nodes, although good, may struggle to keep up. If a node is always slow and never actually useful then we would be better off removing the node from our group, if we can.

Rule 7: A valid voter that is Removed must have its accumulating Block after any Block that contains its vote, until the Remove for that node accumulates.

Rule 8: If a valid voter is removed via an accumulated Remove block then no further votes are allowed from that node.

Deeper Look at Sections and Groups

The issue with growing sections and consensus is that we lose the group's ability to reason that events may not be reordered in damaging ways. This could be ignoring some events until a later stage, etc. There are likely further edge cases that will cause harm. Instead of searching that space, this paper suggests a different route.

Logical Groups

Instead of a changing group size we use a logical group. This is the number of nodes in a section that are chosen by an agreed set of rules. In this way we can reduce the attack space and at the same time reduce the number of network messages, as the rest of the section accepts the group's decisions.

Another advantage here is that, while we have little control over how a group shrinks (we do have control over adding nodes, and we do that), having a significant oversupply of nodes in a section, ready to join a group, leaves us in better shape for these unforeseen events.

Section membership

Nodes in a section will now all vote as normal, but only a quorum Q of the G nodes closest to the prefix are considered valid section-decision voters. This leads us further to understanding more of what a valid voter is. To “lock” the logical group in place and prove this was the agreed group we require an additional event: an “AddValidVoter” event. A corresponding RemoveValidVoter may also now be required, but we will develop that notion as we progress.

What do Valid Voters do?

Valid voters are the nodes that make decisions that the whole section and its neighbors can accept. These nodes will send the valid Blocks to all section members as they become valid, as above.

What do non valid voters in a section do?

Standby nodes (section members that are not in the G closest to the prefix, i.e. not valid voters) cannot make consensus decisions about sections. They will accumulate and validate Blocks voted on by G. These nodes will not necessarily send the valid Blocks to all section members; in this proposal we do not consider these nodes to send valid Blocks. They will participate in the network load, whether data actions or, later, computation.

What about non section events such as data?

All section members can and should participate in the network. In the case of data these nodes should hold data; however, we do not need all nodes to hold all data. Instead the valid voters can insist that the X nodes closest to an event-type identifier (such as the hash of a piece of data) hold the data. This allows G to still manage the section while forcing section members to do some of the work, but not section decision making.

Data operations such as Get, Post and Put are not described here as they are slightly out of scope, but will be handled in a manner similar to the current network. The neighbor sections may have to confirm the reply is from the X nodes closest to the event type (address/hash etc.) of the event, and not necessarily from G in this case.
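
As an illustration of the “X close nodes” idea, assuming closeness is measured by XOR distance over the name space as in the current network, the valid voters could determine the expected holders of a piece of data like this (the names and function here are illustrative only):

type Name = [u8; 32]; // a node name or the hash of a piece of data

/// XOR distance between two names, used for closeness in the address space.
fn xor_distance(a: &Name, b: &Name) -> Name {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// Return the `x` section members closest to `target` (e.g. the hash of the
/// data); these are the nodes the valid voters would expect to hold it.
fn x_closest(mut members: Vec<Name>, target: &Name, x: usize) -> Vec<Name> {
    members.sort_by_key(|m| xor_distance(m, target));
    members.truncate(x);
    members
}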

AddValidVoter event

The AddValidVoter event is used when a hot-standby node becomes one of the G nodes closest to the prefix. This node will come from the hot-standby group. In a merge-type situation the new merged group will be required to vote on the current new G via individual AddValidVoter events.

RemoveValidVoter event

This event is used when a node is no longer a valid voter for the current section. The cases where this can happen are merge, split and the node actually going offline. These events seem to suggest we do not need a RemoveValidVoter event; however, a new node joining a section can displace a current valid voter, so in this case we do need one. This also shows that we must differentiate a Remove or Kill from such an event, as all we know is that the node is no longer a valid voter; we still need to know whether the node should be removed.

Selection of G for a logical group

So far we consider only the closest nodes to a prefix as G; however, node age may significantly alter this paradigm in nice ways. If, instead of the closest to the prefix, we select the G “oldest” nodes then we:

  1. Use the most trusted for section decisions.
  2. Use nodes that are less likely to churn.
  3. Give the most trusted nodes the power to judge others and penalise faulty nodes.
  4. Dilute the deciding group amongst the address range in our section (good for data etc.)

It would seem that node age provides a better metric for selecting G in this case. As node age does not allow targeting of where in a section to locate, it would appear to be no less secure than address-based selection, and very likely more secure due to the higher proven worth of decision-making nodes.

This also nicely answers an open question in node age, namely how to prevent “young” nodes influencing a group through targeted churn. As young nodes in this scenario will have much less or zero control over section event decisions, it would appear this proposal is a step toward, or may completely solve, that issue, especially if only churn events in G were considered for age increases.

Rule 4: If we have two competing nodes that would qualify for valid voter status, then we select the one with the lowest ID
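
A sketch of this selection, combining the age ordering with Rule 4's tie-break, might be as follows (the Member type and its fields are assumptions for illustration):

type PublicKey = [u8; 32];

struct Member {
    id: PublicKey,
    age: u8, // node age as defined by the node-ageing proposal
}

/// Select the logical group G for the section: oldest nodes first, and on a
/// tie in age the node with the lower ID wins (Rule 4).
fn select_valid_voters(mut section: Vec<Member>, g: usize) -> Vec<Member> {
    section.sort_by(|a, b| b.age.cmp(&a.age).then(a.id.cmp(&b.id)));
    section.truncate(g);
    section
}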

Group Split

As the group grows it will split into two groups. This split will mean that the group's common prefix splits at that prefix + 1 bit (the prefix + 0 and prefix + 1 addresses). This action requires the code to have an agreed constant (magic number) that allows a node to agree that it has enough valid voters to allow this split and that both new groups will have enough members.

To achieve a split we Remove nodes from our current group that differ from us in the next bit after the current prefix. Therefore a split can be seen as a batch of Remove Blocks.

Rule 5: If we have G + buffer members of our group that differ in the common prefix of our group + 1 bit and we have G + buffer members that share the same prefix + 1 as us then we vote to Remove the members that differ from our address in our common prefix + 1
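
For illustration, the local check behind Rule 5 might look like the following sketch, which simplifies the prefix handling to the single bit after our current common prefix (the G and buffer values are placeholders):

type PublicKey = [u8; 32];

const G: usize = 8;      // assumed logical group size
const BUFFER: usize = 2; // assumed split buffer

struct Member {
    id: PublicKey,
    next_prefix_bit: bool, // the bit immediately after our current common prefix
}

/// Returns the members we should vote to Remove (those that differ from us in
/// the next prefix bit), or None if the section cannot split safely yet.
fn split_candidates(section: &[Member], our_bit: bool) -> Option<Vec<&Member>> {
    let (ours, others): (Vec<&Member>, Vec<&Member>) =
        section.iter().partition(|m| m.next_prefix_bit == our_bit);
    if ours.len() >= G + BUFFER && others.len() >= G + BUFFER {
        Some(others)
    } else {
        None
    }
}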

Group Merge

If we lose too many nodes via Remove Blocks accumulating then we may fall below our agreed group size. In this case we send Add Blocks to our group members for the neighbour group whose prefix differs from ours only in the final bit of our common prefix.

Rule 6: If our group shrinks to a minimum size we must Add from our neighbour group.

Rule 6 will mean the first node added alters our current prefix. The merge has begun and, given we know the nodes that should now be in this group, it will complete. As this merge is underway (after the first node is accepted) the group will require a new neighbour group. As each node accepts the Add to the new group it will have to contact its neighbour group to find that neighbour's neighbour and connect to it. However, the merging groups will possibly be required to complete the merge to get to a minimum quorum! [TODO tbc]

A closer look at Remove

Until we looked at split, Remove was specific to bad nodes being ejected, or to us being required to disconnect (if still connected) from a node on an accumulated Remove Block. Split, though, is a case where we do not necessarily wish to actually disconnect from a node, but we do wish to stop considering it part of our group; in fact it is now implicitly a neighbour in a new section (prefix). To clarify this, we should now create a new type, Kill, and use this where we actually wish to no longer be connected to the node. Remove then becomes a trigger to remove this node and all in its (new) prefix from our current group. Unlike merge (above), the last node we Remove will alter our prefix. Therefore we may be required to explicitly state this is a Split, as removing the first node means our section looks OK again and we would not continue to Remove nodes.

As our prefix has changed we will be required to Remove any sections that we no longer require [TODO tbc].

A closer look at merge and split

It is clear a merge or split action is a decision that we must make in relation to adding or disconnecting nodes from our group. This implicit action is obvious, measurable and detectable locally. When we locally detect this action we can request consensus, by casting a vote (or many votes for split/merge).

Again, due to eventual consensus we cannot vote to add or remove a specific set of nodes, as we as a group will not agree on specifically which nodes at any particular point in time. We can, though, vote for the nodes we think should be added or removed. In this way the group can come to consensus on each individual node.

Merge is interesting, though, as it means reaching out of our group to a neighbour group that we wish to merge with. That affects the neighbour's prefix as well as ours, even when the neighbour sees no reason in their group why this should happen. They need to be convinced that it should (the order of priorities for each node is Network, Group, Self and this cannot change).

Therefore we need to convince the neighbour that the network is in danger if we do not merge. This is made simpler as we know the neighbour nodes are all connected to us and they can see that our group has shrunk. To save the network they must agree to merge, which in our case means we all join a new prefix (current prefix - 1 bit).

Rule 7: An Add from a neighbour group is allowed if our group is below the minimum or the neighbour group is below the minimum (as seen locally). The Add must be only for our prefix - 1 bit, though.

New prefix event

Merge and split are two separate events that we require to complete once started. From the above we can see that merge seems to almost auto-complete, if we consider that our prefix has now changed. Split, however, does not have this implicit knowledge. Therefore reasoning about our prefix is likely essential.

To satisfy this conundrum we introduce another new event, ToPrefix; this event will precede any Remove or Add relating to a split or merge event. This also gives us the ability to know the current prefix our section refers to.

As we have seen, merge almost auto-completes as we simply Add the nodes that we need, and we control that; even losing nodes during this process is OK (we cannot control loss). However, during a split we have an issue if we lose nodes during the process: we may need to merge again and cannot continue the split. Using the new ToPrefix event, though, does not require that we complete the split; we can vote for this new prefix at any time.

Rule XX: During a split (move to a larger prefix), if we lose nodes such that a new section would be too small, we stop voting for further Removes and instead vote for the next ToPrefix back to our original group, or in extreme cases back two levels (essentially changing our split into a merge with the previous neighbor).

This rule will mean that we do not detach from our previous neighbor until we have reached the new prefix (which is implicit in both split and merge).

Rule XX: As we Remove nodes when moving prefix, we must RemoveValidVoter and AddValidVoter for each valid voter we will replace. The AddValidVoter is for the oldest node not yet in G (although it will be a section member) in our new prefix.

Rule XX: As we Add nodes to our new prefix we must RemoveValidVoter for the current oldest voter and AddValidVoter for the new oldest voter in the new prefix.

Knowledge of the neighbour nodes' ages will be very helpful in saving us from replacing valid voters too often as the groups merge.

Prefix versions

As this event type can be repeated, possibly with the same quorum (or a quorum from a previously existing vote for ToPrefix), we will require a prefix index. Therefore the last valid ToPrefix Block will have an index number, and our vote must now be for that number + 1.

In the event of a cancelled or abandoned event (like a split whilst losing nodes), the current attempts to split will be trying to vote on a different ToPrefix but with the same index. This seems counter-intuitive, but is actually OK. If a quorum did vote for the “in flight” ToPrefix then they will not vote for the current ToPrefix as it has the wrong index (not a valid successor to the last valid prefix). If it was valid to move to a new prefix those nodes would also have sent the valid ToPrefix with index + 1. This then resolves.
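
A minimal sketch of that successor check (the types here are placeholders) could be:

type Prefix = u8; // placeholder for a real prefix type

struct ToPrefix {
    prefix: Prefix,
    index: u64,
}

/// A ToPrefix vote is only acceptable as the direct successor of the last
/// ToPrefix Block that became valid in our chain; replays and abandoned
/// attempts carrying a stale index simply fail this check and resolve away.
fn to_prefix_is_successor(last_valid: &ToPrefix, candidate: &ToPrefix) -> bool {
    candidate.index == last_valid.index + 1
}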

Rule XX: If we receive a valid ToPrefix from our neighbor group we must also [TODO] XXXX (do we really need to vote for something a network group tells us?)

This allows us to collapse (merge) multiple levels at once if required (the index helps)

Open question - should index be in every vote?

To avoid any replay-type attacks (old votes replayed or inserted into valid chains, invalidly), it may be wise to include the prefix index in each Vote/Block. It may be that this should extend to also include the actual prefix (to avoid using cancelled events such as abandoned Removes in the split case) as well. There is not a compelling argument for this yet, though.

Open question 2 - should RemoveValidVoter be ReplaceValidVoter

AddValidVoter is required from the very start of the network, or in cases where a “valid voter” dies and there are then fewer than G “valid voters”. However, at any other time we should not remove a valid voter without replacing it immediately; therefore it makes sense (perhaps) to replace RemoveValidVoter with ReplaceValidVoter, as we should never have fewer than G valid voters after the initial group forms, if at all possible.

[Note: everything below here still needs to be updated]

Redefine BlockIdentifier

At this stage we can now redefine the BlockIdentifier as follows:

// This is a simple identifier, ignoring data; in a real system this would be
// part of an enum that includes data.
pub enum BlockIdentifier {
    Remove(PublicKey),           // remove node from our group/section
    Kill(PublicKey),             // remove node from the network
    RemoveValidVoter(PublicKey), // demote a current valid voter
    Relocate(PublicKey),         // node has restarted in our group, so it is relocated; shall only happen after `Kill`
    Add(PublicKey),              // add node to our section
    AddValidVoter(PublicKey),    // promote member to valid voter
    ToPrefix(Prefix, Index),     // move the section to a new prefix at the given index
}

This is now a simpler type and therefore easier to reason about. The Kill event is clear and means we have removed this node from the network; if we do see it again it will be once only, to relocate.

Our section members

Locally we can see that we believe certain members of our section are valid. As we receive a “claimed” valid Block we must do two things before we can resend the valid Block to our peers:

  1. Confirm each voter is what we consider locally as a valid voter.
  2. Ensure enough votes are present to satisfy our local knowledge of a quorum.

If this holds true then we have a valid Block and can either accept it in total (if we have not previously seen the BlockIdentifier) or use the votes we do not have to complete our Block.

This validates our section members are “close enough” in real time to our peers and the Block validates allowing the chain to progress.

Rule 8: On receipt of a “claimed” valid Block we confirm each voter is valid locally and ensure there are at least quorum votes in the claim, if so we validate the Block locally and resend adding any further votes we have to it.

A final look at order of Blocks

Now we have analysed the chain a little deeper and simplified the BlockIdentifier we can look at order of events. Here we will attempt to convince the reader that there is an implicit total order to the BlockIdentifiers and their corresponding votes.

First we will recap the rules we have agreed so far and adjust the Remove/Kill identifiers as per the discussion. Note that rules 6 & 8 are removed here as they were dangerous and used local knowledge to re-order a chain that would require such local knowledge to validate it, which is obviously impossible in retrospect. The rules have been renumbered to record this.

Rule 1 : First Block requires no vote
Rule 2 : Second node only requires first node’s vote
Rule 2 : After Block1 any further Block requires a majority of valid voters to agree
Rule 3 : A genesis of a chain requires a minimum of 4 nodes (3 nodes may only add another node; they cannot afford to lose one)
Rule 3: If we locally see a node lost, then we do not need to take action on the non-vote of that node for any Block in process.
Rule 6: If we receive a valid Block with the missing node's vote before the Kill or Remove vote for that node accumulates, then we accept the vote, but move our Kill or Remove proposal above the current Block.
Rule 6: A valid voter that is Removed or Killed must have its accumulating Block after any Block that contains its vote, until the Remove or Kill for that node accumulates.
Note rule 8 is now split into two rules

Rule 8a: If a valid voter is removed via an accumulated Kill block then no further votes are allowed from that node.
Rule 8b: If a valid voter is removed via an accumulated Remove block then no further votes are allowed from that node unless it is Added again.

Rule 4: If we have two competing nodes that would qualify for valid voter status, then we select the one with the lowest ID
Rule 5: If we have G + buffer members of our group that differ in the common prefix of our group + 1 bit and we have G + buffer members that share the same prefix + 1 as us then we vote to Remove the members that differ from our address in our common prefix + 1
Rule 6: If our group shrinks to a minimum size we must Add from our neighbour group.
Rule 7: An Add from a neighbour group is allowed, if our group is below minimum or the neighbour group is below minimum (as seen locally). The add must be only for our prefix - 1 bit though.
Rule 8: On receipt of a “claimed” valid Block we confirm each voter is valid locally and ensure there are at least quorum votes in the
claim, if so we validate the Block locally and resend adding any further votes we have to it.

With these rules in place we have an ordering of all Blocks by all nodes, and these will be equivalent in each node, eventually.

[Further note to be decided]
We will, however, be required to merge chains on a merge of a group; to do this we will introduce another rule.
Rule 13: On receipt of an Add from a neighbor group we will request the node's full chain for that prefix; from that chain we will request all data from the section and create a Vote for each data block to be inserted into our chain.

Rule 14: If we receive a data block from nodes in our section that share a prefix, and it is voted for by a majority of that prefix, we will accept the vote as valid and insert the data.

This means the merged group will contain all data from the two merged groups.
