Michael Grønager [ARCHIVE] on Nostr:
📅 Original date posted: 2011-12-29
🗒️ Summary of this message: The trickle algorithm in CNode::SendMessages sends only 1/4 of transaction-invs, but the other 3/4 are carried around from round to round. A suggestion is made to remove the redundant vInvWait stuff or change the algorithm.
📝 Original message: In CNode::SendMessages there is a trickle algorithm. Judging from the comments, it is supposed to:
* At each update round, a new (random) trickle node is chosen; with 120 nodes and an average round time of 100 ms (the sleep), we will have moved through roughly all nodes every 12-15 seconds (see the sketch after this list).
* When a node is the trickle node, it gets to send all its pending addresses to its corresponding peer.
* When a node is not the trickle node (the rest of the nodes), we send transaction-invs, but only 1/4 of them; the rest are pushed back to wait for the next round and should eventually get sent.
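For concreteness, a minimal sketch of that round structure as I read it (a hypothetical simplification; PickTrickleNode is a made-up name and the real code keeps more state per peer):

#include <cstdlib>
#include <vector>

struct CNode { /* peer state: vInventoryToSend, pending addresses, ... */ };

// Each ~100 ms round, one connected peer is picked at random as the trickle
// node; with ~120 peers, every peer gets a turn roughly every 12-15 seconds.
CNode* PickTrickleNode(std::vector<CNode*>& vNodes)
{
    if (vNodes.empty())
        return nullptr;
    return vNodes[std::rand() % vNodes.size()];
}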
However, the 1/4 of the invs to send is chosen by the test:
((inv.getHash() ^ hashSalt) & 3) == 0
As hashSalt is a constant (static, generated at startup) and the hash of an inv is constant for that inv too, the other 3/4 will never get sent, and hence it does not make sense to carry them around from round to round:
if (fTrickleWait) vInvWait.push_back(inv);
and:
pto->vInventoryToSend = vInvWait;
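Put together, the relay step as I read it amounts to the following (a minimal sketch of my reading, not the actual SendMessages code; uint64_t stands in for Bitcoin's uint256, and RelayQuarter is a made-up name):

#include <cstdint>
#include <vector>

struct CInv { uint64_t hash; };

// hashSalt is static, chosen once at startup, and inv.hash never changes,
// so the test below evaluates the same way in every round.
void RelayQuarter(std::vector<CInv>& vInventoryToSend,
                  std::vector<CInv>& vSendNow,
                  uint64_t hashSalt)
{
    std::vector<CInv> vInvWait;
    for (const CInv& inv : vInventoryToSend)
    {
        bool fTrickleWait = ((inv.hash ^ hashSalt) & 3) != 0; // ~3/4 of invs
        if (fTrickleWait)
            vInvWait.push_back(inv);  // queued for the next round, where the
                                      // same test gives the same answer
        else
            vSendNow.push_back(inv);  // always the same 1/4
    }
    vInventoryToSend = vInvWait;      // the untouched 3/4 cycle forever
}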
The hashSalt will be different for each node in the peer-to-peer network, and hence, as long as we have many more than 4 nodes, all txes will still be sent around (with n peers each forwarding an independent quarter of the hash space, the chance that a given inv is forwarded by no one is (3/4)^n).
Ironically, this (wrong?) implementation divides the inv forwarding hash space into 4, along the same lines as we discussed last week for DHTs...
I suggest either keeping the algorithm as is but removing the redundant vInvWait stuff, or changing the algorithm to e.g. push the txes into a multimap (invHash ^ hashSalt, invHash) and send the first 25% in each round (sketched below).
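A minimal sketch of that multimap variant (my own illustration, again with uint64_t standing in for uint256 and SendQuarter a made-up name):

#include <cstdint>
#include <map>
#include <vector>

struct CInv { uint64_t hash; };

// Order the pending invs by their salted hash and send the first 25% each
// round; pending invs work their way to the front over successive rounds,
// whichever quarter of the hash space they fall in.
void SendQuarter(std::vector<CInv>& vInventoryToSend,
                 std::vector<CInv>& vSendNow,
                 uint64_t hashSalt)
{
    std::multimap<uint64_t, CInv> mapOrdered;
    for (const CInv& inv : vInventoryToSend)
        mapOrdered.insert(std::make_pair(inv.hash ^ hashSalt, inv));

    size_t nSend = (mapOrdered.size() + 3) / 4; // 25%, at least 1 if non-empty
    vInventoryToSend.clear();
    for (const auto& entry : mapOrdered)
    {
        if (nSend > 0)
        {
            vSendNow.push_back(entry.second);         // goes out this round
            --nSend;
        }
        else
            vInventoryToSend.push_back(entry.second); // waits for later rounds
    }
}

Since hashSalt differs per node, the ordering still differs per node, so each peer relays in its own pseudo-random order.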
The last alternative is that I have misunderstood the code... if so, please correct me ;)
Happy New Year!
Michael