📅 Original date posted: 2022-04-22
📝 Original message:
On 4/22/22 9:15 AM, Alex Myers wrote:
> Hi Matt,
>
> Appreciate your responses.  Hope you'll bear with me as I'm a bit new to this.
>
> Instead of trying to make sure everyone’s gossip acceptance matches exactly, which as you point
> out seems like a quagmire, why not (a) do a sync on startup and (b) do syncs of the *new* things.
>
> I'm not opposed to this technique, and maybe it ends up as a better solution.  The rationale for not
> going full Erlay approach was that it's far less overhead to maintain a single sketch than to
> maintain a per-peer sketch and associated state for every gossip peer.  In this way there's very
> little cost to adding additional gossip peers, which further encourages propagation and convergence
> of the gossip network.

I'm not sure what you mean by per-peer state here - I'd think you can implement it with a simple
"list of updates that happened since time X" structure, instead of having to maintain per-peer state.

> IIUC Erlay's design was concerned for privacy of originating nodes.  Lightning gossip is public by
> nature, so I'm not sure we should constrain ourselves to the same design route without trying the
> alternative first.

Part of the design of Erlay, especially the insight of syncing recent updates instead of full
mempools, was actually motivated by this precise issue - Bitcoin Core nodes differ in relay policy
for a number of reasons (especially across upgrades), and syncing the full mempool would result in
degenerate cases of trying over and over to sync transactions your peer is rejecting. At least if I
recall correctly.
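A contrived illustration of that degenerate case (toy sets, nothing resembling Erlay's actual
protocol): with full-set reconciliation, anything one side's policy rejects sits in the set
difference on every round, whereas reconciling only what's new since the last round offers each
rejected item exactly once:

```rust
use std::collections::HashSet;

fn main() {
    // Peer B's policy rejects odd-numbered items; peer A accepts everything.
    let accepts_b = |x: &u32| x % 2 == 0;

    let set_a: HashSet<u32> = (1..=6).collect();
    let mut set_b: HashSet<u32> = set_a.iter().copied().filter(accepts_b).collect();

    // Full-set reconciliation: A offers 1, 3, 5 on every round, B rejects
    // them on every round, and the difference never shrinks.
    for round in 1..=3 {
        let diff: Vec<u32> = set_a.difference(&set_b).copied().collect();
        println!("full-set sync, round {round}: diff = {diff:?}");
        for item in diff {
            if accepts_b(&item) {
                set_b.insert(item); // accepted items converge...
            } // ...rejected ones reappear in the next round's difference
        }
    }

    // Update-based sync: only items A accepted since the last round are
    // offered, so each policy-rejected item wastes capacity exactly once.
    let new_since_last_round = [7u32, 8];
    let offered: Vec<u32> = new_since_last_round
        .iter()
        .copied()
        .filter(|x| !set_b.contains(x))
        .collect();
    println!("update sync: offered exactly once = {offered:?}");
}
```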

> if we're gonna add a minisketch-based sync anyway, please let's also use it for initial sync
> after restart
>
> This was out of the scope of what I had in mind, but I will give this some thought. I could see how
> a block_height reference coupled with set reconciliation could provide some better options here.
> This may not be all that difficult to shoe-horn in.
>
> Regardless of single sketch or per-peer set reconciliation, it should be easier to implement with
> tighter rules on rate-limiting. (Keep in mind, the node's graph can presumably be updated
> independently of the gossip it rebroadcasts if desired.) As a thought experiment, if we consider a
> CLN-LDK set reconciliation, and that each node is gossiping with 5 other peers at evenly spaced
> intervals, we would currently see 42.8 commonly accepted channel_updates over an average 60s window
> along with 11 more updates which LDK accepts and CLN rejects (spam).[1] Assuming the other 5 peers
> have shared 5/6ths of this gossip before the CLN/LDK set reconciliation, we're left with CLN seeing
> 7 updates to reconcile, while LDK sees 18.  Already we've lost 60% efficiency due to lack of a
> common rate-limit heuristic.
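(For reference, Alex's numbers above work out roughly as follows, assuming six gossip peers per
node and taking the 42.8 and 11 updates-per-60s figures as given:)

```rust
fn main() {
    // Per Alex's example: over an average 60s window, 42.8 channel_updates
    // pass both implementations' filters, plus 11 that LDK accepts but CLN
    // rejects as spam.
    let common = 42.8_f64;
    let ldk_only = 11.0_f64;

    // With 6 gossip peers each, the other 5 have already delivered 5/6ths
    // of the commonly accepted gossip before this reconciliation.
    let remaining_common = common / 6.0; // ~7.1: what CLN still needs

    // The 11 CLN-rejected updates never reach CLN from any peer, so all of
    // them sit in the set difference on LDK's side.
    let ldk_side = remaining_common + ldk_only; // ~18.1

    println!("CLN reconciles ~{:.0} updates", remaining_common); // ~7
    println!("LDK reconciles ~{:.0} updates", ldk_side); // ~18
    // ~61% of the difference the sketch must encode is spam - the
    // "lost 60% efficiency" above.
    println!("wasted: {:.0}%", 100.0 * ldk_only / ldk_side);
}
```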

I do not believe that we will ever form a strong agreement on exactly what the rate-limits should
be. And even if we do, we still have the issue of upgrades, where a simple change to the rate-limits
causes sync to suddenly blow up and hit degenerate cases all over the place. Unless we can make the
sync system relatively robust against slightly different policies, I think we're kinda screwed.

Worse, what happens if someone sends updates at exactly the limit of the rate-limiters? Presumably
people will do this because "that's what the limit is and I want to send updates as often as I can
because...". Now you'll still have similar issues, I believe.

> I understand gossip traffic is manageable now, but I'm not sure it will be that long before it
> becomes an issue. Furthermore, any particular set reconciliation technique would benefit from a
> simple common rate-limit heuristic, not to mention originating nodes, who may not currently realize
> their channel updates are being rejected by a portion of the network due to differing criteria
> across implementations.

Yes, I agree there is definitely a concern with differing criteria resulting in nodes not realizing
their gossip is not propagating. I agree guidelines would be nice, but guidelines don't solve the
issue for sync, sadly, I think. Luckily lightning does provide a mechanism to bypass the rejection -
send an update back with an HTLC failure. If you're trying to route an HTLC and a node has new
parameters for you, it'll helpfully let you know when you try to use the old parameters.
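(This is the BOLT 4 mechanism: failure codes with the UPDATE flag set carry the failing node's
latest channel_update inside the failure message, so the sender learns the new parameters even if
gossip never delivered them. A toy sketch of the receiving side - the types here are made up, not
LDK's or CLN's API:)

```rust
/// Toy stand-in for a decoded BOLT 4 onion failure message.
struct OnionFailure {
    code: u16,
    /// Raw channel_update bytes, present when the UPDATE flag is set.
    channel_update: Option<Vec<u8>>,
}

/// BOLT 4: failure codes with this bit set include a channel_update.
const UPDATE: u16 = 0x1000;

/// On an HTLC failure, pull the fresher channel_update out of the failure
/// message and apply it to our graph, bypassing gossip entirely.
fn apply_failure_update(failure: &OnionFailure, graph: &mut Vec<Vec<u8>>) {
    if failure.code & UPDATE != 0 {
        if let Some(update) = &failure.channel_update {
            // A real implementation verifies signature and timestamp first;
            // this toy just stores the raw bytes.
            graph.push(update.clone());
        }
    }
}

fn main() {
    let mut graph = Vec::new();
    // e.g. temporary_channel_failure = UPDATE | 7
    let failure = OnionFailure {
        code: UPDATE | 7,
        channel_update: Some(vec![0xde, 0xad]),
    };
    apply_failure_update(&failure, &mut graph);
    assert_eq!(graph.len(), 1);
}
```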

Matt