Matt Corallo [ARCHIVE] /
2023-06-09 13:06:06
in reply to nevent1q…98gr


📅 Original date posted:2022-05-27
📝 Original message:
On 5/26/22 8:59 PM, Alex Myers wrote:
>> Ah, this is an additional proposal on top, and requires a gossip "hard fork", which means your new
>> protocol would only work for taproot channels, and any old/unupgraded channels will have to be
>> propagated via the old mechanism. I'd kinda prefer to be able to rip out the old gossip sync code
>> sooner than a few years from now :(.
>
> I viewed it as a soft fork, where if you want to use set reconciliation, anything added to the set would be subject to a constricted ruleset - in this case the gossip must be accompanied by a blockheight tlv (or otherwise reference a blockheight) and it must not replace a message in the current 100 block range.
>
> It doesn't necessarily have to reference blockheight, but that would simplify many edge cases. The key is merely that a node is responsible for limiting its own gossip to a predefined interval, and it must be easily verifiable for any other nodes building and reconciling sketches. Given that we have access to a timechain, this just made the most sense.
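The rule quoted above - one update per node per 100-block interval, carried in a blockheight TLV and checkable by any peer building a sketch - could look roughly like the following. This is only an illustrative sketch; the 100-block window comes from the quoted proposal, but all names (`RATE_LIMIT_BLOCKS`, `admissible`, `last_bucket`) are hypothetical and appear in no BOLT.

```python
# Hypothetical check for the proposed set-reconciliation rate limit:
# an update may enter the sketch only if its blockheight TLV falls in a
# later 100-block bucket than the node's previously admitted update,
# i.e. it does not replace a message in the current 100-block range.

RATE_LIMIT_BLOCKS = 100  # interval from the quoted proposal

last_bucket = {}  # node_id -> bucket index of last sketch-admitted update


def admissible(node_id: str, update_blockheight: int) -> bool:
    """Return True if this update may be added to the reconciliation set."""
    bucket = update_blockheight // RATE_LIMIT_BLOCKS
    if last_bucket.get(node_id, -1) >= bucket:
        return False  # same (or earlier) bucket: violates the rate limit
    last_bucket[node_id] = bucket
    return True
```

Because the check depends only on the blockheight TLV and public history, any peer can evaluate it identically, which is what makes the limit enforceable at the sketch boundary rather than by local relay policy.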

Ah, good point, you can just add it as a TLV. It still implies that "old-gossip" can't go away for a
long time - ~everyone has to upgrade, so we'll have two parallel systems. Worse, people are relying
on the old behavior and some nodes may avoid upgrading to avoid the new rate-limits :(.

>>> If some nodes have 600000 and others have 600099 (because you broke the
>>> ratelimiting recommendation, and propagated both approx the same
>>> time), then the network will split, sure.
>>
>>
>> Right, so what do you do in that case, though? AFAIU, in your proposed sync mechanism if a node does
>> this once, you're stuck with all of your gossip reconciliations with every peer "wasting" one
>> difference "slot" for a day or however long it takes before the peer does a sane update. In my
>> proposed alternative it only appears once and then you move on (or maybe once more on startup, but
>> we can maybe be willing to take on some extra cost there?).
>
> This case may not be all that difficult. Easiest answer is you offer a spam proof to your peer. Send both messages, signed by the offending node as proof they violated the set reconciliation rate limit, then remove both from your sketch. You may want to keep the evidence in your data store, at least until it's superseded by the next valid update, but there's no reason it must occupy a slot of the sketch. Meanwhile, feel free to use the message as you wish, just keep both out of the sketch. It's not perfect, but the sketch capacity is not compromised and the second instance of spam should not propagate at all. (It may be possible to keep one, but this is the simplest answer.)
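The spam-proof handling sketched in the quote above - keep both conflicting signed messages as evidence, but let neither occupy a sketch slot - could be expressed roughly as below. All structure and names here (`process_update`, `window_msgs`, `proofs`) are illustrative assumptions, not a proposed wire format.

```python
# Illustrative handling of the quoted "spam proof" idea: when two signed
# updates from one node land in the same rate-limit window, record the
# pair as evidence and evict both from the reconciliation set.

RATE_LIMIT_BLOCKS = 100  # interval from the quoted proposal


def process_update(node_id, blockheight, msg, sketch_set, window_msgs, proofs):
    """sketch_set: messages currently offered for reconciliation
    window_msgs: node_id -> (bucket, msg) of the latest admitted update
    proofs: node_id -> pair of conflicting signed messages (the spam proof)"""
    bucket = blockheight // RATE_LIMIT_BLOCKS
    prev = window_msgs.get(node_id)
    if prev is not None and prev[0] == bucket:
        # Violation: both signed messages are the proof; neither is
        # reconciled, so sketch capacity is not consumed by the spammer.
        proofs[node_id] = (prev[1], msg)
        sketch_set.discard(prev[1])
        return
    window_msgs[node_id] = (bucket, msg)
    sketch_set.add(msg)
```

Note this is exactly the kind of rarely-exercised branch the reply below objects to: the violation path only runs when a peer misbehaves, so it would see little real-world testing.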

Right, well if we're gonna start adding "spam-proofs" we can hardly complain about the complexity of
tracking the changed-set :p.

Worse, unlike tracking the changed-set as proposed, this protocol is a ton of unused code to handle
an edge case we should only rarely hit...in other words code that will almost certainly be buggy,
untested, and fail if people start hitting it. In general, I'm not a huge fan of protocols with any
more usually-unused code than is strictly necessary.

This also doesn't capture things like channel_update extensions - BOLTs today say a recipient "MAY
choose NOT to for messages longer than the minimum expected length" - so now we'd need to remove that
(and I guess have a fixed "maximum length" for channel updates that everyone agrees to)...basically
we have to have exact consensus on valid channel updates across nodes.

>> Heh, I'm surprised you'd complain about this - IIUC your existing gossip storage system keeps this
>> as a side-effect so it'd be a single integer for y'all :p. In any case, if it makes the protocol a
>> chunk more efficient I don't see why it's a big deal to keep track of the set of which invoices have
>> changed recently, you could even make it super efficient by just saying "anything more recent than
>> timestamp X except a few exceptions that we got with some lag against the update timestamp".
>
> The benefit of a single global sketch is less overhead in adding additional gossip peers, though looking at the numbers, sketch decoding time seems to be the more significant driving factor than rebuilding sketches (when they're incremental). I also like maximizing the utility of the sketch by adding the full gossip store if possible.

Note that the alternative here does not prevent you from having a single global sketch. You can keep
a rolling global sketch that you send to all your peers at once, it would just be a bit of a
bandwidth burst when they all request a few channel updates/announcements from you.
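For intuition on why one rolling sketch can serve every peer at once, here is a deliberately tiny illustration: a capacity-one XOR sketch, where real deployments (minisketch/PinSketch, as used by Erlay) recover many differences rather than one. The function names and example messages are made up for the illustration.

```python
# Toy "single global sketch": XOR-fold short hashes of every message in
# the gossip store. The node maintains one value, hands the same value to
# every peer, and updates it incrementally by XORing in new messages.
# A real implementation would use a minisketch-style sketch instead.

import hashlib


def short_hash(msg: bytes) -> int:
    """64-bit short hash of a gossip message (illustrative)."""
    return int.from_bytes(hashlib.sha256(msg).digest()[:8], "big")


def build_sketch(store) -> int:
    s = 0
    for msg in store:
        s ^= short_hash(msg)
    return s


# A peer XORs our sketch with its own; with exactly one differing
# message, the result is that message's short hash.
ours = build_sketch([b"chan_upd_1", b"chan_upd_2"])
theirs = build_sketch([b"chan_upd_1"])
missing = ours ^ theirs  # short_hash(b"chan_upd_2") in this example
```

The point of the rolling-global-sketch argument above is visible even in the toy: the sketch is peer-independent, so adding a gossip peer costs nothing at build time; the per-peer cost only shows up when peers request the actual missing messages.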

More generally, I'm somewhat surprised to hear a performance concern here - I can't imagine we'd be
including any more entries in such a sketch than Bitcoin Core does transactions to relay to peers,
and this is exactly the design direction they went in (because of basically the same concerns).

> I still think getting the rate-limit responsibility to the originating node would be a win in either case. It will chew into sketch capacity regardless.

That's fair, though I do still very much struggle with how inflexible this proposal would be towards
any future changes to relay policy. Basically we're locking ourselves into a fixed rate-limit that
everyone has to agree to, with problems introduced if we ever go to change it.

Matt
Author Public Key: npub1e46n428mcyfwznl7nlsf6d3s7rhlwm9x3cmkuqzt3emmdpadmkaqqjxmcu