Fabrice Drouin [ARCHIVE]
2023-06-09 12:48:53

📅 Original date posted: 2018-02-13
📝 Original message:
On 12 February 2018 at 02:45, Rusty Russell <rusty at rustcorp.com.au> wrote:
> Christian Decker <decker.christian at gmail.com> writes:
>> Rusty Russell <rusty at rustcorp.com.au> writes:
>>> Finally catching up. I prefer the simplicity of the timestamp
>>> mechanism, with a more ambitious mechanism TBA.
>>
>> Fabrice and I had a short chat a few days ago and decided that we'll
>> simulate both approaches and see what consumes less bandwidth. With
>> zombie channels and the chances for missing channels during a weak form
>> of synchronization, it's not that clear to us which one has the better
>> tradeoff. With some numbers behind it it may become easier to decide :-)
>
> Maybe; I think we'd be best off with an IBLT-approach similar to
> Fabrice's proposal. An IBLT is better than a simple hash, since if your
> results are similar you can just extract the differences, and they're
> easier to maintain. Even easier if we make the boundaries static rather
> than now-relative. For node_announce and channel_update you'd probably
> want separate IBLTs (perhaps, though not necessarily, as a separate
> RTT).

Yes, real filters would be better, but the 'bucket hash' idea works (from
what I've seen on testnet) for our specific target: nodes which are
connected to a very small number of peers and go offline very often.
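
To be explicit about what I mean by 'bucket hash', a rough sketch (the
bucket size, helper names and hash choice are only for illustration):
group short channel ids by block-height range, hash each group, and only
re-request the buckets whose hashes differ.

import hashlib
import struct

BUCKET_SIZE = 144  # blocks per bucket, an arbitrary choice for this sketch

def bucket_hashes(short_channel_ids):
    """Hash the sorted short channel ids falling into each block-height bucket.

    short_channel_ids: iterable of (block_height, tx_index, output_index).
    Two peers can exchange these small per-bucket hashes and only re-request
    the buckets whose hashes differ.
    """
    buckets = {}
    for height, tx_index, output_index in sorted(short_channel_ids):
        # usual 8-byte packing: 3 bytes height, 3 bytes tx index, 2 bytes output index
        scid = struct.pack(">Q", (height << 40) | (tx_index << 16) | output_index)
        buckets.setdefault(height // BUCKET_SIZE, []).append(scid)
    return {bucket: hashlib.sha256(b"".join(ids)).digest()
            for bucket, ids in buckets.items()}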


> Note that this approach fits really well as a complement to the
> timestamp approach: you'd use this for older pre-timestamp, where you're
> likely to have a similar idea of channels.

Both approaches may be needed because they may be solutions to different
problems (nodes which get disconnected from a small set of peers vs nodes
connected to many peers, which remain online while some of their peers do
not).

>>> Now, as to the proposal specifics.
>>>
>>> I dislike the re-transmission of all old channel_announcement and
>>> node_announcement messages, just because there's been a recent
>>> channel_update. Simpler to just say 'send anything >=
>>> routing_sync_timestamp`.
>>
>> I'm afraid we can't really omit the `channel_announcement` since a
>> `channel_update` that isn't preceded by a `channel_announcement` is
>> invalid and will be dropped by peers (especially because the
>> `channel_update` doesn't contain the necessary information for
>> validation).
>
> OTOH this is a rare corner case which will eventually be fixed by weekly
> channel_announce retransmission. In particular, the receiver should
> have already seen the channel_announce, since it preceded the timestamp
> they asked for.
>
> Presumably IRL you'd ask for a timestamp sometime before you were last
> disconnected, say 30 minutes.
>
> "The perfect is the enemy of the good".

This is precisely what I think would not work very well with the timestamp
approach: when you're missing an 'old' channel announcement and only have a
few sources for it. It can have a huge impact on terminal nodes, which won't
be able to find routes, and waiting for a new channel update would take too
long.
Yes, using just a few peers means that you will be limited to the routing
table they give you, but having some kind of filter would let nodes connect
to other peers just to retrieve the missing announcements and check how far
off they are from the rest of the network. This would not be possible with a
timestamp (you would need to download the entire routing table again, which
is what we're trying to avoid).
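
To illustrate (the function below is just my own sketch, not a proposed
message): if a peer gives you a digest of its routing table, say the set of
its short channel ids, you can compute exactly which announcements you are
missing and how far off you are, instead of re-downloading everything.

def compare_routing_tables(local_scids, remote_scids):
    """Compare our short channel id set against a peer's.

    local_scids, remote_scids: sets of 8-byte short channel ids.
    Returns the ids we are missing and a rough divergence ratio that a node
    could use to decide whether this peer's view is worth syncing from.
    """
    missing = remote_scids - local_scids  # announcements we could fetch from this peer
    extra = local_scids - remote_scids    # channels the peer does not know about (or zombies)
    total = len(local_scids | remote_scids)
    divergence = (len(missing) + len(extra)) / total if total else 0.0
    return missing, divergence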

>>> Background: c-lightning internally keeps a tree of gossip in the order
>>> we received them, keeping a 'current' pointer for each peer. This is
>>> very efficient (though we don't remember if a peer sent us a gossip msg
>>> already, so uses twice the bandwidth it could).

OK, so a peer would receive an announcement it has sent, but would
immediately dismiss it?

>>
>> We can solve that by keeping a filter of the messages we received from
>> the peer, it's more of an optimization than anything, other than the
>> bandwidth cost, it doesn't hurt.
>
> Yes, it's on the TODO somewhere... we should do this!
>
>>> But this isn't *quite* the same as timestamp order, so we can't just set
>>> the 'current' pointer based on the first entry >=
>>> `routing_sync_timestamp`; we need to actively filter. This is still a
>>> simple traverse, however, skipping over any entry less than
>>> routing_sync_timestamp.
>>>
>>> OTOH, if we need to retransmit announcements, when do we stop
>>> retransmitting them? If a new channel_update comes in during this time,
>>> are we still to dump the announcements? Do we have to remember which
>>> ones we've sent to each peer?

>>
>> That's more of an implementation detail. In c-lightning we can just
>> remember the index at which the initial sync started, and send
>> announcements along until the index is larger than the initial sync
>> index.
>
> True. It is an implementation detail which is critical to saving
> bandwidth though.
>
>> A more general approach would be to have 2 timestamps, one highwater and
>> one lowwater mark. Anything in between these marks will be forwarded
>> together with all associated announcements (node / channel), anything
>> newer than that will only forward the update. The two timestamps
>> approach, combined with a new message, would also allow us to send
>> multiple `timestamp_routing_sync` messages, e.g., first sync the last
>> hour, then the last day, then the last week, etc. It gives the syncing
>> node control over what time window to send, inverting the current initial
>> sync.
>
> That would fit neatly with the more complicated bucketing approaches:
> you'd use this technique to ask for the entire bucket if the SHA
> mismatched/IBLT failed.
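
For what it's worth, the two-timestamp idea quoted above could look roughly
like this (the gossip store and helper names are purely illustrative, the
point is only the selection logic):

def gossip_for_sync_window(gossip_store, lowwater, highwater):
    """Rough sketch of one sync window between two timestamps.

    Between the two marks, forward updates together with the announcements
    they depend on; above the highwater mark, forward only the update, since
    the receiver is assumed to already have the announcements.
    """
    for update in gossip_store.channel_updates():
        if lowwater <= update.timestamp <= highwater:
            yield from gossip_store.announcements_for(update.short_channel_id)
            yield update
        elif update.timestamp > highwater:
            yield update

# The syncing node would then request successive windows, e.g. the last
# hour, then the last day, then the last week.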

There is also something that would work fairly well today: just exchange
all the short ids that you have.
With the simplest possible implementation (sort and concatenate all 8-byte
short ids and compress with xz or gz or zip) it fits in about 8 Kb. And
there are lots of easy optimizations (heights are mostly consecutive
integers, tx and output indexes are small...).
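
For instance (just a sketch to back the size estimate above; zlib stands in
for whichever compressor we'd actually pick):

import struct
import zlib

def encode_short_ids(short_channel_ids):
    """Sort, concatenate and compress 8-byte short channel ids.

    short_channel_ids: iterable of (block_height, tx_index, output_index).
    A smarter encoding could delta-encode the mostly consecutive heights and
    use fewer bytes for the small tx/output indexes, but even this naive
    version compresses well because sorted ids share long prefixes.
    """
    packed = b"".join(
        struct.pack(">Q", (height << 40) | (tx_index << 16) | output_index)
        for height, tx_index, output_index in sorted(short_channel_ids))
    return zlib.compress(packed, 9)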

> Cheers,
> Rusty.
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev