Olaoluwa Osuntokun [ARCHIVE]

πŸ“… Original date posted:2018-02-06
πŸ“ Original message:
Hi Y'all,

Definitely agree that we need a stop-gap solution to fix the naive table
dump on initial connect. I've been sketching out some fancier stuff, but we
would need time to properly tune the fanciness, and I'm inclined to get out
a stop-gap solution asap. On testnet, the zombie churn is pretty bad atm.
It results in needlessly wasted bandwidth, as the churn is now almost
constant. There still seem to be some very old testnet nodes out there.
Beyond the zombie churn, given the size of the testnet graph, we're forced
to send tens of thousands of messages upon initial connect (even if we're
already fully synced), so it's very wasteful overall.

So I think the primary distinction between y'all's proposals is that
cdecker's proposal focuses on eventually synchronizing the full set of
_updates_, while Fabrice's proposal cares *only* about newly created
channels. The rationale for only caring about new channels is that if one
tries to route over a channel with a stale channel update, one gets back an
error with the latest update encapsulated.
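
For concreteness, here's a toy sketch of that error path in Go, with made-up
names rather than any implementation's actual types: a failed attempt through
a hop whose policy we held stale hands us back the hop's latest channel
update, which we can apply on the spot.

```go
// Toy sketch (hypothetical names): patch the local graph with a
// channel_update carried back inside a routing failure.
package main

import "fmt"

// chanUpdate is a stand-in for the handful of channel_update fields that
// matter here.
type chanUpdate struct {
	shortChanID uint64
	timestamp   uint32
}

// applyIfFresher installs an update pulled out of a routing error, but only
// if it's newer than what the local graph already holds.
func applyIfFresher(graph map[uint64]chanUpdate, u chanUpdate) bool {
	if existing, ok := graph[u.shortChanID]; ok && existing.timestamp >= u.timestamp {
		return false
	}
	graph[u.shortChanID] = u
	return true
}

func main() {
	graph := map[uint64]chanUpdate{42: {shortChanID: 42, timestamp: 1517000000}}
	// Pretend a payment attempt failed and the error carried this update.
	fromError := chanUpdate{shortChanID: 42, timestamp: 1517900000}
	fmt.Println("applied:", applyIfFresher(graph, fromError))
}
```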

Christian wrote:
> I propose adding a new feature bit (6, i.e., bitmask 0x40) indicating that
> the `init` message is extended with a u32 `gossip_timestamp`, interpreted as
> a UNIX timestamp.

As the `init` message solely contains two variable-sized byte slices, I
don't think we can actually safely extend it in this manner. Instead, a new
message is required, with the semantics of the feature bit _requiring_ each
side to send it directly after receiving the other side's `init` message.
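
As a rough sketch, here's what such a message could look like; the name,
type number, and framing below are illustrative assumptions on my part, not
an assigned wire type:

```go
// Hypothetical sketch: a dedicated message carrying the u32 gossip
// timestamp, which the feature bit obligates each peer to send as its very
// first message after `init`. Nothing here is an actual assigned type.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// gossipTimestampMsgType is a placeholder wire type number.
const gossipTimestampMsgType uint16 = 999

// GossipTimestamp carries a single u32 UNIX timestamp acting as the gossip
// horizon from Christian's proposal.
type GossipTimestamp struct {
	Timestamp uint32
}

// Encode serializes the message as type || timestamp, big-endian, mirroring
// the usual Lightning wire framing.
func (g *GossipTimestamp) Encode() []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, gossipTimestampMsgType)
	binary.Write(&buf, binary.BigEndian, g.Timestamp)
	return buf.Bytes()
}

func main() {
	msg := &GossipTimestamp{Timestamp: 1517875200} // ~Feb 2018
	fmt.Printf("encoded: %x\n", msg.Encode())
}
```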

Aside from that, overall I like the simplicity of the protocol: it
eliminates both the zombie churn and the intensive graph dump on initial
connect, without any extra messaging overhead (for reconciliation, etc.).

Fabrice wrote:
> Just to be clear, you propose to use the timestamp of the most recent
> channel updates to filter the associated channel announcements?

I think he's actually proposing a general update horizon, where
vertexes+edges with a lower timestamp just shouldn't be sent at all. In the
case of an old zombie channel that gets resurrected, it would eventually be
re-propagated, as the node on either end of the channel should broadcast a
fresh update along with the original chan ann.
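
Reading it that way, the relay-side check is tiny; a sketch under assumed
names (not any implementation's API):

```go
// Sketch of the "update horizon" reading: when relaying gossip to a peer
// that advertised a timestamp horizon, skip any channel whose freshest
// update is older than it. A resurrected zombie still propagates once its
// endpoint rebroadcasts a fresh channel_update, pushing it back above the
// horizon.
package main

import "fmt"

type gossipRecord struct {
	shortChanID    uint64
	latestUpdateTS uint32 // timestamp of the freshest channel_update we hold
}

// filterByHorizon returns only the records the peer actually wants: those
// updated at or after its advertised horizon.
func filterByHorizon(records []gossipRecord, horizon uint32) []gossipRecord {
	var out []gossipRecord
	for _, r := range records {
		if r.latestUpdateTS >= horizon {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	records := []gossipRecord{
		{shortChanID: 1, latestUpdateTS: 1517000000}, // stale / zombie
		{shortChanID: 2, latestUpdateTS: 1517900000}, // fresh
	}
	fmt.Println(filterByHorizon(records, 1517800000))
}
```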

> When a node that supports channel announcement filters receives a
> `channel_announcement_filters` message, it uses it to filter channel
> announcements (and, implicitly, channel updates) before sending them

This seems to assume that both nodes have a strongly synchronized view of
the network. Otherwise, they'll regularly fall back to sending everything
that went on during the entire epoch. It also doesn't address the zombie
churn issue, as they may eventually send you very old channels you'll have
to deal with (or discard).
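
For reference, here's a minimal sketch of the bucket construction Fabrice
lays out in the quoted message below; the names are made up, and the marker
arithmetic follows his description (reading `now` as the current block
height), not any implementation:

```go
// Sketch of the bucket-hash filter: sort short channel IDs, group them into
// height buckets (144-block buckets before a marker height, 1-block buckets
// after), and hash each group's concatenated SCIDs.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

type bucketFilter struct {
	Height uint32
	Hash   [32]byte
}

// blockHeight extracts the block height from a short channel ID (the top 3
// bytes of the 8-byte SCID).
func blockHeight(scid uint64) uint32 {
	return uint32(scid >> 40)
}

// buildFilters groups sorted SCIDs into buckets and hashes each bucket.
func buildFilters(scids []uint64, currentHeight uint32) []bucketFilter {
	sort.Slice(scids, func(i, j int) bool { return scids[i] < scids[j] })

	// Marker height per the quoted formula: roughly a week of blocks ago,
	// rounded down to a multiple of 144.
	marker := 144 * ((currentHeight - 7*144) / 144)

	buckets := make(map[uint32][]uint64)
	for _, scid := range scids {
		h := blockHeight(scid)
		key := h // 1-block buckets after the marker
		if h < marker {
			key = 144 * (h / 144) // coarse 144-block buckets before it
		}
		buckets[key] = append(buckets[key], scid)
	}

	var filters []bucketFilter
	for height, group := range buckets {
		hasher := sha256.New()
		for _, scid := range group {
			binary.Write(hasher, binary.BigEndian, scid)
		}
		var digest [32]byte
		copy(digest[:], hasher.Sum(nil))
		filters = append(filters, bucketFilter{Height: height, Hash: digest})
	}
	sort.Slice(filters, func(i, j int) bool { return filters[i].Height < filters[j].Height })
	return filters
}

func main() {
	scids := []uint64{uint64(500000) << 40, uint64(500001)<<40 | 1}
	fmt.Println(len(buildFilters(scids, 501000)))
}
```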

> The use case we have in mind is mobile nodes, or more generally nodes
> which are often offline and need to resync very often.

How far back would this go? Weeks, months, years?

FWIW this approach optimizes for just learning of new channels instead of
learning of the freshest state you haven't yet seen.


-- Laolu


On Mon, Feb 5, 2018 at 7:08 AM Fabrice Drouin <fabrice.drouin at acinq.fr>
wrote:

> Hi,
>
> On 5 February 2018 at 14:02, Christian Decker
> <decker.christian at gmail.com> wrote:
> > Hi everyone
> >
> > The feature bit is even, meaning that it is required from the peer,
> > since we extend the `init` message itself, and a peer that does not
> > support this feature would be unable to parse any future extensions to
> > the `init` message. Alternatively we could create a new
> > `set_gossip_timestamp` message that is only sent if both endpoints
> > support this proposal, but that could result in duplicate messages being
> > delivered between the `init` and the `set_gossip_timestamp` message and
> > it'd require additional messages.
>
> We chose the other approach and propose to use an optional feature bit.
>
> > The reason I'm using the timestamp and not the blockheight in the short
> > channel ID is that we already use the timestamp for pruning. With the
> > blockheight-based approach we might ignore channels that were created,
> > then not announced or forgotten, and then later came back and are now
> > stable.
>
> Just to be clear, you propose to use the timestamp of the most recent
> channel updates to filter the associated channel announcements?
>
> > I hope this rather simple proposal is sufficient to fix the short-term
> > issues we are facing with the initial sync, while we wait for a real
> > sync protocol. It is definitely not meant to allow perfect
> > synchronization of the topology between peers, but then again I don't
> > believe that is strictly necessary to make the routing successful.
> >
> > Please let me know what you think, and I'd love to discuss Pierre's
> > proposal as well.
> >
> > Cheers,
> > Christian
>
> Our idea is to group channel announcements by "buckets", create a
> filter for each bucket, exchange and use them to filter out channel
> announcements.
>
> We would add a new `use_channel_announcement_filters` optional feature
> bit (7 for example), and a new `channel_announcement_filters` message.
>
> When a node that supports channel announcement filters receives an
> `init` message with the `use_channel_announcement_filters` bit set, it
> sends back its channel filters.
>
> When a node that supports channel announcement filters receives a
> `channel_announcement_filters` message, it uses it to filter channel
> announcements (and, implicitly, channel updates) before sending them.
>
> The filters we have in mind are simple:
> - Sort announcements by short channel id
> - Compute a marker height, which is `144 * ((now - 7 * 144) / 144)`
> (we round to multiples of 144 to make sync easier)
> - Group channel announcements that were created before this marker by
> groups of 144 blocks
> - Group channel announcements that were created after this marker by
> groups of 1 block
> - For each group, sort and concatenate all channel announcements' short
> channel ids and hash the result (we could use sha256, or the first 16
> bytes of the sha256 hash)
>
> The new `channel_announcement_filters` would then be a list of
> (height, hash) pairs ordered by increasing heights.
>
> This implies that implementations can easily sort announcements by
> short channel id, which should not be very difficult.
> An additional step could be to send all short channel ids for all
> groups for which the group hash did not match. Alternatively, we could
> use smarter filters.
>
> The use case we have in mind is mobile nodes, or more generally nodes
> which are often offline and need to resync very often.
>
> Cheers,
> Fabrice
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
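
For completeness, a rough sketch of the receiving side, using the same
assumed names as the construction sketch earlier in this message: recompute
local bucket hashes, compare them against the peer's (height, hash) pairs,
and fall back to sending everything in any bucket that doesn't match.

```go
// Sketch (hypothetical names): pick out the buckets whose hashes disagree
// with the peer's filter; those are the buckets whose announcements we send.
package main

import "fmt"

type bucketFilter struct {
	Height uint32
	Hash   [32]byte
}

// bucketsToSend returns the heights of every local bucket whose hash does
// not match the peer's filter (or that the peer lacks entirely).
func bucketsToSend(local, remote []bucketFilter) []uint32 {
	remoteByHeight := make(map[uint32][32]byte, len(remote))
	for _, f := range remote {
		remoteByHeight[f.Height] = f.Hash
	}

	var mismatched []uint32
	for _, f := range local {
		if h, ok := remoteByHeight[f.Height]; !ok || h != f.Hash {
			mismatched = append(mismatched, f.Height)
		}
	}
	return mismatched
}

func main() {
	local := []bucketFilter{{Height: 499968}, {Height: 500000, Hash: [32]byte{1}}}
	remote := []bucketFilter{{Height: 499968}, {Height: 500000, Hash: [32]byte{2}}}
	fmt.Println(bucketsToSend(local, remote)) // [500000]
}
```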