Jim Posen [ARCHIVE] /
npub1ncn…qt2n
2023-06-09 12:48:51
in reply to nevent1q…25jv

📅 Original date posted:2018-02-07
📝 Original message:
I like Christian's proposal of adding a simple announcement cutoff
timestamp with the intention of designing something more sophisticated
given more time.

I prefer the approach of having an optional feature bit signalling that a
`set_gossip_timestamp` message must be sent immediately after `init`, as
Laolu suggested. This way it doesn't conflict with any other possible
handshake extensions.
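Concretely, the ordering could look something like this (a Python sketch for illustration only; the feature-bit value, message names and dict encoding are placeholders, not the actual wire format):

```python
import time

# Placeholder value for illustration; the spec would assign the real bit.
SET_GOSSIP_TIMESTAMP_FEATURE_BIT = 7  # odd => optional

def handshake_messages(local_features, remote_features, horizon_seconds):
    """Messages to send when a connection opens (sketch).

    `set_gossip_timestamp` is sent immediately after `init`, and only
    when both peers advertise the optional feature bit, so the `init`
    message itself is left untouched.
    """
    msgs = [("init", {"features": local_features})]
    if (SET_GOSSIP_TIMESTAMP_FEATURE_BIT in local_features
            and SET_GOSSIP_TIMESTAMP_FEATURE_BIT in remote_features):
        cutoff = int(time.time()) - horizon_seconds
        msgs.append(("set_gossip_timestamp", {"timestamp": cutoff}))
    return msgs
```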


On Feb 7, 2018 9:50 AM, "Fabrice Drouin" <fabrice.drouin at acinq.fr> wrote:

Hi,

Suppose you partition nodes into 3 generic roles:
- payers: they mostly send payments, are typically small and operated
by end users, and are offline quite a lot
- relayers: they mostly relay payments, and would be online most of
the time (if they're too unreliable other nodes will eventually close
their channels with them)
- payees: they mostly receive payments; how often they can be online
is directly linked to their particular mode of operation (since you
need to be online to receive payments)

Of course most nodes would play more or less all roles. However,
mobile nodes would probably be mostly "payers", and they have specific
properties:
- if they don't relay payments they don't have to be announced. There
could be millions of mobile nodes that would have no impact on the
size of the routing table
- it does not impact the network when they're offline
- but they need an accurate routing table. This is very different from
nodes that mostly relay or accept payments
- they would be connected to a very small number of nodes
- they would typically be online for just a few hours every day, but
could be stopped/paused/restarted many times a day

Laolu wrote:
> So I think the primary distinction between y'alls proposals is that
> cdecker's proposal focuses on eventually synchronizing the whole set of
> _updates_, while Fabrice's proposal cares *only* about the newly created
> channels. It only cares about new channels as the rationale is that if one
> tries to route over a channel with a stale channel update for it, then
> you'll get an error with the latest update encapsulated.

If you have one filter per day and they don't match (because your peer
has channels that you missed, or channels that have been closed without
your knowledge), then you will receive all channel announcements for
that particular day, along with the associated updates.
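Put differently, only the buckets whose hashes differ (or that one side is missing entirely) need to be re-sent in full. A hypothetical comparison helper, assuming each filter is represented as a map from bucket height to bucket hash:

```python
def buckets_to_request(local_filters, remote_filters):
    """Return the bucket heights that are out of sync (sketch).

    A bucket must be re-sent in full when the two hashes differ, or
    when only one side has the bucket at all.
    """
    heights = set(local_filters) | set(remote_filters)
    return sorted(h for h in heights
                  if local_filters.get(h) != remote_filters.get(h))
```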

Laolu wrote:
> I think he's actually proposing just a general update horizon in which
> vertexes+edges with a lower time stamp just shouldn't be set at all. In the
> case of an old zombie channel which was resurrected, it would eventually be
> re-propagated as the node on either end of the channel should broadcast a
> fresh update along with the original chan ann.

Yes, but it could take a long time. It may be worse on testnet, since it
seems that nodes don't change their fees very often. "Payer nodes" need
a good routing table (as opposed to "relayers", which could work without
one if they never initiate payments).

Laolu wrote:
> This seems to assume that both nodes have a strongly synchronized view of
> the network. Otherwise, they'll fall back to sending everything that went on
> during the entire epoch regularly. It also doesn't address the zombie churn
> issue as they may eventually send you very old channels you'll have to deal
> with (or discard).

Yes, I agree that for nodes which have connections to a lot of peers, a
strongly synchronized routing table is harder to achieve, since a small
change may invalidate an entire bucket. Real queryable filters would be
much better, but in the worst case scenario we've sent an additional
30 kB or so of sync messages.
(A very naive filter would be to sort + pack all short ids, for example.)

But we focus on nodes which are connected to a very small number of
peers, and in this particular case it is not an unrealistic expectation.
We have built a prototype, and on testnet it works fairly well. I also
found nodes which have no direct channel between them but produce the
same filters for 75% of the buckets ("produce" here means that I opened
a simple gossip connection to them, got their routing table and used it
to generate filters).


Laolu wrote:
> How far back would this go? Weeks, months, years?
Since forever :)
One filter per day for all announcements that are older than now - 1
week (modulo 144).
One filter per block for recent announcements.
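The day/block cutoff could be computed like this (an illustrative Python sketch; `bucket_height` is a hypothetical helper, with all heights in blocks and 144 blocks taken as one day):

```python
def bucket_height(block_height, current_height):
    """Map a channel's funding block to its filter bucket (sketch).

    Announcements older than the marker (now - 7 * 144 blocks, rounded
    down to a multiple of 144) share one bucket per day (144 blocks);
    more recent announcements get one bucket per block.
    """
    marker = 144 * ((current_height - 7 * 144) // 144)
    if block_height < marker:
        return 144 * (block_height // 144)  # one bucket per day
    return block_height  # one bucket per block
```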

>
> FWIW this approach optimizes for just learning of new channels instead of
> learning of the freshest state you haven't yet seen.

I'd say it optimizes the case where you are connected to very few
peers, and are online a few times every day (?)

>
> -- Laolu
>
>
> On Mon, Feb 5, 2018 at 7:08 AM Fabrice Drouin <fabrice.drouin at acinq.fr>
> wrote:
>>
>> Hi,
>>
>> On 5 February 2018 at 14:02, Christian Decker
>> <decker.christian at gmail.com> wrote:
>> > Hi everyone
>> >
>> > The feature bit is even, meaning that it is required from the peer,
>> > since we extend the `init` message itself, and a peer that does not
>> > support this feature would be unable to parse any future extensions to
>> > the `init` message. Alternatively we could create a new
>> > `set_gossip_timestamp` message that is only sent if both endpoints
>> > support this proposal, but that could result in duplicate messages being
>> > delivered between the `init` and the `set_gossip_timestamp` message and
>> > it'd require additional messages.
>>
>> We chose the other approach and propose to use an optional feature bit.
>>
>> > The reason I'm using timestamp and not the blockheight in the short
>> > channel ID is that we already use the timestamp for pruning. In the
>> > blockheight based timestamp we might ignore channels that were created,
>> > then not announced or forgotten, and then later came back and are now
>> > stable.
>>
>> Just to be clear, you propose to use the timestamp of the most recent
>> channel updates to filter the associated channel announcements?
>>
>> > I hope this rather simple proposal is sufficient to fix the short-term
>> > issues we are facing with the initial sync, while we wait for a real
>> > sync protocol. It is definitely not meant to allow perfect
>> > synchronization of the topology between peers, but then again I don't
>> > believe that is strictly necessary to make the routing successful.
>> >
>> > Please let me know what you think, and I'd love to discuss Pierre's
>> > proposal as well.
>> >
>> > Cheers,
>> > Christian
>>
>> Our idea is to group channel announcements by "buckets", create a
>> filter for each bucket, exchange and use them to filter out channel
>> announcements.
>>
>> We would add a new `use_channel_announcement_filters` optional feature
>> bit (7 for example), and a new `channel_announcement_filters` message.
>>
>> When a node that supports channel announcement filters receives an
>> `init` message with the `use_channel_announcement_filters` bit set, it
>> sends back its channel filters.
>>
>> When a node that supports channel announcement filters receives
>> a `channel_announcement_filters` message, it uses it to filter channel
>> announcements (and, implicitly, channel updates) before sending them.
>>
>> The filters we have in mind are simple:
>> - Sort announcements by short channel id
>> - Compute a marker height, which is `144 * ((now - 7 * 144) / 144)`
>> (we round to multiples of 144 to make sync easier)
>> - Group channel announcements that were created before this marker by
>> groups of 144 blocks
>> - Group channel announcements that were created after this marker by
>> groups of 1 block
>> - For each group, sort and concatenate all channel announcements short
>> channel ids and hash the result (we could use sha256, or the first 16
>> bytes of the sha256 hash)
>>
>> The new `channel_announcement_filters` would then be a list of
>> (height, hash) pairs ordered by increasing heights.
>>
>> This implies that implementations can easily sort announcements by
>> short channel id, which should not be very difficult.
>> An additional step could be to send all short channel ids for all
>> groups for which the group hash did not match. Alternatively we could
>> use smarter filters.
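The bucketing and hashing steps described above could be sketched as follows (illustrative Python only; `short_channel_ids` is assumed to be an iterable of (funding block height, encoded short channel id) pairs, and the hash is truncated to the first 16 bytes of sha256 as suggested):

```python
import hashlib
from collections import defaultdict

def channel_announcement_filters(short_channel_ids, current_height):
    """Build the proposed list of (height, hash) pairs (sketch)."""
    # Marker height, rounded to a multiple of 144 to make sync easier.
    marker = 144 * ((current_height - 7 * 144) // 144)
    buckets = defaultdict(list)
    for height, scid in short_channel_ids:
        # Daily buckets before the marker, per-block buckets after it.
        key = 144 * (height // 144) if height < marker else height
        buckets[key].append(scid)
    filters = []
    for height in sorted(buckets):
        # Sort and concatenate the bucket's short channel ids, then hash.
        digest = hashlib.sha256(b"".join(sorted(buckets[height]))).digest()
        filters.append((height, digest[:16]))
    return filters
```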
>>
>> The use case we have in mind is mobile nodes, or more generally nodes
>> which are often offline and need to resync very often.
>>
>> Cheers,
>> Fabrice
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev