Olaoluwa Osuntokun [ARCHIVE]

πŸ“… Original date posted:2022-07-01
πŸ“ Original message:
> That's not 100% reliable at all. How long do you want to wait for the
> new gossip?

So you know it's a new channel, with a new capacity (look at the on-chain
output), between the same parties (assuming people use that multisig
signal). If you attempt to route over it and have a stale policy, you'll
get the latest policy back. So it doesn't really matter how long you wait:
you aren't removing the channel from your graph, because you know it
didn't really close.
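
To make that concrete, here's a rough sketch of the classification in Go
(hypothetical types throughout, not lnd's or any other implementation's
actual API):

package main

import "fmt"

// ChannelEdge is a hypothetical view of one channel in the routing graph.
type ChannelEdge struct {
    FundingScript string // the 2-of-2 multisig script backing the channel
    Capacity      int64
    PolicyStale   bool
}

// SpendInfo is a hypothetical summary of the on-chain transaction that
// spent a channel's funding output.
type SpendInfo struct {
    NewFundingScript string // script of the possible replacement output
    NewCapacity      int64
    HasMultiSigOut   bool // spend re-creates a 2-of-2 style output
}

// onFundingSpent classifies a funding-output spend. If the spending tx
// re-creates a multisig output between (apparently) the same parties, we
// treat it as a splice: keep the edge, update its capacity, and mark the
// policy stale -- a later routing failure over the edge hands us the
// latest policy anyway.
func onFundingSpent(edge *ChannelEdge, spend SpendInfo) bool {
    if spend.HasMultiSigOut {
        edge.FundingScript = spend.NewFundingScript
        edge.Capacity = spend.NewCapacity
        edge.PolicyStale = true
        return true // splice: don't remove from the graph
    }
    return false // a real close: prune as usual
}

func main() {
    edge := &ChannelEdge{FundingScript: "old-2of2", Capacity: 1_000_000}
    spend := SpendInfo{
        NewFundingScript: "new-2of2",
        NewCapacity:      1_500_000,
        HasMultiSigOut:   true,
    }
    fmt.Println("splice?", onFundingSpent(edge, spend),
        "new capacity:", edge.Capacity)
}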

If you don't see a message after 2 weeks or whatever, then you mark it as
a zombie, just like any other channel.
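
The zombie sweep is the same bookkeeping as today; a sketch, again with
hypothetical types (the two-week horizon matches the usual zombie cutoff,
it isn't a spec'd constant):

package main

import (
    "fmt"
    "time"
)

// gossipDeadline is the grace period before a channel whose splice was
// seen on-chain, but whose new announcement never arrived, gets treated
// like any other zombie.
const gossipDeadline = 14 * 24 * time.Hour

// markZombies returns the channels we haven't heard gossip for within
// the deadline. lastGossip maps a (hypothetical) channel ID to the last
// time we saw gossip for it.
func markZombies(lastGossip map[uint64]time.Time, now time.Time) []uint64 {
    var zombies []uint64
    for chanID, seen := range lastGossip {
        if now.Sub(seen) > gossipDeadline {
            zombies = append(zombies, chanID)
        }
    }
    return zombies
}

func main() {
    now := time.Now()
    lastGossip := map[uint64]time.Time{
        1: now.Add(-15 * 24 * time.Hour), // stale: becomes a zombie
        2: now.Add(-1 * time.Hour),       // fresh: keep
    }
    fmt.Println("zombies:", markZombies(lastGossip, now))
}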

-- Laolu


On Wed, Jun 29, 2022 at 5:35 PM Rusty Russell <rusty at rustcorp.com.au> wrote:

> Olaoluwa Osuntokun <laolu32 at gmail.com> writes:
> > Hi Rusty,
> >
> > Thanks for the feedback!
> >
> >> This is over-design: if you fail to get reliable gossip, your routing
> >> will suffer anyway. Nothing new here.
> >
> > Idk, it's pretty simple: you're already watching for closes, so if a
> > close looks a certain way, it's a splice. When you see that, you can
> > even take note of the _new_ channel size (funds added/removed) and
> > update your pathfinding/blinded paths/hop hints accordingly.
>
> Why spam the chain?
>
> > If this is an over-designed solution, then I'd categorize _only_
> > waiting N blocks as wishful thinking, given we have effectively no
> > guarantees w.r.t. how long it'll take a message to propagate.
>
> Sure, it's a simplification of "wait 6 blocks plus 30 minutes".
>
> > If by routing you mean a sender, then imo still no: you don't
> > necessarily need _all_ gossip, just the latest policies of the nodes
> > you route most frequently to. On top of that, since you can get the
> > latest policy each time you incur a routing failure, as you make
> > payments you'll get the latest policies of the nodes you care about
> > over time. Also consider that you might fail to get "reliable" gossip
> > simply due to your peer neighborhood aggressively rate limiting gossip
> > (they only allow 1 update a day for a node, you updated your fee,
> > oops, no splice msg for you).
>
> There's no rate limiting on new channel announcements?
>
> > So it appears you agree that "wait N blocks before you close your
> > channels" isn't a foolproof solution? Why 12 blocks, why not 15? Or
> > 144?
>
> Because it's simple.
>
> > From my PoV, the whole point of even signalling that a splice is
> > ongoing is for the senders/receivers: they can continue to send/recv
> > payments over the channel while the splice is in process. It isn't
> > that a node isn't getting any gossip, it's that if the node fails to
> > obtain the gossip message within the N-block period of time, then the
> > channel has effectively closed from their PoV, and it may be an hour+
> > until it's seen as a usable (new) channel again.
>
> Sure. If you want to not forget channels at all on close, that works too.
>
> > If there isn't a 100% reliable way to signal that a splice is in
> > progress, then this disincentivizes its usage, as routers can lose out
> > on potential fee revenue, and senders/receivers may grow to favor only
> > very long-lived channels. IMO _only_ having a gossip message simply
> > isn't enough: there're no real guarantees w.r.t. _when_ all relevant
> > parties will get your gossip message. So why not give them a 100%
> > reliable on-chain signal that something is in progress here: stay
> > tuned for the gossip message, whenever you receive that.
>
> That's not 100% reliable at all. How long do you want to wait for the
> new gossip?
>
> Just treat every close as signalling "stay tuned for the gossip
> message". That's reliable. And simple.
>
> Cheers,
> Rusty.
>
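
For what it's worth, the policy-refresh-on-failure point quoted above is
easy to sketch as well (hypothetical types; a real sender would decode the
channel_update embedded in the onion failure message):

package main

import "fmt"

// ChannelPolicy is a hypothetical forwarding policy for one direction of
// a channel.
type ChannelPolicy struct {
    FeeBaseMsat   int64
    FeeRatePPM    int64
    TimelockDelta uint16
}

// RouteFailure is a hypothetical decoded routing failure. Fee-related
// failures carry the failing hop's latest policy, extracted from the
// channel_update the failure embeds.
type RouteFailure struct {
    ChanID       uint64
    LatestPolicy *ChannelPolicy // nil if the failure carried no update
}

// applyFailureUpdate patches the local graph with the policy a failure
// carried, so a sender converges on fresh policies for the nodes it
// actually routes through without needing every gossip message.
func applyFailureUpdate(graph map[uint64]ChannelPolicy, f RouteFailure) {
    if f.LatestPolicy != nil {
        graph[f.ChanID] = *f.LatestPolicy
    }
}

func main() {
    graph := map[uint64]ChannelPolicy{42: {FeeBaseMsat: 1000, FeeRatePPM: 1}}
    fail := RouteFailure{
        ChanID: 42,
        LatestPolicy: &ChannelPolicy{
            FeeBaseMsat: 1000, FeeRatePPM: 500, TimelockDelta: 40,
        },
    }
    applyFailureUpdate(graph, fail)
    fmt.Printf("updated policy for 42: %+v\n", graph[42])
}

Rusty's variant ("treat every close as 'stay tuned'") is the same
bookkeeping: set the pending state on every close rather than only on
multisig-looking ones, and let the zombie sweep above clean up whatever
never re-announces.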