Joost Jager [ARCHIVE] on Nostr:
📅 Original date posted:2023-02-15
🗒️ Summary of this message: Lightning network performance depends on the type of payment flow, with fast payments favored by end-users and cheap fees for remittance payments. Adding latency as a criterion for pathfinding and using forward-error-correction codes on top of MPP are potential solutions. More liquidity may be used by routing nodes to serve tailored HTLC requests, but this should be rewarded by higher routing fees. Careful policy rule design and upgradeability are necessary to prevent higher fees for end-users.
📝 Original message:
>
> I think the performance question depends on the type of payment flows
> considered. If you're an end-user paying your local Starbucks for coffee,
> a fast payment sounds like the end goal. If you're making a remittance
> payment, cheap fees might be favored, and depending on those flows you're
> probably not going to select the same "performant" routing nodes. I think
> adding latency as a criterion for pathfinding has already been mentioned
> in the past for LDK [0].
>
My hope is that eventually lightning nodes can run so efficiently that in
practice there is no real trade-off anymore between cost and speed. But of
course it's hard to say how that's going to play out. I am all for adding
latency as an input to pathfinding. Attributable errors should help with
that too.
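To make that concrete, here is a minimal sketch of how latency could enter the pathfinding cost function, assuming a sender-chosen weight that converts delay into an equivalent fee. The `edge` type, `cost` function and the weight values are illustrative only, not an existing lnd or LDK API:

```go
package main

import (
	"fmt"
	"time"
)

// edge describes a candidate hop with its advertised fee and an estimated
// forwarding latency (for example derived from past attempts or, with
// attributable errors, from per-hop timing data).
type edge struct {
	feeMsat int64
	latency time.Duration
}

// cost folds latency into the fee dimension: latencyWeightMsatPerSec says
// how many msat of extra fee the sender would pay to avoid one second of
// delay.
func cost(e edge, latencyWeightMsatPerSec float64) float64 {
	return float64(e.feeMsat) + latencyWeightMsatPerSec*e.latency.Seconds()
}

func main() {
	cheapButSlow := edge{feeMsat: 100, latency: 3 * time.Second}
	fastButPricey := edge{feeMsat: 400, latency: 200 * time.Millisecond}

	// A coffee payment values speed highly; a remittance barely at all.
	for _, w := range []float64{0, 500} {
		fmt.Printf("weight=%.0f msat/s: slow=%.0f fast=%.0f\n",
			w, cost(cheapButSlow, w), cost(fastButPricey, w))
	}
}
```

With a weight of zero the cheap route wins, as a remittance sender would want; with a high weight the fast route wins, matching the coffee-shop case from the quote above.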
> Or there is the direction of building forward-error-correction codes on
> top of MPP, like in traditional networking [1]. The rough idea: you send
> more payment shards than the requested sum, and then you reveal the
> payment secrets to the receiver after an onion interactivity round to
> finalize the payment.
>
This is not very different from payment pre-probing, is it? So try a larger
set of possible routes simultaneously, and when one proves to be open, send
the real payment across that route. Of course a balance may have shifted in
the meantime, but that seems unlikely enough for the approach to remain
usable. The obvious downside is that the user needs more total liquidity to
have multiple HTLCs outstanding at the same time. Nevertheless an
interesting way to reduce payment latency.
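A rough sketch of what that pre-probing could look like on the sender side, assuming some `probeRoute` primitive (simulated below) that reports whether a candidate route can carry the amount. Everything here is a hypothetical illustration, not code from any node implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// probeRoute stands in for sending a probe (e.g. an HTLC with an unknown
// payment hash) along a candidate route; here it just simulates a result.
func probeRoute(ctx context.Context, route string) error {
	select {
	case <-time.After(time.Duration(rand.Intn(500)) * time.Millisecond):
		if rand.Intn(3) == 0 {
			return errors.New("no liquidity")
		}
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// firstOpenRoute probes all candidates in parallel and returns the first one
// that reports success. The trade-off from the discussion: every in-flight
// probe locks up liquidity (and HTLC slots) until it resolves.
func firstOpenRoute(ctx context.Context, routes []string) (string, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	results := make(chan string, len(routes))
	for _, r := range routes {
		go func(r string) {
			if probeRoute(ctx, r) == nil {
				results <- r
			}
		}(r)
	}

	select {
	case r := <-results:
		return r, nil
	case <-time.After(2 * time.Second):
		return "", errors.New("no route found")
	}
}

func main() {
	route, err := firstOpenRoute(context.Background(),
		[]string{"A->B->D", "A->C->D", "A->E->D"})
	fmt.Println(route, err)
	// The real payment would then be sent over `route`, accepting the small
	// risk that its balance shifted since the probe.
}
```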
> At the end of the day, we add more signalling channels between HTLC
> senders and the routing nodes offering capital liquidity. If the
> signalling mechanisms are efficient, I think they should lead to better
> allocation of that capital. So yes, I think more liquidity might be used
> by routing nodes to serve finely tailored HTLC requests from senders;
> however, this liquidity should be rewarded by higher routing fees.
>
This is indeed part of the idea. By signalling HA, you may not only attract
more traffic, but also be able to command a higher fee.
> I think if we have lessons to learn on policy rules design and deployment
> on the base layer (the full-rbf saga), it's to be careful with the initial
> set of rules and with how we ensure smooth upgradeability from one version
> to another. Otherwise the redeployment cost of moving to the new version
> might incentivize old routing nodes to stay on non-optimal versions, and
> since we have historical buckets in routing algorithms, or a preference
> for older channels, this might lead end-users to pay higher fees than
> they otherwise could.
>
I see the parallel, but it also seems that we have this situation already
today on lightning. Senders apply penalties and routing nodes need to make
assumptions about how they are penalised. Perhaps more explicit signalling
can actually help to reduce the uncertainty as to how a routing node is
supposed to perform to keep senders happy?
> This is where the open question lies for me - "highly available" can be
> defined in multiple senses, like fault tolerance, processing latency, or
> balanced liquidity. And a routing node might not be able to optimize its
> architecture for all of these end goals at once (e.g. more watchtowers on
> remote hosts probably increase processing latency).
>
Yes, good point. So maybe a few more bits to signal what a sender may
expect from a channel exactly?
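For instance, picking up the dimensions mentioned in the quote above, the signal could be a small set of flags. The names and bit positions below are purely hypothetical:

```go
package main

import "fmt"

// Hypothetical per-channel availability flags a sender could interpret; the
// actual meanings and bit assignments would be a spec decision.
const (
	signalFaultTolerant     uint8 = 1 << 0 // redundant infrastructure, low downtime
	signalLowLatency        uint8 = 1 << 1 // fast HTLC processing
	signalBalancedLiquidity uint8 = 1 << 2 // actively rebalanced in both directions
)

func main() {
	advertised := signalFaultTolerant | signalBalancedLiquidity
	fmt.Println("low latency promised:", advertised&signalLowLatency != 0)
}
```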
> > Without shadow channels, it is impossible to guarantee liquidity up to
> > the channel capacity. It might make sense for senders to only assume
> > high availability for amounts up to `htlc_maximum_msat`.
>
> As a note, I think the "sender's assumptions" should be well documented,
> otherwise there will be performance discrepancies between node
> implementations or even versions. E.g., an upgraded sender penalizing a
> node for the lack of shadow/parallel channels fulfilling HTLC amounts up
> to `htlc_maximum_msat`.
>
Well documented, or maybe even explicit in the name of the feature bit. For
example `htlc_max_guaranteed`.
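As a sketch of how a sender might act on such a bit, assuming a hypothetical feature bit number and the usual BOLT 9 bit layout; nothing here is an assigned spec value:

```go
package main

import "fmt"

// Hypothetical feature bit number for `htlc_max_guaranteed`; a real
// assignment would follow the usual BOLT 9 even/odd process.
const featureHTLCMaxGuaranteedOptional = 263

// hasFeature checks a BOLT 9 style feature vector (bit 0 is the least
// significant bit of the last byte) for a given bit.
func hasFeature(features []byte, bit int) bool {
	byteIdx := len(features) - 1 - bit/8
	if byteIdx < 0 {
		return false
	}
	return features[byteIdx]&(1<<(uint(bit)%8)) != 0
}

// assumedAvailableMsat is the amount a sender may optimistically assume is
// forwardable: up to htlc_maximum_msat if the peer signals the guarantee
// bit, otherwise nothing beyond what its own payment history suggests
// (here simply zero).
func assumedAvailableMsat(features []byte, htlcMaxMsat uint64) uint64 {
	if hasFeature(features, featureHTLCMaxGuaranteedOptional) {
		return htlcMaxMsat
	}
	return 0
}

func main() {
	features := make([]byte, 33)
	features[len(features)-1-263/8] |= 1 << (263 % 8)
	fmt.Println(assumedAvailableMsat(features, 5_000_000)) // 5000000
}
```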
> I think availability signalling should be explicit rather than implicit,
> even if it comes with more gossip bandwidth consumed. For bandwidth
> management, relying on new gossip messages that can be filtered according
> to the level of service required is interesting.
>
In terms of implementation, I think this kind of signalling is easier as an
extension of `channel_update`, but it can probably work as a separate
message too.
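A minimal sketch of the `channel_update` variant, assuming the availability signal is appended as an odd TLV record in the message's extension area; the type number and one-byte value layout are made up for illustration:

```go
package main

import (
	"bytes"
	"fmt"
)

// Hypothetical odd TLV type for an availability record appended to
// `channel_update`; odd so that nodes that don't understand it simply
// ignore it.
const tlvTypeAvailability = 0x47

// appendAvailabilityRecord writes a one-byte availability value in the
// usual type/length/value layout. Type and length each fit in a single
// BigSize byte here because both are below 0xfd.
func appendAvailabilityRecord(buf *bytes.Buffer, level uint8) {
	buf.WriteByte(tlvTypeAvailability) // type
	buf.WriteByte(1)                   // length
	buf.WriteByte(level)               // value, e.g. a set of availability flags
}

func main() {
	var extra bytes.Buffer
	appendAvailabilityRecord(&extra, 0x05)
	// These bytes would be carried after the defined channel_update fields.
	fmt.Printf("%x\n", extra.Bytes())
}
```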
Joost