Antoine Riard [ARCHIVE] /
npub1vjz…x8dd
2023-06-09 13:12:42
in reply to nevent1q…rx8t


📅 Original date posted:2023-02-14
🗒️ Summary of this message: The Lightning Network may require routing nodes to operate flawlessly or face penalties, potentially impacting future routing revenue. Gradual implementation of strict penalties may be necessary for routing nodes to adapt.
📝 Original message:
Hi Joost,

> For a long time I've held the expectation that eventually payers on the
> lightning network will become very strict about node performance. That they
> will require a routing node to operate flawlessly or else apply a hefty
> penalty such as completely avoiding the node for an extended period of time
> - multiple weeks. The consequence of this is that routing nodes would need
> to manage their liquidity meticulously because every failure potentially
> has a large impact on future routing revenue.

I think the performance question depends on the type of payment flows
considered. If you're an end-user sending a payment to your local Starbucks
for coffee, fast payment sounds like the end-goal. If you're doing a
remittance payment, cheap fees might be favored instead, and depending on
those flows you're probably not going to select the same "performant"
routing nodes. Adding latency as a criterion for pathfinding has already
been mentioned in the past for LDK [0].
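
To make the trade-off concrete, here is a minimal, purely illustrative
sketch of latency-aware route selection. Nothing below comes from an actual
implementation: the route fields, the latency estimates, and the idea of a
per-payer latency weight (expressed in msat per ms) are all assumptions for
the sake of the example.

```python
# Hypothetical sketch: scoring candidate routes by fee plus a latency
# penalty. A coffee payer might set the weight high; a remittance payer
# might set it near zero. All names and numbers are illustrative.

def route_cost(fee_msat, latency_ms, latency_weight_msat_per_ms):
    """Combined cost: routing fee plus a latency penalty expressed in msat."""
    return fee_msat + latency_weight_msat_per_ms * latency_ms

def pick_route(routes, latency_weight_msat_per_ms):
    """Select the candidate route with the lowest combined cost."""
    return min(routes, key=lambda r: route_cost(
        r["fee_msat"], r["latency_ms"], latency_weight_msat_per_ms))

routes = [
    {"id": "cheap-slow", "fee_msat": 100, "latency_ms": 2000},
    {"id": "dear-fast", "fee_msat": 1000, "latency_ms": 100},
]

# A latency-sensitive payer (weight 1 msat/ms) prefers the fast route,
# while a fee-sensitive payer (weight 0) prefers the cheap one.
assert pick_route(routes, 1)["id"] == "dear-fast"   # 1100 vs 2100
assert pick_route(routes, 0)["id"] == "cheap-slow"  # 1000 vs 100
```

The point is only that "performant" is payer-relative: the same graph
yields different best routes under different weights.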

> I think movement in this direction is important to guarantee
> competitiveness with centralised payment systems and their (at least
> theoretical) ability to process a payment in the blink of an eye. A
> lightning wallet trying multiple paths to find one that works doesn't help
> with this.

Or there is the direction of building a forward-error-correction code on
top of MPP, as in traditional networking [1]. The rough idea: you send more
payment shards than the requested sum, and then you reveal the payment
secrets to the receiver after an onion interactivity round to finalize the
payment.
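
A toy sketch of that redundancy idea, with no claim to match any proposed
protocol: the shard size, the redundancy count, and the completion rule
below are all assumptions chosen for illustration.

```python
# Illustrative sketch (not a protocol spec): redundant multi-part payments,
# loosely analogous to forward error correction. The sender splits the
# amount into k shards but dispatches n >= k, so the payment can complete
# as soon as any k shards arrive; the surplus shards are cancelled when the
# sender finalizes.

def shards_needed(amount_msat, shard_msat):
    """Number of shards k required to carry the requested amount."""
    return -(-amount_msat // shard_msat)  # ceiling division

def payment_completes(k, arrived_shards):
    """The payment can finalize once any k of the n shards have arrived."""
    return len(arrived_shards) >= k

k = shards_needed(1_000_000, 250_000)  # 4 shards of 250k msat each
n = k + 2                              # dispatch 2 redundant shards

# Even if two of the six shards fail in flight, the payment completes.
arrived = ["s1", "s2", "s4", "s6"]
assert k == 4 and payment_completes(k, arrived)
```

The trade-off mirrors FEC in networking: extra in-flight liquidity is the
price paid for not having to retry sequentially after a failure.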

> A common argument against strict penalisation is that it would lead to
> less efficient use of capital. Routing nodes would need to maintain pools
> of liquidity to guarantee successes all the time. My opinion on this is
> that lightning is already enormously capital efficient at scale and that
> it is worth sacrificing a slight part of that efficiency to also achieve
> the lowest possible latency.

At the end of the day, we are adding more signaling channels between HTLC
senders and the routing nodes offering liquidity; if the signaling
mechanisms are efficient, they should lead to a better allocation of
capital. So yes, more liquidity might be used by routing nodes to serve
finely tailored HTLC requests from senders, however that liquidity should
be rewarded by higher routing fees.

> This brings me to the actual subject of this post. Assuming strict
> penalisation is good, it may still not be ideal to flip the switch from
> one day to the other. Routing nodes may not offer the required level of
> service yet, causing senders to end up with no nodes to choose from.

> One option is to gradually increase the strength of the penalties, so
> that routing nodes are given time to adapt to the new standards. This does
> require everyone to move along and leaves no space for cheap routing
> nodes with less leeway in terms of liquidity.

If we have a lesson to learn from policy-rule design and deployment on the
base layer (the full-rbf saga), it's to be careful with the initial set of
rules and with how we ensure smooth upgradeability from one version to the
next. Otherwise, the re-deployment cost of moving to the new version might
incentivize old routing nodes to stay on non-optimal versions, and since
routing algorithms keep historical buckets, or a preference for older
channels, this might lead end-users to pay higher fees than they could
otherwise access.

> Therefore I am proposing another way to go about it: extend the
> `channel_update` field `channel_flags` with a new bit that the sender can
> use to signal `highly_available`.

> It's then up to payers to decide how to interpret this flag. One way
> could be to prefer `highly_available` channels during pathfinding. But if
> the routing node then returns a failure, a much stronger than normal
> penalty will be applied. For routing nodes this creates an opportunity to
> attract more traffic by marking some channels as `highly_available`, but
> it also comes with the responsibility to deliver.

This is where the open question lies for me - "highly available" can be
defined in multiple senses, like fault-tolerance, processing latency, or
equilibrated liquidity. And a routing node might not be able to optimize
its architecture for all of those end-goals at once (e.g. more watchtowers
on remote hosts probably increase processing latency).
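
For illustration, here is one possible payer-side reading of the flag as
proposed above: prefer `highly_available` channels slightly, but punish
their failures far harder. The preference multiplier and the avoidance
windows are invented numbers, not values from any specification or
implementation.

```python
# Hypothetical payer-side policy for a highly_available flag: channels that
# advertise the flag get a small preference in pathfinding, but a failure
# on such a channel triggers a much longer avoidance window. All constants
# below are illustrative assumptions.

PREFERENCE_FACTOR = 0.9              # slight cost discount for HA channels
NORMAL_AVOID_SECS = 60 * 60          # avoid a failed channel for 1 hour
HA_AVOID_SECS = 60 * 60 * 24 * 21    # a failed HA channel: ~3 weeks

def effective_cost(base_cost_msat, highly_available):
    """Pathfinding cost after applying the HA preference."""
    return base_cost_msat * (PREFERENCE_FACTOR if highly_available else 1.0)

def avoidance_window_secs(highly_available):
    """How long to avoid a channel after it fails a forward."""
    return HA_AVOID_SECS if highly_available else NORMAL_AVOID_SECS

# Preference while things work, much harsher penalty when they don't.
assert effective_cost(1000, True) < effective_cost(1000, False)
assert avoidance_window_secs(True) > avoidance_window_secs(False)
```

Which definition of "available" those constants actually reward is exactly
the open question: the same policy cannot distinguish a latency-optimized
node from a liquidity-optimized one.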

> Without shadow channels, it is impossible to guarantee liquidity up to
> the channel capacity. It might make sense for senders to only assume high
> availability for amounts up to `htlc_maximum_msat`.

As a note, I think such "sender assumptions" should be well-documented,
otherwise there will be performance discrepancies between node
implementations or even versions - e.g. an upgraded sender penalizing a
node for lacking shadow/parallel channels able to fulfill HTLC amounts up
to `htlc_maximum_msat`.

> A variation on this scheme that requires no extension of `channel_update`
> is to signal availability implicitly through routing fees. So the more
> expensive a channel is, the stronger the penalty that is applied on
> failure will be. It seems less ideal though, because it could
> disincentivize cheap but reliable channels on high traffic links.

> The effort required to implement some form of a `highly_available` flag
> seem limited and it may help to get payment success rates up. Interested
> to hear your thoughts.

I think availability signaling should be explicit rather than implicit,
even if it comes at the cost of more gossip bandwidth. For bandwidth
management, relying on new gossip messages that can be filtered according
to the level of service required sounds interesting to me.
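
A rough sketch of what such explicit, filterable signaling could look like
on the payer side. The bit position and the message shape are made up for
illustration; the actual `channel_flags` layout is defined in BOLT 7, and
no such service bit exists today.

```python
# Hypothetical payer-side gossip filter: keep only channel updates whose
# flags advertise every service bit the payer requires. The bit value and
# update structure are illustrative assumptions, not part of BOLT 7.

HIGHLY_AVAILABLE_BIT = 1 << 2  # hypothetical service bit

def filter_updates(updates, required_flags):
    """Keep only channel updates whose flags include every required bit."""
    return [u for u in updates
            if u["channel_flags"] & required_flags == required_flags]

updates = [
    {"scid": "700000x1x0", "channel_flags": HIGHLY_AVAILABLE_BIT},
    {"scid": "700000x2x0", "channel_flags": 0},
]

kept = filter_updates(updates, HIGHLY_AVAILABLE_BIT)
assert [u["scid"] for u in kept] == ["700000x1x0"]
```

Filtering at the gossip layer like this is what would let
bandwidth-constrained payers subscribe only to the service level they care
about.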

Best,
Antoine

[0] https://github.com/lightningdevkit/rust-lightning/issues/1647
[1] https://www.rfc-editor.org/rfc/rfc6363.html




On Mon, Feb 13, 2023 at 11:46 AM Joost Jager <joost.jager at gmail.com> wrote:

> Hi,
>
> For a long time I've held the expectation that eventually payers on the
> lightning network will become very strict about node performance. That they
> will require a routing node to operate flawlessly or else apply a hefty
> penalty such as completely avoiding the node for an extended period of time
> - multiple weeks. The consequence of this is that routing nodes would need
> to manage their liquidity meticulously because every failure potentially
> has a large impact on future routing revenue.
>
> I think movement in this direction is important to guarantee
> competitiveness with centralised payment systems and their (at least
> theoretical) ability to process a payment in the blink of an eye. A
> lightning wallet trying multiple paths to find one that works doesn't help
> with this.
>
> A common argument against strict penalisation is that it would lead to
> less efficient use of capital. Routing nodes would need to maintain pools
> of liquidity to guarantee successes all the time. My opinion on this is
> that lightning is already enormously capital efficient at scale and that it
> is worth sacrificing a slight part of that efficiency to also achieve the
> lowest possible latency.
>
> This brings me to the actual subject of this post. Assuming strict
> penalisation is good, it may still not be ideal to flip the switch from one
> day to the other. Routing nodes may not offer the required level of service
> yet, causing senders to end up with no nodes to choose from.
>
> One option is to gradually increase the strength of the penalties, so that
> routing nodes are given time to adapt to the new standards. This does
> require everyone to move along and leaves no space for cheap routing nodes
> with less leeway in terms of liquidity.
>
> Therefore I am proposing another way to go about it: extend the
> `channel_update` field `channel_flags` with a new bit that the sender can
> use to signal `highly_available`.
>
> It's then up to payers to decide how to interpret this flag. One way could
> be to prefer `highly_available` channels during pathfinding. But if the
> routing node then returns a failure, a much stronger than normal penalty
> will be applied. For routing nodes this creates an opportunity to attract
> more traffic by marking some channels as `highly_available`, but it also
> comes with the responsibility to deliver.
>
> Without shadow channels, it is impossible to guarantee liquidity up to the
> channel capacity. It might make sense for senders to only assume high
> availability for amounts up to `htlc_maximum_msat`.
>
> A variation on this scheme that requires no extension of `channel_update`
> is to signal availability implicitly through routing fees. So the more
> expensive a channel is, the stronger the penalty that is applied on failure
> will be. It seems less ideal though, because it could disincentivize cheap
> but reliable channels on high traffic links.
>
> The effort required to implement some form of a `highly_available` flag
> seem limited and it may help to get payment success rates up. Interested to
> hear your thoughts.
>
> Joost
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
Author Public Key
npub1vjzmc45k8dgujppapp2ue20h3l9apnsntgv4c0ukncvv549q64gsz4x8dd