ZmnSCPxj [ARCHIVE] on Nostr:
Original date posted: 2022-06-29
Original message:
Good morning aj,
> On Sun, Jun 05, 2022 at 02:29:28PM +0000, ZmnSCPxj via Lightning-dev wrote:
>
> Just sharing my thoughts on this.
>
> > Introduction
> > ============
> >          Optimize for reliability+
> >        uncertainty+fee+drain+uptime...
> >                .--~~--.
> >               /        \
> >              /          \
> >             /            \
> >            /              \
> >           /                \
> >        --'                  `--
> > Just                            Just
> > optimize                        optimize
> > for                             for
> > low fee                         low fee
>
>
> I think ideally you want to optimise for some combination of fee, speed
> and reliability (both likelihood of a clean failure that you can retry
> and of generating stuck payments). As Matt/Peter suggest in another
> thread, maybe for some uses you can accept low speed for low fees,
> while in others you'd rather pay more and get near-instant results. I
> think drain should just go to fee, and uncertainty/uptime are just ways
> of estimating reliability.
>
> It might be reasonable to generate local estimates for speed/reliability
> by regularly sending onion messages or designed-to-fail htlcs.
>
> Sorry if that makes me a midwit :)
Actually, feerate cards help with this; it just requires an economic insight to translate probability-of-success into an actual cost that the payer incurs.
> > ### Inverting The Filter: Feerate Cards
> > Basically, a feerate card is a mapping between a probability-of-success range and a feerate.
> > * 00%->25%: -10ppm
> > * 26%->50%: 1ppm
> > * 51%->75%: 5ppm
> > * 76%->100%: 50ppm
>
>
> Feerate cards don't really make sense to me; "probability of success"
> isn't a real measure the payer can use -- naively, if it were, they could
> just retry at 1ppm 10 times and get to 95% chances of success. But if
> they can afford to retry (background rebalancing?), they might as well
> just try at -10ppm, 1ppm, 5ppm, 10ppm (or perhaps with a binary search?),
> and see if they're lucky; but if they want a 1s response time, and can't
> afford retries, what good is even a 75% chance of success if that's the
> individual success rate on each hop of their five hop path?
The economic insight here is this:
* The payer wants to pay because it values a service / product more highly than the sats it is spending.
* There is a subjective difference in value between the service / product being bought and the amount to be spent.
* In short, if the payment succeeds and the service / product is acquired, then the payer perceives itself as richer (increased utilons) by that subjective difference.
* If payment fails, then the payer incurs an opportunity cost, as it is unable to realize the difference in subjective value between the service / product and the sats being spent.
* Thus, the subjective difference in value between the service / product being bought and the sats to be paid is the cost of payment failure.
* That difference in value is the "fee budget" that Lightning Network payment algorithms all require as an argument.
* If the total LN fee is greater than the fee budget, the payment algorithm rejects that path outright.
* If the total LN fee is greater than the subjective difference in value between the service / product being bought and the amount to be delivered at the destination, then the payer gets negative utility and would prefer not to pay at all, which is exactly what the payment algorithm does: it rejects such paths.
Therefore the fee budget is the cost of failure.
We can now use the left-hand side of the feerate card table: for each entry, multiply `100% - middle_probability_of_success` (i.e. the probability of failure, taking the midpoint of the entry's range) by the fee budget (i.e. the cost of failure) to get the expected cost-of-failure for that entry.
We then evaluate the feerate card by adding this expected cost-of-failure to each entry's feerate and picking the entry with the lowest total cost.
This total is then treated as an ordinary fee by payment algorithms, translating the problem back down to "just optimize for low fee"; a sketch of the evaluation follows.
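A minimal sketch of that evaluation in Python (the midpoint rule and the fee budgets below are illustrative assumptions on my part, not part of any proposal):

    # Hypothetical sketch: pick the feerate-card entry with the lowest
    # expected total cost. The card is the example card from above.
    def best_entry(card, fee_budget_ppm):
        # card: list of ((lo, hi) success-probability range, feerate_ppm)
        def total_cost(entry):
            (lo, hi), feerate_ppm = entry
            p_failure = 1.0 - (lo + hi) / 2.0  # midpoint of the range
            # offered feerate plus probability-weighted cost of failure
            return feerate_ppm + p_failure * fee_budget_ppm
        return min(card, key=total_cost)

    card = [((0.00, 0.25), -10),
            ((0.26, 0.50),   1),
            ((0.51, 0.75),   5),
            ((0.76, 1.00),  50)]

    # A rebalancer with a tiny fee budget gets the cheap, unreliable entry:
    print(best_entry(card, fee_budget_ppm=20))    # ((0.0, 0.25), -10)
    # A FOMOed buyer with a big fee budget gets the reliable, expensive entry:
    print(best_entry(card, fee_budget_ppm=1000))  # ((0.76, 1.0), 50)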
If the above logic seems dubious, consider this:
* Nodes utilizing wall strategies and doing lots of rebalancing put low limits on the fee budget of their rebalancing payments.
* These nodes are willing to try lots of possible routes, hoping to nab the liquidity of a low-fee node on the cheap in order to resell it later.
* i.e. those nodes are fine with taking a long time to successfully route a payment from themselves to themselves; they absolutely insist on low fees or else they will not earn anything.
* Such nodes are fine with low probability of success.
* Being fine with low probability of success means the fee budget (cost of failure) is small, so the left-hand side of the feerate card carries little weight and such nodes will tend to get the low-probability-of-success entries.
* Buyers getting FOMOed into buying some neat new widget want to get their grubby hands on the widget ASAP.
* These nodes are willing to pay a premium to get the neat new widget RIGHT NOW.
* i.e. these nodes will be willing to provide a higher fee budget.
* Being fine with a higher fee budget means the cost-of-failure term is larger, so the left-hand side of the feerate card carries more weight and such nodes will tend to get the high-probability-of-success entries.
Thus feerate cards may very well unify a fair amount of the concerns we have.
All costs are economic costs.
> And if you're not just going by odds of having to retry, then you need to
> get some current information about the channel to plug into the formula;
> but if you're getting current information, why not let that information
> be the feerate directly?
>
> > More concretely, we set some high feerate, impose some kind of constant "gravity" that pulls down the feerate over time, then we measure the relative loss of outgoing liquidity to serve as "lift" to the feerate.
>
>
> If your current fee rate is F (ppm), and your current volume (flow) is V
> (sats forwarded per hour), then your profit is FV. If dropping your fee
> rate by dF (<0) results in an increase of V by dV (>0), then you want:
>
>
> (F+dF)(V+dV) > FV
>
> FV + VdF + FdV + dFdV > FV
>
> FdV > -VdF
>
> dV/dF < -V/F (flip the inequality because dF is negative)
>
> (dV/V)/(dF/F) < -1 (fee-elasticity of volume is in the elastic
> region)
>
> (<-1 == elastic == flow changes more than the fee does == drop the fee
> rate; >-1 == inelastic == flow changes less than the fee does == raise
> the fee rate; =-1 == unit elastic == you've found a locally optimal
> fee rate)
Thank you for the math!
I was going to heuristic it and cross my fingers, but this is probably a better framework.
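A minimal sketch of how a node might apply it (the 5% step size and the flow-measurement scheme are my own assumptions, not something you specified):

    # Hypothetical elasticity-driven feerate update; constants are assumptions.
    def next_feerate(old_fee, new_fee, old_flow, new_flow):
        # old_flow/new_flow: sats forwarded per hour at old_fee/new_fee (ppm)
        d_fee = new_fee - old_fee
        if d_fee == 0 or old_flow == 0 or old_fee == 0:
            return new_fee
        elasticity = ((new_flow - old_flow) / old_flow) / (d_fee / old_fee)
        if elasticity < -1:   # elastic: flow moves more than fee; drop the fee
            return new_fee * 0.95
        if elasticity > -1:   # inelastic: flow moves less than fee; raise the fee
            return new_fee * 1.05
        return new_fee        # unit elastic: locally optimal feerate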
> You could optimise base fee in the same way, if you set F to be sats/tx
> and V to be txs/hour, but then you're trying to optimise two variables
> on a 2 dimensional plane, which is harder. So probably better to do
> zero base fees and just set it to 0 and ignore it, or use your actual
> computation costs -- perhaps about 20msat if you're paying $100USD/month
> for your lightning node, a channel update takes 10ms, each forwarded HTLC
> accounts for 4 updates, 2 on the incoming channel, 2 on the outgoing,
> with no batching, and only 40% of payments are successful, at $20k/BTC.
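(As an aside, that ~20msat figure checks out; a quick back-of-envelope under exactly the stated assumptions:

    # Back-of-envelope check of the ~20msat figure quoted above.
    node_cost_usd = 100                               # per month
    htlc_time_s = 0.010 * 4                           # 10ms/update, 4 updates/HTLC
    htlcs_per_month = 30 * 24 * 3600 / htlc_time_s    # ~64.8 million
    usd_per_success = node_cost_usd / htlcs_per_month / 0.40  # 40% succeed
    msat = usd_per_success / 20_000 * 100_000_000_000 # USD -> BTC -> msat
    print(round(msat, 1))                             # ~19.3, i.e. about 20msat

)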
>
> It's likely more important to have balanced flows than maximally
> profitable ones though, as that's what allows you to keep your channel
> open. That's probably pretty hard to optimise, since a changed fee on
> one channel will affect the volume on other channels as well.
But if you have balanced flows, then in the steady state your channel balance will hover around some constant level.
Thus, heuristics that steer your channel balance toward a constant 50% will work well enough to get you balanced flows; a small sketch follows.
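A minimal sketch of such a heuristic (the linear scaling is purely my own assumption):

    # Hypothetical 50%-target heuristic: scale the outgoing feerate by how
    # depleted the local side of the channel is.
    def balance_feerate(base_ppm, local_sat, capacity_sat):
        ratio = local_sat / capacity_sat   # 0.5 == perfectly balanced
        # depleted local side (< 0.5) -> charge more to slow the drain;
        # overfull local side (> 0.5) -> charge less to attract flow out.
        return base_ppm * 2.0 * (1.0 - ratio)

    print(balance_feerate(100, 250_000, 1_000_000))  # depleted: 150.0 ppm
    print(balance_feerate(100, 500_000, 1_000_000))  # balanced: 100.0 ppm
    print(balance_feerate(100, 750_000, 1_000_000))  # overfull:  50.0 ppm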
There is also the unfortunate fact that lots of nodes are badly managed and apparently do not periodically send out their funds, instead accumulating them on the LN.
Handling those nodes is what the rebalancing heuristics utilized by both passive rebalancers and walls are fixing.
> Relatedly:
>
> > I want to propose that all published nodes support some kind of
> > onchain/offchain swap capability.
>
>
> If you're running a forwarding node, and collecting fees for forwarding,
> considered in net your channels won't be balanced: the fees you collect
> are all coming in, and there's nothing to compensate for that. Having some
> way to send those fees "out" is necessary to keep your channels balanced
> and avoid the need to have to close them. Having a swap capability like
> this is perhaps a relatively easy way to be able to (automatically)
> fix imbalances caused by collecting fees, and thus preserve your older
> channels.
Yes, people need to run more swap nodes, not more LSPs.
Regards,
ZmnSCPxj