Carsten Otto [ARCHIVE] on Nostr: 📅 Original date posted:2022-03-14 📝 Original message: Hi Rene, On Mon, Mar 14, ...
📅 Original date posted:2022-03-14
📝 Original message:
Hi Rene,
On Mon, Mar 14, 2022 at 12:56:24PM +0100, René Pickhardt via Lightning-dev wrote:
> > 1) What's the reasoning behind combining parallel channels?
> I think that should work, and when including fees in the cost
> function and considering how nodes handle routing requests on
> parallel channels, we might have to do so anyway.
I think lnd accepts the minimum fee of all parallel channels, even if
a higher fee rate is configured for the channel that is actually used.
I'm not too sure about this, though. Having parallel channels with
different fee settings seems weird to me, anyway.
> 1.1) A payment of size 2 needs to be split into 1+1 to fit through
> However I believe in practice one cannot just send a 2 satoshi onion and
> expect the routing node to split the amount correctly / accordingly
> between the two parallel channels. (I might be wrong here).
Exactly. This kind of split could be possible in theory, but at least
lnd doesn't do it. I guess there are lots of interesting questions to
answer before this becomes reality (channel jamming?).
> So in that case
> modelling and computing probabilities for parallel channels might be
> necessary anyway, though the math indicates that splitting liquidity into
> parallel channels will get you selected less frequently for routing.
Which is OK, as in reality it IS less likely to succeed.
> 1.2) The Mission Control information provided by lnd [...]
> I think you talk about a maximum available balance of a channel (and not
> min available balance)?
Yes, although MC also has information about "known" amounts (due to
failures that only happened further down the road).
> In the case of parallel channels I am not even sure if such information is
> accurate as it is my understanding that the routing node may decide to use
> the parallel channel to forward the amount even though the other channel
> was specified in the onion.
The routing node is free to pick any of the parallel channels, yes. The
MC data only reasons about pairs of nodes, though, not individual
channels.
> Assuming that routing nodes indeed do so we would have learnt that neither
> channel has an effective capacity of N. So the combined virtual channel
> could be seen as 2N-1.
You mean 2(N-1) = 2N-2?
> However if routing nodes don't locally split a
> forwarding request across both channels we would know that calculating
> with 2N-1 is bad, as a request of N could not be fulfilled.
Exactly, also for 2N-2. Only N-1 would be a reasonable assumption.
Based on your responses I'll treat parallel channels individually, and
see how it works out.
> > 3) Size of Piecewise Linearization
> The main difference here is that a channel of 1 BTC is highly preferable
> from a probabilistic payment delivery perspective over a channel of 0.01
> BTC. Even approximating the 1 BTC channel with 1000 intervals of 0.001 BTC
> should still have a lower unit cost in all pieces of the first 0.01 BTC of
> the liquidity than the first piece of the 0.01 BTC channel. So I think
> splitting all channels into the same number of pieces is pretty well
> motivated, but let me elaborate on this:
OK great, got it.
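To convince myself, here is a small numeric check (my own sketch; it assumes the uniform-prior success probability P(x) = (c+1-x)/(c+1) from your cost function, and the function names are made up):

```python
import math

def neg_log_success(x, c):
    """Negative log success probability -log((c + 1 - x) / (c + 1)),
    i.e. a uniform prior on the channel's liquidity."""
    return -math.log((c + 1 - x) / (c + 1))

def segment_unit_cost(c, n_pieces, i):
    """Per-sat slope of piece i (0-based) when [0, c] is split into
    n_pieces equal segments."""
    w = c / n_pieces
    return (neg_log_success((i + 1) * w, c) - neg_log_success(i * w, c)) / w

BIG = 100_000_000    # 1 BTC channel, in sat
SMALL = 1_000_000    # 0.01 BTC channel, in sat

# Most expensive (10th) of the 1000 pieces of the 1 BTC channel that
# together cover its first 0.01 BTC:
big_piece_10 = segment_unit_cost(BIG, 1000, 9)
# First of 5 pieces of the 0.01 BTC channel:
small_piece_1 = segment_unit_cost(SMALL, 5, 0)

print(big_piece_10 < small_piece_1)  # prints True
```

So the 1 BTC channel is indeed cheaper per sat throughout its first 0.01 BTC, as you say.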
> > 4) Leftovers after Piecewise Linearization
> I am not sure if I understand your question / issue here. The splitting
> works by selecting N points on the domain of the function and splitting the
> domain into segments at those points. This should never leave sats over.
With a quantization of 10,000 a channel of size 123,456 ends up as an arc
with a capacity of 12 units. Cutting this into 5 pieces gives us
5 pieces of 2 units each, with 2 units not ending up in any of those
pieces. Or am I missing something here, and we should split into 5 pieces
of size 2.4 = 12/5?
> If the quantization however makes a channel so small that we cannot
> even create 5 (or N) disjoint segments then I guess the likelihood for
> being included into the final result is too small anyway.
It may not be very likely, but flat-out ignoring 20k sat (in my
contrived example above) or up to 4*quantization sats (which is the case
you described) doesn't feel right.
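For reference, the leftover in my contrived example falls out of plain integer division (just an illustration):

```python
QUANTIZATION = 10_000
N = 5  # number of linearization pieces

capacity_sat = 123_456
units = capacity_sat // QUANTIZATION    # 12 units (3,456 sat already lost to quantization)
piece = units // N                      # 2 units per piece
leftover_units = units - N * piece      # 2 units = 20,000 sat in no piece

print(units, piece, leftover_units)  # prints: 12 2 2
```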
> Again this yields interesting pruning opportunities to reduce the size of
> the network before doing the expensive min cost flow computation. For
> example I could prune channels with high unit costs on the first segment,
> especially if they are further away from the source and destination node.
> This would overall reduce the size of the graph and improve runtime.
Let's talk about optimizations later :)
> 5) Fees (and other properties?)
> arcs.append((src, dest, int(cap/(N*QUANTIZATION)),
>              (i+1)*unit_cost + mu*fee_rate_ppm))
Great, that helps! Thanks a lot!
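Expanded into a self-contained sketch, your line would look something like this (variable names follow your snippet; the surrounding scaffolding and the concrete numbers are my own assumptions):

```python
def linearized_arcs(src, dest, cap, fee_rate_ppm,
                    unit_cost, N=5, QUANTIZATION=10_000, mu=10):
    """One arc per linearization piece of a single channel.

    The probability cost grows linearly with the piece index i
    ((i+1) * unit_cost), and the fee term mu * fee_rate_ppm is added to
    every piece. All capacities and costs stay integral, as the solver
    requires.
    """
    arcs = []
    for i in range(N):
        arcs.append((src, dest,
                     cap // (N * QUANTIZATION),              # capacity per piece, in units
                     (i + 1) * unit_cost + mu * fee_rate_ppm))
    return arcs

arcs = linearized_arcs("A", "B", cap=123_456, fee_rate_ppm=100, unit_cost=3)
print(len(arcs), arcs[0])  # prints: 5 ('A', 'B', 2, 1003)
```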
> Note two things:
> 1. The only requirement for the solver to work is that \mu*fee_rate_ppm
> needs to be an integer. So in case \mu was smaller than 1 we could also
> scale the term from the linearized log probabilities, by putting a larger
> mu on the term arising from the cost of the uncertainty.
Good to know!
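If I understand the trick correctly: for a fractional \mu one multiplies *both* cost terms by \mu's denominator, which keeps everything integral without changing the optimal flow (my own sketch; Fraction is only used to make the scaling explicit):

```python
from fractions import Fraction

unit_cost = 7        # integer unit cost from the linearized log probability
fee_rate_ppm = 100
mu = Fraction(1, 10)  # desired fractional fee weight

# Scaling all arc costs by a common positive factor leaves the optimal
# flow unchanged, so scale by mu's denominator to stay integral:
scale = mu.denominator
prob_weight = scale               # 10, multiplies the probability term
fee_weight = int(mu * scale)      # 1, multiplies fee_rate_ppm

i = 0  # first piece
cost = prob_weight * (i + 1) * unit_cost + fee_weight * fee_rate_ppm
print(cost)  # prints: 170
```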
Bye,
Carsten
--
Dr. Carsten Otto
carsten at c-otto.de
https://c-otto.de