Thomas HUET [ARCHIVE] on Nostr:
📅 Original date posted:2022-11-03
📝 Original message:
Hi Joost,
This is a very interesting proposal that elegantly solves the problem, albeit
with a very significant size increase. I can see two ways to keep the size
small:
- Each node just adds its hmac in a naive way, without deleting any part of
the message it relays (see the sketch after this list). You seem to have
disqualified this option because it increases the size of the relayed
message, but I think it merits more consideration. It is much simpler, and
the size only grows linearly with the length of the route. An intermediate
node could try to infer its position relative to the failing node (which
should not be the recipient), but without knowing the original message size
(which can easily be randomized by the final node), is that really such a
problem? It may be, but I would argue it's a good trade-off.
- If we really want to keep the constant-size property, as you've suggested
we could use a low limit on the number of nodes. I would put the limit even
lower, at 8 or fewer. We could still use longer routes, but we would only
get hmacs for the first 8 hops and revert to the legacy system if the
failure happens after the first 8 hops (the fixed-slot variant in the
sketch below). That way we keep the size low, 8 hops should be good enough
for 99% of payments, and even when there are more hops we would know that
the first 7 hops are clean.
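
To make the comparison concrete, here is a rough Go sketch of both variants
(hypothetical helpers of my own, not taken from any implementation): the
naive variant grows the failure message by 32 bytes per relaying hop, while
the fixed-slot variant keeps it constant-size by shifting hmacs through 8
slots.

    package failuremsg

    import (
        "crypto/hmac"
        "crypto/sha256"
    )

    const maxHops = 8 // slot count for the constant-size variant

    // naiveAppend is the linear-growth variant: each relaying node
    // appends its own hmac over the message it received, so the
    // failure message grows by 32 bytes per hop.
    func naiveAppend(sharedSecret, msg []byte) []byte {
        mac := hmac.New(sha256.New, sharedSecret)
        mac.Write(msg)
        return append(msg, mac.Sum(nil)...)
    }

    // shiftSlots is the constant-size variant: the message carries
    // exactly maxHops hmac slots. Each node shifts the slots back by
    // one, dropping the oldest, and writes its own hmac into slot 0.
    // Hops beyond maxHops get no slot, so a failure past that point
    // falls back to the legacy scheme. (A real scheme would also
    // commit to the downstream slots; omitted here for brevity.)
    func shiftSlots(sharedSecret, msg []byte, slots *[maxHops][32]byte) {
        copy(slots[1:], slots[:maxHops-1])
        mac := hmac.New(sha256.New, sharedSecret)
        mac.Write(msg)
        copy(slots[0][:], mac.Sum(nil))
    }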
Thanks again for your contribution, I hope we'll soon be able to attribute
failures trustlessly.
Thomas
On Tue, Nov 1, 2022 at 10:10 PM Joost Jager <joost.jager at gmail.com> wrote:
> Hey Rusty,
>
> Great to hear that you want to try to implement the proposal. I can polish
> my golang proof of concept code a bit and share it if that's useful? It's
> just doing the calculation in isolation. My next step after that would be
> to see what it looks like integrated into lnd.
>
> 16 hops sounds fine to me too, but in general I am not too concerned about
> the size of the message. Maybe a scheme is possible where the sender
> signals the max number of hops, trading off size against privacy. Probably
> an unnecessary complication though.
>
> I remember the prepay scheme, but it sounds quite a bit more invasive than
> just touching encode/relay/decode of the failure message. You also won't
> have the timing information to identify slow nodes on the path.
>
> Joost.
>
> On Tue, Oct 25, 2022 at 9:58 PM Rusty Russell <rusty at rustcorp.com.au>
> wrote:
>
>> Joost Jager <joost.jager at gmail.com> writes:
>> > Hi list,
>> >
>> > I wanted to get back to a long-standing issue in Lightning: gaps in
>> error
>> > attribution. I've posted about this before back in 2019 [1].
>>
>> Hi Joost!
>>
>> Thanks for writing this up fully. Core Lightning also doesn't
>> penalize properly, because of the attribution problem: solving this lets
>> us penalize a channel, at least.
>>
>> I want to implement this too, to make sure I understand it
>> correctly, but having read it twice it seems reasonable.
>>
>> How about 16 hops? It's the closest power of 2 to the legacy hop
>> limit, and makes this 4.5k for payloads and hmacs.
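
[Sanity-checking that 4.5k figure, on the assumption of 32-byte hmacs and
16-byte per-hop payloads: each hop adds one hmac per possible distance to
the failure source, so a 16-hop limit needs 16 * 17 / 2 = 136 hmacs, i.e.
136 * 32 = 4352 bytes, plus 16 * 16 = 256 bytes of payloads, giving 4608
bytes in total: exactly 4.5k.]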
>>
>> There is, however, a completely different possibility if we want
>> to use a pre-pay scheme, which I think I've described previously. You
>> send N sats and a secp point; every chained secret returned earns the
>> forwarder 1 sat[1]. The answers of course are placed in each layer of
>> the onion. You know how far the onion got based on how much money you
>> got back on failure[2], though the error message may be corrupted.
>>
>> Cheers,
>> Rusty.
>> [1] Simplest is to truncate the point to a new secret key. Each node
>> would apply a tweak for decorrelation, of course.
>> [2] The best scheme is that you don't get paid unless the next node
>> decrypts, actually, but that needs more thought.
>>
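
[A rough Go sketch of the chained-secret derivation as I read footnote [1],
using btcec as Lightning Go code typically does; the hash-truncation and
the tweak rule are assumptions, not a spec. Assuming each tweak is derived
from the per-hop onion shared secret, the sender can precompute the whole
chain, so the secrets returned on failure prove exactly how far the onion
got.]

    package prepay

    import (
        "crypto/sha256"

        "github.com/btcsuite/btcd/btcec/v2"
    )

    // nextSecret derives the next link in the chain: hash the incoming
    // point together with this node's decorrelation tweak and use the
    // digest as the new secret key ("truncate the point"). The returned
    // public key is the point handed to the next hop; revealing the
    // private key to the sender earns this forwarder its sat.
    func nextSecret(point *btcec.PublicKey, tweak []byte) (*btcec.PrivateKey, *btcec.PublicKey) {
        preimage := append(point.SerializeCompressed(), tweak...)
        digest := sha256.Sum256(preimage)
        return btcec.PrivKeyFromBytes(digest[:])
    }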