Rusty Russell [ARCHIVE]
2023-06-09 12:58:57


Original date posted: 2020-02-21
Original message:
Anthony Towns <aj at erisian.com.au> writes:
> On Tue, Feb 18, 2020 at 10:23:29AM +0100, Joost Jager wrote:
>> A different way of mitigating this is to reverse the direction in which the
>> bond is paid. So instead of paying to offer an htlc, nodes need to pay to
>> receive an htlc. This sounds counterintuitive, but for the described jamming
>> attack there is also an attacker node at the end of the route. The attacker
>> still pays.
>
> I think this makes a lot of sense. I think the way it would end up working
> is that the further the route extends, the greater the payments are, so:
>
> A -> B : B sends A 1msat per minute
> A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
> A -> B -> C -> D : D sends C 3 msat, etc
> A -> B -> C -> D -> E : E sends D 4 msat, etc
>
> so each node is receiving +1 msat/minute, except for the last one, who's
> paying n msat/minute, where n is the number of hops to have gotten up to
> the last one. There's the obvious privacy issue there, with fairly
> obvious ways to fudge around it, I think.

Yes, it needs to scale with distance to work at all. However, it has
the same problems as other upfront schemes: how does E know to send
4msat per minute?
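aj's scaling above can be sketched as follows (a toy model of the quoted numbers only, not a proposed wire format; the function names are mine):

```python
def hold_rates_msat_per_min(num_edges):
    """Rate paid across each channel, first hop first: on edge i the
    downstream node pays its upstream peer i+1 msat/minute."""
    return [i + 1 for i in range(num_edges)]

def net_flow_msat_per_min(num_edges):
    """Net rate per node along the route (positive = earning)."""
    rates = hold_rates_msat_per_min(num_edges)
    nets = []
    for node in range(num_edges + 1):
        received = rates[node] if node < num_edges else 0
        paid = rates[node - 1] if node > 0 else 0
        nets.append(received - paid)
    return nets

# A -> B -> C -> D -> E: B gets 1 from C's side... concretely,
# E pays D 4, D pays C 3, C pays B 2, B pays A 1:
assert hold_rates_msat_per_min(4) == [1, 2, 3, 4]
# Everyone nets +1 msat/min except the last node, who pays n:
assert net_flow_msat_per_min(4) == [1, 1, 1, 1, -4]
```

This also makes the privacy leak concrete: the rate a node is asked to pay reveals its distance from the start of the route.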

> I think it might make sense for the payments to have a grace period --
> ie, "if you keep this payment open longer than 20 seconds, you have to
> start paying me x msat/minute, but if it fulfills or cancels before
> then, it's all good".

But whatever the grace period, I can just rely on knowing that B is in
Australia (with a 1 second HTLC commit time) to make that node bleed
satoshis. I can send A->B->C, and have C fail the htlc after 19
seconds for free. But B has to send 1msat to A. B can't blame A or C,
since this attack could come from further away, too.

This attack always seems possible. Are you supposed to pay immediately
to fail an HTLC? That makes for a trivial attack, so I guess not.

And if there is a grace period, I can just gum up the network with lots
of slow-but-not-slow-enough HTLCs.
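The asymmetry in the Australia example can be written down directly (all numbers hypothetical, taken from aj's 20-second grace period and a 1 msat/minute rate; the fee rule is a naive per-hop reading of the proposal):

```python
GRACE_SEC = 20.0          # aj's example grace period
RATE_MSAT_PER_MIN = 1.0   # hold fee once the grace period expires

def owed_msat(hold_sec):
    """Fee a node owes its upstream peer for holding an HTLC
    hold_sec seconds under a naive per-hop grace-period rule."""
    overage_min = max(0.0, hold_sec - GRACE_SEC) / 60.0
    return overage_min * RATE_MSAT_PER_MIN

# The attacker at C fails after 19s: inside C's grace period, free.
assert owed_msat(19.0) == 0.0
# B's hold toward A also includes ~1s of commit/propagation latency
# each way, so A measures ~21s and B owes A for the overage:
assert owed_msat(21.0) > 0.0
```

B eats the difference on every such HTLC, and can't distinguish this from an honest slow payment further down the route.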

> Maybe this also implies a different protocol for HTLC forwarding,
> something like:
>
> 1. A sends the HTLC onion packet to B
> 2. B decrypts it, makes sure it makes sense
> 3. B sends a half-signed updated channel state back to A
> 4. A accepts it, and forwards the other half-signed channel update to B
>
> so that at any point before (4) Alice can say "this is taking too long,
> I'll start losing money" and safely abort the HTLC she was forwarding to
> Bob to avoid paying fees; while only after (4) can she start the time on
> expecting Bob to start paying fees that she'll forward back. That means
> 1.5 round-trips before Bob can really forward the HTLC on to Carol;
> but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
> as Alice/Bob has finished (2).

We added a ping-before-commit[1] to avoid the case where B has disconnected
and we don't know yet; we have to assume an HTLC is stuck once we send
commitment_signed. This would be a formalization of that, but I don't
think it's any better?
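For reference, aj's 1.5-round-trip flow can be sketched as a toy state walk (step labels are mine; the point is only that aborts before step 4 are free, and Bob's fee clock starts after it):

```python
STEPS = [
    "1. A sends the HTLC onion packet to B",
    "2. B decrypts it and sanity-checks it",
    "3. B sends a half-signed updated channel state back to A",
    "4. A countersigns and returns the other half to B",
]

def run(abort_before=None):
    """Return (htlc_committed, fee_clock_running)."""
    for step in range(1, len(STEPS) + 1):
        if abort_before is not None and step >= abort_before:
            return (False, False)   # pre-commit abort: no fees owed
    return (True, True)             # fully committed: fees accrue

assert run(abort_before=4) == (False, False)  # Alice bails at step 3
assert run() == (True, True)
```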

There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
would fail this HTLC once it's committed, here's the error" and if Alice
gets it before she sends the commitment_signed, she sends a new
"unadd_htlc" message first. This theoretically allows Bob to do the
same: optimistically forward it, and unadd it if Alice doesn't commit
with it in time.[2]
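Alice's side of that fast-fail idea reduces to a simple decision (wire details hypothetical; "unadd_htlc" is the message name from the description above):

```python
def alice_next_msg(sent_commitment_signed, got_fast_fail):
    """What Alice sends next, given whether she has already sent
    commitment_signed and whether Bob's early failure arrived."""
    if got_fast_fail and not sent_commitment_signed:
        return "unadd_htlc"        # retract: skip the commit round-trip
    return "commitment_signed"     # too late: commit, then fail normally

assert alice_next_msg(False, True) == "unadd_htlc"
assert alice_next_msg(True, True) == "commitment_signed"
```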

Cheers,
Rusty.

[1] Technically, if we haven't seen any traffic from the peer in the
last 30 seconds, we send a ping and wait.

[2] This seems like a speedup, but it only is if someone fails the HTLC.
We still need to send the commitment_signed back and forth (w/
revoke and ack) before committing to it in the next hop.