Anthony Towns [ARCHIVE] on Nostr
📅 Original date posted: 2020-02-27
📝 Original message:
On Mon, Feb 24, 2020 at 01:29:36PM +1030, Rusty Russell wrote:
> Anthony Towns <aj at erisian.com.au> writes:
> > On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> >> And if there is a grace period, I can just gum up the network with lots
> >> of slow-but-not-slow-enough HTLCs.
> > Well, it reduces the "gum up the network for <timeout> blocks" to "gum
> > up the network for <grace period> seconds", which seems like a pretty
> > big win. I think if you had 20 hops each with a 1 minute grace period,
> > and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
> > second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
> > so at the very least, successfully performing this attack would be
> > demonstrating lightning's solved bitcoin's transactions-per-second
> > limitation?
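[The arithmetic in that quoted estimate can be checked directly; a quick sketch, with illustrative parameter names:]

```python
# Sanity check of the quoted estimate: HTLCs/sec an attacker needs to
# keep 1000 channels saturated when every hop drops a stuck HTLC after
# a 1-minute grace period.
hops_per_route = 20        # hops traversed by each malicious HTLC
grace_period_s = 60        # seconds before each hop gives up
max_accepted_htlcs = 30    # HTLC slots per channel
channels_to_block = 1000

slots_to_fill = channels_to_block * max_accepted_htlcs   # 30,000 slots
slot_seconds_per_htlc = hops_per_route * grace_period_s  # each HTLC holds
                                                         # 20 slots for 60 s
htlcs_per_second = slots_to_fill / slot_seconds_per_htlc
print(htlcs_per_second)  # 25.0
```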
> But the comparison here is not with the current state, but with the
> "best previous proposal we have", which is:
>
> 1. Charge an up-front fee for accepting any HTLC.
> 2. Will hang-up after grace period unless you either prove a channel
> close, or gain another grace period by decrypting onion.
In general I don't really like comparing ideas that are still in
brainstorming mode; it's never clear whether there are unavoidable
pitfalls in one or the other that won't become clear until they're
actually implemented...
Specifically, I'm not a fan of either channel closes or peeling the onion
-- the former causes problems if you're trying to route across sidechains
or have lightning as a third layer above channel factories or similar,
and I'm not convinced even within Bitcoin "proving a channel close"
is that meaningful, and passing around decrypted onions seems like it
opens up privacy attacks.
Aside from those philosophical complaints, it seems to me the simplest
attack would be:
* route 1000s of HTLCs from your node A1 to your node A2 via different,
long paths, using up the total channel capacity of your A1/A2 nodes,
with long timeouts
* have A2 offer up a transaction claiming that was the channel
close to A3; make it a real thing if necessary, but it's probably
fake-able
* then leave the HTLCs open until they time out, using up capacity
from all the nodes in your 1000s of routes. For every satoshi of
yours that's tied up, you should be able to tie up 10-20sat of other
people's funds
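[The 10-20x figure follows from each HTLC locking roughly its full amount at every hop; a back-of-envelope sketch, ignoring fees and CLTV-delta differences:]

```python
# The attacker funds only the first leg of each route (A1's outgoing
# HTLC); every later hop locks roughly the same amount of its own
# outgoing funds until the HTLC times out. (Illustrative numbers only.)
hops = 20
victim_legs = hops - 1
print(victim_legs)  # ~19 sat of other people's funds per attacker sat
```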
That increases the cost of the attack by one on-chain transaction per
timeout period, and limits the attack surface by how many transactions
you can get started/completed within whatever the grace period is, but
it doesn't seem a lot better than what we have today, unless onchain
fees go up a lot.
(If the up-front fee is constant, then A1 paid a fee and A2 collected a
fee, so it's a net wash; if it's not constant, then you've got a lot of
hassle making it work with any privacy, I think.)
> > A->B: here's a HTLC, locked in
> > B->C: HTLC proposal
> > C->B: sure: updated commitment with HTLC locked in
> > B->C: great, corresponding updated commitment, plus revocation
> > C->B: revocation
> Interesting; this adds a trip, but not in latency (since C can still
> count on the HTLC being locked in at step 3).
> I don't see how it helps B though? It still ends up paying A, and C
> doesn't pay anything?
The updated commitment has C paying B onchain; if B doesn't receive that
by the time the grace period's about over, B can cancel the HTLC with A,
and then there's state machine complexity for B to cancel it with C if
C comes alive again a little later.
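[A minimal sketch of B's bookkeeping for that flow — the names and structure here are hypothetical, not the actual BOLT state machine:]

```python
import time
from enum import Enum, auto

class HtlcState(Enum):
    PROPOSED_TO_C = auto()     # B->C: HTLC proposal sent, awaiting commitment
    LOCKED_IN = auto()         # C->B: updated commitment arrived in time
    CANCELLED_WITH_A = auto()  # grace period lapsed; B failed the HTLC upstream

class ForwardedHtlc:
    """B's view of one HTLC forwarded to C (hypothetical sketch)."""

    def __init__(self, grace_period_s: float):
        self.deadline = time.monotonic() + grace_period_s
        self.state = HtlcState.PROPOSED_TO_C

    def on_commitment_from_c(self) -> bool:
        """C responded; only lock in if B hasn't already cancelled with A."""
        if self.state is HtlcState.CANCELLED_WITH_A:
            # C came alive too late: B must now also cancel with C.
            return False
        self.state = HtlcState.LOCKED_IN
        return True

    def on_grace_period_tick(self) -> None:
        """Called periodically: cancel upstream if C is too slow."""
        if (self.state is HtlcState.PROPOSED_TO_C
                and time.monotonic() >= self.deadline):
            self.state = HtlcState.CANCELLED_WITH_A
```

The point is that B needs a per-HTLC timer, and a late reply from C lands in a state where the upstream HTLC with A is already gone.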
> It forces a liveness check of C, but TBH I dread rewriting the state
> machine for this when we can just ping like we do now.
I'd be surprised if making musig work doesn't require a dread rewrite
of the state machine as well, and then there's PTLCs and eltoo...
> >> There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
> >> would fail this HTLC once it's committed, here's the error"
> > Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
> > fast fail the above too.
> > And I think something like that's necessary (at least with my view of how
> > this "keep the HTLC open" payment would work), otherwise B could send C a
> > "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
> > timeout of 2016 blocks" and if C couldn't reject it immediately would
> > owe B 50c per millisecond it took to cancel.
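[That quoted figure checks out; a quick sketch, assuming a roughly $10k/BTC price as a 2020-era ballpark:]

```python
# Convert the quoted penalty rate into dollars per millisecond.
rate_msat_per_min = 3e11
msat_per_ms = rate_msat_per_min / 60 / 1000   # 5e6 msat per millisecond
sat_per_ms = msat_per_ms / 1000               # 5000 sat per millisecond
usd_per_btc = 10_000                          # assumption: ~2020 price
usd_per_sat = usd_per_btc / 1e8               # 1e8 sat per BTC
print(sat_per_ms * usd_per_sat)  # ≈ 0.5, i.e. ~50c per millisecond
```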
> Well, surely grace period (and penalty rate) are either fixed in the
> protocol or negotiated up-front, not per-HTLC.
I think the "keep open rate" should depend on how many nodes have
already been in the route (the more hops it's gone through, the more
funds/channels you're tying up by holding onto the HTLC, so the more
you should pay), while the grace period should depend on how many nodes
there are still to go in the route (it needs to be higher to let each of
those nodes deduct their delta from it). So I think you *should* expect
those to change per HTLC that you're forwarding, as those factors will
be different for different HTLCs.
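[As a sketch of that per-hop adjustment — the delta, base rate, and increment values here are invented for illustration:]

```python
# Each forwarding node deducts its own delta from the grace period it
# passes downstream, and charges a keep-open rate that grows with the
# number of hops already traversed (more upstream funds are tied up).
def forward_terms(grace_period_s: float, rate_msat_per_min: float,
                  delta_s: float, rate_increment: float):
    """Terms this hop offers the next hop (hypothetical policy)."""
    return grace_period_s - delta_s, rate_msat_per_min + rate_increment

grace, rate = 600.0, 100.0
for hop in range(5):
    print(f"hop {hop}: grace={grace:.0f}s rate={rate:.0f}msat/min")
    grace, rate = forward_terms(grace, rate, delta_s=60, rate_increment=100)
```

So the grace period shrinks and the keep-open rate grows as the HTLC moves down the route, which is why both terms differ per HTLC rather than being fixed per channel.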
Cheers,
aj