Joost Jager [ARCHIVE] on Nostr:
📅 Original date posted: 2020-03-09
📝 Original message:
On Thu, Feb 20, 2020 at 4:22 AM Anthony Towns <aj at erisian.com.au> wrote:
> On Tue, Feb 18, 2020 at 10:23:29AM +0100, Joost Jager wrote:
> > A different way of mitigating this is to reverse the direction in which
> the
> > bond is paid. So instead of paying to offer an htlc, nodes need to pay to
> > receive an htlc. This sounds counterintuitive, but for the described
> jamming
> > attack there is also an attacker node at the end of the route. The
> attacker
> > still pays.
>
> I think this makes a lot of sense. I think the way it would end up working
> is that the further the route extends, the greater the payments are, so:
>
> A -> B : B sends A 1msat per minute
> A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
> A -> B -> C -> D : D sends C 3 msat, etc
> A -> B -> C -> D -> E : E sends D 4 msat, etc
>
> so each node is receiving +1 msat/minute, except for the last one, who's
> paying n msat/minute, where n is the number of hops to have gotten up to
> the last one. There's the obvious privacy issue there, with fairly
> obvious ways to fudge around it, I think.
>
Yes, that is definitely a good point. Otherwise the attacker could hold the
htlc at the end of the route and pay the hold fee to its predecessor. The
hold fee would propagate back to the first node (and increase along the
way). Since the first node is also owned by the attacker, there would again
be no cost for the attacker to jam the channel.
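To make the arithmetic of the quoted scheme concrete, here is a small sketch
(Go, using the 1 msat/minute per-hop increment from the example above; the
route length is just an illustration) of the net hold-fee flow per node, and
of what an attacker holding both endpoints ends up paying:

    package main

    import "fmt"

    // holdFeeFlows computes the net hold fee (msat per minute) for every node
    // on a route of `hops` hops under the reversed-bond scheme quoted above:
    // the node at position i pays i msat/min to the node at position i-1 for
    // as long as the htlc is outstanding.
    func holdFeeFlows(hops int) []int {
        net := make([]int, hops+1)
        for i := 1; i <= hops; i++ {
            net[i] -= i   // node i pays i msat/min upstream
            net[i-1] += i // node i-1 receives it
        }
        return net
    }

    func main() {
        const hops = 4 // A -> B -> C -> D -> E

        net := holdFeeFlows(hops)
        fmt.Println("net msat/min per node:", net)
        // Prints: [1 1 1 1 -4]. Every node except the last one nets
        // +1 msat/min; the final node pays `hops` msat/min.

        // If the attacker controls both the first and the last node, its
        // combined cost is exactly what the honest intermediaries earn.
        attackerCost := -(net[0] + net[hops])
        fmt.Println("attacker cost per minute:", attackerCost) // 3
    }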
In the meantime, I've been jamming channels on testnet myself, to see what
pathfinding changes are needed to do it efficiently and to check out the
effect. The expected outcome was there: a channel stayed jammed for as long
as I wanted. But I also learned something else:
Traversing a path takes time, especially if the path is optimized for
maximum length and contains loops. In particular, when some of the nodes
and/or network connections are slow, the total round trip from the sender's
point of view can get seriously long. Even if the final node immediately
fails the htlc, the nodes at the start of the path still see their outgoing
htlcs being held for quite some time.
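As a rough illustration of how those delays add up (the per-hop latency and
the 20-hop length are assumed numbers, not measurements from my experiment):
the add has to travel the remaining hops forward and the failure has to
travel the same hops back, so every hop's latency is paid twice.

    package main

    import (
        "fmt"
        "time"
    )

    // routeHoldTime estimates how long an outgoing htlc near the start of the
    // route stays locked up when the final node fails it immediately: the
    // update_add_htlc travels the remaining hops forward and the failure
    // travels the same hops back, so each hop's latency counts twice.
    func routeHoldTime(perHopLatency []time.Duration) time.Duration {
        var total time.Duration
        for _, l := range perHopLatency {
            total += 2 * l
        }
        return total
    }

    func main() {
        // Assumed numbers: a maximum-length route of 20 hops where each hop
        // adds 1.5s of processing and network latency.
        hops := make([]time.Duration, 20)
        for i := range hops {
            hops[i] = 1500 * time.Millisecond
        }

        fmt.Println(routeHoldTime(hops)) // 1m0s
    }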
What this means is that the channel jamming attack can also be executed
without the attacker controlling the final node. The attacker can construct
long routes for which it doesn't matter where they end. Suppose it takes one
minute for the htlc to be released again on the targeted channel (the round
trip from the targeted channel to the final node and back). The attacker
then just needs to launch htlcs at a rate higher than one per minute to
(eventually) saturate the channel. In my experiment, I launched many htlcs
concurrently, which seemed to make the total latency even longer, probably
because those htlcs then start competing for limited resources at the route
hops.
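A back-of-the-envelope way to look at the required rate (all numbers here
are assumed): by a Little's-law style argument, the number of htlc slots the
attacker keeps occupied on the targeted channel is roughly the launch rate
times the hold time, so the rate needed scales with the slot limit and
inversely with how long each htlc is held. And the hold time itself seems to
grow once htlcs start competing for resources.

    package main

    import (
        "fmt"
        "time"
    )

    // steadyStateSlots is a rough estimate of how many htlc slots the
    // attacker keeps occupied on the targeted channel: launch rate times the
    // time each htlc stays outstanding. If the hold time grows as more htlcs
    // compete for resources along the route, this number keeps climbing for
    // a fixed launch rate.
    func steadyStateSlots(launchRatePerMin float64, holdTime time.Duration) float64 {
        return launchRatePerMin * holdTime.Minutes()
    }

    func main() {
        // 483 is the protocol-level cap on max_accepted_htlcs; many nodes
        // configure a lower limit. Launch rate and hold time are made up.
        const maxAcceptedHTLCs = 483

        occupied := steadyStateSlots(10, time.Minute)
        fmt.Printf("~%.0f of %d slots occupied\n", occupied, maxAcceptedHTLCs)
    }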
This variation does require more action from the attacker: they need to
keep refreshing htlcs that return to them. Therefore it may be easier to
address this with some form of rate limiting, although that has its own
downsides.
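For illustration only, such rate limiting could be as simple as a per-peer
token bucket on incoming htlc adds. This is just a sketch of the general
idea (the rate and burst numbers are made up), not a description of what any
implementation does.

    package main

    import (
        "fmt"
        "time"
    )

    // htlcRateLimiter is a minimal per-peer token bucket: each incoming
    // update_add_htlc consumes one token, and tokens refill at a fixed rate
    // up to a burst ceiling. An htlc arriving with an empty bucket would be
    // failed back immediately instead of being forwarded.
    type htlcRateLimiter struct {
        tokens     float64
        maxTokens  float64
        refillRate float64 // tokens per second
        lastRefill time.Time
    }

    func newHTLCRateLimiter(ratePerSec, burst float64) *htlcRateLimiter {
        return &htlcRateLimiter{
            tokens:     burst,
            maxTokens:  burst,
            refillRate: ratePerSec,
            lastRefill: time.Now(),
        }
    }

    // allow reports whether another htlc from this peer should be accepted.
    func (l *htlcRateLimiter) allow() bool {
        now := time.Now()
        l.tokens += now.Sub(l.lastRefill).Seconds() * l.refillRate
        if l.tokens > l.maxTokens {
            l.tokens = l.maxTokens
        }
        l.lastRefill = now

        if l.tokens < 1 {
            return false
        }
        l.tokens--
        return true
    }

    func main() {
        // Assumed policy: at most 10 htlcs/second from a peer, bursts of 20.
        limiter := newHTLCRateLimiter(10, 20)

        accepted := 0
        for i := 0; i < 100; i++ {
            if limiter.allow() {
                accepted++
            }
        }
        fmt.Println("accepted:", accepted) // roughly the burst size
    }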
Joost