Antoine Riard [ARCHIVE] on Nostr:
📅 Original date posted:2022-07-05
📝 Original message:
Hi Bastien,
Thanks for the proposal,
While I think establishing rate limits based on channel topology should be
effective at mitigating DoS attackers, there is still a concern that the
damage inflicted might exceed the channel cost. I.e., since onion message
routing is source-based, an attacker could exhaust or degrade targeted onion
communication channels to prevent invoice exchanges between LN peers, and
thus disrupt their HTLC traffic. Moreover, if the HTLC traffic is
substitutable ("good X sold by Merchant Alice can be substituted by good Y
sold by Merchant Mallory"), the attacker could extract income from the DoS
attack, compensating for the channel cost.
Such targeted onion bandwidth attacks, if feasible, would be fairly
sophisticated, so they might not be a short-term concern. Though we might
have to introduce some proportionality between onion bandwidth units across
the network and the cost of opening channels in the future...
One further concern: we might see "spontaneous" bandwidth DoS in the future
if onion traffic is leveraged beyond offers, such as for discovery of LSP
liquidity services (e.g., PeerSwap, instant inbound channels, etc.). For
confidentiality reasons, an LN node might not use the Noise connections to
learn about such services. The LN node might also be interested in doing
real market discovery by fetching service rates from all the LSPs while
engaging with only one, thereby provoking a spike in onion bandwidth
consumed across the network without symmetric HTLC traffic. This concern is
hypothetical, as that class of traffic might end up announced in gossip.
So I think backpressure-based rate limiting is a good way to bootstrap a
"naive" DoS protection for onion messages, though I'm not sure it will be
robust enough in the long term.
Antoine
On Wed, Jun 29, 2022 at 04:28, Bastien TEINTURIER <bastien at acinq.fr>
wrote:
> During the recent Oakland Dev Summit, some lightning engineers got together to discuss DoS
> protection for onion messages. Rusty proposed a very simple rate-limiting scheme that
> statistically propagates back to the correct sender, which we describe in detail below.
>
> You can also read this in gist format if that works better for you [1].
>
> Nodes apply per-peer rate limits on _incoming_ onion messages that should be relayed (e.g.
> N per second with some burst tolerance). It is recommended to allow more onion messages from
> peers with whom you have channels, for example 10 per second when you have a channel and
> 1 per second when you don't.
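>
> As a concrete sketch, this is just a per-peer token bucket. The rates mirror the example
> numbers above; the burst sizes and the `has_channel` check are illustrative, not normative:
>
>     import time
>
>     class TokenBucket:
>         """Allows `rate` messages per second with a burst of `burst`."""
>         def __init__(self, rate: float, burst: float):
>             self.rate = rate
>             self.capacity = burst
>             self.tokens = burst
>             self.last = time.monotonic()
>
>         def allow(self) -> bool:
>             now = time.monotonic()
>             # Refill proportionally to elapsed time, capped at the burst size.
>             self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
>             self.last = now
>             if self.tokens >= 1:
>                 self.tokens -= 1
>                 return True
>             return False
>
>     def limiter_for_peer(has_channel: bool) -> TokenBucket:
>         # 10 per second for channel peers, 1 per second otherwise, as suggested above.
>         return TokenBucket(10.0, 20.0) if has_channel else TokenBucket(1.0, 2.0)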
>
> When relaying an onion message, nodes keep track of where it came from (by using the `node_id` of
> the peer who sent that message). Nodes only need the last such `node_id` per outgoing connection,
> which ensures the memory footprint is very small. Also, this data doesn't need to be persisted.
>
> Let's walk through an example to illustrate this mechanism:
>
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's `node_id` in its
> per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its per-connection state with Dave
> * ...
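>
> In code, the relay-tracking state in this walkthrough can be as small as one `node_id`
> slot per outgoing connection; a minimal sketch (names are illustrative):
>
>     # One slot per outgoing connection, overwritten on every relay.
>     last_incoming_peer: dict[bytes, bytes] = {}  # outgoing node_id -> last incoming node_id
>
>     def on_relay(incoming_node_id: bytes, outgoing_node_id: bytes) -> None:
>         # Overwrites any previous value: only the *last* sender is remembered,
>         # which is why the memory footprint stays tiny and nothing is persisted.
>         last_incoming_peer[outgoing_node_id] = incoming_node_id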
>
> We introduce a new message that will be sent when dropping an incoming onion message because it
> reached rate limits:
>
> 1. type: 515 (`onion_message_drop`)
> 2. data:
>    * [`rate_limited`:`u8`]
>    * [`shared_secret_hash`:`32*byte`]
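>
> As a wire-format sketch (standard lightning framing is a 2-byte big-endian type followed
> by the fields in order; treating `rate_limited` as a boolean flag is an assumption here,
> and this is illustrative, not a spec):
>
>     import struct
>
>     ONION_MESSAGE_DROP = 515
>
>     def encode_onion_message_drop(rate_limited: bool, shared_secret_hash: bytes) -> bytes:
>         assert len(shared_secret_hash) == 32
>         # u16 type, u8 flag, then the 32-byte hash.
>         return struct.pack(">HB", ONION_MESSAGE_DROP, 1 if rate_limited else 0) + shared_secret_hash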
>
> Whenever an incoming onion message reaches the rate limit, the receiver sends `onion_message_drop`
> to the sender. The sender looks at its per-connection state to find where the message came
> from and relays `onion_message_drop` to that last sender, halving the rate limit it grants that peer.
>
> If the sender doesn't overflow the rate limit again, the receiver should double the rate limit
> after 30 seconds, until it reaches the default rate limit again.
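>
> A minimal sketch of the halve-on-drop / double-on-recovery dynamics (the floor value and
> the `PeerLimit` shape are illustrative assumptions):
>
>     import time
>     from dataclasses import dataclass
>
>     DEFAULT_RATE = 10.0    # msgs/sec granted by default (channel peer)
>     RECOVERY_DELAY = 30.0  # seconds without overflow before doubling back
>
>     @dataclass
>     class PeerLimit:
>         rate: float = DEFAULT_RATE
>         last_penalty: float = 0.0
>
>     def on_drop_relayed_to(peer: PeerLimit) -> None:
>         # Halve the rate granted to the peer the drop was attributed to.
>         peer.rate = max(peer.rate / 2, 0.1)
>         peer.last_penalty = time.monotonic()
>
>     def maybe_recover(peer: PeerLimit) -> None:
>         # After 30s of staying under the limit, double back toward the default.
>         if peer.rate < DEFAULT_RATE and time.monotonic() - peer.last_penalty >= RECOVERY_DELAY:
>             peer.rate = min(peer.rate * 2, DEFAULT_RATE)
>             peer.last_penalty = time.monotonic()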
>
> The flow will look like:
>
>     Alice                      Bob                      Carol
>       |                         |                         |
>       |      onion_message      |                         |
>       |------------------------>|                         |
>       |                         |      onion_message      |
>       |                         |------------------------>|
>       |                         |   onion_message_drop    |
>       |                         |<------------------------|
>       |   onion_message_drop    |                         |
>       |<------------------------|                         |
>
> The `shared_secret_hash` field contains a BIP 340 tagged hash of the Sphinx shared secret of the
> rate limiting peer (in the example above, Carol):
>
> * `shared_secret_hash = SHA256(SHA256("onion_message_drop") || SHA256("onion_message_drop") || sphinx_shared_secret)`
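>
> For reference, this is the BIP 340 tagged hash construction; a direct transcription of the
> formula above (`sphinx_shared_secret` is whatever secret the Sphinx layer derived with that peer):
>
>     import hashlib
>
>     def tagged_hash(tag: str, msg: bytes) -> bytes:
>         tag_hash = hashlib.sha256(tag.encode()).digest()
>         return hashlib.sha256(tag_hash + tag_hash + msg).digest()
>
>     # shared_secret_hash = tagged_hash("onion_message_drop", sphinx_shared_secret)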
>
> This value is known by the node that created the onion message: if `onion_message_drop` propagates
> all the way back to them, it lets them know which part of the route is congested, allowing them
> to retry through a different path.
>
> When there is some latency between nodes and many onion messages in flight, `onion_message_drop`
> may be relayed to the incorrect incoming peer (since we only store the `node_id` of the _last_
> incoming peer in our outgoing connection state). The following example highlights this:
>
>     Eve                        Bob                      Carol
>      |      onion_message       |                         |
>      |------------------------->|      onion_message      |
>      |      onion_message       |------------------------>|
>      |------------------------->|      onion_message      |
>      |      onion_message       |------------------------>|
>      |------------------------->|      onion_message      |
>                                 |------------------------>|
>     Alice                       |   onion_message_drop    |
>      |      onion_message       |                    +----|
>      |------------------------->|      onion_message |    |
>      |                          |--------------------|--->|
>      |                          |                    |    |
>      |                          |                    |    |
>      |                          |                    |    |
>      |   onion_message_drop     |<-------------------+    |
>      |<-------------------------|                         |
>
> In this example, Eve is spamming but `onion_message_drop` is propagated back to Alice instead.
> However, this scheme will _statistically_ penalize the right incoming peer (with a probability
> depending on the volume of onion messages that the spamming peer is generating compared to the
> volume of legitimate onion messages).
>
> It is an interesting research problem to find formulas for those probabilities, to evaluate how
> effective this will be against various types of spam. We hope researchers on this list will be
> interested in looking into it and will come up with a good model to evaluate that scheme.
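>
> One naive starting point is a toy simulation. Everything here is an illustrative assumption:
> both senders relay as independent Poisson processes, and the one-slot memory holds whoever
> relayed most recently when the drop arrives:
>
>     import random
>
>     def attribution_accuracy(spam_rate: float, honest_rate: float, drops: int = 100_000) -> float:
>         """Fraction of drops attributed to the spammer under the toy model above."""
>         correct = 0
>         for _ in range(drops):
>             # Time since the last spam/honest relay is exponentially distributed;
>             # the smaller one identifies who occupies the one-slot memory.
>             if random.expovariate(spam_rate) < random.expovariate(honest_rate):
>                 correct += 1
>         return correct / drops
>
>     # A spammer sending 9x the honest volume is penalized ~90% of the time here,
>     # matching the intuition that accuracy scales with the spam/legitimate ratio.
>     print(attribution_accuracy(spam_rate=9.0, honest_rate=1.0))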
>
> To increase the accuracy of attributing `onion_message_drop`, more data could be stored in the
> future if it becomes necessary. We need more research to quantify how much accuracy would be
> gained by storing more data and making the protocol more complex.
>
> Cheers,
> Bastien
>
> [1] https://gist.github.com/t-bast/e37ee9249d9825e51d260335c94f0fcf