Billy Tetrud [ARCHIVE] on Nostr:
📅 Original date posted: 2022-03-11
📝 Original message:
Hi Gloria,
> 1. Transaction relay rate limiting
I have a similar concern to yours: that this could prevent higher fee-rate
transactions from being broadcast.
> 2. Staggered broadcast of replacement transactions: within some time
interval, maybe accept multiple replacements for the same prevout, but only
relay the original transaction.
By this do you mean basically having a batching window where, on receiving
a replacement transaction, a node will wait for a period of time,
potentially receiving many replacements for the same transaction (or many
separate conflicting transactions), and only broadcast the "best" one(s)
at the end of that time window?
It's an interesting idea, but it would introduce a problem: every hop the
replacement transaction takes would be delayed by this staggered window. If
the window were 3 minutes and transactions generally take 20 hops to reach
the majority of miners, a "best-case average" delay might be 3.75 minutes
(noting that among a node's 8 peers, it's quite likely one of them would have
a window ending much sooner than 3 minutes). Some (maybe 3% of) nodes would
experience delays of more than 20 minutes. That kind of delay isn't great.
However, it made me think of another idea: a transaction replacement
broadcast cooldown. What if nodes kept track of the time they broadcasted
the last replacement for a package, and enforced a relay cooldown after that
last replacement was broadcasted? A node receiving a replacement would relay
the replacement immediately if the package it's replacing was broadcasted
more than X seconds ago; otherwise it would wait until X seconds had passed
since that package was broadcasted before relaying it. Any replacements it
receives during that waiting period would replace as normal, meaning the
not-yet-relayed replacement would never be broadcasted, and only the
highest-value replacement would be broadcasted at the end of the cooldown.
This wouldn't prevent a higher-fee-rate transaction from being broadcasted
(like rate limiting could), but would still be effective at limiting
unnecessary data transmission. Another benefit is that in the
non-adversarial case, replacement transactions wouldn't be subject to any
delay at all (while in the staggered broadcast idea, most replacements
would experience some delay). And in the adversarial case, where a
malicious actor broadcasts a lowest-possible-value replacement just before
yours, the worst-case delay is just the cooldown period. I would imagine
that a 1 minute cooldown would be a reasonable worst-case delay. With a ~10
minute average block interval, this would limit spam for a transaction that
makes it into a block to ~10x (9 to 1). I don't see much of a downside to
doing this beyond just the
slight additional complexity of relay rules (and considering it could save
substantial additional code complexity, even that is a benefit).
All a node would need to do is keep a timestamp on each transaction it
receives recording when it was broadcasted, and check that timestamp when a
replacement comes in. If now - broadcastDate < cooldown, set a timer for
cooldown - (now - broadcastDate) and broadcast the replacement when it
fires. If another replacement comes in before then, clear that timer and
repeat using the original broadcast date (since the unbroadcast replacement
doesn't have a broadcast date yet).
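To make the bookkeeping concrete, here's a rough Python sketch of what I
mean (the names and structure are purely illustrative, not anything that
exists in Bitcoin Core):

import time

COOLDOWN_SECS = 60  # illustrative 1-minute cooldown

class ReplacementCooldown:
    """Tracks, per conflict (e.g. keyed by the spent prevout), when the last
    version was relayed and which replacement is waiting out the cooldown."""

    def __init__(self):
        self.last_relay_time = {}  # key -> time the last version was relayed
        self.pending = {}          # key -> best replacement still waiting

    def on_replacement(self, key, tx, now=None):
        """Returns tx if it can be relayed immediately, else None (queued)."""
        now = time.time() if now is None else now
        last = self.last_relay_time.get(key)
        if last is None or now - last >= COOLDOWN_SECS:
            # No recent relay for this conflict: relay right away.
            self.last_relay_time[key] = now
            self.pending.pop(key, None)
            return tx
        # Still cooling down: keep only the newest (i.e. best) replacement,
        # since mempool acceptance already required it to beat the previous one.
        self.pending[key] = tx
        return None

    def poll(self, key, now=None):
        """Call when the cooldown timer fires; returns the queued replacement."""
        now = time.time() if now is None else now
        last = self.last_relay_time.get(key)
        if key in self.pending and last is not None and now - last >= COOLDOWN_SECS:
            self.last_relay_time[key] = now
            return self.pending.pop(key)
        return None
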
I think it might also be useful to note that eliminating "extra data"
caused by careless or malicious actors (spam or whatever you want to call
it) should not be the goal. It is impossible to prevent all spam. What we
should be aiming for is more specific: we should attempt to design a system
where spam is manageable. E.g. if our goal is to ensure that a bitcoin node
uses no more than 10% of the bandwidth of a "normal" user, and current
non-spam traffic only requires 1% of a "normal" user's bandwidth, then the
network can bear a 9-to-1 ratio of spam. When a node spins up, there is a
lot more data to download and process. So we know that all full nodes can
handle at least as much traffic as they handle during IBD. What's the
difference between those amounts? I'm not sure, but I would guess that IBD
is at least a couple times more demanding than a fully synced node. So I
might suggest that as long as spam can be kept below a ratio of maybe 2 to
1, we should consider the design acceptable (and therefore more complexity
unnecessary).
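Spelling out that arithmetic with the example numbers above (both of which
are rough guesses on my part, not measurements):

# Assumption: we'll tolerate relay using up to 10% of a "normal" user's
# bandwidth, and legitimate (non-spam) relay traffic currently needs ~1%.
bandwidth_budget = 0.10
legit_traffic = 0.01

total_multiple = bandwidth_budget / legit_traffic  # total traffic can be 10x legit
spam_to_useful = total_multiple - 1                # leaves room for 9 units of spam per useful unit
print(spam_to_useful)  # -> 9.0, i.e. a 9-to-1 ratio is bearable under these assumptions
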
The 1 minute broadcast cooldown I mentioned before wouldn't be quite
sufficient to achieve that ratio. But a 3.33 minute cooldown would be.
Whether this is "too much" is something that would have to be discussed, but
I suspect a worst-case adversarial 3.33 minute delay would not be "too much".
Doing this could basically eliminate any risk of actual service denial via
replacement transactions.
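The link between the cooldown length and that spam ratio comes from the ~10
minute average block interval; a quick back-of-the-envelope check (assuming,
in the worst case, one replacement relayed per cooldown period):

BLOCK_INTERVAL_MINUTES = 10.0  # average time between blocks

def worst_case_spam_ratio(cooldown_minutes):
    # At most one version of the transaction is relayed per cooldown period,
    # so roughly block_interval / cooldown versions propagate per block, and
    # all but the one that gets mined are "spam".
    relayed_per_block = BLOCK_INTERVAL_MINUTES / cooldown_minutes
    return relayed_per_block - 1

print(worst_case_spam_ratio(1.0))     # ~9.0 -> the 9-to-1 figure for a 1 minute cooldown
print(worst_case_spam_ratio(10 / 3))  # ~2.0 -> a ~3.33 minute cooldown keeps it near 2-to-1
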
However, I do think that these DoS concerns are quite overblown. I wrote up
a comment on your rbf-improvements.md
<https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4093100#gistcomment-4093100>
detailing my thought process on that. The summary is that as long as the fee-rate
relay rule is maintained, any "spam" is actually paid for, either by the
"first" transaction in the spam chain, or by the "spam" itself. Even
without something like a minimum RBF relay delay limiting how much spam
could be created, the economics of the fee-rate rule already sufficiently
mitigate the issue of spam.
On Wed, Mar 9, 2022 at 9:37 AM Gloria Zhao via bitcoin-dev <
bitcoin-dev at lists.linuxfoundation.org> wrote:
> Hi RBF friends,
>
> Posting a summary of RBF discussions at coredev (mostly on transaction
> relay rate-limiting), user-elected descendant limit as a short term
> solution to unblock package RBF, and mining score, all open for feedback:
>
> One big concept discussed was baking DoS protection into the p2p level
> rather than policy level. TLDR: The fees are not paid to the node operator,
> but to the miner. While we can use fees to reason about the cost of an
> attack, if we're ultimately interested in preventing resource exhaustion,
> maybe we want to "stop the bleeding" when it happens and bound the amount
> of resources used in general. There were two main ideas:
>
> 1. Transaction relay rate limiting (i.e. the one you proposed above or
> some variation) with a feerate-based priority queue
> 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but only
> relay the original transaction.
>
> Looking to solicit feedback on these ideas and the concept in general. Is
> it a good idea (separate from RBF) to add rate-limiting in transaction
> relay? And is it the right direction to think about RBF DoS protection this
> way?
>
> A lingering concern that I have about this idea is it would then be
> possible to impact the propagation of another person’s transaction, i.e.,
> an attacker can censor somebody’s transaction from ever being announced by
> a node if they send enough transactions to fill up the rate limit.
> Obviously this would be expensive since they're spending a lot on fees, but
> I imagine it could be profitable in some situations to spend a few thousand
> dollars to prevent anyone from hearing about a transaction for a few hours.
> This might be a non-issue in practice if the rate limit is generous and
> traffic isn’t horrendous, but is this a problem?
>
> And if we don't require an increase in (i.e. addition of "new") absolute
> fees, users are essentially allowed to “recycle” fees. In the scenario
> where we prioritize relay based on feerate, users could potentially be
> placed higher in the queue, ahead of other users’ transactions, multiple
> times, without ever adding more fees to the transaction. Again, maybe this
> isn’t a huge deal in practice if we set the parameters right, but it seems…
> not great, in principle.
>
> ---------
>
> It's probably also a good idea to point out that there's been some
> discussion happening on the gist containing my original post on this thread
> (https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff).
>
> Suhas and Matt [proposed][0] adding a policy rule allowing users to
> specify descendant limits on their transactions. For example, some nth bit
> of nSequence with nVersion 3 means "this transaction won't have more than X
> vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It
> solves the pinning problem with package RBF where the attacker's package
> contains a very large and high-fee descendant.
>
> We could add this policy and deploy it with package RBF/package relay so
> that LN can use it by setting the user-elected descendant limit flag on
> commitment transactions. (Otherwise package RBF is blocked until we find a
> more comprehensive solution to the pinning attack).
>
> It's simple to [implement][1] as a mempool policy, but adds some
> complexity for wallets that use it, since it limits their use of UTXOs from
> transactions with this bit set.
>
> ---------
>
> Also, coming back to the idea of "we can't just use {individual, ancestor}
> feerate," I'm interested in soliciting feedback on adding a “mining score”
> calculator. I've implemented one [here][2] which takes the transaction in
> question, grabs all of the connected mempool transactions (including
> siblings, coparents, etc., as they wouldn’t be in the ancestor nor
> descendant sets), and builds a “block template” using our current mining
> algorithm. The mining score of a transaction is the ancestor feerate at
> which it is included.
>
> This would be helpful for something like ancestor-aware funding and
> fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited
> priority queue for transaction relay, we'd want to use something like this
> as the priority value. And for RBF, we probably want to require that a
> replacement have a higher mining score than the original transactions. This
> could be computationally expensive to do all the time; it could be good to
> cache it but that could make mempool bookkeeping more complicated. Also, if
> we end up trying to switch to a candidate set-based algorithm for mining,
> we'd of course need a new calculator.
>
> [0]:
> https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4058140#gistcomment-4058140
> [1]: https://github.com/glozow/bitcoin/tree/2022-02-user-desclimit
> [2] https://github.com/glozow/bitcoin/tree/2022-02-mining-score
> [3]: https://github.com/bitcoin/bitcoin/issues/9645
> [4]: https://github.com/bitcoin/bitcoin/issues/15553
>
> Best,
> Gloria
>
> On Tue, Feb 8, 2022 at 4:58 AM Anthony Towns <aj at erisian.com.au> wrote:
>
>> On Mon, Feb 07, 2022 at 11:16:26AM +0000, Gloria Zhao wrote:
>> > @aj:
>> > > I wonder sometimes if it could be sufficient to just have a relay rate
>> > > limit and prioritise by ancestor feerate though. Maybe something like:
>> > > - instead of adding txs to each peers setInventoryTxToSend
>> immediately,
>> > > set a mempool flag "relayed=false"
>> > > - on a time delay, add the top N (by fee rate) "relayed=false" txs to
>> > > each peer's setInventoryTxToSend and mark them as "relayed=true";
>> > > calculate how much kB those txs were, and do this again after
>> > > SIZE/RATELIMIT seconds
>>
>> > > - don't include "relayed=false" txs when building blocks?
>>
>> The "?" was me not being sure that point is a good suggestion...
>>
>> Miners might reasonably decide to have no rate limit, and always relay,
>> and never exclude txs -- but the question then becomes is whether they
>> hear about the tx at all, so rate limiting behaviour could still be a
>> potential problem for whoever made the tx.
>>
>> > Wow cool! I think outbound tx relay size-based rate-limiting and
>> > prioritizing tx relay by feerate are great ideas for preventing spammers
>> > from wasting bandwidth network-wide. I agree, this would slow the low
>> > feerate spam down, preventing a huge network-wide bandwidth spike. And
>> it
>> > would allow high feerate transactions to propagate as they should,
>> > regardless of how busy traffic is. Combined with inbound tx request
>> > rate-limiting, might this be sufficient to prevent DoS regardless of the
>> > fee-based replacement policies?
>>
>> I think you only want to do outbound rate limits, ie, how often you send
>> INV, GETDATA and TX messages? Once you receive any of those, I think
>> you have to immediately process / ignore it, you can't really sensibly
>> defer it (beyond the existing queues we have that just build up while
>> we're busy processing other things first)?
>>
>> > One point that I'm not 100% clear on: is it ok to prioritize the
>> > transactions by ancestor feerate in this scheme? As I described in the
>> > original post, this can be quite different from the actual feerate we
>> would
>> > consider a transaction in a block for. The transaction could have a high
>> > feerate sibling bumping its ancestor.
>> > For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB).
>> If
>> > we just received C, it would be incorrect to give it a priority equal to
>> > its ancestor feerate (3sat/vB) because if we constructed a block
>> template
>> > now, B would bump A, and C's new ancestor feerate is 5sat/vB.
>> > Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
>> > also exclude C when building blocks, we're missing out on good fees.
>>
>> I think you're right that this would be ugly. It's something of a
>> special case:
>>
>> a) you really care about C getting into the next block; but
>> b) you're trusting B not being replaced by a higher fee tx that
>> doesn't have A as a parent; and
>> c) there's a lot of txs bidding the floor of the next block up to a
>> level in-between the ancestor fee rate of 3sat/vB and the tx fee
>> rate of 5sat/vB
>>
>> Without (a), maybe you don't care about it getting to a miner quickly.
>> If your trust in (b) was misplaced, then your tx's effective fee rate
>> will drop and (because of (c)), you'll lose anyway. And if the spam ends
>> up outside of (c)'s range, either the rate limiting won't take effect
>> (spam's too cheap) and you'll be fine, or you'll miss out on the block
>> anyway (spam's paying more than your tx rate) and you never had any hope
>> of making it in.
>>
>> Note that we already rate limit via INVENTORY_BROADCAST_MAX /
>> *_INVENTORY_BROADCAST_INTERVAL; which gets to something like 10,500 txs
>> per 10 minutes for outbound connections. This would be a weight based
>> rate limit instead-of/in-addition-to that, I guess.
>>
>> As far as a non-ugly approach goes, I think you'd have to be smarter about
>> tracking the "effective fee rate" than the ancestor fee rate manages;
>> maybe that's something that could fall out of Murch and Clara's candidate
>> set blockbuilding ideas [0] ?
>>
>> Perhaps that same work would also make it possible to come up with
>> a better answer to "do I care that this replacement would invalidate
>> these descendents?"
>>
>> [0] https://github.com/Xekyo/blockbuilding
>>
>> > > - keep high-feerate evicted txs around for a while in case they get
>> > > mined by someone else to improve compact block relay, a la the
>> > > orphan pool?
>> > Replaced transactions are already added to vExtraTxnForCompact :D
>>
>> I guess I was thinking that it's just a 100 tx LRU cache, which might
>> not be good enough?
>>
>> Maybe it would be more on point to have a rate limit apply only to
>> replacement transactions?
>>
>> > For wallets, AJ's "All you need is for there to be *a* path that follows
>> > the new relay rules and gets from your node/wallet to perhaps 10% of
>> > hashpower" makes sense to me (which would be the former).
>>
>> Perhaps a corollarly of that is that it's *better* to have the mempool
>> acceptance rule only consider economic incentives, and have the spam
>> prevention only be about "shall I tell my peers about this?"
>>
>> If you don't have that split; then the anti-spam rules can prevent you
>> from getting the tx in the mempool at all; whereas if you do have the
>> split, then even if the bitcoind anti-spam rules are blocking you at
>> every turn, you can still send your tx to miners by some other route,
>> and then they can add it to their mempool directly without any hassle.
>>
>> Cheers,
>> aj
>>
>> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>