Gregory Maxwell [ARCHIVE] on Nostr:
Original date posted: 2015-12-08

Original message: On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev
<bitcoin-dev at lists.linuxfoundation.org> wrote:
> Having a cost function rather than separate limits does make it easier to
> build blocks (approximately) optimally, though (ie, just divide the fee by
> (base_bytes+witness_bytes/4) and sort). Are there any other benefits?
Actually being able to compute fees for your transaction: if there are
multiple limits "at play", then how much you need to pay depends on the
entire set of other candidate transactions, which is unknown to you.
Avoiding the need for a fancy solver in the miner is also virtuous,
because requiring software complexity there can make for centralization
advantages or divert development/maintenance cycles in open source
software to other ends... The multidimensional optimization is also
harder to accommodate in improved relay schemes; this is the same
problem as "build blocks", but much more critical, both because of the
need for consistency and the frequency with which you do it. (A minimal
sketch of the single-cost case follows below.)
These don't, however, apply all that strongly if only one limit is
likely to be the binding one... though I am unsure about counting
on that; after all, if the other limits wouldn't be limiting, why have
them?
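
To make the contrast concrete, here is a minimal sketch (Python; the
Tx record, MAX_COST constant, and build_block function are illustrative
assumptions, not any real codebase) of block building under the single
cost function quoted above. With one scalar cost, greedy sort-by-feerate
is approximately optimal, and a wallet can price its transaction without
seeing anyone else's:

from dataclasses import dataclass

@dataclass
class Tx:
    fee: int            # satoshis
    base_bytes: int     # non-witness serialized size
    witness_bytes: int  # witness data size

    @property
    def cost(self) -> float:
        # The single aggregate limit from the quoted proposal:
        # witness bytes are discounted 4x relative to base bytes.
        return self.base_bytes + self.witness_bytes / 4

MAX_COST = 1_000_000  # illustrative cap standing in for the consensus limit

def build_block(mempool: list[Tx]) -> list[Tx]:
    # Greedy fill: with one scalar cost, sorting by fee/cost is
    # (approximately) optimal; no multidimensional solver needed.
    selected, used = [], 0.0
    for tx in sorted(mempool, key=lambda t: t.fee / t.cost, reverse=True):
        if used + tx.cost <= MAX_COST:
            selected.append(tx)
            used += tx.cost
    return selected

The wallet-side point above falls out directly: required_fee =
feerate * tx.cost, with no reference to the rest of the mempool. With
two independent limits (say, bytes and sigops), which one binds depends
on everyone else's transactions, so no such local computation exists.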
> That seems kinda backwards.
It can seem that way, but all limiting schemes have pathological cases
where someone runs up against the limit in the most costly way. Keep
in mind that casual pathological behavior can be suppressed via
IsStandard-like rules without baking them into consensus, so long as
the candidate attacker isn't the miners themselves. Doing so where
possible can help avoid cases like the current sigops limiting, which
is just... pretty broken. (A sketch of such a policy check follows
below.)
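
As a hedged illustration of the IsStandard-style suppression described
above (the function and threshold are hypothetical, not Bitcoin Core's
actual policy), a relay/mempool check can refuse transactions that
approach a secondary limit in a pathological way, such as extreme sigop
density:

MIN_BYTES_PER_SIGOP = 20  # illustrative policy knob, not a consensus rule

def is_standard(tx_size_bytes: int, sigop_count: int) -> bool:
    # Reject transactions that pack sigops far more densely than
    # ordinary payments would. Policy only: a miner can still mine
    # such a transaction, so this suppresses *casual* pathological
    # behavior, per the caveat above about miners as the attacker.
    if sigop_count == 0:
        return True
    return tx_size_bytes / sigop_count >= MIN_BYTES_PER_SIGOP

Because it's policy rather than consensus, tightening or loosening it
later needs no fork, which is what makes it a cheap escape hatch for the
pathological corners of any limiting scheme.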