Mark Friedenbach [ARCHIVE] on Nostr:
Original date posted: 2015-05-08
Original message:
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions against the combined value of the entire bitcoin ecosystem. Ideally
we would take no action for which we are not absolutely certain of the
ramifications, with the information that can be made available to us. But
of course that is not always possible: there are unknown-unknowns, time
pressures, and known-unknowns where information has too high a marginal
cost. So where certainty is unobtainable, we must instead hedge against
unwanted outcomes.
The proposal to raise the block size now by redefining a constant carries
with it risk associated with infrastructure scaling, centralization
pressures, and delaying the necessary development of a constraint-based fee
economy. It also simply kicks the can down the road in settling these
issues because a larger but realistic hard limit must still exist, meaning
a future hard fork may still be required.
But whatever new hard limit is chosen, there is also a real possibility
that it may be too high. The standard response is that it is a soft-fork
change to impose a lower block size limit, which miners could do with a
minimal amount of coordination. This is, however, undermined by the
unfortunate reality that so many mining operations are absentee-run
businesses, run by individuals without a strong background in bitcoin
protocol policy, or run by parties whose interests are not well aligned
with other users or holders of bitcoin. We cannot rely on miners being
vigilant about issues as they develop, or on them responding in the
appropriate fashion that someone with full domain knowledge and an
objective perspective would.
The alternative, then, is to have some sort of dynamic block size limit
controller, ideally one which applies a cost to raising the block size in a
way that preserves the decentralization and/or long-term stability features
that we care about. I will now describe one such proposal:
* For each block, the miner is allowed to select a different difficulty
(nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
and this miner-selected difficulty is used for the proof of work check. In
addition to adjusting the hashcash target, selecting a different difficulty
also raises or lowers the maximum block size for that block by a function
of the difference in difficulty. So increasing the difficulty of the block
by an additional 25% raises the block limit for that block from 100% of the
current limit to 125%, and lowering the difficulty by 10% would also lower
the maximum block size for that block from 100% to 90% of the current
limit. For simplicity I will assume a linear identity transform as the
function, but a quadratic or other function with compounding marginal cost
may be preferred.
* The default maximum block size limit is then adjusted at regular
intervals. For simplicity I will assume an adjustment at the end of each
2016-block interval, at the same time that difficulty is adjusted, but
there is no reason these have to be aligned. The adjustment algorithm
itself is either the selection of the median, or perhaps some sort of
weighted average that respects the "middle majority." There would of course
be limits on how quickly the block size limit can be adjusted in any one
period, just as there are min/max limits on the difficulty adjustment (see
the sketch after this list).
* To prevent perverse mining incentives, the original difficulty without
adjustment is used in the aggregate work calculations for selecting the
most-work chain, and the allowable miner-selected adjustment to difficulty
would have to be tightly constrained.
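For concreteness, here is a rough sketch of how these rules might fit
together. The constant names, clamping values, and the clamped-median
retarget below are my own illustrative assumptions about one possible
reading of the rules above, not proposed consensus code:

    # Illustrative sketch only (Python); all names and constants are assumptions.
    from statistics import median

    MAX_DELTA = 0.25         # miner may move the nBits difficulty by up to +/- 25%
    RETARGET_INTERVAL = 2016 # assumed to coincide with the difficulty retarget
    MAX_LIMIT_STEP = 0.25    # cap on how far the default limit moves per period

    def per_block_limit(default_limit: int, difficulty_delta: float) -> int:
        """Maximum size for one block, given the miner-selected difficulty
        adjustment (+0.25 means a 25% harder target). Linear identity
        transform; a compounding variant could use (1 + delta) ** 2."""
        if abs(difficulty_delta) > MAX_DELTA:
            raise ValueError("difficulty adjustment outside the allowed range")
        return int(default_limit * (1.0 + difficulty_delta))

    def retarget_default_limit(default_limit: int, period_deltas: list) -> int:
        """New default limit at the end of a retarget interval: the median of
        the miner-selected adjustments, clamped like the difficulty retarget."""
        step = median(period_deltas)
        step = max(-MAX_LIMIT_STEP, min(MAX_LIMIT_STEP, step))
        return int(default_limit * (1.0 + step))

    def chain_work_increment(base_difficulty: float, difficulty_delta: float) -> float:
        """Work credited toward best-chain selection ignores the per-block
        adjustment, so a harder (bigger) block earns no extra chain work."""
        # difficulty_delta is intentionally unused here
        return base_difficulty

    # Example: with a 1,000,000-byte default limit,
    # per_block_limit(1_000_000, +0.25) == 1_250_000
    # per_block_limit(1_000_000, -0.10) ==   900_000
    # retarget_default_limit(1_000_000, [0.0, 0.05, 0.25, -0.10, 0.10]) == 1_050_000

The clamped median here is only a placeholder for whichever "middle
majority" rule is ultimately chosen.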
These rules create an incentive environment where raising the block size
has a real cost associated with it: a more difficult hashcash target for
the same subsidy reward. For rational miners that cost must be
counter-balanced by additional fees provided in the larger block. This
allows block size to increase, but only within the confines of a
self-supporting fee economy.
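To make that break-even point explicit (in my own notation): if mining at
the base difficulty pays an expected reward R per unit of work, where R is
the subsidy plus baseline fees, then a miner who selects a difficulty
multiplier of (1 + x) in order to produce a larger block needs the
additional fees dF in that block to satisfy (R + dF) / (1 + x) >= R, which
simplifies to dF >= x * R.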
When the subsidy disappears, or is reduced to an insignificant fraction of
the block reward, this incentive structure goes away with it. Hopefully by
that time we will have sufficient information to set a hard block size
maximum via soft fork. But in the meantime, the block size limit controller
constrains the maximum allowed block size to a range supported by fees on
the network, providing an emergency relief valve that we can be assured
will only be used at significant cost.
Mark Friedenbach
* There have over time been various discussions on the bitcointalk forums
about dynamically adjusting block size limits. The true origin of the idea
is unclear at this time (citations would be appreciated!) but a form of it
was implemented in Bytecoin / Monero using subsidy burning to increase the
block size. That approach has various limitations. These were corrected in
Greg Maxwell's suggestion to adjust the difficulty/nBits field directly,
which also has the added benefit of providing incentive for bidirectional
movement during the subsidy period. The description in this email and any
errors are my own.
On Fri, May 8, 2015 at 12:20 AM, Matt Whitlock <bip at mattwhitlock.name>
wrote:
> Between all the flames on this list, several ideas were raised that did
> not get much attention. I hereby resubmit these ideas for consideration and
> discussion.
>
> - Perhaps the hard block size limit should be a function of the actual
> block sizes over some trailing sampling period. For example, take the
> median block size among the most recent 2016 blocks and multiply it by 1.5.
> This allows Bitcoin to scale up gradually and organically, rather than
> having human beings guessing at what is an appropriate limit.
>
> - Perhaps the hard block size limit should be determined by a vote of the
> miners. Each miner could embed a desired block size limit in the coinbase
> transactions of the blocks it publishes. The effective hard block size
> limit would be that size having the greatest number of votes within a
> sliding window of most recent blocks.
>
> - Perhaps the hard block size limit should be a function of block-chain
> length, so that it can scale up smoothly rather than jumping immediately to
> 20 MB. This function could be linear (anticipating a breakdown of Moore's
> Law) or quadratic.
>
> I would be in support of any of the above, but I do not support Mike
> Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
> road without actually solving the problem, and it does so in a
> controversial (step function) way.
>
>
> _______________________________________________
> Bitcoin-development mailing list
> Bitcoin-development at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
>