Hampus Sjöberg [ARCHIVE] on Nostr:
📅 Original date posted:2016-12-10
📝 Original message:
> While disk space requirements might not be a big problem, block
> propagation time is.
Is block propagation time really still a problem? Compact blocks and FIBRE
should help here.
> Bitcoin, because of its fundamental design, can scale by using offchain
> solutions.
I agree.
However, I believe that on-chain scaling will be needed regardless of which
off-chain solution gains popularity.
2016-12-10 11:44 GMT+01:00 s7r via bitcoin-dev <bitcoin-dev at lists.linuxfoundation.org>:
> t. khan via bitcoin-dev wrote:
> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> > difficulty (aka Block75)
> >
> > The every two-week adjustment of difficulty has proven to be a
> > reasonably effective and predictable way of managing how quickly blocks
> > are mined. Bitcoin needs a reasonably effective and predictable way of
> > managing the maximum block size.
> >
> > It’s clear at this point that human beings should not be involved in the
> > determination of max block size, just as they’re not involved in
> > deciding the difficulty.
> >
> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> > passing the decision to miners/pool operators, the max block size should
> > be adjusted every two weeks (2016 blocks) using a system similar to how
> > difficulty is calculated.
> >
> > Put another way: let’s stop thinking about what the max block size
> > should be and start thinking about how full we want the average block to
> > be regardless of size. Over the last year, we’ve had averages of 75% or
> > higher, so aiming for 75% full seems reasonable, hence naming this
> > concept ‘Block75’.
> >
> > The target capacity over 2016 blocks would be 75%. If the last 2016
> > blocks are more than 75% full, add the difference to the max block size.
> > Like this:
> >
> > MAX_BLOCK_BASE_SIZE = 1000000
> > TARGET_CAPACITY = 750000
> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> > TARGET_CAPACITY
> >
> > To check if a block is valid: block size ≤ (MAX_BLOCK_BASE_SIZE +
> > AVERAGE_OVER_CAP)
> >
> > For example, if the last 2016 blocks are 85% full (average block is 850
> > KB), add 10% to the max block size. The new max block size would be
> > 1,100 KB until the next 2016 blocks are mined, then reset and
> > recalculate. The 1,000,000 byte limit that exists currently would
> > remain, but would effectively be the minimum max block size.
> >
> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
> > now that means they average 935 KB out of the 1,100 KB max block size.
> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> > that to make the new max block size of 1,185 KB.
> >
> > Another two weeks passes. This time, the average block is 1,050 KB. The
> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
> > minus the 75% capacity target, so 30% added to max block size).
> >
> > Repeat every 2016 blocks, forever.
> >
> > If Block75 had been applied at the difficulty adjustment on November
> > 18th, the max block size would have been 1,080 KB, as the average block
> > during that period was 83% full, so 8% is added to the 1,000 KB limit.
> > The current size, after the December 2nd adjustment, would be 1,150 KB.
> >
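For concreteness, the adjustment described above can be written as a
small Python sketch. The function name and the max(0, ...) floor are my
own reading of "would effectively be the minimum max block size"; sizes
are in bytes:

    MAX_BLOCK_BASE_SIZE = 1000000  # current 1 MB limit, also the floor
    TARGET_CAPACITY = 750000       # 75% of the base size

    def block75_max_size(avg_block_size):
        # Max size for the next 2016 blocks, given the average size
        # of the previous 2016 blocks.
        average_over_cap = max(0, avg_block_size - TARGET_CAPACITY)
        return MAX_BLOCK_BASE_SIZE + average_over_cap

    # The worked examples quoted above:
    assert block75_max_size(850000) == 1100000   # 85% full -> 1,100 KB
    assert block75_max_size(935000) == 1185000   # 93.5% -> 1,185 KB
    assert block75_max_size(1050000) == 1300000  # 105% -> 1,300 KB
    assert block75_max_size(830000) == 1080000   # Nov 18th -> 1,080 KB

A block in the current period would then be valid if its size is at
most block75_max_size(average size of the previous 2016 blocks).
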
> > Block75 would allow the max block size to grow (or shrink) in response
> > to transaction volume, and it would do so predictably, reasonably
> > quickly, and in a way that prevents wild swings in block size or
> > transaction fees. It attempts to keep blocks at 75% total capacity
> > over each two-week period, the same way difficulty tries to keep
> > blocks mined every ten minutes. It also keeps blocks as small as
> > possible.
> >
> > Thoughts?
> >
> > -t.k.
> >
>
> I like the idea. It is good wrt growing the max block size
> automatically without human action, but the main problem (or question)
> is not how to grow this number, it is what number the network can
> handle, considering both miners and users. While disk space requirements
> might not be a big problem, block propagation time is. The time required
> for a block to propagate through the network (or at least to all the
> miners) is directly dependent on its size. If blocks take too much time
> to propagate through the network, the orphan rate will increase in
> unpredictable ways. For example, if the internet speed in China is worse
> than in Europe, and miners in China have more than 50% of the hashing
> power, blocks mined by European miners might get orphaned.
>
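For intuition on the orphan-rate point: block arrivals are roughly
Poisson with a 600-second mean interval, so a block that takes t seconds
to reach the rest of the hashing power is orphaned with probability
about 1 - exp(-t/600), i.e. roughly 1% for a 6-second delay. A quick
estimate (illustrative Python, not from the original post):

    from math import exp

    def orphan_probability(prop_delay_seconds, block_interval=600.0):
        # Chance that a competing block is found elsewhere while ours
        # is still propagating, assuming Poisson block arrivals.
        return 1.0 - exp(-prop_delay_seconds / block_interval)

    print(orphan_probability(6.0))   # ~0.00995, about 1%
    print(orphan_probability(30.0))  # ~0.0488, about 5%
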
> The system as described can also be gamed by filling the network with
> transactions. Miners have a monetary interest to include as many
> transactions as possible in a block in order to collect the fees.
> Regardless of how you think about it, there has to be a maximum block
> size that the network will allow as a consensus rule. Increasing it
> dynamically based on transaction volume will eventually reach a point
> where the number gets big enough that it breaks things. Bitcoin, because
> of its fundamental design, can scale by using offchain solutions.
>
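To put a number on the gaming concern: if miners keep every block 100%
full, the period average equals the current max, so (using the
block75_max_size sketch above) the cap grows by a constant 250 KB every
2016 blocks, roughly 6.5 MB per year, with no upper bound:

    max_size = 1000000
    for period in range(1, 5):
        max_size = block75_max_size(max_size)  # every block mined full
        print(period, max_size)
    # prints: 1 1250000 / 2 1500000 / 3 1750000 / 4 2000000
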
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>