angus [ARCHIVE] on Nostr:
📅 Original date posted:2022-10-19
📝 Original message:> Let's allow a miner to include transactions until the block is filled, let's call this structure (coining a new term 'Brick'), B0. [brick=block that doesn't meet the difficulty rule and is filled of tx to its full capacity]
> Since PoW hashing is continuously active, Brick B0 would have a nonce corresponding to a minimum numeric value of its hash found until it got filled.
So, if I'm understanding right, this amounts to "reduce the difficulty required for a block (a 'brick') to be valid when the mempool contains more than one block's worth of transactions, so transactions get confirmed faster", using 'bricks' as short-lived sidechains that get merged into blocks?
This would have the same fundamental problem as just making the max blocksize bigger - it increases the rate of growth of storage required for a full node, because you're allowing blocks/bricks to be created faster, so there will be more confirmed transactions to store in a given time window than under current Bitcoin rules.
Bitcoin doesn't take the size of the mempool into account when adjusting the difficulty because the time-between-blocks is 'more important' than avoiding congestion where transactions take ages to get into a block. The fee mechanism in part allows users to decide how urgently they want their tx to get confirmed, and high fees when there is congestion also disincentivises others from transacting at all, which helps arrest mempool growth.
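To make that concrete, here is a rough sketch of the retargeting rule (simplified, not the exact consensus code): every 2016 blocks the target is rescaled by how long those blocks actually took, clamped to a factor of 4 either way. Note that the only inputs are timestamps and the previous target - the mempool never appears.

    # Simplified sketch of Bitcoin's difficulty retargeting (illustrative, not consensus code).
    TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

    def next_target(old_target: int, first_block_time: int, last_block_time: int) -> int:
        actual_timespan = last_block_time - first_block_time
        # Clamp to a factor of 4 in either direction, as the real rule does.
        actual_timespan = max(TARGET_TIMESPAN // 4, min(actual_timespan, TARGET_TIMESPAN * 4))
        # A larger (easier) target means lower difficulty; mempool size plays no part here.
        return old_target * actual_timespan // TARGET_TIMESPAN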
I'd imagine we'd also see a 'highway widening' effect with this kind of proposal - if you increase the tx volume Bitcoin can settle in a given time, that will quickly be used up by more people transacting until we're back at a congested state again.
> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcasted and nodes would have it on in a separate fork as usual.
How do we know the hash the miner found for a brick was their 'best effort' and they're not just being lazy? There's an element of luck in the best hash a miner can find: sometimes it takes a long time to meet the difficulty requirement and sometimes it happens almost instantly.
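As a toy illustration of that luck (made-up numbers, just to show the spread): the number of hashes needed to meet a target is roughly geometrically distributed, so the time-to-success is heavily skewed, with a long tail.

    import random

    # Toy mining-luck simulation (hypothetical parameters, not real network figures).
    p = 1e-6            # chance a single hash meets the target (assumed)
    hashrate = 1e5      # hashes per second (assumed)

    times = sorted(random.expovariate(p) / hashrate for _ in range(1000))
    print("median:", times[500], "s   fastest:", times[0], "s   slowest:", times[-1], "s")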
How would we know how 'busy' the mempool was at the time a brick from months or years ago was mined?
Nodes have to be able to run through the entire history of the blockchain and check everything is valid. They have to do this using only the previous blocks they've already validated - they won't have historical snapshots of the mempool (they'll build and mutate a UTXO set, but that's different). Transactions don't contain a 'created-at' time that you could compare to the block's creation time (and if they did, you probably couldn't trust it).
With the current system, Nodes can calculate what the difficulty should be for every block based on those previous blocks' times and difficulties - but how would you tell an old brick whose difficulty was legitimately low because the mempool was busy at the time from a fraudulent brick that is actually invalid because there isn't enough work in it? You can't solve this by adding some mempool-size field to bricks, as you'd have to blindly trust miners not to lie about it.
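Very roughly, validating an old header looks something like the sketch below (a simplified illustration, not real node code, and expected_target_from is a hypothetical helper): everything that gets checked is recomputed from earlier blocks the node has already validated, so a claimed "the mempool was busy" field would have nothing to be checked against.

    # Sketch of what a node can verify about an old header (illustrative only).
    def validate_header(header, prior_headers) -> bool:
        required_target = expected_target_from(prior_headers)  # hypothetical helper: deterministic from prior headers
        if header.target != required_target:
            return False                      # wrong difficulty for this height
        if int.from_bytes(header.hash, "big") > required_target:
            return False                      # not enough work in the hash
        # A field like header.claimed_mempool_size could not be verified at all:
        # no one keeps a consensus-checkable record of historical mempools.
        return True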
If we can't be (fairly) certain that a miner put a minimum amount of work into finding a hash, then you lose all the strengths of PoW.
The difficulty requirement is there so that mining blocks is hard, which in turn makes it very hard to intentionally fork the chain, re-mine previous blocks, overtake the other fork, and get the network to re-org onto your chain. If you weaken that requirement, there's no proof of work undergirding consensus on the ledger's state.
Secondly, where does the block reward go? Do brick miners get a fraction of the reward proportionate to the fraction of the difficulty they reached? Later, when bricks become part of a block, who gets the block reward for that complete block? Who gets the fees? No miner is going to bother mining a merge-bricks-into-block block if the reward isn't the same or better than just mining a regular block, but each miner of the bricks in it would also want a reward. And we can't give all of them a full block reward, as that would increase Bitcoin's issuance rate, which might be the only thing people are more strongly opposed to than increasing the blocksize! xD
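A back-of-the-envelope example (6.25 BTC subsidy as of 2022, brick count purely hypothetical): giving every brick miner a full subsidy multiplies issuance, while splitting one subsidy leaves each brick miner with far less than a normal block reward.

    # Issuance arithmetic (subsidy 6.25 BTC as of 2022; N_BRICKS is a made-up example).
    SUBSIDY = 6.25
    N_BRICKS = 6
    print("full subsidy per brick:", N_BRICKS * SUBSIDY, "BTC per ~10 min")  # 37.5 BTC - inflates issuance
    print("one subsidy split N ways:", SUBSIDY / N_BRICKS, "BTC per brick")  # ~1.04 BTC - weak incentive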
> At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.
>
> Nodes would allow incoming regular blocks and bricks with hashes that don't satisfy the difficulty rule, provided the brick is fully filled of transactions. Bricks not fully filled would be rejected as invalid to prevent spam (except if constitutes the last brick of a brickchain, explained below).
>
> Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced and the accumulated PoW calculated using mathematics (every brick contains a 'minimum hash found', when a series of 'minimum hashes' is computationally equivalent to the network difficulty is then the full 'brickchain' is valid as a Block.
But the brick sidechain has to become part of the main blockchain - and as you've got N bricks in the time there should be 1 block, and each brick is a full block, it feels like this is just a convoluted way to increase the blocksize? Every transaction has to be in the ledger somewhere to be confirmed, so even if the block itself is small and only stores references to the bricks, Nodes are going to have to use storage to keep all those full bricks.
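(For reference, the conventional way to quantify accumulated work is the expected number of hashes implied by each target, roughly as sketched below - a simplification of how chainwork is computed. Summing it over N bricks says nothing about how many transactions those bricks carry, which is where the storage cost comes from.)

    # Simplified sketch of cumulative work accounting (cf. Bitcoin's chainwork).
    def work_from_target(target: int) -> int:
        # Expected number of hashes needed to produce a hash <= target.
        return 2**256 // (target + 1)

    def accumulated_work(targets) -> int:
        return sum(work_from_target(t) for t in targets)

    # N bricks at 1/N of the full difficulty sum to about one block's expected work,
    # but each brick still holds a full block's worth of transactions that must be stored.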
It also seems that you'd have to require the bricks sidechain to always be merged into the next actual block - it wouldn't work if the brick chain could keep growing while the actual blockchain advances at the same time (because there'd be risks of double-spends where one tx is in the brick chain and a conflicting one is in a new block), which I think further makes this feel like a roundabout way of increasing the blocksize.
Despite my critique, this was interesting to think about - and hopefully this is useful (and hopefully I've not seriously misunderstood or said something dumb).
Angus