Tadas Varanavičius [ARCHIVE] on Nostr:
📅 Original date posted: 2013-03-11
📝 Original message: On 03/11/2013 08:17 PM, Benjamin Lindner wrote:
> The problem of UTXO in principle scales with the block size limit. Thus it should be fixed BEFORE you consider increasing the block size limit. Otherwise you just kick the can down the road, making it bigger.
Let's assume Bitcoin has scaled up to 2000 tx/s. We all want this,
right? https://en.bitcoin.it/wiki/Scalability. Block size would be 500 MB.
CPU, network, and archival blockchain storage all seem to be solvable.
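(Back of the envelope, assuming an average transaction of roughly 415
bytes: 2000 tx/s × 600 s/block ≈ 1.2 million tx/block ≈ 500 MB.)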
Let's say SatoshiDice-like systems are sending informational
transactions that produce unspendable outputs, because they can and
because users are paying for it anyway (real-life use has already proved
this). 400 unspendable outputs per second would be realistic.
This would bloat the UTXO set at a rate of 52 GB/year. That's a very
big memory leak, and it counts only the unspendable outputs.
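For anyone who wants to rerun the arithmetic under their own
assumptions, here is a quick Python sketch. The per-entry size is an
assumption (a real UTXO entry carries at least the 32-byte txid, an
output index, a value, and a script), so the yearly growth can be
substantially larger than the figure above:

SECONDS_PER_YEAR = 365 * 24 * 3600  # about 31.5 million

def utxo_growth_gb_per_year(outputs_per_sec, bytes_per_entry):
    # Annual UTXO-set growth for a given spam rate and assumed entry size.
    return outputs_per_sec * bytes_per_entry * SECONDS_PER_YEAR / 1e9

# 400 unspendable outputs/s from above; the entry sizes are guesses.
for bytes_per_entry in (40, 64, 80):
    gb = utxo_growth_gb_per_year(400, bytes_per_entry)
    print("%d bytes/entry -> %.0f GB/year" % (bytes_per_entry, gb))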
Bitcoin cannot scale up until such dust output spamming is discouraged
at the protocol level.