Marc Bevand [ARCHIVE] on Nostr:
📅 Original date posted:2017-11-16
📝 Original message:
It occurred to me that we could push the classic concept of pruning even further: we could significantly shrink the blockchain, as well as reduce the amount of network traffic during initial block download, by doing something I would call protocol-level pruning (PLP). As of today, this would reduce the amount of data a full node must store by a factor of about 50, hence enabling massive on-chain scaling.
The idea behind PLP is to serialize the UTXO set in a standardized way, and
publish a hash of it in the block header so that the blockchain commits to
it. Since hashing and verifying it is a moderately intensive operation,
perhaps the UTXO set hash should be published only once every 576 blocks (4
days).
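For concreteness, here is a minimal sketch of what "serialize the UTXO set in a standardized way and hash it" might look like; the UTXO fields, the sort order, and the use of a single SHA-256 pass are placeholder assumptions of mine, not part of the proposal:

    import hashlib
    import struct

    def hash_utxo_set(utxos):
        """Hash a canonical serialization of the full UTXO set.

        utxos: iterable of (txid: bytes, vout: int, amount: int, script_pubkey: bytes)
        """
        h = hashlib.sha256()
        # Deterministic ordering: sort by outpoint (txid, vout).
        for txid, vout, amount, script_pubkey in sorted(utxos, key=lambda u: (u[0], u[1])):
            h.update(txid)                           # 32-byte transaction id
            h.update(struct.pack("<I", vout))        # output index
            h.update(struct.pack("<q", amount))      # value in satoshis
            h.update(struct.pack("<I", len(script_pubkey)))
            h.update(script_pubkey)
        return h.digest()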
When a new Bitcoin node joins the network, it would download only the block headers (not the block data), identify the most recent block containing a UTXO set hash, and download the corresponding UTXO set from its peers. From that point on, it would download and verify all blocks as normal.
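A rough sketch of that bootstrap path, reusing hash_utxo_set from the sketch above; peer.get_headers, peer.get_utxo_snapshot, peer.get_block, validate_header_chain, connect_block, and the utxo_set_hash header field are all hypothetical names standing in for whatever P2P messages and validation code would actually carry this out:

    COMMITMENT_INTERVAL = 576

    def bootstrap(peer, genesis_hash):
        # Headers-only sync: ~80 bytes per block instead of full block data.
        headers = peer.get_headers(genesis_hash)
        validate_header_chain(headers)            # proof-of-work / chain-work checks as today

        # Locate the most recent header carrying a UTXO set commitment.
        tip_height = len(headers) - 1
        commit_height = tip_height - (tip_height % COMMITMENT_INTERVAL)
        committed_hash = headers[commit_height].utxo_set_hash

        # Fetch the ~3 GB UTXO snapshot instead of ~150 GB of blocks,
        # and check it against the commitment before trusting it.
        utxos = peer.get_utxo_snapshot(commit_height)
        assert hash_utxo_set(utxos) == committed_hash

        # From here on, download and fully verify blocks as any full node does.
        for height in range(commit_height + 1, tip_height + 1):
            block = peer.get_block(headers[height].hash)
            connect_block(block, utxos)
        return utxos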
Every 576 blocks, nodes serialize their UTXO set and verify that its hash matches the one published in the blockchain. Doing so becomes a new part of the consensus rules. The last 576 blocks could then be permanently discarded, as they are no longer useful.
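Continuing the sketch, the new consensus check and the pruning step could look roughly like this (ConsensusError and block_store.discard_at_or_below are again made-up names):

    def on_block_connected(block, height, utxos, block_store):
        # New consensus rule: at every commitment height, the locally
        # computed UTXO set hash must equal the one published in the block.
        if height % COMMITMENT_INTERVAL == 0:
            if hash_utxo_set(utxos) != block.header.utxo_set_hash:
                raise ConsensusError("UTXO set commitment mismatch at height %d" % height)
            # Once the commitment is verified, older blocks are no longer
            # needed (modulo the reorg window discussed further down).
            block_store.discard_at_or_below(height)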
Today the serialized UTXO set is about 3 GB and the blockchain is about 150 GB. PLP would therefore cut the amount of data stored by full nodes by a factor of ~50, since they would only have to store the UTXO set plus at most 576 blocks.
One trivial optimization is possible: to avoid hashing the entire UTXO set every 576 blocks (which would take multiple seconds even on a fast machine), the UTXO set serialization could be a sparse merkle tree <https://eprint.iacr.org/2016/683.pdf>, which would allow on-the-fly recomputation of the hash as new blocks are mined: when a UTXO is added to (or removed from) the tree, only a small number of hash operations are needed to recalculate the UTXO set merkle tree root hash.
Maybe we don't even need sparse merkle trees; a regular merkle tree might suffice: the tree leaves would be small groups of UTXOs (some bits in the ID/hash of a UTXO would determine which leaf it belongs to).
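To illustrate that simpler variant, here is a toy sketch where UTXOs are grouped into leaf buckets by the leading bits of a hash of their outpoint, so that adding or removing one UTXO rehashes only its bucket plus the log2(number of buckets) interior nodes above it; BUCKET_BITS and all hashing details are arbitrary choices for illustration:

    import hashlib

    BUCKET_BITS = 16                      # 65536 leaf buckets (arbitrary)
    NUM_BUCKETS = 1 << BUCKET_BITS

    def _h(data):
        return hashlib.sha256(data).digest()

    class BucketedUtxoTree:
        def __init__(self):
            self.buckets = [dict() for _ in range(NUM_BUCKETS)]   # outpoint -> serialized UTXO
            # levels[0] = leaf hashes, levels[-1] = [root]
            self.levels = [[_h(b"") for _ in range(NUM_BUCKETS)]]
            while len(self.levels[-1]) > 1:
                prev = self.levels[-1]
                self.levels.append([_h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])

        def _bucket_index(self, outpoint):
            # Top BUCKET_BITS bits of the outpoint hash select the leaf bucket.
            return int.from_bytes(_h(outpoint)[:4], "big") >> (32 - BUCKET_BITS)

        def _rehash_leaf(self, idx):
            # Re-serialize this bucket in a deterministic order...
            payload = b"".join(op + data for op, data in sorted(self.buckets[idx].items()))
            self.levels[0][idx] = _h(payload)
            # ...then update only the path from this leaf up to the root.
            for level in range(1, len(self.levels)):
                idx //= 2
                left = self.levels[level - 1][2 * idx]
                right = self.levels[level - 1][2 * idx + 1]
                self.levels[level][idx] = _h(left + right)

        def add(self, outpoint, serialized_utxo):
            idx = self._bucket_index(outpoint)
            self.buckets[idx][outpoint] = serialized_utxo
            self._rehash_leaf(idx)

        def remove(self, outpoint):
            idx = self._bucket_index(outpoint)
            del self.buckets[idx][outpoint]
            self._rehash_leaf(idx)

        @property
        def root(self):
            return self.levels[-1][0]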
Unlike with classic pruning mode, *ALL* full nodes on the network could switch to PLP. There would no longer be any need for a node to archive the entire blockchain.
I can think of one downside of PLP: nodes would no longer be able to handle
reorgs that go further back than the last UTXO set hash published on the
chain (since previous blocks have been discarded). So, perhaps keeping
around the last N*576 blocks (N=10?) would be a sufficient workaround.
Thoughts?
-Marc