Btc Ideas [ARCHIVE] on Nostr:
📅 Original date posted:2017-08-27
📝 Original message:I also like only keeping the last "n" blocks. Every "n" blocks, all the previous balances are kept, but the transactions are deleted. There's good enough record keeping, and there's excessive. Part of scaling is being able to get the blockchain and sync quickly.
Jason
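
As a rough illustration of the idea above, here is a minimal Python sketch, assuming hypothetical block and transaction structures (this is not an existing Bitcoin mechanism): a node rebuilds current balances from the latest snapshot plus the last "n" blocks, never needing the transactions that were deleted before the snapshot.

# Minimal sketch only; "snapshot_balances", "recent_blocks" and the block/tx
# layout are hypothetical, not how Bitcoin nodes actually store or sync data.
N_RECENT = 1000  # the "n" most recent blocks kept in full

def sync_from_snapshot(snapshot_balances, recent_blocks):
    # Start from the balances carried forward by the snapshot...
    balances = dict(snapshot_balances)
    # ...then replay only the transactions in the last n retained blocks.
    for block in recent_blocks[-N_RECENT:]:
        for tx in block["txs"]:
            balances[tx["from"]] = balances.get(tx["from"], 0) - tx["amount"]
            balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return balances
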
-------- Original Message --------
On Aug 27, 2017, 05:31, Thomas Guyot-Sionnest via bitcoin-dev wrote:
> Pruning is already implemented in the nodes... Once enabled, only unspent outputs and the most recent blocks are kept. IIRC there was also a proposal to include the UTXO set in some blocks for SPV clients to use, but that would be in addition to the blockchain data.
>
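
For concreteness, here is a toy Python sketch of what a pruned node keeps, assuming hypothetical names and a simplified layout (Bitcoin Core's actual storage is different): the header chain and the set of unspent outputs are retained in full, while raw block data is kept only for the most recent blocks.

# Toy sketch; names and layout are hypothetical and simplified, not Bitcoin
# Core's actual pruning implementation.
KEEP_RECENT = 288  # keep roughly the last two days of raw blocks

class PrunedNodeStore:
    def __init__(self):
        self.headers = []      # full header chain (small, always kept)
        self.utxo_set = {}     # (txid, vout) -> unspent output data
        self.raw_blocks = {}   # height -> raw block, recent heights only

    def connect_block(self, height, header, raw_block, spent, created):
        self.headers.append(header)
        self.raw_blocks[height] = raw_block
        for outpoint in spent:          # outputs consumed by this block
            self.utxo_set.pop(outpoint, None)
        self.utxo_set.update(created)   # outputs created by this block
        # Drop raw data for blocks that are no longer recent; the UTXO set
        # and headers stay, so validation of new blocks can continue.
        for old in [h for h in self.raw_blocks if h <= height - KEEP_RECENT]:
            del self.raw_blocks[old]

In Bitcoin Core, pruning is enabled with the prune setting (a target size in MiB); the sketch above only illustrates the general shape of what is kept and what is discarded.
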
> Implementing your solution is impossible because there is no way to determine the authenticity of the blockchain midway. The proof that a block hash leads to the genesis block is also proof of all the work that has been spent on it (the years of hashing). At the very least we would have to keep all blocks up to a hard-coded checkpoint in the code, which also means that as nodes upgrade and prune more blocks, older nodes will have difficulty syncing the blockchain.
>
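
The point about authenticity can be shown with a toy check in Python (deliberately simplified: a plain SHA-256 over strings, no proof-of-work or difficulty rules): trusting a block means following the prev-hash links all the way back to a genesis hash you already know, which is exactly what a chain that starts midway cannot provide.

# Toy illustration only: plain SHA-256 over strings, no proof-of-work checks.
# Real block hashing (double SHA-256 over an 80-byte header) is more involved.
import hashlib

def block_hash(block):
    data = block["prev_hash"] + block["payload"]
    return hashlib.sha256(data.encode()).hexdigest()

def chain_is_anchored(blocks, genesis_hash):
    # Walk from the oldest block to the newest, checking each prev-hash link.
    prev = genesis_hash
    for block in blocks:
        if block["prev_hash"] != prev:
            return False    # link broken: this chain does not reach genesis
        prev = block_hash(block)
    return True
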
> Finally, it's not just the addresses and balances you need to save, but also each unspent output's block number, tx position and script, which are required for validation when the output is spent. That's a lot of data that you're suggesting to save every 1000 blocks (and why 1000?), and as said earlier it doesn't even guarantee you can drop older blocks. I'm not even going into the details of making it work (hard fork, large block sync/verification issues, possible attack vectors opened by this...)
>
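
For illustration, the per-output data listed above might be modelled like this in Python (field names are hypothetical and do not match Bitcoin Core's coins database):

# Hypothetical model of one unspent-output entry; field names are illustrative
# and do not match Bitcoin Core's internal coins database.
from dataclasses import dataclass

@dataclass
class UtxoEntry:
    txid: str             # transaction that created the output
    vout: int             # position of the output within that transaction
    height: int           # block number in which it was confirmed
    value_sats: int       # amount in satoshis
    script_pubkey: bytes  # locking script needed to validate the spend
    is_coinbase: bool     # coinbase outputs have extra maturity rules
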
> What is wrong with the current implementation of node pruning that you are trying to solve?
>
> --
> Thomas
>
> On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote:
>
>> Solving the Scalability Issue for Bitcoin
>>
>> I have an idea to solve the scalability problem, which I wish to make public.
>>
>> If I am wrong I hope to be corrected, and if I am right we will all gain by it.
>>
>> Currently each block is hashed, and its contents include the hash of the block preceding it; this chain goes back to the genesis block.
>>
>> What if we decided, for example, to combine and prune the blockchain in its entirety every 999 blocks into one block (genesis block not included in the count)?
>>
>> How would this work? Once block 1000 has been created, the network would wait for a special "pruned block", and until this block was created and verified, block 1001 would not be accepted by any node.
>>
>> This pruned block would replace everything from block 2 to block 1000, leaving only the genesis block. Blocks 2 through 1000 would be summed up to create a single consolidated transaction representing all transactions which occurred in those 999 blocks.
>>
>> The pruned block's hash pointer would be the genesis block.
>>
>> This block would then be verified by the full nodes, and once it was accepted they would be willing to accept a new block (block 1001; the pruned block is not included in the count).
>>
>> The new block 1001 would use the pruned block as its hash pointer, and the count would begin again toward the next 1000. Then the next pruned block would be created, with its hash pointer again referencing the genesis block, and so on.
>>
>> In this way the ledger will always be a maximum of 1000 blocks.
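
To make the proposal concrete, here is a toy Python sketch of the scheme as described, assuming hypothetical block and transaction structures; it is not a workable consensus design and it inherits the objections raised above (in particular, the summary block carries none of the proof of work of the 999 blocks it replaces).

# Toy sketch of the proposal as written; structures are hypothetical and the
# objections above (loss of proof-of-work history, hard fork, etc.) still apply.
import hashlib
from collections import defaultdict

PRUNE_INTERVAL = 999  # blocks 2..1000 get folded into one "pruned block"

def make_pruned_block(genesis_hash, blocks):
    # Net out all transfers in the 999 blocks into one summary "transaction".
    net = defaultdict(int)
    for block in blocks:
        for tx in block["txs"]:
            net[tx["from"]] -= tx["amount"]
            net[tx["to"]] += tx["amount"]
    pruned = {
        "prev_hash": genesis_hash,   # the pruned block points back to genesis
        "summary": dict(net),
    }
    pruned["hash"] = hashlib.sha256(repr(pruned).encode()).hexdigest()
    return pruned

# Block 1001 would then reference pruned["hash"] as its prev_hash, and the
# count toward the next pruned block would start again.
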