Keagan McClelland [ARCHIVE] on Nostr:
📅 Original date posted: 2021-03-01
📝 Original message:
> Personally I consider this counterproductive. Apart from the complexity,
> it's not healthy. And the chain grows linearly with storage cost falling
> exponentially, leading to a straightforward conclusion.
The motivation for this change is not to encourage full archival nodes to
prune, but to make it possible for pruned nodes to expand the kind of
archive they retain. Personally I think using falling storage costs to
provide access to more users is more important than using them to justify
raising node requirements.
> Something to consider adding to this proposal is to keep the idea of
> pruning - i.e. retain a sequentially uninterrupted range of the most
> recent blocks.
>
> Many users do not run a node for entirely altruistic reasons - they do
> so, at least in part, because it allows them to use their wallets
> privately. Without this ability, I think the number of users willing to
> run their node in this configuration might be reduced.
>
> Another related thought is to have block density decrease as you go
> backwards towards genesis, so that the data density of the storage
> matches the actual usage of the network, in which (I would imagine) more
> recent blocks are more heavily requested than early ones.
Per my above comments, this change actually capitalizes primarily on those
who wish to do it for more altruistic reasons. Furthermore, doing linear
block scans when you need to query blocks you don't keep does not leak
privacy details the way bloom filters do. You are not signaling to the
peer that there is something specific in that block that you care about,
because you don't actually know. You are signaling only that you do not
have that block right now, which you are already leaking through other
parts of the design. In light of this, I don't think the blocks need to be
in sequential sets at all. If there is no requirement that they be
sequential, uniform randomness will take care of the density problem
automatically.
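To make that last point concrete, here is a minimal sketch in Python of
how a node could decide, independently and uniformly at random, which
blocks to retain. The per-node seed, the keep ratio, and the hash-based
selection are illustrative assumptions, not part of any concrete proposal:

    import hashlib

    def should_keep(node_seed: bytes, block_hash: bytes,
                    keep_ratio: float) -> bool:
        """Decide deterministically whether this node retains a block.

        Hashing a per-node seed with the block hash yields a value that
        is uniform across blocks and independent across nodes, so each
        block ends up retained by roughly keep_ratio of the network
        without any coordination or sequential ranges.
        """
        digest = hashlib.sha256(node_seed + block_hash).digest()
        # Interpret the first 8 bytes as a fraction in [0, 1).
        return int.from_bytes(digest[:8], "big") / 2**64 < keep_ratio

Because the hash output is uniform regardless of block height, older and
newer blocks end up equally covered across the network, which is the
density property described above.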
Keagan
On Mon, Mar 1, 2021 at 4:20 AM Eric Voskuil via bitcoin-dev <
bitcoin-dev at lists.linuxfoundation.org> wrote:
> On Sun, Feb 28, 2021 at 10:18 AM Leo Wandersleb via bitcoin-dev <
> bitcoin-dev at lists.linuxfoundation.org> wrote:
>
> > Only headers need to be downloaded sequentially so downloading relevant
> > blocks from one node is totally possible with gaps in between.
>
> In fact this is exactly how libbitcoin v4 works. We download and store
> blocks in parallel. In the case of a restart, block gaps are repopulated.
> Given that headers are validated, we go after the most responsive nodes.
> Based on standard deviation, we drop the slowest peers and rebalance load
> to new/empty channels. We make ordered but not necessarily sequential
> requests. There is no distinction between "initial" block download, a
> restart, or a single or few blocks at the top. So it's referred to as
> continuous parallel block download.
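(As a rough illustration of the standard-deviation cutoff Eric describes,
though not libbitcoin's actual code, the peer-dropping policy might look
like the following Python sketch, where the per-peer response-time map and
the k threshold are assumptions:)

    import statistics

    def peers_to_drop(response_times: dict, k: float = 2.0) -> list:
        """Return peers whose block response time is an outlier.

        Peers slower than mean + k * stddev across all peers are
        dropped; their outstanding requests can then be rebalanced to
        new/empty channels.
        """
        times = list(response_times.values())
        if len(times) < 2:
            return []  # not enough samples to estimate a deviation
        mean = statistics.mean(times)
        stdev = statistics.stdev(times)
        return [p for p, t in response_times.items()
                if t > mean + k * stdev]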
>
> But we don't prune. Personally I consider this counterproductive. Apart
> from the complexity, it's not healthy. And the chain grows linearly with
> storage cost falling exponentially, leading to a straightforward conclusion.
>
> e
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>