Matt Corallo [ARCHIVE] on Nostr:
📅 Original date posted: 2012-06-15
📝 Original message:
On Thu, 2012-06-14 at 13:52 +0200, Mike Hearn wrote:
> > filterinit(false positive rate, number of elements): initialize
> > filterload(data): input a serialized bloom filter table metadata and data.
>
> Why not combine these two?
I believe it's because it allows the node which will have to use the
bloom filter to scan transactions to choose how much effort it wants to
put into each transaction on behalf of the SPV client. Though it's
generally a small amount of CPU time/memory, if we end up with a drastic
split between SPV nodes and only a few large network nodes, those nodes
may wish to limit the CPU/memory each peer is allowed to use,
which may be important if you are serving 1000 SPV peers. It offers a
sort of negotiation between the SPV client and the full node instead of
letting the client specify the filter parameters outright.
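
For what it's worth, here is a minimal sketch (mine, not from the
proposal) of the kind of parameter derivation and clamping a full node
could apply to whatever the client sends in 'filterinit'; the caps and
the function name are hypothetical.

#include <algorithm>
#include <cmath>
#include <cstdint>

struct BloomParams {
    uint32_t nFilterBytes; // size of the bit array, in bytes
    uint32_t nHashFuncs;   // number of hash functions
};

// Standard bloom filter math: for n elements and target false positive
// rate p, the optimal bit count is m = -n*ln(p)/ln(2)^2 and the optimal
// hash function count is k = (m/n)*ln(2). The node then clamps the
// result to whatever it is willing to spend per peer.
BloomParams NegotiateFilter(uint32_t nElements, double dFalsePositiveRate)
{
    // Hypothetical per-peer caps a node serving many SPV clients might impose.
    const uint32_t MAX_FILTER_BYTES = 36000;
    const uint32_t MAX_HASH_FUNCS = 50;

    double m = -1.0 * nElements * std::log(dFalsePositiveRate)
               / (std::log(2.0) * std::log(2.0));
    double k = (m / nElements) * std::log(2.0);

    BloomParams params;
    params.nFilterBytes = std::min<uint32_t>((uint32_t)std::ceil(m / 8.0), MAX_FILTER_BYTES);
    params.nHashFuncs = std::min<uint32_t>((uint32_t)std::ceil(k), MAX_HASH_FUNCS);
    return params;
}
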
>
> > 'filterload' and 'filteradd' enable special behavior changes for
> > 'mempool' and existing P2P commands, whereby only transactions
> > matching the bloom filter will be announced to the connection, and
> > only matching transactions will be sent inside serialized blocks.
>
> Need to specify the format of how these arrive. It means that when a
> new block is found instead of inv<->getdata<->block we'd see something
> like inv<->getdata<->merkleblock where a "merkleblock" structure is a
> header + list of transactions + list of merkle branches linking them
> to the root. I think CMerkleTx already knows how to serialize this,
> but it redundantly includes the block hash which would not be
> necessary for a merkleblock message.
A series of CMerkleTx's might also end up redundantly encoding branches
of the merkle tree, so, yes, as part of the BIP/implementation, I would
say we probably want a CFilteredBlock or similar.
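
To make that concrete, here is a rough, self-contained sketch of what a
CFilteredBlock/"merkleblock" layout could look like, with the block
header and the shared interior merkle nodes stored once instead of once
per CMerkleTx; the stand-in types and field names are mine, not anything
proposed on the list.

#include <array>
#include <cstdint>
#include <vector>

// Stand-ins for Bitcoin's uint256 and a serialized CTransaction.
using Hash256 = std::array<uint8_t, 32>;
using SerializedTx = std::vector<uint8_t>;

// Hypothetical filtered-block message: one header, the transactions that
// matched the peer's bloom filter, and the merkle branch hashes needed to
// link them to the root, deduplicated across transactions.
struct FilteredBlock {
    std::array<uint8_t, 80> header;       // the full 80-byte block header
    std::vector<SerializedTx> vMatchedTx; // transactions matching the filter
    std::vector<Hash256> vBranchHashes;   // interior merkle nodes, shared
    std::vector<uint32_t> vBranchIndex;   // per-tx offsets into vBranchHashes
};

Whether the branches end up encoded as explicit per-transaction indices
like this or as a single depth-first walk over a partial merkle tree is
exactly the kind of detail the BIP text should pin down.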