Yuval Kogman [ARCHIVE] on Nostr:
📅 Original date posted:2019-07-19
📝 Original message:Hi,
On Fri, 19 Jul 2019 at 17:49, Mike Brooks <m at ib.tc> wrote:
> Users will adopt whatever the client accepts - this feature would be
> transparent.
>
My skepticism was based on an assumption on my part that most such data is
produced by actors with a track record of neglecting transaction
efficiency. I'd be happy to be proven wrong, but considering, say, the
usage rates of native witness outputs, or the fraction of transactions
with more than 2 outputs, so far I see little evidence of widespread
adoption of cost saving measures. Assuming this is intended as a new
script version, all fully validating nodes would need to commit to keeping
all such data indefinitely before they can enforce the rules that make
transactions benefiting from this opcode safe to broadcast.
That said, if successful, the main concern is still that of address reuse.
Currently the protocol provides no incentive to reuse addresses, and with
BIP32 wallets there are fewer reasons to do so, but this proposal would
introduce such an incentive for users who would otherwise generate new
addresses (disregarding the ones that already reuse addresses anyway),
which is problematic for privacy and fungibility.
Since address reuse has privacy implications, I think it's important to
draw a distinction between clients accepting such transactions and clients
producing them. If the latter were done transparently that would be very
concerning IMO, and even the former would not be transparent for users who
currently run pruning nodes.
> I'm not sure how an O(1) time complexity leads to DoS, that seems like a
> very creative jump.
>
For what it's worth, that was in reference to hypothetical deduplication
only at the p2p layer, similar to compact blocks, but on further
reflection I'd like to retract that, since both scenarios I had in mind
seem easily mitigated.
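(To make that concrete, the kind of relay-only deduplication I had in mind
is sketched below in Python; it's purely illustrative and not part of any
proposal or existing implementation - the wire representation shrinks, but
every node still reconstructs, validates and stores the full scripts.)

    # Illustrative relay-layer dedup (hypothetical, not Bitcoin Core code):
    # repeated scriptPubKeys within a block are replaced by back-references
    # on the wire and expanded again before validation/storage, so consensus
    # rules and on-disk data are unchanged.
    def compress_scripts(scripts):
        seen, out = {}, []
        for i, spk in enumerate(scripts):
            if spk in seen:
                out.append(("ref", seen[spk]))   # short back-reference
            else:
                seen[spk] = i
                out.append(("script", spk))      # full script bytes
        return out

    def expand_scripts(compressed):
        out = []
        for kind, value in compressed:
            out.append(out[value] if kind == "ref" else value)
        return out

    spk_a = bytes.fromhex("0014") + b"\xaa" * 20  # made-up P2WPKH scripts
    spk_b = bytes.fromhex("0014") + b"\xbb" * 20
    scripts = [spk_a, spk_b, spk_a]
    assert expand_scripts(compress_scripts(scripts)) == scripts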
> Based on this response, it makes me want to calculate the savings, what
> if it was a staggering reduction in the tree size?
>
Assuming past address reuse rates are predictive, they only place an upper
bound on the potential size savings, so personally I would not find that
very compelling. Even if the realized size savings were substantial, I'm
not convinced the benefits outweigh the downsides (increased validation
costs, monotonically growing unprunable data, and a direct incentive for
address reuse), especially when compared to other methods/proposals that
can reduce on-chain footprint while generally improving privacy and
reducing validation costs for others (e.g. batching, lightning, MAST for
sufficiently large scripts, Schnorr signatures (MuSig, adaptor
signatures), {tap,graft}root).
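(And if someone does want to calculate the savings, the upper-bound
estimate I have in mind looks roughly like the sketch below; every number
in it is a made-up placeholder rather than chain data - the point is only
that the historical reuse rate caps what this can ever save.)

    # Back-of-envelope upper bound (placeholder inputs, not measurements):
    # at best, each output paying a previously seen script saves the
    # difference between a full scriptPubKey and a short back-reference.
    reused_outputs = 100_000_000  # hypothetical number of reusing outputs
    script_size = 25              # bytes in a P2PKH scriptPubKey
    ref_size = 5                  # bytes assumed for a back-reference
    saved = reused_outputs * (script_size - ref_size)
    print("upper bound on savings: ~%.1f GB" % (saved / 1e9))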
Regards,
Yuval
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20190719/f22e2c63/attachment.html>