Mike Brooks [ARCHIVE] on Nostr:
Original date posted: 2019-07-24
Original message:
Fungibility is a good question for inductive reasoning. After all, what is
a claim without some rigor?
There exists some set of wallets [w] whose satoshi balance is too small
to recover, because it cannot cover the minimum transaction fee required
to confirm a spending transaction. These wallets are un-spendable by
their very nature - and quantifying un-spendable wallets is one way to
measure the fungibility of Bitcoin. The set of un-spendable wallets can
be quantified as follows:
Let 'p' equal the current price per byte for a transaction
Let 'n' equal the number of bytes in the transaction
Let '[a]' be the set of all wallets with a balance
Let '[w]' be the set of un-spendable wallets
[w0] = { x in [a] : balance(x) < p*n }
For example, at p = 1 satoshi per byte and n = 233 bytes, any wallet
holding fewer than 233 satoshis falls into [w0].
Now, let's introduce a savings measure 'k', a constant flat reduction in
transaction size, in bytes, per transaction.
[w1] = { x in [a] : balance(x) < p*(n - k0) }
As the savings 'k' increases, the fee threshold falls, so the set of
un-spendable wallets shrinks:
len([w0]) > len([w1]) > ... > len([wk])
In this case 'k0' could be the savings from moving a P2PKH transaction
to a 233-byte SegWit transaction, 'k1' the further savings of a 148-byte
SegWit+PubRef transaction, and the same model can be used for some
future transaction type that is even smaller. As the savings 'k'
approaches the full transaction size 'n', the set of un-spendable
wallets approaches the empty set.
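As a minimal sketch of this model (hypothetical balances and a flat fee
rate; real dust thresholds also depend on node policy), in Python:

    P = 1            # fee rate in satoshis per byte
    N = 233          # transaction size in bytes (e.g. a SegWit spend)

    # [a]: illustrative set of wallet balances, in satoshis
    a = [50, 100, 150, 200, 250, 400, 1000]

    def unspendable(balances, p, n, k=0):
        """[wk]: wallets whose balance cannot cover the fee p*(n - k)."""
        threshold = p * (n - k)
        return [b for b in balances if b < threshold]

    w0 = unspendable(a, P, N)          # no savings
    w1 = unspendable(a, P, N, k=85)    # e.g. 233 -> 148 bytes via PubRef

    print(len(w0), len(w1))            # prints "4 2" - the set shrinks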
If users are privacy-aware and choose single-use p2sh addresses for all
transactions, they still gain from PubRef, because block weight should
decrease for everyone. There are also cases where a user or merchant
might want to reuse their p2sh hash - and PubRef can provide savings
there. Additionally, the resulting p2sh script could be a multi-sig
script, which could still benefit from PubRef compression. PubRef
improves the fungibility of Bitcoin in two different ways: it reduces
the cost of each transaction, and it increases the maximum number of
transactions that can be confirmed in a block. Which is pretty hot - QED.
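A back-of-the-envelope calculation of those two effects, using the
transaction sizes quoted above (block capacity simplified to 1,000,000
bytes, ignoring the witness discount):

    SEGWIT = 233          # bytes, SegWit transaction
    PUBREF = 148          # bytes, SegWit+PubRef transaction
    BLOCK = 1_000_000     # simplified block capacity in bytes

    fee_savings = 1 - PUBREF / SEGWIT
    extra_txs = BLOCK // PUBREF - BLOCK // SEGWIT

    print(f"per-tx size savings: {fee_savings:.1%}")   # ~36.5%
    print(f"extra txs per block: {extra_txs}")         # 2465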
On Fri, Jul 19, 2019 at 3:48 PM Yuval Kogman <nothingmuch at woobling.org>
wrote:
> Hi,
>
> On Fri, 19 Jul 2019 at 17:49, Mike Brooks <m at ib.tc> wrote:
>
>> Users will adopt whatever the client accepts - this feature would be
>> transparent.
>>
>
> My skepticism was based on an assumption on my part that most such data
> is produced by actors with a track record of neglecting transaction
> efficiency. I'd be happy to be proven wrong, but considering, say, usage
> rates of native witness outputs, or the fraction of transactions with
> more than 2 outputs, so far I see little evidence in support of
> widespread adoption of cost-saving measures. Assuming this is intended
> as a new script version, all fully validating nodes need to commit to
> keeping all data indefinitely before they can enforce the rules that
> make transactions benefiting from this opcode safe to broadcast.
>
> That said, if successful, the main concern is still that of address
> reuse - currently there is no incentive in the protocol to do that, and
> with BIP32 wallets there are fewer reasons to do it as well, but this
> proposal introduces such an incentive for users who would otherwise
> generate new addresses (disregarding the ones that already reuse
> addresses anyway), and this is problematic for privacy and fungibility.
>
> Since address reuse has privacy concerns, I think it's important to
> draw a distinction between clients accepting and producing such
> transactions. If the latter were done transparently that would be very
> concerning IMO, and even the former would not be transparent for users
> who currently run pruning nodes.
>
>> I'm not sure how an O(1) time complexity leads to DoS - that seems
>> like a very creative jump.
>>
>
> For what it's worth, that was in reference to hypothetical
> deduplication only at the p2p layer, similar to compact blocks, but on
> further reflection I'd like to retract that, since both scenarios I had
> in mind seem easily mitigated.
>
>> Based on this response, it makes me want to calculate the savings -
>> what if it was a staggering reduction in the tree size?
>>
>
> Assuming past address reuse rates are predictive, this only places an
> upper bound on the potential size savings, so personally I would not
> find that very compelling. Even if the realized size savings would be
> substantial, I'm not convinced the benefits outweigh the downsides
> (increased validation costs, monotonically growing unprunable data, and
> a direct incentive for address reuse), especially when compared to
> other methods/proposals that reduce on-chain footprint, generally
> improve privacy, and reduce validation costs for others (e.g. batching,
> lightning, MAST for sufficiently large scripts, Schnorr signatures
> (musig, adaptor signatures), {tap,graft}root).
>
> Regards,
> Yuval
>