Mark Friedenbach [ARCHIVE] on Nostr:
Original date posted: 2014-03-22
Original message: Please, by all means: ignore our well-reasoned arguments about
externalized storage and validation cost and alternative solutions.
Please re-discover how proof of publication doesn't require burdening
the network with silly extra data that must be transmitted, kept, and
validated from now until the heat death of the universe. Your failure
will make my meager bitcoin holdings all the more valuable! As despite
persistent assertions to the contrary, making quality software freely
available at zero cost does not pay well, even in finance. You will not
find core developers in the Bitcoin 1%.
Please feel free to flame me back, but off-list. This is for *bitcoin*
development.
On 03/22/2014 08:08 AM, Troy Benjegerdes wrote:
> On Sat, Mar 22, 2014 at 04:47:02AM -0400, Peter Todd wrote:
>> There's been a lot of recent hoopla over proof-of-publication, with the
>> OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
>> the last minute prior to the 0.9 release. Secondly, I noticed an
>> overlooked security flaw in that OP_CHECKMULTISIG sigops weren't taken
>> into account, making it possible to broadcast unminable transactions and
>> bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
>> outputs given that the sigops limit and the way they use up a fixed 20
>> sigops per op make them hard to do fee calculations for. They also make
>> it easy to bloat the UTXO set, potentially a bad thing. This would of
>> course require things using them to change. Currently that's just
>> Counterparty, so I gave them the heads up in my email.
>
> I've spent some time looking at the Datacoin code, and I've come to the
> conclusion that the next copycatcoin I release will have an explicit 'data'
> field with something like 169 bytes (a baker's dozen squared), which will
> add 1 byte to each transaction if unused, and provide a small but usable
> data field for proof of publication. As a new coin, I can also do a
> hardfork that increases the data size limit much more easily if there is a
> compelling reason to make it bigger.
>
> I think this will prove to be a much more reliable infrastructure for
> proof of publication than various hacks to overcome the 40-byte limit in
> Bitcoin.
>
> I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
> the market risk they face from the 40-byte limit, and to put some pressure
> toward implementing some of the alternatives Todd proposes.
>
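A minimal sketch may help readers compare the two output styles discussed in the quoted text above. The snippet below is plain illustrative Python, not Bitcoin Core code: it builds an OP_RETURN data carrier and a bare 1-of-3 CHECKMULTISIG output whose "pubkeys" are really data, and mimics (in deliberately simplified form) the legacy block-level rule that charges a fixed 20 sigops for every bare OP_CHECKMULTISIG, which is the accounting wrinkle Peter describes.

OP_RETURN        = 0x6a
OP_1             = 0x51
OP_3             = 0x53
OP_CHECKMULTISIG = 0xae
MAX_PUBKEYS_PER_MULTISIG = 20  # legacy counting charges this per bare CHECKMULTISIG

def op_return_script(data: bytes) -> bytes:
    # Provably unspendable data carrier: OP_RETURN <push data>.
    # 0.9 relay policy capped the pushed payload at 40 bytes.
    assert len(data) <= 40
    return bytes([OP_RETURN, len(data)]) + data

def bare_multisig_script(fake_keys) -> bytes:
    # 1-of-3 bare multisig abused as a data carrier: the "pubkeys" hold data.
    script = bytes([OP_1])
    for k in fake_keys:
        script += bytes([len(k)]) + k
    return script + bytes([OP_3, OP_CHECKMULTISIG])

def legacy_sigop_count(script: bytes) -> int:
    # Simplified stand-in for the block-level counter: every bare
    # OP_CHECKMULTISIG costs 20 sigops no matter how many keys it names.
    # (A real counter walks push opcodes; this just scans raw bytes.)
    return sum(MAX_PUBKEYS_PER_MULTISIG for b in script if b == OP_CHECKMULTISIG)

print(legacy_sigop_count(op_return_script(b"x" * 40)))                      # 0
print(legacy_sigop_count(bare_multisig_script([b"\x02" + b"d" * 32] * 3)))  # 20

The OP_RETURN output carries the data without consuming any sigops and never enters the UTXO set, whereas the multisig variant lingers in the UTXO set and eats 20 sigops of the per-block budget despite requiring only a single signature.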
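Troy's "169 bytes, adds 1 byte if unused" figure corresponds to a length-prefixed optional field. Since the coin he describes is hypothetical, so is the sketch below; the field size and the one-byte-when-empty cost come from his message, while the names and layout are assumed purely for illustration.

MAX_DATA = 169  # 13 * 13, "a baker's dozen squared", per the quoted proposal

def serialize_data_field(data: bytes = b"") -> bytes:
    # Hypothetical optional field: one length byte followed by the payload.
    # An unused field serializes to a single 0x00 byte, which is the
    # "adds 1 byte to each transaction if unused" cost quoted above.
    assert len(data) <= MAX_DATA
    return bytes([len(data)]) + data

assert serialize_data_field() == b"\x00"                        # unused: 1 byte
assert len(serialize_data_field(b"p" * MAX_DATA)) == MAX_DATA + 1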