Russell O'Connor [ARCHIVE] on Nostr:
📅 Original date posted: 2022-01-27
📝 Original message:

I am sensitive to technical debt and soft fork processes, and I don't
believe I'm unusually particular about these issues. Once implemented,
opcodes must be supported and maintained indefinitely. Some opcodes are
easier to maintain than others. These particular opcodes involve caching
of hash computations and, for that reason, I would judge them to be of
moderate complexity.

But more importantly, soft forks are an inherently risky process, so we
should be getting as much value out of them as we reasonably can. I don't
think implementing a CTV opcode that we expect to largely be obsoleted by a
TXHASH at a later date is yielding good value from a soft fork process.

The strongest argument I can make in favour of CTV would be something like:
"We definitely want bare CTV and if we are going to add CTV to legacy
script (since we cannot use TXHASH in legacy script), then it is actually
easier not to exclude it from tapscript, even if we plan to add TXHASH to
tapscript as well."

But that argument basically rests the entire value of CTV on the shoulders
of bare CTV. As I understand, the argument for why we want bare CTV,
instead of just letting people use tapscript, involves the finer details of
weight calculations, and I haven't really reviewed that aspect yet. I
think it would need to be pretty compelling to make it worthwhile to add
CTV for that one use case.
Regarding "OP_TXHASH+CSFSV doesn't seem to be the 'full' set of things
needed", I totally agree we will want more things such as CAT, rolling
SHA256 opcodes, wider arithmetic, pushing amounts onto the stack, some kind
of tapleaf manipulation and/or TWEAKVERIFY. For now, I only want to argue
TXHASH+CSFSV is better than CTV+APO because it gives us more value, namely
oracle signature verification. In particular, I want to argue that
TXHASH's push semantics is better than CTV's verify semantics because it
composes better by not needing to carry an extra 32 bytes (per instance) in
the witness data. I expect that in a world of full recursive covenants,
TXHASH would still be useful as a fast and cheap way to verify the
"payload" of these covenants, i.e. that a transaction is paying a certain,
possibly large, set of addresses certain specific amounts of money. And
even if not, TXHASH+CSFSV would still be the way that eltoo would be
implemented under this proposal.
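
As a rough, illustrative sketch of the composition point above (the 32-byte
figure is from this thread; the instance count below and treating the flag
push as negligible are assumptions for illustration only):

    # Simulating '<flag> TXHASH' on top of CTV's verify semantics means the
    # 32-byte hash result must also be supplied in the witness, once per
    # covenant instance; with TXHASH's push semantics the hash is computed
    # by the opcode instead of being carried.
    HASH_SIZE = 32  # bytes per carried hash

    def extra_witness_bytes(instances: int) -> int:
        return instances * HASH_SIZE

    assert extra_witness_bytes(3) == 96  # three instances cost 96 extra witness bytes
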
On Wed, Jan 26, 2022 at 5:16 PM Jeremy <jlrubin at mit.edu> wrote:
> Hi Russell,
>
> Thanks for this email, it's great to see this approach described.
>
> A few preliminary notes of feedback:
>
> 1) a Verify approach can be made to work for OP_TXHASH (even with CTV
> as-is). E.g., suppose a semantic is added for a single-byte stack[-1] sighash
> flag to read the hash at stack[-2], then the hash can be passed in instead
> of put on the stack. This has the disadvantage of larger witnesses, but the
> advantage of allowing undefined sighash flags to pass for any hash type.
> 2) using the internal key for APO covenants is not an option because it
> makes transaction construction interactive and precludes contracts with a
> NUMS point taproot key. Instead, if you want similar savings, you should
> advocate an OP_GENERATOR which puts G on the stack. Further, an untagged
> APO variant which has split R and S values would permit something like
> <sig> OP_GENERATOR OP_GENERATOR CHECKSIGAPO, which would be only 2 more
> bytes than CTV.
> 3) I count something like 20 different flags in your proposal. As long as
> the flags are under 40 bytes (32 if we want it to be easy), this should be
> feasible to manipulate on the stack programmatically without upgrading math.
> This is ignoring some of the more flexible additions you
> mention about picking which outputs/inputs are included. However, 20 flags
> means that for testing we would want comprehensive tests and an understanding
> of the ~1 million different flag combos and the behaviors they expose (see the
> rough count sketched after this list). I think
> this necessitates a formal model of scripting and transaction validity
> properties. Are there any combinations that might be undesirable?
> 4) Just hashing or not hashing isn't actually that flexible, because it
> doesn't natively let you do things like (for example) TLUV. You really do
> need tx operations for directly manipulating the data on the stack to
> construct the hash if you want more flexible covenants. This happens to be
> compatible with either a Verify or Push approach, since you either
> destructure a pushed hash or build up a hash for a verify.
> 5) Flexible hashing has the potential for quadratic hashing bugs. The
> fields you propose seem to be within similar range to work you could cause
> with a regular OP_HASH256, although you'd want to be careful with some of
> the proposed extensions that you don't create risk of quadratic hashing,
> which seems possible with an output selecting opcode unless you cache
> properly (which might be tricky to do). Overall, for the fields explicitly
> mentioned, this seems safe; the "possibles" seem to have some more complex
> interactions. E.g., CTV with the ability to pick a subset of outputs would
> be exposed to quadratic hashing.
> 6) Missing field: covering the annex or some sub-range of the annex
> (quadratic hashing issues on the latter)
> 7) It seems simpler to, for many of these fields, push values directly (as
> in OP_PUSHTXDATA from Johnson Lau) because the combo of flags to push the
> hash of a single output's amount to emulate OP_AMOUNT looks 'general but
> annoying'. It may make more sense to do the OP_PUSHTXDATA style opcode
> instead. This also makes it simpler to think about the combinations of
> flags, since it's really N independent multi-byte opcodes.
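>
> As a quick, illustrative count behind the "~1 million" figure in point 3
> (treating each of the roughly 20 flags as an independent boolean, which is
> only an approximation):
>
>     independent_boolean_flags = 20
>     print(2 ** independent_boolean_flags)  # 1048576, i.e. roughly 1 million combinations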
>
>
> Ultimately if we had OP_TXHASH available "tomorrow", I would be able to
> build out the use cases I care about for CTV (and more). So I have no
> opposition to it with regard to lack of function.
>
> However, if one finds the TXHASH approach acceptable, then you should also
> be relatively fine doing APO, CTV, CSFS, TXHASH in any order
> (whenever "ready"), unless you are particularly sensitive to "technical
> debt" and "soft fork processes". The only costs of doing something for CTV
> or APO given an eventual TXHASH are perhaps a wasted key version or the 32
> byte argument of a NOP opcode and some code to maintain.
>
> Are there other costs I am missing?
>
> However, as it pertains to actual rollout:
>
> - OP_TXHASH+CSFSV doesn't seem to be the "full" set of things needed (we
> still need e.g. OP_CAT, Upgraded >=64 bit Math, TLUV or OP_TWEAK
> OP_TAPBRANCH OP_MANIPULATETAPTREE, and more) to fully realize the
> covenanting power it intends to introduce.
> - What sort of timeline would it take to ready something like TXHASH (and
> desired friends) given greater scope of testing and analysis (standalone +
> compared to CTV)?
> - Is there opposition from the community to this degree of
> general/recursive covenants?
> - Does it make "more sense" to invest the research and development effort
> that would go into proving TXHASH safe, for example, into Simplicity
> instead?
>
> Overall, *my opinion* is that:
>
> - TXHASH is an acceptable theoretical approach, and I am happy to put more
> thought into it and maybe draft a prototype of it.
> - I prefer CTV as a first step for pragmatic engineering and availability
> timeline reasons.
> - If TXHASH were to take, optimistically, 2 years to develop and review,
> and then 1 year to activate, the "path dependence of software" would put
> Bitcoin in a much better place were we to have CTV within 1 year and
> applications (that are to be a subset of TXHASH later) being built over the
> next few years, enhanced in the future by TXHASH's availability.
> - There is an element of expediency merited for something like CTV
> insofar as it provides primitives to tackle time-sensitive issues around
> privacy, scalability, self custody, and decentralization. The
> aforementioned properties may be difficult to reclaim once given away (with
> the exception of perhaps scalability).
> - Bringing CTV to an implemented state of near-unanimous "we could do
> this, technically" is good for concretely driving the process of review for
> any covenant proposals forward, irrespective of whether we ultimately activate.
> (I.e., if there were a reason we could not do CTV safely, it would likely
> have implications for any other future covenant)
>
> Concretely, I'm not going to stop advocating for CTV based on the above,
> but I'm very happy to have something new in the mix to consider!
>
> Best,
>
> Jeremy
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
>
>
> On Wed, Jan 26, 2022 at 9:23 AM Russell O'Connor via bitcoin-dev <
> bitcoin-dev at lists.linuxfoundation.org> wrote:
>
>> Recapping the relationship between CTV and ANYPREVOUT::
>>
>> It is known that there is a significant amount of overlap in the
>> applications that are enabled by the CTV and ANYPREVOUT proposals despite
>> the fact that their primary applications (congestion control for CTV and
>> eltoo lightning channels for ANYPREVOUT) are quite distinct.
>> In particular, ANYPREVOUT can enable most of the applications of CTV,
>> albeit with a higher cost. The primary functionality of CTV is to allow a
>> scriptPubKey to make a commitment to its spending transaction's hash with
>> the input's TXID excluded from the hash. This exclusion is necessary
>> because the scriptPubKey is hashed into the input's TXID, and including the
>> TXID would cause a cycle of hash commitments, which is impossible to
>> construct. On the other hand, ANYPREVOUT defines a signature hash mode
>> that similarly excludes the input's TXID for its purpose of rebindable
>> signatures.
>>
>> This means that ANYPREVOUT can mimic most of the properties of CTV by
>> committing to both a public key and an ANYPREVOUT signature inside the
>> scriptPubKey. In fact, the only reason Bitcoin doesn't have covenants
>> today is due to this cycle between scriptPubKeys and the TXIDs that occur
>> in all the sighash modes.
>>
>> The first major difference between simulating CTV via ANYPREVOUT and the
>> actual CTV proposal is (1) the cost of simulating CTV. With CTV the
>> spending transaction is committed using a hash of 32 bytes, while
>> simulating it with ANYPREVOUT requires 64 bytes for a signature, and 32
>> bytes for some public key, plus a few more bytes for various flags. Some
>> of that cost could be reduced by using the inner public key (1 byte
>> representation) and, if we had CAT, maybe by assembling the signature from
>> reusable pieces (i.e. setting the nonce of the committed signature equal to
>> the public key).
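>>
>> A rough tally of that cost comparison (illustrative; the "few more bytes for
>> various flags" are not counted here):
>>
>>     ctv_commitment_bytes = 32        # a single 32-byte hash committed in the script
>>     apo_simulation_bytes = 64 + 32   # ANYPREVOUT signature plus a public key
>>     extra_cost = apo_simulation_bytes - ctv_commitment_bytes  # 64 bytes, before optimizations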
>>
>> The other major difference is: (2) CTV's transaction hash covers values
>> such as the number of inputs in the transaction and their sequence numbers,
>> which ANYPREVOUT does not cover. CTV's hash contains enough information so
>> that when combined with the missing TXIDs, you can compute the TXID of the
>> spending transaction. In particular if the number of inputs is committed
>> to being 1, once the scriptpubkey's transaction id is known and committed
>> to the blockchain, the TXID of its spending transaction is deducible. And
>> if that transaction has outputs that have CTV commitments in them, you can
>> deduce their spending TXIDs in turn. While this is a pretty neat feature,
>> something that ANYPREVOUT cannot mimic, the main application for it is
>> listed as using congestion control to fund lightning channels, fixing their
>> TXIDs in advance of them being placed on chain. However, if ANYPREVOUT
>> were used to mimic CTV, then likely it would be eltoo channels that would
>> be funded, and it isn't necessary to know the TXIDs of eltoo channels in
>> advance in order to use them.
>>
>>
>>
>> An Alternative Proposal::
>>
>> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
>> makes sense to decompose their operations into their constituent pieces and
>> reassemble their behaviour programmatically. To this end, I'd like to
>> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>>
>> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
>> txhash in accordance with that flag, and push the resulting hash onto the
>> stack.
>> OP_CHECKSIGFROMSTACKVERIFY would pop a pubkey, message, and signature
>> from the stack and fail if the signature does not verify on that message.
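>>
>> As a minimal, illustrative model of those stack semantics (not consensus
>> code; txhash and sig_verify below are hypothetical stand-ins for the tagged
>> hash computation and the signature verification):
>>
>>     def op_txhash(stack, tx, input_index, txhash):
>>         flag = stack.pop()                           # pop the txhash flag
>>         stack.append(txhash(tx, input_index, flag))  # push the resulting 32-byte hash
>>
>>     def op_checksigfromstackverify(stack, sig_verify):
>>         pubkey = stack.pop()
>>         message = stack.pop()
>>         signature = stack.pop()
>>         if not sig_verify(pubkey, message, signature):
>>             raise ValueError("script failure: signature does not verify")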
>>
>> CTV and TXHASH have roughly equivalent functionality. 'CTV DROP' can be
>> simulated by '<ctv_style_flag> TXHASH EQUALVERIFY'. The reverse is also
>> true where '<ctv_style_flag> TXHASH' can be simulated by CTV by
>> '<ctv-result-from-witness-stack> CTV'. However, as you can see, simulating
>> TXHASH from CTV is much more expensive than the other way around, because
>> the 32-byte hash result must be included as part of the witness
>> stack.
>>
>> '<anyprevout-pubkey> CHECKSIGVERIFY' can be simulated by '<apo_style_flag>
>> TXHASH <pubkey> CHECKSIGFROMSTACKVERIFY'. Here we see the advantage of
>> pushing the hash value onto the stack. APO can be simulated without
>> needing to include a copy of the resulting txhash inside the witness data.
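>>
>> A toy walk-through of that simulation, reusing the op_txhash and
>> op_checksigfromstackverify sketch above (all values and helpers below are
>> dummies rather than real encodings):
>>
>>     dummy_txhash = lambda tx, idx, flag: b"\x00" * 32  # stand-in tagged hash
>>     dummy_verify = lambda pubkey, msg, sig: True       # stand-in verification
>>
>>     stack = [b"signature-from-witness"]                # the witness supplies the signature
>>     stack.append(b"apo_style_flag")                    # <apo_style_flag>
>>     op_txhash(stack, tx=None, input_index=0, txhash=dummy_txhash)       # TXHASH
>>     stack.append(b"pubkey")                            # <pubkey>
>>     op_checksigfromstackverify(stack, sig_verify=dummy_verify)  # CHECKSIGFROMSTACKVERIFY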
>>
>> In addition to the CTV and ANYPREVOUT applications, with
>> CHECKSIGFROMSTACKVERIFY we can verify signatures on arbitrary messages
>> signed by oracles for oracle applications. This is where we see the
>> benefit of decomposing operations into primitive pieces. By giving users
>> the ability to program their own use cases from components, we get more
>> applications out of fewer op codes!
>>
>>
>>
>> Caveats::
>>
>> First, I acknowledge that replicating the behaviour of CTV and ANYPREVOUT
>> does cost a few more bytes than using the custom purpose-built proposals
>> themselves. That is the price to be paid when we choose the ability to
>> program solutions from pieces. But we get to reap the advantages of being
>> able to build more applications from these pieces.
>>
>> Unlike CTV, TXHASH is not NOP-compatible and can only be implemented
>> within tapscript. In particular, bare CTV isn't possible with this
>> proposal. However, this proposal doesn't preclude the possibility of
>> having CTV added to legacy script while having TXHASH added to tapscript.
>>
>> For similar reasons, TXHASH is not amenable to extending the set of
>> txflags at a later date. In theory, one could have TXHASH
>> abort-with-success when encountering an unknown set of flags. However,
>> this would make analyzing tapscript much more difficult. Tapscripts would
>> then be able to abort with success or failure depending on the order script
>> fragments are assembled and executed, and getting the order incorrect would
>> be catastrophic. This behavior is manifestly different from the current
>> batch of OP_SUCCESS opcodes that abort-with-success just by their mere
>> presence, whether they would be executed or not.
>>
>> I believe the difficulties with upgrading TXHASH can be mitigated by
>> designing a robust set of TXHASH flags from the start. For example having
>> bits to control whether (1) the version is covered; (2) the locktime is
>> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
>> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
>> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
>> are covered; (10) number of outputs is covered; (11) the tapbranch is
>> covered; (12) the tapleaf is covered; (13) the opseparator value is
>> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
>> one or no outputs are covered; (16) whether the one input position is
>> covered; (17) whether the one output position is covered; (18) whether the
>> sighash flags are covered or not (note: whether or not the sighash flags
>> are or are not covered must itself be covered). Possibly specifying which
>> input or output position is covered in the single case and whether the
>> position is relative to the input's position or is an absolute position.
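>>
>> Purely as an illustration of how such bits might be assembled into a flag
>> (the bit positions below are hypothetical, not part of this proposal):
>>
>>     COVER_VERSION        = 1 << 0
>>     COVER_LOCKTIME       = 1 << 1
>>     COVER_TXIDS          = 1 << 2
>>     COVER_SEQUENCES      = 1 << 3
>>     COVER_INPUT_AMOUNTS  = 1 << 4
>>     COVER_INPUT_SPKS     = 1 << 5
>>     COVER_NUM_INPUTS     = 1 << 6
>>     COVER_OUTPUT_AMOUNTS = 1 << 7
>>     COVER_OUTPUT_SPKS    = 1 << 8
>>     COVER_NUM_OUTPUTS    = 1 << 9
>>     COVER_TAPBRANCH      = 1 << 10
>>     COVER_TAPLEAF        = 1 << 11
>>     COVER_OPSEPARATOR    = 1 << 12
>>     # input/output selection modes, position bits, and the sighash-flag bit
>>     # would occupy further positions
>>
>>     # a roughly CTV-like mode might then combine bits along these lines:
>>     ctv_like_flag = (COVER_VERSION | COVER_LOCKTIME | COVER_SEQUENCES |
>>                      COVER_NUM_INPUTS | COVER_OUTPUT_AMOUNTS |
>>                      COVER_OUTPUT_SPKS | COVER_NUM_OUTPUTS)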
>>
>> That all said, even if other txhash flag modes are needed in the future,
>> adding TXHASH2 always remains an option.
>>
>>
>>
>> Interactions with potential future opcodes::
>>
>> We should give some consideration to how these opcodes may interact with
>> future opcodes such as CAT or rolling SHA256 opcodes, and how they might
>> interface with other covenant opcodes that do things like directly push
>> input or output amounts onto the stack for computation purposes (opcodes
>> which have been added to the Elements project).
>>
>> With CAT and/or rolling SHA256 opcodes and/or existing SHA256 opcodes,
>> the CHECKSIGFROMSTACKVERIFY could verify signatures on programmatically
>> assembled messages. Also, in combination with multiple calls to TXHASH,
>> it could be used to create signatures that commit to complex subsets of
>> transaction data.
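>>
>> A sketch of that combination (illustrative only; txhash_a and txhash_b stand
>> in for the results of two TXHASH calls with different flags, and a script
>> would use CAT or rolling SHA256 opcodes before CHECKSIGFROMSTACKVERIFY):
>>
>>     import hashlib
>>
>>     def combined_commitment(txhash_a: bytes, txhash_b: bytes) -> bytes:
>>         # commit to, e.g., "the inputs under flag A" together with
>>         # "the outputs under flag B" as a single signed message
>>         return hashlib.sha256(txhash_a + txhash_b).digest()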
>>
>> If new opcodes are added to push parts of the transaction data directly
>> onto the stack, e.g. OP_INSPECTOUTPUTVALUE, there is perhaps concern that
>> they would obsolete TXHASH, since, in the presence of rolling SHA256
>> opcodes, TXHASH could be simulated. However, given that TXHASH can
>> compactly create a hash of large portions of transaction data, it seems
>> unlikely that TXHASH would fall into disuse. Also, a combination of TXHASH
>> and transaction introspection opcodes can be used to build "*subtractive
>> covenants*".
>>
>> The usual way of building a covenant, which we will call "*additive
>> covenants*", is to push all the parts of the transaction data you would
>> like to fix onto the stack, hash it all together, and verify the resulting
>> hash matches a fixed value. Another way of building covenants, which we
>> will call "*subtractive covenants*", is to push all the parts of the
>> transaction data you would like to remain free onto the stack. Then use
>> rolling SHA256 opcodes starting from a fixed midstate that commits to a
>> prefix of the transaction hash data. The free parts are hashed into that
>> midstate. Finally, the resulting hash value is verified to match a value
>> returned by TXHASH. The ability to nicely build subtractive covenants
>> depends on the details of how the TXHASH hash value is constructed,
>> something that I'm told CTV has given consideration to.
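>>
>> An off-chain sketch of the two patterns (illustrative only; fixed_parts,
>> free_parts and tx_prefix are hypothetical stand-ins for serialized
>> transaction fields, and a real script would use CAT, rolling SHA256 opcodes
>> and TXHASH rather than Python):
>>
>>     import hashlib
>>
>>     def additive_covenant_holds(fixed_parts: bytes, expected_hash: bytes) -> bool:
>>         # push the parts you want to pin, hash them, compare to a constant
>>         return hashlib.sha256(fixed_parts).digest() == expected_hash
>>
>>     def subtractive_covenant_holds(tx_prefix: bytes, free_parts: bytes,
>>                                    txhash_result: bytes) -> bool:
>>         # a script would hard-code a SHA256 midstate committing to tx_prefix
>>         # and absorb only the free parts on-chain; hashing the concatenation
>>         # here is the off-chain equivalent of resuming from that midstate
>>         return hashlib.sha256(tx_prefix + free_parts).digest() == txhash_result
>>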
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>