Conner Fromknecht

πŸ“… Original date posted:2018-10-23
πŸ“ Original message:
Good evening lightning-dev,

> If we later receive two `channel_update`s whose `short_channel_id`s
> reference the spending transaction (and the node pubkeys are the same), we
> assume the splice was successful and that this channel has been subsumed. I
> think this works so long as the spending transaction doesn't contain multiple
> funding outputs, though I think the current proposal is fallible to this as
> well.

Thought about this some more. The main difference seems to be whether the
gossiped data is forward or backward looking. By forward looking, I mean that
we gossip where the splice will move to, and backward looking gossips where
the splice moved from.

If we want to make the original proposal work w/ multiple funding outputs on
one splice, I think it can be accomplished by sending the funding outpoint as
opposed to just the txid. For the backward looking proposal, the
`channel_update` could be modified to include the `short_channel_id` of the
prior funding output. IMO we probably want to include the extra specificity
even if we don't plan to have multiple funding outputs on a commitment
implemented tomorrow, since outputs are what we truly care about.

Of the two, it still seems like the backward looking approach results in less
gossiped data, since we are able to reference a single confirmed output by
location (8 bytes), instead of N unconfirmed outputs by outpoint (N*34 bytes).
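
For concreteness, a quick sketch of the two encodings (the short_channel_id
packing is the usual 3-byte block height / 3-byte tx index / 2-byte output
index layout; the 34-byte outpoint assumes a 32-byte txid plus a 2-byte index
as above):

    import struct

    def short_channel_id(block_height: int, tx_index: int, vout: int) -> bytes:
        # Confirmed output referenced by location: 8 bytes.
        return struct.pack(">Q", (block_height << 40) | (tx_index << 16) | vout)

    def outpoint(txid: bytes, vout: int) -> bytes:
        # Unconfirmed output referenced by outpoint: 34 bytes.
        return txid + struct.pack(">H", vout)

    print(len(short_channel_id(550_000, 42, 1)))   # 8
    print(3 * len(outpoint(b"\x00" * 32, 1)))      # 102 for N=3 pending splices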

Another advantage I see with the backward looking splice announcements is that
they can be properly verified before forwarding to the network by examining
the channel lineage. In contrast, one can't be sure if the outpoint in a
forward looking announcement will ever confirm, or even if it spends from the
original channel point, unless one also has the transaction. Until a splice
does confirm, a node has to store multiple potential splice outpoints. Seeing
this, it seems to me that backward looking announcements are less susceptible
to abuse and DoS in this regard.
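
As a rough sketch of what that verification could look like (the graph and
chain lookups here are hypothetical helpers, not an existing API):

    from dataclasses import dataclass

    @dataclass
    class SpliceUpdate:
        new_scid: int        # confirmed location of the splice output
        prior_scid: int      # the channel being subsumed
        node_id_1: bytes
        node_id_2: bytes

    def verify_splice_update(upd: SpliceUpdate, graph: dict, chain) -> bool:
        # Accept a backward looking splice announcement only if the prior
        # channel is known, the node pubkeys match, and the new output
        # actually spends the prior channel's funding output.
        prior = graph.get(upd.prior_scid)
        if prior is None:
            return False
        if (upd.node_id_1, upd.node_id_2) != (prior["node_id_1"], prior["node_id_2"]):
            return False
        return chain.spends(upd.new_scid, upd.prior_scid)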

Thoughts?

Cheers,
Conner

On Thu, Oct 18, 2018 at 8:04 PM Conner Fromknecht
<conner at lightning.engineering> wrote:

> Good evening all,
>
> Thank you Rusty for starting us down this path :) and to ZmnSCPxj and Lisa
> for your thoughts. I think this narrows down the design space considerably!
>
> In light of this, and if I'm following along, it seems our hand is forced in
> splicing via a single on-chain transaction. In my book, this is preferable
> anyway. I'd much rather push complexity off-chain than having to do a
> multi-stage splicing pipeline.
>
> > To add some context to this, if you start accepting HTLC's for the new
> > balance after the parallel commitment is made, but before the re-anchor is
> > buried, there's the potential for a race condition between a unilateral
> > close (or any revoked commitment transaction) and the re-anchoring
> > commitment transaction, that spends the 'pre-committed' UTXO of splicing in
> > funds and the original funding transaction
>
> Indeed, I'm not aware of any splicing mechanism that enables off-chain use of
> spliced-in funds before the new funding output confirms. Even in the async,
> single-txn case, the new funds cannot be spent until the new funding output
> confirms sufficiently.
>
> From my POV, the desired properties of a splice are:
> 1. non-blocking (asynchronous) usage of the channel
> 2. single on-chain txn
> 3. ability to RBF (have multiple pending splices)
>
> Of these, it seems we've solidified 1 and 2. I understand the desire to not
> tackle RBF on the first attempt given the additional complexity. However, I
> do believe there are ways we can proceed in which our first attempt largely
> coincides with supporting it in the future.
>
> With that in mind, here are some thoughts on the proposals above.
>
> ## RBF and Multiple Splices
>
> > 1. type: 132 (`commitment_signed`)
> > 2. data:
> > * [`32`:`channel_id`]
> > * [`64`:`signature`]
> > * [`2`:`num_htlcs`]
> > * [`num_htlcs*64`:`htlc_signature`]
> > * [`num_htlcs*64`:`htlc_splice_signature`] (`option_splice`)
>
> This will overflow the maximum message size of 65535 bytes for num_htlcs > 511.
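>
> Spelling out the arithmetic (a quick sanity check against the 65535-byte
> limit on a Lightning message payload, using the field sizes quoted above):
>
>     base = 32 + 64 + 2              # channel_id + signature + num_htlcs
>     per_htlc = 64 + 64              # htlc_signature + htlc_splice_signature
>     max_htlcs = (65535 - base) // per_htlc
>     print(max_htlcs)                # 511, so anything above no longer fits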
>
> I would propose sending a distinct message, which references the
> `active_channel_id` and a `splice_channel_id` for the pending splice:
>
> 1. type: XXX (`commitment_splice_signed`) (`option_splice`)
> 2. data:
> * [`32`:`active_channel_id`]
> * [`32`:`splice_channel_id`]
> * [`64`:`signature`]
> * [`2`:`num_htlcs`]
> * [`num_htlcs*64`:`htlc_signature`]
>
> This more directly addresses handling multiple pending splices, as well as
> preventing us from running into any size constraints. The purpose of
> including the `active_channel_id` would be to help the remote node locate the
> spliced channel, since it may not yet be populated in indexes containing
> active channels. If we don't want to include this, the existing message
> can be used without modification.
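>
> As a rough sketch, serializing the proposed message (the type value and
> function name below are placeholders, not assigned anywhere):
>
>     import struct
>
>     def encode_commitment_splice_signed(active_channel_id: bytes,
>                                         splice_channel_id: bytes,
>                                         signature: bytes,
>                                         htlc_signatures: list) -> bytes:
>         msg = struct.pack(">H", 999)                    # placeholder type
>         msg += active_channel_id + splice_channel_id + signature
>         msg += struct.pack(">H", len(htlc_signatures))
>         msg += b"".join(htlc_signatures)
>         return msg                                      # 132 + 64*n bytes
>
> Even at the 483-HTLC limit that is roughly 31 KB, comfortably within bounds.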
>
> > We shouldn't allow more than one pending splice operation anyway, as
> > stated in your proposal initially. We are already critically reliant on our
> > transaction being confirmed on-chain, so I don't see this as much of an
> > added issue.
>
> IMO there's no reason to limit ourselves to one pending splice at the message
> level. I think it'd be an oversight not to plan ahead with RBF in mind,
> given that funding transactions have gone unconfirmed precisely because of
> improperly chosen fee rates. Arguably, funding flow should be extended to
> support this as well.
>
> CPFP works, though it's more wasteful than re-signing and I'd prefer only to
> do so out of necessity, rather than relying on it. CPFP is nice because it
> doesn't require interaction, though we are already assuming the other party
> to be online during the splice (unlike unilateral closes).
>
> Adding a splice-reject message/error code should be sufficient to allow
> implementations to signal that their local tolerance for the number of
> pending splices has been reached. It's likely we'd all start with getting one
> splice working, but then the messages won't need to be modified if we want to
> implement additional pending splices via RBF.
>
> A node that wants to RBF but receives a reject can then proceed with CPFP as
> a last resort.
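>
> As a sketch of the negotiation this implies (the message names and the local
> limit are assumptions, nothing here is normative):
>
>     MAX_PENDING_SPLICES = 1   # per-implementation tolerance; start small
>
>     def on_splice_proposal(pending_splices: list, proposal) -> str:
>         # Reject once the local tolerance for concurrent (RBF'd) pending
>         # splices is hit; the initiator can then fall back to CPFP.
>         if len(pending_splices) >= MAX_PENDING_SPLICES:
>             return "splice_reject"
>         pending_splices.append(proposal)
>         return "splice_accept"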
>
> Are there any downsides I'm overlooking with this approach?
>
> > | Bit Position | Name                      | Field               |
> > | ------------ | ------------------------- | ------------------- |
> > | 0            | `option_channel_htlc_max` | `htlc_maximum_msat` |
> > | 1            | `option_channel_moving`   | `moving_txid`       |
> >
> > The `channel_update` gains the following field:
> > * [`32`:`moving_txid`] (`option_channel_moving`)
>
> Do we actually need to send the `moving_txid` via a channel update? I think
> it's enough for both parties to send `channel_update`s with the
> `option_channel_moving` bit set, and continue to keep the channel in our
> routing table.
>
> If we later receive two `channel_update`s whose `short_channel_id`s
> reference the spending transaction (and the node pubkeys are the same), we
> assume the splice was successful and that this channel has been subsumed. I
> think this works so long as the spending transaction doesn't contain multiple
> funding outputs, though I think the current proposal is fallible to this as
> well.
>
> To me, this proposal has the benefit of not bloating gossip bandwidth with an
> extra field that would need to be parsed indefinitely, and gracefully
> supporting RBF down the road. Otherwise we'd need to gossip and store each
> potential txid.
>
> With regards to forwarding, both `short_channel_id`s would be accepted by the
> splicers for up to 100 blocks (after splice confirm?), at which point they
> can both forget the prior `short_channel_id`.
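>
> Concretely, the forwarding rule could be as simple as the following sketch
> (the 100-block window is the figure floated above; the rest is illustrative):
>
>     def accept_scid(scid: int, prior_scid: int, new_scid: int,
>                     splice_height: int, current_height: int) -> bool:
>         # Honor both identifiers for a grace period after the splice
>         # confirms, then forget the prior short_channel_id.
>         if scid == new_scid:
>             return True
>         if scid == prior_scid:
>             return current_height <= splice_height + 100
>         return False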
>
> ## Shachain
>
> > I thought about restarting the revocation sequence, but it seems like
> > that only saves a tiny amount since we only store log(N) entries. We
> > can drop old HTLC info post-splice though, and (after some delay for
> > obscurity) tell watchtowers to drop old entries I think.
>
> I agree the additional state isn't too burdensome, and that we would still be
> able to drop watchtower state after some delay as you mentioned.
>
> On one hand, it does seem like the opportune time to remove such state if
> desired.
>
> OTOH, it is _really_ nice from an atomicity perspective that the current
> channel and (potentially) N pending channels can be revoked using a single
> commitment secret and message. Doing so would mean we don't have to
> modify the `revoke_and_ack` or `channel_reestablish` messages. The receiver
> would just apply the commitment secrets/points to the current channel and any
> pending splices.
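>
> A sketch of the receiver's side under that scheme (the helper names are
> hypothetical):
>
>     def apply_revocation(per_commitment_secret: bytes, channel,
>                          pending_splices: list) -> None:
>         # A single revoke_and_ack revokes the prior state of the active
>         # channel and of every pending splice, since they share a shachain.
>         channel.revoke_prior_commitment(per_commitment_secret)
>         for splice in pending_splices:
>             splice.revoke_prior_commitment(per_commitment_secret)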
>
> ## Misc
>
> > Any reason to now make the splicing_add_* messages allow one to add several
> > inputs in a single message? Given "acceptable" constraints for how large the
> > witness and pkScripts can be, we can easily enforce an upper limit on the
> > number of inputs/outputs to add.
>
> Yes, I prefer this simplification.
>
> > Additionally, as the size of the channel is either expanding or contracting,
> > both sides should be allowed to modify things like the CSV param, reserve,
> > max accepted htlc's, max htlc size, etc. Many of these parameters like the
> > CSV value should scale with the size of the channel; not allowing these
> > parameters to be re-negotiated could result in odd scenarios like still
> > maintaining a 1 week CSV when the channel size has dipped from 1 BTC to 100k
> > satoshis.
>
> Agreed!
>
> > These all seem marginal to me. I think if we start hitting max values,
> > we should discuss increasing them.
>
> Doesn't this defeat the goal of firewalling funds against individual channel
> failures?
>
> >>> One thing that I think we should lift from the multiple funding output
> >>> approach is the "pre seating of inputs". This is cool as it would allow
> >>> clients to generate addresses, that others could deposit to, and then have
> >>> be spliced directly into the channel. Public derivation can be used, along
> >>> with a script template to do it non-interactively, with the clients picking
> >>> up these deposits, and initiating a splice in as needed.
> >>
> >> How about this restatement?
> >>
> >> 1. Each channel has two public-key-derivation paths (BIP32) to create
> >> onchain addresses. One for each side of the channel.
> >> 2. The base of the above is actually a combined private-public keypair of
> >> both sides (e.g. created via MuSig or some other protocol). Thus the
> >> addresses require cooperation of both parties to spend.
> >> 3. When somebody sends to one of the onchain addresses in the path, their
> >> client detects this.
> >> 4. The client updates the current transaction state, such that the new
> >> commit transaction has two inputs (the original channel transaction and
> >> the new UTXO).
> >>
> >> The above seems unsafe without trust in the other peer, as the other peer
> >> can simply refuse to create the new commit transaction. Since the address
> >> requires both parties to spend, the money cannot be spent and there is no
> >> backoff transaction that can be used. But maybe you can describe some
> >> mechanism to ensure this, if this is what is meant instead?
> >
> > This could easily be solved by making the destination address a Taproot
> > address, which by default is just a 2-of-2, but in the uncooperative
> > case it can reveal the script it commits to, which is just a timelocked
> > refund that requires a single-sig. The only problem with this is that
> > the refund would be non-interactive, and so the entirety of the funds,
> > that may be from a third-party, need to be claimed by one endpoint,
> > i.e., there is no splitting the funds in case of an uncollaborative
> > refund. Not sure how important that is though, since I don't think
> > third-party funds will come from unrelated parties, e.g., most of these
> > funds will come from an on-chain wallet that is under the control of
> > either parties so the refund should go back to that party anyway.
>
> This can be accomplished similarly by having either (or both) party publish a
> static address or publicly derivable address specific to the channel, derived
> from their HD seed.
>
> Arguably, the address should perhaps be global, so that it can outlive the
> lifetime of the channel, i.e. as soon as the first person deposits and a
> splice is initiated, is the address still valid for the new channel if new
> keys are used? Similarly, the channel could be closed and the funds locked
> until the timeout if the peer disappears.
>
> Regardless, both approaches can be made to have equivalent amounts of
> [non-]interactivity. However, the recipient isn't burdened in spending by
> 1) interaction with the channel peer, or 2) an absolute timeout if 1 fails,
> giving the receiver more flexibility if they wish to not commit the received
> funds to a splice. It also benefits from smaller witness sizes, a larger
> anonymity set, etc.
>
> In general, using a 2-of-2+timeout to stage funds for splicing doesn't offer
> that much IMO. It seems the primary purpose is to prevent the funds from
> being double spent during the splice, but observe that this is still possible
> if the timeout matures, perhaps because the splice doesn't confirm in a
> timely manner.
>
> Acknowledging this, detecting double-spent inputs is still required for full
> correctness. By implementing it, either party is free to propose arbitrary
> inputs for a splice, which I believe reduces complexity in the long run.
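>
> A sketch of that check against a local UTXO view (the chain backend and its
> get_utxo lookup are assumed here, not any particular API):
>
>     def splice_inputs_unspent(proposed_inputs, chain) -> bool:
>         # Every proposed splice input must still be an unspent output; if one
>         # has been double spent while the splice is pending, abort or re-sign.
>         return all(chain.get_utxo(txid, vout) is not None
>                    for (txid, vout) in proposed_inputs)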
>
> Splice out,
> Conner
>
> On Tue, Oct 16, 2018 at 10:00 PM ZmnSCPxj via Lightning-dev <
> lightning-dev at lists.linuxfoundation.org> wrote:
>
>> Good morning lisa,
>>
>> This is a good observation.
>>
>> Before, I'd already considered the rationale, for why channels have a
>> single 2-of-2 UTXO as funding output. And it seems I should have
>> considered this, prior to accepting the "parallel" construction as feasible.
>>
>> For sake of posterity, I leave the below writeup as a tangential to the
>> design of splice (and to the design of Lightning having a single 2-of-2
>> UTXOs):
>>
>> # 0-conf is Unsafe, Yet Lightning is Safe; Why?
>>
>> To accept a 0-conf transaction output, is known to be unsafe.
>> Replace-by-fee is always a possibility, regardless of whether the
>> transaction opts in to RBF or not: a rational miner will always accept the
>> higher feerate, disregarding any "opt-in" flag that is set or not set on
>> the transaction. Thus we reject any advice that claims that 0-conf is
>> tenable, even for tiny amounts.
>>
>> Yet when viewed solely in terms of transactions, Lightning protocol uses
>> transactions that are not on any block (are kept offchain). Since they are
>> not in a block, they are indistinguishable from 0-conf transactions, which
>> are accepted by the receiver, yet are also not on any block. One might
>> argue the distinction, that a "real" 0-conf transaction exists on some
>> mempool somewhere, and thus has a chance to be on a block in the future,
>> but mempools have no consensus, and the existence of a transaction on some
>> mempool is not a safe assurance of it existing in the mempool of the next
>> winning miner.
>>
>> So why is Lightning safe, when 0-conf transactions are in general not
>> safe?
>>
>> Again, we should focus on why 0-conf transactions in general are not
>> safe: transaction replacement. Thus, 0-conf transactions can be made safe,
>> if you are somehow able to ensure that replacement transactions cannot be
>> made.
>>
>> For example, if you are part of an n-of-n federation that signs the
>> transaction, you can always safely accept a 0-conf transaction from that
>> federation paying only to you, because you can always veto any replacement
>> (by simply refusing to sign) that is not in your interests.
>>
>> This is in fact how Lightning works: a 2-of-2 federation (the channel
>> counterparties) are the signatories of the 0-conf transactions that are the
>> commitment transactions of the Lightning protocol. Replacement of the
>> commitment transactions is strictly guided by the protocol; both sides have
>> veto rights, since the source transaction output is 2-of-2.
>>
>> Thus, Lightning, though it uses 0-conf transactions, is safe, because it
>> prevents the replacement of a 0-conf transaction without the receiver
>> allowing it, by the simple expedient of including the receiver in the
>> 2-of-2 multisig guarding its single funding TXO.
>>
>> ## The Implications for Splice Proposals
>>
>> Some splice proposals involve creating the equivalent of multiple funding
>> TXOs for a single channel. Such constructions are unsafe-by-default on
>> Poon-Dryja.
>>
>> In reality, every commitment transaction (or update transaction in
>> Decker-Osuntokun-Russell) is replaceable by any other commitment (or
>> update) transaction for that channel. Under Poon-Dryja older transactions
>> are revoked (and hence one side risks loss of their collateral) while under
>> Decker-Osuntokun-Russell older transactions may be "gainsaid" (i.e. newer
>> update transactions may be reanchored to consume the TXO of the older
>> update transaction, thus preventing that update from truly being committed
>> to).
>>
>> This is relevant since before a splice, the channel has a single funding
>> TXO, while after the splice, the channel has multiple.
>>
>> In particular, a commitment (or update) transaction, that has multiple
>> inputs (to consume the multiple funding TXOs), can be replaced with a
>> commitment (or update) transaction that was created before the splice.
>> Under Poon-Dryja, such a commitment transaction may be revoked, but this
>> leaves the other funding TXOs unusable. Under Decker-Osuntokun-Russell,
>> as long as the sequence number is preserved across the splice, it is
>> possible for a later update transaction with multiple inputs to simply
>> gainsay the old single-input update with the new multiple-input update
>> transaction. (I suppose, that this is another advantage that
>> Decker-Osuntokun-Russell has).
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Sent with ProtonMail <https://protonmail.com> Secure Email.
>>
>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> On Wednesday, October 17, 2018 9:09 AM, lisa neigut <niftynei at gmail.com>
>> wrote:
>>
>> To add some context to this, if you start accepting HTLC's for the new
>> balance after the parallel commitment is made, but before the re-anchor is
>> buried, there's the potential for a race condition between a unilateral
>> close (or any revoked commitment transaction) and the re-anchoring
>> commitment transaction, that spends the 'pre-committed' UTXO of splicing in
>> funds and the original funding transaction.
>>
>> You can get around this by waiting until both the pre-commitment UTXO and
>> the re-anchor have cleared a minimum depth before accepting HTLC's for the
>> new balance totals, but that's twice as long of a wait as the first,
>> synchronized re-commitment scheme that Rusty originally proposed.
>>
>> It also makes leaving the original funding transaction 'exposed' (ie
>> Rene's version of parallel splice) untenable, as there's always the risk of
>> an old state being published to consume that input. This foobars your
>> current HTLC commitments.
>>
>> On Tue, Oct 16, 2018 at 3:31 PM Rusty Russell <rusty at rustcorp.com.au>
>> wrote:
>>
>>> Rusty Russell <rusty at rustcorp.com.au> writes:
>>> > If we're going to do side splice-in like this, I would use a very
>>> > different protocol: the reason for this protocol was to treat splice-in
>>> > and splice-out the same, and inline splice-in requires wait time. Since
>>> > splice-out doesn't, we don't need this at all.
>>> >
>>> > It would look much more like:
>>> >
>>> > 1. Prepare any output with script of specific form. eg:
>>> > OP_DEPTH 3 OP_EQUAL OP_IF
>>> > <funding_pubkey1> <funding_pubkey2> OP_CHECKMULTISIG
>>> > OP_ELSE
>>> > <blockheight> OP_CHECKLOCKTIMEVERIFY OP_DROP
>>> > <myrescue_pubkey> OP_CHECKSIG
>>> > OP_ENDIF
>>> >
>>> > 1. type: 40 (`splice_in`) (`option_splice`)
>>> > 2. data:
>>> > * [`32`:`channel_id`]
>>> > * [`8`: `satoshis`]
>>> > * [`32`: `txid`]
>>> > * [`4`: `txoutnum`]
>>> > * [`4`: `blockheight`]
>>> > * [`33`: `myrescue_pubkey`]
>>> >
>>> > 1. type: 137 (`update_splice_in_accept`) (`option_splice`)
>>> > data:
>>> > * [`32`:`channel_id`]
>>> > * [`32`: `txid`]
>>> > * [`4`: `txoutnum`]
>>> >
>>> > 1. type: 138 (`update_splice_in_reject`) (`option_splice`)
>>> > data:
>>> > * [`32`:`channel_id`]
>>> > * [`32`: `txid`]
>>> > * [`2`:`len`]
>>> > * [`len`:`errorstr`]
>>> >
>>> > The recipient of `splice_in` checks that it's happy with the
>>> > `blockheight` (far enough in future). Once it sees the tx referred to
>>> > buried to its own `minimum_depth`, it checks output is what they
>>> > claimed, then sends `update_splice_in_accept`; it's followed up
>>> > `commitment_signed` like normal, but from this point onwards, all
>>> > commitment txs signatures have one extra sig.
>>>
>>> Lisa started asking pointed questions, and so I noticed that parallel
>>> splice doesn't work with Poon-Dryja channels.
>>>
>>> The counterparty can spend the old funding txout with a revoked spend.
>>> Sure, I can take all the money from that, but what about the spliced
>>> input?
>>>
>>> I came up with increasingly elaborate workarounds, but nothing stuck.
>>>
>>> Back to Plan A...
>>> Rusty.
>>> _______________________________________________
>>> Lightning-dev mailing list
>>> Lightning-dev at lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>
>>
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>