Joseph Poon [ARCHIVE] on Nostr:
Original date posted: 2015-07-27
Original message:
Hi Anthony,
On Sat, Jul 25, 2015 at 06:44:26PM +1000, Anthony Towns wrote:
> On Fri, Jul 24, 2015 at 04:24:49PM -0700, Joseph Poon wrote:
> > Ah sorry, that only solves the Commitment Transactions, not the HTLC
> > outputs. It's also not possible to use the pubkeys as identifiers,
> > as Rusty said, P2SH would be used.
> >
> > While it's possible to check only recent blocks before the
> > Commitment Transaction for the search space (e.g. 3 days worth),
> > since you know when the Commitment Transaction was broadcast, the
> > search space limitation sort of breaks down if you permit long-dated
> > HTLCs.
>
> I don't think it matters how long the HTLC was; maybe they're way old
> and all expired, but were payments to you. Say the current channel is:
>
> 12 -> Cheater
> 88 -> You
>
> and the old transaction that Cheater just pushed to the blockchain
> was:
>
> 55 -> Cheater
>  3 -> You
> 10 -> You & R1 | Cheater & Timeout1
> 20 -> You & R2 | Cheater & Timeout2
> 12 -> You & R3 | Cheater & Timeout3
>
> To get at least your 88 owed, you need all but the last transaction,
> so you need to be able to work out #R1 and #R2 and Timeout1 and
> Timeout2, no matter how long ago they were.
Yes, I agree, that is absolutely true. I was alluding to something
different (but didn't properly explain myself), which is that if you only
grind recent Commitments, it's possible there will be HTLCs with very
high timeouts, and long-dated HTLCs may be a necessary requirement for
some possible future use cases (e.g. recurring/pre-allocated billing).
> > For now, I think a reasonable stop-gap solution would be to have
> > some small storage of prior commitment transactions. For every
> > commitment, and each HTLC output, store the timeout and the original
> > Commitment Transaction height when the HTLC was first made.
>
> I don't think you want to multiply each HTLC output by every
> commitment it's stored in -- if the TIMEOUT is on the order of a day,
> and the channel is updated just once a second that's an 86,400x blowup
> in storage, so almost 5 orders of magnitude.
>
> But if every time you see a new HTLC output (i.e., R4, Timeout4), you
> could store those values and use the nLockTime trick to store the
> height of your HTLC storage. Then you just have to search back down
> from R4 to find the other HTLCs in the txn, i.e. R3, R2 and R1, which is
> just a matter of pulling out the values R, Timeout, dropping them into
> payment script templates, and checking if they match.
Yes, that's a good point(!), especially when you're doing local storage.
If you're relying on OP_RETURN, though, you must include some more
contextual data. If you're willing to regenerate the revocation hash
every time, I guess the OP_RETURN can just be the timeout and H. For local
storage, you don't need to do it for every HTLC if you're willing to
search back on near-dated HTLCs, but long-dated HTLCs (say, greater than
a couple of days) could be included (classic memory vs. computation
tradeoff). Agreed, the necessary data storage isn't *that bad* for core
nodes, and trivial for edge nodes that aren't providing liquidity
(ignoring backup concerns, of course).
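The search-back procedure described above (regenerate each stored HTLC's script and check it against the broadcast commitment's outputs) can be sketched as follows. This is a toy illustration, not real Bitcoin script handling: the script template, field encoding, and hash function are all placeholders (real P2SH uses HASH160, i.e. RIPEMD-160 of SHA-256).

```python
import hashlib

def script_hash(script):
    # Stand-in for Bitcoin's HASH160 (RIPEMD-160 of SHA-256); a truncated
    # SHA-256 keeps this sketch dependency-free.
    return hashlib.sha256(script).digest()[:20]

def htlc_script(r_hash, timeout):
    # Hypothetical HTLC redeem-script template. A real template would embed
    # both parties' pubkeys and the timelock opcodes; for the search, only
    # the fields that distinguish one HTLC from another matter.
    return b"HTLC:" + r_hash + timeout.to_bytes(4, "little")

def match_htlc_outputs(output_script_hashes, stored_htlcs):
    # Walk the locally stored (r_hash, timeout) pairs from newest to oldest,
    # regenerate each candidate script, and keep the pairs whose P2SH hash
    # appears among the broadcast commitment's outputs.
    matches = []
    for r_hash, timeout in reversed(stored_htlcs):
        if script_hash(htlc_script(r_hash, timeout)) in output_script_hashes:
            matches.append((r_hash, timeout))
    return matches

if __name__ == "__main__":
    # Four stored HTLCs (R1..R4); the cheating commitment contains R1..R3.
    stored = [(bytes([i]) * 32, 1000 + i) for i in range(4)]
    onchain = {script_hash(htlc_script(r, t)) for r, t in stored[:3]}
    print(match_htlc_outputs(onchain, stored))
```

The search is cheap because candidate generation is just a hash per stored pair; the memory-vs-computation tradeoff is in how many (R, Timeout) pairs you keep versus regenerate.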
> BTW, 10 commitments per second (per channel) doesn't sound /that/ high
> volume :) Pay per megabyte for an end user at 100Mb/s is already
> around that at least at peak times, eg.
Perhaps with a relatively distributed graph and core nodes having many
connections, it's possible that's the range. Either way, it should be
fine. If you have enough entropy to filter down from hundreds of millions
using nLockTime, then even if you have 10 billion (or 100 billion) to
search through it should be nearly instant. If you have 1000 possible
revocation hashes, just look at the first txout (the non-HTLC payouts to
Alice and Bob) and see which revocation fits. Once you know the exact
Commitment number, the rest of the outputs are easy.
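The matching step described here (try each candidate revocation hash against the first, non-HTLC output until one fits) amounts to a short linear scan. A minimal sketch, with the output-script construction as a hypothetical stand-in for the real commitment script:

```python
import hashlib

def first_output_script(revocation_hash):
    # Hypothetical stand-in for the commitment's first (non-HTLC) output
    # script, which commits to that commitment's revocation hash.
    return b"COMMIT:" + hashlib.sha256(revocation_hash).digest()

def find_commitment_number(observed_script, revocation_hashes):
    # Linear scan over candidate revocation hashes: each try costs one hash
    # and one comparison, so even thousands of candidates resolve nearly
    # instantly. Returns the commitment number, or None if nothing fits.
    for n, rev in enumerate(revocation_hashes):
        if first_output_script(rev) == observed_script:
            return n
    return None

if __name__ == "__main__":
    revs = [bytes([i % 256]) * 32 for i in range(1000)]
    observed = first_output_script(revs[123])
    print(find_commitment_number(observed, revs))
```

Once the commitment number is recovered this way, the (R, Timeout) values for every HTLC output in that commitment follow directly, as the text notes.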
--
Joseph Poon