Anthony Towns [ARCHIVE] on Nostr:
📅 Original date posted:2016-02-09
📝 Original message:
On Mon, Feb 08, 2016 at 07:26:48PM +0000, Matt Corallo via bitcoin-dev wrote:
> As what a hard fork should look like in the context of segwit has never
> (!) been discussed in any serious sense, I'd like to kick off such a
> discussion with a (somewhat) specific proposal.
> Here is a proposed outline (to activate only after SegWit and with the
> currently-proposed version of SegWit):
Is this intended to be activated soon (this year?) or a while away
(2017, 2018?)?
> 1) The segregated witness discount is changed from 75% to 50%. The block
> size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
> maximum block size of 3MB and a "network-upgraded" block size of roughly
> 2.1MB. This still significantly discounts script data which is kept out
> of the UTXO set, while keeping the maximum-sized block limited.
This would mean the limits go from:
                   pre-segwit   segwit pkh   segwit 2/2 msig   worst case
   current rules   1MB          -            -                 1MB
   segwit as-is    1MB          1.7MB        2MB               4MB
   this proposal   1.5MB        2.1MB        2.2MB             3MB
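For reference, those columns all fall out of the weighted limit (base +
witness/2 <= 1.5MB). A quick sketch of the arithmetic; the witness
fractions I've used for the pkh and msig rows are my own rough estimates,
not numbers from Matt's post:

  #include <stdio.h>

  /* Max total block bytes when a fraction wit_frac of the bytes are
   * witness data counted at only (1 - discount) of their size. */
  static double max_block_mb(double limit_mb, double discount, double wit_frac)
  {
      return limit_mb / (1.0 - discount * wit_frac);
  }

  int main(void)
  {
      printf("%.1f MB\n", max_block_mb(1.5, 0.5, 0.00)); /* pre-segwit: 1.5 */
      printf("%.1f MB\n", max_block_mb(1.5, 0.5, 0.57)); /* ~p2wpkh:    2.1 */
      printf("%.1f MB\n", max_block_mb(1.5, 0.5, 0.64)); /* ~2/2 msig:  2.2 */
      printf("%.1f MB\n", max_block_mb(1.5, 0.5, 1.00)); /* worst case: 3.0 */
      return 0;
  }

(The 2x table further down is the same formula with limit_mb = 2.0.)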
That seems like a fairly small gain (20% for pubkeyhash, which would
last for about 3 months if your growth rate means doubling every 9
months), so this probably makes the most sense as a "quick cleanup"
change, one that also safely demonstrates how easy/difficult doing a
hard fork is in practice?
On the other hand, if segwit wallet deployment takes longer than
hoped, the 50% increase for pre-segwit transactions might be a useful
release-valve.
Doing a "2x" hardfork with the same reduction to a 50% segwit discount
would (I think) look like:
                   pre-segwit   segwit pkh   segwit 2/2 msig   worst case
   current rules   1MB          -            -                 1MB
   segwit as-is    1MB          1.7MB        2MB               4MB
   2x proposal     2MB          2.8MB        2.9MB             4MB
which seems somewhat more appealing, without making the worst case any
worse; but I guess there's concern about the relay network scaling
above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
> 2) In order to prevent significant blowups in the cost to validate
> [...] and transactions are only allowed to contain
> up to 20 non-segwit inputs. [...]
This could potentially make old, signed, but time-locked transactions
invalid. Is that a good idea?
> Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
> 1-per-50-bytes across the entire block to a per-transaction limit which
> is slightly looser (though not too much looser - even with libsecp256k1
> 1-per-50-bytes represents 2 seconds of single-threaded validation in
> just sigops on my high-end workstation).
I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
limit would be a good addition, ie:

  #define MAX_BLOCK_SIZE      1500000
  #define MAX_BLOCK_DATA_SIZE 3000000
  #define MAX_BLOCK_SIGOPS      50000
  #define MAX_COST            3000000

  #define SIGOP_COST (MAX_COST / MAX_BLOCK_SIGOPS)    /* 60 per sigop */
  #define BLOCK_COST (MAX_COST / MAX_BLOCK_SIZE)      /* 2 per utxo-affecting byte */
  #define DATA_COST  (MAX_COST / MAX_BLOCK_DATA_SIZE) /* 1 per byte of other data */

  if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST > MAX_COST) {
      block_is_invalid();
  }
Though I think you'd need to bump up the worst-case limits somewhat to
make that work cleanly.
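(Plugging the constants in: SIGOP_COST works out to 60, BLOCK_COST to 2,
and DATA_COST to 1. So, reading utxo_data as the utxo-affecting bytes and
bytes as the remaining data (my interpretation of the sketch above), a
block with 1MB of utxo-affecting data, 0.5MB of other data and 5000 sigops
would cost 2,000,000 + 500,000 + 300,000 = 2,800,000, just under the
3,000,000 cap.)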
> 4) Instead of requiring the first four bytes of the previous block hash
> field be 0s, we allow them to contain any value. This allows Bitcoin
> mining hardware to reduce the required logic, making it easier to
> produce competitive hardware [1].
> [1] Simpler here may not be entirely true. There is potential for
> optimization if you brute force the SHA256 midstate, but if nothing
> else, this will prevent there being a strong incentive to use the
> version field as nonce space. This may need more investigation, as we
> may wish to just set the minimum difficulty higher so that we can add
> more than 4 nonce-bytes.
Could you just use leading non-zero bytes of the prevhash as additional
nonce?
So to work out the actual prev hash, set leading bytes to zero until
you hit a zero byte. Conversely, to add nonce info to a hash that has
N leading zero bytes, fill up the first N-1 (or fewer) of them with
non-zero values.

That would give a little more than 255**(N-1) possible values
((255**N - 1)/254, to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?
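In case it helps, here's roughly what I mean as (untested) C; the byte
order and function names are just for illustration, treating hash[0] as
the most significant, leading byte:

  #include <stdint.h>

  /* Recover the real prev-hash: the leading non-zero bytes are extra
   * nonce, so zero them out until the first zero byte is reached. */
  static void strip_prevhash_nonce(uint8_t hash[32])
  {
      for (int i = 0; i < 32 && hash[i] != 0; i++)
          hash[i] = 0;
  }

  /* Add nonce data: with N leading zero bytes available, fill at most
   * the first N-1 of them with non-zero values, always leaving at least
   * one zero byte as the terminator. */
  static void add_prevhash_nonce(uint8_t hash[32], uint64_t nonce)
  {
      int zeros = 0;
      while (zeros < 32 && hash[zeros] == 0)
          zeros++;
      for (int i = 0; i + 1 < zeros && nonce > 0; i++) {
          hash[i] = (uint8_t)(nonce % 255) + 1;  /* base-255 digit, mapped to 1..255 */
          nonce /= 255;
      }
  }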
Cheers,
aj