Eric Lombrozo [ARCHIVE] /
npub1azv…2krq
2023-06-07 15:37:58

📅 Original date posted: 2015-06-15
📝 Original message:

> On Jun 14, 2015, at 9:11 PM, Peter Todd <pete at petertodd.org> wrote:
>
> On Sun, Jun 14, 2015 at 05:53:05PM -0700, Eric Lombrozo wrote:
>> I think the whole complexity talk is missing the bigger issue.
>>
>> Sure, per block validation scales linearly (or quasilinearly…there’s an O(log n) term in there somewhere but it’s probably dominated by linear factors at current levels…asymptotic limits don’t always apply very well to finite systems). And there’s an O(n^2) bandwidth issue.
>>
>> The real issue, though, is validation cost. The n in O(n) here does not represent block size - it represents the size of the entire block chain for every new validator that must be synchronized! It means we have no way to construct short proofs (or at least arguments that are computationally *hard* to forge) without requiring the validator to maintain the complete system state. And currently, there is no mechanism for directly compensating validators.
>
> ...and can there be? The goal of validation after all is finding if a
> mistake has been made, and current production cryptography doesn't have
> any way to prove you have done that honestly. You need "moon math" like
> recursive SNARKS to do that, and it's unknown when they'll be available
> for production usage.
>

While things like zero-knowledge proofs and homomorphic encryption would be awesome, they are not really needed to achieve the objective: an efficient proof that is hard to forge under a decently thought-out security model (i.e. we can make information withholding far more difficult). We can also dramatically improve search times and local storage requirements by doing some of the things you’ve actually proposed, Peter, like shifting the responsibility for maintaining and constructing proofs over to transaction senders and committing proof hashes to the global state. At the very least, the incentives would be far better aligned in such a scenario.
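
For concreteness, here is a minimal sketch of what “committing proof hashes to the global state” could look like, assuming SHA-256 double-hashing and the same kind of Merkle tree Bitcoin already uses for transaction ids. The helper names and the shape of the “proofs” are purely illustrative, not a concrete proposal.

    # Illustrative only: each sender constructs the proof for its own
    # transaction; a single Merkle root over all the proof hashes is what
    # would get committed to the global state.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(leaves):
        """Fold a list of leaf hashes up to a single 32-byte root."""
        if not leaves:
            return h(b"")
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:               # duplicate the last node on odd
                level.append(level[-1])      # levels, as Bitcoin's tx tree does
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # Hypothetical sender-supplied proofs; only their hashes enter the tree.
    sender_proofs = [b"proof-for-tx-1", b"proof-for-tx-2", b"proof-for-tx-3"]
    commitment = merkle_root([h(p) for p in sender_proofs])  # 32 bytes on-chain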

How do we deal with things like the discovery of an invalid proof a couple weeks after it’s already been committed? This is a tricky issue I’ve been giving a lot of thought to recently - but we’ll deal with this topic in a separate thread. :)

> When we say "compensating validators", if we're being honest with
> ourselves what we really mean is the much more boring task of
> compensating servers who are giving us blockchain data. That has nothing
> to do with validation.

If we were to shift the responsibility for constructing proofs over to transaction senders, today's “validators” would indeed become nothing more than compensated servers. Clients would be able to query for proofs and verify them for themselves efficiently.
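
To make the “verify them for themselves” part concrete: a client that already knows the committed root only needs a logarithmic-size branch from the (untrusted) server. This is just a standard Merkle-branch check, sketched here with the same hashing as above; nothing about it is specific to any particular proposal.

    # Sketch: the server hands back a leaf plus its sibling path; the client
    # recomputes the root and compares it to the commitment it already has.
    # The server never has to be trusted, only caught when a branch fails.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_branch(leaf: bytes, branch, index: int, root: bytes) -> bool:
        """O(log n) check of a leaf against a committed Merkle root."""
        node = h(leaf)
        for sibling in branch:
            if index % 2 == 0:
                node = h(node + sibling)     # our node is the left child
            else:
                node = h(sibling + node)     # our node is the right child
            index //= 2
        return node == root

    # Tiny two-leaf example.
    left, right = b"proof-A", b"proof-B"
    root = h(h(left) + h(right))
    assert verify_branch(left, [h(right)], 0, root)
    assert not verify_branch(b"forged", [h(right)], 0, root)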

> A useful task would be to make an SPV archival node implementation that
> did no validation at all, while distributing the blockchain data linked
> to the longest chain. Such an implementation can and should serve SPV
> clients, as this is what their actual security model usually is given
> the lack of authentication of the identity of the server they're
> connecting to. Actually implementing this would be a simple matter of
> patching Bitcoin Core to turn off block validation.
>
>> A full validator that goes offline even for a short period of time takes a while to fully catch up to the rest of the network - and starting up a new validator from scratch will continue to be painful…even for those of us who’ve turned this into routine by now, let alone new nontechnical users.
>
> Concretely, 20MB blocks lead to 20GB/week of blocks. On my 1MB/second
> down internet, turning on my node after a week away would take five
> hours; starting up a new node after two years of 20MB blocks would take
> 23 days - likely longer in practice.
>
> There's serious unsolved and undiscussed devops and development issues
> with this. For instance, after changes to the validation code, it's
> routine to resync/reindex Bitcoin Core to ensure starting up a new node
> actually works. Even now we haven't really come to grips with what
> consistent 1MB blocks looks like from this point of view after a few
> years of usage, let alone another order of magnitude longer sync times.
>

It’s a disaster. Even with 1MB blocks this is already the principal centralization pressure on Bitcoin.
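
For what it’s worth, Peter’s figures above check out on a napkin (within rounding), assuming 20MB blocks every ten minutes, a sustained 1MB/s download, and ignoring validation time entirely:

    # Back-of-the-envelope check of the sync-time numbers quoted above.
    MB_PER_BLOCK = 20
    BLOCKS_PER_DAY = 24 * 6                        # one block per ~10 minutes
    MB_PER_SECOND = 1                              # download bandwidth

    week_mb = MB_PER_BLOCK * BLOCKS_PER_DAY * 7    # ~20,160 MB, i.e. ~20 GB/week
    catch_up_hours = week_mb / MB_PER_SECOND / 3600
    two_years_mb = MB_PER_BLOCK * BLOCKS_PER_DAY * 365 * 2
    full_sync_days = two_years_mb / MB_PER_SECOND / 86400

    print(f"{week_mb / 1000:.0f} GB per week offline")     # ~20 GB
    print(f"{catch_up_hours:.1f} hours to catch up")       # ~5.6 hours
    print(f"{full_sync_days:.0f} days for a fresh sync")   # ~24 days of raw download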

>> Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t fully thought out at the time of the Bitcoin white paper. Besides lacking a good validation security model, practical implementations of it weaken privacy and complicate client implementations substantially…and the worst part, it still doesn’t scale all that well. The validator still has to query every single block (even if filtered) back to the first transaction (which cannot be determined without doing a blockchain scan anyway).
>
> Note how with 20MB blocks it would take up to 1TB of IO per year-synced
> for a bloom-filter-using wallet to sync the blockchain. We already have
> a bloom IO DoS attack issue - what are the consequences of making that
> issue 20x worse? Nobody has analysed it yet.
>

We clearly need better data structures and algorithms. This talk of bigger blocks seems so petty by comparison, TBH.
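
For reference, the 1TB figure above follows from the same sort of napkin arithmetic: even a bloom-filtered connection drags the serving node through every block, so the IO per year of chain synced is roughly:

    # Rough estimate of the per-client block IO behind a bloom-filtered sync,
    # assuming 20MB blocks and one block every ~10 minutes.
    MB_PER_BLOCK = 20
    BLOCKS_PER_YEAR = 24 * 6 * 365                 # ~52,560 blocks

    io_per_year_mb = MB_PER_BLOCK * BLOCKS_PER_YEAR
    print(f"~{io_per_year_mb / 1e6:.2f} TB of block IO per year-synced")  # ~1.05 TB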

> --
> 'peter'[:-1]@petertodd.org
> 0000000000000000127ab1d576dc851f374424f1269c4700ccaba2c42d97e778

Author Public Key
npub1azvhdrf9fu6n0tm7yez4j6zcxcedp2ct6nrcq3z74naqs7kgpk8s5t2krq