gabe appleton [ARCHIVE] /
npub1ggt…kvay
2023-06-07 15:35:01
in reply to nevent1q…fvjv


📅 Original date posted:2015-05-12
📝 Original message: Yet this holds true under our current assumptions about
the network as well: that it will become a collection of pruned nodes with a
few storage nodes.

A hybrid option makes this better because it spreads the risk rather than
concentrating it in full nodes.
On May 12, 2015 3:38 PM, "Jeff Garzik" <jgarzik at bitpay.com> wrote:

> One general problem is that security is weakened when an attacker can DoS
> a small part of the chain by DoS'ing a small number of nodes - yet the
> impact is a network-wide DoS because nobody can complete a sync.
>
>
> On Tue, May 12, 2015 at 12:24 PM, gabe appleton <gappleto97 at gmail.com>
> wrote:
>
>> Points 0, 1, 3, 4, 5, and 6 can be solved by looking at chunks
>> chronologically, i.e., give the signed (by sender) hash of the first and
>> last block in your range. This is less data-dense than the idea above, but
>> it might work better.
>>
>> That said, this is likely a less secure way to do it. To improve upon
>> that, a node could request a block of random height within that range and
>> verify it, but that violates point 2. And the scheme in itself definitely
>> violates point 7.
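For concreteness, a rough sketch of what that kind of range claim and spot
check might look like, in Python with made-up names (block hashes and the
signature scheme are stubbed out; none of this is an actual Bitcoin P2P
message):

import hashlib
import random

def block_hash(height):
    # Stand-in for looking up the real hash of the block at `height`.
    return hashlib.sha256(("block-%d" % height).encode()).hexdigest()

class RangeClaim:
    """Claim: 'I store every block from first_height through last_height.'"""
    def __init__(self, first_height, last_height, sign):
        self.first_height = first_height
        self.last_height = last_height
        # The sender signs the two endpoint hashes; the signature scheme is
        # left abstract here.
        self.signature = sign(block_hash(first_height) +
                              block_hash(last_height))

def spot_check(claim, fetch_block):
    # Request one block of random height inside the claimed range and verify
    # its hash.  A single check only gives probabilistic assurance.
    height = random.randint(claim.first_height, claim.last_height)
    block = fetch_block(height)
    return block is not None and block["hash"] == block_hash(height)

# Example (signing/fetching functions left to the reader):
# claim = RangeClaim(250000, 260000, sign=my_sign_fn)
# ok = spot_check(claim, fetch_block=my_fetch_fn)

Whether the spot check is worth its extra round trip is exactly the point-2
tension noted above.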
>> On May 12, 2015 3:07 PM, "Gregory Maxwell" <gmaxwell at gmail.com> wrote:
>>
>>> It's a little frustrating to see this just repeated without even
>>> paying attention to the desirable characteristics from the prior
>>> discussions.
>>>
>>> Summarizing from memory:
>>>
>>> (0) Block coverage should have locality; historical blocks are
>>> (almost) always needed in contiguous ranges. Having random peers
>>> with totally random blocks would be horrific for performance, as with
>>> high probability you'd have to hunt down a working peer and make a
>>> connection for each block.
>>>
>>> (1) Block storage on nodes with a fraction of the history should not
>>> depend on believing random peers, because listening to peers can
>>> easily create attacks (e.g. someone could break the network by
>>> convincing nodes to become unbalanced), and it isn't useful -- the
>>> blockchain isn't substantially different for anyone; if you're at the
>>> point of needing to know coverage in order to fill gaps, then something
>>> is wrong. Gaps would be handled by archive nodes, so there is no reason
>>> to increase vulnerability by doing anything but behaving uniformly.
>>>
>>> (2) The decision to contact a node should need O(1) communications,
>>> not just because of the delay of chasing around to find who has
>>> something, but because that chasing process usually makes the process
>>> _highly_ sybil vulnerable.
>>>
>>> (3) The expression of what blocks a node has should be compact (e.g.
>>> not a dense list of blocks) so it can be rumored efficiently.
>>>
>>> (4) Figuring out what block (ranges) a peer has, given its
>>> advertisement, should be computationally efficient.
>>>
>>> (5) The communication about what blocks a node has should be compact.
>>>
>>> (6) The coverage created by the network should be uniform, and should
>>> remain uniform as the blockchain grows; ideally you shouldn't need
>>> to update your state to know what blocks a peer will store in the
>>> future, assuming that it doesn't change the amount of data it's
>>> planning to use. (What Tier Nolan proposes sounds like it fails this
>>> point.)
>>>
>>> (7) Growth of the blockchain shouldn't cause much (or any) need to
>>> refetch old blocks.
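To make (3), (4) and (5) concrete: in the simplest (and clearly inadequate)
case a peer's claim could be nothing more than a starting height and a block
count, which is compact to rumor and O(1) to test. A toy Python sketch with
made-up names follows; note that such a fixed range drifts away from uniform
coverage as the chain grows (point 6) unless the node refetches, which point
7 discourages:

from collections import namedtuple

# A peer's claim is just two small integers, so it rumors cheaply (3, 5)
# and membership testing is a single comparison (4).
Claim = namedtuple("Claim", ["start_height", "block_count"])

def has_block(claim, height):
    # O(1): does a peer advertising `claim` store the block at `height`?
    return claim.start_height <= height < claim.start_height + claim.block_count

# Example: a peer keeping 10,000 blocks starting at height 250,000.
peer = Claim(start_height=250000, block_count=10000)
assert has_block(peer, 255000)
assert not has_block(peer, 265000)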
>>>
>>> I've previously proposed schemes which come close but fail one of the
>>> above.
>>>
>>> (e.g. a scheme based on reservoir sampling that gives uniform
>>> selection of contiguous ranges, communicating only 64 bits of data to
>>> know what blocks a node claims to have, remaining totally uniform as
>>> the chain grows, without any need to refetch -- but it needs O(height)
>>> work to figure out what blocks a peer has from the data it
>>> communicated; or another scheme based on consistent hashes that has
>>> log(height) computation, but sometimes may result in a node needing to
>>> go refetch an old block range it previously didn't store -- creating
>>> re-balancing traffic.)
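As a very rough illustration of the first flavor only (this is not the
actual construction, and unlike it, this naive version can move its
selection as the chain grows, which would force refetching): a seeded
size-1 reservoir sample over candidate start heights lets the whole claim
be a 64-bit seed plus a window size, while recovering the range means
replaying the walk over every height -- the O(height) cost mentioned.
A Python sketch with made-up names:

import random

def window_for(seed, tip_height, window_size):
    # Deterministic given the 64-bit seed, so the seed alone is the claim.
    rng = random.Random(seed)
    start = 0
    # Size-1 reservoir sample over candidate start heights 0..tip_height:
    # candidate i replaces the current choice with probability 1/(i+1),
    # leaving every start height equally likely -- but this loop is exactly
    # the O(height) work needed to decode the claim.
    for i in range(1, tip_height + 1):
        if rng.random() < 1.0 / (i + 1):
            start = i
    return start, min(start + window_size, tip_height)

# The whole advertisement is (seed, window_size); anyone can recompute the
# range, but only by iterating over every height up to the current tip.
print(window_for(seed=0xDEADBEEFCAFEBABE, tip_height=360000, window_size=5000))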
>>>
>>> So far something that meets all those criteria (and/or whatever ones
>>> I'm not remembering) has not been discovered, but I don't really think
>>> much time has been spent on it. I think it's very likely possible.
>>>
>>>
>
>
> --
> Jeff Garzik
> Bitcoin core developer and open source evangelist
> BitPay, Inc. https://bitpay.com/
>
Author Public Key
npub1ggtl2ytnafdfz7qgh0t5vt7uy4z43y6vg80c6e4gkpws24sg00xshzkvay