Benedict Chan [ARCHIVE] on Nostr:
📅 Original date posted: 2015-07-23
📝 Original message: On Thu, Jul 23, 2015 at 1:52 PM, Eric Lombrozo via bitcoin-dev
<bitcoin-dev at lists.linuxfoundation.org> wrote:
> On Thu, Jul 23, 2015 at 3:14 PM, Eric Lombrozo <elombrozo at gmail.com> wrote:
>>
>> Mainstream usage of cryptocurrency will be enabled primarily by direct
>> party-to-party contract negotiation… with the blockchain used primarily
>> as a dispute resolution mechanism. The block size isn't about scaling but
>> about supply and demand of finite resources. As demand for block space
>> increases, we can address it either by increasing computational resources
>> (block size) or by increasing fees. But to do the former we need a way to
>> offset the increase in cost by making sure that those who contribute said
>> resources have an incentive to do so.
>
>
> I should also point out, improvements in hardware and network infrastructure
> can also reduce costs… and we could very well have a model where resource
> requirements can be increased as technology improves. However, currently,
> the computational cost of validation is clearly growing far more quickly
> than the cost of computational resources is going down. There are
> 7,000,000,000 people in the world. Payment networks in the developed world
> already regularly handle thousands of transactions a second. Even with
> highly optimized block propagation, pruning, and signature validation, we're
> still many orders of magnitude shy of being able to satisfy demand. To
> achieve mainstream adoption, we'll have to pass through a period of
> quasi-exponential growth in userbase (until the market saturates… or until
> the network resources run out). Unless we're able to achieve a validation
> complexity of O(polylog n) or better, it's not a matter of having a negative
> attitude about the prospects… it's just math. Whether we have 2 MB or 20 MB
> or 100 MB blocks (even assuming the above-mentioned optimizations, and that
> the computational resources exist and are willing to handle it), we will not
> be able to satisfy demand if we insist on requiring global validation for
> all transactions.
>
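A rough back-of-envelope sketch of the throughput math above (the ~250-byte
average transaction size and 600-second block interval are ballpark
assumptions, not consensus values):

    # Sustained transactions per second at a given block size.
    AVG_TX_BYTES = 250        # assumed average transaction size
    BLOCK_INTERVAL_S = 600    # one block every ten minutes on average

    def tps(block_size_mb):
        """Transactions per second that blocks of this size can carry."""
        return block_size_mb * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S

    for mb in (1, 2, 20, 100):
        print(f"{mb:>3} MB blocks -> ~{tps(mb):,.0f} tx/s")
    # 1 MB -> ~7 tx/s, 100 MB -> ~667 tx/s: even 100 MB blocks stay below
    # the thousands of tx/s existing payment networks already handle.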
Scaling the network will come in the form of a combination of many
optimizations. Just because we do not know for sure how to eventually
serve 7 billion people does not mean we should make decisions on
global validation that impact our ability to serve the current set of
users.
Also, blocking a change because it's "more important to address issues
such as..." other improvements only slows the discussion down further.
I believe an increase will not prevent the development of the other
improvements we need - on the contrary, the sooner we can get past
the limit (which, as you agree, needs to be changed at some point),
the sooner we can get back to work.
>
> On Jul 23, 2015, at 1:26 PM, Jorge Timón <jtimon at jtimon.cc> wrote:
>
> On Thu, Jul 23, 2015 at 9:52 PM, Jameson Lopp via bitcoin-dev
> <bitcoin-dev at lists.linuxfoundation.org> wrote:
>
> Running a node certainly has real-world costs that shouldn't be ignored.
> There are plenty of advocates who argue that Bitcoin should strive to keep
> it feasible for the average user to run their own node (as opposed to
> Satoshi's vision of beefy servers in data centers). My impression is that
> even most of these advocates agree that it will be acceptable to eventually
> increase block sizes as resources become faster and cheaper because it won't
> be 'pricing out' the average user from running their own node. If this is
> the case, it seems to me that we have a problem given that there is no
> established baseline for the acceptable performance / hardware cost
> requirements to run a node. I'd really like to see further clarification
> from these advocates around the acceptable cost of running a node and how we
> can measure the global reduction in hardware and bandwidth costs in order to
> establish a baseline that we can use to justify additional resource usage by
> nodes.
>
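A sketch of what such a baseline could look like: a limit indexed to an
assumed annual decline in hardware and bandwidth costs, so that the real
cost of running the baseline node stays roughly constant (the 15%/year rate
and the 1 MB starting point are illustrative assumptions, not measured or
proposed values):

    # Hypothetical cost-indexed block size limit.
    BASE_YEAR = 2015
    BASE_LIMIT_MB = 1.0
    ANNUAL_COST_DECLINE = 0.15  # assumed yearly drop in resource costs

    def max_block_size_mb(year):
        """Limit that grows as the baseline node's resource costs fall."""
        years = year - BASE_YEAR
        return BASE_LIMIT_MB / (1.0 - ANNUAL_COST_DECLINE) ** years

    for y in (2015, 2020, 2025):
        print(y, round(max_block_size_mb(y), 2), "MB")  # 1.0, 2.25, 5.08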
>
> Although I don't have a concrete proposal myself, I agree that
> without any common notion of what the "minimal target hardware"
> looks like, it is very difficult to discuss other things that depend
> on it.
> If there's data showing that a 100 USD Raspberry Pi with a 1 Mbps
> connection in, say, India (I actually have no idea about internet
> speeds there) is a viable full node at block size X, then I don't
> think anybody can reasonably oppose raising the block size to X, and
> such a hardfork could well be uncontroversial.
> I'm exaggerating with ultra-low specifications, but it's just an
> example to illustrate your point.
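One way to sanity-check that kind of claim is to compare the steady-state
bandwidth a block size X implies against the target hardware's link speed
(the 5x relay-overhead multiplier below is an illustrative assumption):

    # Can a link of a given speed keep up with blocks of size X?
    BLOCK_INTERVAL_S = 600
    RELAY_OVERHEAD = 5.0  # assumed bytes on the wire per byte of block data

    def required_mbps(block_size_mb):
        """Average download bandwidth needed to stay synced, in Mbps."""
        return block_size_mb * 8 * RELAY_OVERHEAD / BLOCK_INTERVAL_S

    def viable(block_size_mb, link_mbps):
        return required_mbps(block_size_mb) <= link_mbps

    print(viable(1, 1.0))    # True:  ~0.07 Mbps needed on a 1 Mbps link
    print(viable(100, 1.0))  # False: ~6.7 Mbps needed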
> There was a thread about formalizing such "minimum hardware
> requirements", but I think the discussion simply ended there:
> - Let's do this
> - Yeah, let's do it
> - +1, let's have concrete values, I generally agree.
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>