Angel Leon [ARCHIVE] on Nostr:

Original date posted: 2015-08-17
Original message:

I've been sharing a similar solution for the past two weeks. I think 2016
blocks is too long a wait; instead we should look at the mean block size
over the last 60-120 minutes and avert any crisis caused by transaction
spikes that could well come from organic use of the network (Madonna sells
her next tour's tickets for Bitcoin, the OpenBazaar network starts working
as imagined, XYZ startup really takes off and succeeds in a couple of major
cities after a major PR push).
Pseudocode in Python:
https://gist.github.com/gubatron/143e431ee01158f27db4
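As a rough illustration of that trailing-window measurement, here is a
minimal Python sketch; the block interface (objects with .timestamp and
.size attributes) is an assumption made for illustration, not the gist's
actual code:

import time

def mean_recent_block_size(blocks, window_secs=2 * 60 * 60):
    """Mean size (bytes) of blocks mined within the trailing time window."""
    cutoff = time.time() - window_secs
    recent = [b.size for b in blocks if b.timestamp >= cutoff]
    return sum(recent) / len(recent) if recent else 0.0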
My idea stems from a simple scalability metric that affects real users and
the desire to use Bitcoin: the waiting time to get your transactions
confirmed on the blockchain. Anything past 45 minutes to an hour should be
unacceptable.
Initially I wanted to measure the mean time for a transaction to go from
being sent by the user (initial broadcast into mempools) to being
effectively confirmed on the blockchain, say for 2 blocks (15-20 minutes
being acceptable).
When blocks get full, people start waiting unacceptable times for their
transactions to come through unless they adjust their fees. The idea is to
avoid that situation at all costs and keep the network churning at the full
extent of its capabilities, without pretending that any particular size
will be right at some point in time. Nobody can predict the future, and
nobody can predict real organic usage peaks on an open financial network;
not all sustained spikes will come from spammers. They will come from
real-world use as more and more people think of great uses for Bitcoin.
I presented this idea of measuring the mean wait time for transactions and
was told there is no way to reliably measure such a number: there is no
consensus while transactions are still in the mempool, and wait times could
be manipulated. Such an idea would have to add new timestamp fields to
transactions, or include the median wait time in the block header (too
complex, with additional storage costs).
This is an iteration on the next best thing, one I believe we can all agree
is measured 100% accurately: block size. Full blocks are the reason many
transactions have to wait in the mempool, so we should be able to use the
mean size of recent blocks to determine whether there is a legitimate need
to increase or reduce the maximum block size.
The idea is simple. If blocks are starting to fill past a certain
threshold, we double the block size limit starting with the next block. If
blocks remain within a healthy bound, transaction wait times should stay as
expected for everyone on the network. If blocks are not getting that full
and the mean falls below a certain threshold, we halve the maximum block
size allowed until we reach the level we need. As with hashing difficulty,
demand is something you can't predict, so no fixed or predicted limits
should be established. A sketch of this adjustment rule follows.
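Here is a minimal Python sketch of that rule; the thresholds and the 1 MB
floor are illustrative assumptions, since the original post does not pin
down specific numbers:

GROW_THRESHOLD = 0.90    # mean usage above 90% of the limit: double
SHRINK_THRESHOLD = 0.40  # mean usage below 40% of the limit: halve
FLOOR = 1000000          # never shrink below the original 1 MB limit

def next_max_block_size(current_limit, recent_block_sizes):
    """Max block size to enforce starting with the next block."""
    mean_size = sum(recent_block_sizes) / len(recent_block_sizes)
    usage = mean_size / current_limit
    if usage > GROW_THRESHOLD:
        return current_limit * 2               # blocks filling up: double
    if usage < SHRINK_THRESHOLD:
        return max(current_limit // 2, FLOOR)  # demand fell: halve, floored
    return current_limit                       # healthy bounds: keep as-is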