Alex Morcos [ARCHIVE] on Nostr:
📅 Original date posted:2014-10-28
📝 Original message: Sorry, perhaps I misinterpreted that question. The estimates will be
dominated by the prevailing transaction rates initially, so the estimates
you get for something like "what is the least I can pay and still be 90%
sure I get confirmed in 20 blocks" won't be insane, but they will still be
way too conservative. I'm not sure what you meant by reasonable. You
won't get the "correct" answer of something significantly less than 40k
sat/kB for quite some time; since the half-life of the decay is 2.5 days,
expect it to take a couple of days. And in fact, even in the steady state,
the new code will still return a much higher rate than the existing code,
say 10k sat/kB instead of 1k sat/kB, but that's just a result of the
sorting the existing code does and the fact that no one places transactions
with such a small fee. To correctly give such low answers, the new code
requires that those super-low-feerate transactions occur frequently enough,
but the bar for enough data points in a feerate bucket is pretty low: an
average of 1 tx per block. The bar can be made lower at the expense of a
bit of noisiness in the answers; for instance, I had to make the bar
significantly lower for priorities, because far fewer transactions are
confirmed on the basis of priority. I'm certainly open to tuning some of
these variables.
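
To make that concrete, here is a rough Python sketch of how the decayed
bucket stats and the data-point bar could work (illustrative only: the real
code is C++ in Bitcoin Core, and every name below is made up):

    # 144 blocks/day * 2.5 days ~= 360 blocks, so the per-block decay
    # factor for a 2.5-day half-life is 0.5**(1/360) ~= 0.99808.
    DECAY = 0.5 ** (1.0 / 360.0)

    class FeerateBucket:
        """Decayed statistics for one feerate range (names are assumptions)."""
        def __init__(self):
            self.tx_count = 0.0    # decayed count of txs seen in this bucket
            self.confirmed = 0.0   # decayed count confirmed within the target
            self.fee_sum = 0.0     # decayed sum of feerates, for the average

        def on_new_block(self):
            # Called once per block: old data points fade out exponentially.
            self.tx_count *= DECAY
            self.confirmed *= DECAY
            self.fee_sum *= DECAY

        def avg_feerate(self):
            return self.fee_sum / self.tx_count if self.tx_count else 0.0

        def has_enough_data(self, txs_per_block=1.0):
            # A steady rate of r txs/block accumulates to ~ r / (1 - DECAY)
            # decayed points (~520 for r = 1), so that is the bar
            # corresponding to "an average of 1 tx per block".
            return self.tx_count >= txs_per_block / (1.0 - DECAY)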
On Tue, Oct 28, 2014 at 10:30 AM, Alex Morcos <morcos at gmail.com> wrote:
> Oh, in just a couple of blocks it'll give you a somewhat reasonable
> estimate when asking about every confirmation count other than 1, but it
> could take several hours for it to have enough data points to give you a
> good estimate for getting confirmed in one block (because the prevalent
> feerate is not always confirmed in 1 block >80% of the time). Essentially
> what it does is combine buckets until it has enough data points, so after
> the first block it might be treating all of the txs as belonging to the
> same feerate bucket, but since the answer it returns is the "median"*
> feerate for that bucket, it's a reasonable answer right from the get-go.
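>
> In sketch form, that combining step looks something like this (continuing
> the illustrative Python sketch above; not the actual implementation):
>
>     def combine_until_sufficient(buckets, min_points):
>         # Walk the buckets (ordered by feerate) and merge them until the
>         # combined group has enough decayed data points to be trusted.
>         group, txs, confirmed = [], 0.0, 0.0
>         for b in buckets:
>             group.append(b)
>             txs += b.tx_count
>             confirmed += b.confirmed
>             if txs >= min_points:
>                 break
>         success = confirmed / txs if txs else 0.0
>         return group, success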
>
> Do you think it would make sense to make that 90% number an argument to
> the RPC call? For instance, there could be a default (I would use 80%),
> but then you could specify a different certainty if you required one. It
> wouldn't require any code changes and might make it easier for people to
> build more complicated logic on top of it.
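>
> Purely as an illustration, the call could then look something like this
> (hypothetical name and signature, not an existing RPC):
>
>     def estimate_feerate(stats, nblocks, threshold=0.80):
>         """stats: {target: [(feerate, success_rate), ...]}, sorted by
>         feerate. Return the cheapest feerate meeting the certainty."""
>         for feerate, success in stats.get(nblocks, []):
>             if success >= threshold:
>                 return feerate
>         return None  # not enough data to answer at this certainty
>
>     # e.g. estimate_feerate(stats, 20, threshold=0.90)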
>
> *It can't actually track the median, but it identifies which of the
> smaller actual buckets the median would have fallen into and returns the
> average feerate for that median bucket.
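>
> A sketch of that median-bucket trick (again illustrative, building on the
> bucket sketch above):
>
>     def median_bucket_feerate(group):
>         # The individual feerates were never kept, so a true median is
>         # impossible; find the small bucket the median data point falls
>         # into and return that bucket's decayed average feerate.
>         total = sum(b.tx_count for b in group)
>         running = 0.0
>         for b in group:
>             running += b.tx_count
>             if running >= total / 2.0:
>                 return b.avg_feerate()
>         return group[-1].avg_feerate()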
>
> On Tue, Oct 28, 2014 at 9:59 AM, Gavin Andresen <gavinandresen at gmail.com>
> wrote:
>
>> I think Alex's approach is better; I don't think we can know how much
>> better until we have a functioning fee market.
>>
>> We don't have a functioning fee market now, because fees are hard-coded.
>> So we get "pay the hard-coded fee and you'll get confirmed in one or two or
>> three blocks, depending on which miners mine the next three blocks and what
>> time of day it is."
>>
>> git HEAD code says you need a fee of 100,000 satoshis/kB to be pretty
>> sure you'll get confirmed in the next block. That looks about right with
>> Alex's real-world data (if we take "90% chance" as 'pretty sure you'll get
>> confirmed'):
>>
>> Fee rate: 100000   Avg blocks to confirm: 1.09
>> NumBlocks: % confirmed   1: 0.901   2: 1.0   3: 1.0
>>
>> My only concern with Alex's code is that it takes much longer to get
>> 'primed' -- Alex, if I started with no data about fees, how long would it
>> take to gather enough data for a reasonable estimate of "what is the
>> least I can pay and still be 90% sure I get confirmed in 20 blocks"?
>> Hours? Days? Weeks?
>>
>> --
>> --
>> Gavin Andresen
>>
>
>