mleku on Nostr: not sure if you considered this but running COUNT queries over windows of time that ...
not sure if you considered this, but running COUNT queries over windows of time that are short enough should let you estimate good limit boundaries for those time periods
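a minimal sketch of the idea, assuming a hypothetical `count_events()` helper standing in for a NIP-45-style COUNT query (the timestamps and window sizes here are made up for illustration):

```python
# Estimate how many events land in each fixed-size time window,
# so a client can pick limit boundaries that won't get truncated.

def count_events(events, since, until):
    """Hypothetical COUNT: number of events with created_at in [since, until)."""
    return sum(1 for ts in events if since <= ts < until)

def window_counts(events, start, end, step):
    """Run COUNT over consecutive windows of `step` seconds."""
    return [
        (t, t + step, count_events(events, t, t + step))
        for t in range(start, end, step)
    ]

# toy data: event created_at timestamps
events = [100, 150, 200, 210, 220, 500, 900]
print(window_counts(events, 0, 1000, 250))
# → [(0, 250, 5), (250, 500, 0), (500, 750, 1), (750, 1000, 1)]
```

the dense first window is the signal: any limit below 5 would truncate it, so you'd shrink the window there.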
it reminds me that i currently still just run a query which decodes the events in response to COUNT... when in fact, the search first builds a set of indexes, searches them, and returns the list of derived keys for the actual events. the length of that key list would be the answer, no decoding needed.
you could even do bisection searches: at first you query each half of your window, then keep breaking it down from there to find the smallest window size that stays under the limit, and then you can paginate based on that (ie, it will never exceed a limit once you have that, unless somehow a lot of backdated events get injected)
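a rough sketch of the bisection idea, again with a hypothetical `count_events()` COUNT helper and made-up data: keep halving the window size while any window's count exceeds the limit, then paginate with windows of the size that fits.

```python
def count_events(events, since, until):
    """Hypothetical COUNT: number of events with created_at in [since, until)."""
    return sum(1 for ts in events if since <= ts < until)

def fitting_window(events, since, until, limit):
    """Halve the window size until no window's COUNT exceeds the limit."""
    size = until - since
    while size > 1 and any(
            count_events(events, t, t + size) > limit
            for t in range(since, until, size)):
        size //= 2
    return size

def paginate(events, since, until, limit):
    """Yield pages by sweeping fixed windows of the fitting size."""
    size = fitting_window(events, since, until, limit)
    for t in range(since, until, size):
        page = [ts for ts in events if t <= ts < t + size]
        if page:
            yield page

events = [100, 150, 200, 210, 220, 500, 900]
for page in paginate(events, 0, 1000, limit=3):
    print(page)
```

note the caveat from above: a burst of backdated events injected into an already-measured window can push a page over the limit, so the window size is an estimate, not a guarantee.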
pagination is always a two-step process even with more "sophisticated" database query systems anyway