mleku on Nostr:
well, the serials (monotonic index keys) in badger are 64-bit, and i just use them as-is, handling them as actual uint64 wherever possible, but yeah, the size saving is incredible
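a minimal sketch of what that looks like with badger's sequence API (the path, sequence key, and bandwidth below are arbitrary choices for the example, not the relay's actual values):

```go
package main

import (
	"encoding/binary"
	"log"

	"github.com/dgraph-io/badger/v4"
)

// serialToKey packs a monotonic serial into an 8-byte big-endian key;
// big-endian keeps lexicographic key order equal to numeric order, so
// badger iterates the events in insertion order.
func serialToKey(serial uint64) []byte {
	key := make([]byte, 8)
	binary.BigEndian.PutUint64(key, serial)
	return key
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/eventdb"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// badger hands out monotonic uint64 serials; the bandwidth (1000)
	// only controls how many are leased per disk write.
	seq, err := db.GetSequence([]byte("serial"), 1000)
	if err != nil {
		log.Fatal(err)
	}
	defer seq.Release()

	serial, err := seq.Next()
	if err != nil {
		log.Fatal(err)
	}

	err = db.Update(func(txn *badger.Txn) error {
		// the 8-byte serial is the whole key: no hash, no padding
		return txn.Set(serialToKey(serial), []byte("event payload"))
	})
	if err != nil {
		log.Fatal(err)
	}
}
```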
the encoding for the database was already simple and took very little to add; it's based on the canonical encoding... i need to make a more efficient way to regenerate the ID, which is made from the hash of the canonical form... that hash plus the binary data of the signature kept the data storage format simple, but the substitution complicates that part of it (getting the ID back). i'm gonna just do one thing at a time though; for now it just has to re-serialize after swapping the indexes back out for the keys
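the ID here is the NIP-01 event ID: sha256 over the canonical JSON array [0, pubkey, created_at, kind, tags, content]. a minimal sketch of that regeneration step (the struct and values are illustrative, and note that Go's encoding/json HTML-escapes some characters that NIP-01's canonical form leaves raw, so a real relay serializes this by hand):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"strings"
)

// Event carries just the fields that enter the canonical form;
// illustrative only, not the relay's actual type.
type Event struct {
	PubKey    string
	CreatedAt int64
	Kind      int
	Tags      [][]string
	Content   string
}

// eventID recomputes the ID as sha256 of the canonical array
// [0, pubkey, created_at, kind, tags, content] per NIP-01. this is
// the step that has to run again once the stored index keys have
// been swapped back out for the real pubkeys.
// caveat: encoding/json escapes <, > and & by default, which the
// canonical serialization does not, so treat this as a sketch.
func eventID(ev *Event) (string, error) {
	canonical, err := json.Marshal([]any{
		0, ev.PubKey, ev.CreatedAt, ev.Kind, ev.Tags, ev.Content,
	})
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(canonical)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	ev := &Event{
		PubKey:    strings.Repeat("ab", 32), // dummy 64-char hex pubkey
		CreatedAt: 1700000000,
		Kind:      1,
		Tags:      [][]string{},
		Content:   "hello",
	}
	id, _ := eventID(ev)
	fmt.Println(id) // 64 hex chars
}
```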
but yes, in theory it means the average storage cost per pubkey is around 8-9 bytes, since i'm using hex encoding so the json decode doesn't trip on anything
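the arithmetic, roughly: a full pubkey is 64 bytes of hex in the JSON, while a serial that fits in 32 bits hex-encodes to at most 8 characters, hence the ~8-9 byte average. a quick illustration (the serial value is made up):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// a full pubkey costs 64 bytes of hex text in the JSON
	pubkey := strings.Repeat("ab", 32)

	// the hex-encoded serial that replaces it still parses as a
	// plain hex string, so the json decode doesn't trip on it
	serial := uint64(1234567) // made-up serial
	encoded := strconv.FormatUint(serial, 16)

	fmt.Printf("pubkey: %d bytes, serial: %d bytes\n",
		len(pubkey), len(encoded)) // pubkey: 64 bytes, serial: 6 bytes
}
```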
yeah, the only downside is it does require two searches of the database... but because the pubkey<->index table is relatively small and, once compacted, packed together in a single block of data in the compacted logs, it's still way faster than a complex, generalised compression algorithm
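a minimal sketch of those two lookups against badger (the key prefixes, function names, and table layout here are assumptions for illustration, not the relay's actual schema):

```go
package pubkeytable

import (
	"encoding/binary"

	"github.com/dgraph-io/badger/v4"
)

// hypothetical prefixes for the two directions of the table
var (
	prefixPubToSer = []byte{'p'} // pubkey -> serial
	prefixSerToPub = []byte{'s'} // serial -> pubkey
)

// serialForPubkey is the first search: map a 32-byte pubkey to its
// 8-byte serial, assigning a fresh serial on first sight and writing
// both directions of the table.
func serialForPubkey(db *badger.DB, seq *badger.Sequence, pubkey []byte) (uint64, error) {
	var serial uint64
	err := db.Update(func(txn *badger.Txn) error {
		fwdKey := append(append([]byte{}, prefixPubToSer...), pubkey...)
		item, err := txn.Get(fwdKey)
		if err == nil {
			// already assigned: read the existing serial
			return item.Value(func(val []byte) error {
				serial = binary.BigEndian.Uint64(val)
				return nil
			})
		}
		if err != badger.ErrKeyNotFound {
			return err
		}
		// first sighting: lease the next serial
		serial, err = seq.Next()
		if err != nil {
			return err
		}
		var ser [8]byte
		binary.BigEndian.PutUint64(ser[:], serial)
		if err = txn.Set(fwdKey, ser[:]); err != nil {
			return err
		}
		revKey := append(append([]byte{}, prefixSerToPub...), ser[:]...)
		return txn.Set(revKey, pubkey)
	})
	return serial, err
}

// pubkeyForSerial is the second search, needed when re-serializing an
// event: map a stored serial back to the full pubkey.
func pubkeyForSerial(db *badger.DB, serial uint64) ([]byte, error) {
	var ser [8]byte
	binary.BigEndian.PutUint64(ser[:], serial)
	revKey := append(append([]byte{}, prefixSerToPub...), ser[:]...)
	var pubkey []byte
	err := db.View(func(txn *badger.Txn) error {
		item, err := txn.Get(revKey)
		if err != nil {
			return err
		}
		pubkey, err = item.ValueCopy(nil)
		return err
	})
	return pubkey, err
}
```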