I agree about the DoS and scaling risks for websockets. Publishing data about spent proofs intermittently sounds more realistic to me, but I don't see any point in using Nostr to do that. A cashu mint is itself an always-on server already. What is gained by pushing that data to other people's servers besides offloading the DoS/scaling problems onto someone else? Instead the mint should just make that data available directly to clients via a new API endpoint.
Publishing one data point per spent proof wouldn't scale well - the mint needs some kind of accumulator or filter which compresses the information about spent proofs while allowing client-side validation. This seems like a use case for Golomb-coded sets (GCS), as used in BIP158 compact filters.
https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki
The mint computes a GCS which compresses all proofs spent recently, and updates the filter regularly. If the mint keeps the hashed set members sorted in memory (in a balanced tree or similar ordered structure), it could easily perform this update atomically for every single swap/melt without much of a performance hit - insertion would be O(log n).
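For concreteness, here's a rough sketch of what the mint side could look like in Python, borrowing the P and M parameters from BIP158. The class name, the serialization layout, and the use of truncated SHA-256 in place of BIP158's SipHash are all illustrative assumptions, not a spec:

```python
import bisect
import hashlib

P = 19        # Golomb-Rice remainder width, same value BIP158 uses
M = 784931    # target false-positive rate of roughly 1 in M

def hash64(item: bytes) -> int:
    """64-bit hash of a spent-proof identifier (e.g. the proof secret)."""
    return int.from_bytes(hashlib.sha256(item).digest()[:8], "big")

def map_to_range(h: int, n: int) -> int:
    """Reduce a 64-bit hash into [0, n*M); monotone, so sort order is preserved."""
    return (h * n * M) >> 64

class SpentProofFilter:
    def __init__(self):
        self.hashes = []  # sorted 64-bit hashes of recently spent proofs

    def add(self, proof_secret: bytes) -> None:
        # Sorted insert on every swap/melt. A balanced tree would make this a
        # true O(log n) update; bisect on a Python list is enough for a sketch.
        bisect.insort(self.hashes, hash64(proof_secret))

    def serialize(self) -> bytes:
        """Golomb-Rice encode the sorted, range-reduced hashes as deltas."""
        n = len(self.hashes)
        bits, prev = [], 0
        for h in self.hashes:
            delta = map_to_range(h, n) - prev
            prev += delta
            q, r = delta >> P, delta & ((1 << P) - 1)
            bits.extend([1] * q + [0])                             # unary quotient
            bits.extend((r >> i) & 1 for i in reversed(range(P)))  # P-bit remainder
        bits.extend([0] * (-len(bits) % 8))                        # pad to whole bytes
        body = bytes(
            int("".join(map(str, bits[i:i + 8])), 2)
            for i in range(0, len(bits), 8)
        )
        return n.to_bytes(4, "big") + body
```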
If the filter gets too big, the mint compresses and archives it, and starts building a new one from scratch, or else evicts older GCS members (spent proofs) with a FIFO queue.
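Either policy is cheap on top of the structure above. An archive-and-restart variant might look like this (the size threshold and archive shape are arbitrary placeholders):

```python
MAX_FILTER_ENTRIES = 1_000_000  # arbitrary rotation threshold

class FilterRotation:
    def __init__(self):
        self.current = SpentProofFilter()
        self.archived = []  # older serialized filters, oldest first

    def record_spend(self, proof_secret: bytes) -> None:
        self.current.add(proof_secret)
        if len(self.current.hashes) >= MAX_FILTER_ENTRIES:
            # Freeze the full filter and start fresh. Alternatively the mint
            # could drop the oldest hashes FIFO-style instead of archiving.
            self.archived.append(self.current.serialize())
            self.current = SpentProofFilter()
```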
Clients can download and check the compact GCS filter at any time to see if their proofs have been spent recently, without revealing which ecash notes they're curious about. There's a chance for false positives but no chance of false negatives, so you might think your token was spent when it wasn't, but if your token was spent you'll always know about it.
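The client side is just a local decode-and-lookup, reusing P, hash64 and map_to_range from the mint-side sketch and the same made-up 4-byte-count framing:

```python
def decode_filter(blob: bytes) -> tuple[int, set[int]]:
    n = int.from_bytes(blob[:4], "big")
    bits = [(byte >> i) & 1 for byte in blob[4:] for i in reversed(range(8))]
    values, pos, prev = set(), 0, 0
    for _ in range(n):
        q = 0
        while bits[pos]:          # unary quotient: count 1s until the terminating 0
            q += 1
            pos += 1
        pos += 1
        r = 0
        for _ in range(P):        # fixed-width P-bit remainder
            r = (r << 1) | bits[pos]
            pos += 1
        prev += (q << P) | r
        values.add(prev)
    return n, values

def possibly_spent(filter_blob: bytes, proof_secret: bytes) -> bool:
    # Purely local query: the mint never learns which proof was checked.
    # True may be a false positive (confirm with a direct state check before
    # discarding the proof); False is definitive for the window this filter covers.
    n, values = decode_filter(filter_blob)
    return n > 0 and map_to_range(hash64(proof_secret), n) in values
```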