Doug Hoyte on Nostr:
> If you’re sending binary, base64 only inflates by 33% compared to hex strings’ 100%.
On purely random data, after compression hex is usually only ~10% bigger than base64. For example:
$ head -c 1000000 /dev/urandom > rand
$ alias hex='od -A n -t x1 | sed "s/ *//g"'
$ cat rand |hex|zstd -c|wc -c
1086970
$ cat rand |base64|zstd -c|wc -c
1018226
$ wcalc 1086970/1018226
= 1.06751
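For reference, the raw (pre-compression) sizes come out at roughly 2x and 1.33x the original megabyte (plus some newlines added by od and base64's line wrapping):
$ cat rand |hex|wc -c
$ cat rand |base64|wc -c
So compression closes most of that gap.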
So only ~7% bigger in this case. When data is not purely random, hex often compresses *better* than base64. This is because hex preserves patterns on byte boundaries but base64 does not. For example, look at these two strings post-base64:
$ echo 'hello world' | base64
aGVsbG8gd29ybGQK
$ echo ' hello world' | base64
IGhlbGxvIHdvcmxkCg==
They have nothing in common. Compare to the hex encoded versions:
$ echo 'hello world' | hex
68656c6c6f20776f726c640a
$ echo ' hello world' | hex
2068656c6c6f20776f726c640a
The pattern is preserved; it is just shifted by 2 characters. This means that if "hello world" appears multiple times in the input at different byte offsets, it can show up as up to three different character sequences in base64 (one for each alignment of its offset modulo 3), but always as the same sequence in hex, so the compressor's dictionary is used much more effectively with hex.
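If you want to see this effect on compression itself, here's a rough way to test it (reusing the hex alias from above; the chunk and offset sizes are arbitrary, and I'm not quoting numbers since they'll vary run to run and by zstd version):
$ head -c 100 /dev/urandom > chunk     # a chunk that will repeat many times
$ for i in $(seq 1000); do head -c $((RANDOM % 10)) /dev/urandom; cat chunk; done > rep     # repeat it at varying byte offsets
$ cat rep |hex|zstd -c|wc -c
$ cat rep |base64|zstd -c|wc -c
Because the chunk lands at arbitrary byte offsets, it appears as one repeated string in hex but as up to three different strings in base64, so the hex version should usually compress smaller.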
Since negentropy is mostly (but not entirely) random data like hashes and fingerprints, it's probably a wash. However, hex is typically faster to encode/decode, and it is already used for almost all other fields in the nostr protocol, so on the whole it seems like the best choice.
> Personally, I’d prefer to see the message format specified explicitly as debuggable JSON, if feasible.
This is theoretically possible, but the messages would still be very difficult to interpret/debug, and it would add a lot of bandwidth/CPU overhead.