Event JSON
{
  "id": "8a01c076166feb52241c9010778a5e088c6a9ffcc063a79d29ecdc57f794df3b",
  "pubkey": "310811d70de0b73d09c6293869015c592e36ccc190aaa504d8d11f7777baa2b8",
  "created_at": 1731156720,
  "kind": 1,
  "tags": [
    [
      "p",
      "23f3b5a9df0c322c4f66590614c620cf90e2538720a106335c75239dbed152fc",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "5aeb250b3075a12bd05e16c8a3c40da91a553fa92164a39915a3a0615fe51864",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "d52a6984372fbfc2df1146b6d3a1ed6a62a5e86e558154fe2608da11b79edb68",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/kero/statuses/113453086812966996",
      "activitypub"
    ]
  ],
"content": "nostr:npub1y0emt2wlpsezcnmxtyrpf33qe7gwy5u8yzssvv6uw53em0k32t7q7smm9n I'am asking this because right now, in Meilisearch, we want to read the content in LMDB because it is already sorted (and why spent time sorting the entries by ourselves when LMDB already did it?). So, we collect all data pointers and associate them to the keys to then read them in parallel. Same technique as this one: https://blog.kerollmops.com/multithreading-and-memory-mapping-refining-ann-performance-with-arroy#reading-the-user-item-nodes-from-different-threads",
"sig": "6532c135beb1c9ae4153cf56c38fc811cf9a4fa4e775e76819f91ded3dfb058420ecda23990a3a6c062ed16dd5927f3341e3edbfae48d70589665e6bca3a2d7a"
}
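
The content above describes a two-pass pattern: iterate the LMDB database once to get the entries already sorted by key, collect the value pointers, and then dereference them from several threads. The following is a minimal Rust sketch of that pattern, not Meilisearch's actual code; it assumes the heed LMDB wrapper (~0.20, where opening the environment is unsafe and the raw byte codec is types::Bytes) plus rayon, and the path and unnamed database are illustrative.

// Sketch: read sorted LMDB entries once, then process them in parallel.
// Assumptions: heed ~0.20, rayon, an existing environment at "my-index.mdb".
use heed::types::Bytes;
use heed::{Database, EnvOpenOptions};
use rayon::prelude::*;

fn main() -> heed::Result<()> {
    // Open the memory-mapped LMDB environment (unsafe per heed's API contract).
    let env = unsafe { EnvOpenOptions::new().open("my-index.mdb")? };
    let rtxn = env.read_txn()?;

    // The unnamed database; LMDB's B-tree returns entries sorted by key.
    let db: Database<Bytes, Bytes> =
        env.open_database(&rtxn, None)?.expect("database must exist");

    // Single-threaded pass: collect the key/value slices. They are plain
    // &[u8] views into the memory map, valid while `rtxn` is alive, so
    // nothing is copied and nothing needs re-sorting.
    let entries: Vec<(&[u8], &[u8])> =
        db.iter(&rtxn)?.collect::<Result<_, _>>()?;

    // Parallel pass: the borrowed slices are Send + Sync, so rayon worker
    // threads can read them straight from the memory map while the read
    // transaction stays open on this thread.
    entries.par_iter().for_each(|(key, value)| {
        // ... decode / process each entry here ...
        let _ = (key, value);
    });

    Ok(())
}

The point of the pattern is that LMDB already keeps entries ordered in its B-tree, so one cursor pass yields them in key order, and because the collected values are just slices into the memory map, fanning the reads out to worker threads involves no copying.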