Event JSON
{
  "id": "6dbb227290e7199bc782d2cbebd20aa74599e1ac030447e62c00dd278740d92d",
  "pubkey": "5569b60d620748da9ace30dfcd04b04e50b4a28deced8960bd4c70d5eaab1f40",
  "created_at": 1703177576,
  "kind": 1,
  "tags": [
    ["p", "9d982b6d5257f77aba65b8be6ce556f266799e9571b5b5cb1a2abcca61ccec4f", "wss://relay.mostr.pub"],
    ["p", "2dda13a4519e36ffe536a92838c2b9bd209b57bf9868ec50ec49be86bc966c4c", "wss://relay.mostr.pub"],
    ["e", "b1bd4e8e677eb08ed22f5154244f9fdb0c47aeecc256e8a1a387cc16f4be5e8e", "wss://relay.mostr.pub", "reply"],
    ["content-warning", "GPT/LLM rant"],
    ["proxy", "https://layer8.space/users/necrophcodr/statuses/111619445657427523", "activitypub"]
  ],
  "content": "nostr:npub1nkvzkm2j2lmh4wn9hzlxee2k7fn8n854wx6mtjc6927v5cwva38snfcnfh there's no guarantees that your bog standard programs will evaluate to something useful either. Even if they seemingly do it all the time, it's all statistics anyway. And plain text happens to also be the universally usable format. If you want a LLM that works with specific types of data, then you can still make it work with it in a structured way too. If you're reusing existing models, they'll be biased on their training data and goals too, just like any software ever.",
  "sig": "3eeb6f317af8001906ea1687766675c4e222c335b5607d98fd29bc1083fcd8c4844840b9bfdf90580e42f1a3b8489bef35fc4ea8e56388dd08b03079535aaf8a"
}
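A minimal sketch of how the `id` field above is derived, assuming the NIP-01 rule: the id is the SHA-256 of the compact JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]`. The event data is copied from the JSON above; if Python's compact `json.dumps` output matches the serializer that signed this event byte-for-byte, the printed hash should equal the `id` shown.

```python
import hashlib
import json

# Event fields copied from the JSON above (sig omitted; it is not part of the id).
event = {
    "pubkey": "5569b60d620748da9ace30dfcd04b04e50b4a28deced8960bd4c70d5eaab1f40",
    "created_at": 1703177576,
    "kind": 1,
    "tags": [
        ["p", "9d982b6d5257f77aba65b8be6ce556f266799e9571b5b5cb1a2abcca61ccec4f", "wss://relay.mostr.pub"],
        ["p", "2dda13a4519e36ffe536a92838c2b9bd209b57bf9868ec50ec49be86bc966c4c", "wss://relay.mostr.pub"],
        ["e", "b1bd4e8e677eb08ed22f5154244f9fdb0c47aeecc256e8a1a387cc16f4be5e8e", "wss://relay.mostr.pub", "reply"],
        ["content-warning", "GPT/LLM rant"],
        ["proxy", "https://layer8.space/users/necrophcodr/statuses/111619445657427523", "activitypub"],
    ],
    "content": "nostr:npub1nkvzkm2j2lmh4wn9hzlxee2k7fn8n854wx6mtjc6927v5cwva38snfcnfh there's no guarantees that your bog standard programs will evaluate to something useful either. Even if they seemingly do it all the time, it's all statistics anyway. And plain text happens to also be the universally usable format. If you want a LLM that works with specific types of data, then you can still make it work with it in a structured way too. If you're reusing existing models, they'll be biased on their training data and goals too, just like any software ever.",
}

def event_id(ev: dict) -> str:
    # NIP-01 serialization: the fixed-order array, compact (no whitespace), UTF-8.
    payload = json.dumps(
        [0, ev["pubkey"], ev["created_at"], ev["kind"], ev["tags"], ev["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

print(event_id(event))  # 64-char lowercase hex digest
```

Verifying the `sig` field additionally requires a BIP-340 Schnorr check of the id against `pubkey`, which needs a secp256k1 library and is not sketched here.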