Event JSON
{
"id": "959fe6845754e41ae3d3f1c356679fb7e82fff99537bf2ae88ac055bb69ceb55",
"pubkey": "70ffc8f2ca600f8735f06b2655927ed5967c6d423a2e0e707ad35e444e4ed62c",
"created_at": 1690915616,
"kind": 1,
"tags": [
[
"p",
"ffb899cf38113ccf741c4a79768af2b574f6ba7a6458e0dce5671d8a1cf8ccdf",
"wss://relay.mostr.pub"
],
[
"p",
"f9d348f7e2a160b586c3415c45c2b86074240668450f90ac3a2df4462e698e56",
"wss://relay.mostr.pub"
],
[
"p",
"7c1428df4e9526bbea21c3f70d137f12f65fc1c029add5f8f31987e5897ca933",
"wss://relay.mostr.pub"
],
[
"p",
"733bdc96628508d1329f16c0026da2c769a07eda859247f0e0d2ee69bf5c84fb",
"wss://relay.mostr.pub"
],
[
"p",
"3ca546aff8441dd6a767c1c72cfeff7830df85716bda7ba4adf6a5b2e50af822",
"wss://relay.mostr.pub"
],
[
"p",
"c5a914089292822ab15093937cc1253e820355a0321a8ede972c221315f76d8d",
"wss://relay.mostr.pub"
],
[
"e",
"d93582d0bf1f6901448594268fc4b6f0382c42f1605b6b16568f589ab961d610",
"wss://relay.mostr.pub",
"reply"
],
[
"mostr",
"https://hachyderm.io/users/spacer/statuses/110815845834201943"
]
],
"content": "nostr:npub1l7ufnneczy7v7aquffuhdzhjk460dwn6v3vwph89vuwc588cen0sfv4up2 nostr:npub1l8f53alz59sttpkrg9wyts4cvp6zgpngg58eptp69h6yvtnf3etqr89lph nostr:npub10s2z3h6wj5nth63pc0ms6ymlztm9lswq9xkat78nrxr7tztu4yesckljew I've heard of someone running a 4bit quantized Llama llm on an 8gb raspberry pi CPU. We really should be looking more into low power ML, I know it's early days still",
"sig": "6542379f1e5e28e58afc16944cf76fe73101d13711ae0626580dea78a0458e0ab36fdeaeec06269705e090690d37dd38d08fdf56cae1465bdd10792587404269"
}
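
Because the "id" field is defined by NIP-01 as a hash over the other fields, the event above can be checked locally. Below is a minimal sketch in Python, assuming the JSON above has already been parsed into a dict named event (for example with json.loads); it relies only on the standard library, and assumes the standard json module's escaping matches NIP-01's serialization rules for content like this note (true here, since the note contains no unusual control characters). Verifying "sig" would additionally require a BIP-340 Schnorr library, which is not shown.

import hashlib
import json

def compute_event_id(event: dict) -> str:
    # NIP-01: the id is the SHA-256 of the UTF-8, whitespace-free JSON
    # serialization of [0, pubkey, created_at, kind, tags, content].
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# compute_event_id(event) should equal the "id" field shown above.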