Event JSON
{
  "id": "5dcb559c157cded4d49e59dce0df128c0690a2830a600ee6c950bf2919198d01",
  "pubkey": "9a64dd44256e6741e56390a24c93311b2f8fe69dd81379b18b58fb9fec304a83",
  "created_at": 1736349929,
  "kind": 1,
  "tags": [
    [
      "p",
      "5fc55304c9e1a0df2a271a4300440dafed05b44ff8b9badb28a74d633fa464f7",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "77437e7e93b9c87e82e1ea025630c7e8ba448ad54aed505275ec5e6c584ad80d",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "ba7e34a3627cd7ebb3ce7879ce42bc6db2402983ed690a126cb261ae4cbf3328",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/djspiewak/statuses/113793429005189008",
      "activitypub"
    ]
  ],
  "content": "nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqtlz4xpxfuxsd7238rfpsq3qd4lkstdz0lzum4keg5axkx0ayvnmshszpm2 Fwiw, I'm mostly nitpicking methodology here. The broad conclusion still clearly holds: large models are very very power-hungry. Now, that can be justified in some cases (I have another thread from yesterday about how DLSS4 shows a case where inference results in less energy usage than classical computation by a significant margin), but I'm quite convinced search is not at all one of those problems.",
  "sig": "38ee18ad471f612d7109d2065469c56c9289e6dd97abb84851093680cf8c38558b9bd55e17d9e051b553ad664c25ae2b437098c3ead577da5b58e5b282f80673"
}
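The `id` field above is not arbitrary: per NIP-01, it is the lowercase hex SHA-256 of the canonical serialization `[0, pubkey, created_at, kind, tags, content]` with no whitespace. A minimal sketch of that computation (the function name `nostr_event_id` is illustrative, not from any particular library):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute a Nostr event id per NIP-01: the lowercase hex SHA-256
    of the JSON array [0, pubkey, created_at, kind, tags, content],
    serialized with no whitespace and with non-ASCII characters kept raw."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no spaces between tokens
        ensure_ascii=False,     # UTF-8 characters verbatim, not \uXXXX
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

Applied to the fields of the event above, this should reproduce its `id`. The `sig` field is a separate BIP-340 Schnorr signature over that id and requires a secp256k1 library to verify, which is out of scope for this sketch.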