TheGuySwann on Nostr: I would probably prioritize the larger models, 20t/sec seems fine, especially for ...
I would probably prioritize the larger models, 20t/sec seems fine, especially for what I’d be more likely to use it for (Whisper, Florence, Hunyuan, Stable Diffusion) where the LLM is mostly a go-between and/or “organizer.”
But I’d probably also change how I use most of my AI tools if I could run the largest models, so it might suddenly become something I would notice because I changed how I was using things.
Published at 2025-03-14 20:06:26
Event JSON
{
"id": "789bbb00bff69409a5396948a567a81d296819ac07170414686e6824a2e5363c",
"pubkey": "b9e76546ba06456ed301d9e52bc49fa48e70a6bf2282be7a1ae72947612023dc",
"created_at": 1741982786,
"kind": 1,
"tags": [
[
"e",
"38463d49721d6e9072e4f33ad7405e294e15e40c3b1960ed051af4a11658a5a3",
"ws://192.168.18.7:7777",
"root"
],
[
"e",
"3dc68bf27d7311924dfdd68108662a1ed1e84f6ee6a38e3e183621e9c763087e",
"wss://nostr-dev.wellorder.net",
"reply"
],
[
"p",
"efe5d120df0cc290fa748727fb45ac487caad346d4f2293ab069e8f01fc51981"
]
],
"content": "I would probably prioritize the larger models, 20t/sec seems fine, especially for what I’d be more likely to use it for (Whisper, Florence, Hunyuan, Stable Diffusion) where the LLM is mostly a go between and/or “organizer.”\n\nBut I’d probably also change how I use most of my Ai tools if I could run the largest models, so it might suddenly become something I would notice because I changed how I was using things.",
"sig": "13604c8feb03b809bedb886142e9b05b1b4c0b2a6470ad9a73489a8ee9db341336b565115213769bebf80660b21eb7999aae366c198dbe8553a4a8024a9ebda8"
}
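The `id` field in the event above is not arbitrary: under the Nostr protocol's NIP-01, it is the SHA-256 hash of a canonical compact-JSON serialization of the event fields `[0, pubkey, created_at, kind, tags, content]`. A minimal sketch of that computation, using only the Python standard library:

```python
import hashlib
import json

def event_id(event: dict) -> str:
    """Compute a Nostr event id per NIP-01: sha256 over the compact
    JSON serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),   # no whitespace, as the spec requires
        ensure_ascii=False,      # keep UTF-8 characters unescaped
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Example with a toy event (not the event above):
demo = {
    "pubkey": "b9e76546ba06456ed301d9e52bc49fa48e70a6bf2282be7a1ae72947612023dc",
    "created_at": 1741982786,
    "kind": 1,
    "tags": [],
    "content": "hello",
}
print(event_id(demo))  # a 64-character lowercase hex digest
```

Running the same computation over the full event JSON above (including its `tags` and `content` exactly as signed) should reproduce the `id` shown, which is how relays and clients verify the note before checking the `sig` against the `pubkey`.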