Amalgam on Nostr: Any #llm or #ai experts out there? Is a local model more energy efficient than a ...
Any #llm or #ai experts out there? Is a local model more energy efficient than a model hosted somewhere else?
I’m starting to see many smaller models tuned for specific tasks and meant to run locally. I’m guessing that since they’re smaller they’ll require less energy than GPT or Sonnet or whatever. But the datacenter hardware those big hosted models run on is far more optimized than my laptop.
If I’m worried about energy usage how should I think about this?
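The rough framing I’ve been using is per-query energy = device power draw × time per query, with a multiplier for datacenter overhead. Every number below is a placeholder assumption I made up for illustration, not a measurement:

```python
def per_query_wh(power_watts, seconds_per_query):
    """Energy for one query, in watt-hours."""
    return power_watts * seconds_per_query / 3600

# Hypothetical local run: laptop drawing ~60 W for ~20 s per answer.
local = per_query_wh(60, 20)

# Hypothetical hosted run: a ~300 W accelerator slice for ~2 s,
# times a PUE-style overhead multiplier of ~1.3 for cooling/networking.
hosted = per_query_wh(300, 2) * 1.3

print(f"local  ~{local:.2f} Wh/query")
print(f"hosted ~{hosted:.2f} Wh/query")
```

With these made-up numbers the hosted query actually comes out lower, because the datacenter finishes the work so much faster. The real answer seems to hinge on those inputs (actual wattage, actual latency, overhead), which is exactly what I don’t know.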