plusultra on Nostr:
Running a local model is the most private option. It's pretty easy to do with Ollama and OpenWebUI.
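A minimal sketch of what that setup looks like, assuming Ollama and Docker are already installed; the model name `llama3` and the port mappings are just examples:

```shell
# Pull a model and try it from the terminal with Ollama
ollama pull llama3
ollama run llama3 "Say hello"

# Run Open WebUI in Docker, pointing it at the local Ollama API
# (Ollama listens on port 11434 by default)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

After that, the chat UI is reachable in a browser at http://localhost:3000, and everything runs on your own hardware.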
If you don't want to spend $500 on a GPU, then Venice (supported by Erik Voorhees) looks like a pretty good option:
https://venice.ai/?r=0