pluja on Nostr: I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop ...
I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop using llama.cpp, with better results and performance than I expected. I ran the 7B model and was also able to run the 13B model. You can expect much better results with larger models!
It is exciting to see progress towards the possibility of a self-hosted #ChatGPT that runs locally.
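As a rough sanity check on why the 7B and 13B models fit in laptop RAM: llama.cpp can quantize weights to 4 bits, so the memory needed for the weights alone is about 0.5 bytes per parameter. The sketch below is a back-of-envelope estimate under that 4-bit assumption; it deliberately ignores runtime overhead such as the KV cache and activations, and the numbers are illustrative, not measurements from this post.

```python
def approx_weight_ram_gb(n_params_billion, bits_per_weight=4):
    """Approximate RAM (GiB) for quantized model weights only.

    Assumes 4-bit quantization by default; ignores KV cache,
    activations, and any per-block quantization metadata.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # bytes -> GiB

for n in (7, 13):
    print(f"{n}B @ 4-bit ≈ {approx_weight_ram_gb(n):.1f} GiB")
# 7B comes out around 3.3 GiB and 13B around 6.1 GiB of weight
# memory, which is why both fit comfortably on an ordinary laptop.
```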
https://github.com/ggerganov/llama.cpp

Published at 2023-03-12 20:03:29

Event JSON
{
"id": "ac45e1a0ad10e270f46cd7ba4f8e996422eecdfcae4b154b868427676ce51ae4",
"pubkey": "95ea0e2914cd4b020dd751620380af366df634d5f0672a3098ea976fcb2d79f9",
"created_at": 1678651409,
"kind": 1,
"tags": [],
"content": "I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop using llama.cpp and I have achieved better results and performance than what I expected. It was the 7B model and I was also able to run the 13B model. You can expect much better results with larger models!\n\nIt is exciting to see progress towards the possibility of a self-hosted #ChatGPT that runs locally. \n\nhttps://github.com/ggerganov/llama.cpp",
"sig": "f889b2d3060a6940120090404e1d50ad1ab62edd8abad90d8bb9fc284f6f7895261336acab202529f07c267ba90a6eeb01ff2c52d580004b4ed4770ca5590d7f"
}