jb55 on Nostr: mainly llama atm, but been playing with others. I want to try qwen ...
mainly llama atm, but been playing with others. I want to try qwen
I noticed llama.cpp supports ROCm (amdgpu) now! I can sample the 8B-parameter Llama 3 model with my 8 GB VRAM graphics card! It's fast! Local AI ftw.
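As a rough illustration of the setup being described (a sketch only: exact CMake flag names vary across llama.cpp versions, and the model path below is hypothetical), a ROCm build and a local run of a quantized Llama 3 8B might look like:

```shell
# Build llama.cpp with ROCm/HIP support. The flag name has changed over
# time (older releases used LLAMA_HIPBLAS / GGML_HIPBLAS instead of GGML_HIP),
# so check the docs for your checkout.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON
cmake --build build --config Release -j

# Run a quantized Llama 3 8B GGUF small enough to fit in 8 GB of VRAM.
# The model filename is an assumption; -ngl 99 offloads all layers to the GPU.
./build/bin/llama-cli -m models/llama-3-8b-instruct.Q4_K_M.gguf \
    -ngl 99 -p "Hello"
```

A 4-bit quantization (e.g. Q4_K_M, roughly 4.9 GB for an 8B model) is what makes full GPU offload feasible on an 8 GB card.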
Published at 2024-11-24 18:22:53
Event JSON
{
  "id": "58c79c49d75f50bdf918b1d594ce2da100a77e3c178ee27d2093130040154df3",
  "pubkey": "32e1827635450ebb3c5a7d12c1f8e7b2b514439ac10a67eef3d9fd9c5c68e245",
  "created_at": 1732472573,
  "kind": 1,
  "tags": [
    [
      "e",
      "1cce0ac3a1d126d9754d07f629b8da8e375b6afd3b873929ba90b3a6f4e5243f",
      "",
      "root"
    ],
    [
      "e",
      "f35d03a130877f8158a5899b599ab8f4b975883062c47fe5fa5b9474b8394cc3",
      "",
      "reply"
    ],
    [
      "p",
      "971615b70ad9ec896f8d5ba0f2d01652f1dfe5f9ced81ac9469ca7facefad68b"
    ]
  ],
  "content": "mainly llama atm, but been playing with others. I want to try qwen nostr:nevent1qqs88zs80vrrndpns2l88hxdgaumg4hstnttth9jfzhxcejww7tjyzcpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qzrthwden5te0dehhxtnvdakqz9rhwden5te0wfjkccte9ejxzmt4wvhxjmcpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgmvsqle",
  "sig": "1885e4e5cfcd7d27970e80e5ca15725def0d5f94a817886a993243695711f78d5cc9772d4e56aaea3e3dbf4ab6d44f706d507372bac3cfe81535b2f1efcee573"
}