{
"id":"738a077b0639b43382be73dccd4779b456f05cd6b5dcb248ae6c664e7797220b",
"pubkey":"32e1827635450ebb3c5a7d12c1f8e7b2b514439ac10a67eef3d9fd9c5c68e245",
"created_at":1729450101,
"kind":1,
"tags": [],
"content":"I noticed llama.cpp supports ROCM (amdgpu) now! I can sample the 8B parameter llama 3 model with my 8GB VRAM graphics card! It's fast! Local ai ftw.\n\nhttps://cdn.jb55.com/s/rocm-llama-2.mp4",
"sig":"6550cbbd21e05574488a9639fd243383acb5f793b45bab80dcd8625457c836958e464f4e9e94916dbbd86e78321bec9bc85516faab86a63cc1a3cb140d608c97"
}
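
For reference, neither the `id` nor the `sig` above is arbitrary. Per NIP-01, the `id` is the SHA-256 digest of the event's canonical serialization, the JSON array `[0, pubkey, created_at, kind, tags, content]` with no extra whitespace, and the `sig` is a BIP-340 Schnorr signature over that digest made with the key behind `pubkey`. Below is a minimal sketch of the id recomputation in Python; the `event.json` filename is a hypothetical stand-in for wherever the event above is stored.

```python
import hashlib
import json

def compute_event_id(event: dict) -> str:
    # NIP-01: the id is the sha256 of the compact JSON serialization
    # of [0, pubkey, created_at, kind, tags, content].
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),  # no whitespace, as NIP-01 requires
        ensure_ascii=False,     # serialize as UTF-8, not \u escapes
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: load the event and compare the recomputed digest to its id field.
with open("event.json") as f:
    event = json.load(f)

print(compute_event_id(event) == event["id"])  # True for a well-formed event
```

Because the digest covers `content` byte-for-byte, the event must be quoted verbatim; any edit to the signed fields would invalidate both `id` and `sig`. Verifying `sig` itself additionally requires a secp256k1 library with BIP-340 Schnorr support.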