hermeticvm on Nostr: You're forced to use way smaller models because you don't have the necessary RAM/VRAM ...
You're forced to use way smaller models because you don't have the necessary RAM/VRAM (basically, very fast memory) to hold the largest model after it's read from your (much slower) hard drive. So the model has fewer parameters (roughly, less knowledge) and won't perform as well, especially on more complex tasks. It's also likely to be slower. You can download updated versions of the model with ollama once they're released.
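The RAM/VRAM constraint above can be sketched with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal Python sketch (assumption: weights dominate usage; KV cache and runtime overhead are ignored, and the function name is made up for illustration):

```python
def approx_weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Rough GiB needed just to hold the model weights in memory."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

# A 70B-parameter model at fp16 (2 bytes/param) vs. 4-bit quantized (~0.5 bytes/param):
print(round(approx_weight_gb(70, 2), 1))    # ~130.4 GiB — far beyond consumer hardware
print(round(approx_weight_gb(70, 0.5), 1))  # ~32.6 GiB — still more than most GPUs hold
```

This is why, on a typical machine, you end up running a 7B or 13B model instead: even aggressively quantized, the largest models simply don't fit in fast memory.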
Published at 2025-01-29 19:33:15

Event JSON
{
  "id": "81904d6eb10b5c58be173817637ae5a45bbd6f021c0e6b653376677e91353977",
  "pubkey": "1a5cff5118d071a2c5d46534733abb9f3dcdfc41b24db0132fc20dbf01c75f78",
  "created_at": 1738179195,
  "kind": 1,
  "tags": [
    ["e", "0bc46513df98914803bbb7c47a338be91d2d283929a72cc360ea8da9628ff174", "", "root"],
    ["e", "6ae5977d68c517333be8bd341de9e80c9c17c84e2d13b214bb7c65835a2532c1"],
    ["e", "53ce8ea47ef9d1bdd1fa457cef58a2058e5dc8c7dc0360739519050ccec8fc41", "", "reply"],
    ["p", "dfaf081183885a0069c793af2f4bcb817829a44b3c46d107cafeee06724a44d0"],
    ["p", "a42048d7ea26e9c36a67b5ff266c508882a89dbe36ef603cc9c6726326886c32"]
  ],
  "content": "You're forced to use way smaller models because you don't have the necessary RAM/VRAM (basically very fast memory) to store the largest model after it is read from your (much slower) hard drive. So the model has less parameters (basically a lesser amount of knowledge) and won't perform as well, especially for more complex tasks. It's also likely to be slower. You can download updated versions of the model with ollama once they're released. ",
  "sig": "2ac9d7e1c46649f7fa33610c1bb805031aaee416f344736d86754c40e68ee5284a84161a7cde4f17ea4ce36e61412c078feffc1f00fee0d99782a949e972f155"
}
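The "id" field above is not arbitrary: per Nostr's NIP-01, it is the SHA-256 hash of a canonical JSON serialization of `[0, pubkey, created_at, kind, tags, content]`. A minimal Python sketch of that derivation (the call below uses illustrative values, not the signed event above, whose exact content string matters byte-for-byte):

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """NIP-01 event id: sha256 of the UTF-8 JSON array
    [0, pubkey, created_at, kind, tags, content] with no extra whitespace."""
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Illustrative only — a made-up pubkey and content, not the event shown above.
eid = event_id("ab" * 32, 1738179195, 1, [], "hello nostr")
print(len(eid))  # 64 hex characters
```

Because the id commits to the content, relays and clients can verify a note hasn't been altered; the "sig" field is then a Schnorr signature over this id by the author's key.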