TheGuySwann on Nostr:
Venice.ai and select the llama3.1 model. Great option for a big model that you can’t run locally.
Otherwise a local llama3.1 20B is solid if you have the RAM
Published at 2024-10-04 14:36:13

Event JSON
{
"id": "525783a796ecca88a663a66043eb28e492679dee2cea8cd0359c06c6b0f386e0",
"pubkey": "b9e76546ba06456ed301d9e52bc49fa48e70a6bf2282be7a1ae72947612023dc",
"created_at": 1728052573,
"kind": 1,
"tags": [
[
"e",
"c56578c9a11bce0016e09dbc974f282352e2ba66761dc9b2b500284f49c8301e",
"",
"root"
],
[
"e",
"3fa1429e2971b5f8d51d0358911302ba62e660e549b1d58ae5c63ed10079763f",
"",
"reply"
],
[
"p",
"c80b5248fbe8f392bc3ba45091fb4e6e2b5872387601bf90f53992366b30d720"
],
[
"p",
"dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06"
]
],
"content": "Venice.ai and select the llama3.1 model. Great option for a big model that you can’t run locally.\n\nOtherwise a local llama3.1 20B is solid if you have the RAM",
"sig": "ece8fa838f2b200256078cd04f49d9fdda711005d4e23e12aa61609d9a5ae4378f8fa566610cf2f14e6cbfd1b5a61e50f020a442baebf2d1b3d4f5baf9d8cb92"
}
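The `id` field of the event above is not arbitrary: per NIP-01, it is the SHA-256 hash of a canonical JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]` with no extra whitespace. A minimal sketch of that derivation, using only Python's standard library and the field values copied from the JSON above:

```python
import hashlib
import json

def compute_event_id(event: dict) -> str:
    """Derive a Nostr event id per NIP-01: SHA-256 over the UTF-8
    serialization of [0, pubkey, created_at, kind, tags, content]
    with no whitespace between JSON tokens."""
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    # separators=(",", ":") removes whitespace; ensure_ascii=False keeps
    # non-ASCII characters (like the curly apostrophe) as raw UTF-8,
    # matching NIP-01's escaping rules for this content.
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Field values copied from the event JSON above.
event = {
    "pubkey": "b9e76546ba06456ed301d9e52bc49fa48e70a6bf2282be7a1ae72947612023dc",
    "created_at": 1728052573,
    "kind": 1,
    "tags": [
        ["e", "c56578c9a11bce0016e09dbc974f282352e2ba66761dc9b2b500284f49c8301e", "", "root"],
        ["e", "3fa1429e2971b5f8d51d0358911302ba62e660e549b1d58ae5c63ed10079763f", "", "reply"],
        ["p", "c80b5248fbe8f392bc3ba45091fb4e6e2b5872387601bf90f53992366b30d720"],
        ["p", "dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06"],
    ],
    "content": "Venice.ai and select the llama3.1 model. Great option for a big model that you can’t run locally.\n\nOtherwise a local llama3.1 20B is solid if you have the RAM",
}

event_id = compute_event_id(event)
```

The signature (`sig`) is then a Schnorr signature over this 32-byte id with the key behind `pubkey`, which is how relays and clients verify the note without trusting the server that delivered it.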