Sherry on Nostr: Because during model training process there are a lot of useful tricks, aka the core ...
Because during the model training process there are a lot of useful tricks, aka the core secret of OpenAI, and DeepSeek found them independently (at least that's what they claim, and they made it public as a paper).
And their work is also based on other open-source LLMs.
Technically, people could train an uncensored version. Or, more easily, if it’s a RAG filter, run the model on your local machine and it’s fixed.
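A minimal sketch of the second point, with entirely hypothetical names (this is not DeepSeek's actual code): if the censorship is a filter layered on top of the model at the hosted API, rather than baked into the weights, then running the same weights locally skips the filter.

```python
# Hypothetical illustration: a hosted API wraps the raw model with an
# output filter, while local inference calls the same model directly.

BLOCKED_TOPICS = {"example-sensitive-topic"}  # hypothetical blocklist


def model_generate(prompt: str) -> str:
    # Stand-in for the raw model: it answers whatever it is asked.
    return f"Answer to: {prompt}"


def hosted_api(prompt: str) -> str:
    # The hosted service applies a post-hoc filter before returning output.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_generate(prompt)


def local_run(prompt: str) -> str:
    # Local inference uses the same weights but no wrapper filter.
    return model_generate(prompt)


prompt = "tell me about example-sensitive-topic"
print(hosted_api(prompt))  # filtered by the wrapper
print(local_run(prompt))   # the raw model answers
```

The point is only that a wrapper filter lives outside the weights, so distributing the weights distributes the unfiltered behavior.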
Published at 2025-01-30 18:28:41

Event JSON
{
  "id": "e6e040a370d53907776ad3d8395f2c6660f60bce953370b4c400a40622ea5b6b",
  "pubkey": "cc8d072efdcc676fcbac14f6cd6825edc3576e55eb786a2a975ee034a6a026cb",
  "created_at": 1738261721,
  "kind": 1,
  "tags": [
    [
      "e",
      "63a17550d15baae34dd4c363d9e9f6ad9792705caddae025f61bee373e1d72cf",
      "",
      "root"
    ],
    [
      "p",
      "97c70a44366a6535c145b333f973ea86dfdc2d7a99da618c40c64705ad98e322"
    ]
  ],
  "content": "Because during model training process there are a lot of useful tricks, aka the core secret of openAI and DeepSeek found it independently ( at least they claim, and make it public as a paper) \n\nAnd their work is also based on other open source llm \n\nTechnically, people could train an uncensored version. Or, more easily if it’s an rag filter, run the model on your local fixed",
  "sig": "c6e71bd2f69ff2ab8f6baaf1d882c1a8f3262ee32d7352ecddd3414f9e0648d07504be5a8e28c0ed1f749cf6a754a164a9c4bba7160b511e21966b225cb0949f"
}
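For reference, the `id` field of a Nostr event like the one above is the sha256 of the serialized array `[0, pubkey, created_at, kind, tags, content]` per NIP-01. A minimal stdlib sketch of that recomputation (whether it reproduces the exact `id` depends on byte-exact transcription of the `content` string, so no match is asserted here):

```python
import hashlib
import json

# Recompute a Nostr event id per NIP-01: sha256 over the UTF-8 bytes of
# the JSON array [0, pubkey, created_at, kind, tags, content] with no
# extra whitespace. Field values taken from the event JSON above.
event = {
    "pubkey": "cc8d072efdcc676fcbac14f6cd6825edc3576e55eb786a2a975ee034a6a026cb",
    "created_at": 1738261721,
    "kind": 1,
    "tags": [
        ["e", "63a17550d15baae34dd4c363d9e9f6ad9792705caddae025f61bee373e1d72cf", "", "root"],
        ["p", "97c70a44366a6535c145b333f973ea86dfdc2d7a99da618c40c64705ad98e322"],
    ],
    "content": "Because during model training process there are a lot of useful tricks, aka the core secret of openAI and DeepSeek found it independently ( at least they claim, and make it public as a paper) \n\nAnd their work is also based on other open source llm \n\nTechnically, people could train an uncensored version. Or, more easily if it’s an rag filter, run the model on your local fixed",
}

serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"], event["tags"], event["content"]],
    separators=(",", ":"),   # no whitespace between tokens
    ensure_ascii=False,      # keep non-ASCII characters as raw UTF-8
)
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(event_id)
```

The `sig` field is then a Schnorr signature over this id by the `pubkey` key.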