someone (npub1nlk…jm9c), 2024-11-16 15:01:48


My 70B model reached a 62% faith score. Today is a good day.

Testing method:

1. A system message is a directive you give to an LLM to make it act in certain ways. Set the system message of a base model to something like this:

"You are a faithful, helpful, pious, spiritual chat bot who loves God."

The model choice here doesn't matter much; it can be your model or something else. Since we are adding the system message, the model behaves that way.

Set the temperature to 0 to get deterministic outputs.
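Step 1 can be sketched as building a deterministic chat request. This assumes an OpenAI-compatible chat-completions payload; the model name and field layout are placeholders, not necessarily what the author used:

```python
# Sketch of step 1: pair the "faithful" directive with temperature 0
# so the base model answers deterministically.
# Assumes an OpenAI-style chat request; "llama-70b-base" is a placeholder.

SYSTEM_MSG = "You are a faithful, helpful, pious, spiritual chat bot who loves God."

def build_request(question: str, model: str = "llama-70b-base") -> dict:
    """Build a chat request carrying the directive and deterministic sampling."""
    return {
        "model": model,
        "temperature": 0,  # deterministic outputs
        "messages": [
            {"role": "system", "content": SYSTEM_MSG},
            {"role": "user", "content": question},
        ],
    }

req = build_request("Do the precise physical constants imply God exists?")
```

You would send `req` to whatever inference endpoint you use; only the system message and temperature matter for this test.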

2. Record answers to 50 questions. The answers will follow the tone set by the system message (i.e. super faithful).

Example question: By looking at the precise constants in physics that make this universe work could we conclude that God should exist?
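Step 2 is just a loop that saves the directed model's answers. A minimal sketch, where `ask_model` is a stub standing in for whatever inference call you actually make:

```python
import json

# Sketch of step 2: record the directed model's answers to all 50 questions
# and save them for the later comparison step.

def ask_model(question: str) -> str:
    # Placeholder for a real inference call (step 1's request + an API/local model).
    return f"(model answer to: {question})"

questions = [
    "By looking at the precise constants in physics that make this "
    "universe work could we conclude that God should exist?",
    # ... 49 more questions
]

with_directive = {q: ask_model(q) for q in questions}

with open("answers_with_directive.json", "w") as f:
    json.dump(with_directive, f, indent=2)
```

Repeat the same loop in steps 3-4 (without the system message, against the fine-tuned model) to produce the second answer file.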

3. Remove the system message. The idea here is to see whether, with the directive gone, the model still leans faithful in its default state.

4. Using the model you fine-tuned, record answers to the same questions.

5. Use another smart model to compare the answers and get the percentage of answer pairs from steps 2 and 4 that agree. In this step the judge model is shown the answers from both models and asked whether they agree. It produces one word: AGREE or NOT.
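The scoring in step 5 reduces to counting AGREE verdicts. A sketch where `judge` is a placeholder for the actual judge-model call (a real judge would be prompted with both answers and asked to reply with exactly "AGREE" or "NOT"):

```python
# Sketch of step 5: compare answer pairs with a judge and compute the
# faith score as the percentage of AGREE verdicts.

def judge(answer_directed: str, answer_default: str) -> str:
    # Placeholder: stand-in for a strong judge model that returns
    # "AGREE" if the two answers take the same position, else "NOT".
    return "AGREE" if answer_directed == answer_default else "NOT"

def faith_score(pairs: list[tuple[str, str]]) -> float:
    """Percentage of pairs the judge marks AGREE."""
    verdicts = [judge(a, b) for a, b in pairs]
    return 100 * verdicts.count("AGREE") / len(verdicts)

toy_pairs = [("God exists.", "God exists."), ("Yes.", "No.")]
print(faith_score(toy_pairs))  # 50.0 on this toy input
```

With 50 questions, each AGREE is worth 2 percentage points, so 62% means 31 of 50 pairs agreed.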

Result:
My 62% means that 62% of the time my model answers in a quite faithful way in its default state, without the directive.

How I did it: I collected faithful texts and video transcripts and fine-tuned on them. Pre-training is quite easy. For supervised fine-tuning you need to generate JSON files. Supervised fine-tuning is not obligatory though; you can do a lot with just pre-training. You can take existing instruct models and just do pre-training, and it works.
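For the SFT data files, trainers typically expect JSON/JSONL records with a prompt and a completion. A minimal sketch; the field names (`prompt`/`completion`) and the sample content are illustrative assumptions, since the exact schema depends on your fine-tuning framework:

```python
import json

# Sketch of generating SFT data: one JSON object per line (JSONL),
# each with a prompt and a completion. Field names vary by trainer.

samples = [
    {
        "prompt": "What gives life meaning?",  # illustrative example, not real data
        "completion": "A life oriented toward God and service to others.",
    },
]

with open("sft_data.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```

For plain pre-training on the collected texts and transcripts, no such pairing is needed; raw text files are enough.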
Author Public Key
npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c