天空вℓσи∂ :ablobcatrainbow:
npub1ac2…3xs8
2023-12-26 13:28:00
in reply to nevent1q…fsg0


npub1kpwlxpzkxfmuxjmzc2wp3rf9vjg0sgydmlhsnrgqr3maf59h86qqdxxzz4

For now, we still "train" the LLM on a given set of texts and force it to learn to speak just like that text. So to remove racial bias from the model, I think we just remove the racial bias from the training text. Since an LLM is basically picking words probabilistically and trying to reproduce the text during training, that might be enough. Or maybe add some text stating that all races are equal.
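
Roughly the shape I have in mind, as a toy Python sketch. The is_biased detector and BLOCKLIST here are placeholders, not a real filter; in practice it could be keyword lists, a toxicity classifier, or human review:

# Hypothetical corpus-filtering step before training.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real list

def is_biased(text: str) -> bool:
    """Crude stand-in detector: flag a document containing a blocklisted term."""
    words = set(text.lower().split())
    return bool(words & BLOCKLIST)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents the detector does not flag."""
    return [doc for doc in documents if not is_biased(doc)]

corpus = ["some ordinary text", "text containing slur1"]
clean = filter_corpus(corpus)
# `clean` then goes into the usual training loop; counter-statements
# ("all races are equal") could be appended as extra training documents.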

If someone can add a human-understandable logic system to the LLM, i.e. not by adding more and more parameters and turning it into an even darker black box, then math/logic could help. Take racism, for example: it doesn't hold up if we look at modern society, where all kinds of people are doing all sorts of things. That diversity would prove racism wrong. And if the model is smart enough, it might figure out that it isn't a racial thing but a shared culture that makes people similar, etc.

Maybe make the logic-inference part an external tool, like in Q-learning? The model could check its result against the inferred result.
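
Something like this toy loop, where query_model is a hypothetical stand-in for the LLM and exact arithmetic plays the role of the external inference tool; the comparison becomes a reward signal a trainer could use:

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def symbolic_infer(a: int, op: str, b: int) -> int:
    """The trusted external tool: exact arithmetic instead of sampled text."""
    return OPS[op](a, b)

def query_model(prompt: str) -> int:
    """Hypothetical LLM call; imagine it returns the model's parsed answer."""
    return 41  # placeholder: whatever the model happened to say

def check_and_reward(a: int, op: str, b: int) -> float:
    """Compare the model's answer with the inferred one, Q-learning style:
    agreement earns a reward, disagreement a penalty."""
    model_answer = query_model(f"What is {a} {op} {b}?")
    truth = symbolic_infer(a, op, b)
    return 1.0 if model_answer == truth else -1.0

print(check_and_reward(20, "+", 22))  # -1.0: the model's 41 fails the check

Treating the checker as a reward signal rather than a hard constraint keeps the logic part outside the black box, which is the point: the parameters stay whatever they are, but the model gets graded against something humans can inspect.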
Author Public Key
npub1ac2jl9g68td9psessccnyqkv5e90tmgq5e6jp634hd3yxv9pmpms483xs8