hector on Nostr: nostr:note1js3ur4ph0mk2gf554x4k96atksfl58jpvqr2wfm84tzr4p0z0krqh32dw9
quoting note1js3…2dw9: What I think will be critical, as we integrate these into real-world machines over time and as their capabilities become more generalized and layered, will be to build in a sort of moral constitution that behaves like a concrete engine (a calculator): the model recognizes when something might be questionable behavior or cause an undesirable outcome, then calls on the "constitution" to make the decision to act or not.
But how do we decide who gets to make that decision? Or are we just going to go down the road of everyone training their own AI to follow their own individual or in-group morality, which is really just a manifestation of their incentives? If the latter, how does that improve our search for truth and avoid just recreating age-old fights in a new arena?
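To make the quoted architecture concrete, here is a minimal sketch of the gated "constitution engine" pattern it describes, and of the fork the reply worries about. Everything here is assumed: the `Constitution`, `Rule`, and `gated_act` names, the keyword-based stand-in for the model's own judgment, and the two example rule sets are all hypothetical illustration, not anything specified in either note.

```python
from enum import Enum
from typing import Callable, Optional

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"

# A rule is a deterministic predicate over a proposed action; it returns a
# verdict if it applies, or None to pass to the next rule. (Hypothetical type.)
Rule = Callable[[str], Optional[Verdict]]

class Constitution:
    """A deterministic 'concrete engine': fixed rules, no learned weights."""

    def __init__(self, rules: list[tuple[str, Rule]]):
        self.rules = rules

    def decide(self, action: str) -> tuple[Verdict, str]:
        # Apply rules in order; first rule that fires wins.
        for name, rule in self.rules:
            verdict = rule(action)
            if verdict is not None:
                return verdict, name
        return Verdict.ALLOW, "default"  # no rule fired

def model_flags_as_questionable(action: str) -> bool:
    # Stand-in for the model recognizing that an action "might be
    # questionable"; in practice this would be a learned judgment,
    # not a keyword check.
    return "delete" in action or "transfer funds" in action

def gated_act(action: str, constitution: Constitution) -> str:
    # The pattern from the quoted note: the model acts freely unless it
    # flags the action, in which case the constitution makes the call.
    if not model_flags_as_questionable(action):
        return f"executed: {action}"
    verdict, rule = constitution.decide(action)
    if verdict is Verdict.DENY:
        return f"blocked by rule '{rule}': {action}"
    return f"executed (cleared by '{rule}'): {action}"

# The reply's worry, in miniature: two in-groups ship different constitutions
# over the same engine, so the same action gets two different moral answers.
strict = Constitution([
    ("no-irreversible", lambda a: Verdict.DENY if "delete" in a else None),
])
permissive = Constitution([
    ("no-irreversible", lambda a: None),  # same rule name, gutted logic
])

print(gated_act("delete all backups", strict))      # blocked
print(gated_act("delete all backups", permissive))  # executed
```

The sketch makes the open question visible in code: the gating mechanism is straightforward, but nothing in it determines whose rules populate `Constitution`, which is exactly the decision the reply is asking about.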