badbron on Nostr:
Sun (nprofile…h6f2) I want to say this is a piece of proof, true empirical evidence, of the active and rapidly worsening replication crisis.
Background: for all research papers (assuming the classic p-value threshold of 0.05), figuring out whether a given paper's result is correct is a statistical nightmare. Even in the best-case scenario, with zero bad actors anywhere in any part of the process, it works out to roughly 60% of papers being correct in their results and 40% having wrong results (false positives AND false negatives). That is just the baseline statistical inevitability of how the work is done; it can't be avoided.
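That ~60% ballpark can be reproduced with a back-of-envelope positive-predictive-value calculation. The prior (fraction of tested hypotheses that are actually true) and the statistical power used below are my illustrative assumptions, not numbers from the post; the point is only that with alpha = 0.05 and plausible priors, the share of "significant" findings that are real lands in this range:

```python
def ppv(prior, power, alpha=0.05):
    """Positive predictive value: the fraction of statistically
    'significant' findings that reflect a real effect."""
    true_pos = prior * power          # real effect, detected
    false_pos = (1 - prior) * alpha   # no effect, but p < alpha anyway
    return true_pos / (true_pos + false_pos)

# Hypothetical inputs: 15% of tested hypotheses are true, 50% power.
print(round(ppv(prior=0.15, power=0.5), 3))  # ≈ 0.64, i.e. roughly 60% correct
```

Lower priors or lower power push the number well below 60%, which is why fields testing many long-shot hypotheses fare worst.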
Imo that image is empirical evidence of bad actors, which skews the ratio from ~60% correct results in papers down to even lower values.
In addition, it's proof of a failure to search for and verify one or more features of one's own work copied from elsewhere, to make sure things are real. The failure may be accidental (bad; reminds me of the YouTube video discussion we had a bit of yesterday) or malicious (worse, and I have no idea how to prevent or disincentivise the malicious actor in this scenario). I can't think of a method change that would hinder the malicious actor: all the wrong behaviours are currently rewarded heavily, with money and fame, while several correct behaviours are rewarded with the opposite (loss of funding). Not sure how, whether it's worth it, or even whether it's possible to shift public research towards an extra-strong focus on replication papers.
A true-60% / false-40% baseline is probably a terrible statistical ratio; something like true-95+% / false-5% is what to aim for, I think. But that would require every research paper with any remarkable result to have something like 3 to 5 independent replication studies, which, as far as I know, is not a feature this world currently has or is willing to fund.
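As a rough check on the 3-to-5-replications intuition: each successful independent replication multiplies the odds that a finding is real by roughly power/alpha (a likelihood-ratio sketch under assumed values, not an exact model of any field):

```python
def posterior_after_replications(p0, k, power=0.5, alpha=0.05):
    """Probability a finding is real after k successful independent
    replications, starting from prior belief p0. Each positive replication
    multiplies the odds by power/alpha (assumed values, for illustration)."""
    odds = p0 / (1 - p0)
    odds *= (power / alpha) ** k
    return odds / (1 + odds)

# Starting from the ~60% baseline:
for k in range(4):
    print(k, round(posterior_after_replications(0.6, k), 4))
```

Under these assumptions, even one or two clean replications push a 60%-confidence result past the 95% mark, so the 3-to-5 figure is, if anything, conservative.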