Karthik Srinivasan on Nostr: npub19d9p0…kf02p
Once you choose any criterion for evaluation, it is impossible not to be biased. This is not just an issue in academia; it holds for society writ large. I would suggest going with what we think are the best "metrics" available at the moment, but when those start becoming problematic, breaking the vicious cycle by making the selection/evaluation random. This type of idea has been around for a long time.
https://en.wikipedia.org/wiki/Sortition
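As a minimal sketch of what sortition looks like in practice (the candidate pool and seed here are made up for illustration), random selection replaces any ranking criterion outright:

```python
import random

def sortition(candidates, k, seed=None):
    """Select k members uniformly at random from the pool (sortition),
    sidestepping any evaluation criterion and the bias it carries."""
    rng = random.Random(seed)  # seeded RNG so a draw can be audited/reproduced
    return rng.sample(candidates, k)

# Hypothetical applicant pool, purely for illustration
pool = ["alice", "bob", "carol", "dan", "erin", "frank"]
chosen = sortition(pool, 2, seed=42)
```

Every candidate has the same probability of selection, so no metric, and hence no metric-induced bias, enters the decision.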