Dan Goodman
2024-02-24 21:57:02


Scientists do a lot of measuring, quantifying and applying algorithms to make decisions. In their scientific work, they do this with a very critical approach to what is being measured, and with high standards of evidence to justify the decisions. But when they apply this to themselves, ranking students, papers and grant applications, for example, they don't question the measures or demand any evidence at all. Indeed, many will actually dismiss what little evidence there is on the basis of intuition or anecdote. I really struggle to understand this. How can you be so skilled at applying critical analysis in one part of your job and not even try to do the same in another, equally large, part?

Let me give an example. One of the committees I am part of at my university deals with diversity and inclusion. In order to be certified by Athena SWAN we have to write a report every few years, and part of that report is measuring and reporting the number of female students in our courses, applications versus acceptances, and so on. We're required to monitor these numbers and understand why and how they are changing. I've seen a number of successful reports from our and other universities, and not one of them has done a statistical test on the numbers. They just report things like: the number of female students is lower than the number of male students, but it has increased compared to last year. Can you imagine writing that in a paper and trying to make a scientific claim based on it? Yet major decisions about how the university is run are based on this sort of reasoning. And the extraordinary thing is that the Athena SWAN organisation that judges these reports doesn't ask for statistical analysis, provides no guidance on how to do it, and the lack of it has never been mentioned in any of the feedback I've read.
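To make the point concrete, here is a minimal sketch of the kind of test such a report could run but never does. The counts are entirely hypothetical, invented for illustration; the test itself (Fisher's exact test on a 2x2 contingency table) is standard.

```python
# A minimal sketch using hypothetical intake numbers purely for
# illustration; a real report would plug in the actual counts.
from scipy.stats import fisher_exact

# (female, male) admissions in two consecutive years -- hypothetical.
last_year = (40, 110)
this_year = (48, 112)

# Fisher's exact test on the 2x2 table asks whether the shift in the
# female/male ratio between years is larger than chance would produce.
_, p_value = fisher_exact([list(last_year), list(this_year)])

f0, m0 = last_year
f1, m1 = this_year
print(f"female share: {f0/(f0+m0):.1%} -> {f1/(f1+m1):.1%}")
print(f"p = {p_value:.3f}")  # for counts like these, p is nowhere near 0.05
```

Even a crude check like this would tell a committee whether "the number has increased" is signal or noise, and it takes a few lines to do.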

This isn't an isolated example (I could list many more, and I'm sure you all could too), and it's not limited to administration: the same is true of student grades and peer review, which are pretty central to academic life.

I'm interested in thoughts on the psychology of why we do this, and on what we can do to change it: either by measuring more critically, or perhaps by not measuring things that can't be meaningfully quantified and analysed.