𝐀𝐧𝐧𝐢𝐦𝐞 𝐖𝐨𝐧𝐠 on Nostr:
Sorry for the mucho texto. I know you probably do not care, but anyway...
Explaining the specific problem in simple terms: a statistician would probably look at an astronomer's paper and think "it's good!" because he doesn't understand astrophysics or instrumentation very well. Statisticians may assume they are looking at raw data, which is a mistake; astronomers have to be critical of their data. The mistake the astronomers in the article were basically making is assuming "my data is good," when they know their data had to be refined at some point. Basically, they were asking "given my result, what are the chances of it being true?" when they should have been asking "given my result, what are the chances of it being an artifact of me messing with it, and if it isn't, then what are the chances of it being true?"
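To make the two questions concrete, here is a minimal Bayes sketch in Python. Every number in it is made up purely for illustration; the point is just how much the posterior drops once you budget for the chance that the pipeline itself manufactured the detection:

    p_signal = 0.01       # prior that a real source is there (made up)
    p_det_signal = 0.9    # chance we detect it if it's real (made up)
    p_fa = 0.003          # statistical false-alarm rate from pure noise (made up)
    p_artifact = 0.02     # chance the reduction pipeline fakes a detection (made up)

    def posterior(p_det_nosignal):
        # Bayes' theorem: P(real | detection)
        num = p_det_signal * p_signal
        return num / (num + p_det_nosignal * (1 - p_signal))

    # "Given my result, what are the chances of it being true?"
    print("ignoring the pipeline:", round(posterior(p_fa), 3))              # ~0.752

    # "...what are the chances of it being an artifact of me messing with it?"
    # Crude approximation: either noise or the pipeline can fake a detection.
    print("with an artifact term:", round(posterior(p_fa + p_artifact), 3)) # ~0.283

With these invented rates, a "detection" goes from roughly 75% likely to be real to roughly 28%, just by admitting the pipeline into the denominator.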
Similarly, photometry, a sub-field of astrophysics (literally counting photons over a given period of time), seems like a perfect candidate for something basic like Poisson statistics (counting events over a period). However, photon detections may be incredibly rare (a big problem for x-ray astronomy, where they count tens of photons over a 24-hour observation). Data reduction techniques may actually make it not a purely Poisson process (basically, whether a data point survives is now subject to two dice rolls instead of one, one of which has a time component and one which doesn't; see the sketch below), and events may be so infrequent anyway that it forces them to use more advanced techniques like Survival Analysis.
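Here is a toy simulation of that "two dice rolls" point (all rates invented). If the second roll is an independent coin flip per photon, the thinned counts actually stay Poisson, just with a lower rate; but if the roll has no time component and acts per observation (say, a cut whose efficiency drifts with calibration or background), the counts become overdispersed and the variance stops matching the mean:

    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, lam = 100_000, 5.0       # mean arrivals per observation (made up)

    # Roll 1 (has a time component): Poisson arrivals over the exposure.
    arrivals = rng.poisson(lam, n_obs)

    # Roll 2, case A: an independent per-photon cut. Thinning a Poisson
    # process like this just gives another Poisson, with rate lam * 0.4.
    kept_a = rng.binomial(arrivals, 0.4)

    # Roll 2, case B: the cut efficiency varies between observations
    # (calibration, background). This is a mixed Poisson: overdispersed.
    kept_b = rng.binomial(arrivals, rng.uniform(0.1, 0.7, n_obs))

    for name, x in [("raw", arrivals), ("per-photon cut", kept_a),
                    ("per-observation cut", kept_b)]:
        # A pure Poisson process has variance == mean, so var/mean ~ 1.
        print(f"{name:20s} var/mean = {x.var() / x.mean():.2f}")

The var/mean ratio in case B comes out well above 1, which is exactly the kind of thing a naive "it's photon counting, so it's Poisson" analysis would miss.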
If an astronomer explained his problem simply as "I am counting photons over a period of time," a statistician would say "Oh, yeah, that's Poisson!" Meanwhile, the astronomer may know that his data is more complex than that, but he lacks the experience to know which details are important to tell the statistician, or to even be aware that fields like Survival Analysis exist. This leads to avoidable mistakes.
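For anyone curious what Survival Analysis actually does here, below is a hand-rolled Kaplan-Meier estimator on made-up numbers. In x-ray astronomy the censored points are usually flux upper limits from non-detections (handled by packages like ASURV); this sketch uses the simpler right-censored case, where an exposure ended before any photon arrived, so all we know is that the true waiting time exceeds the recorded one:

    import numpy as np

    # Made-up data: "time to first detected photon" per source; observed=0
    # means the exposure ended first (censored: true time > recorded time).
    times    = np.array([2.0, 3.5, 4.0, 4.0, 6.0, 7.5, 9.0, 10.0])
    observed = np.array([1,   1,   0,   1,   1,   0,   1,   0])

    # Kaplan-Meier: S(t) = product over event times t_i <= t of (1 - d_i/n_i),
    # where d_i = detections at t_i and n_i = sources still at risk at t_i.
    surv = 1.0
    for t in np.unique(times):
        d = observed[times == t].sum()   # detections at this time
        n = (times >= t).sum()           # sources still under observation
        if d > 0:
            surv *= 1.0 - d / n
            print(f"t = {t:4.1f}   S(t) = {surv:.3f}")

The censored points never trigger a drop in S(t), but they do shrink the at-risk count, which is how the "we watched and saw nothing" information gets used instead of thrown away.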