Inconsistency in two-sided hypothesis testing
Asked by katkoota00ys on 2022-05-22
Suppose you have two sets of data with known population variances and want to test the null hypothesis that the two means are equal, i.e. $H_0: \mu_1 = \mu_2$ against $H_1: \mu_1 > \mu_2$. There's a certain way I want to think about it, which is the following:

$$P(H_1) = P(\mu_1 - \mu_2 > 0) = P\left(\frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} < \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}\right) = \Phi(z),$$

where $z = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$ and $\Phi$ is the standard normal CDF.

To me, this 'derivation' makes it perfectly clear what's actually going on: you're calculating the probability that $H_1$ is true, not just blindly looking up some z-score. However, now suppose that $H_1: \mu_1 \neq \mu_2$. The problem is that the method I just described doesn't seem to work. If I write

$$P(H_1) = P(\mu_1 - \mu_2 \neq 0) = P\left(\frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} \neq z\right),$$

then all that happens is $P(H_1) = 1$. I think I'm probably not interpreting the above equation correctly.
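To make the two readings concrete, here is a minimal Python sketch; the sample means, variances, and sample sizes are made up for illustration, not taken from the original post. It computes $\Phi(z)$ for the one-sided version and shows why the same recipe degenerates to 1 in the two-sided case.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Illustrative (made-up) summary statistics: sample means, known
# population variances, and sample sizes for the two data sets.
xbar1, xbar2 = 10.6, 10.0
var1, var2 = 4.0, 4.0
n1, n2 = 50, 50

# Standard error of the difference between the sample means.
se = sqrt(var1 / n1 + var2 / n2)
z = (xbar1 - xbar2) / se

# One-sided reading: treating mu1 - mu2 as a random variable centered at
# xbar1 - xbar2, "P(mu1 > mu2)" comes out as Phi(z).
print(f"z = {z:.3f}, 'P(mu1 > mu2)' = {phi(z):.4f}")

# Two-sided reading: the same recipe gives "P(mu1 != mu2)" = P(Z != z),
# which equals 1 for any continuous Z -- the inconsistency in the question.
print("'P(mu1 != mu2)' = 1.0 (a continuous variable never hits one exact point)")
```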
Answer & Explanation
Melina Glover
Answered 2022-05-23
In your way of thinking, you decided to set

$$Z = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}},$$

and from this you conclude (correctly, I think) that $P(\mu_1 \neq \mu_2) = P(Z \neq z)$. The question is, if you cannot assign a probability distribution to $\mu_1 - \mu_2$, how do you compute $P(Z \neq z)$? And of course if you can assign a probability distribution to $\mu_1 - \mu_2$, then you can use that distribution to compute $P(\mu_1 \neq \mu_2)$ directly.

Note that if $\mu_1 = \mu_2$, then

$$Z = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} = z,$$

because you have defined $Z$ in such a way that it is simply $z$ if the two populations have the same mean. When you conclude that $P(Z \neq z) = 1$, you are assuming that $Z$ has zero probability to equal $z$ exactly. This would be true if $Z$ had a continuous distribution, but how do we know it does?

Edit: There are some quite reasonable motivations (from a practical point of view, if not a mathematical one) for attempting some kind of approach to coming up with a value for $P(H_1)$ as explained in the question. The main motivation is that this is the way we seem to want to be able to think about statistics: just how much weight (i.e. likelihood) should I assign to the possibility that certain facts are true? Unfortunately it's often very difficult to make a convincing case for a particular value of such a likelihood. Instead, what frequentist statistics gives us is an apparently roundabout statement: if a certain fact were not true (that is, if that fact's "null hypothesis" were true instead), it would have been extremely unlikely for us to have made the observations we just made. A more precise and succinct explanation is given in this answer to another question.

To test the hypothesis that $\mu_1 > \mu_2$ in the posted question, we can define the null hypothesis as $H_0: \mu_1 \leq \mu_2$. Now, having obtained samples from the two populations, how likely is it that we would have gotten samples "like those" if the null hypothesis were true? If the answer is "likely enough," we do not reject the null hypothesis; samples with $\bar{x}_1 \leq \bar{x}_2$ are entirely consistent with $H_0$, so we only have an interesting statistical test in the case where $\bar{x}_1 > \bar{x}_2$. Assuming that's the kind of sample results we got, then among all possible ways the null hypothesis could be true, the one that gives us the best chance to obtain samples "like" the ones we did is $\mu_1 = \mu_2$. But if we assume that $\mu_1 = \mu_2$, then prior to taking the samples, $z$ was a random variable with a standard normal distribution (mean zero, variance 1).

Suppose that having taken our samples, we find that $z = 2.38$. That's a relatively extreme value for a standard normal variable; 99 times out of 100 the value of a standard normal variable will be less than that. In fact, the probability is 0.99134 that a standard normal variable will have a value less than 2.38. (I know this because someone computed that probability and put it in a table, and I looked it up there.) There is therefore less than a 1% chance that we would have observed a sample mean $\bar{x}_1$ so much larger than $\bar{x}_2$ if the population mean $\mu_1$ were not actually at least a little bit larger than $\mu_2$. We therefore reject the null hypothesis.

Using samples drawn from populations with continuous distributions, it appears impossible to test statistically whether the means of two populations are exactly the same, because even if they were the same, with probability 1 we would still observe different sample means. (See also this answer on that topic.)

There is another kind of statistics, called Bayesian statistics, that (as far as I understand it) does assign probabilities to the truth of statements that you might want to prove or disprove, but only by using observations from experiments to modify probability assignments that one was able to make before the experiment.
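To check the arithmetic in the worked example above, here is a short Python sketch. Only the value $z = 2.38$ comes from the answer; the rest is standard-library code reproducing the table lookup $\Phi(2.38) \approx 0.99134$ and the resulting p-values.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z = 2.38  # the observed statistic from the worked example

print(f"Phi({z}) = {phi(z):.5f}")                    # ~0.99134, matching the table
print(f"one-sided p-value = {1 - phi(z):.5f}")       # ~0.00866 < 0.01: reject H0
print(f"two-sided p-value = {2 * (1 - phi(z)):.5f}")
```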
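And as a sketch of the Bayesian alternative mentioned in the last paragraph, valid only under assumed priors: if one puts prior mass on the point null $\mu_1 = \mu_2$ and a normal prior on the difference otherwise, a posterior probability for $H_1$ can genuinely be computed. The prior weight, the prior scale $\tau$, and the observed difference $d$ below are all hypothetical, chosen so the observed $z$ matches the 2.38 used above.

```python
from math import exp, sqrt, pi

def normal_pdf(x: float, var: float) -> float:
    """Density of a N(0, var) variable at x."""
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * pi * var)

# Hypothetical setup (all numbers illustrative): observed difference d of the
# sample means, known sampling standard error se, prior mass 0.5 on the point
# null mu1 == mu2, and mu1 - mu2 ~ N(0, tau^2) under the alternative.
d, se, tau = 0.952, 0.4, 1.0   # z = d / se = 2.38, as in the example above
prior_h0 = 0.5

# Marginal likelihood of the observed difference under each hypothesis:
# N(0, se^2) under H0, and N(0, se^2 + tau^2) under H1 (prior integrated out).
like_h0 = normal_pdf(d, se**2)
like_h1 = normal_pdf(d, se**2 + tau**2)

# Bayes' rule updates the prior weights into posterior probabilities.
post_h0 = prior_h0 * like_h0 / (prior_h0 * like_h0 + (1.0 - prior_h0) * like_h1)
print(f"P(mu1 == mu2 | data) = {post_h0:.3f}")
print(f"P(mu1 != mu2 | data) = {1.0 - post_h0:.3f}")
```

Note that the posterior is far from the "1" the naive calculation produced: the prior point mass on the null is what keeps $P(\mu_1 = \mu_2 \mid \text{data})$ from being zero, echoing the answer's point about continuous distributions.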
because you have defined z in such a way that it is simply if the two populations have the same mean. When you conclude that , you are assuming that z has zero probability to equal exactly. This would be true if had a continuous distribution, but how do we know it does? Edit: There are some quite reasonable motivations (from a practical point of view, if not a mathematical one) for attempting some kind of approach to coming up with a value for as explained in the question. The main motivation is that this is the way we seem to want to be able to think about statistics: just how much weight (i.e. likelihood) should I assign to the possibility that certain facts are true? Unfortunately it's often very difficult to make a convincing case for a particular value of such a likelihood. Instead, what frequentist statistics gives us is an apparently roundabout statement that if a certain fact were not true (that is, if that fact's "null hypothesis" were true instead), it would have been extremely unlikely for us to have made the observations we just made. A more precise and succinct explanation is given in this answer to another question. To test the hypothesis that in the posted question, we can define the null hypothesis as . Now, having obtained samples from the two populations, how likely is it that we would have gotten samples "like those" if the null hypothesis were true? If the answer is "likely enough," so we only have an interesting statistical test in the case where . Assuming that's the kind of sample results we got, then among all possible ways the null hypothesis could be true, the one that gives us the best chance to obtain samples "like" the ones we did is if . But if we assume that , then prior to taking the samples, was a random variable with a standard normal distribution (mean zero, variance 1). Suppose that having taken our samples, we find that . That's a relatively extreme value for a standard normal variable; 99 times out of 100 the value of a standard normal variable will be less than that. In fact, the probability is 0.99134 that a standard normal variable will have a value less than 2.38. (I know this because someone computed that probability and put it in a table, and I looked it up there.) There is therefore less than a 1% chance that we would have observed a sample mean so much larger than if the population mean were not actually at least a little bit larger than . We therefore reject the null hypothesis. Using samples drawn from populations with continuous distributions, it appears impossible to test statistically whether the means of two populations are exactly the same, because even if they were the same, with probability 1 we would still observe different sample means. (See also this answer on that topic.) There is another kind of statistics called Bayesian statistics that (as far as I understand it) does assign probabilities to the truth of statements that you might want to prove or disprove, but only by using observations of experiments to modify probability assignments that one was able to make before the experiment.