Why is the standard error calculated differently from the standard error of the difference?
Please help me understand. I did some research on the internet, but it is still unclear to me.
The standard error is STDEV.P/SQRT(n) (the population standard deviation divided by the square root of the sample size), right? But the standard error of the difference is the square root of the sum of the variances divided by their sample sizes. Why can't we think of it as the sum of the two standard errors? If I compute it as each std divided by the square root of its sample size and add them, the result is different. Why is the result different when I take the square root of (variance divided by sample size) summed over both samples, versus summing each (population std divided by square root of sample size)? Logically, shouldn't they be the same?
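A quick sketch may make the two formulas concrete. The sample sizes and standard deviations below are made-up numbers, assumed only for illustration; it compares the sum of the two standard errors with the standard error of the difference:

```python
import math

# Hypothetical example: two independent samples (all numbers are assumptions).
n1, n2 = 100, 80
sd1, sd2 = 5.0, 4.0   # population standard deviations

# Standard error of each sample mean: sigma / sqrt(n)
se1 = sd1 / math.sqrt(n1)
se2 = sd2 / math.sqrt(n2)

# Standard error of the difference: sqrt(sigma1^2/n1 + sigma2^2/n2)
se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)

print(se1 + se2)   # the naive sum of the two standard errors
print(se_diff)     # the SE of the difference -- a smaller number
```

With these numbers, se1 + se2 is about 0.947 while se_diff is about 0.671, which is exactly the discrepancy described in the question: the square root of a sum is not the sum of the square roots.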
I have the same question. That's why my calculation differs from the example. Have you found an answer yet, Timur?
I'm having the same issue. If we calculate the standard error (SE) the way we did earlier in this course, the square root is taken separately for each term. That isn't the same as taking the square root of the whole sum: we can't split a sum inside a square root. That's why the results differ; they are different things.
The change comes from how the variance is calculated: the variance of the difference already includes dividing by the sample size (n).
We might think that the square root of the variance equals the standard deviation (STD). But since this variance is different (it already includes the division by sample size), my conclusion is:
- STD isn't necessary/possible when analyzing the difference between independent variables.
- We get the SE directly when taking the square root of the variance of the difference (though I don't understand why).
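The "why" can be checked by simulation. The variance of a sample mean is sigma^2/n, so the variance of the difference of two independent sample means is sigma1^2/n1 + sigma2^2/n2, and its square root is already the SE of that difference. The sketch below (population means, SDs, and sample sizes are assumed for illustration) draws many pairs of samples and compares the spread of the simulated differences with the formula:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical populations -- all parameters here are assumptions for illustration.
mu1, sd1, n1 = 10.0, 5.0, 100
mu2, sd2, n2 = 8.0, 4.0, 80

# Draw many pairs of samples and record the difference of the sample means.
diffs = []
for _ in range(5000):
    mean1 = statistics.fmean(random.gauss(mu1, sd1) for _ in range(n1))
    mean2 = statistics.fmean(random.gauss(mu2, sd2) for _ in range(n2))
    diffs.append(mean1 - mean2)

# The standard deviation of this sampling distribution IS the SE of the
# difference, which is why sqrt(var1/n1 + var2/n2) gives the SE directly.
simulated_se = statistics.stdev(diffs)
formula_se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
print(round(simulated_se, 3), round(formula_se, 3))  # both close to 0.67
```

So there is no separate "STD of the difference of means" step to take: the square root of the variance of the difference already lands on the standard error.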
I have the same challenge. The best explanation I could come up with is: since we are working with two independent variables, we have to find the total standard error, so we must add the two variance/sample-size terms before taking the square root. It's just like how the square root of 4 is not equal to the square root of 2 plus the square root of 2.
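That last analogy can be verified in one line each, and it is the same reason sqrt(v1/n1 + v2/n2) is not se1 + se2:

```python
import math

# The square root does not distribute over a sum:
print(math.sqrt(2 + 2))              # 2.0
print(math.sqrt(2) + math.sqrt(2))   # about 2.828

# Likewise, sqrt(v1/n1 + v2/n2) != sqrt(v1/n1) + sqrt(v2/n2),
# i.e. the SE of the difference is not the sum of the two SEs.
```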