Abhijeet said:
It is said that the mean deviation does not take into account the algebraic signs of the deviations, whereas the standard deviation does. The question I have is: since we square the deviations while calculating the standard deviation, we (kind of) get away from the signs anyway, so why do that? What is the mathematical significance of it? Why do we say that the standard deviation is mathematically more correct, when we know that the standard deviation is more influenced by extreme values than the mean deviation is?
Hi Abhijeet,
Let me discuss four types of standard deviations.
sample standard deviation = s sub x = sqrt((sum(x_i - x bar)^2)/(n - 1)) where the sum runs from i = 1 to n over all sample values x_i, x bar is the sample mean, and n is the number of sample values.
population standard deviation = sigma = sqrt((sum(x_i - mu)^2)/(N)) where the sum runs from i = 1 to N over all values x_i in the population, mu is the population mean, and N is the number of items in the population.
s is an estimate of sigma
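A quick numerical sketch of the two formulas above, assuming NumPy is available; the data values are made up for illustration. NumPy's ddof parameter selects the divisor: ddof=1 gives the n - 1 (sample) version, ddof=0 gives the N (population) version.

```python
import numpy as np

# Made-up sample data, chosen so the population result comes out even
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Sample standard deviation s: divide the sum of squares by n - 1
s = np.std(x, ddof=1)

# Population standard deviation sigma: divide by N (NumPy's default)
sigma = np.std(x, ddof=0)

print(s)      # sqrt(32/7), about 2.14
print(sigma)  # sqrt(32/8) = 2.0 exactly for this data
```

Note that s is always a little larger than sigma computed from the same numbers, since dividing by n - 1 instead of n inflates the result slightly; that is the correction that makes s a better estimate of the population sigma.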
sample standard deviation of the mean = s sub (x bar) = (s sub x)/sqrt(n)
population standard deviation of the mean = sigma sub (x bar) = (sigma sub x)/sqrt(n), where n is the number of values averaged.
These last two are also called the standard error of the mean.
s sub (x bar) is an estimate of sigma sub (x bar)
The spread of the distribution of individuals, described by s sub x, is wider than the spread of the distribution of averages, described by s sub (x bar).
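A small simulation sketch (assuming NumPy, with a made-up normal population) illustrating that last point: averages of n values scatter less than the individuals themselves, by roughly a factor of sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25            # values averaged in each sample mean
trials = 10_000   # number of sample means to draw

# Individuals from a population with mu = 50, sigma = 10
individuals = rng.normal(loc=50, scale=10, size=(trials, n))

# Spread of the individuals vs. spread of their averages
spread_individuals = individuals.std()
spread_of_means = individuals.mean(axis=1).std()

print(spread_individuals)  # near sigma = 10
print(spread_of_means)     # near sigma/sqrt(n) = 10/5 = 2
```

The ratio of the two spreads comes out close to sqrt(25) = 5, matching the sqrt(n) in the standard-error formula above.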
When calculating s or sigma, the signs of the (x - x bar) terms are in effect removed, since each term is squared.
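To connect this back to the original question, here is a sketch (assuming NumPy, with made-up data including one extreme value) contrasting the two ways of removing signs: the mean deviation uses absolute values, while the standard deviation uses squares, which gives large deviations extra weight.

```python
import numpy as np

# Made-up data with one extreme value
x = np.array([1.0, 2.0, 3.0, 4.0, 90.0])
dev = x - x.mean()  # mean is 20, so dev = [-19, -18, -17, -16, 70]

# Mean deviation: signs removed with the absolute value
mean_abs_dev = np.abs(dev).mean()

# Population standard deviation: signs removed by squaring
pop_std_dev = np.sqrt((dev ** 2).mean())

print(mean_abs_dev)  # 28.0
print(pop_std_dev)   # sqrt(1226), about 35.0
```

Both results are sign-free, but the standard deviation is noticeably larger here because squaring amplifies the single extreme deviation of 70 far more than the others.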
Wes R.