
#### Don Winton

-------Snip-------

I am taking my first-ever statistics course. However, as an elementary teacher, I did teach concepts such as mean, median, mode, and range. In this course we learned that the range is the highest score/data value minus the lowest, plus 1. I had taught that the way to find the range is simply to subtract the lowest score/data value from the highest. This is also how the statistics program I have computes it. My husband (a statistician) agrees with my method. Which method is correct? If the first method is correct, please explain why one adds 1 to the difference. Thanks!

In response to Range:

**{Not Mine, Don}**

Dona, you and your husband are right. The method you used, subtracting the smallest data value from the largest, is the correct way of calculating the sample range. The only possible alternative I can think of would be if you wanted to construct an unbiased estimate of the population range.

For example, suppose you have a uniform distribution between A and B, with both limits unknown. From the data you calculate the sample range, r = largest minus smallest. This r is necessarily less than the population range, R = B - A, because the sample extremes fall inside the true limits: for a sample of size N, the expected minimum and maximum each lie about (B - A)/(N + 1) inside their respective endpoints, so E(r) = R(N - 1)/(N + 1). To obtain an unbiased estimate of R, you therefore multiply r by (N + 1)/(N - 1), where N is the sample size. Note that this is a multiplicative factor, not an additive one. In short, I can't think of any reason to add one to the sample range.

-------End Snip-------
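As an illustration (my addition, not part of the original exchange): here is a quick Python simulation of the quoted point, using hypothetical limits A = 2 and B = 10 and a sample size of N = 5. It shows that the sample range max minus min underestimates the population range, and that the multiplicative (N + 1)/(N - 1) correction removes the bias; nothing here adds 1 to anything.

```python
import random

# Minimal sketch: for a uniform distribution on (A, B) with both limits
# unknown, the sample range r = max - min underestimates the population
# range R = B - A, and r * (N + 1) / (N - 1) is an unbiased estimate of R.

A, B = 2.0, 10.0   # hypothetical population limits; true range R = 8
N = 5              # sample size
TRIALS = 200_000

sum_r = 0.0
sum_corrected = 0.0
for _ in range(TRIALS):
    sample = [random.uniform(A, B) for _ in range(N)]
    r = max(sample) - min(sample)            # the standard sample range
    sum_r += r
    sum_corrected += r * (N + 1) / (N - 1)   # multiplicative correction

print("population range R      :", B - A)
print("mean sample range r     :", sum_r / TRIALS)          # ~ R*(N-1)/(N+1) = 5.33
print("mean corrected estimate :", sum_corrected / TRIALS)  # ~ 8.0
```

Running this, the raw sample range averages about 5.33 (that is, 8 × 4/6), while the corrected estimate averages about 8, as the quoted reply predicts.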

The point at issue, restated: the course teaches that the range is the highest score/data value minus the lowest, plus 1, whereas the method she had taught (and her software uses) is simply the highest minus the lowest.

Thoughts,

Regards,

Don