The last page is also part of a bunch of wiki pages that are... surely technically correct but difficult to grasp intuitively.
Note the difference between population (1/N) and sample (1/(N-1)) statistics. The first has a smaller mean squared error but is biased with respect to the population, and the […]
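A quick sketch of that bias claim (my own, not from the linked pages; the standard normal and the sample size are just illustrative). NumPy's `ddof=0` divides by N, `ddof=1` by N-1:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 1.0  # variance of the standard normal we draw from
n = 5           # small samples make the bias easy to see

# Average each estimator over many repeated samples.
pop_est = []   # 1/N  (population formula)
samp_est = []  # 1/(N-1) (sample formula)
for _ in range(100_000):
    x = rng.normal(0.0, 1.0, size=n)
    pop_est.append(x.var(ddof=0))   # biased low, but smaller MSE
    samp_est.append(x.var(ddof=1))  # unbiased

print(np.mean(pop_est))   # ~ (n-1)/n * true_var = 0.8
print(np.mean(samp_est))  # ~ true_var = 1.0
```

The 1/N version systematically under-estimates by a factor (n-1)/n, which is exactly what dividing by N-1 corrects.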
Some of the answers in the last link do point out interesting results: the sample mean and variance are optimal for a Gaussian distribution.
en.wikipedia.org/wiki/Unbiased_estimation... adds that the midrange ((min+max)/2) would be optimal for an unknown bounded […]
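A small check of the midrange idea (my sketch, assuming a Uniform(0, 1) as the bounded distribution). For bounded uniform data the midrange's error shrinks like 1/n while the sample mean's shrinks only like 1/sqrt(n), so the midrange should come out tighter:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
reps = 20_000

mids = []   # midrange estimates of the centre (0.5)
means = []  # sample-mean estimates of the centre
for _ in range(reps):
    x = rng.uniform(0.0, 1.0, size=n)
    mids.append((x.min() + x.max()) / 2)
    means.append(x.mean())

# The midrange has much smaller spread around the true centre here.
print(np.var(mids), np.var(means))
```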
An intuition I haven't yet verified (and I would appreciate some insights/comments on the matter!): when we characterise samples using simple means and standard deviations, we often make a hidden assumption of a normal (Gaussian) distribution.
This might be what we want (the central limit theorem […]
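One way I'd illustrate that hidden assumption (my own toy example, using an exponential distribution as a stand-in for skewed data): "mean ± 2 sd" implies a symmetric spread, but on skewed, strictly-positive data it can claim support where no observation can exist.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=100_000)  # strictly positive data

m, s = x.mean(), x.std()
lo, hi = m - 2 * s, m + 2 * s

print(lo)               # negative: the interval dips below zero
print((x < lo).mean())  # 0.0: no observation is ever below lo
print((x > hi).mean())  # ~5%: all the excluded mass sits in the upper tail
```

So the interval's total mass can still look Gaussian-ish, but the errors are entirely one-sided, which mean ± sd alone won't tell you.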
What made my gears turn a little was: what if, instead of adding more data, you can only take subsets of your samples? For example, you're trying to write a color picker tool for a photo. Different subsets of equal pixel count (the size of the picker tool) come out of a Poisson distribution (plus extra […]
So, first, maybe means? I know, the things below might be evident; I've been starting from a very low bar, okay?
I hear about means and averages all the time.
One thing that surprised me a few years ago was that the sample mean of a random variable is itself a random variable.
This was not […]
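The quickest way I know to see that fact (a sketch of mine, with arbitrary Gaussian draws): compute one mean per sample, many times over, and the means themselves scatter, with standard deviation sigma/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 25        # observations per sample
reps = 50_000 # number of independent samples

# One mean per sample: each is a fresh realisation of the "mean" random variable.
means = np.array([rng.normal(10.0, 2.0, size=n).mean() for _ in range(reps)])

print(means.std())  # ~ 2 / sqrt(25) = 0.4, not zero
```

If the mean were just a fixed number, that spread would be zero; instead it shrinks with n but never vanishes.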
One big motivation for this is that people often shove (pseudo-)statistical results under my nose. In some cases it "looks" like they did due diligence, but more often than not the significance results look fuzzily fishy, and I can't argue why with confidence because I don't have enough […]
So, I'm going to start a thread on the things I learn while trying to better understand #statistics, in case anybody is interested. Boosts and/or clarifications are welcome and appreciated!
It's been bugging me for a while that I don't seem to have a good intuitive grasp of statistics, and this […]