In probability theory and statistics, the variance is a measure of how far a set of numbers is spread out.
Variance describes how much a random variable differs from its expected value. The variance is defined as the average of the squared differences between the individual (observed) values and the expected value. Because these differences are squared, the variance is never negative. The variance is often represented by the symbol σ² (sigma squared) if the data is the entire population, and by s² if the data is from a sample.[1][2][3]
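Written as formulas (a standard formulation of the definition above; the symbols N, n, μ for the population mean, and x̄ for the sample mean are the usual ones and are not taken from the cited sources):

```latex
% Population variance: the average of the squared differences
% between each value x_i and the population mean \mu.
\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2

% Sample variance: the same idea, but dividing by n - 1
% (Bessel's correction) so that s^2 estimates \sigma^2 without bias.
s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (x_i - \bar{x})^2
```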
In practice, variance is a measure of how much something changes. For example, temperature has more variance in Moscow than in Hawaii.
The variance is not simply the average difference from the expected value. The standard deviation, which is the square root of the variance, is closer in size to a typical difference, but it is also not simply the average difference. Variance and standard deviation are used because they make the mathematics easier: for example, when two independent random variables are added together, their variances simply add. A short sketch of both quantities follows this paragraph.
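As a minimal sketch of these quantities (using Python's standard statistics module; the data values are invented for illustration):

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical example values, mean = 5

# Population variance: the average of the squared differences from the mean.
pop_var = statistics.pvariance(data)      # 4.0

# Sample variance: divides by n - 1 instead of n.
sample_var = statistics.variance(data)    # about 4.571

# The standard deviation is the square root of the variance.
pop_std = statistics.pstdev(data)         # 2.0
assert math.isclose(pop_std, math.sqrt(pop_var))

print(pop_var, sample_var, pop_std)
```

For this data the average absolute difference from the mean is 1.5, which illustrates the point above: the standard deviation (2.0) is close to, but not the same as, the average difference.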
In accountancy, a variance is the difference between a budgeted cost and the actual cost.
History
The statistician Ronald Fisher first used the term variance, in his 1918 paper "The Correlation between Relatives on the Supposition of Mendelian Inheritance":
"It is here attempted to (show) the biometrical properties of a population of a more general type that has (..) been examined, inheritance in which follows this scheme. It is hoped that in this way it will be possible to make a more exact analysis of the causes of human variability. The great body of available statistics shows us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and that therefore, the variablility may be uniformly measured by the standard deviation, corresponding to the square root of the mean square error."[4]
Related pages
References