In statistics, the method of moments is a method of estimating population parameters.
Method
Suppose that the problem is to estimate $k$ unknown parameters $\theta_1, \theta_2, \ldots, \theta_k$ describing the distribution of the random variable $W$.[1] Suppose the first $k$ moments of the true distribution (the "population moments") can be expressed as functions of the $\theta$s:

$$
\begin{aligned}
\mu_1 &\equiv \operatorname{E}[W] = g_1(\theta_1, \theta_2, \ldots, \theta_k),\\
\mu_2 &\equiv \operatorname{E}[W^2] = g_2(\theta_1, \theta_2, \ldots, \theta_k),\\
&\;\;\vdots\\
\mu_k &\equiv \operatorname{E}[W^k] = g_k(\theta_1, \theta_2, \ldots, \theta_k).
\end{aligned}
$$
Suppose a sample of size $n$ is drawn, and it leads to the values $w_1, \ldots, w_n$. For $j = 1, \ldots, k$, let

$$
\widehat{\mu}_j = \frac{1}{n} \sum_{i=1}^{n} w_i^{\,j}
$$

be the $j$-th sample moment, an estimate of $\mu_j$. The method of moments estimator for $\theta_1, \theta_2, \ldots, \theta_k$, denoted by $\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k$, is defined as the solution (if there is one) to the equations:

$$
\begin{aligned}
\widehat{\mu}_1 &= g_1(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
\widehat{\mu}_2 &= g_2(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
&\;\;\vdots\\
\widehat{\mu}_k &= g_k(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k).
\end{aligned}
$$
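As a minimal sketch of this procedure, the code below applies the method to a gamma distribution, whose first two population moments are $\operatorname{E}[W] = k\theta$ and $\operatorname{E}[W^2] = k(k+1)\theta^2$ for shape $k$ and scale $\theta$. The function name and the use of NumPy are choices made for this illustration, not part of the article.

```python
# Method-of-moments sketch for a Gamma(shape, scale) sample.
# Equate the first two sample moments to E[W] = shape*scale and
# E[W^2] = shape*(shape+1)*scale^2, then solve for the parameters.
import numpy as np

def gamma_method_of_moments(w):
    """Return (shape, scale) estimates from the first two sample moments."""
    w = np.asarray(w, dtype=float)
    m1 = w.mean()           # first sample moment, estimate of E[W]
    m2 = (w ** 2).mean()    # second sample moment, estimate of E[W^2]
    # Solving m1 = shape*scale and m2 = shape*(shape+1)*scale^2 gives:
    scale = (m2 - m1 ** 2) / m1
    shape = m1 ** 2 / (m2 - m1 ** 2)
    return shape, scale

# Example usage with simulated data.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=3.0, size=10_000)
print(gamma_method_of_moments(sample))  # roughly (2.0, 3.0)
```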
Reasons to use it
The method of moments is simple and yields consistent estimators (under very weak assumptions). However, these estimators are often biased.
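As an illustrative example (not drawn from the reference below), matching the first two moments of a normal distribution gives the variance estimator $\widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(w_i - \bar{w})^2$, which is consistent but biased:

$$
\operatorname{E}\left[\widehat{\sigma}^2\right] = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2,
$$

with the bias vanishing as $n \to \infty$.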
References
1. K. O. Bowman and L. R. Shenton, "Estimator: Method of Moments", pp. 2092–2098, Encyclopedia of Statistical Sciences, Wiley (1998).