Principle of marginality

In statistics, the principle of marginality is the fact that the average (or main) effects of variables in an analysis are marginal to their interaction effect: the main effect of one explanatory variable captures the effect of that variable averaged over all values of a second explanatory variable whose value influences the first variable's effect. The principle of marginality implies that, in general, it is wrong to test, estimate, or interpret main effects of explanatory variables where the variables interact or, similarly, to model interaction effects while deleting the main effects that are marginal to them.[1] While such models are interpretable, they lack applicability, as they ignore the dependence of a variable's effect upon another variable's value.

Nelder[2] and Venables[3] have argued strongly for the importance of this principle in regression analysis.

Regression form

If two independent continuous variables, say x and z, both influence a dependent variable y, and if the extent of the effect of each independent variable depends on the level of the other independent variable, then the regression equation can be written as:

  y_i = a + b·x_i + c·z_i + d·(x_i·z_i) + e_i

where i indexes observations, a is the intercept term, b, c, and d are effect size parameters to be estimated, and e is the error term.

If this is the correct model, then the omission of any of the right-side terms would be incorrect, resulting in misleading interpretation of the regression results.
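As a minimal sketch of the full model above, the following fits all four terms by ordinary least squares with NumPy. The parameter values and the data-generating process are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
z = rng.normal(size=n)

# Assumed true parameters: y = a + b*x + c*z + d*(x*z) + e
a, b, c, d = 1.0, 2.0, -1.0, 0.5
y = a + b * x + c * z + d * x * z + rng.normal(scale=0.1, size=n)

# Full design matrix: intercept, x, z, and the interaction x*z
X_full = np.column_stack([np.ones(n), x, z, x * z])
coef_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
print(coef_full)  # close to the true values [1.0, 2.0, -1.0, 0.5]
```

With all marginal terms included, least squares recovers each parameter; the sections below consider what goes wrong when terms are omitted.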

With this model, the effect of x upon y is given by the partial derivative of y with respect to x; this is b + d·z, which depends on the specific value of z at which the partial derivative is being evaluated. Hence, the main effect of x – the effect averaged over all values of z – is meaningless as it depends on the design of the experiment (specifically on the relative frequencies of the various values of z) and not just on the underlying relationships. Hence:

  • In the case of interaction, it is wrong to try to test, estimate, or interpret a "main effect" coefficient b or c, omitting the interaction term.[4]

In addition:

  • In the case of interaction, it is wrong not to include b or c, because this will give incorrect estimates of the interaction.[5][6]
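Both points can be demonstrated numerically. The sketch below (illustrative parameter values, NumPy least squares) gives z a nonzero mean so that the interaction term x·z correlates with x; dropping the main-effect term for x then visibly biases the interaction estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)            # mean 0
z = rng.normal(loc=1.0, size=n)   # mean 1, so x*z correlates with x

a, b, c, d = 1.0, 2.0, -1.0, 0.5  # assumed true parameters
y = a + b * x + c * z + d * x * z + rng.normal(scale=0.1, size=n)

# Full model: the interaction estimate is close to the true d = 0.5
X_full = np.column_stack([np.ones(n), x, z, x * z])
d_full = np.linalg.lstsq(X_full, y, rcond=None)[0][3]

# Reduced model omitting the main effect of x: the interaction
# coefficient absorbs part of x's main effect and is biased
X_red = np.column_stack([np.ones(n), z, x * z])
d_red = np.linalg.lstsq(X_red, y, rcond=None)[0][2]

print(d_full, d_red)  # d_full near 0.5; d_red far off (about 1.5 here)
```

In this setup the omitted b·x term is correlated with x·z, so the reduced model's interaction coefficient picks up a bias of roughly b·Cov(x, x·z)/Var(x·z); with independent regressors the bias can vanish by accident, but that is a property of the design, not of the model.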

See also

  • General linear model
  • Analysis of variance

References

1. ^Fox, J. Regression Notes.
2. ^Nelder, J. A. (1977). "A Reformulation of Linear Models". Journal of the Royal Statistical Society. 140 (1): 48–77. doi:10.2307/2344517. (Section 2.1: The Neglect of Marginality)
3. ^Venables, W.N. (1998). "Exegeses on Linear Models". Paper presented to the S-PLUS User's Conference, Washington, DC, 8–9 October 1998.
4. ^See Venables, p.13: "... testing main effects in the presence of an interaction is a violation of the marginality principle".
5. ^See Venables, p.14/15, about the S-Plus command drop1, which does not drop the main effect terms from a model with interaction: "To my delight I see that marginality constraints between factor terms are by default honoured". In R, the marginality requirement of the dropterm function (in package MASS) is stated in the Reference Manual.
6. ^The above regression model, with two independent continuous variables, is presented with a numerical example, in Stata, as Case 3 in What happens if you omit the main effect in a regression model with an interaction?.

