Entry: Dependent and independent variables

Definition:
In mathematical modeling, statistical modeling and experimental sciences, the values of dependent variables depend on the values of independent variables. The dependent variables represent the output or outcome whose variation is being studied. The independent variables, also known in a statistical context as regressors, represent inputs or causes, that is, potential reasons for variation. In an experiment, any variable that the experimenter manipulates can be called an independent variable. Models and experiments test the effects that the independent variables have on the dependent variables. Sometimes, even if their influence is not of direct interest, independent variables may be included for other reasons, such as to account for their potential confounding effect.

Mathematics

In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers)[2] and providing an output (which may also be a number).[2] A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable.[3] The most common symbol for the input is x, and the most common symbol for the output is y; the function itself is commonly written y = f(x).[3][4] It is possible to have multiple independent variables or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x, y), where z is a dependent variable and x and y are independent variables.[5] Functions with multiple outputs are often referred to as vector-valued functions.

Statistics

In an experiment, a variable manipulated by an experimenter is called an independent variable.[6] The dependent variable is the event expected to change when the independent variable is manipulated.[7] In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as the target variable (or in some tools as the label attribute), while an independent variable may be assigned a role as a regular variable.[8] Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data. The target variable is used in supervised learning algorithms but not in unsupervised learning.

Modeling

In mathematical modeling, the dependent variable is studied to see if and how much it varies as the independent variables vary. In the simple stochastic linear model y_i = a + b x_i + e_i, the term y_i is the i-th value of the dependent variable and x_i is the i-th value of the independent variable. The term e_i is known as the "error" and contains the variability of the dependent variable not explained by the independent variable. With multiple independent variables, the model is y_i = a + b_1 x_{1,i} + b_2 x_{2,i} + ... + b_n x_{n,i} + e_i, where n is the number of independent variables.

Simulation

In simulation, the dependent variable is changed in response to changes in the independent variables.
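As a minimal sketch (not from the source), the simple stochastic linear model y_i = a + b x_i + e_i from the Modeling paragraph above can be fit by ordinary least squares; all data and variable names below are hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: x is the independent variable, e the unexplained "error".
x = rng.uniform(0, 10, size=50)
e = rng.normal(0, 1, size=50)
y = 2.0 + 0.5 * x + e          # dependent variable with true a = 2.0, b = 0.5

# Design matrix with an intercept column; least squares estimates a and b.
X = np.column_stack([np.ones_like(x), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated intercept a = {a_hat:.2f}, slope b = {b_hat:.2f}")
```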
Statistics synonyms

Depending on the context, an independent variable is sometimes called a "predictor variable", "regressor", "covariate", "controlled variable", "manipulated variable", "explanatory variable", "exposure variable" (see reliability theory), "risk factor" (see medical statistics), "feature" (in machine learning and pattern recognition) or "input variable".[9][10] In econometrics, the term "control variable" is usually used instead of "covariate".[11][12][13][14][15]

Depending on the context, a dependent variable is sometimes called a "response variable", "regressand", "criterion", "predicted variable", "measured variable", "explained variable", "experimental variable", "responding variable", "outcome variable", "output variable" or "label".[10]

"Explanatory variable" is preferred by some authors over "independent variable" when the quantities treated as independent variables may not be statistically independent or independently manipulable by the researcher.[16][17] If the independent variable is referred to as an "explanatory variable", then the term "response variable" is preferred by some authors for the dependent variable.[10][16][17]

"Explained variable" is preferred by some authors over "dependent variable" when the quantities treated as "dependent variables" may not be statistically dependent.[18] If the dependent variable is referred to as an "explained variable", then the term "predictor variable" is preferred by some authors for the independent variable.[18]

Variables may also be referred to by their form: continuous, binary/dichotomous, nominal categorical, and ordinal categorical, among others.

An example is provided by the analysis of trend in sea level by Woodworth (1987).[15] Here the dependent variable (and variable of most interest) was the annual mean sea level at a given location, for which a series of yearly values were available. The primary independent variable was time. Use was made of a covariate consisting of yearly values of annual mean atmospheric pressure at sea level. The results showed that inclusion of the covariate allowed improved estimates of the trend against time to be obtained, compared with analyses that omitted the covariate.

Other variables

A variable may be thought to alter the dependent or independent variables, but may not actually be the focus of the experiment. Such a variable is kept constant or monitored to try to minimize its effect on the experiment, and may be designated as a "controlled variable", "control variable", or "extraneous variable". Extraneous variables, if included in a regression analysis as independent variables, may aid a researcher with accurate response parameter estimation, prediction, and goodness of fit, but are not of substantive interest to the hypothesis under examination. For example, in a study examining the effect of post-secondary education on lifetime earnings, some extraneous variables might be gender, ethnicity, social class, genetics, intelligence, age, and so forth. A variable is extraneous only when it can be assumed (or shown) to influence the dependent variable. If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest.
This effect is called confounding or omitted variable bias; in these situations, design changes and/or statistical control of the variable are necessary. Extraneous variables are often classified into three types: subject variables (characteristics of the individuals being studied, such as age or gender), blocking or experimental variables (characteristics of the persons conducting the experiment that may influence how a participant behaves), and situational variables (features of the environment in which the study was conducted).
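The omitted variable bias described above can be illustrated with a short simulation. The sketch below is not from the source and uses hypothetical values: an extraneous variable z influences the dependent variable y and is correlated with the independent variable x, so leaving z out of the regression distorts the estimated effect of x.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                       # extraneous / confounding variable
x = 0.8 * z + rng.normal(size=n)             # independent variable, correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x on y is 1.0

ones = np.ones(n)

# Regression that controls for z recovers the true coefficient on x.
full, *_ = np.linalg.lstsq(np.column_stack([ones, x, z]), y, rcond=None)

# Regression that omits z attributes part of z's effect to x: omitted variable bias.
omitted, *_ = np.linalg.lstsq(np.column_stack([ones, x]), y, rcond=None)

print("coefficient on x, z included:", round(full[1], 2))     # close to 1.0
print("coefficient on x, z omitted: ", round(omitted[1], 2))  # biased (about 2.0 here)
```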
In modelling, variability that is not covered by the independent variable is designated by e_i and is known as the "residual", "side effect", "error", "unexplained share", "residual variable", or "tolerance".
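As a small illustration (a hypothetical sketch, not from the source), the residuals e_i are simply the differences between the observed dependent variable and the values the fitted model predicts from the independent variable:

```python
import numpy as np

# Hypothetical observations of an independent variable x and dependent variable y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.6, 2.9, 3.6, 3.9, 4.6])

# Fit y_i = a + b x_i by least squares, then compute the residuals e_i.
X = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = a + b * x
residuals = y - fitted   # the "unexplained share" of the dependent variable

print("residuals:", np.round(residuals, 3))
```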
Examples

In a study measuring the influence of different quantities of fertilizer on plant growth, the independent variable would be the amount of fertilizer used. The dependent variable would be the growth in height or mass of the plant. The controlled variables would be the type of plant, the type of fertilizer, the amount of sunlight the plant gets, the size of the pots, etc.
In a study of how different doses of a drug affect the severity of symptoms, a researcher could compare the frequency and intensity of symptoms when different doses are administered. Here the independent variable is the dose and the dependent variable is the frequency/intensity of symptoms.
In measuring the amount of color removed from beetroot samples at different temperatures, temperature is the independent variable and amount of pigment removed is the dependent variable.
References

1. Hastings, Nancy Baxter. Workshop Calculus: Guided Exploration with Review. Vol. 2. Springer Science & Business Media, 1998. p. 31.
2. Carlson, Robert. A Concrete Introduction to Real Analysis. CRC Press, 2006. p. 183.
3. Stewart, James. Calculus. Cengage Learning, 2011. Section 1.1.
4. Anton, Howard, Irl C. Bivens, and Stephen Davis. Calculus: Single Variable. John Wiley & Sons, 2012. Section 0.1.
5. Larson, Ron, and Bruce Edwards. Calculus. Cengage Learning, 2009. Section 13.1.
6. http://onlinestatbook.com/2/introduction/variables.html
7. Random House Webster's Unabridged Dictionary. Random House, Inc., 2001. pp. 534, 971. ISBN 0-375-42566-7.
8. English Manual version 1.0 for RapidMiner 5.0, October 2013. Archived 2014-02-10 at https://web.archive.org/web/20140210002634/http://1xltkxylmzx3z8gd647akcdvov.wpengine.netdna-cdn.com/wp-content/uploads/2013/10/rapidminer-5.0-manual-english_v1.0.pdf
9. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9 (entry for "independent variable").
10. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9 (entry for "regression").
11. Gujarati, Damodar N., and Dawn C. Porter (2009). "Terminology and Notation." Basic Econometrics, fifth international edition. New York: McGraw-Hill. p. 21. ISBN 978-007-127625-2.
12. Wooldridge, Jeffrey (2012). Introductory Econometrics: A Modern Approach, fifth edition. Mason, OH: South-Western Cengage Learning. pp. 22–23. ISBN 978-1-111-53104-1.
13. Last, John M., ed. (2001). A Dictionary of Epidemiology, fourth edition. Oxford UP. ISBN 0-19-514168-7.
14. Everitt, B. S. (2002). The Cambridge Dictionary of Statistics, 2nd edition. Cambridge UP. ISBN 0-521-81099-X.
15. Woodworth, P. L. (1987). "Trends in U.K. mean sea level." Marine Geodesy 11(1): 57–87. doi:10.1080/15210608709379549.
16. Everitt, B. S. (2002). Cambridge Dictionary of Statistics. CUP. ISBN 0-521-81099-X.
17. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9.
18. Ash Narayan Sah (2009). Data Analysis Using Microsoft Excel. New Delhi. ISBN 978-81-7446-716-4.