Entry | Bennett, Alpert, and Goldstein's S |
Definition |
Rationale for use

Bennett, Alpert and Goldstein suggested that inter-rater reliability should be adjusted for the proportion of agreement that could be expected by chance, arguing that this gives a better measure than the simple agreement between raters.[2] They proposed an index that adjusts the observed proportion of rater agreement according to the number of categories employed.

Mathematical formulation

The formula for S is

S = \frac{Q P_a - 1}{Q - 1}

where Q is the number of categories and P_a is the proportion of agreement between raters. The variance of S follows from the fact that S is a linear function of P_a:

\operatorname{Var}(S) = \frac{P_a (1 - P_a)}{N \left(1 - 1/Q\right)^2}

where N is the number of subjects rated.

Notes

This statistic is also known as Guilford's G.[3] Guilford was the first person to use the approach extensively in the determination of inter-rater reliability.[citation needed]

References

1. Bennett, E.M., Alpert, R., & Goldstein, A.C. (1954). "Communications through limited response questioning". Public Opinion Quarterly, 18(3), 303–308. doi:10.1086/266520
2. Warrens, Matthijs J. (May 2012). "The effect of combining categories on Bennett, Alpert and Goldstein's S". Statistical Methodology, 9(3), 341–352. doi:10.1016/j.stamet.2011.09.001 (http://www.sciencedirect.com/science/article/pii/S157231271100089X)
3. Holley, J.W., & Guilford, J.P. (1964). "A note on the G index of agreement". Educational and Psychological Measurement, 24(4), 749–753. doi:10.1177/001316446402400402
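As a concrete illustration of the formulas above (not part of the original article; the function name and data are ours), S and its large-sample variance can be computed directly from two raters' category assignments:

```python
def bennett_s(ratings_a, ratings_b, q):
    """Bennett, Alpert and Goldstein's S for two raters.

    ratings_a, ratings_b: equal-length sequences of category labels.
    q: number of available categories (Q in the formula).
    """
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("need two equal-length, non-empty rating sequences")
    n = len(ratings_a)
    # P_a: observed proportion of items on which the two raters agree.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # S rescales P_a against the 1/Q agreement expected by chance.
    s = (q * pa - 1) / (q - 1)
    # Large-sample variance, since S is linear in P_a.
    var_s = pa * (1 - pa) / (n * (1 - 1 / q) ** 2)
    return s, var_s

# Two raters assign Q = 3 categories to 10 items; they agree on 8,
# so P_a = 0.8 and S = (3 * 0.8 - 1) / (3 - 1) = 0.7.
a = [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]
b = [1, 2, 3, 1, 2, 3, 1, 2, 1, 2]
s, var_s = bennett_s(a, b, q=3)
print(round(s, 3))  # 0.7
```

Unlike Cohen's kappa, the chance-agreement term here depends only on Q, not on the raters' marginal category frequencies, which is why S needs only the labels and the category count.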