rWG Agreement

Keywords: interrater agreement, rWG, multilevel methods, data aggregation, group agreement, reliability

If this value – the proportion of error variance in the judges' ratings – is subtracted from 1, the remaining difference can be interpreted as the proportion of variance attributable to agreement. Interrater agreement (IRA) can therefore also be estimated for multi-item scales, as sketched below.
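The standard definitions, following James, Demaree, and Wolf (1984), can be sketched as follows; the notation is supplied here for clarity: \(s_x^2\) is the observed variance of the judges' ratings on a single item, \(\bar{s}_{xj}^2\) the mean observed variance across the \(J\) items of a scale, and \(\sigma_E^2\) the variance expected when judges respond at random.

\[
r_{WG} = 1 - \frac{s_x^2}{\sigma_E^2}
\]

\[
r_{WG(J)} = \frac{J\left(1 - \bar{s}_{xj}^2/\sigma_E^2\right)}{J\left(1 - \bar{s}_{xj}^2/\sigma_E^2\right) + \bar{s}_{xj}^2/\sigma_E^2}
\]

Under the commonly used uniform null distribution over \(A\) response options, \(\sigma_E^2 = (A^2 - 1)/12\); other null distributions yield other values of \(\sigma_E^2\).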

Multilevel leadership researchers have applied direct consensus or referent-shift consensus composition models to aggregate individual-level data to a higher level of analysis. Consensus composition assumes that there is sufficient within-group consensus regarding the leadership construct of interest; in the absence of agreement, the aggregate construct is untenable.

At the same time, guidelines to help leadership researchers make data-aggregation decisions have received little attention. In particular, a discussion of how aggregation decisions can strengthen or obscure a study's theoretical contribution – a central concern of this article – has not been addressed in depth. Recognizing that empirical generalization depends on the rigor with which aggregation decisions are applied, we examine the often-overlooked assumptions behind the most common consensus statistics used to justify data aggregation – rWG and rWG(J) (James, Demaree, and Wolf, 1984). Using a dataset published as part of a Leadership Quarterly special issue (Bliese, Halverson, and Schriesheim, 2002), we highlight the potential misuse of rWG and rWG(J) as the sole statistic justifying aggregation to a higher level of analysis. We conclude with prescriptive implications for promoting consistency in how multilevel leadership research is conducted and reported.
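To make the computation concrete, here is a minimal Python sketch of rWG(J) under the uniform-null assumption; the function name rwg_j and the simulated ratings are illustrative only, not taken from the article or any particular library.

import numpy as np

def rwg_j(ratings: np.ndarray, n_options: int) -> float:
    """rWG(J) for a single group.

    ratings: array of shape (n_judges, n_items), values on a 1..n_options scale.
    Assumes the uniform "no agreement" null, sigma_E^2 = (A^2 - 1) / 12.
    """
    n_items = ratings.shape[1]
    # Mean observed within-group (sample) variance across the J items
    s2_mean = ratings.var(axis=0, ddof=1).mean()
    # Variance expected if judges responded at random (uniform null)
    sigma2_e = (n_options ** 2 - 1) / 12.0
    ratio = s2_mean / sigma2_e
    # Note: if observed variance exceeds the null (ratio > 1), the index
    # falls outside [0, 1], one of the interpretive pitfalls in this literature
    return (n_items * (1.0 - ratio)) / (n_items * (1.0 - ratio) + ratio)

# Illustration: 5 judges rate a 4-item scale on a 1-5 Likert scale
rng = np.random.default_rng(42)
group = rng.integers(3, 6, size=(5, 4))  # clustered ratings -> high agreement
print(f"rWG(J) = {rwg_j(group, n_options=5):.3f}")

With tightly clustered ratings the index approaches 1; with near-uniform ratings it approaches 0. The choice of null distribution drives the result, which is one reason the article cautions against relying on rWG(J) as the sole statistic justifying aggregation.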

References

Biemann, T., Cole, M. S., and Voelpel, S. C. (2012). Within-group agreement: on the use (and misuse) of rWG and rWG(J) in leadership research and some best practice guidelines. Leadersh. Q. 23, 66-80. doi: 10.1016/j.leaqua.2011.11.006

Burke, M. J., and Dunlap, W. P. (2002). Estimating interrater agreement with the average deviation index: a user's guide. Organ. Res. Methods 5, 159-172. doi: 10.1177/1094428102005002002

Burke, M. J., Finkelstein, L. M., and Dusig, M. S. (1999). On average deviation indices for estimating interrater agreement. Organ. Res. Methods 2, 49-68. doi: 10.1177/109442819921004

Cohen, A., Doveh, E., and Eick, U. (2001). Statistical properties of the rWG(J) index of agreement. Psychol. Methods 6, 297-310. doi: 10.1037/1082-989X.6.3.297

Cohen, A., Doveh, E., and Nahum-Shani, I. (2009). Testing agreement for multi-item scales with the indices rWG(J) and ADM(J). Organ. Res. Methods 12, 148-164. doi: 10.1177/1094428107300365

Harvey, R. J., and Hollander, E. (2004, April). "Benchmarking rWG interrater agreement indices: let's drop the .70 rule-of-thumb," in Paper Presented at the Meeting of the Society for Industrial and Organizational Psychology (Chicago, IL).

Klein, K. J., Conn, A. B., Smith, D. B., and Sorra, J. S. (2001). Is everyone in agreement? An exploration of within-group agreement in employee perceptions of the work environment. J. Appl. Psychol. 86, 3-16. doi: 10.1037/0021-9010.86.1.3

Pasisz, D. J., and Hurtz, G.