4.02.3.2.4 Inter-rater Reliability

Hui-Fang Chen, in Comprehensive Clinical Psychology (Second Edition), 2022

Inter-rater reliability is applied in situations where different assessors or raters provide subjective judgments of the same target. Each assessor who evaluates the same property constitutes a single repeat of the measure, and the error variance comes from the variability among the evaluations of different assessors. In clinical psychology, inter-rater reliability is commonly used when the target being measured involves observed performance or behaviors, such as clinical interviews or projective tests (Geisinger, 2013). It is often discussed alongside the similar concept of inter-rater agreement. While inter-rater reliability emphasizes the variability among raters on the same target, inter-rater agreement focuses on the absolute differences between the raters' ratings (Tinsley and Weiss, 2000).