Interobserver agreement

This database provides resources on methods for assessing reliability between human coders of coding schemes for observed behaviour occurrences and linguistic events in discourse or dialogue, e.g. the kappa statistic; a short worked example follows the table below.

To add a record, click on the plus symbol in the corner of the table. If you want to insert a link, please see our instruction page for the syntax for external links.

If you would like to discuss a particular record in the comment section below the database, please first indicate which record you mean by citing the red-coloured number on the left-hand side of the table.

Search for resources using the field below.

# | Title / Author(s) / Date | Description | Type/URL
1 | Understanding interobserver agreement: The kappa statistic. Viera, A.J. & Garrett, J.M. (2005) | Family Medicine article; discusses the use of the kappa statistic. | journal paper
2 | Tips for teachers of evidence-based medicine: 3. Understanding and calculating kappa. McGinn, T. et al. (2004) | Canadian Medical Association Journal article; presents three approaches to helping clinicians use the concept of kappa when applying diagnostic tests in practice. | journal paper (pdf)
3 | Observer agreement for event sequences: Methods and software for sequence alignment and reliability estimates. Quera, V. et al. (2007) | Behavior Research Methods article; describes a method for optimally aligning event sequences, based on the Needleman and Wunsch (1970) algorithm originally devised for aligning nucleotide sequences. | journal paper
4 | Train-to-Code: an adaptive expert system for training systematic observation and coding skills. Ray, J.M. & Ray, R.D. (2008) | Behavior Research Methods article; presents the design of, and results from three formative evaluations of, an adaptive computerized expert system that shapes observation and recording skills and maximizes both individual coding accuracy and stability. | journal paper (pdf)
5 | Can One Use Cohen's Kappa to Examine Disagreement? von Eye, A. & von Eye, M. (2005) | Methodology article; discusses the use of Cohen's κ (kappa), Brennan and Prediger's κn (kappa n), and the coefficient of raw agreement for the examination of disagreement. | journal paper (pdf)
6 | Using Cohen's Kappa to Calculate Inter-Rater Reliability. Eslea, M. (1996) | | short paper (pdf)
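
As a quick orientation to the statistic most of the records above discuss, here is a minimal sketch of how Cohen's kappa is computed from two coders' category labels. This is an illustrative Python example written for this page, not code taken from any of the listed papers; the function name and the example labels are invented for the illustration.

  from collections import Counter

  def cohens_kappa(coder_a, coder_b):
      """Cohen's kappa for two equal-length lists of category labels."""
      n = len(coder_a)
      # Observed agreement: proportion of items given identical codes.
      p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
      # Chance agreement expected from each coder's marginal category frequencies.
      freq_a, freq_b = Counter(coder_a), Counter(coder_b)
      p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
      return (p_observed - p_expected) / (1 - p_expected)

  # Two coders labelling ten utterances as question ('q') or statement ('s').
  coder_1 = ['q', 'q', 's', 's', 'q', 's', 'q', 'q', 's', 's']
  coder_2 = ['q', 'q', 's', 'q', 'q', 's', 'q', 's', 's', 's']
  print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.6

In this example the coders agree on 8 of 10 items (observed agreement 0.8), but because each coder uses the two categories equally often, agreement of 0.5 is expected by chance alone; kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6, so kappa corrects raw agreement for chance.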
