Inter-rater reliability
In empirical social research (psychology, sociology, epidemiology, etc.), inter-rater reliability (or rater agreement) denotes the extent of agreement (concordance) among the assessment results of different observers ("raters"). It indicates the degree to which results are independent of the observer, which is why, strictly speaking, it is a measure of objectivity. Reliability is a measure of the quality of the method used to measure a given variable. A distinction can be made between inter-rater and intra-rater reliability.
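The chance-corrected agreement the lead describes can be made concrete with a minimal sketch (an illustration only, not part of the source record; the function name and toy labels are assumptions). Cohen's kappa compares the observed agreement rate of two raters with the agreement expected by chance from each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # → 0.5
```

Here the raters agree on 6 of 8 items (p_o = 0.75), but with balanced yes/no marginals half of that agreement is expected by chance (p_e = 0.5), so kappa is 0.5 rather than 0.75.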
Property | Value |
---|---|
dbo:abstract | In empirical social research (psychology, sociology, epidemiology, etc.), inter-rater reliability (or rater agreement) denotes the extent of agreement (concordance) among the assessment results of different observers ("raters"). It indicates the degree to which results are independent of the observer, which is why, strictly speaking, it is a measure of objectivity. Reliability is a measure of the quality of the method used to measure a given variable. A distinction can be made between inter-rater and intra-rater reliability. (de) In statistics, agreement measures are used to quantify the agreement between different raters or rating methods regarding the ratings given to a set of items or objects. They are common in studies that assess whether the measurement of a human characteristic by different judges or rating methods is precise. They can also be used to examine the precision (though not the validity) of measurements of a phenomenon. For example, if two instruments recording wind direction show little agreement, at least one of them must be wrong; but even if agreement is high, one cannot conclude that the instruments are valid, i.e. that the direction is being measured correctly. Different agreement measures must be used depending on whether the number of raters is 2 or greater than 2, and on the rating scale used (nominal, ordinal, or interval). Kendall's tau, for instance, can only be used for two raters. (eu) In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability, otherwise they are not valid tests. There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are joint-probability of agreement, such as Cohen's kappa, Scott's pi and Fleiss' kappa; or inter-rater correlation, concordance correlation coefficient, intra-class correlation, and Krippendorff's alpha. (en) Inter-judge concordance (also called inter-judge or inter-observer reliability) is a statistical measure of the homogeneity of the judgements made by several evaluators of the same situation, that is, a quantitative measure of their degree of consensus. (fr) In statistics, inter-rater agreement (also called by various similar names, such as inter-rater reliability, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who classify, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise, they are not valid tests. There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are the joint probability of agreement, such as Cohen's kappa, Scott's pi and the ; or inter-rater correlation, the concordance correlation coefficient, intra-class correlation, and Krippendorff's alpha. (pt) In statistics, inter-rater reliability (inter-rater agreement, inter-rater concordance, interobserver reliability) refers to the degree to which raters agree with one another about something. Its score indicates the degree of similarity and consensus among raters' judgements. Inter-rater reliability is not the same as . (zh) |
dbo:thumbnail | wiki-commons:Special:FilePath/Comparison_of_rubrics...rrelation)_coefficients.png?width=300 |
dbo:wikiPageExternalLink | http://www.med-ed-online.org/rating/reliability.html https://nlp-ml.io/jg/software/ira http://john-uebersax.com/stat/agree.htm https://www.worldcat.org/title/handbook-of-inter-rater-reliability-the-definitive-guide-to-measuring-the-extent-of-agreement-among-raters/oclc/891732741 https://www.worldcat.org/title/measures-of-interobserver-agreement-and-reliability/oclc/815928115 http://justus.randolph.name/kappa https://agreestat360.com/ http://www.agreestat.com/research_papers/bjmsp2008_interrater.pdf |
dbo:wikiPageID | 7837393 (xsd:integer) |
dbo:wikiPageLength | 18132 (xsd:nonNegativeInteger) |
dbo:wikiPageRevisionID | 1109833550 (xsd:integer) |
dbo:wikiPageWikiLink | dbr:Inter-rater_reliability dbr:Standard_deviation dbr:Rating_(pharmaceutical_industry) dbr:Cronbach's_alpha dbc:Comparison_of_assessments dbr:Generalizability_theory dbr:Nominal_data dbr:Survey_research dbr:Concordance_correlation_coefficient dbr:Observational_studies dbc:Inter-rater_reliability dbr:Computational_linguistics dbr:Krippendorff's_alpha dbc:Statistical_data_types dbr:Kendall_rank_correlation_coefficient dbr:Pearson_product-moment_correlation_coefficient dbr:Rasch_model dbr:Bland–Altman_plot dbr:Cohen's_kappa dbr:Spearman's_rank_correlation_coefficient dbr:Test_validity dbr:Fleiss'_kappa dbr:Psychometrics dbr:Intra-class_correlation dbr:Intra-class_correlation_coefficient dbr:Scott's_pi dbr:Experimenter's_bias dbr:File:Bland-Altman-Plot.png dbr:File:Comparison_of_rubrics_for_evaluat...a-class_correlation)_coefficients.png |
dbp:wikiPageUsesTemplate | dbt:Commons_category dbt:ISBN dbt:Main dbt:More_citations_needed_section dbt:Ordered_list dbt:Reflist dbt:Short_description |
dct:subject | dbc:Comparison_of_assessments dbc:Inter-rater_reliability dbc:Statistical_data_types |
gold:hypernym | dbr:Degree |
rdf:type | dbo:University |
rdfs:comment | In empirical social research (psychology, sociology, epidemiology, etc.), inter-rater reliability (or rater agreement) denotes the extent of agreement (concordance) among the assessment results of different observers ("raters"). It indicates the degree to which results are independent of the observer, which is why, strictly speaking, it is a measure of objectivity. Reliability is a measure of the quality of the method used to measure a given variable. A distinction can be made between inter-rater and intra-rater reliability. (de) Inter-judge concordance (also called inter-judge or inter-observer reliability) is a statistical measure of the homogeneity of the judgements made by several evaluators of the same situation, that is, a quantitative measure of their degree of consensus. (fr) In statistics, inter-rater reliability (inter-rater agreement, inter-rater concordance, interobserver reliability) refers to the degree to which raters agree with one another about something. Its score indicates the degree of similarity and consensus among raters' judgements. Inter-rater reliability is not the same as . (zh) In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability, otherwise they are not valid tests. (en) In statistics, agreement measures are used to quantify the agreement between different raters or rating methods regarding the ratings given to a set of items or objects. They are common in studies that assess whether the measurement of a human characteristic by different judges or rating methods is precise. They can also be used to examine the precision (though not the validity) of measurements of a phenomenon. For example, if two instruments recording wind direction show little agreement, at least one of them must be wrong; but even if agreement is high, one cannot conclude that the instruments are valid, i.e. that the direction is being measured correctly. (eu) In statistics, inter-rater agreement (also called by various similar names, such as inter-rater reliability, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who classify, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise, they are not valid tests. (pt) |
rdfs:label | Inter-rater reliability (en) Interrater-Reliabilität (de) Adostasun neurri (eu) Concordance inter-juges (fr) Concordância entre avaliadores (pt) 評分者間信度 (zh) |
owl:sameAs | freebase:Inter-rater reliability wikidata:Inter-rater reliability dbpedia-de:Inter-rater reliability dbpedia-eu:Inter-rater reliability dbpedia-fr:Inter-rater reliability dbpedia-hu:Inter-rater reliability dbpedia-pt:Inter-rater reliability dbpedia-tr:Inter-rater reliability dbpedia-zh:Inter-rater reliability https://global.dbpedia.org/id/4Msnd |
prov:wasDerivedFrom | wikipedia-en:Inter-rater_reliability?oldid=1109833550&ns=0 |
foaf:depiction | wiki-commons:Special:FilePath/Bland-Altman-Plot.png wiki-commons:Special:FilePath/Comparison_of_rubrics...a-class_correlation)_coefficients.png |
foaf:isPrimaryTopicOf | wikipedia-en:Inter-rater_reliability |
is dbo:wikiPageDisambiguates of | dbr:IRR |
is dbo:wikiPageRedirects of | dbr:Inter‐rater dbr:Digit_preference dbr:Interrater_agreement dbr:Interrater_reliability dbr:Agreement_limit dbr:Intra-observer_variability dbr:Observer_variability dbr:Limit_of_agreement dbr:Limits_of_agreement dbr:Inter-annotator_agreement dbr:Inter-judge_reliability dbr:Inter-observer_reliability dbr:Inter-observer_variability dbr:Inter-rater_agreement dbr:Inter-rater_variability |
is dbo:wikiPageWikiLink of | dbr:Cardiac_imaging dbr:Amita_Manatunga dbr:Behavior_Rating_Inventory_of_Executive_Function dbr:Rorschach_test dbr:Patient_Health_Questionnaire dbr:Reliability_(statistics) dbr:DSM-5 dbr:Index_of_psychology_articles dbr:Ink_blot_test dbr:Inter-rater_reliability dbr:Intra-rater_reliability dbr:Intraclass_correlation dbr:Level_of_measurement dbr:Lincoln_index dbr:Crackles dbr:Child_Mania_Rating_Scale dbr:Cognitive_linguistics dbr:Glasgow_Coma_Scale dbr:Concordance_correlation_coefficient dbr:Conflict_tactics_scale dbr:Content_analysis dbr:Bennett,_Alpert_and_Goldstein's_S dbr:Bogardus_social_distance_scale dbr:Clean_language_interviewing dbr:Computer-based_test_interpretation_in_psychological_assessment dbr:Empathy_quotient dbr:Empty_can/Full_can_tests dbr:Fencing_response dbr:Krippendorff's_alpha dbr:Robert_F._Bales dbr:Thematic_apperception_test dbr:Transdiagnostic_process dbr:WINdows_KwikStat dbr:Hebephilia dbr:JumpSTART_triage dbr:Youden's_J_statistic dbr:ADHD_rating_scale dbr:Barthel_scale dbr:Breast_cancer_classification dbr:Diagnostic_and_Statistical_Manual_of_Mental_Disorders dbr:Glossary_of_clinical_research dbr:Kendall's_W dbr:Rating_(clinical_trials) dbr:Attribution_questionnaire dbr:Inter‐rater dbr:Jean_Charles_Athanase_Peltier dbr:Stanford_Sleepiness_Scale dbr:Astrology_and_science dbr:Kappa dbr:Cohen's_kappa dbr:Hierarchical_Taxonomy_of_Psychopathology dbr:Reference_ranges_for_blood_tests dbr:Relationship_science dbr:Autism_Diagnostic_Interview dbr:Automated_essay_scoring dbr:CAGE_questionnaire dbr:Classification_of_mental_disorders dbr:Screen_for_child_anxiety_related_disorders dbr:Klaus_Krippendorff dbr:IRR dbr:Manual_Ability_Classification_System dbr:Sentiment_analysis dbr:Word-sense_disambiguation dbr:Washington_University_Sentence_Completion_Test dbr:Seinsheimer_classification dbr:Saprobic_system dbr:Urinary_cell-free_DNA dbr:Ethogram dbr:FBI_method_of_profiling dbr:Concordance dbr:Digit_preference 
dbr:List_of_statistics_articles dbr:List_of_unsolved_problems_in_medicine dbr:Live_blood_analysis dbr:LOA dbr:Papillary_urothelial_neoplasm_of_low_malignant_potential dbr:Standard-setting_study dbr:Evaluation_of_machine_translation dbr:Fleiss'_kappa dbr:Veterans_benefits_for_post-traumatic_stress_disorder_in_the_United_States dbr:Scott's_Pi dbr:Pittsburgh_Sleep_Quality_Index dbr:Vanderbilt_ADHD_diagnostic_rating_scale dbr:Interrater_agreement dbr:Interrater_reliability dbr:Online_content_analysis dbr:Psychopathy_Checklist dbr:Agreement_limit dbr:Intra-observer_variability dbr:Observer_variability dbr:Limit_of_agreement dbr:Limits_of_agreement dbr:Inter-annotator_agreement dbr:Inter-judge_reliability dbr:Inter-observer_reliability dbr:Inter-observer_variability dbr:Inter-rater_agreement dbr:Inter-rater_variability |
is rdfs:seeAlso of | dbr:Observational_methods_in_psychology |
is owl:differentFrom of | dbr:Intra-rater_reliability |
is foaf:primaryTopic of | wikipedia-en:Inter-rater_reliability |
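The abstract above also names statistics that extend chance-corrected agreement beyond two raters. As a minimal sketch (the function name and the toy count matrix are assumptions, not part of the source record), Fleiss' kappa works from an item-by-category matrix counting how many raters assigned each item to each category:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an item x category count matrix.

    counts[i][j] = number of raters who assigned item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)          # number of items
    n = sum(counts[0])       # raters per item
    k = len(counts[0])       # number of categories
    # Per-item agreement: fraction of rater pairs that agree on item i.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement from the overall category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# 3 raters, 4 items, 2 categories; raters disagree only on the last item.
print(fleiss_kappa([[3, 0], [0, 3], [3, 0], [2, 1]]))  # ≈ 0.625
```

As with Cohen's kappa, the value is 1 for perfect agreement and drops toward (or below) 0 as observed agreement approaches what the marginal category proportions would produce by chance.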