Interrater Reliability of the OSCE Checklist for Vital Signs Examination Skills in the Nursing Science Study Program at UGM
Fitri Rochmana(1*), Totok Harjanto(2), Sri Mulyani(3)
(1) Program Studi Ilmu Keperawatan Fakultas Kedokteran, Kesehatan Masyarakat dan Keperawatan, Universitas Gadjah Mada
(2) Departemen Keperawatan Dasar dan Emergensi Program Studi Ilmu Keperawatan Fakultas Kedokteran, Kesehatan Masyarakat, dan Keperawatan Universitas Gadjah Mada
(3) Departemen Keperawatan Dasar dan Emergensi Program Studi Ilmu Keperawatan Fakultas Kedokteran, Kesehatan Masyarakat, dan Keperawatan Universitas Gadjah Mada
(*) Corresponding Author
Abstract
Background: Vital signs examination (VSE) is one of the key competencies that every nurse must possess. In practice, however, many VSE results are inaccurate, leading to less precise treatment decisions for patients. One academic strategy for improving students' skills is the Objective Structured Clinical Examination (OSCE), which is scored using a checklist instrument. The reliability of an instrument indicates internal validity and provides assurance that the measurements obtained are representative and stable. To date, no reliability test has been conducted on the OSCE checklist instrument for VSE used in the Nursing Science Study Program at Universitas Gadjah Mada (PSIK FK-KMK UGM).
Objective: To measure the reliability of the VSE assessment checklist used in PSIK FK-KMK UGM with the interrater reliability method.
Method: This is a quantitative descriptive study with a cross-sectional design. The sample consisted of 92 items of OSCE scores from first-year students in PSIK FK-KMK UGM, assessed by two raters: the examining lecturer and a master's student in Nursing Science at UGM. The assessment was performed once using the VSE skills checklist. The results were then analyzed with the Kappa coefficient and Percent Agreement (PA) to evaluate interrater reliability. Values above 0.41 for the Kappa coefficient and above 80% for PA were considered acceptable in this study.
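To make the analysis concrete, the following minimal sketch (not the authors' code; the rater score vectors are hypothetical) shows how PA and Cohen's Kappa are typically computed for two raters scoring the same dichotomous checklist items, with 1 = performed and 0 = not performed.

    def percent_agreement(r1, r2):
        """Share of items on which both raters gave the same score, in percent."""
        matches = sum(a == b for a, b in zip(r1, r2))
        return 100.0 * matches / len(r1)

    def cohens_kappa(r1, r2):
        """Cohen's Kappa: observed agreement corrected for chance agreement."""
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
        p1 = sum(r1) / n                                # rater 1's "performed" rate
        p2 = sum(r2) / n                                # rater 2's "performed" rate
        p_e = p1 * p2 + (1 - p1) * (1 - p2)             # agreement expected by chance
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores: examining lecturer (rater1) vs. second observer (rater2).
    rater1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
    rater2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

    print(f"Percent Agreement: {percent_agreement(rater1, rater2):.2f}%")  # 80.00%
    print(f"Cohen's Kappa:     {cohens_kappa(rater1, rater2):.3f}")        # 0.524

Note that Kappa is always lower than raw agreement because it discounts the agreement expected by chance alone, which is why studies of this kind report both statistics side by side.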
Result: Overall, the interrater reliability test of the VSE skills checklist yielded a Kappa value of 0.427 and a PA of 82.60%.
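As a quick consistency check (our own arithmetic, not a figure reported in the article), Cohen's Kappa relates the observed agreement $p_o$ to the chance-expected agreement $p_e$, and substituting the reported values implies a chance agreement of roughly 0.70:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
0.427 = \frac{0.826 - p_e}{1 - p_e} \;\Longrightarrow\; p_e \approx 0.70
\]

A chance-expected agreement this high is typical when most checklist items are scored "performed" by both raters; such skewed marginals depress Kappa even when raw agreement exceeds 80% (the high-agreement, high-prevalence paradox of Cohen's Kappa).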
Conclusion: Overall, the PA and Kappa values for the VSE checklist are acceptable, with the Kappa value falling into the moderate category. Improvement is needed for some items to enhance the reliability of the checklist.
DOI: https://doi.org/10.22146/jkkk.44250
Copyright (c) 2023 Jurnal Keperawatan Klinis dan Komunitas
Jurnal Keperawatan Klinis dan Komunitas (Clinical and Community Nursing Journal) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.