Exploring the Emergence of Method Effects in a Cognitive Ability Test

Wahyu Widhiarso, Alifa Rahmi Khairunisa
(Submitted 2 June 2022)
(Published 31 May 2024)


An individual's test score can be influenced by three sources of variation: the measured construct, the measurement method, and measurement error. Ideally, score variance in a test is dominated by the measured construct. In some cases, however, scores are also influenced by idiosyncrasies associated with the measurement method used in the test. This influence, known as a method effect, can compromise the structural validity of a measurement instrument. This study explored method effects in the UGM PAPS Test Series E using confirmatory factor analysis with a bifactor model. The results (N = 2,170) showed no substantial method effect in PAPS Series E1 and E2. Score variance was dominated by the measured construct rather than the measurement method, as indicated by factor loadings that were consistently larger on the construct factor than on the method factors. It can be concluded that examinees' scores tend to represent a general ability, namely cognitive reasoning, rather than specific abilities tied to the measurement medium.
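The comparison the abstract describes (construct loadings versus method loadings in a bifactor model) is often summarized with the explained common variance (ECV) index: the share of common variance attributable to the general factor. The sketch below illustrates the computation with purely hypothetical loadings; the numbers are not the loadings reported in this study, and the item/factor layout is an assumption for illustration.

```python
import numpy as np

# Hypothetical bifactor loading matrix: rows are items, columns are
# [general construct, method factor 1, method factor 2].
# Each item loads on the general factor and on exactly one method factor.
# These values are illustrative only, not results from the PAPS study.
loadings = np.array([
    [0.70, 0.20, 0.00],
    [0.65, 0.25, 0.00],
    [0.60, 0.15, 0.00],
    [0.72, 0.00, 0.18],
    [0.68, 0.00, 0.22],
    [0.66, 0.00, 0.20],
])

def explained_common_variance(loadings: np.ndarray) -> float:
    """ECV: squared general-factor loadings over all squared loadings.

    Values near 1 mean the general construct dominates the common
    variance; a sizable method effect would pull ECV down toward the
    method factors.
    """
    sq = loadings ** 2
    return float(sq[:, 0].sum() / sq.sum())

ecv = explained_common_variance(loadings)
print(f"ECV = {ecv:.3f}")  # close to 1 -> construct dominates, small method effect
```

With loadings like these, ECV is above 0.9, which would correspond to the pattern the study reports: score variance driven by the general reasoning construct rather than the measurement medium.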


confirmatory factor analysis; method effect; PAPS Test; construct validity


DOI: 10.22146/gamajop.75080





Copyright (c) 2024 Gadjah Mada Journal of Psychology (GamaJoP)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.