Resources

These resources support faculty and program administrators who are interested in conducting student learning outcomes assessment or academic program assessment and evaluation.

Assessment Commons

Assessment Commons is an open learning space that curates resources and tools for faculty and assessment professionals, covering student learning outcomes assessment, teaching and learning, program review, and accreditation.

Visit the Assessment Commons website

Research Design Resources

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133. https://doi.org/10.1177/1558689806298224

Kelley, K., Clark, B., Brown, V., & Sitzia, J. (2003). Good practice in the conduct and reporting of survey research. International Journal for Quality in Health Care, 15(3), 261–266. https://doi.org/10.1093/intqhc/mzg031

Singer, E., & Bossarte, R. M. (2006). Incentives for survey participation: When are they “coercive”? American Journal of Preventive Medicine, 31(5), 411–418. https://doi.org/10.1016/j.amepre.2006.07.013

What Works Clearinghouse. (2020). What Works Clearinghouse Procedures Handbook, Version 4.1. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/wwc/handbooks

Assessment and Measurement Resources

Allen, S., & Knight, J. (2009). A method for collaboratively developing and validating a rubric. International Journal for the Scholarship of Teaching and Learning, 3(2). https://doi.org/10.20429/ijsotl.2009.030210

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. The Phi Delta Kappan, 86(1), 8–21. https://www.jstor.org/stable/20441694

Braun, H. I., & Mislevy, R. (2005). Intuitive test theory. The Phi Delta Kappan, 86(7), 488–497. https://doi.org/10.1177/003172170508600705

Brown, N. J. S., & Wilson, M. (2011). A model of cognition: The missing cornerstone of assessment. Educational Psychology Review, 23(2), 221–234. https://www.jstor.org/stable/23882860

Montenegro, E., & Jankowski, N. A. (2020). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2020/01/A-New-Decade-for-Assessment.pdf

Statistical Resources

Bell, B. A., Ferron, J. M., & Kromrey, J. D. (2008). Cluster size in multilevel models: The impact of sparse data structures on point and interval estimates in two-level models. Proceedings of the Joint Statistical Meetings, Survey Research Methods Section, 1122–1129. http://www.asasrms.org/Proceedings/y2008/Files/300933.pdf

Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(1). https://scholarworks.umass.edu/pare/vol10/iss1/7

Dimitrov, D. M. (2010). Testing for factorial invariance in the context of construct validation. Measurement and Evaluation in Counseling and Development, 43(2), 121–149. https://doi.org/10.1177/0748175610373459

Dushoff, J., Kain, M. P., & Bolker, B. M. (2019). I can see clearly now: Reinterpreting statistical significance. Methods in Ecology and Evolution, 10, 756–759. https://doi.org/10.1111/2041-210X.13159

Höfler, M., Pfister, H., Lieb, R., & Wittchen, H.-U. (2005). The use of weights to account for non-response and drop-out. Social Psychiatry and Psychiatric Epidemiology, 40, 291–299. https://doi.org/10.1007/s00127-005-0882-5

Little, R. J. A. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404), 1198–1202. https://doi.org/10.2307/2290157

MacDonald, P., & Paunonen, S. V. (2002). A Monte Carlo comparison of item and person statistics based on Item Response Theory versus Classical Test Theory. Educational and Psychological Measurement, 62(6), 921–943. https://doi.org/10.1177/0013164402238082

Raudenbush, S. W., & Bryk, A. S. (1988). Methodological advances in analyzing the effects of schools and classrooms on student learning. Review of Research in Education, 15, 423–475. https://doi.org/10.2307/1167369

Rubin, D. B. (1976). Inference and missing data. Biometrika, 63(3), 581–592. https://doi.org/10.2307/2335739

Program Evaluation Resources

American Evaluation Association. (2011). Public statement on cultural competence in evaluation. https://www.eval.org/About/Competencies-Standards/Cutural-Competence-Statement

Chen, H. T., Donaldson, S. I., & Mark, M. M. (2011). Validity frameworks for outcome evaluation. In Advancing validity in outcome evaluation: Theory and practice (Vol. 130, pp. 5–16). New Directions for Evaluation. https://doi.org/10.1002/ev.361

Thurston, W. E., Graham, J., & Hatfield, J. (2003). Evaluability assessment: A catalyst for program change and improvement. Evaluation & the Health Professions, 26(2), 206–221. https://doi.org/10.1177/0163278703026002005