editorial

Oman Medical Journal [2021], Vol. 36, No. 3: e260 

Programmatic Evaluation: A Prospect in Program Evaluation Design

Siham Al Sinani1* and Khalid Al Naamani2

1Professional Competence Affairs, Oman Medical Specialty Board, Muscat, Oman

2Department of Internal Medicine, Armed Forces Hospital, Muscat, Oman

article info

DOI 10.5001/omj.2021.122

Improving the provision of health care is the ultimate goal of health profession education (HPE) programs. Therefore, a well-founded evaluation program is fundamental to the effective planning and implementation of HPE and to the achievement of quality health care.

Educational program evaluation is the systematic gathering and analysis of information on an educational program’s design, implementation, and outcomes, with the aim of monitoring the program and candidly pronouncing on its value.1 Although much of the attention is on academic program development and implementation, evaluation in education has transformed into a practice-based science over the past few decades.1

Because educational programs are essentially about bringing change,2,3 HPE programs are required to implement procedures that demonstrate pedagogical outcomes, resulting in greater accountability in the health professions.4 Consequently, program evaluation has to be designed to ascertain that change, both intended and unintended, has transpired3 and to allow educators and overseeing bodies to obtain important information on their HPE programs.1,3 Furthermore, evidence gathered through evaluation can be used to decide on the allocation of resources. Accordingly, evaluation-based feedback is an inherent part of the purposeful-practice concept in pedagogical processes.5

Evaluation approaches are contingent on the purpose and intended use of the chosen evaluation models. Additionally, HPE programs incorporate complex and interactive educational components resulting in diverse outcomes.6 Nevertheless, most HPE programs utilize only one or two evaluation tools and thus possibly miss essential components. Therefore, evaluation systems for complex HPE programs should not be constructed on conventional approaches that assume a direct cause-and-effect relationship between elements and outcomes.6 A change in evaluators’ mindsets to recognize the various combinations of a program’s elements is essential to reconcile such complexity.7 In addition, combining evaluation theories, approaches, and models to build a comprehensive evaluation model suited to complex educational programs may yield less misleading knowledge and capture the pertinent components of evaluation.3,7,8

Evaluation models such as the Action-Logic Model,9 Bennett’s Hierarchy model,10 and the Context, Input, Process, and Product model11 have been utilized in many educational fields. However, HPE-specific program evaluation models are limited, and research examining the efficacy of these models in HPE programs is inadequate, especially when programmatic evaluation is considered as a purposeful mix of evaluation activities that enhances HPE programs’ decision-making and quality-assurance functions and, thereby, their outcomes. Because HPE programs are regarded as open, complex systems, they cannot be evaluated by examining their components separately.12

All considered, there is an obvious gap in evaluation practice and knowledge in HPE. Evaluators can adopt one or more existing methods or tools, drawing on their strengths and avoiding their limitations, and combine them with their own additions and/or alterations in a programmatic, holistic approach to design a model that fits their purpose and needs.

There is increasing demand for HPE programs to demonstrate their impact on society.13 However, meta-analyses show a minimal effect of HPE programs on physicians’ behaviors and patient care outcomes.14 This could be attributed to the possibility that the evaluation models used do not satisfactorily capture the intended outcomes and/or to the programs themselves adding little value.8 Therefore, when evaluating HPE programs, a combination model that is attentive to the complexity of such programs and allows the examination of change is advocated as the best option in educational program evaluation design.3,12

Accordingly, in parallel with the ‘programs of assessment’ notion and in recognition of the principle that the whole is greater than the sum of its parts,15 the suggestion to move towards a ‘programmatic medical education program evaluation’ is put forward here. This proposition aims to maximize the benefits and minimize the limitations observed in some of the commonly utilized evaluation models. By adopting the concept that the whole is greater than the sum of its parts, we may be able to create evaluation programs that incorporate multiple tools, involve multiple stakeholders, and span the whole program, resulting in a more comprehensive evaluation and supporting the goal of improved patient care.8 Therefore, a programmatic, comprehensive, and holistic evaluation program that determines the educational program’s impact is necessary in this era of rapidly evolving HPE.

In conclusion, models for evaluating learning programs exist but lack evidence of comprehensiveness in complex and intricate HPE programs. In addition, there is limited literature on the programmatic approach to the evaluation of educational programs. This paper may provide one of the initial suggestions for the use of fit-for-purpose programmatic evaluation of educational programs, add to the existing knowledge on education program evaluation, and inform future frameworks for programmatic educational program evaluation. The proposed fit-for-purpose programmatic evaluation model can be context-specific, flexible in approach, and actively involve its stakeholders, and it is not limited to a particular theory, approach, or methodology. It may help users determine the best approach and methodology for their program.

references

  1. Goldie J. AMEE Education Guide no. 29: evaluating educational programmes. Med Teach 2006 May;28(3):210-224.
  2. Cook DA. Twelve tips for evaluating educational programs. Med Teach 2010;32(4):296-301.
  3. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide no. 67. Med Teach 2012;34(5):e288-e299.
  4. Musick DW. A conceptual model for program evaluation in graduate medical education. Acad Med 2006 Aug;81(8):759-765.
  5. Van Merriënboer JJ, Dent JA, Harden RM. A practical guide for medical teaching. London: Churchill Livingstone Elsevier; 2013. p. 199-206.
  6. Van Melle E, Gruppen L, Holmboe ES, Flynn L, Oandasan I, Frank JR; International Competency-Based Medical Education Collaborators. Using contribution analysis to evaluate competency-based medical education programs: it’s all about rigor in thinking. Acad Med 2017 Jun;92(6):752-758.
  7. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. Int J Nurs Stud 2013 May;50(5):587-592.
  8. Haji F, Morin MP, Parker K. Rethinking programme evaluation in health professions education: beyond ‘did it work?’. Med Educ 2013 Apr;47(4):342-351.
  9. Frechtling JA. Logic modeling methods in program evaluation. John Wiley & Sons; 2007.
  10. Radhakrishna RB, Relado RZ. A framework to link evaluation questions to program outcomes. J Ext 2009 Jun;47(3):7.
  11. Stufflebeam DL, Zhang G. The CIPP evaluation model: how to evaluate for improvement and accountability. Guilford Publications; 2017.
  12. Mennin S. Complexity and health professions education. J Eval Clin Pract 2010 Aug;16(4):835-837.
  13. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med 2004 Oct;79(10):955-960.
  14. Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O’Brien MA, Wolf F, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009;2009(2):CD003030.
  15. Dijkstra J, Van der Vleuten CP, Schuwirth LW. A new framework for designing programmes of assessment. Adv Health Sci Educ Theory Pract 2010 Aug;15(3):379-393.