Health Programs Uncovered: Simplifying Evaluative Research Decisions with Logic Analysis
Breitner Gomes Chaves1*, Khayreddine Bouabida2
1Research and Health Evaluation Consultant for Vitalité Health Network, New Brunswick, Canada
2Research Center of the Hospital Center of the University of Montreal (CRCHUM), Montreal, Canada
*Corresponding author: Breitner Gomes Chaves, Research and Health Evaluation Consultant for Vitalité Health Network, New Brunswick, Canada
Received Date: 02 June, 2023
Accepted Date: 12 June, 2023
Published Date: 15 June, 2023
Citation: Chaves BG, Bouabida K (2023) Health Programs Uncovered: Simplifying Evaluative Research Decisions with Logic Analysis. J Community Med Public Health 7: 329. https://doi.org/10.29011/2577-2228.100329
Abstract
Evaluation researchers and consultants frequently face the challenge of deciding whether to evaluate a health program that is already in an advanced stage of implementation, given the investment of time and resources that an evaluative research project requires. In this article, we explore a practical solution to this problem by introducing the logic analysis methodology as an approach to help evaluators determine whether a program deserves to be evaluated. By providing a transparent and effective means of communication with all parties involved, this approach can help prevent the waste of valuable resources. Moreover, it has great potential to contribute to the improvement of programs from their initial conception through more advanced stages of implementation.
Keywords: Logic analysis; Health Programs; Health Research Evaluation
Introduction
Assessing the effectiveness of health programs is a demanding and resource-intensive endeavor [1]. Evaluators frequently encounter demands to evaluate programs that are already in an advanced stage of implementation, typically from program advocates seeking greater insight into the impact or effects of their initiatives.
In proposing an evaluation research approach, the evaluator may encounter two possible obstacles: 1) programs whose theoretical foundations have never been comprehensively examined, and 2) programs that face problems arising from inefficient implementation. These two scenarios point to distinct reasons why a proposed intervention may fail to address a given problem effectively.
The first scenario posits that the program itself was not designed to tackle the problem at hand. The second proposes that the failure stems from inadequate planning or a lack of necessary adaptations during implementation. It is important to differentiate between these possibilities in order to accurately identify and address the root cause of the program's failure. A common misconception leads programs to be wrongly labeled as failures when, in reality, the shortcomings arise during the implementation phase. This misinterpretation is referred to as a Type III error [2].
It is crucial for health evaluators to recognize that evaluating program implementation failure can yield significant insight for the parties involved. However, when the underlying program theory fails, evaluating its effects may not be warranted, as establishing a causal relationship (in the broad sense) between the program and the observed effects may not be possible [3]. As such, health evaluators must exercise discretion when determining the value of evaluating program effects in instances where program theory proves inadequate.
To tackle this issue, this critical article outlines an approach for evaluators to consider when deciding whether to engage in evaluative research. Specifically, we propose the logic analysis methodology as an alternative approach for evaluators to use in making informed decisions.
What is Logic Analysis?
Logic analysis is a valuable method within the broader category of theory-based evaluations. Through this approach, we can examine the coherence between an intervention's program theory and its potential outcomes [4]. By doing so, we can identify the program's strengths and areas for improvement, ideally before implementation begins or even during its initial conception. Ultimately, logic analysis provides important insights into an intervention's underlying logic and helps refine and improve its design. Furthermore, it offers a comprehensive explanation, readily understandable to stakeholders, of the ways in which the desired effects may be achieved [5].
Logic analysis can be applied through two approaches: direct and inverse. The direct approach entails identifying the pivotal features and contextual conditions the program needs in order to yield the intended outcomes. Conversely, the inverse approach identifies alternative pathways and optimal strategies for achieving the program's stated goals [6]. Both approaches are instrumental in unlocking the full potential of logic analysis.
Program designers in the health field may have considerable experience and implicitly integrate theoretical robustness into program components. However, the logic analysis may reveal areas for improvement due to insufficient in-depth and systematic reflection during the program’s initial design phase.
How to Apply It?
This approach requires a minimum of three application steps:
The Construction of the Logical Model
Logical models serve as visual representations of a health program's theory [7]. These models illustrate the relationships between program resources, processes, and desired effects [8]. Multiple ways exist to build a logical model [9,10]. In causal logical models, intended effects are often depicted along a causal pathway extending from more immediate outputs and outcomes to long-term outcomes. As such, logical models provide a straightforward means of elucidating a program's theoretical assumptions. It should be emphasized that every connection or link within the program's causal model can be considered a hypothesis that must be examined and validated.
Prior to conducting any evaluative project, it is highly recommended that evaluators establish a logical model for the program if one does not already exist. Unless proven otherwise, a program lacking a logical model should be viewed with caution: its absence suggests a lack of deep reflection on the program's theoretical underpinnings, thereby increasing the likelihood of failure and the consequent waste of resources.
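Although logical models are usually developed on paper or with diagramming tools, their structure can also be captured programmatically, which makes it easy to flag links that still lack supporting evidence. The following minimal Python sketch is purely illustrative: the program components, the Link and LogicModel classes, and the evidence labels are hypothetical and are not drawn from the article.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """A hypothesized causal link in a program's logical model."""
    source: str                                   # a resource, activity, or output
    target: str                                   # an output or outcome it is expected to produce
    evidence: list = field(default_factory=list)  # literature or expert sources supporting the link

@dataclass
class LogicModel:
    """A minimal logical model: program components connected by hypothesized links."""
    program: str
    links: list

    def untested_links(self):
        """Return the links not yet backed by any evidence source."""
        return [link for link in self.links if not link.evidence]

# Hypothetical example for illustration only: a community walking program.
model = LogicModel(
    program="Community walking program",
    links=[
        Link("Trained walking-group leaders", "Weekly guided walking sessions",
             evidence=["expert opinion"]),
        Link("Weekly guided walking sessions", "Increased participant physical activity"),
        Link("Increased participant physical activity", "Reduced cardiovascular risk",
             evidence=["systematic review"]),
    ],
)

# Each link is a hypothesis to examine; flag those still lacking supporting evidence.
for link in model.untested_links():
    print(f"Hypothesis to verify: {link.source} -> {link.target}")
```

In this sketch, each causal link is treated exactly as the text suggests: as a hypothesis to be examined against literature and expert opinion during the next phase of the analysis.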
The Design of the Conceptual or Integrative Framework
In this phase, evaluators draw on diverse sources, such as the scientific literature and expert opinion, to scrutinize the program's logical model for coherence, logical consistency, and scientific robustness [11]. This analytical process enables them to identify potential areas of improvement and to recommend alternative solutions that enhance the program's chances of success. In essence, the evaluator must sift through different sources, including systematic reviews and meta-analyses, to determine whether the solutions proposed in a health program rest on a credible theoretical basis for resolving the targeted issue. The logical model's elements, and the relationships among them, must be scrutinized closely. Additionally, this phase may identify contextual factors that could facilitate or impede the implementation of the program [6].
Evaluation of Program Theory
In the final phase, the evaluator compares the logical model against the findings from phase two and subsequently provides a comprehensive initial assessment and recommendations to all relevant parties [11]. This critical step involves identifying both the strengths and the weaknesses of the program. In the worst-case scenario, the program may lack a sound theoretical foundation for addressing the intended problem and may require significant modification or even discontinuation. Alternatively, specific recommendations for improving certain program components may be made to enhance the likelihood of success in real-world settings.
Is the Program Worth Evaluating?
Evaluative research involves more than just measuring the effects of a program. It also seeks to analyze potential causal relationships between the program or its components and the observed effects [12,13]. Without an analysis of the theoretical foundation of programs using methods such as logic analysis, the value of evaluative research may be negligible.
In the practical world of evaluation consultants and researchers, it is not uncommon for large-scale programs in which health systems have invested significant public resources to lack logical models, which underscores the fundamental role of logic analysis for evaluators. Because of the complexity of health programs, evaluation processes can take years and consume public resources or research agencies' funding, even as private actors show greater interest in more rigorous evaluation methods.
Evaluators must always consider the relevance of engaging in certain evaluative projects, particularly when programs are already in an advanced stage of implementation and the parties involved have high expectations for their initiatives. In such circumstances, evaluators may face a significant ethical dilemma, as their initial evaluative judgment based on a logic analysis may, in extreme cases, recommend the discontinuation of an ongoing program. Nonetheless, logic analysis in this context can provide evaluators with a rapid initial assessment of the program and lend weight to their initial advice to stakeholders.
Conclusion
This article discusses a challenging situation frequently encountered by evaluators worldwide: evaluating programs that are already in advanced stages of implementation. To assist evaluators and evaluation researchers in making informed decisions, we suggest adopting logic analysis as a practical, efficient, and cost-effective approach to assessing whether an evaluation research project is worth undertaking. We consider that programs lacking a plausible theoretical foundation are not worth subjecting to an exhaustive evaluation research process, and the evaluator should be transparent about this with all parties involved.
Finally, evaluative researchers should embark on an evaluation research project for a complex health program only if a solid and reasonable theoretical foundation has been established. Failure to meet this prerequisite would put the evaluation process at risk of failure, wasting resources and prolonging the existence of a program that fails to address the problems for which the evaluation was requested or initiated.
References
1. Vargas C, Whelan J, Brimblecombe J, Allender S (2022) Co-creation, co-design, co-production for public health: a perspective on definition and distinctions. Public Health Res Pract 32: 3222211.
2. Basch CE, Sliepcevich EM, Gold RS, Duncan DF, Kolbe LJ (1985) Avoiding type III errors in health education program evaluations: a case study. Health Educ Q 12: 315-331.
3. Rey L, Brousselle A, Dedobbeleer N (2011) Logic analysis: testing program theory to better evaluate complex interventions. Can J Program Eval 26: 61-89.
4. Hudon C, Chouinard M-C, Brousselle A, Bisson M, Danish A (2020) Evaluating complex interventions in real context: logic analysis of a case management program for frequent users of healthcare services. Eval Program Plann 79: 101753.
5. Tremblay M-C, Brousselle A, Richard L, Beaudet N (2013) Defining, illustrating and reflecting on logic analysis with an example from a professional development program. Eval Program Plann 40: 64-73.
6. Brousselle A, Champagne F (2011) Program theory evaluation: logic analysis. Eval Program Plann 34: 69-78.
7. Champagne F, Brousselle A, Hartz Z, Contandriopoulos A, Denis J (2009) Modéliser les interventions. L'évaluation: concepts et méthodes: 57-70.
8. Chen WW, Cato BM, Rainford N (1999) Using a logical model to plan and evaluate a community intervention program: a case study. Int Q Community Health Educ 18: 449-458.
9. Renger R, Titcomb A (2002) A three-step approach to teaching logic models. Am J Eval 23: 493-503.
10. Chaves BG, Catherine B, Lord M-M, Thériault J, Lambert F, et al. (2021) Logical model for Mental Health College Model in Quebec, Canada. International Journal of Development Research 11: 48781-48784.
11. Gomes Chaves B (2022) L'analyse d'implantation du modèle Recovery college au Québec. Université de Montréal.
12. Dryden-Palmer KD, Parshuram CS, Berta WB (2020) Context, complexity and process in the implementation of evidence-based innovation: a realist informed review. BMC Health Serv Res 20: 81.
13. Hayes H, Parchman ML, Howard R (2011) A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN). J Am Board Fam Med 24: 576-582.