Propositional Evaluation & Outcomes Assurance by Andrew Hawkins is licensed under CC BY 4.0


The easiest way to ‘fix’ your program logic

Updated: Nov 15, 2022

Replace those arrows with plus signs. This simple fix can make a program logic more logical. Retain all those boxes – making sure that each output, outcome and impact is written in the form of a condition statement. That is, as a subject and a predicate in the form of ‘who/what is in what condition’. Then make sure you have stated the assumptions and/or constraints that need to be in place for the program to work. With a dose of humility about what your proposition or program may actually achieve, you can display a valid Program Design Logic.


A valid program logic is one that makes sense on paper when viewed by subject matter experts – it is valid if it appears reasonable that, if lower order conditions (including assumptions) hold, then we can expect the higher order outcome(s) to occur. We can then test the overall soundness of the proposition by collecting empirical data on the extent to which the important condition statements do in fact hold, for whom, and under what circumstances (how well grounded the proposition is). The combination of a valid logic and empirical methods to ground it ‘in reality’ is what results in a ‘sound’, or truly logical, program logic.


The general formula is:

Inputs + Assumptions = Outputs

Outputs + Assumptions = Outcomes

Outcomes + External Factors = Impacts
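
To make the reading concrete, here is a minimal sketch in Python (the condition wording is my own illustration, not notation from any standard): each level is a set of condition statements, and the plus sign means the whole set, taken together, is what we expect to be sufficient for the next level.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """A condition statement: 'who/what is in what condition'."""
    subject: str    # who/what
    predicate: str  # is in what condition

# Illustrative only: each level is a *set* of conditions joined by '+',
# not a chain of arrows between individual boxes.
inputs = [Condition("Facilitators", "are trained and funded")]
assumptions = [Condition("Venues", "are accessible to intended beneficiaries")]
outputs = [Condition("Workshops", "are delivered to intended beneficiaries")]

# Inputs + Assumptions = Outputs: the combined set, if it holds,
# is what we expect to be sufficient for the next level.
proposition = {"if_all_hold": inputs + assumptions, "then_expect": outputs}
```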


Many program logic diagrams purport to be causal models, displaying causal links or a chain of cause and effect. Most of these attempts are unsatisfactory, because they try to render a complex plan intervening in a complex system as a factory-style, linear model of inputs and outputs. A causal model in the form of a Structural Causal Model (Pearl 2018) that measures the strength of causal relationships between theoretically important variables is an aspiration – but one rarely achieved in program logic for complex systems. This is where the analogy between causal modelling and program logic breaks down. A causal model seeks to determine the strength of causal links between variables. Program logic is concerned with conditions, or aggregations of activities and people in context, not the independent impact of individual variables.


The easiest way to appreciate the difference between a causal model and a program logic – or ‘Program (Propositional) Design Logic’ – is to consider whether your program is best understood as a scientific hypothesis about the causes of human behaviour, or as a proposition or argument about the value of a course of action in a highly contingent, interdependent or ‘messy’ reality. If it is the latter, you could consider your program as a proposition of the form ‘if we do these activities, we have these reasons to expect these outputs and outcomes to materialise’, rather than as any kind of theory.


Let’s consider an example. A causal model focused on behaviour change might attempt to model the effects of self-efficacy, self-concept and social norms on the likelihood of a person performing a behaviour. A program logic model for a behaviour change program may be concerned with whether people are aware of a program, whether they attend it, and whether they engage with its content. In the causal model we are interested in the extent to which each variable has a causal impact on the other variables and on actual behaviour. In program logic we are not interested in the extent to which awareness causes attendance, the extent to which attendance causes engagement, or how much each of these variables causes the actual behaviour. These are difficult research questions about the nature of human behaviour – not ones that could be answered in a single study, or that represent a good use of evaluation resources for making practical change in the world. To determine these scientifically for a single program would require painstaking research, given the complex set of mechanisms and contexts that would affect each of these conditions. Luckily, this is not the point of evaluation. Research is concerned with the internal and external validity of knowledge claims. Evaluation is about the value of action; a useful or good action will draw on research but is not itself research.


Thankfully, evaluation is very rarely, if ever, actually concerned with scientific causal models. It is more often focused on propositions about the value of action. In program logic we are setting out a causal package that is intended to work ‘here and now’, not to hold across time and space. This is more like setting out a recipe than establishing a chain of cause and effect. We are setting out the activities we think will be sufficient to generate the outputs we think are necessary for our program to be effective – or, to continue with the technical jargon, to be sufficient for our intended outcome or impact. This is similar to the INUS model of cause and effect (Mackie 1974). What we are displaying could be described as an SNS model – Sufficient-Necessary-Sufficient. That is, our set of activities is collectively sufficient, and each activity individually necessary, for the program to be sufficient for (or bring about) our intended outcome(s). We can drill down into any one of our actions and determine what is necessary and sufficient for it, and so on ad infinitum, but in the world of everyday reasoning we only need to deal with the activities or conditions that are reasonably uncertain, and interrogate them until we reach consensus that we can accept the proposition as sound.
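
As a hedged sketch of the SNS idea – assuming, purely for illustration, that we can record whether each condition in the package held (the condition names below are hypothetical):

```python
def collectively_sufficient(conditions: dict[str, bool]) -> bool:
    # The package is expected to bring about the outcome only
    # when every condition in the set holds together.
    return all(conditions.values())

def missing_necessary(conditions: dict[str, bool]) -> list[str]:
    # Each condition is treated as individually necessary, so any
    # that failed to hold explains why the package may not work.
    return [name for name, held in conditions.items() if not held]

conditions = {
    "sessions delivered": True,
    "materials distributed": False,
}
print(collectively_sufficient(conditions))  # False
print(missing_necessary(conditions))        # ['materials distributed']
```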


It is worth noting that our proposition or program is itself rarely necessary, as there is more than one way to address any problem. But the activities of our selected approach must be sufficient for it to be effective, and every activity we undertake must actually be necessary for it to be efficient.


Of course, when we implement a program there are many other factors in the world that affect the results of what we do. In addition to our activities and outputs, there are assumptions being made that mediate whether our activities are in fact sufficient for our outputs and outcomes (i.e., the program won’t work unless these assumptions are true). There will also be external factors that moderate the extent to which the direct outcomes of our efforts contribute to the longer-term or higher-order impacts that are indirect outcomes of our work (i.e., the program effect will be amplified or suppressed by these other factors). The key is to focus on what your program will be sufficient for, which means paying careful attention to assumptions. What your program contributes towards must be logically connected – or why are you doing the program? – but measuring that independent contribution empirically is generally very difficult and not usually the best use of evaluation resources.
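
One way to picture the mediate/moderate distinction is a sketch under deliberately simplified assumptions (the function names and numbers are illustrative, not a measurement method): assumptions act as a gate on whether the outcome occurs at all, while external factors scale how much that outcome contributes to impact.

```python
def expected_outcome(activities_done: bool, assumptions: list[bool]) -> bool:
    # Assumptions mediate: the program does not work at all
    # unless every assumption is true.
    return activities_done and all(assumptions)

def expected_impact(outcome: bool, external_factor: float) -> float:
    # External factors moderate: they amplify or suppress the
    # contribution of direct outcomes to longer-term impacts.
    return (1.0 if outcome else 0.0) * external_factor

outcome = expected_outcome(True, [True, True])
print(expected_impact(outcome, external_factor=0.5))  # contribution suppressed to 0.5
```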


Let’s consider a very simple example for illustrative purposes. We are showing a collection of necessary conditions (outputs and assumptions) and then an arrow (or equals sign) to indicate that we think they will be sufficient for an outcome. We will often need to state our reasons for thinking these are necessary and sufficient – and that is where theories of behaviour and behaviour change come in, and where discussion with subject matter experts and intended beneficiaries is crucial. But we are relying on these theories, not testing them directly – much as the building of a bridge relies on theories of torsion and gravity but does not attempt to test them.
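
A minimal sketch of what such a diagram might contain, using illustrative labels drawn from the behaviour change example above (the outcome wording is my own):

```python
# Illustrative contents of a simple program logic: a set of outputs
# plus an assumption, taken together as sufficient for the outcome.
outputs = [
    "Intended beneficiaries are aware of the program",
    "Intended beneficiaries attend the program",
    "Intended beneficiaries engage with the program content",
]
assumptions = [
    "Intended beneficiaries have the intention to change their behaviour",
]
outcome = "Intended beneficiaries adopt the new behaviour"

# Outputs + Assumptions = Outcome
print(" + ".join(outputs + assumptions), "=", outcome)
```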


Notice in this example we are assuming ‘Intended beneficiaries have the intention to change their behaviour’. This is because in our program we are not really doing a lot to bring this about. If subject matter experts felt it was not reasonable to rely on this assumption (as when offenders attend a program to get time off their prison sentence rather than to sincerely change their behaviour), we could develop some activity to ensure we only include those with this intention – and then it would no longer be an assumption but an output of our activity. Depending on the stage in the process at which this new activity occurs, you may place this condition further towards the bottom or top of the diagram, to indicate to your reader how foundational it is or how early in the process you think this step is necessary. It doesn’t really matter much, because in the end it is the configuration, rather than the order, of conditions that we are most interested in.
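
Continuing that illustrative sketch, the change might look like this – the condition is no longer merely assumed but produced by a new screening activity:

```python
assumptions = ["Intended beneficiaries have the intention to change their behaviour"]
outputs = ["Intended beneficiaries engage with the program content"]

# A screening activity makes the intention something the program
# brings about, so the condition moves from assumptions to outputs.
assumptions.remove("Intended beneficiaries have the intention to change their behaviour")
outputs.append("Only beneficiaries intending to change their behaviour are enrolled")

print("Outputs:", outputs)
print("Assumptions:", assumptions)  # now empty
```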


Many program logics have these condition statements already – so it’s not that hard to adapt them. Why should you care? Because it changes the way you consider whether a program is logical, and it shifts the focus from measuring the long-term (and often indirect) outcomes of a proposition towards determining the soundness of the proposition, now and into the future, based on what it is reasonable for it to achieve. It focuses on ‘what makes this a good idea’ rather than research questions such as ‘what works’ that have so often evaded our ability to generate credible answers.


I hope this one simple fix – replacing all but the last of your arrows with plus signs (retaining the arrow that goes from the collection of outputs and assumptions to outcomes, or changing it to an equals sign) – helps you to think about whether a program makes sense on paper before it is delivered. This should prompt you to evaluate its design and amend it early; this can be considered a form of prospective evaluation (Datta 1990; Pawson 2013, pp. 160–190). If you want more detail on the logical foundations of this approach, read this article (Hawkins 2020).


References

Datta, L. E. (1990). Prospective evaluation methods: The prospective evaluation synthesis (GAO/PEMD-10.1.10). Washington, DC: United States General Accounting Office.


Hawkins, A. (2020). Program logic foundations: Putting the logic back into program logic. Journal of MultiDisciplinary Evaluation, 16(37), 38–57.


Mackie, J. L. (1974). The cement of the universe: A study of causation. Oxford: Clarendon Press.


Pawson, R. (2013). The science of evaluation: A realist manifesto. SAGE Publications Ltd.


Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. New York: Basic Books.

