

How do I get started?

From an idea to a good idea in under two hours

Let's assume you have studied a problem and worked out some of its key root causes. You have isolated those root causes that are within your mandate to do something about. Not every option is on the table; politics and money limit what can be done. Eventually, someone suggests an idea that might work. You are now ready to set out the logic and see whether you can develop a sound plan.


The first step is to check whether the plan is valid. This is akin to what, in Australia, we call the pub test. Can you explain it simply? Can you, with a straight face and in good conscience, set out the steps and assumptions in your plan and the reasons why it will work?

This thinking is what you shape into a Propositional (or Program) Design Logic (PDL) diagram. The diagram focuses on what is necessary for the plan to work, and what it must be sufficient to achieve if it does work. Reasons should be given as to why the conditions we bring about, together with assumptions about the context, will be sufficient for our intended outcome. This is about working out whether the plan makes sense on paper, ex-ante: whether it is a valid proposition.

At this stage, we are just concerned with making sure the plan is valid. Later, when your plan is both 'valid' and 'well grounded' (that is, you have evidence to show that it is delivering outputs and outcomes in reality), it can be said to be a logical plan.


Given the world is a dynamic place and outcomes from the past do not always repeat, a constant process of checking is required to make sure the plan remains logical.

What might a valid PDL or plan look like?

[Image: example PDL diagram]

Six key steps

Asking the core questions to develop a sound, logical or just plain 'good' plan

1. What is the ultimate outcome or impact that the plan must contribute towards? This will usually be part of a corporate or strategic plan, and it shows how your plan aligns with overall strategy. Write this at the top of your diagram.


2. What is the minimum outcome the plan must achieve for it to be considered a success? Write this as the Sufficient Condition in your diagram, underneath the ultimate outcome or impact. While the minimum might seem a low bar, it keeps the focus on the essentials of the plan. Different stakeholders may emphasise different things, so this will need to be negotiated. Articulate the outcome(s) as a sufficient condition in the form of 'who/what is in what condition', and be specific: include a number and a timeframe, e.g. '500 members of the target group commence employment by Dec 2022'.

3. What are the core actions considered necessary for the plan to work? Write these out in chronological order along the bottom of your diagram.

4. If each action in the plan were delivered perfectly, who or what would be in what condition? Write out the Necessary Conditions above the actions.

5. What assumptions about the operating context are you relying on for your plan to work, so that what you do, combined with what you are relying on, is enough to achieve the intended outcomes? Fill in the Assumptions.

6. Do stakeholders and subject matter experts agree that, if all of the necessary conditions were met, they would have a high degree of confidence that the plan would succeed? In other words, looking at the complete diagram, do the relevant people agree that there are good reasons to expect that, if your actions were delivered perfectly AND the assumptions held, your outputs would be achieved, and so would your outcomes (sufficient conditions)? Would the plan deliver the intended outcomes?

Are there good reasons to accept that Outputs (i.e. Necessary conditions) + Assumptions = Outcomes (i.e. Sufficient conditions)? See the next section for more on good reasons and warrants.

Revise as necessary until you have a plan that people agree makes sense on paper. Generally, you will either have to be more realistic about what your plan will be sufficient to achieve OR add actions so you do not have to rely on shaky assumptions.

Remember: other than the actions, every condition on your diagram should be in the form of a subject and a predicate, i.e. who/what is in what condition.
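
If it helps to see the structure another way, here is a minimal sketch, in Python, of a PDL as a simple data structure with the 'simple calculation' above expressed as a check. The condition names, numbers and dates are hypothetical examples only; this illustrates the logic of the six steps, it is not part of the method itself.

# A minimal, illustrative sketch only; a PDL is normally a diagram, not code.
# All condition names below are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class PDL:
    ultimate_outcome: str                      # step 1: strategic impact at the top
    sufficient_condition: str                  # step 2: minimum outcome, with number and timeframe
    actions: list = field(default_factory=list)                # step 3: core actions
    necessary_conditions: dict = field(default_factory=dict)   # step 4: condition -> met?
    assumptions: dict = field(default_factory=dict)            # step 5: assumption -> holding?

    def simple_calculation(self) -> bool:
        # Step 6, the 'simple calculation': are the necessary conditions plus the
        # assumptions together enough to accept the sufficient condition?
        return all(self.necessary_conditions.values()) and all(self.assumptions.values())

plan = PDL(
    ultimate_outcome="Long-term unemployment in the region falls",
    sufficient_condition="500 members of the target group commence employment by Dec 2022",
    actions=["Recruit participants", "Deliver training", "Broker placements with employers"],
    necessary_conditions={
        "Participants have completed accredited training": True,
        "Employers have committed placement positions": False,   # not yet delivered
    },
    assumptions={
        "Local labour market demand remains stable": True,
    },
)

print(plan.simple_calculation())  # False: one necessary condition is not yet met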

Now that the plan looks 'valid', what are the core tactics or methods you will use to provide the appropriate level of assurance that the necessary conditions are in fact being met, that is, that the plan is 'well grounded'? This will constitute your monitoring and evaluation framework for accountability, learning, and continuous improvement.

[Image: Simple calculation]

How do I know if I have 'good' reasons to accept the plan?

Finding reasons or 'warrants' that actions will lead to necessary conditions and necessary conditions to sufficient conditions.

As your plan is written as a proposition and is being evaluated as such, you will need to pay attention to the reasons why each step in the plan should be accepted.

In many cases, this can be achieved through a process of deliberation between subject matter experts and stakeholders, or 'dialectic'. This is the core of argumentation and is about turning facts into evidence.

Facts are conditions that can be established to be true or false, for example, that a certain proportion of participants are satisfied with the program. Whether this provides evidence that the program is good depends on your reasons or warrants for linking facts to claims about the value of the program. It may be more or less warranted to infer that overall satisfaction is one measure of success: more warranted in an education program (alongside many other things, such as educational achievement) than in a program issuing fines to people for speeding.

A large part of the early history of philosophy, from Aristotle onwards, is concerned with logical arguments and the process of argumentation, or turning facts into evidence. More recently, Stephen Toulmin introduced a method for interrogating arguments that can be used, where necessary, to justify the 'reasons' that support your plan.

Many of the reasons that are contested and need further exploration involve claims about a link between cause and effect: for example, the claim that an action, combined with an assumption, will lead to a necessary condition.

The philosophy of causality also has a long history, and there is no single settled understanding of what a 'cause' is. Propositional Evaluation builds on a configurational theory of causality focused on manipulable causes, i.e. things you can do something about. This uses the concepts of necessary and sufficient conditions and is based on the work of John Mackie.

The key point about causality for Propositional Evaluation is that we are trying to establish that this plan will be sufficient in this case. We are not trying to establish stable cause-and-effect relationships between parts of the plan that are 'transfactual', or will hold across time and space, as is the aim in much experimental or realist evaluation. This approach is more aligned with systems thinking in evaluation, where cause and effect are not necessarily repeatable.


So it makes sense on paper, what about in reality?

Once you have a logical plan, you will need to give it flesh. This means securing the resources to put the plan into action. This part of the policy and program process requires more than a logical plan: while the plan should cover any arguments about its inherent value, it will not settle political questions about where money should be spent.


The next major phase in Propositional Evaluation and Outcomes Assurance is working out whether the plan makes sense in reality.

In the first phase, we just wanted to make sure the proposition was valid. But it is highly likely that some of your outputs or necessary conditions will not be delivered perfectly, and some of your assumptions will not hold. In other words, the plan is not 'well grounded' in reality. You may even find that what seemed logical at the time was based on an incorrect understanding of the nature of the problem or of how change happens in the world.


So you now need to ground the ex-ante Propositional Design Logic in empirical evidence. You must track whether each condition in the diagram is being brought about, to what extent and for whom, as well as whether the assumptions are holding as you go, ex-itinere.

You can also use data analysis to determine, after the fact (ex-post), whether all your conditions were in fact necessary and which combination was actually sufficient.


There are numerous paradigms and research methods that can help with this step, from quasi-experimental designs to participatory methods. Many methods are designed to establish the extent to which one of the premises or conditions in the proposition has actually been realised.


Qualitative Comparative Analysis (QCA) is a very useful method for testing the whole proposition. Other methods, such as Structural Equation Modelling and Bayesian causal models, can also provide insights into the overall logic of the proposition or plan.
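
For a sense of what such a configurational, ex-post check involves, here is a deliberately simple sketch in Python using made-up cases. It is not QCA itself (which adds calibration, consistency and coverage measures), but it shows the underlying question: which combinations of conditions were sufficient for the outcome, and which conditions were present whenever the outcome occurred, in the observed cases.

# A rough, hypothetical illustration of the configurational idea behind an ex-post check.
# The cases and condition names are invented; real QCA is considerably richer.
cases = [
    # (training_completed, employers_committed, labour_demand_stable) -> outcome_achieved
    ((1, 1, 1), 1),
    ((1, 1, 0), 0),
    ((1, 0, 1), 0),
    ((0, 1, 1), 0),
    ((1, 1, 1), 1),
]

# Which configurations of conditions were always followed by the outcome (sufficient in these cases)?
outcomes_by_config = {}
for config, outcome in cases:
    outcomes_by_config.setdefault(config, []).append(outcome)

sufficient = [c for c, results in outcomes_by_config.items() if all(results)]
print(sufficient)  # [(1, 1, 1)]: only the full configuration was sufficient in this toy data

# Which conditions were present in every case where the outcome occurred (candidate necessary conditions)?
achieved = [config for config, outcome in cases if outcome]
necessary = [i for i in range(3) if all(config[i] for config in achieved)]
print(necessary)  # [0, 1, 2]: all three conditions were present whenever the outcome occurred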

The objective in Propositional Evaluation is to make the plan work, or if it can't work, to replace it with a better plan. Sometimes we might learn things, or happen to develop and test 'transfactual' theories about how people behave in the world. These might have broader relevance and should be communicated to the sector in which you work and the broader academic literature. But this is a by-product, not a core objective of Propositional Evaluation which remains firmly focused on making valuable changes in the world.


How much evidence do I need?

Evidence-based policy is all about having good reasons to support a current or proposed course of action.

Reasons may be theories about the world, but they are more often facts of experience. In some rare instances your plan may be considered evidence-based from one theoretical perspective but not another; in these cases, you may have to conduct more research or test competing theories directly in your evaluation. We tend to find, however, that there is usually less argument about theories and more consensus about what any particular plan is likely to achieve. Theory, while crucial to making sense of the world and planning action, is less often in dispute when making plans with specific objectives than when making broad generalisations about why the world is the way it is or what we should do about it.

It is important to remember that reasons or warrants turn a fact into evidence for something. A cake may have lots of calories; that is a fact. Eating the whole cake may be a bad idea because lots of calories are bad for my health, while in another circumstance it might be a good idea. Eating the cake is not necessarily a good idea or a bad idea: it depends on the circumstances. The same is true for programs. They cannot a priori be considered evidence-based or good irrespective of the circumstances. In addition, it is often very hard to define the causal power of a program, i.e. what we must do and must avoid in order to replicate the success of a previous iteration of the program.


See the blogs below for more descriptions of the importance of context and time in developing evidence-based policy.
