What is Propositional Evaluation?
Testing or evaluating whether propositions are logical
"Evaluation is the process of determining the merit, worth or significance of something" - Michael Scriven
Propositional Evaluation treats a policy, strategy, program, intervention, initiative or action as a proposition about a course of action. A proposition can be evaluated.
Specifically, a proposition can be rendered into a series of premises and a conclusion: that if we do certain things, and make certain assumptions about the system, we can reasonably expect certain outcomes.
A proposition will be ‘sound’ if it is ‘valid’ and ‘well grounded’. It will be valid if it makes sense on paper (with a deductively valid or 'truth preserving' argument structure) and well grounded if it makes sense in reality (using research methods to establish the extent to which the premises in the argument are 'true', i.e. the extent to which they were achieved in 'reality'). A proposition is ‘good’ if it could, does, or did obtain something of value without unduly affecting other things of value.
While it sounds technical it is very simple. Propositional Evaluation treats programs as propositions or plans. A good plan will leverage what we think we know about the world (including scientific theories about how people behave, and society works), but a plan itself is not a theory or hypothesis to be tested.
Many of the causes of poor-quality, expensive, unhelpful or culturally inappropriate evaluation stem from a failure to appreciate the difference between science, research and understanding the world on the one hand, and engineering, evaluation and getting things done in the world on the other.
We must avoid overextended theory and underdeveloped logic. We must avoid theories that apply everywhere in general and nowhere in particular. We must stop experimenting on people and treat them as co-creators of propositions for public good.
Programs or plans are vehicles for theory, they may be delivery mechanisms imbued with scientific theory. But to treat the vehicle itself as a theory confuses the difference between research and evaluation. It may be considered harmless that we use lowercase 't' when we talk of program theory. But in using theory in this way we are shooting ourselves in the foot for the modest benefit of making plans sound more 'sophisticated'. Doing so leads people to think that naturally, evaluation should be about testing theories. And we wonder why people are obsessed with experimental evaluation!? We should also consider the ethics of providing a client with a 'theory of change' or program logic that provides a false sense of certainty that a particular plan or course of action is likely to generate a certain outcome. When programs fail, it is most often because they were logically invalid or poorly implemented - not because of a failure of the theory. And as stewards of the public good evaluators should not be complicit in faulty logic.
Propositional Evaluation has adapted program logic into something explicitly logical and useful for evaluating a plan to achieve some outcome at any stage of its design or delivery. Simply, a one-page diagram that shows the conditions (or outputs) intended to result from your actions which, if all are achieved, and if certain other conditions (or assumptions) hold, would logically, or almost certainly, bring about the desired new condition (or outcome). Of course, there is uncertainty everywhere, and risks to the realisation of certain conditions abound. Propositions may be valid, but to be sound they must also be well grounded in reality, in this time and space. Testing the well-groundedness of conditions in the Propositional (program) Design Logic (PDL) becomes the major focus of evaluation once a program is actively running. Identifying the various risks to failure and managing them is a key task for useful evaluation, and the focus of Outcomes Assurance.
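To make this concrete, here is a minimal sketch, in Python, of how a PDL might be represented for evaluation purposes. The condition names, the 'grounded' flag and the structure are illustrative assumptions, not part of the method itself; the point is simply that a proposition is a set of conditions claimed to be jointly sufficient for an outcome, and that ungrounded conditions are the risks to manage.

```python
# A minimal, hypothetical sketch of a Propositional (program) Design Logic (PDL).
# The condition names and the 'grounded' flag are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    kind: str       # "output" (to result from our actions) or "assumption" (held by the context)
    grounded: bool  # evidence so far that the condition holds 'in reality'

conditions = [
    Condition("Training delivered to all frontline staff", "output", True),
    Condition("Staff apply the new procedure in daily work", "output", False),
    Condition("Demand for the service remains stable", "assumption", True),
]
outcome = "Waiting times fall below the agreed threshold"

# The proposition claims the conditions are jointly sufficient for the outcome,
# so any ungrounded condition is a risk for Outcomes Assurance to manage.
risks = [c.name for c in conditions if not c.grounded]
if risks:
    print("Not yet well grounded; risks to manage:", risks)
else:
    print("All conditions hold, so we expect:", outcome)
```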
The theory of Propositional Evaluation
Shadish, Cook and Leviton in Foundations of Program Evaluation proposed five key elements for any evaluation theory, to which we add a sixth: a theory of error. This framework can be used to situate Propositional Evaluation in the evaluation theory literature.
Theory of social programming
Social programs are intentional or planned actions or interventions designed to change a system from one state to another more desirable state. The systems into which these programs intervene are not only open systems with many interdependent parts, they are inhabited by adaptive human beings who make decisions and change their behaviour.
Programs often have lofty goals but are quite limited in what they can realistically achieve, given the innumerable factors that maintain a system in equilibrium or push it in other directions. It is crucial to be clear on what the program must be sufficient for achieving. This is different from what it might logically contribute towards: often a higher-order vision that justifies the need for the program. This follows the advice of Stephen Covey to focus on the 'circle of influence' so that it may expand towards your 'circle of concern'. Or as Paul Kelly sang, 'from small things big things grow'.
Theory of knowledge
Propositional Evaluation treats knowledge as a claim about action supported by evidence. A logical claim can be set out in an argument structure.
Praxeology
Propositional Evaluation is concerned with the study of purposeful human action: that is, with reasoned and practical action rather than the development and testing of theories and the accumulation of knowledge based on an epistemology that assumes a certain ontology.
Epistemology
Propositional Evaluation seeks to create knowledge using a deductive form of argument structure based on Aristotle's enthymeme. Unlike other types of deductive logic, the conclusions of an enthymeme are probable rather than certain, and some assumptions remain unstated.
This knowledge is considered to have a very short 'shelf life'. Propositional Evaluation is not seeking knowledge that will hold across time and space. Internal validity (are we responsible for the changes?) and external validity (will it happen if we do it again?) are less important in this approach than local validity (will it work, or does it work, here and now?).
Knowledge is to be based on valid and well-grounded claims about the conditions that are necessary for a particular intervention to be sufficient to bring about a valuable change in the state of a system.
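As a toy illustration of the enthymeme idea above (the figures and the simple multiplication of confidences are my own assumptions, not a calculus the approach prescribes), the conclusion is only as probable as its premises, including any assumption that was unstated until the evaluation made it explicit:

```python
# A toy illustration (my assumption, not a calculus the approach prescribes):
# the conclusion of an enthymeme is only as probable as its premises, including
# any assumption that was unstated until the evaluation made it explicit.
stated_premises = {
    "Outputs will be delivered as planned": 0.9,
    "Participants will respond as expected": 0.7,
}
unstated_assumption = {"Funding continues for the full period": 0.8}

premises = {**stated_premises, **unstated_assumption}

# Treating the premises (simplistically) as independent, the conclusion
# is probable rather than certain.
confidence = 1.0
for probability in premises.values():
    confidence *= probability

print(f"Rough confidence in the conclusion: {confidence:.2f}")  # about 0.50
```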
Ontology
As a pragmatic approach, there is little that is definitive to say on this matter. The ontology of Propositional Evaluation is probably best understood by how it deals with the nature of 'causality'. There are many theories of causality, and one that works well for describing programs in society is the 'configurational' theory of causality built on INUS conditions. In short, Propositional Evaluation expects a program to comprise conditions (outputs and assumptions) that are necessary but not individually sufficient for an outcome; when combined in the right context (and sometimes in a specific order) they are jointly sufficient for an outcome that is generated by, or emerges out of, the configuration of conditions (outputs and assumptions).
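A hedged sketch of what an INUS-style, configurational cause looks like in practice (the condition names are invented): no single output or assumption is sufficient on its own, but the full configuration is.

```python
# Illustrative sketch (condition names invented) of a configurational, INUS-style
# cause: each condition is necessary but individually insufficient, and only the
# full configuration of outputs and assumptions, in the right context, is sufficient.
def outcome_emerges(output_delivered: bool, behaviour_changes: bool, context_holds: bool) -> bool:
    return output_delivered and behaviour_changes and context_holds

# No single condition produces the outcome on its own...
assert not outcome_emerges(True, False, False)
assert not outcome_emerges(False, True, False)
assert not outcome_emerges(False, False, True)
# ...but the configuration does.
assert outcome_emerges(True, True, True)
print("Only the full configuration of conditions is sufficient for the outcome.")
```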
Theory of valuing
In Propositional Evaluation, a valuable intervention is one that achieves what it sets out to achieve.
Axiology
Propositional Evaluation does not have a set of values that are implicit in human action. To know what the 'good' is before setting out a means of achieving it is critical. But it is not a task for Propositional Evaluation to determine what we should value. This is considered a task for the democratic process that includes civil society and diverse groups of people, including minorities.
This means Propositional Evaluation attempts to put into action the values of those designing an intervention. It does not prescribe or discuss any particular system of values or axiology. It relies on the democratic process for the inclusion and balancing of diverse or divergent values in the logic of a proposed course of action.
Theory of use
A Propositional Evaluation must be immediately and instrumentally useful. It will be conceptually useful for isolating the barriers and enablers to success that are often useful in other times and places - but it is focused on the here and now.
Theory of practice
Propositional Evaluation is a two-step process. The first step is asking lots of questions of subject matter experts and program designers, and probing responses to make explicit what is implicit and to identify any gaps in the validity of the proposition for action. The output of this process is a Propositional Design Logic (PDL) that makes sense on paper. Sometimes a program has to make a lot of assumptions in order to make sense 'on paper' - that is OK; at least now they are explicit.
The second step is about testing if the logic is well-grounded (or makes sense 'in reality'). This involves developing cost-effective means of generating facts that will provide evidence of sufficient depth of insight and granularity to improve decision-making about whether to scale up, scale down, modify, expand or terminate a program.
Theory of error
This is where it can all go wrong. Feyerabend argued that all systems of thought require a theory of error. Propositional Evaluation will run into problems when mistakes are made about the purpose and audience for any given evaluation.
For practitioners, failing to understand who is actually making decisions based on an evaluation, their logic and the parts of the program they are most interested in can lead to a poor choice of propositions to test with empirical data. Also, if program designers fail to obtain sufficient input from stakeholders on what they actually value or want from the program then any given evaluation may not meet their needs because the proposition does not meet their needs.
It will not be unusual that a program is not a fully valid and well-grounded plan. The domain of Propositional Evaluation is rational decision-making and cost-effective evaluation. It does not seek to supplant democracy with any kind of technocracy. Propositional Evaluation provides a tool to help people achieve what they set out to achieve. It does not guarantee that they will use that tool as intended, seeking to persuade with logical propositions rather than rhetorical devices and logical fallacies. Providing a means of setting out and spotting flaws in logic or an absence of values may be the most important use of Propositional Evaluation in the hands of citizens.
Comparison with other forms of evaluation
Different approaches to evaluation start with different ideas about what it means to specify a program. The most common approach is with a theory of change or program logic. These set out actions and intended outcomes. They are usually fairly vague about actual causal mechanisms or processes (unless you have a realist involved). The most enduring approach to determining the value of a program is to measure its outcomes or establish causal relations. The table above compares three types of evaluation with Propositional Evaluation on core concerns for any theory or approach to evaluation.
Experimental evaluation
This is the original and, for some, still the best form of evaluation. The goal is to warrant a program or intervention as effective or 'evidence-based' as a result of comparing observations of what happens for people who do, and do not, experience it. The most common way to measure outcomes or test causal hypotheses is through some form of experimental design, where the aim is to control for context. That is, to discover the ‘true’ value of a program irrespective of context. A series of program evaluations may give a nod or tip of the hat to ‘context’ when making decisions about implementation, but does not seriously consider context as an important causal factor.
Realist evaluation
Realist evaluation grew out of dissatisfaction with experimental evaluation's failure to be specific about what 'it' is in any program that actually causes change, and how to deal with the fact that different people respond differently.
The whole program is not the unit of analysis in a realist evaluation. Realists are instead focused on the effects of mechanisms in context: the ‘transfactual’ causal mechanisms that are thought to provide the causal power within a program. To realists, programs have value when they help people make better decisions. The realists are right, but their focus on scientific causal analysis and consistent (if not constant) conjunctions of mechanisms in context is less suited to evaluating whether a whole program is a good idea, and by extension, less suited to thinking about the delivery of whole programs in complex adaptive systems.
Systems evaluation
Systems evaluation seeks to determine the value or health of a system, or the value of an intervention into a system. In either case, the first step is specifying the system. The next common step in evaluating the system is to seek to understand the relationships between its parts. Here the whole is greater than the sum of its parts, and the strength of relations between individual parts is what determines how this whole emerges. In this way systems evaluation is usually not concerned with establishing consistent or constant conjunctions of cause and effect, or with knowledge claims with either internal or external validity. It is focused on changing conditions here and now and less on cause and effect.
Systems evaluation is suited to complex programs and contexts: where categorising a problem and ‘solving for x’ does not work (e.g. when working out how to raise a child, run a nation, or deal with poverty, homelessness, crime, etc.), and where people make and change their decisions based on changing knowledge and the actions of others.
Systems evaluation prefers accurate real-time data to more precise historical data because it is focused on better decisions that suit this time and place, not historical determinations of merit, worth or significance in other specific (and never to be repeated) times and places. Here context is king; it is not an unwelcome intruder or co-conspirator as it is in experimental or realist evaluation. Some systems modelling seeks to forecast, but is limited by ‘bounded rationality' as well as the nature of complexity. Systems evaluation is best suited to problem definition. It may help us work out what is going on and what needs to change, but is less focused on identifying what is likely to work here and now.
Propositional Evaluation
Propositional Evaluation takes the focus of experimental evaluation on programs, the realist focus on mechanisms in context, and the systems focus on constant monitoring and adjustment in the here and now. It addresses the weakness of experimental evaluation's focus on identifying stable cause and effect relationships in complex systems. It addresses realist evaluation's practical limitations of how many mechanisms and contexts we can actually understand. It provides a means for systems evaluation to move from problem definition to the evaluation of possible interventions into a system.
Validity
A core idea separating different theories of evaluation
Experimental evaluation is strong on internal validity but weak on external validity. Realist evaluation is strong on external validity but weak on internal validity. Systems evaluation is weak on both forms of validity as they are usually considered in program evaluation. Or rather, as systems are dynamic, the pursuit of 'valid' knowledge about stable cause and effect relationships is not a priority. It is never simply the case that 'we did it' - the total system was the cause - and it will never repeat, because the system will always be different. As Heraclitus said, 'no man steps in the same river twice, for he is not the same man, and it is not the same river'.
Propositional Evaluation is concerned with the logical validity of claims about a proposed course of action. This is a local form of validity rather than one focused on knowledge claims purporting to hold across time and space. It is focused on whether the premises and conclusions, or inputs, outputs and outcomes, of a proposed course of action can be arranged in a deductively valid proposition. It is accepted that in reality not every premise will be simply true or false; premises may be probable, partial, and different for different people. Assessing whether the argument is valid is just the first step; assessing whether it is well grounded in reality is the crucial second step.
Internal Validity
Internal validity in evaluation is about whether we can attribute an outcome to our actions.
External Validity
External validity in evaluation is about whether we can generalise anything about our actions and outcomes to new situations.
Logical validity
Logical validity in Propositional Evaluation is about whether a proposition should be accepted as making sense 'on paper' before it is tested in reality. More specifically, a proposition in the form of a syllogism is logically valid when the conclusion is guaranteed to be true if all the premises are true - i.e. if the argument is 'truth preserving'.
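For illustration only (this is a toy check, not the author's tooling), 'truth preserving' can be tested mechanically by enumerating every possible assignment of truth values and confirming there is no case where all the premises are true and the conclusion is false:

```python
# A toy validity check (not the author's tooling): an argument form is
# 'truth preserving' if there is no assignment of truth values in which
# every premise is true and the conclusion is false.
from itertools import product

def is_valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample found: true premises, false conclusion
    return True

# Example in the spirit of a PDL: "if the outputs are achieved and the
# assumptions hold, the outcome follows; the outputs are achieved and the
# assumptions hold; therefore the outcome follows" (modus ponens).
premises = [
    lambda w: (not (w["outputs"] and w["assumptions"])) or w["outcome"],
    lambda w: w["outputs"] and w["assumptions"],
]
conclusion = lambda w: w["outcome"]

print(is_valid(premises, conclusion, ["outputs", "assumptions", "outcome"]))  # True
```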
Should we aspire to rational or reasonable action?
This website sets out an explicitly logical approach to sound reasoning about the value of a course of action. The distinction between reasonable and rational might seem trivial. Reasonable thinking is inclusive and concerned with standards of behaviour or values as well as logic, whereas 'rational' is more closely associated with calculated self-interest. Another way to think about it is that reason is more humble, less certain and, as Julia Galef writes, more focused on a scout mindset that assumes any knowledge is imperfect. This is contrasted with the soldier mindset of defending one's own ideas. Soldiers debate and try to win an argument, while scouts engage in 'dialectic' and 'unmotivated reasoning' with others in an effort to find a better answer.
There is a good reason that ARTD's vision is for a more 'thoughtful' world - a world of both logic and ethics grounded in recognition of the great uncertainty of human action and our limited rationality to know it and plan for it.
The gulf between scientific theory and practical action
More science, more ethics, or more logic?
John Dewey the pragmatist may be right when he said 'a problem well defined is half solved' but that second half really matters! There is a crucial difference between defining a problem and solving it. Dictators and despots throughout history have provided compelling arguments as to why certain people were in a poor condition but often failed spectacularly to provide a sound proposition for action to change this state of affairs.
The history of Marxism in practice is a good example of the gulf between understanding the causes of a problem, and good intentions related to human emancipation or empowerment on the one hand, and developing practical solutions on the other. Marxism is motivated by an ethical focus on human development and a 'scientific' understanding of the causes of poverty (or at least historical understanding of the development of the political economy as understood in historical materialism). But in every case, in practice, its proposed solutions were ineffective, or catastrophic. Many would say Marx and his proponents failed to understand human psychology and how incentive systems work. But above all, they failed to develop logical propositions for effective action. Marx did react against merely understanding the world (see the famous epitaph above), and attempted a scientific plan of action (as did Stalin and Mao) - but these failed, in part, because they did not provide a coherent plan or a sound proposition.
But this is not just about the past. Our present climate problems illustrate the gulf between what science can tell us and what we want to achieve. This was pointed out expertly by Bret Walker SC in the Murray-Darling Basin Royal Commission, where he noted that science cannot determine how much water to take out of the river; that is a political question that involves balancing various 'goods' or uses for water. The crucial point is that choosing what to do is influenced by ethics (or ideology) and science but cannot be reduced to either of these.
Ethics may help us with 'good' intentions and axiology with thinking about what 'values' to consider in the ‘good’ (or outcomes we should seek to achieve). Science may provide an understanding of the causes of a problem (to the extent possible in complex systems), but intentions and understanding are not synonymous with effective action. Praxeology and pragmatism must consider ethics, values, and science, but are the playing field where practical action lives and dies. This is the focus of Propositional Evaluation and of evaluating the logic of a proposed course of action, before, during, and after its delivery.