From a good idea to assurance that it is a good idea and that outcomes will be delivered
Program funders want assurance that outcomes will be achieved. Program managers need to manage the risk of program failure. The Australian PGPA Act, together with insights from the Australian National Audit Office, sets out clear expectations for performance management of functions and objectives, and for risk identification, mitigation, and management at every stage of a policy or program's lifecycle.
Treating a program as a business proposition, and subjecting its assumptions and plans to the kind of critique a venture capitalist would apply, is crucial to the design of an evidence-based initiative. There has to be a strong business case supporting the idea that whatever is proposed is worth doing. The assumptions that underpin it, and the risks to its achievement, need to be made explicit. Discussion and debate are crucial to good design, but it is sometimes hard to keep track of all the moving parts in an argument. Propositional Program Logic keeps all the core arguments on one page.
Outcomes Assurance puts Propositional Evaluation to work. It provides an end-to-end solution for the design, delivery, monitoring, and evaluation of any policy or program. It is focused on identifying the risks of program failure and on mitigating them, or monitoring the extent to which they materialise.
Using Outcomes Assurance you can build a logical business case or proposition and demonstrate its inherent persuasiveness. It identifies risks and provides a framework for setting out the ways and means by which those risks will be monitored and certain claims evaluated. It provides a method by which outcomes are assured, because risks are identified early, possibly even in the design phase. Once identified, they can be addressed; this may mean terminating a program early if certain assumptions cannot be substantiated, or putting additional measures in place to reduce reliance on shaky assumptions.
Outcomes Assurance's basis in Propositional Evaluation's explicit use of logic sets it apart from other approaches to strategic planning, business case development, performance management, monitoring, or evaluation. In those approaches you often need different people with different expertise who may see the problem differently. You might end up with lists of pros and cons, or an ex-ante cost-benefit analysis, as your key agreed-upon claims. But how do you arrange all of these into an overall claim that stands up to scrutiny?
With Outcomes Assurance using Propositional Evaluation, it is the same strategic process across the program lifecycle. There is no separation of strategy and evaluation. The focus throughout is simply addressing the question, what makes this a good idea? And then setting out a means for gathering evidence along the way that can be used to ensure it really is a good idea and will deliver the intended outcomes.
Different methods will be useful at different times to establish facts about what's happening, but everyone will remain on the same page and be able to see how new facts alter the soundness of the overall claim or proposition, allowing for course corrections informed by the logic of the proposition about this course of action.
Governance and evidence-informed decision-making that is updated as new information comes to light are crucial to eliminating the gap between strategic planning, monitoring, and evaluation. For too long evaluation has been seen as a research or accountability discipline, rather than primarily as a management discipline that draws on research methods. Accountability and learning are important, but Propositional Evaluation and Outcomes Assurance proceed on the basis that evaluation's true value lies in making decisions about the value of a proposed or current course of action.
Managing the risk of policy or program failure
Ten key risks to manage for sound public policy
Outcomes Assurance is focused on ensuring the soundness of a proposition for a current or proposed course of action, and then setting up governance processes to monitor and evaluate the risks that the policy or program will fail.
If your proposition is both 'valid' and 'well grounded' it is 'sound' and receives the Outcomes Assurance stamp of approval for evidence-based or 'sound public policy.'
You can determine if your proposition is sound public policy by identifying, mitigating, or managing risks found in its design or operations.
Design risks relate to things that looked like they made sense on paper but don’t actually make sense (see validity risk, assumption risk, logical risk, theoretical risk, efficiency risk, and value for money risk). This is about whether your proposition is 'valid'.
Operational risks relate to the quality of implementation and changes in the operating context that can overwhelm any value delivered by the proposed course of action (see performance risk, external factor risk, measurement risk). This is about whether your proposition is 'well grounded'.
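The validity/groundedness distinction above is at heart a simple conjunction: a proposition is sound only if it is both valid (no unmitigated design risks) and well grounded (no unmitigated operational risks). A minimal sketch, using illustrative names rather than any official Outcomes Assurance tooling, makes the logic explicit:

```python
# Sketch only: risk names below are illustrative labels drawn from the
# text, not an official taxonomy or toolkit.

DESIGN_RISKS = {"validity", "assumption", "logical", "theoretical",
                "efficiency", "value_for_money"}
OPERATIONAL_RISKS = {"performance", "external_factor", "measurement"}

def is_sound(unmitigated_risks: set) -> bool:
    """A proposition is sound iff it is valid AND well grounded."""
    valid = not (unmitigated_risks & DESIGN_RISKS)          # no design risks
    well_grounded = not (unmitigated_risks & OPERATIONAL_RISKS)  # no ops risks
    return valid and well_grounded

# An unaddressed assumption risk makes the design invalid, so the
# proposition is not sound even if implementation is flawless.
print(is_sound({"assumption"}))  # False
print(is_sound(set()))           # True
```

The point of the sketch is that soundness fails on either branch: a perfectly implemented program with a flawed design is no sounder than a well-designed program that is poorly delivered.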
Managing the 10 risks
In the section below we set out 10 key risks to failure in the order in which they tend to materialise. In practice, design risks and operational risks materialise at different points in time, not in a straight chronological sequence. For example, a design flaw that is not evaluated logically may only show up in the data after some years, when intended outcomes fail to materialise despite the program being implemented as intended.
The first set of risks is most important, as these mediate whether success happens at all; the next set moderates the extent to which we are successful.
Five core risks that mediate success (in order in which they are often encountered)
1. It doesn't make sense on paper: heroic assumptions are being made, and our efforts could not reasonably be expected to achieve what is being claimed – validity risk
2. It makes sense on paper, but we didn’t do what we said we would do, so it doesn’t work – performance risk
3. It makes sense on paper, we did what we said we would, but assumptions don’t hold, so it doesn’t work – assumption risk
4. It makes sense on paper, we do what we said we would do, assumptions hold, but necessary conditions (or outputs) don’t materialise, so it doesn’t work – theoretical risk (i.e. the theories and other reasons that provide the warrants to infer that one or more actions will lead to a necessary condition or output turn out to be wrong)
5. It makes sense on paper, we do what we said we would do, assumptions hold, necessary conditions materialise, but the sufficient condition (or outcomes) does not follow, so it doesn’t work – logical risk (i.e. the risk that the necessary conditions once achieved are not actually sufficient for the sufficient condition to be achieved)
Five core risks that moderate the extent of success. That is, it might work, but is not the best version of itself or the best option, or we don't really know its value.
6. It makes sense on paper, we do what we said we would do, assumptions hold, necessary and sufficient conditions materialise, but longer-term contributory conditions are not significantly impacted – external factor risk
7. The program is generating the intended outcomes, but the monitoring system is not picking these up, so the monitoring system provides an invalid indicator of progress – measurement risk. This risk materialises when a program is cancelled because outcomes could not be measured (risking 'throwing the baby out with the bathwater'), or is continued on the basis of invalid KPIs.
8. The program is generating the intended outcomes, but it turns out that not every necessary condition (or output) is actually necessary – we may see outputs even when we do not perform certain actions, or outcomes without certain outputs, meaning that some activities or outputs along the way are unnecessary and the program is inefficient – efficiency risk.
9. The program is generating the intended outcomes, but it is very expensive, creating an opportunity cost when some other initiative may be more valuable – value for money risk. This risk occurs when there is not enough attention to alternative options for achieving a public policy goal with other mechanisms in the policymaker's tool kit.
10. The program is generating the intended outcomes, but its design did not sufficiently identify how it would impact other parts of the system, or whether the problems it addresses are actually symptoms of deeper 'root causes' – systems thinking risk. This occurs when the problem definition process is incomplete.
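The ten risks above can be summarised as a simple ordered registry, split into those that mediate whether success happens at all (risks 1–5) and those that moderate its extent (risks 6–10). The sketch below is purely illustrative; the names and the helper function are assumptions made for the example, not part of any Outcomes Assurance software:

```python
# Illustrative registry of the ten risks described in the text.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Risk:
    order: int            # order in which the risk tends to materialise
    name: str
    mediates_success: bool  # True: can prevent success entirely;
                            # False: moderates the extent of success

RISKS = [
    Risk(1, "validity", True),
    Risk(2, "performance", True),
    Risk(3, "assumption", True),
    Risk(4, "theoretical", True),
    Risk(5, "logical", True),
    Risk(6, "external factor", False),
    Risk(7, "measurement", False),
    Risk(8, "efficiency", False),
    Risk(9, "value for money", False),
    Risk(10, "systems thinking", False),
]

def first_blocking_risk(materialised: set) -> Optional[str]:
    """Return the earliest mediating risk that has materialised, if any.

    A mediating risk blocks success outright, so the earliest one in the
    chain is the first thing governance should address.
    """
    for risk in RISKS:
        if risk.mediates_success and risk.name in materialised:
            return risk.name
    return None
```

For example, if both an assumption risk and a logical risk have materialised, the assumption risk comes earlier in the chain and is the first to address; a measurement risk alone, being a moderating risk, does not block success outright.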
This list does not include risks such as fraud or sovereign risk. It does however cover the majority of the risks that a public policy professional will need to consider when developing evidence-based policies and programs.
Outcomes Assurance built on Propositional Evaluation provides a complete framework and method for setting out the design logic, identifying risks, and putting into place strategies to monitor the likelihood and consequences of each risk. This is achieved with a monitoring and evaluation framework focused on adaptive management and continuous improvement in addition to 'lessons learned' and accountability. It provides decision-makers with transparent assurances that outcomes have the best possible chance of being obtained at any point in the policy or program life-cycle.
Outcomes Assurance and Best Practice Regulation
Alignment with advice from PM&C for Regulation Impact Analysis
The Australian Department of the Prime Minister and Cabinet (PM&C) Office of Best Practice Regulation (OBPR) sets out seven questions to be answered when developing a Regulatory Impact Statement (RIS):
1. What is the policy problem you are trying to solve?
2. Why is government action needed?
3. What policy options are you considering?
4. What is the likely net benefit of each option?
5. Who will you consult and how will you consult them?
6. What is the best option from those you have considered?
7. How will you implement and evaluate your chosen option?
Outcomes Assurance provides a systematic and logical means of answering questions 4 and 7. Associated practices such as Root Cause Analysis (RCA) may also assist in determining how to address a problem and in shortlisting options that may address its root causes. This can assist with selecting and evaluating the options likely to have the greatest return on investment. Formal cost-benefit analysis, where required, will be greatly improved by grounding it in these disciplined approaches to considering options.
Assurance and the business case
Outcomes Assurance using Propositional Evaluation provides a detailed approach to developing a business case and providing assurance for the intended outcomes of a public policy or program.
Professor Kenneth Wiltshire AO of the University of Queensland Business School identifies ten elements of a public policy business case. Outcomes Assurance is relevant at all stages but is particularly concerned with steps 5 and 6, 'Brainstorm Alternatives' (to surface risks) and 'Design Pathway'. As a tool, Outcomes Assurance is most important for ensuring the design pathway is logical, sound, and evidence-based.
1. Establish Need: Identify a demonstrable need for the policy, based on hard evidence and consultation with all the stakeholders involved, particularly interest groups who will be affected. (‘Hard evidence’ in this context means both quantifying tangible and intangible knowledge, for instance, the actual condition of a road as well as people’s view of that condition so as to identify any perception gaps).
2. Set Objectives: Outline the public interest parameters of the proposed policy and clearly establish its objectives. For example, interpreting public interest as ‘the greatest good for the greatest number’ or ‘helping those who can’t help themselves’.
3. Identify Options: Identify alternative approaches to the design of the policy, preferably with international comparisons where feasible. Engage in realistic costings of key alternative approaches.
4. Consider Mechanisms: Consider implementation choices along a full spectrum from incentives to coercion.
5. Brainstorm Alternatives: Consider the pros and cons of each option and mechanism. Subject all key alternatives to a rigorous cost-benefit analysis. For major policy initiatives (over $100 million), require a Productivity Commission analysis.
6. Design Pathway: Develop a complete policy design framework including principles, goals, delivery mechanisms, program or project management structure, the implementation process and phases, performance measures, ongoing evaluation mechanisms and reporting requirements, oversight and audit arrangements, and a review process ideally with a sunset clause.
7. Consult Further: Undertake further consultation with key affected stakeholders of the policy initiative.
8. Publish Proposals: Produce a Green and then a White paper for public feedback and final consultation purposes and to explain complex issues and processes.
9. Introduce Legislation: Develop legislation and allow for comprehensive parliamentary debate especially in committee, and also intergovernmental discussion where necessary.
10. Communicate Decision: Design and implement a clear, simple, and inexpensive communication strategy regarding the new policy initiative, based on information, not propaganda.
Source: Institute of Public Administration Australia (IPAA), Public Policy Drift – Why governments must replace ‘policy on the run’ and ‘policy by fiat’ with a ‘business case’ approach to regain public confidence, April 2012, page viii.
Outcomes Assurance is a tool to implement the theory of Propositional Evaluation for the higher purpose of developing truly evidence-based policy that is fit for purpose in a complex world where past performance is no guarantee of future outcomes.
At the end of this section is a link to a short document on developing an evidence-based business case for a new policy proposal. The proposition set out in the business case must be turned into a living document or Outcomes Assurance Framework so assurance can be provided and risks managed.
Developing a sound business case and ensuring that it works is what Propositional Evaluation and Outcomes Assurance are all about.