Introduction
Let’s say your intervention intends to change a certain situation. You do your intervention and the situation changes. The conclusion is obvious, isn’t it? Your intervention worked, right? No. Let’s assume that you really did measure the change properly, using appropriate indicators and methodologies, and that the validity and reliability of the measurement are strong. Does this make a convincing story that your intervention worked? It still doesn’t.
You did assess the change (the relevant box in your Theory of Change diagram), but you did not assess whether the change did indeed happen because of your intervention (the arrow, or arrows, connecting your intervention to this change). The change may have happened for entirely different reasons. Or it may have happened because of a mixture of your intervention and other, unknown reasons.
This is the problem of causality: is there a “cause and effect” relation between your intervention and the change? Can the change be attributed to your intervention? Did your intervention contribute to the change?
This expert lens will explain the most relevant concepts and describe different approaches to assessing causal relations. Some of these approaches are elaborate and complicated. Therefore, the final section will propose some rather simple ways to incorporate the assessment of causality in your regular progress monitoring.
The Concepts
This section explains some key terms. You can skip it and continue to the more practical section about approaches for analysis and return to this section for reference.
Causality
A good starting source to explore concepts of causality is Befani 2012, Models of causality and causal inference.
Causality means that something happens as the result of something else. A desired change happens because of your intervention. The whole idea of developing a Theory of Change is to make explicit which causal relations exist in relation to the changes you are looking for. In fact, every arrow in a Theory of Change diagram is (or should be) a causal relation. And for each arrow one can ask the question: is it indeed true that A is a cause of B?
Sometimes, causality is very straightforward, for example at the level of output results. You offer training to 40 people and, as a result, all of them complete the training successfully. There is no need to reflect deeply on the causal relation between “offering the training” and “people who completed the training”, although even at this level one could think of additional factors that helped to cause the result, such as the willingness of the participants, the motivation by their boss, and so on.
As you move further in the Theory of Change (often from bottom to top or from left to right), the causal relations often become less straightforward. For example, the people who finished the training increased their knowledge (causal relation with completing the training still rather clear), and changed their practices (causal relation not immediately clear; they could have changed their practices because their boss said so, or because everyone else around them is doing so).
Forward and Backward Causality
Some of these ideas are derived from Goertz and Mahoney 2012, A tale of two cultures.
When analysing causality, it is sometimes useful to distinguish forward causality from backward causality.
- Forward causality, or effects-of-causes: we start at the intervention and analyse whatever happened as a result of it. We look at the effects that happened as a result of the cause (the intervention).
- Backward causality, or causes-of-effects: we start at a given change and look backwards to find what caused it. We have an effect and try to find the cause. The cause, or one of the causes, may be the intervention.
Forward causality is often how practitioners reflect on their interventions: their intervention is the starting point, and it must have been the cause of many (usually good) things. This way of looking at causality has the disadvantage that the intervention is easily given too much credit for changes. Backward causality is more open with regard to the intervention, as it is quite possible that the intervention does not appear as one of the causes of the change that is analysed. And if that change was part of the intended changes described in the Theory of Change, such a conclusion (that the intervention does not appear as one of its causes) means that a causal assumption of the Theory of Change does not hold true. This way of analysing and reflecting is often more difficult for practitioners (who have their intervention at the forefront of their mind) and is more common among researchers and evaluators.
Contribution and Attribution
The term attribution relates to backward causality: we observe a change and we attribute it to an intervention, meaning we hold the intervention ‘responsible’ for having caused the change.
Like attribution, the term contribution can also relate to backward causality: we have a change and we try to find out what or who contributed to it. However, contribution can also be used in a forward sense: we have the intervention and we try to find to what change the intervention has contributed.
Attribution often has an implicit undertone that the intervention has been the sole cause of the change. And because in social interventions this is almost never the case, many practitioners and evaluators prefer to talk about contribution rather than attribution. The use of the word contribution usually assumes that there is more than one contribution to a change, and it communicates a more humble and realistic view of the intervention, which is at most one of the causes of a desired change, even if it is a necessary one.
Even though the connotation of attribution is ‘intervention as the single cause’ and the connotation of contribution is ‘intervention as one of the causes’, this distinction is not absolute. It is also possible to redefine the change in such a way that the specific part of the change that can be attributed to the intervention is identified, while still acknowledging that this part belongs to a larger change for which there are other contributing causes as well.
Attribution like this can be done qualitatively, where one makes a qualitative assessment of the specific aspect of the change for which the intervention is held responsible. But it can also be done quantitatively, when the ‘net change’ is determined (for example by comparing the differences between an intervention group and a control group, and between the situation before and after). This ‘net change’ (which is normally much smaller than the overall change) is then attributed to the intervention, while other contributing actors and factors have also helped to cause the overall change. For each of those, their own ‘net change’ could be determined, and all the ‘net changes’ together add up to the overall change.
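As a minimal, hypothetical illustration of the quantitative route (all numbers and group names are made up, not taken from any particular study), the ‘net change’ can be calculated as a difference-in-differences between an intervention group and a comparison group:

```python
# Hypothetical before/after scores for an intervention group and a comparison group.
intervention_before, intervention_after = 42.0, 58.0
comparison_before, comparison_after = 41.0, 49.0

overall_change = intervention_after - intervention_before    # 16.0
background_change = comparison_after - comparison_before     # 8.0: what happened anyway

# Difference-in-differences: the part of the overall change attributed
# to the intervention, net of the change caused by other factors.
net_change = overall_change - background_change              # 8.0

print(f"Overall change: {overall_change}")
print(f"Change attributable to other factors: {background_change}")
print(f"Net change attributed to the intervention: {net_change}")
```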
More about contribution and attribution
- BetterEvaluation, Understand Causes.
- J. Mayne, Making causal claims.
Necessity, Sufficiency and INUS Causes
Necessary: without the intervention the change could not have happened.
Sufficient: the presence of the intervention was enough to realise the change.
In analysing causal relations between an intervention and a change, the terms necessity and sufficiency are often used. You could say an intervention is the cause of a change if the intervention is both necessary and sufficient.
However, in real life, interventions are rarely both necessary and sufficient. Often the intervention is not sufficient, and the presence of other factors is needed. And often the intervention is not necessary either, in the sense that there are also other possible pathways toward the same change.
Does this then mean the intervention did not make a difference? Not necessarily. Sometimes the intervention is part of a causal package that caused the change. This means the intervention really does make a difference. For these situations, it is relevant to understand so-called INUS causes.
An INUS cause means that the intervention is an insufficient (I) but necessary (N) element of a package that is itself unnecessary (U) but sufficient (S) to cause the change. In practical terms (see the sketch after this list), this means that:
- the intervention alone is not sufficient to cause the change;
- the intervention is needed – along with the other factors – to achieve the change; and
- the change could also be reached through another package of contributing factors, but
- this package of factors is sufficient to (jointly) cause the change.
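A minimal boolean sketch of this logic, reusing the hypothetical training example from above (the factor names are ours, chosen for illustration only):

```python
# Hypothetical INUS illustration: the training (our intervention) only produces
# changed practices together with supportive management, and a different package
# (peer pressure plus a new regulation) could produce the same change without it.
def practices_change(training, supportive_boss, peer_pressure, new_regulation):
    package_a = training and supportive_boss        # training is necessary within this package
    package_b = peer_pressure and new_regulation    # alternative package: training not needed
    return package_a or package_b                   # either package is sufficient

print(practices_change(True, False, False, False))   # False: the training alone is insufficient
print(practices_change(True, True, False, False))    # True: training plus supportive boss suffices
print(practices_change(False, False, True, True))    # True: the change can happen without the training
```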
Four Frameworks to Analyse Contribution or Attribution
Different frameworks for analysing cause and effect relations exist. These frameworks are related to various approaches and methods. Some methods combine the measurement of the change with the analysis of the contribution or attribution, while other methods are specifically designed to analyse causality. The four frameworks below are derived from Stern et al. (2012), ‘Broadening the range of designs and methods for impact evaluations’, DFID Working Paper 38.
Regularity Framework
Depends on the frequency with which the cause and the effect occur together. Simply stated: the more often you see a certain change happening in places where your intervention is done, the more likely it is that this has something to do with your intervention. This framework is the basis for statistical approaches, such as regression and correlation analysis, and it needs large numbers of measurements. In regression analysis, one estimates for each variable in a set (of which the intervention is one) how much of the variation in the outcome can be explained by a change in that variable.
- Related methods: regression analysis, multiple regression, structural equation modelling, analysis of covariance, correlation analysis
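A minimal sketch of this regularity logic, using entirely simulated data and ordinary least squares regression (one of the related methods listed above); variable names are hypothetical:

```python
# Across many observed cases, how much of the variation in the outcome is
# associated with the intervention once another contextual factor is controlled for?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
intervention = rng.integers(0, 2, n)       # 1 = case received the intervention
context = rng.normal(size=n)               # some contextual factor
outcome = 0.8 * intervention + 0.5 * context + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([intervention, context]))
model = sm.OLS(outcome, X).fit()

print(model.params)      # estimated association of each variable with the outcome
print(model.rsquared)    # share of outcome variation explained by the model
```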
Counterfactual Framework
Depends on the difference between otherwise identical situations in which only the presence or absence of the intervention differs. Simply stated: if there is a change in a situation with your intervention and no change in a completely similar situation without your intervention, then the change is because of your intervention. Counterfactual analysis is the basis for all experimental and quasi-experimental approaches to impact evaluation.
Experimental approaches can only be used when you are still at the start of the intervention and have the opportunity to randomise the selection of your target groups. For example, suppose you have two hundred potential trainees and can only take in a hundred of them. You could then organise a lottery to decide who is in and who is out, and subsequently track the progress of the trainees as well as that of the hundred who were not selected.
Quasi-experimental approaches are all other methods where a counterfactual situation is estimated, but where there is no random assignment to treatment and control groups.
- Related methods: randomised controlled trials and all quasi-experimental methods.
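A minimal sketch of the lottery example above, with entirely made-up follow-up scores, comparing the randomly selected and non-selected groups:

```python
# Randomly assign 200 hypothetical trainees via a lottery, then compare
# average follow-up scores between the selected and non-selected groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
candidates = np.arange(200)
selected = rng.choice(candidates, size=100, replace=False)   # the lottery
in_training = np.isin(candidates, selected)

# Hypothetical follow-up scores: trainees do somewhat better on average.
scores = rng.normal(loc=50, scale=10, size=200) + 5 * in_training

treated, control = scores[in_training], scores[~in_training]
effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Estimated effect (difference in means): {effect:.1f}, p = {p_value:.3f}")
```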
Multiple Causation Framework
Depends on the combinations of causes that lead to an effect. Simply stated: among a list of actors and factors that could help to create the change, which combinations can do the job? This is the basis for methods in which many cases are compared on the presence or absence of a number of conditions. With the use of logic, conclusions are drawn about which combinations of factors account for the change: combinations that are observed in the cases where the change did happen, but not in the cases where it did not. In this way you can conclude that your intervention (or a specific part of your intervention) is a necessary element of a sufficient package of factors to cause the desired change (i.e. an INUS cause). Qualitative comparative analysis is the best-known method in this context.
- Related methods: qualitative comparative analysis, comparative case studies
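A minimal sketch of this comparative logic, with hypothetical cases and condition names (real qualitative comparative analysis involves more, such as systematic truth-table minimisation):

```python
# Each hypothetical case: presence (1) / absence (0) of three conditions and the outcome.
cases = [
    # (training, supportive_boss, funding, change_happened)
    (1, 1, 0, 1),
    (1, 1, 1, 1),
    (1, 0, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (1, 1, 0, 1),
]

# Group cases by their configuration of conditions and inspect the outcomes.
configs = {}
for training, boss, funding, change in cases:
    configs.setdefault((training, boss, funding), []).append(change)

for config, outcomes in configs.items():
    if all(outcomes):
        print(f"{config}: always followed by the change (candidate sufficient combination)")
    elif not any(outcomes):
        print(f"{config}: never followed by the change")
    else:
        print(f"{config}: mixed outcomes, needs closer inspection")
```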
Generative Framework
Depends on understanding the process through which the change was caused. Simply stated: if you know exactly which mechanisms were triggered and which dynamics took place, and you have evidence for those, then you know what caused the change. This is the basis for all theory-based approaches.
One could argue that the whole idea of working with a Theory of Change fits in this framework for understanding causality. In practice, however, a Theory of Change can be used with all frameworks, and many Theories of Change do not really elaborate the mechanisms that actually take place. This is why most arrows in Theory of Change diagrams are causal assumptions rather than explanations of the mechanisms that link one box of the diagram to another. It also demonstrates the importance of making these mechanisms explicit, either by detailing them as part of the Theory of Change diagram or by including them as assumptions behind the arrows.
Because the generative framework fits best with working from a Theory of Change, and leads to the most understanding of how and why change occurs, some of the related methods are elaborated a little further below. The Realist approach to evaluation is the most explicit approach within this framework, but Contribution Analysis and Process Tracing also fit within it.
Contribution Analysis
This is a conceptual framework for answering a question about causality and can be used together with many other methods. It assumes that you have already measured the change in which you are interested, after which contribution analysis proceeds through the following steps:
- You define a clear causal question, usually something like “to what extent did the intervention cause this change?”
- You list all possible contributions to this change. One of these contributions obviously is your intervention. Sometimes it is better to list different aspects of your intervention separately if they contribute through different causal pathways (or different mechanisms). Interventions by other actors, including the target groups, are other potential contributions, as are factors and trends in the global, national and local context.
- After that, you collect evidence about each of these contributing actors and factors. This could be confirming evidence or refuting evidence (meaning that the evidence states that this factor did not actually contribute).
- Weighing all the evidence for a contributing factor leads to a conclusion about each factor. And weighing all factors together leads to a contribution claim about the contribution of the intervention relative to all other contributions.
Usually this leads to rather nuanced claims. It is important to realise that this methodology can only be used if the change is already known, and that no specific methods for collecting evidence are prescribed.
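A minimal, hypothetical way to keep the bookkeeping of these steps explicit is sketched below; counting evidence items is of course a crude stand-in for the qualitative weighing described above, and all names are made up:

```python
# Track candidate contributions with confirming and refuting evidence,
# and summarise the balance for each before making the contribution claim.
from dataclasses import dataclass, field

@dataclass
class Contribution:
    name: str
    confirming: list[str] = field(default_factory=list)
    refuting: list[str] = field(default_factory=list)

    def balance(self) -> str:
        if len(self.confirming) > len(self.refuting):
            return "likely contributed"
        if len(self.confirming) < len(self.refuting):
            return "probably did not contribute"
        return "inconclusive"

contributions = [
    Contribution("our training programme",
                 confirming=["trainees report applying the new methods"]),
    Contribution("new government regulation",
                 confirming=["regulation announced in the same period"],
                 refuting=["enforcement only started after the change was observed"]),
]

for c in contributions:
    print(f"{c.name}: {c.balance()}")
```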
More about contribution analysis
- BetterEvaluation, Contribution Analysis.
- J. Mayne (2012) Contribution analysis: coming of age? Evaluation 18(3): 270-280.
Process Tracing
This methodology starts with the development of a hypothesis, usually stating that the intervention was the cause of the measured change (possibly along with other factors). Often, alternative hypotheses are also developed, usually stating that other interventions or other factors caused the change. Next, a set of four tests (hoop, straw in the wind, smoking gun and doubly decisive) is applied, and on the basis of these a conclusion is drawn about the strength of the evidence that the intervention indeed contributed to the change. Carrying out these tests requires in-depth logical thinking and the availability of detailed information.
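As a compact reminder of what passing each test implies, following the standard interpretation in Collier (2011); the phrasing below is ours:

```python
# What passing each process-tracing test means for the hypothesis under scrutiny.
tests = {
    "straw in the wind": "passing slightly strengthens the hypothesis; failing slightly weakens it",
    "hoop":              "passing is necessary for the hypothesis to survive, but does not confirm it",
    "smoking gun":       "passing strongly confirms the hypothesis, but failing does not eliminate it",
    "doubly decisive":   "passing confirms the hypothesis and failing eliminates it",
}

for name, implication in tests.items():
    print(f"{name}: {implication}")
```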
More about process tracing
- J. B. Barrett, M. van Wessel & D. Hilhorst (2016) Advocacy for Development: Effectiveness, Monitoring and Evaluation, pp. 59-69.
- D. Collier (2011) Understanding Process Tracing, PS: Political Science and Politics 44(4): 823–830.
Realist Evaluation
This methodology centres on the question of what works, for whom, in what circumstances. Its main logic is that context plus mechanism equals outcome. The complete realist question is: “What works, for whom, in what respects, to what extent, in what contexts, and how?”
Realist evaluators seek to answer this question by elaborating in a detailed way the mechanisms through which the intervention works. That is, what is the ‘reasoning’ that makes actors respond the way they do to an intervention, or which dynamics take place that lead to the next change? This can be regarded as writing the arrows of the Theory of Change diagrams as narratives that explain in detail what exactly happens. Realist evaluation also attempts to describe precisely in what way contextual factors interact with the operating mechanisms, rather than just listing context factors as loose assumptions.
In this methodology it is important to know existing theories about mechanisms in the social, psychological, economic or other relevant domains. Indeed, many mechanisms have already been elaborated by others, and one can take advantage of these theories to build one’s own.
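A minimal sketch of how a single context-mechanism-outcome configuration could be recorded, again using the hypothetical training example (all content is illustrative, not a prescribed realist template):

```python
from dataclasses import dataclass

# One hypothetical context-mechanism-outcome (CMO) configuration.
@dataclass
class CMOConfiguration:
    context: str
    mechanism: str
    outcome: str

cmo = CMOConfiguration(
    context="trainees whose managers actively encourage experimentation",
    mechanism="trainees feel safe to try out the new methods and experience early successes",
    outcome="new practices are adopted and sustained after the training",
)

for field_name, value in vars(cmo).items():
    print(f"{field_name}: {value}")
```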
More about realist evaluation
- On mechanisms: Astbury and Leeuw 2010, Unpacking black boxes.
- BetterEvaluation: Realist evaluation.
Participatory Analysis or Analysis by Stakeholders
This is not a single methodology, but a large group of methods that can be combined with most of the methodologies described above. The essence is asking people about their perception of causality: when they reflect on the changes they have seen happening, why do they believe these happened? While these methods are prone to biases (for example, respondents wanting to credit the organisation), triangulation of independent views of different groups of people can lead to credible claims. This is particularly true if the views of people who have no stake in the intervention are included, such as external experts.
Triangulation of methods means you apply different methods to answer the same question and bring the resulting analyses together into one coherent story. Triangulation of sources means you apply the same methods but make sure to get the information from different sources (e.g. persons who are internal and persons who are external to the organisation) and weigh the differences between the sources into a coherent story. Triangulation is not simply listing different data sources or outcomes from different methods alongside each other without intelligently bringing them together into a single story.
Guidelines: How to Do This in Simple Ways
The section above provided an overview of frameworks and approaches for gaining insight into causal relations. Most of these methods are rather demanding and may not always be feasible for regular progress monitoring and reflection.
This section provides some relatively simple guidelines to assess the contribution of your intervention.
- For the main causal pathways in your Theory of Change, elaborate the mechanisms that take place. This means that the arrows in the diagram are translated into concrete stories about what actually happens. The Validation and Assumptions theme or the description box for a relationship may be the appropriate place to register this information within Changeroo.
- In all exercises of outcome measurement, keep an open mind and apply backward causal thinking. This means that once you have measured a certain change or improvement, you do not immediately assume that this is because of your intervention, but leave it as an open question: “how come this change happened?” Another way to do this is to attempt to think of alternative explanations for why a change or improvement might have come about. This could help you as a team to step out of the tunnel vision that only your intervention has caused improvements (while other factors probably explain any lack of success).
- If your data collection methods involve asking stakeholders or target groups (in online or offline surveys, questionnaires, interviews, focus group discussions), ask them, in as open a manner as possible, what has caused the changes. For example: “In your view, what have been the reasons for change X?” Asking the question in this way already prompts respondents to think about more than one factor.
- Collect and keep evidence from other sources about particular causal relations. Many interventions are not completely new or innovative, and others may have carried out rigorous research to analyse whether the intervention contributes to a certain change. The more such existing evidence is found, the stronger your claims become and the less need there is to carry out in-depth causal analysis.
Our brain is predisposed to see causal relations too easily and too often. As soon as our brain can make up a coherent story connecting one thing as a cause and another as an effect, we easily believe there must be a causal relation (see for example Kahneman 2011, Thinking, Fast and Slow, chapter 16, or look up ‘illusory correlation’). In doing so, our brain is also biased to pick up those events that are at the forefront of our mind. It is therefore not surprising that practitioners (for whom the intervention is at the forefront of their mind) easily see their intervention as the cause of many good things they see happening. In the worst case, Theories of Change are but a reflection of such biased thinking about causality; in the best case, reflecting on a Theory of Change may help unearth such ‘too-good-to-be-true’ biases in our minds.