Imagine one of your partner organisations has just launched a financial literacy programme and wants to work with you to get the word out so that young people attend the trainings. You design a Facebook and Instagram campaign that runs for three months. Let’s say that at baseline, 50 people were attending the trainings, and after the campaign, attendance was unchanged: 50 people.
It would be very tempting to conclude that the campaign didn’t work because the number of attendees didn’t increase. But that conclusion would be premature, because we don’t know whether the campaign was implemented with fidelity.
Fidelity of implementation is the degree to which an intervention is carried out the way it was designed. Monitoring implementation fidelity, though a crucial part of an intervention, is often ignored. Without monitoring fidelity, we can’t really say with any certainty whether interventions work or not. In the Behavioural Drivers Model, this is referred to as ‘the black box of behaviour change programming’ (UNICEF, 2019, p. 3).
Continuing with our example above of the social media campaign we designed, consider the four scenarios below. In all four cases, let’s assume that we monitored fidelity.
In the first scenario, fidelity was high and the outcome was also positive. In this case, we can be confident that our intervention, as it is designed, works for the purpose we designed it for.
In the second scenario, we can’t draw conclusions about the effectiveness of our intervention, because we didn’t implement it the way it was designed. For example, maybe we planned to post three times per day on Instagram but did not do that throughout the implementation period.
In the third scenario, we have good reason to believe that our campaign is not effective in getting young people to attend the financial literacy training. It may be that there are factors we did not design for, or that the modality is not the right one. In any case, we have valuable information that the campaign, as it is currently designed, does not work, and we can draw important lessons as we move forward.
In the final scenario, although the outcome was positive, we don’t know whether we can take any credit for our campaign, because it wasn’t implemented as designed. Perhaps there was another parallel intervention that caused this positive outcome.
As we can see, scenarios 2 and 4 are particularly troubling. In scenario 2, we may incorrectly conclude that our campaign doesn’t work at all and throw it out, when in fact it may have failed in that instance only because we didn’t implement it with fidelity. Scenario 4 would also be unfortunate: without measuring fidelity, we would think that our campaign is fantastic, continue to implement it piecemeal, and never really know whether it works, or which aspects of it work.
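The four scenarios above can be summarised as a small decision table: two levels of fidelity crossed with two kinds of outcome. The sketch below is purely illustrative; the function name and conclusion wording are our own, not part of any standard evaluation framework.

```python
# Illustrative sketch of the fidelity-by-outcome matrix described above.
# The labels and wording are invented for this example.

def interpret(fidelity_high: bool, outcome_positive: bool) -> str:
    """Return the conclusion we can draw from a fidelity/outcome pair."""
    if fidelity_high and outcome_positive:
        # Scenario 1: the intervention, as designed, works
        return "Intervention works as designed"
    if not fidelity_high and outcome_positive:
        # Scenario 4: something worked, but we can't credit our design
        return "Outcome cannot be credited to the intervention"
    if fidelity_high and not outcome_positive:
        # Scenario 3: a genuine negative result we can learn from
        return "Intervention, as designed, does not work"
    # Scenario 2: no conclusion is possible either way
    return "No conclusion: intervention was not implemented as designed"

for fid in (True, False):
    for out in (True, False):
        print(f"fidelity={'high' if fid else 'low'}, "
              f"outcome={'positive' if out else 'flat'}: {interpret(fid, out)}")
```

Note that only one of the four cells supports a positive claim about the campaign; the other three either refute it or leave the question open.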
According to Finney et al. (2021), there are five criteria that one should look at when assessing implementation fidelity.
As a starting point, we want to have a theory of change that explains why we think the intervention will work in the way that we designed it: the programme differentiation. Second, we want to monitor how well we adhered to implementing the different features of the intervention. Third, we need to check the standards to which we implemented the intervention and ensure that the quality was high. Fourth, we need to check whether the full dosage of the intervention was administered. And finally, we need to note how responsive our target audience was to the project.
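As a rough illustration, the five criteria could be tracked as a simple scorecard. The component names follow Finney et al. (2021) as described above, but the 0-to-1 scale, the example scores, and the 0.8 threshold are invented for this sketch.

```python
# Hypothetical scorecard for the five Finney et al. (2021) fidelity criteria.
# Scale, scores, and threshold are made-up examples, not a published standard.

FIDELITY_COMPONENTS = (
    "programme differentiation",  # theory of change justifies the design
    "adherence",                  # features implemented as planned
    "quality",                    # standard to which delivery was carried out
    "dosage",                     # full amount of the intervention delivered
    "responsiveness",             # audience engagement with the project
)

def fidelity_summary(scores: dict[str, float], threshold: float = 0.8) -> dict:
    """Average the component scores and flag any that fall below threshold."""
    missing = [c for c in FIDELITY_COMPONENTS if c not in scores]
    if missing:
        raise ValueError(f"Unscored components: {missing}")
    weak = [c for c in FIDELITY_COMPONENTS if scores[c] < threshold]
    return {"mean": sum(scores.values()) / len(scores), "below_threshold": weak}

example = {
    "programme differentiation": 0.9,
    "adherence": 0.6,   # e.g. posts went out less often than planned
    "quality": 0.85,
    "dosage": 0.7,
    "responsiveness": 0.9,
}
# Mean score is roughly 0.79; adherence and dosage fall below the threshold
print(fidelity_summary(example))
```

A scorecard like this makes the weak components visible separately, rather than letting a single averaged number hide where fidelity broke down.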
To get an objective perspective on fidelity, it is good to have multiple check-in points during project implementation. For example, the programmes department responsible for roll-out should have a robust monitoring system that allows them to track project implementation. It is also good to have regular feedback loops with the intended audiences on how much of the dosage is reaching them, by, for example, asking them how many posts they see in their feed or how often they interact with a component of the project. Finally, it is also important to have an external evaluator verify the five implementation fidelity components.
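To make the dosage check concrete, here is a minimal sketch for the campaign example, comparing a planned three posts per day against what was actually published. The numbers, the weekly window, and the function name are all hypothetical.

```python
# Hypothetical dosage check for the social media campaign: compare planned
# posts per day against what was actually published. Numbers are invented.

PLANNED_POSTS_PER_DAY = 3

def dosage_adherence(posts_per_day: list[int]) -> float:
    """Fraction of the planned dosage that was actually delivered."""
    planned = PLANNED_POSTS_PER_DAY * len(posts_per_day)
    # Cap each day at the plan so over-posting one day can't mask missed days
    delivered = sum(min(n, PLANNED_POSTS_PER_DAY) for n in posts_per_day)
    return delivered / planned

week = [3, 3, 1, 0, 2, 3, 3]  # posts actually published on each day
print(f"Dosage adherence this week: {dosage_adherence(week):.0%}")
```

Capping each day at the planned amount is a deliberate choice here: posting six times on Monday does not compensate for posting zero times on Thursday, because the audience's exposure is spread over time.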
In conclusion, monitoring implementation fidelity is just as important as implementing the project itself. Without it, we are left with little information on whether what we are doing works, and to what extent.
Has implementation fidelity impacted a project you worked on recently? Share your thoughts with me here!
Finney, S. J., Wells, J. B., & Henning, G. W. (2021). The Need for Program Theory and Implementation Fidelity in Assessment Practice and Standards. National Institute for Learning Outcomes Assessment.
Kok, G. (2014). Practical Guideline to Effective Behavioral Change. The European Health Psychologist, 16(5), 156–170.
UNICEF. (2019). The Behavioural Drivers Model.