7th March 2024 

Over 100 colleagues joined our online event on the 7th March 2024, which included the following presentations:


The event also included the launch of two new, practical tools developed by the Behavioural Science Unit: 

Below are the written responses to the questions asked of Professor Katherine Brown and Dr Tim Chadborn during the event.

Q&A with Professor Katherine Brown and Dr Tim Chadborn

How do you know the intervention worked through the mechanism you expected it to work? 

Understanding whether an intervention works as expected relies on a structured approach. By using a Theory of Change or a logic model informed by behavioural science theory, we can outline the expected mechanisms of action and their effects. This helps prioritise evaluation questions and shape the evaluation approach.
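
To make this concrete, below is a minimal, illustrative sketch (in Python) of how a logic model’s components might be captured in structured form, so that each expected mechanism maps onto an evaluation question. All field names and example entries are hypothetical, not taken from any real intervention discussed at the event.

```python
# A minimal, illustrative sketch of capturing a logic model in code.
# All field names and example entries are hypothetical.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)       # resources invested
    activities: list[str] = field(default_factory=list)   # what the intervention does
    mechanisms: list[str] = field(default_factory=list)   # expected mechanisms of action
    short_term_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)


# Example: a hypothetical brief-advice training intervention.
model = LogicModel(
    inputs=["online training module", "trainer time"],
    activities=["deliver brief-advice training to practitioners"],
    mechanisms=["increased knowledge", "increased confidence"],
    short_term_outcomes=["more brief-advice conversations delivered"],
    long_term_outcomes=["sustained behaviour change in service users"],
)

# Each expected mechanism suggests an evaluation question and an indicator.
for m in model.mechanisms:
    print(f"Evaluation question: did the intervention change '{m}' as expected?")
```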

For simpler interventions, an experimental approach might suffice, possibly with a factorial design. However, for more complex interventions, a theory-based impact evaluation would be more appropriate. This tailored approach ensures we capture the nuances of the intervention’s effectiveness accurately. 
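
As an illustration of the simpler, experimental end of that spectrum, the sketch below analyses a hypothetical 2x2 factorial design with an interaction term, using the statsmodels formula API on simulated data. The component names and effect sizes are made up for illustration only.

```python
# Minimal sketch: analysing a 2x2 factorial design with statsmodels.
# Assumes a continuous behavioural outcome; all data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200

# Two intervention components, crossed so every combination is observed.
df = pd.DataFrame({
    "reminder": rng.integers(0, 2, n),   # component A: reminder message (0/1)
    "incentive": rng.integers(0, 2, n),  # component B: small incentive (0/1)
})
# Simulated outcome: additive main effects plus noise (no true interaction).
df["outcome"] = 0.5 * df["reminder"] + 0.8 * df["incentive"] + rng.normal(0, 1, n)

# The interaction term tests whether the components amplify or dampen
# each other, rather than acting additively.
fit = smf.ols("outcome ~ reminder * incentive", data=df).fit()
print(fit.summary())
```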

How do you know which bit of the intervention worked if there are multiple components?

Determining which part of an intervention worked in a complex system is one of our biggest challenges. Breaking down the intervention and isolating specific components (such as Behaviour Change Techniques and the associated Mechanisms of Action) can help us to understand each one in more detail. Being clear on the components at the start of the intervention development and evaluation planning stage can help us to do this more effectively. 

You can also consider combining evidence from different types of evaluation to help understand the likely effects of the behaviour change components or their combination. This means looking at the behavioural outcomes, but also at the active components of the intervention: the content (the smallest identifiable components that can potentially change behaviour), how the intervention was delivered, and in what context.

This is typically a challenge in evaluating complex behaviour change interventions: many behaviour change components occur together and interact with each other to reduce or amplify effectiveness, and effectiveness will also depend on how the components are delivered, which will vary across different populations and settings.

With so many elements that can be evaluated, how do you focus your aims and evaluation questions down to something that can be done within the time and resources available while managing expectations?

Narrowing down evaluation aims and questions amidst various elements can be challenging. Practically speaking, on PHIRST (the Public Health Intervention Responsive Studies Team), an iterative approach is key. We begin by constructing a logic model of the intervention, outlining its components and expected outcomes. Through iterative cycles of drafting and refining research questions, involving diverse stakeholders such as local partners and the public, we gradually home in on feasible and relevant evaluation aims.

This iterative process allows for flexibility and responsiveness to emerging insights and constraints. By engaging stakeholders throughout, we ensure alignment with their priorities and perspectives, thereby managing expectations effectively. Ultimately, this collaborative approach enables us to focus evaluation efforts on aspects that are most feasible and meaningful within the available time and resources. 

Key to any evaluation is a well-defined evaluation question. It is advisable to make questions as specific as possible: in other words, make them answerable. As outlined above, this will depend on your priorities (e.g. what you need to understand about the intervention, or how you hope to use the information from the evaluation). You can also consider using the APEASE criteria (acceptability, practicability, effectiveness, affordability, side-effects and equity) to help structure your questions. A Theory of Change can also be a particularly useful tool to help identify priority areas for the evaluation, inform the development of meaningful evaluation questions and identify key indicators for monitoring changes in your outcomes.

Other considerations that will influence your research questions include the stage of your intervention and what you would like to explore (e.g. efficacy, effectiveness, how the intervention works to produce the desired change, or how it interacts with the context).

How do you overcome researcher bias in behaviour change evaluation – seeing what you want to see, in the populations you want to?

To counteract researcher bias in behaviour change evaluation, it’s crucial to acknowledge potential biases upfront (e.g. whether or not the evaluators are independent) and plan accordingly. Strategies include diversifying the research team, using transparent methodologies such as pre-registration of your proposed evaluation approach, and communicating openly about limitations and biases in findings.

You should also carefully consider your recruitment strategy and the use of standardised questions and surveys, and develop a thorough understanding of the evidence base: do your findings support or differ from those of other researchers, and why? This proactive approach helps ensure the credibility and validity of the evaluation. It also links to evaluation standards and good evaluation principles, described in an answer to a question below.

What are the gaps in behaviour change evaluations that we as practitioners might be able to help contribute to?

One gap lies in evaluations’ focus on short-term outcomes, often neglecting the sustainability of behaviour change over time. By designing evaluations that assess behaviour change trajectories and long-term impacts, practitioners can help ensure interventions are sustained beyond the initial implementation phase.

Another area for improvement is the integration of theory into evaluations. Behaviour change interventions are frequently implemented without a clear theoretical foundation, hindering our understanding of underlying mechanisms of change. Practitioners can contribute by incorporating theoretical frameworks into evaluation designs, facilitating a deeper understanding of how and why behaviour change occurs.  

Evaluations can often overlook the broader context in which behaviour change takes place, including environmental, social, and cultural factors. Practitioners can help address this gap by adopting a more holistic approach to evaluation, considering the multifaceted influences on behaviour change and designing interventions that address these complexities.  

Equity and inclusion are also important considerations. Behaviour change interventions may unintentionally exacerbate disparities or fail to reach marginalised populations. Practitioners can contribute by ensuring evaluations consider equity implications, actively involving diverse stakeholders throughout the evaluation process, and designing interventions that prioritise inclusivity and accessibility.

What are your suggestions on overcoming some of the common challenges in the evaluation of behaviour change interventions (e.g. the tendency to consider short-term impacts rather than whether the behaviour was sustained over time)?

  • Longitudinal studies offer insights into behaviour changes over time, ensuring a comprehensive understanding of sustainability. 
  • Employing a mixed-methods approach, blending quantitative and qualitative methods, allows for a nuanced examination of behaviour dynamics. 
  • Grounding evaluations in a robust theoretical framework, such as a Theory of Change, provides clarity and guidance throughout the process. 
  • Engaging participants and stakeholders can help to provide a deeper understanding of their perspectives, enriching the evaluation process. 
  • Utilising validated measures for both short-term and long-term impacts ensures accurate assessment. 
  • Careful planning of data collection points enables changes over time to be captured effectively. 
  • Acknowledging and incorporating contextual factors, such as environmental and social influences, enriches the evaluation, offering valuable insights into behaviour change processes.

Will you be talking about the behaviours of those delivering the interventions?   

The behaviours of those delivering interventions, such as healthcare workers and other professionals, are crucial considerations in behaviour change evaluations. While it’s common to focus primarily on service users or patients, broadening the scope to include the behaviours of intervention deliverers offers valuable insights.  

In healthcare, for instance, understanding and addressing healthcare worker behaviours can significantly impact patient outcomes and the overall effectiveness of interventions. By incorporating a broader perspective that encompasses the behaviours of all relevant actors, we can better understand the complexities of behaviour change and develop more comprehensive and effective interventions.  

From an evaluation point of view, it is always useful to look at the whole system in which the intervention is being delivered, as its components do not exist in isolation but interact and in combination lead to the results. In other words, it is worth exploring: ‘Is the intervention being delivered as intended?’ And if so, or if not, why?

This can be undertaken, for example, as part of a process evaluation, looking at the implementation of the intervention in practice and fidelity to the intervention. This commonly includes data collection methods such as observations of the intervention being delivered in practice, and qualitative interviews or focus groups with those delivering it on the ground, in order to gather their viewpoints, identify any challenges, and assess what worked well, what worked less well, and what could be done differently in the future.

Is there a sliding scale of rigour in evaluation and a level of rigour we should avoid sliding below?  

Evaluation standards identify how the quality of an evaluation will be judged, rather than a linear ‘sliding scale’. Many organisations have guidelines addressing issues of quality for evaluation, largely centred around the need for accessible, transparent and reproducible evaluation. These guidelines often include good evaluation principles such as:  

  • The evaluation has followed applicable standards in its design, planning, conduct and governance. 
  • There is consistency in data collection, methodology, reporting and interpretation of findings, with applicable methodological guidelines and standards followed. 
  • The evaluation is credible, grounded in independence and impartiality, and uses rigorous methodology. 
  • The evaluation is demand-driven, fair, impartial, transparent, timely and used. 
  • Impartiality contributes to the credibility of the evaluation and the avoidance of bias in findings, analyses and conclusions.

Further reading:  

https://www.betterevaluation.org/methods-approaches/methods/evaluation-standards

https://evaluationstandards.org/program

Yarbrough, D. B., Shula, L. M., Hopson, R. K., & Caruthers, F. A. (2010). The Program Evaluation Standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Corwin Press.

How do we take a ‘complex-systems lens’ to behaviour change and evaluation for complex public health issues and solutions, when you have to account for equity, wider determinants, different stakeholders, multiple interacting interventions and system components AND behaviour change? Are there examples where this is being done, and done well?

The mobility paradigm within the context of ‘smartphone apps’, and how this produces ‘social life’, would need to be further defined to understand the relationship with public health.

For instance, is the app being considered for: 

a) corporeal travel (organising/undertaking travel) 

b) mobility of objects (e.g. mobilities of objects across supply chain, so buying and selling) 

c) imaginative mobility (e.g. for media projected images of places) 

d) virtual travel (e.g. by which people traverse online worlds and share presence with others in social media platforms, message boards and other virtual spaces) 

e) communicative travel (e.g. through person-to-person contact via embodied conduct, messages, texts, letters, telegraph, telephone, fax and mobile) 

Mobility relies on and exists within complex systems; whichever one we choose, an evaluation should consider utilising a complex systems evaluation approach and methodology (e.g. Social Network Analysis).
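
As a small illustration of what a Social Network Analysis step can look like, the sketch below uses the networkx library to compute degree centrality on a made-up contact network. The actors and edges are purely illustrative, not drawn from any real system.

```python
# Minimal sketch of a Social Network Analysis step using networkx.
# The network below is made up purely for illustration.
import networkx as nx

# Edges might represent, for example, communication between actors
# in the system within which an intervention is delivered.
G = nx.Graph([
    ("clinic_A", "clinic_B"), ("clinic_A", "commissioner"),
    ("clinic_B", "community_group"), ("commissioner", "community_group"),
    ("community_group", "residents"),
])

# Degree centrality: which actors are most connected, and therefore
# potentially most influential in spreading or blocking change?
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```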

Evaluation data collection methods chosen would depend on the theory of change (ToC) developed and underlying indicators identified for the short, medium and long term outcomes, in order to answer the evaluation aims and questions. There are different data collection methods that can be considered depending on the indicators and evaluation questions that have been identified, and also considering the population you want to gather data from.  

Evaluating the use of digital apps can potentially utilise data collected passively by the app. Examples include collecting unstructured data such as:

  • Social media and online data, i.e. “virtual digital trails” (information and usage patterns recorded by and derived from virtual digital media, which include social media and search engine data, and digital data entry). 
  • Consumption data, i.e. “real-life digital trails” (signals produced by people’s everyday actions, recorded digitally through devices and sensors measuring individuals’ movements and behaviour). This includes interaction with social media, websites or apps (e.g. “searches”, “likes” or “follows” for goods) and customer shopping data. 
  • Spatial/geographic data, such as georeferenced social media (e.g. photos or microblogs). 
  • Physical environmental data, such as environmentally referenced social media (e.g. photos or microblogs).

Data actively collected through apps can also be used. Examples include collecting semi-structured data through wearable health monitoring or environmental monitoring devices, or surveys embedded within apps.
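
To illustrate what working with passively collected app data might involve in practice, here is a minimal sketch that summarises a hypothetical event log into per-user engagement indicators using pandas. The log schema (user_id, event, timestamp) and the values are invented for illustration.

```python
# Minimal sketch: summarising passively collected app-usage events.
# The log schema (user_id, event, timestamp) is hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "event": ["open", "search", "open", "like", "open", "open"],
    "timestamp": pd.to_datetime([
        "2024-03-01 09:00", "2024-03-01 09:05",
        "2024-03-02 10:00", "2024-03-02 10:01",
        "2024-03-09 11:30", "2024-03-03 08:00",
    ]),
})

# Per-user engagement indicators: total events and distinct days active.
summary = events.groupby("user_id").agg(
    n_events=("event", "count"),
    days_active=("timestamp", lambda s: s.dt.normalize().nunique()),
)
print(summary)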

To read more about different data collection methods used in health research, a good starting point is:

Bowling, A. (2014). Research methods in health: investigating health and health services (4th ed.). Maidenhead, GB: McGraw Hill / Open University Press. 536pp.

What are your thoughts around the Human Learning Systems approach/Appreciative Enquiry? 

I found the Human Learning Systems (HLS) approach to be very useful when I looked into it whilst working at the Cabinet Office, especially when addressing complex policy issues. It provided a useful framework for understanding and addressing the intricacies involved. Additionally, Appreciative Inquiry, with its evidence-based support for delivering positive outcomes, proved to be a valuable asset in tackling various challenges.

If you could recommend one interesting book on the subject for a novice, which one would it be?

When thinking about qualitative analysis techniques, how could the emerging practice of reflexive thematic analysis be used in evaluations? I’ve been using it to help understand behaviours, but what value could this have for evaluation?

A key value of reflexive thematic analysis in evaluations lies in its ability to identify and explore patterns, themes, and nuances within qualitative data. This approach allows evaluators to uncover underlying meanings and experiences related to behaviour change, providing rich, contextualised insights that quantitative measures alone may not capture.

This type of approach can also encourage reflexivity and critical engagement with the data, enabling evaluators to acknowledge and explore their own biases, assumptions, and perspectives. This reflexivity enhances the rigour and transparency of the evaluation process, ensuring that findings are grounded in a nuanced understanding of the data and its context.

We’re currently developing an Alcohol Brief Intervention (ABI) online training module for professionals to upskill them in delivering ABIs. We’ve developed a logic model based on barriers and facilitators identified in the literature around delivering brief interventions. Any tips on how we can evaluate behaviour change in these professionals when contact with them post-intervention will most likely not be possible?

If there are systems in place that capture MECC (Making Every Contact Count) conversations, you could monitor rates of delivery before and after the training. You could also capture self-report measures, for example by asking those who have undertaken the training to indicate the percentage of interactions in which they use MECC pre- and post-training.

It may also be helpful to monitor changes in the determinants of MECC-related behaviours, that is, the barriers that the online training seeks to address: for example, changes in attitudes, knowledge, skills, confidence and beliefs, measured before and after the training. This could be done via a short questionnaire, for example. You can find a similar example here on page 8.
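
As a sketch of how such pre/post questionnaire data might be analysed, the example below runs a paired comparison of simulated confidence ratings using scipy. The sample size, scale and scores are invented; in practice they would come from the questionnaires described above.

```python
# Minimal sketch: pre/post comparison of self-reported measures.
# Scores are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # trainees who completed both questionnaires

# Hypothetical 1-7 confidence ratings before and after the training.
pre = rng.integers(2, 6, n).astype(float)
post = np.clip(pre + rng.normal(0.8, 1.0, n), 1, 7)

# Paired comparison: each trainee acts as their own control.
t, p = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.3f}")
```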

Also, qualitative methods can be a useful consideration, if you are able to follow up with the attendees. An example of what you could consider looking at can be found here.  

A lot of our work involves influencing policy makers and commissioners to invest in prevention and a public health approach (in my case, to violence prevention). Do you have any examples of developing a theory of change and evaluation for professional behaviour change to achieve public health aims?

We haven’t logic-modelled that specifically before, but there are a few studies looking at what some of the barriers are:

Byrne-Davis, L. M. T., Turner, R. R., Amatya, S., Ashton, C., Bull, E. R., Chater, A. M., Lewis, L. J. M., Shorter, G. W., Whittaker, E., & Hart, J. K. (2022). Using behavioural science in public health settings during the COVID-19 pandemic: The experience of public health practitioners and behavioural scientists. Acta Psychologica, 224. https://doi.org/10.1016/j.actpsy.2022.103527    

Curtis, K., Fulton, E., & Brown, K. (2018). Factors influencing application of behavioural science evidence by public health decision-makers and practitioners, and implications for practice. Preventive Medicine Reports, 12, 106–115. https://doi.org/10.1016/j.pmedr.2018.08.012  

Moffat, A., Jane Cook, E., & Marie Chater, A. (2022). Examining the influences on the use of behavioural science within UK local authority public health: Qualitative thematic analysis and deductive mapping to the COM-B model and Theoretical Domains Framework. Frontiers in Public Health, 10. https://doi.org/10.3389/fpubh.2022.1016076 

Knowles, N., Elliott, M., Cline, A., & Poole, H. (2024). Factors influencing midwives’ conversations about smoking and referral to specialist support: a qualitative study informed by the Theoretical Domains Framework. Perspectives in Public Health. https://doi.org/10.1177/17579139241231213 

Knowles, N., Gould, A. (2024). Exploring factors influencing the application of behavioural science within public health practice across Wales. https://phwwhocc.co.uk/wp-content/uploads/2023/06/Capability-and-Readiness-Report-V1c-1.pdf  

Shikako, K., El Sherif, R., Cardoso, R., Zhang, H., Lai, J., Mogo, E. R. I., & Schuster, T. (2023). Applying behaviour change models to policy-making: development and validation of the Policymakers’ Information Use Questionnaire (POLIQ). Health Research Policy and Systems, 21(1), 8. https://doi.org/10.1186/s12961-022-00942-y 

Gofen, A., Moseley, A., Thomann, E., & Kent Weaver, R. (2021). Behavioural governance in the policy process: introduction to the special issue. Journal of European Public Policy, 28 (5), 633–657. https://doi.org/10.1080/13501763.2021.1912153  

Resources

A list of helpful resources can be found below, as well as on our resources page: