Measuring and improving our impact: how can evaluation help us learn what works in STEM engagement?

Jul 5, 2022

By Sarah Hamilton, Senior Evaluation Manager at EngineeringUK

At EngineeringUK we have been looking again at how we can step up the evaluation of our programmes to develop stronger evidence for what works in engineering engagement. Robust evaluation is essential for demonstrating the value of programmes, but we want to push further to use evaluation to build the evidence-base for what makes engineering outreach effective. Tomorrow’s Engineers Live, taking place on Monday 11 July, will bring the engineering community together to consider how we can collaborate to better understand what works.

In 2021, EngineeringUK published the Impact Framework for the engineering outreach sector, which suggested that by improving young people's capability, opportunity and motivation we can shift their behaviour towards further study of STEM subjects and, ultimately, towards pursuing STEM careers. This model (the COM-B model: capability, opportunity and motivation leading to behaviour) helps identify the short-term outcomes needed to achieve this long-term goal, and how we can measure them. It also forms the basis for a theory of change: a logic flow of inputs, activities and outcomes that sets out how engineering outreach achieves our intended impact.
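To make this concrete, the sketch below shows one way a COM-B-based theory of change could be written down for evaluation planning. It is a purely illustrative example rather than part of the Impact Framework itself, and the activities and indicators named in it are hypothetical.

```python
# Purely illustrative sketch: one way to encode a COM-B-based theory of change
# for evaluation planning. Activities and indicators are hypothetical examples,
# not taken from the EngineeringUK Impact Framework.
from dataclasses import dataclass


@dataclass
class Outcome:
    component: str          # "capability", "opportunity" or "motivation"
    description: str        # the short-term outcome we want to achieve
    indicators: list[str]   # how that outcome could be measured


@dataclass
class TheoryOfChange:
    inputs: list[str]
    activities: list[str]
    short_term_outcomes: list[Outcome]
    intended_behaviour: str  # the long-term change the outcomes should drive


example = TheoryOfChange(
    inputs=["volunteer engineers", "classroom time", "activity materials"],
    activities=["hands-on engineering workshop", "careers talk"],
    short_term_outcomes=[
        Outcome("capability", "understands what engineers actually do",
                ["pre/post knowledge quiz"]),
        Outcome("opportunity", "has met a relatable engineering role model",
                ["post-session survey item"]),
        Outcome("motivation", "sees engineering as 'for people like me'",
                ["attitude scale administered before and after the activity"]),
    ],
    intended_behaviour="chooses STEM subjects at the next educational stage",
)

# A simple check an evaluator might run: does every short-term outcome
# have at least one planned way of being measured?
for outcome in example.short_term_outcomes:
    assert outcome.indicators, f"No indicator defined for: {outcome.description}"
```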

So, how can we use this starting point to both measure and improve our success in achieving our ambitions for engineering engagement? Evaluation should be more than a mechanism for scoring our efforts. It should be the driver for ever-improving practice and a deeper understanding of how to achieve our long-term goals.

Understanding the problem

Effective programmes need a clear understanding of the problem they are tackling to shape their approach and content. The better our understanding of the problem, the more chance we have of solving it. Evaluation can be used to test this understanding and to provide more insight into the specific barriers for some groups. Who benefits most from an engineering engagement activity? Who is not benefitting and why? Are we reinforcing existing gaps by cementing an interest in those already likely to pursue engineering at the expense of changing the minds of those furthest from it? If so, have we got our understanding of the problem right?

Unpicking complex solutions to complex problems

In reality, the problem we are trying to tackle in engineering engagement – the lack of engineers and other STEM professionals to meet future demand, and particularly the lack of diversity in these sectors – is a complex one. It is influenced by widespread cultural and social attitudes and beliefs, it reflects intergenerational inequalities, and it is tied up with broad challenges around disadvantage that impact on a whole spectrum of outcomes for young people. Engineering outreach can only ever be a part of the solution. A good theory of change and thoughtful evaluation can help to identify where other pieces of the puzzle are needed and how they fit together.

Testing, changing, and testing again

Evaluation is sometimes seen as the last stage of programme delivery, but in reality it is most effective when it forms part of an iterative cycle of improvement. Engaging with evaluation in this way takes courage. It means looking for things that could be done better rather than looking for confirmation that what we do has worked. It also requires us to identify what kinds of improvement might be worth trying. In addition to quantitative measurement, qualitative data can be extremely valuable for getting beneath the figures and uncovering new ideas.

Sharing learning

Evaluation is an investment. Doing robust, detailed evaluation costs time and money, but too often the findings of such evaluations remain internal. They may be seen as too exposing or risky to share, particularly when findings are not completely positive, but evaluations provide learning that can be of real value to other organisations. They help to uncover patterns of what might be working and for whom. Even, or perhaps especially, when a programme might be considered to have 'failed', this can provide valuable information for others about what might or might not be worth trying in the future.

Evaluations may also be left unpublished because of concerns about the rigour of the evaluation itself or the strength of the evidence. But here, too, there is a lot of learning to be gained. What was it about the questions asked that didn't work? Why was it hard to get the robust data that was needed? What would it have taken to produce findings you could be confident in? We have to be prepared to show what we've tried and be transparent about what we have achieved.

We’re looking forward to more conversations with the wider sector, including at Tomorrow’s Engineers Live, to learn how others are tackling these challenges and to find more opportunities to collaborate in finding ways to improve engineering outreach.
