Feb 17, 2023
Sarah Hamilton, Senior Evaluation Manager at EngineeringUK, shares her insights on evaluation ahead of the upcoming Tomorrow's Engineers Live 2023 conference.
If you want to know how someone feels, you have to ask the right question. The questions we ask lead people in different directions. We ask people to think about their lives and their opinions in particular ways, perhaps in ways they haven’t considered before. We assume that they understand what we mean by our question, and that we know what they mean by their answer.
This is even more the case when you restrict their answer to a simple Likert scale.
Evaluation frequently relies on asking people questions and restricting their answers so that they can be scored, analysed and compared. When we evaluate STEM outreach activities designed to change young people’s attitudes, beliefs and aspirations, these answers are the best evidence we have that we’ve had any impact in the short term.
So we need to be asking the right questions, in the right ways. Which is easier said than done.
There are many existing instruments that cover a wide range of outcomes that might be of interest to STEM engagement activities – interest in science, engineering career aspiration, knowledge about engineering, engineering-related self-efficacy. Matching the right construct, or constructs, to your activity can be challenging.
Some survey instruments that ask about students’ experiences and circumstances may not be useful as outcome measures, which are designed to quantify change as a result of an intervention. Not every aspect of a young person’s life is open to manipulation from outside. An activity may be dismissed as ineffective simply because the wrong type of measure was used.
Think about your audience
Even where an outcomes measure exists that measures the right construct, you still need to assess whether the questions are right for your participants. Questions developed for students aged 16+ might not be the right questions for students aged 9 to 11. Questions developed in the United States might not make sense to students in the UK where education systems and cultural norms differ.
Perhaps most importantly, any selected measure needs to be practical for use in your context. Questionnaires measuring complex constructs often include many questions. Alongside demographic questions and programme specific feedback, surveys can easily become unreasonably long, especially if the activity you are evaluating is short. Even if you could persuade your participants to complete it, the chances that they’re still paying attention by the end are pretty slim.
Getting the balance right
At the other extreme, a handful of bespoke questions getting at the absolute core of the issue can be an easier way to go. But you may miss out on a lot of crucial information, or rely too heavily on single questions that reduce complex ideas too much.
Getting the balance right is the art of outcomes measurement. At EngineeringUK we’re still working on this balance. In the last year we have trialled very short pre- and post-questionnaires printed on either side of a postcard. This has been highly effective in achieving good completion rates, but provides very limited scope for detailed analysis. In contrast, some of the longer measures we use suffer from high rates of incompletion or non-response.
Over the coming year EngineeringUK will be doing further work to select or develop a measure that can pick up the change we want to see without placing too much demand on participants.
At Tomorrow’s Engineers Live, we’re hoping to find out more about how other organisations are approaching this tricky challenge, and whether there is an opportunity to collaborate on finding just the right tool for the job.