When should you do your follow up survey?

I am often asked how long to leave between an initial and follow up survey. The quick answer is that it depends on what you want to measure.

For example, if running a counselling course, you may want to do your initial survey a few days before and then your follow up survey a few days after the course. The length of the course doesn’t really matter. Whether it’s one week, six weeks, six months or a year, your results will describe the well-being of your group at those points in time, and the change that happens in between.

You should however keep in mind that the larger the gap you leave between surveys the more potential external influences there are on young people’s lives. For example, over a period of a year, a young person may experience changes in their family situation, move house, or their parent might lose their job – all of which can affect well-being.

These changes will be picked up by the Well-being Measure – something which is unavoidable. This makes it difficult to disentangle what changes are due to your activities and what changes are due to external influences. Because of this, you may want to create a control group for surveys over longer periods of time.

Exactly how you design your survey depends on what you want to measure. But when you are planning you should follow the guidance we provide and think through all the options. Whatever you decide on timing, you can be sure that NPC’s Well-being Measure will always give you a reliable indication of well-being.

Posted in Uncategorized

Comparing like for like

I was asked recently why the well-being scores from an initial survey don’t always match the initial figures shown in the follow up results.

Your follow up results only show participants that have completed both the initial and follow up survey. This methodology is called a ‘matched pairs analysis’ and allows you to accurately look at the differences within the same sample between the two points in time. Young people that have completed only one of the surveys are not included in the analysis.

Even if you have the same number of participants for both your initial and follow up survey, you may still see a difference in your scores.

The reason for this is that scores can only be calculated by comparing like for like. If all young people taking part gave a response to all 45 statements of their initial survey, they would need to respond to all 45 again in the follow up survey to make a complete comparison.

If, however, any individuals missed out statements on their follow up survey, the well-being score will be calculated by comparing only those with a complete set of responses for both surveys. For example, the self-esteem aspect consists of 10 statements. If an individual responded to all 10 statements in the initial survey but only 8 in the follow up, they would not be included in the comparison.
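The matched pairs rule above can be sketched in a few lines of Python. This is an illustrative sketch only, not NPC's actual code: the function name and sample data are invented, and scores here are simple averages of statement responses.

```python
def matched_pairs(initial, follow_up, n_statements):
    """Return (initial_scores, follow_up_scores) for participants with
    complete responses to every statement in BOTH surveys."""
    pairs = []
    for pid, first in initial.items():
        second = follow_up.get(pid)
        # Exclude anyone who missed a survey entirely...
        if second is None:
            continue
        # ...or who skipped any statement in either survey.
        if len(first) < n_statements or len(second) < n_statements:
            continue
        if any(v is None for v in first) or any(v is None for v in second):
            continue
        pairs.append((sum(first) / n_statements, sum(second) / n_statements))
    return [b for b, _ in pairs], [a for _, a in pairs]

# Example: participant "c" skipped a statement in the follow up,
# so they drop out of the comparison even though both surveys exist.
initial = {"a": [3, 4, 5], "b": [2, 2, 3], "c": [4, 4, 4]}
follow_up = {"a": [4, 4, 5], "b": [3, 3, 3], "c": [5, None, 5]}
before, after = matched_pairs(initial, follow_up, 3)
```

The same filtering explains why adding participants who only completed one survey never changes the follow up results.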

For further information on interpreting your results, you can refer to our guideline here. Alternatively, contact us at wellbeing@thinknpc.org if you have any questions.

Posted in Handy hints

A taste of what’s in our national baseline

Our national baseline is a sample of 4,122 young people across the UK who have completed the well-being survey. It is used in all the statistics generated by NPC’s Well-being Measure and draws on data from across the UK.

To give you a taste of the baseline, below is a graph that shows the distribution of scores for life satisfaction across the whole sample. This is based on the ‘ladder question’, where young people are asked to rate themselves on a scale from 0 to 10. The average response for the whole sample of 11 to 16 year olds on life satisfaction is 7.4 (rounded to one decimal place).

The baseline also shows averages for males and females and different age groups. The second graph gives a sense of how the averages vary by age – showing a gradual decline during adolescence.

These graphs give you a sense of the richness of the baseline. The general shape of these distributions is repeated for scores across the other areas of well-being – self-esteem, resilience, emotional well-being, friends, family, school and community. We admit that the national baseline isn’t perfect, but it’s still the best available. Over time it will grow and become more refined – and as it does we will report more results like these!

To read more about the national baseline, see here.

To read an article about how we use the national baseline when we present your results, see here.

Posted in About NPC's Well-being Measure

Our updated national baseline!

Today we launch our new and updated national baseline! It contains records from 4,122 young people who have completed the survey.

The ability to create a baseline is partly thanks to you – as it includes data from your surveys (collected and used anonymously).

We hope that it improves your experience of using the Measure. And watch this space for more updates and sneak previews of what’s in the data…

To read more details on the national baseline and how it is used click here.

Posted in Uncategorized

Tackling the tricky question of attributing impact

A question we often get asked is ‘how do I know that the difference I see in young people’s well-being is due to my programme, or due to other causes?’ This question is not new to researchers, and it is one that comes up in any survey or research that you do.

The simple answer is that you can never be 100% certain. In their laboratories, scientists put enormous effort (and money) into isolating different factors or conditions in their experiments. This allows them to focus their experiment on what they want to test.

Unfortunately, social scientists don’t have this luxury. Because they work with people, it is impossible to recreate laboratory conditions.

Instead, the nearest thing to recreating lab conditions is to use a ‘control group’. This is a comparison group, drawn from a population with similar characteristics and selected at random, which allows you to say ‘what would have happened’. By comparing the results of your experiment with what happens to the control group over the same period of time you can isolate the impact of what you are testing.
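The comparison described above is sometimes called a difference-in-differences: the programme group's change minus the control group's change over the same period. Here is a toy sketch with made-up numbers – an illustration of the idea, not NPC's method.

```python
def mean(xs):
    return sum(xs) / len(xs)

def net_change(prog_before, prog_after, ctrl_before, ctrl_after):
    """Estimate programme impact as the change in the programme group
    minus the change in the control group over the same period."""
    programme_change = mean(prog_after) - mean(prog_before)
    control_change = mean(ctrl_after) - mean(ctrl_before)
    return programme_change - control_change

# Example: both groups start at the same level and both improve,
# but the programme group improves by more. The control group's
# change (+2) captures 'what would have happened anyway'.
impact = net_change([50, 55, 60], [60, 65, 70],
                    [50, 55, 60], [52, 57, 62])
```

The subtraction is what lets you strip out external influences – family changes, moving house and so on – that affect both groups alike.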

It is always worth thinking about whether you could run a control group within your survey. However, it may not be possible or desirable. For one, it is expensive and it is not worth doing unless you do it properly. And second, if you work with a particularly vulnerable group it may not be something you want to do as it might mean not providing a service to a comparison group that you know really needs it.

Without a control group your results will still give you a good sense of the difference you make to young people’s lives. When presenting your results it can be useful to talk about ‘contribution’ rather than attribution – recognising that there are many influences on young people’s lives and your work is one of them.

If you would like to use a control group and need help, you can contact us about our consulting services by emailing wellbeing@philanthropycapital.org.

Posted in General Well-being

It’s all in the presentation: how we use our national baseline to put your results in context

One of the unique features of the results you get from NPC’s Well-being Measure is that you can see how your group compares to other young people. We do this using our national baseline – a sample of young people across the UK that have completed the well-being survey.

The way we do this is to present your results as a score from 0 to 100 that is standardised using this baseline. This means that the baseline isn’t presented separately from your results, it is part of them.

To explain what I mean, consider this example. If you ran a race backwards and it took you 20 minutes to cover a distance of a mile, would you be pleased or disappointed with your time? Your answer now is probably ‘I don’t know’.

But then if I gave you all of the times of the runners in the race and told you that you finished 40th out of 220 runners, you would know that you did pretty well.

The same is true with your well-being scores. If we tell you that your group scored 176, that isn’t very meaningful. If we tell you instead where your group’s score is in relation to all young people in the UK, that is much more helpful.

So in your results, we present all well-being scores on a percentage scale from 0 to 100. If your group scores 30% on self-esteem, for example, it means that 30% of the national population has lower self-esteem and 70% of the national population has higher self-esteem than your group. Presenting it this way therefore means you can instantly see how your results compare to others.
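As a rough sketch of how such a percentage score could be computed: take the share of the baseline sample scoring strictly below your group. The function name and baseline values below are invented for illustration; NPC's actual standardisation may differ in detail.

```python
def percentile_score(group_score, baseline_scores):
    """Convert a raw score to a 0-100 scale: the percentage of the
    baseline sample scoring strictly below the group's score."""
    below = sum(1 for s in baseline_scores if s < group_score)
    return 100 * below / len(baseline_scores)

# A tiny made-up baseline of raw scores for one well-being aspect.
baseline = [120, 140, 150, 160, 170, 176, 180, 190, 200, 210]

# The raw score of 176 from the example above: 5 of the 10 baseline
# scores are lower, so the group lands at the 50% mark.
score = percentile_score(176, baseline)
```

This is why the baseline isn't presented separately from your results – it is baked into every score you see.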

Posted in Handy hints

How will measuring well-being help your organisation?

This is something we ask our workshop attendees regularly.

Knowing why you are measuring anything is important, but particularly when you are exploring methods you haven’t used before. For us, it is important that our workshop attendees know what they are trying to achieve – not only so they get the most out of the session, but also so they can put what they learn in the context of what they want to do.

Here are some of the answers we commonly hear:

  • to provide evidence of impact for funders, ourselves and young people
  • to find out what works and what doesn’t (in terms of impacting well-being)
  • to identify gaps in our work
  • to improve services for the future
  • to increase understanding

If you can relate to any of the above or have other reasons why you think measuring well-being is important to your organisation, please let us know!

For more information on our Well-being Measure visit our website.

Posted in Uncategorized

Good grades = happy faces?

Almost 700,000 students across England, Wales and Northern Ireland have today received their GCSE results.

With 40% of pupils expected to get five A* to C grades, an increase of 5% from last year, it is easy to assume that schools across the UK will be full of smiling faces.

However, top grades have dropped for the first time since the very first GCSE exams were sat in 1988, sparking concern over the way they have been graded. Is this really surprising given the recent media attention surrounding GCSEs and the supposed declaration that they are becoming easier? Michael Gove wanted to buck the trend of grade inflation and it appears he has succeeded.

So what effect will this have on young people? Teenagers who have worked their socks off to achieve their predicted grades will certainly be disheartened by the news. Even those who have achieved good grades may be questioning how justified their results are.

It’s not all about the grades though. Exam season is a stressful time for students, parents and teachers. My concern with the ever increasing pressure to perform well is that it can have a detrimental effect on other areas of a young person’s life. Take, for example, their relationships with friends. At a time when opportunities for further education and employment are harder to come by, is there an added layer of competitiveness between peers? Are relationships with family strained as expectations are raised higher?

Being able to measure subjective aspects alongside attainment is key. This is why we have developed NPC’s Well-being Measure, a tool which enables organisations to measure impact on eight aspects of well-being. It gives schools and charities working with 11-16 year olds a full picture of how happy they are.

I think it is easy to assume that good grades lead to happy faces. I think it is also safe to assume that the higher the expectation, the less smiley that face becomes.

Posted in Uncategorized

The impact of measurement on your work – choosing the right approach to evaluation

We all know that measuring outcomes can be a tricky business. In particular, measuring ‘soft outcomes’ such as self-esteem or life satisfaction can be difficult because it can affect how we interact with our clients or beneficiaries. In turn this has implications for the objectivity of your results – so you need to tread carefully and decide exactly what you want from your measurement tools before using them.

With this in mind, it’s helpful to think of three approaches, each of which involves a different degree of interaction.

First, there is measurement as evaluation. This is where measurement is for the purpose of testing the effectiveness of a programme or intervention. Whilst on a programme, a young person may be asked to complete a survey, but the impact of measurement on the intervention itself is kept to a minimum. For evaluation the key thing is that methods are as objective as possible, so they tend to rely on a relatively low degree of interaction with young people – to minimise bias and so that they can answer honestly. NPC’s Well-being Measure fits into this category – your survey might be gathering detailed information from young people, but you are not substantially affecting their experience.

Second, there is measurement as diagnosis. This is where approaches focus on the individual, often using ‘clinical scales’ to focus on mental health. An example of this approach is Goodman’s Strengths and Difficulties Questionnaire (the ‘SDQ’), which measures mental health in young people and focuses on identifying problematic thoughts or behaviours. The SDQ itself involves a relatively low degree of interaction, but it may also be combined with individual face-to-face assessment by a professional. Results from the SDQ can also be aggregated and used to assess impact at programme level.

Third is measurement as part of the therapeutic process. This is where approaches form a direct and deliberate part of interaction with beneficiaries – the very act of measurement becomes part of the intervention. In this approach, measurement is done in discussions between project worker and young person, often where the two parties agree on a ‘score’ or outcome. Examples of this approach are the Rickter Scale and Outcomes Stars. As a case-working tool this approach can provide a great way of opening up dialogue and working through problems. However, as an evaluation tool it is problematic as it can be open to influence by project workers and risks producing ‘false positive’ results – leading to accusations that it is not an objective way to measure.

Overall, how measuring outcomes affects your intervention is an area where you have to tread carefully. Ultimately the approach you choose depends on why you want to measure, what you want to achieve, and how it will impact upon the experience of your beneficiaries.

Interested in discussing your approach to measurement and evaluation? Contact the NPC team to hear more about our consulting services by emailing wellbeing@philanthropycapital.org.

Posted in General Well-being

Can I use the Well-being Measure for younger or older children?

We’ve been contacted by lots of people who work with children and young people outside the Well-being Measure’s 11-16 year target age group. Everyone wants to know whether they can use it and whether there are plans to extend the tool to cover a wider age group.

Why do you focus on 11 to 16 year olds?

Because that’s where the Well-being Measure has been validated and tested. We pride ourselves on a high standard of research and want customers to be confident that they are using a tool they can trust. The Well-being Measure also uses a national baseline to benchmark results, and we do not have baseline data outside the 11-16 group.

What about young people outside this age group?

If you work with young people slightly outside this age group (perhaps ages 9 or 18) and you think that the tool would be appropriate for them, you can still use it. It is up to you to make this decision.

There are few practical concerns with using the tool slightly outside the recommended age group, and a number of our customers already use it in this way. However, in terms of the analysis, you need to be aware that when benchmarking results we will automatically compare young people’s scores with the nearest available age – for example, if the survey is completed by a 17 year old, we will compare that to national data for a 16 year old.
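The nearest-available-age rule described above amounts to clamping the age into the baseline's range. The sketch below is our illustration of that assumed behaviour, not NPC's actual code.

```python
def benchmark_age(age, min_age=11, max_age=16):
    """Map a participant's age to the nearest age covered by the
    national baseline: ages below 11 compare against 11, ages
    above 16 compare against 16, and ages in range are unchanged."""
    return max(min_age, min(age, max_age))

# A 17 year old is benchmarked against 16 year old national data;
# a 9 year old against 11 year old data; a 14 year old is unchanged.
examples = [benchmark_age(17), benchmark_age(9), benchmark_age(14)]
```

This keeps the comparison meaningful for participants just outside the target range, though the further from 11-16 they are, the looser the benchmark becomes.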

What about extending the Well-being Measure to 5 to 11 year olds?

We are currently exploring whether we can extend the range of the Well-being Measure down to the age of 8 – but it is unlikely that we would be able to go beyond this.

As a general rule, surveys don’t work very well with children younger than eight. This is because of literacy issues – they struggle to read and understand the questions that they are being asked. Most evaluation approaches with this age group instead use parents, teachers or professionals to assess progress – something that requires an entirely different approach, which is outside the scope of the tool.

What about extending the Well-being Measure to older age groups?

This is something that we are keen to explore. With older groups, literacy issues are not a problem but some of the questions currently included in the Measure may not be appropriate – for example those on satisfaction with school and family relationships.

On both these questions, we hope to have some more news on our plans by the end of the summer, which we will report on this blog and in our newsletter.

Want to know more about using our Well-being Measure? Contact us by emailing wellbeing@philanthropycapital.org.

Posted in Uncategorized