Myth-busting and evaluating soft outcomes

Last week I gave the opening plenary for the Third Sector conference on ‘Measuring Soft Outcomes’. A roomful of 70+ organisations had gathered to mull over the big questions around monitoring and evaluating impact, and to learn how to do it better.

I provided an overview of the topic, defining what we mean by a ‘soft outcome’, why they are important, and some examples of how they can be measured. As part of the discussion, I also sought to bust some myths:

Myth no. 1 – Soft outcomes matter less. Not for the people who matter most. Consider, for example, what’s more important for mental health service users: becoming more emotionally stable and learning how to manage mental health problems (a soft outcome) or preventing hospital admissions (a hard outcome)?

Myth no. 2 – Soft outcomes can’t be defined and quantified. They can. There are rigorous and well-developed methods – eg, for measuring young people’s well-being.

Myth no. 3 – Soft outcomes are more ‘unreliable’ than hard outcomes. Not true either. Measuring anything depends on valid and non-biased methods. You have the same issues around sampling and accuracy of reporting no matter what you are measuring.

Following this were presentations from NSPCC, Coram and Shelter, who talked about what they are doing to measure soft outcomes.

You can download my presentation from the conference here.


About John Copps

John is part of NPC's research and consulting team and is the founder of NPC's Well-being Measure, a social business that provides an online tool to measure young people’s well-being. He has eight years’ experience of research and consulting, and is passionate about how data can be used to improve the performance of organisations. John is a regular contributor to NPC's blog and has also contributed to pieces for BBC Radio, the Guardian, and the Financial Times. John is a governor of a secondary school.
This entry was posted in General Well-being.

1 Response to Myth-busting and evaluating soft outcomes

  1. Paul Edkins says:

    I’ve read a lot about your well-being model and I really like the thinking and structure behind the delivery of the analysis. I just wanted to say thanks for your contribution to the research and implementation of social impact measurement – they will really help me as I try to go forward in this field.

    Your blogpost struck a chord with me. For years I have been telling my less mathematically inclined friends that just because something seems difficult to measure doesn’t mean we can’t measure it meaningfully. Your distinction between ‘hard’ and ‘soft’ echoes a book I read about the difference between ‘hard science’ and ‘soft science’, called Nonsense on Stilts.

    The author tries to define what constitutes a ‘science’, and argues that soft sciences (e.g. sociology) are just as much sciences as hard sciences (e.g. physics), but their parameters are shifted. E.g. the error bars in physics need to be much smaller than in sociology, and the factors involved in a sociological experiment far outnumber, and are tougher to define than, those involved in a physical one.

    Anyway, I’m sure you’ve come across these sorts of ideas before, but I just wanted to say thanks, and I hope to keep up to speed with new work in this area.
