Last week I gave the opening plenary for the Third Sector conference on ‘Measuring Soft Outcomes’. A roomful of 70+ organisations had gathered to mull over the big questions around monitoring and evaluating impact, and learn how to do it better.
I provided an overview of the topic, defining what we mean by ‘soft outcomes’, why they are important, and giving some examples of how they can be measured. As part of the discussion, I also sought to bust some myths:
Myth no. 1 – Soft outcomes matter less. Not for the people who matter most. Consider, for example, what’s more important for mental health service users: becoming more emotionally stable and learning how to manage mental health problems (a soft outcome) or preventing hospital admissions (a hard outcome)?
Myth no. 2 – Soft outcomes can’t be defined and quantified. They can. There are rigorous and well-developed methods – e.g., for measuring young people’s well-being.
Myth no. 3 – Soft outcomes are more ‘unreliable’ than hard outcomes. Not true either. Measuring anything depends on valid, unbiased methods. The same issues around sampling and accuracy of reporting arise no matter what you are measuring.
My talk was followed by presentations from NSPCC, Coram and Shelter, who discussed what they are doing to measure soft outcomes.
You can download my presentation from the conference here.