Although social intervention in Spain has in recent years been incorporating impact indicators
and continuous evaluation processes into its work, the question many of us ask ourselves is: "are we measuring what we should be measuring?"
At the beginning of this millennium, many social organisations began to systematise evaluation processes not so much as a form of continuous learning, but as a way of complying with the bodies that finance our projects and of differentiating ourselves from the multitude of organisations born in the decade between the 1990s and the 2000s. In this sense, it is worth recalling the study carried out on the initiative of the Tomillo Foundation, which put the total number of social action organisations at 11,043 in 1999; of these, 47% are estimated to have been born in the 1990s, and only 18% already existed before 1980.
Although, in perspective, this figure may seem negative, in practice it allowed us to "force" ourselves to make a first approach to what today is an obligation in our field: ASKING the people for and with whom we work. This provoked a revolution: those of us who managed projects had to come down from our watchtower, as there were now people questioning the sense and usefulness of what we were doing.
This meant that in the first decade of the 2000s, questionnaires and in-depth interviews became particularly popular in social intervention as a way of measuring our impact and justifying our work, both to increasingly demanding public administrations and to a society that demanded more transparency. But what did we measure? One of the unwritten rules in this field referred to the need to measure, and for this purpose a phrase attributed to William Thomson Kelvin (Lord Kelvin) was invoked, which went something like this: "What cannot be defined cannot be measured. What cannot be measured cannot be improved".
Well, in fact, we were wrong.
A first reflection, along the lines of Lilian Arellano Rodríguez's point, is that evaluating is not measuring; education (like the processes of social intervention) is evaluable but not measurable. A starting point that, although it may seem provocative, only confirms a reality: "But what is evaluation? What is evaluable? Why evaluate? What are the units of evaluation? How to evaluate? What are the requirements to become an evaluator? Don't we often confuse evaluation with measurement?" When we analyse the results of our workshops, projects or programmes, do we evaluate the quality of our action? Do we evaluate the impact of our action on the people for whom and with whom we work? Is the only impact the acquisition of knowledge in the traditional way? Is the role of social organisations to supplant academia? Is evaluating the same as measuring?
In this sense, an interesting debate is the one raised by the Open Education Association, which is easily extrapolated to the field of social intervention: "What would happen if, instead of measuring the acquisition of knowledge that will quickly become obsolete, we were to evaluate people for their ability to learn to learn and to learn to be? What would happen if we allowed them to implement self- and co-assessment strategies? What would happen if we assessed them for their ability to evaluate and assess themselves? What would happen if we assessed them for their ability to transform their environments and their ability to transform society? What would happen if we assessed their ability to live and work in uncertainty?"
This opens a space for self-criticism and the need to reflect on whether we are measuring what we really value or whether, on the contrary, as Gert Biesta puts it, "we are measuring what is easily measurable because we have often only approximated what we can measure in a simple way". Going a little further, in the words of Carlos Magro: "It's not the data, it's the questions" that have confused us in our need for continuous improvement.
This raises the need to look for new scenarios in which we can establish mechanisms for continuous learning through evaluation mechanisms that really evaluate the impact of a sector as particular as social intervention.
For this debate, we put forward some ideas:
All of this brings us to the final challenge for the evaluation processes of social organisations, one that goes beyond one-off analysis of the daily activities we carry out. It is about, as outlined in the last point, assuming the intrinsic role of social intervention, which is none other than the transformation of contexts (personal, group, family and social), and incorporating all these new variables into our impact analysis and our continuous learning processes.
Perhaps it is all much simpler, and the design of our evaluation and continuous learning systems should simply reflect what Paulo Freire pointed out years ago: "I am not in the world simply to adapt myself to it, but to transform it. If the structure does not permit dialogue, the structure must be changed". In other words, we as social organisations are not here simply to develop or evaluate isolated projects, but to transform our personal and social context. If the tools we use to evaluate do not allow us to reach this knowledge, we must change them. Do we dare?