Although in recent years social intervention in Spain has been incorporating impact indicators and continuous evaluation processes into its actions, the question many of us ask ourselves is: "are we measuring what we should be measuring?"

At the beginning of this millennium, many social organisations began to systematise evaluation processes not so much as a form of continuous learning, but as a way of complying with the bodies that finance our projects, as well as of differentiating ourselves from the multitude of organisations born in the 1990s and 2000s [1]. In this sense, it is worth recalling the study carried out on the initiative of the Tomillo Foundation [2], which put the total number of social action organisations at 11,043 in 1999, of which an estimated 47% were born in the 1990s and only 18% already existed before 1980.


Although, in perspective, this data may seem negative, in practice it "forced" us to make a first approach to what is today an obligation in our field: ASKING the people for and with whom we work. This provoked a revolution: those of us who managed projects had to come down from our watchtower, as there were now people questioning the sense and usefulness of what we were doing.

This meant that in the first decade of the 2000s, the use of questionnaires and in-depth interviews became particularly popular in social intervention as a way of measuring our impact and justifying our work both to increasingly demanding public administrations and to a society that demanded more transparency. But what did we measure? One of the unwritten rules in this field referred to the need to measure, and for this purpose a phrase attributed to William Thomson, Lord Kelvin [3], was invoked, which went something like this: "What cannot be defined cannot be measured. What cannot be measured cannot be improved".


Well, actually we were wrong.

A first reflection, along the lines of Lilian Arellano Rodríguez's [4] point, is that evaluating is not measuring; education (like the processes of social intervention) is evaluable but not measurable. A starting point that, although it may seem provocative, only confirms a reality: "But what is evaluation? What is evaluable? Why evaluate? What are the units of evaluation? How to evaluate? What are the requirements to become an evaluator?… Don't we often confuse evaluation with measurement?" When we analyse the results of our workshops, projects or programmes, do we evaluate the quality of our action? Do we evaluate the impact of our action on the people for whom and with whom we work? Is the only impact the acquisition of knowledge in the traditional way? Is the role of social organisations to supplant academia? Is evaluating the same as measuring?


In this sense, an interesting debate is the one raised by the Open Education Association [5], which is easily extrapolated to the field of social intervention: “What would happen if instead of measuring the acquisition of knowledge that will quickly become obsolete, we were to evaluate them for their ability to learn to learn and to learn to be? What would happen if we allowed them to implement self- and co-assessment strategies? What would happen if we assessed them for their ability to evaluate and assess themselves? What would happen if we assessed them for their ability to transform their environments; for their ability to transform society; what would happen if we assessed their ability to live and work in uncertainty?”.


This provides a space for self-criticism and the need to reflect on whether we are measuring what we really value or whether, on the contrary, as Gert Biesta [6] puts it, “we are measuring what is easily measurable because we have often only approximated what we can measure in a simple way”. Going a little further and in the words of Carlos Magro [7]: “It’s not the data, it’s the questions” that have confused us in our need for continuous improvement.


This raises the need to look for new scenarios in which we can establish mechanisms for continuous learning through evaluation processes that really assess the impact of a sector as particular as social intervention.


To open this debate, we offer some ideas:


  1. We must accept once and for all that the traditional natural-scientific measurement approach seeks explanations with perfect cause-effect relationships that rarely manifest themselves in social intervention settings, insofar as few if any of the contexts in which intervention is carried out are generalisable, given their uniqueness.
  2. We must focus on learning- and skill-oriented assessment. Is this demagogic? Certainly not. Our evaluative action need not renounce grading; rather, grading (and grades alone) should no longer be the only function of evaluation, as has been the case until now. That is why we must recognise (and embrace) the multitude of variables that intervene in this type of process and that go beyond grading.
  3. The role of civil society cannot be to occupy the space of knowledge accumulation, but rather to cultivate the capacity for social and personal transformation. This leads us to the need to establish new evaluation formulas based on participation and on the transformative impact of our action. In this sense, we must move from analysing the accumulation of knowledge to analysing the process of acquiring social competences and the capacity of these new competences to drive social change; a social change that will have to be analysed at the personal, group, family and community levels.
  4. We must immediately abandon the traditional organisational vision that treats the people we work for and with as "mere subjects of intervention" in a subject/object relationship, and move to a new relational framework of subject/subject, using methodologies that allow a collective exercise of construction, social improvement and, by extension, improvement of the processes of social inclusion of these people.
  5. Eliminate from our vocabulary words such as "user" and "beneficiary", because they generate a conceptual framework in which evaluation processes are seen as a requirement to be fulfilled rather than as a process both of improving the action of our organisations and of questioning that action.
  6. The practices and approaches of formative evaluation involve handing over responsibility to the participants with different degrees of participation: self-evaluation, co-evaluation, shared evaluation, dialogue-based evaluation or a democratic evaluation model. We must incorporate new methodologies and new visions that allow us to know what is happening and what impact our work has.
  7. We must incorporate into our analyses (and prior to our action) the competency-based knowledge already held by the people for and with whom we work, insofar as it integrates conceptual knowledge: concepts, principles, theories, data and facts (knowing); knowledge related to skills, referring both to observable physical action and to mental action (knowing how to do); and a component with great social and cultural influence, implying a set of attitudes and values (knowing how to be) that have an impact on our closest context [8].

All of this brings us to the last challenge for the evaluation processes of social organisations, which goes beyond the one-off analysis of the daily activities we carry out. It is about, as outlined in the last point, assuming the intrinsic role of social intervention, which is none other than the transformation of contexts (personal, group, family and social), and incorporating all these new variables into our impact analysis and our continuous learning processes.


Perhaps it is all much simpler, and the design of our evaluations and continuous learning systems should simply reflect what Paulo Freire [9] pointed out years ago: "I am not in the world simply to adapt myself to it, but to transform it. If the structure does not permit dialogue, the structure must be changed." In other words, we as social organisations are not here simply to develop or evaluate isolated projects, but to transform our personal and social context. If the tools we use to evaluate do not allow us to reach this knowledge, we must change them. Do we dare?





© Copyright 2020 Creative Action Against Discrimination