Power, knowledge and courage: our experience with feminist evaluation

“The feminist evaluator provides technical support and poses questions that support the gender transformative nature of the program, the capture of information and other evaluative processes. The evaluator must be attuned to potentially sensitive power dynamics and reflect on his/her/their own power in an evaluative exercise.” (Oxfam Canada 2018, 6)

Over the course of the past year, we have been working closely with the William and Flora Hewlett Foundation's Gender Equity and Governance team to evaluate their 2015–2021 Women’s Economic Empowerment (WEE) Strategy. As feminist evaluators we were - and continue to be - excited about this evaluation for two main reasons.

First, evaluations are not always user focused. Hewlett’s approach of combining evaluations with strategy development processes helped to ensure that the evaluation produced actionable knowledge to shape the next five-year strategy.

Second, as feminists, we are ultimately interested in transforming power relations. We were, therefore, not only interested in examining why gender and power relations exist and how they change, but also in ensuring that this information was used to engage the foundation in meaningful discussions about how philanthropic grantmaking can contribute to power shifts. Our role not only as “evaluators” but as facilitators and critical friends provided us with ample opportunity to do this, and we’re especially excited that our partners at Hewlett held similar feminist values.

The remainder of this blog examines how we put three (out of many) feminist evaluation principles into practice in this evaluation: power sharing, multiple ways of knowing, and speaking truth to power.  

Power-sharing

Feminist evaluators see knowledge as a powerful - and value-laden - resource. Therefore, feminist evaluations seek to be participatory and inclusive to encourage creating and sharing both knowledge and power. We centred power-sharing within our team, with our client, and with our evaluation participants as much as possible. 

Power sharing within the team. The evaluation team was incredibly diverse in skill sets, opinions, and experiences. We created space for reflection, debate, challenge, and support within the team, including opportunities for less experienced members to participate meaningfully and actively shape our evaluation approach and practice.

While we aimed for a more horizontal and fluid team structure, we also recognised the importance of an inclusive team leader in setting and maintaining team “culture”, including values and ways of working. The team leader helped the team “hold the feminist centre”, creating a safe space where all team members could contribute freely and engage in critical thinking. Being explicit about - and putting into practice - our shared feminist values also provided our clients with some of the predictability and consistency that larger, diverse teams don’t always offer, something we feel is essential in building trust during a long and complex evaluation.

Power sharing with our client. Feminist evaluation is a collective endeavour, carried out in the spirit of shared inquiry and co-learning that encourages ongoing reflection and learning for all evaluation participants. We worked closely with WEE team colleagues and others in the foundation to co-design the evaluation framework and methodologies, reflect regularly on whether our approach was working as planned, and adapt as needed. We also shared our early findings and tentative learnings as they emerged for discussion and reflection. 

Taking this approach allowed the foundation team to surface their own experiences and an enormous amount of tacit knowledge. Rather than compromising the integrity of the summative evaluation, this feminist practice enriched it, and we feel produced an evaluation that was even more fit for its forward-looking purpose of informing the next strategy.

Power sharing with evaluation participants. As feminist practitioners, we believe that there are no tools or methods that are inherently feminist. While some evaluation approaches are more amenable to feminist practice, the ingredients that make an evaluation feminist lie firmly in design and implementation. Central to our evaluation, therefore, was the process of engaging participants in generating evidence and learning that was right-sized and nuanced rather than extractive. As much as remote data collection allowed, we built space in our consultations to test our early insights with a wide range of stakeholders, including grantees and other WEE experts from the Global South and Global North. 

Multiple ways of knowing

Building on the theme of knowledge as a powerful resource, feminist evaluation challenges us to think differently about what is considered as evidence, pushes the boundaries of how evidence is captured, and questions who gives it meaning and relevance. It was important to us and the Hewlett team that we engage a range of views for the evaluation. The foundation’s approach to grant-making and the trust and respect they have for their grantees enabled us to quickly and easily arrange to speak with a wide range of people from the Global South and Global North and garner a range of honest and divergent views.

Another important aspect of Hewlett’s feminist approach to grant-making is their commitment to general support grants, which honour grantees’ autonomy to pursue their own priorities within their own contexts and provide them with the financial support needed to build, strategise, network and grow. The challenge is that grants of this type often lack stringent reporting requirements, which makes them difficult to evaluate. On the one hand, it is a very feminist approach to grant-making to not overburden grantees with reporting and to invest in them at a foundational level rather than through project deliverables. However, it does make traditional evaluation methods, such as document reviews, tricky.

In our data production process, we did not weight or value certain types of data over others; rather, we used multiple information collection tools (reviews of internal documents and external literature, key informant interviews, focus group discussions, in-depth discussions with Project Officers, and survey data collection) and triangulated data in participatory synthesis and analysis workshops with the evaluation team.

Just as there are multiple ways of knowing, there are diverse ways in which people engage in knowledge sharing. Hewlett itself has a very flat structure and a dedication to inclusion and participation of staff at all levels within its own program teams. We tried to ensure that there were multiple ways in which people could engage throughout the process by sharing pre-reads, creating space for discussions, allowing time to comment and engage meaningfully with shared documents. Despite the restrictions of remote working, we encouraged interactive engagement with foundation staff and with evaluation participants through online facilitation tools. We regularly reflected and sought feedback on our engagement, evolving and pivoting our approach to meet the needs of participants with different styles. We recognised this is a continuous process that takes time and care and there is always room for improvement.

Finally, we consciously and carefully navigated the inevitable conflicts that arose around “knowing”. Members of both the evaluation team and the Hewlett team had diverse, and sometimes opposing, views on what constituted “evidence” and whose knowledge counted. It took time, patience, and empathy to address these points of tension; key to this was creating space for all team members to listen and be heard, reflect, and come to a consensus.

Speaking truth to power

Evaluation is a political exercise; it is neither neutral nor merely technical. Feminist evaluation therefore demands that we reflect on the power structures and the political nature of our work.

As a team, we brought this political awareness to our client relationships, the language we used, the ‘evidence’ we generated, and our own positionalities. We were careful as evaluators to acknowledge our own biases, and to continuously challenge those biases throughout the evaluation. One of the main ways that we did this was by having a ‘whole team’ approach to the synthesis and analysis of evidence: multiple reviewers synthesised the data, analysis was conducted in workshops, and we had healthy debate and honest discussions on the final findings, key lessons and implications going forward.

While “speaking truth to power” is often a relatively easy principle to put into practice within the evaluation team, where power is shared and the relationship can be a relatively “horizontal” one, it is often much harder to achieve with client partners, where the relationship is more “vertical”. Many clients want to maintain more power in the relationship. It was thus critical that we, as evaluators, created an environment where the Hewlett team could safely and productively ask the hard questions, hear the hard messages, challenge assumptions and work through tensions together.

Conclusion 

We have learned that to be committed feminist evaluators, we must examine every evaluation opportunity for the possibility of shifting power and reversing gender inequities. Our experience shows that promoting feminist values and practices requires manoeuvring and influencing complex operational and political arrangements, drawing on a set of skills that are not always associated with evaluative practice. 

Although we knew it before, this experience has underscored for us that taking a feminist approach to evaluation is time consuming and hard work! It requires empathy, humility and patience as well as methodological excellence. But we think it’s worth it, and so do our client partners. 
