THOUGHTS ON USING COLLABORATIVE EVALUATION STRATEGIES

April 9, 2010

I have been asked on several occasions when I would employ collaborative evaluation strategies and when other evaluation methods would suffice. I cannot offer a straightforward, simple answer.

My choice of evaluation strategy depends on a number of factors. An important one is the anticipated or articulated level of involvement the client organization wants or needs for the evaluation to be successful. Here, the concept of success, in my mind and in the minds of some evaluation theorists, such as Michael Quinn Patton, is the utilization of evaluation results. But I digress; let's get back on track.

In my experience, collaborative evaluation methods, such as participatory evaluation (see Earl & Cousins) and empowerment evaluation (see Fetterman), have been quite fruitful in enhancing the utility of evaluation findings. In fact, I prefer these methods to others based on my familiarity with them and the successes I have had with them.

However, during a recent evaluation of a high-stakes credentialing program, I chose a systems-based approach that examined the critical pathway of program activities, again based on client needs and on what I thought would bring about the greatest use of evaluation results. A primary focus of this evaluation was the assessments involved in the credentialing program. The systems-based approach allowed an investigation of program activities within the greater scope of a 40,000-member organization spread across a large geopolitical area: Canada.

The client organization, represented by its national governing body, provided information on program activities and various sources of outcome data, such as assessment results, but wished to leave the nuts and bolts of the evaluation to the experts, who in this case were Dr. Singh and me. The hands-off approach of the client organization was prudent, as presenting the evaluation to its members as an objective, unbiased inquiry was key to the use of evaluation results. Collaboration between the evaluators and the client organization did occur in a basic sense, but only when logistical support was needed to collect member-related data. Collaboration in the design and essential operations of the evaluation did not occur.

So what happened with the use of evaluation results? As planned, the evaluation was presented as an objective, unbiased inquiry to the executive board of the organization and to one of the other key governing committees that oversee the critical pathway of the program. Though some of the recommendations we provided called for paradigm shifts and fairly radical departures from traditional processes, the client organization and its power-wielding committees embraced the evaluation results as "ground-breaking," "essential," "important," and "thorough," and have begun the action-planning stage to implement several of the recommendations.

So in this case, a collaborative evaluation strategy would not have enhanced the utility of the evaluation results. On a personal level, the feedback I received from the evaluation was some of the most professionally and personally gratifying I have ever received. I have the utmost confidence in the client organization to carry forward the action planning and implement the recommendations to improve its program.