Sunday, September 20, 2009

Assignment #1a - Youth at Risk Pilot Program

Youth at Risk Pilot Program Evaluation    

     Program evaluation involves “…the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy” (Weiss, 2009). By identifying the evaluation model and assessing the strengths and weaknesses of the Youth at Risk Pilot Program evaluation, I will be able to contribute to the improvement of my own understanding of program evaluation.


One important step in assessing an evaluation of a program is to identify the assessment model. Nuri Muhammad, a consultant for Inna Dynamics, applied components of Stufflebeam’s Context, Input, Process, Product (CIPP) model and Scriven’s goals versus roles model. The external evaluator identified eleven objectives for the Youth at Risk Pilot Program evaluation. The objectives addressed all three types of evaluation: formative, process, and summative. The primary focus of the program evaluation was summative, as indicated in the first objective: “measure how well the program met its goals/objectives” (Muhammad, 2001, p. 4). This objective reflected the product component of Stufflebeam’s CIPP model and Scriven’s assessment of program outcomes. Objectives two through six, and objective eight, also addressed the components of summative evaluation, as they focused on identifying and examining the impact of the program on the target group, as well as the impact of individual program components. Objectives seven and nine, which included implementation issues such as activities and the collaboration of partners, assessed the effectiveness of the process, and corresponded to Stufflebeam’s input and process components and Scriven’s roles component. By applying a mixture of models to evaluate the Youth at Risk Pilot Program, Muhammad was able to assess a variety of the program’s components.

The mixed-models framework used to evaluate the Youth at Risk Pilot Program exhibited both strengths and weaknesses. One of the evaluation’s strengths was the identification of the eleven objectives, which provided a comprehensive framework for the assessment. For each objective, quantitative and qualitative data were included as indicators of how the objective was evaluated and the degree to which the intended outcomes were met. Another strength related to the objectives was the inclusion of the evaluator’s observations for each objective, which provided a summary of the data. Some weaknesses associated with the objectives were the omission of information about how the objectives were determined and about the status of the evaluator-stakeholder relationship.

The methodology utilized in this evaluation included a variety of data collection tools. This was another of the evaluation’s strengths: within the six-week evaluation period, the evaluator was able to collect a variety of data in an efficient manner. Questionnaires, interviews, consultations, group meetings, background program files, and time spent in a variety of the feeder neighbourhoods provided the evaluator with a range of data collection tools. The selected tools supported the collection of data related to the identified evaluation objectives. Some weaknesses associated with the methodology were the omission of sample data collection instruments, of information about the origin of the instruments (commercially produced or original creations), and of whether or not these instruments had been pilot tested.

Overall, Muhammad’s evaluation of the Youth at Risk Pilot Program was an effective approach to assessing a pilot program. The executive summary provided an overview of the evaluation, and the report, written in the language of the stakeholders, focused on the identified objectives of the evaluation. The evaluator’s recommendations addressed the strengths of the pilot program, the lessons learned, and areas for improvement in order to address the outlined objectives of the evaluation. As with all assessment and evaluation, the most important piece is what the stakeholders do with the provided information. Through this process of evaluating a program evaluation, I have made some improvement in my understanding of program evaluation.

1 comment:

  1. Wow Shelly
    It sounds like you stumbled upon the perfect evaluation example. I can only guess about how much was spent on the data gathering. Such an impressive selection of data in such a short period of time. My only concern would be trying to evaluate such a large number of objectives but it sounds like they were able to pull it off. Your analysis of this evaluation is excellent. It is always a bonus when the theoretical approach is stated up front and you do not have to guess at how the evaluator determined their methodology.

    Well done.

    Jay
