Saturday, October 17, 2009

Assignment Two - Student Services Program Description

ECS Programming for Children with Severe Disabilities


Medicine Hat Catholic Separate Regional Division No. 20



In the age of accountability, Individual Program Plans (IPPs), objectives, and goals, it is important that these written plans are actualized in order to support students’ needs. Below is an outline of the programming path for children identified with severe disabilities, ranging in age from 2 years 6 months to under 6 years, in the province of Alberta. The path begins at the government level with written ECS standards and travels down to the goals and objectives contained within each student’s IPP.

Alberta Education: ECS Standards
↓
Medicine Hat Catholic Separate Regional Division No. 20: ECS Programming for Children with Severe Disabilities
↓
School within MHCSRD No. 20: ECS Programming for Children with Severe Disabilities
↓
Individual Program Plans (IPPs) for each identified child

A question that arises from this outline is how written standards become enacted so that the recipients of programming actually experience the intended standards and outcomes. Program evaluation is one way to monitor and assess this issue. The challenge in evaluating ECS Programming for Children with Severe Disabilities, however, is determining whether these young children experience the intended outcomes as outlined in their IPPs. It is therefore important to choose an evaluation model that will be used by those involved in enacting the goals and objectives of the IPPs. An evaluation process well suited to this unique programming is Utilization-Focused Evaluation, or U-FE.

What is Utilization-Focused Evaluation?

Foundational to U-FE is the principle that an evaluation and its findings should focus on the “…intended use by intended users” (Patton, 2002, p. 1). Utilization-focused evaluation “…is a process for making decisions about issues in collaboration with an identified group of primary users focusing on their intended uses of evaluation” (Patton, 2002, p. 1). From the beginning, the evaluator works with the primary users to plan an appropriate evaluation based on the nature and situation of the program. Through this ongoing, interactive process, the evaluator and primary users collaborate to determine the following components of the evaluation:

• Purpose – formative, summative, process

• Data collection – qualitative, quantitative, mixed

• Design – experimental, naturalistic, quasi-experimental

• Focus – inputs, process, outcomes, cost-benefit

This evaluation process is not static; rather, it is guided by situational responsiveness. Utilization-focused evaluation takes a constructivist approach as primary users build their understanding of the process and use of evaluation. Because they are actively involved in the process, primary users are more likely to implement the findings of the evaluation. Although U-FE is based on collaborative and constructivist learning, the process is framed by a twelve-part checklist organized according to the primary tasks of evaluation and the challenges identified for each task.
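To make those decision points concrete, here is a minimal sketch in Python of how a U-FE plan might be recorded. The class name, fields, and example values are my own illustrative assumptions, not part of Patton's checklist; the sketch simply mirrors the four components listed above, plus the identification of primary users that precedes them.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class, fields, and example values are
# assumptions mirroring the four U-FE decision components described
# above; they are not part of Patton's actual checklist.
@dataclass
class UFEPlan:
    primary_users: list      # intended users identified at the outset
    purpose: str             # formative, summative, or process
    data_collection: str     # qualitative, quantitative, or mixed
    design: str              # experimental, naturalistic, or quasi-experimental
    focus: str               # inputs, process, outcomes, or cost-benefit

# A hypothetical plan for the ECS/IPP evaluation discussed in this post.
plan = UFEPlan(
    primary_users=["teachers", "teacher assistants",
                   "child development specialists", "parents"],
    purpose="formative",
    data_collection="mixed",
    design="naturalistic",
    focus="outcomes",
)
print(plan)
```

Each field corresponds to one decision the evaluator and primary users negotiate together, which is the point of the checklist: the plan is built collaboratively rather than imposed.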

Why use U-FE for ECS Programming for Children with Severe Disabilities?

Utilization-focused evaluation would be an effective process for evaluating Medicine Hat’s ECS Programming for Children with Severe Disabilities. The foundational framework for ECS programming throughout Alberta is set out in provincial documents that contain detailed standards for ECS programs. The Standards for the Provision of Early Childhood Special Education, September 2006 and the ECS Special Education Handbook 2008/2009 School Year outline the expectations and processes involved in meeting the needs of these young children, from requirements for eligibility and funding to programming options and program monitoring and evaluation. For these provincial documents to be enacted through the IPPs of identified children, however, it is important to involve the intended users of the evaluation and its recommendations. As U-FE advocates, intended users are more likely to enact recommendations if they are involved in the process.

This evaluation process would establish a collaborative working relationship among the primary users of the findings and recommendations. Primary users, as identified by the U-FE checklist, are people who have a direct stake in the evaluation and meet identified criteria; for this evaluation they could include government representatives, teachers, teacher assistants, child development specialists, and parents. Because this programming is based on individual program plans, evaluating every IPP through U-FE may not be practical. Piloting the process in one school, however, could provide a model of evaluation that could then be implemented by the primary intended users of programming for children with severe disabilities.

Although coordinating a U-FE may be a challenge, the detailed checklist provides a framework for constructing an evaluation and clarifies the roles of the facilitator and the intended users. An evaluation could be conducted at the provincial level, but when the findings will directly affect users and recipients, it is more effective to frame the evaluation from the perspective of the intended users of those findings. A U-FE could thus support the actualization of the written provincial standards within a framework of how those standards will be enacted to meet the individual needs of children with severe disabilities.





Resource Links:

ECS Special Education Handbook 2008/2009 School Year

http://education.alberta.ca/media/842010/ecs_specialedhdbk2008-2009.pdf



Standards for the Provision of Early Childhood Special Education, September 2006

http://www.education.alberta.ca/media/452316/ecs_specialedstds2006.pdf

Minister of Education, Alberta Education, Edmonton, Alberta



Utilization-Focused Evaluation (U-FE) Checklist

http://www.wmich.edu/evalctr/checklists/ufe.pdf

Sunday, September 20, 2009

Assignment #1a - Youth at Risk Pilot Program

Youth at Risk Pilot Program Evaluation    

Program evaluation involves “…the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy” (Weiss, 2009). By identifying the evaluation model and assessing the strengths and weaknesses of the Youth at Risk Pilot Program evaluation, I hope to contribute, in turn, to the improvement of my own understanding of program evaluation.


One important step in assessing an evaluation of a program is to identify the assessment model. Nuri Muhammad, consultant for Inna Dynamics, applied components of Stufflebeam’s Context, Input, Process, Product (CIPP) model and Scriven’s goals-versus-roles model. The external evaluator identified eleven objectives for the Youth at Risk Pilot Program evaluation, addressing all three types of evaluation: formative, process, and summative. The primary focus of the program evaluation was summative, as indicated in the first objective: “measure how well the program met its goals/objectives” (Muhammad, 2001, p. 4). This objective reflected the product component of Stufflebeam’s CIPP model and Scriven’s assessment of program outcomes. Objectives two through six, and objective eight, also addressed summative evaluation, focusing on the impact of the program on the target group as well as the impact of individual program components. Objectives seven and nine, which covered implementation issues such as activities and the collaboration of partners, addressed process evaluation and corresponded to the input and process components of the CIPP model and to Scriven’s roles component. By applying a mixture of models to evaluate the Youth at Risk Pilot Program, Muhammad was able to assess a variety of the program’s components.
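To keep that mapping in one place, the short Python sketch below records which evaluation type the paragraph above assigns to objectives one through nine. The structure is my own illustrative assumption, not something Muhammad's report presents; objectives ten and eleven are not classified above, so they are deliberately omitted.

```python
# Hypothetical restatement of the objective-to-evaluation-type mapping
# described above; Muhammad's report does not present it this way.
# Objectives 10 and 11 are not classified in the text, so they are omitted.
objective_types = {1: "summative (CIPP product; Scriven outcomes)"}
for n in (2, 3, 4, 5, 6, 8):
    objective_types[n] = "summative (impact on target group and components)"
for n in (7, 9):
    objective_types[n] = "process (CIPP input and process; Scriven roles)"

for n in sorted(objective_types):
    print(f"Objective {n}: {objective_types[n]}")
```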

The mixed-model framework used to evaluate the Youth at Risk Pilot Program had both strengths and weaknesses. One of the evaluation’s strengths was the identification of the eleven objectives, which provided a comprehensive framework for the assessment. For each objective, quantitative and qualitative data were included as indicators of how the objective was evaluated and the degree to which the intended outcomes were met. Another strength related to the objectives was the inclusion of the evaluator’s observations for each objective, which provided a summary of the data. Weaknesses associated with the objectives were the omission of information about how the objectives were determined and about the status of the evaluator-stakeholder relationship.

The methodology utilized in this evaluation included a variety of data collection tools. This was another of the evaluation’s strengths: within the six-week evaluation period, the evaluator was able to collect a range of data efficiently. Questionnaires, interviews, consultations, group meetings, background program files, and time spent in a variety of the feeder neighbourhoods gave the evaluator a broad set of data collection tools, and the selected tools supported the collection of data related to the identified evaluation objectives. Weaknesses associated with the methodology were the omission of sample data collection instruments, of information about the instruments’ origin (commercially produced or originally created), and of any indication of whether the instruments had been pilot tested.

Overall, Muhammad’s evaluation of the Youth at Risk Pilot Program was an effective approach to assessing a pilot program. The executive summary provided an overview of the evaluation, and the report, written in the language of the stakeholders, focused on the identified objectives of the evaluation. The evaluator’s recommendations addressed the strengths of the pilot program, lessons learned, and areas for improvement in light of the outlined objectives. As with all assessment and evaluation, the most important piece is what the stakeholders do with the provided information. Through this process of evaluating a program evaluation, I have made some improvement in my own understanding of program evaluation.