ACED 7620 ASU Stanford Teacher Education Program STEP Evaluation Paper



Description

Instructions

Critique one of the following interviews published in the Evaluation in Action textbook, using Chapter 7 as a guideline. I have attached the interview, the guidelines from Chapter 7, and the form.

Please critique the following interview:


Fitzpatrick, J. L., & Fetterman, D. The Evaluation of the Stanford Teacher Education Program (STEP). Evaluation in Action, pages 97–128.

Chapter 7 • Decision-Oriented Evaluation Approaches

Decision-oriented evaluation approaches were designed to address the problems that evaluations encountered in the 1970s—being ignored and having no impact. These approaches are meant to serve decision makers. Their rationale is that evaluative information is an essential part of good decision making and that the evaluator can be most effective by serving administrators, managers, policymakers, boards, program staff, and others who need good evaluative information. The three major decision-oriented approaches or methods we will review here are the CIPP model, which takes a systems approach to the stages of program development and the information needs that may occur at each stage; utilization-focused evaluation (UFE), which identifies primary users and works closely with them to identify information needs and conduct the study; and performance monitoring, which is not truly evaluation but provides information to managers to help in decision making and has been advocated by well-known evaluators. CIPP and utilization-focused evaluation are rather different—the first is system and stage oriented, while the second is people oriented. But they share a firm goal of improving decision making in schools, nonprofits, and government. You will find elements of each that can be helpful in improving your own evaluations.

Developers of Decision-Oriented Evaluation Approaches and Their Contributions

Important contributions to the decision-oriented approach have been made by many evaluators. In education, Daniel Stufflebeam was a leader in developing an approach oriented to decisions. In the mid-1960s, Stufflebeam (1968) recognized the shortcomings of available evaluation approaches. Working to expand and systematize thinking about administrative studies and educational decision making, Stufflebeam (1968) made the decision(s) of program managers, rather than program objectives, the pivotal organizer for the evaluation. This made him one of the first evaluation theorists to focus on use. In the approaches proposed by him and other theorists (e.g., Alkin, 1969), the evaluator, working closely with an administrator, identifies the decisions the administrator must make, based on the stage of the program, and then collects sufficient information about the relative advantages and disadvantages of each decision alternative to allow a fair judgment based on specified criteria. The success of the evaluation rests on the quality of teamwork between evaluators and decision makers.

Michael Patton, with his utilization-focused approach, was another leader in focusing evaluations on decisions and use. In 1978, he published the first book on UFE. Patton argued that the first task of the evaluator was to identify a key user, often a manager with interest in the evaluation and with the authority and interest to make decisions with it.

The Decision-Oriented Approaches

The CIPP Evaluation Model

Stufflebeam (1971, 2004b, 2005) has been an influential proponent of a decision-oriented evaluation approach structured to help administrators make good decisions. He defines evaluation as "the process of delineating, obtaining, reporting and applying descriptive and judgmental information about some object's merit, worth, probity, and significance to guide decision making, support accountability, disseminate effective practices, and increase understanding of the involved phenomena" (Stufflebeam, 2005, p. 61). This definition expands on his original definition in 1973, when he first developed the CIPP model, but is essentially quite similar. Then, he defined evaluation more succinctly as "the process of delineating, obtaining, and providing useful information for judging decision alternatives" (Stufflebeam, 1973b, p. 129). The newer definition emphasizes the importance of judging merit and worth, something that was central to evaluation in 1973. But his 2005 definition also emphasizes the currency of accountability, dissemination, and understanding in today's world of evaluation. However, the essentials of his CIPP model remain the same and, today, are used widely in the United States and around the world in educational evaluation. He developed this evaluation framework to serve managers and administrators facing four different kinds of decisions:

1. Context evaluation, to serve planning decisions: Determining what needs are to be addressed by a program and what programs already exist helps in defining objectives for the program. Context evaluation, as the name implies, concerns studying the context for a program that has not yet been planned: What are the needs and problems of students or clients? What assets or qualifications does the organization have to address these needs? What should be the goals and intended outcomes for a program?

2. Input evaluation, to serve structuring decisions: After defining needs and considering organizational assets and potential interventions, input evaluation helps managers to select a particular strategy to implement to resolve the problem and to make decisions about how to implement it.

3. Process evaluation, to serve implementing decisions: Once the program has begun, the important decisions concern how to modify its implementation. Key evaluation questions are: Is the program being implemented as planned? What changes have been made? What barriers threaten its success? What revisions are needed? As these questions are answered, procedures can be monitored, adapted, and refined.

4. Product evaluation, to serve recycling decisions: What results were obtained? How well were needs reduced? What should be done with the program after it has run its course? Should it be revised? Expanded? Discontinued? These questions are important in judging program attainments.

The first letters of the four types of evaluation—context, input, process, and product—form the acronym CIPP, by which Stufflebeam's evaluation model is best known. Table 7.1 summarizes the main features of the four types of evaluation, as proposed by Stufflebeam (2005, p. 63). As a logical structure for designing each type of evaluation, Stufflebeam (1973a) proposed that evaluators follow these general steps:

A. Focusing the Evaluation
1. Identify the major level(s) of decision making to be served, for example, local, state, or national; classroom, school, or district.
2. For each level of decision making, project the decision situations to be served and describe each one in terms of its locus, focus, criticality, timing, and composition of alternatives.
3. Define criteria for each decision situation by specifying variables for measurement and standards for use in the judgment of alternatives.
4. Define policies within which the evaluator must operate.

E. Reporting of Information
1. Define the audiences for the evaluation reports.
2. Specify the means for providing information to the audiences.
3. Specify the format for evaluation reports and/or reporting sessions.
4. Schedule the reporting of information.

F. Administration of the Evaluation
1. Summarize the evaluation schedule.
2. Define staff and resource requirements and plans for meeting these requirements.
3. Specify the means for meeting the policy requirements for conducting the evaluation.
4. Evaluate the potential of the evaluation design for providing information that is valid, reliable, credible, timely, and pervasive (that will reach all relevant stakeholders).
5. Specify and schedule the means for periodic updating of the evaluation design.
6. Provide a budget for the total evaluation program (p. 144).

Evolution of the CIPP Approach. The CIPP model has had the most staying power of any early evaluation model. Its principles have remained solid: the focus on serving decisions, judging merit and worth, the four stages of a program reflecting the importance of context in considering evaluation questions, and an emphasis on standards and use. Its focus has traditionally been on program improvement. Stufflebeam, building on Egon Guba, writes, "evaluation's most important purpose is not to prove but to improve" (2004b, p. 262). He notes that his modification is not to exclude proving as a purpose, but to acknowledge that the primary purpose is improvement. With CIPP, Stufflebeam has always emphasized using multiple methods, both qualitative and quantitative—whatever methods are most appropriate for measuring the construct of interest.

Nevertheless, as Stufflebeam noted in 2004, "the CIPP model is a work in progress" (2004b, p. 245). The approach has been influenced by changes in evaluation practice and learning, as the model has been implemented in many different settings over the years. Although the original CIPP model focused very much on managers as the primary stakeholders, today's CIPP recommends involving many stakeholders, though the focus remains on decisions. The evaluator remains in firm control of the evaluation, but, Stufflebeam writes, "evaluators are expected to search out all relevant stakeholder groups and engage them in communication and consensus-building processes to help define evaluation questions, clarify evaluative criteria; contribute needed information; and reach firm, defensible conclusions" (2005, p. 62). Similarly, Stufflebeam is more forthright today in acknowledging that evaluation occurs in a political environment and that values play a key role. He writes, "Throughout my career, I have become increasingly sensitive to evaluation's political nature. Evaluators must regularly seek, win, and sustain power over their evaluations to assure their integrity, viability, and credibility" (2004b, pp. 261–262).

Stufflebeam's wheel (see Figure 7.1) illustrates the impact of core values on each evaluation activity. The evaluation, he writes, should be grounded in these values, which include "ideals held by society, group, or individual" and "provide the foundation for deriving and/or validating particular evaluative criteria" for judging the program or for making decisions and "provide the basis for selecting/constructing the evaluation instruments and procedures, accessing existing information," and other evaluation decisions (Stufflebeam, 2004b, p. 250).

Stufflebeam's work and his approach have added elements that differ from other approaches. His emphasis is practical, improving programs through improving decisions.
He has written about and advocated many practical tools, including means for negotiating contracts, use of stakeholder panels for review and input, development of professional standards, and metaevaluations—the evaluation of evaluations. He established the Evaluation Center at Western Michigan University, whose web site includes many tools and checklists for evaluation approaches and tasks, including information on developing budgets, contracts, and negotiating agreements. See http://www.wmich.edu/evalctr/checklists/checklistmenu.htm

Significant Contributions of CIPP. Alkin and Christie, in their review of evaluation theories, use a tree with three main branches—use, methods, and valuing—to illustrate the many different evaluation theories. They place Stufflebeam at the root of the "use" branch and write that "Stufflebeam's CIPP model is one of the most well-known of these [use] theories" (2004, p. 44). The CIPP approach has proved appealing to many evaluators and program managers, particularly those at home with the rational and orderly systems approach, to which it is clearly related. Perhaps its greatest strength is that it gives focus to the evaluation. Experienced evaluators know how tempting it is simply to cast a wide net, collecting an enormous amount of information, only later to discard much of it because it is not directly relevant to the key issues or questions the evaluation must address. Deciding precisely what information to collect is essential. Focusing on informational needs and pending decisions of managers limits the range of relevant data and brings the evaluation into sharp focus. This evaluation approach also stresses the importance of the utility of information. Connecting decision making and evaluation underscores the very purpose of evaluation. Also, focusing an evaluation on the decisions that managers must make prevents the evaluator from pursuing fruitless lines of inquiry that are not of interest to the decision makers.

CIPP was instrumental in showing evaluators and program managers that they need not wait until an activity or program has run its course before evaluating it. In fact, evaluation can begin when ideas for programs are first being discussed. Because of lost opportunities and heavy resource investment, evaluation is generally least effective at the end of a developing program. But today's emphasis on outcomes and impact has reduced evaluation's role at the planning stages. Nevertheless, particularly when purposes are formative, examining issues concerning context, input, and process can be helpful in identifying problems before they have grown and in suggesting solutions that will work better at achieving outcomes. For example, process studies may identify ways that teachers or other program deliverers are implementing a program, such as deviating from the intended activities because they are not working or are not feasible. Discovering these new methods, modifying the program model to conform to the new methods, and training others in them can help achieve program success.

Although the program stages used by CIPP indicate that the evaluation should focus on the stage of the program and that different questions arise at different stages, another advantage of the approach is that it encourages managers and evaluators to think of evaluation as cyclical, rather than project based. Like performance monitoring, evaluating programs at each stage can provide a "continual information stream to decision makers to ensure that programs continually improve their services" (Alkin & Christie, 2004, p. 44, analyzing the CIPP approach).

Nevertheless, as we will discuss further in our review of decision-making approaches, CIPP is not without its critics. Of principal concern is that, although the current model encourages participation from many stakeholders, the focus is typically on managers. Other stakeholders, who may not have explicit decision-making concerns, will necessarily receive less attention in defining the purposes of the evaluation, the means of data collection, and the interpretation of results.

The UCLA Evaluation Model

While he was director of the Center for the Study of Evaluation at UCLA, Alkin (1969) developed an evaluation framework that closely paralleled some aspects of the CIPP model. Alkin defined evaluation as "the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decision-makers in selecting among alternatives" (p. 2). Alkin's model included the following five types of evaluation:

1. Systems assessment, to provide information about the state of the system (similar to context evaluation in the CIPP model)
2. Program planning, to assist in the selection of particular programs likely to be effective in meeting specific educational needs (similar to input evaluation)
3. Program implementation, to provide information about whether a program was introduced to the appropriate group in the manner intended
4. Program improvement, to provide information about how a program is functioning, whether interim objectives are being achieved, and whether unanticipated outcomes are appearing (similar to process evaluation)
5. Program certification, to provide information about the value of the program and its potential for use elsewhere (similar to product evaluation)

As Alkin (1991) has pointed out, his evaluation model made four assumptions about evaluation:

1. Evaluation is a process of gathering information.
2. The information collected in an evaluation will be used mainly to make decisions about alternative courses of action.
3. Evaluation information should be presented to the decision maker in a form that he can use effectively and that is designed to help rather than confuse or mislead him.
4. Different kinds of decisions require different kinds of evaluation procedures.

Utilization-Focused Evaluation

Utilization-focused evaluation (UFE) is a well-known approach that is based on two assumptions: (a) the primary purpose of evaluation is to inform decisions; and (b) use is most likely to occur if the evaluator identifies one or more stakeholders who care about the evaluation and are in a position to use it. Patton calls the latter "the personal factor" and defines it as "the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates" (2008a, p. 66). The personal factor is a central element of UFE. Patton first identified it as a factor critical to use in a study of use he conducted in the mid-1970s. In that study he interviewed evaluators and users of 20 federal health evaluations to learn the factors that contributed to the use of the evaluation. Patton and his colleagues had identified 11 potential factors from a review of the literature, such as methodological issues, political factors, and the nature of the findings (positive, negative, surprising). They found that when asked about the single factor that most influenced use, two factors consistently emerged: political considerations and what Patton now calls the personal factor, the presence of an individual or group who cared about the evaluation and its results. Patton's UFE approach has built on these findings, helping the evaluator identify these individuals and work closely with them to achieve use.

Patton defines UFE as "a process for making decisions and focusing an evaluation on intended use by intended users" (1994, p. 317). Similarly, in his most recent edition of Utilization-Focused Evaluation, he defines UFE as "evaluation done for and with specific intended primary users for specific, intended uses" (2008a, p. 37). His decision focus is further confirmed by his definition of evaluation: "Program evaluation is undertaken to inform decisions, clarify options, identify improvements and provide information about programs and policies within contextual boundaries of time, place, values, and politics" (2008a, p. 40).

Although Patton sees UFE as a type of participatory approach, because of the focus on working with a key stakeholder or group of key stakeholders, he acknowledges that many place it among the decision-oriented approaches (Patton, 1994). We have placed UFE in this chapter because of its focus on an intended use, typically a decision. Patton does make use of intensive primary stakeholder involvement to achieve the intended use because, like Cousins and Earl (1992, 1995), Greene (1988), and others, he believes that involving stakeholders increases their sense of ownership in the evaluation, their knowledge of it, and, ultimately, their use of the results.

The first step in UFE concerns identifying the intended user or users—individuals who care about the study and its results. This step is, of course, central to achieving the personal factor. Given today's focus on networks and collaboration, Patton emphasizes that a careful stakeholder analysis, to identify the right stakeholders for the evaluation, is more important than ever. He suggests considering two important factors in identifying primary stakeholders: (a) interest in the study, and (b) power in the organization and/or power in the program or policy to be evaluated (Eden & Ackerman, 1998). Of course, the ideal stakeholder would be high on both, but a stakeholder with both interest and connections to others with power can be more useful than a powerful stakeholder with low or no interest. The latter may fail to attend important meetings, respond to messages, or participate in meaningful ways, thus harming the overall quality of the study and its credibility to others in the organization.

To help these primary users think about their needs for the evaluation, Patton indicates that he pushes users "to be more intentional and prescient about evaluation use during the design phase" (2008a, p. 146). He also suggests questions to ask these intended users to help them consider decisions, the feasibility of affecting them, and the type of data or evidence that would be most likely to have an effect. The remaining stages of UFE concern involving these stakeholders in the conduct of the study.
This might include anything from identifying their questions of interest, which would then serve as the focus of the study, and considering how they would use the information obtained, to involving them in the design and data collection stages, making sure they understand the methodology and that the choices made reflect their values and produce credible results that are useful to them. In the final stage, the primary stakeholders in UFE are involved in interpreting the results and making decisions about judgments, recommendations, and dissemination. The nature of the interaction between the evaluator and the primary intended users during these stages is very important in securing the personal factor. The evaluator is developing a personal relationship with the primary users to meet their needs and sustain their interest in the evaluation.

Evaluability Assessment and Performance Monitoring

Joseph Wholey, like Michael Patton and Daniel Stufflebeam, has been prominent in the evaluation world for many years. Stufflebeam's work, however, lies mainly in education. Patton's is concerned with individual programs in schools and in social welfare settings. Wholey's influence and work has been with the federal government, starting with his work with the U.S. Department of Health, Education, and Welfare (HEW) in the 1970s. His focus is federal policy decision making. But like Patton and Stufflebeam, his goal is to have evaluation improve decisions. Therefore, he has developed several methods over the years to improve the utility of evaluation. We briefly review some of his major efforts here.

Evaluability assessment was developed by Wholey to prevent costly evaluations from being conducted when programs were not, in fact, ready for evaluation. Unlike Stufflebeam, who advocates evaluation during the context and input stages to help in program planning, Wholey's focus is typically on program outcomes (Wholey, 2004a, 2004b). In fact, most of his decision makers—federal policy makers—were not operating programs, and thus were not making formative decisions for program improvement; instead, they were making summative decisions regarding program funding, initiation, and continuation (M. Smith, 2005). Thus, Wholey's work at the federal level presents a stark contrast to the CIPP and UFE approaches, which are designed to work with policymakers and managers who are closer to the program being evaluated. During his early work with HEW, he and his colleagues were concerned that many evaluations were not being used. One reason for this, they wrote, was that the people implementing the programs had not had the opportunity to work things out, to clearly define what they were doing, to try them out, and to consider what information they needed from an evaluation. Therefore, he proposed evaluability assessment to help improve the likelihood that when evaluations were conducted, the program would, in fact, be ready for evaluation. In order to be ready for evaluation, the evaluability assessment was designed to determine the following:

The Evaluation of the Stanford Teacher Education Program (STEP): An Interview With David Fetterman

Introduction: David Fetterman was a member of the faculty of the School of Education and Director of the MA Policy Analysis and Evaluation Program at Stanford University at the time he conducted this evaluation. He is currently Director of Evaluation in the Division of Evaluation in the School of Medicine at Stanford University. He is the past president of the American Evaluation Association. Fetterman has received the highest honors from the association, including the Lazarsfeld Award for evaluation theory and the Myrdal Award for evaluation practice. Fetterman has been a major contributor to ethnographic evaluation and is the founder of empowerment evaluation. He has published 10 books and more than 100 articles, chapters, and reports, including contributions to various encyclopedias. His most recent books include Empowerment Evaluation Principles in Practice and Ethnography: Step by Step. He is President of Fetterman & Associates, an international consulting firm, conducting work in Australia, Brazil, Finland, Japan, Mexico, Nepal, New Zealand, Spain, the United Kingdom, and the United States.

This interview concerns Fetterman's complex, three-year evaluation of the Stanford Teacher Education Program (STEP). In this evaluation, Fetterman chose to use an approach other than his well-known empowerment approach. He describes the reasons for his choice, which provides guidance as to the conditions necessary to use an empowerment approach. As a member of the Stanford University education faculty, though not a member of STEP, Fetterman served in a partially internal evaluator role and discusses some of the problems he encountered in that role. He describes the methods he used, including intensive immersion, surveys, interviews, reviews of literature, and discussions with experts in other teacher education programs, to judge the quality of delivery of STEP and some of the conclusions he reached.

Summary of the STEP Evaluation
David Fetterman

The president of Stanford University, Gerhard Casper, requested an evaluation of the Stanford Teacher Education Program (STEP). The first phase of the evaluation was formative, designed to provide information that might be used to refine and improve the program. It concluded at the end of the 1997–1998 academic year. Findings and recommendations from this phase of the evaluation were reported in various forms, including a formal summer school evaluation report (Fetterman, Dunlap, Greenfield, & Yoo, 1997), more than 30 memoranda, and various informal exchanges and discussions. The second stage of this evaluation was summative in nature, providing an overall assessment of the program (Fetterman, Connors, Dunlap, Brower, Matos, & Paik, 1999). The final report highlights program evaluation findings and recommendations, focusing on the following topics and issues: unity of purpose or mission, curriculum, research, alumni contact, professional development schools/university-school partnerships, faculty involvement, excellence in teaching, and length of the program. Specific program components also were highlighted in the year-long program evaluation report, including admissions, placement, supervision, and portfolios. (See the STEP Web site for copies of all evaluation reports: www.stanford.edu/davidf/step.html.)

The Methodology

The evaluation relied on traditional educational evaluation steps and techniques, including a needs assessment; a plan of action; data collection (interviews, observations, and surveys); data analysis; and reporting of findings and recommendations. Data collection involved a review of curricular, accreditation, and financial records, as well as interviews with faculty and students, and observations of classroom activity. Informal interviews were conducted with every student in the program. Focus groups were conducted with students each quarter and with alumni from the classes of '95, '96, and '97. More than 20 faculty interviews were conducted. Survey response rates were typically high (90%–100%) for master teachers, current STEP students, and alumni. Data collection also relied on the use of a variety of technological tools, including digital photography of classroom activity, Web surveys, and evaluation team videoconferencing on the Internet. Data analysis was facilitated by weekly evaluation team meetings and frequent database sorts. Formal and informal reports were provided in the spirit of formative evaluation. Responses to preliminary evaluation findings and recommendations were used as additional data concerning program operations. (A detailed description of the methodology is presented in Fetterman, Connors, Dunlap, Brower, & Matos, 1998.)

Brief Description of STEP

STEP is a 12-month teacher education program in the Stanford University School of Education, offering both a master's degree and a secondary school teaching credential. Subject area specializations include English, languages, mathematics, sciences, and social studies. The program also offers a Cross-Cultural, Language, and Academic Development (CLAD) emphasis for students who plan to teach second-language learners. The 1997–1998 class enrollment was 58 students. Tuition and board were approximately $30,000. The program introduces students to teaching experiences under the guidance of a master teacher during the summer quarter. Students enter the academic year with a nine-month teaching placement, which begins in the fall quarter under the supervision of a cooperating teacher and field supervisor. Students also are required to take the School of Education master's degree and state-required course work throughout the year. The program administration includes a faculty sponsor, director, placement coordinator, student services coordinator, lead supervisor, field supervisors, and a program assistant. In addition, the program has a summer school coordinator/liaison and part-time undergraduate and doctoral students.

Findings, Recommendations, and Impact

The most significant finding was that the STEP program had some of the ingredients to be a first-rate teacher education program, ranging from a world-renowned faculty to exceptional students. At the time of the evaluation, the program and faculty had a unique opportunity to raise the standard of excellence in the program and the field. The evaluation identified some noteworthy qualities of STEP. These included high-caliber faculty and students, supportive and critical supervision, the year-long student teaching experience, a culminating portfolio conference, and strong support from alumni. Nevertheless, problem areas were identified. Key among these was the lack of a unifying purpose to shape the program. Related to the absence of a clear vision for the program was the fact that faculty designed their courses in isolation from each other and from the central activities of STEP, leading to a fragmented curriculum and a lack of connection between educational theory and practice. Instructional quality was occasionally a problem, particularly as students expect to have faculty they can view as models for exemplary teaching. Students also received no systematic research training to help them develop an inquiry-based approach to teaching. Finally, the program may need to be lengthened to accomplish all that is desired.

Final recommendations included developing a mission statement focusing on reflective practice; instituting faculty meetings and retreats to design, revise, and coordinate curriculum and course content; reducing fragmentation in the curriculum and developing a rationale for course sequencing, including more content on classroom practice to balance educational theory; developing a research training pro
