What outcomes is the model expected to produce?
—— What evidence will be generated about how the model works?

BOTTOM LINE
Be clear about the It (the model) in a proposal. That's what will be evaluated.

© 2017 Otto Bremer Trust

FIDELITY OR ADAPTATION

DIFFERENT APPROACHES TO DISSEMINATING MODELS REQUIRE DIFFERENT EVALUATION APPROACHES.

Two opposing approaches to implementing a model have very different evaluation implications. The two approaches follow.

1. Fidelity-focused programming and evaluation means a national model is being implemented in a local community and is supposed to be implemented exactly as prescribed in the national model. Fidelity-focused program models provide best practices and standard operating procedures that amount to a recipe for success. A McDonald's Big Mac is supposed to be the same anywhere in the world.

Core evaluation questions:
—— Is the local program faithfully and rigorously implementing the standard model as specified?
—— Is the local program getting the results promised by the national model?

2. Adaptation-focused programming and evaluation means a national model offers principles and guidance, but local implementation will be adapted to fit the local context. The Pew Children's Dental Campaign is an example: a national approach to bridging the gap between coverage and care that provides an overarching framework for research and policy engagement, but one that has to be adapted to each statewide context.

Core evaluation questions:
—— How is the national framework being adapted locally?
—— What are the implications of these adaptations for outcomes?
—— Is the local adaptation getting the results promised by the national model?

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
When funds are requested to implement a model being done elsewhere, find out whether implementation is expected to be fidelity-focused or adaptive in nature.
High fidelity will typically require capacity development and technical support from those who have developed and implemented the model elsewhere. This usually includes already-developed evaluation instruments and tools.

Adaptation will typically require an astute understanding of the local context and the capacity to articulate how the local situation will influence the adaptive process and local outcomes.

BOTTOM LINE
Distinguish fidelity-focused proposals from adaptation-focused proposals. The implications for programming and evaluation are substantial.

HIGH-QUALITY LESSONS LEARNED

LESSONS CAN BE GENERATED FROM ALL KINDS OF EXPERIENCES AND DATA. HIGH-QUALITY LESSONS ARE THOSE THAT ARE SUPPORTED BY DIVERSE TYPES OF EVIDENCE.

High-quality lessons are supported by multiple sources of information. Knowledge confirmed from multiple sources increases confidence that a lesson is valid and can be used to inform decisions and future actions.

A common problem when an idea becomes highly popular — in this case the search for lessons learned — is that the idea loses its substance and meaning. Anybody who wants to glorify his or her opinion can proclaim it a "lesson learned." High-quality lessons, in contrast, represent principles extrapolated from multiple sources and cross-validated to inform future action. In essence, high-quality lessons constitute validated, credible, trustworthy, and actionable knowledge.

Places to look for potential lessons
1. Evaluation findings—patterns across programs
2. Basic and applied research findings
3. Cross-validation from multiple and mixed methods, both quantitative and qualitative
4. Reflective practice wisdom based on the experiences and insights of practitioners
5. Insights reported by program participants
6. Expert opinion
7. Cross-disciplinary findings and patterns
8.
Theory as an explanation for the lesson and its mechanism of impact

Assessment criteria for judging the quality of lessons
—— Importance of the lesson learned
—— Strength of the evidence connecting intervention lessons to outcomes attainment
—— Consistency of findings across sources, methods, and types of evidence

The idea is that the greater the number of supporting sources for a "lesson learned," the more rigorous the supporting evidence; and the greater the cross-validation from supporting sources, the more confidence one has in the significance and meaningfulness of a lesson. A lesson with only one type of supporting evidence would be considered a "lessons learned hypothesis."

Nested within and cross-referenced to lessons learned should be the actual cases from which practice wisdom and evaluation findings have been drawn. A critical principle here is to maintain the contextual frame for lessons learned—that is, to keep lessons grounded in their context. For ongoing learning, the trick is to follow future applications of lessons learned in new settings to test their wisdom and relevance over time—and adapt accordingly.

EXAMPLE
The importance of intervening in the preschool years for healthy child development and later school success is supported by numerous evaluations, basic research on child development, expert knowledge, practitioner wisdom, and child development theory. In contrast, lessons about how to work effectively with troubled teenagers are weakly supported in evidence, theory, research, and number of evaluations.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
What lessons are program proposals based on? What are the sources of evidence supporting supposed lessons? To what extent do similar lessons show up in different sites, proposals, and reports?

BOTTOM LINE
Distinguish opinions and single-source lessons from high-quality, cross-validated lessons. The former are hypotheses. The latter constitute actionable knowledge.
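The single-source rule above (one type of evidence yields only a "lessons learned hypothesis") can be illustrated with a short sketch. The function name, evidence labels, and threshold below are illustrative choices of mine, not part of the Trust's guidance:

```python
# Illustrative sketch: classify a "lesson learned" by how many distinct
# types of supporting evidence back it. Per the rule above, a lesson with
# only one type of evidence is treated as a hypothesis, not a validated lesson.

# The eight places to look for potential lessons, as listed in the text.
EVIDENCE_TYPES = {
    "evaluation findings", "research findings", "mixed-methods cross-validation",
    "practitioner wisdom", "participant insights", "expert opinion",
    "cross-disciplinary patterns", "theory",
}

def classify_lesson(evidence: set) -> str:
    """Return a quality label based on the count of recognized evidence types."""
    recognized = evidence & EVIDENCE_TYPES
    if len(recognized) <= 1:
        return "lessons learned hypothesis"  # single-source (or unrecognized) support
    return "high-quality lesson"             # cross-validated by multiple sources

print(classify_lesson({"expert opinion"}))
# -> lessons learned hypothesis
print(classify_lesson({"evaluation findings", "theory", "practitioner wisdom"}))
# -> high-quality lesson
```

The sketch captures only the counting rule; real assessment would also weigh the importance, strength, and consistency criteria listed above.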
EVALUATION QUALITY STANDARDS

EVALUATION CAN AND SHOULD BE EVALUATED. SO WHAT'S A GOOD EVALUATION?

The evaluation profession has adopted standards that serve as criteria for what constitutes a good evaluation. A high-quality evaluation is:
—— Useful
—— Practical
—— Ethical
—— Accurate
—— Accountable

EXAMPLE
A foundation commissions an evaluation of focused work on youth homelessness. The first phase of the evaluation documents that:
—— the targeted number of new beds and services were added to shelters; and
—— the grantees collaborated to design an evaluation of the critical factors that lead to permanent housing and stability for homeless youth.

The grantees and foundation staff use the Phase 1 evaluation findings to develop a proposal for Phase 2. The foundation's trustees use the evaluation findings, and the proposal based on those findings, to inform (1) their decision about whether to fund the next stage of the youth homelessness work and (2) how to shape future work.

The findings are useful—and actually used—because they are practical (concrete conclusions are reported that can be applied to improve programs), ethical (data was gathered in a way that showed respect for youth and the program staff serving them), and accurate (the data is meaningful and the findings are credible). The evaluation was worth what it cost because it was used to improve the work and inform future decision-making (accountability).

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
When a grantee submits evaluation data as part of a proposal, a foundation program officer asks:
1. How do you use the evaluation findings? What changes, improvements, or decisions have you made based on evaluation findings?
2. What is the process for collecting evaluation data? To what extent is the process practical, manageable, and sustainable?
3. How do staff and program participants experience the evaluation process? Do they find it meaningful and respectful?
4.
How is accuracy ensured in data collection? What steps are taken to ensure that the evaluation findings are credible?
5. Based on your evaluation approach (and the answers to the preceding questions), what do you consider to be the strengths and weaknesses of your evaluation process and findings?

Note: These questions are asked only when grantees have made significant evaluation claims as part of the proposal.

BOTTOM LINE
Focus on evaluation use. Don't let evaluation become just compliance reporting.

COMPLETE EVALUATION REPORTING

THE ELEMENTS OF A COMPREHENSIVE EVALUATION REPORT

What? What are the findings? What does the data say?
So what? What do the findings mean? Making interpretations and judgments.
Now what? Action implications and recommendations.

Four distinct processes are involved in making sense of evaluation findings:
1. Analysis involves organizing raw data into an understandable form that reveals basic patterns and constitutes the evaluation's empirical findings, thereby answering the what? question.
2. Interpretation involves determining the significance of and explanations for the findings. This is part one of answering the so what? question.
3. Judgment brings values to bear to determine merit, worth, and significance, including the extent to which the results are positive or negative, good or bad. This is part two of answering the so what? question.
4. Recommendations involve determining the action implications of the findings. This means answering the now what? question.

The graphic below depicts the interrelationships among these four dimensions of evaluation sense making. The three fundamental questions—What? So what? Now what?—are connected to the four evaluation processes of (1) analyzing basic findings, (2) making interpretations, (3) rendering judgments, and (4) generating recommendations.

[Graphic: What? leads to (1) basic findings; So what? leads to (2) interpretations and (3) judgments; Now what? leads to (4) recommendations.]
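As a sketch only, the what / so what / now what structure above can be represented as a minimal report template that keeps the four sense-making processes separate. The class and field names are my own illustrative choices, not a prescribed reporting format:

```python
# Illustrative sketch: a report skeleton that keeps findings, interpretations,
# judgments, and recommendations as distinct, aligned sections.
from dataclasses import dataclass, field

@dataclass
class EvaluationReport:
    findings: list = field(default_factory=list)         # What? (empirical patterns)
    interpretations: list = field(default_factory=list)  # So what? part one (meaning)
    judgments: list = field(default_factory=list)        # So what? part two (merit, worth)
    recommendations: list = field(default_factory=list)  # Now what? (action implications)

    def is_complete(self) -> bool:
        # A comprehensive report addresses all four processes, not just findings.
        return all([self.findings, self.interpretations,
                    self.judgments, self.recommendations])

report = EvaluationReport(findings=["80% of participants completed the program"])
print(report.is_complete())  # -> False (only the "what?" section is filled in)
```

Keeping the sections as separate fields mirrors the reviewer's task: checking that each element is present and logically aligned with the others.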
PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
To what extent does the proposal reflect evaluative thinking? If the program has past reports, do they reflect the distinctions between what, so what, and now what?

BOTTOM LINE
When reviewing an evaluation report, watch for the distinctions between basic findings, interpretations, judgments, and recommendations—and for the logical alignment and consistency among these elements.

UTILIZATION-FOCUSED EVALUATION

MAKE ATTENTION TO USE THE DRIVING FORCE BEHIND EVERY DECISION IN AN EVALUATION.

Utilization-focused evaluation begins with the premise that evaluations should be judged by their utility and actual use. Therefore, evaluators should facilitate the evaluation process and design the evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on intended use by intended users.
—— Who is the evaluation for?
—— How is it intended to be used?

EXAMPLES OF DIFFERENT INTENDED USERS, WITH LIKELY DIFFERENT INFORMATION NEEDS AND USES OF EVALUATION
—— Program staff: evaluation feedback to improve a program (formative evaluation).
—— Program director: summative evaluation findings to decide whether to expand a model to new sites.
—— Government policymakers: accountability evaluation to determine whether funds were spent appropriately as intended, or whether to invest in the program more broadly.

Utilization-focused evaluation does not advocate any particular evaluation content, model, method, theory, or even use. Rather, it is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation.
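The "intended use by intended users" pairing listed above can be recorded as a small lookup table. This is purely an illustrative encoding of the examples in the text; the structure and function name are hypothetical:

```python
# Illustrative encoding of the intended-users/intended-uses examples above.
INTENDED_USES = {
    "program staff": "formative evaluation: feedback to improve the program",
    "program director": "summative evaluation: decide whether to expand the model to new sites",
    "government policymakers": "accountability evaluation: were funds spent as intended?",
}

def intended_use(user: str) -> str:
    """Answer the two driving questions: who is the evaluation for, and for what use?"""
    return INTENDED_USES.get(user.lower(),
                             "unidentified user: clarify intended use before designing")

print(intended_use("Program staff"))
# -> formative evaluation: feedback to improve the program
```

The point of the table is the discipline it forces: if a user is not identified, no intended use can be named, which is exactly the gap a utilization-focused review looks for.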
Situational responsiveness guides the interactive process between evaluator and primary intended users: their interactions focus on fitting the evaluation to the particular situation, with special sensitivity to context. A utilization-focused evaluation can include any evaluative purpose (formative, summative, developmental), any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., naturalistic, experimental), and any kind of focus (processes, outcomes, impacts, costs, and cost-benefit, among many possibilities). Utilization-focused evaluation is a process for making decisions about these issues in collaboration with an identified group of primary users, focusing on their intended uses of evaluation.

A psychology of use undergirds and informs utilization-focused evaluation. Intended users are more likely to use an evaluation if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they have been actively involved. By actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
How utilization-focused is the evaluation portion of a proposal? Is evaluation just compliance and reporting oriented, or does there appear to be a commitment to making evaluation truly useful?

BOTTOM LINE
When reviewing an evaluation proposal or report, is it clear who is intended to use the evaluation and for what purposes?

DISTINGUISH DIFFERENT KINDS OF EVIDENCE

EVIDENCE-BASED PROGRAMS

The label "evidence based" is widely used. The question is: What does it mean, and what's the evidence?
Evidence about program effectiveness involves systematically gathering and carefully analyzing data about the extent to which observed outcomes can be attributed to a program's interventions. Evaluators distinguish three types of evidence-based programs:

1. Single-program summative. Rigorous and credible summative evaluation of a single program.
2. Meta-analysis. Systematic "meta-analysis" (statistical aggregation) of the results of a group of programs all implementing the same model in a high-fidelity, standardized, and replicable manner to determine best practices.
3. Principles-based. Synthesis of the results of a group of diverse programs all adhering to the same principles, but each adapting those principles to its own particular target population within its own context.

TYPE OF EVIDENCE-BASED PROGRAM / EXAMPLE / EVALUATION FOCUS AND FINDINGS

Single-program summative
Example: A local job-training program.
Evaluation focus: Evidence of the model's effectiveness for one particular site. Extensive, systematic, multi-year monitoring and evaluation data, including an external summative evaluation of job placement and retention outcomes, will yield evidence-based conclusions about this particular program.

Meta-analysis
Example: Results of implementing a standardized quality improvement and rating system for childcare providers in multiple sites.
Evaluation focus: Evidence of effectiveness across multiple sites. The quality-rating program is being implemented as a standardized, prescribed model, applying the same criteria and tool to all childcare providers. Systematic aggregate statistical analysis of standardized processes and outcomes will yield evidence-based best practices.

Principles-based synthesis
Example: Youth homelessness work engaging programs operated by six organizations that share common principles and values but operate independently.
Evaluation focus: Evidence of effective principles. Each program is unique and provides different services, but all work from a common set of principles of engagement, even though the implementation techniques built from those principles might vary from program to program. For example, "harm reduction" is a guiding principle. A synthesis of findings from case studies of their processes and outcomes will yield evidence-based effective principles.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
When a program claims to be evidence based, inquire into the nature of the evidence and the type of evidence-based program it aspires or claims to be.

BOTTOM LINE
Evidence-based programs must have evidence, but different kinds of evidence-based programs make different claims. Beware simple opinions masquerading as evidence. Beliefs are beliefs; beliefs about program effectiveness must be evaluated before a program or model can be called evidence based.

© 2017 Otto Bremer Trust
30 E. 7th St. Ste. 2900, St. Paul, MN 55101–2988
Main 651 227 8036 / Toll-free 888 291 1123
OTTOBREMER.ORG

Suggestions

PART 2: Core Problem / Opportunity
1) State the problem of food insecurity and include a statistic about how many children are food insecure. (Food insecurity for children is at an all-time high in the U.S., and one out of six children in L.A. is food insecure.)

CSR Goal: Ex. Our goal is to work toward ending food insecurity for children in L.A.

PART 4: Communication Objectives (I have already done this; it's in the strategy and tactic
