Module 11: Monitoring and Evaluation
CSR Campaigns, Spring 2022 (AS.480.642.81)
For more information, contact me, Kellie Cummings, at: or post your questions to the Syllabus/Assignment Discussion.

An Iterative Process
Iterate: (v.) to perform or utter repeatedly.
“The Card Players”: sketch and final painting by Paul Cezanne.

Measurement and Evaluation: Criteria and Tools

Lush Cosmetics Example

OBJECTIVE #1: Partner with the True Colors Fund to raise awareness of LGBT youth homelessness and its negative impacts on individuals and families by 30% nationwide over a two-year period.

CRITERIA
YOUTH: At least 30% more LGBT youth understand the negative effects of homelessness.
PARENTS: At least 30% of parents understand the risk of LGBT youth homelessness and its negative impacts.

TOOLS
YOUTH:
• Pre-, post-, and mid-campaign surveys with a representative sample of 1,000 LGBT youth contacted through True Colors social media accounts, utilizing influencers to engage youth in the survey.
• Measure traffic to and engagement with the accounts that were created specifically for this campaign.
• Track conversions from specialized campaign accounts (demonstrated by youth who sign up to learn more).
PARENTS:
• Pre-, post-, and mid-campaign surveys with a representative sample of 1,000 parents contacted through Facebook parent groups.

Social Media Measurement

Your Homework
Describe your measurement and evaluation approach for each objective. At minimum, show alignment across your objective, measurement criteria, and measurement tool.
Note: The section following this slide presents additional measurement tools. These are not necessary for you to earn a good grade on this project.
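The survey-based criteria in the Lush example reduce to a simple pre/post comparison. Here is a minimal sketch, assuming the "30% more" target is read as a percentage-point change in awareness; the function name and survey figures are invented for illustration:

```python
def awareness_point_change(pre_aware, pre_total, post_aware, post_total):
    """Percentage-point change in awareness between pre- and post-campaign surveys."""
    return (post_aware / post_total - pre_aware / pre_total) * 100.0

# Hypothetical results: 420 of 1,000 youth aware pre-campaign,
# 730 of 1,000 aware post-campaign.
change = awareness_point_change(420, 1000, 730, 1000)
print(f"Awareness changed by {change:.1f} percentage points")
```

If the objective is instead read as a relative increase, divide the change by the pre-campaign rate rather than subtracting; the choice should be stated alongside the measurement criterion.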
Additional Measurement Resource #1: AMEC
AMEC: Out-takes
AMEC: Output
AMEC: Outcomes
AMEC: Impacts

Additional Resource #2: Flash Cards

Your Homework
Describe your measurement and evaluation approach for each objective. At minimum, show alignment across your objective, measurement criteria, and measurement tool. Alternatively, you may follow the AMEC Integrated Evaluation Framework.

EVALUATION FLASH CARDS
Embedding Evaluative Thinking in Organizational Culture
Developed by Michael Quinn Patton, Utilization-Focused Evaluation, St. Paul, Minnesota
Updated March 2017

INTRODUCING THE EVALUATION FLASH CARDS
As part of our ongoing work to strengthen our support for communities, the trustees and staff of the Otto Bremer Trust engaged in a series of learning seminars on evaluation. In order to make the core concepts easily accessible and retrievable, we asked Michael Quinn Patton, who led the seminars, to create a set of basic reference cards. These became the Evaluation Flash Cards presented here, with the idea that a core concept can be revisited “in a flash.” Illustrations of the concepts are drawn from Otto Bremer Trust grants. We hope this resource is useful to other organizations committed to understanding and improving the results of the programs they support. These cards are not intended to be definitive, universally applicable, or exhaustive of possibilities.

CONTENTS
01. Evaluative Thinking
02. Evaluation Questions
03. Logic Models
04. Theory of Change
05. Evaluation vs. Research
06. Dosage
07. Disaggregation
08. Changing Denominators, Changing Rates
09. SMART Goals
10. Distinguishing Outcomes From Indicators
11. Performance Targets
12. Qualitative Evaluation
13. Triangulation Through Mixed Methods
14. Important and Rigorous Claims of Effectiveness
15. Accountability Evaluation
16. Formative Evaluation
17. Summative Evaluation
18. Developmental Evaluation
19. The IT Question
20. Fidelity or Adaptation
21. High-Quality Lessons Learned
22. Evaluation Quality Standards
23. Complete Evaluation Reporting
24. Utilization-Focused Evaluation
25. Distinguish Different Kinds of Evidence

ABOUT THE AUTHOR
Michael Quinn Patton is an independent evaluation consultant with 40 years of experience conducting evaluations, training evaluators, and writing about ways to make evaluation useful. He is former president of the American Evaluation Association and recipient of both the Alva and Gunnar Myrdal Award for outstanding contributions to evaluation use and practice and the Paul F. Lazarsfeld Award for lifetime contributions to evaluation theory, both from the American Evaluation Association. The Society for Applied Sociology honored him with the Lester F. Ward Award for outstanding contributions to applied sociology. He is the author of six books on evaluation, including Essentials of Utilization-Focused Evaluation (2012).

PERMISSION AND CITATION
The Otto Bremer Trust permits use of these Evaluation Flash Cards for non-commercial purposes, subject to full attribution (see the suggested citation reference below). For permission to use this material for commercial purposes, please contact the Trust at 651-227-8036 or
Citation reference: Patton, Michael Quinn (2014). Evaluation Flash Cards: Embedding Evaluative Thinking in Organizational Culture. St. Paul, MN: Otto Bremer Trust,
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

1. EVALUATIVE THINKING
DISTINGUISH EVALUATIVE THINKING FROM EVALUATION.
Evaluation is an activity. Evaluative thinking is a way of doing business.
Evaluative thinking is systematic, results-oriented thinking about:
—— What results are expected,
—— How results can be achieved,
—— What evidence is needed to inform future actions and judgments, and
—— How results can be improved in the future.

Evaluative thinking becomes most meaningful when it is embedded in an organization’s culture. This means that people in the organization expect to engage with each other in clarifying key concepts, differentiating means and ends, thinking in terms of outcomes, examining the quality of evidence available about effectiveness, and supporting their opinions and judgments with evidence.

Evaluative thinking is what characterizes learning organizations. Keeping up with research and evaluation findings becomes part of everyone’s job. Inquiring into the empirical basis for assertions about what works and doesn’t work becomes standard operating procedure as people in the organization engage with each other and interact with partners and others outside the organization. Critical thinking and reflection are valued and reinforced.

Infusing evaluative thinking into organizational culture involves looking at how decision makers and staff incorporate evaluative inquiry into everything they do as part of ongoing attention to mission fulfillment and continuous improvement. Integrating evaluation into organizational culture means “mainstreaming evaluation”—that is, making it central to the work rather than merely an add-on, end-of-project paperwork mandate.

INDICATORS THAT EVALUATIVE THINKING IS EMBEDDED IN AN ORGANIZATION’S CULTURE
—— Evaluative thinking permeates the work, with conscious and constant reflection on project, program, regional, and organizational experience and the intention to implement improvements based on what is learned.
—— Evaluative thinking is demonstrated in the implementation of well-focused programs and in the use of high-quality evaluations that feed into program and organizational decision making.
—— Time and resources are allocated for reflection on evaluation findings and using those findings.

The antithesis of evaluative thinking is treating evaluation as a check-it-off compliance activity.

EVALUATIVE THINKING EMBEDDED AND VALUED AS A WAY OF DOING BUSINESS:
—— Thinking about what kinds of information are most needed for learning and improvement.
—— Reflecting together on evaluation findings, learning lessons, and applying them in future decisions.

EVALUATION AS A COMPLIANCE ACTIVITY:
—— Focusing on evaluation contract requirements and procedures.
—— Checking off that evaluation reports have been submitted and filed.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Pay attention to how, and how much, evaluative thinking is manifest, embedded, and valued.

BOTTOM LINE
Practice evaluative thinking. Like any important skill, evaluative thinking improves with practice and reinforcement.

© 2017 Otto Bremer Trust

2. EVALUATION QUESTIONS
BEGIN WITH BASIC DESCRIPTION.

Evaluation supports reality testing — finding out what is actually going on in a program. This can then be compared to what was intended and hoped for. But the first step is basically descriptive.

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
— Rudyard Kipling (1865–1936), The Elephant’s Child

For professionals as diverse as journalists, police detectives, lawyers, and evaluators, Kipling’s five Ws and one H is the formula for full understanding and a complete report. These are descriptive, factual, and open-ended questions. None can be answered “yes” or “no.” You have to find out what happened.

When first entering a program situation (for example, on a site visit), it can be helpful to begin with some basic facts to get the lay of the land. Keep it simple: Who’s proposing to do what? Where? When? How? Why?
EXAMPLE: A JOB-TRAINING PROGRAM

Who (program description): The target population is chronically unemployed people of color. The staff consists of “coaches” and trainers selected for their capacity to work with this population.
Parallel evaluation questions: Who does the program actually serve? How does the actual population served compare to the targeted population?

What (program description): Train participants in both “soft skills” and “hard skills” to get living-wage jobs with benefits in companies the program has cultivated.
Parallel evaluation questions: What training do participants actually receive? How does the training received compare to the proposed training? What do companies report about the skills of participants hired?

Where (program description): The main program operates in two local offices.
Parallel evaluation questions: How does the location of the program affect its operation? Strengths and weaknesses of location?

How (program description): The program uses an “empowerment curriculum” that engages participants in being accountable, responsible, and successful. Building on empowerment, the program offers skill training matched to the needs and interests of participants and the job needs of companies.
Parallel evaluation questions: How does the curriculum work in practice? What are participants’ reactions? What is evidence of “empowerment,” of acquisition of “soft” and “hard” skills, and of alignment between companies’ needs and program participants’ skills?

Why (program description): Evaluation of successful employment programs shows that the combination of positive attitudes, appropriate behaviors for the workplace, and training in skills needed by companies leads to successful outcomes.
Parallel evaluation questions: To what extent does the program reproduce the results documented in previous evaluations? How do the results of this program compare to other models?

When (program description): Participants are generally in the program for 18 months to 2 years. The intended outcome is retention of a living-wage job with benefits for at least one year.
Parallel evaluation question: To what extent is the intended outcome actually attained?
PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Use the full set of descriptive questions to get a comprehensive picture of what’s being proposed.

BOTTOM LINE
Ground evaluation in basic descriptive questions.

© 2017 Otto Bremer Trust

3. LOGIC MODELS
LOGIC MODELS CAN BE DISPLAYED AS A SERIES OF LOGICAL AND SEQUENTIAL CONNECTIONS. EACH STEP IN A LOGIC MODEL CAN BE EVALUATED.

A logic model is a way of depicting the program intervention by specifying inputs, activities, outputs, outcomes, and impacts in a sequential series.

EXPLANATIONS OF SOME OF THE TERMS USED IN LOGIC MODELS
—— Inputs are resources like funding, qualified staff, participants ready to engage in the program, a place to hold the program, and basic materials to conduct the program. These inputs, at an adequate level, are necessary precursors to the program’s activities.
—— Participating in program activities and processes logically precedes outputs, like completing the program or getting a certificate of achievement.
—— Outputs lead to short-term participant outcomes, like a better job or improved health.
—— Short-term outcomes lead to longer-term impacts, like a more prosperous or healthy community.

INPUTS/RESOURCES → ACTIVITIES/PROCESSES → OUTPUTS/PRODUCTS → SHORT-TERM OUTCOMES → LONG-TERM IMPACT

Logic models are one way of answering the It question in evaluation. The logic model depicts what is being evaluated. The primary criteria for judging a logic model are whether the linkages are logical and reasonable.
1. Are the inputs (resources) sufficient to deliver the proposed activities?
2. Will the proposed activities lead to the expected outputs?
3. Do the outputs lead logically and reasonably to the outcomes?
4. Will successful outcomes lead to desired impacts?

NOT LOGICAL AND REASONABLE: Attending an after-school drop-in center will increase school achievement.
LOGICAL AND REASONABLE: Participating in an after-school drop-in center will help keep kids out of trouble after school.
NOT LOGICAL AND REASONABLE: A safe house for victims of domestic abuse will lead to jobs.
LOGICAL AND REASONABLE: A safe house for domestic abuse victims will provide support and stability to enable participants to figure out next steps and get referrals for longer-term help.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Does the proposal include a logic model? If so, is it reasonable and logical? Do the steps make sense?

BOTTOM LINE
Is the proposed logic model sequence from inputs to impacts logical and reasonable?

© 2017 Otto Bremer Trust

4. THEORY OF CHANGE
TESTING A THEORY OF CHANGE CAN BE AN IMPORTANT CONTRIBUTION OF EVALUATION.

A theory of change explains how to produce desired outcomes. It is explanatory. A logic model just has to be sequential (inputs before activities, activities before outcomes), logical, and reasonable. In contrast, a theory of change must explain why the activities produce the outcomes.

EXAMPLE
A program to help homeless youth move from the streets to permanent housing proposes to:
1. Build trusting relationships with the homeless youth;
2. Work to help them feel that they can take control of their lives, instill hope, and help them plan their own futures; and
3. Help them complete school, both for their economic well-being and to help them achieve a sense of accomplishment.

This approach is based on resilience research and theory, which posits that successful youth: (1) have at least one adult they trust and can interact with, (2) have a sense of hope for the future, (3) have something they feel good about that they have accomplished, and (4) have at least some sense of control over their lives.

The issue that arises in examining a proposal based on a theory of change is whether the proposed program activities constitute a practical and reasonable implementation of the theory. Does the program provide specific and concrete experiences that reflect the theory of change?
The key conceptual and real-world challenge is translating a theory of change into an actual implemented program with real outcomes.

Evaluation of a program with an explicit theory of change is sometimes called “theory-driven evaluation” because the evaluation can be a test of the theory. If the program fails to produce the predicted outcomes, the critical interpretive and explanatory issue becomes: Did the program fail because the theory was inadequately implemented, or because the theory itself was inadequate? This is the difference between implementation failure and theory failure, a longstanding and important distinction in evaluation.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
How explicit and articulate is the program’s theory of change?

BOTTOM LINE
Can a program identify a theory of change based on research and, if so, can it demonstrate how it will translate the theory into an actual program?

© 2017 Otto Bremer Trust

5. EVALUATION VS. RESEARCH
EVALUATION AND RESEARCH HAVE DIFFERENT PRIMARY PURPOSES, BUT THE STATE OF RESEARCH KNOWLEDGE AFFECTS WHAT EVALUATION CAN CONTRIBUTE.

Evaluation generates improvements, judgments, and actionable learning about programs. Research generates knowledge about how the world works and why it works as it does. Scientific research is undertaken to discover knowledge, test theories, and generalize across time and space. Program evaluation is undertaken to inform decisions, clarify options, identify improvements, and provide information about programs and policies within contextual boundaries of time, place, values, and politics. Research informs science. Useful evaluation supports action.

Research informs evaluation in that the more knowledge that exists about a problem, the more an evaluation can draw on that knowledge. For example, research shows that children immunized against polio do not get polio.
Therefore, evaluation of an immunization program can stop at determining that children have been immunized and confidently calculate how many cases of polio have been prevented based on epidemiological research. The evaluation design does not have to include follow-up to determine whether immunized children get polio. That question has been settled by research.

A program aimed at getting senior citizens to exercise to improve their health does not have to prove that exercise improves health and contributes to a longer, higher-quality life. Health research has demonstrated that. Evaluation of the exercise program, then, only has to demonstrate that it is effective in getting seniors to exercise at the levels shown by research to be effective.

In contrast, there is little research on homeless youth. The knowledge gap is huge. So evaluation has to be more developmental and exploratory because the research foundation is weak.

RESEARCH: Purpose is testing theory and producing generalizable findings.
EVALUATION: Purpose is to determine the effectiveness of a specific program or model.

RESEARCH: Questions originate with scholars in a discipline.
EVALUATION: Questions originate with key stakeholders and primary intended users of evaluation findings.

RESEARCH: Quality and importance judged by peer review in a discipline.
EVALUATION: Quality and importance judged by those who will use the findings to take action and make decisions.

RESEARCH: Ultimate test of value is contribution to knowledge.
EVALUATION: Ultimate test of value is usefulness to improve effectiveness.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Find out if research supports a program proposal. Have those submitting the proposal done their homework in finding out and taking into account what research shows?

BOTTOM LINE
Distinguish research from evaluation. Use research to inform both program and evaluation designs.

© 2017 Otto Bremer Trust

6. DOSAGE
DIFFERENT DEGREES OF INTERVENTION AND ENGAGEMENT PRODUCE DIFFERENT LEVELS OF OUTCOMES.
Dosage effects refer to the fact that different people engage in and experience a program with different degrees of intensity. A higher dose of engagement should be related to higher-level outcomes.

EXAMPLE
A youth community center reports serving 300 kids each quarter.

QUESTION
What are the different degrees of dosage for those 300 kids?

DATA
High dosage/high outcomes: Thirty kids come to the center after school every day. They have important, ongoing relationships with staff. They benefit greatly from the staff’s mentoring, homework help, personal support, and individualized problem solving.
Medium dosage/medium outcomes: Fifty kids come to the center about once a week for a specific program, like a volunteer program that helps them improve reading; they get some modest help on a specific outcome (reading).
Low dosage/minimal outcomes: Another 220 kids come once a quarter for pizza night or a Friday night dance. This is a source of recruiting and connection to the community, but it is really outreach rather than “serving” those kids.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Explore how aware the program is of variations in dosage and the implications of those variations.

BOTTOM LINE
Watch for and understand dosage effects. All programs have them.

© 2017 Otto Bremer Trust

7. DISAGGREGATION
WHAT WORKS FOR WHOM IN WHAT WAYS WITH WHAT RESULTS?

Subgroups in programs have different experiences and different outcomes. Disaggregation refers to distinguishing the experiences and outcomes of different subgroups.

EXAMPLE
A program aims to prevent teenage pregnancies. The program typically reports aggregate results for all teens served (ages 13–19). The reported success rate is 60 percent, which means that 60 percent of the teens do not get pregnant during the year they are engaged in the program.
DISAGGREGATED DATA
—— Success rate for teens aged 16–19: 80 percent
—— Success rate for teens aged 13–15: 40 percent

LESSON
The overall 60 percent success rate for all teens disguises the fact that the program is highly effective with older teens and relatively ineffective with younger teens. Indeed, some outcomes are different. The program works to help older teens maintain safe and supported independence but attempts to get younger teens integrated into a family, either their own or a foster family. In reality, the two subgroups constitute different approaches with different results. The disaggregated data can help decision makers target improvements to the subgroups for whom the program is less effective—and learn from those that show higher levels of impact.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Explore the capacity of the program to disaggregate data for learning, management, and reporting.

BOTTOM LINE
When looking at overall results for a program, ask about the disaggregated results for important subgroups.

© 2017 Otto Bremer Trust

8. CHANGING DENOMINATORS, CHANGING RATES
DIFFERENT DENOMINATORS PRODUCE DIFFERENT RESULTS.

To understand and interpret data on rates and performance indicators, like the participation rate in a program, the drop-out rate, or the completion rate, pay special
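As a concrete illustration of the disaggregation card, the aggregate and subgroup success rates can be computed from individual records. This is a sketch with invented data mirroring the teen-pregnancy example; the record layout and the success_rates helper are hypothetical, not part of the flash cards:

```python
from collections import defaultdict

def success_rates(records):
    """Compute the aggregate success rate and per-subgroup rates
    from (subgroup, succeeded) records."""
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [successes, total]
    for subgroup, succeeded in records:
        counts[subgroup][0] += int(succeeded)
        counts[subgroup][1] += 1
    overall = sum(s for s, _ in counts.values()) / sum(n for _, n in counts.values())
    return overall, {g: s / n for g, (s, n) in counts.items()}

# Hypothetical cohort matching the card's numbers: 100 older teens
# (80 successes) and 100 younger teens (40 successes).
records = [("16-19", i < 80) for i in range(100)] + \
          [("13-15", i < 40) for i in range(100)]
overall, by_group = success_rates(records)
# The 60 percent aggregate hides the 80 percent / 40 percent split.
```

Reporting `by_group` alongside `overall` is what keeps an aggregate rate from disguising where a program is and is not working.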
