pay special attention to the denominator.

EXAMPLE
A local job-training program reports a 40 percent drop-out rate. The denominator for this program’s rate is the number who have completed the initial training and signed the program contract. Thus, the drop-out rate is NOT based on the number who initially enroll in the program, but rather on the number who enroll, complete the course, and sign the contract. Half of the initial enrollees do not reach that stage.

ILLUSTRATIVE DATA
1. Number who enter the program from January to June: 200
2. Number who complete the course and sign the contract: 100
3. Contract-signing rate: 50 percent (100/200 = 50 percent)
4. Number who drop out before job placement: 40
5. Drop-out rate for contract signers: 40 percent (40/100 = 40 percent)
6. Drop-out rate for ALL enrollees: 70 percent (140/200 = 70 percent)
7. Program completion (placed in a job): 60
8. Completion rate of contract signers: 60 percent (60/100 = 60 percent)
9. Job retention one year after placement: 30 participants
10. Job retention rate: 50 percent (30/60 = 50 percent)
11. Job retention percentage of all participants who enroll: 15 percent (30/200 = 15 percent)

LESSON
Different rates have different denominators. Different denominators yield different rates. Programs define and calculate drop-out and completion rates differently, which makes comparisons difficult.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Explore how the program computes key indicators like participation, completion, and drop-out rates.

BOTTOM LINE
Be clear about the denominator being used when rates are reported.

© 2017 Otto Bremer Trust

SMART GOALS
NOT ALL GOALS ARE CREATED EQUAL. Traditionally, evaluation has been synonymous with measuring goal attainment. The most basic evaluation question is: To what extent is the program attaining its goals? To evaluate goal attainment, goals have to be clear enough to permit evaluation.
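The denominator arithmetic in the job-training example earlier can be sketched in a few lines of code. This is a minimal illustration only; the counts come from the ILLUSTRATIVE DATA above, and the variable names are hypothetical.

```python
# Illustrative counts from the job-training example (hypothetical data).
enrolled = 200               # entered the program from January to June
signed = 100                 # completed the course and signed the contract
dropped_after_signing = 40   # dropped out before job placement
placed = 60                  # completed the program (placed in a job)
retained = 30                # still employed one year after placement

# The same event ("dropping out") yields very different rates
# depending on which denominator is used.
dropout_rate_signers = dropped_after_signing / signed                      # 40/100
dropout_rate_all = (enrolled - signed + dropped_after_signing) / enrolled  # 140/200

retention_rate_placed = retained / placed      # 30/60
retention_rate_enrolled = retained / enrolled  # 30/200

print(f"Drop-out rate (contract signers): {dropout_rate_signers:.0%}")
print(f"Drop-out rate (all enrollees):    {dropout_rate_all:.0%}")
print(f"Retention rate (those placed):    {retention_rate_placed:.0%}")
print(f"Retention rate (all enrollees):   {retention_rate_enrolled:.0%}")
```

The point of the sketch is simply that each rate is a different fraction of a different denominator, so reporting the rate without naming the denominator is ambiguous.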
A CLEAR GOAL HAS FIVE DIMENSIONS, WHICH FORM THE ACRONYM SMART:
—— Specific
—— Measurable
—— Achievable
—— Relevant
—— Time bound

EXAMPLES
Weak goal: Improve quality of life. This goal is vague and general (not specific). What is meant by quality of life? How would it be measured? What’s the timeframe?

SMART goal: Graduates will get a job paying a living wage with benefits and keep the job for at least a year.
—— The outcome is specific (get and keep a certain kind of job)
—— The goal is measurable (living-wage job with benefits)
—— The goal is achievable (the level of aspiration is reasonable)
—— The outcome is relevant (the goal is aimed at the chronically unemployed; getting and keeping a living-wage job is relevant to both participants and society)
—— The goal is time bound (keep the job at least one year)

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
When reviewing goals, examine whether they are SMART.

BOTTOM LINE
Goal statements vary tremendously. Not all are SMART.

DISTINGUISHING OUTCOMES FROM INDICATORS
EVALUATION DEPENDS ON IMPORTANT DISTINCTIONS. ONE SUCH DISTINCTION IS OUTCOMES VS. INDICATORS. An outcome is a clear statement of the targeted change. An indicator is a measurement of the outcome.

EXAMPLES OF TYPES OF OUTCOMES AND ILLUSTRATIVE INDICATORS
—— Change in circumstances: Number of children in foster care who are safely reunited with their families of origin
—— Change in status: Number of unemployed who become employed
—— Change in behavior: Number of former truants who regularly attend school
—— Change in functioning: Measures of increased self-care among nursing home residents
—— Change in attitude: Score on an instrument that measures self-esteem
—— Change in knowledge: Score on an instrument that measures understanding of the needs and capabilities of children at different ages

An indicator is just that, an indicator. It’s not the same as the desired outcome, but only an indicator of that outcome.
A score on a reading test is an indicator of reading capability but should not be confused with a particular person’s true capacity to read. Many kinds of things affect a test score on a given day. Thus, indicators are inevitably approximations. They are imperfect and vary in validity and reliability.

Figuring out how to measure a desired outcome is called operationalizing the outcome. The resources available for measurement will greatly affect the kinds of data that can be collected for indicators. For example, if the desired outcome for abused children is no subsequent abuse or neglect, regular in-home visits and observations, including interviews with the child, parent(s), and knowledgeable others, would be desirable, but such data collection is expensive. With constrained resources, one may have to rely on data collected routinely by government through mandated reporting—that is, official, substantiated reports of abuse and neglect over time. Moreover, when using such routine data, privacy and confidentiality restrictions may limit the indicator to aggregate results quarter by quarter rather than one that tracks specific families over time.

Another factor affecting indicator selection is the demands data collection will put on program staff and participants. Short-term interventions such as food shelves, recreational activities for people with developmental disabilities, drop-in centers, and one-time community events do not typically engage participants with a high enough dosage level to justify collection of sophisticated data. Many programs can barely collect data on end-of-program status, much less follow-up data six months after program participation. Programs may need to develop the capacity to measure outcomes.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Examine the clarity of proposed outcomes and the meaningfulness of indicators.

BOTTOM LINE
Outcomes are the desired results; indicators are how you know about outcomes.
The key is to make sure that the indicator is a reasonable, useful, and meaningful measure of the intended participant outcome.

PERFORMANCE TARGETS
WHAT’S THE BULL’S-EYE? A performance target specifies the level of outcome that is hoped for, expected, or intended. What percentage of participants in employment training will have full-time jobs six months after graduation? 40 percent? 65 percent? 80 percent? What percentage of fathers failing to make child support payments will be meeting their full child support obligations within six months of intervention? 15 percent? 35 percent? 60 percent?

Setting performance targets should be based on data about what is possible. The best basis for establishing future performance targets is past performance. “Last year we had 65 percent success. Next year we aim for 70 percent.” Lacking data on past performance, it may be advisable to wait until baseline data has been gathered before specifying a performance target. Arbitrarily setting performance targets without some empirical baseline may create artificial expectations that turn out unrealistically high or embarrassingly low. One way to avoid arbitrariness is to seek norms for reasonable levels of attainment from other, comparable programs, or to review the evaluation literature for parallels. Just making up arbitrary or ambitious performance targets is not very useful.

SEPARATE GOALS FROM INDICATORS AND PERFORMANCE TARGETS.
—— Desired outcome: All children will be immunized against polio.
—— Indicator: Health records when children enter school show whether they have been vaccinated.
—— Performance target: Children receive four doses of IPV: a dose at 2 months, at 4 months, and at 6–18 months, and a booster dose at 4–6 years.

As indicators are collected and examined over time, it becomes more meaningful and useful to set performance targets.
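The separation urged above — desired outcome, indicator, and performance target as three distinct decisions, with the target grounded in a baseline — can be sketched as a simple data structure. This is a minimal sketch; the names (`OutcomeSpec`, `target_met`) and all numbers are hypothetical, echoing the 65-to-70-percent example.

```python
from dataclasses import dataclass


@dataclass
class OutcomeSpec:
    """Keeps the three separate decisions separate (hypothetical structure)."""
    outcome: str      # the targeted change, stated in plain language
    indicator: str    # how the outcome will be measured
    target: float     # performance target, as a proportion
    baseline: float   # past performance, the best basis for the target


def target_met(spec: OutcomeSpec, observed: float) -> bool:
    """Compare an observed indicator value against the performance target."""
    return observed >= spec.target


employment = OutcomeSpec(
    outcome="Graduates hold full-time jobs six months after graduation",
    indicator="Share of graduates employed full time at six-month follow-up",
    target=0.70,    # "Last year we had 65 percent success. Next year we aim for 70."
    baseline=0.65,
)

print(target_met(employment, observed=0.72))  # True: 72 percent meets the 70 percent target
```

Keeping the fields separate makes it easy to revisit each decision on its own merits: swap the indicator (portfolios instead of test scores) or adjust the target as new baseline data arrives, without rewriting the outcome statement.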
EXAMPLE
Consider this outcome statement: Student achievement test scores in reading will increase one grade level from the beginning of first grade to the beginning of second grade. Such a statement mixes together, and potentially confuses, (1) the specification of a desired outcome (better reading), (2) its measurement (achievement test scores), and (3) the desired performance target (one grade-level improvement). Specifying the desired outcome, selecting indicators, and setting targets are separate decisions. They are related, of course, but each should be examined on its own merits. For example, there are ways other than standardized tests to measure achievement, like student portfolios or competency-based tests. The desired outcome should not be confused with its indicator.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Examine the appropriateness and basis of performance targets.

BOTTOM LINE
The challenge is to make performance targets realistic, meaningful, and useful.

QUALITATIVE EVALUATION
QUALITATIVE DATA COMES FROM OPEN-ENDED INTERVIEWS, ON-SITE OBSERVATIONS, FIELDWORK, SITE VISITS, AND DOCUMENT ANALYSIS. Qualitative evaluation uses case studies, systematically collected stories, and in-depth descriptions of processes and outcomes to generate insights into what program participants experience and what difference those experiences make.

Suppose you want to evaluate learning to read. If you want to know how well children can read, give them a reading test (quantitative data). If you want to know what reading means to them, you have to talk with them (qualitative data). Qualitative questions aim at getting an in-depth, individualized, and contextually sensitive understanding of reading for each child interviewed. Of course, the actual questions asked are adapted to the child’s age, language skills, school and family situation, and the purpose of the evaluation.
But regardless of the precise wording and sequence of questions, the purpose is to hear children talk about reading in their own words; find out about their reading behaviors, attitudes, and experiences; and get them to tell stories that illuminate what reading means to them. You might talk to groups of kids about reading as a basis for developing more in-depth, personalized questions for individual interviews. While doing fieldwork (actually visiting schools and classrooms), you would observe children reading and the interactions between teachers and children around reading. You would also observe what books and reading materials are in a classroom and how they are arranged, handled, and used. In a comprehensive inquiry, you would also interview teachers and parents to get their perspectives on the meaning and practice of reading, both for children and for themselves, as models children are likely to emulate.

EXAMPLES OF QUALITATIVE EVALUATION, WITH QUALITATIVE DATA COLLECTED, SYNTHESIZED, AND REPORTED
—— Evaluate the principles that guide work with homeless youth, both to improve effective use of principles and to find out the impacts on youth: Case studies of diverse homeless youth using shelters and youth programs; in-depth interviews with youth, street workers, and shelter or program staff; review of files; focus groups with youth to understand their perspectives and experiences.
—— Evaluate the role of community colleges in rural communities: Interview community college teachers, students, and administrators about their experiences and perspectives; interview key community people and leaders; do case studies of successful students compared to drop-outs.
—— Evaluate a community leadership program: Interviews with program participants about the leadership training, then follow-up community case studies to find out what they do with the training.
—— Evaluate a drop-in center for inner-city Native American youth: Work with Native American leaders to develop culturally appropriate questions. Observe. Interview. Report patterns.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
Develop skills in open-ended interviewing and systematic site visit observations—emphasis on being skilled and systematic. Document what you see and hear. Analyze and synthesize qualitatively.

BOTTOM LINE
Qualitative evaluation captures and communicates the perspectives, experiences, and stories of people in programs to understand program processes and outcomes from their viewpoint.

TRIANGULATION THROUGH MIXED METHODS
ANY SINGLE SOURCE OF DATA, LIKE INTERVIEWS, FOCUS GROUPS, OR SURVEYS, HAS BOTH STRENGTHS AND WEAKNESSES. Using multiple methods increases confidence in overlapping patterns and findings. Checking for consistency across different data sources is called triangulation.

The term triangulation is taken from land surveying. Knowing a single landmark only locates you somewhere along a line in a direction from the landmark, whereas with two landmarks you can take bearings in two directions and locate yourself at their intersection. The notion of triangulating also works metaphorically to call to mind the world’s strongest geometric shape—the triangle. The logic of triangulation rests on the premise that no single method ever adequately solves the problem of how much the weakness of any particular method may produce a false or inadequate result. Because different kinds of data reveal different aspects of a program, multiple methods of data collection and analysis provide more grist for the interpretation mill. Combinations of interviewing, observation, surveys, performance indicators, program records, and document analysis can strengthen evaluation. Studies that use only one method are more vulnerable to errors.
COMBINING QUANTITATIVE AND QUALITATIVE DATA
Statistics tell us about the size or scope of an issue, like the number of homeless youth, how many rural people lack access to quality dental care, or whether the number of children in poverty is increasing or decreasing. Qualitative data tells us what the numbers mean through the perceptions of program participants and staff. Open-ended interviews with program participants, case studies, and site visits provide insights into how to interpret and make sense of the numbers. Stories also put faces on the numbers and humanize statistics so that we never forget that behind the numbers are real people living their lives. Strong evaluations include both quantitative and qualitative data. Triangulating across statistics and stories makes each data source more valuable, meaningful, and credible.

EXAMPLE
A site visit to a housing development turned up statistics on residents’ characteristics, diversity, and income levels, as well as the needs people expressed and stories about living in the housing development. Staff learned that to live in this development “you need to work, be in school, or have formal volunteering occurring.” An evaluation going forward might inquire how this policy works in practice. Statistics would reveal patterns of work, school attendance, volunteering, and resident turnover. Open-ended interviews would find out how residents and staff experience these policies—the attitudes, knowledge, behaviors, and feelings that affect the desired outcome of building a vibrant residential community.

PROPOSAL REVIEW AND SITE VISIT IMPLICATIONS
When reviewing a proposal or conducting a site visit, look for both numbers and stories, and examine the consistency or conflicts between these different data sources.

BOTTOM LINE
The evaluation ideal is: no numbers without stories; no stories without numbers. Learn what each kind of data reveals and teaches, and how to use them together: triangulating.
NOT ALL FINDINGS ARE THE SAME. WHAT’S WORTH PAYING ATTENTION TO? WHAT MATTERS MOST? The most powerful, useful, and credible claims are those that are of major importance and have strong empirical support. Claims can be important or unimportant, and the evidence for the claims can be strong or weak. The ideal is strong evidence supporting claims of major importance.