Data collection instruments for program evaluation

Knowledge, Attitude, and Practice (KAP) survey: A KAP survey is a quantitative method that uses predefined questions in standardized questionnaires to gather quantitative and qualitative information from a population. KAP surveys uncover misconceptions or misunderstandings that may represent obstacles to the activities we would like to implement and potential barriers to behavior change.

This means that a KAP survey reveals what was said; however, considerable gaps may exist between what is said and what is done.

It is a management tool that yields a set of indicators used to monitor and estimate the results of program activities. The Knowledge, Practice, and Coverage (KPC) survey can be used at any point in a project cycle, and a comprehensive KPC survey training guide is available. It covers a broad range of maternal and child health indicators and can reveal specific household behaviors and care-seeking patterns critical to designing and evaluating interventions.
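As a rough illustration of how survey responses become indicators, the sketch below computes a prompt care-seeking indicator from KPC-style data; the column names and values are invented for the example.

```python
# Hypothetical KPC-style records: one row per surveyed child.
import pandas as pd

df = pd.DataFrame({
    "had_fever_2wk": [1, 1, 0, 1, 1, 0],    # fever in the last two weeks
    "care_within_24h": [1, 0, 0, 1, 1, 0],  # taken to a provider within 24 hours
})

# Indicator: % of children with recent fever taken for care within 24 hours.
fever_cases = df[df["had_fever_2wk"] == 1]
indicator = fever_cases["care_within_24h"].mean() * 100
print(f"Prompt care seeking for fever: {indicator:.0f}%")
```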

Qualitative data are textual and are most often used to answer the "why" of a question, though they can also explore who, where, and what. One type of qualitative inquiry, for example, seeks to find out why some people think or act the way they do. Examples of qualitative methods of data collection include focus group discussions, in-depth interviews, key informant interviews, and participant observation. While qualitative studies are not representative or generalizable (their sampling is intentionally purposive), they can provide a more in-depth analysis of issues related to specific behaviors and practices that may not be captured through quantitative surveys.

The following section goes into detail on a variety of qualitative data collection techniques.

Focus group discussions: A group discussion, often using a semi-structured guide, asking participants to share their opinions and experiences. This form of inquiry is helpful in drawing out social and cultural norms among a group of people who share characteristics associated with the study interest.

In-depth interviews: Asking questions of one person in a private setting in order to understand their perspective on a topic. This technique is an appropriate way to explore sensitive issues that respondents are not likely to speak about openly in a group setting. Resource: Guide for Designing and Conducting In-Depth Interviews for Evaluation Input.

Key informant interviews: A way to gather first-hand knowledge about a topic from an individual who is deemed an authority on it. Resource: Conducting Key Informant Interviews.

Participant observation: A way to collect information about people while spending time in their presence. Participant observation is unique in that it allows researchers to observe verbal as well as non-verbal communication, along with interactions between individuals. Resource: Participant Observation as a Data Collection Method.

Most significant change: Collecting a series of stories about change and systematically selecting the most significant examples.

Examples of SBCC activities at different levels include the following.

Subnational: multimedia; multiple messages based on barrier analysis; community groups.

Subnational: training of maternal and child health community volunteers in health promotion.

Community: mass media; posters; distribution of messaging materials; message dissemination through road shows and established community-based organizations.

To improve the design of SBCC interventions in malaria case management, formative research prior to program design is critical; its results are then used to design a more effective SBCC program. Formative research can serve a range of goals.

Routine monitoring, including process evaluation, is used to track the progress of program activities toward expected goals.

Monitoring identifies areas of excellence and deficiency, which should then inform midcourse corrections. Preliminary successes have the added benefit of boosting the morale and commitment of program staff. Finally, SBCC monitoring can help inform future programs. It is important that the monitoring plan covers both activities and audiences, and that it describes how the data will flow up the system, where they will be stored, and who is responsible for them.

Impact, or outcome, evaluation takes place after the activity is finished; however, it must be planned for at the beginning of the project, as it is most useful when the data are compared to a baseline survey conducted prior to program implementation.

An important concept to consider when thinking about impact evaluation is the difference between correlation and causation. Causation requires that the input of interest (for example, a program activity) occur in time before the outcome of interest (for example, prompt care seeking for children under five by their caregivers).

In contrast, correlation only assesses whether exposure to the project activity occurs together with the desired outcome behavior. As a result, the fact that two variables are correlated does not necessarily mean that the input caused the behavioral outcome.
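To see how correlation can arise without causation, consider a minimal simulation (all variables and numbers are hypothetical): household wealth drives both exposure to radio messages and care seeking, so the two are correlated even though neither causes the other.

```python
# Minimal simulation: a confounder (wealth) produces correlation
# between exposure and behavior without any causal link between them.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
wealth = rng.normal(size=n)                    # confounder
exposed = wealth + rng.normal(size=n) > 0      # exposure tracks wealth
seeks_care = wealth + rng.normal(size=n) > 0   # behavior also tracks wealth

# Exposure and behavior are clearly correlated (about 0.3) ...
print(np.corrcoef(exposed, seeks_care)[0, 1])
# ... yet changing exposure alone would not change behavior here.
```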

There are three main types of impact evaluation research designs: experimental, quasi-experimental, and non-experimental. Only with the experimental design can the team assess causation. Impact evaluations for SBCC need to be designed to answer three questions: (a) Was the program effective? (b) Did it change behavior? (c) How did it change behavior? SBCC program messages influence behaviors indirectly, through the knowledge, attitudes, and beliefs that drive behavioral decisions. Understanding the specific attitudes through which messages affected behavior is important because it makes it possible to take the lessons from a successful program and apply them elsewhere.

One way to establish a link between exposure and behavior is to use self-reported exposure to SBCC messages in household surveys to construct groups of exposed and unexposed individuals. In this approach, a series of household survey questions asks each respondent about their exposure to SBCC messages and to specific program elements such as logos and slogans.
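A rough sketch of how such groups might be constructed and compared follows; the file name, column names, and recall items are hypothetical.

```python
# Sketch: building exposed/unexposed groups from self-reported recall
# of SBCC program elements in a household survey.
import pandas as pd

survey = pd.read_csv("household_survey.csv")  # hypothetical data file

# Count a respondent as exposed if they recall any program element.
recall_items = ["recalled_radio_spot", "recognized_logo", "recognized_slogan"]
survey["exposed"] = survey[recall_items].any(axis=1)

# Compare the behavior of interest across the two groups.
print(survey.groupby("exposed")["care_within_24h"].mean())
```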

Quasi-experimental. Description: exposure is not randomized, so exposed and unexposed groups may not be similar on background characteristics; the association between exposure to the SBCC intervention and behavior change is weaker evidence than with experimental designs.

Non-experimental. Description: exposed and unexposed groups are not compared, and there is only one point of data collection. Example: a cross-sectional survey, in which data are gathered at a single point in time.

The Research Methods Knowledge Base provides a comprehensive overview of these methods.

Resource: How-To Guide, guidance on how to develop a monitoring and evaluation plan.

Dose-response analysis: a caution. As mentioned above, quantitative data collection makes it possible to test whether the more messages audiences recall, the greater the increases in knowledge, attitudes, and behavior. A randomized cluster controlled trial conducted in Burkina Faso attempted to show a dose response between radio broadcasting intensity and reported behavior change, including possible increases in prompt care seeking for fever.

Radio ownership, or access to a radio, is clearly an important consideration. To adjust for this, the study designers broke audiences down into three categories: those with no radio in the compound, a radio in the compound, and a radio in the household. Mid-term results from this study did not, in fact, find a dose-response relationship between the intensity of radio messages and prompt care seeking for fever.
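One simple way to check for such a dose-response trend is to compare care-seeking rates across the ordered access categories and test for a monotonic association. The sketch below is illustrative only; the data file, column names, and choice of trend test are assumptions, not the study's actual analysis.

```python
# Illustrative dose-response check across ordered exposure categories:
# 0 = no radio in the compound, 1 = radio in the compound,
# 2 = radio in the household. Data and names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

survey = pd.read_csv("midterm_survey.csv")  # hypothetical data file
print(survey.groupby("radio_access")["care_within_24h"].mean())

# Rank correlation between access level and behavior as a crude trend test;
# a rho near zero is consistent with no dose-response relationship.
rho, p = spearmanr(survey["radio_access"], survey["care_within_24h"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```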

Higher exposure did not result in proportionally higher care seeking for fever. As radio was the only communication channel used in this study, perhaps the article's most important contribution to understanding SBCC for malaria case management is that it demonstrates the importance of multi-channel communication: reliance on a single channel failed to improve prompt care seeking for fever.

Evaluation questions for such programs might include: Is there an increase in knowledge of behaviors linked to the intervention? What is the prevalence of healthy malaria prevention behaviors in intervention areas? What are the determinants of use of services for different childhood illnesses?

Whether you adapt an existing instrument or create a new one, either approach can pose challenges: tools that have been developed for one evaluation may not prove suitable for another, at least not without careful modification.

At the same time, creating new tools requires expertise in measurement and instrument design. The following websites provide tools and instruments that can be used for evaluating the wide range of outcomes addressed by informal STEM education projects, or that can serve as starting points for modification.

How do you know if an off-the-shelf instrument is appropriate for your needs? When considering the use of an instrument, keep in mind the following.

What is the instrument measuring? Review how the instrument developers define what it is they are measuring. Does it match exactly what you want to measure?

Also look for validity evidence that the instrument measures what it proposes to measure. Validity evidence can come from expert reviews, think-aloud interviews, factor analysis, and other validation techniques.

What audience was the instrument created for and tested with? Instruments are created for a particular audience; if yours matches the one an instrument was designed for, great. For instance, a survey created for adults may or may not be appropriate for children.


