Knowledge, Attitude, and Practice (KAP) surveys are frequently used in healthcare research because of their perceived simplicity in design and execution and the valuable insights they offer. However, early-career researchers should be aware of the complexities involved in creating and implementing these surveys effectively. This article provides practical guidance on conducting KAP surveys, covering aspects such as identifying the need for a survey, defining the target population, formulating knowledge, attitude, and practice questions, developing answer options, designing scoring mechanisms, and validating the survey instrument. Throughout, we use tangible examples to illustrate how this guidance can be applied in diverse situations.
KAP surveys are common in the health sciences. For example, in mental health, these surveys evaluate beliefs and behaviors about specific illnesses or treatments. The popularity of KAP surveys increases during novel events, as shown by the numerous COVID-19 KAP studies conducted among the general population and healthcare workers.1–3 These surveys appear straightforward and affordable, making them accessible to students and researchers. However, a solid foundation in areas like classical test theory, item response theory, reliability, validity, factor analysis, and Rasch analysis is essential for creating robust research instruments.
Beyond theoretical understanding, familiarity with the practical aspects of developing a KAP instrument is crucial. This includes creating a title, writing a concise introduction with instructions, gathering sociodemographic and background data, and crafting questions that assess knowledge, attitudes, and practices. It is also important to consider how to record and score responses. In this article, we provide a brief background on KAP studies and offer simple, practical advice for creating a KAP instrument. Our goal is to give novice researchers essential insights into the field.
Understanding KAP Surveys
KAP surveys emerged in the 1950s in family planning and population research. Today, these surveys are a widely accepted method for studying health behaviors and practices. A KAP survey is a representative survey designed to uncover what a target population knows (knowledge), believes (attitude), and does (practice) regarding a specific topic. Data collection involves using structured or semi-structured questionnaires that are either self-administered or conducted via interviews. The process may involve both qualitative and quantitative data collection.4, 5
The Value of KAP Surveys
KAP surveys are relatively easy to create, conduct, analyze, and interpret. This ease of use has made them popular, especially in public health, where they provide important data for resource allocation, planning, and implementing public health programs. Key reasons for conducting KAP surveys are highlighted in Box 1.5
Ideally, a KAP survey should precede any awareness or intervention program. The survey results provide the data needed to design an effective program and establish baseline data for future evaluation. For example, KAP surveys can assess baseline awareness of mental health and healthcare-seeking behaviors before implementing educational or interventional programs. These surveys can then be repeated post-intervention to evaluate the impact. Interim KAP assessments can also be scheduled to monitor program performance and make necessary adjustments. These assessments can identify and correct potential failures early on.
A KAP survey is necessary if no prior surveys exist in the target population, if there are gaps in existing knowledge, or if there is a specific need, as outlined in Box 1.
Conducting a KAP Study: A Step-by-Step Guide
The following sections describe the essential steps for conducting a KAP study: identifying the topic, selecting the target population, preparing KAP questions, creating answer options, developing a scoring system, and validating the instrument. Our focus is on practical guidance rather than theoretical discussion.
Identifying the Topic and Target Population
The first steps in any KAP study are identifying the topic and selecting the target population. Consider a researcher studying knowledge and attitudes towards electroconvulsive therapy (ECT). To ensure the study’s relevance, the target population must be well-defined. This could include patients with depression, relatives of patients with mental illness (key stakeholders in ECT consent), nurses, psychologists, medical professionals (excluding psychiatrists), or even psychiatrists themselves (who provide guidance about ECT). Because these groups have different professional backgrounds, their knowledge of and attitudes toward ECT will vary. Therefore, the questionnaire must be tailored to each target population.
Effective research addresses an existing need. Therefore, the target population selects itself based on where the need exists. There is no value in surveying a population without a clear need. For example, while there may be limited data on ECT knowledge and attitudes among medical students or bartenders, surveying these groups is unlikely to influence behavior and practice in the field. If feasible, the general population can be a legitimate study group. Alternatively, specific groups like nurses, psychologists, or non-psychiatric medical professionals can be surveyed to answer research questions, identify concerns, and suggest actions.
Crafting Effective Questions
Developing a KAP questionnaire involves framing questions, creating answer options, planning the scoring system, and validating the instrument. The “questions” can be statements or actual questions; we will use “questions” or “items” to refer to both.
Crafting questions to assess knowledge, attitudes, and practices starts with defining the expected knowledge level within the target population. Knowledge assessment is only useful if it influences attitudes and practices. For example, a researcher studying ECT knowledge and attitudes in the general population must decide what specific facts about ECT the public should know to form science-based attitudes. Similarly, a researcher studying ECT knowledge and practice among psychiatrists must determine what they need to know to run a competent ECT clinic. The researcher should have expertise in the field and consult with experienced psychiatrists, ECT experts, patients, or a small sample from the target population.
When assessing ECT knowledge in the general public, questions should cover facts, myths, and misconceptions about its effectiveness and side effects. Details about ECT history, pre-ECT evaluation, and techniques are less relevant. A similar approach applies to assessing attitudes, which are defined as enduring beliefs that influence a person’s responses.6 This strategy also applies to assessing practice or behaviors related to knowledge and attitudes.
Separate questionnaire sections for knowledge, attitudes, and practice are not necessary, because a single item can tap more than one construct. For example, the statement “ECT is an outdated treatment” reflects both a lack of knowledge and a negative attitude.
Careful question framing is crucial. Avoid questions that are too easy or obvious, as well as those that are overly difficult; the questionnaire is not an exam. Avoid questions that can be interpreted in different ways, use complex language, contain technical terms or jargon, are lengthy, address multiple issues at once, or contain double negatives. Avoid structuring questions so that “Yes” is always the ideal answer, as this can create response bias. Also, avoid questions whose answers can be deduced from previous or subsequent questions. While validation can identify these issues, researchers should be proactive.
Developing Answer Options
Carefully develop answer options for knowledge, attitude, and practice items. For example, for the statement “ECT causes brain damage,” options like “True/Don’t know/False” or “Agree/Don’t know/Disagree” can be used. The “Don’t know” option prevents forced responses and reduces unanswered questions.
Avoid offering too many options. For the statement “ECT causes brain damage,” it is not helpful to have options like “Very strongly disagree, Strongly disagree, Disagree, Don’t know, Agree, Strongly agree, and Very strongly agree.” Such a scale increases completion time, makes it difficult to differentiate between adjacent options (e.g., “strongly” vs. “very strongly”), and adds little to the analysis.
For attitude assessments, excessive response options can create a false sense of detail because there is no gold standard for validation. Unless each rating point is anchored, respondents may struggle to justify their choices on a Likert scale, potentially leading to arbitrary responses and decreased test-retest reliability. For example, someone who “Agrees” is likely to respond “Agree” again, but someone who selects “4” might choose “5” the next day. (These suggestions are recommendations, not mandates.)
Avoid answer options that encourage guessing. Finally, include open-ended questions to allow respondents to share thoughts not covered in the questionnaire. Qualitative analysis can be used to analyze responses to open-ended questions.
Designing a Scoring System
A KAP questionnaire should have three subscale scores (knowledge, attitude, and practice) rather than a total score. These constructs are distinct and should not be combined, similar to how depression, quality of life, and daily activities are not summed into one construct. This means each subscale’s reliability, validity, and other psychometric properties must be assessed separately.
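Because each subscale is analyzed on its own, internal-consistency reliability is also estimated per subscale. Below is a minimal sketch in Python (the function and the example data are illustrative, not from any study) that computes Cronbach’s alpha for a single subscale from a respondents-by-items matrix of scored responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one subscale.

    `items` is a respondents x items array of scored responses
    (e.g., 0/1 knowledge scores). Alpha = k/(k-1) *
    (1 - sum of item variances / variance of the total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical scored responses: 5 respondents x 4 knowledge items.
knowledge = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 1],
                      [0, 0, 0, 0],
                      [1, 1, 1, 1],
                      [1, 0, 1, 1]])
print(f"Knowledge subscale alpha: {cronbach_alpha(knowledge):.2f}")
```

The same function would be applied separately to the attitude and practice subscales, never to the pooled items.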
Questionnaire items can overlap across subscales. For example, answering “Yes” to “ECT causes brain damage” indicates both a lack of knowledge and a negative attitude. Similarly, answering “Yes” to “I would not accept ECT if recommended” reflects both a negative attitude and the respondent’s practice.
Scoring knowledge items is generally simple: assign a score of 1 for each correct answer and 0 for each “Don’t know” or wrong answer. This also clarifies why “Agree,” “Strongly agree,” and similar options are not ideal for knowledge items; scoring “Strongly agree” as 2 would imply, illogically, that the respondent knows twice as much as someone who merely “Agrees.”
If all knowledge items have similar difficulty, simple summation can be used to calculate the total knowledge score. However, if some items are more difficult, correctly answering them should carry more weight. In this case, consider advanced statistical models like Rasch analysis.7, 8
Attitude items can be scored by assigning 1 to a positive response and 0 to any other response; summation then produces a total positive attitude score. Scoring negative responses instead yields a total negative attitude score.
Examining summated subscale scores is useful for comparing groups or assessing changes before and after an educational intervention. However, analyzing response patterns to individual items can be more insightful than looking at total scores. For example, knowing the percentage of patients or relatives who believe ECT causes brain damage can be more important than their average knowledge score.
If computing the total score, interpret it in relation to the maximum possible score. If the mean knowledge score is 7 out of a maximum of 14, the average patient answers only half the questions correctly.
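To make these scoring rules concrete, here is a minimal sketch (the item names, answer keys, and responses are hypothetical) that scores knowledge and attitude items as 1/0, sums each subscale separately, and expresses the knowledge total against its maximum possible score:

```python
import pandas as pd

# Hypothetical raw responses; item names and answer keys are illustrative.
responses = pd.DataFrame({
    "k1_brain_damage":   ["False", "Don't know", "True", "False"],
    "k2_safe_treatment": ["True", "True", "Don't know", "True"],
    "a1_would_accept":   ["Agree", "Disagree", "Don't know", "Agree"],
})
knowledge_key = {"k1_brain_damage": "False", "k2_safe_treatment": "True"}
attitude_key = {"a1_would_accept": "Agree"}  # 1 = positive attitude

def subscale_total(df, key):
    """Score 1 for the keyed response and 0 for anything else
    (including 'Don't know'), then sum across the subscale's items."""
    scored = pd.DataFrame({item: (df[item] == keyed).astype(int)
                           for item, keyed in key.items()})
    return scored.sum(axis=1)

knowledge_total = subscale_total(responses, knowledge_key)
attitude_total = subscale_total(responses, attitude_key)

# Interpret totals against the maximum possible score, not in isolation:
# a score of 1 out of 2 means half the knowledge items were answered correctly.
print(knowledge_total / len(knowledge_key))
print(attitude_total)
```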
Avoid setting cut-off scores for total scores. Perfect knowledge is unattainable, so any threshold for how much a patient or participant “should” know is arbitrary.
Scoring and analyzing practice subscale responses should be determined based on the practices being measured.
Validating the Questionnaire
After creating the KAP questionnaire, it must be validated, typically through face and content validation. Face validation assesses whether the instrument is likely to measure what it intends to, while content validation ensures the instrument includes all necessary items, excludes unnecessary items, and is appropriately framed.
For validation, the questionnaire is circulated among experts (typically 3–5), who independently rate each item as satisfactory or unsatisfactory on a validation form. If unsatisfactory, they provide reasons and suggestions. The researcher calculates the content validity index, revises the questionnaire based on the suggestions, and recirculates it for approval.
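A widely used convention, not prescribed here but consistent with this procedure, defines the item-level content validity index (I-CVI) as the proportion of experts who rate an item satisfactory, and the scale-level index (S-CVI/Ave) as the mean of the I-CVIs. A minimal sketch under that assumption, with hypothetical expert ratings:

```python
import numpy as np

# Hypothetical ratings: rows = items, columns = experts;
# 1 = item rated satisfactory, 0 = rated unsatisfactory.
ratings = np.array([
    [1, 1, 1, 1, 0],  # item 1: 4 of 5 experts satisfied
    [1, 1, 1, 1, 1],  # item 2: all experts satisfied
    [1, 0, 1, 0, 1],  # item 3: 3 of 5 experts satisfied
])

i_cvi = ratings.mean(axis=1)  # item-level CVI: proportion endorsing each item
s_cvi_ave = i_cvi.mean()      # scale-level CVI (averaging method)

for item, cvi in enumerate(i_cvi, start=1):
    print(f"Item {item}: I-CVI = {cvi:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```

Items with low I-CVI values are the ones to revise in light of the experts’ suggestions before the questionnaire is recirculated for approval.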
As an additional validation step, administer the questionnaire to a small group (e.g., 5–10) of volunteers from the target population to identify any problems they encounter and gather their suggestions. Incorporate any improvements into the questionnaire.
Conclusion
Creating and conducting an effective KAP study requires careful consideration. This article has emphasized the need to give careful thought to the selection of the target population, the items in the instrument, the answer options for those items, the scoring system, and the validation of the instrument. We hope the practical suggestions offered here will be helpful. Readers interested in learning more about scale development should consult the articles in the “Recommended Reading” section.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Recommended Reading
- Kyriazos TA and Stalikas A. Applied psychometrics: The steps of scale development and standardization process. Psychology 2018; 9: 2531–2560.
- Furr RM. Scale construction and psychometrics for social and personality psychology. London: SAGE Publications Ltd, 2011, http://methods.sagepub.com/book/scale-construction-and-psychometrics-for-social-and-personality-psychology (accessed July 28, 2020).
- Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, and Young SL. Best practices for developing and validating scales for health, social, and behavioral research: A primer. Front Public Health 2018; 6: 149.