Glossary


Action research – Research or evaluation conducted to promote or change a specific victim service or response practice.

Administrative data – Secondary data originally collected for administrative rather than research purposes, such as information collected by an organization for registration or record-keeping.

Anonymity – An approach to data collection in which no personally identifiable information, such as names or Social Security numbers, is gathered from participants. As a result, participants cannot be linked to their responses, even by the research team.

Assessment – The process of gathering and interpreting detailed information to describe, understand, or evaluate the needs of a person (often using an assessment tool) or the qualities of a program or policy.

Closed-ended question – A question designed to be answered using a pre-specified set of choices, such as “yes” or “no.” Closed-ended questions are quicker to answer and easier to analyze than open-ended questions.

Community-based participatory research – A collaborative approach to research that equitably includes all stakeholders in the research process, particularly members of the community being studied, and recognizes the unique strengths that each brings.

Confidentiality – An approach to data collection in which participants’ personally identifiable information is gathered by the research team, but not made public. Because participants can be linked to their responses, the research team must store this data securely to ensure participants’ privacy.

Correlation – A statistical measure that describes the extent to which a relationship exists between two variables. Two variables that are correlated have a relationship in which a change in one is associated with a change in the other. However, a correlation between two variables does not necessarily mean that one variable causes the changes observed in the other variable.
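As an illustrative sketch only (not part of the glossary's source material), the commonly used Pearson correlation coefficient can be computed as below. The function name and the example data are hypothetical; a coefficient near +1 or −1 indicates a strong association, but, as the definition above notes, not causation.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: hours of counseling received vs. a well-being score.
hours = [1, 2, 3, 4, 5]
score = [10, 12, 15, 16, 20]
r = pearson_r(hours, score)  # near +1: strongly associated, not proof of causation
```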

Cost-benefit analysis – The process of identifying, monetizing, and adding up the positive benefits and negative costs associated with a program, practice, or decision to determine whether it is worthwhile.

Cross-sectional study – A type of study that analyzes data from a population or sample at a single point in time.

Data – Information gathered with the specific purpose of measurement. Data is used for a variety of purposes, but is most commonly used to identify larger trends and expand or test our understanding of particular concepts.

Evaluability assessment – The process of determining whether or not evaluation of a program or practice is justified, feasible, and likely to provide useful information.

Evaluation – The systematic assessment of the implementation, effectiveness, efficiency, cost-effectiveness, and/or attributes of a program, policy, or activity.

Focus group – A form of qualitative research in which a small group of people is chosen to participate in a guided discussion on a particular topic.

Gap analysis – The process of identifying gaps between the services and programs currently provided by an organization and the ideal level of service provision. Gap analyses are typically conducted following a needs assessment to help an organization clarify its desired goals.

Human subjects protections – The ethical obligations of researchers to protect the well-being of people who participate in research. This includes identifying and communicating the risks associated with research participation, as well as the steps that researchers will take to protect human subjects from those risks.

Informed consent – The voluntary informed agreement to participate in a research project. To give informed consent, subjects must understand their rights, the purpose of the study, the procedures or activities associated with the study, and the potential risks associated with participation. Participants must not be coerced into participating and must understand they are free to opt out of the research at any time.

Institutional review board (IRB) – A group of at least five members responsible for reviewing and approving human subjects research within their jurisdiction to ensure adequate protections for subjects. IRBs have the power to approve, deny, monitor, and request modification of research activities.

Logic model – A visual roadmap of what a program or project is intended to achieve and how it is expected to work. Components of a logic model may include problems, goals, objectives, activities, and output measures, as well as short-term and long-term outcomes and impacts.

Longitudinal study – A type of study that analyzes data from the same population or sample across several points in time.

Needs assessment – A systematic process to determine a specific population’s needs, with the goal of developing or improving services to effectively meet those needs. This may be accomplished through information-gathering activities such as surveys, interviews, focus groups, community meetings, and reviewing administrative data.

Open-ended question – A question designed to be answered fully in the respondent’s own words, thoughts, or feelings. Responses to open-ended questions are richer in meaning and detail, but harder to analyze, than responses to closed-ended questions.

Outcome – A specific, measurable result of a research project or program. Examples include increased knowledge and positive changes in the attitudes or behaviors of program participants.

Outcome evaluation – An evaluation that measures a program’s results and determines whether or not it achieved its intended goals. An outcome evaluation answers the question: Did this program change anything?

Performance measurement – The measurement of key outcomes at consistent intervals, which generates reliable data on the effectiveness and efficiency of programs, projects, and policies.

Primary data – Data collected by the person or organization conducting the research specifically for the purposes of that research. Secondary data, by contrast, is data that was originally collected for a different purpose, such as census data.

Process evaluation – An evaluation that measures the progress or implementation of a program or service to determine how closely the activities are implemented as intended, and to identify process barriers, facilitators, and opportunities for correction.

Program evaluation – An evaluation that systematically collects, analyzes, and uses data to answer questions about the effectiveness and efficiency of projects, programs, and policies.

Qualitative data – Information that cannot be quantified, but can be described. Some examples of qualitative data are a person’s opinions, motivations, and personal experiences. Qualitative data is typically collected through methods such as participant observation, interviews, and focus groups, then systematically interpreted by researchers.

Quantitative data – Information that can be measured or recorded using numbers. Some examples of quantitative data are a person’s age, height, and weight; the number of services provided; and annual donations received. Quantitative data is numerical and typically interpreted through statistical analysis.

Randomized controlled trial (RCT) – A research method used to assess the efficacy of a program, project, or experiment. An RCT randomly assigns a subset of participants to receive a program or a service, then compares the outcomes of participants who received the program or service with those who did not. Comparing these outcomes allows researchers to identify the impact of a program or service.
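The random-assignment logic described above can be sketched as follows. This is an illustrative simplification, not a full trial analysis; the function name `run_rct`, the `treat`/`control` outcome functions, and the constant outcomes in the usage example are all hypothetical.

```python
import random
import statistics

def run_rct(participants, treat, control, seed=0):
    """Randomly assign participants to two groups, then compare mean outcomes.

    `treat` and `control` are hypothetical functions that return a numeric
    outcome for one participant, with and without the program respectively.
    """
    rng = random.Random(seed)       # fixed seed so the assignment is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)           # random assignment to groups
    half = len(shuffled) // 2
    treatment_group, control_group = shuffled[:half], shuffled[half:]
    t_mean = statistics.mean(treat(p) for p in treatment_group)
    c_mean = statistics.mean(control(p) for p in control_group)
    return t_mean - c_mean          # estimated effect of the program

# Hypothetical usage: every participant scores 50 without the program
# and 55 with it, so the estimated effect is the 5-point difference.
people = list(range(100))
effect = run_rct(people, treat=lambda p: 55, control=lambda p: 50)
```

Because assignment is random, the two groups are comparable on average, which is what lets the difference in means be read as the program's impact.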

Reliability – The consistency or stability of a questionnaire, tool, or measure. A tool is considered reliable if it produces similar results when retested in similar contexts.

Sample – A subset of a population of interest. For example, a researcher may be interested in the impact that a particular program has on victims, but it is highly unlikely that they will be able to identify and survey all victims. In this case, the researcher may select a sample of victims to participate in the study.
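As a minimal sketch of drawing a simple random sample (the population size, ID format, and sample size below are illustrative, not from the glossary):

```python
import random

# Hypothetical sampling frame of 10,000 client IDs.
population = [f"client_{i}" for i in range(10_000)]

rng = random.Random(42)                  # fixed seed so the draw is reproducible
sample = rng.sample(population, k=200)   # simple random sample without replacement

assert len(sample) == 200 and len(set(sample)) == 200  # no client drawn twice
```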

Screening – The brief process of gathering information to detect the presence of a need or problem within a person (often using a screening tool). When a problem is detected through screening, a fuller assessment is often recommended as a follow-up.

Statistical analysis – The act of processing, summarizing, interpreting, and inferring conclusions from collected data.

Validity – The accuracy or correctness of a questionnaire, tool, or measure. A tool is considered valid if it accurately measures the qualities of the specific population for which it is used.