Poster Paper: Front-End Evaluation Planning: Articulating Purpose and Key Questions

Friday, November 4, 2016
Columbia Ballroom (Washington Hilton)

*Names in bold indicate Presenter

Jacqueline H. Singh, Qualitative Advantage, LLC


The purpose of any evaluation should be clearly stated and defined. Much like research, a key feature is ensuring that "meaningful" questions are written, as they carry implications for data collection, data analysis, and the resources needed. Yet articulating an evaluation's purpose and developing questions are perhaps the most challenging tasks for individuals to do—especially when evaluation planning occurs simultaneously with research. In fact, evaluation is often avoided or outright resisted, which inadvertently constrains the production and use of research. So what's an evaluator, principal investigator, or faculty member to do? At this poster session, you'll learn about fundamental frameworks for addressing purpose and developing evaluation questions. Participants are encouraged to ask questions about the frameworks and resources shared.

The Association for Public Policy Analysis & Management (APPAM) 38th Annual Fall Research Conference raises intriguing questions suitable for research, as well as for evaluation, on this topic. In academic settings where research is a key activity and solutions to societal issues matter, evaluation is often overlooked or ignored. That's unfortunate, because evaluation can contribute significantly to the knowledge base and identify strategies and techniques that promote more effective use of research evidence in policymaking and public management.

The proposed poster acknowledges that evaluation can be confusing. Stakeholders on campuses across the country who engage in research (e.g., investigators, faculty, students) can be more effective if they maintain a basic understanding of what evaluation is. Indeed, there are many evaluation models, theories, alternative approaches, and strategies. Evaluation is not an "activity" that simply occurs at the end of a research project, initiative, program, technique, technology, or some other type of innovation. Rather, it needs to be considered, planned, and designed before launching an intervention, as multiple, simultaneous evaluation activities will undoubtedly occur throughout. In short, evaluation involves systematically locating and gathering the evidence needed to answer key stakeholders' questions about the intervention, innovation, or "object" under investigation. Quite possibly, answers to these questions will inform future decision-making regarding public policy.

In an effort to help participants take strategic charge of an evaluation effort, this poster session will display a Navigational Map and a Front-end Evaluation Planning Framework to engage participants in meaningful conversation about the intersections between research and evaluation. These are practical evaluation planning resources that distinguish between different evaluation purposes, help articulate key questions, and provide insights into what evaluation is. The goals of the poster are to: 1) demystify evaluation overall; 2) differentiate between research, evaluation research, program evaluation, assessment, and performance measurement; 3) demonstrate how evaluation purposes inform evaluation questions—and vice versa; and 4) increase participants' awareness of frameworks, strategies, approaches, and practical tools to begin a conversation that helps build evaluation capacity within research universities.