Tracking & Evaluation

Great Plains Center for Translational Research (GPCTR) Evaluation Policy

The GPCTR Evaluation Team comprises Mary Cramer, PhD; Jolene Rohde, MPH; Fabio Almeida, PhD; and Fernando Wilson, PhD. The team brings diverse expertise in outcomes and impact evaluation, social network analysis, process evaluation, and cost-effectiveness analysis.

The team is responsible for evaluating how well the GPCTR is achieving its goals, both as a cohesive center overall and within each Key Component Area (KCA). Specifically, the team is charged with tracking the Logic Model (see below) Inputs and Activities in order to drive quality improvements for programs and services. The team is also accountable for tracking Logic Model Intermediate Outcomes to ensure that the GPCTR is building research capacity across the consortium of partners through increased multidisciplinary collaborations; externally funded grants; dissemination products; vibrant scholar development; and innovative, accessible clinical translational resources. Likewise, the GPCTR is committed to enhancing community partnerships to ensure that research projects address community-identified priorities and build the community’s capacity for active participation in research. Finally, the evaluation team relies on impact metrics to assess the long-term social and economic contributions identified in our Logic Model for improved population health and return on investment.

The evaluation team will adhere to American Evaluation Association (AEA) professional standards. Our GPCTR Logic Model will guide the evaluation to ensure that findings have utility, feasibility, propriety, and accuracy. We will also adhere to the five core principles of evaluation set forth by the AEA: systematic inquiry, competence, integrity/honesty, respect for people, and responsibility for general and public welfare. Consistent with current literature on complex science consortia, the evaluation team will apply mixed-methods and innovative evaluation approaches.

The evaluation team will regularly review the literature and communicate with other IDeA CTR evaluators in order to establish and maintain best practices in evaluation. The team will also publish the results of its work to broaden knowledge within the larger academic research community, so that others may learn from our experiences.


Logic Model