Frequently Asked Questions
What areas of educational policy does CREP address?
While our work is diverse, our primary areas of policy focus are:
- Educational Equity
- Educator Preparation and Development
- Methodology and Measurement
- School Choice
- Student Success
- Economic Impact
Secondarily, we address:
- Child Development
- Curriculum and Instruction
- English Learners
- Leadership Quality
- Online Education
What is program evaluation and why do I need it?
Program evaluation is a systematic investigation that determines the extent to which an intervention is accomplishing its stated goals. The value of having an external program evaluator lies in the unbiased nature of a third party separate from the intervention's implementation team. In addition to providing an independent viewpoint, an external evaluator can collect confidential information from program participants who may not be comfortable giving honest feedback to the implementation team. Many granting organizations require an external evaluator to be proposed when requesting funding.
What does an evaluation cost?
Typically, a minimum of 10% of your program budget should be dedicated to evaluation. Some federal funders have specific recommendations for evaluation budgets, and consider under-funded evaluations a weakness when reviewing and scoring proposals. For example, NSF recommends that 15-20% of a project's budget should be allocated for program evaluation.
Another guideline is to consider how many points the reviewers of a proposal can award based on the quality of the program evaluation. This indicates how much emphasis the funder expects you to give to the evaluation components of your project. In other words: whatever percentage of the total possible award points is assigned based on the evaluation, that is the percentage of your budget that should be dedicated to funding your evaluator.
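The points-based guideline above amounts to simple proportional arithmetic. As a minimal sketch (the function name and figures are illustrative, not from any funder's guidance):

```python
def suggested_evaluation_budget(total_budget, eval_points, total_points):
    """Suggest an evaluation budget equal to the same fraction of the
    project budget as the fraction of review points tied to the
    evaluation plan (rule of thumb only)."""
    return total_budget * (eval_points / total_points)

# Hypothetical example: a $500,000 project where 15 of 100 review
# points hinge on the evaluation plan suggests a $75,000 evaluation budget.
print(suggested_evaluation_budget(500_000, 15, 100))  # → 75000.0
```

In this example, 15% of the review points yields a 15% evaluation budget, which also happens to fall within the 15-20% range NSF recommends.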
If evaluation costs are not recommended in the guidelines for a grant application, there are many other factors that can increase or decrease the cost of an evaluation. The number of sites studied, data collection strategies, complexity of statistical analyses planned, and extent of reporting and dissemination all feed into the final cost of a program evaluation. The Social Innovation Fund’s Evaluation Budgeting Quick Guide provides some good additional recommendations.
Ultimately, CREP staff will work with you to determine the evaluation plan most suited to your project and budget. Contact us for additional details.
What sort of analyses can CREP conduct?
We begin any evaluation with a comprehensive data analysis plan, but ultimately the analyses used will be those best suited to the data we collect. For interviews, focus groups, and open-ended survey questions, we typically either summarize results or employ qualitative coding methods. Likert-type survey results may be summarized using descriptive statistics and compared longitudinally year-to-year, or may be used to calculate dimensions for a more quantitative approach. Complex quantitative data such as student achievement measurements will be analyzed by our statistics team using methods that provide the greatest statistical power for the data available. At the highest level of complexity, our experienced statisticians are capable of executing both quasi-experimental designs and Randomized Control Trials (RCTs), and have thorough knowledge of What Works Clearinghouse (WWC) procedures and standards.
What is the What Works Clearinghouse?
The WWC is run by the Institute of Education Sciences (IES), and lays out guidelines for educational research design that are often considered the gold standard for high-quality research. These guidelines are laid out in the WWC Standards Handbook, currently in its fourth edition. Many federal grants are awarded in part based on a study's potential to meet WWC group design standards with or without reservations. CREP has had multiple studies meet WWC group design standards without reservations, the WWC's highest rating for rigor in research design.