Provides guidance to program managers and agencies on how to use existing administrative and secondary data sources to conduct lower-cost impact evaluations, with a focus on evaluating business technical assistance programs.

“Increasingly, government agencies are called upon to use rigorous impact evaluations to promote learning about what works in government programs and use the evidence to continually improve programs to achieve better outcomes. And, they are asked to do so at least cost and burden to taxpayers. Agencies are responding by looking for new ways to utilize program administrative data and secondary data sources for impact evaluations, thereby reducing reliance on surveys when possible. This guide can serve as a practical tool to help agencies identify important data-related practices and the critical data that need to be collected and retained. This will allow agencies to effectively use their administrative data for rigorous impact evaluations. While focused primarily on the data needs for evaluating business technical assistance programs, the vast majority of the recommended data practices will be useful in building other types of evidence” (p.43).

The guide is organized into eight chapters:

  • “Chapter I explains how impact evaluations are complementary to other evidence-building activities… and demonstrates the value of designing data collections that fulfill the needs to track program performance and enable impact evaluations” (p.7).
  • “Chapter II provides an overview of program theory and an example of a logic model—a graphical representation of a program theory—for a business technical assistance program.
  • Chapter III…identifies several key considerations when determining whether a program is suitable for an impact evaluation using statistical methods” (p.7).
  • “Chapter IV…summarizes key impact evaluation concepts and the two main approaches, Randomized Control Trials and Quasi-Experimental Designs, for conducting high-quality evaluations.
  • Chapter V…identifies critical types of data that enable administrative data systems to be used in high-quality evaluations.
  • Chapter VI briefly describes several other methods for building evidence, explaining that they have different strengths and can be complementary to one another” (p.7).
  • “Chapter VII identifies many of the data challenges faced by program managers who inherit a program and ways of dealing with the challenges” (p.7).
  • “Chapter VIII explains how making initial investments to improve administrative data to support evaluation can save evidence-building time and costs in the long run” (p.7).
(Abstractor: Author and Website Staff)