Building Smarter Data for Evaluating Business Assistance Programs: A Guide for Practitioners

Author(s): Nickerson, Cynthia; Park, Timothy; Pender, John; Wojan, Tim; Brown, David J., et al.

Organizational Author(s): U.S. Small Business Administration

Funding Source: Funding source not identified

Resource Availability: Publicly available

Summary

Provides guidance to program managers and agencies on how to use existing administrative and secondary data sources to conduct lower-cost impact evaluations, with a focus on evaluating business technical assistance programs.

Description

“Increasingly, government agencies are called upon to use rigorous impact evaluations to promote learning about what works in government programs and use the evidence to continually improve programs to achieve better outcomes. And, they are asked to do so at least cost and burden to taxpayers. Agencies are responding by looking for new ways to utilize program administrative data and secondary data sources for impact evaluations, thereby reducing reliance on surveys when possible. This guide can serve as a practical tool to help agencies identify important data-related practices and the critical data that need to be collected and retained. This will allow agencies to effectively use their administrative data for rigorous impact evaluations. While focused primarily on the data needs for evaluating business technical assistance programs, the vast majority of the recommended data practices will be useful in building other types of evidence” (p.43).

The guide is organized into eight chapters:

  • “Chapter I explains how impact evaluations are complementary to other evidence-building activities…and demonstrates the value of designing data collections that fulfill the needs to track program performance and enable impact evaluations” (p.7).
  • “Chapter II provides an overview of program theory and an example of a logic model—a graphical representation of a program theory—for a business technical assistance program” (p.7).
  • “Chapter III…identifies several key considerations when determining whether a program is suitable for an impact evaluation using statistical methods” (p.7).
  • “Chapter IV…summarize[s] key impact evaluation concepts and the two main approaches, Randomized Control Trials and Quasi-Experimental Designs, for conducting high-quality evaluations” (p.7). (A minimal sketch of the quasi-experimental approach follows this list.)
  • “Chapter V…identifies critical types of data that enable administrative data systems to be used in high-quality evaluations” (p.7).
  • “Chapter VI briefly describes several other methods for building evidence, explaining that they have different strengths and can be complementary to one another” (p.7).
  • “Chapter VII identifies many of the data challenges faced by program managers who inherit a program and ways of dealing with the challenges” (p.7).
  • “Chapter VIII explains how making initial investments to improve administrative data to support evaluation can save evidence-building time and costs in the long run” (p.7).
(Abstractor: Author and Website Staff)
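
To make the quasi-experimental approach named in Chapter IV concrete, the sketch below illustrates one common design: a difference-in-differences estimate that compares the change in an outcome for assisted firms against the change for unassisted firms. The guide does not prescribe this method or any code; the column names and toy values here are assumptions for illustration only.

```python
import pandas as pd

def diff_in_diff(df: pd.DataFrame) -> float:
    """Impact estimate: the (post - pre) change for assisted firms
    minus the (post - pre) change for comparison firms."""
    means = df.groupby(["assisted", "period"])["revenue"].mean()
    treated_change = means[(1, "post")] - means[(1, "pre")]
    comparison_change = means[(0, "post")] - means[(0, "pre")]
    return treated_change - comparison_change

# Toy records; every value below is fabricated purely for illustration.
firms = pd.DataFrame({
    "firm_id":  [1, 1, 2, 2, 3, 3, 4, 4],
    "assisted": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = received assistance
    "period":   ["pre", "post"] * 4,
    "revenue":  [100, 140, 90, 125, 95, 110, 105, 118],
})
print(diff_in_diff(firms))  # assisted firms' gain, net of the background trend
```

Netting out the comparison group's trend is what distinguishes this design from a simple before-and-after comparison, which would attribute any economy-wide change to the program.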

Major Findings & Recommendations

“In summary, the guide recommends that program managers:

• Identify administrative data needed for both program service delivery and eventual impact evaluation at the beginning of a program or pilot. This prevents the need for expensive, after-the-fact additional data collection. Similarly, for existing programs, early assessments of the quality and availability of administrative data and actions needed to remedy data deficiencies can increase the value of administrative data for eventual evaluation.

• Solicit the input of evaluation experts early in the process of developing data plans. These experts can help identify the best methodology for measuring program impact and the most cost-effective ways to assemble the data necessary to support high-quality evaluations.

• Explore whether linking program administrative data on assisted businesses to secondary data of government agencies or commercial sources is a viable option. This linkage, which requires sufficient unique applicant/participant-level identifying information in all datasets, can increase evaluation quality and reduce the need to conduct post-service surveys. Ensure that sufficient security procedures are in place to protect the data and their confidentiality. [A minimal linkage sketch follows this list.]

• Engage departmental attorneys and policy officers, including privacy, confidentiality, and security officers, early in the process of developing data plans. This can avoid problems and delays that arise when data collection and sharing for evaluation purposes are treated as separate or after-the-fact considerations” (p.1).
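
As one way to act on the linkage recommendation above, the sketch below joins program administrative records to a secondary dataset on a shared business identifier and checks the match rate before any analysis. The identifier (an EIN-style key), the field names, and the values are assumptions for illustration; the guide does not specify a particular key, dataset, or tool.

```python
import pandas as pd

# Program administrative records for assisted businesses (illustrative).
admin = pd.DataFrame({
    "ein": ["10-0000001", "10-0000002"],
    "service_start": ["2015-03-01", "2015-06-15"],
})

# Secondary outcome data from another agency or commercial source (illustrative).
secondary = pd.DataFrame({
    "ein": ["10-0000001", "10-0000002", "10-0000003"],
    "employment_2016": [12, 7, 40],
})

# A left join keeps every assisted business and flags unmatched records,
# a basic check on linkage quality before any impact analysis.
linked = admin.merge(secondary, on="ein", how="left", indicator=True)
match_rate = (linked["_merge"] == "both").mean()
print(linked)
print(f"Match rate: {match_rate:.0%}")
```

Reviewing the match rate early surfaces identifier gaps while they can still be fixed at intake, rather than after services end.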

The guide also identifies 18 best practices for the use of existing data sources for impact evaluations, including:
1. “Have One Plan for All Data Needs. Design one system to collect the necessary data for program administration, impact evaluation, and other evidence-building strategies. Identify and implement relevant data security and privacy and confidentiality requirements” (p.3).
2. “Develop a Program Theory and Logic Model” (p.3).
3. “Create a Data Dictionary. Establish and maintain a data dictionary documenting data item definitions and changes, how data are collected (e.g., retain example forms and instructions), and relationships between key data items….Describe any new records or revisions to existing records, including when the changes were made” (p.3). (An illustrative data-dictionary entry appears below.)

(Abstractor: Author and Website Staff)
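
To show what the “Create a Data Dictionary” practice might record, the sketch below stores one entry as a CSV row so the dictionary can be versioned alongside the data it documents. Every field name and value is hypothetical; the guide recommends the practice but does not prescribe a format.

```python
import csv, io

# Illustrative dictionary fields: definition, collection method,
# relationships to other items, and a revision history, per best practice 3.
fields = ["item", "definition", "collection_method", "related_items", "revision_history"]
entry = {
    "item": "service_start_date",
    "definition": "Date the business first received technical assistance",
    "collection_method": "Program intake form (retain a blank copy and its instructions)",
    "related_items": "Keyed to the applicant identifier in the participant table",
    "revision_history": "2016-01: renamed from start_dt; definition unchanged",
}

# Write the entry as CSV so the dictionary lives in version control with the data.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerow(entry)
print(buf.getvalue())
```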

Workforce System Strategies Content Information

Methodology: How-to Guides
Content Type: How-To Guide
Target Populations: Other

Post Information

Publication Date: 2017
Posted: 4/13/2018 5:38 PM
Posted In: Workforce System Strategies