Reporting Accuracy in Local Economic Development Programs

This exploratory study finds significant inaccuracies in self-reported data in the International City Managers Association's (ICMA) 2014 survey on local economic development. These inaccuracies hinder the use of the data in evaluating program results and raise issues about previous scholarship that used this data exclusively, without validation from secondary sources. This exploratory study was followed by a second study examining predictors of accurate reporting; that study is currently under peer review.

Defining the limits of local economic development programs has been a vexing problem for scholars and public officials. Every aspect of planning, organizing, operating, and reporting economic development programs is fraught with complications. Complications with planning begin with defining and measuring realistic outcomes. Increasing jobs and tax revenues are two traditional goals, but these, as Paul Peterson and others argue, are typically beyond the resources or political strength of local institutions and their elected leaders (Peterson, 1981, p. 4; United States Congress, 1975, p. 11). If increasing jobs is a priority, the challenge becomes balancing the needs of business with the interests of the employed and unemployed of the community (Adua & Lobao, 2015; Lobao & Kraybill, 2014). If tax growth is a priority, overcoming barriers to intergovernmental cooperation poses significant challenges (Kwon & Feiock, 2010). Measuring and reporting effectiveness, particularly when it comes to job creation, is another test. Past research on local economic development program outcomes finds little in the way of employment, income, or fiscal benefits (Feiock, 2002) or, worse, finds that programs spur zero- or negative-sum competition (Reese, 1991; Reese & Rosenfeld, 2004). Reporting negative or contradictory results is problematic for public administrators tasked with producing successful economic development programs. The pressure to report positive results, applied more broadly to all public programs, has sparked a separate stream of research focused on the integrity of the information exchange between public officials and their principals (Bartik, 1994, p. 99; Musgrave & Musgrave, 1989, p. 99).

While past research on local economic development programs has studied the relationship between priorities, barriers to success, and programming (Reese & Fasenfest, 2004, p. 12; Warner & Zheng, 2013), a noticeable gap is an assessment of the accuracy of program results reported by local officials. Despite the wide use of self-reported data from national surveys of local economic development programs in academic research, there is no known research testing the validity of claims of success (or failure) by survey respondents. This paper explores this gap with an inductive study of the accuracy of self-reported successes and failures.

To assess the state of local economic development program reporting practices, responses by municipalities to an economic development survey conducted by ICMA are paired with data collected by the United States Census Bureau on employment and tax revenues from the same time period. The ICMA surveys its members on a regular basis on government operations, facilities, technology, sustainability, and economic development. The results of these surveys are provided in summary form to members and the public, and the underlying data is available for purchase for research purposes. The data set used in this study is the most recent ICMA survey on local economic development (2014). The survey was distributed to ICMA's 5,237 members and 23% responded (N = 1,201). The response rate reflects the deletion of eleven cases that were not identified with a government entity. Additional steps used to clean the ICMA data are detailed in the appendix. Approximately two-thirds of the respondents were municipalities, with the balance being counties, special governments, and non-profits (International City Managers Association, 2014). Municipalities are the focus of this inquiry because of the consistency of their organization, roles, and responsibilities. Counties, special districts, and non-profits can be organized in a number of ways, making it difficult to compare survey results (Peterson, 1981, p. 10). Table 1 provides basic demographic comparisons between the municipal survey respondents (n = 827) and all places within the United States. The ICMA participants are generally among the larger places/municipalities, but have roughly equal income per capita and slightly higher income deficits per capita. Property taxes are generally higher, and sales taxes are lower (Table 1). The stratification of the participants by population, compared to all other places/municipalities, is approximate for populations above ten thousand (Table 2).

Table 1



The ICMA survey instrument included 25 questions. Twenty-two of these questions were close-ended with a predetermined set of multiple-choice answers. The number of multiple-choice options in these close-ended questions ranged from 2 to 32. The questions involved planning (motivation, barriers, priorities), programs (tools and incentives), and claims of success. The menu of priorities included traditional and Type II programming: creating jobs, increasing the tax base, quality of life, environmental sustainability, and social equity. A summary of the responses from municipalities on priorities and claims of success is provided in Table 3. The priorities of municipalities in the survey are generally consistent, with over 85% indicating that increasing jobs and the local tax base, along with improving quality of life, are priorities. The responses are more varied for establishing environmental sustainability (42%) and social equity (25%) as priorities. As for success in meeting these priorities, just over 89% claim some level of success with job growth, tax growth, improvement in the quality of life, and progress toward environmental sustainability. Claims of success on social equity are substantially lower, dropping to approximately 76%.


Testing Accuracy of Claims of Success or Failure

The first part of the exploratory study compares secondary data on actual job growth (2010 to 2014) and actual tax base growth (2012 to 2014) with the claims of success or failure from the ICMA survey responses. No definitive secondary data sources were identified to verify claims of success or failure in improving quality of life, or in advancing environmental sustainability or social equity. For job growth, unemployment and employment status were collected for each municipality from the American Community Survey (Table S2301) for 2010 through 2014 (U.S. Census Bureau, 2010, 2011, 2012b, 2013b, 2014b). Both measures of employment were used because it is not known how each jurisdiction measures job creation. Also, since it is unclear what time frame respondents to the ICMA survey used for gauging success, rates of change were calculated for the periods 2010 to 2014, 2011 to 2014, 2012 to 2014, and 2013 to 2014. Eight binary dummy variables were created, representing each of the four time frames and the two measurements (employment rate and unemployment rate), with "0" representing no growth in employment or no decrease in unemployment, and "1" representing growth in employment or a decrease in unemployment. For tax growth, property and sales tax data were collected from the Census Bureau's State and Local Government Finances survey for 2012 through 2014 (codes T01 and T09) (U.S. Census Bureau, 2012c, 2013b, 2014c). The Bureau notes that this data set may include high sampling errors (U.S. Census Bureau, 2012a, 2013a, 2014a). Again, since it is not clear what time frame respondents were contemplating when making claims of success or failure, rates of change were calculated for the periods 2012 to 2014 and 2013 to 2014. Only two periods were used for tax revenues because municipalities typically measure and report tax revenues on an annual or biannual basis.
Two binary dummy variables were created, representing each of the two time frames and the single measurement (change in tax collections), with "0" representing no growth in tax revenue and "1" representing growth in tax revenue.
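The dummy-variable coding described above can be sketched as follows. This is an illustration only; the variable names and example figures are hypothetical, not the actual ICMA or Census field names or values.

```python
def growth_dummy(earlier, later):
    """Return 1 if the measure grew over the period, else 0."""
    rate = (later - earlier) / earlier
    return 1 if rate > 0 else 0

# Hypothetical example: employment rate rose from 58.2% to 59.1%
# between 2013 and 2014, so the 2013-2014 job dummy is 1.
emp_2013, emp_2014 = 58.2, 59.1
job_growth_13_14 = growth_dummy(emp_2013, emp_2014)

# For the unemployment-rate dummies the coding is reversed:
# a DECREASE in unemployment is the success condition, coded 1.
unemp_2013, unemp_2014 = 7.4, 6.8
unemp_decline_13_14 = 1 - growth_dummy(unemp_2013, unemp_2014)
```

The same function applies unchanged to the two tax-revenue dummies, since growth in collections is coded 1 in both periods.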

After the ten dummy variables were created for actual job and tax growth, they were cross-tabulated with binary dummy variables representing survey claims of success or failure. Municipalities are deemed accurate if claims matched actual results. For example, if a city claims no success growing jobs and the job growth rate was negative, the city is deemed accurate. If a city claims some job growth success, yet experienced negative employment growth, that city's claim in the survey is designated as inaccurate. The results of this cross-tabulation are presented in Table 4. The accuracy of claims of success or failure by respondents increased with shorter time frames. For example, the accuracy of job claims using the unemployment rate increased from 28% using a four-year period (2010 to 2014) to 75% using a one-year period (2013 to 2014). Similar improvement occurred using the employment rate, with 24% accuracy using a four-year period and 66% accuracy using a one-year period. The accuracy of tax revenue claims improved slightly, from 71% to 74%, contrasting two- and one-year periods.
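The accuracy check reduces to comparing two binary vectors, one per time frame and measure. A minimal sketch, using made-up values rather than the survey data:

```python
# Illustrative vectors only -- not the actual survey or Census data.
claimed = [1, 1, 0, 1, 0, 1]   # 1 = claimed success in the ICMA survey
actual  = [1, 0, 0, 1, 1, 1]   # 1 = positive growth in the secondary data

# A municipality is "accurate" when its claim matches the actual result,
# in either direction (claimed success with growth, or claimed failure
# with no growth).
accurate = [int(c == a) for c, a in zip(claimed, actual)]
accuracy_rate = sum(accurate) / len(accurate)
```

Repeating this for each of the ten actual-growth dummies yields the per-time-frame accuracy rates reported in Table 4.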


Evaluating Programs using Self-reported and Actual Data

The next step in exploring the accuracy of survey data was to compare the effectiveness of local economic development programs using self-reported and actual data. For this analysis, a binary dummy variable was created with the ICMA data to measure success, with a response of "none" coded as "0" and responses of "somewhat successful" and "very successful" coded as "1". Measures for programs (tools/incentives) were developed by combining multiple responses from the ICMA survey through factor analysis. The factor analysis started with 48 items from the ICMA survey relating to programs. The items, measured on four-point Likert scales, were reduced through factor analysis to seven measures and tested for internal consistency (Cronbach's α). The measures created for programs used 24 of the 48 items. These measures are "direct business support" (α = .792), "sustainability programs" (α = .732), "marketing" (α = .731), "finance" (α = .719), "investment" (α = .681), "contributions" (α = .664), and "assistance" (α = .703) (Table 5). A consolidated description of all the program measures is provided in Table 6.
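The internal-consistency statistic reported for each program measure is Cronbach's alpha, which can be computed directly from the item responses. The sketch below uses simulated four-point Likert items, not the ICMA data, to show the calculation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for items: (n_respondents, n_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated data: four correlated 1-4 Likert items sharing a common signal.
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(100, 1))           # shared respondent signal
noise = rng.integers(-1, 2, size=(100, 4))         # item-level noise
likert = np.clip(base + noise, 1, 4)
alpha = cronbach_alpha(likert)                     # positive for correlated items
```

Because the items share a common signal, alpha here comes out well above zero; perfectly duplicated items would yield alpha of exactly 1.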



Binomial logistic regression was then used to test and compare the relationship between the programs (collectively) and the binary success variables using self-reported and actual data. The results of these models are presented in Tables 7 and 8. In Table 7, a statistically significant relationship is found between the seven measures (collectively) and self-reported claims of success for both jobs and tax revenues. The investment measure was strongly associated with success in job growth (OR 1.824) and tax growth (OR 2.761). With actual data (Table 8), no statistically significant relationship was found between the seven program measures (collectively) and actual success in increasing jobs, decreasing unemployment, or increasing tax revenues. This analysis used the period from 2013 to 2014 for the success measures. In summary, with self-reported data the relationship between programs and claims of success was statistically significant in some instances, but with actual data, the relationship between programs and results was not statistically significant.
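The regression setup above can be sketched in a few lines: seven program-measure scores predict a binary success indicator, and odds ratios are recovered by exponentiating the fitted coefficients. Everything below is simulated for illustration; column 4 is given a true positive effect to play the role of the "investment" measure, and the fitting routine is a plain gradient-ascent logistic fit, not the software used in the study.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Gradient-ascent fit of a binomial logistic regression with intercept."""
    Xb = np.hstack([np.ones((len(X), 1)), X])      # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))              # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)          # mean log-likelihood gradient
    return w

# Simulated data: 800 "municipalities", seven standardized program measures.
rng = np.random.default_rng(1)
X = rng.normal(size=(800, 7))
true_logit = 0.6 * X[:, 4] - 0.2                   # column 4 drives success
y = (rng.random(800) < 1 / (1 + np.exp(-true_logit))).astype(int)

w = fit_logistic(X, y)
odds_ratios = np.exp(w[1:])                        # one OR per program measure
```

An odds ratio above 1 (as reported for the investment measure) means higher scores on that measure are associated with higher odds of a claimed success, holding the other measures constant.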




The use of self-reported data in ICMA economic development surveys is problematic for public officials, community members, and researchers. Alternative data or validation methods must be employed to ensure the veracity of the self-reported data. Finding reliable alternatives to assess program results is important for public officials and community members, as the information exchange between staff and the public is a critical part of the democratic process.

For researchers, the problem of self-reported data in ICMA surveys is compounded by past research that used this data exclusively to address research questions. ICMA, or equivalent survey data, has been used to quantify the balance between pro-business policies and social services (Adua & Lobao, 2015), determine the interaction of privatization, business attraction, and social services (Lobao & Kraybill, 2014), assess the ability of poor cities to pursue local economic development (Lobao & Kraybill, 2009), and determine the economic climate where business incentives are deployed (Warner & Zheng, 2013; Zheng & Warner, 2010). Researchers should be careful in citing these studies without checking how the ICMA data was used and whether it was validated by other sources.

Further study

The disparities between findings in the exploratory study using self-reported versus actual data generate the research question for further study: what are the predictors of accurate performance reporting in local economic development? There are two important reasons to address the accuracy of reporting from local economic development programs. First, access to accurate information about public programs is a critical safeguard of our democratic system. Murray Edelman summed it up this way: "Citizens who are informed about political development can more effectively protect and promote their own interests and the public interests" (Edelman, 1989, p. 382). For Paul Peterson, access to quality information is critical to a reasoned discourse on urban policy: "neither city residents nor city leaders are fools. On the contrary, they can be expected to think about their situations and take reasoned positions on the problems they face – within the limits of the information available to them" (Peterson, 1981, p. xii).

I have completed a study addressing this research question. My methods and findings are currently under review at a peer-reviewed journal. I will post the results once the peer-review process is complete.

About the author: Bill Farley has 30 years of experience in local economic and community development as a public official, entrepreneur and corporate executive. He is a former instructor of public policy and public finance at the University of Southern California Price School of Public Policy. He is currently advising organizations on local economic policy while completing a PhD in Public Policy and Administration at Virginia Commonwealth University. 


Adua, L., & Lobao, L. (2015). Business Attraction and Redistribution by U.S. Local Governments: To What Extent Is There a Zero-sum Relationship between Business and Citizens Interests? State and Local Government Review, 47(4).
Bartik, T. J. (1994). Better Evaluation Is Needed for Economic Development Programs to Thrive. Economic Development Quarterly, 8(2), 99–106.
Edelman, M. (1989). Constructing the Political Spectacle. In Public Policy, The Essential Readings (pp. 381–389). Upper Saddle River: Prentice Hall.
Feiock, R. C. (2002). A Quasi-Market Framework for Development Competition. Journal of Urban Affairs, 24(2), 123–142.
International City Managers Association. (2014). Economic Development 2014 Survey Results.
Kwon, S.-W., & Feiock, R. C. (2010). Overcoming the Barriers to Cooperation: Intergovernmental Service Agreements. Public Administration Review, 70(6), 876–884.
Lobao, L., & Kraybill, D. (2009). Poverty and Local Governments: Economic Development and Community Service Provision in an Era of Decentralization. Growth and Change, 40(3).
Lobao, L., & Kraybill, D. (2014). Privatization, Business Attraction, and Social Services across the United States: Local Governments’ Use of Market-Oriented, Neoliberal Policies in the Post-2000 Period. Social Problems, 61(4).
Musgrave, R. A., & Musgrave, P. B. (1989). Public Finance in Theory and Practice (International). Singapore: McGraw-Hill-International.
Peterson, P. E. (1981). City Limits. Chicago and London: University of Chicago Press.
Reese, L. A. (1991). Municipal Fiscal Health and Tax Abatement Policy. Economic Development Quarterly, 5(1), 23–32.
Reese, L. A., & Fasenfest, D. (2004). Critical Evaluations of Economic Development Policies. Detroit: Wayne State University Press.
Reese, L. A., & Rosenfeld, R. A. (2004). Local Economic Development in the United States and Canada: Institutionalizing Policy Approaches. The American Review of Public Administration, 34(3), 277–292.
United States Congress. (1975). New York City’s fiscal problem: Its origin, potential repercussions, and some alternative policy responses (Background paper No. 01). Washington D.C.: Government Printing Office.
U.S. Census Bureau. (2010). Employment Status (No. S2301). Washington D.C.
U.S. Census Bureau. (2011). Employment Status (No. S2301). Washington D.C.
U.S. Census Bureau. (2012a). 2012 State and Local Finance Individual Unit File Disclaimer. U.S. Bureau of Census.
U.S. Census Bureau. (2012b). Employment Status (No. S2301). Washington D.C.
U.S. Census Bureau. (2012c). State and Local Government Finances. Washington D.C.
U.S. Census Bureau. (2013a). 2013 State and Local Finance Individual Unit File Disclaimer. U.S. Bureau of Census.
U.S. Census Bureau. (2013b). State and Local Government Finances. Washington D.C.
U.S. Census Bureau. (2014a). 2014 State and Local Finance Individual Unit File Disclaimer. U.S. Bureau of Census.
U.S. Census Bureau. (2014b). Employment Status (No. S2301). Washington D.C.
U.S. Census Bureau. (2014c). State and Local Government Finances. Washington D.C.
Warner, M. E., & Zheng, L. (2013). Business Incentive Adoption in the Recession. Economic Development Quarterly, 27(2), 90–101.
Zheng, L., & Warner, M. (2010). Business Incentive Use Among U.S. Local Governments: A Story of Accountability and Policy Learning. Economic Development Quarterly, 24(4), 325–336.
