Following Dove (2015), our objective is to examine the top 100 U.S. cities by population size. Collecting data on bond ratings proved challenging. The U.S. Census Bureau's Statistical Abstract of the United States provides municipal bond rating data (separately for the three rating agencies) from 1995 to 2010. For the post-2010 period, we collected municipal bond rating data directly from S&P, Moody's, and Fitch. Ultimately, we constructed a balanced panel covering 80 cities for the period 1995-2018. Upon publication of this paper, we will make these data publicly available to support future research on this important subject.
Our dependent variable is the average of three standardized General Obligation (G.O.) bond ratings provided by S&P, Moody's, and Fitch. We focus on G.O. bonds because rating agencies assess them based on a city's future fiscal capacity. In contrast, revenue bonds are evaluated based on the extent to which borrowing entities can generate additional revenue from the funded projects (such as toll revenue from a new toll road). Because we seek to study how membership in climate clubs might signal a city's overall preparedness to deal with future climate events, G.O. bonds are the appropriate instruments to study.
One might ask why we focus on the credit ratings of municipal bonds rather than on borrowing costs (interest rates). The reason is that interest rates vary with the characteristics of each bond a city floats, which makes it difficult to interrogate the relationship between borrowing costs and climate club membership. G.O. credit ratings, by contrast, reflect rating agencies' overall risk assessment of a borrower across all of the bonds it has floated so far. Therefore, if the municipal bond market is sensitive to the climate risk cities face, and climate club membership provides information on the extent to which cities climate-proof themselves, as we argue, we expect rating agencies to be among the first movers to consider the membership status of municipal borrowers and to reflect it in their G.O. rating actions.
Following Rubinfeld (1973), we posit a direct relationship between credit ratings and borrowing costs and construct the dependent variable in the following steps. First, since all bond ratings are expressed as letter grades (e.g., A+, AA, B-), we transformed them into numeric scores based on the scale each agency uses. S&P has 22 letter grades ranging from AAA (highest) to CC (lowest), Moody's uses 21 letter grades ranging from Aaa to C, and Fitch uses 19 letter grades ranging from AAA to D. For S&P, since A+ is the 18th rank from the lowest, we transformed A+ into the numeric score of 18; we refer to these numeric transformations as "bond scores."
Each bond rater uses a different scale: a score of, say, 18 from S&P conveys a different risk assessment than the same score from Moody's or Fitch. Hence, we standardized each bond score using the mean and standard deviation of all ratings that the assigning agency provided for any bond in our dataset. Finally, for each city-year, we averaged the three standardized bond scores into a single numeric score.
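A minimal sketch of these three construction steps, using toy grade scales and two hypothetical city-years (the scales, cities, and ratings below are illustrative only, not our data or the agencies' full scales):

```python
from statistics import mean, pstdev

# Toy letter-grade scales for illustration only; the paper uses each
# agency's full scale (S&P: 22 grades, Moody's: 21, Fitch: 19), with
# rank 1 as the lowest grade (so S&P's A+ maps to 18).
scales = {
    "SP":     ["BB", "BBB", "A", "AA", "AAA"],
    "Moodys": ["Ba", "Baa", "A", "Aa", "Aaa"],
    "Fitch":  ["BB", "BBB", "A", "AA", "AAA"],
}

def to_score(agency, grade):
    # 1-indexed rank of the letter grade on the agency's scale
    return scales[agency].index(grade) + 1

# Hypothetical ratings for two city-years.
ratings = {
    ("CityA", 2000): {"SP": "AA",  "Moodys": "Aaa", "Fitch": "A"},
    ("CityB", 2000): {"SP": "BBB", "Moodys": "Baa", "Fitch": "BB"},
}

# Step 1: letter grades -> numeric "bond scores".
scores = {k: {a: to_score(a, g) for a, g in v.items()} for k, v in ratings.items()}

# Step 2: standardize each agency's scores by that agency's own
# mean and standard deviation across the whole sample.
standardized = {k: {} for k in scores}
for agency in scales:
    vals = [s[agency] for s in scores.values()]
    mu, sd = mean(vals), pstdev(vals)
    for key in scores:
        standardized[key][agency] = (scores[key][agency] - mu) / sd

# Step 3: average the three standardized scores per city-year.
dep_var = {key: mean(standardized[key].values()) for key in standardized}
```

In this toy example the higher-rated CityA ends up with a positive averaged score and CityB with a negative one, mirroring how the dependent variable orders cities by relative creditworthiness.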
The key independent variables of interest are three dummy variables indicating whether a city was a member of each city-level climate club in a given year. We assign a value of one if a city was a member of a specific climate club in that year and zero otherwise. As the literature suggests, several additional factors can affect a city's future fiscal health, both on the revenue and the expenditure side, and therefore its bond ratings (Fisher 2013; Peck 2012; Omstedt 2020). We therefore include control variables to isolate the treatment effect of individual club membership. First, we control for population and per capita income, as they affect the city's tax revenue base. Second, we control for the percentage share of homeowners in the total population and the unemployment rate (Simonsen et al. 2001; Peng and Brucato 2004; Dove 2015), which bear on both future revenues and expenditures. Lastly, we control for the number of state-level and county-level (parish-level for Louisiana, borough-level for Alaska)[1] emergency declarations from the Federal Emergency Management Agency (FEMA) as a proxy for a city-year's exposure to natural disasters. The intuition is that cities that experience extreme weather events are more likely to join climate clubs; but at the same time, extreme weather events might also affect a city's fiscal capacity and its bond ratings.[2] Since, however, emergency declarations cover not only climate-related disasters but also unrelated ones, such as earthquakes, we restrict them to declarations for wildfires.[3] Because including emergency declarations pertaining to hurricanes and snowstorms does not change our substantive findings, we do not report them in the main model. Finally, we include two-way fixed effects (city and year fixed effects) to isolate the effect of individual club membership on bond ratings.[4] Our model equation is as follows:
score_it = β′X_it + γ′Z_it + α_i + Θ_t + ε_it

where i = 1, 2, ..., 80 is a city index and t = 1996, 1997, ..., 2018 is a year index; score_it is the average of standardized scores from the three rating agencies; X_it is a set of three explanatory variables (membership in 100RC, C40, and ICLEI); Z_it is a set of control variables; α_i is a city fixed effect; Θ_t is a year fixed effect; and ε_it is an idiosyncratic error.
Consistent with the prior literature (Biglaiser and Staats 2012), we include the dependent variable lagged by one year to account for the stickiness of bond ratings. We recognize the debate on using a lagged dependent variable, which scholars argue imposes a downward bias on the estimates (Achen 2000; Keele and Kelly 2006; Wilkins 2018). We see this as a virtue, however, because it makes our estimates conservative. Given the scope of our sample data, we allow the error terms to be spatially and temporally correlated by using Driscoll and Kraay standard errors, clustered by the state in which each city is located (Driscoll and Kraay 1998; Vogelsang 2012).
Typically, agencies rate bonds at the time of issue, although they can re-evaluate these ratings at any point. In practice, agencies do not issue new bond ratings for U.S. cities every year, which creates a missing-observation problem. To mitigate bias from listwise deletion, we created 100 imputed datasets using the Amelia R package, which fills in missing observations with a bootstrapping-based algorithm (King et al. 2001; Honaker and King 2010).[5] We then estimate the above two-way fixed effects model on each imputed dataset and simulate 100 coefficients from each set of model results, giving us 10,000 simulated coefficients in total that capture the uncertainty from both the model estimates and the multiple imputation process. In the next section, we report our results based on these 10,000 simulated coefficients.
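The pooling logic can be sketched as follows. The per-dataset estimates below are fabricated purely to illustrate the simulation step (in the paper they come from re-fitting the TWFE model on each Amelia-imputed dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated (coefficient, standard error) pairs, one per imputed
# dataset; 100 datasets, as in the paper. Values are illustrative.
m, draws_per_model = 100, 100
beta_hats = rng.normal(0.5, 0.02, size=m)  # variation across imputations
ses = np.full(m, 0.05)                     # per-model estimation uncertainty

# Draw 100 coefficients from each model's sampling distribution and
# pool them: 100 models x 100 draws = 10,000 simulated coefficients,
# combining estimation uncertainty (the draws around each beta_hat)
# with imputation uncertainty (variation in beta_hats across datasets).
simulated = rng.normal(beta_hats[:, None], ses[:, None],
                       size=(m, draws_per_model)).ravel()
```

Inference is then based on the distribution of the pooled draws, e.g. their mean and percentile intervals.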
3.1. Potential endogeneity issue for 100RC membership
We recognize a potential threat to our causal identification strategy: endogeneity between 100RC membership and bond ratings. After all, the Rockefeller Foundation selected a subset of cities for membership in the 100RC (Gordon 2020). This selection process is a possible source of bias: cities with good standing in terms of fiscal and financial capacity, including higher bond ratings, might have been selected as members of 100RC.
To address this issue, we investigate the effect of 100RC membership on bond ratings through a difference-in-differences (DiD) design. We exploit the fact that all 100RC members joined the club in 2013 and maintained membership afterwards. This simultaneous and lasting treatment separates the pre-treatment and post-treatment periods, which allows us to causally identify the DiD estimator of the 100RC membership effect (Angrist and Pischke 2009). We use the following regression to estimate the DiD causal effect:
score_it = β_0 + β_1 TREAT_i + β_2 POST_t + δ_DiD (TREAT_i × POST_t) + ε_it

where TREAT_i is a binary indicator equal to one if the ith city is assigned to the treatment group (i.e., it joined 100RC in 2013) and zero otherwise; POST_t is a binary indicator equal to one for the years 2013 and afterwards and zero otherwise; and the variable of interest is δ_DiD, the coefficient on the interaction term. In this design, we use not only the standardized bond score but also each of the three agencies' ratings, transformed into numeric scores, as additional dependent variables.
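A minimal sketch of the DiD estimator on simulated data with a 2013 treatment cutoff (the group sizes, year window, and effect size of 0.3 are hypothetical assumptions, not our estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(2008, 2019)       # window around the 2013 treatment
n_treat, n_ctrl = 30, 50            # hypothetical split of 80 cities
true_effect = 0.3                   # assumed membership effect

rows = []
for i in range(n_treat + n_ctrl):
    tr = 1.0 if i < n_treat else 0.0
    level = rng.normal()            # city-specific baseline score
    for t in years:
        po = 1.0 if t >= 2013 else 0.0   # all members joined in 2013
        y = (level + 0.1 * po + true_effect * tr * po
             + rng.normal(scale=0.1))
        rows.append((tr, po, y))

treated, post, score = (np.array(c) for c in zip(*rows))

def group_mean(tr, po):
    return score[(treated == tr) & (post == po)].mean()

# DiD: (post - pre) change for treated minus (post - pre) for controls,
# which nets out city-specific levels and the common post-2013 shift.
did = (group_mean(1, 1) - group_mean(1, 0)) - (group_mean(0, 1) - group_mean(0, 0))
```

The four-group-means form shown here is numerically equivalent to the interaction coefficient δ_DiD in the regression above.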
[1] New York City spans five counties (Kings, Queens, New York, Bronx, and Richmond), across which its population is almost equally distributed. We therefore calculated the number of county-level emergency declarations for New York City by counting unique declarations across the five counties (rather than summing them). For other cities that span multiple counties, declarations are counted based on the county with the largest population.
[2] ND-GAIN provides indicators of city-level exposure to climate-induced risks. Because ND-GAIN data are not provided on an annual basis, they are not useful for a longitudinal study spanning 23 years.
[3] Including three additional types of emergency declarations (hurricane, flood, and snowstorm) in our original model does not significantly change its main findings.
[4] We were unable to find data on city-level cumulative debt burden for a sufficiently large number of cities. The Statistical Abstract of the United States provides data for only 30 cities, about 38% of the cities in our dataset. We believe that introducing fixed effects, along with a lagged dependent variable, accounts for idiosyncratic factors associated with municipal debt burden. In addition, the year fixed effects account for year-specific fiscal shocks, such as the 2007-2008 financial crisis, which would affect any city's fiscal health.
[5] We specified the following conditions to run the algorithm. First, we included all control variables, including county- and state-level emergency declarations (all types and wildfire-only). Next, we excluded our three independent variables (membership in 100RC, C40, and ICLEI) to mitigate possible bias from multiple imputation. We also allowed lags and leads of bond ratings (by one year) to be correlated with bond ratings in each year, to account for the fact that bond ratings are sticky over time. Lastly, we allowed bond ratings to change flexibly over time by assuming cubic time effects for all variables included in the imputation algorithm.