The work disincentive effects of the disability insurance program in the 1990s
Publisher : Elsevier - Science Direct
Journal : Journal of Econometrics, Volume 142, Issue 2, February 2008, Pages 757–784
In this paper we evaluate the work disincentive effects of the disability insurance (DI) program during the 1990s using comparison group and regression-discontinuity methods. The latter approach exploits a particular feature of the DI eligibility determination process to estimate the program's impact on labor supply for an important subset of DI applicants. Using merged survey-administrative data, we find that during the 1990s the labor force participation rate of DI beneficiaries would have been at most 20 percentage points higher had none received benefits. In addition, we find even smaller labor supply responses for the subset of ‘marginal’ applicants whose disability determination is based on vocational factors.
With labor force participation (LFP) rates of older males falling throughout the last three decades, researchers have sought to explain this phenomenon by examining the interaction between a number of different social insurance programs and LFP (Leonard, 1986; Burtless, 1999; Bound and Burkhauser, 1999). Among these, the disability insurance (DI) program has been identified as one of the primary potential reasons for the non-participation of prime-aged males in the labor force. Because its eligibility criteria imply a very high tax rate on earnings, the DI program has long been criticized for its apparent work disincentives. Despite an extensive body of research on this issue, there is little consensus among economists on the magnitude of these disincentives or on the role the DI program has played in the large decline in the LFP rate of older men. A better understanding of the incentive effects of the DI program is needed not only to explain DI's contribution to the changing employment rates of older men and women, but also to improve our ability to predict, explain, and manage the increasing costs of DI programs, a matter of great concern to decision makers, and to evaluate potential changes to the disability program. Participation in the DI program is the outcome of an individual's decision to apply for disability benefits combined with an eligibility determination decision. To the extent that incentives to apply vary across individuals and that eligibility criteria depend on various individual characteristics, disability benefit receipt cannot be treated as an exogenous explanatory variable in an LFP equation.
With generous income replacement ratios, particularly for low earners, there is an economic incentive for disabled individuals previously capable of work to stop working, and for people who are not truly disabled but who derive a high disutility from working to take advantage of the program (for example, by misreporting their health status). Similarly, the medical and vocational criteria used to determine eligibility for disability benefits result in large differences in characteristics between those receiving and not receiving DI benefits. Because some of these characteristics are likely to be unobserved by the econometrician, DI receipt should be treated as endogenous in an econometric analysis of its effect on labor supply. A popular way to deal with this problem has been to model LFP (or non-participation) as a function of the ratio of potential benefit levels to wages, known as the replacement rate. Most of the earlier empirical studies in this area analyzed the impact of the DI program this way, using cross-sectional data and traditional econometric regression models. In these models, non-participation is modeled as a function of the replacement rate and demographic and health characteristics such as age, education, and health status. In the best-known work of this type, Parsons (1980) estimated a non-LFP elasticity for prime-aged men (45–59) of 0.63, while Slade (1984) found an elasticity of 0.81. Two problems arise in such an analysis. First, by grouping both wages and benefit levels into the replacement ratio, the separate impacts of wages and benefit levels on non-LFP are confounded. Second, the actual benefit amounts participants receive depend on past earnings, and therefore on past work decisions, generating an additional direct source of endogeneity in the amount of benefits received.
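The replacement-rate setup described above can be sketched with simulated data. This is a purely illustrative example, not the paper's estimation: all numbers are hypothetical, and a linear probability model stands in for the probit-style specifications of the early studies.

```python
import numpy as np

# Illustrative sketch only: labor-force non-participation modeled as a
# function of the replacement rate, in the spirit of the early
# cross-sectional studies discussed above. All values are simulated.
rng = np.random.default_rng(0)
n = 5000

# Replacement rate: potential DI benefit as a fraction of the wage
replacement_rate = rng.uniform(0.3, 0.9, size=n)

# Simulate non-participation with a true positive replacement-rate effect
latent = -1.5 + 2.0 * replacement_rate + rng.normal(0.0, 1.0, size=n)
non_lfp = (latent > 0).astype(float)

# Linear probability model: non_lfp = a + b * replacement_rate + error
X = np.column_stack([np.ones(n), replacement_rate])
(a, b), *_ = np.linalg.lstsq(X, non_lfp, rcond=None)

# Elasticity of non-participation with respect to the replacement rate,
# evaluated at sample means (comparable in form to the elasticities
# reported by Parsons and Slade)
elasticity = b * replacement_rate.mean() / non_lfp.mean()
print(f"slope = {b:.2f}, elasticity = {elasticity:.2f}")
```

Note that grouping wages and benefits into a single ratio, as this sketch does, is exactly what creates the first problem noted above: the separate wage and benefit effects cannot be recovered from the slope coefficient alone.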
In an attempt to address this endogeneity problem, Haveman and Wolfe (1984a, 1984b) replaced the actual replacement rate with a predicted value obtained from a first-stage regression of the replacement rate on a set of exogenous variables. In contrast to the earlier studies, they found much lower elasticity estimates of between 0 and 0.03. To identify the replacement rate effect (or the separate wage and disability benefit effects), some exogenous variables that determine wages and/or disability benefits must be excluded from the LFP equation. However, without a convincing justification for their exclusion restrictions it is not clear how credible their estimates are. While these earlier cross-sectional studies based on US data either ignored the potential endogeneity of the replacement rate or relied on arbitrary exclusion restrictions for identification, three recent studies explore alternative identification approaches for dealing with the endogeneity of disability benefit receipt. Gruber (2000) exploits an exogenous policy change in Canada in 1987, in which the benefit levels of the rest of the country were adjusted upwards to meet those of the province of Quebec. Using data covering the 1985–1989 period, he estimates the elasticity of labor force non-participation with respect to DI benefit levels to be between 0.28 and 0.36. The identification approach and the credibility of his estimate depend on the validity of the assumption that any changes in the relative labor market conditions in Quebec as compared to the rest of the country during this period were uncorrelated with the differential change in DI benefits. Autor and Duggan (2003) also use differential time variation in average benefits across geographical regions to identify the impact of DI on the LFP of low-skilled workers.
Using state-level data from the CPS and the Social Security Administration (SSA), they exploit variation in the replacement rate due to differences across states and over time in the wage distribution to identify the effect for low-income workers. They maintain that the widening dispersion of earnings in the US, combined with the progressivity of the disability benefits formula and the fact that DI benefits are set nationally and do not adjust for variation in regional wage levels, provides an exogenous measure of program generosity independent of a worker's underlying taste for work. They conclude that the disability system provided many low-skilled workers with a viable alternative to unemployment, estimating that the overall unemployment rate in 1998 would have been half a percentage point higher in the absence of the DI program. Unfortunately, their reported estimates do not allow calculation of an elasticity that can be compared to those in other studies. Their identification strategy relies on the absence of other differences across states, both in the changes in labor market conditions over time and in the impact of such changes on labor supply, which seems problematic since variation in the wage distribution over time across states can itself be expected to directly affect labor supply. A very different approach to dealing with the non-comparability between DI recipients and non-recipients was suggested by Bound (1989). Instead of estimating the effect of benefits and wages on labor supply, Bound considers the more basic problem of evaluating the effect on labor supply of being a DI beneficiary. Arguing that rejected applicants should be much more similar to beneficiaries in observed and unobserved characteristics, he uses a sample of rejected disability applicants as a control group for DI beneficiaries and considers their LFP rate an estimate of the counterfactual LFP rate of DI beneficiaries.
Given that accepted and rejected applicants are not completely comparable, with rejected applicants generally being healthier, Bound argues that their work behavior forms an upper bound for the behavior of DI beneficiaries had they not been receiving benefits. He estimates in this way that the LFP rate of DI beneficiaries in his sample would have been at most 30 percentage points higher had they not received disability benefits. The validity of Bound's identification approach relies on several assumptions. First, the interpretation of the LFP rate of rejected applicants as an upper bound rests on the assumption that the only difference between rejected DI applicants and beneficiaries is that the former are in better health. While most rejections are based on an initial medical screening, the disability determination process also has an important component, discussed in detail later, that is based on vocational factors. As a result, rejected applicants and beneficiaries may differ on average not only in their health but also in other characteristics, such as their average pre-application earnings, work histories, age, and education level. Moreover, they may differ in their preferences for working. It may be the difference in these average characteristics, rather than DI benefit receipt, that leads to the lower labor force attachment of beneficiaries. Second, strictly speaking, Bound's identification procedure provides an upper limit for the effect of disability benefit receipt on the LFP rate of applicants only. For it to represent an upper limit of the effect of the DI program on total labor supply, the behavior of rejected DI applicants should be comparable to what it would have been if the DI program had not existed.
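The logic of the comparison-group bound described above reduces to a few lines of arithmetic. The LFP rates below are hypothetical placeholders, chosen only to mirror the magnitude of Bound's 30-percentage-point bound; they are not figures from his data.

```python
# Sketch of Bound's comparison-group bound. Rates are hypothetical.
lfp_rejected = 0.50       # LFP rate of rejected applicants (healthier on average)
lfp_beneficiaries = 0.20  # observed LFP rate of DI beneficiaries

# Because rejected applicants are healthier, their LFP rate can only
# overstate the counterfactual LFP of beneficiaries, so the gap is an
# upper bound on the work-disincentive effect of benefit receipt.
upper_bound = lfp_rejected - lfp_beneficiaries
print(f"effect of DI receipt on LFP rate <= {upper_bound:.0%}")
```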
As pointed out by Parsons (1991), the LFP of rejected applicants may underestimate what their LFP would have been had they not applied, if denied applicants are awaiting appeals or planning to reapply, or if the period they were out of work while applying for benefits makes it more difficult to return to work. Bound (1989, 1991) directly addresses both issues and presents evidence suggesting that neither is likely to invalidate his main findings. With respect to the first, he argues that most appeals should have been accounted for by restricting his sample to applicants who filed their applications at least 18 months before the survey dates; even if a denied applicant were to exhaust all appeals, the maximum time between application and award (including reconsiderations, appeals, etc.) would have been about 496 days in 1982. He does concede that some of the rejected applicants in his sample may be out of the labor force while planning to reapply, but argues that these represent only a small number of applicants who probably have a low opportunity cost of remaining out of the labor force (those in worse health and unemployed), so any resulting bias is likely to be small. Bound also argues that there is little evidence that rejected applicants face special problems in returning to work because of the time they were out of work while applying for benefits. While Bound's estimates are arguably among the most convincing to date, they were based on data collected in 1972 and 1978, and both the DI program and the population of older workers have changed considerably since then. Over the last 30 years, the screening process for DI applicants has become more formalized, especially with respect to borderline cases, and the administrative control of DI has undergone many changes. Between 1985 and 2004 the number of disabled individuals receiving DI increased by over 100%.
This increase is particularly striking given the enactment of the Americans with Disabilities Act in 1990, which should have provided more economic opportunity for the disabled and relieved some of the pressure on growing DI rolls. In light of these macroeconomic changes, as well as changes in society's attitude towards people on welfare over the last 30 years, an analysis of the impact of the DI program using more recent data is timely and important. Using matched survey-administrative data on DI applicants from the 1990s, our estimation approach consists of two separate components. We first apply Bound's comparison group approach to estimate an upper bound on the impact of disability benefit receipt on LFP, and discuss its underlying assumptions and their applicability to our data set. Second, we adopt a quasi-experimental approach to obtain a point estimate of the impact of disability benefit receipt for an important subgroup of applicants most likely to be the target of any policy reform: ‘marginal’ applicants who are not immediately awarded or denied benefits on the basis of their specific medical impairment and whose eligibility must be determined by considering vocational factors. The regression-discontinuity (RD) approach that we use to obtain this estimate exploits the fact that the eligibility determination process is based in part on an individual's age. This provides an intuitive way of obtaining a policy-relevant estimate of the program's impact on labor supply. While the literature to date has focused mainly on the labor supply of men, we extend our analysis to study the employment effects of the DI program on the labor supply of both men and women. In addition, we analyze both the short- and long-term employment effects of benefit receipt.
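The mechanics of an age-based fuzzy RD can be illustrated with a stylized simulation. Everything below is hypothetical (the 55-year cutoff, the jump sizes, the bandwidth, the simulated behavior); the point is only to show the basic Wald-style calculation of dividing the jump in labor supply at the cutoff by the jump in benefit receipt.

```python
import numpy as np

# Stylized fuzzy-RD sketch: the award rate jumps at a hypothetical age
# cutoff, and the local effect of benefit receipt on labor supply is
# the ratio of the two discontinuities. All numbers are simulated.
rng = np.random.default_rng(1)
n = 200_000
age = rng.uniform(45, 65, size=n)
cutoff = 55.0  # hypothetical grid cutoff, for illustration only

# Award probability jumps discontinuously at the cutoff
p_award = 0.30 + 0.25 * (age >= cutoff)
awarded = rng.random(n) < p_award

# Working: benefit receipt lowers the participation probability
p_work = 0.45 - 0.20 * awarded
works = rng.random(n) < p_work

# Compare narrow windows on each side of the cutoff
h = 1.0  # bandwidth in years (a tuning choice)
below = (age >= cutoff - h) & (age < cutoff)
above = (age >= cutoff) & (age < cutoff + h)

jump_lfp = works[above].mean() - works[below].mean()
jump_award = awarded[above].mean() - awarded[below].mean()
rd_estimate = jump_lfp / jump_award  # local effect of DI receipt on LFP
print(f"RD estimate: {rd_estimate:.2f}")
```

The estimate recovers the simulated negative effect of benefit receipt for applicants near the cutoff, which is the sense in which the RD design identifies a local, policy-relevant effect for ‘marginal’ applicants.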
Our estimates indicate that the work disincentive effects associated with DI benefit receipt during the 1990s were relatively modest, implying that the LFP rate of DI beneficiaries would have been at most 20 percentage points higher had they not received benefits. This result is robust to a variety of specification tests, appears insensitive to how much time has passed between the award date and the measurement of labor supply, and applies to male and female applicants as well as to applicants to both the SSDI and SSI programs. We find even smaller effect estimates for a group of ‘marginal’ applicants whose disability determination was based on vocational considerations. However, within this group estimates vary, with male and SSDI applicants generally showing somewhat larger labor supply responses than female and SSI applicants. The rest of the paper is organized as follows. We begin with an overview of the disability determination process and the program's benefit rules in Section 2. This is followed by a description of our data set in Section 3. In Section 4 we apply Bound's comparison group approach to estimate an upper limit on the impact of disability benefit receipt on the labor supply of DI applicants. We compare our results to those reported by Bound and discuss the validity of the conditions under which this estimate represents an upper bound. Section 5 describes a relatively unexplored feature of the DI program's disability determination procedure, the medical vocational grid, and shows how the use of this grid leads to discontinuities in the award rate as a function of the applicant's age. We explain how this can be used to identify and estimate the program's work disincentive effect for the policy-relevant subgroup of applicants whose disability status is determined by the grid.
In Section 6 we assess the sensitivity of our findings to the estimation method used, examine the validity of the underlying assumptions of our evaluation approach, and present estimates for several subsamples of DI applicants. We conclude in Section 7 with a discussion of the implications of our findings.
Conclusion
In this paper we assessed the work disincentive effect of the DI program during the 1990s, based on a new data set in which administrative disability application and award records were merged with the 1990–1996 panels of the SIPP. Using a comparison group approach suggested by John Bound, we estimate that during the 1990s the LFP rate of DI beneficiaries would have been at most 20 percentage points higher had none received benefits. Similarly, we estimate that DI receipt leads to a reduction in average monthly hours of work of at most 30 hours. We also apply a regression discontinuity approach and find even smaller labor supply responses for a group of ‘marginal’ applicants whose medical condition is more difficult to assess and whose disability determination is based on vocational factors. Based on our longer-term sample, the RD estimates indicate that DI decreases labor supply by approximately 16–20 hours per month and participation by 6–12 percentage points. This applicant group represents a non-trivial proportion of applicants and beneficiaries: between 1980 and 1990 the share of applicants who qualified on the basis of vocational criteria increased from 26% to 37% (Lahiri et al., 1995), and it represents 39% of all applicants in our sample. The RD estimates measure the average labor supply response for this group to a change in one component of the DI program, an age cutoff in the medical vocational grid. In terms of spending, the DI program is one of the largest social insurance programs in the United States: in 1999, over 100 billion dollars was spent providing medical benefits and cash payments to beneficiaries and their families. Credible estimates or bounds for the effect of DI on labor supply are therefore extremely important to policymakers. Increasing the age eligibility cutoff values would be one way to control the growing costs and caseload of the DI program that are likely to accompany the SSA's increase in the normal retirement age.
The regression discontinuity estimates presented in this paper indicate that a small increase in the age cutoffs would have only a modest overall effect on LFP. Our estimates also indicate that within this group responses vary with individual characteristics, with male and SSDI applicants generally showing somewhat larger labor supply responses than female and SSI applicants. Combined, our findings suggest that during the 1990s the work disincentive effects of the DI program were rather modest: a large majority of applicants would not have worked even if none had received disability benefits. At first sight these findings appear to be at odds with time series evidence presented by Bound and Waidmann (2002) of a close association during the 1990s between the fraction of the working-aged population receiving DI benefits and the growth in the fraction of the population that identifies itself as health-limited and out of work. These trends would be consistent with a movement of men and women in relatively poor health out of the labor force and onto the disability rolls, suggesting that those drawn to apply for disability benefits while the program was expanding during the 1990s would have worked had they not applied for, and in many cases been awarded, DI benefits. However, the results shown by Bound and Waidmann are inconclusive about the causality of the observed trends. In addition to an increase in generosity and a relaxation in the requirements to qualify for DI benefits, several other factors are likely to have contributed to the observed employment decline. Several researchers have attributed some of the decline in employment among the disabled to the introduction of the Americans with Disabilities Act of 1990, which took effect in 1992 and which increased the cost to employers of hiring such workers (DeLeire, 2000; DeLeire, 2003; Acemoglu and Angrist, 2001; Jolls and Prescott, 2004).
There is also evidence pointing to the more general role of declining labor market conditions for low-skilled workers during the 1990s (Juhn et al., 2002). Decreased opportunities for low-skilled workers during the early and middle part of the decade, coupled with decreasing real wages and increasing real disability benefit amounts for this segment of the labor force, are likely to have induced some employed and laid-off workers to apply for disability benefits. Paired with a liberalization of screening standards, the drop in demand for older, less-skilled workers in poor health may therefore have led to higher DI recipiency rates (Rupp and Stapleton, 1995). Our finding of a small work disincentive effect associated with disability benefit receipt, even smaller than those found for the 1970s, is therefore likely to reflect the unfavorable labor market conditions and the lower labor force attachment of DI applicants during this period. Between the 1970s (when Bound's data were collected) and the 1990s, there was also a significant increase in the labor supply of married women. If a working spouse provides a source of income maintenance, this too may have contributed to the smaller employment effect associated with benefit receipt. In conclusion, our results suggest that most of those drawn to the DI program during the 1990s would not have worked in the absence of disability benefits, but instead would have been unemployed or on other types of welfare support. Whether they would not work because of their medical conditions, unfavorable labor market conditions, or increased spousal labor supply is beyond the scope of this paper, but it is an important area for further inquiry.