Job search monitoring intensity, unemployment exit and job entry: Quasi-experimental evidence from the UK
|Article code||Publication year||English article||Persian translation||Word count|
|26728||2008||18-page PDF||Available to order||Not calculated|
Publisher : Elsevier - Science Direct
Journal : Labour Economics, Volume 15, Issue 6, December 2008, Pages 1451–1468
Because unemployment benefit reforms tend to package together changes to job search requirements, monitoring and assistance, few existing studies have been able to empirically isolate the effects of job search monitoring intensity on the behaviour of unemployment benefit claimants. This paper exploits periods where monitoring has been temporarily withdrawn during a series of Benefit Office refurbishments — with the regime otherwise unchanged — to allow such identification. During these periods of zero monitoring the hazard rates for exits from claimant unemployment and for job entry both fall.
Job search monitoring is the process of checking whether unemployed workers engage in sufficient search activity to qualify for receipt of unemployment benefits. Its purpose is to counteract the search disincentive effect of such benefits. Johnson and Klepinger (1994), Fredriksson and Holmlund (2005) and Manning (2005) present models in which search effort increases with the threshold search level required for eligibility. Intuitively, increasing the intensity of job search monitoring, which can be interpreted as the degree to which such search requirements are enforced, will have similar effects (for a discussion see Klepinger et al., 2002), and will therefore reduce the duration of unemployment spells and boost job entry rates. van den Berg and van der Klaauw (2006), however, introduce some ambiguity to this prediction. They present a model which differentiates between formal job search and informal job search. Formal job search is the label given to all those job search activities that are monitored by the benefits agency, e.g. visits to the employment service office, time spent reading newspaper job advertisements. Informal job search, on the other hand, is those search activities that are not monitored by the benefits agency, e.g. search through social networks. In this case, more intense job search monitoring leads to increased formal job search but reduced informal job search, with ambiguous overall impact on unemployment duration and job entry rates depending on which type of job search is most effective. Manning (2005) introduces ambiguity in a different way by showing that if search requirements are set too high, unemployed workers may respond by reducing search effort, ceasing to claim unemployment benefits, and moving into unregistered (non-claimant) unemployment or inactivity rather than into employment. 
Even without these theoretical ambiguities, there is a clear need for empirical evidence on the effects of job search monitoring because of its widespread use (e.g. see Martin and Grubb, 2001). Introducing theoretical ambiguity makes this even more crucial. Such empirical evidence, however, is rather thin on the ground. The main reason for this is that benefit reforms, although extensively evaluated, have tended to package together changes to job search monitoring with other changes, e.g. to job search requirements, job search assistance, or benefit rates, so preventing separate identification of monitoring impacts. Further, in the few cases where studies have looked for such impacts, they have found contrasting results. This paper exploits exogenous periods where job search monitoring was temporarily suspended, during a series of sometimes lengthy Benefit Office refurbishments across one region of the UK (Northern Ireland), to provide new quasi-experimental evidence on the impact of monitoring intensity on male unemployment durations and on the flow of unemployed men into employment and into other non-employment states including education or training and inactivity. Although job search monitoring was completely suspended during these periods, job search requirements, job search assistance services, and all other benefit characteristics were unchanged. So, these refurbishments represent a rare opportunity to identify the impact of monitoring intensity. The resulting estimates show that the suspension of monitoring increased average unemployment duration and reduced the hazard rate for job entry. In the context of van den Berg and van der Klaauw (2006), this suggests the positive impact of monitoring on formal search dominates the negative impact on informal search, at least for the monitoring intensities and benefit claimants considered here. 
Suspension of monitoring also affects the hazards for exits to non-employment states as Manning (2005) suggests, although the evidence in this respect is more mixed. The remainder of this paper is set out as follows. The following section briefly reviews the existing empirical literature on the impacts of job search monitoring. Section 3 provides details of the Benefit Office refurbishment programme and Section 4 discusses identification of the monitoring impacts. Section 5 describes the data and the hazard functions to be estimated. Section 6 presents and discusses the estimation results and Section 7 concludes.
English conclusion
Column two of Table 3 presents results from estimation of the MPH model for all JSA exits assuming a Weibull baseline hazard and gamma distributed unobserved heterogeneity, and including area and time fixed effects. Column three presents results from the same model with the addition of area-specific time quadratics. These are jointly significant at the 99% level, suggesting the presence of time-varying area-specific unobserved factors that affect the hazard rate. Subsequent discussion (and all other reported results) therefore focuses on models including these time quadratics. Note that the Weibull index parameter is greater than one, suggesting a gently upward-sloping hazard function. The gamma term captures significant unobserved heterogeneity at the level of the individual. The estimated effects of the policy dummies and covariates are presented in coefficient form, i.e. the βs from Eq. (1), and are interpretable as semi-elasticities. So, in the case of the binary treatment dummies, they indicate the percentage impact on the hazard rate (not the percentage point impact) of treatment. For the age covariate the coefficients indicate the percentage impact on the hazard rate of a one year increase in claimant age. An alternative interpretation — giving the multiplicative effect of treatment on the hazard rate — is given by taking the exponential of the reported coefficients, bearing in mind that exp(β) ≈ 1 + β for small β. The fixed effects are jointly significant and the control variables act in the expected directions, e.g. with hazard rates lower for older and single men and for those seeking unskilled employment (probably proxying for skills and/or qualifications). According to these single risk Weibull estimates, suspension of job search monitoring significantly reduces the hazard rate for JSA exit. So, in contrast to the findings of Ashenfelter et al. (2005), job search monitoring appears to matter.
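The coefficient interpretation described above can be checked with a few lines of code; the β values below are illustrative rather than taken from Table 3:

```python
import math

def hazard_effect(beta):
    """Convert an MPH coefficient (a semi-elasticity) into the exact
    multiplicative effect on the hazard rate and its first-order
    approximation 1 + beta."""
    return math.exp(beta), 1.0 + beta

# Illustrative coefficient of -0.17 (roughly a 17% fall in the hazard):
ratio, approx = hazard_effect(-0.17)
print(f"hazard ratio = {ratio:.3f}, 1 + beta = {approx:.3f}")
```

For small coefficients the two numbers nearly coincide, which is why the βs can be read directly as percentage impacts; for larger coefficients the exponential form is the safer reading.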
The magnitude of the effect — reducing the hazard rate by 17% — suggests an associated increase in average claim duration of 16%, somewhat larger than the 10% change in UI claim duration found by Klepinger et al. (2002). In other words, unemployment spells would on average last for 16% longer in a regime with no monitoring compared to a regime with the original level of monitoring under JSA. J&B — the new regime of tougher monitoring coupled with enhanced job search assistance — increases the hazard rate for JSA exits by an estimated 31%, which implies a reduction in average claim duration of almost one third. These are big estimated treatment effects, but how confident can we be that they are robust? The assumption of a Weibull baseline, although commonly adopted, imposes monotonicity, which, according to Fig. 1, may be inappropriate. Incorrectly imposing such a restriction can lead to biased estimates of coefficients on time-varying covariates (Narendranathan and Stewart, 1993). This is important here because both the zero monitoring and J&B dummies are time-varying (the other covariates are measured at start of spell). To check sensitivity to this Narendranathan and Stewart recommend estimating Cox Proportional Hazard (CPH) models with unrestricted baselines. The corresponding results are presented in the fourth column of Table 3. Note that unobserved heterogeneity cannot be included in a CPH model without the presence of multiple integrals of the same order as the number of individuals in the risk set (Han and Hausman, 1990). So, given the size of the data set here, the CPH model is estimated without controlling for unobserved heterogeneity. Encouragingly, the CPH results are very similar to the Weibull MPH results, with zero monitoring leading to an estimated 15% fall in the hazard rate and J&B leading to an estimated 27% rise. 
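The link between a proportional hazard shift and the implied change in average claim duration follows from the Weibull form: mean duration scales with the hazard multiplier raised to the power −1/p, where p is the shape (index) parameter. A minimal sketch, assuming an illustrative shape of p = 1.25 (the text reports only that the index parameter is greater than one):

```python
def duration_change(hazard_change, shape):
    """Percentage change in mean spell duration implied by a proportional
    shift in a Weibull hazard: mean duration scales with the hazard
    multiplier raised to the power -1/shape."""
    multiplier = 1.0 + hazard_change   # e.g. -0.17 means the hazard falls by 17%
    return multiplier ** (-1.0 / shape) - 1.0

# A 17% fall in the hazard with an assumed shape p = 1.25 implies
# roughly a 16% rise in mean duration, consistent with the text.
print(f"{duration_change(-0.17, 1.25):+.1%}")
```

With shape equal to one (an exponential baseline) the formula collapses to the familiar inverse relationship between a constant hazard and mean duration.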
Results for a piecewise constant model (see Meyer, 1990) — estimated as a further test of the robustness of the results to the assumed form of the baseline hazard — are presented in column five of Table 3. In this case the daily duration data are aggregated into monthly groups and zero monitoring start and end dates are assigned to the nearest months. Estimation of such a model requires expansion of the data so that each month of each spell is represented by a separate row in the data array. To keep things manageable the piecewise constant model is therefore estimated on a half sample of the data consisting of all spells on JSA taken from every second local area in the roll-out schedule. Zero monitoring is again estimated to have a negative impact on the hazard rate for unemployment exit, although with a slightly smaller magnitude — close to that found by Klepinger et al. (2002) — compared to the Weibull and Cox estimates. J&B again has a positive impact on the hazard rate. Similar estimates are obtained using the other half sample. So, where Anderson (2001) notes a 10% difference in average UI duration between Klepinger et al.'s (2002) zero monitoring and ‘tough’ monitoring regimes, here the estimated difference in average claim duration implied by the monitoring suspension estimates presented in Table 3 ranges from 10% to 16%. Two factors might contribute to these apparently larger monitoring impacts. First, the standard monitoring regime under JSA, involving fortnightly face-to-face interviews with Benefit Office staff, could be viewed by claimants as tougher than either of the (mail based) monitoring regimes in the Maryland Work-Search Demonstration, with suspension of monitoring therefore representing a more significant regime change in Northern Ireland than in Maryland. 
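The data expansion required for the piecewise constant model is the standard episode-splitting step: each spell becomes one row per month at risk, with the exit indicator switched on only in the final month of a completed spell. A minimal sketch (field names are hypothetical, not the paper's actual data layout):

```python
def split_spell(spell_id, duration_months, exited):
    """Expand one unemployment spell into one row per month at risk,
    as required for a piecewise-constant hazard model. The exit
    indicator is 1 only in the month a completed spell ends."""
    rows = []
    for month in range(1, duration_months + 1):
        last = month == duration_months
        rows.append({
            "spell": spell_id,
            "month": month,  # risk interval within the spell
            "exit": 1 if (last and exited) else 0,
        })
    return rows

# A 3-month spell ending in exit becomes three person-month rows.
for row in split_spell("A1", 3, exited=True):
    print(row)
```

Censored spells (still ongoing at the end of the observation window) get a zero exit indicator in every row, which is why the expanded data set can grow large enough that estimating on a half sample becomes attractive.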
Second, where the Maryland experiment randomly assigned a sample of UI claimants in each area to reduced or increased monitoring, in Northern Ireland it was the population of claimants in each area that was subjected to zero monitoring. This suggests greater scope for social interaction effects between claimants under the Northern Ireland zero monitoring regime, e.g. through leisure complementarities, to reinforce the ‘direct’ treatment effect on individual claimants. Table 4 presents estimates from the independent competing risks Weibull MPH model. Almost half (44%) of all exits from JSA are recorded by Benefit Office staff as exits to employment, 8% as exits to education or training and 10% as exits to other benefits — mostly incapacity benefits and other means-tested social welfare — with payment unconditional or less conditional upon job search. The remaining 38% are recorded under various categories including failure to turn up to a fortnightly monitoring interview and exit to unknown destination, which are here classified as exits to ‘other destinations'. As for the single risk models, the fixed effects are jointly significant and control variables act on the hazard rates in the expected directions. According to these estimates suspension of monitoring reduces the hazard rate for exits to employment (job entry) by 26%. Although we do not observe job search directly, this effect is consistent with a significant reduction in average job search effort. In the context of van den Berg and van der Klaauw (2006) the suggestion is that unemployed workers are not substituting for reduced formal search with increased informal search, or that any additional informal search is less effective than the lost formal search. To put the result in the more usual way, the hazard rate for job entry is increasing with the degree of monitoring, and significantly so both in the statistical and economic senses. In contrast, Klepinger et al.
(2002) find no significant impact on employment entry. This, then, is the first such result reported in the literature, and to the extent that it can be generalized beyond time and place, it provides strong support for the standard theory and for policy reforms that seek to tighten search monitoring. What of other kinds of exits from unemployment? Remember the prediction of Manning (2005) that making the unemployment benefit regime tougher could drive some claimants out of registered unemployment into unregistered unemployment or inactivity. Table 4 shows that the hazard rate for exits to education and training is increased during zero monitoring by an estimated 36%, albeit from a low base. The explanation for this apparent effect is not immediately clear. If we naively interpret education as a form of inactivity — a view that is not uncommon amongst parents of students — then Manning's (2005) model implies that removal of job search monitoring would make JSA more attractive and, if anything, would reduce the hazard rate for such exits. It may be that the threat of tougher monitoring to come under the J&B regime, with suspension of monitoring always preceding the implementation of J&B, drives this apparent effect, i.e. it is anticipatory (see Black et al., 2003). It could also be that some unemployed workers respond to reduced monitoring of job search by increased search for education or training opportunities in the spirit of van den Berg and van der Klaauw (2006). Although this estimated impact is robustly non-negative, it is not robustly significant, as shown in the sensitivity analysis presented in Table 5. Suspension of job search monitoring reduces the hazard rate for exits to other benefits by an estimated 8%.
This is the only category that unambiguously corresponds to exits to inactivity and is therefore a better test of Manning's (2005) prediction that unemployed workers might respond to a tougher regime by exiting registered unemployment into unregistered unemployment or inactivity, i.e. by moving further from the labour market. The results are suggestive of such an effect, although it is small compared to the job entry effect. Table 5 also suggests sensitivity of this particular estimate to functional form and other assumptions. Finally, suspension of job search monitoring reduces the hazard rate for exits to ‘other destinations' by 29%. Because this category of exits includes those that are removed from the JSA Register simply because they fail to turn up to a fortnightly monitoring interview, there will inevitably be a negative impact on the hazard rate when all monitoring interviews are in any case suspended. Unfortunately, because of data constraints this somewhat ‘mechanical’ impact of suspension of monitoring cannot be separated here from what Manning (2005) has in mind when he predicts some of the claimant unemployed might respond to a tougher regime by ceasing to claim JSA but nevertheless remaining ‘non-claimant unemployed’. (But of course the finding that zero monitoring affects the hazards for the other competing risks shows that the overall effect of monitoring on unemployment duration reflects a ‘real’ impact and not just this ‘mechanical’ impact.) Now consider the subsequent implementation of the new J&B regime combining tougher monitoring and enhanced job search assistance. The single risk estimates presented in Table 3 suggest that J&B increases the hazard rate for exits from unemployment by 31%. 
Interestingly, the competing risks estimates presented in Table 4 show that this overall effect is driven by positive impacts on the hazards for exits to education and training, to other benefits and to other destinations, and not by a positive impact on the hazard for exits to employment. In trying to explain this zero job entry impact we are constrained by the fact that we cannot separately identify the effects of monitoring changes from the effects of job search assistance changes making up the overall J&B package. We can, however, sketch a number of possible scenarios that could drive these results. First, a positive impact of enhanced monitoring might counteract a negative impact of enhanced job search assistance. But this requires enhanced job search assistance to have a counter-intuitive impact on job entry (for a model, see van den Berg (1994)). Second, it may be that monitoring is not tougher in practice under the new regime than under the old regime and that the zero J&B impact on the job entry hazard reflects a zero impact of enhanced job search assistance. This also seems unlikely, however, given the nature of the reforms and given that the hazards for exits to education, other benefits and other destinations are all significantly increased after the introduction of J&B. The most interesting scenario is that moving from an already tough monitoring regime to an even tougher regime might in fact lead to meaningful substitution between formal and informal search à la van den Berg and van der Klaauw (2006), or to substitution of exits to non-employment for exits to employment à la Manning (2005). Both forms of substitution are reconcilable with the zero job entry effect and positive non-employment exit effects of J&B. Again we see the need for more empirical studies — covering different levels of intensity and different directional changes in intensity — of the impact of search monitoring on job entry.
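Under the independent competing risks assumption used in Table 4, each destination-specific hazard can be estimated by treating exits to all other destinations as censoring. A minimal sketch with constant (exponential) hazards and made-up spell data; the paper's Weibull MPH model with covariates and frailty is more elaborate:

```python
def cause_specific_rate(spells, cause):
    """Maximum-likelihood exit rate to one destination under independent
    competing risks with constant (exponential) hazards: exits to the
    destination divided by total time at risk. Exits to any other
    destination, and ongoing spells, count only as exposure."""
    events = sum(1 for duration, exit_cause in spells if exit_cause == cause)
    exposure = sum(duration for duration, _ in spells)
    return events / exposure

# Made-up spells: (duration in months, exit destination; None = censored)
spells = [(3, "job"), (6, "job"), (2, "benefits"), (8, "job"), (5, None)]
print(f"job-entry hazard: {cause_specific_rate(spells, 'job'):.3f} per month")
print(f"other-benefits hazard: {cause_specific_rate(spells, 'benefits'):.3f} per month")
```

The independence assumption enters through the fact that each destination's rate is estimated separately, ignoring any correlation between the latent propensities to exit to different destinations; the dependent-risks robustness check discussed below relaxes exactly this.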
Table 5 presents estimated hazard ratios for the zero monitoring dummy from variations of the model in order to test the sensitivity of the results to particular modelling assumptions. First consider clustering of errors. The Weibull models discussed above allow for individual-specific unobserved heterogeneity and associated clustering of errors. Despite area fixed effects and time quadratics, however, there may still be residual correlation within JBO areas which, if ignored, could lead to downward bias in the standard errors reported in Table 3 and Table 4 (see Moulton, 1990 and Bertrand et al., 2004). In contrast to many of the evaluations discussed by Bertrand et al. (2004), the clear economic significance of the estimated treatment effects here, and the amount of leeway in terms of statistical significance, e.g. with t-ratios ranging from −9 to −24 for the various estimated single risk treatment effects in Table 3, suggests any such bias is unlikely to materially affect our conclusions. Nevertheless, sensitivity to this is examined in two ways: first by estimating the Weibull model allowing for area-level clustering (for comparison purposes the Weibull model is also estimated without area-level clustering and omitting the individual unobserved heterogeneity term); second by using a bootstrap technique with area-level clustering, similar to that suggested by Bertrand et al. (2004), to re-estimate the Weibull-with-unobserved-heterogeneity model. The standard errors for the zero monitoring coefficients are slightly smaller in the version of the model with no clustering than in the standard version. In the model allowing for area-based clustering the standard errors are up to 4.5 times larger than the version with no clustering. In no case, however, does this make any qualitative difference to the statistical significance of the results: all estimates continue to be significant at the 99% level, and coefficients are of course unaffected.
Similarly, the bootstrapped standard errors are up to six times larger than in the standard Weibull case, with the only qualitative difference in results concerning the statistical significance of the impact of zero monitoring on exits to other benefits. Second consider the specification of the baseline hazard. Table 3 presents estimates from single risk models with various baseline specifications. Table 5 extends this to the competing risks estimates, with the key results presented for Weibull, Cox, piecewise constant and lognormal variants of the model. (Note that the lognormal version of the model — included because it does not impose monotonicity like the Weibull model — is estimated as an Accelerated Failure Time (AFT) model. The equivalent AFT estimates with Weibull baselines, which themselves correspond to the estimates for the standard MPH model presented in Table 3 and Table 4, are presented for comparison purposes. Briefly, the AFT model is given by ln t_j = M_jt δ + X_jt β + z_j for spells j = 1,…,N, and results are presented in the form of coefficients indicating the impact of a one unit change in the covariate on the log spell duration, with negative values indicating effects that shorten durations and vice versa. For more details see van den Berg (2001) and StataCorp (2003).) The precise estimates of the zero monitoring impact do vary somewhat across the different versions of the model, i.e. there is some sensitivity in terms of magnitudes, but the Weibull, Cox, piecewise constant and lognormal specifications differ qualitatively only with respect to the statistical significance of the impact of zero monitoring on the education hazard and the other benefits hazard. There is no ambiguity in terms of signs and there is no ambiguity in terms of the negative impact of monitoring suspension on the hazards for job entry and for exits to ‘other destinations’. Again, the picture does not change when clustering of errors is accounted for.
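The area-level bootstrap used above can be sketched generically: resample whole areas with replacement and re-estimate, so that any within-area correlation is carried into the sampling variation rather than averaged away. The estimator below is a placeholder sample mean, not the paper's hazard model, and the area names and data are made up:

```python
import random
import statistics

def cluster_bootstrap_se(data_by_area, estimator, reps=200, seed=1):
    """Bootstrap standard error with area-level clustering: draw whole
    areas with replacement, pool their observations, re-estimate."""
    rng = random.Random(seed)
    areas = list(data_by_area)
    estimates = []
    for _ in range(reps):
        sample = [rng.choice(areas) for _ in areas]  # resample clusters, not rows
        pooled = [x for area in sample for x in data_by_area[area]]
        estimates.append(estimator(pooled))
    return statistics.stdev(estimates)

# Toy data: outcomes grouped by Benefit Office area.
data = {"north": [1.0, 1.2, 0.9], "south": [2.1, 1.8], "west": [1.5, 1.4, 1.6]}
print(f"cluster-bootstrap SE: {cluster_bootstrap_se(data, statistics.mean):.3f}")
```

Because whole clusters are resampled, observations from the same area always enter or leave a bootstrap replication together, which is what makes the resulting standard errors robust to within-area correlation.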
Sensitivity to the assumption of the independence of the competing risks is examined by estimating a dependent competing risks Weibull model where, broadly following the approach of Eberwein et al. (1997), the unobserved heterogeneity term is assumed to be perfectly correlated, rather than uncorrelated, across the risks. In other words, α_k = δ_k α, with α assumed to be distributed according to a gamma distribution as before. The only qualitative difference between the independent and dependent competing risk estimates concerns the statistical significance or otherwise of the zero monitoring impact on the hazard for exits to other benefits. We already have cause to question the significance of this particular estimate, however, and the other results stand up well. Finally, because of statistically significant differences between covariate means for treatment and comparison areas (see Section 4) the Weibull single risk and independent competing risks models are re-estimated on a sub-sample of local areas omitting all Benefit Offices from the urban centres of Belfast and Londonderry. These city Benefit Offices tend to display the most extreme covariate means and their exclusion removes any significant contrast in observables between the treatment and comparison areas. Again, the results are robust. Not only does this increase our confidence that we are indeed identifying the impacts of monitoring and not something unobserved and not otherwise controlled for, but it also gives little indication that suspension of monitoring might affect behaviour differently in urban and rural contexts. To sum up, the overall picture is that suspension of monitoring has a robust negative impact on the single risk hazard rate for exits from unemployment corresponding to an increase in average unemployment duration of between 10% and 19%. There are similarly robust impacts on the hazards for exits to employment and exits to other destinations.
The impact on the hazard rate for exits to education and training is robustly non-negative but not always statistically significant, and the impact on the hazard rate for exits to other benefits is robustly non-positive but likewise not always statistically significant.