Screening applicants for employment and training programs: the AFDC Homemaker–Home Health Aide Demonstrations
|Article code||Publication year||English article length||Persian translation|
|16323||2002||23-page PDF||available to order|
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Labour Economics, Volume 9, Issue 2, April 2002, Pages 279–301
Government employment and training programs typically do not have sufficient resources to serve all those who apply for assistance. Those to be served are usually selected by program staff based on management guidelines that allow considerable policy discretion at the local level. A longstanding issue in employment and training policy is whether allowing this flexibility leads to selection of applicants (1) most likely to benefit from the program or (2) who are likely to experience the highest absolute outcomes in the absence of program services, sometimes called “creaming”. The distinction is crucial to the success of many programs, both as redistributional tools and as economic investments. Selection of those most likely to benefit from the program—i.e., those for whom the program's impact on subsequent labor market success will be greatest—will maximize the social return on the investment in training. In contrast, “creaming” may lead to little or no social benefit or to a substantial gain, depending on whether those selected for training—the group most likely to succeed without the treatment—in fact benefit most from it. The redistributional effects of a program will also depend on who is served: among the applicant group, a more equal distribution of economic well-being, ex post, will be achieved only if the program favors applicants likely to do worst without the intervention. This paper explores the role of creaming in the operation of seven welfare-to-work training programs, the type of programs that have been the focus of increased expenditures over the last 10 years as more and more welfare recipients have been pushed to become self-sufficient. It considers whether the program intake practices adopted in the studied programs furthered the social goals pursued and, if not, what consequences they had on the twin concerns of distributional equity and economic efficiency. 
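The efficiency stake in this distinction can be illustrated with a toy simulation (all numbers and the negative correlation between baseline earnings and training gains are assumptions chosen only for illustration): when individual gains are largest for those who would do worst without the program, "creaming" on expected outcomes forgoes much of the attainable return relative to selecting on expected impact.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical applicant pool

# y0: earnings an applicant would have without training.
# tau: the individual gain from training, assumed here to be
# negatively correlated with y0, so "creaming" misses the biggest gains.
y0 = rng.normal(10_000, 2_000, n)
tau = 3_000 - 0.2 * (y0 - 10_000) + rng.normal(0, 500, n)

k = 200  # available training slots

cream = np.argsort(y0)[-k:]    # select highest expected outcomes
target = np.argsort(tau)[-k:]  # select largest expected impacts

gain_cream = tau[cream].sum()
gain_target = tau[target].sum()

# Selecting on impact can never yield less total gain, and under the
# assumed negative correlation it yields substantially more.
assert gain_target > gain_cream
```

The comparison only runs in one direction, of course: in real programs tau is not observed at intake, which is exactly why the paper asks whether staff ratings proxy for it.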
The analysis begins by reviewing the history of the creaming issue and its importance in the literature. A unique data set is then examined to discover the factors that influenced admission decisions in seven state-run employment and training programs for welfare recipients and how those decisions played out in terms of the in-training performance and later labor market outcomes of program participants. The principal conclusions are that these programs “creamed” the most able applicants on both observable and unobservable characteristics, but that this targeting did not systematically affect the size of program impacts or the return on investment.
Conclusion
The principal conclusions of this analysis concern applicant targeting and its implications for the in-program and post-program accomplishments of the AFDC Homemaker–Home Health Aide Demonstrations. Intake workers for the seven state-run demonstrations relied largely on their perceptions of unmeasured applicant characteristics when selecting applicants for admission. More conventionally measured background characteristics explain only 5–13% of the variation in workers' potential ratings for applicants in six of the seven projects. The most consistently valued measured characteristics are those related to education and work experience. However, potential ratings were only weak predictors of trainee performance in the demonstrations' classroom, practicum, and subsidized employment components. They explain less than 6% of the variation in classroom training performance and little or none of the variation in performance during practicum (on-the-job) training and subsidized employment. Moreover, intake staff could not have predicted in-program performance much better on the basis of the information available at intake; all baseline measures taken together rarely explain more than 10–15% of the variation in in-program performance. Similarly, workers' ratings of trainee potential explain less than 10% of the variation in follow-up earnings and welfare benefits. When other baseline characteristics are added, it is possible to explain 15–20% of the variation in earnings and 30–40% of the variation in monthly welfare benefits. Intake workers consistently rated more favorably (and, presumably, selected) those applicants who had a higher level of earnings and a lower level of welfare benefits over the follow-up period, even controlling for their measured baseline characteristics. They were not able, however, to consistently identify those applicants for whom the program would do most to increase earnings or reduce welfare benefits. 
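The "percent of variation explained" figures above are ordinary R² values from regressions of ratings (or outcomes) on baseline characteristics. A minimal NumPy sketch with entirely synthetic data (the coefficients and distributions are assumptions, chosen only to land in the reported 5–13% band) shows the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical applicants

# Assumed measured baseline characteristics: years of education
# and years of work experience.
educ = rng.normal(12, 2, n)
exper = rng.normal(3, 1.5, n)

# Simulated intake-worker rating: weakly related to measured
# characteristics, mostly driven by unobserved factors (the noise term).
rating = 0.15 * educ + 0.10 * exper + rng.normal(0, 1.0, n)

# OLS via least squares, then R^2 = 1 - SS_res / SS_tot.
X = np.column_stack([np.ones(n), educ, exper])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
resid = rating - X @ beta
ss_res = resid @ resid
ss_tot = (rating - rating.mean()) @ (rating - rating.mean())
r2 = 1 - ss_res / ss_tot
```

With these assumed parameters the signal variance is roughly 0.11 against a noise variance of 1, so the fitted R² comes out near 0.1, i.e. in the same range the intake workers' measured characteristics achieved.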
In short, intake workers' ratings were consistent with the "creaming" goals of the demonstrations and with an admissions pattern that did not realize the largest possible returns to training through increased participant earnings and reduced welfare benefits. Whether these conclusions can be generalized to other programs is, of course, an open question. The consistency of the major findings across the seven state demonstrations gives some assurance that these results are not a matter of chance, but they are conditional on what the seven demonstrations had in common, including their intake priorities. Moreover, the demonstrations all dealt with a relatively narrow, homogeneous population (AFDC recipients interested in employment) and were staffed primarily by social workers and home health professionals rather than employment and training professionals. Thus, the results are certainly not directly applicable to most other training programs. We therefore urge others evaluating employment and training programs or demonstrations to collect the relatively simple baseline data used here, including intake staff ratings of each applicant relative to the stated goals of the program's targeting strategies, needed to replicate this analysis in other programmatic settings.