The average hospital cost
|Article code||Publication year||English article pages||Persian translation|
|50||2005||29-page PDF||Available to order|
Publisher : Elsevier - Science Direct
Journal : Accounting, Organizations and Society, Volume 30, Issue 6, August 2005, Pages 555–583
In 1998, the UK government introduced the National Reference Costing Exercise (NRCE) to benchmark hospital costs. Benchmarking is usually associated with “excellence”; the government emphasised the raising of standards in the 1997 White Paper “The New NHS: Modern, Dependable” that heralded the NRCE. This paper argues that the UK “New Labour” government's introduction of, and increasing reliance on, hospital cost benchmarking is promoting “averageness”. Average hospitals will be cheaper to run and easier to control than highly differentiated ones; they may also score more highly on certain measures of service improvement. The paper aims, through empirical investigation, both to demonstrate how the activities and processes of hospital life “become average” as they are transformed to comply with the cost accounting average and to indicate how the “average” is being promoted as the norm for hospitals to aspire to. To benchmark to average costs, comparisons are necessary. To compare hospital costs involves the creation of categories and classification systems for clinical activities. Empirical evidence shows that as doctors, patients and clinical practices are moulded into costed categories, they become more standardized, more commensurate and the average hospital is created.
Health care is expensive; funding it puts a significant burden on national governments worldwide. Acute care in hospitals is particularly costly and an explosion in medical technologies, associated with the rapidly growing science of genetics, looks likely to make it more so. Hospitals are diverse and differentiated places, controlled by medical elites, and not readily transparent to organizational review. Yet spending on healthcare, investment in hospitals and demonstrations that illness is being “conquered” are persuasive symbols that any government “cares”. Given this situation, it is to be expected that governments would like more control over both hospital costs and the medical profession. The “average” hospital may offer a way of achieving the goals of less costly healthcare and less sovereign clinicians. The average hospital has a cost index score of 100; this paper tracks the complex processes that create the hospital of average cost. Mapping costs on to the highly differentiated activities of health care to create averages is difficult and problematic. Yet, in the UK, there is a strong political will to use the average cost both as a specific measure to compare hospital performance and, generally, as a benchmark to control activities in health care. In this paper we aim, through empirical investigation, first, to demonstrate how the activities and processes of hospital life “become average” as they are transformed to comply with the construction of the cost accounting average and, second, to indicate how the “average” is being promoted as the norm for hospitals to aspire to. Walgenbach and Hegele (2001) point out a central paradox of benchmarking: through benchmarking, organizational processes become increasingly similar (DiMaggio & Powell, 1991). This similarity erodes competitive advantage; hence, in the longer term, all an organization can expect from benchmarking is to become a “good average”.
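The idea of a cost index score of 100 for the average hospital can be illustrated with a small sketch. The formula below, index = (hospital cost / national average cost) × 100, and the function name are assumptions for illustration only; this is not the NRCE's published methodology.

```python
# Illustrative sketch of a hospital cost index benchmarked to the
# national average, where a score of 100 represents the "average hospital".
# The formula (actual / national average * 100) is an assumption for
# illustration, not the NRCE's published calculation.

def cost_index(hospital_cost: float, national_average_cost: float) -> float:
    """Index of 100 = average; above 100 = more expensive than average."""
    return round(hospital_cost / national_average_cost * 100, 1)

# Hypothetical per-episode costs for one HRG (not real NRCE data):
national_average = 1200.0
print(cost_index(1200.0, national_average))  # 100.0 -> the "average hospital"
print(cost_index(1500.0, national_average))  # 125.0 -> 25% above average
print(cost_index(960.0, national_average))   # 80.0  -> 20% below average
```

On this reading, benchmarking pressure pushes every hospital's score toward 100, which is the paper's point about "averageness".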
In the private sector, striving to be “average” is not an obviously advantageous strategy. However, for an expensive public sector activity like health care (which is financed from taxation and where competitive advantage between institutions for “customers” is not an issue) a benchmarking strategy that results in all hospitals becoming “more average” has political appeal. Average hospitals would be cheaper to run and easier to control than highly differentiated ones. Before government intervention, evidence did not indicate that UK hospital costs tended to the average; rather there were some quite astonishing Healthcare Resource Group (HRG) comparisons. Below, two particular HRGs (one surgical, one medical) are illustrative of the range of reported cost variability. The British government proclaimed that these differences pointed to differing underlying levels of efficiency (see next section). Differential efficiency in cost performance can arise in three ways: first, from differences in the unit cost of resources used in hospitals (e.g. direct costs such as salaries and consumables); second, from differences in the running costs for hospital facilities (e.g. infrastructure costs and overheads); and, third, from variations in the clinical practices that drive cost (e.g. the skill mix employed in patient care, the use of diagnostic tests, the allocated theatre time and the designated length of stay in hospital post-procedure). Clearly, not all of these costs are controllable; in particular, infrastructure costs are fixed. Moreover, cost reduction may impact adversely on the quality of care delivered; despite this, hospitals are considered responsible for controlling their costs. But the extent of the HRG cost variations reported initially in hospitals raised questions about the meaningfulness of the efficiency comparisons being made; there were other factors, besides efficiency, impacting upon the costs reported above.
Northcott and Llewellyn (forthcoming) identified 10 different influences on reported costs and grouped them into four categories: first, differences in costing approaches (variations in cost allocation practices and differences in how costed “care profiles” are produced); second, variations in underlying clinical activities “legitimately” related to patient need but not adjusted for in HRGs; third, issues of information quality (differences in clinical coding, differences in the counting of activity and variations in the data collection capacity of Trusts' information systems); and fourth, the “efficiency” differences outlined above. In sum, before government action, the reported costs of supposedly similar clinical activities across hospitals varied dramatically. In part, this reflected the complexity of measurement, but the startling extent of the variability also resulted from the hospitals' not taking the costing of medical work “seriously” (see empirical sections below). From the government's perspective, these “measurement muddles” (real or intentional) obscured the efficiency question: were some hospitals wasting resources? Put more formally, were there “unacceptable variations in performance” (see next section) in UK hospitals? Until measurement practices were “tightened up” or “modernized” the relative efficiency of UK hospitals could not be assessed. So the government introduced the National Reference Costing Exercise (NRCE) and the National Reference Costing Office (NRCO) first, to prescribe cost measurement protocols, second, to calculate cost results and, finally, to publish information on the relative cost efficiency of hospitals. But “measurement” is not only a technical issue; all measures “…construct a commensurability that did not exist before their calibration” (Latour, 1993, p. 113).
HRG costing necessitates the classification, counting and coding of clinical activities and, through these processes, work in hospitals becomes more standardized. Moreover, once a cost average is published it becomes the visible standard against which institutions compare themselves; in the absence of other measures, the average becomes, by default, the operational norm for hospital activity. The benchmarking of British hospitals via the NRCE compares their performance against a standard, in this case an average cost. The concept of the “standard” is equivocal: either an exemplary or an average performance can be implied. The “average cost” benchmark plays on this ambiguity by establishing the average performance as the one to be aimed for. The complex processes of classification, coding and counting (entailed in the measurement of the average cost) standardize hospital activities. The publication of the average cost encourages hospitals to aim for the average. This “encouragement” is now backed up through a “standard tariff” for HRGs; since 2002, UK hospitals must ensure that their activities take account of the average as they are now funded on the basis of the average cost. This paper is structured as follows. The next section explores the policy background to the introduction of the National Reference Costing Exercise (NRCE), and introduces the theoretical underpinnings of the paper, work drawn mainly from Latour and writers in the sociology of science tradition. Then the research design is explained, before the empirical sections (“Being Average”; “Constructing Commensurability for Averages” and “Making Clinical Activities More Average”) are presented. The interview data for the study is explored through critical discourse analysis. Identified themes are: the uniformity introduced by classification; the contemporary significance of information; and the construction of commensurability.
These themes contribute to a fuller understanding of standardization and “averageness”. The paper ends with a discussion of the impact of “the average” on hospitals; finally, there are some concluding comments on the international interest generated by the UK's cost index for hospitals.
Conclusion (English)
The key argument of this paper is that hospitals are more average places as a consequence of the introduction of HRG reference costs. The text and talk presented here support this conclusion in several ways. First, statistics gathered over the five years of the reference cost exercise show that in 2001/2002 the percentage of Trusts within 10% of the average cost jumped to 72% (having been around 60% across the first four years); this movement toward the average seems likely to continue following the government announcement in 2002 that hospitals are to be funded on the basis of the average HRG cost. Second, the talk of key players concerned with the reference costs (regulators, clinicians and managers) indicated that they believed that hospitals were becoming more average (in cost and practice terms) as a result of HRG costing. Third, theoretical discussion on the impact of regulation, categorization, and standardization posits that these processes result in more similarity, homogeneity and “averageness” in practices. This “averageness” comes about in several ways as people's behaviour and organizational practices are moulded so as to fit into categories (Bowker & Star, 1999, p. 53). In the case of the NRCE, the governing “category” is the HRG. To produce HRG costs hospitals have to standardize and simplify their “production” processes so that they can count, code and cost activity in the same ways. Once the average cost for a particular HRG across all UK hospitals is known and funding is on this basis, there is political and managerial pressure on hospitals not to exceed this average. Consequently, it becomes more difficult for clinicians to engage in practices that may cause costs to rise; “clinical divergence” comes under scrutiny whenever an HRG category exceeds the average. Does the NRCE modify the behaviour of clinicians? The earlier empirical section, based on talk, suggests that it does. Does text also indicate an impact on clinical behaviour?
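The “within 10% of the average cost” statistic reported above can be computed as in the following sketch. The cost indices below are hypothetical values, not the actual 2001/2002 reference cost data.

```python
# Sketch: share of Trusts whose cost index lies within 10% of the
# national average (index 100). The figures are hypothetical, not NRCE data.

def share_near_average(indices, tolerance=10.0):
    """Fraction of cost indices lying in [100 - tolerance, 100 + tolerance]."""
    near = [i for i in indices if abs(i - 100.0) <= tolerance]
    return len(near) / len(indices)

trust_indices = [85, 92, 95, 98, 100, 103, 107, 109, 118, 131]  # hypothetical
print(f"{share_near_average(trust_indices):.0%} of Trusts within 10% of average")
```

A rising value of this share over successive annual exercises is what the paper reads as movement toward the average.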
The sheer complexity of the factors that drive reported costs renders any assessment of what, precisely, is resulting in the trend towards the average very difficult. However, some tentative conclusions are possible. Northcott and Llewellyn (forthcoming), discussed earlier, report ten different factors operating to produce cost variability. However, survey evidence from 1999, drawing on the assessments of finance directors and cost accountants within Trusts as to which of these ten possible drivers of reported cost differences were most influential, revealed that three factors were thought to dominate: differences in cost allocation practices; differences in fixed running costs for hospital facilities; and variations in the clinical practices that drive costs (Northcott & Llewellyn, forthcoming). Evidence presented in this paper indicates that scope for differential cost allocation has decreased over the five years of the NRCE, as the NRCO made more standardization mandatory. Moreover, in the short term, possibilities for decreases in “fixed” costs are limited. There is, therefore, some prima facie calculative evidence that standardization of clinical practices has played a major part in the trend to the average cost. Why should variability in clinical practice make hospitals more costly? First, in hospitals, the individual (and different) aspirations and career goals of providers have driven the content of services (Champagne et al., 1997) and providers had little incentive to control costs. Rather the converse was the case, as providers were accustomed to arguing for more resources on the basis of overspent budgets and the unmet needs of patients (Brunsson, 1994, p. 326). Moreover, in the context of the hospital as a loosely-coupled proliferation of very different specialist workshops (Hogg, 1999, p. 165), “difference” became a platform from which to launch individual advocacy campaigns for additional funding.
Second, clinical “production processes” and “case management” procedures have been non-standardized; there is considerable evidence from the private sector that the standardization and simplification of production processes reduces costs (Ezzamel & Willmott, 2002). Third, clinicians have made individual diagnoses of patients based on their (differing) judgements. The practice of judgement has gaps and idiosyncrasies (Porter, 1995) and involves reasoning by exclusion (Abbott, 1988) as different possible solutions are tried and found incorrect. A clinical judgement proceeds (with its associated diagnoses, tests and interventions) until an adequate “solution” is found. In addition, varying levels of clinical competence result in differential outcomes for patients. These issues drive the “clinical divergence” (differing “length of stay”, varying theatre time and abnormal mortality and complication rates) discussed in this study. Judgement and differential competences (as compared to standardization) are expensive. As argued earlier, the NHS is a very high profile area in the UK public sector and attention to healthcare serves to demonstrate that any government “cares” (Hogg, 1999, p. 158). On the basis of the Wanless (2002) report, the UK government made its unprecedented financial investment in the NHS (see earlier discussion). This funding (from tax rises) increases the pressure on the government to use “metrics” to demonstrate “results” in terms of better hospital performances, to show that resources are flowing to “good” performers and to avoid “waste” by directing funds on the basis of average costs. The Prime Minister wagered his political future on raising standards in healthcare, declaring, “Judge Me on NHS Challenge” (news.bbc.co.uk/1/hi/uk_politics/1941949.stm). Will “averageness” raise standards in hospitals?
This is a complex question that this study did not set out to answer, being focussed on “the average” as a consequence of the reference cost exercise; however, some initial thoughts can be set out. NHS performance continues to improve in terms of numbers of patients treated; Allsop and Mulcahy (1996, p. 128) argue that this is due to changes in clinical practices (e.g. reductions in length of stay, increases in day cases and more intensive use of plant). As discussed above, the government clearly believes that funding on the basis of average costs will intensify this trend to increased productivity; however, it may deter clinicians from meeting the very expensive care needs of particular patients. Patients may find day cases (and reduced length of stay) more convenient, but “less care” may be “less good care” and the latter may externalise costs, passing them on to other agencies. Moreover, given that the predominant performance indicator for productivity is reduced waiting times, this incentivises clinicians not to prioritise the complex and more severe cases on the waiting lists. Also, there may be a trade-off between productivity and innovation. In the past, there was evidence that doctors pre-empted additional NHS monies directed for service “growth” into continual clinical innovation (Harrison & Pollitt, 1994; Hunter, 1980). A policy climate that relies on standards may constrain innovation through reducing providers' propensity to take risks (Hood, Rothstein, & Baldwin, 2001; Newman, Raine, & Skelcher, 2001; Power, 1994a, 1994b, 1997). Also, the extent to which patients may benefit from more standardised care is unclear. On the one hand, they are protected from ill-judged idiosyncratic practice but, on the other, complex cases may require finely tuned medical expertise.
In sum, an assessment of whether a focus on averages and standards improves health services hinges on both the dimensions of health care under consideration and the ways in which these aspects are being measured. The study on which this paper is based was limited in several ways: first, the primary focus was on costing, hence, any conclusions regarding clinical practices should be regarded as suggestive rather than in any way definitive; second, although sources of evidence comprised both talk and text, a greater reliance has been placed on the former and the extent to which “talk” is indicative of actual practices in hospitals is uncertain; and, third, the interviewees spoke from within a discrete time period so their opinions and expectations are likely to change over time. Health care is a dynamic area and, to trace the further impact of the NRCE on hospitals, more research is clearly called for. In particular, policy development requires more understanding of the impact on clinical practice of funding on the basis of the average cost. Another crucial unanswered policy question is whether the “average” can be taken as the standard for hospital performance; currently, by default, it is, but to what extent is this justified? Personnel at the National Reference Costing Office (NRCO) reported that the UK is alone, internationally, in consistently publishing comparative hospital cost data on an annual basis. There was no international “blueprint” from which to develop the NRCE. Given the UK's “leading edge” practice in cost governance metrics for hospitals, it is unsurprising that there has been international interest in the NRCE. The NRCO, at a regional office in Leeds, has presented and/or answered questions on the NRCE to the World Health Organization, the World Bank, and representatives from countries as diverse as Albania, Belgium, Canada, France, Iraq, Norway, Japan and the USA.
This advocacy positions the UK as the “first mover” in the dissemination of the “metrics” approach to hospital governance (a striking example of a regime that equates the integrity of public services with their transparency of operations (Strathern, 2000)), and as a prime instigator in the global spread of the emergent neo-liberal values that underpin this regime.