Machine Learning Applied to Quality Management in the Ship Repair Domain
|Article code||Publication year||English article length||Persian translation|
|4409||2007||10-page PDF||Available on order|
Publisher: Elsevier - Science Direct
Journal : Computers in Industry, Volume 58, Issue 5, June 2007, Pages 464–473
Awareness of the importance of knowledge within the quality management community is increasing. For example, the Malcolm Baldrige Criteria for Performance Excellence recently included knowledge management in one of its categories. However, the emphasis in research related to knowledge management is mostly on knowledge creation and dissemination, not on the knowledge formalisation process. Identifying expert knowledge and experience as crucial to output quality, especially in dynamic industries with a high share of incomplete and unreliable information such as ship repair, this paper argues for the importance of having such knowledge formalised. Using the example of delivery time estimation, the paper demonstrates how the deep quality concept (DQC), a novel knowledge-focused quality management framework, and machine learning methodology can be used effectively for that purpose. In the concluding part of the paper, the accuracy of the obtained prediction models is analysed and the chosen model is discussed. The research indicates that the standardisation of problem domain notions, and expertly designed databases with a possible interface to machine learning algorithms, need to be considered an integral part of any future quality management system, in addition to conventional quality management concepts.
Ship repair is a complex, highly dynamic and stochastic process with high interdependencies. It is also characterised by a high share of incomplete and unreliable information, which is particularly pronounced at certain stages of the process. In such processes, output quality is significantly influenced by the quality of assessments and decisions, and this cannot be ensured merely by adherence to predefined procedures and instructions, on which, for example, the standard ISO 9001 is based. In such systems, expert knowledge and experience play a decisive role, and their nature often makes it practically impossible to formalise them with traditional methods. Moreover, because of this pronounced technological complexity and the many interdependent variables of influence, it is not easy (and sometimes not possible) to define efficient analytical models. Delivery time estimation in ship repair is a typical example of such a process. It includes the overall repair time estimate as well as the estimate of the duration of repair works in dock. The accuracy of these estimates significantly influences the quality of the ship repair service, and it is critical for the business results of the shipyard. If the estimated times are too long, the shipyard will not be competitive; if they are too short, the production schedule may fail due to unrealistically estimated activity durations, which may result in final delivery delays and penalties. The quality of the performed work may also suffer, given that delay often means doing things in a hurry. This applies particularly to the overall repair time estimates.

On the other hand, developments in artificial intelligence provide powerful means for modelling expert knowledge. They also allow the automatic acquisition of such knowledge by means of machine learning or data mining techniques.
Unfortunately, the use of such techniques in the quality management context is not systematic but rather ad hoc. In industry there are at least two main reasons for this. The first is the Taylorian philosophy of manufacturing that still prevails in current quality management models. Determinism of operations, predictable system behaviour, and a priori information that is reliable, complete and accurate, identified by Peklenik as the basic Taylorian presumptions of manufacturing systems, are still the main presumptions of the best-known quality management models (the total quality management model (TQM), the Malcolm Baldrige Criteria for Performance Excellence, the EFQM Excellence Model, and the standard ISO 9001). For example, fact-based management, i.e. the factual approach to decision making, is still listed among the core quality concepts of all these models. Also, the use of information technology is not sufficiently systematic; one consequence of this is the lack of accurate and standardised bases of organisational as well as technological data in some manufacturing organisations and domains. The second reason is that knowledge of artificial intelligence techniques is typically modest. On the other hand, although the Malcolm Baldrige criteria recently included knowledge management in one of their categories, the emphasis in related research is mostly on learning, i.e. on knowledge creation and knowledge sharing, and not on the knowledge formalisation process. Moreover, the distinction between the terms 'knowledge' and 'information' is not always clear in such research. A more detailed explanation of these limitations, as well as the DQC model, a new theoretical framework for overcoming these deficiencies, is presented by Srdoc et al.
Unlike other quality models, which are typically concerned only with shallow knowledge, this model pays particular attention to the standardisation of domain concepts and to deep domain knowledge. The integration of information systems, defined as systems whose purpose is to acquire and represent knowledge, with quality systems is also proposed. Dooley likewise suggests that a TQM paradigm based on predictability, control and linearity may be insufficient, and it has also been described how TQM approaches are inadequate because they do not address the uncertainties that significantly affect results in some industries. On the other hand, reviews of the use of intelligent systems in manufacturing show great variety in the use of these techniques. Concerning the use of machine learning algorithms for quality management in manufacturing, there are also several approaches. For example, Shigaki and Narazaki demonstrated an approximate summarisation method of process data for acquiring knowledge to improve product quality, based on the induction of decision trees, one of the machine learning techniques. They also demonstrated a machine learning approach for a sintering process using a neural network. In the ship repair domain, no work has been reported on the use of artificial intelligence for quality management, and thus the use of machine learning algorithms has also not been reported; instead, approaches based mainly on statistical techniques and the ISO 9000 standards can be found. Some work concerning manufacturing databases in the ship repair domain has, however, been reported. In this study, the approach suggested within the DQC model is applied. The mechanisms investigated are: (1) systematic recording of data into an expertly designed database, (2) standardisation of the data, and (3) transformation of the data into a knowledge base by means of machine learning.
The data studied in the research, collected from a real ship repair yard, are: (1) the parameters defining the repair activities described within each repair project (attribute values), and (2) the related times estimated by the human expert (the target attribute). The data are limited to dock works, for three reasons: (1) dock works are a technologically self-contained subset of repair works, present in almost every ship repair project; (2) dock works often contain the activities that influence the overall delivery time the most, such as anti-corrosive and steel works; and (3) since docks are among the most valuable and bottleneck resources of any shipyard, the duration of these works is always important and is estimated separately. The goal of machine learning from these data was to construct comprehensible delivery time predictors, such as regression or model trees for computer-supported estimation, eliciting the hidden implicit knowledge from the data. Attribute selection and data refinement were done manually, based on a deep understanding of the learning problem and of what the attributes actually mean. Given that detailed technical data are typically not known at the inquiries-answering stage, they are not included in this study.
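The learning step described above can be sketched in a few lines. The study induced model trees; as a stand-in, the sketch below uses a shallow scikit-learn regression tree, kept deliberately small so that the resulting estimate structure stays human-readable. All attribute names, the synthetic data and the target values are hypothetical illustrations, not the shipyard's actual records.

```python
# Sketch: inducing a comprehensible tree-based predictor of dock-work
# duration from repair-activity records. A shallow regression tree stands
# in for the model trees used in the study; all names/data are made up.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Hypothetical attributes per repair project, as known at the
# inquiries-answering stage (no detailed technical data):
#   steel_renewal_t - total quantity of steel to renew on shell plating [t]
#   hp_wash_m2      - HP washing surface [m2]
#   paint_m2        - shell plating treatment surface [m2]
n = 200
X = np.column_stack([
    rng.uniform(0, 120, n),    # steel_renewal_t
    rng.uniform(0, 8000, n),   # hp_wash_m2
    rng.uniform(0, 6000, n),   # paint_m2
])
# Synthetic stand-in for the expert's estimates: days in dock
y = 3 + 0.15 * X[:, 0] + 0.001 * X[:, 1] + 0.0008 * X[:, 2] + rng.normal(0, 1, n)

# A shallow tree keeps the induced model comprehensible
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["steel_renewal_t", "hp_wash_m2", "paint_m2"])
print(rules)  # human-readable estimate structure
```

On this synthetic data the dominant attribute (steel renewal) appears near the root of the tree, mirroring the paper's observation that a few attributes carry most of the predictive information.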
Conclusion
Increasingly, differences in a firm's performance are attributed to tacit knowledge. According to Simon, the reason why experts on a given subject can solve a problem more readily than novices is that the experts have in mind a pattern born of experience, which they can overlay on a particular problem and use to quickly detect a solution. On the other hand, the uncertainty associated with humans increases the need for knowledge formalisation, and so does the rarity of real experts. For example, in the examined shipyard, after one expert retired only one person remained who was capable of making reliable estimates on the problem explored. Consequently, the reliability of quality systems, as well as of business systems, lies among other things in formalising as many of these experience-born patterns as possible. This paper presents an attempt to formalise one such pattern from the ship repair database. The works required by the enquiries received at the shipyard were analysed, and the related ship repair data model was created. The methodology of learning from examples was employed, and several models were induced, eliciting and representing the implicit knowledge in the database. Finally, the model tree directly usable for estimating the possible repair time was chosen. The experiments confirmed that the total quantity of steel within the renewal of steel on shell plating, and the shell plating treatment, are the most important attributes, appearing in all generated models. On the other hand, the HP washing surface turned out to be a more informative parameter than is usually thought. The greatest improvements in prediction accuracy on unseen cases, evaluated by 10-fold cross-validation, were obtained by using different learning algorithms; varying the datasets influenced the achieved performance the least.
That clearly confirms Witten and Frank's thesis that in many practical situations perhaps the overwhelming majority of attributes are irrelevant or redundant. However, their statement about the negative effect of irrelevant attributes on most machine learning schemes was not confirmed in this application: the results remained stable regardless of the discarded attributes, particularly when accuracy was evaluated on unseen cases; on the training data, the results depended more on the dataset used. The experiments also demonstrated that machine learning methods can offer an advantage over the approaches based on linear and network analysis usually employed for the time estimation problem in shipyards, such as Gantt charts or activity networks, because they need no prior assumptions or knowledge about the relationships between the variables. Also, of the 21 attributes used for learning, the machine learning algorithm retained as few as seven in the chosen tree. The comparison of the induced model's predictions with the expert estimates, and the analysis of the instances with the greatest deviation, showed that such inconsistencies do not always indicate bad performance of the induced learner; they can also indicate inconsistency in human reasoning, or in the available data. This study demonstrates how data mining can reveal such inconsistencies. For this reason, checking the results obtained with data mining techniques always has to proceed in two directions: (1) checking the accuracy of the learner, and (2) checking the consistency of the predictions used for learning. Once the input data model was created and the data standardised and cleansed, the estimate structures were obtained relatively quickly, although the process was not trivial and required a high share of domain modelling knowledge and reasoning.
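The two-direction check described above can be sketched as a simple deviation audit: after learning, rank the cases where the model's prediction and the expert's estimate disagree most, and review them manually; a large deviation may point either to a weak learner or to an inconsistent record. The data below are hypothetical, with one record deliberately made inconsistent.

```python
# Sketch: flagging the records where model prediction and expert estimate
# deviate most, as candidates for manual review. Data are hypothetical;
# record 7 is deliberately inconsistent with the rest.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 100, size=(120, 3))
expert = 1 + 0.1 * X[:, 0] + rng.normal(0, 0.5, 120)  # expert estimates
expert[7] += 15.0  # one inconsistent estimate planted for illustration

# min_samples_leaf keeps the tree from memorising single outliers,
# so an inconsistent record stands out instead of being fitted exactly
model = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10,
                              random_state=2).fit(X, expert)
deviation = np.abs(model.predict(X) - expert)

# Rank instances by deviation; the top ones deserve manual review
suspects = np.argsort(deviation)[::-1][:3]
print("instances to review:", suspects)
print("deviations:", deviation[suspects].round(2))
```

Here the planted record surfaces at the top of the list, illustrating how such an audit separates learner error from inconsistency in the data used for learning.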
Given additional records on new projects, such models can of course be improved, provided the estimating presumptions do not change significantly. On the other hand, at least 10–15 years of experience, including supervised work with an experienced mentor, are needed before a graduate engineer or a talented technician can give reliable estimates. The inaccuracies of insufficiently experienced human predictors can exceed 30%, and sometimes even 50%. For example, a case is well known in the ship repair community in which difficulties in accurately estimating the delivery time and the tons of steel needed caused serious problems for one of the renowned ship repair yards in Croatia; the fact that the shipyard already held the ISO 9001 certificate did not prevent the final disaster. This is also consistent with Massow and Siksne-Pedersen's statement that shipbuilding, as an extremely complex process, requires special methods and tools for order processing. How overconfidence in forecasts based on expert judgement can be risky is also discussed by Armstrong; the handbook he edited gives an overview of the principles of forecasting, as well as methods for reducing the impact of inconsistency and bias in judgemental forecasting. Many of the concepts discussed there, such as careful identification of the most important causal forces (the attributes in this case study), accurate records, and the use of models, especially computerised models, are also included in the DQC approach and employed in this study. Of course, none of this means that conventional quality management principles have to be set aside. It only means that all these concepts, as well as other concepts relevant for quality management that are still to be identified, need to be re-validated and put together, leading to a new, more sophisticated and complete quality management philosophy.
The quality management community, as well as quality standardisation and award bodies, need to recognise this need. Criteria for knowledge formalisation have been stated; as suggested, besides the now established functions of quality managers and engineers, knowledge engineers should also be involved in the design, development and maintenance of quality systems.