An analytical management framework for new product development processes involving uncertain iterations
|Article code||Publication year||English paper||Persian translation||Word count|
|2799||2013||27-page PDF||Available on order||13,700 words|
Publisher : Elsevier - Science Direct
Journal : Journal of Engineering and Technology Management, Volume 30, Issue 1, January–March 2013, Pages 45–71
This paper presents an analytical framework for effective management of projects with uncertain iterations. The framework is based upon: (1) the combination of two complementary techniques, one focused on improving iterative process architectures, the Design Structure Matrix, and one focused on predicting project performance, the Graphical Evaluation and Review Technique; and (2) the introduction of an activity set-based criticality measure. The intent of the framework is to help project managers and researchers identify and evaluate alternative process architectures, in order to help them determine the alternative which best balances risk and other project performance parameters, as illustrated through an example application.
Project management, in particular project planning, is a critical factor in the success or failure of new product development (NPD) projects (Patanakul et al., 2012). For instance, Verworn et al. (2008) found that proper planning at the fuzzy front end of NPD projects improves not only project efficiency but also effectiveness through reduced market and technical risks. Meanwhile, Ahmadi and Wang (1999) showed that the use of inappropriate planning techniques results in NPD project time overruns. Specifically, failures in project planning can result in the lack of effective identification, evaluation, prioritization, and controlling of process variation and foreseeable uncertain events (De Meyer et al., 2002). However, a key difficulty in NPD project planning is that most “traditional” project planning techniques ignore or inadequately address one key source of process variability: uncertain iterations, which have probabilistic occurrences and/or durations. Iterations are a fundamental and often unavoidable characteristic of NPD processes (Braha and Maimon, 1997, Browning and Eppinger, 2002, Lévárdy and Browning, 2009 and Unger and Eppinger, 2009). Through iterations, design ideas evolve and converge into solutions, and design incompatibilities are fixed. As such, iterations can be either beneficial or counterproductive to NPD performance (Hong-Bae et al., 2005). Iterations may be opportune when activities are repeated for idea refinement, which contributes to the reduction of technical, schedule, and budgetary risks. These types of iterative loops may therefore be retained and accelerated if possible (e.g., Smith and Eppinger, 1997b, Thomke and Fujimoto, 2000 and Langerak and Hultink, 2008). However, counterproductive iterations are akin to the rework of manufacturing processes, and should be avoided if possible (Steward, 1981, Black et al., 1990, Browning, 1998 and Denker et al., 2001). 
In either case, previous research has shown that iterations constitute a major source of increased NPD lead-time and cost (Meier et al., 2007), a key driver of schedule risk (Browning and Eppinger, 2002), and a source of major uncertainties in the commitment of resources, which may further delay the project (Luh et al., 1999). Accordingly, the improvement of NPD projects, through project planning, demands a deep understanding of design iterations, both productive and counterproductive (Smith and Eppinger, 1997b). Many researchers suggest that wasteful iterations typically stem from poor process architectures, namely, poor activity sequencing (Browning and Eppinger, 2002 and Wang and Lin, 2009), avoidable information changes in coupled activities due to information management problems, e.g., modifications in design objectives, wrong execution times, poor communication and coordination (Browning, 1998), and poorly timed or too frequent design reviews (Ha and Porteus, 1995). Therefore, the design of an NPD project management plan should include the identification of iterative loops, the assessment of alternative process architectures (i.e., alternative arrangements of loops), the removal of counterproductive iterative loops where possible, and the definition of strategies for accelerating the execution of productive or otherwise required iterative loops, among other elements. This paper presents an analytical management framework, which is intended to support effective management of projects with uncertain occurrence and duration of iterations, through the activities described above.
The framework is based upon: (1) the combination of two complementary techniques, one focused on improving iterative process architectures, the Design (or Dependency) Structure Matrix (DSM), and one focused on predicting the performance of projects with iterative process architectures in closed form, the Graphical Evaluation and Review Technique (GERT); and (2) the introduction of an activity set-based criticality measure, referred to as the Loop Criticality Index (LCI), based on traditional risk management methods, specifically Risk Priority Numbers (RPN) from Failure Modes and Effects Analysis (FMEA). The intent of the framework is to help project managers and researchers identify and evaluate alternative process architectures, in order to help them determine the alternative which best balances risk and other project performance parameters, as illustrated through an industrial example application. This paper is organized as follows. The next section provides a review of related research work. The section that follows presents the major elements of the analytical management framework, and the subsequent section describes the example application. Finally, conclusions, practical implications, research limitations, and areas for future work are discussed in the closing section.
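To make the DSM side of the framework concrete, the following minimal sketch (in Python, with a hypothetical four-activity matrix; the function name and data are ours, not the paper's) flags the above-diagonal marks that signal potential iteration loops in a binary DSM:

```python
# Minimal sketch (hypothetical data): a binary DSM where entry dsm[i][j] = 1
# means activity i depends on information from activity j. With activities
# listed in execution order, any mark above the diagonal (j > i) is a
# feedback link, i.e., a potential iteration loop.

def feedback_links(dsm):
    """Return (dependent, source) pairs that point backward in the sequence."""
    n = len(dsm)
    return [(i, j) for i in range(n) for j in range(n)
            if j > i and dsm[i][j] == 1]

# Four activities; activities 1 and 2 are coupled (0-based indexing).
dsm = [
    [0, 0, 0, 0],
    [1, 0, 1, 0],   # activity 1 also needs output of activity 2 -> feedback
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]
print(feedback_links(dsm))  # -> [(1, 2)]
```

Re-sequencing aims to minimize such feedback links, or to cluster the activities they involve into tight blocks.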
Conclusion
Summary of research results

This paper has provided insights into how the performance of NPD processes with uncertain iterations, featuring probabilistic occurrences and durations, can potentially be improved through the use of the proposed project management framework. The viability of this framework was tested here using secondary data from an industrial case study previously published by Browning and Eppinger (2002). Based on our study results, it appears that the fundamental ideas of the framework are not only viable but also have the potential to enhance other project management approaches, in particular those used to explore the effects of alternative process architectures. In applying the LCI analysis to the UCAV example, we found that this criticality index can provide guidance in the process architecture improvement effort, by identifying the specific sets of activities worth addressing for re-sequencing. We also found that the matrix-based approach of the framework allowed changes to the process architecture to be automated for performance evaluation through simple modifications to the set of input-data matrices (e.g., binary precedence relationships; time, cost, and occurrence activity parameters; rework impact; etc.). Thus, the exploration of process architecture alternatives is likely to be more efficient, as matrices are more easily manipulated and represented than activity networks. The matrix-based approach used in the framework is therefore advantageous for the project evaluation process in terms of tractability. Thus far, the majority of previous works have applied simulation (e.g., Browning and Eppinger, 2002, Cho and Eppinger, 2005, Abdelsalam and Bao, 2006, Huang and Chen, 2006 and Wang et al., 2006), while relatively few have been based upon closed-form analytical techniques (e.g., Eppinger et al., 1997 and Smith and Eppinger, 1997a).
In this work, we reconsidered the analytical approach and developed a matrix-based interface between DSM and GERT, a procedure that is amenable to computer automation. In particular, M-GERT not only allows streamlined integration with the DSM but also provides a closed-form analysis approach for cyclic, stochastic activity networks in a systematic way. Interestingly, this result suggests that the analytical framework presented here is a reasonable alternative (in addition to simulation) for evaluating the project completion time (or cost) of relatively large and complex stochastic networks. The project risk management framework is by no means the first attempt to develop an analytical procedure for NPD evaluation. Among prior analytical models, Eppinger et al. (1997) developed a performance evaluation model to assist in the understanding of NPD processes. M-GERT analysis is similar in some ways to their model analysis, although it also differs in significant ways. The analysis of both models, Eppinger et al.'s (1997) and M-GERT, is based upon signal flow theory. The differences lie in the rework activity parameters defined for the network structure. In Eppinger et al.'s (1997) model, shown as a block diagram in Fig. 14a, activity duration values are deterministic and allow for variable higher-order rework parameters to evaluate the project performance within the iterative block. That is, activity durations and iteration probabilities change for the first through the third iteration and remain fixed for higher-order iterations. The authors described these activity duration patterns (or assumptions) by means of a signal flow graph that unfolds the iterative block by stages (activities within the iterative block were replicated two or three times based upon design engineers' suggestions), as shown in Fig. 14b.
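As an illustration of the kind of closed-form result that GERT-type analysis yields (a simplified sketch, not the paper's M-GERT formulation), a rework loop taken with a fixed probability p after each pass contributes a geometrically distributed number of extra passes, with expectation p / (1 - p):

```python
# Illustrative sketch (assumed setup, not the paper's M-GERT implementation):
# in a GERT-type network, a self-loop taken with probability p after each
# pass adds a geometric number of rework passes. The closed-form expected
# number of extra passes is p / (1 - p), so expected durations follow
# directly without simulation.

def expected_loop_time(t_first, t_rework, p):
    """Expected duration of an activity inside a self-loop with repeat prob p."""
    if not 0 <= p < 1:
        raise ValueError("repeat probability must be in [0, 1)")
    return t_first + (p / (1 - p)) * t_rework

# First pass takes 10 days; each rework pass takes 4 days and occurs
# with probability 0.3 after every pass.
print(expected_loop_time(10.0, 4.0, 0.3))  # -> about 11.71 days
```

The same geometric-series argument generalizes, via signal flow theory (Mason's rule), to networks with nested and interleaved loops, which is what makes the closed-form analysis tractable.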
In contrast, in M-GERT, activity durations are described with two PDFs (thus far confined to the triangular distribution), one for the first-pass duration and the other for second and higher iteration times for activities within loops, including additional attributes such as learning curves and rework impact, as described in the section on the DSM-GERT interface to identify and evaluate alternative process architectures. Unlike Eppinger et al.'s (1997) model, M-GERT does not unfold loops by stages (see Fig. 12c). Interestingly, the graphical representation of variable rework parameters (activity durations and occurrence probabilities) yields a different process architecture when compared with the simplified GERT-type network. Accordingly, differences in project performance evaluations are also likely. Fig. 14 takes the analysis one step further. Notice that the die design activity that goes from node 7 to node 6 in Fig. 12b can represent a second-, third-, or higher-order iteration with the same activity duration, depending on the path traversed to realize the project network. If the path 1–2–3–7–6–5–7–8–9–10 is traversed, then nodes 7–6 represent the second iteration of the die design activity. However, if the path 1–2–3–4–5–7–6–5–7–8–9–10 is traversed, then nodes 7–6 represent the third iteration of die design. In this example, it appears that the die design activity does not consistently follow the assumption of variable rework parameters. This highlights the criticality of the process architecture assumptions and the magnitude of error that can be introduced when assumptions are not accurately represented in the activity network or do not accurately reflect reality.
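The two-PDF activity description can be sketched as follows; the function names, the learning-curve form (a constant multiplicative reduction per pass), and the numbers are illustrative assumptions, not the paper's notation:

```python
# Hedged sketch: M-GERT describes each activity with two triangular duration
# PDFs (first pass vs. rework) plus attributes such as a learning curve that
# shrinks rework effort on later passes. Names and data here are illustrative.

def tri_mean(a, m, b):
    """Mean of a triangular distribution with minimum a, mode m, maximum b."""
    return (a + m + b) / 3.0

def expected_activity_time(first, rework, learning, n_rework):
    """Expected total time: first pass plus n_rework passes, each discounted."""
    total = tri_mean(*first)
    for k in range(1, n_rework + 1):
        total += (learning ** k) * tri_mean(*rework)  # geometric learning
    return total

# First pass ~ Tri(8, 10, 15) days; rework ~ Tri(2, 4, 6) days; 20% effort
# reduction per pass (learning factor 0.8); two rework passes expected.
print(round(expected_activity_time((8, 10, 15), (2, 4, 6), 0.8, 2), 2))  # -> 16.76
```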
While the proposed framework still requires some parameter assumptions a priori (e.g., iteration probability and activity duration values), it reduces some assumptions regarding the stochastic behavior of the network (e.g., the exact number of iterations per loop and variable parameter values for the second and higher iterations) when compared with previous techniques (e.g., Eppinger et al., 1997, Chen et al., 2003 and Wang et al., 2006). In general, this research provides a framework for examining the impact of different process architectures based not only on lead-time and cost, but also on the amount of risk (reflected in the LCI). This should better equip the NPD manager with the information necessary to build a project management plan that accounts for uncertain iterations. Moreover, insights from the LCI allow the NPD manager to identify loops deserving more attention during project planning and execution. In this regard, the LCI, in combination with sensitivity analysis, can provide a gradient for process improvement, i.e., a guided search to improve the speed and likelihood of finding optimal or near-optimal process architecture patterns, and thus the efficiency of the iteration management techniques. Ultimately, the analytical framework can support the decision-making process regarding whether to attempt to remove loops (although this is not always feasible), reduce the likelihood of iterations, or promote accelerated execution of loop iterations, by evaluating the impact of changes in process architectures on performance, rather than finding other ways of generating alternatives. It is the project manager's decision as to what implementation strategy to follow for the activities involved in critical loops. In any event, the potential costs incurred by implementing the selected process architecture should also be considered during the NPD evaluation.
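Since the LCI is based on FMEA-style Risk Priority Numbers, which multiply likelihood by impact, a ranking of loops might look like the sketch below. Note that the scoring rule (occurrence probability times expected rework duration) and all data here are assumptions for illustration, not the paper's exact LCI definition:

```python
# Hypothetical RPN-style loop scoring: rank iteration loops by
# probability-of-occurrence x expected rework impact (in days).
# This is an illustrative stand-in for the paper's LCI, not its formula.

def rank_loops(loops):
    """Return (name, score) pairs sorted from most to least critical."""
    scored = [(name, p * impact) for name, p, impact in loops]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

loops = [
    ("drag analysis loop", 0.4, 12.0),   # (name, occurrence prob, rework days)
    ("weight sizing loop", 0.6, 5.0),
    ("propulsion loop",    0.2, 30.0),
]
for name, score in rank_loops(loops):
    print(f"{name}: {score:.1f}")
# The rare-but-expensive propulsion loop ranks first (6.0),
# illustrating why probability alone is a poor criticality proxy.
```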
Implications for the practicing technology manager

To summarize the practical implications of the project management framework, it is first noted that technology managers might be concerned about the suitability of a generic project risk management framework to describe and predict the behavior of unique NPD projects. As previously argued, the singularity of NPD projects stems from the patterns of activity interrelationships and their parameter values, which define the way activities interact over time to process information about a product being created (Liberatore and Titus, 1983). Although that information is product-specific, the overall process followed is often repeatable and similar to other projects, suggesting that the NPD process is amenable to general modeling efforts (Smith and Eppinger, 1997a). As such, NPD process models can also be considered mechanisms for sharing assumptions and understanding uncertainties in NPD projects (Browning et al., 2006). Another aspect that may discourage technology managers from using the proposed project management framework is the requirement of precise data from an uncertain and ambiguous project environment. Even though the uncertainty in activity descriptions is represented in probabilistic terms, the data are often only rough estimates based on the project manager's and other stakeholders' perceptions. However, it would go farther than necessary to even attempt an exact determination of the joint PDF of project lead-time and cost, as the outcome would still be an approximation of reality (Elmaghraby, 1977). Thus, the outcome of the NPD project management framework should be regarded as an aid in the decision-making process rather than an instrument that provides an optimal solution for managing the NPD project.
Based on initial tests, the proposed framework appears to have significant potential to improve the management of NPD projects with uncertain iterations, in particular by supporting better identification and management of project risks.

Limitations and future research

There are several limitations to the existing study, many of which point to areas for future research. First, given that a core element of the framework is based upon two existing techniques, some of the limitations of DSM and GERT are inherited by the current approach. On the DSM side, the activity rearrangement procedure considered in this work does not guarantee an optimal sequence of activities. Additional sequencing algorithms could be added to the framework, in particular to re-arrange activities within critical loops (e.g., Gebala and Eppinger, 1991 and Rogers, 1996) and systematically evaluate alternative process architectures until a marginal improvement threshold value is reached. On the GERT side, even though GERT considers probabilistic activity occurrence, the sequence of activities within loops is assumed to be deterministic. Moreover, GERT assumes that the parameter values for probability of loop occurrence, activity learning curves, and activity rework impacts remain fixed for each iteration. In real settings, these probabilities may either decrease or increase, depending on how uncertainties unfold. Furthermore, GERT assumes that all activities are undertaken at least once, allowing the representation of the process architecture as a single chain of activities, and, as currently developed, does not fully support simultaneous parallel path executions. However, in real-life settings there are projects in which different paths can be undertaken, and the actual path is only determined as the project progresses (Lévárdy and Browning, 2009). This feature can be handled in GERT-type networks by involving "and" and "inclusive-or" nodes.
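The loop-identification step that any DSM sequencing algorithm starts from, namely grouping activities into the coupled blocks that form iteration loops, amounts to finding strongly connected components of the dependency graph. A minimal reachability-based sketch (hypothetical data; a production tool would use a linear-time SCC algorithm):

```python
# Sketch of the standard DSM partitioning idea: coupled activity blocks
# (iteration loops) are the strongly connected components of the dependency
# graph. For small matrices, a Warshall transitive closure suffices:
# i and j are in the same block iff each can reach the other.

def coupled_blocks(dsm):
    """Group activities into blocks where every pair is mutually reachable."""
    n = len(dsm)
    reach = [[bool(dsm[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                      # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, blocks = set(), []
    for i in range(n):
        if i in seen:
            continue
        block = [j for j in range(n) if reach[i][j] and reach[j][i]]
        seen.update(block)
        blocks.append(block)
    return blocks

dsm = [[0, 0, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0]]
print(coupled_blocks(dsm))  # -> [[0], [1, 2], [3]]: activities 1 and 2 loop
```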
Thus, one approach that could be developed is to convert parallel paths leading to an "and" node into an equivalent set of paths leading to an "exclusive-or" node. However, we also noted that this further development would imply that parallel paths be determined a priori. While this aspect may reduce the usability of M-GERT for projects operating in completely unknown terrain, the methodology can be used iteratively as the project team learns about the project. Further, only immediate predecessor relationships can add potential rework to interim activities of loops. Future work includes the consideration of potential rework caused by non-immediate feedforward relationships. Last, the proposed framework assumes a lack of resource constraints, which may require further adjustments depending on resource availability. Overcoming these limitations represents an area for future research related to the proposed framework. A second key limitation concerns the framework input data. The outcomes of the framework depend entirely on the initial estimates made or gathered by project planners. While this issue has long been recognized as a limitation in all types of project modeling efforts, determining probabilities of occurrence for each path within the network may become an overwhelming task for the persons responsible for this process, particularly for complex projects. According to Osborne (1993), this information can be collected by consulting available project management databases and through personal interviews. For instance, the UCAV preliminary design process was characterized through interviews with engineers and managers and a 23-item questionnaire designed by Browning (1998). This leads to the importance of considering the following questions about management tool effectiveness, originally posed by Dawson and Dawson (1998): Would this effort be worthwhile?
And, would the benefits of the framework proposed in this research surpass the difficulty of determining parameter values? For the answers to be affirmative, the value of the managerial insights should outweigh the cost of the data-collection process. According to Smith (1992), if a technique is meant to be useful to project managers, it must be shown to help managers make better decisions. In the current research, it was demonstrated that the framework could have helped managers in the UCAV case study make better assessments regarding project risk, and better decisions based on them, e.g., identification of the sets of activities most deserving of attention for re-sequencing, based on evidence of their criticality with respect to project performance; this information can therefore support previous project management approaches focused on exploring the effects of alternative process architectures. However, as the current study used only secondary data, it cannot be proven with certainty that insights from the framework can help NPD project managers make better decisions that lead to shorter project lead-times and lower project costs in actual practice. Thus, a critical next step is further testing of the iteration management framework on an ongoing project. In addition, several other directions for future research within the iteration management domain were identified. Future work can explore the integration of the current framework with other modeling techniques, in particular queuing models, resource-constrained scheduling, and fuzzy logic-based models, among others. As for criticality measures, research efforts should be oriented toward further developing the LCI and exploring its relationship with pair-wise interface criticality measures. Future work should also focus on further exploring the effect of different patterns of activity interactions on project performance.
Further efforts can include the combination of the loop deletion strategy (or any other strategy that highlights the areas worth addressing first) with a meta-heuristic such as a genetic algorithm (GA). One approach could be developed as a two-stage process in which the LCI provides a gradient for optimization, while the GA identifies and evaluates multiple process architecture variants narrowed to the activities comprised in the critical loops. We believe that this would increase the speed and likelihood of finding optimal or near-optimal solutions. Another promising area for future research involves measuring the practical value of the managerial insights furnished by other NPD project management approaches, so that these can be compared with those provided by the current framework. One final area for future work is the development of more effective data-collection processes, in particular for imprecise data. Overall, despite the identified study limitations, it appears that the proposed framework has significant potential to improve the management of NPD projects with uncertain iterations, and may also be expanded and improved in the future through the identified areas for future research.
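A tiny sketch of the GA idea (an assumed design, not a proposal from the paper): evolve activity orderings, scoring each permutation by the number of feedback marks it leaves above the diagonal. A real two-stage scheme would restrict mutation to activities inside the critical loops flagged by the LCI:

```python
# Illustrative GA over activity orderings (hypothetical data and design).
# Fitness: feedback marks a permutation leaves, i.e., dependencies whose
# source activity is scheduled after its dependent. Lower is better.

import random

def feedback_count(dsm, order):
    """Count dependencies pointing backward under the given ordering."""
    pos = {a: k for k, a in enumerate(order)}
    n = len(dsm)
    return sum(1 for i in range(n) for j in range(n)
               if dsm[i][j] and pos[j] > pos[i])

def evolve(dsm, pop_size=20, gens=50, seed=0):
    """Elitist GA with swap mutation; returns the best ordering found."""
    rng = random.Random(seed)
    n = len(dsm)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: feedback_count(dsm, o))
        survivors = pop[:pop_size // 2]          # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            a, b = rng.sample(range(n), 2)       # single swap mutation
            child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: feedback_count(dsm, o))

# Chain of dependencies 3 -> 0 -> 1 -> 2 (dsm[i][j]=1: i depends on j);
# the optimal ordering [3, 0, 1, 2] leaves zero feedback marks.
dsm = [[0, 0, 0, 1],
       [1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
best = evolve(dsm)
print(best, feedback_count(dsm, best))
```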