An instrumentation database for performance analysis of parallel scientific applications
English article length: 41 pages (PDF), approximately 10,967 words.
Publisher: Elsevier - Science Direct
Journal: Parallel Computing, Volume 28, Issue 10, October 2002, Pages 1409–1449
The complexity and computational intensity of scientific computing have fueled research on parallel computing and performance analysis. The purpose of this paper is to present a novel approach to performance analysis of large parallel programs. At the core of this approach is an instrumentation database (IDB) that enables comparative analysis of parallel code performance across architectures and algorithms. The basis of the IDB approach is scalable collection of performance data, so that problem size and run-time environments do not affect the amount of information collected. This is achieved by uncoupling performance data collection from the underlying architecture and associating it with the control flow graph of the program. An important contribution of the IDB approach is the use of database technology to map program structure onto a relational schema that represents the control flow hierarchy, its corresponding statistical data, and static information that describes the execution environment. To demonstrate the benefits of the proposed approach, we have implemented a POSIX-compliant probe library, an automated instrumentation tool, front-end visualization programs, a database schema using an object-relational DBMS (PostgreSQL), and SQL queries. We also developed a methodology, based on these tools, for interactive performance analysis and demonstrated this methodology on several different parallel scientific applications.
Since the primary reason for writing parallel codes is speed, it comes as no surprise that performance analysis is a vital part of the development process. Analysis tries to determine whether a given algorithm is as fast as it can be, where the program can be further optimized, and how efficiently the underlying system is being used. We present the instrumentation database (IDB) system, which comprises the following three primary components:
• an automated instrumentation tool,
• a probe library and multi-language application programmer's interface (API),
• an experiment definition file tool (EDFtool) and visualization tool (Vistool) graphical front-end.
We demonstrate the features of the system, and the resulting methodology of its use, by evaluating performance results of several scientific computing applications.
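The probe library's API is not reproduced in this excerpt. As a rough illustration of the idea only, the sketch below implements a minimal probe in Python (the `Probe` class and its fields are hypothetical, not the paper's C/C++ interface): it records both wall-clock and CPU time and keeps only fixed-size aggregates (COUNT, MIN, MAX, and a running total from which AVG follows), so storage does not grow with the number of invocations.

```python
import time

class Probe:
    """Minimal probe sketch: aggregates timings rather than logging every
    event, so its footprint stays constant however often it fires."""
    def __init__(self, name):
        self.name = name
        self.count = 0
        self.total = 0.0       # wall-clock total; AVG = total / count
        self.cpu_total = 0.0   # CPU total; a gap vs. wall time hints at blocking
        self.min = float("inf")
        self.max = 0.0

    def __enter__(self):
        self._t0 = time.perf_counter()   # wall-clock start
        self._c0 = time.process_time()   # CPU-time start
        return self

    def __exit__(self, *exc):
        dt = time.perf_counter() - self._t0
        self.count += 1
        self.total += dt
        self.cpu_total += time.process_time() - self._c0
        self.min = min(self.min, dt)
        self.max = max(self.max, dt)

# Instrument a loop body: one probe, fixed-size statistics.
loop_probe = Probe("LOOP:main")
for _ in range(1000):
    with loop_probe:
        sum(range(100))

print(loop_probe.name, loop_probe.count)
```

A real probe would flush these aggregates into the IDB at program exit instead of printing them.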
Conclusion
Performance tuning and optimization is an iterative process. In the past, the structure of these iterations was loosely defined by the user. The introduction of the database and tools for comparative analysis provides the user with the means to approach optimization in a more systematic manner. Commonalities exist in how IDB was used to instrument and analyze the various scientific codes we presented. As familiarity with the IDB approach increases, these emergent methodologies will hopefully continue to evolve and define a new framework for performance analysis. Currently, the methodology involves the following steps:
(1) Reference instrumentation: Introduction of probes at the first level of the CFH. The resulting performance database is queried to determine which branch or branches are on the critical path, or consume the most time.
(2) Growing the tree: Introduction of probes on subsequent levels of the CFH. This process is repeated until the performance events that comprise the critical path are instrumented. Clues that a probe can be further optimized include:
• instrumented loops or functions that iterate or are invoked many times;
• probes showing a significant difference between CPU and total time, which may indicate synchronization or blocking delays;
• probes that include large segments of code, namely long functions or loop bodies; such segments usually contain a performance-critical event that may reside on the critical path.
(3) Pruning the tree: Removal of probes that are not on the critical path. The resulting CFH defines the reference instrumentation version: a minimally instrumented version of the program with probes along the critical path of the CFH.
(4) Experimentation: Optimization of instrumented code or permutation of run-time and compile-time parameters.
Analysis methodologies will continue to evolve as the IDB framework and user base grow.
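The database query of step 1 can be sketched concretely. Assuming probe aggregates land in a relational table (the table and column names below are illustrative, not the paper's actual PostgreSQL schema), finding the dominant first-level branch of the CFH is a single ordered aggregate query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE probe_stats (
    probe TEXT, parent TEXT, total_time REAL)""")
# Synthetic aggregates for the first-level branches under the program root.
con.executemany("INSERT INTO probe_stats VALUES (?, ?, ?)", [
    ("init",   "main", 0.4),
    ("solve",  "main", 7.9),
    ("output", "main", 0.7),
])
# Step 1: which first-level branch consumes the most time?
row = con.execute("""SELECT probe, total_time FROM probe_stats
                     WHERE parent = 'main'
                     ORDER BY total_time DESC LIMIT 1""").fetchone()
print(row)  # → ('solve', 7.9)
```

Step 2 ("growing the tree") then descends into the branch this query returns, adding probes one CFH level deeper.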
The IDB approach addresses several issues in the area of performance analysis; among these are scalability, cost of invocation, ease of use, accuracy, object-oriented support, multi-language compatibility, platform-independent comparative analysis, and experiment management. IDB uniquely addresses these issues by leveraging database technology in conjunction with a novel approach to instrumentation.
Scalable instrumentation: We demonstrated that a minimal set of performance events (PROC, CALL, LOOP, and COMM) coupled with the CFG and aggregate statistics (MIN, MAX, AVG, COUNT, and SDEV) is sufficient for deriving performance attributes. The amount of instrumentation data collected is bounded by the path, or paths, traversed through the CFG. Database size is not affected by run time, input size, or number of nodes.
Multi-language and object-oriented support: We developed a POSIX-compliant instrumentation API that ensures availability across multiple platforms. C++-izing enables C codes to use the C++ API. Similarly, compiler and linker support enable FORTRAN 90 codes to call IDB probes. The Java Native Interface provides a mechanism for future instrumentation of Java applications. Also, extensions to the CFH involving multiple instances of a CFH within a node enable object-oriented codes to collect performance data within each object contributing to the critical path.
Mapping program structure onto a relational database schema: Storing statistical probe data, CFH connectivity, and static data in a traditional relational database guarantees a standard interface to performance data. Data collected across multiple versions of the same program, across multiple architectures, compiler options, or run-time options can be compared or archived for future analysis. Moreover, the IDB enables analysis on architectures or systems other than those on which performance data was collected.
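The paper's actual schema is not reproduced in this excerpt, but the mapping it describes can be sketched: CFH connectivity as a self-referencing parent link, one fixed-size row of aggregates per probe per run, and a separate table of static environment data. All table and column names below are assumptions for illustration (shown via SQLite rather than the paper's PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE experiment (        -- static execution environment
    exp_id INTEGER PRIMARY KEY,
    hostname TEXT, compiler TEXT, nodes INTEGER);
CREATE TABLE cfh_node (          -- control flow hierarchy connectivity
    node_id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES cfh_node(node_id),
    kind TEXT CHECK (kind IN ('PROC','CALL','LOOP','COMM')),
    label TEXT);
CREATE TABLE probe_stats (       -- fixed-size aggregates per probe per run
    exp_id INTEGER REFERENCES experiment(exp_id),
    node_id INTEGER REFERENCES cfh_node(node_id),
    cnt INTEGER, min_t REAL, max_t REAL, avg_t REAL, sdev_t REAL);
""")
# Database size scales with the CFH, not with run time or node count:
# a loop that iterates a million times still occupies one probe_stats row.
con.execute("INSERT INTO cfh_node VALUES (1, NULL, 'PROC', 'main')")
con.execute("INSERT INTO cfh_node VALUES (2, 1, 'LOOP', 'solver loop')")
con.execute("INSERT INTO experiment VALUES (1, 'cluster-a', 'f90 -O2', 16)")
con.execute("INSERT INTO probe_stats VALUES (1, 2, 500, 0.01, 0.09, 0.02, 0.004)")
n = con.execute("SELECT COUNT(*) FROM cfh_node").fetchone()[0]
print(n)  # → 2
```

Because both the CFH and the environment are ordinary rows, the same SQL interface serves analysis on machines other than the one where the data was collected.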
The Structured Query Language (SQL) provides a powerful interface where data-mining and querying can be cast against multiple program executions. This interface and the language-specific extensions for embedded SQL provide the foundation for developing sophisticated visualization and graphical query tools.
Automated instrumentation: To ensure a low cost of invocation, the process of introducing probes at the source code level is automated. Target applications are lexically analyzed, and instrumentation is added based on information provided by the analyst in a graphical format. The design of this tool was guided exclusively by user feedback.
Visualization front-end: Embedded SQL and graphical toolkits simplify development of sophisticated Vistools. For example, IDB provides Vistool for graphically presenting data collected within a single execution. Similarly, EDFtool presents data graphically; however, its data spans multiple executions of the same or modified versions of an application. Queries are constructed graphically through a user interface, and the results returned from the database are presented graphically or numerically in tabular form.
Experimental view of performance analysis: IDB is unique in that it forces the analyst to adhere to basic experimental techniques for optimizing and analyzing code. Program executions are defined as performance experiments. EDFs created during instrumentation are referenced during visualization. All aspects of the IDB framework (instrumentation tool, database, and visualization front-end) are integrated around these definition files. Experiment modes include comparison across multiple versions or coalescing data collected from multiple runs of the same code. Coalescing is useful for dusty-deck codes and for gauging regularity; that is, it is useful on multi-user systems where performance can be affected by external factors, or by asynchronous events during execution that can alter performance across multiple runs.
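The cross-version comparison mode described above can also be cast as a single self-join, in the spirit of the queries EDFtool constructs. The schema, version labels, and timings below are illustrative assumptions, not the paper's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE probe_stats (
    version TEXT, probe TEXT, avg_t REAL)""")
con.executemany("INSERT INTO probe_stats VALUES (?, ?, ?)", [
    ("v1", "solver loop", 0.031),
    ("v2", "solver loop", 0.012),   # same probe after optimization
])
# Per-probe speedup between two versions of the same code:
speedup = con.execute("""
    SELECT a.probe, a.avg_t / b.avg_t
    FROM probe_stats a JOIN probe_stats b
      ON a.probe = b.probe AND a.version = 'v1' AND b.version = 'v2'
""").fetchone()
print(speedup)
```

The coalescing mode would instead GROUP BY probe over repeated runs of one version, averaging out the external interference noted above.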