Download English ISI Article No. 78493
Persian Translation of the Article Title

تنظیم ارتباطات جمعی برای مدل‌های برنامه‌نویسی فضای آدرس جهانی قسمت‌بندی‌شده

English Title
Tuning collective communication for Partitioned Global Address Space programming models
Article Code / Publication Year / Pages
78493 / 2011 / 16 pages (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Parallel Computing, Volume 37, Issue 9, September 2011, Pages 576–591

Persian Translation of Keywords
زبانهای فضای آدرس جهانی قسمت بندی شده؛ ارتباطات جمعی؛ ارتباطات یک طرفه
English Keywords
Partitioned Global Address Space languages; Collective communication; One-sided communication
Article Preview

English Abstract

Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with the locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language, programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this paper we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and the implementation. In particular, PGAS collectives raise semantic issues that differ from those in send–receive style message passing programs, and admit different implementation approaches that take advantage of the one-sided communication style of these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory, and hybrid systems. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. Finally, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and show that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines, including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with an InfiniBand interconnect.
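
To make the idea of a PGAS collective concrete, the sketch below shows how a broadcast is typically expressed at the language level using the standard UPC collectives interface (upc_all_broadcast from <upc_collective.h>), which implementations such as Berkeley UPC layer over GASNet collectives like those described in the abstract. This is a minimal, hypothetical example and not code from the paper; the array sizes and the strict UPC_IN_ALLSYNC/UPC_OUT_ALLSYNC synchronization flags are arbitrary choices made for illustration.

/* Minimal UPC sketch (illustrative, not from the paper): broadcast NELEMS
 * integers from thread 0 to every thread using the standard UPC collectives API. */
#include <stdio.h>
#include <upc.h>
#include <upc_collective.h>

#define NELEMS 4

/* Source block: indefinite blocking factor, so all elements have affinity to thread 0. */
shared [] int src[NELEMS];
/* Destination: blocking factor NELEMS, so each thread owns one NELEMS-element block. */
shared [NELEMS] int dst[NELEMS * THREADS];

int main(void) {
    if (MYTHREAD == 0) {
        for (int i = 0; i < NELEMS; i++)
            src[i] = 100 + i;                      /* data to broadcast */
    }

    /* Every thread calls the collective; ALLSYNC on entry and exit requests
     * barrier-like synchronization, the simplest (and strictest) mode. */
    upc_all_broadcast(dst, src, NELEMS * sizeof(int),
                      UPC_IN_ALLSYNC | UPC_OUT_ALLSYNC);

    /* Each thread now holds a copy of the data in its own block of dst. */
    printf("Thread %d received %d..%d\n", MYTHREAD,
           dst[MYTHREAD * NELEMS], dst[MYTHREAD * NELEMS + NELEMS - 1]);
    return 0;
}

A runtime of the kind described in the abstract would map this single call onto one of several underlying algorithms (for example, a tree-based broadcast built from one-sided puts) selected by automatic tuning. The looser UPC_*_MYSYNC and UPC_*_NOSYNC flag variants defined by the UPC collectives specification give the implementation more freedom to overlap communication, which is the kind of interface-level semantic trade-off the abstract alludes to.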