Träff, J. L., Hunold, S., Vardas, I., & Funk, N. M. (2023). Uniform Algorithms for Reduce-scatter and (most) other Collectives for MPI. In: 2023 IEEE International Conference on Cluster Computing (CLUSTER) (pp. 284–294). IEEE. https://doi.org/10.1109/CLUSTER52292.2023.00031
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/190649
-
dc.description.abstract
We explore the use of a regular, circulant graph communication pattern for the implementation of the reduction-to-all (MPI_Allreduce) and, by specialization, the reduction-to-root (MPI_Reduce), the reduce-scatter (MPI_Reduce_scatter_block), the all-to-all-broadcast (MPI_Allgather), and the rooted gather and scatter (MPI_Gather and MPI_Scatter) collective operations, all as found in MPI (the Message-Passing Interface), for commutative operators and for any number of processes. The reduction-to-all algorithm reconstructs the little-known algorithm by Bar-Noy, Kipnis and Schieber (1993), which the paper considerably extends. We experiment with extensions and combinations of the algorithms for these operations, and examine their performance from the perspective of performance guidelines and in direct comparison to the implementations in common MPI libraries. On a small cluster with 36 × 32 cores and two larger HPC production systems, we show that, especially for MPI_Reduce_scatter_block, we can achieve considerably better performance than standard MPI library implementations. Our algorithms perform consistently, which the implementations in standard MPI libraries sometimes do not. In a homogeneous, one-ported communication system with linear transmission costs, reduction-to-all, reduce-scatter and all-to-all-broadcast can all be implemented in O(log p + m) time steps for problems of size m, with small constants which we analyze and discuss.
en
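To illustrate the kind of performance-guideline comparison mentioned in the abstract, the following minimal C/MPI sketch (not the authors' code; the buffer size and the Reduce+Scatter mock-up are illustrative assumptions) times MPI_Reduce_scatter_block against its composition from MPI_Reduce followed by MPI_Scatter, one informal way to check that the dedicated collective is not slower than the obvious mock-up.

```c
/* Illustrative sketch only: compares MPI_Reduce_scatter_block with a
 * Reduce+Scatter mock-up. Buffer sizes are arbitrary assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int blockcount = 1 << 16;            /* elements per process block (assumed) */
    const int totalcount = blockcount * size;  /* total problem size m */

    double *sendbuf = malloc(totalcount * sizeof *sendbuf);
    double *recvbuf = malloc(totalcount * sizeof *recvbuf); /* one block would suffice for reduce-scatter */
    for (int i = 0; i < totalcount; i++) sendbuf[i] = (double)rank;

    /* Reduce-scatter in one call: each process receives the reduced block with index rank. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Reduce_scatter_block(sendbuf, recvbuf, blockcount, MPI_DOUBLE,
                             MPI_SUM, MPI_COMM_WORLD);
    double t_rsb = MPI_Wtime() - t0;

    /* Mock-up composition: full reduction to the root, then scatter of the blocks. */
    double *tmp = (rank == 0) ? malloc(totalcount * sizeof *tmp) : NULL;
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    MPI_Reduce(sendbuf, tmp, totalcount, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Scatter(tmp, blockcount, MPI_DOUBLE, recvbuf, blockcount, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    double t_mock = MPI_Wtime() - t0;

    if (rank == 0)
        printf("MPI_Reduce_scatter_block: %.6f s, Reduce+Scatter mock-up: %.6f s\n",
               t_rsb, t_mock);

    free(sendbuf);
    free(recvbuf);
    if (tmp) free(tmp);
    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpicc -O2 guideline.c -o guideline && mpirun -np 32 ./guideline`; proper benchmarking would of course repeat the measurement and vary the message size.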
dc.language.iso
en
-
dc.subject
MPI
en
dc.subject
HPC
en
dc.subject
collective communication operations
en
dc.title
Uniform Algorithms for Reduce-scatter and (most) other Collectives for MPI
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.relation.isbn
979-8-3503-0792-4
-
dc.relation.doi
10.1109/CLUSTER52292.2023
-
dc.relation.issn
1552-5244
-
dc.description.startpage
284
-
dc.description.endpage
294
-
dc.type.category
Full-Paper Contribution
-
dc.relation.eissn
2168-9253
-
tuw.booktitle
2023 IEEE International Conference on Cluster Computing (CLUSTER)
-
tuw.peerreviewed
true
-
tuw.relation.publisher
IEEE
-
tuw.relation.publisherplace
Piscataway
-
tuw.researchTopic.id
I2
-
tuw.researchTopic.id
C5
-
tuw.researchTopic.name
Computer Engineering and Software-Intensive Systems
-
tuw.researchTopic.name
Computer Science Foundations
-
tuw.researchTopic.value
90
-
tuw.researchTopic.value
10
-
tuw.publication.orgunit
E191-04 - Forschungsbereich Parallel Computing
-
tuw.publisher.doi
10.1109/CLUSTER52292.2023.00031
-
dc.description.numberOfPages
11
-
tuw.author.orcid
0000-0002-4864-9226
-
tuw.author.orcid
0000-0002-5280-3855
-
tuw.author.orcid
0000-0001-5461-556X
-
tuw.event.name
IEEE International Conference on Cluster Computing (IEEE CLUSTER 2023)