<div class="csl-bib-body">
<div class="csl-entry">Vardas, I., Hunold, S., Ajanohoun, J. I., & Träff, J. L. (2022). mpisee: MPI Profiling for Communication and Communicator Structure. In <i>2022 IEEE 36th International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2022)</i> (pp. 520–529). IEEE. https://doi.org/10.1109/IPDPSW55747.2022.00092</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/136174
-
dc.description.abstract
Cumulative performance profiling is a fast and lightweight method for gaining summary information about where and how communication time in parallel MPI applications is spent. MPI provides mechanisms for implementing such profilers that can be used transparently with applications. Existing profilers typically profile on a per-process basis and record the frequency, total time, and volume of MPI operations per process. This can lead to grossly misleading cumulative information for applications that use MPI features to partition the processes into different communicators. We present a novel MPI profiler, mpisee, for communicator-centric profiling that separates and records collective and point-to-point communication information per communicator in the application. We discuss the implementation of mpisee, which makes significant use of the MPI attribute mechanism. We evaluate our tool by measuring its overhead and profiling a number of standard applications. Our measurements with thirteen MPI applications show that the overhead of mpisee is less than 3%. Moreover, using mpisee, we investigate two particular MPI applications, SPLATT and GROMACS, in detail to obtain information on the various MPI operations across the different communicators of these applications. Such information is not available from other, state-of-the-art profilers. We use the communicator-centric information to improve the performance of SPLATT, resulting in a significant runtime decrease when run with 1024 processes.
en
dc.description.sponsorship
Fonds zur Förderung der wissenschaftlichen Forschung (FWF)
-
dc.description.sponsorship
Fonds zur Förderung der wissenschaftlichen Forschung (FWF)
-
dc.language.iso
en
-
dc.subject
MPI Profiling
en
dc.title
mpisee: MPI Profiling for Communication and Communicator Structure
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.relation.isbn
978-1-6654-9747-3
-
dc.relation.doi
10.1109/IPDPSW55747.2022.00092
-
dc.description.startpage
520
-
dc.description.endpage
529
-
dc.relation.grantno
P31763-N31
-
dc.relation.grantno
P33884-N
-
dc.type.category
Full-Paper Contribution
-
tuw.booktitle
2022 IEEE 36th International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2022)
-
tuw.peerreviewed
true
-
tuw.relation.publisher
IEEE
-
tuw.project.title
Algorithm Engineering for Process Mapping
-
tuw.project.title
Offline and Online Autotuning of Parallel Programs
-
tuw.researchTopic.id
I2
-
tuw.researchTopic.id
C5
-
tuw.researchTopic.name
Computer Engineering and Software-Intensive Systems
-
tuw.researchTopic.name
Computer Science Foundations
-
tuw.researchTopic.value
90
-
tuw.researchTopic.value
10
-
tuw.publication.orgunit
E191-04 - Forschungsbereich Parallel Computing
-
tuw.publisher.doi
10.1109/IPDPSW55747.2022.00092
-
dc.description.numberOfPages
10
-
tuw.author.orcid
0000-0001-5461-556X
-
tuw.author.orcid
0000-0002-4864-9226
-
tuw.event.name
27th Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS 2022) in conjunction with IEEE IPDPS 2022