
AMPI - Adaptive Message Passing Interface
Parallel simulations in computational science and engineering often exhibit irregular structure and dynamic load patterns. Most such applications are written in C/C++ or Fortran and use MPI for scalable parallelism on distributed-memory machines. Incorporating dynamic load balancing at the application level requires significant changes to an application's design and structure, because traditional MPI runtime systems do not support dynamic load balancing in an application-independent way. Charm++ supports efficient dynamic load balancing through object migration for irregular and dynamic applications, and the same mechanisms help applications adapt to external factors that cause load imbalance. However, converting legacy MPI applications to an object-based paradigm can be cumbersome. AMPI is an implementation of MPI that brings dynamic load balancing, processor virtualization, and fault tolerance to MPI applications.

AMPI implements MPI ranks as lightweight, migratable user-level threads rather than operating system processes. Charm++'s runtime system schedules multiple ranks per core in a message-driven manner, automatically overlapping communication with computation. The runtime also supports migrating ranks between nodes to balance computational load, as well as tolerating hard faults via checkpoint/restart-based schemes. AMPI defines extensions to the MPI standard that make it easy to use these features in existing applications. See the AMPI manual for more information, as well as the papers and talks below.
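As a rough illustration of the extensions mentioned above, the sketch below shows the usage pattern from the AMPI manual: an ordinary MPI program that periodically calls `AMPI_Migrate` at a point where it is safe for the runtime to move ranks. This is a hedged sketch, not a definitive example: the `AMPI_INFO_LB_SYNC` info object and the info-taking signature of `AMPI_Migrate` follow recent AMPI releases (older versions used a zero-argument form), the `AMPI` guard macro is an assumption about the AMPI compiler wrappers, and `compute_timestep` is a hypothetical application kernel.

```c
/* Sketch of an MPI program adapted for AMPI: a periodic AMPI_Migrate()
 * call gives the Charm++ runtime a safe point at which it may rebalance
 * load by migrating ranks (implemented as user-level threads).
 * Build with AMPI's compiler wrapper (ampicc); under a standard MPI the
 * extension does not exist, so the call is guarded. */
#include <stdio.h>
#include <mpi.h>

/* Hypothetical application kernel; stands in for real per-step work. */
static void compute_timestep(int step) { (void)step; }

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int step = 0; step < 1000; ++step) {
        compute_timestep(step);

#ifdef AMPI  /* assumption: macro defined when compiling with AMPI */
        if (step % 100 == 0) {
            /* Hint that this is a safe synchronization point for load
             * balancing; AMPI_INFO_LB_SYNC is the info object described
             * in the AMPI manual for synchronous load balancing. */
            AMPI_Migrate(AMPI_INFO_LB_SYNC);
        }
#endif
    }

    MPI_Finalize();
    return 0;
}
```

A rank returning from `AMPI_Migrate` may resume on a different core or node, which is why mutable global variables must be privatized per rank; the global-variable handling and program-transformation papers below discuss how this is automated.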
People
Papers/Talks
22-10 | 2022 | [PhD Thesis] | Runtime Techniques for Efficient Execution of Virtualized, Migratable MPI Ranks [Thesis 2022]
22-08 | 2022 | [Paper] | Improving Communication Asynchrony and Concurrency for Adaptive MPI Endpoints [ExaMPI 2022]
22-07 | 2022 | [Paper] | Runtime Techniques for Automatic Process Virtualization [P2S2 2022]
22-03 | 2022 | [Paper] | Optimizing Non-Commutative Allreduce over Virtualized, Migratable MPI Ranks [APDCM 2022]
22-02 | 2021 | [Paper] | Accelerating Messages by Avoiding Copies in an Asynchronous Task-based Programming Model [ESPM2 2021]
22-01 | 2021 | [Paper] | Enabling Support for Zero Copy Semantics in an Asynchronous Task-Based Programming Model [Asynchronous Many-Task Systems for Exascale Workshop 2021]
18-02 | 2018 | [Paper] | Multi-level Load Balancing with an Integrated Runtime Approach [CCGrid 2018]
18-01 | 2017 | [Talk] | Optimizing Point-to-Point Communication between Adaptive MPI Endpoints in Shared Memory [ExaMPI 2017]
17-10 | 2017 | [Paper] | Optimizing Point-to-Point Communication between Adaptive MPI Endpoints in Shared Memory [ExaMPI 2017]
17-08 | 2017 | [Paper] | Integrating OpenMP into the Charm++ Programming Model [ESPM2 2017]
17-07 | 2017 | [Paper] | Visualizing, Measuring, and Tuning Adaptive MPI Parameters [VPA 2017]
17-06 | 2017 | [Poster] | Adaptive MPI: Dynamic Runtime Support for MPI Applications [EuroMPI 2017]
17-05 | 2017 | [Paper] | Improving the Memory Access Locality of Hybrid MPI Applications [EuroMPI 2017] (Matthias Diener, Sam White, Laxmikant Kale, Michael Campbell, Dan Bodony, Jon Freund)
16-19 | 2016 | [Paper] | Handling Transient and Persistent Imbalance Together in Distributed and Shared Memory [PPL Technical Report 2016]
16-09 | 2016 | [Talk] | Adaptive MPI: Overview & Recent Work [Charm++ Workshop 2016]
16-07 | 2015 | [Talk] | Introducing Over-decomposition to Existing Applications: A Case Study with PlasComCM and Adaptive MPI [Charm++ Workshop 2015]
16-06 | 2016 | [Talk] | Charm++ and AMPI [WEST 2016]
13-16 | 2013 | [Paper] | Parallel Science and Engineering Applications: The Charm++ Approach [Book 2013]
11-23 | 2011 | [Paper] | Automatic Handling of Global Variables for Multi-threaded MPI Programs [ICPADS 2011]
10-26 | 2011 | [Paper] | A Comparative Analysis of Load Balancing Algorithms Applied to a Weather Forecast Model [SBAC-PAD 2011]
10-21 | 2010 | [Paper] | Optimizing an MPI Weather Forecasting Model via Processor Virtualization [HiPC 2010]
10-14 | 2010 | [Paper] | Automatic MPI to AMPI Program Transformation using Photran [PROPER 2010]
10-09 | 2010 | [Paper] | Automatic MPI to AMPI Program Transformation [Charm++ Workshop 2010]
08-13 | 2008 | [Paper] | A Case Study in Tightly Coupled Multi-paradigm Parallel Programming [LCPC 2008]
07-08 | 2007 | [Paper] | Supporting Adaptivity in MPI for Dynamic Parallel Applications [PPL Technical Report 2007]
07-04 | 2007 | [Paper] | Programming Petascale Applications with Charm++ and AMPI [Petascale Computing: Algorithms and Applications 2007]
06-05 | 2006 | [Paper] | Multiple Flows of Control in Migratable Parallel Programs [HPSEC 2006]
05-06 | 2005 | [PhD Thesis] | Achieving High Performance on Extremely Large Parallel Machines: Performance Prediction and Load Balancing [Thesis 2005]
05-04 | 2006 | [Paper] | Performance Evaluation of Adaptive MPI [PPoPP 2006]
03-07 | 2003 | [Paper] | Adaptive MPI [LCPC 2003]
02-05 | 2002 | [Paper] | Adaptive MPI [PPL Technical Report 2002]
00-03 | 2001 | [Paper] | Object-Based Adaptive Load Balancing for MPI Programs [ICCS 2001]