Adaptive MPI: Intelligent Runtime Strategies and Performance Prediction via Simulation
Future Technologies Colloquium Series 2005
Publication Type: Talk
Repository URL:
My research group's goal has been to improve performance and productivity in parallel programming by developing enabling technologies in the context of multiple real parallel applications. One of the cornerstones of our research is the idea of migratable objects and the intelligent runtime techniques they enable. Here, the programmer decomposes the problem into a large number of interacting pieces, and the runtime system (RTS) maps (and remaps) these pieces to processors. Migratable objects enable capabilities such as dynamic load balancing, automatic fault tolerance, and communication optimization. The original implementation of this idea was in Charm++, a parallel C++ system; in recent years, we have implemented AMPI (Adaptive MPI), a full implementation of MPI that provides the same capabilities as Charm++.
In this talk I will describe AMPI and Charm++ and their capabilities, including fault tolerance, load balancing, and performance analysis. The virtualization capabilities of our RTS are leveraged by a performance prediction framework that can predict the performance of full-fledged application codes on very large parallel machines using relatively small parallel machines. Charm++/AMPI is used by several production-quality applications, including the molecular dynamics program NAMD and the rocket simulation codes at Illinois' ASCI center. I will illustrate the capabilities of our frameworks using these applications.