
Charisma: Orchestrating Migratable Parallel Objects
High-Performance Parallel and Distributed Computing (HPDC) 2007
Publication Type: Paper
Repository URL: orch2
Abstract
The parallel programming paradigm based on migratable objects, as embodied in Charm++, improves programmer productivity by automating resource management. The programmer decomposes an application into a large number of parallel objects, while an intelligent run-time system assigns those objects to processors. It migrates objects among processors to effect dynamic load balancing and communication optimizations. In addition, having multiple sets of objects representing distinct computations leads to improved modularity and performance. However, for complex applications involving many sets of objects, Charm++'s programming model tends to obscure the global flow of control in a parallel program: one must look at the code of multiple objects to discern how the multiple sets of objects are orchestrated in a given application. In this paper, we present Charisma, an orchestration notation that allows expression of Charm++ functionality without fragmenting the expression of control flow. Charisma separates the expression of parallelism, including control flow and macro data-flow, from the sequential components of the program. The sequential components only consume and publish data. Charisma supports expression of multiple patterns of communication among message-driven objects. A compiler generates Charm++ communication and synchronization code via static dependence analysis. As Charisma outputs standard Charm++ code, the functionality and performance benefits of the adaptive run-time system, such as automatic load balancing, are retained. In the paper, we show that Charisma programs scale up to 1024 processors without introducing undue overhead.
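To make the publish/consume model concrete, the following is an illustrative sketch of Charisma-style orchestration code for a one-dimensional stencil, modeled loosely on the kind of example the paper describes; the object-array name `J`, the parameter names `lb`/`rb`, and the loop keywords are assumptions for illustration, not verbatim syntax from the paper:

```
// Orchestration code: the global control flow, written once.
// In each iteration, every object J[i] publishes its left and right
// boundary values (lb, rb) and consumes its neighbors' boundaries.
// The compiler derives the Charm++ messaging from these dependences.
while (!converged)
    foreach i in J
        <lb[i], rb[i]> := J[i].compute(rb[i-1], lb[i+1]);
    end-foreach
end-while
```

The sequential body of `compute` only reads its input parameters and publishes its outputs; it contains no explicit message-passing calls, which is what keeps the global flow of control visible in one place rather than scattered across the code of many objects.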
TextRef
Chao Huang and Laxmikant V. Kale, "Charisma: Orchestrating Migratable Parallel Objects", in Proceedings of the IEEE International Symposium on High Performance Distributed Computing (HPDC), July 2007.