CHARMM (Chemistry at HARvard Macromolecular Mechanics) is the most widely distributed and used program for molecular dynamics and mechanics. Possible applications include the simulation of protein folding, in collaboration with research groups at the Department of Biochemistry. A better understanding of protein folding would contribute significantly to finding treatments for important diseases such as Alzheimer's and BSE, and to the discovery of new drugs against cancer and AIDS.
The most expensive step of the sequential version of CHARMM (more than 90% of the CPU time) is the evaluation of interactions between pairs of atoms. It has been shown that these calculations offer a high degree of parallelism, and some work has already been done in this direction using MPI.
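To illustrate why the pairwise interaction step parallelizes well, the sketch below statically splits the O(N²) pair list across worker processes and sums the partial energies, analogous in spirit to a static MPI decomposition. This is a toy example with hypothetical atom coordinates and a bare Lennard-Jones 12-6 potential (epsilon = sigma = 1); it is not CHARMM's actual force-field code, which also involves charges, cutoffs, and neighbour lists.

```python
import itertools
import math
from multiprocessing import Pool

# Toy 3D atom coordinates (hypothetical values, for illustration only).
ATOMS = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0), (1.5, 1.5, 1.5)]

def lj_energy(pair):
    """Lennard-Jones 12-6 energy for one atom pair (epsilon = sigma = 1)."""
    (x1, y1, z1), (x2, y2, z2) = pair
    r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    inv6 = 1.0 / r2 ** 3          # (sigma/r)^6 with sigma = 1
    return 4.0 * (inv6 * inv6 - inv6)

def total_energy(n_workers=2):
    """Distribute the O(N^2) pair list over workers and sum partial results.

    Each pair's energy is independent of all others, which is why this
    step offers so much parallelism; only the final reduction (the sum)
    requires communication.
    """
    pairs = list(itertools.combinations(ATOMS, 2))
    with Pool(n_workers) as pool:
        return sum(pool.map(lj_energy, pairs))

if __name__ == "__main__":
    serial = sum(lj_energy(p) for p in itertools.combinations(ATOMS, 2))
    parallel = total_energy(2)
    # The parallel decomposition must reproduce the serial result.
    assert math.isclose(serial, parallel)
```

Because each pair energy is independent, the only communication cost is the final reduction, which is why the network type (Fast Ethernet vs. Gigabit Ethernet) matters mainly for how often partial results and updated coordinates must be exchanged.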
In this project, we present a systematic approach to evaluating the performance of the current parallelization of CHARMM through the analysis of resource usage (CPUs, memory, communication), bottlenecks (the limiting factors for different problems), and different kinds of communication networks (Fast Ethernet, Gigabit Ethernet) on an in-house cluster of 16 Pentium II processors running Linux. This approach has made a better understanding of the code possible. To improve performance, we introduced changes to the parallelization strategy suggested by the systematic performance analysis, and validated these improvements by means of performance measurements.