

Parallel calculations : *PARALLEL

This submodule controls the performance of the parallel version of DALTON. The implementation has been described elsewhere [90]. The current release of DALTON supports only MPI as the message-passing interface.

.DEBUG
Transfers the print level from the master to the slaves; otherwise the print level on the slaves is always zero. Intended for debugging purposes only.

.DEGREE

READ (LUCMD,*) NDEGDI

Determines the percentage of the available tasks that is handed out in a given distribution of tasks, where a distribution of tasks is defined as the process of giving batches to all slaves. The default is 5 %, which ensures that each slave receives about 20 task batches during one integral evaluation and thereby gives reasonable control of the idle time of each slave.
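The effect of this percentage can be sketched with a little arithmetic (an illustration of the scheduling idea, not DALTON source code): since each distribution hands out NDEGDI percent of the tasks and each slave gets one batch per distribution, the task pool is exhausted after roughly 100/NDEGDI distributions.

```python
import math

def n_distributions(ndegdi):
    """Approximate number of task distributions (and hence batches per
    slave) when each distribution hands out ndegdi percent of the tasks.
    Illustrative helper, not part of DALTON."""
    return math.ceil(100 / ndegdi)

print(n_distributions(5))   # the 5 % default gives 20 distributions
```

A smaller NDEGDI means more, smaller batches and finer load balancing at the price of more communication; a larger value does the opposite.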

.NODES

READ (LUCMD,*) NODES

When MPI is used as the message-passing interface, the default value is the number of nodes that has been assigned to the job; these nodes are partitioned into one master and NODES-1 slaves. In most cases the program will determine the number of nodes from the run-shell environment, setting it equal to the number of nodes requested when submitting the MPI job minus 1.
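The master/slave partitioning implied by this keyword can be sketched as follows (illustrative Python, not DALTON source; the convention that the master is MPI rank 0 is an assumption):

```python
def partition(nodes):
    """Split `nodes` MPI processes into one master (assumed rank 0)
    and nodes - 1 slaves, as the .NODES keyword implies.
    Illustrative sketch, not part of DALTON."""
    master = 0
    slaves = list(range(1, nodes))
    return master, slaves

master, slaves = partition(4)
print(master, len(slaves))  # one master, NODES-1 = 3 slaves
```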

.PRINT

READ (LUCMD, *) IPRPAR

Reads the print level for the parallel calculation. A print level of at least 2 is needed in order to evaluate the parallel efficiency. Complete timings for all nodes are given if the print level is 4 or higher.
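Putting the keywords together, a *PARALLEL input fragment might look as follows (the values shown are illustrative; the layout assumes the usual DALTON convention, implied by the READ statements above, that each keyword's value appears on the line after the keyword):

```
*PARALLEL
.NODES
16
.DEGREE
5
.PRINT
2
```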


Dalton Manual - Release 1.2.1