As for the direct methods, the entire Hartree-Fock part of the DALTON program has been parallelized, allowing the use of both PVM and MPI as message passing interfaces. The use of the parallel code requires, however, that the program has been installed as a parallel code, which is determined during the building of the program as described in Section .
If MPI is used as the message passing interface, the only change needed in the DALTON.INP file is to add the keyword .PARALLEL in the general input section, as demonstrated in the following input for a calculation of vibrational frequencies:
**DALTON INPUT
.RUN PROPERTIES
.PARALLEL
**WAVE FUNCTIONS
.HF
**PROPERTIES
.VIBANA
*END OF INPUT
The number of nodes to be used in the calculation is requested through the dalton run script with the -n option (see Section ), or as stated in local documentation. Note that the master/slave paradigm employed by DALTON leaves the master mainly handling the sequential parts of the calculation and the distribution of tasks, and thus doing very little computation compared to the n-1 slaves, see Ref. [53].
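As an illustration, a parallel run using the input above might be started as shown below; the input name hf_freq is hypothetical, and the exact calling convention of the dalton script may differ between installations, so consult the local documentation.

dalton -n 4 hf_freq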
In the case of PVM runs, the program will spawn the requested number of slaves (which enables you to create a slave on the same machine as the master process, making more efficient use of the CPU power on the master machine). The number of slaves is requested through the keyword .NODES in the *PARALLEL input module, as indicated in the following example:
**DALTON INPUT
.RUN PROPERTIES
.PARALLEL
*PARALLEL
.NODES
   4
**WAVE FUNCTIONS
.HF
**PROPERTIES
.SHIELD
**END OF INPUT
Note that this input would correspond to an MPI run with 5 nodes, as the master process comes in addition to the requested number of slaves.
By default the two-electron integrals will be screened, and a default integral threshold determines whether or not an integral will be calculated. This can be changed with the keywords .IFTHRS and .ICEDIF.
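As a hedged sketch only: the fragment below assumes that .IFTHRS is given in the *PARALLEL input module and takes an integer exponent, so that the screening threshold becomes 10 raised to the corresponding negative power; both the placement and the argument format are assumptions and should be checked against the reference manual before use.

**DALTON INPUT
.RUN PROPERTIES
.PARALLEL
*PARALLEL
.NODES
   4
.IFTHRS
   12
**WAVE FUNCTIONS
.HF
**PROPERTIES
.SHIELD
**END OF INPUT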