dft_managers.mpi_helpers

Contains the handling of the QE process. It can start QE, reactivate it, check whether the lock file is present and finally kill QE. Needed for CSC calculations.

dft_managers.mpi_helpers.create_hostfile(number_cores, cluster_name)[source]

Writes a host file for mpirun. This tells MPI which nodes to ssh into and start VASP on. The format of the host file depends on the type of MPI that is used.

Parameters:
number_cores: int, the number of cores that VASP runs on
cluster_name: string, the name of the server
Returns:
string: name of the hostfile if not run locally and if called by the master node
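
As a hedged illustration only (not the actual implementation), a host file in the OpenMPI "hostname slots=N" format could be written as below; the helper name write_openmpi_hostfile, the node names and the core count are made up for the example:

def write_openmpi_hostfile(nodes, cores_per_node, filename='dft.hostfile'):
    # One line per node, e.g. "node01 slots=16", telling mpirun how many
    # ranks may be placed on each host it will ssh into.
    with open(filename, 'w') as hostfile:
        for node in nodes:
            hostfile.write('{} slots={}\n'.format(node, cores_per_node))
    return filename

# Illustrative usage: two nodes with 16 cores each
write_openmpi_hostfile(['node01', 'node02'], 16)
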
dft_managers.mpi_helpers.find_path_to_mpi_command(env_vars, mpi_exe)[source]

Finds the complete path for the mpi executable by scanning the directories of $PATH.

Parameters:
env_vars: dict of string, environment variables containing PATH
mpi_exe: string, mpi command
Returns:
string: absolute path to mpi command
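
A minimal sketch of such a $PATH scan, assuming env_vars behaves like os.environ; the helper name find_in_path is illustrative and the sketch is equivalent in spirit to shutil.which:

import os

def find_in_path(env_vars, mpi_exe):
    # Walk the directories listed in $PATH and return the first executable
    # file whose name matches the requested mpi command.
    for directory in env_vars.get('PATH', '').split(os.pathsep):
        candidate = os.path.join(directory, mpi_exe)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# e.g. find_in_path(os.environ, 'mpirun') -> '/usr/bin/mpirun'
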
dft_managers.mpi_helpers.get_mpi_arguments(mpi_profile, mpi_exe, number_cores, dft_exe, hostfile)[source]

Depending on the settings of the cluster and the type of MPI used, the arguments of the MPI call have to differ. This is the most technical part of the VASP handler.

Parameters:
mpi_profile: string, MPI profile name of the cluster so that the arguments can be tailored to it
mpi_exe: string, mpi command
number_cores: int, the number of cores that VASP runs on
dft_exe: string, the command to start the DFT code
hostfile: string, name of the hostfile
Returns:
list of string: arguments to start mpi with
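
As an illustrative sketch only, a generic argument list could be assembled as below; the flags -np and -hostfile are standard mpirun options, but the profile-specific branches of the real function are omitted and the helper name build_mpi_arguments is hypothetical:

def build_mpi_arguments(mpi_exe, number_cores, dft_exe, hostfile=None):
    # Base call: mpirun -np <cores> [-hostfile <file>] <dft executable ...>
    arguments = [mpi_exe, '-np', str(number_cores)]
    if hostfile is not None:
        arguments += ['-hostfile', hostfile]
    return arguments + dft_exe.split()

# e.g. build_mpi_arguments('mpirun', 32, 'vasp_std', hostfile='dft.hostfile')
# -> ['mpirun', '-np', '32', '-hostfile', 'dft.hostfile', 'vasp_std']
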
dft_managers.mpi_helpers.poll_barrier(comm, poll_interval=0.1)[source]

Uses an asynchronous (non-blocking) barrier for synchronization; a blocking MPI barrier would otherwise use up all the CPU time while the subprocess runs.

Parameters:
comm: MPI communicator
poll_interval: float, time step for pinging the status of the sleeping ranks
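
The idea can be sketched with mpi4py's non-blocking barrier (a minimal sketch assuming an mpi4py communicator; the actual implementation may differ in detail):

import time

def poll_barrier(comm, poll_interval=0.1):
    # Post a non-blocking barrier and sleep between completion checks,
    # so the CPU stays free for the DFT subprocess instead of busy-waiting
    # in a blocking comm.Barrier().
    request = comm.Ibarrier()
    while not request.Test():
        time.sleep(poll_interval)
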