Instructions
This simplified tutorial uses Kestrel (HPC@NREL) as an example
Written by Cheng-Wei Lee (clee2 [at] mines [dot] edu)
The very first step is to log into Kestrel. Since there are no external login nodes available yet, you need to log into the SSH gateway first via
ssh -AY [user name]@hpcsh.nrel.gov
Once on the SSH gateway, you can log into Kestrel by
ssh [user name]@kl2.hpc.nrel.gov
Once on a login node of Kestrel, we can load the following modules
source /nopt/nrel/apps/env.sh
module purge
module load anaconda3/2022.05
module load PrgEnv-intel/8.5.0
module swap cray-mpich cray-mpich-abi
module unload cray-libsci
module load intel-oneapi-compilers/2023.2.0
module load intel-oneapi-mpi/2021.10.0-intel
module load intel-oneapi-mkl/2023.2.0-intel
Once Anaconda is loaded, we can use conda to create a Python virtual environment
conda create -n [name] python=3.9
Once the virtual environment is set up, we can activate it via
conda activate [name]
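To confirm the environment is active, a quick check from inside Python (the printed path should point into the new environment):
import sys
print(sys.executable)  # should point inside the [name] environment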
With the environment activated, we also need git available, since pylada is installed directly from its git repository.
Now with everything set up, we can follow the instructions on pylada's GitHub page and install it with pip
pip install git+https://github.com/pylada/pylada-light
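A quick way to confirm the installation succeeded (assuming the pip command above finished without errors):
import pylada
print(pylada.__file__)  # prints the installed location if the import works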
Once pylada is installed, we need to create a file named "ipython_config.py" at the following path
~/.ipython/profile_default/ipython_config.py
with the following content
c = get_config()
c.InteractiveShellApp.extensions = [ "pylada.ipython" ]
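With this in place, the extension loads automatically whenever ipython starts. As a sanity check, you can also load it by hand from inside an ipython session (this only works inside ipython, not plain python):
from IPython import get_ipython
get_ipython().run_line_magic("load_ext", "pylada.ipython")  # raises if the extension cannot be found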
In order to submit jobs using pylada, we also need a configuration file in the home directory (pylada reads its settings from ~/.pylada) with the following content
vasp_has_nlep = False
################## job control definitions ################
mpirun_exe = "srun --mpi=pmi2 -n {n} {program}" # Kestrel requires --mpi=pmi2 for vasp
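# pylada fills {n} with the number of MPI tasks and {program} with the
# vasp command when it launches a calculation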
qdel_exe = "scancel"
qsub_exe = "sbatch"
################### QSTAT definition ##############
def ipython_qstat(self, arg):
    """ Prints jobs of the current user; used by pylada's qstat magic in ipython. """
    from subprocess import Popen, PIPE
    from IPython.utils.text import SList
    # get the current user's job list from slurm (replace [user name] with your user name)
    jobs = Popen(['squeue', '-u', '[user name]', '--format',
                  '%.18i %.9P %j %.8u %.2t %.10M %.6D %R'],
                 stdout=PIPE, encoding='utf-8').communicate()[0].split('\n')
    # jobs[0] is the header line and jobs[-1] is an empty string, so skip both
    names = [line.strip().split()[2] for line in jobs[1:-1]]   # job name (%j)
    mpps = [line.strip().split()[0] for line in jobs[1:-1]]    # job id (%i)
    states = [line.strip().split()[4] for line in jobs[1:-1]]  # job state (%t)
    ids = [line.strip().split()[0] for line in jobs[1:-1]]     # job id (%i)
    return SList(["{0:>10} {1:>4} {2:>3} -- {3}".format(id, mpp, state, name)
                  for id, mpp, state, name in zip(ids, mpps, states, names)])
    ## return SList(["{0}".format(name)
    ##               for id, mpp, state, name in zip(ids, mpps, states, names)])
##################### PBSSCRIPT #####################
pbs_string = '''#!/bin/bash -x
#SBATCH --account={account}
#SBATCH --nodes={nnodes}
#SBATCH --ntasks-per-node={ppn}
#SBATCH --export=ALL
#SBATCH --time={walltime}
#SBATCH --job-name={name}
###SBATCH --mem=176G  # uncomment to request a specific amount of memory (usually not needed)
#SBATCH --partition={queue}
####SBATCH --qos=high  # uncomment if high QOS is needed (usually not needed)
###SBATCH -o out
###SBATCH -e err
# Go to the directory from which our job was launched
cd {directory}
# Make sure the libraries are loaded properly for VASP
source /nopt/nrel/apps/env.sh
module purge
module load anaconda3/2022.05
module load PrgEnv-intel/8.5.0
module swap cray-mpich cray-mpich-abi
module unload cray-libsci
module load intel-oneapi-compilers/2023.2.0
module load intel-oneapi-mkl/2023.2.0
module load intel-oneapi-mpi/2021.10.0-intel
source activate [path to virtual environment]
export OMP_NUM_THREADS=1  # turn off multithreading, needed for VASP
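# pylada substitutes {header}, {scriptcommand}, and {footer} below with
# job-specific content when it writes the actual submission script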
{header}
python {scriptcommand}
{footer}
'''
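For reference, pylada fills this template by plain string formatting. A minimal sketch of what happens internally, with hypothetical values (the account, queue, and paths below are placeholders, not Kestrel defaults):
# a minimal sketch with hypothetical values, mirroring pylada's string formatting
script = pbs_string.format(
    account="myallocation",            # hypothetical allocation name
    nnodes=1,
    ppn=104,                           # Kestrel CPU nodes have 104 cores
    walltime="04:00:00",
    name="test-job",
    queue="short",                     # hypothetical partition
    directory="/scratch/[user name]/test",
    header="",                         # extra shell lines pylada may add
    scriptcommand="my_pylada_script.py",
    footer="",
)
print(script)  # the filled-in submission script, as sbatch would see it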