The Distributed Computing Group at Daresbury Laboratory
Linux Environment:
The default settings should be appropriate to begin with. If you want to customise the environment you can use modules (see the example below). Alternatively, change your .bashrc appropriately, as described in the sections that follow.
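For example, if you prefer modules, something along these lines should work (the module names are illustrative; run module avail to see what is actually installed):
module avail
module load intel/11.0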
Intel tools:
Intel tools are located in /opt/intel. You may find the following environment variables useful:
export INTEL_LICENSE_FILE="/opt/intel/licenses"
export MANPATH="/opt/intel/Compiler/11.0/man:${MANPATH}"
export LD_LIBRARY_PATH="/opt/intel/Compiler/11.0/lib/intel64:${LD_LIBRARY_PATH}"
export PATH="/opt/intel/Compiler/11.0/bin/intel64:${PATH}"
. /opt/intel/mkl/10.1.2.024/tools/environment/mklvarsem64t.sh
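As a quick check that the environment is picked up, a compile line along these lines should work (the source file name is illustrative, and the exact MKL link line depends on what you need; consult the MKL documentation if in doubt):
ifort -O2 -o my_prog my_prog.f90 \
  -L/opt/intel/mkl/10.1.2.024/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread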
Portland compilers:
The new default location of the PGI compilers is /opt/pgi (formerly /usr/local). Add the following variables:
export LM_LICENSE_FILE=/opt/pgi/license.dat
PGIROOT=/opt/pgi/linux86-64/9.0
export PATH=$PGIROOT/bin:$PATH
export MANPATH=$PGIROOT/man:$MANPATH
export LD_LIBRARY_PATH=$PGIROOT/lib:$PGIROOT/libso:$LD_LIBRARY_PATH
You may find this one useful too; it suppresses the FORTRAN STOP message printed at normal program termination:
export NO_STOP_MESSAGE=
The compilers support the PGI Fortran and C Accelerator Programming Model, version 1.0 (June 2009).
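A minimal sketch of an accelerator region in Fortran, assuming the PGI 9.0 compiler set up above (the file name and flags are illustrative):
! saxpy.f90 - offload a simple loop with a PGI Accelerator region
program saxpy
  integer, parameter :: n = 1000000
  integer :: i
  real :: x(n), y(n), a
  a = 2.0; x = 1.0; y = 0.0
!$acc region
  do i = 1, n
     y(i) = y(i) + a*x(i)
  end do
!$acc end region
  print *, y(1)
end program saxpy
Compile with something like
pgf90 -ta=nvidia -Minfo=accel saxpy.f90 -o saxpy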
HMPP Workbench:
HMPP stands for Hybrid Multi-core Parallel Programming. The HMPP environment is loaded by sourcing
. /opt/HMPPWorkbench-2.1.0/bin/hmpp-workbench-env.sh
Once it is loaded, the hmppcc and hmppfort compilers are at your disposal. Note that the hmppfort compiler does not currently work with gfortran, so you have to use the Intel Fortran compiler ifort. If the Intel compiler does not work on a compute node, use one of the head nodes (cseht or csehtgpu).
A tutorial on HMPP is available here.
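A minimal sketch of an HMPP codelet in C, assuming the workbench above (the directive clauses and the exact hmppcc invocation should be checked against the HMPP Workbench manual):
/* add.c - mark a function as a codelet to be offloaded to the GPU */
#pragma hmpp add codelet, target=CUDA, args[c].io=out
void add(int n, float a[n], float b[n], float c[n])
{
    int i;
    for (i = 0; i < n; i++) c[i] = a[i] + b[i];
}
and at the call site
#pragma hmpp add callsite
add(n, x, y, z);
built with something along the lines of
hmppcc add.c main.c -o add_hmpp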
NVIDIA compiler:
export CUDA_HOME=/usr/local/cuda
export CUDA_SDK_HOME=/usr/local/cuda/NVIDIA_CUDA_SDK
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib:$LD_LIBRARY_PATH
If you need libcuda.so, compile your code on a compute node with the NVIDIA drivers installed, i.e. comp032 - comp039. The library is located at /usr/lib64/libcuda.so.
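A minimal sketch of building a CUDA code with the toolkit above (the file names are illustrative; the -lcuda line is only needed if you use the driver API, and then only on the driver-equipped nodes comp032 - comp039):
nvcc -O2 -o vecadd vecadd.cu
nvcc -O2 -o my_drv my_drv.cu -L/usr/lib64 -lcuda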
Parallel Environment:
For best performance we suggest using OpenMPI. Set OMPIROOT to one of the following three, depending on your compiler:
OMPIROOT=/opt/openmpi-1.2.6-1/intel
OMPIROOT=/opt/openmpi-1.2.6-1/pgi
OMPIROOT=/opt/openmpi-1.2.6-1/gcc
and then
export PATH=$OMPIROOT/bin:$PATH
export LD_LIBRARY_PATH=$OMPIROOT/lib64:$LD_LIBRARY_PATH
export MANPATH=$OMPIROOT/share/man:$MANPATH
Use ompisub to submit a job, e.g.
ompisub Nx4 execfile [args]
This will create a script and submit the job. You can use the generated script as a template for more sophisticated submission scripts, which you then submit with
qsub -pe openmpi n script.sh
where "n" is the number of nodes.
MVAPICH and MVAPICH2 are also available, together with the mpichsub and mpich2sub submission scripts respectively. The usage is similar to OpenMPI.
Intel MPI is now available too; the mpich2sub submission script should be compatible with it but will need some testing.
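For example, assuming the same NxM convention as ompisub (executable name illustrative):
mpichsub 2x4 ./my.exe
mpich2sub 2x4 ./my.exe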
Resource selection:
If you need to use specific nodes, use qsub with the -l option or add the corresponding -l request to your job submission script; see the examples after the list below.
For Harpertown nodes (x32) use -l gpgpu=false,nehalem=false
For Nehalem or Tesla nodes (x8) use -l gpgpu=true,nehalem=true
For Shanghai or FireStream nodes (x2) use -l gpgpu=true,nehalem=false
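For instance, to place a job on a Nehalem/Tesla node directly from the command line (the script name is illustrative):
qsub -pe openmpi 1 -l gpgpu=true,nehalem=true script.sh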
Example of a submission script:
#!/bin/sh
# Harpertown nodes
#$ -cwd -V -l gpgpu=false,nehalem=false
mpiexec -machinefile $HOME/.mpich/mpich_hosts.$JOB_ID -np 2 ./my.exe
File transfer:
You need to create an ssh tunnel through our gateway machine, cselog2.dl.ac.uk (an ssh client needs to be installed).
Windows:
- open a command prompt and type
ssh -L 33022:cseht.dl.ac.uk:22 <username>@cselog2.dl.ac.uk
[you will be asked for your password on cselog2]
- open another command prompt and type
ssh -p 33022 <username>@localhost
[you will be asked for your password on cseht]
- you can now copy through the ssh tunnel
scp -P 33022 <source> <destination>
where <source> and <destination> can be either a local path or a remote path of the form <username>@localhost:<path>
Linux:
The steps are the same, except that in the first step you can add -fN to ssh so the tunnel runs in the background:
ssh -fNL 33022:cseht.dl.ac.uk:22 <username>@cselog2.dl.ac.uk
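Once the tunnel is up, copies in either direction might look like this (file names and the remote path are illustrative):
scp -P 33022 results.tar.gz <username>@localhost:/home/<username>/
scp -P 33022 <username>@localhost:/home/<username>/input.dat .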