Each of the four dirac nodes (dirac-1, dirac-2, dirac-3, dirac-4) has
two Intel Xeon E5-2660 v3 CPUs (10 cores / 20 threads, 2.6 GHz, 9.6 GT/s, 25 MB cache, 105 W)
and 192 GB of memory.
If you have a Linux desktop in the Math Department, to log on to one of the dirac nodes simply use ssh (assume your Math account ID is dave72 and your Linux desktop is named euler):
euler ~ % ssh dirac-1
dave72@dirac-1's password:
Suppose we want to connect to dirac from an off-campus computer. From a Linux or Apple computer, open the terminal and connect to banach first (assume you have a MacBook and your username is dave):
MacBook-Pro:~ dave% ssh dave72@banach.math.purdue.edu
dave72@banach.math.purdue.edu's password:
Then connect to dirac (you cannot ssh to dirac-1.math.purdue.edu directly from an off-campus computer):
banach ~ % ssh dirac-1
dave72@dirac-1's password:
If you have a Windows computer, you need to install an SSH client
such as PuTTY.
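Alternatively, recent versions of Windows 10 and 11 ship a built-in OpenSSH client, so (assuming it is enabled on your machine) you can connect directly from PowerShell or Command Prompt without installing anything; the account and host below are the same examples as above:
C:\Users\dave> ssh dave72@banach.math.purdue.edu
dave72@banach.math.purdue.edu's password: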
There is a .cshrc file in your home directory; open it with any
editor such as gedit or vi:
dirac-1 ~ % gedit .cshrc
Add the following lines to it:
# for Intel MKL
source /pkgs/intel-2019.3/compilers_and_libraries_2019.3.199/linux/mkl/bin/mklvars.csh intel64 mod
# for OpenMPI
source /pkgs/intel-2019.3/bin/compilervars.csh intel64
setenv LD_LIBRARY_PATH /pkgs/hpcx/ompi-icc/lib:$LD_LIBRARY_PATH
setenv PATH /pkgs/hpcx/ompi-icc/bin:$PATH
# start Mellanox's OpenMPI with HPC-X
setenv HPCX_HOME "/export/pkgs/linux-u18/hpcx"
source $HPCX_HOME/hpcx-prof-init.csh
set path=(/pkgs/intel-2019.3/bin /pkgs/hpcx/ompi-icc/bin $path)
Save the file and close gedit. The next time you connect to dirac, the .cshrc file will be loaded automatically. To make sure the modified .cshrc is loaded without logging out and back in, do the following:
dirac-1 ~ % source .cshrc
Now you are ready to use MPI on a single dirac node. To verify that Open MPI is properly set up, use the following commands:
dirac-1 ~ % which mpif90
/pkgs/hpcx/ompi-icc/bin/mpif90
dirac-1 ~ % which mpirun
/pkgs/hpcx/ompi-icc/bin/mpirun
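You can also check which Open MPI version the wrapper compilers belong to; "mpirun --version" is a standard Open MPI option:
dirac-1 ~ % mpirun --version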
Use the following to test "mpirun" using 4 threads on dirac-1:
dirac-1 ~ % mpirun -np 4 hostname
dirac-1.math.purdue.edu
dirac-1.math.purdue.edu
dirac-1.math.purdue.edu
dirac-1.math.purdue.edu
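Before choosing the number after "-np", you may want to check how many logical CPUs the node exposes. The commands below are standard Linux utilities, not anything specific to dirac; with two 10-core CPUs, expect 20 physical cores (40 logical CPUs if hyper-threading is enabled):
dirac-1 ~ % nproc
dirac-1 ~ % lscpu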
Download the simple Fortran MPI hello test program here. Extract the files and use "make" to compile the Fortran test file, then run the executable "hello" with 7 threads:
dirac-1 ~ % tar -zxvf MPI-test.tar.gz
MPI-test/Makefile
MPI-test/test.f90
MPI-test/
dirac-1 ~ % cd MPI-test
dirac-1 ~/MPI-test % make
mpif90 -o hello -r8 -O2 test.f90
dirac-1 ~/MPI-test % mpirun -np 7 hello
Hello from thread ID 6
Hello from thread ID 0
Hello from thread ID 2
Hello from thread ID 3
Hello from thread ID 4
Hello from thread ID 5
Hello from thread ID 1
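For reference, a minimal Fortran program that produces output of this form is sketched below. This is only a guess at what test.f90 contains, written against the standard MPI Fortran interface; the file in the tarball may differ in its details.
program hello
  use mpi                                         ! MPI Fortran module provided by mpif90
  implicit none
  integer :: ierr, myid
  call MPI_Init(ierr)                             ! start MPI
  call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)  ! rank of this process
  print '(a,i0)', 'Hello from thread ID ', myid
  call MPI_Finalize(ierr)                         ! shut MPI down
end program hello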
If you get warning messages like "Conflicting CPU frequencies", simply ignore them.
To use more than one dirac node, we need automatic login from dirac-i to dirac-j without entering a password: mpirun starts processes on the other nodes through ssh, and it cannot type a password for you, so ssh must work non-interactively.
First log in on dirac-1 and generate a pair of authentication keys. Do not enter a passphrase:
dirac-1 ~ % ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/dave72/.ssh/id_rsa):
Created directory '/home/dave72/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/dave72/.ssh/id_rsa.
Your public key has been saved in /home/dave72/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 dave72@dirac-1.math.purdue.edu
Now use ssh to create the directory ~/.ssh on dirac-2. (The directory may already exist, which is fine):
dirac-1 ~ % ssh dirac-2 mkdir -p .ssh
dirac-2's password:
Finally, append your new public key to .ssh/authorized_keys on dirac-2 and enter your password one last time:
dirac-1 ~ % cat .ssh/id_rsa.pub | ssh dirac-2 'cat >> .ssh/authorized_keys'
dave72@dirac-2's password:
From now on you can log in to dirac-2 from dirac-1 without a password:
dirac-1 ~ % ssh dirac-2
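On most Linux systems the standard OpenSSH helper ssh-copy-id performs the mkdir and append steps above in one go (assuming it is installed on the dirac nodes); it asks for your password once and sets up the key on the remote side. For example, to prepare the remaining nodes:
dirac-1 ~ % ssh-copy-id dirac-3
dirac-1 ~ % ssh-copy-id dirac-4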
1. You can edit your .ssh/authorized_keys file. Each key you have authorized is on a single line (usually a very long line, so it wraps around several times in the editor). The start of the line will look something like "ssh-rsa AAAA....." At the very beginning of such a line, you can add, for example, from="dirac-?.math.purdue.edu" and a space, so the resulting line looks like
from="dirac-?.math.purdue.edu" ssh-rsa AAAA.....
With that change, the key can still be used to log into any Math
server, with no passphrase, but only from one of the dirac-i
machines (the ? is a single-character wildcard). Naturally you can
change the pattern, or have more than one pattern, if it is
convenient for you to log in from a few other machines that you
choose. The object is to make the key less useful to people trying
to log in from other random places. Another example is to use
from="banach.math.purdue.edu,euler.math.purdue.edu,dirac-?.math.purdue.edu" ssh-rsa AAAA.....
2. You can still put a passphrase on your ssh key. This sounds
like it defeats your purpose, but there is a program, ssh-agent,
that can make your automatic login work anyway. To do that, first
(one time only) add a passphrase to the key you already generated.
If it is in the file ~/.ssh/id_rsa, then
dirac-1 ~ % ssh-keygen -p -f ~/.ssh/id_rsa
will let you add a passphrase to it. Then, after logging in, before running MPI commands:
dirac-1 ~ % eval `ssh-agent`
dirac-1 ~ % ssh-add
The ssh-add command will ask for your passphrase for the key, which will be remembered for the rest of your session, so the MPI commands will just work. When you log out of the original session, the agent goes away and your key is safe.
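If you want to be sure the agent is stopped once you are done (how reliably it is cleaned up at logout can depend on the system), you can kill it explicitly; "ssh-agent -k" reads the SSH_AGENT_PID variable set by the eval above:
dirac-1 ~ % eval `ssh-agent -k`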
Either one of those techniques, or both together, will make your account a lot safer than just having an unprotected, unrestricted ssh key set up.
Now you are ready to use up to 80 threads across the four dirac nodes.
Remember that you should use at most 20 threads on each node (one per
physical core). Please be courteous to other users, since there is no
job scheduler on dirac: for instance, it is generally not a good idea
to occupy all 80 threads on the four nodes for a long stretch on a
weekday, when other people may also need dirac. You can use the
commands "top" and "who" to see who is doing what on each node.
The following are two examples. In the first, 7 threads are
distributed as 3 on dirac-1 and 4 on dirac-2; in the second, 8 threads
are spread over all four nodes.
dirac-1 ~/MPI-test % mpirun -np 7 -host dirac-1:3,dirac-2:4 hostname
dirac-1.math.purdue.edu
dirac-1.math.purdue.edu
dirac-1.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-1 ~/MPI-test % mpirun -np 8 -host dirac-1:1,dirac-2:4,dirac-3:2,dirac-4:1 hostname
dirac-1.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-2.math.purdue.edu
dirac-4.math.purdue.edu
dirac-3.math.purdue.edu
dirac-3.math.purdue.edu
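The same placement can also be written in a hostfile instead of on the command line; this is standard Open MPI usage, and the file name "myhosts" below is arbitrary:
dirac-1 ~/MPI-test % cat myhosts
dirac-1 slots=3
dirac-2 slots=4
dirac-1 ~/MPI-test % mpirun -np 7 --hostfile myhosts hostname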
If you ever need to run more MPI ranks than there are physical cores on a single node, Open MPI must be told to allow overloading the cores, for example (here on a node named biot):
mpirun -np 16 -host biot:16 hello --bind-to core:overload-allowed
At this point, we have verified MPI on multiple nodes with Fortran.
In this step, we will see how to run a C code with MPI. Download
the simple C MPI hello test program here. Extract the files and
use "make" to compile the C test file, then run the executable
"hello" with 40 threads (without an explicit slot count, "-host
dirac-1,dirac-2" typically gives Open MPI only one slot per listed
host, which is why "-oversubscribe" is needed here):
dirac-1 ~ % tar -zxvf MPI-test_C.tar.gz
MPI-test_C/Makefile
MPI-test_C/test.c
MPI-test_C/
dirac-1 ~ % cd MPI-test_C
dirac-1 ~/MPI-test_C % make
mpicc -o hello test.c
dirac-1 ~/MPI-test_C % mpirun -oversubscribe -n 40 -host dirac-1,dirac-2 ./hello
Hello from processor dirac-1.math.purdue.edu, rank 5 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 1 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 11 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 19 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 7 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 3 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 13 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 9 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 17 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 18 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 15 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 10 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 0 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 6 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 12 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 8 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 4 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 2 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 16 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 26 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 27 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 28 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 30 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 24 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 20 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 34 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 36 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 22 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 32 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 38 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 29 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 23 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 35 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 37 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 39 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 21 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 33 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 31 out of 40 processors
Hello from processor dirac-2.math.purdue.edu, rank 25 out of 40 processors
Hello from processor dirac-1.math.purdue.edu, rank 14 out of 40 processors
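For reference, the canonical MPI hello-world in C looks roughly like the following. This is only a sketch of what test.c presumably contains; the file in the tarball may differ in its details, but code of this form produces the output above.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_size, world_rank, name_len;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                      /* start MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* total number of ranks */
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  /* rank of this process */
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();                              /* shut MPI down */
    return 0;
}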
Author: Xiangxiong Zhang, and special thanks to J Chapman Flack & ChatGPT. Last updated in Jan 2026.