{{toc}}

h1. Hardware overview

You access the Euclid cluster through alexandria@usm.uni-muenchen.de

* alexandria is the file server and should not be used for computing
* There are 12 compute nodes named euclides1--euclides12
* euclides8 hosts a virtual machine and is not available for computing
* euclides12 is only available for debugging, see below
* each node has 32 logical CPUs and 64GB of RAM

h1. How to run jobs on the euclides nodes (using Slurm)

Use slurm to submit jobs or log in to the euclides nodes (euclides1-12).

*Please read through this entire wiki page so everyone can make efficient use of this cluster.*

h2. alexandria

*Please do not use alexandria as a compute node* - its hardware is different from that of the nodes, and it hosts our file server and other services that are important to us.

You should use alexandria to
* transfer files
* compile your code
* submit jobs to the nodes

If you need to debug and would like to log in to a node, please start an interactive job on one of the nodes using slurm. For instructions see below.

h2. euclides nodes

Job submission to the euclides nodes is handled by the slurm job manager (see http://slurm.schedmd.com and https://computing.llnl.gov/linux/slurm/).

*Important: In order to run jobs, you need to be added to the slurm accounting system - please contact the admin.*

All slurm commands listed below have very helpful man pages (e.g. man slurm, man squeue, ...).

If you are already familiar with another job manager, the following information may be helpful to you: http://slurm.schedmd.com/rosetta.pdf

h3. Scheduling of Jobs

At this point there are two queues, called partitions in slurm (an example of selecting one explicitly is shown below the list):
* *normal*, the default partition your jobs will be sent to if you do not specify otherwise. The time limit is currently two days, and jobs can only run on a single node.
* *debug*, which is meant for debugging. You can only run one job at a time; any further jobs you submit will remain in the queue. The time limit is 12 hours.
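
For example, to select the partition and request a wall time below the partition limit explicitly, you can add the following to a batch script (the values are illustrative; -p and --time are standard slurm options):
<pre>
#SBATCH -p normal        # or: debug
#SBATCH --time=06:00:00  # requested wall time, hh:mm:ss
</pre>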

The default memory per core is 2GB; if you need more or less, specify it with the --mem or --mem-per-cpu option.
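
For example, to request 4GB per core instead of the default in a batch script (the value is illustrative; --mem-per-cpu takes megabytes):
<pre>
#SBATCH --mem-per-cpu=4000
</pre>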

We have also set up a scheduler that goes beyond first come, first served: some jobs will be favoured over others depending on how much you or your group have been using euclides in the past 2 weeks, how long the job has been queued, and how many resources it will consume.

This serves as a starting point; we may have to adjust parameters once the slurm job manager is in regular use. Job scheduling is a complex issue, and we still need to build expertise and gain experience with the user needs in our groups. Please feel free to speak up if there is something that can be improved without creating an unfair disadvantage for other users.

You can run interactive jobs on both partitions.

h3. Running an interactive job with slurm (a.k.a. logging in)

To run an interactive job with slurm in the default partition, use
<pre>
srun -u --pty bash
</pre>

If you want to use tcsh, use
<pre>
srun -u --pty tcsh
</pre>

If you want to use more memory per job, do
<pre>
srun -u --mem-per-cpu=8000 --pty tcsh
</pre>

In case you want to open x11 applications, use the --x11=first option, e.g.
<pre>
srun --x11=first -u --pty bash
</pre>

In case the 'normal' partition is overcrowded, you can use the 'debug' partition:
<pre>
srun --account cosmo_debug -p debug -u --pty bash  # if you are part of the Cosmology group
srun --account euclid_debug -p debug -u --pty bash # if you are part of the EuclidDM group
</pre>

As soon as a slot is open, slurm will log you in to an interactive session on one of the nodes.

h3. limited ssh access

If you have an active job (batch or interactive), you can log in to the node the job is running on. Your ssh session will be killed if the job terminates, and it will be restricted to the same resources as your job (so you cannot accidentally bypass the job scheduler and harm other users' jobs).
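
For example, if squeue shows your job running on euclides3 (the node name here is illustrative), you can simply
<pre>
ssh euclides3
</pre>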

h3. Running a simple single-core batch job with slurm using the default partition

* To see what queues are available to you (called partitions in slurm), run:
<pre>
sinfo
</pre>

* To run a job with slurm, create a file myjob.slurm containing the following information:
<pre>
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -p normal

/bin/hostname
</pre>

* To submit a batch job use:
<pre>
sbatch myjob.slurm
</pre>

* To see the status of your job, use
<pre>
squeue
</pre>
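
If you only want to see your own jobs, you can filter by user with squeue's standard -u option:
<pre>
squeue -u <yourusername>
</pre>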

* To kill a job use:
<pre>
scancel <jobid>
</pre>
The <jobid> you can get using squeue.

* For some more information on your job use
<pre>
scontrol show job <jobid>
</pre>
Again, the <jobid> you can get using squeue.

h3. Running a simple single-core batch job with slurm using the debug partition

Change the partition to debug and add the appropriate account, depending on whether you are part of the euclid or cosmology group:

<pre>
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -p debug
#SBATCH --account [cosmo_debug/euclid_debug]

/bin/hostname
</pre>

h3. Accessing a node where a job is running or starting additional processes on a node

You can attach an srun command to an already existing job (batch or interactive). This means you can start an interactive session on a node where a job of yours is running, or start an additional process there.

First determine the jobid of the desired job using squeue, then use
<pre>
srun --jobid <jobid> [options] <executable>
</pre>

Or, more concretely:
<pre>
srun --jobid <jobid> -u --pty bash  # to start an interactive session
srun --jobid <jobid> ps -eaFAl     # to get detailed process information
</pre>

The processes will only run on cores that have been allocated to you. This works for batch as well as interactive jobs.
*Important: Once the original job finishes, any process attached in this fashion will be killed.*

h3. Batch script for running a multi-core job

MPI is installed on alexandria.

To run a 4-core job for an executable compiled with MPI, you can use
<pre>
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -n 4

mpirun <programname>
</pre>
and it will automatically start on the number of cores specified.

To ensure that the job is executed on only one node, add
<pre>
#SBATCH -N 1
</pre>
to the job script.

If you would like to run a program that itself starts processes, you can use the environment variable $SLURM_NPROCS, which is automatically defined for slurm jobs, to explicitly pass along the number of cores the program can run on.
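
A minimal sketch of this pattern (the program name and its --threads option are hypothetical; $SLURM_NPROCS is set by slurm):
<pre>
#!/bin/bash
#SBATCH -n 8
#SBATCH -N 1

# pass the allocated core count on to a multi-threaded program
my_program --threads $SLURM_NPROCS
</pre>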

To check whether your job is actually running on the specified number of cores, you can check the PSR column of
<pre>
ps -eaFAl
# or ps -eaFAl | egrep "<yourusername>|UID" if you just want to see your jobs
</pre>

h3. environment for jobs

By default, slurm does not initialize the environment (using .bashrc, .profile, .tcshrc, ...).

To use your usual system environment, add the following line to the submission script:
<pre>
#SBATCH --get-user-env
</pre>

h2. Software specific setup

h3. Python environment

You can use the python 2.7.3 installed on the euclides cluster by running

<pre>
source /data2/users/ccsoft/etc/setup_all
source /data2/users/ccsoft/etc/setup_python2.7.3
</pre>

h2. Notes For Euclid users

For those submitting jobs to the euclides* nodes through the Cosmo DM pipeline, here are some things which need to be specified for customized job submissions, since a different interface to slurm is used. A sketch combining these settings follows the list.

* To use larger memory per block, specify max_memory = 6000 (for 6G) and so on, inside the block definition or in the submit file (in case you want to use it for all blocks).

* If you want to run on multiple nodes/cores, use nodes='<number of nodes>:ppn=<number of cores>' inside the block definition of a particular block, or in the submit file in case you want to use it for all blocks.

* If you want to use a larger wall time, specify wall_mod=<wall time in minutes> inside the module definition.

* Note that queue=serial does not work on alexandria (we usually use it for c2pap).
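
As a rough sketch, the settings above might be combined like this (the surrounding file layout is hypothetical; only max_memory, nodes, and wall_mod are taken from the notes above):
<pre>
# hypothetical block definition in a Cosmo DM pipeline submit file
max_memory = 6000      # 6G per block
nodes = '1:ppn=4'      # 1 node, 4 cores per node
wall_mod = 120         # wall time in minutes
</pre>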

h1. Admin

There is a user "slurm", which, however, is not really necessary for the administration work. The slurm administrator needs sudo access. Some scripts for adding a user and similar tasks are in "/data1/users/slurm"; with sudo access the admin can execute those scripts. In the mysql database there is the username "slurmdb" with password.

h2. Overview over users, accounts, etc.

No sudo access needed:
<pre>
/usr/local/bin/sacctmgr show account withassoc
</pre>

h2. Adding a new user

As root on @alexandria@,

<pre>
cd /data1/users/slurm/
./add_user.sh
</pre>

h2. To increase memory, cores etc. for a user

Inside the script above there are various commands for changing user settings, e.g.

<pre>
/usr/local/bin/sacctmgr -i modify user name=$1 set GrpCPUs=32
/usr/local/bin/sacctmgr -i modify user name=$1 set GrpMem=128000
</pre>

h2. Node state "drain"

When @sinfo@ reports a node as being in the "drain" state, run
<pre>
/usr/local/bin/scontrol update nodename=NODE_NAME state=resume
</pre>
to put it back into operation.

h2. Nodes down

Sometimes nodes are reported as "down". This seems to happen as a result of network problems. Here is some "troubleshooting":https://computing.llnl.gov/linux/slurm/troubleshoot.html#nodes for this situation. Also, after a re-boot of alexandria, some manual work on slurm might be necessary to get going again.
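
A typical first check (standard slurm commands; the node name is illustrative):
<pre>
sinfo -R                                      # list down/drained nodes with the recorded reason
/usr/local/bin/scontrol show node euclides5   # inspect the node's state and reason
/usr/local/bin/scontrol update nodename=euclides5 state=resume
</pre>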