Condor has a few ways to run programs associated with a job, beyond the job itself. If you’re an administrator, you can use the USER_JOB_WRAPPER. If you’re a user who is friends with your administrator, you can use Job Hooks. If you are ambitious, you can wrap all your jobs in a script that runs programs before and after your actual job.
Or, you can use the PreCmd and PostCmd attributes on your job. They specify programs to run before and after your job executes. For example:
$ cat prepost.job
cmd = /bin/sleep
args = 1
log = prepost.log
output = prepost.out
error = prepost.err
+PreCmd = "pre_script"
+PostCmd = "post_script"
transfer_input_files = pre_script, post_script
should_transfer_files = always
queue
$ cat pre_script
#!/bin/sh
date > prepost.pre

$ cat post_script
#!/bin/sh
date > prepost.post
$ condor_submit prepost.job
Submitting job(s).
1 job(s) submitted to cluster 1.

...wait a few seconds, or 259...

$ cat prepost.pre
Sun Oct 14 18:06:00 UTC 2012
$ cat prepost.post
Sun Oct 14 18:06:02 UTC 2012
That’s about it, except for some gotchas.
- transfer_input_files is manual and required: you must list the scripts yourself; they are not transferred automatically
- The scripts are run from Iwd, so you can't use +PreCmd = "/bin/blah"; instead use +PreCmd = "blah" and transfer_input_files = /bin/blah
- should_transfer_files = always is needed: the scripts are run from Iwd, and if the job runs local to the Schedd, Iwd will be in the EXECUTE directory but the scripts won't be
- The scripts' stdout, stderr, and exit codes are ignored
- You must use the +Attr = "string" syntax; +PreCmd = pre_script won't work
- There is no way to pass arguments to the scripts
- There is no starter environment, thus no $_CONDOR_JOB_AD or $_CONDOR_MACHINE_AD, but you can find .job_ad and .machine_ad in $_CONDOR_SCRATCH_DIR
- Make sure the scripts are executable; otherwise the job will be put on hold with a reason similar to: Error from 127-0-0-1.NO_DNS: Failed to execute '…/dir_30626/pre_script': Permission denied
- PostCmd is broken in Condor 7.6, but works in 7.8
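Since the scripts get no starter environment, a pre script that wants the job ad has to look for .job_ad in the scratch directory itself. Here is a minimal sketch of what pre_script could look like; the ClusterId/ProcId attribute names are illustrative, and the date fallback mirrors the example above:

```shell
#!/bin/sh
# Pre script sketch: no $_CONDOR_JOB_AD is set for PreCmd/PostCmd,
# but the starter drops .job_ad (and .machine_ad) in the scratch dir.
AD="${_CONDOR_SCRATCH_DIR:-.}/.job_ad"
if [ -r "$AD" ]; then
    # Record which cluster/proc this run belongs to (illustrative attrs)
    grep -E '^(ClusterId|ProcId)' "$AD" > prepost.pre
else
    # Fall back to the behavior from the example above
    date > prepost.pre
fi
```

Outside of Condor the .job_ad file won't exist, so the script just falls back to writing the date.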