Urgent Computing Topline


SDSC DataStar

Cluster details

This cluster runs LoadLeveler as the default job manager/scheduler. You can run urgent jobs on these resources using either the 'spruce_sub' command or the Globus extensions.


The SDSC DataStar policy map is as follows -

red - 'next-to-run'
orange - 'next-to-run'
yellow - 'next-to-run'

Initial Setup

  • 1) Log in to the machine: ssh -A username@dslogin.sdsc.edu
  • 2) Set the variable $SPRUCE to point to /usr/local/apps/spruce/bin. If you encounter any problems, please contact us for further instructions.
  • 3) Compile your codes and have them handy in your home directory tree.
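Steps 1 and 2 above can be sketched as the following shell snippet, to be run after logging in. The path comes from step 2; adding it to $PATH is an assumption for convenience, so that spruce_sub can be invoked by name.

```shell
# Setup sketch, run after: ssh -A username@dslogin.sdsc.edu
# Path per step 2 of the instructions above.
SPRUCE=/usr/local/apps/spruce/bin
export SPRUCE
# Optional convenience (an assumption, not required by the docs):
PATH="$SPRUCE:$PATH"
export PATH
echo "$SPRUCE"
```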


Invoke the spruce_sub command with the urgency and job script specified. The script can be found here.
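The invocation can be sketched as below. The '-u' option syntax is an assumption, and job.ll is a hypothetical LoadLeveler job script; check the spruce_sub usage on DataStar for the exact flags. The snippet validates the urgency against the policy map above and prints the command instead of submitting (a dry run).

```shell
# Dry-run sketch of an urgent spruce_sub submission.
# '-u' flag and job.ll are assumptions -- verify on DataStar.
URGENCY=${1:-red}        # red, orange, or yellow, per the policy map
JOBSCRIPT=${2:-job.ll}   # hypothetical LoadLeveler job script
case "$URGENCY" in
  red|orange|yellow) ;;
  *) echo "invalid urgency: $URGENCY" >&2; exit 1 ;;
esac
# Print the command rather than running it, so the shape is visible:
echo "spruce_sub -u $URGENCY $JOBSCRIPT"
```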


More about running jobs using spruce_sub can be found here.

Globus submissions

SPRUCE jobs can be submitted like any other Globus job. An additional urgency parameter and the right contact string are all that is needed. The urgency level can take three values - red, orange, or yellow.
the contact string is -

More about running jobs using the Globus interface can be found here.
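A Globus submission carrying an urgency value might look like the sketch below. The RSL attribute name 'urgency', the globusrun options, and the executable path are assumptions; the real contact string is site-specific (elided above) and left as a placeholder. The command is echoed rather than executed.

```shell
# Dry-run sketch of a SPRUCE-enabled Globus submission.
# 'urgency' RSL attribute and globusrun flags are assumptions.
CONTACT='<datastar-contact-string>'   # placeholder; see above
URGENCY=red                           # red, orange, or yellow
RSL="&(executable=\$(HOME)/a.out)(urgency=$URGENCY)"
# Print the full command instead of submitting:
echo globusrun -b -r "$CONTACT" "$RSL"
```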