Submit batch computing jobs to MMCloud
Always ensure you have obtained the latest version of the required scripts in the src/ folder of this repository. You can git clone the repo to get all files. Keep these files in the same folder / subfolder when running commands: mm_jobman.sh, mm_batch.sh, host_init.sh, and bind_mount.sh.
Login to MMCloud
Note: there is a firewall issue on our departmental HPC, so we cannot log in to float from the HPC. Please use your laptop or desktop to run the float commands.
This assumes that your admin has already created an MMCloud account for you, with a username and password provided. To log in, run:
float login -u <username> -a <op_center_ip_address>
Example:
float login -u gaow -a 3.82.198.55
Note: If you see an error like the following during your runs, you likely have not logged in to the appropriate OpCenter:
Error: Invalid token
If you do not have a username in the MMCloud OpCenter, please talk to your system admin. They will provide a username and password you can log in with and submit jobs under.
Execute a simple command through pre-configured docker containers
Here is an example running a simple bash script susie_example.sh with some R code in it. The susie_example.sh script has these contents (adapted from the example shown by running ?susieR::susie in an R terminal):
#!/bin/bash
Rscript - << 'EOF'
# susie example (requires the susieR package)
library(susieR)
set.seed(1)
n = 1000
p = 1000
beta = rep(0,p)
beta[1:4] = 1
X = matrix(rnorm(n*p),nrow = n,ncol = p)
X = scale(X,center = TRUE,scale = TRUE)
y = drop(X %*% beta + rnorm(n))
res = susie(X,y,L = 10)
print("Maximum posterior inclusions probability:")
print(max(res$pip))
saveRDS(res, "susie_example_result.rds")
EOF
The command below will submit the above bash script to AWS, requesting 2 CPUs and 8 GB of memory:
mm_batch.sh \
--job-script susie_example.sh \
-c 2 \
-m 8 \
--job-size 1
It will print some quantities to the standard output stream, which you can track with the following command:
float log cat stdout.autosave -j <job_id>
where <job_id> is available via float list, which shows all jobs on the current MMCloud OpCenter. You can roughly understand the OpCenter as the “login node” of an HPC that manages job submission. Please use float -h to learn more about job management using the float command.
Notice that even though this R script writes output to a file called susie_example_result.rds, you will not find that file after the job finishes. By default it is written somewhere inside the virtual machine (VM) instance that was fired up to run the computation, and that instance is deleted after the job is done. To keep the results, we need to mount an AWS S3 bucket, where we keep our data, to the VM. This will be introduced in the next section.
Submit multiple jobs for “embarrassingly parallel” processing
To submit multiple computing tasks all at once, we assume that:
- Each job is one line of bash command
- There are no dependencies between jobs, so multiple jobs can be executed in parallel
- All jobs use similar CPU and memory resources
Example: running multiple SoS pipelines in parallel
Suppose you have a bioinformatics pipeline script in the SoS language, a text file called my_pipeline.sos that reads like this:
[global]
parameter: gene = str
parameter: entrypoint = ""
[default]
output: f"{gene}.txt"
R: entrypoint=entrypoint, expand=True
write("success", {_output:r})
Make sure my_pipeline.sos exists in your bucket, as that is how the VM will be able to access it.
Suppose you have 3 jobs to run in parallel, possibly using different docker images, as below. The <location> in the script is where your bucket will be mounted on your instance. Please note that there is one sos command per line. DO NOT PUT NEW LINES WITHIN A COMMAND.
sos run /<location>/my_pipeline.sos --gene APOE4
sos run /<location>/my_pipeline.sos --gene BIN1
sos run /<location>/my_pipeline.sos --gene TREM2
Save these lines to run_my_job.sh. Below is an example submission command using environment variables:
username=<YOUR_USERNAME>
jobname=<NAME_OF_YOUR_JOB>
mm_batch.sh \
--job-script run_my_job.sh \
-c 2 \
-m 8 \
--mountOpt "mode=rw mode=rw" \
--mount statfungen/ftp_fgc_xqtl:/home/$username/data \
--mount statfungen/ftp_fgc_xqtl:/home/$USER/data \
--download statfungen/<FOLDER_LOCATION_IN_BUCKET>/:/home/$username/input/ \
--job-size 3 \
--parallel-commands 1 \
--float-executable float.darwin_arm64 \
--dryrun
Keep in mind the --dryrun parameter above will not actually run your command, but will print it out instead for debugging purposes. You can remove --dryrun to actually submit the job.
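For a longer gene list, the command file can also be generated with a small loop instead of typing each line by hand. The gene names below are illustrative, and /<location>/ is the same placeholder for your bucket mount point as above:

```shell
# Write one `sos run` command per gene into run_my_job.sh.
# Gene names here are examples; replace /<location>/ with your mount path.
genes="APOE4 BIN1 TREM2"
> run_my_job.sh  # start with an empty command file
for gene in $genes; do
  printf 'sos run /<location>/my_pipeline.sos --gene %s\n' "$gene" >> run_my_job.sh
done
```

Each command stays on its own line, satisfying the one-command-per-line requirement above.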
Additionally, you can combine the two --mount arguments like so:
...
--mount statfungen/ftp_fgc_xqtl:/home/$username/data,statfungen/ftp_fgc_xqtl:/home/$USER/data
...
You can find additional examples here on GitHub.
Monitoring jobs on MMCloud
After submitting jobs to the cloud through an MMCloud OpCenter, we need to monitor the status and details of the jobs through the web interface of the OpCenter or using the CLI. So far we have 2 OpCenters for our lab:
east1:
- 44.222.241.133 (mainly for interactive jobs)
- 3.82.198.55 (mainly for batch jobs - you want to submit to this OpCenter)
Using OpCenter GUI
OpCenters provide Graphical User Interfaces (GUIs) that enable users to monitor and manually adjust job statuses, including suspension and migration. To access the MMCloud GUI for your corresponding OpCenter, use your OpCenter credentials at https://<OpcenterIP>. For instance, if your OpCenter IP is 44.222.241.133, you would navigate to https://44.222.241.133/ in your browser. Upon logging in, you’ll be able to view all jobs associated with this OpCenter. Your interactive job is typically named <your_username>_jupyter_8888 by default. If you’re running multiple jobs, you can track them using the search and filter features on the interface.
Upon clicking on your job, you will see comprehensive information on the status and resource usage of your job:
- Basic Information: At the top, you’ll find details about your job, including the job ID, user, host, and the overall time and cost consumed.
- Instances: Monitor CPU and memory usage, instance status, cost, and location for the resources you’re utilizing.
- Storage Volume: Track your storage volume, which typically defaults to 60 GB.
- Settings:
  - Target Port in the Network section (usually showing 8888 -> gateway IP address)
  - Input Arguments displaying the command that initialized your job
  - Job script showing your submitted job script. Checking these settings is particularly relevant when you use our wrapper script mm_batch.sh, which automatically modifies your original commands to make them work on MMCloud. You can track these exact modifications this way.
- Attachments (Log Files):
  - job.events for checking the job when FailToComplete or another error status occurs
  - stderr.autosave for tracking messages and issues printed to the stderr stream, or locating your initial token for Jupyter Lab sessions via mm_interactive.sh
  - stdout.autosave for tracking messages and issues printed to the stdout stream.
Note: The latter two log files are more frequently used with mm_jobman.sh than with mm_interactive.sh.
- Wave Watcher: Track real-time CPU and memory usage for your job. This tool is invaluable for assessing the resource usage of a job and adjusting the CPU and memory of your next submission command to better fit your actual usage.
Using the command-line interface
To check the status using a CLI query:
float squeue
which should show the job ID. Then check the log files generated for a job:
float log -j <jobID> ls #check log file name
float log -j <jobID> cat <logfile> #check log file contents
For admins, another way to get the logs of a job is from the OpCenter itself (with the use of a script). Logs of a job are stored under /mnt/memverge/slurm/work, with two levels of directories that correspond to the first two pairs of characters of the job ID. For example, for job ID “jkbzo4y7c529fiko0jius”, the contents are stored under /mnt/memverge/slurm/work/jk/bz/o4y7c529fiko0jius.
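This two-level layout can be reproduced from a job ID with bash substring expansion, which is handy as a quick sanity check when locating logs on the OpCenter:

```shell
# Compute the OpCenter log directory for a given job ID.
job_id="jkbzo4y7c529fiko0jius"
# First two characters, next two characters, then the remainder.
log_dir="/mnt/memverge/slurm/work/${job_id:0:2}/${job_id:2:2}/${job_id:4}"
echo "$log_dir"   # /mnt/memverge/slurm/work/jk/bz/o4y7c529fiko0jius
```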
It is also possible to use those IDs to save the log file contents via
float log download -j JOB_ID
Cancel and rerun failed jobs
- Cancel all or selected jobs
float scancel --filter "status=*" -f
# or
float scancel -f --filter user=*
To suspend a job through the CLI, run float suspend -j <job_id>.
- Rerun FailToComplete jobs since a given submission date
Unfortunately, MemVerge currently does not provide a way to rerun multiple jobs at once given a submission date or any other filter. As of now, the only ways to rerun a job are through the GUI (one at a time) or with float rerun -j <JOB_ID>. Consider changing the -c and -m parameters to avoid floating, which is often the reason why these jobs failed to complete.
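Until a bulk-rerun feature exists, a simple loop over job IDs collected manually from float squeue or the GUI is a workable stopgap. The job IDs below are placeholders, and echo is kept in front of the command so you can review the commands before actually rerunning anything:

```shell
# Placeholder job IDs: replace with the FailToComplete job IDs you collected.
failed_jobs="jkbzo4y7c529fiko0jius jkbzo4y7c529fiko0jiut"
for id in $failed_jobs; do
  # Remove `echo` once you have confirmed the list is correct.
  echo float rerun -j "$id"
done
```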
Install packages not available by default
In our setup, submitting batch jobs uses packages that we install to EFS, an AWS cloud storage system with fast disk performance. mm_batch.sh is configured such that all users load the shared packages by default. If a package needs to be installed or updated, please ask the MMCloud admin, because the update will impact all batch submission jobs. For admins, please refer to this section on how to install / update packages.
Alternatively, you can use the --mount-packages option to load your own collection of packages, which will require you to set up your own computing environment. See here for details on setting up customized packages.
If you use both --oem-packages and --mount-packages, you will be using both your customized and the default packages, with your customized packages overriding the defaults if they exist in both locations.
Troubleshooting
This section documents frequently encountered issues and solutions.
Server internal error
If you get the error Error: server internal error. please contact the service owner. detail: json: cannot unmarshal string into Go struct field jobRest.Jobs.actualEmission of type float64, the likely reason is that your local float binary is older than the OpCenter version. You can follow the lab wiki instructions to upgrade or re-install the float binary.
Failure to Connect to MMCloud Due to Network Issues
It is known that:
- float cannot connect to MMCloud from the Columbia Neurology HPC.
- float cannot connect to MMCloud from some VPN networks at certain institutes, although it can connect if you are on the CUMC VPN.
When these issues occur, please consider switching to a different network and trying again.
Known issues
This section documents some of the known problems / possible improvements we can make down the road.
Better streamlined workflows
Currently we support only embarrassingly parallel jobs running the same docker image.
This is done via the mm_batch.sh shell script, which is a simple wrapper around float for job submission.
Down the road we plan to use nextflow to manage other commands, including those written in SoS.
In the context of the FunGen-xQTL protocol, for example, the mini-protocols can be implemented using nextflow, whereas the modules can be implemented using any language, including SoS.
CLI Reference for mm_jobman.sh
mm_batch.sh calls mm_jobman.sh with OpCenter settings defaulted to our lab. Here we show the interface for mm_jobman.sh:
Usage: mm_jobman.sh [options]
Required Options:
-o, --opcenter <ip> Set the OP IP address
-sg, --securityGroup <group> Set the security group
-g, --gateway <id> Set gateway
-efs <ip> Set EFS IP
Required Batch Options:
--job-script <file> Main job script to be run on MMC.
--job-size <value> Set the number of commands per job for creating virtual machines.
Batch-specific Options:
--cwd <value> Define the working directory for the job (default: ~).
--download <remote>:<local> Download files/folders from S3. Format: <S3 path>:<local path> (space-separated).
--upload <local>:<remote> Upload folders to S3. Format: <local path>:<S3 path>.
--download-include '<value>' Include certain files for download (space-separated), encapsulate in quotations.
--no-fail-fast Continue executing subsequent commands even if one fails.
--parallel-commands <value> Set the number of commands to run in parallel (default: CPU value).
--min-cores-per-command <value> Specify the minimum number of CPU cores required per command.
--min-mem-per-command <value> Specify the minimum amount of memory in GB required per command.
Required Interactive Options:
-ide, --interactive-develop-env <env> Set the IDE
Interactive-specific Options:
--idle <seconds> Amount of idle time before suspension. Only works for jupyter instances (default: 7200 seconds)
--suspend-off For Jupyter jobs, turn off the auto-suspension feature
-pub, --publish <ports> Set the port publishing in the form of port:port
--entrypoint <dir> Set entrypoint of interactive job - please give Github link
--shared-admin Run in admin mode to make changes to shared packages in interactive mode
Global Options:
-u, --user <username> Set the username
-p, --password <password> Set the password
-i, --image <image> Set the Docker image
-c <min>:<optional max> Specify the exact number or a range of CPUs to use.
-m <min>:<optional max> Specify the exact amount or a range of memory to use (in GB).
--mount-packages Grant the ability to use user packages in interactive mode
--oem-packages Grant the ability to use shared packages in interactive mode
-vp, --vmPolicy <policy> Set the VM policy
-ivs, --imageVolSize <size> Set the image volume size
-rvs, --rootVolSize <size> Set the root volume size
--ebs-mount <folder>=<size> Mount an EBS volume to a local directory. Format: <local path>=<size>. Size in GB (space-separated).
--mount <s3_path:vm_path> Add S3:VM mounts, separated by commas
--mountOpt <value> Specify mount options for the bucket (required if --mount is used) (space-separated).
--env <variable>=<value> Specify additional environmental variables (space-separated).
-jn, --job-name <name> Set the job name (batch jobs will have a number suffix)
--float-executable <path> Set the path to the float executable (default: float)
--dryrun Execute a dry run, printing commands without running them.
-h, --help Display this help message
Mode Selection
- To select batch or interactive mode, you must specify either --job-script or -ide, respectively.
- The script will return an error if both are populated, but will run an interactive tmate job in --oem-packages mode if neither is populated.
- If a batch or interactive job is specified but no package mode, it will default to --oem-packages mode, which only allows shared package usage.
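To make the two modes concrete, here is a minimal sketch of each invocation. The angle-bracket values are placeholders for the required OpCenter options your admin provides, the -ide value jupyter is an assumption based on the Jupyter jobs described above, and --dryrun keeps both commands from actually submitting:

```shell
# Batch mode: selected by --job-script (all <...> values are placeholders)
mm_jobman.sh -o <op_center_ip> -sg <security_group> -g <gateway_id> -efs <efs_ip> \
  --job-script run_my_job.sh --job-size 3 -c 2 -m 8 --dryrun

# Interactive mode: selected by -ide instead of --job-script
mm_jobman.sh -o <op_center_ip> -sg <security_group> -g <gateway_id> -efs <efs_ip> \
  -ide jupyter --dryrun
```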
Mount Options
- For the --mount option, DO NOT add s3:// to the name of your bucket. Simply put the name of your bucket + the subdirectories you want to mount.
  - Example: --mount statfungen/ftp_fgc_xqtl/:/home/ubuntu/data
- “Space-separated” means that if you want to add multiple values to a parameter, such as mounting two buckets, the values need to be separated by a comma without extra spaces.
  - Example: --mount statfungen/ftp_fgc_xqtl/:/home/ubuntu/data,statfungen/ftp_fgc_xqtl/analysis_result:/home/ubuntu/output
- Multiple --mountOpt arguments will set mount options for the respective buckets in order. Therefore, if multiple --mountOpt arguments are provided, the script will expect the same number of --mount options.
  - Correct example: --mount bucket1:local1 bucket2:local2 --mountOpt mode=r mode=rw
  - Incorrect example: --mount bucket1:local1 bucket2:local2 --mountOpt mode=r
Upload and Download Options
- For the --upload option, providing a trailing / for the <local> folder will copy the contents of the folder into the <remote> folder. Not having a trailing / will copy the entire folder. Remember not to put a trailing / at the end of the <remote> folder of your upload command.
- For the --download option, if your <remote> is intended to be a folder, add a trailing / at the end. This will make the folder get copied as a folder into your <local> directory. If no trailing / is provided, you will be copying a file and overwriting the file at <local>.
- Each --download-include will correspond to its respective --download (the first --download-include will be for the first --download, etc.). It is up to the user to make sure their parameters are correct and to take note of this respective matching.
  - Example: --download remote:local,remote2:local2 --download-include '*.txt *.bim','*.jpg'
Working Directory
- For batch jobs, since the working directory defaults to ~, any files mentioned in your job script need to be relative to the working directory.
  - Example: If --cwd is /home/ubuntu and your pipeline script my_pipeline.sos is in a bucket mounted under /home/ubuntu/data, your job script should use:
    - Relative path: sos run data/my_pipeline.sos --gene APOE4
    - Or absolute path: sos run /home/ubuntu/data/my_pipeline.sos --gene APOE4