Getting Started
Essential guide for using the AWS-based HPC system at CU Neurology
The Department of Neurology’s HPC clusters are hosted on Amazon Web Services (AWS). Each lab operates its own cluster, and all clusters are centrally managed by department IT. A cluster consists of a head node and multiple compute nodes, with jobs scheduled by the SLURM workload manager. This documentation covers getting started, software management, interactive environments (JupyterLab, VS Code, RStudio), and pricing for compute and storage resources.
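For orientation before diving into the dedicated pages below, work on the cluster is typically submitted to SLURM as a batch script. The sketch below is a minimal, hedged example: the partition name, resource requests, and script name are placeholder assumptions, so check your lab's cluster configuration (for example, via sinfo) before using it.

```bash
#!/bin/bash
# hello.sbatch -- minimal SLURM batch script (partition name is a placeholder)
#SBATCH --job-name=hello
#SBATCH --partition=compute        # check sinfo for the partitions on your cluster
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G
#SBATCH --time=00:10:00            # wall-clock limit, HH:MM:SS
#SBATCH --output=hello_%j.log      # %j expands to the job ID

echo "Running on $(hostname)"
```

Submit the script with sbatch hello.sbatch, monitor it with squeue -u $USER, and cancel it with scancel followed by the job ID; the pages below cover these commands and cost-aware resource requests in detail.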
This wiki was developed by members of the Gao Wang and Badri Vardarajan labs, particularly Ruixi Li and Yihao Li, incorporating information from the CU Neurology HPC IT team as well as tips and feedback from users in other labs.
Install custom R, Python, and other packages using the pixi package manager (a brief pixi sketch appears at the end of this page)
Run your first job with step-by-step instructions and job script examples
Comprehensive reference for SLURM commands and options
Usage tracking, cost management, and administrative tools
Techniques to optimize SLURM workloads for cost efficiency
Run JupyterLab on a compute node and access it from your local computer (a tunneling sketch appears at the end of this page)
View and edit text files on the HPC using VS Code with Remote-SSH
Run Jupyter notebooks on HPC compute nodes with VS Code
Run RStudio Server as a Singularity container on a compute node
CPU and GPU instance types, specifications, and hourly rates
S3 storage pricing under NIH STRIDES and additional AWS benefits
Frequently asked computing questions and tips, some extending beyond the HPC itself
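The sketch below gives a flavor of the pixi workflow mentioned above. It assumes pixi is already installed and on your PATH, and the package names are only illustrative; the software management page covers the recommended setup for this cluster.

```bash
# Create a project-local environment managed by pixi (assumes pixi is on PATH)
pixi init my-analysis
cd my-analysis

# Add Python and R packages from conda-forge (illustrative choices)
pixi add python=3.11 numpy pandas
pixi add r-base r-ggplot2

# Run a command inside the project environment
pixi run python -c "import numpy; print(numpy.__version__)"
```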
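Similarly, the JupyterLab workflow listed above usually amounts to starting the server inside a SLURM allocation and forwarding its port over SSH. The node name, login host, and port below are placeholders; the JupyterLab page documents the exact procedure for this system.

```bash
# On the cluster: request an interactive allocation and start JupyterLab
srun --cpus-per-task=2 --mem=4G --time=02:00:00 --pty bash
jupyter lab --no-browser --ip=0.0.0.0 --port=8888

# On your local machine: tunnel the port through the head node
# (replace compute-node-01 and headnode.example.org with the actual hosts)
ssh -N -L 8888:compute-node-01:8888 your_username@headnode.example.org
# Then open http://localhost:8888 in a local browser
```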