Cloud HPC with SLURM

AWS-hosted HPC clusters for CU Neurology, managed with the SLURM workload scheduler

The Department of Neurology’s HPC clusters are hosted on Amazon Web Services (AWS). Each lab operates its own cluster, all centrally managed by department IT. A cluster consists of a head node and multiple compute nodes, managed by the SLURM workload scheduler. This documentation covers getting started, software management, interactive environments (JupyterLab, VS Code, RStudio), and pricing for compute and storage resources.

This wiki was developed by members of the Gao Wang and Badri Vardarajan labs, particularly Ruixi Li and Yihao Li, incorporating information from the CU Neurology HPC IT team as well as tips and feedback from users in other labs.


Getting Started

Essential guide for using the AWS-based HPC system at CU Neurology

Advanced Software Setup with Pixi

Install custom R, Python, and other packages using the pixi package manager

SLURM Quick Start

Run your first job with step-by-step instructions and job script examples
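As a preview of what the quick start covers, a minimal SLURM batch script looks like the sketch below. The resource values shown (task count, memory, walltime) are illustrative placeholders, not department defaults; submit with `sbatch` on the head node.

```shell
#!/bin/bash
#SBATCH --job-name=hello           # name shown in the queue
#SBATCH --ntasks=1                 # single task
#SBATCH --cpus-per-task=1          # one CPU core
#SBATCH --mem=1G                   # memory request (placeholder)
#SBATCH --time=00:05:00            # walltime limit (placeholder)
#SBATCH --output=hello_%j.log      # %j expands to the job ID

# The job body is ordinary shell; this just reports where it ran.
echo "Hello from $(hostname)"
```

Submitting with `sbatch hello.sh` returns a job ID, and the output appears in `hello_<jobid>.log` once the job runs on a compute node.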

SLURM Reference

Comprehensive reference for SLURM commands and options

For Lab Managers

Usage tracking, cost management, and administrative tools

Job Arrays & Cost Optimization

Techniques to optimize SLURM workloads for cost efficiency

JupyterLab on HPC

Run JupyterLab on a compute node and access it from your local computer

VS Code on HPC Head Node

View and edit text files on the HPC using VS Code with Remote-SSH

VS Code on HPC Compute Node

Run Jupyter notebooks on HPC compute nodes with VS Code

RStudio Server on HPC

Run RStudio Server as a Singularity container on a compute node

Compute Pricing

CPU and GPU instance types, specifications, and hourly rates

Storage Pricing

S3 storage pricing under NIH STRIDES and additional AWS benefits

Frequently Asked Questions

Answers to common computing questions and tips, some extending beyond HPC