Using Claude on the HPC
Created by Rui Dong · Last updated: March 2026
Claude Code is an AI coding assistant developed by Anthropic that runs directly in your development environment. Unlike a chat interface, it has full access to your files, terminal, and cluster environment — making it particularly useful for HPC workflows involving complex pipelines, large datasets, and job scheduling.
This page covers how to get set up and how to use Claude Code effectively on the HPC. You can also use Claude Desktop on your own computer for prototyping and code development, provided you additionally follow our instructions to set up a production environment on your laptop.
Prerequisites
Before getting started, make sure you have the following:
- HPC access — an active account on the cluster with standard login credentials
- Anthropic account — sign up at claude.ai and ensure you have API access or a Pro/Team subscription that covers Claude Code usage
- API key on the HPC — upload your API key to the cluster and protect the key file with permission 600 so that only you can read it
- VSCode on HPC (optional but recommended) — follow the lab tutorial here to set up VSCode on a compute node; this is required for the VSCode extension interface described below
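Locking down the uploaded key can look like the following sketch. The file name `~/.anthropic_api_key` is a placeholder (any private path works), and the first line only creates a stand-in file for demonstration; in practice you would `scp` your real key there.

```shell
# (demo) create a stand-in key file; in practice, scp your real key to this path
touch ~/.anthropic_api_key
chmod 600 ~/.anthropic_api_key        # owner read/write only
stat -c '%a' ~/.anthropic_api_key     # prints 600

# Make the key available to Claude Code in your shell session
export ANTHROPIC_API_KEY="$(cat ~/.anthropic_api_key)"
```

You can add the `export` line to your shell startup file so the key is available in every session.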
Interface Options
There are three main ways to use Claude Code on the HPC. All three expose largely the same capabilities — the differences are in setup complexity, display quality, and workflow fit.
Option A: Claude Code CLI in JupyterLab/Terminal Directly
If you already work in JupyterLab, the easiest entry point is running the Claude Code CLI directly in a terminal session within JupyterLab.

Setup: Install Claude Code via npm in your environment following the official docs, then launch it with claude from any terminal.
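The setup step above can be sketched as follows; the package name matches the official docs, but check them for the current Node.js version requirement and any module-loading steps specific to your cluster.

```shell
# Install the Claude Code CLI into your user environment (requires Node.js)
npm install -g @anthropic-ai/claude-code

# Launch it from any terminal, e.g. a terminal tab inside JupyterLab
claude
```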
Workflow: Open a terminal tab inside JupyterLab alongside your notebook. If you have two monitors, keeping the notebook on one screen and the CLI on the other works well. On a single large screen, JupyterLab’s split-pane view lets you tile both side by side within one browser window.
Limitations: The terminal rendering can sometimes struggle with math formulas and non-ASCII characters, which may appear garbled. It also has a steeper learning curve than the GUI-based alternatives.
Option B: Claude Code Extension in VSCode Web
Once VSCode is running on the HPC (see prerequisites), this is the most polished interface for day-to-day use.

Setup:
- Open the Extensions panel in VSCode and search for “Claude Code for VS Code”
- Install the extension from Open VSX — note there are several extensions with similar-looking icons; select the official Anthropic one (identifiable by its high download count)
- After installation, a small Claude Code icon will appear in the top-right corner of your editor — click it to open the panel
Workflow: The Claude Code panel opens on the right side of your editor, keeping your script or notebook visible alongside it. You can optionally enable “Launch Claude in the terminal instead of the native UI” in settings if you prefer the CLI-style experience within VSCode.
Option C: Claude Code Desktop App (SSH)
The Claude Code desktop app runs locally on your laptop or workstation and can connect to the HPC via SSH. The connection can be less reliable, but you do not have to upload your API key to the HPC.

Setup: Download the app from code.claude.com. In code mode, enable SSH connection — see the official SSH docs for configuration details. Note that cowork mode does not currently support SSH and is limited to local development.
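The SSH connection uses your standard client configuration, so a host alias in `~/.ssh/config` is usually all the app needs. The entry below is a generic sketch; hostname and username are placeholders, and the `Control*` lines (optional) reuse one authenticated connection so you are not prompted repeatedly.

```
Host hpc
    HostName login.example-cluster.edu
    User your_username
    # Optional: multiplex sessions over one connection
    # (create the ~/.ssh/sockets directory first)
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h:%p
    ControlPersist 10m
```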
Limitations: The desktop app does not support a native side-by-side view of your remote scripts and the Claude panel in a single window. A workaround is to tile JupyterLab (in a browser) and the desktop app on the same screen, though this is less flexible. Stability has also been an issue: the app occasionally crashes or freezes, which can interrupt longer sessions, and for intensive HPC work it can struggle to maintain a reliable connection.

Example Use Cases and Prompts
The examples below are representative tasks from day-to-day research workflows. The prompts are intentionally detailed — Claude Code performs better when given specific paths, expectations, and output requirements upfront.
1. Creating an analysis notebook from scratch
Rather than setting up boilerplate yourself, you can ask Claude Code to build an entire notebook including data loading, QC steps, analysis, and visualizations.
“I want you to create a Jupyter notebook with an R kernel under <PATH_TO_NOTEBOOK>. In this analysis, run Mendelian randomization (MR-PRESSO) on two summary statistics files: the variant-exposure data is at <PATH_TO_SUMSTATS_1> and the variant-trait data is at <PATH_TO_SUMSTATS_2>. The variants may be on different genome builds, so check formats and perform liftover if needed. After harmonizing, run MR-PRESSO and MR-Egger, generate visualizations of the results, and compare these outputs. They should be very similar; if not, double check your code because something might be wrong. For each section, include a checkpoint showing how many variants were retained and dropped. Begin with a summary section describing the goal, inputs, and outputs. Save all outputs to <OUTPUT_PATH>.”
The trick is to ask Claude to self-verify by running different methods (or different software implementations) on the same data in cases where you expect the methods to behave similarly.
2. Cleaning up configuration files
“My ~/.bashrc is disorganized. Please clean it up, group related settings logically, and add comments where helpful. Afterwards, summarize all the custom functions and aliases I’ve defined, what they do, and what output I should expect when I run them.”
3. Checking and recovering failed SLURM jobs
“I previously submitted the SLURM job script at <PATH_TO_SCRIPT>. Check whether all array jobs have finished, then inspect the output directory and identify any jobs that failed or produced warnings. Explain why each failure occurred.”
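Behind the scenes, a check like this typically rests on SLURM’s accounting tool. A minimal sketch, where the job ID 12345678 is a placeholder and the `printf` line stands in for real accounting output:

```shell
# Suppose the accounting output was saved with:
#   sacct -j 12345678 --parsable2 --format=JobID,State,ExitCode > sacct.log
# (demo) stand-in for that output, one pipe-delimited row per array task
printf 'JobID|State|ExitCode\n12345_0|COMPLETED|0:0\n12345_1|FAILED|1:0\n' > sacct.log

# Keep only rows (after the header) whose State is not COMPLETED
awk -F'|' 'NR > 1 && $2 != "COMPLETED"' sacct.log   # prints 12345_1|FAILED|1:0
```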
Then as a follow-up:
“Write a new submission script to rerun only the failed jobs. Clean up the old output files for those jobs before resubmitting.”
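The resulting rerun often reduces to a single sbatch flag; the array indices below are illustrative.

```shell
# Resubmit only the failed array indices of the original script
sbatch --array=3,17,42 <PATH_TO_SCRIPT>
```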
4. Selecting nodes for a new job submission
“I need to submit a SLURM job but want to avoid overloaded nodes. The job has 100 arrays and each requires 15 GB of memory. Check the current cluster state and recommend the best partition and node options for this job.”
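For a prompt like this, Claude will typically run scheduler queries along the following lines. The format strings are standard sinfo/squeue options, but verify them against man sinfo on your cluster, since sites configure SLURM differently.

```shell
# Per-node view: name, partition, CPUs (allocated/idle/other/total), free memory in MB
sinfo -N -o '%N %P %C %e'

# Count queued and running jobs per partition to gauge contention
squeue -h -o '%P %T' | sort | uniq -c
```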
5. Debugging a script error
“My Python script at <PATH_TO_SCRIPT> is throwing an error when I run it on <INPUT_FILE>. The full error log is at <PATH_TO_LOG>. Please identify the root cause, fix the script, and confirm the fix works on a small test input before I rerun the full job.”
6. Documenting an existing notebook
“I have a Jupyter notebook at <PATH_TO_NOTEBOOK> that lacks documentation. Please read through it and add a summary cell at the top describing what the analysis does, add markdown cells between major sections explaining the logic, and add inline comments to any non-obvious code blocks. Do not change any of the actual code.”
Tips for Writing Effective Prompts
Specify inputs and outputs explicitly. State where data comes from and where results should be saved.
Ask for checkpoints. For multi-step analyses, request a summary (e.g. line counts) after each major step. This makes it easier to catch errors early without re-running the whole pipeline.
Specify the language, kernel, or environment. If you need an R kernel instead of Python, or want the script to run in a specific conda environment, say so upfront.
Break large tasks into stages. For complex workflows, start with a planning prompt (“outline the steps you would take to…”) before asking Claude to execute. This lets you catch misunderstandings before any files are created.
Use follow-up prompts to iterate. You don’t need to write a perfect prompt the first time. Ask Claude to do one piece, review it, then continue — especially useful for long analysis notebooks.