Search Results (6)
- Knowledge Base
- Special Computer Environments
- High Performance Computing (HPC)
Instructions for requesting GPU computing, high-memory nodes, and other specialized resources on the Bowdoin HPC Slurm cluster. Covers available NVIDIA GPU cards and request syntax, memory reservation options, mixed GPU and CPU jobs, and the experimental NVIDIA Grace Hopper system.
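As an illustrative sketch only (the partition name, GPU type string, and memory figures below are assumptions, not values from the article), a GPU and high-memory request in a Slurm job script generally looks like this:

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu          # assumed name of the GPU partition
#SBATCH --gres=gpu:1             # request one GPU of any available type
##SBATCH --gres=gpu:a100:1       # or name a specific card model (type string assumed)
#SBATCH --mem=64G                # reserve 64 GB of RAM on the node
#SBATCH --cpus-per-task=4        # CPU cores to pair with the GPU
#SBATCH --time=02:00:00          # wall-clock limit

nvidia-smi                       # print the GPU(s) Slurm actually allocated
```

The article itself lists the real card types and the exact request syntax supported on the Bowdoin cluster.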
Bowdoin College provides a Linux-based High-Performance Computing (HPC) cluster for faculty, students, and researchers. The cluster offers approximately 1,400 CPU cores, GPU computing, up to 2 TB of RAM per node, and a variety of scientific software. This article provides an overview of HPC resources and how to get started.
Instructions for submitting, monitoring, and managing jobs on the Bowdoin HPC Slurm cluster. Covers writing job scripts, using sbatch and the hpcsub wrapper, running parallel processing jobs (SMP and OpenMPI), running interactive jobs, and controlling jobs with squeue and scancel.
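As a minimal sketch (the partition name and program are placeholders, and the site-specific hpcsub wrapper is not shown), a basic Slurm batch script and the standard control commands typically look like this:

```bash
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=compute      # assumed compute partition name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # example of an SMP (shared-memory) request
#SBATCH --mem=16G
#SBATCH --time=04:00:00

srun ./my_program                # replace with the actual command to run
```

```bash
sbatch myjob.sh      # submit the script above
squeue -u $USER      # watch your queued and running jobs
scancel <jobid>      # cancel a job by its numeric ID
```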
Reference information for the Bowdoin HPC Slurm cluster, including queue (partition) descriptions, job policies and resource limits, and a hardware overview suitable for grant proposals.
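As a hedged aside (these are standard Slurm tools, not guidance taken from the article), partition names and their configured limits can usually be inspected directly on the cluster:

```bash
sinfo                                 # partitions, node states, and default time limits
sinfo -o "%P %l %c %m %G"             # partition, time limit, CPUs, memory (MB), and GRES (e.g. GPUs) per node
scontrol show partition               # full per-partition policy details
```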
Instructions for connecting to the Bowdoin HPC environment using SSH, the HPC Web Portal, JupyterLab, or RStudio. Covers SSH access from macOS and Linux, VPN requirements for off-campus use, and SSH configuration tips for dropped connections.
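As a sketch only (the hostname below is a placeholder, not the real login node), SSH access from a macOS or Linux terminal, plus a keep-alive setting that helps with dropped connections, usually looks like this:

```bash
# One-off connection from a macOS or Linux terminal (hostname is a placeholder)
ssh yourusername@hpc.example.edu

# Optional: add keep-alive settings to ~/.ssh/config so idle sessions are not dropped
cat >> ~/.ssh/config <<'EOF'
Host hpc
    HostName hpc.example.edu
    User yourusername
    ServerAliveInterval 60
    ServerAliveCountMax 5
EOF

# Afterwards, connect with the short alias
ssh hpc
```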
Instructions for transferring files between your local computer and the Bowdoin HPC environment. Covers the HPC Web Portal file browser, mounting the HPC research space via SMB from macOS or Windows, SFTP from the command line, and using Gluster temporary scratch storage for running jobs.
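As a minimal sketch (the hostname and paths are placeholders), command-line SFTP transfers to and from such a cluster typically look like this:

```bash
# Open an SFTP session to the cluster (hostname is a placeholder)
sftp yourusername@hpc.example.edu

# Inside the session:
#   put local_data.csv        upload a file from your computer
#   get results/output.txt    download a file from the cluster
#   lcd ~/Downloads           change the local working directory
#   exit                      close the session

# Non-interactive alternative: copy a whole directory with scp (destination path is a placeholder)
scp -r ./project yourusername@hpc.example.edu:/path/to/research/space/
```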