DeepLabCut
= EXPERIMENTAL - DeepLabCut version 3.0.0-rc8 =
Tagging Videos
Log in to the HPC Web Portal using your Bowdoin login name (not email address) and password.
Select the Interactive Applications menu and choose the "Bowdoin HPC Desktop". Select at least 16 GB of memory and the number of hours you want the Desktop session to run. Press the blue Launch button. Wait several seconds while the Cluster sets up the job, then press the blue Launch Bowdoin HPC Desktop button.
Once you are at the Linux desktop, open a Linux shell by going to the Applications menu, then System Tools, then MATE Terminal.
In the terminal, type the following (note that these commands can take several seconds to run):
module load miniconda3
source /mnt/local/miniconda3/etc/profile.d/conda.sh
conda activate dlc-3.0.0-rc8
ipython
import deeplabcut
You can safely ignore any messages about "Tensorflow binary optimizations", "Unable to register cuBLAS", and "networkx backend defined more than once".
You can now run the DeepLabCut GUI by typing:
deeplabcut.launch_dlc()
You can safely ignore any messages about "error creating runtime directory".
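If you prefer to script the workflow instead of using the GUI, the same ipython session can drive DeepLabCut's API directly. The sketch below shows the usual project-setup calls; the project name, experimenter name, and video path are placeholders, not real files on the cluster:

```python
import deeplabcut

# Placeholder names and paths -- substitute your own project details.
config_path = deeplabcut.create_new_project(
    "MyProject",                            # project name
    "experimenter",                         # your name
    ["/home/username/videos/trial1.mp4"],   # list of videos to analyze
    copy_videos=True,                       # copy videos into the project folder
)

# Extract frames for labeling, then open the labeling GUI when ready.
deeplabcut.extract_frames(config_path, mode="automatic")
deeplabcut.label_frames(config_path)
```

create_new_project returns the path to the project's config.yaml, which every subsequent DeepLabCut call takes as its first argument.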
Running the analysis on the Slurm HPC Cluster
If you are submitting to the Slurm HPC Cluster, create a job script named myscript.sh that looks like this, replacing "my-python-file" with your DeepLabCut python filename:
#!/bin/bash
#SBATCH --mail-type=BEGIN,END,FAIL
module load miniconda3
source /mnt/local/miniconda3/etc/profile.d/conda.sh
conda activate dlc-3.0.0-rc8
export LD_LIBRARY_PATH=/mnt/local/miniconda3/envs/dlc-3.0.0-rc8/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH
export DLClight="True"
python my-python-file
Log in to the HPC headnode (or get shell access through the HPC Web Portal: Clusters menu, "Slurm HPC Cluster Shell Access").
cd into the directory containing your DeepLabCut files.
Submit it to the Slurm Cluster with:
sbatch -p gpu --gres=gpu:rtx3080:1 --mem=32G myscript.sh
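The "my-python-file" that the job script runs is an ordinary Python script that calls DeepLabCut non-interactively. A minimal sketch for analyzing videos with an already-trained network; the config and video paths below are placeholders for your own project:

```python
import deeplabcut

# Placeholder paths -- point these at your own project and videos.
config_path = "/home/username/MyProject-experimenter-2024-01-01/config.yaml"
videos = ["/home/username/videos/trial1.mp4"]

# Run the trained network over the videos and save results as CSV,
# then write labeled copies of the videos for visual inspection.
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
deeplabcut.create_labeled_video(config_path, videos)
```

Because the job runs in batch mode (DLClight is set in the job script), the file should contain only non-GUI calls like these.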
= STABLE - DeepLabCut version 2.2.1 =
Tagging Videos
Log in to the HPC Web Portal using your Bowdoin login name (not email address) and password.
Select the Interactive Applications menu and choose the "Bowdoin HPC Desktop". Select at least 16 GB of memory and the number of hours you want the Desktop session to run. Press the blue Launch button. Wait several seconds while the Cluster sets up the job, then press the blue Launch Bowdoin HPC Desktop button.
Once you are at the Linux desktop, open a Linux shell by going to the Applications menu, then System Tools, then MATE Terminal.
In the terminal, type the following (note that these commands can take several seconds to run):
source /mnt/local/python-venv/dlc-2.2.1-gui/bin/activate
ipython
import deeplabcut
You can safely ignore any messages about "Tensorflow binary optimizations", "Unable to register cuBLAS", and "networkx backend defined more than once".
You can now run the DeepLabCut GUI by typing:
deeplabcut.launch_dlc()
You can safely ignore any messages about "error creating runtime directory".
Running the analysis on the Slurm HPC Cluster
If you are submitting to the Slurm HPC Cluster, create a job script named myscript.sh that looks like this, replacing "my-python-file" with your DeepLabCut python filename:
#!/bin/bash
#SBATCH --mail-type=BEGIN,END,FAIL
source /mnt/local/python-venv/dlc-2.2.1-gui/bin/activate
DLC_VENV=/mnt/local/python-venv/dlc-2.2.1-gui
NV_LIBS=$DLC_VENV/lib/python3.9/site-packages/nvidia
export LD_LIBRARY_PATH=$DLC_VENV/lib:$NV_LIBS/cuda_runtime/lib:$NV_LIBS/cublas/lib:$NV_LIBS/cufft/lib:$NV_LIBS/cusparse/lib:$NV_LIBS/cudnn/lib:$NV_LIBS/cusolver/lib:$LD_LIBRARY_PATH
export DLClight="True"
python my-python-file
Log in to the HPC headnode (or get shell access through the HPC Web Portal: Clusters menu, "Slurm HPC Cluster Shell Access").
cd into the directory containing your DeepLabCut files.
Submit it to the Slurm Cluster with:
sbatch -p gpu --gres=gpu:rtx3080:1 --mem=32G myscript.sh
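After submitting, you can monitor the job with standard Slurm commands. Replace <jobid> with the job number that sbatch prints; output from the script is written to a slurm-<jobid>.out file in the directory you submitted from:

```shell
squeue -u $USER            # list your queued and running jobs
tail -f slurm-<jobid>.out  # follow the job's output while it runs
sacct -j <jobid>           # accounting summary once the job finishes
scancel <jobid>            # cancel the job if something went wrong
```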
= Tutorials =
Note that the "source" command used to activate the Python virtual environment has changed on the new 2024 Slurm Cluster. See above for the correct "source" command.
Lucy Sullivan has created an excellent set of instructions for using DeepLabCut in Bowdoin's HPC environment. I highly recommend that you take a look!
https://github.com/losullil/Rat-Behavioral-Analysis-Using-DeepLabCut
Some more generic tutorials on using DLC itself can be found here:
Tutorial Part I: DeepLabCut - How to create a new project, label data, and start training
Tutorial Part II: DeepLabCut - network evaluation, refinement, and re-training