Deploying Targets on HPC
Last updated on 2024-07-09
Estimated time: 12 minutes
Overview
Questions
- Why would we use HPC to run Targets workflows?
- How can we run Targets workflows on Slurm?
Objectives
- Be able to run a Targets workflow on Slurm
- Understand how workers relate to targets
- Know how to configure Slurm jobs within targets
- Be able to create a pipeline with heterogeneous workers
Advantages of HPC
If your analysis involves computationally intensive or long-running tasks, such as training machine learning models or processing very large amounts of data, it will quickly become infeasible to run it on a single machine. If you have access to a High Performance Computing (HPC) cluster, you can use Targets to spread the work across its many machines and scale up your analysis. This differs from the parallel execution we have learned so far, which spawns extra R processes on the same machine to speed up execution.
Configuring Targets for Slurm
To adapt Targets to use the Slurm HPC scheduler, we change the controller. In this section we will assume that our HPC uses Slurm as its job scheduler, but you can use other schedulers such as PBS/TORQUE, Sun Grid Engine (SGE) or LSF.
In the Parallel Processing section, we used the following configuration:
R
tar_option_set(
  controller = crew_controller_local(workers = 2)
)
To configure this for Slurm, we swap out the controller for a new one created with crew_controller_slurm() from the crew.cluster package:
R
tar_option_set(
  controller = crew_controller_slurm(
    workers = 3,
    script_lines = "module load R",
    slurm_memory_gigabytes_per_cpu = 1
  )
)
Callout
If you were using a scheduler other than Slurm, you would select crew_controller_lsf(), crew_controller_pbs() or crew_controller_sge() instead of crew_controller_slurm(). These functions have their own unique arguments, each specific to its scheduler.
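For example, on an SGE cluster the equivalent configuration might look something like this (a sketch: workers and script_lines are shared by the crew.cluster controllers, but the scheduler-specific memory and CPU arguments differ, so check the crew_controller_sge() help page):
R
library(crew.cluster)
library(targets)

tar_option_set(
  # Same idea as the Slurm controller, but worker jobs are submitted to SGE.
  # Memory, CPU and queue options have their own sge_* arguments;
  # see ?crew_controller_sge for the full list.
  controller = crew_controller_sge(
    workers = 3,
    script_lines = "module load R"
  )
)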
There are a number of options you can pass to crew_controller_slurm() to fine-tune the Slurm execution, which you can find in the crew.cluster documentation. Here we are only using three:
- workers sets the number of jobs that are submitted to Slurm to process targets.
- script_lines adds some lines to the Slurm submit script used by Targets. This is useful for loading Environment Modules, as we have done here.
- slurm_memory_gigabytes_per_cpu specifies the amount of memory we need, in gigabytes per CPU.
Let’s run the modified workflow:
R
library(crew.cluster)
library(targets)
library(tarchetypes)
library(palmerpenguins)
library(broom)
suppressPackageStartupMessages(library(tidyverse))

source("R/packages.R")
source("R/functions.R")

tar_option_set(
  controller = crew_controller_slurm(
    workers = 3,
    script_lines = "module load R",
    slurm_memory_gigabytes_per_cpu = 1
  )
)

tar_plan(
  # Load raw data
  tar_file_read(
    penguins_data_raw,
    path_to_file("penguins_raw.csv"),
    read_csv(!!.x, show_col_types = FALSE)
  ),
  # Clean data
  penguins_data = clean_penguin_data(penguins_data_raw),
  # Build models
  models = list(
    combined_model = lm(
      bill_depth_mm ~ bill_length_mm, data = penguins_data),
    species_model = lm(
      bill_depth_mm ~ bill_length_mm + species, data = penguins_data),
    interaction_model = lm(
      bill_depth_mm ~ bill_length_mm * species, data = penguins_data)
  ),
  # Get model summaries
  tar_target(
    model_summaries,
    glance_with_mod_name_slow(models),
    pattern = map(models)
  ),
  # Get model predictions
  tar_target(
    model_predictions,
    augment_with_mod_name_slow(models),
    pattern = map(models)
  )
)
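Nothing changes about how we launch the pipeline: we still call tar_make() from our R session on the cluster, and crew submits the Slurm worker jobs on our behalf:
R
library(targets)

# Run the pipeline; crew.cluster submits the Slurm worker jobs for us
tar_make()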
OUTPUT
▶ dispatched target penguins_data_raw_file
● completed target penguins_data_raw_file [1.926 seconds]
▶ dispatched target penguins_data_raw
● completed target penguins_data_raw [0.409 seconds]
▶ dispatched target penguins_data
● completed target penguins_data [0.023 seconds]
▶ dispatched target models
● completed target models [0.008 seconds]
▶ dispatched branch model_predictions_812e3af782bee03f
▶ dispatched branch model_predictions_2b8108839427c135
▶ dispatched branch model_predictions_533cd9a636c3e05b
● completed branch model_predictions_812e3af782bee03f [4.027 seconds]
▶ dispatched branch model_summaries_812e3af782bee03f
● completed branch model_predictions_533cd9a636c3e05b [4.011 seconds]
▶ dispatched branch model_summaries_2b8108839427c135
● completed branch model_summaries_812e3af782bee03f [5.44 seconds]
▶ dispatched branch model_summaries_533cd9a636c3e05b
● completed branch model_predictions_2b8108839427c135 [5.478 seconds]
● completed pattern model_predictions
● completed branch model_summaries_2b8108839427c135 [4.012 seconds]
● completed branch model_summaries_533cd9a636c3e05b [4.009 seconds]
● completed pattern model_summaries
▶ ended pipeline [27.918 seconds]
We’ve successfully transferred our analysis onto a Slurm cluster!
If we wanted to give each worker more than one CPU, we could check the arguments for crew_controller_slurm() and find slurm_cpus_per_task:
R
tar_option_set(
  controller = crew_controller_slurm(
    workers = 3,
    script_lines = "module load R",
    slurm_memory_gigabytes_per_cpu = 1,
    # Added this
    slurm_cpus_per_task = 2
  )
)
SBATCH Options
The script_lines argument shown above can also be used to add #SBATCH flags to configure your worker job. Each entry in the vector is treated as a new line to be added to the sbatch script that is generated. However, you have to be careful to put all of your #SBATCH lines before any other bash commands. sbatch flags are listed in the Slurm documentation. For instance, to request that the worker has a GPU available, you could do the following:
R
tar_option_set(
  controller = crew_controller_slurm(
    workers = 3,
    script_lines = c(
      "#SBATCH --gres=gpu:1",
      "module load R"
    ),
    slurm_memory_gigabytes_per_cpu = 1
  )
)
In general, it’s better to use a dedicated crew_controller_slurm() argument than script_lines, if one exists. For example, prefer slurm_cpus_per_task = 2 to script_lines = "#SBATCH --cpus-per-task=2", and set name = "my_name" rather than using script_lines = "#SBATCH --job-name=my_name".
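Putting that advice together, a controller configured with dedicated arguments might look something like this (a sketch reusing the options discussed above; "my_name" is just a placeholder):
R
tar_option_set(
  controller = crew_controller_slurm(
    name = "my_name",                   # preferred over script_lines = "#SBATCH --job-name=my_name"
    workers = 3,
    slurm_cpus_per_task = 2,            # preferred over script_lines = "#SBATCH --cpus-per-task=2"
    slurm_memory_gigabytes_per_cpu = 1,
    script_lines = "module load R"      # kept for settings with no dedicated argument
  )
)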
HPC Workers
crew uses a persistent worker strategy. This means that crew does not submit one Slurm job for each target. Instead, you define a pool of workers when configuring the workflow. In our example above we specified a maximum of 3 workers. For each worker, crew submits a single Slurm job, and these workers will process multiple targets over their lifetime.
We can verify that this has happened using sacct, which we can use to query information about our past jobs. All the Slurm jobs with the same hash (the part after crew-) belong to the same crew controller:
BASH
sacct --starttime now-5minutes --allocations
OUTPUT
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
17244966 zsh regular wehi 4 RUNNING 0:0
17247538 DiskBacked regular wehi 8 RUNNING 0:0
17247562 sys/dashb+ regular wehi 2 RUNNING 0:0
17247674 crew-7909+ regular wehi 2 FAILED 15:0
17247676 crew-7909+ regular wehi 2 FAILED 15:0
17247677 crew-7909+ regular wehi 2 FAILED 15:0
The upside of this approach is that we don’t have to know how long each target takes to build, or what resources it needs. It also means that we don’t submit a lot of jobs, making our Slurm usage easy to monitor.
The downside of this mechanism is that the resources of the worker have to be sufficient to build all of your targets. In other words, you need to work out the maximum RAM and CPUs used across all of your targets, and specify those maximum resources in the crew_controller_slurm() function.
For example, suppose our most demanding targets need 100 GB of RAM and 8 CPUs. If we have a single pool of workers, each worker has to request those maximums, so we might use a controller a bit like this:
R
crew_controller_slurm(
  name = "cpu_worker",
  workers = 3,
  script_lines = "module load R",
  slurm_cpus_per_task = 8,
  slurm_memory_gigabytes_per_cpu = 100 / 8
)
Heterogeneous Workers
In some cases we may prefer to use more than one type of Slurm job to process our targets, especially if some of our targets need different hardware from others, such as a GPU. When we do this, we say we have “heterogeneous workers”, meaning that not all worker jobs are the same as each other. To do this, we first define each worker configuration by adding the name argument to crew_controller_slurm():
R
small_memory <- crew_controller_slurm(
  name = "small_memory",
  script_lines = "module load R",
  slurm_memory_gigabytes_per_cpu = 10
)

big_memory <- crew_controller_slurm(
  name = "big_memory",
  script_lines = "module load R",
  slurm_memory_gigabytes_per_cpu = 20
)
Next, we tell Targets about these controllers using tar_option_set() as before, with one difference: we have to combine them in a controller group:
R
tar_option_set(
  controller = crew_controller_group(small_memory, big_memory)
)
Then we specify each controller by name in each target definition:
R
list(
  tar_target(
    name = big_memory_task,
    command = Sys.getenv("SLURM_MEM_PER_CPU"),
    resources = tar_resources(
      crew = tar_resources_crew(controller = "big_memory")
    )
  ),
  tar_target(
    name = small_memory_task,
    command = Sys.getenv("SLURM_MEM_PER_CPU"),
    resources = tar_resources(
      crew = tar_resources_crew(controller = "small_memory")
    )
  )
)
When we run the pipeline, we can see the differing results. SLURM_MEM_PER_CPU is reported in megabytes, so the big_memory worker reports 20480 (20 GB) and the small_memory worker reports 10240 (10 GB):
R
tar_make()
tar_read(big_memory_task)
tar_read(small_memory_task)
OUTPUT
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr 1.1.4 ✔ readr 2.1.5
✔ forcats 1.0.0 ✔ stringr 1.5.1
✔ ggplot2 3.5.1 ✔ tibble 3.2.1
✔ lubridate 1.9.3 ✔ tidyr 1.3.1
✔ purrr 1.0.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
▶ dispatched target small_memory_task
▶ dispatched target big_memory_task
● completed target big_memory_task [1.198 seconds]
● completed target small_memory_task [1.561 seconds]
▶ ended pipeline [8.424 seconds]
[1] "20480"
[1] "10240"
Mixing GPU and CPU targets
Q: Say we have the following targets workflow. How would we modify it so that gpu_hardware is only built in a GPU Slurm job?
R
graphics_devices <- function() {
  system2("lshw", c("-class", "display"), stdout = TRUE, stderr = FALSE)
}

tar_plan(
  tar_target(
    cpu_hardware,
    graphics_devices()
  ),
  tar_target(
    gpu_hardware,
    graphics_devices()
  )
)
You will need to define two different crew controllers. Also, you will need to request a GPU from Slurm. You can find an example of this above.
R
tar_option_set(
  controller = crew_controller_group(
    crew_controller_slurm(
      name = "cpu_worker",
      workers = 1,
      script_lines = "module load R",
      slurm_memory_gigabytes_per_cpu = 1,
      slurm_cpus_per_task = 1
    ),
    crew_controller_slurm(
      name = "gpu_worker",
      workers = 1,
      script_lines = c(
        "#SBATCH --partition=gpuq",
        "#SBATCH --gres=gpu:1",
        "module load R"
      ),
      slurm_memory_gigabytes_per_cpu = 1,
      slurm_cpus_per_task = 1
    )
  )
)
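The targets themselves then pick a worker pool by controller name, following the same pattern as the big_memory/small_memory example above (a sketch; the gpuq partition name is specific to the example cluster):
R
tar_plan(
  tar_target(
    cpu_hardware,
    graphics_devices(),
    resources = tar_resources(
      # Built by the ordinary CPU worker
      crew = tar_resources_crew(controller = "cpu_worker")
    )
  ),
  tar_target(
    gpu_hardware,
    graphics_devices(),
    resources = tar_resources(
      # Built only by the worker that requested a GPU
      crew = tar_resources_crew(controller = "gpu_worker")
    )
  )
)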
OUTPUT
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr 1.1.4 ✔ readr 2.1.5
✔ forcats 1.0.0 ✔ stringr 1.5.1
✔ ggplot2 3.5.1 ✔ tibble 3.2.1
✔ lubridate 1.9.3 ✔ tidyr 1.3.1
✔ purrr 1.0.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
▶ dispatched target cpu_hardware
▶ dispatched target gpu_hardware
● completed target cpu_hardware [2.551 seconds]
● completed target gpu_hardware [2.998 seconds]
▶ ended pipeline [12.848 seconds]
R
tar_read("cpu_hardware")
OUTPUT
[1] " *-display"
[2] " description: VGA compatible controller"
[3] " product: MGA G200e [Pilot] ServerEngines (SEP1)"
[4] " vendor: Matrox Electronics Systems Ltd."
[5] " physical id: 0"
[6] " bus info: pci@0000:02:00.0"
[7] " version: 42"
[8] " width: 32 bits"
[9] " clock: 33MHz"
[10] " capabilities: vga_controller bus_master cap_list rom"
[11] " configuration: driver=mgag200 latency=0"
[12] " resources: irq:16 memory:d2000000-d2ffffff memory:d3a10000-d3a13fff memory:d3000000-d37fffff memory:d3a00000-d3a0ffff"
R
tar_read("gpu_hardware")
OUTPUT
[1] " *-display"
[2] " description: VGA compatible controller"
[3] " product: Integrated Matrox G200eW3 Graphics Controller"
[4] " vendor: Matrox Electronics Systems Ltd."
[5] " physical id: 0"
[6] " bus info: pci@0000:03:00.0"
[7] " version: 04"
[8] " width: 32 bits"
[9] " clock: 66MHz"
[10] " capabilities: vga_controller bus_master cap_list rom"
[11] " configuration: driver=mgag200 latency=64 maxlatency=32 mingnt=16"
[12] " resources: irq:16 memory:91000000-91ffffff memory:92808000-9280bfff memory:92000000-927fffff"
[13] " *-display"
[14] " description: 3D controller"
[15] " product: NVIDIA Corporation"
[16] " vendor: NVIDIA Corporation"
[17] " physical id: 0"
[18] " bus info: pci@0000:17:00.0"
[19] " version: a1"
[20] " width: 64 bits"
[21] " clock: 33MHz"
[22] " capabilities: bus_master cap_list"
[23] " configuration: driver=nvidia latency=0"
[24] " resources: iomemory:21f00-21eff iomemory:21f80-21f7f irq:18 memory:9c000000-9cffffff memory:21f000000000-21f7ffffffff memory:21f800000000-21f801ffffff"
[25] " *-display"
[26] " description: 3D controller"
[27] " product: NVIDIA Corporation"
[28] " vendor: NVIDIA Corporation"
[29] " physical id: 0"
[30] " bus info: pci@0000:65:00.0"
[31] " version: a1"
[32] " width: 64 bits"
[33] " clock: 33MHz"
[34] " capabilities: bus_master cap_list"
[35] " configuration: driver=nvidia latency=0"
[36] " resources: iomemory:24f00-24eff iomemory:24f80-24f7f irq:18 memory:bc000000-bcffffff memory:24f000000000-24f7ffffffff memory:24f800000000-24f801ffffff"
[37] " *-display"
[38] " description: 3D controller"
[39] " product: NVIDIA Corporation"
[40] " vendor: NVIDIA Corporation"
[41] " physical id: 0"
[42] " bus info: pci@0000:ca:00.0"
[43] " version: a1"
[44] " width: 64 bits"
[45] " clock: 33MHz"
[46] " capabilities: bus_master cap_list"
[47] " configuration: driver=nvidia latency=0"
[48] " resources: iomemory:28f00-28eff iomemory:28f80-28f7f irq:18 memory:e7000000-e7ffffff memory:28f000000000-28f7ffffffff memory:28f800000000-28f801ffffff"
[49] " *-display"
[50] " description: 3D controller"
[51] " product: NVIDIA Corporation"
[52] " vendor: NVIDIA Corporation"
[53] " physical id: 0"
[54] " bus info: pci@0000:e3:00.0"
[55] " version: a1"
[56] " width: 64 bits"
[57] " clock: 33MHz"
[58] " capabilities: bus_master cap_list"
[59] " configuration: driver=nvidia latency=0"
[60] " resources: iomemory:29f00-29eff iomemory:29f80-29f7f irq:18 memory:f2000000-f2ffffff memory:29f000000000-29f7ffffffff memory:29f800000000-29f801ffffff"