Pretty much; it depends on how the person submits jobs to their cluster.
They're using the Slurm scheduler (which is what we use as well), which can take a single submission where every line of the command file becomes a subjob. Effectively you can build a parallelized pipeline that produces a single cohesive output.
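For anyone curious, a minimal sketch of that "one line per subjob" pattern using a Slurm job array; `commands.txt`, the array size, and the resource numbers here are placeholders, not the actual setup from their cluster:

```bash
#!/bin/bash
#SBATCH --job-name=pipeline-step
#SBATCH --array=1-100           # one array task per line of commands.txt
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00

# commands.txt is a hypothetical file with one shell command per line;
# each array task grabs its own line and runs it in parallel with the rest.
CMD=$(sed -n "${SLURM_ARRAY_TASK_ID}p" commands.txt)
eval "$CMD"
```

To get the single cohesive output at the end, a final merge step can be submitted with `sbatch --dependency=afterok:<array_job_id>` so it only runs once every subjob has finished successfully.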
What has me curious is what their pipelines look like and how they're coded, mainly so I can see how to apply it to my own work, since they constantly run the same pipeline to produce the model results.