We need to use GitLab artifacts to copy this over from the right job (and not from build_skim_latest). The data (ROOT file) isn't available to the Runner yet.

artifacts is used to specify a list of files and directories which should be attached to the job when it succeeds, fails, or always. The artifacts are sent to GitLab after the job finishes and are available for download in the GitLab UI.

Artifacts are the way to transfer files between jobs of different stages, and artifacts from all previous stages are passed in by default. To take advantage of this, one combines artifacts with dependencies. To use this feature, define dependencies in the context of the job and pass a list of all previous jobs from which the artifacts should be downloaded. You can only define jobs from stages that are executed before the current one; an error will be shown if you define jobs from the current stage or later ones. Defining an empty array will skip downloading any artifacts for that job. The status of the previous job is not considered when using dependencies, so if it failed or it is a manual job that was not run, no error occurs.

Don't want to use dependencies? Adding an empty dependencies: list will prevent downloading any artifacts into that job. Useful if you want to speed up jobs that don't need the artifacts from previous stages!

Ok, so what can we define with artifacts?

- artifacts:paths: wild-carding works (but is not often suggested)
- artifacts:name: name of the archive when downloading from the UI (default: artifacts -> artifacts.zip)
- artifacts:untracked: boolean flag indicating whether to add all Git-untracked files or not
- artifacts:when: when to upload artifacts: on_success (default), on_failure, or always
- artifacts:expire_in: human-readable length of time (default: 30 days), such as 3 mins 14 seconds
- artifacts:reports: JUnit tests (expert mode, will not cover)

Let's add artifacts to our jobs to save the build/ directory. Since the build artifacts don't need to exist for more than a day, let's have our jobs in build set expire_in: 1 day. We'll also make sure the skim_ggH job has the right dependencies as well:

```yaml
skim_ggH:
  stage: run
  dependencies:
    - build_skim
  image: rootproject/root:6.26.10-conda
  script:
    - ...
```

So now we've dealt with the first problem of getting the built code available to the skim_ggH job via artifacts and dependencies. Now we need to think about how to get the data in.

Ok, maybe not our repo, but another repo that you can add as a submodule so you don't have to clone it every time? Fine, maybe we can make a smaller ROOT file? What, we don't have time to cover that? Ok, can we use xrdcp? Yes, I realize it's a big ROOT file, but still… Anyway, there are lots of options. For large (ROOT) files, it's usually preferable to either:

- download a small file that you process entirely, or
- stream the file event-by-event (or in chunks of events at a time) and only process a small number of events.
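Putting the artifacts and dependencies pieces together, a minimal sketch of the two jobs might look like the following. The build commands and the skim executable path are assumptions for illustration; only the job names, the image, and the artifacts/dependencies keys come from the lesson:

```yaml
stages:
  - build
  - run

build_skim:
  stage: build
  image: rootproject/root:6.26.10-conda
  script:
    - cmake -S . -B build        # assumed build commands
    - cmake --build build
  artifacts:
    paths:
      - build                    # save the build/ directory for later stages
    expire_in: 1 day             # build artifacts don't need to live longer

skim_ggH:
  stage: run
  dependencies:
    - build_skim                 # only fetch artifacts from build_skim
  image: rootproject/root:6.26.10-conda
  script:
    - ./build/skim               # assumed path to the built executable
```

With this in place, the build/ directory produced by build_skim is downloaded into the skim_ggH workspace before its script runs, and artifacts from other jobs (such as build_skim_latest) are not pulled in.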
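As a sketch of the "download a small file" option in CI terms, the copy could go at the top of the job's script. The xrootd URL below is a placeholder, and xrdcp is assumed to be available in the image (it ships with most ROOT installations):

```yaml
skim_ggH:
  stage: run
  dependencies:
    - build_skim
  image: rootproject/root:6.26.10-conda
  script:
    # placeholder URL: substitute the real xrootd path to a small input file
    - xrdcp root://eospublic.cern.ch//path/to/small_file.root data.root
    - ./build/skim data.root     # assumed executable and arguments
```

For the streaming option, the job would instead pass the root:// URL directly to the skimming code and stop after a limited number of events, avoiding the download entirely.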