Archive datasets flow
archive_datasets_flow(job_id, dataset_ids=None)
Prefect flow to archive a list of datasets; it corresponds to a "Job" in Scicat. Runs the archival of each individual dataset as a subflow and reports the overall job status to Scicat.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_ids` | `List[str]` | Ids of the datasets to archive | `None` |
`job_id` | `UUID` | Id of the corresponding Scicat job | required |
Raises:

Type | Description |
---|---|
`e` | Any exception raised while archiving the datasets |
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 221–255
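For orientation, a minimal direct invocation might look like the sketch below. The import path is an assumption derived from the documented source location, and the job id and dataset ids are placeholders.

```python
from uuid import UUID

# Assumed import path, inferred from the source location above.
from backend.archiver.flows.archive_datasets_flow import archive_datasets_flow

# Calling a Prefect flow function directly runs it; both ids are placeholders.
archive_datasets_flow(
    job_id=UUID("00000000-0000-0000-0000-000000000000"),
    dataset_ids=["dataset-1", "dataset-2"],
)
```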
check_free_space_in_LTS()
Prefect task to wait for free space in the LTS. Checks periodically whether the condition of enough free space is fulfilled. Only one of these tasks runs at a time; the others are scheduled only once this task has finished, i.e. once there is enough space.
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 49–55
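The single-instance behavior described above is what Prefect's tag-based concurrency limits provide. A hypothetical re-implementation of such a polling task could look like this; the tag name, mount path, and threshold are assumptions, not the project's actual values:

```python
import shutil
import time

from prefect import task

@task(tags=["lts-free-space"])  # enforce via: prefect concurrency-limit create lts-free-space 1
def wait_for_free_space(lts_path: str = "/lts", required_bytes: int = 10**12) -> None:
    """Poll the LTS volume until enough free space is available."""
    while shutil.disk_usage(lts_path).free < required_bytes:
        time.sleep(60)  # re-check once per minute
```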
create_datablocks(dataset_id, origDataBlocks)
Prefect task to create datablocks.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_id` | `str` | Dataset id | required |
`origDataBlocks` | `List[OrigDataBlock]` | List of OrigDataBlocks (Pydantic model) | required |
Returns:

Type | Description |
---|---|
`List[DataBlock]` | List of DataBlocks (Pydantic model) |
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 33–46
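The datablocks being created here are the .tar.gz bundles described under create_datablocks_flow below. The core packing step can be illustrated with the standard library; this is an illustrative helper, not the project's implementation:

```python
import tarfile
from pathlib import Path

def pack_datablock(files: list[Path], archive_path: Path) -> Path:
    """Bundle a set of dataset files into one .tar.gz datablock."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for f in files:
            tar.add(f, arcname=f.name)
    return archive_path
```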
create_datablocks_flow(dataset_id, scicat_token)
Prefect (sub-)flow to create datablocks (.tar.gz files) for files of a dataset and register them in Scicat.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_id` | `str` | Dataset id | required |
Returns:

Type | Description |
---|---|
`List[DataBlock]` | List of created and registered datablocks |
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 101–137
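A hypothetical direct invocation of the subflow, with an assumed import path and a placeholder standing in for a real Scicat token (against a real instance, registration would require valid credentials):

```python
from backend.archiver.flows.archive_datasets_flow import create_datablocks_flow

# Run the subflow on its own; returns the created and registered datablocks.
datablocks = create_datablocks_flow(dataset_id="dataset-1", scicat_token="<scicat-token>")
```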
move_data_to_LTS(dataset_id, datablock)
Prefect task to move a datablock (.tar.gz file) to the LTS. Concurrency of this task is limited to two instances at a time.
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 58–64
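In Prefect 2, a per-task cap like "at most two instances" is commonly enforced with tag-based concurrency limits. A sketch under that assumption; the tag name, mount point, and copy logic are illustrative, not the project's code:

```python
import shutil
from pathlib import Path

from prefect import task

LTS_ROOT = Path("/lts")  # assumed LTS mount point

@task(tags=["move-to-lts"])  # cap via: prefect concurrency-limit create move-to-lts 2
def move_datablock(dataset_id: str, datablock_path: Path) -> Path:
    """Copy a .tar.gz datablock into the LTS under its dataset's folder."""
    dst = LTS_ROOT / dataset_id / datablock_path.name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(datablock_path, dst)
    return dst
```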
move_datablock_to_lts_flow(dataset_id, datablock)
Prefect (sub-)flow to move a datablock to the LTS. Implements the copying of data and verification via checksum.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_id` | `str` | Dataset id | required |
`datablock` | `DataBlock` | Datablock to move to the LTS | required |
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 77–98
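Given the two tasks documented on this page, the subflow's copy-then-verify structure might be wired roughly as below. The import path is assumed, and the real flow likely adds retries and error reporting:

```python
from prefect import flow

# Assumed import path for the documented tasks.
from backend.archiver.flows.archive_datasets_flow import (
    move_data_to_LTS,
    verify_data_in_LTS,
)

@flow
def move_and_verify(dataset_id: str, datablock) -> None:
    """Copy the datablock to the LTS first, then verify the copy via checksum."""
    move_data_to_LTS(dataset_id, datablock)
    verify_data_in_LTS(dataset_id, datablock)
```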
on_get_origdatablocks_error(dataset_id, task, task_run, state)
Callback for get_origdatablocks tasks. Reports a user error.
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 26–30
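The extra dataset_id argument ahead of Prefect's standard (task, task_run, state) hook signature suggests the callback is bound before being attached. A hypothetical wiring, assuming a Prefect version whose with_options accepts on_failure hooks and that get_origdatablocks is importable from the same module:

```python
from functools import partial

# Hypothetical: bind dataset_id, then attach the result as an on_failure hook,
# which Prefect invokes with (task, task_run, state) on task failure.
hook = partial(on_get_origdatablocks_error, "dataset-1")
get_origdatablocks_with_hook = get_origdatablocks.with_options(on_failure=[hook])
```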
verify_data_in_LTS(dataset_id, datablock)
Prefect task to verify a datablock in the LTS against a checksum. Tasks of this type run with no concurrency since the LTS only allows limited concurrent access.
Source code in backend/archiver/flows/archive_datasets_flow.py, lines 67–73
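The verification step boils down to re-hashing the archived file and comparing digests. An illustrative helper, assuming SHA-256; the actual algorithm and file layout may differ:

```python
import hashlib
from pathlib import Path

def matches_checksum(lts_file: Path, expected_sha256: str) -> bool:
    """Stream the file back from the LTS and compare its digest to the expected one."""
    digest = hashlib.sha256()
    with lts_file.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```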