- Issues resolved in 0.7.10 milestone: https://github.com/radical-cybertools/radical.entk/milestone/12
- Separated CU creation from the RMQ communication thread
- Issues resolved in 0.7.9 milestone: https://github.com/radical-cybertools/radical.entk/milestone/11?closed=1
- Included ability to suspend and resume pipeline execution
- Several bug fixes
- Hotfix for issue 259
- Issues as part of 0.7.7 milestone: https://github.com/radical-cybertools/radical.entk/milestone/10/
- Improved test coverage
- Minor fixes to be able to upload to conda repo
- Issues as part of 0.7.5 milestone: https://github.com/radical-cybertools/radical.entk/milestone/9/
- Documentation improved with more examples
- Bug fixes
- Bug fixes
- Improved documentation
- Improved test coverage
- Bug fixes
- Improved documentation
- Improved test coverage
- Issues as part of 0.7.0 milestone: https://github.com/radical-cybertools/radical.entk/milestone/8/
- API Changes:
  - The `cores` attribute of a Task changed to `cpu_reqs` and `gpu_reqs`, which are dictionaries with the following structure:

    ```python
    task.cpu_reqs = {'processes': 1,
                     'process_type': None,       # or 'MPI'
                     'threads_per_process': 1,
                     'thread_type': None}        # or 'OpenMP'
    task.gpu_reqs = {'processes': 1,
                     'process_type': None,       # or 'MPI'
                     'threads_per_process': 1,
                     'thread_type': None}        # or 'OpenMP'
    ```
  - The ResourceManager object is no longer exposed to the user. The resource description is provided to the AppManager via the `resource_desc` attribute:

    ```python
    amgr = AppManager()
    amgr.resource_desc = {'resource': 'local.localhost',
                          'walltime': 10,
                          'cpus': 1}
    ```
  - AppManager no longer has an `assign_workflow()` method. Instead, assign the workflow to the `workflow` attribute using the assignment operator (similar to `resource_desc`):

    ```python
    amgr = AppManager()
    amgr.workflow = pipelines
    ```

    Note: `pipelines` can be a list or a set of Pipeline objects, but there are no guarantees of order.
  - AppManager has two important additional arguments:
    - `write_workflow` (True/False): write the executed workflow to a file post-termination
    - `rmq_cleanup` (True/False): clean up the RabbitMQ queues post-execution
  - AppManager reads default values from a JSON config file. Expected config file structure:

    ```json
    {
        "hostname": "localhost",
        "port": 5672,
        "reattempts": 3,
        "resubmit_failed": false,
        "autoterminate": true,
        "write_workflow": false,
        "rts": "radical.pilot",
        "pending_qs": 1,
        "completed_qs": 1,
        "rmq_cleanup": true
    }
    ```
  - The `shared_data` attribute is now part of the AppManager object (since the ResourceManager is no longer exposed to the user):

    ```python
    amgr = AppManager()
    amgr.shared_data = ['file1.txt', '/tmp/file2.txt']
    ```
- Added some more tests
- Several issues addressed
- Bulk submission of tasks across entire set of pipelines
- Write the workflow structure to a file post-execution when `write_workflow=True` is passed to AppManager
- Multiple bug fixes
- `_parent_pipeline` and `_parent_stage` on Stage and Task objects changed to `parent_pipeline` and `parent_stage`.
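
The request-dictionary structure documented above can be illustrated with a plain-Python sketch (no EnTK installation required; the `REQ_KEYS` constant and `has_documented_shape` helper are hypothetical and not part of the radical.entk API — only the key names come from the changelog entry):

```python
# Keys documented for task.cpu_reqs / task.gpu_reqs in the 0.7.0 API changes.
REQ_KEYS = {'processes', 'process_type', 'threads_per_process', 'thread_type'}

def has_documented_shape(reqs):
    """Return True if a request dictionary carries exactly the documented keys."""
    return set(reqs) == REQ_KEYS

# Example request dictionaries matching the documented structure.
cpu_reqs = {'processes': 4,
            'process_type': 'MPI',
            'threads_per_process': 2,
            'thread_type': 'OpenMP'}

gpu_reqs = {'processes': 1,
            'process_type': None,
            'threads_per_process': 1,
            'thread_type': None}

assert has_documented_shape(cpu_reqs) and has_documented_shape(gpu_reqs)
```

In a real EnTK workflow these same dictionaries would be assigned to a Task's `cpu_reqs` and `gpu_reqs` attributes.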