Changelog for each release

Changelog for 0.7.11

Changelog for 0.7.9

Changelog for 0.7.8

  • Hotfix for issue 259

Changelog for 0.7.7

Changelog for 0.7.6

  • Minor fixes to enable upload to the conda repository

Changelog for 0.7.5

Changelog for 0.7.4

  • Bug fixes
  • Improved documentation
  • Improved test coverage

Changelog for 0.7.3

  • Bug fixes
  • Improved documentation
  • Improved test coverage

Changelog for 0.7.0

  • Issues as part of 0.7.0 milestone: https://github.com/radical-cybertools/radical.entk/milestone/8/
  • API Changes (a combined usage sketch follows this list):
    • The 'cores' attribute of Task has been replaced by 'cpu_reqs' and 'gpu_reqs', which are dictionaries with the following structure:

      task.cpu_reqs = {
          'processes': 1,
          'process_type': None,           # or 'MPI'
          'threads_per_process': 1,
          'thread_type': None             # or 'OpenMP'
      }

      task.gpu_reqs = {
          'processes': 1,
          'process_type': None,           # or 'MPI'
          'threads_per_process': 1,
          'thread_type': None             # or 'OpenMP'
      }
      • The ResourceManager object is no longer exposed to the user. The resource description is now provided to the AppManager via the 'resource_desc' attribute:

        amgr = AppManager()
        amgr.resource_desc = {
            'resource': 'local.localhost',
            'walltime': 10,
            'cpus': 1
        }
      • AppManager no longer has an assign_workflow() method. Instead, the workflow is assigned to the 'workflow' attribute using the assignment operator (similar to resource_desc):

        amgr = AppManager()
        amgr.workflow = pipelines

        Note: 'pipelines' can be a list or a set of Pipeline objects, but no ordering among them is guaranteed.
      • AppManager has two important additional arguments:
        • write_workflow (True/False) to write the executed workflow to a file post-termination
        • rmq_cleanup (True/False) to clean up the RabbitMQ queues post-execution
      • AppManager reads default values from a JSON config file. Expected config file structure:

        {
            "hostname": "localhost",
            "port": 5672,
            "reattempts": 3,
            "resubmit_failed": false,
            "autoterminate": true,
            "write_workflow": false,
            "rts": "radical.pilot",
            "pending_qs": 1,
            "completed_qs": 1,
            "rmq_cleanup": true
        }
      • The shared_data attribute is now part of the AppManager object (since the ResourceManager is no longer exposed to the user):

        amgr = AppManager()
        amgr.shared_data = ['file1.txt', '/tmp/file2.txt']
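
The API changes above combine into a workflow script roughly like the following. This is a minimal, unverified sketch: the Task fields beyond cpu_reqs (name, executable, arguments) and the treatment of write_workflow and rmq_cleanup as AppManager constructor keywords are assumptions about the 0.7.x API rather than part of this changelog; the RabbitMQ connection defaults (hostname, port) are expected to come from the JSON config file described above.

    from radical.entk import Pipeline, Stage, Task, AppManager

    # Build a one-stage, one-task pipeline.
    t = Task()
    t.name       = 'hello'                  # assumed field
    t.executable = ['/bin/echo']            # assumed; a list in the 0.7.x API
    t.arguments  = ['Hello from EnTK']      # assumed field
    t.cpu_reqs   = {                        # structure from the changelog above
        'processes': 1,
        'process_type': None,               # or 'MPI'
        'threads_per_process': 1,
        'thread_type': None                 # or 'OpenMP'
    }

    s = Stage()
    s.add_tasks(t)

    p = Pipeline()
    p.add_stages(s)

    # write_workflow / rmq_cleanup as constructor keywords is an assumption;
    # RabbitMQ hostname/port default to the values in the JSON config file.
    amgr = AppManager(write_workflow=True, rmq_cleanup=True)

    amgr.resource_desc = {
        'resource': 'local.localhost',
        'walltime': 10,
        'cpus': 1
    }
    amgr.shared_data = ['file1.txt', '/tmp/file2.txt']

    amgr.workflow = [p]                     # a list or a set of Pipeline objects
    amgr.run()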

Changelog for 0.6.3

  • Added some more tests

Changelog for 0.6.2

  • Several issues addressed
  • Bulk submission of tasks across the entire set of pipelines
  • The workflow structure is written to a file post-execution when 'write_workflow=True' is passed to AppManager

Changelog for 0.6.1

  • Multiple bug fixes

Changelog for 0.6.0

  • The _parent_pipeline and _parent_stage attributes of Stage and Task objects have been renamed to parent_pipeline and parent_stage.
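
A minimal sketch of the renamed attributes; it assumes a Task has been added to a Stage and a Pipeline, and that the parent references are populated by EnTK internally (before execution they may simply hold default or empty values):

    from radical.entk import Pipeline, Stage, Task

    p = Pipeline()
    s = Stage()
    t = Task()

    s.add_tasks(t)
    p.add_stages(s)

    # Renamed in 0.6.0: previously t._parent_stage / t._parent_pipeline.
    # The values are maintained by EnTK itself; inspect them for debugging only.
    print(t.parent_stage, t.parent_pipeline)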