

Have lsf driver log missing jobid in kill to debug instead of error #7944

Closed
jonathan-eq wants to merge 2 commits

Conversation

jonathan-eq
Contributor

@jonathan-eq jonathan-eq commented May 22, 2024

Issue
Resolves #7907

Approach
🔢

(Screenshot of new behavior in GUI if applicable)

  • PR title captures the intent of the changes, and is fitting for release notes.
  • Added appropriate release note label
  • Commit history is consistent and clean, in line with the contribution guidelines.
  • Make sure tests pass locally (after every commit!)

When applicable

  • When there are user facing changes: Updated documentation
  • New behavior or changes to existing untested code: Ensured that unit tests are added (See Ground Rules).
  • Large PR: Prepare changes in small commits for more convenient review
  • Bug fix: Add regression test for the bug
  • Bug fix: Create Backport PR to latest release

@jonathan-eq jonathan-eq self-assigned this May 22, 2024
@jonathan-eq jonathan-eq added the maintenance (Not a bug now but could be one day, repaying technical debt) and release-notes:skip (If there should be no mention of this in release notes) labels May 22, 2024
@jonathan-eq jonathan-eq marked this pull request as ready for review May 22, 2024 08:32
@berland
Contributor

berland commented May 22, 2024

I think this is an actual bug that this code (before and after the PR) hides. If we issue kill() right after submit(), but before _execute_with_retry() finishes, the job will be lingering on the compute cluster.
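As an illustration of the race being described, here is a minimal sketch with hypothetical stand-ins (SketchDriver, a sleep in place of _execute_with_retry), not the actual driver code:

import asyncio

# Minimal sketch of the race described above; SketchDriver and its attributes
# are hypothetical stand-ins. submit() only records the LSF job id after the
# bsub call (here a sleep) has finished, so a kill() issued in that window
# finds no job id and never runs bkill.
class SketchDriver:
    def __init__(self) -> None:
        self._iens2jobid: dict[int, str] = {}

    async def submit(self, iens: int) -> None:
        await asyncio.sleep(1.0)  # stands in for _execute_with_retry("bsub ...")
        self._iens2jobid[iens] = "12345"  # job id parsed from the bsub output

    async def kill(self, iens: int) -> None:
        job_id = self._iens2jobid.get(iens)
        if job_id is None:
            # Only a log line today; no bkill is issued, so the job submitted
            # by the still-running bsub will linger on the cluster.
            print(f"LSF kill failed, no job id for realization {iens}")
            return
        print(f"would run: bkill {job_id}")

async def main() -> None:
    driver = SketchDriver()
    submit_task = asyncio.create_task(driver.submit(0))
    await driver.kill(0)  # races the in-flight submit()
    await submit_task     # bsub "finishes" afterwards -> orphaned cluster job

asyncio.run(main())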

@jonathan-eq
Contributor Author

> I think this is an actual bug that this code (before and after the PR) hides. If we issue kill() right after submit(), but before _execute_with_retry() finishes, the job will be lingering on the compute cluster.

Okay, so as I understand it: we are potentially trying to kill a job before qsub has been called, before it has finished, or before we have processed the qsub output to get the job id.
Should we add a simple loop with a sleep between each attempt, or should we use the full-scale _execute_with_retry(...) for killing jobs?
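A rough sketch of what the "loop and sleep" option could look like; driver._iens2jobid and driver.run_bkill are assumed names for illustration, not the PR's actual API:

import asyncio
from typing import Any

# Hypothetical sketch of the "loop and sleep" option discussed above.
async def kill_with_polling(driver: Any, iens: int, retries: int = 10) -> None:
    for _ in range(retries):
        job_id = driver._iens2jobid.get(iens)
        if job_id is not None:
            await driver.run_bkill(job_id)  # assumed helper wrapping bkill
            return
        await asyncio.sleep(0.5)  # wait for the bsub output to be parsed
    raise RuntimeError(f"No LSF job id for realization {iens} after {retries} attempts")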

@@ -204,8 +208,8 @@ def __init__(
         self._bjobs_cmd = Path(bjobs_cmd or shutil.which("bjobs") or "bjobs")
         self._bkill_cmd = Path(bkill_cmd or shutil.which("bkill") or "bkill")

-        self._jobs: MutableMapping[str, JobData] = {}
-        self._iens2jobid: MutableMapping[int, str] = {}
+        self._iens2jobdata: MutableMapping[int, JobData] = {}
Contributor


Why change iens from str to int in this PR?

job_id: Optional[str] = None
job_state: Optional[AnyJob] = None
submitted_timestamp: Optional[float] = None
submitted_future: asyncio.Future[bool] = Field(default_factory=asyncio.Future)
Contributor


"submitted" is a little bit ambiguous: it could be after submit() is called, when bsub is called, or when bsub is finished, etc. Maybe submit_function_finished or something better?

Contributor


Think about how this will be for resubmissions.

If it interferes with that, maybe we need a lock mechanism instead, e.g. "submit_is_in_progress"?
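A hedged sketch of the direction these review comments point at: let kill() wait on submitted_future (or a per-realization lock) so bkill always has a job id to act on. Apart from the submitted_future field name taken from the diff above, everything here (JobDataSketch, the sleep standing in for bsub) is an assumption for illustration:

import asyncio
from dataclasses import dataclass, field
from typing import Optional

# Sketch only: JobDataSketch mirrors the submitted_future idea from the diff
# above; the other names and the sleep standing in for bsub are assumptions.
@dataclass
class JobDataSketch:
    job_id: Optional[str] = None
    submitted_future: asyncio.Future = field(default_factory=asyncio.Future)

async def submit(job: JobDataSketch) -> None:
    await asyncio.sleep(0.1)               # stands in for running bsub
    job.job_id = "12345"                   # parsed from the bsub output
    job.submitted_future.set_result(True)  # unblock any waiting kill()
    # NOTE: on resubmission the future must be reset (or a
    # "submit_is_in_progress" lock reacquired), which is the concern
    # raised about resubmissions above.

async def kill(job: JobDataSketch) -> None:
    await job.submitted_future             # wait until a job id exists
    print(f"would run: bkill {job.job_id}")

async def main() -> None:
    job = JobDataSketch()                  # created inside the running loop
    await asyncio.gather(submit(job), kill(job))

asyncio.run(main())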

@berland
Contributor

berland commented May 29, 2024

Closing this as the original issue has been reformulated and has gotten a scope increase.

@berland berland closed this May 29, 2024
Labels
maintenance (Not a bug now but could be one day, repaying technical debt)
release-notes:skip (If there should be no mention of this in release notes)
Development

Successfully merging this pull request may close these issues.

Killing LSF jobs while submission is in progress can leave the job running on the cluster.