
Add more supported partition #115

Closed
shenyangshi opened this issue Jun 20, 2024 · 4 comments
Assignees: yuema137
Labels: enhancement (New feature or request)

@shenyangshi

Do we want to add more supported partitions, especially the GPU ones? @yuema137

utilix/utilix/batchq.py

Lines 27 to 44 in b9d19fa

PARTITIONS: List[str] = [
"dali",
"lgrandi",
"xenon1t",
"broadwl",
"kicp",
"caslake",
"build",
]
TMPDIR: Dict[str, str] = {
"dali": f"/dali/lgrandi/{USER}/tmp",
"lgrandi": os.path.join(SCRATCH_DIR, "tmp"),
"xenon1t": os.path.join(SCRATCH_DIR, "tmp"),
"broadwl": os.path.join(SCRATCH_DIR, "tmp"),
"kicp": os.path.join(SCRATCH_DIR, "tmp"),
"caslake": os.path.join(SCRATCH_DIR, "tmp"),
"build": os.path.join(SCRATCH_DIR, "tmp"),
}
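
A minimal sketch of what such an extension might look like, purely as an illustration: the partition names (gpu2, bigmem2) come from the discussion below, and the assumption that they can reuse the same scratch-based tmp directory pattern as the existing entries is mine, not taken from the actual change.

# Hypothetical sketch, not the actual utilix change: append the new
# partitions and give each one a tmp directory, assuming they reuse the
# same SCRATCH_DIR pattern as the existing non-dali entries above.
for _partition in ("gpu2", "bigmem2"):  # names from the discussion below
    PARTITIONS.append(_partition)
    TMPDIR[_partition] = os.path.join(SCRATCH_DIR, "tmp")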


shenyangshi added the enhancement (New feature or request) label on Jun 20, 2024
@yuema137 (Contributor)

@shenyangshi Thanks for the suggestion! Let me check whether those partitions are available to us.

yuema137 self-assigned this on Jun 20, 2024
@yuema137 (Contributor) commented on Jun 21, 2024

Hi @shenyangshi, I looked into this and ran some tests. Here is what I found:
Currently, the available partitions on midway2/dali are listed below (the dali partition cannot be used from midway2, since no I/O is allowed between the two systems):

[yuem@midway2-login2 sr1_v15_data]$ accounts partitions
+------------+
| Partitions |
+------------+
| bigmem2    |
| broadwl    |
| broadwl-lc |
| build      |
| dali       |
| gpu2       |
| kicp       |
| viz        |
| xenon1t    |
+------------+

So the currently unsupported ones are bigmem2, broadwl-lc, gpu2, and viz:

  • In env_starter, if you add the --gpu argument when executing the script, it automatically routes you to gpu2. I think we should implement the same behavior here (see the sketch after this list).
  • viz only has one working node with a 20 GB GPU, which is quite weak. Given that gpu2 provides far more resources, I will skip this one.
  • bigmem2 provides good CPU resources; we should add it as well.
  • broadwl-lc also works, but as it is loosely coupled and therefore has no access to /project and /project2, I would hold off on it unless we are really short of resources.
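
A rough sketch of that routing idea, assuming a hypothetical helper; the function name and signature are illustrative and not the actual utilix API.

# Hypothetical helper, not the actual utilix implementation: pick gpu2
# whenever a GPU is requested, mirroring the env_starter behavior above.
def choose_partition(partition: str, gpu: bool = False) -> str:
    if gpu:
        return "gpu2"  # gpu2 has far more GPU resources than viz
    return partition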

As gpu2 and bigmem2 both have some broken nodes, I will spend a bit more time on checking and excluding them. Once that is implemented I will let you know. Thanks again for the suggestion! :)
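
One way such an exclusion could be wired up is via SLURM's standard --exclude option; the structure below is a sketch only, and the node lists are left empty as placeholders since the actual broken nodes are not named in this thread.

# Hypothetical sketch: keep a per-partition list of known-bad nodes and turn
# it into an sbatch --exclude option. The lists are placeholders only.
BROKEN_NODES = {
    "gpu2": [],     # to be filled once the broken nodes are identified
    "bigmem2": [],  # to be filled once the broken nodes are identified
}

def exclude_option(partition: str) -> str:
    nodes = BROKEN_NODES.get(partition, [])
    return f"--exclude={','.join(nodes)}" if nodes else ""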

@shenyangshi (Author)

Thanks a lot, Yue, for the detailed response; looking forward to the implementation :)

@yuema137 (Contributor)

Implemented in #116
