Update SSB towards a release #31
Conversation
* Light refactoring to simplify
* Keep accelerating voltage out of the SSB implementation, only work with wavelength
* Docstrings
Interestingly, working with more depth and relatively small tiles was a bit faster than using GB-sized tiles. Smaller tiles also help to avoid running out of GPU RAM.
This brings a 2x speed-up for the CPU version when I tested it.
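The trade-off described above, preferring deeper stacks of small tiles over GB-sized ones, can be sketched as a simple budget calculation. The helper below is hypothetical and not part of the SSB code; it assumes the tile depth is capped so that one tile fits into a fixed memory budget:

```python
def choose_tile_depth(sig_shape, dtype_bytes, budget_bytes, max_depth=512):
    """Pick a tile depth (frames stacked per tile) so that
    depth * frame_bytes stays within budget_bytes.

    Hypothetical sketch, not the actual SSB/LiberTEM tiling negotiation.
    """
    # Bytes for a single frame of the signal dimensions:
    frame_bytes = dtype_bytes
    for dim in sig_shape:
        frame_bytes *= dim
    # At least one frame per tile, at most max_depth, within the budget:
    return max(1, min(max_depth, budget_bytes // frame_bytes))


# A 128x128 float32 frame is 64 KiB; a 32 MiB budget allows 512 frames:
print(choose_tile_depth((128, 128), 4, 32 * 1024**2))  # → 512
```

With a small budget the depth shrinks accordingly, which is the mechanism that keeps tiles out of the GB range.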
This would require a bit more documentation and unit tests to make it ready for release. Originally uploaded by @heidemeissner in commit b059ac4
Also remove a document with questions. One can open an Issue if this is still relevant.
This already looks quite good. The
Sounds sensible to me! I'll do that. :-)
Thx @sk1p for the suggestion!
Removed a lot of "lorem ipsum", tried to find a structure that makes sense for the release of SSB.
@sk1p The docs build clean on my system now. This PR should now be in a state where establishing a CI pipeline and homepage makes sense. How should we go about that? :-)
I'm wondering if we should run the CI pipeline on all Python versions and all OSes. So far, we only depend on LiberTEM, NumPy, Numba, CuPy, SciPy, and sparse directly. CuPy and LiberTEM would be the most likely source of system-dependent trouble in my experience. We can't test CuPy in CI anyway, and LiberTEM is already tested against different systems in its own CI pipeline. Since we do nothing out of the ordinary with LiberTEM for now, I'm not sure what the probability would be to hit any system-dependent issues here. The only argument would be that we run the latest release of LiberTEM here, which means we'd catch regressions from upstream changes which may not affect the current LiberTEM master, like the one that prompted us to release version 0.5.1. Since the CI pipeline here is still quite short, perhaps it makes sense to run it against all versions and OSes just to catch regressions affecting the latest release of LiberTEM, until we have a better solution to cover that?
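Such a full matrix could be expressed as a pipeline configuration. A minimal sketch in Azure Pipelines syntax, since that is what the project uses; the job names, image names, and Python versions below are illustrative, not taken from the repository:

```yaml
# Hypothetical matrix over OSes and Python versions
strategy:
  matrix:
    linux_py37:
      imageName: 'ubuntu-latest'
      python.version: '3.7'
    windows_py38:
      imageName: 'windows-latest'
      python.version: '3.8'
    mac_py38:
      imageName: 'macOS-latest'
      python.version: '3.8'

pool:
  vmImage: $(imageName)

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
  - script: python -m pip install -e '.[test]' && python -m pytest tests/
    displayName: 'Run tests'
```

Since the test suite here is short, the cost of running every matrix entry stays low.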
Should be reviewed carefully before release
Sounds good to me.
In the "data tests", we should be able to use CUDA/
* Purge unneeded items from test setup
* Align with current LiberTEM
* Add an example so that `--doctest-modules` is happy
Good point! I've purged a lot of LiberTEM-specific items from the test setup.
Good point! The UDF was updated in this PR to use little GPU RAM and not more than is available. 👍
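Capping a UDF's memory use can be as simple as budgeting a fraction of the free GPU RAM. A minimal sketch, assuming CuPy's `Device.mem_info` is available to query free memory; the function name, fraction, and fallback are illustrative, not the actual implementation in this PR:

```python
def gpu_tile_budget(fraction=0.8, fallback_bytes=512 * 1024**2):
    """Return a byte budget for tile buffers: a fraction of the free GPU
    RAM if CuPy can query it, otherwise a conservative fixed fallback.

    Hypothetical helper, not part of the SSB code.
    """
    try:
        import cupy
        # mem_info is (free, total) in bytes for the current device:
        free_bytes, _total = cupy.cuda.Device().mem_info
    except Exception:
        # No GPU / no CuPy installed: use the fixed fallback budget.
        free_bytes = fallback_bytes
    return int(free_bytes * fraction)
```

Keeping the fraction below 1.0 leaves headroom for FFT plans and other allocations outside the tile buffers.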
I'm not sure if I understand correctly. Do you mean that we need a privately hosted GPU runner for the Azure CI, like we do on GitLab? And do you really mean Azure, or GitHub Actions?
That was the idea. We can run one of those with just a single Turing card, but maybe in the long run it would be possible to test on more hardware.
Right now we are using Azure Pipelines and have a working configuration for that, if only for the reason that GitHub Actions weren't publicly available when we started the switch on our main LiberTEM repository. If you prefer GitHub Actions, we could do a quick hacking session and port our CI over to Actions, we just didn't have a reason for that yet.
At the moment, we also have a single Turing card. If I'm not mistaken, we should get some A100s for the CI this year.
I need to talk with our CI admin about what is possible. Besides the lack of resources, I see two problems at the moment, but the A100s should come ;-)
I think a realistic solution will be to put as much of the CI as possible into generic (bash) scripts and have only a few CI-specific lines of code. This would allow us to use different CI systems in parallel. This is what we have planned and partially implemented for the alpaka project. There we want to combine the free GitHub Actions runners with our GPU runners.
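The split described above can be sketched as a thin driver script that every CI system invokes with a single line. The file name, steps, and `step` helper below are all hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical ci/run_tests.sh: Azure, GitHub Actions, or GitLab each
# only need one line to call this, keeping the pipeline YAML minimal.
set -euo pipefail

step() {
    # Print a labelled header, then run the given command.
    echo "[ci] $*"
    "$@"
}

run_pipeline() {
    step python3 -m pip install -e '.[test]'
    step python3 -m pytest tests/
}

# Run the pipeline only when requested, so the functions can be reused:
if [ "${CI_RUN:-0}" = "1" ]; then
    run_pipeline
fi
```

Each CI system would then set `CI_RUN=1` and call the script, and GPU-specific runners could export extra environment variables without touching the shared logic.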
Closes #30
Contributor Checklist: