
Acquire access to GPU cluster for testing #93

Closed
matthewfeickert opened this issue Mar 1, 2018 · 17 comments

matthewfeickert (Member) commented Mar 1, 2018

Access to GPU clusters is needed to perform benchmarks with GPU acceleration. While access to Amir Farbin's personal GPU cluster is available, it would also be good to have something with wider support. In the 2018-02-28 CERN IML meeting, Maxime Reis advertised that CERN's TechLab has GPU clusters available with support. We can follow up on this and see if we can use them for testing.

matthewfeickert self-assigned this Mar 1, 2018

matthewfeickert (Member Author) commented:

We should be able to request specific GPU architectures for the benchmarking through the CERN TechLab TWiki. So once we have basic GPU functionality and tests in place, we can book a week and do testing.

matthewfeickert added this to To do in pyhf development via automation Mar 1, 2018

matthewfeickert (Member Author) commented Mar 1, 2018

Maxime Reis has followed up with me regarding how much time we can get for benchmarking:

Most of the nodes with GPUs are shared, and for benchmarking this obviously won't do. Exclusive access can be arranged for short periods of time, and I'd say a day to a week should be manageable. More than that, we'll have to discuss, and it also depends on which GPU you'd like to benchmark.

So hopefully we can do some testing on other GPU machines and then do a full benchmarking run on the TechLab cluster.

kratsg (Contributor) commented Apr 16, 2018

@ivukotic might be able to help give us access to some GPU clusters?

matthewfeickert (Member Author) commented:

From the ATLAS Machine Learning Forum mailing list:

IBM has provided a small GPU cluster to CERN OpenLab for ML studies by the different experiments. They are planning to host a training workshop (one full day between May 28 and June 8, excluding June 7) to help people understand the cluster and how to use it. ATLAS is not the main customer here, but we can have a number of slots for ATLAS people.

One of the big benefits of IBM hardware is their NVLink, which provides much higher bandwidth between CPU/GPU and more critically GPU/GPU. Intel has recently improved CPU/GPU bandwidth, but not touched GPU/GPU. As such, IBM seems keen to demonstrate the potential of increased GPU/GPU bandwidth, which would require large-scale networks/etc which exploit multiple GPUs at once.

If you think you have now, or will soon have, an ML application with a large enough network to gain from efficient multi-GPU training, then this training workshop is probably of interest to you.

I will write up an application and submit us.

kratsg (Contributor) commented Apr 24, 2018 via email

matthewfeickert added the research experimental stuff label May 13, 2018

matthewfeickert (Member Author) commented:

I have confirmed with the SMU HPC Admins that I can use M2's (SMU's Tier3) GPUs for testing and development. So we'll have access to up to 36 nodes with NVIDIA GPUs. 👍

matthewfeickert moved this from To do to In progress in pyhf development Sep 28, 2018

matthewfeickert (Member Author) commented:

At the moment the environment at SMU that the HPC admins were able to set up only fully supports an optimized GPU build of TensorFlow. So I'll start there and then move to PyTorch.
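Before running anything at scale, it is worth a quick sanity check that the optimized TensorFlow build actually sees the cluster's GPUs. A minimal sketch: the `nvidia-smi` query flags and `tf.config.list_physical_devices` are standard (the latter is the TF 2.x API; older builds used `tf.test.is_gpu_available`), but the `parse_nvidia_smi` helper is illustrative and not from this thread:

```python
import subprocess

def parse_nvidia_smi(csv_text):
    """Parse `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader` output."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, memory = (field.strip() for field in line.split(",", 1))
        gpus.append({"name": name, "memory": memory})
    return gpus

if __name__ == "__main__":
    try:
        # What the driver reports:
        query = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        print(parse_nvidia_smi(query.stdout))
        # What TensorFlow itself can see (may be empty if CUDA libs are missing):
        import tensorflow as tf
        print(tf.config.list_physical_devices("GPU"))
    except (FileNotFoundError, ImportError, subprocess.CalledProcessError) as err:
        print(f"GPU check not possible on this machine: {err}")
```

If the driver lists a card but TensorFlow's device list is empty, the problem is usually the CUDA/cuDNN libraries in the environment rather than the hardware.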

matthewfeickert (Member Author) commented:

I'm getting access to NCSA's Hardware-Accelerated Learning (HAL) cluster, which should be a perfect environment to do hardware acceleration studies at scale (and probably make the BlueWaters team happier than having me mess around there). Thanks to @msneubauer for setting this in motion.

matthewfeickert (Member Author) commented Jan 20, 2020

2020 update: There are two GPU enabled machines that I can use for testing at the moment:

  • My laptop (NVIDIA GeForce GTX 1650 Max-Q 4GB)
  • The Neubauer Group firmware and deep learning machine (@markusatkinson is the effective sys admin for this) (NVIDIA GeForce RTX 2080 Ti 11GB — memory can be expanded)

For dev work I will be using the GPUs on my laptop, but I will use our dedicated machine for all benchmarks.
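One way to keep the dev-versus-benchmark split clean is to pin which card a framework can see before it initializes, via the standard `CUDA_VISIBLE_DEVICES` environment variable. A small sketch under that assumption; the `pin_gpu` helper name is my own, not something from this thread:

```python
import os

def pin_gpu(index):
    """Make only one CUDA device visible.

    Must be called before TensorFlow/PyTorch initialize CUDA, since they
    read CUDA_VISIBLE_DEVICES once at startup.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    return os.environ["CUDA_VISIBLE_DEVICES"]

# e.g. dev runs use the laptop's single GTX 1650 (device 0); on the
# benchmark machine, pinning the RTX 2080 Ti explicitly keeps timings
# from being contaminated by other jobs on other cards.
pin_gpu(0)
```

Setting the variable in the job script rather than in code works equally well; the point is that benchmark runs should be explicit about which device they measured.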

kratsg (Contributor) commented Jan 24, 2020

Can we talk with the UChicago folks (/cc @fizisist, @LincolnBryant, @robrwg, @ivukotic) as well about perhaps getting access to some machines for CI purposes? Or will the Neubauer group allow the DL machine to be used for that?

ivukotic commented Jan 24, 2020 via email

matthewfeickert (Member Author) commented Jan 24, 2020

Or will the Neubauer group allow the DL machine to be used for that?

I think that the DL machine we have is a great candidate for dedicated benchmarking studies, but I'm not sure we can guarantee that the GPUs in it can be reserved for CI. The primary purpose of this machine is firmware development and testing with FPGAs, followed by deep learning studies with the GPUs, and those uses get first priority.

you can create a private JupyterLab instance with a GPU attached to it.

@ivukotic So do I understand you correctly that we can have that GPU indefinitely for hardware acceleration tests with our CI? If so, that's fantastic. I just wasn't aware that this was an option.

ivukotic commented Jan 24, 2020 via email

matthewfeickert (Member Author) commented Jan 24, 2020

You can’t get it indefinitely. But you can do reasonable scale studies.

Right, okay, this makes more sense. :) @kratsg's question was about CI, but this is still good, as it will give us multiple sites to do hardware acceleration tests. Since the public view of the ATLAS ML Platform doesn't say, can you give us information on the GPUs you have available, so that we can include that in the studies?

ivukotic commented Jan 24, 2020 via email

fizisist commented Jan 25, 2020 via email

matthewfeickert (Member Author) commented:

Closing, as this has been solved by the local machines that the pyhf dev team has access to (in addition to the ATLAS ML Platform).

pyhf development automation moved this from In progress to Done Jul 11, 2020