CPU low load #55
Your CPUs should definitely be pegged at 100% using this configuration. As you know, setting `n_cores_batch` to -1 uses all available cores:

```python
if n_cores_batch == -1:
    n_cores_batch = cpu_count()
if n_cores_batch > 1:
    pool = Pool(n_cores_batch, initializer=set_task, initargs=(self.config_task,))
```

I'm wondering if there could be some OS/machine-specific issue with `multiprocessing`. What does the following print on your machine?

```python
import multiprocessing
print(multiprocessing.cpu_count())
```

Also, how are you running this config? If it's via the command line, is it the simple …? One other thing to try that might help narrow things down is whether the problem persists when removing the ….
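As a quick, standalone sanity check (this snippet is illustrative and not part of DSO; `burn` is a made-up helper), you can verify that a plain `multiprocessing.Pool` can keep all of your cores busy with a CPU-bound task:

```python
import time
from multiprocessing import Pool, cpu_count

def burn(n):
    # Pure-Python busy loop: keeps one core near 100% while it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    n_cores = cpu_count()
    print(f"cpu_count() reports {n_cores} cores")
    start = time.time()
    with Pool(n_cores) as pool:
        # One CPU-bound task per core; watch your system monitor while this runs.
        results = pool.map(burn, [2_000_000] * n_cores)
    print(f"{len(results)} tasks finished in {time.time() - start:.2f}s")
```

If the cores do not peg to 100% here either, the issue is with the machine or environment rather than with DSO.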
@IgorK7 FYI, we just updated the repo to a new release. It involved a lot of refactoring, so apologies if it breaks any existing configs; you may have to shuffle some things around in your configs to follow the new template. Not sure whether any of the edits will address the two issues you have raised.
Hi Brenden, thank you very much for getting back to me. I am using your package in a research project where I investigate the properties of symbolic regression, and DSR in particular, in a setup with very high noise (true R2 at 5%), so all calculations require a lot of compute capacity and time. I can confirm that with …. By the way, running the program on WSL should be a faster alternative to Docker if one needs to run it on a Windows PC. I am still not sure how to make it use the CPUs at full capacity. Thank you very much!
I will also test the new version of the package. Thank you!
Regarding the GPU: I was able to make it utilize the GPU, but it still does not load it to full capacity. The behavior is the same regardless of whether I use a small dataset (10,000 by 3) or a large one (10,000,000 by 9).
GPU is not going to help. The GPU is used for the neural network (which is on the TensorFlow compute graph) but not for computing the MSE (which is done off the compute graph). Since the DSO LSTM is a very small network, the GPU just doesn't help. The bottleneck is the CPU (e.g., computing the MSE on the dataset for the …).
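To see why this work stays on the CPU, here is a sketch of an inv_nmse-style metric in plain NumPy; it involves no TensorFlow ops at all, so it cannot run on the GPU. The formula 1 / (1 + NMSE) is my reading of the metric and may differ in detail from the repo's implementation:

```python
import numpy as np

def inv_nmse(y_true, y_pred):
    # NMSE = MSE / Var(y); inv_nmse squashes it into (0, 1], higher is better.
    nmse = np.mean((y_true - y_pred) ** 2) / np.var(y_true)
    return 1.0 / (1.0 + nmse)

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
perfect = inv_nmse(y, y)                          # exact predictions
noisy = inv_nmse(y, y + rng.normal(size=10_000))  # predictions plus unit noise
print(perfect, noisy)
```

Every candidate expression sampled in a batch gets scored this way against the full dataset, which is why the reward step is CPU-bound NumPy work rather than GPU work.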
Hi,
I am not sure whether this is a bug or a feature of this package.
I've noticed that the CPU cores are loaded to no more than 6% at most. I provide all the cores available on the system and it uses all of them, but the load is very low. This happens both on my local Intel Mac and on GCP (see the screenshot below).
Here is the config.json file (everything else is default). The data are random, with 10,000 observations and 2 predictors:
```json
{
  "experiment": {
    "logdir": null
  },
  "task": {
    "task_type": "regression",
    "metric": "inv_nmse",
    "metric_params": [0],
    "function_set": ["add", "sub", "mul", "div", "exp", "log", "sqrt", "const"]
  },
  "training": {
    "n_cores_batch": -1
  },
  "prior": {
    "length": {
      "min_": 3,
      "max_": 15,
      "on": true
    }
  }
}
```
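One thing worth noting: if the config is loaded with Python's standard `json` module (an assumption on my part), Python literals are rejected — `None` must be `null`, `True` must be `true`, a tuple such as `(0,)` must be the array `[0]`, and `#` comments are not allowed. A generic way to lint a config before a run (not a DSO utility):

```python
import json

# json.loads rejects Python-only syntax, so it doubles as a config linter:
# an invalid file raises json.JSONDecodeError instead of failing mid-run.
config_text = """
{
  "training": {"n_cores_batch": -1},
  "prior": {"length": {"min_": 3, "max_": 15, "on": true}}
}
"""
config = json.loads(config_text)
print(config["training"]["n_cores_batch"])
```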