About from_pandas #56
and from_pandas very slow
Hi @ziyuwzf, thanks for your question. We have recently included the option to run the NOTEARS algorithm with PyTorch in the develop branch, and this will be included in the upcoming release. To use this feature,
Hope this solves your problem. Please let me know if you still run into the same issue. Thanks. 🙂
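As a rough sketch of what the PyTorch-backed call might look like: the import path is the one quoted later in this thread, the DataFrame is synthetic and purely illustrative, and the call is guarded because it requires causalnex installed from the develop branch with the '[pytorch]' extras.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in data; the real use case in this thread is ~370k rows x 30 columns.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((500, 5)), columns=list("abcde"))

try:
    # PyTorch-backed NOTEARS, available on the develop branch per this thread.
    from causalnex.structure.pytorch.notears import from_pandas
    sm = from_pandas(df)  # learns a structure model from the data
except ImportError:
    # causalnex with the [pytorch] extras is not installed in this environment.
    sm = None
```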
Thanks! I will try it now.
I cannot use the git command, so I downloaded causalnex-develop.zip, unzipped it, and then cd'd into it.
Hi @ziyuwzf, what Python version are you using? Do
It works!
The cause of the error was trivial: I had set pip install '.[pytorch]' -i XX.XX.XX.XX. It works, thanks!
OK, sure, no worries! 🙂
I have already used 'from causalnex.structure.pytorch.notears import from_pandas'.
@ziyuwzf Thanks for sharing this. We will take a look into it. Just a quick question: do you have a GPU? We have an internal WIP branch which provides the option to use a GPU for
@ziyuwzf Thanks for your question. Do I understand correctly that you have 370k rows? How many variables/features do you have? You can often speed it up massively by using a larger
NOTE: On my MacBook (4 cores, 8 threads) I ran the PyTorch implementation with up to 1000 features and 1000 rows. The method should scale linearly with n_obs and cubically with the number of features (due to the gradients and the constraint).
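To make the scaling claim above concrete, here is a back-of-envelope cost model. It is purely illustrative and simply assumes cost grows as n_obs times the cube of the number of features, as described in the comment:

```python
def relative_cost(n_obs, n_features, base_obs=1000, base_features=1000):
    """Cost relative to a base run, under the assumed
    cost ~ n_obs * n_features**3 scaling described above."""
    return (n_obs / base_obs) * (n_features / base_features) ** 3

# The questioner's 370k x 30 dataset: many rows, but the cubic feature
# term dominates, so per iteration it should be far cheaper than a
# 1000-feature x 1000-row run.
cost = relative_cost(370_000, 30)  # ~0.01 of the 1000x1000 baseline
```

Under this model, 370k rows with only 30 features is roughly 1% of the cost of the 1000x1000 benchmark run mentioned above.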
I have GPUs. When can I use the GPU via your internal WIP branch that provides the GPU option?
Yes, 370k rows and 30 columns.
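Since the method reportedly scales linearly with the number of rows, one option (my suggestion here, not quoted from the maintainers) is to fit on a random subsample of the 370k rows first; the 50k subsample size below is arbitrary:

```python
import numpy as np
import pandas as pd

# Stand-in for the real 370k x 30 dataset.
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.standard_normal((370_000, 30)))

# Linear scaling in n_obs means fitting on 50k rows is ~7x less work
# than fitting on all 370k rows.
sub = df.sample(n=50_000, random_state=42)
```

A structure learned on the subsample can then be sanity-checked against a second, disjoint subsample before committing to a full run.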
Hi @ziyuwzf, I have moved the WIP branch here: #57. Please go to the
Example usage:
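The example snippet itself did not survive extraction; judging from the call quoted in the next comment, it presumably used a use_gpu flag. A hypothetical reconstruction, guarded because it needs the #57 WIP branch installed and a CUDA-capable GPU:

```python
import numpy as np
import pandas as pd

# Illustrative dataset; column names are arbitrary.
rng = np.random.default_rng(1)
dataset = pd.DataFrame(rng.standard_normal((200, 4)), columns=list("wxyz"))

try:
    from causalnex.structure.pytorch.notears import from_pandas
    # use_gpu is the flag quoted in the next comment; it exists only
    # on the WIP branch discussed in this thread.
    sm = from_pandas(dataset, use_gpu=True)
except (ImportError, TypeError, RuntimeError):
    # WIP branch not installed, flag not available, or no GPU present.
    sm = None
```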
I used 'from_pandas(dataset, use_gpu=True)', but it shows an error:
Is the GPU acceleration in the released codebase, or is it only on the develop branch?
The GPU acceleration has been implemented in this commit and will be made available in the next release. |
Description
When I use 'from_pandas' to learn a causal graph with NOTEARS, I run 'watch -n 1 free -m' and it shows 3/16 GB used. I am running 370 thousand rows of data but only 3 GB of memory is used. How can I improve efficiency?
Context
Every 1.0s: free -m                          Tue Jun 30 16:18:36 2020

              total    used    free   shared  buff/cache  available
Mem:          16384    2799   12213        0        1371      13584
Swap:             0       0       0