Installation Issues, Hardware Requirements? #1
Comments
ISSUE #3 - Pre-trained Model: You have to save the model in the following format:

Can we use LeafNATS if I don't have a GPU? I get this error:
File "C:\Users\deranan1\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\cuda\__init__.py", line 75, in _check_driver
Hi Matrix7689, ISSUE #1: I haven't tested how much RAM is needed to run this code, but I do run it on my Mac Pro without a GPU. The code is managed and tested in an Ubuntu environment. To test the pre-trained model, we used spaCy as the tokenizer. ISSUE #2: Yes, use the pywrapper version. And no, you don't have to install all the datasets to run the model. Here is the link to our pre-processed CNN/DM and Bytecup datasets.
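Since the reply above says spaCy was used as the tokenizer, here is a minimal sketch of tokenizing a sentence that way. It uses a blank spaCy English pipeline, which needs no pre-trained model download; the example sentence is made up.

```python
# Sketch: tokenizing with spaCy, as mentioned above.
# spacy.blank("en") builds a tokenizer-only pipeline, so no
# pre-trained spaCy model download is required.
import spacy

nlp = spacy.blank("en")
doc = nlp("LeafNATS summarizes news articles.")
tokens = [token.text for token in doc]
print(tokens)
```

The trailing period comes out as its own token, which is the usual behavior you'd want before feeding text to a summarization model.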
ISSUE #3: Please also check this link.
Can we use LeafNATS if I don't have a GPU? Yes, change the device to cpu.
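A minimal sketch of what "change the device to cpu" looks like in PyTorch. The checkpoint name is illustrative, and the toy state dict stands in for a real pre-trained model file.

```python
# Sketch: opening a checkpoint on a CPU-only machine with PyTorch.
# map_location remaps any CUDA-saved tensors onto the CPU, so the
# driver check in torch/cuda/__init__.py is never triggered.
import torch

state = {"weight": torch.zeros(3)}   # toy stand-in for a real checkpoint
torch.save(state, "model.pt")        # "model.pt" is an illustrative name
restored = torch.load("model.pt", map_location=torch.device("cpu"))
```

The same `map_location` argument works when loading a checkpoint that was saved on a GPU machine.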
Hi Tshi Yes it ran on the CPU but with the code --> |
Hello,
I am working on Abstractive Text Summarization and I am facing multiple issues such as:
ISSUE #1 - RAM
I was trying to set it up on an AWS t2.micro, but spaCy couldn't install due to low RAM (1 GB). After upgrading to 2 GB of RAM, spaCy installed successfully.
Later, I tried to run the Stanford CoreNLP server to test it. The server did start, but it couldn't process any sentences and crashed with this error:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000075072e000, 661209088, 0) failed; error='Not enough space' (errno=12)
which I am guessing is a RAM error...
So, how much RAM is required to run this model?
What OS and hardware specifications (graphics card, etc.) would you recommend?
ISSUE #2 - Dataset
When I was setting up the "Bytecup2018" dataset, on the following command,
python3 tokenize.py --input bytecup.corpus.train.1.txt --output new1.txt
I got this error:
ImportError: cannot import name 'StanfordCoreNLP'
which means that StanfordCoreNLP wasn't installed properly. Do we have to use the pywrapper version of StanfordCoreNLP? The link you have given only contains Java files. How should I go about it?
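One quick way to debug this ImportError is to probe which CoreNLP Python wrapper, if any, is installed. This sketch only inspects the environment; `stanfordcorenlp` and `pycorenlp` are commonly used pip wrapper packages, and the repo's tokenize.py may expect one specific import, so check its import line against what is actually installed.

```python
# Sketch: probe which CoreNLP Python wrapper (if any) is importable.
# find_spec returns None when a package is not installed, without
# actually importing it (so a broken install won't crash this check).
import importlib.util

for name in ("stanfordcorenlp", "pycorenlp"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'missing'}")
```

If the package that tokenize.py imports shows up as "missing", installing it with pip (rather than only downloading the Java distribution) should resolve the ImportError.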
In order to run the model, do I have to install all three datasets: CNN/DailyMail, Newsroom, and Bytecup?
ISSUE #3 - Pre-trained Model
Where do I place the pre-trained model?