
basic question. How run it on windows ? #52

Closed
yw3l opened this issue Oct 25, 2023 · 8 comments

yw3l commented Oct 25, 2023

I'm using a PyCharm virtual environment and can't find the .env file. I installed requirements.txt and ran test.py, and the program showed me the error KeyError: 'OPENAI_API_KEY'. Can you give me some guidance?
Sincerely yours!

@HamedBabaei HamedBabaei self-assigned this Oct 25, 2023
HamedBabaei (Owner) commented:

Dear @yw3l ,

Thanks for the comment.

To run the OpenAI models, you need to rename the .env-example file in the repo to .env and put your OPENAI_API_KEY there. If you don't have a key, simply leave OPENAI_API_KEY as it is in the .env file and you should still be able to run the scripts.
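For reference, this is roughly what loading a .env file into the environment looks like. This is a minimal hand-rolled sketch assuming plain KEY=VALUE lines; the repo itself may use the python-dotenv package, which does the same thing:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault: a key already exported in the shell wins over the file
            os.environ.setdefault(key.strip(), value.strip())
```

If the .env file is missing or the key line is absent, `os.environ["OPENAI_API_KEY"]` later raises exactly the KeyError you saw.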

I hope this helps. If you need further assistance, just let me know, and if this works for you, please let me know as well so I can close this issue!

yw3l commented Oct 26, 2023

Thank you, thank you. I have another question about running the program: when I use the command `python test.py --kb_name wn18rr --model_name bert_large --template template_1 --device cpu`, it tells me it cannot find the wn18rr_entities.json file. Your help is invaluable to me. Sincerely.

HamedBabaei (Owner) commented:

Dear @yw3l

Correct, you would need to follow the data construction steps to generate the wn18rr_entities.json file using a preprocessing script available in this repository. But to ease your experimentation, please find your requested dataset here. Add this file to the datasets/TaskA/WN18RR directory and your script should work.
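As a quick sanity check before re-running test.py, something like the following confirms the file landed where the scripts expect it. The relative path is taken from the comment above; the helper itself is hypothetical and not part of the repo:

```python
from pathlib import Path

def check_dataset(repo_root: str = ".") -> Path:
    """Raise early if wn18rr_entities.json is not in the expected location."""
    path = Path(repo_root) / "datasets" / "TaskA" / "WN18RR" / "wn18rr_entities.json"
    if not path.is_file():
        raise FileNotFoundError(
            f"expected dataset at {path}; download it or run the preprocessing first"
        )
    return path
```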

yw3l commented Oct 27, 2023

Dear @HamedBabaei,
That's very kind of you! Now it's giving me the error message `huggingface_hub.utils._validators.HFValidationError: repo id must be of the form 'repo_name' or 'namespace/repo_name': '.../assets/LLMs/bert-large-uncased'. Please use the repo_type parameter if needed.` Is that why the model cannot be found? How do I solve it? And how do I generate a file like 'wn18rr_entities.json'? Is it with entity_dataset_builder.py? I really appreciate it!

@yw3l yw3l closed this as completed Oct 27, 2023
@yw3l yw3l reopened this Oct 27, 2023
HamedBabaei (Owner) commented:

Dear @yw3l

Thanks for bringing these to my attention.

For your first question: I was using GPU servers to run the experiments, and it is not ideal there to download models directly from huggingface_hub inside the code, so I downloaded the models myself and uploaded them to the server, specifically into the /assets/LLMs/ directory of the project. For your case I recommend doing the same if you are using servers. If you think this might not work for you, then go to config.py (you might need to do this separately for each task) and change the following code:

self.parser.add_argument("--model_path", type=str, default=f"{self.llms_root_dir}/bert-large-uncased")

to

self.parser.add_argument("--model_path", type=str, default="bert-large-uncased")

You might need to do the same for the other models as well, by using the corresponding Hugging Face repo id as the default for the --model_path variables.
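A minimal sketch of the argparse pattern being described; the `llms_root_dir` attribute and the surrounding config class are assumptions based on the snippet above, not the repo's exact code:

```python
import argparse

class BaseConfig:
    """Sketch of a config class with a switchable --model_path default."""

    def __init__(self, use_local_models: bool = False):
        self.llms_root_dir = "/assets/LLMs"  # server dir with pre-downloaded models
        self.parser = argparse.ArgumentParser()
        if use_local_models:
            # Local directory: transformers loads from disk, no network needed.
            default = f"{self.llms_root_dir}/bert-large-uncased"
        else:
            # Plain Hub repo id: transformers downloads it from huggingface.co.
            default = "bert-large-uncased"
        self.parser.add_argument("--model_path", type=str, default=default)

    def get_args(self):
        return self.parser.parse_args([])  # empty argv, for illustration
```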

Regarding your second question, on how to generate a file like wn18rr_entities.json: yes, you would need to go through the build_entity_datasets.py script. I will add documenting this script (and the other scripts) to my to-do list.

thanks again for the hints!

yw3l commented Oct 29, 2023

Dear @HamedBabaei

thanks for your help.

I modified
self.parser.add_argument("--model_path", type=str, default=f"{self.llms_root_dir}/bert-large-uncased")
to
self.parser.add_argument("--model_path", type=str, default="bert-large-uncased")
as you mentioned, then ran the command
python test.py --kb_name wn18rr --model_name bert_large --template template-1 --device cpu
and got
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-large-uncased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000020AA85992A0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: a4b69939-174a-4838-aa23-a63e088764c7)')
Does this happen for you too? I wonder if it's because of my region, equipment, etc.
Thank you again!!!

HamedBabaei commented Oct 30, 2023

Dear @yw3l

I think this arises because of a connection timeout to huggingface.co. I don't know how you can solve it; maybe downloading the models to a local directory and providing that path to --model_path would solve the issue. Otherwise, I recommend contacting the Hugging Face people by creating an issue at https://github.com/huggingface/transformers with an explanation of your problem.
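To sketch the local-download workaround: fetch the model once on a machine with access, then point --model_path at the directory. The helper below is hypothetical (not part of the repo); it just illustrates the distinction that matters here, since transformers' from_pretrained() only goes online when the string it receives is not an existing directory on disk:

```python
from pathlib import Path

def resolve_model_path(model_path: str):
    """Return (path_or_repo_id, is_local).

    An existing directory is loaded from disk with no network access;
    anything else is treated as a Hub repo id, which requires reaching
    huggingface.co and can hit the ConnectTimeout seen above.
    """
    is_local = Path(model_path).is_dir()
    return model_path, is_local

# Example usage (path is illustrative):
#   path, local = resolve_model_path("./assets/LLMs/bert-large-uncased")
#   if not local:
#       print("will hit huggingface.co; pre-download the model to avoid timeouts")
```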

HamedBabaei (Owner) commented:

@yw3l I think your issue has been addressed, so I will close it. Feel free to reopen if the problem persists.
