provide data files? #4

Closed · Slash0BZ opened this issue Oct 20, 2019 · 5 comments

@Slash0BZ commented Oct 20, 2019

Hi,

Is it possible for you to provide some example data files? If I run the backend, it raises errors such as: 'exbert/server/data/woz/embeddings/combined.hdf5', errno = 2, error message = 'No such file or directory'. I tried to use the data tools to create those embeddings, but I encountered multiple errors during that process as well.

Thank you.

@bhoov (Owner) commented Oct 21, 2019

We are working on resolving this in a more universal way shortly. In the meantime, check out the files and directions here.

Note that the files containing all the attentions and hidden representations for an entire corpus can be very large. Make sure you have at least 10 GB of free space before running this extraction on the Wizard of Oz .txt.
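If it helps, here is a small pre-flight check for that (not part of exbert; the path is the woz data directory from the error in the first comment, so adjust it to wherever your corpus lives):

```python
# Pre-flight disk-space check (not part of exbert) before extracting
# embeddings/attentions for a full corpus. The path below assumes you run
# from the directory containing the repo; adjust as needed.
import shutil

free_gb = shutil.disk_usage("exbert/server/data/woz").free / 1e9
print(f"{free_gb:.1f} GB free")
assert free_gb >= 10, "extracting a full corpus needs roughly 10 GB of free space"
```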

@Slash0BZ (Author)

I see, thanks for the explanation. There seems to be some issue with tokenization (I didn't read the code in depth) when I tried to generate the attention files. I guess I will wait for a more universal solution.

At this point, is it possible to run the demo on single sentences without caching a large corpus? In theory that's doable, right?

@bhoov (Owner) commented Oct 21, 2019

The most common issue with tokenization is an environment one: make sure you run `python -m spacy download en_core_web_sm` after you set up your conda environment. If this does not solve your problem, please paste the error here and we can work on getting it fixed.
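A quick sanity check that the model is actually visible to your environment (plain spaCy API, nothing exbert-specific):

```python
# Raises OSError if `python -m spacy download en_core_web_sm` was skipped
# or was run in a different (conda) environment than the one serving exbert.
import spacy

nlp = spacy.load("en_core_web_sm")
print([t.text for t in nlp("Sanity check.")])  # ['Sanity', 'check', '.']
```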

As long as there are no "file not found" errors for the hdf5 and faiss files when the server starts, you should be able to run the demo and play with the attention graph. Without those files, however, you will not be able to search across the inner representations.

@felicitywang

Hi, thanks for the corpus-creation scripts. They're very helpful.

I got an IndexError with the BPE/spaCy tokenization, as shown below:

Extracting embeddings into /felicity/workspace/exbert/server/data/mt/deepak/embeddings/embeddings.hdf5
['deep', '##ak', 'i', 'don', '’', 't', 'feel', 'like', 'doing', 'my', 'meditation', 'today', '.']
['deepak', 'i', 'do', 'n’t', 'feel', 'like', 'doing', 'my', 'meditation', 'today', '.']
11
0
Deepak I don’t feel like doing my meditation today.
Traceback (most recent call last):
  File "/felicity/workspace/exbert/server/data/processing/create_corpus.py", line 19, in <module>
    create_hdf5.main(unique_sent_pckl, args.outdir, args.force)
  File "/felicity/workspace/exbert/server/data/processing/create_hdf5.py", line 221, in main
    sentences_to_hdf5(embedding_extractor, str(embedding_outpath), sentences, clear_file=force)
  File "/felicity/workspace/exbert/server/data/processing/create_hdf5.py", line 179, in sentences_to_hdf5
    b_pos = combine_tokens_meta(b_tokens, s_tokens, s_pos)
  File "/felicity/workspace/exbert/server/utils/token_processing.py", line 121, in combine_tokens_meta
    meta_list.append(spacy_meta[j])
IndexError: list index out of range

Is this IndexError designed to be raised under certain circumstances? If not, how can I solve it?

Thank you very much.
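For reference, the divergence reproduces outside of exbert with stock tokenizers. A minimal sketch, assuming the transformers BertTokenizer (I'm not sure which tokenizer wrapper exbert's pipeline actually uses):

```python
# Standalone reproduction of the two token lists above. Assumes the
# `transformers` package; exbert itself may load its tokenizer differently.
import spacy
from transformers import BertTokenizer

sent = "Deepak I don’t feel like doing my meditation today."

bpe_toks = BertTokenizer.from_pretrained("bert-base-uncased").tokenize(sent)
spacy_toks = [t.text for t in spacy.load("en_core_web_sm")(sent)]

print(bpe_toks)    # ['deep', '##ak', 'i', 'don', '’', 't', 'feel', ...]
print(spacy_toks)  # ['Deepak', 'I', 'do', 'n’t', 'feel', ...]
```

Note that "don’t" becomes three word-start BPE pieces ('don', '’', 't') but only two spaCy tokens ('do', 'n’t'), which is exactly where the lists diverge.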

@bhoov (Owner) commented Oct 22, 2019

This looks like a separate bug in the spaCy-to-BPE alignment. I'll look into it in a separate issue.

Looking at the two tokenized lists, it seems the contraction "don’t" throws off the alignment.
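A hypothetical re-creation of the failure mode (this is not the actual combine_tokens_meta, just a count-based walk that makes the same assumption):

```python
# Hypothetical sketch, NOT exbert's combine_tokens_meta: a count-based walk
# that assumes every BPE piece not starting with "##" opens a new spaCy token.
def naive_align(bpe_tokens, spacy_meta):
    meta_list, j = [], -1
    for tok in bpe_tokens:
        if not tok.startswith("##"):
            j += 1                       # assume a new word starts here
        meta_list.append(spacy_meta[j])  # over-runs when BPE splits a contraction
    return meta_list

bpe_toks = ['deep', '##ak', 'i', 'don', '’', 't']  # 5 word-start pieces
spacy_toks = ['deepak', 'i', 'do', 'n’t']          # only 4 spaCy tokens
naive_align(bpe_toks, spacy_toks)  # IndexError: list index out of range
```

Because BPE yields more word-start pieces than spaCy yields tokens, any per-word index into the spaCy metadata eventually runs off the end, matching the traceback above.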

@bhoov closed this as completed Jun 19, 2020