
About the total_word_feature_extractor_zh.dat file #13

Open
Jacky-Chiu opened this issue Nov 9, 2017 · 45 comments

@Jacky-Chiu commented Nov 9, 2017

Rasa NLU version (e.g. 0.7.3):

Used backend / pipeline (mitie, spacy_sklearn, ...):

Operating system (windows, osx, ...):

Issue:

Content of configuration file (if used & relevant):

Hello, I downloaded this file from Baidu Netdisk, but when I open it the content is garbled. I set the encoding to utf-8 and even re-saved the file as utf-8.
@crownpku (Owner) commented Nov 9, 2017

This file provides the word-vector support for rasa nlu and should be in MITIE's own binary format. What do you need to open it for?
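
Since the file is MITIE's binary format, a text editor will only show mojibake; the supported way to open it is through MITIE's Python bindings. A minimal sketch, assuming the `mitie` package is installed and the .dat file sits in the working directory:

```python
# Minimal sketch: load the MITIE binary through the mitie Python bindings
# instead of a text editor. Assumes `pip`-installed/built mitie and that
# the .dat file is in the current directory.
from mitie import total_word_feature_extractor

twfe = total_word_feature_extractor("total_word_feature_extractor_zh.dat")
print(twfe.num_dimensions)           # dimensionality of each word vector
print(twfe.num_words_in_dictionary)  # size of the built-in dictionary
vec = twfe.get_feature_vector("你好")  # vector for one (pre-segmented) word
print(vec[0])
```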

@Jacky-Chiu (Author) commented Nov 9, 2017

I read your article and follow your WeChat official account. My main goal right now is to gather some corpora to build a knowledge base; I also saw that there are knowledge-graph APIs that can be called, and I'd like to try building a question-answering bot from the materials and papers I've collected.

@crownpku (Owner) commented Nov 9, 2017

total_word_feature_extractor_zh.dat is only word vectors; it has nothing to do with a knowledge base.

@Jacky-Chiu (Author) commented Nov 9, 2017

Understood, thanks!

@BrikerMan commented Nov 17, 2017

Hi, I have a batch of movie titles and related corpus. Can I continue training on top of your total_word_feature_extractor_zh.dat to incorporate this batch of data, or is retraining from scratch with wordrep the only option?

@crownpku (Owner) commented Nov 17, 2017

@BrikerMan As far as I know, retraining from scratch is the only option (if the movie corpus isn't large enough, you can train it together with something like a Wikipedia dump), and you should use the same jieba instance, loaded with your own dictionary, for the segmentation preprocessing.
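
A minimal sketch of that preprocessing step, assuming jieba is installed; the file names (user_dict.txt, corpus_raw.txt, corpus_segmented.txt) are placeholders. wordrep expects whitespace-separated tokens, so each line is segmented with the same custom dictionary you will later use at runtime:

```python
# Sketch: segment a raw corpus with jieba plus a custom dictionary so the
# output can be fed to MITIE's wordrep. All file names are hypothetical.
import jieba

jieba.load_userdict("user_dict.txt")  # one word per line, same dict as at runtime

with open("corpus_raw.txt", encoding="utf-8") as fin, \
     open("corpus_segmented.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        # jieba.cut yields tokens; join with spaces for wordrep's input format
        fout.write(" ".join(jieba.cut(line.strip())) + "\n")
```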

@BrikerMan commented Nov 17, 2017

@crownpku Got it, thanks~ I'll give it a try.

@BrikerMan commented Nov 17, 2017

@crownpku Have you tried training a spacy model? MITIE training is single-threaded only, which is far too slow, and every time the movie-title vocabulary is updated this whole step has to be redone.

@crownpku (Owner) commented Nov 17, 2017

spacy's Chinese support also just calls jieba for the segmentation part... My MITIE training takes about 2 days, which is actually tolerable.
This model doesn't need frequent updates; I'd retrain only when the corpus changes or grows by, say, 30% or more, otherwise the difference is negligible.

@BrikerMan commented Nov 17, 2017

OK, looks like that's the only way. Separately, after training my MITIE model, training rasa nlu is also very slow, and I only have 30 samples; it seems to be the same problem as mit-nlp/MITIE#11 (comment). Roughly how many samples does your nlu have, and how long does training take?

@BrikerMan commented Nov 17, 2017

We only use MITIE for the word vectors, so could gensim's word2vec replace those vectors, or is there a fundamental difference between the two?

@crownpku (Owner) commented Nov 17, 2017

Using MITIE's classifier is quite slow; classification with sklearn is much faster, and 30 samples should train in under a minute.
In principle, word2vec is the more common approach. The rasa_nlu team sticks with MITIE for training the word vectors, apparently because, combined with MITIE's NLP algorithms, it captures more semantic information and performs better.

@BrikerMan commented Nov 17, 2017

With Chinese nlu, if you use MITIE there's no way to use sklearn as the classifier, is there? With the config below, 30 samples take around 40 minutes.

{
  "name": "rasa_zh_nlu",
  "pipeline": [
    "nlp_mitie",
    "tokenizer_bf",
    "ner_mitie",
    "ner_synonyms",
    "intent_entity_featurizer_regex",
    "intent_featurizer_mitie",
    "intent_classifier_sklearn"
  ],
  "language": "zh",
  "mitie_file": "./data/total_word_feature_extractor.dat",
  "path": "./models",
  "data": "./data/nlu_data.json",
}
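
For context, a minimal training sketch against a 0.x-era rasa_nlu install using the config above; exact import paths vary across 0.x releases (`rasa_nlu.converters` in older ones, `rasa_nlu.training_data` later):

```python
# Sketch for a rasa_nlu 0.x install; import paths differ between releases.
from rasa_nlu.converters import load_data
from rasa_nlu.config import RasaNLUConfig
from rasa_nlu.model import Trainer

training_data = load_data("./data/nlu_data.json")
trainer = Trainer(RasaNLUConfig("config.json"))  # the JSON config shown above
trainer.train(training_data)
model_directory = trainer.persist("./models")    # where the trained model lands
print(model_directory)
```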

@crownpku (Owner) commented Nov 17, 2017

That config does use intent_classifier_sklearn; MITIE is only used to generate features.
With an essentially identical config, mine really does finish training within a minute, although my jieba step doesn't load a custom dictionary.
Also, is tokenizer_bf a custom tokenizer of yours? Could that be where the slowdown is?

@BrikerMan commented Nov 17, 2017

The tokenizer is basically identical to yours; it just additionally loads a custom dictionary. Let me share my data with you so you can try running it. The data is here: https://github.com/BrikerMan/rasa-demo/blob/master/data.json

@crownpku (Owner) commented Nov 17, 2017

@BrikerMan Sure, send it to my email: crownpku@gmail.com
I do suspect slow loading of the custom dictionary is the cause...

@BrikerMan commented Nov 17, 2017

Switching to 'tokenizer_jieba' here makes no difference. It looks like this problem: RasaHQ/rasa#260 (comment)

@BrikerMan commented Nov 20, 2017

@crownpku Any results?

@crownpku (Owner) commented Nov 20, 2017

@BrikerMan I haven't received your sample data...

@BrikerMan commented Nov 20, 2017

I put it directly on GitHub, as mentioned above: https://github.com/BrikerMan/rasa-demo/blob/master/data.json

@crownpku (Owner) commented Nov 20, 2017

I'm running it with your data; it really does get very slow once it reaches the classification step....

Part I: train segmenter
words in dictionary: 200000
num features: 271
now do training
C:           20
epsilon:     0.01
num threads: 1
cache size:  5
max iterations: 2000
loss per missed segment:  3
C: 20   loss: 3         0.807018
C: 35   loss: 3         0.807018
C: 20   loss: 4.5       0.877193
C: 5   loss: 3  0.807018
C: 20   loss: 1.5       0.789474
C: 20   loss: 6         0.877193
C: 20   loss: 5.25      0.877193
C: 21.5   loss: 4.65    0.877193
C: 16.9684   loss: 4.72073      0.877193
C: 18.2577   loss: 4.43072      0.877193
C: 18.2131   loss: 4.55681      0.877193
C: 20   loss: 4.4       0.877193
C: 20.9694   loss: 4.47547      0.877193
best C: 20
best loss: 4.5
num feats in chunker model: 4095
train: precision, recall, f1-score: 1 1 1
Part I: elapsed time: 4 seconds.

Part II: train segment classifier
now do training
num training samples: 58


It's still running, stuck at the ner_mitie step. Let me think about what's going on.

@BrikerMan commented Nov 20, 2017

@crownpku OK, thanks. I'm also trying to work out why it's so slow.

@BrikerMan commented Apr 3, 2018

Any progress?

@kevinsay commented Apr 4, 2018

I have 178 samples, and training is very slow with or without a custom dictionary.

@cloudskyme commented May 18, 2018

Hi, total_word_feature_extractor_zh.dat can no longer be downloaded. Is there anywhere it's still available?

@kevinsay commented May 20, 2018
This comment has been minimized.

@crapthings commented May 28, 2018

I downloaded the file; where should it go?

I put it at
models/default.dat
but it still says it can't find it.

Every run requires passing --path ./models/default.data

Then it reports:

curl -XPOST localhost:5000/parse -d '{"q":"我发烧了该吃什么药?", "project": "rasa_nlu_test", "model": "model_20170921-170911"}' | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   160    0    60  100   100   7545  12575 --:--:-- --:--:-- --:--:-- 14285
{
    "error": "No project found with name 'rasa_nlu_test'."
}
@KevinZhou92 commented May 28, 2018

@kevinsay Hi, could you share the total_word_feature_extractor_zh.dat file again? When I download and use it I get: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 40: invalid start byte

@kevinsay
This comment has been minimized.

@KevinZhou92 commented May 28, 2018

@kevinsay Thanks!

@yuxuan2015 commented May 30, 2018

Does anyone know what the data inside total_word_feature_extractor_zh.dat looks like?

@crapthings commented May 30, 2018

@yuxuan2015
It's a trained binary; I don't think there's anything to see by opening it directly.
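
Building on the loading sketch earlier in the thread, the closest thing to "looking at the data" is dumping a few dictionary entries and their vectors through MITIE's Python bindings. A sketch (under Python 3 the dictionary words may come back as bytes):

```python
# Sketch: peek at the dictionary stored inside the binary via mitie.
from mitie import total_word_feature_extractor

twfe = total_word_feature_extractor("total_word_feature_extractor_zh.dat")
words = twfe.get_words_in_dictionary()     # may be bytes under Python 3
for w in words[:10]:
    vec = twfe.get_feature_vector(w)
    print(w, [vec[i] for i in range(5)])   # first 5 dimensions per word
```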

@yuxuan2015 commented May 31, 2018

@crapthings Do you know how to switch to word2vec word vectors, then?

@mashagua commented Jun 5, 2018

Hi, the file is no longer available. Could you share a copy? @KevinZhou92

@KevinZhou92 commented Jun 5, 2018
This comment has been minimized.

@ghost commented Jul 7, 2018

Hello, was the cause of BrikerMan's slow training on 58 samples above ever found? Training on 90 samples is also very slow for me; it has been running for hours without finishing.

@yanolele commented Aug 9, 2018

Hello!
@KevinZhou92
The file is no longer available. Could you share another copy with me?

@siennx commented Aug 16, 2018

Could some kind soul share the file? I've been searching for a long time and all the links are dead. Thanks.

@KevinZhou92 commented Aug 16, 2018

@siennx @yanolele Link: https://pan.baidu.com/s/1kNENvlHLYWZIddmtWJ7Pdg Password: p4vx

Edit: I posted the wrong link, sorry; it has been fixed.

@siennx commented Aug 16, 2018

@KevinZhou92 Thanks for sharing, but when I open it: the first time I saw the page and entered the password, it said the page doesn't exist, and every attempt since then goes straight to "page not found". Am I doing something wrong?
Update: Sorry, I tried the new link and still hit the "page does not exist" problem. Could you take another look? Thanks.

@aqiank commented Aug 29, 2018

I downloaded this file a long time ago. I'm not sure whether it's the same file. I've uploaded it to MEGA; the download may be a bit slow.

链接: https://mega.nz/#!EWgTHSxR!NbTXDAuVHwwdP2-Ia8qG7No-JUsSbH5mNQSRDsjztSA
SHA-1: 1c0f473464d14c706af695f5791e6e959d5efac8
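
A quick way to check a download against that checksum, using only the Python standard library (the file name is whatever the download was saved as):

```python
# Sketch: verify the downloaded file against the SHA-1 posted above.
import hashlib

sha1 = hashlib.sha1()
with open("total_word_feature_extractor_zh.dat", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha1.update(chunk)
print(sha1.hexdigest())  # expect 1c0f473464d14c706af695f5791e6e959d5efac8
```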

@mashagua commented Aug 29, 2018
This comment has been minimized.

@siennx commented Sep 5, 2018

Thanks for sharing the file; I've downloaded it.

@Ma-Dan commented Sep 7, 2018

MITIE's wordrep training is extremely time-consuming. Training on roughly 1 GB of Chinese Wikipedia corpus required 64 GB of RAM, and it uses only a single CPU core: it took 56 hours from start to producing word_vects.dat, then another 7 hours to go from word_vects.dat to total_word_feature_extractor.dat.
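
For anyone reproducing this, a sketch of kicking off the run from Python; it assumes the `wordrep` binary has been built from the MITIE repo and that ./corpus_dir holds only the pre-segmented plain-text files:

```python
# Sketch: launch MITIE's wordrep on a folder of segmented text files.
# Assumes the wordrep binary was built from the MITIE repo and that
# ./corpus_dir contains only whitespace-tokenized plain-text files.
import subprocess

subprocess.run(["./wordrep", "-e", "./corpus_dir"], check=True)
# On success, word_vects.dat and total_word_feature_extractor.dat are
# written to the working directory. Expect a single-core run of many
# hours (days on ~1 GB of corpus) and tens of GB of RAM, per the report above.
```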

@red-frog commented Mar 20, 2019

I'm running into the same slow-training problem. Has a solution been found?
