ADD E5 #1179

Closed
kyakuno opened this issue Jul 18, 2023 · 16 comments

kyakuno commented Jul 18, 2023

A Japanese embedding model. A comparison of each model's Japanese-language performance:
https://hironsan.hatenablog.com/entry/2023/07/05/073150


kyakuno commented Jul 18, 2023

srpkdyy self-assigned this Jul 20, 2023

srpkdyy commented Jul 31, 2023

@kyakuno When I exported this model, the onnx_data file exceeded 2 GB. Is PB splitting required?


kyakuno commented Aug 1, 2023

As long as onnxruntime and ailia can load it, exceeding 2 GB is not a problem.
If it cannot be loaded, please split it into pb files.
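
For reference, a minimal sketch of keeping the .onnx file itself under the 2 GB protobuf limit using ONNX's standard external-data format, assuming the onnx Python package and hypothetical file names (whether ailia additionally needs its own pb split is a separate question):

```python
import onnx

# Load the exported model (external data, if any, is picked up automatically).
model = onnx.load("multilingual-e5-base.onnx")

# Re-save with all large tensors moved to a single external file so the
# .onnx protobuf itself stays under the 2 GB limit.
onnx.save_model(
    model,
    "multilingual-e5-base.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="multilingual-e5-base.onnx_data",
    size_threshold=1024,  # tensors larger than 1 KB go to the external file
)
```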


kyakuno commented Aug 3, 2023

Points to note when working with embeddings:
https://note.com/mahlab/n/nfad5143906ba


kyakuno commented Oct 16, 2023

We want to export e5-base and e5-large.


kyakuno commented Oct 16, 2023

e5-base
https://huggingface.co/intfloat/multilingual-e5-base/tree/main/onnx
The ONNX model is already published, so we only need to create a sample.
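
For reference, a minimal sketch of running the published ONNX model with onnxruntime; the local file name is hypothetical, and the input names are assumed to follow the usual optimum export convention (input_ids / attention_mask):

```python
import onnxruntime
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
session = onnxruntime.InferenceSession("model.onnx")  # hypothetical local copy

inputs = tokenizer(["query: 今日の天気"], padding=True, truncation=True,
                   return_tensors="np")
last_hidden_state = session.run(
    None,
    {"input_ids": inputs["input_ids"],
     "attention_mask": inputs["attention_mask"]},
)[0]
print(last_hidden_state.shape)  # (batch, sequence, hidden)
```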


kyakuno commented Oct 16, 2023

Since it uses XLMRobertaTokenizer, it should be usable in the same way as SentenceTransformer.
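
For reference, the E5 model card obtains sentence embeddings by attention-mask-weighted average pooling over the last hidden state, followed by L2 normalization; a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-base")

batch = tokenizer(["query: 今日の天気"], padding=True, truncation=True,
                  return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state

# Average over valid tokens only, then L2-normalize (as in the model card).
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (1, 768)
```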

kyakuno assigned ooe1123 and unassigned srpkdyy Oct 16, 2023

kyakuno commented Oct 16, 2023

@ooe1123 Would it be possible to ask you to take this on?


kyakuno commented Oct 16, 2023

Please create the sample based on the following code:
https://github.com/axinc-ai/ailia-models/tree/master/natural_language_processing/sentence_transformers_japanese


kyakuno commented Oct 16, 2023

ooe1123 mentioned this issue Oct 17, 2023

kyakuno commented Oct 18, 2023

The tokenizer is XLMRoberta, and the spiece model also appears to be exactly the same.

Sentence Transformer Japanese

```python
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')
inputs = tokenizer(sents, padding=True, truncation=True, return_tensors='np')
```

"グーグル㍿"
[     0,      6,   4758, 246514,   4725,   4758, 246514,   5283,  93743,      2]

Is NFKC normalization not being applied? ailia currently performs NFKC normalization (see the unicodedata sketch after the comparison below).

"グーグル株式会社"
[     0,      6,  21300,   4725, 155552,  93743,      2]

E5

```python
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
inputs = tokenizer(sents, padding=True, truncation=True, return_tensors='np')
```

"グーグル㍿"
[     0,      6,   4758, 246514,   4725,   4758, 246514,   5283,  93743,      2]
"グーグル株式会社"
[     0,      6,  21300,   4725, 155552,  93743,      2]
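
For reference, the NFKC folding mentioned above can be reproduced with Python's standard unicodedata module; ㍿ (U+337F SQUARE CORPORATION) normalizes to 株式会社:

```python
import unicodedata

print(unicodedata.normalize("NFKC", "グーグル㍿"))  # -> グーグル株式会社
```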

kyakuno closed this as completed Oct 19, 2023

kyakuno commented May 18, 2024

We want to speed this up with LayerNorm, so re-export with opset 17 (the fused LayerNormalization op was added in opset 17). The straightforward

```
optimum-cli export onnx --model intfloat/multilingual-e5-base --opset 17 multilingual-e5-base
```

failed with a trust_remote_code error, so we exported with the following script instead:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: 南瓜的家常做法',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')

class WrapperModel(torch.nn.Module):
    """Wraps the model so the ONNX graph has a single, well-defined output."""
    def __init__(self, model):
        super(WrapperModel, self).__init__()
        self.model = model

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        return outputs.last_hidden_state  # select only the output we need

model = WrapperModel(model)

# Tokenize the input texts and run once to sanity-check the wrapper.
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(batch_dict["input_ids"], batch_dict["attention_mask"])

torch.onnx.export(model,
                  args=(batch_dict["input_ids"], batch_dict["attention_mask"]),
                  f="multilingual-e5-base.onnx",
                  verbose=True,
                  input_names=["input", "attention_mask"],
                  output_names=["last_hidden_state"],
                  opset_version=17,
                  dynamic_axes={
                      "input": {0: "batch_size", 1: "sequence_length"},
                      "attention_mask": {0: "batch_size", 1: "sequence_length"},
                      "last_hidden_state": {0: "batch_size", 1: "sequence_length"},
                  })
```
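
Continuing from the script above, a quick sanity check of the exported file against the PyTorch output (assumes onnxruntime is installed):

```python
import numpy as np
import onnxruntime

# Run the exported model on the same batch and compare with `outputs`.
session = onnxruntime.InferenceSession("multilingual-e5-base.onnx")
onnx_out = session.run(
    None,
    {"input": batch_dict["input_ids"].numpy(),
     "attention_mask": batch_dict["attention_mask"].numpy()},
)[0]
print(np.abs(onnx_out - outputs.detach().numpy()).max())
```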


kyakuno commented May 18, 2024

There is a lot of variance in the inference measurements.

opset11 : 9992 ms
opset17 : 11902 ms
opset17 + optimizer : 9852 ms
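
The comment above does not say which optimizer was applied; one possibility is onnxruntime's transformer graph optimizer, sketched here under that assumption (num_heads and hidden_size correspond to the XLM-RoBERTa base configuration):

```python
from onnxruntime.transformers import optimizer

# Fuse attention / LayerNorm / Gelu subgraphs in the exported model.
opt_model = optimizer.optimize_model(
    "multilingual-e5-base.onnx",
    model_type="bert",
    num_heads=12,
    hidden_size=768,
)
opt_model.save_model_to_file("multilingual-e5-base.opt.onnx")
```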


kyakuno commented May 18, 2024

Could it be that LayerNorm is not being sufficiently accelerated?
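
One way to check is to count op types in the exported graph: if LayerNorm was fused, it appears as LayerNormalization nodes rather than ReduceMean/Sub/Div chains. A sketch with the onnx package (file name hypothetical):

```python
from collections import Counter

import onnx

model = onnx.load("multilingual-e5-base.opt.onnx")
counts = Counter(node.op_type for node in model.graph.node)
print(counts.get("LayerNormalization", 0), counts.get("ReduceMean", 0))
```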


kyakuno commented May 18, 2024


kyakuno commented May 18, 2024

Re-measured; the optimization does seem to be effective:

BLAS : 7759 ms -> 7051 ms
MPS : 994 ms -> 904.2 ms
