
masking model key, fix cuda model config transfer problem #16

Merged
pymumu merged 1 commit into modelbox-ai:main from fujl:main on Dec 30, 2021

Conversation

@fujl (Contributor) commented Dec 9, 2021

masking model key,fix cuda model config transfer problem


auto drivers_ptr = GetBindDevice()->GetDeviceManager()->GetDrivers();
ModelDecryption engine_decrypt;
engine_decrypt.Init(params_.engine, drivers_ptr, config);
Contributor: Check the return value.

Contributor Author: Handled.

std::shared_ptr<uint8_t> modelBuf =
engine_decrypt.GetModelSharedBuffer(model_len);
if (modelBuf == nullptr) {
auto err_msg = "modelBuf is empty, the model file " + params_.engine;
Contributor: Log the model decryption failure here.

Contributor Author: Fixed.

if (modelBuf == nullptr) {
auto err_msg = "modelBuf is empty, the model file " + params_.engine;
MBLOG_ERROR << err_msg;
return {modelbox::STATUS_FAULT, err_msg};
Contributor: Return BAD_CONF here.

Contributor Author: Fixed.

}
engine_ = TensorRTInferObject(infer->deserializeCudaEngine(
modelBuf.get(), model_len, plugin_factory_.get()));
} else if (engine_decrypt.GetModelState() ==
Contributor: What about the else branch?

Contributor Author: Handled.

file.read(trtModelStream.data(), size);
file.close();

engine_ = TensorRTInferObject(infer->deserializeCudaEngine(
Contributor: Have the engine load the model file directly here instead of making an intermediate copy; it uses too much memory.

Contributor Author: The EngineToModel method was already loading from memory before this change.

Contributor: See whether it can be changed.

modelbox::Status PrePareInput(std::shared_ptr<modelbox::DataContext>& data_ctx,
std::vector<void*>& memory);
modelbox::Status PrePareOutput(
std::shared_ptr<modelbox::DataContext>& data_ctx,
Contributor: For this formatting, confirm whether it uses our standard clang-format.

Contributor Author: (screenshot attached)

Contributor: Format it on Linux, using the default VS Code configuration; don't specify any clang-format file.
