This repository has been archived by the owner on Jan 24, 2024. It is now read-only.
```yaml
OPTIONS:
    Framework: CAFFE
    SavePath: ./output
    ResultName: face_r100
    Config:
        LaunchBoard: ON
        Server:
            ip: 0.0.0.0
            port: 8888
        OptimizedGraph:
            enable: OFF
            path: ./googlenet.paddle_inference_model.bin.saved
    LOGGER:
        LogToPath: ./log/
        WithColor: ON

TARGET:
    CAFFE:
        # path of fluid inference model
        Debug: NULL                           # Generally no need to modify.
        PrototxtPath: ./model/model.prototxt  # The upper path of a fluid inference model.
        ModelPath: ./model/model.caffmodel    # The upper path of a fluid inference model.
        NetType:
```
```
Traceback (most recent call last):
  File "converter.py", line 79, in <module>
    graph = Graph(config)
  File "/root/Anakin/tools/external_converter_v2/parser/graph.py", line 26, in __init__
    raise NameError('ERROR: GrapProtoIO not support %s model.' % (config.framework))
NameError: ERROR: GrapProtoIO not support CAFFE model.
```
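The traceback suggests that `Graph.__init__` in `graph.py` looks up `config.framework` in a table of supported parsers and raises `NameError` for any value not found there. A minimal sketch of that dispatch pattern, under that assumption (the table contents and parser names here are hypothetical, not Anakin's actual code):

```python
# Hypothetical sketch of the framework-dispatch check behind the error.
# If the YAML value for Framework has stray whitespace, quoting, or a
# different case than the registered key, the same NameError fires.
SUPPORTED_PARSERS = {
    'CAFFE': 'CaffeParser',   # illustrative parser names
    'FLUID': 'FluidParser',
}

def select_parser(framework):
    """Return the parser registered for `framework`, mirroring the
    lookup that graph.py performs on config.framework."""
    if framework not in SUPPORTED_PARSERS:
        raise NameError('ERROR: GrapProtoIO not support %s model.' % framework)
    return SUPPORTED_PARSERS[framework]
```

If the dispatch works this way, it is worth checking that the Caffe parser was actually compiled in (the converter's Caffe support requires generating the Caffe proto bindings first) and that the `Framework:` value matches the registered key exactly.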
My config is shown above.