TypeError: function takes exactly 5 arguments (1 given) #45

Open
Fqlox opened this issue Jan 8, 2024 · 2 comments

Comments


Fqlox commented Jan 8, 2024

Hi,

I tried to launch the project on Windows 10 with an RTX 3090, after adding the `KMP_DUPLICATE_LIB_OK` workaround shown below.
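For reference, the lines I added at the top of demo.py (the exact placement is my own choice) were roughly:

```python
# Allow duplicate OpenMP runtimes to coexist (works around the common
# libiomp5md.dll "already initialized" crash on Windows).
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```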
With that in place, I got the following errors:

(anytext) E:\repo\AnyText>python demo.py
2024-01-08 11:50:28,862 - modelscope - INFO - PyTorch version 2.0.1+cu118 Found.
2024-01-08 11:50:28,866 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2024-01-08 11:50:28,866 - modelscope - INFO - Loading ast index from C:\Users\USER\.cache\modelscope\ast_indexer
2024-01-08 11:50:28,963 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 407792a6ca3bfb6c73e1d4358a891444 and a total number of 946 components indexed
2024-01-08 11:50:34,252 - modelscope - INFO - Use user-specified model revision: v1.1.1
2024-01-08 11:50:38,802 - modelscope - WARNING - ('PIPELINES', 'my-anytext-task', 'anytext-pipeline') not found in ast index file
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
ControlLDM: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Loaded model config from [models_yaml/anytext_sd15.yaml]
Loaded state_dict from [C:\Users\USER\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing\anytext_v1.1.ckpt]
2024-01-08 11:50:58,008 - modelscope - INFO - initiate model from C:\Users\USER\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing\nlp_csanmt_translation_zh2en
2024-01-08 11:50:58,008 - modelscope - INFO - initiate model from location C:\Users\USER\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing\nlp_csanmt_translation_zh2en.
2024-01-08 11:50:58,014 - modelscope - INFO - initialize model from C:\Users\USER\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing\nlp_csanmt_translation_zh2en
{'hidden_size': 1024, 'filter_size': 4096, 'num_heads': 16, 'num_encoder_layers': 24, 'num_decoder_layers': 6, 'attention_dropout': 0.0, 'residual_dropout': 0.0, 'relu_dropout': 0.0, 'layer_preproc': 'layer_norm', 'layer_postproc': 'none', 'shared_embedding_and_softmax_weights': True, 'shared_source_target_embedding': True, 'initializer_scale': 0.1, 'position_info_type': 'absolute', 'max_relative_dis': 16, 'num_semantic_encoder_layers': 4, 'src_vocab_size': 50000, 'trg_vocab_size': 50000, 'seed': 1234, 'beam_size': 4, 'lp_rate': 0.6, 'max_decoded_trg_len': 100, 'device_map': None, 'device': 'cuda'}
2024-01-08 11:50:58,026 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-01-08 11:50:58,027 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'src_lang': 'zh', 'tgt_lang': 'en', 'src_bpe': {'file': 'bpe.zh'}, 'model_dir': 'C:\\Users\\USER\\.cache\\modelscope\\hub\\damo\\cv_anytext_text_generation_editing\\nlp_csanmt_translation_zh2en'}. trying to build by task and model information.
2024-01-08 11:50:58,027 - modelscope - WARNING - No preprocessor key ('csanmt-translation', 'translation') found in PREPROCESSOR_MAP, skip building preprocessor.
Traceback (most recent call last):
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\nlp\translation_pipeline.py", line 54, in __init__
    self._src_vocab = dict([
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\nlp\translation_pipeline.py", line 54, in <listcomp>
    self._src_vocab = dict([
  File "E:\conda\envs\anytext\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 29: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "C:\Users\USER\.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 336, in __init__
    pipe_model = AnyTextModel(model_dir=model, **kwargs)
  File "C:\Users\USER\.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 46, in __init__
    self.init_model(**kwargs)
  File "C:\Users\USER\.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 240, in init_model
    self.trans_pipe = pipeline(task=Tasks.translation, model=os.path.join(self.model_dir, 'nlp_csanmt_translation_zh2en'))
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\builder.py", line 170, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\builder.py", line 65, in build_pipeline
    return build_from_cfg(
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
TypeError: function takes exactly 5 arguments (1 given)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\repo\AnyText\demo.py", line 53, in <module>
    inference = pipeline('my-anytext-task', model='damo/cv_anytext_text_generation_editing', model_revision='v1.1.1', use_fp16=not args.use_fp32, use_translator=not args.no_translator, font_path=args.font_path)
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\builder.py", line 170, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\pipelines\builder.py", line 65, in build_pipeline
    return build_from_cfg(
  File "E:\conda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
TypeError: AnyTextPipeline: function takes exactly 5 arguments (1 given)

(anytext) E:\repo\AnyText>
wenmengzhou pushed a commit to modelscope/modelscope that referenced this issue Jan 9, 2024
The proposed fix involves converting the encoding used when reading the vocabulary file from the Windows code page (cp1252 in the traceback above) to UTF-8; a sketch of that kind of change is shown after the issue references below.

This is in response to the issues reported when running Anytext:

tyxsspa/AnyText#45
tyxsspa/AnyText#36
tyxsspa/AnyText#22
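For anyone hitting the same UnicodeDecodeError before a patched modelscope release is available, a minimal sketch of the kind of change described above, assuming the BPE vocabulary file is read line by line with `open()` (the `load_vocab` helper and `vocab_path` name are illustrative, not the actual modelscope code):

```python
# Illustrative sketch only: force UTF-8 instead of the Windows default code
# page (cp1252), which cannot decode byte 0x81 in the BPE vocabulary file.
def load_vocab(vocab_path):
    with open(vocab_path, encoding="utf-8") as f:
        # Map each vocabulary token (one per line) to its line index.
        return {line.strip(): idx for idx, line in enumerate(f)}
```

As a local workaround, enabling Python's UTF-8 mode (for example `set PYTHONUTF8=1` before running `python demo.py`) may also avoid the error, since it makes `open()` default to UTF-8.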
@QueenPuxxi

I had the same problem. How should I solve it?


dvdcjw commented Feb 13, 2024

Same here
