
One click integration package stuck at 0% #98

Open
VincentChanLivAway opened this issue Apr 27, 2024 · 0 comments


I downloaded the one-click integration package and ran the .bat file.

I just used the demo image file with the demo prompt ((masterpiece, best quality, highres:1),(1boy, solo:1),(eye blinks:1.8),(head wave:1.3)), kept everything at the defaults, and clicked "Generate".

I got this:

```
0%|          | 0/10 [00:00<?, ?steps/s]
```

It has been stuck at 0% for almost 2,000 seconds now. The GUI has not stopped or crashed; the elapsed-time counter is still increasing.

My environment:

- Windows 10
- Intel i9 CPU, 32 GB RAM
- NVIDIA RTX 3060 (12 GB)

Full log (from startup):

2024-04-27 22:15:04,424- py.warnings:109- WARNING- F:\MuseV\controlnet_aux\src\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_512 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)

args
{'add_static_video_prompt': False,
'context_batch_size': 1,
'context_frames': 12,
'context_overlap': 4,
'context_schedule': 'uniform_v2',
'context_stride': 1,
'controlnet_conditioning_scale': 1.0,
'controlnet_name': 'dwpose_body_hand',
'cross_attention_dim': 768,
'enable_zero_snr': False,
'end_to_end': True,
'face_image_path': None,
'facein_model_cfg_path': '../../configs/model/facein.py',
'facein_model_name': None,
'facein_scale': 1.0,
'fix_condition_images': False,
'fixed_ip_adapter_image': True,
'fixed_refer_face_image': True,
'fixed_refer_image': True,
'fps': 4,
'guidance_scale': 7.5,
'height': None,
'img_length_ratio': 1.0,
'img_weight': 0.001,
'interpolation_factor': 1,
'ip_adapter_face_model_cfg_path': '../../configs/model/ip_adapter.py',
'ip_adapter_face_model_name': None,
'ip_adapter_face_scale': 1.0,
'ip_adapter_model_cfg_path': '../../configs/model/ip_adapter.py',
'ip_adapter_model_name': 'musev_referencenet_pose',
'ip_adapter_scale': 1.0,
'ipadapter_image_path': None,
'lcm_model_cfg_path': '../../configs/model/lcm_model.py',
'lcm_model_name': None,
'log_level': 'INFO',
'motion_speed': 8.0,
'n_batch': 1,
'n_cols': 3,
'n_repeat': 1,
'n_vision_condition': 1,
'need_hist_match': False,
'need_img_based_video_noise': True,
'need_return_condition': False,
'need_return_videos': False,
'need_video2video': False,
'negative_prompt': 'V2',
'negprompt_cfg_path': '../../configs/model/negative_prompt.py',
'noise_type': 'video_fusion',
'num_inference_steps': 30,
'output_dir': './results/',
'overwrite': False,
'pose_guider_model_path': None,
'prompt_only_use_image_prompt': False,
'record_mid_video_latents': False,
'record_mid_video_noises': False,
'redraw_condition_image': False,
'redraw_condition_image_with_facein': True,
'redraw_condition_image_with_ip_adapter_face': True,
'redraw_condition_image_with_ipdapter': True,
'redraw_condition_image_with_referencenet': True,
'referencenet_image_path': None,
'referencenet_model_cfg_path': '../../configs/model/referencenet.py',
'referencenet_model_name': 'musev_referencenet',
'sample_rate': 1,
'save_filetype': 'mp4',
'save_images': False,
'sd_model_cfg_path': '../../configs/model/T2I_all_model.py',
'sd_model_name': 'majicmixRealv6Fp16',
'seed': None,
'strength': 0.8,
'target_datas': 'boy_dance2',
'test_data_path': './configs/infer/testcase_video_famous.yaml',
'time_size': 12,
'unet_model_cfg_path': '../../configs/model/motion_model.py',
'unet_model_name': 'musev_referencenet_pose',
'use_condition_image': True,
'vae_model_path': '../../checkpoints/vae/sd-vae-ft-mse',
'video_guidance_scale': 3.5,
'video_guidance_scale_end': None,
'video_guidance_scale_method': 'linear',
'video_has_condition': True,
'video_is_middle': False,
'video_negative_prompt': 'V2',
'video_num_inference_steps': 10,
'video_overlap': 1,
'video_strength': 1.0,
'vision_clip_extractor_class_name': 'ImageClipVisionFeatureExtractor',
'vision_clip_model_path': '../../checkpoints/IP-Adapter/models/image_encoder',
'w_ind_noise': 0.5,
'which2video': 'video_middle',
'width': None,
'write_info': False}

running model, T2I SD
{'majicmixRealv6Fp16': {'sd': 'F:\MuseV\configs\model\../../checkpoints\t2i\sd1.5/majicmixRealv6Fp16'}}
lcm: None None
unet_model_params_dict_src dict_keys(['musev', 'musev_referencenet', 'musev_referencenet_pose'])
unet: musev_referencenet_pose F:\MuseV\configs\model../../checkpoints\motion\musev_referencenet_pose
referencenet_model_params_dict_src dict_keys(['musev_referencenet'])
referencenet: musev_referencenet F:\MuseV\configs\model../../checkpoints\motion\musev_referencenet
ip_adapter_model_params_dict_src dict_keys(['IPAdapter', 'IPAdapterPlus', 'IPAdapterPlus-face', 'IPAdapterFaceID', 'musev_referencenet', 'musev_referencenet_pose'])
ip_adapter: musev_referencenet_pose {'ip_image_encoder': 'F:\MuseV\configs\model\../../checkpoints\IP-Adapter\image_encoder', 'ip_ckpt': 'F:\MuseV\configs\model\../../checkpoints\motion\musev_referencenet_pose/ip_adapter_image_proj.bin', 'ip_scale': 1.0, 'clip_extra_context_tokens': 4, 'clip_embeddings_dim': 1024, 'desp': ''}
facein: None None
ip_adapter_face: None None
video_negprompt V2 badhandv4, ng_deepnegative_v1_75t, (((multiple heads))), (((bad body))), (((two people))), ((extra arms)), ((deformed body)), (((sexy))), paintings,(((two heads))), ((big head)),sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (((nsfw))), nipples, extra fingers, (extra legs), (long neck), mutated hands, (fused fingers), (too many fingers)
negprompt V2 badhandv4, ng_deepnegative_v1_75t, (((multiple heads))), (((bad body))), (((two people))), ((extra arms)), ((deformed body)), (((sexy))), paintings,(((two heads))), ((big head)),sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (((nsfw))), nipples, extra fingers, (extra legs), (long neck), mutated hands, (fused fingers), (too many fingers)
2024-04-27 22:15:47,523- musev:510- INFO- vision_clip_extractor, name=ImageClipVisionFeatureExtractor, path=../../checkpoints/IP-Adapter/models/image_encoder
test_model_vae_model_path ../../checkpoints/vae/sd-vae-ft-mse
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth
Loads checkpoint by http backend from path: https://huggingface.co/wanghaofan/dw-ll_ucoco_384/resolve/main/dw-ll_ucoco_384.pth
Keyword arguments {'torch_device': 'cuda'} are not expected by MusevControlNetPipeline and will be ignored.
Loading pipeline components...: 33%|█████████████████▎ | 2/6 [00:03<00:07, 1.87s/it]2024-04-27 22:17:48,373- py.warnings:109- WARNING- F:\MuseV.glut\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(

Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:03<00:00, 1.54it/s]
2024-04-27 22:17:48,880- py.warnings:109- WARNING- F:\MuseV\diffusers\src\diffusers\pipelines\pipeline_utils.py:761: FutureWarning: torch_dtype is deprecated and will be removed in version 0.25.0.
deprecate("torch_dtype", "0.25.0", "")

2024-04-27 22:17:48,880- py.warnings:109- WARNING- F:\MuseV\diffusers\src\diffusers\pipelines\pipeline_utils.py:764: FutureWarning: torch_device is deprecated and will be removed in version 0.25.0.
deprecate("torch_device", "0.25.0", "")

args
Namespace(add_static_video_prompt=False, context_batch_size=1, context_frames=12, context_overlap=4, context_schedule='uniform_v2', context_stride=1, cross_attention_dim=768, face_image_path=None, facein_model_cfg_path='../../configs/model/facein.py', facein_model_name=None, facein_scale=1.0, fix_condition_images=False, fixed_ip_adapter_image=True, fixed_refer_face_image=True, fixed_refer_image=True, fps=4, guidance_scale=7.5, height=None, img_length_ratio=1.0, img_weight=0.001, interpolation_factor=1, ip_adapter_face_model_cfg_path='../../configs/model/ip_adapter.py', ip_adapter_face_model_name=None, ip_adapter_face_scale=1.0, ip_adapter_model_cfg_path='../../configs/model/ip_adapter.py', ip_adapter_model_name='musev_referencenet', ip_adapter_scale=1.0, ipadapter_image_path=None, lcm_model_cfg_path='../../configs/model/lcm_model.py', lcm_model_name=None, log_level='INFO', motion_speed=8.0, n_batch=1, n_cols=3, n_repeat=1, n_vision_condition=1, need_hist_match=False, need_img_based_video_noise=True, need_redraw=False, negative_prompt='V2', negprompt_cfg_path='../../configs/model/negative_prompt.py', noise_type='video_fusion', num_inference_steps=30, output_dir='./results/', overwrite=False, prompt_only_use_image_prompt=False, record_mid_video_latents=False, record_mid_video_noises=False, redraw_condition_image=False, redraw_condition_image_with_facein=True, redraw_condition_image_with_ip_adapter_face=True, redraw_condition_image_with_ipdapter=True, redraw_condition_image_with_referencenet=True, referencenet_image_path=None, referencenet_model_cfg_path='../../configs/model/referencenet.py', referencenet_model_name='musev_referencenet', save_filetype='mp4', save_images=False, sd_model_cfg_path='../../configs/model/T2I_all_model.py', sd_model_name='majicmixRealv6Fp16', seed=None, strength=0.8, target_datas='boy_dance2', test_data_path='../../configs/infer/testcase_video_famous.yaml', time_size=24, unet_model_cfg_path='../../configs/model/motion_model.py', 
unet_model_name='musev_referencenet', use_condition_image=True, use_video_redraw=True, vae_model_path='../../checkpoints/vae/sd-vae-ft-mse', video_guidance_scale=3.5, video_guidance_scale_end=None, video_guidance_scale_method='linear', video_negative_prompt='V2', video_num_inference_steps=10, video_overlap=1, vision_clip_extractor_class_name='ImageClipVisionFeatureExtractor', vision_clip_model_path='../../checkpoints/IP-Adapter/models/image_encoder', w_ind_noise=0.5, width=None, write_info=False)

running model, T2I SD
{'majicmixRealv6Fp16': {'sd': 'F:\MuseV\configs\model\../../checkpoints\t2i\sd1.5/majicmixRealv6Fp16'}}
lcm: None None
unet_model_params_dict_src dict_keys(['musev', 'musev_referencenet', 'musev_referencenet_pose'])
unet: musev_referencenet F:\MuseV\configs\model../../checkpoints\motion\musev_referencenet
referencenet_model_params_dict_src dict_keys(['musev_referencenet'])
referencenet: musev_referencenet F:\MuseV\configs\model../../checkpoints\motion\musev_referencenet
ip_adapter_model_params_dict_src dict_keys(['IPAdapter', 'IPAdapterPlus', 'IPAdapterPlus-face', 'IPAdapterFaceID', 'musev_referencenet', 'musev_referencenet_pose'])
ip_adapter: musev_referencenet {'ip_image_encoder': 'F:\MuseV\configs\model\../../checkpoints\IP-Adapter\image_encoder', 'ip_ckpt': 'F:\MuseV\configs\model\../../checkpoints\motion\musev_referencenet/ip_adapter_image_proj.bin', 'ip_scale': 1.0, 'clip_extra_context_tokens': 4, 'clip_embeddings_dim': 1024, 'desp': ''}
facein: None None
ip_adapter_face: None None
video_negprompt V2 badhandv4, ng_deepnegative_v1_75t, (((multiple heads))), (((bad body))), (((two people))), ((extra arms)), ((deformed body)), (((sexy))), paintings,(((two heads))), ((big head)),sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (((nsfw))), nipples, extra fingers, (extra legs), (long neck), mutated hands, (fused fingers), (too many fingers)
negprompt V2 badhandv4, ng_deepnegative_v1_75t, (((multiple heads))), (((bad body))), (((two people))), ((extra arms)), ((deformed body)), (((sexy))), paintings,(((two heads))), ((big head)),sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (((nsfw))), nipples, extra fingers, (extra legs), (long neck), mutated hands, (fused fingers), (too many fingers)
2024-04-27 22:18:17,445- musev:482- INFO- vision_clip_extractor, name=ImageClipVisionFeatureExtractor, path=../../checkpoints/IP-Adapter/models/image_encoder
test_model_vae_model_path ../../checkpoints/vae/sd-vae-ft-mse
Keyword arguments {'torch_device': 'cuda'} are not expected by MusevControlNetPipeline and will be ignored.
Loading pipeline components...: 100%|█████████████████████████████████████████████████| 6/6 [00:02<00:00, 2.16steps/s]
Running on local URL: http://127.0.0.1:7860
2024-04-27 22:19:40,225- httpx:1026- INFO- HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
2024-04-27 22:19:40,341- httpx:1026- INFO- HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"

To create a public link, set share=True in launch().
2024-04-27 22:19:50,655- httpx:1026- INFO- HTTP Request: POST https://api.gradio.app/gradio-initiated-analytics/ "HTTP/1.1 200 OK"
2024-04-27 22:19:50,741- httpx:1026- INFO- HTTP Request: POST https://api.gradio.app/gradio-launched-telemetry/ "HTTP/1.1 200 OK"
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "F:\MuseV.glut\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "F:\MuseV.glut\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in call
return await self.app(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\fastapi\applications.py", line 1054, in call
await super().call(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\applications.py", line 123, in call
await self.middleware_stack(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\middleware\errors.py", line 186, in call
raise exc
File "F:\MuseV.glut\lib\site-packages\starlette\middleware\errors.py", line 164, in call
await self.app(scope, receive, _send)
File "F:\MuseV.glut\lib\site-packages\starlette\middleware\cors.py", line 85, in call
await self.app(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\middleware\exceptions.py", line 65, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 64, in wrapped_app
raise exc
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "F:\MuseV.glut\lib\site-packages\starlette\routing.py", line 756, in call
await self.middleware_stack(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\routing.py", line 776, in app
await route.handle(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\routing.py", line 297, in handle
await self.app(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 64, in wrapped_app
raise exc
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "F:\MuseV.glut\lib\site-packages\starlette\routing.py", line 75, in app
await response(scope, receive, send)
File "F:\MuseV.glut\lib\site-packages\starlette\responses.py", line 352, in call
await send(
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 50, in sender
await send(message)
File "F:\MuseV.glut\lib\site-packages\starlette_exception_handler.py", line 50, in sender
await send(message)
File "F:\MuseV.glut\lib\site-packages\starlette\middleware\errors.py", line 161, in _send
await send(message)
File "F:\MuseV.glut\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 511, in send
output = self.conn.send(event=h11.EndOfMessage())
File "F:\MuseV.glut\lib\site-packages\h11_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
File "F:\MuseV.glut\lib\site-packages\h11_connection.py", line 545, in send_with_data_passthrough
writer(event, data_list.append)
File "F:\MuseV.glut\lib\site-packages\h11_writers.py", line 67, in call
self.send_eom(event.headers, write)
File "F:\MuseV.glut\lib\site-packages\h11_writers.py", line 96, in send_eom
raise LocalProtocolError("Too little data for declared Content-Length")
h11._util.LocalProtocolError: Too little data for declared Content-Length

test_data {'name': 'clvi6wcge0000ksc3g3mqvryh', 'prompt': '(masterpiece, best quality, highres:1),(1boy, solo:1),(eye blinks:1.8),(head wave:1.3)', 'condition_images': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg', 'refer_image': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg', 'ipadapter_image': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg', 'height': -1, 'width': -1, 'img_length_ratio': 0.9785932721712538} majicmixRealv6Fp16
{'condition_images': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg',
'height': -1,
'img_length_ratio': 0.9785932721712538,
'ipadapter_image': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg',
'name': 'clvi6wcge0000ksc3g3mqvryh',
'prompt': '(masterpiece, best quality, highres:1),(1boy, solo:1),(eye '
'blinks:1.8),(head wave:1.3)',
'prompt_hash': '41e0f',
'refer_image': './t2v_input_image\clvi6wcge0000ksc3g3mqvryh.jpg',
'width': -1}
test_data_height=1280
test_data_width=704
output_path ./results/m=majicmixRealv6Fp16_rm=musev_referencenet_case=clvi6wcge0000ksc3g3mqvryh_w=704_h=1280_t=12_nb=1_s=91515621_p=41e0f_w=0.001_ms=8.0_s=0.8_g=3.5_c-i=clvi6_r-c=False_w=0.5_V2_r=clv_ip=clv_f=no.mp4
2024-04-27 22:21:24,890- py.warnings:109- WARNING- F:\MuseV\diffusers\src\diffusers\configuration_utils.py:135: FutureWarning: Accessing config attribute vae_scale_factor directly via 'VaeImageProcessor' object attribute is deprecated. Please access 'vae_scale_factor' over 'VaeImageProcessor's config object instead, e.g. 'scheduler.config.vae_scale_factor'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
