
NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE. #14

Closed
Sequential-circuits opened this issue Apr 19, 2023 · 6 comments

Comments

@Sequential-circuits

I can't use it, as it always says NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.

@haotian-liu
Owner

This seems to be an error in the backend.

  1. Can you see the model list at the top left?
  2. There may be an error in the model worker. Can you paste the error message here?
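
(If the model list is empty, one way to check whether the worker has registered is to query the controller's /list_models endpoint directly. This assumes the controller is running on its default port, 10000, as in the commands later in this thread:

curl -X POST http://localhost:10000/list_models

It should return a JSON object whose "models" field contains your model name.)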

@DifferentComputers

DifferentComputers commented Apr 25, 2023

I get the same error, but I suspect it's because I may be running it on entirely inadequate hardware. Alternatively, I may not have the model data installed correctly.

I don't see any model worker error. That would appear in the command-line process, correct?

I don't see any "model list", at least not on the webpage where the "NETWORK ERROR" appears in response to every attempt to use the chatbot.

@haotian-liu
Owner

Hi @DifferentComputers, sorry that I just saw this comment. An empty model list is not normal behavior. You need to start the Gradio demo after the worker is fully loaded in order to get the model list. Please let me know if you have further concerns.
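
For reference, the expected startup order is controller first, then the model worker, then the Gradio web server, along these lines (commands adapted from the repo README; the model path here is the README's example and should be replaced with your own):

python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-7b
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload

Launch the web server only after the worker logs that the model has loaded and it has registered with the controller.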

@corleytd

corleytd commented May 3, 2023

I met the same problem, "NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.", but my model list is not empty; it contains a model to which I previously applied the delta weights. The error still occurs. Some details are as follows:

[screenshots attached: the model list and the error message]

Thank you!

@haotian-liu
Owner

@corleytd Hi, please see my response in #89. Thanks.

@aymenabid-lab

I have the following problem; I want to load the model from my PC:

(llava) C:\Users\aymen\LLaVA>python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path C:/Users/aymen/llava-1.5-7b-hf
2024-03-04 12:43:53 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='C:/Users/aymen/llava-1.5-7b-hf', model_base=None, model_name=None, device='cuda', multi_modal=False, limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=False, use_flash_attn=False)
2024-03-04 12:43:53 | INFO | model_worker | Loading the model llava-1.5-7b-hf on worker 62e548 ...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
2024-03-04 12:43:53 | INFO | accelerate.utils.modeling | We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00, 8.10it/s]
Some weights of LlavaLlamaForCausalLM were not initialized from the model checkpoint at C:/Users/aymen/llava-1.5-7b-hf and are newly initialized: ['layers.8.self_attn.o_proj.weight', 'layers.3.self_attn.o_proj.weight', 'layers.22.mlp.down_proj.weight', ... several hundred per-layer weight names ..., 'embed_tokens.weight', 'norm.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2024-03-04 12:54:35 | WARNING | root | Some parameters are on the meta device device because they were offloaded to the cpu.
2024-03-04 12:54:35 | ERROR | stderr | Traceback (most recent call last):
2024-03-04 12:54:35 | ERROR | stderr | File "C:\ProgramData\anaconda3\envs\llava\lib\runpy.py", line 196, in _run_module_as_main
2024-03-04 12:54:35 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-03-04 12:54:35 | ERROR | stderr | File "C:\ProgramData\anaconda3\envs\llava\lib\runpy.py", line 86, in _run_code
2024-03-04 12:54:35 | ERROR | stderr | exec(code, run_globals)
2024-03-04 12:54:35 | ERROR | stderr | File "C:\Users\aymen\LLaVA\llava\serve\model_worker.py", line 277, in <module>
2024-03-04 12:54:35 | ERROR | stderr | worker = ModelWorker(args.controller_address,
2024-03-04 12:54:35 | ERROR | stderr | File "C:\Users\aymen\LLaVA\llava\serve\model_worker.py", line 65, in __init__
2024-03-04 12:54:35 | ERROR | stderr | self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
2024-03-04 12:54:35 | ERROR | stderr | File "C:\Users\aymen\LLaVA\llava\model\builder.py", line 156, in load_pretrained_model
2024-03-04 12:54:35 | ERROR | stderr | if not vision_tower.is_loaded:
2024-03-04 12:54:35 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'is_loaded'
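
A hedged reading of the log above, not a confirmed diagnosis: llava-1.5-7b-hf is the Transformers-format ("llava-hf") checkpoint, whose config fields and weight names differ from what this repo's loader expects. That would account for both the long list of "newly initialized" weights (no key names matched) and the final AttributeError (the config carries no mm_vision_tower entry, so get_vision_tower() returns None before vision_tower.is_loaded is checked). A quick check, using the path from the command above:

python -c "import json; print(json.load(open(r'C:/Users/aymen/llava-1.5-7b-hf/config.json')).get('mm_vision_tower'))"

If this prints None, try a repo-format checkpoint such as liuhaotian/llava-v1.5-7b instead. Separately, the "offloaded to the cpu" warning suggests the GPU is running out of memory; the worker's --load-8bit and --load-4bit options (visible in the Namespace dump above) reduce the memory footprint.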
