```
text-generation-launcher --model-id bigscience/bloom-7b1 --num-shard 1 --port 8889

2023-03-13T04:57:41.703495Z INFO text_generation_launcher: Args { model_id: "bigscience/bloom-7b1", revision: None, sharded: None, num_shard: Some(1), quantize: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1000, max_total_tokens: 1512, max_batch_size: 32, max_waiting_tokens: 20, port: 8889, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: None, weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None }
2023-03-13T05:03:52.114048Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-03-13T05:03:52.514407Z INFO text_generation_launcher: Shard 0 ready in 370.805438546s
2023-03-13T05:03:52.604007Z INFO text_generation_launcher: Starting Webserver
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Model \"bigscience/bloom-7b1\" on the Hub doesn't have a tokenizer"', router/src/main.rs:101:70
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
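For what it's worth, `bigscience/bloom-7b1` does have a tokenizer on the Hub, so the panic reads like the router failing to fetch it (network, proxy, or a transient Hub error) rather than the file genuinely being absent. One way to rule that out is to confirm, from the same machine, that the tokenizer files are downloadable. This is a diagnostic sketch using `transformers` and `huggingface_hub`, which is my assumption about what to check, not the router's own (Rust) code path:

```python
# Diagnostic sketch (an assumption, not the router's actual code path):
# verify that the model's tokenizer files are reachable from this machine.
# Requires: pip install transformers huggingface_hub
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# The panic complains about the Hub tokenizer, so first check that
# tokenizer.json can be downloaded at all.
path = hf_hub_download(repo_id="bigscience/bloom-7b1", filename="tokenizer.json")
print("tokenizer.json cached at:", path)

# Then sanity-check that it actually loads as a tokenizer.
tok = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
print(type(tok).__name__, "loaded, vocab size:", tok.vocab_size)
```

If both calls succeed, the tokenizer exists and is reachable, which would point to a transient connectivity or Hub problem at launch time rather than a missing file.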
Using 4 GPUs, this problem inexplicably disappeared.
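For reference, the 4-GPU run would presumably look something like the following (assuming one shard per GPU via `--num-shard`, as in the flags above; the exact command wasn't posted):

```
text-generation-launcher --model-id bigscience/bloom-7b1 --num-shard 4 --port 8889
```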