
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType' #40

Closed
dzyjjpy opened this issue Feb 22, 2024 · 5 comments

@dzyjjpy

dzyjjpy commented Feb 22, 2024

@wilson1yan It didn't work. More samples would help, covering both the language and vision versions.

Running `./scripts/run_vision_chat.sh` gives:

```text
Traceback (most recent call last):
  File "/home/jiapeiyang/anaconda3/envs/nlp/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jiapeiyang/anaconda3/envs/nlp/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jiapeiyang/workspace/LWM/lwm/vision_chat.py", line 18, in <module>
    from lwm.vision_llama import VideoLLaMAConfig, FlaxVideoLLaMAForCausalLM
  File "/home/jiapeiyang/workspace/LWM/lwm/vision_llama.py", line 21, in <module>
    from lwm.llama import LLaMAConfig, LLAMA_STANDARD_CONFIGS, FlaxLLaMABlockCollection, RMSNorm
  File "/home/jiapeiyang/workspace/LWM/lwm/llama.py", line 31, in <module>
    from lwm.ring_attention import blockwise_ffn, ring_flash_attention_tpu,
  File "/home/jiapeiyang/workspace/LWM/lwm/ring_attention.py", line 557, in <module>
    class BlockSizes:
  File "/home/jiapeiyang/workspace/LWM/lwm/ring_attention.py", line 563, in BlockSizes
    block_q_major_dkv: int | None = None
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
```

The script contents:

```bash
export llama_tokenizer_path="/home/jiapeiyang/workspace/LWM/models/LWM-Chat-32K-Jax/tokenizer.model"
export vqgan_checkpoint="/home/jiapeiyang/workspace/LWM/models/LWM-Chat-32K-Jax/vqgan"
export lwm_checkpoint="/home/jiapeiyang/workspace/LWM/models/LWM-Chat-32K-Jax/params"
export input_file="/home/jiapeiyang/workspace/LWM/models/LWM-Chat-32K-Jax/test_a.jpg"

python3 -u -m lwm.vision_chat \
    --prompt="What is the video about?" \
    --input_file="$input_file" \
    --vqgan_checkpoint="$vqgan_checkpoint" \
    --mesh_dim='!1,1,8,1' \
    --dtype='fp32' \
    --load_llama_config='7b' \
    --max_n_frames=8 \
    --update_llama_config="dict(sample_mode='text',theta=50000000,max_sequence_length=131072,use_flash_attention=False,scan_attention=False,scan_query_chunk_size=128,scan_key_chunk_size=128,remat_attention='',scan_mlp=False,scan_mlp_chunk_size=2048,remat_mlp='',remat_block='',scan_layers=True)" \
    --load_checkpoint="params::$lwm_checkpoint" \
    --tokenizer.vocab_file="$llama_tokenizer_path" \
    2>&1 | tee ~/output.log
read
```

@jackyin68

Change `--prompt="What is the video about?"` to `--prompt="What is the image about?"`.

@jackyin68

Also try removing `--mesh_dim='!1,1,8,1'`.

@dzyjjpy
Author

dzyjjpy commented Feb 22, 2024

@jackyin68 Thanks. I already tried both the video and image prompts; neither worked. Removing `--mesh_dim` gives the same error.

@wilson1yan
Contributor

This looks like a Python version incompatibility. What version are you using? It should be 3.10 (see the conda env instructions).
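For context, the `X | Y` union syntax in annotations (PEP 604) is only available from Python 3.10; on 3.9 the annotation on the `BlockSizes` dataclass field is evaluated at class-definition time and raises exactly this `TypeError`. A minimal 3.9-compatible sketch of that field uses `typing.Optional` instead (deferring evaluation with `from __future__ import annotations` would also work):

```python
# 3.9-compatible spelling of the field that fails in lwm/ring_attention.py.
# On Python < 3.10, `int | None` in an annotation raises
# TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'.
import dataclasses
from typing import Optional

@dataclasses.dataclass
class BlockSizes:
    # Optional[int] is equivalent to `int | None` and works on 3.9.
    block_q_major_dkv: Optional[int] = None

print(BlockSizes().block_q_major_dkv)  # None
```

This only illustrates the incompatibility; the supported fix here is simply using Python 3.10 as the repo's conda environment specifies.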

@dzyjjpy
Author

dzyjjpy commented Feb 23, 2024

> This looks like a Python version incompatibility - what version are you using? It should be 3.10 (see conda env instructions)

Python version: 3.9.13.
I will try it with Python 3.10. Thanks!

@dzyjjpy dzyjjpy closed this as completed Feb 26, 2024