Tensor shape mismatch when computing apply_rotary_pos_emb #2

Closed · Tomorrowdawn opened this issue Mar 9, 2024 · 5 comments

Tomorrowdawn commented Mar 9, 2024

Description:

When I tried to reproduce the paper's results by following the README, an exception was raised:

return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 59, in forward
    layer_outputs = decoder_layer(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 334, in forward
    hidden_states = self.self_attn(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 118, in forward
    query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 207, in apply_rotary_pos_emb
    q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (12) must match the size of tensor b (384) at non-singleton dimension 1

I traced the call chain and enabled the 'debug' flag in engine.model_run. When I ran it again, this assertion failed:

Traceback (most recent call last):
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 268, in <module>
    draft_model.initialize_cuda_graph(graph_capture_list)  
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 189, in initialize_cuda_graph
    self.callables[decoding_seqlen] = capture_graph(
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 141, in capture_graph
    static_logits = engine.model_run(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 34, in model_run
    assert attention_mask.shape[0] == input_length
AssertionError

I checked the code and found a suspicious line in capture_graph:

static_attn_mask = torch.full((decoding_seqlen, engine.max_length), 0, dtype=dtype, device=device)
static_attn_mask = static_attn_mask[None, None, :, :]

The last line reshapes static_attn_mask to (1, 1, decoding_seqlen, max_length), so its first dimension is 1 and the check attention_mask.shape[0] == input_length is bound to fail.
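
For reference, here is a standalone sketch (not the repository code; the concrete sizes are only illustrative) of why the assertion in model_run trips once those two singleton dimensions are added:

import torch

# Illustrative sizes only; they are not taken from the repository's config.
decoding_seqlen, max_length = 12, 384

static_attn_mask = torch.full((decoding_seqlen, max_length), 0, dtype=torch.float16)
static_attn_mask = static_attn_mask[None, None, :, :]   # shape becomes (1, 1, 12, 384)

print(static_attn_mask.shape)                        # torch.Size([1, 1, 12, 384])
assert static_attn_mask.shape[0] == decoding_seqlen  # fails: 1 != 12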

Tomorrowdawn (Author) commented Mar 10, 2024

I tried commenting out that last line and received essentially the same error again:

Traceback (most recent call last):
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 268, in <module>
    draft_model.initialize_cuda_graph(graph_capture_list)  
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 189, in initialize_cuda_graph
    self.callables[decoding_seqlen] = capture_graph(
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 141, in capture_graph
    static_logits = engine.model_run(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 38, in model_run
    logits = self.model(input_ids=input_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 201, in forward
    outputs = self.model(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 59, in forward
    layer_outputs = decoder_layer(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 334, in forward
    hidden_states = self.self_attn(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_i
mpl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 118, in forward
    query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 207, in apply_rotary_pos_emb
    q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (12) must match the size of tensor b (384) at non-singleton dimension 1

After a thorough investigation of the source code, I found that within the attention implementation the query, key, and value tensors are reshaped as follows:

query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

I printed the tensor shapes:

query_states:  torch.Size([1, 12, 19, 64])
key_states:  torch.Size([1, 12, 19, 64])
cos:  torch.Size([384, 64])
sin:  torch.Size([384, 64])
position_ids:  torch.Size([1, 19])

apply_rotary_pos_emb multiplies cos with the query, and these shapes clearly cannot broadcast. I'm not sure what the original code intended here, so I'm unable to fix this on my own.

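For reference, the mismatch can be reproduced standalone with the shapes printed above (a sketch with random placeholder values, not the repository code):

import torch

# Shapes copied from the prints above; the values themselves are random placeholders.
q   = torch.randn(1, 12, 19, 64)   # (bsz, num_heads, q_len, head_dim)
cos = torch.randn(384, 64)         # full table: (max_length, head_dim)

# The installed apply_rotary_pos_emb only unsqueezes cos before multiplying:
cos = cos.unsqueeze(1)             # (384, 1, 64)
q_embed = q * cos                  # RuntimeError: size 12 vs 384 at non-singleton dimension 1
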
Tomorrowdawn changed the title from "Assert attention_mask.shape[0] == input_length fails" to "Tensor shape mismatch when computing apply_rotary_pos_emb" on Mar 10, 2024

dreaming-panda (Contributor) commented Mar 10, 2024

In apply_rotary_pos_emb, position_ids is used to slice the cos and sin tensors so that they align with the query and key tensors:

cos = cos[position_ids].unsqueeze(unsqueeze_dim)
sin = sin[position_ids].unsqueeze(unsqueeze_dim)
q_embed = (q * cos) + (rotate_half(q) * sin)
k_embed = (k * cos) + (rotate_half(k) * sin)
return q_embed, k_embed

So I don't think a shape-misalignment bug is possible here. Can you go into apply_rotary_pos_emb and print the shapes of the tensors inside?
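
For reference, the shape flow this slicing relies on looks roughly like this (a sketch; the shapes are assumed from the prints above):

import torch

q            = torch.randn(1, 12, 19, 64)   # (bsz, num_heads, seq_len, head_dim)
cos_table    = torch.randn(384, 64)          # (max_length, head_dim)
position_ids = torch.arange(19)[None, :]     # (bsz, seq_len)

cos = cos_table[position_ids].unsqueeze(1)   # (1, 19, 64) -> (1, 1, 19, 64)
q_embed = q * cos                            # broadcasts over the heads dimension
print(q_embed.shape)                         # torch.Size([1, 12, 19, 64])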

Tomorrowdawn (Author) commented

I checked apply_rotary_pos_emb, but the version installed here looks a little different:


def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
    """Applies Rotary Position Embedding to the query and key tensors.

    Args:
        q (`torch.Tensor`): The query tensor.
        k (`torch.Tensor`): The key tensor.
        cos (`torch.Tensor`): The cosine part of the rotary embedding.
        sin (`torch.Tensor`): The sine part of the rotary embedding.
        position_ids (`torch.Tensor`, *optional*):
            Deprecated and unused.
        unsqueeze_dim (`int`, *optional*, defaults to 1):
            The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
            sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
            that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
            k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
            cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
            the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
    Returns:
        `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
    """
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
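
So in this newer signature the caller is apparently expected to pass cos and sin already gathered per position. Gathering them with position_ids on the caller side does make the quoted function broadcast (a standalone sketch against transformers 4.38, not the repository's fix):

import torch
from transformers.models.llama.modeling_llama import apply_rotary_pos_emb

# Shapes from the earlier prints; the values are random placeholders.
q   = torch.randn(1, 12, 19, 64)
k   = torch.randn(1, 12, 19, 64)
cos = torch.randn(384, 64)
sin = torch.randn(384, 64)
position_ids = torch.arange(19)[None, :]

# Pre-gathering by position_ids gives (bsz, seq_len, head_dim), which the
# 4.38-style function quoted above can then unsqueeze and broadcast.
q_embed, k_embed = apply_rotary_pos_emb(q, k, cos[position_ids], sin[position_ids])
print(q_embed.shape)   # torch.Size([1, 12, 19, 64])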

I've discovered that this is a compatibility issue. I have now rolled back from transformers==4.38 to transformers==4.36, and that problem has disappeared, but now issue #1 has occurred:

File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 297, in <module>
    simulation_fast(target_model=target_model, draft_model=draft_model, dataloader=dataloader, T=args.T, to
p_p=args.P,
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 69, in simulation_fast
    spectree = SpecTree(prefix=input_ids.squeeze(0), device='cuda:0', temperature=T,
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Tree/SpecTree.py", line 68, in __init__
    draft_model_outputs = self.draft_model_engine.inference(input_ids=self.tokens[:self.num_nodes].unsqueez
e(0),
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in 
decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 244, in inference
    return self.engine.model_run(input_ids=input_ids, storage_ids=storage_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in 
decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 40, in model_run
    logits = self.model(input_ids=input_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in
 _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in
 _call_implreturn forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 201, in forward
    outputs = self.model(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 59, in forward
    layer_outputs = decoder_layer(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 339, in forward
    hidden_states = self.self_attn(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 132, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: p.attn_bias_ptr is not correctly aligned

dreaming-panda (Contributor) commented

Oh, you need to install torch 2.1.2. Actually, only that torch version (and maybe 2.1.1) is compatible at the moment. I will deal with this later; for now, please switch to torch 2.1.2.
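
For anyone else hitting this, a quick sanity check of the environment against the versions mentioned in this thread (torch 2.1.2, possibly 2.1.1, and transformers 4.36.x):

import torch
import transformers

print("torch:", torch.__version__)                # expect 2.1.2 (or 2.1.1)
print("transformers:", transformers.__version__)  # expect 4.36.x, not 4.38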

Tomorrowdawn (Author) commented

Thank you for your response. After reconfiguring the environment, it indeed runs smoothly now.
