```python
vec = SequencePoolingLayer(combiner, supports_masking=True)(seq_input)
```

```python
def call(self, seq_value_len_list, mask=None, **kwargs):
    if self.supports_masking:
        if mask is None:
            raise ValueError(
                "When supports_masking=True,input must support masking")
        uiseq_embed_list = seq_value_len_list
        mask = tf.cast(mask, tf.float32)  # tf.to_float(mask)
        user_behavior_length = reduce_sum(mask, axis=-1, keep_dims=True)
        mask = tf.expand_dims(mask, axis=2)
```
I don't understand this interface; that's my question: where does the `mask` argument come from?
I've run into this problem too. Have you solved it?
I've run into this problem too. Have you solved it? I asked ChatGPT 3.5, and this is the answer it gave: in this example, the `SequencePoolingLayer` is created with `supports_masking=True`, so the layer supports masking on its input. In the `call` method, if the embedding layer's `mask_zero` parameter is set to `True`, the `mask` argument is automatically set to the input's mask tensor. In that case `mask` is a boolean tensor of shape `(batch_size, sequence_length)`, where `True` means the timestep should be kept and `False` means it should be ignored; `SequencePoolingLayer` uses this mask to skip the masked timesteps. So in the code above, if the embedding layer's `mask_zero` is `True`, the `mask` argument is filled in automatically when `SequencePoolingLayer`'s `call` method is invoked. Otherwise `mask` is `None`, in which case `SequencePoolingLayer` applies no mask and keeps the information from all timesteps.
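To make the pooling behavior described above concrete, here is a minimal NumPy sketch of masked mean pooling. This is an illustration of the technique, not DeepCTR's actual implementation; the function name and shapes are assumptions for the example:

```python
import numpy as np

def masked_mean_pool(seq_embed, mask):
    """Mean-pool a padded sequence, ignoring masked timesteps.

    seq_embed: (batch, seq_len, dim) float array of embeddings
    mask:      (batch, seq_len) boolean array; True = keep the timestep
    """
    mask = mask.astype(np.float32)              # (batch, seq_len)
    lengths = mask.sum(axis=-1, keepdims=True)  # valid timesteps per sample
    mask = np.expand_dims(mask, axis=2)         # (batch, seq_len, 1) for broadcasting
    summed = (seq_embed * mask).sum(axis=1)     # sum over kept timesteps only
    return summed / np.maximum(lengths, 1.0)    # guard against all-masked rows

# The last timestep is padding and is excluded from the mean.
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
msk = np.array([[True, True, False]])
print(masked_mean_pool(emb, msk))  # [[2. 3.]]
```

The `expand_dims` and length computation mirror the `tf.expand_dims(mask, axis=2)` and `reduce_sum(mask, axis=-1, keep_dims=True)` lines in the `call` method quoted above.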
If you step through in debug mode you can see that `mask` is in fact not `None`.