GitHub Actions / Unit Test Results failed Aug 31, 2023 in 0s

24 fail, 12 skipped, 2 790 pass in 1h 34m 21s

      6 files        6 suites   1h 34m 21s ⏱️
2 826 tests    2 790 ✔️    12 💤    24 ❌
2 869 runs     2 824 ✔️    21 💤    24 ❌

Results for commit 20a532f.
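
The failures annotated below all come from the same parametrized test, test_tabtransformer_combiner_number_or_binary_without_category. Rather than scrolling the annotations, the failed test ids can be pulled straight from the uploaded artifact. A minimal sketch, assuming the pytest.xml follows the standard JUnit schema that pytest emits (the path mirrors the artifact referenced below and may differ locally):

    # List failed test ids from the uploaded JUnit XML artifact.
    import xml.etree.ElementTree as ET

    tree = ET.parse("artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml")
    failed = [
        f"{case.get('classname')}::{case.get('name')}"
        for case in tree.iter("testcase")
        if case.find("failure") is not None
    ]
    print(f"{len(failed)} failed test(s)")
    for test_id in failed:
        print(test_id)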

Annotations

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-concat-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb94f3d0>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((20 - (1 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
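
The raw output above shows that no parameter in the combiner changed (upc is 0): even the layer_norm and fc_stack tensors, which are expected to update when the transformer stack is bypassed, appear in the not-updated list. check_module_parameters_updated is a Ludwig test helper; purely to illustrate the kind of check it implies, a generic parameter-update probe in PyTorch could look like the following hypothetical sketch (not Ludwig's implementation):

    # Hypothetical sketch of a parameter-update check, not Ludwig's
    # check_module_parameters_updated: snapshot the parameters, run one
    # optimization step against a target, and report which tensors did not change.
    import torch

    def count_updated_parameters(module, inputs, target):
        before = {name: p.detach().clone() for name, p in module.named_parameters()}
        optimizer = torch.optim.SGD(module.parameters(), lr=0.1)
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(module(*inputs), target)
        loss.backward()
        optimizer.step()
        not_updated = [
            name
            for name, p in module.named_parameters()
            if torch.equal(before[name], p.detach())
        ]
        return len(before) - len(not_updated), not_updated

    # Example with a plain linear layer standing in for the combiner:
    upc, not_updated = count_updated_parameters(
        torch.nn.Linear(4, 2), (torch.randn(8, 4),), torch.randn(8, 2)
    )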

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-concat-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba66320>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((20 - (1 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-concat-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb8c7310>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((36 - (2 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-concat-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba643d0>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((36 - (2 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-sum-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb8f9d80>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((20 - (1 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-sum-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba66620>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((20 - (1 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-sum-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba6a800>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((36 - (2 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[None-fc_layers1-sum-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba649d0>})
embed_input_feature_name = None, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((36 - (2 * 16)) - 0)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-concat-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba74730>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
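
Across the variants in this report, the expected updated-parameter count works out the same way: 16 transformer-block parameters per layer are subtracted from the trainable parameter count, and one more is subtracted when embed_input_feature_name is set (the extra embed_i_f_name_layer.embeddings.weight tensor listed above). Every case therefore expects 4 updated parameters but observes 0. A short sketch of that arithmetic, with the constants taken from the assertion output rather than from the Ludwig source:

    # Worked arithmetic for the expected updated-parameter count in the failing assertion.
    PARAMETERS_IN_TRANSFORMER_BLOCK = 16  # inferred from the raw output, not the Ludwig source

    def expected_updated(tpc, num_layers, embed_input_feature_name):
        embed_adjustment = 1 if embed_input_feature_name is not None else 0
        return tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - embed_adjustment

    assert expected_updated(20, 1, None) == 4  # (20 - (1 * 16)) - 0
    assert expected_updated(36, 2, None) == 4  # (36 - (2 * 16)) - 0
    assert expected_updated(21, 1, 64) == 4    # (21 - (1 * 16)) - 1
    assert expected_updated(37, 2, 64) == 4    # (37 - (2 * 16)) - 1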

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-concat-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41abae0850>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.

        # The entire transformer stack is bypassed because there are no categorical input features. Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-concat-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba77040>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-concat-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba77c40>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-sum-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb972b90>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
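The counts fpc, tpc, upc and the not_updated list come from check_module_parameters_updated. As a rough, hypothetical illustration of what such a check does (a stand-in sketch, not Ludwig's actual helper): snapshot the trainable parameters, run one forward/backward/optimizer step against the target, and report which tensors changed.

import torch

def count_updated_parameters(module, module_input, target):
    # Illustrative stand-in for check_module_parameters_updated (not the Ludwig helper).
    before = {n: p.detach().clone() for n, p in module.named_parameters() if p.requires_grad}
    optimizer = torch.optim.SGD(module.parameters(), lr=0.1)

    output = module(*module_input)["combiner_output"]     # the combiner returns a dict, as in the test
    loss = torch.nn.functional.mse_loss(output, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    not_updated = [
        n for n, p in module.named_parameters()
        if p.requires_grad and torch.equal(before[n], p.detach())
    ]
    fpc = sum(1 for p in module.parameters() if not p.requires_grad)  # frozen parameter count
    tpc = len(before)                                                 # trainable parameter count
    upc = tpc - len(not_updated)                                      # updated parameter count
    return fpc, tpc, upc, not_updated

Under that reading, upc == 0 in these failures means the loss computed from combiner_output produced no update for any parameter of the TabTransformerCombiner in the number/binary-only configurations.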

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-sum-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba9d3f0>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-sum-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb971ab0>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[64-fc_layers1-sum-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab988d60>})
embed_input_feature_name = 64, fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-concat-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41bb970bb0>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
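To reproduce any single parametrization locally, the bracketed node ID from the header can be handed straight to pytest. A minimal driver (assuming it is run from the repository root) could look like:

import pytest

# Re-run one failing parametrization by its node ID, taken from the header above.
pytest.main([
    "tests/ludwig/combiners/test_combiners.py::"
    "test_tabtransformer_combiner_number_or_binary_without_category"
    "[add-fc_layers1-concat-1-feature_list0]",
    "-x",   # stop at the first failure
    "-q",   # terse output
])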

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-concat-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab98b1c0>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-concat-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab981960>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-concat-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab988760>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'concat', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

@github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-sum-1-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab988ac0>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
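
Reading the assertion in the annotation above: with a trainable parameter count (tpc) of 21, one transformer layer of 16 parameters excluded because the stack is bypassed when no category features are present, and one more parameter excluded for the input-feature-name embedding, the test expects 21 - 16 - 1 = 4 parameters to update (the four outside the stack and the embedding: layer_norm.weight, layer_norm.bias, and the fc_stack weight and bias), yet all of them appear in the not-updated list and the run reported upc == 0. A minimal sketch of that bookkeeping follows; expected_updated_params is an illustrative helper, and the value 16 for PARAMETERS_IN_TRANSFORMER_BLOCK is inferred from the "(1 * 16)" term in the output rather than taken from Ludwig's source:

    PARAMETERS_IN_TRANSFORMER_BLOCK = 16  # assumed from the failure output above

    def expected_updated_params(tpc, num_layers, embed_input_feature_name):
        # transformer stack parameters are excluded: the stack is bypassed for
        # datasets with only NUMBER/BINARY input features
        bypassed = num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK
        # one more parameter (the input-feature-name embedding weight) is excluded
        # when embed_input_feature_name is set
        embed_adjustment = 1 if embed_input_feature_name is not None else 0
        return tpc - bypassed - embed_adjustment

    # values reported in the failure above: tpc=21, num_layers=1, embedding enabled
    assert expected_updated_params(21, 1, "add") == 4  # the run instead observed upc == 0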

Check warning on line 0 in tests.ludwig.combiners.test_combiners

See this annotation in the file changed.

@github-actions github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-sum-1-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab990160>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 1

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((21 - (1 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

See this annotation in the file changed.

@github-actions github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-sum-2-feature_list0] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41aba7e410>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError

Check warning on line 0 in tests.ludwig.combiners.test_combiners

See this annotation in the file changed.

@github-actions github-actions / Unit Test Results

test_tabtransformer_combiner_number_or_binary_without_category[add-fc_layers1-sum-2-feature_list2] (tests.ludwig.combiners.test_combiners) failed

artifacts/Unit Test Results (Python 3.10 not distributed)/pytest.xml
Raw output
features_to_test = ({'feature_00': {'encoder_output': tensor([[-1.0579],
        [ 1.9822],
        [ 0.7473],
        [-0.4933],
       ...        [-0.1815],
        [-0.2617]])}}, {'feature_00': <test_combiners.PseudoInputFeature object at 0x7f41ab990a60>})
embed_input_feature_name = 'add', fc_layers = [{'output_size': 256}]
reduce_output = 'sum', num_layers = 2

    @pytest.mark.parametrize(
        "feature_list",  # defines parameter for fixture features_to_test()
        [
            [
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("binary", [BATCH_SIZE, 1]),
                ("binary", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
            ],
            [
                ("number", [BATCH_SIZE, 1]),
                ("number", [BATCH_SIZE, 1]),
            ],
        ],
    )
    @pytest.mark.parametrize("num_layers", [1, 2])
    @pytest.mark.parametrize("reduce_output", ["concat", "sum"])
    @pytest.mark.parametrize("fc_layers", [None, [{"output_size": 256}]])
    @pytest.mark.parametrize("embed_input_feature_name", [None, 64, "add"])
    def test_tabtransformer_combiner_number_or_binary_without_category(
        features_to_test: tuple,
        embed_input_feature_name: Optional[Union[int, str]],
        fc_layers: Optional[list],
        reduce_output: str,
        num_layers: int,
    ) -> None:
        # make repeatable
        set_random_seed(RANDOM_SEED)
    
        # retrieve simulated encoder outputs and input features for the test
        encoder_outputs, input_features = features_to_test
    
        # setup combiner to test
        combiner = TabTransformerCombiner(
            input_features=input_features,
            config=load_config(
                TabTransformerCombinerConfig,
                embed_input_feature_name=embed_input_feature_name,
                # emulates parameters passed from combiner def
                num_layers=num_layers,  # number of transformer layers
                fc_layers=fc_layers,  # fully_connected layer definition
                reduce_output=reduce_output,  # sequence reducer
            ),
        ).to(DEVICE)
    
        # concatenate encoder outputs
        combiner_output = combiner(encoder_outputs)
    
        check_combiner_output(combiner, combiner_output, BATCH_SIZE)
    
        # check for parameter updating
        target = torch.randn(combiner_output["combiner_output"].shape)
        fpc, tpc, upc, not_updated = check_module_parameters_updated(
            combiner,
            (encoder_outputs,),
            target,
        )
    
        # Adjustments to the trainable parameter count (tpc) in the following assertion checks are needed
        # to account for the different code paths taken in the TabTransformerCombiner forward() method due to the
        # combination of input feature types (NUMBER, BINARY, CATEGORY) in the dataset and parameters used to
        # instantiate the TabTransformerCombiner object.
    
        # The entire transformer stack is bypassed because there are no categorical input features.  Subtract the
        # number of parameters in the transformer stack to account for this situation.
    
>       assert upc == (
            tpc - num_layers * PARAMETERS_IN_TRANSFORMER_BLOCK - (1 if embed_input_feature_name is not None else 0)
        ), f"Failed to update parameters. Parameters not updated: {not_updated}"
E       AssertionError: Failed to update parameters. Parameters not updated: ['embed_i_f_name_layer.embeddings.weight', 'layer_norm.weight', 'layer_norm.bias', 'transformer_stack.layers.0.self_attention.query_dense.weight', 'transformer_stack.layers.0.self_attention.query_dense.bias', 'transformer_stack.layers.0.self_attention.key_dense.weight', 'transformer_stack.layers.0.self_attention.key_dense.bias', 'transformer_stack.layers.0.self_attention.value_dense.weight', 'transformer_stack.layers.0.self_attention.value_dense.bias', 'transformer_stack.layers.0.self_attention.combine_heads.weight', 'transformer_stack.layers.0.self_attention.combine_heads.bias', 'transformer_stack.layers.0.layernorm1.weight', 'transformer_stack.layers.0.layernorm1.bias', 'transformer_stack.layers.0.fully_connected.0.weight', 'transformer_stack.layers.0.fully_connected.0.bias', 'transformer_stack.layers.0.fully_connected.2.weight', 'transformer_stack.layers.0.fully_connected.2.bias', 'transformer_stack.layers.0.layernorm2.weight', 'transformer_stack.layers.0.layernorm2.bias', 'transformer_stack.layers.1.self_attention.query_dense.weight', 'transformer_stack.layers.1.self_attention.query_dense.bias', 'transformer_stack.layers.1.self_attention.key_dense.weight', 'transformer_stack.layers.1.self_attention.key_dense.bias', 'transformer_stack.layers.1.self_attention.value_dense.weight', 'transformer_stack.layers.1.self_attention.value_dense.bias', 'transformer_stack.layers.1.self_attention.combine_heads.weight', 'transformer_stack.layers.1.self_attention.combine_heads.bias', 'transformer_stack.layers.1.layernorm1.weight', 'transformer_stack.layers.1.layernorm1.bias', 'transformer_stack.layers.1.fully_connected.0.weight', 'transformer_stack.layers.1.fully_connected.0.bias', 'transformer_stack.layers.1.fully_connected.2.weight', 'transformer_stack.layers.1.fully_connected.2.bias', 'transformer_stack.layers.1.layernorm2.weight', 'transformer_stack.layers.1.layernorm2.bias', 'fc_stack.stack.0.layers.0.weight', 'fc_stack.stack.0.layers.0.bias']
E       assert 0 == ((37 - (2 * 16)) - 1)

tests/ludwig/combiners/test_combiners.py:742: AssertionError
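
These failures all trip the same assertion at tests/ludwig/combiners/test_combiners.py line 742, each time with upc == 0. For local debugging, a single failing parametrization can be re-run by its pytest node id; the snippet below is only a sketch and assumes a Ludwig development checkout with the test suite available (the node id is copied from the annotation above):

    import pytest

    # Sketch only: re-run one failing parametrization by its node id from a local
    # Ludwig checkout. pytest.main returns an exit code rather than raising on failure.
    exit_code = pytest.main([
        "-rA",  # report a summary line for every test outcome
        "tests/ludwig/combiners/test_combiners.py::"
        "test_tabtransformer_combiner_number_or_binary_without_category"
        "[add-fc_layers1-sum-2-feature_list2]",
    ])
    print(exit_code)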

Check notice on line 0 in .github

See this annotation in the file changed.

@github-actions github-actions / Unit Test Results

12 skipped tests found

There are 12 skipped tests; see "Raw output" for the full list.
Raw output
tests.integration_tests.test_horovod ‑ test_horovod_gpu_memory_limit
tests.ludwig.automl.test_base_config
tests.ludwig.automl.test_utils
tests.ludwig.backend.test_ray
tests.ludwig.benchmarking.test_profiler
tests.ludwig.data.test_ray_data
tests.ludwig.utils.test_fs_utils ‑ test_get_fs_and_path_invalid_windows
tests.ludwig.utils.test_hyperopt_ray_utils ‑ test_grid_strategy[test_1]
tests.ludwig.utils.test_hyperopt_ray_utils ‑ test_grid_strategy[test_2]
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[ames_housing.ecd.yaml]
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[mercedes_benz_greener.ecd.yaml]
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[sarcos.ecd.yaml]