Error in energy estimation for AveragePooling2D layers #100

Open
RatkoFri opened this issue Oct 21, 2022 · 0 comments
RatkoFri commented Oct 21, 2022

Greetings, I am trying to quantize a network for a KWS application using DS-CNN. The network is described here (LINK) (lines 85 to 141).

When running AutoQKeras, it raises an error during energy estimation for AveragePooling2D layers:

Traceback (most recent call last):
  File "/home/auto_qk.py", line 180, in <module>
    autoqk = AutoQKeras(model, metrics=[keras.metrics.SparseCategoricalAccuracy()], custom_objects=custom_objects, **run_config)
  File "/usr/local/lib/python3.8/dist-packages/qkeras/autoqkeras/autoqkeras_internal.py", line 831, in __init__
    self.hypermodel = AutoQKHyperModel(
  File "/usr/local/lib/python3.8/dist-packages/qkeras/autoqkeras/autoqkeras_internal.py", line 125, in __init__
    self.reference_size = self.target.get_reference(model)
  File "/usr/local/lib/python3.8/dist-packages/qkeras/autoqkeras/forgiving_metrics/forgiving_energy.py", line 121, in get_reference
    energy_dict = q.pe(
  File "/usr/local/lib/python3.8/dist-packages/qkeras/qtools/run_qtools.py", line 85, in pe
    energy_dict = qenergy.energy_estimate(
  File "/usr/local/lib/python3.8/dist-packages/qkeras/qtools/qenergy/qenergy.py", line 302, in energy_estimate
    add_energy = OP[get_op_type(accumulator.output)]["add"](
AttributeError: 'NoneType' object has no attribute 'output'

When I remove the AveragePooling2D layer, AutoQKeras runs without the error. I also tried setting quantization parameters for AveragePooling2D, but with no luck.
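For what it's worth, the crash is the generic Python pattern of an attribute access on a lookup that returned None: judging by the traceback, no accumulator is created for the pooling layer, and `accumulator.output` then fails. A minimal sketch of the failure mode and a defensive guard, using hypothetical names rather than the actual qkeras internals:

```python
# Hypothetical stand-in for the qtools accumulator lookup: it can return None
# for layer types it does not handle (here, pooling).
def create_accumulator(layer_type):
    table = {"Conv2D": "fixed_point_accumulator", "Dense": "fixed_point_accumulator"}
    return table.get(layer_type)  # None for "AveragePooling2D"

def add_energy_unguarded(accumulator):
    # Mirrors the failing line in qenergy.py: crashes when accumulator is None.
    return accumulator.output

def add_energy_guarded(accumulator, default="int32"):
    # Defensive variant: fall back to a default accumulator type.
    return accumulator.output if accumulator is not None else default

acc = create_accumulator("AveragePooling2D")
print(add_energy_guarded(acc))  # falls back to the default instead of raising
```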

Code for AutoQKeras:

### AutoQkeras start

# set quantization configs 

# note: dict keys must be unique; a repeated key silently overrides the earlier entry
quantization_config = {
    "kernel": {
            "binary": 1,
            "stochastic_binary": 1,
            "ternary": 2,
            "stochastic_ternary": 2,
            "quantized_bits(2,0,1,1,alpha=\"auto_po2\")": 2,
            "quantized_bits(3,0,1,1,alpha=\"auto_po2\")": 3,
            "quantized_bits(4,0,1,1,alpha=\"auto_po2\")": 4,
            "quantized_bits(5,0,1,1,alpha=\"auto_po2\")": 5,
            "quantized_bits(6,0,1,1,alpha=\"auto_po2\")": 6
    },
    "bias": {
            "quantized_bits(2,0,1,1,alpha=\"auto_po2\")": 2,
            "quantized_bits(3,0,1,1,alpha=\"auto_po2\")": 3,
            "quantized_bits(4,0,1,1,alpha=\"auto_po2\")": 4,
            "quantized_bits(5,0,1,1,alpha=\"auto_po2\")": 5,
            "quantized_bits(6,0,1,1,alpha=\"auto_po2\")": 6
    },
    "activation": {
            "binary": 1,
            "ternary": 2,
            "quantized_bits(2,0,1,1,alpha=\"auto_po2\")": 2,
            "quantized_bits(3,0,1,1,alpha=\"auto_po2\")": 3,
            "quantized_bits(4,0,1,1,alpha=\"auto_po2\")": 4,
            "quantized_bits(5,0,1,1,alpha=\"auto_po2\")": 5,
            "quantized_bits(6,0,1,1,alpha=\"auto_po2\")": 6
    },
    "linear": {
            "binary": 1,
            "ternary": 2,
            "quantized_bits(2,0,1,1,alpha=\"auto_po2\")": 2,
            "quantized_bits(3,0,1,1,alpha=\"auto_po2\")": 3,
            "quantized_bits(4,0,1,1,alpha=\"auto_po2\")": 4,
            "quantized_bits(5,0,1,1,alpha=\"auto_po2\")": 5,
            "quantized_bits(6,0,1,1,alpha=\"auto_po2\")": 6
    }
}
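As an aside on the config above: a Python dict literal silently keeps only the last value for a repeated key, so duplicated quantized_bits keys drop search options without any warning. A quick stdlib check:

```python
# A dict literal with a repeated key keeps only the last value: the earlier
# entry is discarded silently (Python guarantees last-wins, no error is raised).
d = {
    "quantized_bits(3,0,1,1)": 3,
    "quantized_bits(4,0,1,1)": 4,
    "quantized_bits(3,0,1,1)": 5,  # overrides the first entry
}
print(len(d))                        # 2, not 3
print(d["quantized_bits(3,0,1,1)"])  # 5
```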

# define limits 
limit = {
    "Dense": [4, 4, 4],
    "Conv2D": [4, 4, 4],
    "DepthwiseConv2D": [4, 4, 4],
    "Activation": [4],
    "AveragePooling2D": [4, 4, 4],
    "BatchNormalization": [],
}

# define goal (delta = forgiving factor; set to 8% as in the tutorial)

goal = {
    "type": "energy",
    "params": {
        "delta_p": 8.0,
        "delta_n": 8.0,
        "rate": 2.0,
        "stress": 1.0,
        "process": "horowitz",
        "parameters_on_memory": ["sram", "sram"],
        "activations_on_memory": ["sram", "sram"],
        "rd_wr_on_io": [False, False],
        "min_sram_size": [0, 0],
        "source_quantizers": ["int8"],
        "reference_internal": "int8",
        "reference_accumulator": "int32"
        }
}

# SOME RUN CONFIGS

run_config = {
    "output_dir": Flags.bg_path + "auto_qk_dump",
    "goal": goal,
    "quantization_config": quantization_config,
    "learning_rate_optimizer": False,
    "transfer_weights": False,
    "mode": "random",
    "seed": 42,
    "limit": limit,
    "tune_filters": "layer",
    "tune_filters_exceptions": "^dense",
    # first layer is input, last two layers are softmax and flatten
    "layer_indexes": range(1, len(model.layers)-1),
    "max_trials": 20
    }
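One thing worth double-checking in the run config: `range(1, len(model.layers) - 1)` skips only the first and the last layer; if the last two layers (softmax and flatten) should both be excluded, the upper bound would need to be `len(model.layers) - 2`. With a toy layer list:

```python
# Toy layer list standing in for model.layers (names are illustrative only).
layers = ["input", "conv", "pool", "dense", "flatten", "softmax"]

skip_last_one = list(range(1, len(layers) - 1))  # indexes 1..4, includes "flatten"
skip_last_two = list(range(1, len(layers) - 2))  # indexes 1..3, stops before "flatten"

print([layers[i] for i in skip_last_one])  # ['conv', 'pool', 'dense', 'flatten']
print([layers[i] for i in skip_last_two])  # ['conv', 'pool', 'dense']
```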


# Start autoQkeras 

model.summary()
model.compile(
    #optimizer=keras.optimizers.RMSprop(learning_rate=args.learning_rate),  # Optimizer
    optimizer=keras.optimizers.Adam(learning_rate=Flags.learning_rate),  # Optimizer
    # Loss function to minimize
    loss=keras.losses.SparseCategoricalCrossentropy(),
    # List of metrics to monitor
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)   
#model = keras.models.load_model(Flags.saved_model_path)

custom_objects = {}
autoqk = AutoQKeras(model, metrics=[keras.metrics.SparseCategoricalAccuracy()], custom_objects=custom_objects, **run_config)
autoqk.fit(ds_train, validation_data=ds_val, epochs=Flags.epochs, callbacks=callbacks)

qmodel = autoqk.get_best_model()
qmodel.save_weights(Flags.bg_path + "auto_qk_dump/qmodel.h5")
### AutoQkeras stop