
Compressed pruned model is the same size as compressed baseline model #47838

Open
LucasStromberg opened this issue Mar 16, 2021 · 3 comments
Labels
ModelOptimizationToolkit (TF Model Optimization Toolkit) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · TF 2.3 (Issues related to TF 2.3) · type:bug (Bug)

Comments

@LucasStromberg

System information

  • Have I written custom code (as opposed to using a stock example script
    provided in TensorFlow):
    No, code from the pruning documentation:
    https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Windows 10

  • TensorFlow installed from (source or binary):
    Anaconda Navigator

  • TensorFlow version (use command below):
    tensorflow-gpu 2.3.0

  • Python version:
    3.7.9

  • CUDA/cuDNN version:
    cudatoolkit 10.1.243
    cudnn 7.6.5

  • GPU model and memory:
    GTX 1070 8GB

Describe the problem

I followed the code provided in the pruning documentation and pruned a model. I expected the compressed pruned model to be smaller than the compressed baseline; in the guide's comparison, the pruned model clearly is smaller. In my case, they are both the same size.

Source code / logs

import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

import numpy as np
import tempfile
import os
import zipfile


def get_gzipped_model_size(model):
  # Returns the size of the gzipped model, in bytes.
  # (os and zipfile are already imported at module level.)

  _, keras_file = tempfile.mkstemp('.h5')
  model.save(keras_file, include_optimizer=False)

  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(keras_file)

  return os.path.getsize(zipped_file)


input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)


def setup_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(20, input_shape=input_shape),
      tf.keras.layers.Flatten()
  ])
  return model

def setup_pretrained_weights():
  model = setup_model()

  model.compile(
      loss=tf.keras.losses.categorical_crossentropy,
      optimizer='adam',
      metrics=['accuracy']
  )

  model.fit(x_train, y_train)

  _, pretrained_weights = tempfile.mkstemp('.tf')

  model.save_weights(pretrained_weights)

  return pretrained_weights


pretrained_weights = setup_pretrained_weights()


def test():
    base_model = setup_model()
    base_model.load_weights(pretrained_weights)
    model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)

    model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)

    print("Size of gzipped baseline model: %.2f bytes" % (get_gzipped_model_size(base_model)))
    print("Size of gzipped pruned model without stripping: %.2f bytes" % (get_gzipped_model_size(model_for_pruning)))
    print("Size of gzipped pruned model with stripping: %.2f bytes" % (get_gzipped_model_size(model_for_export)))


if __name__ == "__main__":
    test()

Output:

Size of gzipped baseline model: 2935.00 bytes
Size of gzipped pruned model without stripping: 3360.00 bytes
Size of gzipped pruned model with stripping: 2935.00 bytes
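For context on why a size difference is expected at all: gzip's savings on a pruned model come from the runs of zero weights that magnitude pruning introduces, so if the stripped model compresses to exactly the baseline size, the weights were presumably never actually zeroed. A minimal stdlib-only sketch (the 400-value array and 50% sparsity are illustrative numbers, not taken from this issue):

```python
import random
import struct
import zlib

random.seed(0)
n = 400

# Dense "weights": n random float32 values, essentially incompressible noise.
dense = struct.pack(f"{n}f", *(random.gauss(0, 1) for _ in range(n)))

# "Pruned" weights: half the values set to exactly 0.0, as magnitude
# pruning would do. The zero floats become 4-byte runs of 0x00.
pruned = struct.pack(
    f"{n}f", *((random.gauss(0, 1) if i % 2 else 0.0) for i in range(n))
)

print("dense  compressed:", len(zlib.compress(dense)), "bytes")
print("pruned compressed:", len(zlib.compress(pruned)), "bytes")
```

The pruned buffer compresses noticeably smaller because DEFLATE encodes the repeated zero bytes cheaply. If both gzipped models come out identical, as in the output above, it suggests the exported weights contain no such zero runs.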

@abattery abattery added the ModelOptimizationToolkit TF Model Optimization Toolkit label Mar 16, 2021
@abattery
Contributor

FYI @daverim

@amahendrakar
Contributor

Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Please find the gist of it here. Thanks!

@amahendrakar amahendrakar added comp:keras Keras related issues TF 2.3 Issues related to TF 2.3 type:bug Bug labels Mar 17, 2021
@amahendrakar amahendrakar assigned ymodak and unassigned amahendrakar Mar 17, 2021
@ymodak ymodak added stat:awaiting tensorflower Status - Awaiting response from tensorflower and removed comp:keras Keras related issues labels Mar 24, 2021
@ymodak ymodak assigned daverim and unassigned ymodak Mar 24, 2021
@sushreebarsa
Contributor

Was able to replicate the issue in TF 2.6.0-dev20210531; please find the gist here. Thanks!
