
Is there any plan to expose intermediate CNN outputs in hub with KerasLayer? #453

Closed
jiahuei opened this issue Dec 20, 2019 · 9 comments
Labels: hub, stat:awaiting tensorflower, type:feature


jiahuei commented Dec 20, 2019

Currently, all CNNs on Hub provide only either the feature vector after global pooling or the logits.

However, it would be very useful to have access to the intermediate feature maps while retaining the ability to fine-tune the CNN using KerasLayer in TF 2.0.

Is there any plan to expose intermediate layer outputs in hub with KerasLayer?

gowthamkpr self-assigned this Dec 20, 2019
gowthamkpr added the type:feature, hub, and stat:awaiting tensorflower labels and removed the stat:awaiting response label Dec 20, 2019

jiahuei commented Dec 21, 2019 via email

arnoegw assigned arnoegw and unassigned vbardiovskyg Jan 8, 2020

arnoegw commented Jan 8, 2020

Hi @jiahuei, there are no current plans to expose intermediate outputs systematically.

However, there is an undocumented way to get them out of some TF2 SavedModels exported from TF-Slim, such as https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4: passing return_endpoints=True to the SavedModel's __call__ function changes the output to a dict.

NOTE: This interface is subject to change or removal, and has known issues.

import tensorflow as tf
import tensorflow_hub as hub

l = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4",
    trainable=True,  # Or not, as you please.
    arguments=dict(return_endpoints=True))
images = tf.keras.layers.Input((224, 224, 3))
outputs = l(images)  # A dict mapping endpoint names to tensors.
for k, v in sorted(outputs.items()):
  print(k, v.shape)

Issues to be aware of:

  • Undocumented, subject to change or removal, not available consistently.
  • __call__ computes all outputs (and applies all update ops during training) irrespective of the ones being used later on.
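To illustrate how a dict-returning callable plugs into the Keras functional API (the same pattern `return_endpoints=True` relies on), here is a minimal, self-contained sketch with a toy layer standing in for the SavedModel; the endpoint names here are invented for the example:

```python
import tensorflow as tf

class ToyEndpoints(tf.keras.layers.Layer):
    """Stand-in for a SavedModel __call__ that returns a dict of endpoints."""
    def call(self, x):
        h = tf.keras.activations.relu(x)
        # Return every "endpoint" at once, keyed by name.
        return {"input": x, "relu": h}

inputs = tf.keras.Input((4,))
outputs = ToyEndpoints()(inputs)   # dict: endpoint name -> tensor
# Pick the endpoint you want and build a model around it.
model = tf.keras.Model(inputs, outputs["relu"])
print(model(tf.constant([[-1.0, 0.0, 1.0, 2.0]])))
```

As with the real `return_endpoints=True` interface, every endpoint is computed on each call; selecting one output afterwards does not prune the computation.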


agonojo commented Sep 14, 2020

@jiahuei have you since been able to do this easily with models such as EfficientNet? It really is annoying that we can't easily see intermediate layers. Even when I've tried to transfer my weights over to the tf.keras.applications models, differences in the architectures make it nearly impossible, even though the documentation states they are "identical."
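For models that do have a tf.keras counterpart, the functional API can expose intermediate feature maps directly via `get_layer`; a minimal sketch with a toy CNN (the layer names are illustrative, not from any Hub model):

```python
import tensorflow as tf

# A tiny stand-in CNN to demonstrate the pattern.
inputs = tf.keras.Input((32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, name="block1")(inputs)
x = tf.keras.layers.Conv2D(16, 3, name="block2")(x)
pooled = tf.keras.layers.GlobalAveragePooling2D()(x)
base = tf.keras.Model(inputs, pooled)

# Build a second model that exposes an intermediate feature map
# alongside the final pooled vector.
extractor = tf.keras.Model(
    inputs=base.input,
    outputs={"block1": base.get_layer("block1").output,
             "pooled": base.output})
out = extractor(tf.zeros((1, 32, 32, 3)))
print(out["block1"].shape, out["pooled"].shape)
```

This works because a tf.keras model keeps its layer graph; a hub.KerasLayer wraps an opaque SavedModel, which is exactly why the same trick is not available there.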


jiahuei commented Sep 14, 2020

Hi, it has been a while, but I might be able to retrieve intermediate layers. Check out the code at https://github.com/jiahuei/TF2-pretrained-CNN


egormkn commented Oct 27, 2020

Hi @arnoegw! Are any changes expected to support access to intermediate layers? I've found a lot of issues in this repository about accessing intermediate outputs for BERT and various CNN models, and this issue seems to be the only one where a solution is given. Please consider implementing some kind of official way to use intermediate outputs, as it would significantly expand the capabilities of TF Hub models.


ujjwal-ai commented Dec 1, 2020

@arnoegw
Is this issue receiving any attention? This is one of the most important concerns right now.


akhorlin commented Dec 2, 2020 via email


matfeb commented Aug 19, 2021

Hi,

to inspect the model's weights and their shapes (example with EfficientNet-Lite0), you can try:

import tensorflow_hub as hub

efficientnet_lite0_base_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/efficientnet/lite0/feature-vector/2",
    output_shape=[1280],
    trainable=False
)

print("Number of weights in the model:", len(efficientnet_lite0_base_layer.weights))
print("{:<80} {:<20} {:<10}".format('Layer', 'Shape', 'Type'))

for w in efficientnet_lite0_base_layer.weights:
    # Read shape and dtype directly from the variable instead of parsing
    # its string representation, which is fragile.
    print("{:<80} {:<20} {:<10}".format(w.name, str(tuple(w.shape)), w.dtype.name))

Output:

Number of weights in the model: 245
Layer                                                                            Shape                Type      
efficientnet-lite0/stem/conv2d/kernel:0                                          (3, 3, 3, 32)        float32   
efficientnet-lite0/stem/tpu_batch_normalization/gamma:0                          (32,)                float32   
efficientnet-lite0/stem/tpu_batch_normalization/beta:0                           (32,)                float32   
efficientnet-lite0/stem/tpu_batch_normalization/moving_mean:0                    (32,)                float32   
efficientnet-lite0/stem/tpu_batch_normalization/moving_variance:0                (32,)                float32   
efficientnet-lite0/blocks_0/depthwise_conv2d/depthwise_kernel:0                  (3, 3, 32, 1)        float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization/gamma:0                      (32,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization/beta:0                       (32,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization/moving_mean:0                (32,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization/moving_variance:0            (32,)                float32   
efficientnet-lite0/blocks_0/conv2d/kernel:0                                      (1, 1, 32, 16)       float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization_1/gamma:0                    (16,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization_1/beta:0                     (16,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization_1/moving_mean:0              (16,)                float32   
efficientnet-lite0/blocks_0/tpu_batch_normalization_1/moving_variance:0          (16,)                float32   
.
.
.
efficientnet-lite0/blocks_15/conv2d/kernel:0                                     (1, 1, 192, 1152)    float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization/gamma:0                     (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization/beta:0                      (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization/moving_mean:0               (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization/moving_variance:0           (1152,)              float32   
efficientnet-lite0/blocks_15/depthwise_conv2d/depthwise_kernel:0                 (3, 3, 1152, 1)      float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_1/gamma:0                   (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_1/beta:0                    (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_1/moving_mean:0             (1152,)              float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_1/moving_variance:0         (1152,)              float32   
efficientnet-lite0/blocks_15/conv2d_1/kernel:0                                   (1, 1, 1152, 320)    float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_2/gamma:0                   (320,)               float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_2/beta:0                    (320,)               float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_2/moving_mean:0             (320,)               float32   
efficientnet-lite0/blocks_15/tpu_batch_normalization_2/moving_variance:0         (320,)               float32   
efficientnet-lite0/head/conv2d/kernel:0                                          (1, 1, 320, 1280)    float32   
efficientnet-lite0/head/tpu_batch_normalization/gamma:0                          (1280,)              float32   
efficientnet-lite0/head/tpu_batch_normalization/beta:0                           (1280,)              float32   
efficientnet-lite0/head/tpu_batch_normalization/moving_mean:0                    (1280,)              float32   
efficientnet-lite0/head/tpu_batch_normalization/moving_variance:0                (1280,)              float32   
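Note that the variable names follow a scope hierarchy (model/block/op/variable), so you can group them per block by splitting on "/"; a small sketch using a few names from the listing above:

```python
from collections import Counter

# A few weight names copied from the listing above.
names = [
    "efficientnet-lite0/stem/conv2d/kernel:0",
    "efficientnet-lite0/blocks_0/depthwise_conv2d/depthwise_kernel:0",
    "efficientnet-lite0/blocks_0/conv2d/kernel:0",
    "efficientnet-lite0/head/conv2d/kernel:0",
]
# The second path component is the block scope ("stem", "blocks_0", ...).
scopes = Counter(name.split("/")[1] for name in names)
print(scopes)  # e.g. Counter({'blocks_0': 2, 'stem': 1, 'head': 1})
```

Keep in mind this only reveals the weights, not the intermediate activations the issue asks for.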


nss-ysasaki commented Nov 24, 2021

@akhorlin Hmmm, could you elaborate a bit? I am wondering how I can get the outputs of intermediate layers from tf.Variables.
