How to use Keras layer implemented by myself? #373
To use a custom layer, you need to implement additional conversion code (a custom handler) and register it with the transpiler. The API for registering custom handlers is already implemented, but I'm sorry that the documentation is not ready yet. I expect to publish the documentation soon, possibly as early as tomorrow, so please bear with us until then.

Understood. Thank you!

I pushed new examples about custom layers in Keras. Please check them. (I added a sample showing how to load a custom Keras layer with WebDNN.)
closed |
Could you help me again? I implemented a Keras custom layer, PaddingAdd2D, which adds inputs of the same spatial dimensions but different numbers of channels, zero-padding the missing channels first. I tried to write an IR operator, a converter handler, and a generator handler for this layer, but I may need some time to understand the details of WebDNN ^^; The code of PaddingAdd2D is below.

I will appreciate any suggestions or sample code!
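For concreteness, the intended semantics of PaddingAdd2D can be sketched framework-free in NumPy (a hedged sketch only: `padding_add_2d` is a hypothetical helper name, and NHWC layout is assumed):

```python
import numpy as np

def padding_add_2d(x, y):
    """Zero-pad whichever input has fewer channels along the last
    axis, then add the two tensors elementwise (NHWC assumed)."""
    cx, cy = x.shape[-1], y.shape[-1]
    if cx < cy:
        x = np.pad(x, [(0, 0)] * (x.ndim - 1) + [(0, cy - cx)])
    elif cy < cx:
        y = np.pad(y, [(0, 0)] * (y.ndim - 1) + [(0, cx - cy)])
    return x + y

x = np.ones((1, 16, 16, 8), dtype=np.float32)
y = np.ones((1, 16, 16, 32), dtype=np.float32)
z = padding_add_2d(x, y)
print(z.shape)  # (1, 16, 16, 32)
```

The first 8 output channels hold x + y; the remaining channels pass y through unchanged.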
In my understanding,

```python
# X and Y are tf.Tensor instances
print(X.shape)
# >>> (1, 16, 16, 8)
print(Y.shape)
# >>> (1, 16, 16, 32)
Z = PaddingAdd2D()(X, Y)
print(Z.shape)
# >>> (1, 16, 16, 32)
```

Here, the computation graph of Keras is as follows (figure not preserved in this transcript), and the computation graph of TensorFlow is as follows (figure not preserved).

The TensorFlow computation graph is built by Keras internally. All you have to do is implement a converter handler for each operator in these computation graphs. I don't think you have to implement any WebDNN operator or generator handler, because WebDNN tries to convert the TensorFlow computation graph by … Therefore, you have two choices:

1. Implement a converter handler for …
I've implemented a converter handler for …
Thank you for your kind efforts. I tried convert_keras.py in the latest WebDNN and got the error "NameError: name 'tf' is not defined" at the line `return Lambda(f)` in PaddingChannel2D. I could not find anywhere to put "import tensorflow as tf" to resolve this error.
Can you give me the complete error log along with the command you ran?
Here is the command. (Sorry, I forgot to include it at first and edited this comment.)

bias.py is the custom layer handler that you made for me; model/model.h5 is a model saved by Keras. Here is the output.
This problem seems to be due to Keras. I will investigate it.
Could you help me again? ^^ To avoid a strange error in load_model, I implemented PaddingAdd (PaddingAdd2D above) as a Layer rather than as a function using Lambda.

By specifying the plugin, load_model now seems to succeed. What I tried is,

And I got the following error.
Each converter handler converts another framework's operation (e.g. …) into a WebDNN operation. Your implementation converts the Keras layer into TensorFlow's Pad operation, so the conversion failed. You have to construct the WebDNN computation graph there. Unfortunately, a general padding operation is not supported yet; instead, a concatenation operation is used in the converter for …
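The substitution mentioned above (concatenation instead of padding) works because zero-padding a channel axis is exactly concatenation with a zero tensor along that axis; a quick NumPy check (illustrative only, not WebDNN code):

```python
import numpy as np

x = np.arange(2 * 2 * 3, dtype=np.float32).reshape(1, 2, 2, 3)

# Zero-padding the channel axis from 3 up to 8 channels...
padded = np.pad(x, [(0, 0), (0, 0), (0, 0), (0, 5)])

# ...is equivalent to concatenating a zero tensor along that axis.
zeros = np.zeros(x.shape[:-1] + (5,), dtype=x.dtype)
concatenated = np.concatenate([x, zeros], axis=-1)

assert np.array_equal(padded, concatenated)
```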
Thank you! Is there any way to fall back to the tf.pad handler that you implemented previously for this issue? I understand that I need to build the WebDNN computation graph in the converter, but I think that padding is a more primitive operation than PaddingAdd.
In my understanding, you faced two problems.

The first one is because of Keras (and it was solved?). Then your model can be converted. By default, … Please retry the conversion with the environment variable DEBUG=1.
Yes. And I solved it by implementing a custom layer rather than a function using Lambda. So the second problem you pointed out is actually the solution to the first problem. And since I made the custom layer, I needed to pass a plugin to convert_keras.py for load_model to succeed.

And I got the following error (reproduced with DEBUG=1).

It seems that I need to resolve "TypeError: 'float' object cannot be interpreted as an integer".
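The reported TypeError is the generic behavior when a shape contains floats (for example, the result of a true division); a minimal reproduction in plain NumPy, not WebDNN internals:

```python
import numpy as np

x = np.zeros(24, dtype=np.float32)

# A shape tuple containing floats is rejected:
try:
    x.reshape((2.0, 3.0, 4.0))
    raised = False
except TypeError:
    raised = True  # 'float' object cannot be interpreted as an integer

# Casting each extent to int restores a valid shape:
y = x.reshape(tuple(int(s) for s in (2.0, 3.0, 4.0)))
print(raised, y.shape)
```

This is consistent with the maintainer's diagnosis that the reshape operator was being called with float-valued shape extents.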
Thanks. I understand the real problem now. It seems to be a bug in the TensorFlowConverter handler, which called the reshape operator with a shape of float values. I'll fix it soon.
Can you give me the output of model.summary()?
Here is the output of model.summary().
Thanks. I've fixed the error. Please retry with the latest revision of WebDNN.
Thank you for your efforts! Here is the output.
Oh... Sorry to let you down, but implementing the converter handler yourself is required here.

```python
x = tf.placeholder(np.float32, (2, 3, 4, 5))  # x is a tf.Tensor
y = tf.shape(x)[0]  # y is a scalar tf.Tensor with value 2.
                    # This is a slicing operation down to a scalar value.
```

Currently WebDNN does not support operations like this. I think the following form may work (not sure):

```python
x = tf.placeholder(np.float32, (2, 3, 4, 5))  # x is a tf.Tensor
y = tf.shape(x)[0:1]  # slicing with a range; y keeps rank 1, with value [2]
```
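The difference between the two slicing forms can be seen with plain NumPy indexing, which behaves the same way on a rank-1 shape array (an illustrative analogue, not TensorFlow code):

```python
import numpy as np

shape = np.array([2, 3, 4, 5])  # analogue of the tensor returned by tf.shape(x)

a = shape[0]    # scalar indexing: 0-d result, value 2
b = shape[0:1]  # range slicing:   rank-1 result, value [2]

print(a.ndim, b.ndim)  # 0 1
```

The range-sliced form keeps its rank, which is why it stands a better chance of being representable as a supported tensor operation.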
Could you give me some clues? I am using K.int_shape(input)[-1] to get the number of channels. I think it should be an integer rather than a placeholder because, for example, I need it to specify the pad size for tf.pad. If so, I will try hard-coding the numbers of channels.
I think it has nothing to do with this problem.

```python
class PaddingAdd(Layer):
    """Layer that adds a list of inputs.

    It takes as input a list of tensors, all of the same dimensions
    but with different numbers of channels, and returns a single
    tensor (of the largest shape).
    """

    def __init__(self, **kwargs):
        super(PaddingAdd, self).__init__(**kwargs)

    def call(self, inputs):
        channels = list(map(lambda e: K.int_shape(e)[-1], inputs))
        max_channels = max(channels)
        padded = []
        for (i, e) in enumerate(inputs):
            pad = max_channels - channels[i]
            if pad == 0:
                padded.append(e)
            else:
                paddings = [[0, 0] for _ in range(len(e.shape))]
                paddings[-1][1] = pad
                padded.append(tf.pad(e, paddings))
        return add(padded)
```

This is your custom layer. Here, … For example,

```
# Keras Computation Graph
x1 -+
    +-{PaddingAdd}- y
x2 -+
```

From this Keras computation graph, the following TF computation graph is built:

```
# TF Computation Graph
{Constant}- paddings -+
                      +-{Pad}- padded_x1 -+
x1 -------------------+                   |
                                          +-{Add}- y
x2 ---------------------------------------+
```

Here, no slicing operation appears. Of course, as you said, …
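The `paddings` construction inside `call()` above can be checked in plain NumPy (a sketch reusing the shapes from the earlier example; `np.pad` stands in for `tf.pad`):

```python
import numpy as np

channels = [8, 32]           # channel counts of x1 and x2
max_channels = max(channels)
ndim = 4                     # NHWC tensors

# Reproduce the paddings list built inside call() for x1:
pad = max_channels - channels[0]
paddings = [[0, 0] for _ in range(ndim)]
paddings[-1][1] = pad
print(paddings)  # [[0, 0], [0, 0], [0, 0], [0, 24]]

# Applying it mirrors tf.pad(x1, paddings) in the graph above:
x1 = np.ones((1, 16, 16, 8), dtype=np.float32)
padded_x1 = np.pad(x1, paddings)
print(padded_x1.shape)  # (1, 16, 16, 32)
```

Note that `pad` here is an ordinary Python int derived from static shapes, matching the maintainer's point that no slicing operation appears in the graph.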
OK, so I should make simpler test cases, not only for PaddingAdd but also for the other layers, and narrow down where the problem is, right? Thank you for your collaboration!
Could you help me again? I made a simple reproduction (https://gist.github.com/y-ich/db973fc2a1a1736adf5570a22e87902f). PaddingChannel in padding_channel.py is the simplest general layer for channel padding. What do you think the cause is?
Writing a converter handler solved this problem.
(Written by @Kiikurage)

The original question is in Japanese, but the answer is written in English.
(If English would be better for the question as well, please let me know and I'll rewrite it.)

Keras does not seem to have a layer that adds a learnable bias to each point of the input, so I made a custom layer (Bias). How can I use a Keras model containing this layer with WebDNN? For reference, Bias is a simple layer like the following. (The code was not preserved in this transcript.)
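As the Bias code itself is missing from this transcript, here is a hedged sketch of the described semantics, with NumPy standing in for Keras and all variable names hypothetical: one learnable offset per input position, broadcast over the batch axis.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 16, 16, 3)).astype(np.float32)  # batch of 4

# One trainable offset per input position (every axis except batch);
# in the real layer this would be a weight created in build().
bias = np.zeros(inputs.shape[1:], dtype=np.float32)
bias[0, 0, 0] = 1.5

outputs = inputs + bias  # broadcasts over the batch axis
```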