
lavfi: Add Documentation to Native Backend Layers #396

Closed · wants to merge 5 commits

Conversation

shubhanshu02
Contributor

Description

Add documentation for the Average Pooling Layer.

Questions

@guoyejun, I have a few questions regarding the Average Pooling Layer.

  1. Is the dnn_size returned by ff_dnn_load_layer_avg_pool the size of the layer or the size of the parameters?
  2. I noticed the padding method in the Average Pooling Layer can be either SAME or VALID, but this isn't checked until execution starts. Should we have an explicit check for this while loading the layer?
  3. Also, in ff_dnn_execute_layer_avg_pool, if the execution is successful, the function returns 0, but when it fails, it returns DNN_ERROR. I think it would be better to return DNN_SUCCESS, even though technically both are the same.

@shubhanshu02 shubhanshu02 changed the title lavfi/dnn/dnn_backend_native_layer_avgpool.h: Add Documentation lavfi: Add Documentation to Native Backend Layers May 3, 2021
@guoyejun
Collaborator

guoyejun commented May 6, 2021

  1. it is the size of the content for avg_pool saved in the .model file.
  2. you are right, we need a check in ff_dnn_load_layer_avg_pool
  3. yes, DNN_SUCCESS is better for now. Actually, in my plan, we'd change DNN_ERROR to specific errors, for example AVERROR(EINVAL) or AVERROR(ENOMEM). The whole dnn module needs such a change.

@guoyejun
Collaborator

guoyejun commented May 6, 2021

@Semmer2 please help to review the doc, thanks

@Semmer2
Collaborator

Semmer2 commented May 6, 2021

  1. Actually, the TF model supports either SAME or VALID as the layer's padding method parameter for now, and our native model can only be generated from a TF model at the moment. The check is necessary but not urgent.
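
For reference, the two modes differ in how the output size is computed per spatial dimension; a minimal sketch (helper names are made up, not from the FFmpeg tree):

#include <math.h>

/* TF-style output length for one spatial dimension */
static int out_len_valid(int in, int kernel, int stride)
{
    /* VALID: no padding, the window must fit entirely inside the input */
    return (int)ceilf((in - kernel + 1) / (float)stride);
}

static int out_len_same(int in, int stride)
{
    /* SAME: the input is padded so every element is covered */
    return (int)ceilf(in / (float)stride);
}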

* like strides, padding method, and kernel size after
* parsing from the model file context.
*
* Possible values for the padding method are SAME and VALID.
Collaborator

maybe no need to specify the value?

Contributor Author

I thought that the padding method should be correctly selected to load the layer. But yes, since it is parsed from the model file context, maybe we don't need to specify its value. 🤔

Contributor Author

Should we specify the values for the padding method in the typedef for the layer parameters?

@shubhanshu02
Contributor Author

  1. you are right, we need a check in ff_dnn_load_layer_avg_pool

@guoyejun, do you mean a check like this?

if (avgpool_params->padding_method != SAME && avgpool_params->padding_method != VALID) {
    av_freep(&avgpool_params);
    return 0;
}
  1. yes, DNN_SUCCESS is better for now. Actually, in my plan, we'd change DNN_ERROR to specific errors, for example AVERROR(EINVAL) or AVERROR(ENOMEM). The whole dnn module needs such a change.

Shall I change the return value from 0 to DNN_SUCCESS then? Yes, I noticed that DNN_ERROR doesn't give any information about the type of error. Can I help with that?

@shubhanshu02 shubhanshu02 marked this pull request as ready for review May 7, 2021 13:59
@shubhanshu02
Contributor Author

  1. Actually, the TF model supports either SAME or VALID as the layer's padding method parameter for now, and our native model can only be generated from a TF model at the moment. The check is necessary but not urgent.

@Semmer2 I noticed there is another padding method, SAME_CLAMP_TO_EDGE, in the convolution layer. What does it mean?

@guoyejun
Collaborator

guoyejun commented May 8, 2021

  1. you are right, we need a check in ff_dnn_load_layer_avg_pool

@guoyejun, do you mean a check like this?

if (avgpool_params->padding_method != SAME && avgpool_params->padding_method != VALID) {
    av_freep(&avgpool_params);
    return 0;
}

yes

  1. yes, DNN_SUCCESS is better for now. Actually, in my plan, we'd change DNN_ERROR to specific errors, for example AVERROR(EINVAL) or AVERROR(ENOMEM). The whole dnn module needs such a change.

Shall I change the return value from 0 to DNN_SUCCESS then? Yes, I noticed that DNN_ERROR doesn't give any information about the type of error. Can I help with that?

Just leave it for now. Sure, the final plan is to replace 'DNNReturnType' with 'int', while adding DNN_GENERIC_ERROR (or is DNN_ERROR a better name?) in dnn_interface.h for errors general to the dnn module.

#define DNN_GENERIC_ERROR FFERRTAG('D','N','N','!')
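
FFERRTAG packs the four characters into a negative int, the same scheme used for the other tagged error codes in libavutil/error.h; roughly:

/* macros copied here for illustration; in FFmpeg they live in
 * libavutil/macros.h and libavutil/error.h */
#define MKTAG(a, b, c, d)    ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))
#define FFERRTAG(a, b, c, d) (-(int)MKTAG(a, b, c, d))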

@shubhanshu02
Contributor Author

Just leave it for now. Sure, the final plan is to replace 'DNNReturnType' with 'int', while adding DNN_GENERIC_ERROR (or is DNN_ERROR a better name?) in dnn_interface.h for errors general to the dnn module.

#define DNN_GENERIC_ERROR FFERRTAG('D','N','N','!')

As far as I could understand, this is what I need to do:

  1. Define DNN_GENERIC_ERROR in dnn_interface.h with the given definition.
  2. Replace the return types of the functions that return DNNReturnType with int, e.g. ff_dnn_execute_model_tf.
  3. In the case of an error in the function, say the number of dimensions is incorrect, return AVERROR(EINVAL) or any other appropriate error code.

Is the third step above correct? What does the DNN_GENERIC_ERROR do in this case?
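
Concretely, this is what I picture for steps 2 and 3 (a self-contained sketch with made-up names and stand-in macros, not the actual upstream signatures):

#include <errno.h>

/* stand-ins so the sketch compiles on its own; the real ones live in libavutil */
#define AVERROR(e)           (-(e))
#define MKTAG(a, b, c, d)    ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))
#define FFERRTAG(a, b, c, d) (-(int)MKTAG(a, b, c, d))
#define DNN_GENERIC_ERROR    FFERRTAG('D', 'N', 'N', '!')

typedef struct DNNModel DNNModel;

/* illustrative function: int return type instead of DNNReturnType (step 2) */
static int execute_model_sketch(const DNNModel *model, const char *input_name, int ndims)
{
    if (!model || !input_name)
        return AVERROR(EINVAL);   /* step 3: a specific, well-known error */
    if (ndims != 4)
        return DNN_GENERIC_ERROR; /* step 3: generic, when nothing in errno fits */
    return 0;                     /* success */
}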

@guoyejun
Collaborator

guoyejun commented May 8, 2021

Just leave it for now. Sure, the final plan is to replace 'DNNReturnType' with 'int', while adding DNN_GENERIC_ERROR (or is DNN_ERROR a better name?) in dnn_interface.h for errors general to the dnn module.
#define DNN_GENERIC_ERROR FFERRTAG('D','N','N','!')

As far as I could understand, this is what I need to do:

  1. Define DNN_GENERIC_ERROR in dnn_interface.h with the given definition.
  2. Replace the return types of the functions that return DNNReturnType with int, e.g. ff_dnn_execute_model_tf.
  3. In the case of an error in the function, say the number of dimensions is incorrect, return AVERROR(EINVAL) or any other appropriate error code.

Is the third step above correct? What does the DNN_GENERIC_ERROR do in this case?

right, just in case an error does not belong to any of the others

@shubhanshu02
Contributor Author

Okay, thank you.

And in DnnLayerMathBinaryParams of dnn_backend_native_layer_mathbinary.h,

typedef struct DnnLayerMathBinaryParams{
    DNNMathBinaryOperation bin_op; // binary operator to be used
    int input0_broadcast;
    int input1_broadcast;
    float v;
} DnnLayerMathBinaryParams;

I could understand the bin_op, but couldn't understand the meaning of the other three variables. What are they?

@guoyejun
Collaborator

guoyejun commented May 11, 2021

Shall I change the return value from 0 to DNN_SUCCESS then? Yes, I noticed that DNN_ERROR doesn't give any information about the type of error. Can I help with that?

@shubhanshu02 most of the code has been pushed upstream; you may put this on hold (it's no problem to continue if you have bandwidth) and start the code for GSoC.

  1. extract common code from dnn_backend_tf.c into dnn_backend_common.h/c,
     for example, struct TaskItem and struct InferenceItem, etc. (see the
     header sketch below):

     typedef struct TaskItem {
         OVModel *ov_model;   /* --> void *model; or DNNModel *model; */
         const char *input_name;
         AVFrame *in_frame;
         const char *output_name;
         AVFrame *out_frame;
         int do_ioproc;
         int async;
         uint32_t inference_todo;
         uint32_t inference_done;
     } TaskItem;

  2. reuse it in different backends.
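
A rough first cut of the common header could look like this (a hypothetical sketch, not the actual upstream file):

#ifndef AVFILTER_DNN_DNN_BACKEND_COMMON_H
#define AVFILTER_DNN_DNN_BACKEND_COMMON_H

#include <stdint.h>
#include "libavutil/frame.h"

/* backend-agnostic task: void *model can point to a TFModel, OVModel, ... */
typedef struct TaskItem {
    void *model;
    const char *input_name;
    AVFrame *in_frame;
    const char *output_name;
    AVFrame *out_frame;
    int do_ioproc;
    int async;
    uint32_t inference_todo;
    uint32_t inference_done;
} TaskItem;

#endif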

@shubhanshu02
Contributor Author

@shubhanshu02 most of the code has been pushed upstream; you may put this on hold (it's no problem to continue if you have bandwidth) and start the code for GSoC.

Sir, I have already done the required work for the error codes in #400. Currently, it shows a merge conflict; I'll resolve it and push in a few minutes. Okay, I'll start this work today. So shall I extract only the common typedefs first?

@guoyejun
Collaborator

@shubhanshu02 most of the code has been pushed upstream; you may put this on hold (it's no problem to continue if you have bandwidth) and start the code for GSoC.

Sir, I have already done the required work for the error codes in #400. Currently, it shows a merge conflict; I'll resolve it and push in a few minutes. Okay, I'll start this work today. So shall I extract only the common typedefs first?

take your time, there's plenty of time available. typedef is just an example; you may think about it as a whole.

@shubhanshu02
Contributor Author

take your time, there's plenty of time available. typedef is just an example; you may think about it as a whole.

Okay, got it. Thank you.

@guoyejun
Collaborator

btw, to save your time, you can just run
git format-patch --signoff --subject-prefix="PATCH V4" HEAD~5

to get all the patches with v4, and send them to the community.
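
HEAD~5 selects the last five commits; the generated .patch files can then go out in one batch, for example with git send-email to the ffmpeg-devel list.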

Add documentation for Average Pool Layer

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
@shubhanshu02
Contributor Author

git format-patch --signoff --subject-prefix="PATCH V4" HEAD~5

@guoyejun thank you. That helped a lot. I was actually sending them one by one, editing the subject each time, before this.

Add documentation for 2D Convolution Layer

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Add documentation for Dense Layer

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Add documentation for Depth to Space Layer

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Add documentation for Unary Math Layer

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
@shubhanshu02
Contributor Author

Thank you, @guoyejun and @Semmer2, for reviewing the PR.

@shubhanshu02 shubhanshu02 deleted the avgpool branch June 4, 2021 09:40