Counting FLOPS during DropOut #3

Closed
Goutam-Kelam opened this issue Nov 28, 2018 · 4 comments

@Goutam-Kelam

Firstly, I sincerely thank you for THOP, a much needed tool for the community. I would like to know how to find the FLOPs when nodes are dropped (e.g. by dropout). A follow-up question: if we freeze some layers, the forward pass is still executed but the weights won't be updated during back-propagation. How can I calculate the FLOPs when layers are frozen?

@Lyken17
Owner

Lyken17 commented Nov 28, 2018

Hi @Goutam-Kelam

Thanks for your interest. First, note that thop currently only counts FLOPs for the feed-forward pass; counting the backward pass might be a future feature. If you want to profile only some layers, you can add an extra check in the profile() function. An example is shown below; hope it helps.

# assume we want to ignore FLOPs in batchnorm layers
import logging

import torch
import torch.nn as nn


def profile(model, input_size, custom_ops={}):
    def add_hooks(m):
        if len(list(m.children())) > 0:
            return

        # ======== add one check here: skip batchnorm layers ========
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            return

        m.register_buffer('total_ops', torch.zeros(1))
        m.register_buffer('total_params', torch.zeros(1))

        for p in m.parameters():
            m.total_params += torch.Tensor([p.numel()])

        m_type = type(m)
        fn = None

        if m_type in custom_ops:
            fn = custom_ops[m_type]
        elif m_type in register_hooks:
            fn = register_hooks[m_type]
        else:
            logging.warning("Not implemented for ", m)

        if fn is not None:
            logging.info("Register FLOP counter for module %s" % str(m))
            m.register_forward_hook(fn)

    model.eval()
    model.apply(add_hooks)

    x = torch.zeros(input_size)
    model(x)

    total_ops = 0
    total_params = 0
    for m in model.modules():
        if len(list(m.children())) > 0: # skip for non-leaf module
            continue
        total_ops += m.total_ops
        total_params += m.total_params

    total_ops = total_ops.item()
    total_params = total_params.item()

    return total_ops, total_params
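For reference, a minimal sketch of how you might call this modified profile() on a toy model (the model and input shape below are just placeholders for illustration, assuming the register_hooks table used by thop already covers Conv2d and ReLU):

# hypothetical usage: profile a small conv net while ignoring batchnorm FLOPs
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),   # skipped by the extra check above
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)
total_ops, total_params = profile(net, input_size=(1, 3, 32, 32))
print(total_ops, total_params)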

@Goutam-Kelam
Author

Thanks for the reply @Lyken17. I was interested in knowing how much computation each layer takes. Suppose, for simplicity, we have two conv layers with different kernel sizes and an FC layer. How can I find how many FLOPs each conv layer takes? The reason I want to do this is that I want to know the total reduction in FLOPs when a layer is dropped, assuming forward and backward propagation take the same number of FLOPs. My idea was to get the individual FLOPs for each layer and subtract them from the total FLOPs.

@Lyken17
Owner

Lyken17 commented Nov 28, 2018

Sure, you can do that. Just print the per-layer counters inside the accumulation loop of profile():

    total_ops = 0
    total_params = 0
    for m in model.modules():
        if len(list(m.children())) > 0:  # skip non-leaf modules
            continue
        # print layer-wise information here
        print(str(m), m.total_ops, m.total_params)
        total_ops += m.total_ops
        total_params += m.total_params
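For the original goal of estimating the reduction when a layer is removed, one option (a sketch, the layer name and variable names are just illustrative) is to collect the per-layer counts into a dict and subtract the dropped layer's count from the total:

    # hypothetical helper: per-layer FLOPs, keyed by module name
    layer_ops = {}
    for name, m in model.named_modules():
        if len(list(m.children())) > 0:  # skip non-leaf modules
            continue
        layer_ops[name] = m.total_ops.item()

    total = sum(layer_ops.values())
    dropped = "conv2"  # name of the layer you plan to remove (illustrative)
    print("FLOPs after dropping %s: %d" % (dropped, total - layer_ops[dropped]))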

@Lyken17
Owner

Lyken17 commented Dec 3, 2018

Closing due to inactivity. Feel free to reopen if you have further questions.
