
Pytorch implementation #10

Closed
expectopatronum opened this issue Apr 11, 2019 · 26 comments

@expectopatronum

Hi,
this might not be a question for the repo owner, but maybe someone else will see this; I hope it's okay to put this question here.
Is anyone aware of a PyTorch implementation of influence functions? I think I got the implementation of the Hessian-vector product right, but there is also a lot of data handling involved (to replace the TensorFlow feed_dict machinery with more PyTorch-native data types). If no one has done it, I am currently working on it and can share it (though this might take some time).

Best regards
Verena
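The Hessian-vector product step mentioned above can be done in PyTorch with double backprop, without ever forming the full Hessian. A minimal sketch (not Verena's actual code; the quadratic sanity check is made up):

```python
import torch

def hvp(loss, params, v):
    # Hessian-vector product via double backprop:
    # take grads with create_graph=True, then differentiate grad . v.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * u).sum() for g, u in zip(grads, v))
    return torch.autograd.grad(dot, params)

# Sanity check on loss = 0.5 * x^T A x, whose Hessian is A (A symmetric),
# so hvp should return A @ v.
A = torch.tensor([[2.0, 0.0], [0.0, 3.0]])
x = torch.tensor([1.0, 1.0], requires_grad=True)
loss = 0.5 * x @ A @ x
v = [torch.tensor([1.0, 2.0])]
(Hv,) = hvp(loss, [x], v)
print(Hv)  # tensor([2., 6.])
```

The same `hvp` works for a model's parameter list, which is the building block for the LiSSA/conjugate-gradient inverse-HVP estimates in the paper.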

@Kunlun-Zhu

Wonderful work! I would also like to see it done in PyTorch.

@WonderSeven

I do wish the PyTorch version could be released as soon as possible.

@tengerye

Hi, what is the progress now? I would like to join.

@expectopatronum
Author

I was able to reproduce the hospital readmission notebook experiments in PyTorch, with two remaining issues:

  1. The bar charts are similar (so it returns the influential samples in the same/correct order), but the computed influence values are all too large, all by the same factor. I am not yet sure whether the error is in my loss functions, some missing scaling, ...
  2. It is also super slow (about 35 times slower than the TF implementation); so far I haven't found a solution for that (profiling suggests the DataLoaders might be the bottleneck).

Since I couldn't get it to run in reasonable time, and some things in the original implementation are unclear to me (I sent an email to the first author of the paper but haven't received an answer yet), I have moved on to other interpretability methods.

My code is messy, so I haven't put it online. If someone is interested in helping me, feel free to contact me; I'd like to give it another shot.
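On the DataLoader slowness mentioned above: a general PyTorch observation (not a diagnosis of this specific code) is that iterating a DataLoader one sample at a time pays Python-level collation overhead per item, while indexing the underlying dataset or tensors directly stays in cheap tensor ops:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(1000, 8)
y = torch.randint(0, 2, (1000,))
ds = TensorDataset(X, y)

# Per-sample iteration through a DataLoader: one Python-side collate
# call per item, which adds up when influence estimation revisits
# individual training points many times.
loader = DataLoader(ds, batch_size=1)

# Direct indexing returns the same data without the iterator machinery.
xi, yi = ds[3]
print(torch.equal(xi, X[3]), torch.equal(yi, y[3]))  # True True
```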

@tengerye

@expectopatronum Hi, I am working on the first experiment by translating the TensorFlow code to PyTorch; it is difficult, though. I would like to help and work on it together.

What is your approach? Do you translate the code file by file, or reorganize it yourself?

@expectopatronum
Author

At first I tried to translate the code file by file, but PyTorch and TensorFlow work too differently for that. I also want the influence code decoupled from the model, so I put it in a separate file. In the end I want it to work for every model rather than copying the code into each one.
I also tried to figure out which parts are actually used (in the example) and to implement only those for now. E.g., in the hospital_readmission example (which I use to test my implementation) they pass test_indices, so I currently don't handle the case where it is None.
I will put my code on Github in the next couple of days and share it with you - maybe we can solve it together.

Here is one of the questions I asked the author, maybe you have an answer to this:

  1. The function update_feed_dict_with_v_placeholder is not clear to me. First you fill the feed_dict with a batch of the data (https://github.com/kohpangwei/influence-release/blob/master/influence/genericNeuralNet.py#L496) and afterwards you seem to update this batch with 'cur_estimate'. What does the feed_dict look like at this stage?

    a) Is the input replaced by v? Is the prediction computed on v or input?
    b) Or is v added to the feed_dict and it now contains input, label and v?

@tengerye

@expectopatronum
The function update_feed_dict_with_v_placeholder just inserts the placeholders from v_placeholder, together with their corresponding values, into the feed_dict. Each key in feed_dict is a tensor and each value is its corresponding concrete value.

Hope that helps. By the way, may I ask when you think your code will be ready online?
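For concreteness, here is a toy sketch of that update (plain Python dict keys stand in for TF placeholder tensors; the function name follows the repo, the values are made up). The point is option (b) above: v is added alongside the existing input and label entries, which stay untouched:

```python
def update_feed_dict_with_v_placeholder(feed_dict, v_placeholder, cur_estimate):
    # Add one entry per v placeholder; the batch entries already in
    # the dict (input, labels) are left as-is.
    for ph, val in zip(v_placeholder, cur_estimate):
        feed_dict[ph] = val
    return feed_dict

feed_dict = {"input_placeholder": [[0.1, 0.2]], "labels_placeholder": [1]}
update_feed_dict_with_v_placeholder(feed_dict, ["v_placeholder_0"], [[0.5, 0.5]])
print(sorted(feed_dict))
# ['input_placeholder', 'labels_placeholder', 'v_placeholder_0']
```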

@expectopatronum
Author

Alright, thanks!

I am currently working on it, so I'd expect it to be ready in a couple of hours.

@expectopatronum
Author

I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.

@markus-beuckelmann

Hi @expectopatronum, just stumbled upon this...I'm also working on a currently unreleased PyTorch implementation of the paper, feel free to reach out...

@Kunlun-Zhu

kohpangwei doesn't seem to care about this repository anymore; what a shame.

@kohpangwei
Owner

Hello,

@expectopatronum I don't think I saw any email (sorry if I missed it). But thanks @tengerye for answering it.

This repo is frozen to what was used for the paper. I'm glad that there's interest in making a Pytorch version; thank you and good luck! In case it helps, we have a more recent paper that also uses influence functions, and the code there is cleaner and easier to read: https://github.com/kohpangwei/group-influence-release

@expectopatronum
Author

Hi @kohpangwei, that's strange. I used the email address from your influence paper; is that still valid? I still have some theoretical questions about the paper that probably cannot be answered by someone on GitHub.

I am aware of the new paper; I haven't had time to check it out yet, but I will soon :)

Thanks a lot!

@kohpangwei
Owner

Yup, that email address still works! Feel free to drop me a note there. :)

@expectopatronum
Author

Thanks, I did! Hopefully it won't get lost this time :)

@tengerye

Hi @expectopatronum @Kunlun-Zhu @markus-beuckelmann, has anyone successfully repeated the CNN experiment (fig 2-c) yet? Although the paper states that the method works well even in the non-converged case, I can never get get_inverse_hvp_cg to converge. The original code achieves 0.9996 accuracy on the CNN training set and 0.9746 on the test set; in my case it is 0.9325 and 0.8972 respectively.

I guess it must be related to the damping term.

@kohpangwei If possible, would you please share some experience on how to determine whether the training is good enough for the next step? E.g., did you check the eigenvalues of the inverse Hessian?

@expectopatronum
Author

Hi @tengerye, unfortunately not. I have given up for now, since I didn't even manage to exactly reproduce the hospital notebook (and my PyTorch implementation is super slow). Would you like to share your code?

@kohpangwei
Owner

Yup, checking the eigenvalues of the Hessian was a helpful diagnostic, and damping it "appropriately" (to make sure it's PSD) is important in the non-convex case. Increasing L2 regularization can also be helpful.

@tengerye

@kohpangwei Thank you for your kind reply. @expectopatronum Sharing it is the whole reason I'm building it. Allow me a few days to fix the problem before making it public.

@Jinjicheng

I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.

Hi @expectopatronum, I am also interested in a PyTorch implementation of the paper. Could you share your code with me? Thanks.

@pianpwk

pianpwk commented Nov 6, 2019

@expectopatronum I'm also very interested in the PyTorch implementation; could you share your code with me as well? It'd be a fantastic help!

@stovecat

@expectopatronum I'm also looking for the PyTorch implementation of influence functions! It would be very helpful if you shared your code 😆

@nimarb

nimarb commented Nov 30, 2019

I've had a pytorch implementation lingering around for some time on my hard drive. I've just polished it up a bit (hope it's readable at all...) and wrote a few docs to go along with it. You can find it here: https://github.com/nimarb/pytorch_influence_functions

It doesn't implement all the graphics, tests, and examples from the original paper, just the algorithm itself.

@expectopatronum
Author

@nimarb This is amazing, thanks for sharing! If you don't implement the experiments from the paper, how do you know it is correct? (Not saying that everything in the paper must be correct.)

@nimarb

nimarb commented Dec 4, 2019

Initially, I recreated the Inception and adversarial use cases (they were the most interesting for my purposes), where I got the same images for the helpful data points. I hope to find the time to put those out over the Christmas holidays :)

@kohpangwei
Owner

Closing this thread; thanks @nimarb for the implementation. :)
