
iNNvestigate 2.0 planning #221

Closed
albermax opened this issue Sep 17, 2020 · 9 comments
@albermax (Owner)

I suggest we approach the planning in the following way:

  • Define some high level stakeholder requirements.
  • Define more concrete Software requirements.
  • Develop the architecture and interfaces that would address those SW requirements.
  • Create a roadmap to know in which order we should do things.

I know many things are "trivial", but IMHO it is good to have them listed and in mind when making decisions.
Also, compared to coding the whole package, planning takes relatively little time. :-)

Also, these requirements change over time. The idea is then to update the derived requirements/architecture/interfaces/implementations accordingly.


Stakeholder requirements

A stakeholder is someone who will interact with this package in some way. Such requirements give a broad direction to the planning and are use-case oriented. Mind that stakeholders are more of a role than real persons: one person can have several roles.

  • Applying user: Someone who applies explanation methods without deep background knowledge of, or interest in, how everything works.
    • [SR 1.1] An applying user wants to be able to easily set up and apply the software to standard neural network architectures.
    • [SR 1.2] An applying user wants to have a broad selection of the most important explanation methods (for neural networks).
    • [SR 1.3] An applying user wants to trust the SW to do the right thing.
  • Developing user: Someone who wants to develop/extend explanation methods.
    • [SR 2.1] A developing user wants to be able to easily integrate new methods and extend/modify existing ones.
  • Developers: Someone who actively works on the package.
    • [SR 3.1] A developer wants to be able to navigate the code easily and efficiently.
    • [SR 3.2] A developer wants to have a best practice and efficient workflows.

Software requirements

SW requirements are software oriented and typically derive from stakeholder requirements.

  • Derived from SR 1.1:
    • [SWR 1.1.1]: Easy-to-read and easy-to-follow documentation/tutorials.
    • [SWR 1.1.2]: Works out of the box with (compatible) Keras models in a sensible way/with good default configurations.
    • [SWR 1.1.3]: Can be installed via pip.
    • [SWR 1.1.4]: Works with Python 3.6+.
    • [SWR 1.1.5]: Works with TF/tf.keras 2.0+.
    • [SWR 1.1.6]: The code runs quickly.
    • [SWR 1.1.7]: It is possible to save and export analysis models with tf.saved_model.
    • [SWR 1.1.8]: It is possible to choose the neuron(s) to analyze: index, max output, all, mask (for example for segmentation models).
  • Derived from SR 1.2:
    • [SWR 1.2.1]: LRP is implemented.
    • [SWR 1.2.2]: Smoothgrad is implemented.
    • [SWR 1.2.3]: Integrated gradients is implemented.
    • [SWR 1.2.4]: VarGrad is implemented.
  • Derived from SR 1.3:
    • [SWR 1.3.1]: Code is reviewed by another developer before merging.
  • Derived from SR 2.1:
    • [SWR 2.1.1]: Clear interface between xyz and abc.
  • Derived from SR 3.1:
    • [SWR 3.1.1]: Use and enforcement of coding style xyz.
    • [SWR 3.1.2]: Use of automatic CI pipeline.
  • Derived from SR 3.2:
    • [SWR 3.2.1]: Use of a logger.
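
As a concrete illustration of the neuron-selection requirement [SWR 1.1.8], here is a minimal sketch in plain NumPy. The function name and mode strings are hypothetical placeholders for discussion, not part of any existing iNNvestigate API:

```python
import numpy as np

def select_neurons(output, mode="max", index=None, mask=None):
    """Sketch of SWR 1.1.8: build a selection mask over output neurons
    by index, max output, all neurons, or an explicit mask."""
    selection = np.zeros_like(output)
    if mode == "index":
        selection[..., index] = 1.0
    elif mode == "max":
        # one-hot at the argmax of each sample's output
        idx = np.argmax(output, axis=-1)
        selection[np.arange(output.shape[0]), idx] = 1.0
    elif mode == "all":
        selection[...] = 1.0
    elif mode == "mask":
        # e.g. a segmentation mask provided by the user
        selection = np.asarray(mask, dtype=output.dtype)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return selection

logits = np.array([[0.1, 2.0, -0.5]])
print(select_neurons(logits, mode="max"))  # → [[0. 1. 0.]]
```

The mask returned here would then initialize the relevance/gradient signal of the backward pass, which is why all four modes can share one entry point.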

Architecture & interfaces

  • Application interface:
    • Stick to sklearn fit/analyze pattern?
    • Stick to Keras models?
  • Architecture of an analyzer:
    • Divide preprocessing wrappers and analyzer functionality?
  • Algorithm:
    • Three steps: 1. check/remap/canonize network, 2. map to specific backward rules, 3. apply/build backward network
      • What should be the interfaces/invariants between those steps?
      • Consequently, is an analyzer a collection of check/remap/canonize "rules" as well as "backward" rules?
  • Testing:
    • Unit tests; testing of single backward steps.
    • "Snapshot testing"?
    • Visual testing with notebooks?
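
The three-step algorithm above can be sketched with plain-Python stand-ins. The layer names and rule names are illustrative placeholders, not the real implementation:

```python
# Hypothetical stand-ins for the three steps of an analyzer pipeline.

def canonize(layers):
    # Step 1: check/remap/canonize the network, e.g. drop no-op layers.
    return [layer for layer in layers if layer != "dropout"]

# Step 2: map each layer type to a specific backward rule.
RULES = {"dense": "lrp_epsilon", "relu": "pass_through", "conv": "lrp_alpha_beta"}

def map_rules(layers):
    return [(layer, RULES[layer]) for layer in layers]

def backward(mapped, relevance):
    # Step 3: apply/build the backward network, walking back to front.
    trace = []
    for layer, rule in reversed(mapped):
        trace.append(f"{rule}({layer})")
    return trace

net = ["conv", "relu", "dropout", "dense"]
print(backward(map_rules(canonize(net)), relevance=1.0))
# → ['lrp_epsilon(dense)', 'pass_through(relu)', 'lrp_alpha_beta(conv)']
```

Framed this way, the invariant between steps 1 and 2 is "a canonical layer sequence", and between 2 and 3 it is "every layer has exactly one backward rule", which supports the idea of an analyzer as a collection of canonize rules plus backward rules.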

Open questions:

  • How to work on this content? Should we move to Google Docs or should I update the descriptions based on your inputs?

Todos:

  • Check what the Captum people are doing and how they are approaching this.
  1. Settle on a first set of requirements.
  2. Develop the architecture and interfaces.
  3. Develop a roadmap/order of implementations based on that.
@albermax (Owner, Author)

Hi @leanderweber @enryH @rachtibat

here is a first draft.
Please let me know if this makes sense to you and what you think.

Cheers,
Max

@leanderweber (Collaborator)

leanderweber commented Sep 17, 2020

Hi @albermax ,

thanks for the draft!
I think moving to Google Docs would be a great idea, since editing/updating of the requirements, and commenting would be easier.

For now I would suggest the following additions/changes:

Stakeholder requirements:

  • Researcher User: Someone who applies explanation methods while having some background knowledge, and is interested in employing them in their research.
    • [SR 4.1] A researcher user wants to be able to easily set up the software.
    • [SR 4.2] A researcher user wants to be able to easily apply the software to the newest state-of-the-art models.
    • [SR 4.3] A researcher user wants intermediate explanation results to be easily accessible.
    • [SR 4.4] A researcher user wants the explanation methods and their results to be as conveniently customizable as possible.
    • [SR 4.5] A researcher user wants to quickly and flexibly evaluate explanations with various different parametrizations.

Software requirements:

  • Derivations of SR 4.1 are already included in derivations from other Stakeholders
  • Derived from SR 4.2:
    • [SWR 4.2.1]: Models should be canonized as generally as possible before explanation methods are applied.
  • Derived from SR 4.3:
    • [SWR 4.3.1]: Allow specification of the layers for which explanations should be returned.
    • [SWR 4.3.2]: In case of LRP: also allow specification of the layer up to which the Flat rule should be applied.
  • Derived from SR 4.4:
    • [SWR 4.4.1]: Neuron selection should be possible, see [SWR 1.1.8].
    • [SWR 4.4.2]: The output that is explained (The initialization for the explanation, i.e., a head mapping) should be customizable.
  • Derived from SR 4.5:
    • [SWR 4.5.1]: No new analyzer model should have to be built when changing the analyzer parameters described in [SWR 4.3.1], [SWR 4.3.2], [SWR 4.4.1], [SWR 4.4.2].
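
A minimal sketch of what [SWR 4.5.1] could look like in practice, assuming a hypothetical `Analyzer` class (not the actual iNNvestigate API): the expensive graph construction happens once, and the parameters above become call-time arguments of `analyze()`:

```python
class Analyzer:
    """Hypothetical sketch of SWR 4.5.1: the (expensive) analyzer graph is
    built once; per-call parameters are plain arguments to analyze()."""

    def __init__(self, model):
        self.model = model
        self.builds = 0
        self._build()

    def _build(self):
        # Stands in for constructing the backward graph.
        self.builds += 1

    def analyze(self, x, neuron_selection="max", return_layers=None):
        # Parameters may vary per call; no rebuild is triggered.
        return {"input": x, "selection": neuron_selection,
                "layers": return_layers or []}

analyzer = Analyzer(model="my_net")
analyzer.analyze([1, 2], neuron_selection="index")
analyzer.analyze([1, 2], neuron_selection="all", return_layers=["conv1"])
assert analyzer.builds == 1  # graph built only once
```

The design choice this illustrates: anything listed in [SWR 4.3.1] through [SWR 4.4.2] should be a runtime input to the built graph, not a constructor argument baked into it.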

@rachtibat rachtibat pinned this issue Sep 22, 2020
@rachtibat (Collaborator)

Hi Max,

Thank you for writing everything out!
Yes, I think we should switch over to Google Docs.
And we should talk more about these points during the meeting, especially with regard to performance.

Best

@moritzaugustin

Since some comments indicate activity towards a 2.0 version (I cannot wait :-) ), I have two questions:

Is there an approximate ETA for a TF2-compatible innvestigate 2.0 that you could share with us (users)?

And if there is a branch that already works in some cases, it would be quite interesting to know where to find it (at least I could not figure out the most recent code version).

@adrhill (Collaborator)

adrhill commented Apr 28, 2021

Hi Moritz,

We are aiming for a first release before August.
In the meantime you can use the updates_towards_tf2.0 dev branch, which already implements LRP analyzers in TF2.

@jizhang02

Hello,
thank you for offering us such a convenient explanation tool!
I wonder if the Grad-CAM method could be added to iNNvestigate 2.0?

@moritzaugustin

@adrhill It would be great if you could give an update on the release schedule or point us to information about it. In particular, I am strongly interested in when pattern attribution will be available for TF2.

@adrhill (Collaborator)

adrhill commented Oct 11, 2021

Hi Moritz,
thanks for your patience! We're currently working on exactly that: making the pattern attribution methods run on TF2. Once they work, we might release a final TF1 version that runs on the tf.keras backend (as there are several open issues asking for this) and soon after the TF2 version.

@adrhill (Collaborator)

adrhill commented Aug 1, 2022

Hi @moritzaugustin, this took longer than anticipated, but iNNvestigate 2.0 is now released!

@adrhill adrhill closed this as completed Aug 1, 2022
@adrhill adrhill unpinned this issue Aug 1, 2022