- `conda env create -f env.yml` <- this is the environment the code was run in.
- In the conda environment, search for the `captum` package (e.g. under `<env_name>/lib/python3.9/site-packages/`), and in `captum/_utils/gradient.py`, modify the line `grads = torch.autograd.grad(torch.unbind(outputs), inputs)` to `grads = torch.autograd.grad(torch.unbind(outputs), inputs, retain_graph=True, create_graph=True)`. This will create the computation graph for gradients and gradient-based attributions.
- In the conda environment, search for the `transformers` package (e.g. under `<env_name>/lib/python3.9/site-packages/`), and in `transformers/models/bert/modeling_bert.py`, modify the line `pooled_output = self.activation(pooled_output)` to `pooled_output = torch.tanh(pooled_output)`. This is in the `BertPooler` class, around line 660.
- In the same `transformers` package, in `transformers/models/roberta/modeling_roberta.py`, modify the line `pooled_output = self.activation(pooled_output)` to `pooled_output = torch.tanh(pooled_output)`. This is in the `RobertaPooler` class, around line 578.
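The effect of the `captum` patch above can be sanity-checked with a minimal PyTorch sketch (toy tensors, not the actual captum call site): with `create_graph=True`, the returned gradients carry their own computation graph, so a loss defined on gradient-based attributions can itself be differentiated.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = (x ** 2).sum()

# Same keyword arguments as the patched line in captum/_utils/gradient.py:
grads = torch.autograd.grad(y, x, retain_graph=True, create_graph=True)[0]  # [2., 4.]

# Because create_graph=True, the gradients are themselves differentiable,
# so a scalar loss on the attributions supports a second backward pass:
loss = grads.pow(2).sum()
second = torch.autograd.grad(loss, x)[0]
print(second.tolist())  # [8.0, 16.0]
```

Without `create_graph=True`, the second `torch.autograd.grad` call would fail, because `grads` would be detached from the graph.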
- Set the `exp_folder` parameter in `scripts/[van,adv,far]_config.json` to the desired logging folder.
- Set the `dataset`, `model`, `candidate_extractor`, ... parameters in the config files.
- Run `scripts/[van,adv,far]_train.py`.
- Set the `model_path` parameter in `scripts/[van,adv,far]_config.json` to the trained model path.
- Set the `only_eval` parameter in the same configuration file.
- Run `scripts/[van,adv,far]_train.py` again; this will run the training script in evaluation mode.
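Putting the parameters above together, a config file might look like the sketch below. Only the key names come from the steps above; every value is a placeholder, and the real config files likely contain additional keys.

```json
{
  "exp_folder": "logs/van_run",
  "dataset": "<dataset_name>",
  "model": "<model_name>",
  "candidate_extractor": "<extractor_name>",
  "model_path": "logs/van_run/trained_model.pt",
  "only_eval": true
}
```

For the initial training run, `only_eval` would be left disabled and `model_path` unset; after training, point `model_path` at the saved checkpoint and enable `only_eval` before rerunning the train script.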