- `conda env create -f env.yml` — this is the environment the code was run in.
- In the conda environment, locate the `captum` package (e.g. under `<env_name>/lib/python3.9/site-packages/`), and in `captum/_utils/gradient.py`, change the line `grads = torch.autograd.grad(torch.unbind(outputs), inputs)` to `grads = torch.autograd.grad(torch.unbind(outputs), inputs, retain_graph=True, create_graph=True)`. This creates the computation graph for gradients and gradient-based attributions.
- In the conda environment, locate the `transformers` package (e.g. under `<env_name>/lib/python3.9/site-packages/`), and in `transformers/models/bert/modeling_bert.py`, change the line `pooled_output = self.activation(pooled_output)` to `pooled_output = torch.tanh(pooled_output)`. This is in the `BertPooler` class, around line 660.
- In the conda environment, locate the `transformers` package (e.g. under `<env_name>/lib/python3.9/site-packages/`), and in `transformers/models/roberta/modeling_roberta.py`, change the line `pooled_output = self.activation(pooled_output)` to `pooled_output = torch.tanh(pooled_output)`. This is in the `RobertaPooler` class, around line 578.
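Taken together, the three library patches above amount to the following diff-style sketch (line numbers and surrounding indentation are approximate; check them against your installed package versions):

```diff
--- a/captum/_utils/gradient.py
+++ b/captum/_utils/gradient.py
-    grads = torch.autograd.grad(torch.unbind(outputs), inputs)
+    grads = torch.autograd.grad(torch.unbind(outputs), inputs, retain_graph=True, create_graph=True)

--- a/transformers/models/bert/modeling_bert.py (BertPooler, ~line 660)
+++ b/transformers/models/bert/modeling_bert.py
-        pooled_output = self.activation(pooled_output)
+        pooled_output = torch.tanh(pooled_output)

--- a/transformers/models/roberta/modeling_roberta.py (RobertaPooler, ~line 578)
+++ b/transformers/models/roberta/modeling_roberta.py
-        pooled_output = self.activation(pooled_output)
+        pooled_output = torch.tanh(pooled_output)
```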
- Set the `exp_folder` parameter in `scripts/[van,adv,far]_config.json` to the desired logging folder.
- Set the `dataset`, `model`, `candidate_extractor`, ... parameters in the config files.
- Run `scripts/[van,adv,far]_train.py`.
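As a minimal sketch, the config edits above can be scripted. The config filename and the key names (`exp_folder`, `dataset`, `model`) come from this README; the concrete values and the temporary directory are placeholders for illustration:

```python
import json
import pathlib
import tempfile

# Stand-in for the repository's scripts/ folder so this sketch is runnable
# anywhere; in the actual repo you would edit scripts/van_config.json in place.
scripts_dir = pathlib.Path(tempfile.mkdtemp())
cfg_path = scripts_dir / "van_config.json"
cfg_path.write_text(json.dumps({"exp_folder": "", "dataset": "", "model": ""}))

# Fill in the parameters named in the README (values are placeholders).
cfg = json.loads(cfg_path.read_text())
cfg.update(exp_folder="logs/van_run1",
           dataset="my_dataset",
           model="bert-base-uncased")
cfg_path.write_text(json.dumps(cfg, indent=2))

# Then launch training, e.g.:
#   python scripts/van_train.py
```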
- Set the `model_path` parameter in `scripts/[van,adv,far]_config.json` to the trained model path.
- Set the `only_eval` parameter in the same configuration file.
- Run `scripts/[van,adv,far]_train.py`; this will run the training script in evaluation mode.
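A matching sketch for evaluation mode; the checkpoint path is a placeholder, and `only_eval` is assumed to be a boolean switch:

```python
import json
import pathlib
import tempfile

# Stand-in config file so the sketch runs anywhere; in the repo, edit
# scripts/van_config.json (or the adv/far variant) in place.
eval_cfg_path = pathlib.Path(tempfile.mkdtemp()) / "van_config.json"
eval_cfg_path.write_text(json.dumps({"model_path": "", "only_eval": False}))

cfg = json.loads(eval_cfg_path.read_text())
cfg["model_path"] = "logs/van_run1/model.pt"  # placeholder trained-model path
cfg["only_eval"] = True                       # assumed boolean switch
eval_cfg_path.write_text(json.dumps(cfg, indent=2))

# Re-running the training script now evaluates instead of training:
#   python scripts/van_train.py
```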