Project dependencies may have API risk issues #119

Open
PyDeps opened this issue Oct 26, 2022 · 0 comments
PyDeps commented Oct 26, 2022

Hi, in mrc-for-flat-nested-ner, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints the project currently uses:

pytorch-lightning==0.9.0
tokenizers==0.9.3
transformers==3.5.1

The exact pin == risks dependency conflicts, because it constrains the dependency too strictly.
Conversely, a constraint with no upper bound (or *) risks missing-API errors, because the latest version of a dependency may remove APIs the project calls.

After further analysis of this project,
the version constraint of the dependency transformers can be relaxed to >=2.0.0,<=4.1.1.

This change reduces dependency conflicts as much as possible
while admitting the newest versions that do not raise errors in the project.
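To illustrate the difference between the exact pin and the suggested range, here is a minimal sketch (assuming plain "X.Y.Z" version strings with no pre-release suffixes; the candidate version list below is illustrative, not an exhaustive list of transformers releases):

```python
# Minimal sketch: which candidate versions the suggested relaxed range
# ">=2.0.0,<=4.1.1" admits, versus the original exact pin "==3.5.1".

def parse(version):
    """Turn 'X.Y.Z' into a comparable tuple of ints, e.g. '3.5.1' -> (3, 5, 1)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower="2.0.0", upper="4.1.1"):
    """True if lower <= version <= upper (the range suggested above)."""
    return parse(lower) <= parse(version) <= parse(upper)

# A few illustrative versions (assumption: not a complete release list):
candidates = ["1.2.0", "2.0.0", "3.5.1", "4.1.1", "4.2.0"]
admitted = [v for v in candidates if satisfies(v)]
print(admitted)  # the exact pin "==3.5.1" would admit only one of these
```

For real requirements, a dedicated resolver (e.g. pip's, which handles pre-releases and epochs per PEP 440) should be used instead of this naive tuple comparison.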

The project invokes all of the following methods.

Methods called from transformers:
transformers.AutoTokenizer.from_pretrained
All methods called by the project:
wordpiece_label_lst.append
torch.logical_or
tokenizers.BertWordPieceTokenizer
pytorch_lightning.Trainer.add_argparse_args.add_argument
start_label_mask.bool.unsqueeze.expand
start_label_mask.view
tokenizer.encode_plus.index
end_labels.numpy.tolist
end_preds.unsqueeze.expand
pytorch_lightning.seed_everything
output.append
src.split
BertSequenceLabeling.load_state_dict
labels.split
self.start_outputs
self.dropout
super.__init__
data.get.split
get_parser
sequence_input_lst.F.softmax.torch.argmax.detach.cpu
tokens.numpy.tolist
all
train.bert_tagger_trainer.BertSequenceLabeling.load_from_checkpoint.model
transformers.AutoTokenizer.from_pretrained.encode
sequence_input_lst.F.softmax.torch.argmax.detach.cpu.numpy
torch.utils.data.DataLoader
torch.cuda.manual_seed_all
any
torch.optim.SGD
transformers.AdamW
sequence_labels.view
utils.random_seed.set_random_seed
metrics.functional.query_span_f1.extract_flat_spans
int
mrc_samples.append
join
Exception
ValueError
start_positions.tolist.tolist
MRCNERDataset
torch.triu.view
self.model.view
match_labels.bool.bool
outputs.x.x.torch.stack.view
start_label_mask.unsqueeze.expand
label_lst.extend
model.result_logger.info
count_entity_with_sequence_ner_format
line.strip.split
torch.nn.functional.softmax
start_logits.view
self.classifier1
metrics.functional.tagger_span_f1.get_entity_from_bmes_lst
sequence_logits.view
wordpiece_mask.detach.cpu.numpy.tolist.detach
pytorch_lightning.Trainer.test
self.train_dataloader
OntoNotesDataConfig
torch.optim.AdamW
end_preds.unsqueeze
models.model_config.BertQueryNerConfig.from_pretrained
end_labels.view.float
match_preds.match_labels.long.sum
start_labels.view.float
self.result_logger.setLevel
list
sequence_input_lst.torch.argmax.detach.cpu.numpy.tolist
end_label_mask.bool.unsqueeze.expand
os.makedirs
outputs.x.x.torch.stack.view.sum
transformers.AutoTokenizer.from_pretrained.convert_ids_to_tokens
print
__file__.os.path.realpath.split
wordpiece_token_lst.append
transformers.BertModel
wordpiece_mask.detach.cpu.numpy
start_positions.append
Tag
start_label_mask.view.float.sum
match_loss.sum.sum
torch.nn.functional.gelu
metrics.functional.tagger_span_f1.compute_tagger_span_f1
sequence_heatmap.self.start_outputs.squeeze
self.span_f1
pytorch_lightning.Trainer.from_argparse_args.fit
main
sequence_input_lst.detach.cpu.numpy.tolist
load_data_in_conll
data_item.strip.strip
torch.where
numpy.nonzero
models.model_config.BertTaggerConfig.from_pretrained
word_collections.append
range.copy
set_random_seed
self.classifier2
end_positions.append
str
metrics.functional.query_span_f1.query_span_f1
train.bert_tagger_trainer.BertSequenceLabeling.load_from_checkpoint
pytorch_lightning.Trainer.from_argparse_args
self.SingleLinearClassifier.super.__init__
re.findall
start_labels.unsqueeze
pytorch_lightning.Trainer.from_argparse_args.test
self.QuerySpanF1.super.__init__
super
end_preds.bool.bool
start_labels.unsqueeze.expand
torch.nn.modules.BCEWithLogitsLoss
seq_len.seq_len.batch_size.torch.empty.uniform_
tokenizers.BertWordPieceTokenizer.decode
match_labels.view
self.compute_loss
start_label_mask.view.float
type
lst.append
start_preds.unsqueeze.expand
labels.append
torch.manual_seed
length_lst.append
tokenizers.BertWordPieceTokenizer.token_to_id
end_label_mask.view.float.sum
get_parser.add_argument
EnglishCoNLLDataConfig
tokens.long
json.load
torch.cat
torch.stack
trained_tagger_ner_model.model.view
set.add
torch.utils.data.SequentialSampler
i.label_list.upper
transformers.get_polynomial_decay_schedule_with_warmup
find_illegal_entity
TmpArgs
token_input_ids.numpy
float
sequence_heatmap.unsqueeze
datasets.tagger_ner_dataset.TaggerNERDataset
train.mrc_ner_trainer.BertLabeling.load_from_checkpoint.model
metrics.tagger_span_f1.TaggerSpanF1
utils.get_parser.get_parser
models.classifier.BERTTaggerClassifier
parser.parse_args.keys
os.path.join
tmp_end_position.append
logging.info
torch.squeeze
get_dataloader
start_preds.bool.unsqueeze
os.remove
sys.path.insert
self.get_dataloader
self.span_embedding
torch.device
input_string.index
models.bert_tagger.BertTagger.from_pretrained
torch.nn.modules.CrossEntropyLoss
tokens.tolist.tolist
data.get
outputs.x.x.torch.stack.mean
metrics.query_span_f1.QuerySpanF1
output_label_sequence.append
pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint
tmp_start_position.append
start_label_mask.bool.size
sequence_input_lst.detach
span_matrix.self.span_embedding.squeeze
data_item.strip.split
torch.nn.functional.relu
self.loss_func.ignore_index.torch.tensor.type_as
data_generator.random_matrix.torch.bernoulli.long
entity_lst.append
torch.zeros
label_idx.item
end_labels.view
random_matrix.cuda.cuda
pred_entity_lst.append
self.save_hyperparameters
self.pad
argparse.ArgumentParser.add_argument
checkpoint_info_line.F1_PATTERN.re.findall.replace
tmp_label_seq.append
transformers.AutoTokenizer.from_pretrained.encode_plus
pytorch_lightning.Trainer.add_argparse_args
tags.append
argparse.ArgumentParser
self.bce_loss
label2positions.get
token_input_ids.numpy.tolist
pytorch_lightning.Trainer.add_argparse_args.parse_args
self.result_logger.info
tokens.numpy
batch_size.match_labels.view.float
pytorch_lightning.Trainer
sequence_input_lst.F.softmax.torch.argmax.detach.cpu.numpy.tolist
numpy.transpose
logging.basicConfig
end_label_mask.bool
torch.Generator
new_start_positions.append
self.bert
self.BERTTaggerClassifier.super.__init__
sequence_input_lst.torch.argmax.detach
self.BertQueryNerConfig.super.__init__
torch.utils.data.RandomSampler
input_mask.view
start_logits.size
max
BertSequenceLabeling.add_model_specific_args
self.loss_func
start_preds.bool.bool
json.dump
metrics.functional.query_span_f1.extract_nested_spans
zip
entity_string.replace.replace
end_preds.bool.unsqueeze
len
sequence_input_lst.detach.cpu.numpy
line.strip.strip
getattr
load_dataexamples
batch_size.match_label_mask.view.float.sum
BertSequenceLabeling
logging.getLogger
torch.nn.functional.tanh
torch.bernoulli
torch.nn.Linear
BertLabeling.add_model_specific_args
torch.full
start_label_mask.bool.unsqueeze
sequence_input_lst.torch.argmax.detach.cpu
checkpoint_info_line.CKPT_PATTERN.re.findall.replace.replace
self.init_weights
self.model
entity_info.find
tokenizer.convert_ids_to_tokens.index
torch.nn.Dropout
tokenize_word
torch.argmax
start_labels.view
self.dropout.view
min
models.bert_query_ner.BertQueryNER.from_pretrained
self.MultiNonLinearClassifier.super.__init__
end_label_mask.view
checkpoint_info_line.F1_PATTERN.re.findall.replace.replace
transformers.get_linear_schedule_with_warmup
torch.triu
end_labels.unsqueeze.expand
sequence_heatmap.unsqueeze.expand
batch_size.match_label_mask.view.float
convert_file
torch.optim.lr_scheduler.OneCycleLR
count_max_length
json.load.items
torch.tensor
evaluate
count_entity_with_mrc_ner_format
span_logits.view
sentence.append
set
torch.Generator.manual_seed
kwargs.get
train.mrc_ner_trainer.BertLabeling.load_from_checkpoint
self.BertQueryNER.super.__init__
models.classifier.MultiNonLinearClassifier
sys.argv.strip.split
sys.argv.strip
word_label_collections.append
x.split
find_best_checkpoint_on_dev
entity_counter.keys
self.classifier
collections.namedtuple
BertLabeling
end_positions.tolist.tolist
sequence_heatmap.self.end_outputs.squeeze
_improve_answer_span
EnglishCoNLL03DocDataConfig
match_preds.match_labels.long
checkpoint_info_lines.append
i.label_list.upper.replace
sequence_input_lst.torch.argmax.detach.cpu.numpy
start_labels.numpy.tolist
reverse_style
numpy.random.seed
tmp_entity.index
get_query_index_to_label_cate
wordpiece_mask.detach.cpu
self.end_outputs
start_label_mask.bool
x.strip
end_label_mask.unsqueeze.expand
self.BertTagger.super.__init__
stand_matrix.append
wordpiece_mask.detach.cpu.numpy.tolist
tag_list.append
isinstance
end_logits.view
BertLabeling.load_state_dict
get_parser.parse_args
checkpoint_info_line.CKPT_PATTERN.re.findall.replace
sequence_input_lst.detach.cpu
token.strip
datasets.tagger_ner_dataset.get_labels
gold_entities.remove
metrics.functional.tagger_span_f1.transform_predictions_to_labels
self.args.gpus.str.split
end_label_mask.bool.unsqueeze
transformers.AutoTokenizer.from_pretrained
open
wordpiece_label_lst.extend
datasets.truncate_dataset.TruncateDataset
torch.LongTensor
utils.bmes_decode.bmes_decode
sequence_input_lst.F.softmax.torch.argmax.detach
sentence_collections.append
random.seed
range
tokenizers.BertWordPieceTokenizer.encode
count_confusion_matrix
self.pad.numpy
start_float_label_mask.start_loss.sum
datasets.tagger_ner_dataset.load_data_in_conll
transformers.AutoTokenizer.from_pretrained.decode
start_preds.unsqueeze
end_label_mask.view.float
new_end_positions.append
datasets.mrc_ner_dataset.MRCNERDataset
end_preds.unsqueeze.expand.numpy
sequence_heatmap.size
end_float_label_mask.end_loss.sum
match_preds.numpy.np.nonzero.np.transpose.tolist
run_dataset
self.__dict__.items
self.model.named_parameters
set.update
f.readlines
sentence_label_collections.append
numpy.random.random
torch.load
EnglishOntoDataConfig
torch.LongTensor.item
wordpiece_token_lst.extend
tuple
start_label_mask.bool.bool
re.compile
self.BertTaggerConfig.super.__init__
join.split
self.tokenizer.encode
enumerate
self.TaggerSpanF1.super.__init__
self
get_entity_from_bmes_lst
sum
dataset.append
os.path.realpath
ChineseMSRADataConfig
end_label_mask.bool.bool
torch.empty
end_labels.unsqueeze
get_labels

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
