
Failed to convert Llama-v2 models #4493

Closed
HighTemplar-wjiang opened this issue Dec 16, 2023 · 28 comments
Labels
bug (Something isn't working), high priority (Very important issue), stale

Comments

@HighTemplar-wjiang

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [Y] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [Y] I carefully followed the README.md.
  • [Y] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [Y] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Successfully converting Llama models using the following command:

python convert.py models/xxx

where xxx is the original trained Llama model downloaded from Facebook.

Current Behavior

Conversion fails with errors (detailed below).

I've found that the behavior changed in the following commit; after checking out an older version, the conversion could be done:

873637afc7924f435ac44c067630a28e82eefa7b

It seems that after the above commit, convert.py no longer supports the BPE vocab format (the --vocabtype param has been removed), while the README does not reflect this change. This causes confusion.
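
For reference, the removed option used to be passed roughly like this (a sketch; flag values assumed from the pre-removal script):

python convert.py models/xxx --vocabtype bpe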

Environment and Context

  • Physical (or virtual) hardware you are using, e.g. for Linux:

MacBook M3 Max

  • Operating System, e.g. for Linux:

macOS 14.1 (23B2073)

  • SDK version, e.g. for Linux:

Python 3.10.13
transformers 4.36.1

Failure Information (for bugs)

Loading model file models/13B/consolidated.00.pth
Loading model file models/13B/consolidated.01.pth
params = Params(n_vocab=32000, n_embd=5120, n_layer=40, n_ctx=4096, n_ff=13824, n_head=40, n_head_kv=40, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models/13B'))
Traceback (most recent call last):
  File "llama.cpp/convert.py", line 1279, in <module>
    main()
  File "llama.cpp/convert.py", line 1255, in main
    vocab = VocabLoader(params, vocab_dir)
  File "llama.cpp/convert.py", line 342, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(str(fname_tokenizer), trust_remote_code=True)
  File "python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 752, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1082, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "python3.10/site-packages/transformers/configuration_utils.py", line 644, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "python3.10/site-packages/transformers/configuration_utils.py", line 699, in _get_config_dict
    resolved_config_file = cached_file(
  File "python3.10/site-packages/transformers/utils/hub.py", line 360, in cached_file
    raise EnvironmentError(
OSError: models/13B does not appear to have a file named config.json. Checkout 'https://huggingface.co/models/13B/None' for available files.
@adelmofilho

adelmofilho commented Dec 17, 2023

Same here @HighTemplar-wjiang. I found an alternative by reverting to commit f4d973c and running the README instructions. It did not solve the problem, but it may give us some hints about what happened.

I attach an image below of the quantized model being tested

[image: quantized model being tested]

@ghost

ghost commented Dec 18, 2023

I am running into this same issue. Can you post if you find a resolution?

@HighTemplar-wjiang
Author

HighTemplar-wjiang commented Dec 18, 2023

I am running into this same issue. Can you post if you find a resolution?

As a temporary fix you can simply check out an older commit, such as:

git checkout 0353a1840134b24b07ab61fd4490192f28c4db6b

This is the latest commit before this bug appeared.

@leejw51

leejw51 commented Dec 19, 2023

Same error here; it worked before.

@teleprint-me
Contributor

teleprint-me commented Dec 25, 2023

SPM and BPE vocabularies were removed, so if you're using a non-Hugging Face model, you'll get this error. If you use Facebook's HF model, it will work.

transformers.AutoTokenizer looks for the vocabulary and fails to find it, because it falls back to remote access for a file that doesn't exist.

This is simply due to the way the vocabularies are handled. I'll be looking into it over the next few days to see if I can fix the regression.
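
A minimal repro of that failure mode, assuming a raw LLaMA download such as the models/13B directory from the report above:

from transformers import AutoTokenizer

# pointing AutoTokenizer at a raw LLaMA checkout (params.json + tokenizer.model,
# but no config.json) reproduces the OSError from the traceback above
tokenizer = AutoTokenizer.from_pretrained("models/13B", trust_remote_code=True)
# OSError: models/13B does not appear to have a file named config.json. ...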

@slaren added the bug (Something isn't working) and high priority (Very important issue) labels and removed the bug-unconfirmed label Dec 25, 2023
@ggerganov
Owner

@teleprint-me Thanks for looking into this. Pinging also @strutive07 for any extra insight on this

@XiongjieDai

XiongjieDai commented Jan 4, 2024

Here is how I have worked around it over the last two weeks; basically, you can convert your model to the Hugging Face format to get the config.json file.

curl -o convert_llama_weights_to_hf.py https://raw.githubusercontent.com/huggingface/transformers/main/src/transformers/models/llama/convert_llama_weights_to_hf.py

python3 -m pip install -r requirements.txt

python3 convert_llama_weights_to_hf.py --input_dir models/7B/ --model_size 7B --output_dir models/7B/

One trick: you don't need to install the accelerate package. You will still get the config.json file, although you will encounter a 'requires Accelerate' error. Then you can delete the other partially written model files and convert the original model to GGUF FP16 format.
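
For example, once config.json is in place, the usual conversion should work again (a sketch; pick the output type you need):

python3 convert.py models/7B/ --outtype f16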

@quanpinjie

python convert.py ./models/34B/
Loading model file models/34B/consolidated.00.pth
Loading model file models/34B/consolidated.01.pth
Loading model file models/34B/consolidated.02.pth
Loading model file models/34B/consolidated.03.pth
params = Params(n_vocab=32000, n_embd=8192, n_layer=48, n_ctx=16384, n_ff=22016, n_head=64, n_head_kv=8, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=1000000, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models/34B'))
Traceback (most recent call last):
  File "convert.py", line 1295, in <module>
    main()
  File "convert.py", line 1271, in main
    vocab = VocabLoader(params, vocab_dir)
  File "convert.py", line 342, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(str(fname_tokenizer), trust_remote_code=True)
  File "/data/code/llama.cpp/new/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "/data/code/llama.cpp/new/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/data/code/llama.cpp/new/lib/python3.8/site-packages/transformers/configuration_utils.py", line 620, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/data/code/llama.cpp/new/lib/python3.8/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
    resolved_config_file = cached_file(
  File "/data/code/llama.cpp/new/lib/python3.8/site-packages/transformers/utils/hub.py", line 400, in cached_file
    raise EnvironmentError(
OSError: models/34B does not appear to have a file named config.json. Checkout 'https://huggingface.co/models/34B/None' for available files.

@teleprint-me
Contributor

teleprint-me commented Jan 5, 2024

Between work, holidays, and helping out my cousins, I got sick over the holidays, so that's why I went MIA. I spent the last few days fighting off the fever and am starting to feel better now. Wondering if any progress was made; if not, I can pick up where I left off. I'm skipping work over the weekend to recoup, so I'll need something to keep me busy.

I did start on a new project named to_gguf, which was supposed to isolate and reorganize a lot of the client-facing code. It was mostly experimental, so I could play around with it without messing with the upstream source code. Any progress I make there I could push upstream if there is interest in it. If not, it would be educational for me regardless.

@teleprint-me
Contributor

teleprint-me commented Jan 6, 2024

I've been reviewing the convert.py script and verified the removal of both BPE and SentencePiece Vocab classes. My understanding was that these vocabularies would be merged, as indicated in my comment on PR #3633.

My efforts were aimed at maintaining backward compatibility while looking forward. This approach may have seemed like digression, but it was strategic, anticipating a more integrated solution which unfortunately didn't materialize as expected.

I have a script that follows what I envisioned for PR #3633. It's currently non-functional but fixable. Integrating all three vocab classes (BpeVocab, SentencePieceVocab, HfVocab) into one is a complex task that could lead to brittle code. However, I believe using a factory pattern, as I initially proposed, provides a more maintainable and scalable approach. This method allows for cleaner separation of concerns and easier adaptation to future changes. I'm committed to exploring this path, considering the time constraints and balancing other commitments.

My goal is to have a working fix in the next few days, though realistically, it might take a week of focused effort. I appreciate everyone's patience and welcome any suggestions or collaborative efforts.
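
For illustration, here is a minimal sketch of the factory pattern idea (the three vocab class names come from the comment above; the stubs and wiring are hypothetical, not the code that was merged):

from pathlib import Path

# stand-ins for the existing vocab classes, only so the sketch runs standalone
class BpeVocab:
    def __init__(self, path: Path) -> None:
        self.path = path

class SentencePieceVocab:
    def __init__(self, path: Path) -> None:
        self.path = path

class HfVocab:
    def __init__(self, path: Path) -> None:
        self.path = path

class VocabFactory:
    # map a user-selected vocab type to the class that knows how to load it
    _loaders = {"bpe": BpeVocab, "spm": SentencePieceVocab, "hfft": HfVocab}

    def load_vocab(self, vocab_type: str, path: Path):
        try:
            cls = self._loaders[vocab_type]
        except KeyError as e:
            raise ValueError(f"Unsupported vocabulary type: {vocab_type}") from e
        return cls(path)

For example, VocabFactory().load_vocab("spm", Path("models/7B")) would return a SentencePieceVocab bound to that directory.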

@teleprint-me
Contributor

I attempted to simplify the process by trying "the easy way," but unfortunately, it didn't behave as expected. As a result, I opened issue #28370 in the Transformers repository to address the underlying problem.

The previously used implementation in convert.py, BpeVocab and SentencePieceVocab, had fewer lines of code and appeared more straightforward. However, since the new VocabLoader doesn't work as intended, further investigation was needed.

Here's a snippet that showcases the idea:

class VocabLoader:
    def __init__(self, params: Params, fname_tokenizer: Path) -> None:
        try:
            from transformers import AutoTokenizer
        except ImportError as e:
            raise ImportError(
                "To use VocabLoader, please install the `transformers` package. "
                "You can install it with `pip install transformers`."
            ) from e

        try:
            # default path: the fast tokenizer (needs config.json / tokenizer.json)
            self.tokenizer = AutoTokenizer.from_pretrained(str(fname_tokenizer), trust_remote_code=True)
        except ValueError:
            # fall back to the slow tokenizer
            self.tokenizer = AutoTokenizer.from_pretrained(str(fname_tokenizer), use_fast=False, trust_remote_code=True)
        except OSError:
            # Handle SPM model here (load tokenizer.model directly);
            # left unimplemented in this snippet
            raise NotImplementedError("SPM fallback not implemented in this snippet")

Exploring the convert_slow_tokenizer.py script from the Transformers repository allowed me to capture the differences and learn more about the problem.

I'm uncertain whether it's best to wait for a resolution, revert to the previous code in convert.py for now, or explore other potential solutions. Nevertheless, the exploration was worthwhile, and I appreciate the learning experience it provided.

What's convenient about this approach is that we can convert to the fast tokenizer using the convert_slow_tokenizer function if it were to operate as expected.
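
A sketch of that idea, assuming transformers' convert_slow_tokenizer utility and a raw tokenizer.model (paths are placeholders):

from transformers import LlamaTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

# build the slow, sentencepiece-backed tokenizer from the original tokenizer.model,
# then convert it to a fast (tokenizers-backed) one and write out tokenizer.json
slow = LlamaTokenizer("models/7B/tokenizer.model")
fast = convert_slow_tokenizer(slow)  # returns a tokenizers.Tokenizer
fast.save("models/7B/tokenizer.json")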

Just exploring all of my options before jumping in and changing anything.

@ggerganov
Owner

@teleprint-me Thank you for looking into this, and I appreciate the effort.

I'm honestly a bit lost myself here - I didn't expect this to be such a difficult change, and I really don't understand what the issue is now.

In any case, I don't want you to spend too much effort looking into this.
I will simply revert #3633 if this does not get fixed in a day or two.

@teleprint-me
Contributor

teleprint-me commented Jan 6, 2024

@ggerganov I appreciate your feedback and understanding. I believe there's a middle ground that can address this issue effectively without compromising the existing setup.

The core problem here is straightforward:

User Expectation: Users expect the tokenizer.model to be utilized.

Current Behavior: transformers.AutoTokenizer expects a config.json and tokenizer.json.

Previously, the BpeVocab and SentencePieceVocab classes seamlessly handled tokenizers for BPE and SPM models. However, with the introduction of VocabLoader, it seems that BPE and SPM support was unintentionally dropped. While I'm uncertain about the exact rationale behind this change, I suspect it was an oversight during the transition to transformers.

Consequently, the original models fail to convert now simply because they were initially created with torch and sentencepiece instead of transformers.

Here's a glimpse of the directory structure highlighting the difference:

Original Models:

(.venv) git:(convert.py | Δ) λ tree local/models/facebook/llama-2-7b-chat
local/models/facebook/llama-2-7b-chat
├── checklist.chk
├── consolidated.00.pth  # original model
├── params.json  # original hyperparameters
├── tokenizer_checklist.chk
└── tokenizer.model  # llama's vocab: the SPM model

Transformers-Supported Variation:

(.venv) git:(convert.py | Δ) λ tree local/models/facebook/llama-2-7b-chat-hf
local/models/facebook/llama-2-7b-chat-hf
├── config.json  # transformers hyperparameters
├── generation_config.json
├── ggml-model-f16.gguf  # successfully created with convert.py
├── LICENSE.txt
├── model-00001-of-00002.safetensors  # safetensors model part
├── model-00002-of-00002.safetensors
├── model.safetensors.index.json
├── pytorch_model-00001-of-00002.bin  # torch model part
├── pytorch_model-00002-of-00002.bin
├── pytorch_model.bin.index.json
├── README.md
├── special_tokens_map.json
├── tokenizer_config.json
├── tokenizer.json  # llama's vocab: the transformers model
├── tokenizer.model  # original sentencepiece model
└── USE_POLICY.md
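
A small heuristic sketch of how these layouts can be told apart from the files present (illustrative only, not convert.py's actual detection logic):

from pathlib import Path

def guess_vocab_format(model_dir: str) -> str:
    # decide which vocabulary file a directory provides, based on the two layouts above
    d = Path(model_dir)
    if (d / "tokenizer.json").exists():
        return "hf"   # transformers fast-tokenizer vocabulary
    if (d / "tokenizer.model").exists():
        return "spm"  # original SentencePiece vocabulary
    if (d / "vocab.json").exists():
        return "bpe"  # BPE-style vocabulary
    raise FileNotFoundError(f"no known vocab file in {model_dir}")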

I'm actively working on resolving this issue over the next couple of days. Let's aim for a solution that allows users to continue utilizing their existing models seamlessly while also aligning with transformers standards.

I appreciate your patience and support.

@ggerganov
Owner

The convert.py script should support the vanilla LLaMA models - this is the primary purpose of the script, and the instructions in the README require it. For HF models, we have the convert-hf-to-gguf.py script. Based on the directory structures that you showed, I'm not sure why convert.py was chosen for implementing the transformers.AutoTokenizer support. It looks like something more suitable for convert-hf-to-gguf.py?

Let's aim for a solution that allows users to continue utilizing their existing models seamlessly while also aligning with transformers standards.

Of course, I just don't want to have invalid instructions in the README, so a temporary solution is to go back to the working version until we figure out the proper way to implement this.

@teleprint-me
Contributor

I completely understand. The integration in PR #3633 was primarily aimed at enhancing support for multilingual models. While the outcomes weren't as anticipated, the thoughtfulness and forward-thinking behind these efforts are invaluable.

I'm in the process of restoring the original functionality of convert.py and, simultaneously, exploring ways to smoothly integrate the advancements from #3633.

Technical debt is a natural part of any rapidly evolving project, and llama.cpp is no exception. The overwhelming demand and the community's active engagement are testaments to the incredible platform you've built.

I believe adopting a more collaborative strategy for managing these updates could be beneficial as the project continues to progress. It's about finding the right balance and ensuring that no single person bears the whole weight of this dynamic environment.

Your leadership and the community's passion are the driving forces behind the success of llama.cpp. You built something amazing here and people want to be a part of it. I think that's pretty cool.

I'm working towards a preliminary PR. It should be up soon. I'll keep you in the loop.

@ipolytex

ipolytex commented Jan 7, 2024

I am running into this same issue. Can you post if you find a resolution?

As a temporary fix you can simply check out an older commit, such as:

git checkout 0353a1840134b24b07ab61fd4490192f28c4db6b

This is the latest commit before this bug appeared.

Hi, do you have the name of the branch?

@teleprint-me
Contributor

Screencast.from.2024-01-07.15-30-21.webm

It's working!

I was able to preserve support for transformers as well!

🥳

@teleprint-me
Contributor

teleprint-me commented Jan 8, 2024

@ggerganov It's up: #4818

@ggerganov
Owner

@HighTemplar-wjiang and all, please confirm that conversion works with the latest master.

@teleprint-me
Contributor

Note on n_vocab Parameter Handling:

In the convert.py script, there is now a check to ensure the n_vocab parameter in params.json is properly defined before proceeding with the model conversion process. This parameter represents the vocabulary size of the model and is crucial for the conversion process to accurately reflect the model's structure.

I observed that in some instances, n_vocab is set to -1, a value that signifies an undefined or unspecified vocabulary size. The implications of proceeding with a conversion process under this condition are not fully understood and could potentially lead to inaccuracies or errors during model inference.

To safeguard against such uncertainties and to maintain the integrity and reliability of the conversion process, I decided to raise a ValueError if n_vocab is found to be -1. This is a precautionary measure to prompt users to manually verify and set an appropriate value for n_vocab in params.json. This approach ensures that the conversion process is carried out with a clearly defined and valid vocabulary size, reducing the risk of downstream issues.

I recognize that this decision may necessitate an additional step for users, but I think it is a necessary measure to ensure the correctness and stability of the converted models. We can further investigate the impact and significance of the n_vocab parameter and may update this behavior in future versions of the script based on new insights and findings.
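
A minimal sketch of the kind of guard described above (not the exact convert.py code; params and vocab are assumed to come from the surrounding conversion logic):

def check_vocab_size(params, vocab) -> None:
    # refuse to convert while params.json leaves the vocabulary size undefined
    if params.n_vocab == -1:
        raise ValueError(
            "The model's vocab size is set to -1 in params.json. "
            f"Please update it manually. Maybe {vocab.vocab_size}?"
        )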

Thanks @ggerganov.

@clairvoyant

clairvoyant commented Jan 11, 2024

The conversion fails with the latest master. Reverting to 0353a18 allows convert.py to progress.

The new backtrace with master is:

$ git pull --rebase
$ git checkout master
$ python3.9 convert.py models/7B/
......
skipping tensor blk.20.attn_rot_embd
skipping tensor blk.21.attn_rot_embd
skipping tensor blk.22.attn_rot_embd
skipping tensor blk.23.attn_rot_embd
skipping tensor blk.24.attn_rot_embd
skipping tensor blk.25.attn_rot_embd
skipping tensor blk.26.attn_rot_embd
skipping tensor blk.27.attn_rot_embd
skipping tensor blk.28.attn_rot_embd
skipping tensor blk.29.attn_rot_embd
skipping tensor blk.30.attn_rot_embd
skipping tensor blk.31.attn_rot_embd
Writing models/7B/ggml-model-f16.gguf, format 1
Traceback (most recent call last):
  File "/opt/test/software/llama.cpp/convert.py", line 1658, in <module>
    main(sys.argv[1:])  # Exclude the first element (script name) from sys.argv
  File "/opt/test/software/llama.cpp/convert.py", line 1643, in main
    OutputFile.write_all(
  File "/opt/test/software/llama.cpp/convert.py", line 1188, in write_all
    check_vocab_size(params, vocab, pad_vocab=pad_vocab)
  File "/opt/test/software/llama.cpp/convert.py", line 993, in check_vocab_size
    raise ValueError(
ValueError: The model's vocab size is set to -1 in params.json. Please update it manually. Maybe 32000?

@teleprint-me
Contributor

teleprint-me commented Jan 11, 2024

@clairvoyant

This is a known issue with params.json.

ValueError: The model's vocab size is set to -1 in params.json. Please update it manually. Maybe 32000?

You'll need to fix it manually by updating the value from -1 to 32000.
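
For example, the edit in params.json looks like this (the vocab_size key name is assumed from the original LLaMA release; other keys are left untouched):

"vocab_size": -1

becomes

"vocab_size": 32000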

@teleprint-me
Contributor

teleprint-me commented Jan 11, 2024

@ggerganov

Maybe we should issue a warning and revert to the fallback instead in the future? Commit f36a777 simply used a fallback without any warning.

Maybe doing something similar would reduce the number of false negatives reported? I'm hesitant to support this approach because it isn't as concise as the exception and would be more difficult to spot as a result.
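
For reference, the "warn and fall back" alternative would look roughly like this (a sketch, not committed code):

import warnings

def check_vocab_size(params, vocab) -> None:
    # warn instead of raising, then fall back to the tokenizer's vocabulary size
    if params.n_vocab == -1:
        warnings.warn(
            f"n_vocab is -1 in params.json; falling back to {vocab.vocab_size}"
        )
        params.n_vocab = vocab.vocab_size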

@clairvoyant

Commit 0353a1 uses commit 799a1cb for convert.py. Commit 799a1cb doesn't apply any checks for the hyperparameters.

Please report here if you experience any issues during inference. I suspect you will, but they may range from subtle to obvious, which could be misconstrued as an issue with the model.

Personally, I'm not really a fan of the assertion as I consider this to be a brittle approach, but it's the only way to prevent false positives/negatives and avoid strange bugs throughout the pipeline. It's a time saver in the long run. Given the trade-offs, I made the decision to enforce it.

@ggerganov
Owner

It's fine as it is, since there is a workaround suggested. The old method created issues with other models where n_vocab cannot be deduced correctly.

@calebheinzman

calebheinzman commented Jan 26, 2024

Was there a solution to this?
I'm running:

!python llama.cpp/convert.py testing \
  --outfile testing.gguf \
  --outtype q8_0

And getting the error:

Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00002-of-00002.bin
params = Params(n_vocab=32000, n_embd=4096, n_layer=32, n_ctx=4096, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyQ8_0: 7>, path_model=PosixPath('testing'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': None, 'tokenizer.json': PosixPath('testing/tokenizer.json')}
Loading vocab file 'testing/tokenizer.json', type 'spm'
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1471, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1439, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
  File "/content/llama.cpp/convert.py", line 1325, in load_vocab
    vocab = SentencePieceVocab(
  File "/content/llama.cpp/convert.py", line 391, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 

I also tried 2 of the commits mentioned (0353a18, 799a1cb) and got the following error:

Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00002-of-00002.bin
params = Params(n_vocab=32000, n_embd=4096, n_layer=32, n_ctx=4096, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyQ8_0: 7>, path_model=PosixPath('testing'))
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1268, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1248, in main
    vocab = load_vocab(vocab_dir, args.vocabtype)
  File "/content/llama.cpp/convert.py", line 1138, in load_vocab
    raise FileNotFoundError(
FileNotFoundError: Could not find tokenizer.model in testing or its parent; if it's in another directory, pass the directory as --vocab-dir

@teleprint-me
Contributor

@calebheinzman

convert.py only supports Llama, Llama-2, CodeLlama, Mistral, and Mixtral. For transformers (aka Hugging Face) models, the --vocab-type hfft flag was supposed to be used to support other languages, because the script otherwise defaults to the sentencepiece vocab, which uses tokenizer.model. This doesn't mean you can use any transformers model with convert.py, as it targets only the original models as initially intended. For any other supported architectures, convert-hf-to-gguf.py should be used instead.
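
For example, a conversion like the one above would be invoked roughly as follows (a sketch; the flag name comes from the description above, the paths are placeholders):

python3 convert.py testing --vocab-type hfft --outfile testing.gguf --outtype q8_0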

As for whether it was fixed or not: @cebtenzzre partially rolled back the patch I applied in PR #5041 due to conflicts with the code style, preferring style over the actual functional utility of the script. I don't care about this; I just care about whether or not the script functions as advertised.

Personally, I just use formatters for whatever language I'm working in and don't think about it beyond my initial preferred settings, which are very limited because I'd rather put energy into the code than worry about the way it looks. My primary concerns are always readability, maintainability, and functionality.

I attempted to find a middle ground within the patch so that the work previously contributed by both @strutive07 and @TheBloke would continue to function, allowing all three vocabulary styles to operate depending on user preference: Byte-Pair Encoding, Google's SentencePiece, and the Hugging Face fast tokenizer used by the transformers module.

The only way the transformers implementation is triggered is by passing the boolean flag to the factory that handles the vocabulary required by the model for conversion. While technically still functional, this is no longer apparent due to the removal of this information.

I never got a chance to update the docs, implement the complete changes, or do the other work it needed. It has been modified since, and I haven't had time to really look into it, as that requires a much more in-depth review and I have other projects that also need my attention.

@github-actions
Contributor

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 18, 2024
Contributor

github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024