Dumb question: definitions.py model parameters #10

Closed

Dougie777 opened this issue Sep 10, 2023 · 2 comments

Comments
@Dougie777

I am very sorry for this newbie question. In definitions.py there are a number of parameters for each model. I assume these correspond to the settings given on each model's page. My question is: how do I know which variable names you have used for each setting? For example:

airoboros_l2_13b_gguf = LlamaCppModel(
    model_path="TheBloke/Airoboros-L2-13B-2.1-GGUF",  # automatic download
    max_total_tokens=8192,
    rope_freq_base=26000,
    rope_freq_scale=0.5,
    n_gpu_layers=30,
    n_batch=8192,
)

rope_freq_base: it doesn't appear in any of your other examples. I assume your examples use only a subset of all possible parameters. How can I find out which variable names you used? Is there a mapping chart somewhere?

Again I apologize for the newbie question that is probably painfully obvious to others.

Thanks, Doug

@c0sogi
Owner

c0sogi commented Sep 10, 2023

RoPE is a technique that lets a model operate at a larger context (max_total_tokens) than the context length it was trained with (probably 4096). That's what you're seeing here. By default, if you do not set the parameters, they are calculated automatically. In other words, just delete the lines with parameters starting with rope.
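
For example, your model entry would become the following (a sketch based on your own snippet; the rope values are then derived automatically):

airoboros_l2_13b_gguf = LlamaCppModel(
    model_path="TheBloke/Airoboros-L2-13B-2.1-GGUF",  # automatic download
    max_total_tokens=8192,
    # rope_freq_base / rope_freq_scale omitted: calculated automatically
    n_gpu_layers=30,
    n_batch=8192,
)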

There's no obvious golden rule for rope_freq_base, but I recommend using these calculations:

    def calculate_rope_alpha(self) -> float:
        """Calculate the RoPE alpha based on the n_ctx.
        Assume that the trained token length is 4096."""
        # The following formula is obtained by fitting the data points
        # (comp, alpha): [(1.0, 1.0), (1.75, 2.0), (2.75, 4.0), (4.1, 8.0)]
        compress_ratio = self.calculate_rope_compress_ratio()
        return (
            -0.00285883 * compress_ratio**4
            + 0.03674126 * compress_ratio**3
            + 0.23873223 * compress_ratio**2
            + 0.49519964 * compress_ratio
            + 0.23218571
        )

    def calculate_rope_freq(self) -> float:
        """Calculate the RoPE frequency based on the n_ctx.
        Assume that the trained token length is 4096."""
        return 10000.0 * self.calculate_rope_alpha() ** (64 / 63)

    def calculate_rope_compress_ratio(self) -> float:
        """Calculate the RoPE embedding compression ratio based on the n_ctx.
        Assume that the trained token length is 4096."""
        return max(self.max_total_tokens / Config.trained_tokens, 1.0)

    def calculate_rope_scale(self) -> float:
        """Calculate the RoPE scaling factor based on the n_ctx.
        Assume that the trained token length is 4096."""
        return 1 / self.calculate_rope_compress_ratio()

Note that these auto-calculation methods are present in the dev branch now and will be merged soon.
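
For instance, with max_total_tokens=8192 and a trained length of 4096, the formulas above give numbers close to the ones in your snippet. Here is a standalone sketch of the same math (the constants are copied from the fit above):

trained_tokens = 4096  # assumed training context length
max_total_tokens = 8192

compress_ratio = max(max_total_tokens / trained_tokens, 1.0)  # 2.0
alpha = (
    -0.00285883 * compress_ratio**4
    + 0.03674126 * compress_ratio**3
    + 0.23873223 * compress_ratio**2
    + 0.49519964 * compress_ratio
    + 0.23218571
)  # ~2.43
rope_freq_base = 10000.0 * alpha ** (64 / 63)  # ~24600, close to the 26000 you set
rope_freq_scale = 1 / compress_ratio  # 0.5, matches your snippet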

You can look up the other parameters in this file:
llama_api/schemas/models.py. I recommend using an IDE such as VSCode, as it will show you hints for the hidden parameters.
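
If you prefer, you can also list the accepted parameter names from a Python shell. A sketch, assuming LlamaCppModel is importable from that module and exposes a normal constructor signature:

import inspect

from llama_api.schemas.models import LlamaCppModel

# Print every constructor parameter and its default value, i.e. the
# variable names you can use in definitions.py.
for name, param in inspect.signature(LlamaCppModel).parameters.items():
    print(name, "=", param.default)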

@Dougie777
Author

Thank you so much!
