Pros and cons of the configuration setup #11

@LysandreJik

Could you mention the reasons why you opted for a configuration setup that is different from transformers'?

From a previous conversation I remember it was in order not to repeat the arguments twice; however, looking at the schedulers, it seems the arguments are still repeated:

```python
def __init__(
    self,
    timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule="linear",
    trained_betas=None,
    timestep_values=None,
    variance_type="fixed_small",
    clip_predicted_image=True,
    tensor_format="np",
):
    super().__init__()
    self.register(
        timesteps=timesteps,
        beta_start=beta_start,
        beta_end=beta_end,
        beta_schedule=beta_schedule,
        trained_betas=trained_betas,
        timestep_values=timestep_values,
        variance_type=variance_type,
        clip_predicted_image=clip_predicted_image,
    )
```

From a quick read through the file, I don't understand why I have to register something, or what exactly needs to be registered. Some values are set as attributes directly below the `register` call, while others are passed to the `register` method, which seems to act as an `__init__` method with required arguments.

Is it in order to isolate a `dict_to_save` that will serialize only the kwargs passed to the `register` method? Wouldn't it be simpler with an `__init__` method in the `ConfigMixin` instead?

-> Or is it a choice so as not to have a configuration object that you pass around everywhere, instead making the configuration a mixin for both schedulers and pipelines?
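To make the question concrete, here is a minimal sketch of what I understand the pattern to be — a mixin whose `register` call collects only the explicitly passed kwargs into a dict for serialization, while plain attributes stay out of the config. All names here (`_dict_to_save`, `save_config`, `Scheduler`) are hypothetical, not the actual diffusers API:

```python
import json


class ConfigMixin:
    """Hypothetical sketch: serialize only what was registered."""

    def register(self, **kwargs):
        # Only explicitly registered kwargs end up in the config dict;
        # attributes set directly on `self` are not serialized.
        self._dict_to_save = dict(kwargs)

    def save_config(self, path):
        with open(path, "w") as f:
            json.dump(self._dict_to_save, f)


class Scheduler(ConfigMixin):
    def __init__(self, timesteps=1000, beta_start=0.0001):
        super().__init__()
        self.register(timesteps=timesteps, beta_start=beta_start)
        # Derived values are plain attributes and stay out of the config.
        self.num_steps = timesteps
```

Under this reading, `register` plays the role that an explicit config object (or an `__init__` on the mixin) would otherwise play — hence my question about which trade-off motivated it.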
