feat: microgen - adds two init file templates #2286
Conversation
{{ import }}
{%- endfor %}

__all__ = (
{%- for item in all_list %}
    "{{ item }}",
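For illustration, here is a plain-Python sketch of what the __all__ portion of this template emits for a given all_list. The function and item names are hypothetical; this just mimics the loop by hand rather than using the real template engine.

```python
# Sketch: mimic the __all__ section of the template for a pre-sorted
# list of exported names (hypothetical values, not the real list).
def render_all_section(all_list):
    lines = ["__all__ = ("]
    for item in all_list:              # {%- for item in all_list %}
        lines.append(f'    "{item}",')  # "{{ item }}",
    lines.append(")")
    return "\n".join(lines)

print(render_all_section(["Access", "BigQueryClient", "Dataset"]))
```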
Without context for how this is used, I wonder if this is the right level of abstraction? Seems to me that this requires whoever uses this to keep two separate lists in sync. I might prefer to see something like this:
{% for module, obj, alias in imports %}
from {{ module }} import {{ obj }} as {{ alias }}
{%- endfor %}
__all__ = (
{%- for module, obj, alias in imports %}
    "{{ alias }}",
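The suggestion above can be sketched in plain Python: both the import lines and the __all__ entries are derived from one list of (module, obj, alias) tuples, so the two sections cannot drift out of sync. The function name and sample tuples below are hypothetical, standing in for the real template.

```python
# Sketch of the single-list approach: one source of truth drives both
# the imports and __all__. Tuple values here are illustrative only.
def render_init(imports):
    lines = [f"from {module} import {obj} as {alias}"
             for module, obj, alias in imports]
    lines.append("__all__ = (")
    lines.extend(f'    "{alias}",' for _, _, alias in imports)
    lines.append(")")
    return "\n".join(lines)

print(render_init([
    (".services.dataset_service", "DatasetServiceClient", "DatasetServiceClient"),
    (".types.dataset", "Access", "Access"),
]))
```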
This is a good question. In the absence of context, it makes total sense to suggest this.
It presupposes that we have discovered all the needed imports and then generate the file from scratch.
That isn't what is happening here. This is intended to quickly overwrite one GAPIC-generated file so that we can include two new lines, and to do so simply and easily.
We start with a file that looks similar to this:
from .services.dataset_service import DatasetServiceClient
from .services.job_service import JobServiceClient
...
from .types.biglake_config import BigLakeConfiguration
from .types.clustering import Clustering
...
from .types.dataset import Access
from .types.dataset import Dataset
...
from .types.dataset import DatasetAccessEntry
from .types.dataset import DatasetList
...
from .types.time_partitioning import TimePartitioning
from .types.udf_resource import UserDefinedFunctionResource
__all__ = (
"Access",
"AggregationThresholdPolicy",
...
"BigtableColumn",
"BigtableColumnFamily",
...
"ColumnReference",
"ConnectionProperty",
...
"GetServiceAccountRequest",
"GetServiceAccountResponse",
...
"VectorSearchStatistics",
"ViewDefinition",
)
And try to create a file that looks like this:
from .services.centralized_service import BigQueryClient # NEW first line in this alphabetical list
from .services.dataset_service import DatasetServiceClient
from .services.job_service import JobServiceClient
...
from .types.biglake_config import BigLakeConfiguration
from .types.clustering import Clustering
...
from .types.dataset import Access
from .types.dataset import Dataset
...
from .types.dataset import DatasetAccessEntry
from .types.dataset import DatasetList
...
from .types.time_partitioning import TimePartitioning
from .types.udf_resource import UserDefinedFunctionResource
__all__ = (
"Access",
"AggregationThresholdPolicy",
...
"BigQueryClient", # NEW line in the middle of this alphabetical list
"BigtableColumn",
"BigtableColumnFamily",
...
"ColumnReference",
"ConnectionProperty",
...
"GetServiceAccountRequest",
"GetServiceAccountResponse",
...
"VectorSearchStatistics",
"ViewDefinition",
)
Our process for making this file is simply:
- read the lines, in their entirety, from the GAPIC-generated file for the import section into a list
- read the lines, in their entirety, from the GAPIC-generated file for the __all__ section into a list
- add the two lines we care about (e.g. the NEW lines that reference BigQueryClient) to their respective lists and sort each list alphabetically so the sections come out alphabetically
We then use those lists as inputs to the template in each section.
Trying to break the lines into component parts (module, object, alias) just complicates what is basically a read a line and then write a line operation.
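The read-insert-sort step described above can be sketched as follows. The helper name is hypothetical, and the sample lines are abbreviated stand-ins for the full GAPIC-generated sections; only the centralized_service import comes from the example earlier in the thread.

```python
# Sketch of the line-based overwrite: take the import lines and __all__
# entries read verbatim from the GAPIC-generated file, splice in the two
# new BigQueryClient lines, and re-sort so each section stays alphabetical.
def splice_and_sort(import_lines, all_entries):
    new_import = "from .services.centralized_service import BigQueryClient"
    imports = sorted(import_lines + [new_import])
    entries = sorted(all_entries + ["BigQueryClient"])
    return imports, entries

imports, entries = splice_and_sort(
    ["from .services.dataset_service import DatasetServiceClient"],
    ["Access", "Dataset"],
)
# The new import sorts first ("centralized" < "dataset"), and
# "BigQueryClient" lands mid-list in the sorted __all__ entries.
```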
@tswast Following up: how do you feel about this PR? Approve? Needs more work?
* feat: adds _helpers.py.js template
* Updates with two usage examples
* updates the license header
This adds two templates used to generate two different types of __init__ files:
- an __init__.py that defines the BigQueryClient within the services directory (and only the BigQueryClient)
- an __init__.py that defines the entire BigQuery interface for the library. This lists all the public classes, including the BigQueryClient.