Expand the 'Extending' docs with an example. #113187

Merged
Doc/library/importlib.metadata.rst (78 additions, 0 deletions)

@@ -406,6 +406,84 @@ metadata in locations other than the file system, subclass
a custom finder, return instances of this derived ``Distribution`` in the
``find_distributions()`` method.

Example
-------

Consider for example a custom finder that loads Python
modules from a database::

    import importlib.abc
    import sys
    from importlib.machinery import ModuleSpec

    class DatabaseImporter(importlib.abc.MetaPathFinder):
        def __init__(self, db):
            self.db = db

        def find_spec(self, fullname, path, target=None) -> ModuleSpec:
            return self.db.spec_from_name(fullname)

    sys.meta_path.append(DatabaseImporter(connect_db(...)))
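
The ``spec_from_name`` call is deliberately left abstract. Purely as an
illustration of what the database side might do (this loader and the
``lookup_source`` query are hypothetical, not part of this change), a spec
could be assembled roughly like this::

    import importlib.util

    class DatabaseLoader(importlib.abc.Loader):
        def __init__(self, source):
            self.source = source

        def exec_module(self, module):
            # Execute source text retrieved from the database in the new module.
            exec(compile(self.source, module.__name__, 'exec'), module.__dict__)

    class ExampleDatabase:
        def spec_from_name(self, fullname):
            source = self.lookup_source(fullname)  # hypothetical query
            if source is None:
                return None  # let other finders handle this name
            return importlib.util.spec_from_loader(fullname, DatabaseLoader(source))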

That importer now presumably provides importable modules from a
database, but it provides no metadata or entry points. For this
custom importer to provide metadata, it would also need to implement
``DistributionFinder``::
Comment on lines +425 to +427

This says that to provide metadata, one must also implement DistributionFinder, but the parent class, importlib.abc.MetaPathFinder, is replaced by DistributionFinder in the snippet below. The reader must infer that DistributionFinder is a subclass of importlib.abc.MetaPathFinder.

Member Author

I can see a couple of ways to highlight this detail. One could change the narrative to something like:

...it would also need to implement DistributionFinder (a subclass of MetaPathFinder):

Alternatively, the DistributionFinder could be changed from a raw identifier to a reference to :class:`DistributionFinder`, which captures that detail. That, of course, would require that the API docs be implemented, which is something I'm reluctant to do because it's something a machine can do (just not in the context of CPython AFAIK). I guess the CPython docs could just link back to the importlib_metadata docs, i.e.:

...it would also need to implement `DistributionFinder <https://importlib-metadata.readthedocs.io/en/latest/api.html#importlib_metadata.DistributionFinder>`_.

WDYT?

Member

Personally, I would much rather the full documentation be consolidated into the CPython documentation, rather than calling out to the importlib_metadata docs. While I appreciate this is extra work, I think it is worthwhile nevertheless, because it brings importlib.metadata in line with the rest of the CPython documentation (in terms of style, maintenance workflow, and ease of access).

It's not me that would be doing the work, though, so I'm just offering this as my viewpoint. If you feel as maintainer that the cost isn't justified by the benefits, then that's your call.


    from importlib.metadata import DistributionFinder

    class DatabaseImporter(DistributionFinder):
        ...

        def find_distributions(self, context=DistributionFinder.Context()):
            query = dict(name=context.name) if context.name else {}
Member

It's not clear why it's OK to only respect the name attribute of the context. This goes back to the fact that it's not at all clear what kwargs are valid when calling distributions(). "Anything the distribution finders might use" isn't of much practical use - a caller isn't guaranteed to know what finders are even present. And the documentation for Context mentions the path attribute - why is it OK to ignore that, but not to ignore name?

Member Author

In a3af10d, I've added a paragraph describing why path was ignored and when it should be considered.

            for dist_record in self.db.query_distributions(query):
                yield DatabaseDistribution(dist_record)

In this way, ``query_distributions`` would return records for
each distribution served by the database matching the query. For
example, if ``requests-1.0`` is in the database, ``find_distributions``
would yield a ``DatabaseDistribution`` for ``Context(name='requests')``
or ``Context(name=None)``.
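
Purely for illustration (the record shape and the behavior of
``query_distributions`` are assumptions of this sketch, not part of the
interface), a toy backend might look like::

    from types import SimpleNamespace

    class FakeDatabase:
        def query_distributions(self, query):
            # A single made-up record; a real backend would consult its own storage.
            records = [SimpleNamespace(name='requests', version='1.0', entry_points=[])]
            name = query.get('name')
            return [record for record in records if name is None or record.name == name]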

For the sake of simplicity, this example ignores ``context.path``\. The
``path`` attribute defaults to ``sys.path`` and is the set of import paths to
be considered in the search. A ``DatabaseImporter`` could potentially function
without any concern for a search path; assuming the importer does no
partitioning, the "path" would be irrelevant. To illustrate the purpose of
``path``, the example would need a more complex ``DatabaseImporter`` whose
behavior varied depending on ``sys.path``/``PYTHONPATH``. In that case,
``find_distributions`` should honor ``context.path`` and only yield
``Distribution``\ s pertinent to that path.
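
Purely as an illustration of that more complex case (the ``db://`` path prefix
and the partitioning scheme are invented for this sketch), a path-aware
importer might filter on ``context.path``::

    class PartitionedDatabaseImporter(DistributionFinder):
        def __init__(self, db):
            self.db = db

        def find_distributions(self, context=DistributionFinder.Context()):
            # Only respond when the caller's search path targets this importer.
            if not any(str(entry).startswith('db://') for entry in context.path):
                return
            query = dict(name=context.name) if context.name else {}
            for dist_record in self.db.query_distributions(query):
                yield DatabaseDistribution(dist_record)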

``DatabaseDistribution``, then, would look something like::

    class DatabaseDistribution(importlib.metadata.Distribution):
Member

Where is it documented that you only need to implement read_text and locate_file? And for that matter where is it documented which other methods need read_text to handle a filename of METADATA and which need it to handle a filename of entry_points.txt?

Why is it OK to omit RECORD? I note that the files property says in its documentation "Result is None if the metadata file that enumerates files (i.e. RECORD for dist-info, or installed-files.txt or SOURCES.txt for egg-info) is missing." How do I know which of RECORD, installed-files.txt or SOURCES.txt I should support? (And by "how do I know?" I mean "where is it documented?")

Member Author

Where is it documented that you only need to implement read_text and locate_file?

The parent class (Distribution) declares those two methods and only those two methods as abstract. It's a shame that Sphinx doesn't render those methods in the API docs as such, perhaps because it only does that for proper ABCs. If one were to implement a subclass of Distribution without implementing those methods, DeprecationWarnings would be emitted at run time. Eventually that deprecation will be replaced with a proper ABC base.

And for that matter where is it documented which other methods need read_text to handle a filename of METADATA and which need it to handle a filename of entry_points.txt?

It's not documented, and much of it is in the implementation details. The METADATA is mentioned briefly here, but I'm not aware of a more canonical description. Interestingly, entry_points.txt isn't even mentioned in that description, meaning it's purely an implementation detail.

Ideally, the packaging ecosystem would have a more rigorous definition of what constitutes metadata. Ideally, the metadata wouldn't be structured based on incidental file formats, but on a proper API and data structures. Unfortunately, that's not what we have, so providers have to simulate the interface that the PathDistribution provides (namely read_text).

Why is it OK to omit RECORD? I note that the files property says in its documentation "Result is None if the metadata file that enumerates files (i.e. RECORD for dist-info, or installed-files.txt or SOURCES.txt for egg-info) is missing." How do I know which of RECORD, installed-files.txt or SOURCES.txt I should support? (And by "how do I know?" I mean "where is it documented?")

I'm not sure RECORD is rigorously defined anywhere. This doc indicates that when installing a wheel, the RECORD should be updated, so if an installer is installing a distribution into the database, it presumably could create that "file". Alternatively, the Distribution could dynamically generate the RECORD based on internal structures. It's not obvious to me that the RECORD has any meaning in a provider that doesn't present any files on the file system.

Much of the packaging ecosystem is designed around assumptions of installation to a file system or zip file. importlib metadata attempts to provide a programmatic interface to these contents while also attempting not to reinforce these assumptions. importlib metadata is not attempting to define the metadata for the ecosystem, but rather adapt to the designs as they exist or evolve.

Basically it's undocumented because it's ill-defined. I welcome the PyPA or others to design a system that addresses these concerns. My instinct is it's not worth documenting these incidental details.

Why is it okay to omit RECORD?

It depends on your callers. If they're expecting to be able to resolve .files() for a distribution, you'll need to either override .files or supply a RECORD or similar.
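
For example, a rough sketch of the first option (record.files is a made-up attribute here, standing in for whatever the backing store knows about its files):

    from importlib.metadata import Distribution, PackagePath

    class DatabaseDistribution(Distribution):
        ...

        @property
        def files(self):
            results = []
            for name in self.record.files:
                path = PackagePath(name)
                path.dist = self  # associate each path with this distribution
                results.append(path)
            return results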

My advice would be to read through the code, figure out what works and what doesn't for a particular use-case, and capture bugs or documentation that would help.

In 9599fd8, I've added a small paragraph suggesting that other files might be provided and directed to the source.

Member

The parent class (Distribution) declares those two methods and only those two methods as abstract. It's a shame that Sphinx doesn't render those methods in the API docs as such, perhaps because it only does that for proper ABCs.

OK, but that's not an argument for not documenting this - if Sphinx isn't rendering the contract expected from users correctly, then you need to add prose text explaining the details.

It's not documented, and much of it is in the implementation details.

Yes, that's essentially my point 🙁

The METADATA is mentioned briefly here, but I'm not aware of a more canonical description.

That's the definitive description (along with the core metadata spec) but that just explains what the file is for. What I'm asking is how I know what methods of the Distribution class rely on having this information available. "Read the source" is not, IMO, a reasonable answer when we're trying to address a docs issue.

Interestingly, entry_points.txt isn't even mentioned in that description, meaning it's purely an implementation detail.

Not true. It's defined here. But again, that's not the point - the question is how I know what Distribution methods I need to override if I don't supply an entry_points.txt file.

I'm not sure RECORD is rigorously defined anywhere.

https://packaging.python.org/en/latest/specifications/recording-installed-packages/#the-record-file

Basically it's undocumented because it's ill-defined. I welcome the PyPA or others to design a system that addresses these concerns. My instinct is it's not worth documenting these incidental details.

I'm not sure I agree with that statement. If nothing else, importlib.metadata is a stdlib module, and as such isn't something the PyPA has influence over. If you (or anyone else interested in maintaining importlib.metadata) think that there are packaging standards (or core Python import system features) that need to be added in order for importlib.metadata to deliver on its documented goal "Through an extension mechanism, the metadata can live almost anywhere" then by all means start the discussion. But expecting others to define a system when there's no indication of what the current implementation is unable to deliver seems optimistic at best 🙁

It depends on your callers.

If I'm writing an import hook, I have no control over what my callers will do. That's the whole point of having a standardised API and interface - callers can do whatever they want with no consideration of whether there are any custom importers in the chain.

If they're expecting to be able to resolve .files() for a distribution, you'll need to either override .files or supply a RECORD or similar.

That's precisely the sort of statement I expect to see in the documentation.

My advice would be to read through the code, figure out what works and what doesn't for a particular use-case, and capture bugs or documentation that would help.

To be blunt, that's way more work than I'm willing to do. I don't think "read the code" is a reasonable thing to suggest to a user who's simply trying to use the API in a way that's (documented as) supported. But as the module maintainer, it's up to you whether you agree with me on that. But I guess that means that I'm simply going to go back to my user and say that if they want importlib.metadata support, they will have to go through the stdlib code and work out how to implement it for themselves. Which isn't ideal, but as we're all volunteers we have to set our own boundaries on what we're willing to do.

Member Author (@jaraco), Jan 6, 2024

It's a shame that Sphinx doesn't render those methods in the API docs as such

I see now that Sphinx does in fact render an abstract prefix for those methods. I'm unsure if it did before and I missed it or something improved, but that's the way one would interpret which methods need to be overridden in a subclass.

[screenshot: rendered API docs showing the abstract prefix on read_text and locate_file]

"Read the source" is not, IMO, a reasonable answer when we're trying to address a docs issue.

That's fair. I'm somewhat reluctant to provide an intermediate layer of documentation between the guide and the documented source API, because that adds yet another dimension that needs to be maintained and kept in sync, especially when there are already synchronicity challenges across the stdlib and the standalone package.

What I'd like to try to do instead of creating a new, intermediate layer between the user guide and the API docs is to close the gap between those through the following:

  • enhance the user guide to provide examples and clarification enough to give a basic understanding of the features and surfaces available,
  • re-organize the codebase so it's more modular and the most user-facing APIs are more prominent, and
  • expand the documentation in the codebase such that the API docs provide more detailed guidance.

How does that sound as a plan?

That's precisely the sort of statement I expect to see in the documentation.

That's helpful. I sometimes struggle to understand what gaps the users see because I'm so entrenched in the implementation that I carry an intuitive understanding of the design. In python/importlib_metadata@8635240, I've updated the docstrings to include more of this guidance.

If nothing else, importlib.metadata is a stdlib module, and as such isn't something the PyPA has influence over.

The importlib metadata design takes a pragmatic approach, attempting to satisfy the users' current needs around packaging metadata based on implicit designs from pkg_resources and pip implementations. Where standards exist, it will honor those. The design has another goal, to support the extensibility afforded by the Python import system. That is, in the same way that importers and finders and specs allow arbitrary customization, importlib metadata wishes to support that customization.

If you (or anyone else interested in maintaining importlib.metadata) think that there are packaging standards (or core Python import system features) that need to be added in order for importlib.metadata to deliver on its documented goal "Through an extension mechanism, the metadata can live almost anywhere" then by all means start the discussion.

The main problem is that many systems assume a file-system implementation or less commonly a zip-file based implementation. They assume packages are installed and uninstalled. But the Python ecosystem doesn't have these constraints. It's conceivable that Python packages could be supplied through a database or loaded in memory from the web or even generated by an AI without any location on disk. In these cases, the concept of a "RECORD" of installed "files" makes much less sense, and the python-packaging docs seem oblivious to this possibility.

And maybe that's fine. Maybe PyPA wishes to focus solely on the filesystem-based packages and leave concerns of custom finders as an exercise for the reader.

Unfortunately, this area is not one where I have time or energy to invest, so I'm relying on others to explore the space and where necessary to ask questions or (preferably) suggest changes to support the stated goals better.

It depends on your callers.

If I'm writing an import hook, I have no control over what my callers will do. That's the whole point of having a standardised API and interface - callers can do whatever they want with no consideration of whether there are any custom importers in the chain.

But the point of an extensible interface is that it may expose a richer surface than what's available to the general caller.

Imagine, for example, a company creates a custom importer that presents a suite of packages that are partitioned by a realm, which might be "public" or "private". Depending on environment variables, the importer might present packages from one or both realms. In this scenario, the caller may want to query distributions by realm, so they might call importlib.metadata.distributions(realm="private"). That would return all distributions on the file system (because the default providers don't discriminate based on realm) and all packages from the custom importer where realm == "private". They might even call importlib.metadata.distributions(path=["custom://..."], realm="private") to exclude normal distributions on the file system.

This mechanism gives the custom importer a means to solicit additional details from the caller beyond "name" and "path" when searching distributions.
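
A rough sketch of how those extra keyword arguments reach a finder (the realm attribute and this finder are hypothetical, and DatabaseDistribution is reused from the example in the PR):

    from importlib.metadata import DistributionFinder, distributions

    class RealmImporter(DistributionFinder):
        def __init__(self, db):
            self.db = db

        def find_distributions(self, context=DistributionFinder.Context()):
            # Extra keyword arguments passed to distributions() become Context attributes.
            realm = getattr(context, 'realm', None)
            for dist_record in self.db.query_distributions(realm=realm):
                yield DatabaseDistribution(dist_record)

    # A caller aware of the custom importer can then pass the extra keyword:
    private_dists = list(distributions(realm='private'))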

I'll elaborate on this aspect in the docs (python/importlib_metadata@a6038c3f2b).

Member (@pfmoore), Jan 6, 2024

How does that sound as a plan?

I'm not particularly happy with the idea of the stdlib documentation (which is basically the "canonical" documentation in many people's minds, including mine) being essentially examples. But I understand that you're not willing to go down the route taken by the other stdlib docs of including handwritten API documentation, so I think that leaves us at a bit of an impasse. Your suggestion is probably an improvement, so I don't want to discourage you, but conversely I don't want to give you the impression that it addresses my concerns when it doesn't.

The importlib metadata design takes a pragmatic approach, attempting to satisfy the users' current needs around packaging metadata based on implicit designs from pkg_resources and pip implementations. Where standards exist, it will honor those. The design has another goal, to support the extensibility afforded by the Python import system. That is, in the same way that importers and finders and specs allow arbitrary customization, importlib metadata wishes to support that customization.

I'm not sure that addresses my point. Yes, importlib.metadata supports packaging standards. But there's no sense in which "the PyPA" designs those standards - they are designed by community members identifying a need and proposing it as a standard, which is what I was suggesting you could do if you want to address the problem that "it's undocumented because it's ill-defined" as you state. And as regards the other goal, that's purely an importlib.metadata goal and as such it's entirely within your control how you satisfy it. Again I don't see why the PyPA would be involved in this.

And maybe that's fine. Maybe PyPA wishes to focus solely on the filesystem-based packages and leave concerns of custom finders as an exercise for the reader.

I can't speak for the PyPA as a whole, but I think the focus there is on packaging rather than on import mechanisms. As such, filesystem-based approaches are really the only ones that fit the PyPA's scope. It's similar to the way that import finders and hooks aren't under the PyPA's remit, but are part of the core (or stdlib).

Maybe that's something that could change, but someone would need to propose the idea (on the Packaging category on Discourse) in order to get the community involved in the idea. Is that something you want to do? I don't personally have either the bandwidth or the interest in tackling it right now.

Unfortunately, this area is not one where I have time or energy to invest, so I'm relying on others to explore the space and where necessary to ask questions or (preferably) suggest changes to support the stated goals better.

I can completely understand and accept this. But if no-one is willing to commit time to non-filesystem based import scenarios, then maybe we shouldn't be blocking progress in areas where there is benefit just in order to maintain a (fairly obscure? is that fair?) capability in the area of esoteric [1] import mechanisms?

If you're talking more generally, then ideally, I'd offer to help, but as importlib.metadata seems very much built on your vision and design approaches, I suspect that doing so would likely just cause frustration for both of us. For example, I assume you'd reject a PR that added manually maintained documentation of the API to the stdlib docs?

But the point of an extensible interface is that it may expose a richer surface than what's available to the general caller.

I think I disagree. If the caller has to know what import hooks are installed, and the import hooks have to expose data in a way that the caller can use, everybody's tightly coupled together. Which is the opposite of what I consider to be an "interoperability standard". Maybe I'm wrong to use that terminology and mindset for importlib.metadata, but it's how I think of standards-based interfaces. The import mechanism itself, for example, is designed so that callers can just do import <something> and everything else happens automatically, with the caller not needing to know the details.

In any case, if this is a genuine intended use case for the library, I think it should be documented. There's no indication anywhere that I can see that something like this is anything other than an accident of implementation, and I certainly wouldn't have anticipated that sort of usage. I'm not sure I see the value of this much flexibility. I can understand the example you give, but are there any real-world use cases of this flexibility being used "in the wild" - or is it simply something that came out of a "don't prohibit anything that doesn't need to be" design approach? I assume it's simply a matter of me not having encountered this type of use case, but again, having a discussion in the docs would make it less of a stretch to understand why the API is designed in this way.

Anyway, I don't know that this discussion is particularly productive - if you're getting benefit from what I'm saying, please say so and I'm happy to continue, but otherwise it just feels like two people with very different design philosophies trying to explain why they disagree. Also, I think that if this discussion is worthwhile, it should probably be occurring somewhere more public (for example, Discourse) - because as it stands, I have no sense of whether my views or yours are more in line with the community view. So I'm happy to continue, either here or in a more public forum, but I don't want to mislead you into thinking that we're converging on any sort of agreement here.

(Edit: Saying a comment is uncharitable doesn't make it any less uncharitable. I apologise if you saw the original version - it was uncalled for. I've reworded the offending sentence.)

Footnotes

  1. Full disclosure, I'm a big fan in theory of non-standard import mechanisms. When I was originally involved in the import hook PEP, I had visions of imports backed by SQLite databases, or imports from URLs, and all sorts of possibilities. The reality seems to be that no-one ever really felt the need to go beyond filesystems and zip files, which sort of saddens me, even now.

Member Author

I'm getting caught up on this again. It's been a rough new year with some seriously distracting life events.

I can't speak for the PyPA as a whole, but I think the focus there is on packaging rather than on import mechanisms. As such, filesystem-based approaches are really the only ones that fit the PyPA's scope.

I see how PyPA deals with things like core metadata specs, which are about metadata for packages and independent of the import mechanisms, and that's similar to how importlib.metadata attempts to abstract those concepts - to give packaging providers the ability to expose their metadata (the fields of the core metadata, plus arbitrary "files" of metadata), regardless of how the package is installed. How the metadata for a distribution is resolved is independent of how the package is imported in Python (though often related). PyPA (or the community) could establish standards for what files are exposed, what formats those files are in, and how that relates to the core metadata specs such that any (custom) provider could provide metadata using a compatible mechanism, even if the packages are still imported using the standard file system (though in practice, the value would be primarily for esoteric importers).

Is that something you want to do? I don't personally have either the bandwidth or the interest in tackling it right now.

Not at the moment, as I've got more important concerns to address, but possibly in the future. I agree the starting point would be a thread in discourse.

I can completely understand and accept this. But if no-one is willing to commit time to non-filesystem based import scenarios, then maybe we shouldn't be blocking progress in areas where there is benefit just in order to maintain a (fairly obscure? is that fair?) capability in the area of esoteric [1] import mechanisms?

There are definitely users relying on the non-filesystem based interfaces, including the mempip project and the rinohtype project. Moreover, the zip-based import scenarios originally required separate handling before zipfile.Path provided an abstraction to treat content in zip files like file system paths, so I've always considered these scenarios nominally "supported" (albeit weakly documented).

I didn't realize that progress was blocked in any dimension. Yes, I agree, a prominent need should not be blocked for an esoteric concern. Maybe I just missed an opportunity in python/importlib_metadata#427 where all you were seeking was a simple way to expose an additional PathDistribution for an editable package... and the concern of non-filesystem based interfaces was only confounding the problem.

For example, I assume you'd reject a PR that added manually maintained documentation of the API to the stdlib docs?

Definitely not. I try to be welcoming to contributions and if I have concerns, raise those in a way that the contributor would ideally agree with. I try to avoid ruling with authority and consider these projects to be community-owned. The one area where I'd push back is if a contribution is likely to create more maintenance burden that I would be expected to bear.

are there any real-world use cases of this flexibility being used "in the wild" - or is it simply something that came out of a "don't prohibit anything that doesn't need to be" design approach?

It came about when I realized that the signature of find_distributions was changing as certain key features were added (find distributions for a subset of paths, find distributions for a given name, ...) and I was concerned that the signature of that API would continue to evolve and expand until it had 100 parameters to cover each and every esoteric use-case.

I think you're right that ultimately this interface has stayed pretty static, with only the name and path being the relevant parameters. I'm not aware of any real-world use-cases reliant on the extensibility of a Context, so it may be a case of premature generalization. Still, the cases it does support are widely used (specifying any or none of name and path). My intention here would be that custom providers would ignore the extensibility here unless needed.

Anyway, I don't know that this discussion is particularly productive - if you're getting benefit from what I'm saying, please say so and I'm happy to continue, but otherwise it just feels like two people with very different design philosophies trying to explain why they disagree. Also, I think that if this discussion is worthwhile, it should probably be occurring somewhere more public (for example, Discourse) - because as it stands, I have no sense of whether my views or yours are more in line with the community view. So I'm happy to continue, either here or in a more public forum, but I don't want to mislead you into thinking that we're converging on any sort of agreement here.

I'm definitely getting value from the conversation, but I also want to work toward something useful. I understand the frustration - the design is complicated, possibly unnecessarily so, but I'm not sure what the alternative is. The prognosis can't be "go back and try again". What I'd really like to see is concrete examples of what's broken/blocked with the current design so we can discuss and implement improvements to address those defects.

I want to get these changes merged so at least they're not lingering. Then let's figure out what the next steps are toward convergence. That is, what changes are you proposing? If the problem is "the current design is incomprehensible", let's work on a new design that's more comprehensible and a transition plan to that approach.

Member

Thanks for the comments. I don't really have enough time right now to comment fully, but I agree that this should be merged, as it's definitely an improvement even if there are things that might still need to be done.

I didn't realize that progress was blocked in any dimension.

It's hardly the end of the world, but pfmoore/editables#23 is stalled, mostly because I was struggling to understand how to implement an importlib.metadata extension (it's now stalled because I've lost momentum, but that's a separate matter 🙂)

My problem here is that I don't want to rely on unsupported details that might change - editables sits very low down in the packaging stack, and I don't want to get tied up with hacks to support cross-version compatibility. So I'm very focused on only using features that the documentation guarantees are supported. Your suggestion that I should check the source goes against that (and honestly, I'm somewhat shocked that as the primary setuptools maintainer, you're not completely turned off "read the source" as a way of identifying supported APIs at this point!!!)

Let's not leave this discussion here - I'll try to get back to it when I get some time. Is it OK to continue discussions on this PR even after the change itself gets merged? If not, maybe we should open a new issue.

Member Author

Thanks!

I'm not opposed to continuing the discussion here, but I'd probably prefer to reset to a new issue or continue in python/importlib_metadata#427.

Member

Given that python/importlib_metadata#427 is closed, let's pick it up in a new issue. I'll open something when I have the time.

        def __init__(self, record):
            self.record = record

        def read_text(self, filename):
            """
            Read a file like "METADATA" for the current distribution.
            """
            if filename == "METADATA":
                return f"""Name: {self.record.name}
    Version: {self.record.version}
    """
            if filename == "entry_points.txt":
                return "\n".join(
                    f"""[{ep.group}]\n{ep.name}={ep.value}"""
                    for ep in self.record.entry_points)

        def locate_file(self, path):
            raise RuntimeError("This distribution has no file system")

This basic implementation should provide metadata and entry points for
packages served by the ``DatabaseImporter``, assuming that the
``record`` supplies suitable ``.name``, ``.version``, and
``.entry_points`` attributes.
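
With the importer appended to ``sys.meta_path`` as above, the usual
module-level functions should resolve through it. For example, reusing the
hypothetical ``requests-1.0`` record mentioned earlier (a sketch, not part of
this change)::

    import importlib.metadata

    importlib.metadata.version('requests')           # '1.0', served from the database
    importlib.metadata.metadata('requests')['Name']  # 'requests'

    # Entry points from the record appear alongside those from other providers.
    importlib.metadata.entry_points(group='console_scripts')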

The ``DatabaseDistribution`` may also provide other metadata files, like
``RECORD`` (required for ``Distribution.files``) or override the
implementation of ``Distribution.files``. See the source for more inspiration.
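
As a rough sketch of supplying a ``RECORD`` (the ``record.files`` attribute is
an assumption of this sketch, not part of the interface), ``read_text`` could
synthesize one in the installed-files CSV format::

    import csv
    import io

    class DatabaseDistribution(importlib.metadata.Distribution):
        ...

        def read_text(self, filename):
            ...
            if filename == "RECORD":
                # One CSV row per file: path, hash (optional), size (optional).
                buffer = io.StringIO()
                writer = csv.writer(buffer)
                for file_path in self.record.files:
                    writer.writerow([file_path, "", ""])
                return buffer.getvalue()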


.. _`entry point API`: https://setuptools.readthedocs.io/en/latest/pkg_resources.html#entry-points
.. _`metadata API`: https://setuptools.readthedocs.io/en/latest/pkg_resources.html#metadata-api