As noted in #37537 (comment), workspace symbols provide a useful index of the "available symbols" in a project. But right now, we only index files in the transitive closure of workspace packages, which means that symbols in unimported packages (including stdlib packages) are not reachable. FWIW, I don't usually notice this, but it can be surprising when trying to jump to a standard library symbol that happens not to be reachable. As an API it's not great, because the user can't be expected to remember which packages are imported and which are not.
The good news is that the workspace symbol index is out-of-band of our package graph, and symbols are somewhat efficiently stored (if needed, we can further optimize their storage). We should be able to index all files in modules reachable from the workspace.
I looked into how that would work for the symbols case, but I'm not sure yet how gopls can parse the standard library via go/packages and turn the results into cached objects/files in the snapshot. I'd be happy to contribute, but would probably need guidance.
@marwan-at-work first of all, sorry that the review dropped off. If you're game, let's revisit it -- though I think we should rethink the problem.
The consistency of package metadata is a big concern for gopls, and I'd prefer not to have another source of metadata that could go out of sync. I think we should aim to have our metadata graph hold all possible package information that we could need. If this is our model, then both this workspace symbol problem and the known packages problem become easy to solve.
I suspect that go/packages is fast enough for this, provided we are selective about our queries. We already load metadata through go/packages at startup (our initial workspace load), and loading all necessary metadata for kubernetes takes ~4s.
We just need to have a good mechanism for loading non-critical metadata asynchronously, and merging it into our metadata graph. I'm working on this problem this month, as part of some optimization work, and I'll see what it would take to load all necessary metadata.