codeintel: Auto-inference sandbox API #33756
Conversation
High-level question: the PR description says this closes #33041 and #33040, which belong to milestones 1 and 4 respectively. This comment in the RFC makes it sound like we still haven't finished milestone 0. So why do this now? Did we actually have to implement this to get feedback from customers? If we haven't heard from customers yet on RFC 624 and whether it will solve their problem, this is a lot of code to build on a hunch. What I'm thinking is: could we have written pseudo-Lua code and asked them whether they'd use it and whether it would work for their codebase?
High-level response: what happened in the week while @mrnugget was OoO and some misc thoughts:
What we should do going forward:
I'm very confident that the solution here is not far off from the solution we'd get if we blocked on completion of M0. For Java alone, we have enough use cases we don't support today (no build tooling support, only package repos) that the API in this PR is well suited for. Completion of M0 is still very important because the Lua sandbox in this PR is not yet ergonomic. It's a bare-bones v1 (that improves on the existing art), but it's not idiomatic and has no standard library attached yet. (I'll be deferring heavily to customer response and @tjdevries's feedback here.)
LGTM 👍 I like the new Lua inference logic, and it will be a game-changer for users to be able to roll out custom inference changes.
Enabling auto-merge now, but I'd be very happy to receive another review. We'll be iterating on this in the very near future.
This PR adds an initial internal/codeintel/autoindexing/internal/inference package that encodes the rules of our auto-indexing inference in a Lua sandbox. This package is currently unused by any entrypoint (outside of unit tests); subsequent efforts (#33044 and #33045) will replace the original code with this package.

Closes #33041 and #33040.
This PR does not close discussion on API design. It roughs out what seems to be a very usable interface for us to extend. We're still waiting on customer feedback (see #33047) to help us ensure we're solving the right expressibility problem, but that doesn't seem to be a blocker for moving the PoC forward (though it may be a blocker before moving onto or past #33046).
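To make the API-design discussion concrete, here is a purely illustrative sketch of what a Lua inference recipe in this spirit could look like. Every table field, function name, and indexer name below is invented for illustration; this is not the actual interface shipped in this PR.

```lua
-- Purely hypothetical sketch: the recognizer shape, field names, and
-- helper below are invented for illustration, not the real API.

-- tiny helper: strip the filename from a path ("a/b/pom.xml" -> "a/b")
local function dirname(path)
  return path:match("^(.*)/[^/]*$") or ""
end

local recognizer = {
  -- match Java build manifests anywhere in the repository
  patterns = { "pom.xml", "build.gradle" },

  -- given the paths that matched the patterns above, emit one
  -- indexing job rooted at each build file's directory
  generate = function(paths)
    local jobs = {}
    for _, path in ipairs(paths) do
      jobs[#jobs + 1] = {
        indexer = "sourcegraph/scip-java",
        root = dirname(path),
        steps = {},
      }
    end
    return jobs
  end,
}
```

The appeal of this shape is that the declarative part (patterns) can be matched cheaply by the host, while the imperative part (generate) runs inside the sandbox only on the matched paths.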
Reviewer hints:
Please review by-commit, paying special attention to the following:
lib/codeintel/autoindex in Lua
Test plan
This PR adds significant test coverage and shows that we have one-to-one behavior with our existing indexers (test cases are taken verbatim). Additional testing will be done before this code is hooked up to a production (or even development) application entrypoint.