Starlark: @StarlarkMethod.trustReturnsValid #13012

Status: Open — wants to merge 2 commits into base: master
Conversation

stepancheg (Contributor) commented:
When `@StarlarkMethod(trustReturnsValid = true)` is specified, the interpreter does not check that the method returns a valid Starlark object (which it does by default unless the return type is statically known to be valid).

The object validity check is quite expensive.

The speedup is 4% for this test:

```
def test():
  for i in range(10):
    print(i)
    d = {"a": range(10), "b": {}, "c": 4, "d": 5, "e": (), "f": True, "g": []}
    for _ in range(1000000):
      d.get("a")
      d.get("b")
      d.get("c")
      d.get("d")
      d.get("e")
      d.get("f")
      d.get("g")

test()
```
  • The first commit adds the annotation parameter and its handling
  • The second commit annotates the net.starlark.java builtins with this annotation
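For context, the check being skipped can be sketched as follows. This is an illustrative model based on the description in this PR (the class and method names are hypothetical), not the actual interpreter code:

```java
// A minimal sketch (not the actual Bazel implementation) of the validity
// check described above: a returned object is a legal Starlark value only
// if it is a String, a Boolean, or an implementation of StarlarkValue.
interface StarlarkValue {} // stand-in for net.starlark.java.eval.StarlarkValue

public class ReturnCheckSketch {
  // Mirrors the default check performed on Object-returning builtins;
  // trustReturnsValid = true would skip this call entirely.
  static Object checkValid(Object x) {
    if (x instanceof String || x instanceof Boolean || x instanceof StarlarkValue) {
      return x;
    }
    throw new IllegalArgumentException("invalid Starlark value: " + x.getClass());
  }

  public static void main(String[] args) {
    System.out.println(checkValid("ok"));         // a String passes
    System.out.println(checkValid(Boolean.TRUE)); // a Boolean passes
    try {
      checkValid(new Object());                   // an arbitrary Object is rejected
    } catch (IllegalArgumentException e) {
      System.out.println("rejected");
    }
  }
}
```

The cost the PR targets is exactly this per-call branch-and-throw path on hot builtins such as `dict.get`.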

@google-cla google-cla bot added the cla: yes label Feb 13, 2021
@jin jin added the team-Starlark-Interpreter Issues involving the Starlark interpreter used by Bazel label Mar 1, 2021
facebook-github-bot pushed a commit to facebook/buck that referenced this pull request Mar 10, 2021
Summary:
When returning from native functions, Starlark by default checks that the returned value is a valid Starlark object (i.e. a `String`, a `Boolean`, or an implementation of `StarlarkValue`). That check is relatively expensive for cheap calls like `dict.get`.

`StarlarkMethod.trustReturnsValid`, when used on a native function, disables this check.

This diff will likely not produce a noticeable speedup, but the check is visible in the profiler.

Upstream PR: bazelbuild/bazel#13012

Reviewed By: mzlee

fbshipit-source-id: 29e146503a0653a5629dc526c76d64e6f5e5b2f7
@tetromino tetromino requested review from tetromino and removed request for alandonovan July 29, 2021 22:12
tetromino (Contributor) left a comment:


Thanks for the fix!

src/main/java/net/starlark/java/annot/StarlarkMethod.java (outdated; resolved)
When this parameter is specified for a method, the Starlark interpreter
assumes the function returns a valid Starlark value.

When a method is declared to return `Object`, the Starlark interpreter
by default checks that the object is a valid Starlark value, which is
quite expensive. The speedup is 4% for this test:

```
def test():
  for i in range(10):
    print(i)
    d = {"a": range(10), "b": {}, "c": 4, "d": 5, "e": (), "f": True, "g": []}
    for _ in range(1000000):
      d.get("a")
      d.get("b")
      d.get("c")
      d.get("d")
      d.get("e")
      d.get("f")
      d.get("g")

test()
```
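To make the proposed Javadoc concrete, the following sketch shows how the parameter might be applied to an `Object`-returning builtin. The real `StarlarkMethod` annotation lives in `net.starlark.java.annot` and has many more parameters; the stand-in below is reduced to the two fields used here, and the `get` builtin is illustrative, not the actual Bazel implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;

// Stand-in for net.starlark.java.annot.StarlarkMethod, reduced to the two
// fields used here (the real annotation has many more parameters).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface StarlarkMethod {
  String name();
  boolean trustReturnsValid() default false;
}

public class TrustSketch {
  // A dict.get-style builtin: it is declared to return Object, but every
  // value it can return is already a valid Starlark value, so the per-call
  // validity check can be skipped safely.
  @StarlarkMethod(name = "get", trustReturnsValid = true)
  public Object get(Map<String, Object> dict, String key) {
    return dict.get(key);
  }

  public static void main(String[] args) throws Exception {
    StarlarkMethod ann =
        TrustSketch.class
            .getMethod("get", Map.class, String.class)
            .getAnnotation(StarlarkMethod.class);
    System.out.println(ann.trustReturnsValid()); // prints "true"
  }
}
```

The safety argument rests entirely on the method author: the interpreter trusts the annotation, so a builtin annotated this way must be audited to return only `String`, `Boolean`, or `StarlarkValue` instances.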
@sgowroji sgowroji added the awaiting-review PR is awaiting review from an assigned reviewer label Oct 19, 2022
brandjon (Member) commented:

I'm on the fence about this one -- whether the 4% speedup is worth the extra annotation API. I don't love the idea of subverting a validity check with a "trust me, it's valid" flag, because that suggests we shouldn't be validating in the first place.

(Mostly I just wish Java would let us declare a type subsuming {StarlarkValue | String | Boolean} without paying the cost of a tagged union wrapper object.)

I'll punt on this now until tetromino is back in office (January) to discuss.

@brandjon brandjon removed the awaiting-review PR is awaiting review from an assigned reviewer label Dec 19, 2022