
Prune the graph of inflight functions to not include the ones we don't need #438

Open

wllgrnt wants to merge 7 commits into will-compiler-cache-partial-load-multi-module from brax-prune-compiler-graph
Conversation

@wllgrnt commented Feb 16, 2023

(by @braxtonmckee)

Motivation and Context

When we first call a Python function 'f' with a specific set of arguments, we may not know its return type the first time we try to convert it. To ensure we have a stable typing graph, we repeatedly update the active functions in our graph until the type graph is stable.

This can lead to many copies of the same function, or even multiple signatures of the same function, only one of which we'll use.

This change prunes those away before we submit them to the LLVM layer.
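
As a rough illustration of the pruning step, here is a minimal sketch assuming the inflight functions live in a dict keyed by link name, with explicit call edges between them; the names below are illustrative, not the converter's actual internals:

```python
# Hypothetical sketch only: the real converter tracks more state, and the
# names `inflight`, `roots`, and `.callees` are illustrative.

def prune_inflight(inflight, roots):
    """Return the subset of `inflight` reachable from `roots`.

    inflight: dict mapping a function's link name to an object exposing a
              `.callees` iterable of link names it calls.
    roots:    iterable of link names we actually need compiled output for.
    """
    keep = set()
    stack = [name for name in roots if name in inflight]
    while stack:
        name = stack.pop()
        if name in keep:
            continue
        keep.add(name)
        stack.extend(c for c in inflight[name].callees if c in inflight)
    # Anything unreachable (stale copies, unused overload signatures) is
    # dropped before the remaining functions are handed to the LLVM layer.
    return {name: fn for name, fn in inflight.items() if name in keep}
```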

How Has This Been Tested?

Running test_dispatch_to_function_overload with TP_COMPILER_VERBOSE=1, with and without this commit, shows 13 functions being converted instead of 17, with a slight speedup.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.

Augustus Lonergan and others added 7 commits February 15, 2023 11:47
The current implementation of the interface, used via _loadFromCompilerCache, runs loadForSymbol for a given link name and then returns two dicts representing everything the cache touched during the load. Maintaining this interface makes the partial-load refactor difficult and muddies the relationship between the converter layers and the cache. Here we alter the API so that we can ask the cache for a TypedCallTarget, and add modules, and that's it. This means getting rid of _loadFromCompilerCache and the associated registers for tracking what's being converted. It also means passing the cache down to the native_ast_to_llvm layer.
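
For orientation, a rough sketch of the narrowed cache interface this describes; the class and method names here are hypothetical stand-ins, not the real API:

```python
# Hypothetical outline of the narrowed interface described above; the class
# and method names are stand-ins, not the actual compiler-cache API.

class CompilerCacheSketch:
    def __init__(self):
        self._targets = {}   # link name -> TypedCallTarget
        self._modules = []   # compiled modules persisted by the cache

    def getTarget(self, linkName):
        """Return the cached TypedCallTarget for `linkName` (loading it and
        only its dependencies on demand), or None if it isn't cached."""
        return self._targets.get(linkName)

    def addModule(self, binarySharedObject):
        """Persist a freshly compiled module so later runs can load
        individual functions out of it."""
        self._modules.append(binarySharedObject)
```
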
Previously we would always attempt to link and validate all global variables when loading a module from the cache. This could cause linking, validation, or deserialization errors, and meant we needed the mark_invalid mechanism for handling modules with outdated global variables. Here we add double-serialized global variables, and only deserialize, link, and validate the subset required for the requested function (and its dependencies). This requires the cache to store a function and global_var dependency graph. Also add utility methods for GlobalVariableDefinition.
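
A small sketch of the partial-load idea, assuming the cache keeps per-function dependency sets and one serialized blob per global variable; all names are illustrative:

```python
# Hypothetical sketch of the partial-load idea: walk the stored dependency
# graph from one function and deserialize only the globals that function
# (and its callees) reference. All names are illustrative.

def loadGlobalsFor(linkName, functionDeps, globalDeps, serializedGlobals, deserialize):
    """functionDeps:      link name -> set of callee link names
       globalDeps:        link name -> set of global-variable names
       serializedGlobals: global-variable name -> serialized definition
       Returns a dict of only the deserialized globals that are needed."""
    seen, stack = set(), [linkName]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(functionDeps.get(name, ()))

    needed = set()
    for name in seen:
        needed |= globalDeps.get(name, set())

    # Only this subset is deserialized, linked, and validated, so stale
    # definitions of unrelated globals can no longer break the load.
    return {g: deserialize(serializedGlobals[g]) for g in needed}
```
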
Pickle supports a protocol where __reduce__ returns a string giving the global name. Implementing this behaviour lets us serialize numpy ufuncs. Also adjust installInflightFunctions to handle the new load behaviour, fix an instability caused by not leaving LoadedModule objects in memory, and adjust the alternative test.
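
For context, the pickle protocol referred to here is standard-library behaviour: if __reduce__ returns a string, the object is pickled by reference to a module-level global of that name, which is what makes numpy ufuncs serializable this way. A self-contained illustration (the Registry class is just for demonstration):

```python
# Standard-library behaviour only, no typed_python specifics: when
# __reduce__ returns a string, pickle stores the object as a reference to a
# module-level global of that name and looks it up again on load.
import pickle

class Registry:
    def __init__(self, name):
        self.name = name

    def __reduce__(self):
        # Returning a plain string means "pickle me by global name".
        return self.name

DEFAULT = Registry("DEFAULT")  # the module-level global the name points to

restored = pickle.loads(pickle.dumps(DEFAULT))
assert restored is DEFAULT  # recovered by name lookup, not by copying state
```
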
Prune the graph of inflight functions to not include the ones we don't need.

When we first call a python function 'f' with a specific set of arguments, we may not know
its return type the first time we try to convert it.  To ensure we have a stable typing graph,
we repeatedly update the active functions in our graph until the type graph is stable.

This can lead to many copies of the same function, or even multiple signatures of the
same function, only one of which we'll use.

This change prunes those away before we submit them to the LLVM layer.
@wllgrnt force-pushed the brax-prune-compiler-graph branch from 706f894 to 4843289 on February 16, 2023 at 20:39
@wllgrnt changed the base branch from dev to will-compiler-cache-partial-load-multi-module on March 7, 2023 at 00:58