
codegen restructuring #25984

Merged
merged 1 commit into from
Mar 3, 2020
Conversation

@vtjnash (Sponsor Member) commented Feb 9, 2018

This creates the function jl_create_native for outputting an llvm::Module from a list (jl_array_t) of functions (jl_method_instance_t), independent of the JIT (as needed by JuliaGPU/CUDAnative.jl#122). It's now used by the build system as well, so it gets tested too. This (mostly) eliminates the shadow_module, and is slightly more aggressive about ensuring functions get precompiled.

There's still more work to do to make that work even better, but that can be done later. I'm also now printing out errors that result from codegen bugs, rather than propagating them to the user, which makes a few of the tests (for codegen breakage) a bit noisier; those will need to be fixed.

fixes #26767
fixes #26714

@ararslan ararslan added the compiler:codegen Generation of LLVM IR and native code label Feb 9, 2018
src/jitlayers.h (outdated review thread, resolved)
@maleadt (Member) commented Feb 10, 2018

Cool! How should I access the call targets from CUDAnative in order to build a module with all functions defined?

@vtjnash (Sponsor Member, Author) commented Feb 12, 2018

> How should I access the call targets from CUDAnative
Value::getContext

Feel free to add exports from aotlayers.cpp to expose these with C wrappers. I know jl_create_native needs some extra arguments (specifically, cgparams) too.

@vtjnash vtjnash force-pushed the jn/codegen-norecursion branch 2 times, most recently from 029e6fb to 16f53fe on February 12, 2018
@maleadt (Member) commented Feb 13, 2018

> Feel free to add exports from aotlayers.cpp to expose these with C wrappers. I know jl_create_native needs some extra arguments (specifically, cgparams) too.

OK. Could you describe the behavior/intended use of jl_create_native, and the fields of jl_native_code_desc_t, then? Does jl_create_native compile all necessary functions (i.e. no trampolines), without using a cache? And should I still bother with a module_setup hook, or just re-configure the datalayout/triple as jl_dump_native does?

e.g. does this look sane: jn/codegen-norecursion...tb/codegen-norecursion

maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Feb 13, 2018
@vtjnash (Sponsor Member, Author) commented Feb 13, 2018

> And should I still bother with a module_setup

I'm thinking of moving the creation of the Module into the caller, to sidestep this question entirely.

maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Feb 15, 2018
@maleadt (Member) commented Feb 15, 2018

What about my other questions?

Also, I was using julia_type_to_llvm in generated functions in combination with LLVM.jl. This function now takes a codegen context, which (a) isn't available from C/Julia, and (b) doesn't exist yet at the time the LLVM.jl-based generated function runs. I guess that (b) will get even worse once there isn't a global LLVM context anymore (which I assume is the direction you're going in).
Any comments on how to do this style of LLVM-based custom codegen now?

@vtjnash (Sponsor Member, Author) commented Feb 15, 2018

> I was using julia_type_to_llvm in generated functions in combination with LLVM.jl. This function now takes a codegen context

That other method is not exported – I didn't change the signature of julia_type_to_llvm (yet 😛 – it might gain a LLVMContextRef eventually).

> will become even worse once there isn't a global LLVM context anymore (which I assume is the direction you're going in). Any comments on how to do this style of LLVM-based custom codegen now?

Yes, probably true. But I don't really expect jl_create_native will be exactly right for your use case long-term anyway; it just provides a sample usage. Eventually, I expect you'll add accessors to this file so that everything it can do is reachable via ccall.

@maleadt (Member) commented Feb 16, 2018

> That other method is not exported – I didn't change the signature of julia_type_to_llvm (yet 😛 – it might gain a LLVMContextRef eventually).

Ah OK, I didn't see the second definition. You renamed it to jl_type_to_llvm!
An explicit LLVM context should be fine, assuming we can maintain a global one in CUDAnative.

Also, renaming julia_gpu_accumulate!_146195 to julia_gpu_accumulate%NOT_146195 seems pretty... weird.

But this passes CUDAnative tests so 👍 from me (with the additions from jn/codegen-norecursion...tb/codegen-norecursion, that is).

maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Feb 16, 2018
@vtjnash vtjnash force-pushed the jn/codegen-norecursion branch 3 times, most recently from 43f5ab9 to 4da4982 on March 2, 2018
maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request May 7, 2018
@iblislin (Member) commented

This FreeBSD CI build has already run for more than 9 hours: https://freebsdci.julialang.org/#/builders/1/builds/9298
I manually killed it.

@vtjnash (Sponsor Member, Author) commented May 10, 2018

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier (Collaborator) commented

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Feb 27, 2020
```cpp
extern JITEventListener *CreateJuliaJITEventListener();

// for image reloading
bool imaging_mode = false;

// shared llvm state
static LLVMContext &jl_LLVMContext = *(new LLVMContext());
```
(Member) suggested change:

```diff
-static LLVMContext &jl_LLVMContext = *(new LLVMContext());
+JL_DLLEXPORT LLVMContext &jl_LLVMContext = *(new LLVMContext());
```
@vtjnash (Sponsor Member, Author) replied:

One of the purposes of this change is to eliminate this state object. (There's a function accessor for it in the aotcompile file.)

@maleadt (Member) commented Feb 27, 2020
OK, so I can't be assuming a global state anymore starting with this PR? That'll require some more work on my side then 🙂

src/aotcompile.cpp (outdated review thread, resolved)
maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Feb 27, 2020
also provides support for using a different code_native format as a fallback; later we'll want to make this more configurable

there are now several primary interfaces to native code:
 - codegen: mostly internal, support for translating IR to LLVM
 - jitlayers: manages runtime codegen results and executable memory
 - aotcompile: support for managing external code output
 - disasm: pretty-printer for code objects
 - debuginfo: tracking for unwind info

also removes the global type caches and
moves all codegen pass handling to aotcompile.cpp
@maleadt (Member) commented Mar 2, 2020

A non-global LLVM context (with jl_get_llvm_context to get one from the AOT compiler output) won't currently work for the GPU compiler, because we implement generated functions with LLVM.jl by returning an llvm::Function* that relies on the Julia context. Do you have a short-term plan for that? If not, maybe we should keep the context (which is still global anyway) exported until we have a solution.

maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Mar 2, 2020
@vtjnash vtjnash merged commit 3b53f54 into master Mar 3, 2020
@vtjnash vtjnash deleted the jn/codegen-norecursion branch March 3, 2020 22:03
maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Mar 4, 2020
maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Mar 5, 2020
maleadt added a commit to JuliaGPU/CUDAnative.jl that referenced this pull request Mar 26, 2020
ravibitsgoa pushed a commit to ravibitsgoa/julia that referenced this pull request Apr 9, 2020
KristofferC pushed a commit that referenced this pull request Apr 11, 2020
Other bad cases are probably less common, but still exist. The code to fix those is included in #25984.

Fixes #34680
KristofferC pushed a commit that referenced this pull request Apr 11, 2020


```cpp
extern "C" JL_DLLEXPORT
void *jl_get_llvmf_decl(jl_method_instance_t *mi, size_t world, bool getwrapper, const jl_cgparams_t params)
```
(Contributor) commented:

Is there an alternative function to use instead of this one? It was used in Cxx.jl: https://github.com/JuliaInterop/Cxx.jl/blob/256485c04c76feee515e83f091ba0d6c8f3286ee/src/cxxstr.jl#L107

Labels: compiler:codegen (Generation of LLVM IR and native code)
8 participants