
Revert "Use recursion to fix inference failure" #80

Merged: 1 commit merged into master from revert-78-ck/inline_recurse on Mar 8, 2024

Conversation

@maleadt (Member) commented Mar 8, 2024

Reverts #78

Looks like this introduces inference crashes during CUDA.jl CI. From https://buildkite.com/julialang/gpuarrays-dot-jl/builds/814#018e13b6-4936-4fb9-8d88-4402694019e6:

      From worker 4:	Internal error: stack overflow in type inference of _adapt_tuple_structure(CUDA.KernelAdaptor, NTuple{6373, UInt64}).
      From worker 4:	This might be caused by recursion over very long tuples or argument lists.
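For context, the reverted change made `_adapt_tuple_structure` peel the tuple recursively. A minimal sketch of that pattern (hypothetical, not the exact code from #78) shows why inference has to recurse once per element:

```julia
using Adapt: adapt

# Head/tail recursion: one inference step per tuple element, which
# overflows the inference stack for tuples with thousands of entries.
_adapt_tuple_structure(to, xs::Tuple{}) = ()
_adapt_tuple_structure(to, xs::Tuple) =
    (adapt(to, first(xs)), _adapt_tuple_structure(to, Base.tail(xs))...)
```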

cc @charleskawczynski

codecov bot commented Mar 8, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.65%. Comparing base (3d7097a) to head (79a98f2).

Additional details and impacted files
@@            Coverage Diff             @@
##           master      #80      +/-   ##
==========================================
- Coverage   94.02%   93.65%   -0.38%     
==========================================
  Files           6        6              
  Lines          67       63       -4     
==========================================
- Hits           63       59       -4     
  Misses          4        4              


maleadt merged commit e99bc55 into master on Mar 8, 2024 (18 checks passed)
maleadt deleted the revert-78-ck/inline_recurse branch on Mar 8, 2024 at 11:04
@charleskawczynski (Contributor) commented:

It looks like the failure was due to an Aqua ambiguity, not an inference failure. Can't we fix that? cc @maleadt

@maleadt (Member, Author) commented Mar 8, 2024

The Aqua failure is unrelated; it's the inference failures that are problematic.

@charleskawczynski (Contributor) commented:

Ah, I didn't see that. Sheesh: `Internal error: stack overflow in type inference of _adapt_tuple_structure(CUDA.KernelAdaptor, NTuple{7708, UInt64})`. That seems awfully large; is that correct?

If so, is there a middle ground we could settle on? Maybe we can specialize on small tuples, along the lines of the sketch below?
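A minimal sketch of what that specialization could look like, assuming a recursion cutoff; the cutoff of 10 and the `map` fallback are illustrative, not the package's actual code:

```julia
using Adapt: adapt

const RECURSE_LIMIT = 10  # assumed cutoff, not a tuned value

_adapt_tuple_structure(to, xs::Tuple{}) = ()
function _adapt_tuple_structure(to, xs::Tuple)
    # Recurse only while the tuple is short; for a concrete tuple type the
    # length check is resolved at compile time.
    length(xs) <= RECURSE_LIMIT || return map(x -> adapt(to, x), xs)
    (adapt(to, first(xs)), _adapt_tuple_structure(to, Base.tail(xs))...)
end
```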

@maleadt (Member, Author) commented Mar 8, 2024

Yeah, those large tuples are used to test for parameter space exhaustion: https://github.com/JuliaGPU/CUDA.jl/blob/cb14a637e0b7b7be9ae01005ea9bdcf79b320189/test/core/execution.jl#L622-L625

In any case, it would be good to add a limit based on the length of the tuple: anything significantly long should probably fall back to the current implementation. Or maybe use ntuple (why doesn't that suffice in the first place to avoid the inference problem?). Something like the sketch below, for instance.
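A minimal sketch of that combination, assuming `ntuple` generation under a length cutoff; both the cutoff and the `map` fallback are assumptions about the pre-#78 behavior:

```julia
using Adapt: adapt

function _adapt_tuple_structure(to, xs::NTuple{N,Any}) where {N}
    if N <= 10  # assumed cutoff
        # ntuple with Val(N) unrolls at compile time, avoiding explicit recursion
        ntuple(i -> adapt(to, xs[i]), Val(N))
    else
        map(x -> adapt(to, x), xs)  # assumed non-recursive fallback
    end
end
```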
