Hazel currently hard-codes the assumption that @ghc// contains the GHC distribution. This is an issue because it prevents using Hazel for a multi-platform build where different GHC distributions are required for different platforms, e.g. @ghc_platform_unix and @ghc_platform_windows.
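To make the scenario concrete, here is a rough WORKSPACE sketch of such a multi-platform setup (the workspace names, URLs and build files are placeholders, not anything Hazel or rules_haskell prescribes); Hazel, however, only ever looks for a single workspace called @ghc//.

```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# One GHC bindist per platform (names, URLs and build files are illustrative only).
http_archive(
    name = "ghc_platform_unix",
    urls = ["https://example.com/ghc-x86_64-linux.tar.xz"],  # placeholder URL
    build_file = "//third_party:ghc.BUILD",  # placeholder build file
)

http_archive(
    name = "ghc_platform_windows",
    urls = ["https://example.com/ghc-x86_64-mingw32.tar.xz"],  # placeholder URL
    build_file = "//third_party:ghc.BUILD",  # placeholder build file
)
```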
What doesn't work
A first step towards resolving this issue is to add a parameter, say ghc_workspace, to hazel_repositories that defines the GHC workspace, and then pass it along within Hazel to all the components that require this information. This is doable.
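For illustration, the call site could then look roughly like this (the load label and the core_packages/packages arguments follow Hazel's README as I remember it; ghc_workspace is the proposed parameter and does not exist yet):

```python
load("@ai_formation_hazel//:hazel.bzl", "hazel_repositories")
load("//:packages.bzl", "core_packages", "packages")

hazel_repositories(
    core_packages = core_packages,
    packages = packages,
    ghc_workspace = "@ghc_platform_unix",  # proposed parameter, hypothetical
)
```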
However, as I understand it, choosing the value of ghc_workspace based on platform information requires calling hazel_repositories from within a repository rule. (Only repository rules have access to repository_ctx and thereby to platform information; note that Bazel's select does not work for this use case in the WORKSPACE file.) At the time of writing, it seems that hazel_repositories cannot be called from within a repository rule. Any attempt causes errors along the lines of:
ERROR: Analysis of target '//some/package:some-target' failed; build aborted: no such package '@haskell_some_dependency//': The repository could not be resolved
That is, the external workspaces for Hazel packages are not generated. I am not sure why this happens; I assume it is related to Bazel's restrictions on nested workspaces.
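For reference, a sketch of the kind of attempt that fails (illustrative only, not the exact code I tried): a repository rule can pick the workspace from repository_ctx.os.name, but invoking hazel_repositories from its implementation does not produce the @haskell_*// workspaces, which results in the error above.

```python
def _hazel_by_platform_impl(repository_ctx):
    # Platform information is only available here, via repository_ctx.
    if "windows" in repository_ctx.os.name:
        ghc_workspace = "@ghc_platform_windows"
    else:
        ghc_workspace = "@ghc_platform_unix"
    # Trying to call hazel_repositories(..., ghc_workspace = ghc_workspace)
    # from this point is what leads to the unresolved-repository error.
    repository_ctx.file("BUILD", "")

hazel_by_platform = repository_rule(implementation = _hazel_by_platform_impl)
```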
What could work
Given that, I think a better approach would be not to require direct access to the GHC workspace in the first place. To my understanding, it is currently used to access the Unix system headers and the threaded runtime shared libraries. Instead of accessing these through the workspace, maybe rules_haskell could be extended to make them available through the GHC toolchain, and Hazel could then pick them up from there.
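A rough sketch of what that could look like on the Hazel side (the toolchain fields below are hypothetical and do not exist in the current rules_haskell toolchain; the toolchain label is written as I recall it):

```python
def _hazel_cbits_impl(ctx):
    hs_toolchain = ctx.toolchains["@io_tweag_rules_haskell//haskell:toolchain"]
    # Hypothetical fields: today these files come from @ghc// labels instead.
    rts_libs = hs_toolchain.threaded_rts_libraries
    unix_headers = hs_toolchain.unix_system_headers
    return [DefaultInfo(files = depset(rts_libs + unix_headers))]

hazel_cbits = rule(
    implementation = _hazel_cbits_impl,
    toolchains = ["@io_tweag_rules_haskell//haskell:toolchain"],
)
```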
Am I overlooking something, or do you think going through the GHC toolchain is a feasible solution?
It would make sense to use the toolchain to provide dependencies like the threaded runtime shared libs and the unix system headers. Perhaps the latter could be incorporated into the haskell_import rule somehow?
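Purely as an illustration of that suggestion, usage could look something like the following; the hdrs attribute is a hypothetical extension, not part of the current haskell_import rule:

```python
haskell_import(
    name = "unix",
    package = "unix",
    # Hypothetical attribute carrying the Unix system headers from the
    # platform-specific GHC workspace.
    hdrs = ["@ghc_platform_unix//:unix-includes"],
)
```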