Virtual libraries #921

Open · diml opened this Issue Jun 28, 2018 · 32 comments

diml commented Jun 28, 2018

Virtual libraries represent part of the library variant proposal. They are much simpler to implement than the full proposal, remain compatible with it, and cover its most useful part, so we should start with them.

User-facing features

The user may write (virtual_modules a b c) in the library stanza of a library named foo. This means that modules a, b and c must have a .mli but no .ml, and will be implemented by another library. If a library has at least one virtual module, we call it a virtual library.

Another library may have (implement foo) in its stanza. This library must provide .ml files for a, b and c but no .mli files. Additionally, it must not have .ml or .mli files for any non-virtual modules of foo. Such a library is called an implementation of foo. An implementation can have a public_name but must not have a name field, since its name must be the same as that of the library it implements.

For every virtual library an executable depends on, it must depend on exactly one implementation.
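
To make this concrete, here is a rough sketch of what the dune files could look like under this proposal. The library and module names are made up, and the field spellings follow the wording above, so the final syntax may differ:

; foo is a virtual library: clock.mli exists but clock.ml does not
(library
 (name foo)
 (virtual_modules clock))

; foo_unix implements foo: it provides clock.ml but no clock.mli,
; and no .ml/.mli for any non-virtual module of foo
(library
 (public_name foo_unix)
 (implement foo))

; an executable picks exactly one implementation per virtual library
(executable
 (name main)
 (libraries foo foo_unix))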

One open question is how to refer to implementations when they don't have a public_name. One possibility is to allow an implementation_name field in implementations or a local_names field in all libraries. The latter seems better.

Implementation

virtual_modules can be implemented the same way as modules_without_implementations. Additionally, for a virtual library we must not produce any archive file (.cma, .a, ...). The list of virtual modules must be kept around in Lib.t and in the generated META file.

When compiling an implementation, we should copy all the artifacts of the virtual library over and do the rest as usual.

When computing the transitive closure for an executable, we must proceed as follows:

  1. compute the transitive closure of libraries (as before)
  2. check that for every virtual library in the list we have exactly one corresponding implementation
  3. recompute the transitive closure, but this time implicitly replace a dependency on a virtual library with a dependency on the corresponding implementation (see the sketch below)
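
A toy, self-contained OCaml sketch of these two passes; the types, names and error handling are illustrative only, not dune's actual Lib API:

type lib = {
  name : string;
  is_virtual : bool;            (* has at least one virtual module *)
  implements : string option;   (* name of the virtual library it implements *)
  deps : lib list;
}

(* Transitive closure in dependency order (dependencies before dependents),
   applying [subst] to every dependency before visiting it. *)
let closure ~subst roots =
  let visited = Hashtbl.create 16 in
  let rec go acc lib =
    if Hashtbl.mem visited lib.name then acc
    else begin
      Hashtbl.add visited lib.name ();
      let acc = List.fold_left go acc (List.map subst lib.deps) in
      acc @ [ lib ]
    end
  in
  List.fold_left go [] (List.map subst roots)

let closure_for_executable exe_deps =
  (* 1. compute the transitive closure as before *)
  let libs = closure ~subst:(fun l -> l) exe_deps in
  (* 2. every virtual library in the closure must have exactly one
     implementation in the closure *)
  let impl_of v =
    match List.filter (fun l -> l.implements = Some v.name) libs with
    | [ impl ] -> impl
    | [] -> failwith ("no implementation for virtual library " ^ v.name)
    | _ :: _ :: _ -> failwith ("multiple implementations for " ^ v.name)
  in
  List.iter (fun l -> if l.is_virtual then ignore (impl_of l)) libs;
  (* 3. recompute, replacing each virtual library by its implementation *)
  closure ~subst:(fun l -> if l.is_virtual then impl_of l else l) exe_deps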
rgrinberg commented Jul 17, 2018

> When compiling an implementation, we should copy all the artifacts of the virtual library over and do the rest as usual.

Why is this copying step necessary? Can't we just use the include paths of the virtual libraries?

> One possibility is to allow an implementation_name field in implementations or a local_names field in all libraries. The latter seems better.

What about private_name? Seems more symmetric and possibly useful for executables as well.

diml commented Jul 18, 2018

> Why is this copying step necessary? Can't we just use the include paths of the virtual libraries?

That should be OK, yes.

> What about private_name? Seems more symmetric and possibly useful for executables as well.

Well, name is already private, so the distinction between the two might not be clear.

rgrinberg commented Jul 18, 2018

Then reusing name shouldn't be so bad. I realize that this may be confusing if the implementation is (wrapped true), but that's simply because we don't allow inconsistency between private names and the alias module.

diml commented Jul 18, 2018

Actually, the only thing we need to make sure of is that the prefix we use for object files is the name of the virtual library; the implementation could have its own toplevel module. So reusing name seems fine, yes.

samoht commented Jul 18, 2018

Looking forward to using this one :-)

The current workaround is already causing a bit of confusion (see here), so it would be nice to have a proper solution that scales well.

rgrinberg self-assigned this Jul 21, 2018

rgrinberg commented Jul 21, 2018

As a simplification for v1, let's tentatively disallow libraries that use implements from being virtual themselves. There are probably no conceptual issues with creating partial implementations, so we'll eventually lift this restriction. But for now this keeps things simple.

diml commented Jul 30, 2018

That seems fine

rgrinberg commented Aug 5, 2018

There's a bit of a technical problem with saving module information in Lib.t. As a reminder, this is necessary to verify that implementing libraries actually possess implementations for all virtual modules.

The issue is as follows: we'd basically like to instantiate a Lib.Info.t from Jbuild.Library.t * Dir_contents.Library_modules.t (Library_modules.t would be augmented with virtual modules), but obtaining a Library_modules.t requires us to have a Super_context.t. This makes sense, because to get the evaluated modules for a library we must evaluate some rules.

I see two potential workarounds for this:

  • The simplest one is to pass the unevaluated list of virtual modules. It's really enough to have the static list of virtual modules here, and it doesn't matter that we haven't checked that these virtual modules don't overlap with intf-only modules, for example (at least at this stage).

  • Refactor Dir_contents to be independent of Super_context. What we'd essentially need is some sort of new "Great_context" type that lies in between Context.t and Super_context.t. This Great_context would be enough to set up rules (such as those in Simple_rules), but it would be oblivious to libraries. Unfortunately, this would require a massive amount of effort, so I'd like to hear some opinions about this issue before tackling it.

diml commented Aug 6, 2018

The first method seems fine.

BTW, we are starting to have a lot of contexts and it's hard to invent new names. The parts of Super_context.t that would need to be split out are the library and scope databases. Instead of computing them eagerly in Super_context.create, Super_context.t could have a univ_map and these would be computed in scope.ml. We would just have to invert the dependency between super_context and scope.

rgrinberg commented Aug 6, 2018

> The first method seems fine.

Actually, it has a bit of an issue that I overlooked: the list of virtual_modules could contain modules that don't yet exist. The check for such fake modules is done when the OSL.t is evaluated, so there seems to be no way to avoid it. What we can do instead is to store this for every Lib.t:

type virtual_modules =
  | Unevaluated (* internal libs *)
  | Evaluated of Module.Name.t list (* external libs *)

And whenever we encounter (implements internal_lib) in gen_rules, we simply use Dir_contents.get sctx ~dir |> Dir_contents.modules_of_library ~name:"internal_lib" to get the virtual modules out. Not elegant, but serviceable.

> BTW, we are starting to have a lot of contexts and it's hard to invent new names. The parts of Super_context.t that would need to be split out are the library and scope databases. Instead of computing them eagerly in Super_context.create, Super_context.t could have a univ_map and these would be computed in scope.ml. We would just have to invert the dependency between super_context and scope.

I agree that the name I've proposed is not really serious, but I do think that there might be value in having an abstraction for creating/inspecting rules that is independent of the concept of libraries (as the existence of things like simple_rules seems to suggest).

Do you see other use cases for this univ map in the future? Perhaps plugins? It seems like using a univ map just to break a dependency cycle is overkill and makes the code more "weakly typed". If anything, I'd prefer to make fields type parameters just to break dependency cycles.

diml commented Aug 6, 2018

The type virtual_modules seems fine.

> Do you see other use cases for this univ map in the future? Perhaps plugins? It seems like using a univ map just to break a dependency cycle is overkill and makes the code more "weakly typed". If anything, I'd prefer to make fields type parameters just to break dependency cycles.

It would allow thinning the Super_context.t type, and then we could add more shared computation easily. It wouldn't cause a runtime error; the databases would be created the first time one tries to access them. In this particular case, we wouldn't win much by using phantom types.

rgrinberg commented Aug 9, 2018

Btw, I think it is also necessary for implementors of virtual libraries to include the name of the library they implement in findlib/our own format.

rgrinberg commented Aug 9, 2018

> Additionally, for a virtual library we must not produce any archive file (.cma, .a, ...)

Curious as to why? Is this just not possible?

> When computing the transitive closure for an executable, we must proceed as follows:

So I take it that we don't allow direct dependence on implementations.

By the way, the algorithm you've listed is done in 2 passes. Is this just to keep things simple? I don't see any technical reason why we can't do this in 1 pass. We just accumulate a map of virtual libs => implementations and make sure we don't introduce 2 implementations for the same library as we calculate the closure.

In any case, I'll try implementing the simple algorithm that you've listed first.

diml commented Aug 9, 2018

> Btw, I think it is also necessary for implementors of virtual libraries to include the name of the library they implement in findlib/our own format.

Agreed.

> > Additionally, for a virtual library we must not produce any archive file (.cma, .a, ...)
>
> Curious as to why? Is this just not possible?

Yes, I believe so. It's because when you do the final link, all the compilation units need to be sorted in dependency order. If we create a cma with holes, we can't do that.

> So I take it that we don't allow direct dependence on implementations.

I wrote that a while ago; initially I thought that we could. In fact, until we have variants, I don't see how we can use this feature without explicitly specifying the implementations we are using.

> By the way, the algorithm you've listed is done in 2 passes. Is this just to keep things simple? I don't see any technical reason why we can't do this in 1 pass. We just accumulate a map of virtual libs => implementations and make sure we don't introduce 2 implementations for the same library as we calculate the closure.

That's not enough. When a library X depends on a virtual library V, the implementation I of V must come before X in the final list. However, I might not be in the list of dependencies written by the user; we might only discover it while computing the transitive closure. While doing this, when we reach X we might not yet know who is implementing V. There might be a way to make it work with a single pass, but using 2 passes seems a lot simpler.

rgrinberg commented Aug 10, 2018

> I wrote that a while ago; initially I thought that we could. In fact, until we have variants, I don't see how we can use this feature without explicitly specifying the implementations we are using.

Oops, I meant that we don't allow libraries to depend on particular implementations. Only executables are allowed to depend on implementations. Thinking about this again, I'm not really sure why we need this restriction, however.

> That's not enough. When a library X depends on a virtual library V, the implementation I of V must come before X in the final list. However, I might not be in the list of dependencies written by the user; we might only discover it while computing the transitive closure. While doing this, when we reach X we might not yet know who is implementing V. There might be a way to make it work with a single pass, but using 2 passes seems a lot simpler.

I see. Doing it in 1 pass might require us to sort the list of dependencies in the closure anyway, so it wouldn't really be 1 pass after all.

rgrinberg commented Aug 12, 2018

> When compiling an implementation, we should copy all the artifacts of the virtual library over and do the rest as usual.

I've been trying to implement this bit and I actually think that the copying would simplify a lot of things. Our code isn't really meant to handle situations where the modules of a library are spread over more than one object directory, and I'm leaning against delaying this feature until that problem is addressed. Part of the reason why I'd like to postpone such a refactoring is that I'd like to take private-module considerations into account. So I think we should indeed keep things simple and just copy things.

diml commented Aug 13, 2018

Agreed. We can also use symlinks to avoid the copy.

rgrinberg commented Aug 15, 2018

In our original scheme, we didn't really consider implementing external virtual libraries. But I think it shouldn't be too hard to support this as well. All that we'd really need is to save the dep graph of the virtual libs so that we have the linking order when we make the .cmxa in the implementation.

@diml will that work?

diml commented Aug 16, 2018

Yep, that should be enough. We should indeed allow it so that someone can provide an implementation outside of the package itself.

rgrinberg commented Aug 16, 2018

Actually, this might not be enough. We're currently not installing .o files, and I think those might be necessary to create the .cmxa/.a files when implementing external libraries. I suppose we can just install the .o files for virtual libraries. Perhaps we can also consider installing them for all libraries to keep things consistent.

rgrinberg commented Aug 16, 2018

Here's another issue I'm encountering, this time for implementing virtual libraries that are internal. The issue is that we need to access the virtual library's dependency graph, but there's no easy way to do so. There are 2 options:

  • One is to save the dep graph (actually, its arrow) in some table keyed by the virtual libraries. But this will require being careful about loading the rules for virtual libraries before the implementations. This doesn't seem very easy.

  • The second approach is to express the dependency on the dep graph of the virtual library using a normal file dependency. We'd have to serialize the dep graph into a file and then have the implementations load the dep graph from it.

I also considered simply recalculating the graph from scratch in the implementations, since this is possible for internal libraries. But this also requires a hefty amount of boilerplate, and would make implementing external and internal libraries too different for my taste.

rgrinberg commented Aug 17, 2018

For the case where we serialize/deserialize the dep graph of external virtual libraries, we need to decide on the format to use. I think marshalling installed artifacts is a bad idea, so we should just include it in the installed dune files as a sexp.
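
For illustration only, the recorded information might look something like the following in the installed dune file; all field names here are hypothetical, the actual format was not decided in this thread:

(library
 (name foo)
 (virtual_modules a b c)
 ; per-module dependencies of the virtual modules, to recover the linking order
 (dep_graph
  ((a ())
   (b (a))
   (c (a b)))))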

rgrinberg commented Aug 17, 2018

Here's another point to consider: virtual libraries without any virtual modules. What's the point? Well, we could still customize link flags for the library in the implementation.

Btw, I've discussed with @avsm the possibility of customizing implementations of virtual libraries just by changing compilation flags. I don't really think that this is possible, as we can't recompile external virtual libraries. Implementations only allow link-time customizations.

rgrinberg commented Aug 17, 2018

> Here's another issue I'm encountering, this time for implementing virtual libraries that are internal. The issue is that we need to access the virtual library's dependency graph, but there's no easy way to do so. There are 2 options:

The same problem arises when trying to serialize the dependency graph into the installed dune file. I think the 2nd approach that I've listed above is the easiest.

diml commented Aug 20, 2018

Serializing the dep graph seems best to me as well.

rgrinberg commented Aug 20, 2018

Okay, will try to make that work. And what about installing .o files? I'm leaning towards installing them just for virtual libraries.

diml commented Aug 20, 2018

That seems right

rgrinberg commented Aug 26, 2018

When compiling implementation modules, we need a way to avoid generating .cmi files, since those will be copied from the virtual library. I see no easier way of accomplishing this than adding a field to Module.t; when this field is set, we'd turn on the force_read_cmi behavior. The other alternative is to have a flag on Module_compilation itself to force_read_cmi for all modules. I prefer storing such options at the module level, as it makes more sense with other features such as private modules. Since in the future, private modules in implementations would of course have their .mli files.
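
A rough sketch of the two alternatives, with made-up type and field names (not dune's actual ones):

(* Alternative 1: record the information per module. *)
type module_ = {
  name : string;
  has_mli : bool;
  (* compile against the .cmi copied from the virtual library
     instead of generating a new one *)
  force_read_cmi : bool;
}

(* Alternative 2: a single switch for the whole compilation of the library. *)
type module_compilation = {
  modules : module_ list;
  force_read_cmi_all : bool;
}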

rgrinberg commented Aug 27, 2018

Here's another bit that needs to be saved for external libraries: the module alias name (if it exists). Implementations must share the same module alias name as the virtual library they implement, and this information isn't available for external libraries. Well, in theory we could probably piece it together by looking at archive names, but that seems pretty fragile. So I suggest that we include the lib.name for virtual libraries in the dune file.

Btw, we should disallow the wrapped field for implementations. The value of this field should always be the same as the virtual library's.

diml commented Aug 27, 2018

> Since in the future, private modules in implementations would of course have their .mli files.

Why is that?

> So I suggest that we include the lib.name for virtual libraries in the dune file.

Seems good

> Btw, we should disallow the wrapped field for implementations. The value of this field should always be the same as the virtual library's.

Agreed

rgrinberg commented Aug 28, 2018

> Why is that?

I just assume that it will be useful for complex implementations to have their own private modules to organize things internally. Such private modules aren't part of the interface of the virtual library, so they could have their own .mli files.

Longer term, I'd even expect that we'd guarantee that no two private modules belonging to different libraries/virtual libraries/implementations could have colliding names.

By the way, this also raises the question of whether an implementation would be able to access the virtual library's private modules. I think it's simpler to disallow this for now.

diml commented Aug 28, 2018

Yeah, I agree that we should allow implementations to have private modules; however, I thought you suggested that such modules should be required to have a .mli file.

> By the way, this also raises the question of whether an implementation would be able to access the virtual library's private modules. I think it's simpler to disallow this for now.

That seems right, especially for the case where the implementation is in another repository.
