
namespaces / modules #57

StefanKarpinski opened this Issue · 20 comments

We're still unclear on how they would work, but we probably will want to have modules or namespaces at some point.


What happened to the old wiki page that had some design ideas for this?


I really don't yet feel the need for these. I hate using scipy for this exact reason. For every function, you have to go hunting through the modules and importing them, and I can't ever remember the order in which to go or the syntax to use. It's always 5 minutes before I can do any realistic computation.

What if we just allow function names to have a dot in them? Then people can just use a naming convention to make it look like a namespace, without really having true namespaces.


I agree that we don't need it just yet, but we will eventually. That's why this is tagged v2.0 :-)

I think that there is a happy medium to be found with namespacing. All languages seem to be too far to one side or the other: either there are no namespaces, as in C and Matlab, or there are namespaces and half of your code is incredibly annoying import statements, as in Java or Python. Having to import something to use println is idiotic.

One big issue is just the granularity of namespacing. If modules are fairly big, then you can just do something like import math: * and use all your acoses, gammas, and factorials without having to go hunting for them. Then again, are you really going to name something acos or gamma yourself? Hard to say. Another option would be to have a hierarchical system where umbrella modules contain groups of modules. You can imagine that import math: * is equivalent to

import math.operators: *
import math.functions: *
import math.linalg: *
import math.stats: *

or whatever. I also think that scripts and modules should have different defaults — when you start writing code to run in the repl or a script, you don't want to have to mess around with imports, but when you're writing a module, it makes sense to be a bit more explicit about what you're using.
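The umbrella idea can be sketched concretely. Since the actual Julia syntax was still undecided at this point, here is a hypothetical sketch in Python terms, with made-up submodule names and contents, showing an umbrella namespace built as the union of its submodules' exports:

```python
# Stand-ins for submodules: each maps exported names to definitions.
# (All names and contents here are invented for illustration.)
operators = {"plus": lambda a, b: a + b}
functions = {"gamma": lambda x: x, "acos": lambda x: x}
linalg = {"dot": lambda u, v: sum(a * b for a, b in zip(u, v))}

# The umbrella module's exports are just the union of its submodules',
# so one wildcard-style import of "math" brings everything in at once.
math_umbrella = {}
for sub in (operators, functions, linalg):
    math_umbrella.update(sub)

print(sorted(math_umbrella))  # ['acos', 'dot', 'gamma', 'plus']
```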

And finally, it's really annoying to have to import something just to use it. If you explicitly call something by its full name, it should always be available. If I call math.gamma it should be available to me regardless of whether I've imported the math module. On the other hand if I want to be able to just call gamma, then maybe I need to import it.


Agree! Modules are more of a feature than a way to structure the whole system. They are for getting around the problem that all the good names are taken. There are certain interesting use cases, e.g. if you start writing your own libm replacement entirely in Julia, you want to be able to name functions sin, cos, etc., and then be able to call both system.sin and mystuff.sin and compare them. And then to switch an application over to the new code you want to just add "import mystuff: *" at the top. Stuff like that.

I also like the idea that you don't need to think about modules until you start writing one yourself. You start in an environment where everything is generally available and modules have already been imported. If you have more specific needs and want to control what's imported, you write a module and it starts totally empty. Seems ideal to me.
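The side-by-side libm scenario above is easy to sketch. In Python terms (since Julia's module syntax didn't exist yet), with a made-up mystuff container holding a naive Taylor-series sin:

```python
import math

# A toy "mystuff" namespace: a module-like container whose sin can be
# compared against the system one with no name clash, via qualified names.
class mystuff:
    @staticmethod
    def sin(x, terms=10):
        # Naive Taylor series for sin(x), purely for comparison purposes.
        total, term = 0.0, x
        for n in range(terms):
            total += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return total

# Both implementations stay callable side by side.
print(math.sin(1.0), mystuff.sin(1.0))
assert abs(math.sin(1.0) - mystuff.sin(1.0)) < 1e-9
```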

@JeffBezanson JeffBezanson was assigned

I think modules are a must for people who write specialized libraries, or even just reusable code that will be shared widely. They are also a nice way to avoid name clashes.

That is one of the major things holding Octave back vs. MATLAB.

Moreover, modules could, in principle, be (pre)compiled separately, like in Python.


Definitely agreed that it's something we need. Just hasn't yet become pressing and there's so much to get done. I think the precompiled units and namespaces are somewhat orthogonal. Compiling files as units probably makes sense, regardless of namespaces. Languages like Java and to some extent Python conflate these two because files correspond to both classes (i.e. user-defined types) and namespaces, all of which are natural compilation units. But without the file-class-namespace identification, these mappings are no longer so obvious. Especially in the presence of multiple dispatch, there's no obvious way to make classes/types correspond to compilation units.


I won't even pretend to know what multiple dispatch is or what it is useful for, so I will defer to you on that topic and how it complicates modules.


Multiple dispatch is actually simple enough: the implementation of + for something like a + b is chosen based on the type of a and the type of b, instead of just the type of a as it would be in mainstream object orientation. Therefore, it's unreasonable to say that the definition of + belongs bundled into the type of a: types and the operations on them have to be separate, much more like traditional imperative programming. So you can't just take the definition of each type and all the operations that apply to it and bundle them together as a compilation unit. That was all I was getting at.
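A toy illustration of the point, in Python for concreteness (a hand-rolled dispatch table; Julia itself selects methods on all argument types natively, so this is only a conceptual sketch):

```python
# Minimal multiple dispatch: the method is selected on the types of
# BOTH arguments, not just the first as in single-dispatch OO.
dispatch_table = {}

def defmethod(ta, tb, fn):
    dispatch_table[(ta, tb)] = fn

def plus(a, b):
    return dispatch_table[(type(a), type(b))](a, b)

defmethod(int, int, lambda a, b: a + b)
defmethod(str, int, lambda a, b: a + str(b))  # behavior depends on both types

print(plus(1, 2))     # 3
print(plus("n=", 2))  # n=2
```

Because the pair (type(a), type(b)) drives the lookup, neither type alone can "own" the definition of plus.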

Anyway, welcome. We're glad you're checking Julia out :-)


Commit 30bdf1c takes the first tentative step.


As somebody wanting to write reusable code, modules are pretty important to me. This is something that drives me nuts with matlab.

As far as a model to pick: I'm kind of a fanboi, but I like the CommonJS/Node module system, and it's one I would draw inspiration from over Python's. The only downside is when you have to require basic shit in the repl, imo.


Namespaces are your best friends.

One of the problems with R is that it will behave differently depending on which packages/libraries you have loaded.
I agree that core functions should be in the main:: namespace or imported automatically, but I do not want my production code to fail because of a name clash in an updated package.

Explicit/absolute names also have the advantage that when you read some code, seeing a = f() is much less explicit than a = Foo::f(): with the latter, at least you know where to look.
You can have a comfortable way to alias/import names into your script's namespace, but in my opinion non-core packages should never, ever export names into the main namespace.


We just got bitten again today by this namespace problem in R.
We have a package that uses the melt function provided by the reshape2 library. The code runs fine.
Suddenly it stopped working. Why? Because we had previously loaded a library that uses the reshape (not 2) library, which also exports melt.

But then, even specifying the fully qualified name reshape2::melt does not work, because internally reshape2::melt uses sub-functions that are also overridden by reshape.

My point: just as in Perl or Java, packages/libraries should not export symbols unless explicitly requested to do so.
E.g. use Reshape2 'melt';

If you want to have a robust package eco-system, I do not see another way.
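For comparison, Python already has a version of this opt-in behavior: a module's __all__ controls what a wildcard import exposes, while qualified access always works. A small sketch (the module and its contents are faked here for illustration):

```python
import types

# A fake "reshape2" module that deliberately exports nothing by default.
reshape2 = types.ModuleType("reshape2")
reshape2.melt = lambda df: "long format"
reshape2.__all__ = []  # wildcard imports pull in no names

# A wildcard-style import honors __all__, so melt stays out of scope...
exported = {name: getattr(reshape2, name) for name in reshape2.__all__}
assert "melt" not in exported

# ...but an explicit, qualified request always works.
assert reshape2.melt(None) == "long format"
```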


@kforner you should explicitly import it from reshape2. Namespaces and the package system in R are simple and very flexible. I don't know much about Julia (yet), but whoever develops namespaces for Julia should definitely look into R's system.


Given all of the overlap in ideas between Julia and Dylan, you probably know about this already, but it might be worth taking a look at the Dylan module system.

My guess based on some comments here is that as a whole you'll find it too heavy-weight for Julia, but maybe it will be a useful reference point.


Thanks, @cgay. This is very useful. I'll read through how Dylan handles this. We may want something a bit simpler/dumber, but the problems the two languages have are so similar, it would be a crime not to consider Dylan's approach when designing this.


It would be nice if we could have embedded docs in modules (like Perl and Ruby do).


How's this for a solution: use a spaghetti stack of globals (including function and type names). The load function directly modifies the current level of the stack, while a new use function modifies the child level. If you want a common set of uses, then high up in the program's execution you just load a file with a bunch of use calls. Dispatch works in the context of the current spaghetti-stack layer, where closer levels override equivalents higher up.

Note that this is very similar to how another common dynamic execution framework works: Unix shells. I can source a file if I want to import their environment, and otherwise I just call the script. I think it's a pretty successful model.

Different folks might try to use the same script over and over at different levels of the stack, so you'd want to cache its parsed and semi-analyzed form, but you'd have to rework the dispatching and type specialization in some cases. With cleverness, maybe that reworking could also be kept to a minimum.

A version of use could also be made to define non-exported (private) things like so:

use do
  function blah ...

Or something like that. I just discovered Julia yesterday (literally), so I might not have all the details on the various issues exactly right.
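The lookup behavior being proposed resembles Python's collections.ChainMap, which can serve as a rough sketch of the spaghetti stack (the names bound here are invented for illustration):

```python
from collections import ChainMap

# Each scope level is a dict; lookup walks from the closest level outward,
# so definitions added by `use` in a child level shadow those above it.
root = {"sin": "system.sin", "gamma": "system.gamma"}
child = ChainMap({"sin": "mystuff.sin"}, root)  # a `use` adds a child level

print(child["sin"])    # mystuff.sin  (closest level wins)
print(child["gamma"])  # system.gamma (falls through to the parent)
```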


Module system now available.


The module system is not yet documented in the manual, but rather listed as a "Potential feature".
