
Note: I may have messed up the order.


I read your proposal in the wiki about "LISP for Python"; interesting stuff.

Just wanted to make sure that you have read my CEP 508:

http://wiki.cython.org/enhancements/inlining

It might not look like it, but I really think there is quite a lot of overlap. Anyway, it is what I ended up with after our last discussion, and is a way of providing compile-time meta-programming in the exact same syntax as run-time programming.

I'm interested here in whether you think one could unify the two; perhaps by having the same "engine" below but allow for varying degrees of explicitness.

(I haven't done anything about implementation.)

Dag Sverre

Hi Dag,

Thanks for noticing, reading and emailing! I have to edit a little at a time, because I have two little kids to look after.

My work on this was inspired by your posts to the cython-dev mailing list. Once I'm done with the proposal, I'm planning to start a thread on the list about what the best way to proceed is, how we can get the best of both, and what's best for Cython.

While there's a lot of overlap, the two proposals have a different emphasis. Mine is more about custom transforms with specific uses, and if I understand yours correctly, yours is about a general transform.

An advantage of having a single general transform is that many people can use it, and get its benefits with very little work. An advantage of being application-specific is that the transform writer has an easier time and more control. I think there's room for both.

What do you think?

Here's the diff for my version of the code. I didn't realize your Transform work had been implemented and included in Cython, so some of mine implements the same functionality.

This is definitely a proof-of-concept implementation, in other words, it's pretty sloppy. :) I implemented the bare minimum needed to get my examples working.

Best, Martin ----

> What do you think?

I'll call yours "Lisp" and mine "unrolling" for short. And I'm sure I'll have more to say later but I think these are my key observations for now.

  • Lisp is probably easier to implement fully (one can use Python "exec").
  • Lisp is a "superset" in functionality: everything unrolling can do can be coded explicitly with the Lisp approach.
  • Unrolling should feel a lot more natural to Python users, though.
  • Especially, having the whole of "a + b" in "deriv(a + b)" passed to deriv, i.e. that it doesn't follow the usual pattern of calculating a temporary and passing it, is going to be very hard to swallow for a lot of people (I think). Perhaps custom syntax would be best, something like "deriv(%(a + b))".
  • I see a problem with operator overloading etc. I don't think looking for operators/isinstance(..., AddNode) is going to work in the long run; rather, perhaps something like

deriv(%(a + b)) => deriv(%(a.__add__(b)))  # always, before macro expansion

...and then macro expansion would always look down trees of function calls (making the API for macros more consistent too), and also return "__add__" rather than "+" and so on. More LISP-like too.

  • However, one is still left without the capability to process if-statements etc.
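The canonicalization suggested in the bullet above — rewriting operators into method-call form before macro expansion ever sees the tree — can be sketched with Python's ast module standing in for Cython's node classes (the Canonicalize pass itself is hypothetical):

```python
import ast

class Canonicalize(ast.NodeTransformer):
    """Rewrite 'a + b' into 'a.__add__(b)' so that macro expansion
    only ever walks trees of function calls. A sketch of the proposed
    pre-expansion pass; Cython's real nodes are not used here."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # canonicalize operands first
        if isinstance(node.op, ast.Add):
            return ast.Call(
                func=ast.Attribute(value=node.left, attr="__add__",
                                   ctx=ast.Load()),
                args=[node.right], keywords=[])
        return node

tree = ast.parse("deriv(a + b)", mode="eval")
canon = ast.fix_missing_locations(Canonicalize().visit(tree))
print(ast.unparse(canon))  # deriv(a.__add__(b))
```

A real pass would cover every operator, not just `+`, but the shape is the same: after it runs, a macro only has to understand calls.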

Here are some of my thoughts on what a more "explicit" approach might look like. Unrolling would be used to detect that this can be evaluated at compile time. But I'll readily admit that it is not as nice.

cdef check_int(tree, var):
    return parse("if isinstance(%s, int): %c") % (var, tree)

check_int(parse("if x == 4: print x"), "x").exec(x=x)

parse returns a tree structure with a special % operator that recognizes %c for inserting code-trees in addition to the normal %d, %s etc.
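The parse()/% idea can be sketched in plain Python, with source strings standing in for real trees. Everything here is a hypothetical stand-in (Tree, parse, and a `run` method in place of the proposed `.exec`); %c is treated as %s restricted to code trees:

```python
class Tree:
    """Sketch of the parse()/% API described above: %c splices in
    another code tree, while %s, %d etc. format as usual."""
    def __init__(self, source):
        self.source = source
    def __mod__(self, args):
        # Treat %c like %s, but splice in the other tree's source.
        filled = self.source.replace("%c", "%s") % tuple(
            a.source if isinstance(a, Tree) else a for a in args)
        return Tree(filled)
    def run(self, **env):          # stand-in for the proposed .exec()
        exec(compile(self.source, "<macro>", "exec"), env)
        return env

def parse(source):
    return Tree(source)

def check_int(tree, var):
    # Wrap the given statement tree in a type guard on `var`.
    return parse("if isinstance(%s, int):\n    %c") % (var, tree)

guarded = check_int(parse("y = x * 2"), "x")
print(guarded.source)
print(guarded.run(x=4)["y"])  # 8
```

A real implementation would of course operate on parse trees rather than source strings, but the operator protocol is the same.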

Hmm, I really don't like it though.

Dag Sverre

Hi Dag,

Dag Sverre Seljebotn wrote:

>> What do you think?
>
> I'll call yours "Lisp" and mine "unrolling" for short. And I'm sure I'll
> have more to say later but I think these are my key observations for now.
>
> - Lisp is probably easier to implement fully (one can use Python "exec")
> - Lisp is a "superset" in functionality: Everything unrolling can do can
> be coded explicitly with the lisp approach.
> - Unrolling should feel a lot more natural to Python users though.
> - Especially, having the whole of "a + b" in "deriv(a + b)" passed to
> deriv, ie that it doesn't follow the usual pattern of calculating a
> temporary and passing it, is going to be very hard to swallow for a lot
> (I think). Perhaps custom syntax would be best, something like
> "deriv(%(a + b))".

I agree. Most introductory Lisp books have only a few paragraphs about macros, because they're confusing. Just the whole idea of code generating other code takes some getting used to. So does the idea that the transform is getting a parse tree, not a value. But it doesn't take long, and once you do get used to it, it gives you a lot of power.

But digging into the parse tree of the arguments is a rare case. Now that I think of it, it's a little misleading to lead with that example. The array indexing thing is more typical, where you treat the arguments as black boxes that compute a value.

I think it's useful to have both. Both Groovy's DataSets and R's formulas are examples where this is put to good use.

> - I see a problem with operator overloading etc. I don't think looking
> for operators/isinstance(..., AddNode) is going to work in the long run,
> rather perhaps something like
>
> deriv(%(a + b)) => deriv(%(a.__add__(b)) # always, before macro expansion
>
> ...and then macro expansion would always look down trees of function
> calls (making the API for macros more consistent too), and also return
> "__add__" rather than "+" and so on. More LISP-like too.

That's a good point. I haven't thought through the interaction with class methods.

> - However, one still is left without the capability to process
> if-statements etc.
>
> Here's some of my thoughts on what might go on which is a more
> "explicit" approach. Unrolling would be used to detect that this can be
> evaluated compile-time. But I'll readily admit that it is not as nice.
>
> cdef check_int(tree, var):
>     return parse("if isinstance(%s, int): %c") % (var, tree)
>
> check_int(parse("if x == 4: print x"), "x").exec(x=x)
>
> parse returns a tree structure with a special % operator that recognizes
> %c for inserting code-trees in addition to the normal %d, %s etc.
>
> Hmm, I really don't like it though.

Yeah, I've been thinking about this a bit. There are two halves to this: passing statements to the macro, and having the macro generate statements.

Lisp macros can certainly take statements, and I've been looking at the use cases I see in our code base. All the use cases could be handled by passing a closure instead of an actual body. But Python doesn't have a syntax for inline multi-statement closures (lambda is limited to a single expression).

Most of the use cases are covered by iterators or Python 2.5's "with" statement.

As for returning statements: one thought was to allow the macro to return two values: the expression tree that replaces the call, as the example does now, and a list of statements to insert just before the entire expression that contains the macro. Another is to return a single parse tree, but turn it into an inline function. That solves some other problems, like variable capture and accidental multiple evaluation of arguments.
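The "statements hoisted before the expression" idea, combined with fresh names to avoid capture and multiple evaluation, can be sketched on source strings (gensym and expand_square are hypothetical illustrations, not part of either proposal's API):

```python
import itertools

_ids = itertools.count()

def gensym(prefix="tmp"):
    # A fresh name that cannot collide with user identifiers: the
    # classic fix for variable capture in Lisp-style macros.
    return "__%s_%d" % (prefix, next(_ids))

def expand_square(arg_source):
    """Hypothetical expansion of a square(x) macro: return a pair of
    (statement to hoist, expression replacing the call). Binding the
    argument once also prevents accidental multiple evaluation."""
    name = gensym("arg")
    hoisted = "%s = %s" % (name, arg_source)   # inserted before the statement
    replacement = "%s * %s" % (name, name)     # replaces the macro call
    return hoisted, replacement

hoisted, expr = expand_square("f(a) + 1")
print(hoisted)   # __arg_0 = f(a) + 1
print(expr)      # __arg_0 * __arg_0
```

Note that even though the argument appears twice in the replacement, `f(a)` runs only once, and the `__arg_0` temporary cannot shadow anything the user wrote.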

Anyway, I'm still thinking about how this could work, or whether there's some way around it. Thanks for your thoughts, you raise some really good points.

Best, Martin ----

> But cases where you dig into the parse tree of the arguments are a rare
> case. Now that I think of it, its a little misleading to lead with that
> example. The array indexing thing is more typical, where you treat the
> arguments as black boxes that compute a value.

"Lemma:"

In all cases where one does not dig into the parse tree of the arguments, the unroller code can do the same job, and have a really natural syntax (simply "do what you want to happen").

Agreed? Have a look at wiki.cython.org/DagSverreSeljebotn/soc/details for how one would do index unrolling.

Of course, the unroller is harder to implement in Cython.

Dag Sverre

>> But cases where you dig into the parse tree of the arguments are a rare
>> case. Now that I think of it, its a little misleading to lead with that
>> example. The array indexing thing is more typical, where you treat the
>> arguments as black boxes that compute a value.
>
> "Lemma:"
>
> In all cases where one does not dig into the parse tree of the
> arguments, the unroller code can do the same job, and have a really
> natural syntax (simply "do what you want to happen").

Lemma 2:

As for returning parse trees, unrolling should work in all cases.

For instance, in your derivation example, one could simply return "a * b + c * d" rather than using prefix notation, and I think it would work, under the assumption of unrolling and inlining. You can think of it as the parse tree elements having overloaded operators and returning new parse tree elements from operations, if you wish...
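The "parse tree elements with overloaded operators" idea can be sketched in a few lines; Node, Name and Op are illustrative stand-ins for whatever node classes the compiler would actually use:

```python
class Node:
    """Applying + or * to tree nodes builds new tree nodes instead of
    values, so a macro can just return "a * b + c * d" in infix form
    and get a tree back."""
    def __add__(self, other): return Op("+", self, other)
    def __mul__(self, other): return Op("*", self, other)

class Name(Node):
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

class Op(Node):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def __repr__(self): return "(%r %s %r)" % (self.left, self.op, self.right)

a, b, c, d = Name("a"), Name("b"), Name("c"), Name("d")
print(a * b + c * d)   # ((a * b) + (c * d))
```

The macro author writes ordinary infix Python; the result is nonetheless a tree the compiler can splice back in.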

What I've arrived at then for the derivation example:

deftrans deriv(myexpr, var):
    import Cython.Compiler.ExprNodes as ExprNodes
    if isinstance(myexpr, ExprNodes.AddNode):
        # (f + g)' = f' + g'
        return deriv2(myexpr.operand1, var) + deriv2(myexpr.operand2, var)
    elif isinstance(myexpr, ExprNodes.MulNode):
        return (myexpr.operand1 * deriv2(myexpr.operand2, var) +
                deriv2(myexpr.operand1, var) * myexpr.operand2)
    elif isinstance(myexpr, ExprNodes.NameNode):
        if myexpr.name == var.name:
            return 1
        else:
            return 0
    elif isinstance(myexpr, ExprNodes.ConstNode):
        return 0

# Find the double derivative
deriv(%deriv(%(4*a + b), "a"), "b")

Dag Sverre

For another example of where macros would be useful, consider the very last section of:

http://wiki.cython.org/enhancements/numpy

As you see, "lemma 2" does however require writing functional code in quite a few cases where your approach would allow imperative code.

Dag Sverre

Hi Dag,

Lemma 1 is very interesting, I'm still thinking about it. More on that later.

Dag Sverre Seljebotn wrote:

>>> But cases where you dig into the parse tree of the arguments are a
>>> rare case. Now that I think of it, its a little misleading to lead
>>> with that example. The array indexing thing is more typical, where you
>>> treat the arguments as black boxes that compute a value.
>>
>> "Lemma:"
>>
>> In all cases where one does not dig into the parse tree of the
>> arguments, the unroller code can do the same job, and have a really
>> natural syntax (simply "do what you want to happen").
>
> Lemma 2:
>
> As for returning parse trees, unrolling should work in all cases.
>
> For instance, in your derivation example, one could simply return "a * b
> + c * d" rather than using prefix notation and I think it would work,
> under the assumption of unrolling and inlining. You can think of it as
> the parse tree elements having overloaded operators and return new parse
> tree elements from operations if you wish...

Yes, I think that would be a better format for the template. The template format I was using was designed to be closer to the parse tree that's returned, but that's probably unnatural for Python programmers. The idea is that the first element of each list determines the type of node. For my simple examples I can just use a string and unambiguously tell which node type it is, but I don't know that that scales to e.g. method access.

And even if it did, providing a string of Cython code, with identifiers that get replaced, is more natural.

If you avoid multiple evaluation of the args by evaluating them once, binding them to local variables, and putting those local variables into the template, then you've essentially got a closure.

> What I've arrived at then for the derivation example:
>
> deftrans deriv(myexpr, var):
>     import Cython.Compiler.ExprNodes as ExprNodes
>     if isinstance(myexpr, ExprNodes.AddNode):
>         # (f + g)' = f' + g'
>         return deriv2(myexpr.operand1, var) + deriv2(myexpr.operand2, var)
>     elif isinstance(myexpr, ExprNodes.MulNode):
>         return myexpr.operand1 * deriv2(myexpr.operand2, var) +
>             deriv2(myexpr.operand1, var) * myexpr.operand2
>     elif isinstance(myexpr, ExprNodes.NameNode):
>         if myexpr.name == var.name:
>             return 1
>         else:
>             return 0
>     elif isinstance(myexpr, ExprNodes.ConstNode):
>         return 0
>
> # Find double derivate
> deriv(%deriv(%(4*a + b), "a"), "b")
>
> Dag Sverre

I have to admit, I'm not a fan of the % syntax. One of the important points is that the callers of a macro shouldn't care whether it's a macro or a function. Of course, macros can do things that functions can't, which is confusing at first, as you point out. But I think people will get used to it. I don't think it's more confusing than e.g. functions as first-class objects are to people coming from C/C++/Java.

In your example, it's not clear to me what deriv() is returning. Is it a parse tree? If so,

return deriv2(myexpr.operand1, var) + deriv2(myexpr.operand2, var)

is calling the __add__ of some Node subclass. If the idea is that unrolling is being applied, then deriv() should be written like a runtime function and myexpr would always be treated like a number.

So I'd suggest a string as a template, something like this:

deftrans deriv(myexpr, var):
    if isinstance(myexpr, ExprNodes.AddNode):
        # (f + g)' = f' + g'
        return parse("fprime + gprime",
                     fprime=deriv(myexpr.operand1, var),
                     gprime=deriv(myexpr.operand2, var))
    elif isinstance(myexpr, ExprNodes.MulNode):
        return parse("f * gprime + fprime * g",
                     f=myexpr.operand1, g=myexpr.operand2,
                     fprime=deriv(myexpr.operand1, var),
                     gprime=deriv(myexpr.operand2, var))
    elif isinstance(myexpr, ExprNodes.NameNode):
        if myexpr.name == var.name:
            return 1
        else:
            return 0
    elif isinstance(myexpr, ExprNodes.ConstNode):
        return 0
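The string-template parse() used here can be sketched with Python's ast module standing in for Cython's node classes; parse_template and subtree are hypothetical names for illustration:

```python
import ast

def parse_template(template, **bindings):
    """Parse a template string and replace each placeholder name with
    an already-built subtree, as in parse("f * gprime + fprime * g",
    f=..., g=..., ...)."""
    class Sub(ast.NodeTransformer):
        def visit_Name(self, node):
            return bindings.get(node.id, node)
    tree = ast.parse(template, mode="eval").body
    return ast.fix_missing_locations(Sub().visit(tree))

def subtree(src):
    return ast.parse(src, mode="eval").body

# Pretend deriv() already produced these subtrees for f=a, g=b:
out = parse_template("f * gprime + fprime * g",
                     f=subtree("a"), g=subtree("b"),
                     fprime=subtree("1"), gprime=subtree("0"))
print(ast.unparse(out))  # a * 0 + 1 * b
```

Because the placeholders are ordinary identifiers in an ordinary expression, the template reads like the code it produces.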

What do you think? In a way it's kind of wordy, because you need to define and use these names (f, g, fprime, gprime) that you didn't before. Perhaps we could have a syntax for embedding them in the string when we want, e.g.:

"$(deriv(myexpr.operand1, var)) + $(deriv(myexpr.operand2, var))"

Although that's moving far from Python syntax.

What do you think?

Best, Martin ----

> You can think of it as the parse tree elements having overloaded
> operators and return new parse tree elements from operations if you
> wish...

Oh, this just sunk in.

So how would you distinguish the case where you actually want to manipulate the parse tree, from something that simply returns more parse trees? Or do you even need to? Hmm.

These are very interesting ideas Dag, thanks a lot for the discussion.

Best, Martin ----

I think I "nailed" it, at least concerning my own personal preferences. Read on.

> So how would you distinguish the case where you actually want to
> manipulate the parse tree, from something that simply returns more parse
> trees? Or do you even need to? Hmm.

You don't really need to, except for the case where you want to take the output and further run a macro on it (double derivation...).

Anyway, my suggestion is the following now. It is however completely ignorant of the time it would take to implement something like that, and your approach will possibly be superior for the time being.

If you get tired of 1) because of the length, do read 2) :-)

  1. Things center around the compile-time unroller as in my spec (it is not so clearly written though, ask if there's anything you wonder about concerning how that works). If one doesn't need to inspect the parameter parse-trees but only use them, this is sufficient.

What happens is that each expression (everywhere within a function declared with @cython.unroll) has either compile-time or run-time status. Run-time "taints", but if one involves only compile-time expressions then they are combined as appropriate at compile-time, and through unrolling and inlining this is similar to a macro, although much less explicit.

This really becomes interesting if one combines it with whitelisting of entire libraries. As mentioned in the spec, whitelisted functions called with compile-time known parameters are called at compile-time. So without doing any expression tree inspection one could still have something like this:

import sympy
# We hereby declare that nothing in "sympy" has side-effects that must happen at run-time
cython.unroll.whitelist_module(sympy)
sympy.Expression.subs = cython.unroll(sympy.Expression.subs)

z = get_float_from_runtime_code()
x = sympy.Symbol('x')
f = x**2
diff(f, x).subs(x, z)

which Cython will then ideally compile to

(2 * z)

because a) sympy is whitelisted so that operations using it can happen at compile time (using the CPython interpreter), and b) the subs call is declared to be "a macro" so that it is evaluated step-by-step by the unroller, i.e. it can also have a run-time argument without getting tainted.

n = get_int_from_runtime_code(...)
z = get_float_from_runtime_code()
x = sympy.Symbol('x')
f = x**n
diff(f, x).subs(x, z)

...then very little happens at compile-time, because n "taints" the expression x**n so that f is only known at run-time, and so diff is not attempted at compile-time, and so on.
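The tainting rule described here can be sketched in isolation; Val is a hypothetical stand-in for whatever the unroller tracks per value, with only ** implemented:

```python
class Val:
    """Each value either carries a compile-time-known result or is
    marked run-time-only; any operation touching a run-time operand
    produces a run-time ("tainted") result."""
    def __init__(self, value=None, runtime=False):
        self.value, self.runtime = value, runtime
    def __pow__(self, other):
        if self.runtime or other.runtime:
            return Val(runtime=True)            # taint propagates
        return Val(self.value ** other.value)   # fold at compile time

x2 = Val(10) ** Val(2)             # fully known: folded now
xn = Val(10) ** Val(runtime=True)  # n from run-time code: tainted

print(x2.runtime, x2.value)  # False 100
print(xn.runtime)            # True
```

A real implementation would carry the expression tree alongside the taint flag, so tainted values can still be emitted as run-time code.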

  2. In addition to the above, one has an "introspection module" which allows one to get hold of parse trees directly. Calls to this module must be unrolled at compile-time, i.e. if the arguments are run-time-tainted then it is a compiler error.

Example:

@cython.unroll
def derive(expr, varname):
    node = cython.meta.treeof(expr)
    if node.runtimeonly:
        if node.name == varname:
            return expr
        else:
            raise Exception("Cannot symbolically derivate this expression")
    if isinstance(node, cython.meta.nodes.add):
        return derive(node.operand1, varname) + derive(node.operand2, varname)
    elif node....

Here, cython.meta.treeof(expr) basically returns all the data the compile-time unroller/interpreter carries about the variable. So

x = get_runtime_value...
print derive(x**2, "x")

would work. However,

z = my_f(x)
print derive(z**2, "x")

will raise an exception since my_f will have runtimeonly set.

-- Dag Sverre


Hey Dag,

I've been thinking about your lemma, and places where it doesn't apply.

For example, consider the functions, "when_debugging," "start_debugging" and "stop_debugging," with the following semantics (different from what's mentioned on the wiki):

# Maps name to boolean, whether we're debugging that name.
names_debugging = {}

def when_debugging(name, *values_to_accumulate):
    if name in names_debugging:
        ... do something with values_to_accumulate, e.g. push them
        on a list or increment a counter with them ...

def start_debugging(name):
    names_debugging[name] = True

def stop_debugging(name):
    del names_debugging[name]

In when_debugging, the hash table lookup could dominate the computation. So we could have a macro which generates a unique integer for each name and, at runtime, indexes into an array for that name:

name_to_id = {}
max_id = 0

deftrans when_debugging(name, *values_to_accumulate):
    if name in name_to_id:
        id = name_to_id[name]
    else:
        max_id += 1
        name_to_id[name] = max_id
        id = max_id

    return "when_debugging_internal($id, $name, $values_to_accumulate)"

Then at runtime when_debugging_internal just uses the id to index an array. It still has the name in case it wants to e.g. print it out.
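The compile-time half of this split can be sketched in plain Python; intern_name and expand_when_debugging are hypothetical illustrations of the macro's job, and when_debugging_internal is the hypothetical run-time helper:

```python
# The macro assigns each debug name a small integer at expansion time,
# so the generated call can index an array instead of hashing the name
# at run time.
name_to_id = {}

def intern_name(name):
    if name not in name_to_id:
        name_to_id[name] = len(name_to_id)
    return name_to_id[name]

def expand_when_debugging(name, *value_sources):
    # Returns the source of the call the macro leaves behind.
    i = intern_name(name)
    return "when_debugging_internal(%d, %r, %s)" % (
        i, name, ", ".join(value_sources))

print(expand_when_debugging("cache", "hits", "misses"))
# when_debugging_internal(0, 'cache', hits, misses)
print(expand_when_debugging("cache", "hits"))
# when_debugging_internal(0, 'cache', hits)
```

Note that the second expansion reuses id 0: the name-to-id mapping is compile-time state shared across all expansions, which is exactly what makes this hard for an automatic unroller.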

This would be very hard to do with unrolling. The unrolling would have to know that, once something's added to the name_to_id hash table, it's never modified or removed. This is an example of something that's really hard for an automated system to figure out, but if a programmer divides the work between compile time and runtime, it's natural.

I think there's room for both proposals. The unrolling proposal is a kind of optimization that you can guide by giving it hints, e.g. with DEF. The Lisp proposal is for running Python code at compile time to generate parse trees. Where the unrolling proposal can get the job done, it's a less complicated and far less confusing mechanism. It also allows things like turning off the unrolling in a debug build, to make it easier to debug.

On the other hand, the Lisp proposal can do a lot more, because the programmer can know a lot more, and build that knowledge into the code. It can also do other things, e.g. not evaluate the args if they're not needed, or evaluate them multiple times like an iterator.

What do you think?

Best, Martin


> I've been thinking about your lemma, and places where it doesn't apply.

I think the problem is when the macro has compile-time side-effects (in global variables)?

> On the other hand, the Lisp proposal can do a lot more, because the
> programmer can know a lot more, and build that knowledge into the code.
> It can also do other things, e.g. not evaluate the args if they're not
> needed, or evaluate them multiple times like an iterator.

Indeed, and that has really been my intent all the time; I wouldn't want macros to go, but to see if unrolling could lend them a more "natural" syntax. The cython.meta.treeof(expr) is all about macros and not about compile-time unrolling.

One could add cython.meta.clone(expr) to evaluate something multiple times?

As for your example now, couldn't one get far with stuff like:

names_debugging = cython.meta.compiletime_global({})

and so on until the cases are covered? Your when_debugging would then be almost the exact same, but drop the quotes and $.

I really like the idea of compile-time code, so one could also add something "stronger" than unroll, i.e.

@cython.unroll(allow_runtime_tainting=False)

which is basically a macro but uses the same machinery as unrolling (the big difference being in cython.meta... and "syntax tree operator overloading" being the I/O mechanisms rather than working directly on the trees, but all features should be available, and one could put stuff into cython.meta for building output trees as well).

And then I've come round-trip to your original proposal -- the difference is that it is nicely integrated with the unrolling engine.

-- Dag Sverre


Dag Sverre Seljebotn wrote:

>> I've been thinking about your lemma, and places where it doesn't apply.
>
> I think the problem is when the macro has compile-time side-effects (in
> global variables)?

Yes, that certainly does it. I'm not sure if it's the only case.

>> On the other hand, the Lisp proposal can do a lot more, because the
>> programmer can know a lot more, and build that knowledge into the code.
>> It can also do other things, e.g. not evaluate the args if they're not
>> needed, or evaluate them multiple times like an iterator.
>
> Indeed, and that has really been my intent all the time; I wouldn't like
> macros to go, but see if unrolling could lend them a more "natural"
> syntax. The cython.meta.treeof(expr) is all about macros and not about
> compile-time unrolling.
>
> One could add cython.meta.clone(expr) to evaluate something multiple
> times?
>
> As for your example now, couldn't one get far with stuff like:
>
> names_debugging = cython.meta.compiletime_global({})
>
> and so on until the cases are covered? Your when_debugging would then be
> almost the exact same, but drop the quotes and $.

My inclination is to be explicit about runtime vs. compile time, so I like the quotes and $. But maybe that's just what I'm used to. Let me think about this "if the args aren't known at compile time, it's a tree, else evaluate it at compile time" thing a little, until it fully sinks in.

> I really like the idea of compile-time code, so one could also add
> something "stronger" than unroll, ie
>
> @cython.unroll(allow_runtime_tainting=False)
>
> which is basically a macro but uses the same machinery as unrolling (the
> big difference being in cython.meta... and "syntax tree operator
> overloading" being the I/O mechanisms rather than working directly on
> the trees, but all features should be available, and one could put stuff
> into cython.meta for building output trees as well).
>
> And then I'm roundtrip to your original proposal -- the difference is
> that it is nicely integrated with the unrolling engine.

Thanks again for all the time you're spending describing your proposals to me. I think Cython will be all the better for our combined work.

Best, Martin ----

Dag Sverre Seljebotn wrote:

> One could add cython.meta.clone(expr) to evaluate something multiple
> times?

I was thinking something more like uneval(expr). The difference is in:

for x in xrange(10):
    print expr

If you just substitute expr's parse tree in for expr, you evaluate it 10 times. If it has side effects, you get the side effects 10 times. This is the default in Lisp and is a gotcha for new macro writers.

So an alternative would be that when "expr" appears, it should only be evaluated once (or at most once?). I.e. it becomes equivalent to:

value = expr
for x in xrange(10):
    print value

But when you want the parse tree inserted, you could use uneval(expr) in place of expr. There'd still be only one copy of the parse tree around, but since it's in a loop, it's evaluated many times.
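The difference in evaluation counts can be demonstrated directly; side_effecting is a hypothetical stand-in for the macro argument expr:

```python
calls = []
def side_effecting():      # stands in for the macro argument `expr`
    calls.append(1)
    return 7

# Default (hygienic) expansion: bind once, reuse the value.
value = side_effecting()
for _ in range(3):
    pass                   # the body would use `value`; no re-evaluation

# uneval-style expansion: the expression is spliced into the loop,
# so it runs on every iteration, side effects included.
for _ in range(3):
    side_effecting()

print(len(calls))  # 4 = 1 (bound once) + 3 (spliced into the loop)
```

Making "bind once" the default and requiring an explicit uneval() for re-evaluation inverts the Lisp default, which is exactly what removes the gotcha for new macro writers.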

Anyway, just a thought, things are in flux.

Best, Martin ----

Dag Sverre Seljebotn wrote:

>> But cases where you dig into the parse tree of the arguments are a rare
>> case. Now that I think of it, its a little misleading to lead with that
>> example. The array indexing thing is more typical, where you treat the
>> arguments as black boxes that compute a value.
>
> "Lemma:"
>
> In all cases where one does not dig into the parse tree of the
> arguments, the unroller code can do the same job, and have a really
> natural syntax (simply "do what you want to happen").

If you don't dig into the parse tree, all you can do is pass it to another function (which might dig into it), or evaluate it. There are times when you want to evaluate it more than once, e.g. as in an iterator.

But we could amend your premise to be "whenever you don't dig into it, and only evaluate it once," which is still the most common case.

Assuming you don't dig into the expanded template, what else can you do with it? The idea is that, once the deftrans has filled in the template, it could still do other operations on it, e.g. substitute it into another template (or the same template again, as in my loop unrolling example).

Digging into the expanded template would be an odd thing to do, so let me think of what else you might do with it if you treat it as a "black box."

Best, Martin


I can understand why you want to be explicit about compile-time; it's just that I worry about acceptance amongst Python users. Also, I think that for instance the derivation example with sympy shows how comparatively powerful unrolling can be (running any code at compile-time, not only what is explicitly written to be macros).

So it would be good to do both separately, but even nicer to have them converge into a common tool that supports both modes of operation.

> My inclination is to be explicit about runtime vs. compile time, so I
> like the quotes and $. But maybe that's just what I'm used to. Let me
> think about this "if the args aren't known at compile time, it's a tree,
> else evaluate it at compile time" thing a little, until it fully sinks
> in.

I'm going through it myself, and I find that I am cleaning up my approach a bit and making some changes from what I've said :-)

I'll provide some more examples as food for thought. First, though: when running the code in compile-time interpreter mode, all args, variables, expressions etc. (don't think about compile-time-known for now) have two views: their evaluated value, and the expression tree that computes the value. The former is accessed "by default" and is usually all that is needed, but the latter can be available too.

Simple first:

@cython.unroll
def addone(x):
    return x + 1

print addone(34 ** 2 + currenttime())
# => print 34**2 + currenttime() + 1
# (which can obviously be optimized in the same pass)

Multiple side-effects:

def f(): print "Hello"

@cython.unroll
def dotimes(n, x):
    # x is simply going to be "None" when used "verbatim".
    # However, if we uneval it we get a full-fledged parse tree
    # object, which can be reevaluated explicitly.
    for i in range(n): uneval(x).eval()

dotimes(10, f())
# By convention, I suppose, if x is not used within dotimes it is not
# evaluated the "extra" time. Must think more on this.

Parameter modification:

@cython.unroll
def remove_powers(x):
    node = uneval(x)  # node now contains a full representation...
    if node.operation == "**":
        return None
    else:
        node.arguments = [remove_powers(arg) for arg in node.arguments]

    # Always evaluate the tree on return -- it can be uneval-ed again by
    # other macros if needed. x can perhaps be immutable so that uneval
    # returns a copy?
    return node.eval()

print remove_powers(10 ** 4 + 20 ** 100)
# => print 10 + 20

The important conclusion so far, for my part, is that one should probably combine at least large parts of the efforts for macros and unrolling (though the "syntax front-end" might end up being different, and one keeps a very explicit version).

-- Dag Sverre


Forgot to tackle run-time-tainting. Here you go.

> Multiple side-effects:
>
> def f(): print "Hello"
>
> @cython.unroll
> def dotimes(n, x):
>     # x is simply going to be "None" when used "verbatim"
>     # However, if we uneval it we get a full-fledged parse tree
>     # object, which can be reevaluated explicitly
>     for i in range(n): uneval(x).eval()
>
> dotimes(3, f())

All my examples went ok because everything that was important for /control flow within the macro/ was known compile-time. The above would translate to

f()
f()
f()

However, consider this:

dotimes(hours_since_midnight(), f())

Now the for-loop cannot know the range at compile-time; the for-loop is run-time-tainted and you get

n = hours_since_midnight()
for i in range(n):
    f()

It still has a tree, though, but a very uninteresting one: if you do uneval(n) above, all you get is a function call, and there's no possibility of figuring out the value.
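The two expansions of dotimes can be sketched as a single source-level function; expand_dotimes is a hypothetical illustration in which a plain int stands for a compile-time-known n and a source string for a tainted one:

```python
def expand_dotimes(n, body_source):
    """If n is known at expansion time the loop unrolls into n copies
    of the body; if n is run-time-tainted, a run-time loop is emitted
    instead."""
    if isinstance(n, int):                 # compile-time known
        return "\n".join([body_source] * n)
    # run-time tainted: n stays a source expression in the output
    return "for i in range(%s):\n    %s" % (n, body_source)

print(expand_dotimes(3, "f()"))
# f()
# f()
# f()
print(expand_dotimes("hours_since_midnight()", "f()"))
# for i in range(hours_since_midnight()):
#     f()
```

The same macro body yields either output; only the taint status of n decides which.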

One can add other decorators (@cython.macro, or @cython.macro(should_become_expression=True), or @cython.macro(force_known_compiletime="n")) that raise compile-time errors instead when the macro doesn't fully collapse into an expression (or doesn't collapse with respect to the given variables).

Also one could probably do this:

def dotimes(n, x):
    if not unroll(n).compiletimeknown:
        raise MacroError("Cannot pass n that is only known run-time")
    for i in range(n): uneval(x).eval()

So that's it: everything is both trees (accessible by uneval) and values/expressions (accessible by simply using the name); everything is also either known at compile-time or run-time; the former will make control structures that depend on the value unroll, while the latter won't.

Dag Sverre

Hey Dag,

I've been thinking more about your proposal. Very interesting.

As I understand it, the idea is to determine what code runs at compile time vs. run time by having a way to classify each variable as "use at compile time" vs. not, then any expression/statement that contains only "use at compile time" variables is evaluated at compile time. Also, if the iterator of a for loop is "use at compile time," but something in the body isn't, then we unroll it. Is that right?

What about the case where a value should be used at compile time in one place but runtime in another? For example, in the when_debugging example, you might want to iterate over all the keys at runtime, say to print a report. What do you do in that case?

Best,
Martin

----

> I've been thinking more about your proposal. Very interesting.
>
> As I understand it, the idea is to determine what code runs at compile time vs. run time by having a way to classify each variable as "use at compile time" vs. not, then any expression/statement that contains only "use at compile time" variables is evaluated at compile time. Also, if the iterator of a for loop is "use at compile time," but something in the body isn't, then we unroll it. Is that right?

Yes. "Use at compile-time" is defined as "all nodes of the parse tree of the symbol being evaluatable at compile-time".

This is actually often not important, because if you simply append something to a symbol and return it, then it doesn't matter whether it is known. It becomes important when you "cross" the dividing line between macro code and run-time code; for instance when calling the range() function, and it also affects loop unrolling.

For instance:

cython.meta.whitelist_function(range)
# Means it is callable compile-time if wanted.
# Builtins without side-effects will be whitelisted by default of course.

@cython.macro
def twicerange(n):
    return range(n) + range(n)

print twicerange(2)
# => print [0, 1, 0, 1]   # range called compile-time
print twicerange(hours_since_midnight())
# => tmp = hours_since_midnight(); print range(tmp) + range(tmp)
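The whitelisting rule can be sketched as a tiny constant folder. This is an assumption-laden emulation, not the CEP engine: `fold_call` folds a call at "compile time" only when the callee is whitelisted and every argument is a literal; otherwise the call is run-time-tainted and left as source text.

```python
import ast

# Compile-time-callable functions (sketch of cython.meta.whitelist_function)
WHITELIST = {"range": lambda n: list(range(n))}

def fold_call(src):
    """Fold a whitelisted call with literal args at expansion time;
    otherwise return the source unchanged for run-time evaluation."""
    call = ast.parse(src, mode="eval").body
    if (isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            and call.func.id in WHITELIST
            and all(isinstance(a, ast.Constant) for a in call.args)):
        return WHITELIST[call.func.id](*[a.value for a in call.args])
    return src  # run-time-tainted: leave the call for run time

print(fold_call("range(2)"))                       # [0, 1]
print(fold_call("range(hours_since_midnight())"))  # left as source text
```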

> What about the case where a value should be used at compile time in one place but runtime in another? For example, in the when_debugging example, you might want to iterate over all the keys at runtime, say to print a report. What do you do in that case?

First off, there's the question of what to do at compile-time vs. run-time in general. This cannot be automatic (or the generated code size would quickly explode).

So: all functions carrying the cython.macro decorator, and calls to them, happen compile-time _to the extent possible_; everything else happens run-time no matter what. (Of course, this scheme can be tuned and is independent of the general approach; it suffices for macro behaviour, but the same engine could be used with elaborate rulesets to create an "optimizer" instead; that is beyond my current interests though.) The answer to your question is then: consider this function:

def print_debug_symbols():
    for key in debug_symbols.keys():
        print key

print_debug_symbols()

This will always be run at run-time. However, adding the decorator @cython.macro before the def would instead yield:

print "a"
print "i"
print "k"

However, it gets more complicated. You see, I'd like the default behaviour to be the following, because debug_symbols is run-time-known by default (note also the uneval, in order to get the name used in the argument):

debug_symbols = {}

@cython.macro
def when_debugging(x):
    global debug_symbols
    idx = debug_symbols.get(uneval(x).name)
    if idx is None:
        idx = debug_symbols[uneval(x).name] = len(debug_symbols)
    dostuff(x, idx)

when_debugging(y)
when_debugging(y)
# Turned into:
# idx = debug_symbols.get("y")
# if idx is None: idx = debug_symbols["y"] = len(debug_symbols)
# dostuff(y, idx)
# idx = debug_symbols.get("y")
# if idx is None: idx = debug_symbols["y"] = len(debug_symbols)
# dostuff(y, idx)
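The default (run-time debug_symbols) expansion can be sketched as pure code splicing. `expand_when_debugging_rt` is a hypothetical helper standing in for the compiler; `uneval(x).name` is emulated by passing the argument's source text as a string.

```python
def expand_when_debugging_rt(arg_name):
    """Return the source lines the macro would splice in for one call;
    the dict lookup itself is deferred to run time (sketch only)."""
    return [
        'idx = debug_symbols.get("%s")' % arg_name,
        'if idx is None: idx = debug_symbols["%s"] = len(debug_symbols)' % arg_name,
        'dostuff(%s, idx)' % arg_name,
    ]

# Two calls splice in the same three lines twice
for line in expand_when_debugging_rt("y") + expand_when_debugging_rt("y"):
    print(line)
```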

However, if one instead declares that debug_symbols is compile-time modifiable:

debug_symbols = cython.meta.compile_time_variable({})

Then the same code should give:

when_debugging(y)
when_debugging(y)
# debug_symbols["y"] = 0 could be here, but is dropped because debug_symbols is compile-time
# dostuff(y, 0)
# dostuff(y, 0)
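With debug_symbols declared compile-time, the dict lives in the expander itself and the index is folded into the output. Again a toy sketch with a hypothetical helper, not the real CEP machinery:

```python
# debug_symbols is mutated at expansion time, not run time, in this sketch
debug_symbols = {}

def expand_when_debugging_ct(arg_name):
    """Assign the symbol's index at expansion time and fold it as a
    literal into the generated dostuff() call."""
    idx = debug_symbols.setdefault(arg_name, len(debug_symbols))
    return "dostuff(%s, %d)" % (arg_name, idx)

print(expand_when_debugging_ct("y"))  # dostuff(y, 0)
print(expand_when_debugging_ct("y"))  # dostuff(y, 0)
print(expand_when_debugging_ct("z"))  # dostuff(z, 1)
```

Note how repeated expansion of the same name is idempotent, which is exactly why the run-time lookup can be dropped from the generated code.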

But what happens then at run-time with print_debug_symbols? Perhaps the "debug_symbols = " at the top can simply be replaced with the serialized contents at the end of the macro compiler phase, and then life goes on with it as a run-time object. Or perhaps it should be removed from run-time scope altogether. Probably both, depending on an available_runtime parameter to compile_time_variable.

Lastly, there are other cases where one might want to add more parameters to specify behaviour. For instance:

@cython.macro(allow_compiletime_sideeffects_for=[time.time])
def get_compilation_time():
    return time.time()

(However, one could only pass functions _known to the interpreter_, i.e. not other local functions; if one wants compile-time evaluation of other local functions, one should declare them as macros.)

Dag Sverre


Dag Sverre Seljebotn wrote:
>> As I understand it, the idea is to determine what code runs at compile time vs. run time by having a way to classify each variable as "use at compile time" vs. not [...] Is that right?
> Yes. "Use at compile-time" is defined as "all nodes of the parse tree of the symbol being evaluatable at compile-time".
>
> [...] When it becomes important is when you "cross" the dividing line between macro code and run-time code; for instance when calling the range() function and it also affects loop unrolling.

And, importantly, if-tests.

@cython.macro
def cleanup(strategy):
    if strategy == 1:
        return do_this()
    elif strategy == 2:
        return do_that()

x = cleanup(1)
# becomes:
# x = do_this()

x = cleanup(1 if freemem() < 256 else 2)
# becomes:
# tmp = 1 if freemem() < 256 else 2
# if tmp == 1: x = do_this()
# elif tmp == 2: x = do_that()
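The if-test residualization can also be emulated with the ast module. `expand_cleanup` is a hypothetical sketch (not Cython output): when the argument is a literal, the if-chain collapses at expansion time; otherwise the whole chain is emitted against a temporary.

```python
import ast

def expand_cleanup(arg_src):
    """Specialize cleanup() when strategy is a compile-time literal,
    otherwise emit the residual if/elif chain (sketch only)."""
    node = ast.parse(arg_src, mode="eval").body
    if isinstance(node, ast.Constant):  # strategy known compile-time
        return {1: "x = do_this()", 2: "x = do_that()"}[node.value]
    # strategy run-time-tainted: bind it once, then branch at run time
    return ("tmp = %s\n" % arg_src
            + "if tmp == 1: x = do_this()\n"
            + "elif tmp == 2: x = do_that()")

print(expand_cleanup("1"))
# x = do_this()
print(expand_cleanup("1 if freemem() < 256 else 2"))
```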

Dag Sverre


BTW, you have permission to put our correspondence on a subpage of the wiki etc., or post it on the mailing list, if you want to make it available for reference.

Getting back to "reality", I'm definitely for your current approach and implementation (it would be nice if the tree one gets and returns takes into account method calls, and can be the same tree that will be returned in the future by uneval()). One must move slowly and partially...

Dag Sverre


Dag Sverre Seljebotn wrote:
> BTW you have permission to put our correspondance on a subpage on the wiki etc. or post it on the mailing list if you want to make it available for reference.

Thanks, you have permission to make my comments public too!

> Getting back to "reality" I'm definitely for your current approach and implementation (would be nice if the tree one gets and returns takes into account method calls and can be the same tree that will be returned in the future by uneval()). One must move slowly and partially...

Thanks. I'm still a little hazy on your proposal. I'll try to explain my confusion a little more when I get a chance. But I definitely think compile-time optimizations can ease the burden, with the user providing as much information as possible to guide them.

Best, Martin
