20050705
Your idea about parametrized definitions, for polygons, splines,
etc., seems like a better one than you initially thought. You don't
need a syntax as complicated as:
define polygon {
    point v[n], c[n];
    constraints {
        c[j] = (v[j]+v[j+1])/2 for j in 1 .. n-1
        c[n] = (v[n]+v[0])/2
    }
}
Instead, by restricting the semantics, you can allow:
define polygon[n] closed {
    point v[n], c[n];
    constraints {
        c[j] = (v[j] + v[j+1])/2;
    }
}
and leave it at that. The subscript is iterated at the time the
definition is instantiated; for example
polygon mypoly[5];
instantiates a polygon with points v0 .. v4, c0 .. c4, and constraints
c0 = (v0 + v1)/2;
c1 = (v1 + v2)/2;
c2 = (v2 + v3)/2;
c3 = (v3 + v4)/2;
c4 = (v4 + v0)/2;
where the subscripts in the names are all calculated mod 5. The
mod-5-ness is effected by the "closed" keyword in the header. If we
had instead used "open" (the default) the constraints with overflowing
subscript numbers would simply have been discarded.
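Here's a minimal sketch of that expansion in plain Perl, with made-up data
structures rather than linogram's real instantiation machinery, just to pin
down the open/closed behavior:

sub expand_constraints {
    # Expand the subscripted constraint template for a polygon of size $n.
    # "closed" wraps an overflowing subscript mod n; "open" discards it.
    my ($n, $closed) = @_;
    my @constraints;
    for my $j (0 .. $n - 1) {
        my $next = $j + 1;
        if ($next >= $n) {
            next unless $closed;   # open: drop the overflowing instance
            $next %= $n;           # closed: wrap around mod n
        }
        push @constraints, "c$j = (v$j + v$next)/2;";
    }
    return @constraints;
}

print "$_\n" for expand_constraints(5, 1);   # the five constraints above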
Note that this works too:
define reg_polygon[n] extends polygon[n] {
    param number r, rot = 0;
    constraints {
        v[j] = (r * cos(j * 360 / n + rot),
                r * sin(j * 360 / n + rot));
    }
}
It's probably better not to let the [n] be implicit in the header.
This is for two reasons. First, you need the name to be explicitly
declared so that it is available for use in the definition, as in the
previous example. And second, making [n] implicit would rule out
define foo[n] extends bar[2*n+1] {
...
}
which could be quite useful.
----------------------------------------------------------------
20050707
It shouldn't generate objects named
c0 = (v0 + v1)/2;
but rather
c[0] = (v[0] + v[1])/2;
The draw functions can handle this just as easily, and it avoids (1)
conflicts with other items named "c1" and "v1" and (2) problems
concerning how to deal with "v[n-1]" and such.
----------------------------------------------------------------
20050705
When linogram gets builtin sin() and cos() functions, they should take
degree arguments, not radian arguments.
----------------------------------------------------------------
20050705
Syntactic additions:
point p = (4,3), q = left.end;
constraints {
a = b = c;
}
----------------------------------------------------------------
20050706
Note that in this definition:
define reg_polygon[n] extends polygon[n] {
    param number r, rot = 0;
    constraints {
        v[j] = (r * cos(j * 360 / n + rot),
                r * sin(j * 360 / n + rot));
    }
}
r need *not* be a parameter:
define reg_polygon[n] extends polygon[n] {
    param number rot = 0;
    * number r;
    constraints {
        v[j] = (r * cos(j * 360 / n + rot),
                r * sin(j * 360 / n + rot));
    }
}
which is very good, because we can do cool stuff like
reg_polygon[5] pentagon(rot = 180, v1 = foo, v2 = bar);
and it will figure out how big the pentagon needs to be.
----------------------------------------------------------------
20050706
The parametrized definitions thing seems quite doable. Parameters can
appear in five places:
1. In definition headers
DEFINE polygon[n] ... { ... } # n is a VAR
2. In feature declaration type names:
polygon[3] t1, t2; # 3 is an EXPR
3. In declarators:
line edge[n]; # n is a VAR
4. In expressions:
point v[n] = (v[n+1] + v[n-1]) / 2; # n+1 and n-1 are EXPRs
5. In drawables:
draw { e[1]; e[2]; e[3] } # 1,2,3 are EXPRs
define polygon[n] closed {
    point v[n], c[n];
    line e[n];
    constraints {
        c[j] = e[j].center;    # c[j] = (v[j] + v[j+1])/2;
        e[j].start = v[j];
        e[j].end = v[j+1];
    }
}
Do you need some sort of explicit declaration for the iteration
variable j in this example? Something like:
constraints {
    forall j: c[j] = e[j].center;    # c[j] = (v[j] + v[j+1])/2;
    forall j: e[j].start = v[j];
    forall j: e[j].end = v[j+1];
}
You mustn't re-use n here! Or else
forall j: v[j].x = start.x + j/n * length;
becomes impossible, and it would be a good thing to have.
You need this on the declarations too, because otherwise how can you
say
point v0 = start
point v_{n-1} = end?
The natural notation is
point v[n-1] = end;
which must not be confused with
forall n: point v[n-1] = end
which is quite different. On the other hand, the use of the name "n"
itself in the former declaration might be enough to disambiguate it.
----------------------------------------------------------------
20050706
Note that
point v[n] = (v[n+1] + v[n-1]) / 2; # n+1 and n-1 are EXPRs
which you made up at random is silly for closed polygons, but could be
really useful for open ones. It doesn't constrain v0 or v_n at all.
----------------------------------------------------------------
20050707
Regarding "forall", you could avoid the syntactic weirdness by just
reserving some special token to stand for the iterated variable; for
example
forall j: v[j].x = start.x + j/n * length;
becomes
v[_].x = start.x + _/n * length;
or some such. Problem with this approach: it's hard to handle
doubly-parametrized types. (You weren't planning to deal with
doubly-parametrized types anyway, but you might someday.) A related
alternative: multiplier variables must be capitalized, in which case
the lowercase version is automatically reserved as an iterator. For
example
define polygon[N] closed {
    point v[n], c[n];
    constraints {
        c[n] = (v[n] + v[n+1])/2;
    }
}
where N and n are implicitly linked. Then we have
v[n].x = start.x + n/N * length;
which seems straightforward enough.
----------------------------------------------------------------
20050707
Parsing is still slow, so you can have two formats for diagram files.
The compiled version of the files can be either an ad-hoc byte-offset
format like you did for the XML syntax in the cookbook project, or
maybe just Data::Dumper-ing the type objects will be sufficient.
----------------------------------------------------------------
20050707
Now, let's return to this:
1. In definition headers
DEFINE polygon[n] ... { ... } # n is a VAR
This establishes a new multiplier parameter named "n", stored at the
top level of the type object:
MULTIPLIER_PARAM => "n"
2. In declarators:
line edge[n]; # n is a VAR
Here we check the current type to make sure that a multiplier param
has been declared with the right name ("n"). We annotate the entry
in the object hash. Normally, it has
edge => $LINE
but we'll adjoin additional information:
edge => [$LINE, "n"]
or probably better, a separate hash:
O => { edge => $LINE, ... }
M => { edge => "n" }
3. In feature declaration type names:
polygon[3] t1, t2; # 3 is an EXPR
Here we look up the definition of "polygon" and check to make sure it
has a multiplier parameter. If so, we record
MULTIPLIER_VAL => 3
in the "declaration" structure. When the type definition is complete,
we will instantiate the objects declared by the declarators. These
themselves have
PARAM_SPECS => ...
Just before instantiation, we install (n => 3) into the PARAM_SPECS.
We then instantiate normally. But instantiation now has a new case.
add_subobj_declaration doesn't immediately add the subchunk; instead,
it checks for a MULTIPLIER_VAL. If this is absent, it does the same
thing it does now. If not, it multiplies the declaration object by
the multiplier val, effectively translating
polygon[3]
into three separate polygon subobjects.
4. In expressions:
point v[n] = (v[n+1] + v[n-1]) / 2; # n+1 and n-1 are EXPRs
5. In drawables:
draw { e[1]; e[2]; e[3] } # 1,2,3 are EXPRs
----------------------------------------------------------------
20050709
What should we do if the equations are inconsistent?
Current behavior of returning immediately is dead wrong.
We could just abort, of course.
But another alternative is to try to salvage what we can. For
example:
P = Q;
P.x = 1;
P.y = 2;
Q.y = 3;
is inconsistent, but we can still deduce P.x = Q.x = 1; only the y
parts are broken.
Here's one thing we might do. Construct the graph G whose vertices
are variables and where u and v are connected if they appear in the
same equation. Consider the connected components of this graph. In
the example above, there are two connected components, one containing
the x'es and the other the y's. You can solve the connected
components separately and discard any components that are
inconsistent, retaining the others. This will also be faster than
solving the system as a whole.
Your original idea was that if an equation turns out to be
inconsistent, that taints all the variables in it, and all equations
with those variables become tainted, and the taint spreads, etc.; then
you discard the tainted equations. This is equivalent to the connected
components thing above.
In this idea, we would go:
P.x = Q.x
P.y = Q.y
P.x = 1
P.y = 2
Q.y = 3

P.x = Q.x
P.y = Q.y
Q.x = 1
P.y = 2
Q.y = 3

P.x = Q.x
P.y = Q.y
Q.x = 1
Q.y = 2
Q.y = 3

P.x = 1
P.y = Q.y
Q.x = 1
Q.y = 2
Q.y = 3
Then we would discover that Q.y was inconsistent. This would taint
all the Q.y and all the P.y equations, but the others would remain:
P.x = 1
Q.x = 1
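A little sketch of the connected-components idea, assuming each equation is
represented simply as a list of the variable names it mentions (not the real
equation objects); it's just union-find over the variables:

sub equation_components {
    my @eqns = @_;                 # each equation: a ref to a list of var names
    my %parent;
    my $find = sub {
        my $v = shift;
        $parent{$v} //= $v;
        $v = $parent{$v} while $parent{$v} ne $v;
        return $v;
    };
    for my $eq (@eqns) {
        my ($first, @rest) = @$eq;
        $parent{ $find->($_) } = $find->($first) for @rest;
    }
    my %group;                     # root variable => list of its equations
    push @{ $group{ $find->($_->[0]) } }, $_ for @eqns;
    return values %group;
}

# The example above:  P = Q;  P.x = 1;  P.y = 2;  Q.y = 3
my @components = equation_components(
    [ "P.x", "Q.x" ], [ "P.y", "Q.y" ], [ "P.x" ], [ "P.y" ], [ "Q.y" ],
);
# Two components come back: the x equations and the y equations.  Solve
# each separately and discard the ones that turn out inconsistent.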
----------------------------------------------------------------
20050710
require "trig";
can load trig.lino, which is simply:
builtin sin, cos;
__END__
my $PI = atan2(0, -1);
register_builtin(sin => sub { sin($_[0] * $PI / 180) });
register_builtin(cos => sub { cos($_[0] * $PI / 180) });
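(Purely a guess at the mechanism, not the real code: register_builtin
presumably just records the coderef in a table that the evaluator consults
when it hits a function call, something like this.)

my %BUILTIN;

sub register_builtin {
    my ($name, $code) = @_;
    $BUILTIN{$name} = $code;
}

sub call_builtin {
    my ($name, @args) = @_;
    exists $BUILTIN{$name} or die "unknown function '$name'\n";
    return $BUILTIN{$name}->(@args);
}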
20050711
Why even require the "builtin" declaration? The only purpose it's
serving is to change the diagnostic message if you misspell a function
name.
----------------------------------------------------------------
20050711
The crucial question about parameters seems to be: when are the ASTs
converted to constraints?
Some subsidiary questions: are parameters replaced by their values in
ASTs or in constraints? I think the former will be easier. Then
given
simple S(b=4+3)
you can compile the param-spec to something like [b, [+, 4, 3]] and
then later on simply replace [VAR b] with [+, 4, 3] and then do the
evaluation as usual.
This should turn into two things in the type object:
O => { S => simple }
and
V => { S.b => [+ , 4, 3] }
** When the ->draw method is called, its ASTs are converted to
constraints. **
This is done recursively for all subobjects.
The V member is subsetted appropriately and the subset is passed as a
parameter to expression_to_constraints.
Here's the example you made up that seems to clear things up a lot:
define simple {
    number a;
    param number b = 10;
    constraints { a*b = 20; }
}
define hyperb {
    simple S(b=4);
}
When the definition of hyperb() is read, the type object constructed
for hyperb has O and V as above. Later, when a hyperbola is drawn,
its constraints are compiled from the expression trees. The
calculation of constraints recurses into the S object with the
environment {b=>[CON, 4]}. Then the constraint [-, [*, [VAR, a],
[VAR, b]], 20] is processed and in the course of that the VAR[b] is
replaced with [CON 4].
----------------------------------------------------------------
20050713
OK, here's how parameter variables work.
Every type object contains a "V" member, which is a hash. There are
two kinds of items in V. One is the type's own parameters, with their
default expression values. The other is the expressions that the type
wishes to enforce on the parameter variables of its subobjects. The
names of the latter entries have dots in them.
At DRAW time, we do the following:
1. Replace parameter variables in constraint expressions with their
defining expressions or defaults. Abort at this time if a
parameter is unspecified.
2. Convert expressions to equations.
3. Gather equations from subobjects, qualifying equations
appropriately.
4. Solve equations.
There will be a Type::constraint_equations method that returns all the
type's constraint equations. Its arguments are a (possibly empty) list
of environment hashes. It has two phases:
1. For each constraint expression, call Expression::substitute,
passing the environment hashes, and also $self->{V}. This
removes the parameter variables from the expression; see below.
Call expression_to_constraints on the result.
2. For each subobject, subset $self->{V} appropriately and call
constraint_equations recursively on the subobject, passing the
environment hashes and the $self->{V} subset.
Merge together the resulting sets of equations and you have the system
to be solved.
Expression::substitute is passed a list of environment hashes. It
recurses over the expression, copying it. When it reaches a VAR node,
it looks for the var in the environment. There are three cases here:
1. Var is not in the environment. It is therefore not a parameter,
so just leave it alone.
2. Var is in the environment, but all appearances of it have
undefined value. Raise an unspecified-parameter error.
3. Var is in the environment with a defined value. Take the first
(?) such value and replace the VAR node with the result.
Question: Does case 3 need a recursive call over the replacing value?
Your idea was "param number a=3, b=a" should be legal, but then [VAR
b] might not be properly evaluated to [CON 3].
Is "first" correct? Or should it be "last"?
Types also need an ->is_param method, because the behavior of certain
syntactic constructions depends on that. For example:
sometype P(x=EXPR);
If P.x is a parameter, this adds P.x => EXPR to the current type's V
hash. But if P.x is an ordinary variable, this adds [- P.x EXPR] to
the current type's C hash. (Idea for later: Always do the former,
and add constraints to the V hash whenever they have the form VAR =
EXPR, even when they're in a CONSTRAINTS section.)
Other cases:
param sometype P;
Puts P => undef into V.
param sometype P = EXPR;
puts P => EXPR into V.
sometype P(x = EXPR);
puts P.x => EXPR into V or P.x = EXPR into C, as above.
Question: what does
param sometype P(x = EXPR)
do? Think of a plausible example.
So here's your clearing-up example again:
define simple {
    number a;
    param number b = 10;
    constraints { a*b = 20; }
}
define hyperb {
    simple S(b=4);
}
This does
simple: { O => { a => number,
                 b => number,
               }
          C => { [- [* a b] 20] },
          V => { b => 10 },
        }
hyperb: { O => { S => simple },
          V => { S.b => 4 },
        }
Then when we do hyperb->draw, it calls hyperb->constraints, which gets
hyperb's constraints (none) and then calls simple->constraints({b =>
4}).
simple->constraints({b => 4}) calls substitute({b => 4}, {b => 10})
on its constraint expression [- [* a b] 20], and the result is
[- [* a 4] 20]. This is then passed to expression_to_constraints,
which produces { a => 4, "" => -20 }. hyperb->constraints qualifies
this to { S.a => 4, "" => -20 }. This is added to the equation set.
The V hashes will have to be incorporated into the result of solving
the equations somehow. Maybe the easiest way is to have ->constraints
do it from the V hashes. If it sees ({b=>10}, { b => 4, p.x => 12 })
it can add { b => 1, "" => -4 } to the equation set it yields up.
This won't slow down the equation solving much. Or maybe there's a
better way to just incorporate the values from the V into the solution
hash after the equations are solved.
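For concreteness, here's a toy version of the expression-to-linear-combination
step, with the same made-up [ OP, ... ] nodes; the real
expression_to_constraints surely handles more cases.  A result like
{ a => 4, "" => -20 } means the equation 4a - 20 = 0.

sub linear_combination {
    my ($op, @k) = @{ shift() };
    return { "" => $k[0] } if $op eq 'CON';
    return { $k[0] => 1 }  if $op eq 'VAR';
    if ($op eq '+' or $op eq '-') {
        my ($l, $r) = map { linear_combination($_) } @k;
        my %sum = %$l;
        $sum{$_} = ($sum{$_} || 0) + ($op eq '+' ? 1 : -1) * $r->{$_} for keys %$r;
        return \%sum;
    }
    if ($op eq '*') {
        my ($l, $r) = map { linear_combination($_) } @k;
        ($l, $r) = ($r, $l) if 1 == keys(%$r) && exists $r->{""};  # constant goes on the left
        die "CHUNK * CHUNK\n" unless 1 == keys(%$l) && exists $l->{""};
        return { map { $_ => $l->{""} * $r->{$_} } keys %$r };
    }
    die "unhandled operator '$op'\n";
}

# linear_combination([ '-', [ '*', [VAR=>'a'], [CON=>4] ], [CON=>20] ])
# gives { a => 4, "" => -20 }, which is the equation produced above.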
----------------------------------------------------------------
20050722
Here's an interesting bug. Consider:
define cross extends vline {
    hline h;
    constraints { ... }
}
Now what happens when a cross is drawn? What are its drawables?
The rule is to draw all the subfeatures (which in this case would
include h) *unless* there is an explicit draw{} section that overrides
this.
But in this case there *is* an explicit draw{} section: it is
inherited from line, via vline.
So what's the correct behavior here?
I think you earlier had some idea about having a special SUPER
declaration in draw sections that would explicitly request that the
parent be drawn. I guess that's not appropriate here.
Maybe the default is "draw my subobjects, and also, if I have a
parent, subset the environment appropriately, and draw the parent".
"Appropriately" here migh, at least as a first cut, be "not at all",
since the parent's draw routines are free (and likely) to ignore the
parts of the environment they're not expecting to see.
t/draw001 has this boiled down to an essential test case:
define foo { draw { &print_foo; } }
define bar { draw { &print_bar; } }
define baz extends bar { foo f; }
baz b;
When b is drawn, what happens? I think it should call both print_foo
and print_bar. The present implementation omits FOO because the
draw{} section inherited from bar overrides the normal behavior.
Idea: Just make the change you think is best and see if any of the
existing tests fail.
Yes, the change did clear up the draw001 and param006 tests and didn't
mess up anything else.
20050725
Your most recent set of changes, adding in strings, has broken *all*
the "label" tests---looks like some sort of fatal error. Here's the
log message:
Fixing bug in label004 tests.
The real bug was that the test file was incorrect and should
provoke a type error! Now it does.
The fix involved adding a new value type--strings are no
longer CON (constants) but special STR (strings). They have
their own "+" method, which concatenates strings, but
otherwise resist arithmetic. "foo" compiles to [ STR, "foo" ]
instead of to [ CON, "foo" ].
Also added a revised, corrected version of label004 that
should work, called label005.
Everything else is still working, though.
20050725
Now that you have the label stuff straightened out, tests are passing
that should fail!
define foo {
    number a = c + 1;
    param number b = a/10;
    number c = b + 1;
}

gets solved correctly, as does

define foo {
    number a = c * 2;
    param number b = a/10;
    number c = b + 1;
}

However

define foo {
    number a = c * b;
    param number b = a/10;
    number c = b + 1;
}
generates a CHUNK * CHUNK error. What exactly is going on now? You
need to think carefully about how params work and how they are
supposed to work. How do the first two equations get solved? Is it
that they are first reduced to
a = c + 1
c = a/10 + 1
and then the equations are solved and then b is calculated from a? If
so, everything is JUST FINE!
What does this theory predict for the third example? The substitution
occurs and yields
a = c * a/10
c = a/10 + 1
and then there really is an improper multiplication. Did you plan
this feature? I think you had an idea to do it, then decided that it
wouldn't work and abandoned it without reflecting carefully. But it
does work. I've refitted t/param006 to go with the original
conception; it's now t/param012.
20050805
Idea for dealing with underconstrained equations.
Each equation group has a minimum and maximum x and a minimum and
maximum y. This associates a bounding box with an equation group.
If an equation group is underconstrained, then:
1. Issue a warning message: "Equation group involving ... has
$N degrees of freedom" (N > 0)
2. Assign one (x,y) variable pair so that it is located
*outside* the bounding boxes of the solved equation groups.
3. Re-solve the underconstrained equation group. Repeat until
the group is no longer underconstrained.
20050805
Question: given an underconstrained equation group, can we determine
the range of x and y? That would help us decide how to assign (x,
y) to move the entire equation group outside of some box.
Answers:
a. Maybe. Some groups are so constrained. But some aren't.
For example, if A.y = 2 * B.y, the distance |B.y - A.y|
could be arbitrarily large.
b. Here's an idea I think won't work: Make "ht" and "wd"
special, like "x" and "y" are. Then calculate bounding
boxes by adding up heights. But
         |
    +----+----+
    |         |
+---+---+     |
|       |   +-+------+
+-------+   |        |
            |        |
            |        |
            +--------+
suggests that it might be too hard to figure out where everything
might be. There's a lot of complicated geometry you'd have
to do.
Also, the specialness of x, y, z comes fairly naturally out
of the treatment of tuples. How would we get the
specialness of ht, wd, ?? similarly naturally?
20050805
The problem you're trying to solve in the previous entry is that you
would like to figure out a good place to draw the underconstrained
group so that it doesn't overlap the properly-positioned
fully-constrained groups, or the other underconstrained groups.
It's easy to calculate a bounding box for a fully-constrained group,
as you noted above. But to keep the underconstrained group's box from
intersecting the known box, you have to know how big it is, and which
variables are extremal. But this might not even be well-defined.
20050810
I think here's how we're going to deal with parameters. There are
several phases.
0. Topological-sort the parameters into a dependency order; a
depends on b if the defining expression for a involves
parameter b. If there's a cycle, fail with an error
message.
1. Take parameter definitions (which might be expressions) and
substitute them into the constraint equations until no
param variables remain. This must be done recursively,
because you might have
param number a = b;
param number b = c;
number c;
constraint { a = 12; }
There's no worry about infinite recursion, because there
are no cycles in the parameter dependency graph.
2. The constraints are now parameterless. Convert them to
equations. Fail if there's nonlinearity.
3. Solve the equations.
4. In dependency order, evaluate the parameter expressions.
This works because each parameter can be evaluated after
the parameters on which it depends.
Procedural question: clean up the Environment stuff first?
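As for phase 0, here's a small sketch, assuming we already know, for each
parameter, which other parameters its defining expression mentions (plain
depth-first topological sort with cycle detection):

sub tsort_params {
    my %deps = @_;                 # param name => [ params it depends on ]
    my (%state, @order);           # state: 1 = in progress, 2 = done
    my $visit;
    $visit = sub {
        my $p = shift;
        return if ($state{$p} // 0) == 2;
        die "circular parameter definition involving '$p'\n"
            if ($state{$p} // 0) == 1;
        $state{$p} = 1;
        $visit->($_) for @{ $deps{$p} || [] };
        $state{$p} = 2;
        push @order, $p;
    };
    $visit->($_) for sort keys %deps;
    return @order;                 # dependencies come first
}

# For "param number a = b; param number b = c; number c":
#   tsort_params(a => ['b'], b => [])  returns (b, a).
# Phase 4 evaluates in that order (b from the solved c, then a from b);
# phase 1 can substitute into the constraints in the reverse order (or
# recursively, as above) so that no parameter variables survive.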
20050813
Consider a type X that has a label in it somewhere. Now you have
X a, b;
a.label = "foo";
b.label = "bar";
a = b + (1, 2);
No problem so far. But what if that last line was
a = b;
to put them at the same place? Won't that try to equate a.label = b.label?
This isn't a serious problem, and I can see several potential
solutions, but it may have to be solved somewhere along the line.
For example, when recursing down the feature tree to find the ultimate
constraints, don't recurse to string types. Or only recurse
ultimately to numbers named x, y, and z.
Or just require that instead of "a = b", the user write something like
"a = b + (0,0)" or even "a = b + 0".
20050815
You need to think a little more carefully about how the environments
are handled during the calculation of parameter values. You have some
situations in which you need to replace a variable with an expression,
and others where you need to replace it with a Perl number.
The argument to param_values is a type and an environment of outer
parameter definitions. This outer environment is an EXPRESSION
environment, because
footype n(pn=a+b);
is legal. So we need to construct the full environment, which
includes the type's ->{V} hash and its ancestors' ->{V} hashes, and
then definitions here can be overridden by the outer environment hash.
One thing I think you had wrong: You were going to substitute
parameter definitions into other parameter definitions to solve the
parameters. Don't bother. Instead, just substitute directly into the
constraint expressions, again in tsort order. The result is that the
constraint expressions now contain no parameter variables.
You know, looking at the note of 20050810, above, this is exactly what
you said there under #1 and #2. I think you were confused today
because the code doesn't reflect this clear view of things.
Anyway, param_values should only be called *after* the equations
are solved, to handle phase 4, not before the equations are solved.
Before the equations are solved, it's fine to have parameters be
defined as expressions.
20050816
Here's the crucial part of ->draw:
unless ($env) {
    $env ||= Environment->new();
    my $equations = $self->constraint_equations($builtins);
    my $solutions = Environment->new($equations->values);
    my %params = $self->param_values($solutions);
    $env = Environment->new(%params, $solutions->var_hash);
}
Clearly, param_values should do the calculation part; the substitution
phase should be done inside of constraint_equations, and the tsorting,
since it's used by both, should be done in this block.
20050817
What are the drawables of an object? You ran into this question
before. I think the answer is:
If the object has an explicit drawables{} section, use that.
Otherwise, if any ancestor of the object has an explicit
drawables{} section, use the first one found.
Otherwise, use all the (nonscalar) subobjects of the object,
including all those inherited from ancestors.
But now that I think about it, maybe it's more useful to try
If the object has an explicit drawables{} section, use that.
Otherwise, it's the (nonscalar) subobjects, *plus* the
parent's drawables.
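A sketch of that second rule; the method names here (explicit_drawables,
nonscalar_subobjects, parent) are invented for the sketch, not the real API:

sub drawables {
    my $self = shift;
    if (my $explicit = $self->explicit_drawables) {      # my own draw{} section
        return @$explicit;
    }
    my @d = $self->nonscalar_subobjects;                 # my own subobjects...
    push @d, $self->parent->drawables if $self->parent;  # ...plus the parent's drawables
    return @d;
}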
20050819
Extend param spec notation so that
labelbox L(label.text="foo")
is legal; that way you would not have to use the silly trick of
define labelbox {
    param string text = "";
    label L(text=text);
}
everywhere. This has serious problems. Most obviously, suppose there
are lots of parameters? Then many types will have many redundant
param declarations like this. Less obviously, what if "label" has a
default text? The declaration in "labelbox" needs exactly the same
default, or else the behavior is broken.
Idea: just make the change to the grammar; see if any of the tests
fail. It might Just Work.
----------------------------------------------------------------
20061210
You should still allow something like
box X rotated 37;
which makes X a box whose internal coordinate system relates to the
external one by a 37-degree rotation. As long as the rotation amount
is known before the equations are set up, the equations will still be
linear.
Note that X.s is still the midpoint of the "bottom" side---that is, it
is still .s in the internal coordinate system, not in the external
coordinate system, and so is not the globally southmost point.
Big question: What is the center of rotation? Maybe the syntax needs
to be "... rotated 37 about X.c"?
20061210
Here's a purely technical problem. To transform constraints with
subscript vars into constraints with constants, you need the original
type object around, because a[i] needs to operate on i (use
substitute_vars to replace it with constant values) but also we need a
(because we need to look it up in the type object to find its
declaration to determine the correct range for instantiating i.)
So where to put this method? It should naturally be a Type method,
but all the emap stuff you need is in Expression. Maybe put it in
Expression and have the Type be the u parameter.
You also need to replace the O{} hash in a type with something more
complicated than a hash, because O maps names to types, and names are
no longer strings. There should be a method which takes a type T and
a name N and returns the type of subchunk N of T (you have "subchunk"
already, but it needs rewriting), and another method that returns the
array declaration bounds of subchunk N of T.
20070119
There's a bit of a problem with the translation from Linogram's curves
to PS splines. PS only does cubic Bezier curves, and I had imagined
that LG would present arbitrary-degree Bezier curves to the user. So
the question arises of how to get the curves I want in PS.
One possibility is that if the user asks for points p0, p1, p2, etc.,
then generate the PS curve that is piecewise cubic with the following
segments:
p0 p1 p1 (p1+p2)/2
(p1+p2)/2 p2 p2 (p2+p3)/2
(p2+p3)/2 p3 p3 (p3+p4)/2
...
(p3+p4)/2 p4 p4 p5
which has the property of passing through only p0 and p_n of the
user's points; the intermediate p_i are not interpolated.
It might approach the (p1+p2)/2's too sharply, though.
Addendum: Yes indeed, the curves look like crap. Not curvy enough.
Do better interpolation. Maybe:
p0 (4p0+p1+p2)/6 (2p0+2p1+2p2)/6 (p1+p2)/2
(3p1+3p2)/6 (2p1+3p2+p3)/6 (p1+3p2+2p3)/6 (3p2+3p3)/6
(3p2+3p3)/6 (2p2+3p3+p4)/6 (p2+3p3+2p4)/6 (3p3+3p4)/6
...
(3p3+3p4)/6 (2p3+2p4+2p5)/6 (p3+p4+4p5)/6 6p5/6
No, this lacks the smoothness property. If you have consecutive
segments
A B C D
D E F G ...
then you need D-C = E-D or the join won't be smooth.
20070120
Actually, the first scheme doesn't look as bad as you thought. There
was a bug: you were scaling the y coordinates by the x factors instead
of by the y factors, so the curves were all squashed. With the bug
fixed, it looks pretty good.
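Here is the first scheme as a plain list-of-control-points calculation
(ordinary Perl on [x, y] pairs, nothing linogram-specific):

sub mid { my ($p, $q) = @_; [ ($p->[0] + $q->[0]) / 2, ($p->[1] + $q->[1]) / 2 ] }

sub spline_segments {
    # Given user points p0 .. pn, return the cubic segments
    #   p0 p1 p1 (p1+p2)/2,  (p1+p2)/2 p2 p2 (p2+p3)/2,  ...,
    #   (p_{n-2}+p_{n-1})/2 p_{n-1} p_{n-1} pn.
    my @p = @_;
    die "need at least 4 points\n" if @p < 4;
    return [ @p ] if @p == 4;            # exactly one cubic segment
    my @seg;
    my $start = $p[0];
    for my $i (1 .. $#p - 2) {
        my $end = mid($p[$i], $p[$i + 1]);
        push @seg, [ $start, $p[$i], $p[$i], $end ];
        $start = $end;
    }
    push @seg, [ $start, $p[-2], $p[-2], $p[-1] ];
    return @seg;
}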
20070122
The "design" of drawing libraries---if you can call it that---is
atrocious. Using an END block to print out a bunch of text has real
drawbacks, and is bizarre. Even ignoring the real technical problems
of this approach, you want to get other people to write drawing
libraries, so the API needs not to be bizarre.
A more normal design would be like this: drawing libraries are normal
Perl modules. At startup, linogram loads one or more of these modules
as directed by the command-line arguments and instantiates an object
from each. Drawing functions are actually method calls on these
objects. For example, draw_line turns into
$_->draw_line for @drawing_components;
Then, just before exiting, linogram calls one more method:
$_->render for @drawing_components;
One advantage of this approach is that linogram can draw several
different output formats in one run. Another is that it is not
bizarre.
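A sketch of what such a backend might look like; the module and method names
here are illustrative, not an actual linogram interface:

package Linogram::Backend::PostScript;   # hypothetical backend module

sub new {
    my ($class, %opts) = @_;
    return bless { ops => [], file => $opts{file} }, $class;
}

sub draw_line {
    my ($self, $x1, $y1, $x2, $y2) = @_;
    push @{ $self->{ops} }, "$x1 $y1 moveto $x2 $y2 lineto stroke";
}

sub render {
    my $self = shift;
    open my $fh, ">", $self->{file} or die "$self->{file}: $!";
    print $fh "%!PS\n";
    print $fh "$_\n" for @{ $self->{ops} };
    print $fh "showpage\n";
    close $fh;
}

1;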
20070610
The interface to index variables continues to produce problems. You
will really need a way to specify the index variable explicitly. For
example, consider a collection of n concentric rings of different radii:
define rings {
    param index N;
    param number spacing = 1;
    ring r[N](rad=s*(i+1));
}
where you want i to be taken to be 0..N-1. But i was never declared
and you provided no way to declare it. If rad were a constraint
variable, there would be no problem:
r[i].rad = s*(i+1);
but if it is a parameter, this fails.
Suggestion: an extended syntax for param variables:
param index N with iterator i;
or something of the sort. Actually from a linguistics point of view
it makes more sense to say something like
param index i with bound N;
but then what does the i-less version look like?
param index with bound N;
perhaps?
20070610
Here's your current thinking about the index circularity problem: The
program should resolve the index definitions first, from top to
bottom. By then it will know how many of each object there are and
what the tree structure is. Then it can set up the non-index param
definitions, which can be as circular as they want. Then resolve the
params and solve the equations.
You raised an example like this:
define snark {
    param number p = 3;
}
define boojum {
    param number N = s[2].p;
    snark s[N];
}
In this new scheme, this example is illegal: indexes can only depend
on other indices.