Library variables recomputed on each module call #782

Open
OskarLinde opened this issue May 6, 2014 · 4 comments
@OskarLinde (Contributor) commented May 6, 2014

Here is a simplified synthetic case to illustrate the problem:

_Snippet 1_

```
lookup_table = expensive_computation();
function f(x) = interpolate(x, using = lookup_table);
```

_Snippet 2_

```
for (i = [1:1000]) echo(f(i));
```

If these two snippets are placed in the same file, everything executes as expected: expensive_computation() runs once, and each cheap interpolate() call does the work as intended. Now to the problem: if we instead put Snippet 1 in its own library, lib.scad, and call it like this

```
use <lib.scad>
for (i = [1:1000]) echo(f(i));
```

the expensive_computation() call will suddenly be performed 1000 times – causing a dramatic slowdown.
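The recomputation is easy to observe directly. Here is a minimal sketch (hypothetical file names; an echo() expression stands in for the expensive computation, which requires an OpenSCAD version that supports echo expressions):

```
// lib.scad -- hypothetical stand-in for the library above
lookup_table = echo("recomputing lookup_table") [0, 1, 2];
function f(x) = lookup_table[x % 3];

// main.scad
use <lib.scad>
// With the bug, "recomputing lookup_table" is echoed once per call to f(),
// not once per compile.
for (i = [1:5]) echo(f(i));
```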

While this could all be intended (there is a fringe use case, see below), I think it is more likely an unintended consequence of taking the easiest implementation route: rather than finding a way to store the global context of each use'd file, the top-level assignments are recomputed on every module instantiation and function call.

Fixing the issue is easy, but I've not attached a patch: there is a remote chance the behavior is intended, and a remote chance that fixing it would break or change the behavior of existing code that relies on it.

Here is the most likely (although very contrived) theoretical scenario I've come up with that would change behavior by fixing this:

_lib.scad_

```
var = $fn * 5;
module m() circle($fn = var);
```

And used:

```
use <lib.scad>
module n() {
    $fn = 7;
    m(); // generates a circle with 35 sides
}
n();
```

The fix would make m() generate a circle with 25 sides (default $fn*5).

The current behavior is also a bit inconsistent: simply changing use to include already makes the circle get 25 sides today.
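For comparison, here is a sketch of the include variant (assuming the same lib.scad as above):

```
include <lib.scad>
module n() {
    $fn = 7;
    m(); // with include, var was already evaluated once at top level,
         // so this call generates a circle with 25 sides
}
n();
```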

Is this behavior intended, and are there any more reasonable use-cases?



@kintel (Member) commented May 29, 2014

This is not intended - top-level expressions in use'd libraries should only be evaluated once.
Whether models in the wild actually depend on this is a good question.

I've been setting up some larger-scale test scripts, mostly for performance testing, but they're also useful for regression testing across OpenSCAD releases. Perhaps I should extend those a bit. Right now they only work with customizable designs published on Thingiverse, but that restriction could be lifted by massaging the scripts a bit: https://github.com/openscad/statistics-scripts.

The large initial job is to collect public URLs for published OpenSCAD designs.

@adrianVmariano commented

I stumbled across this recently and was surprised to learn about this old bug report.

The run-time implications of this bug are quite serious: in a recent test, I cut my code's run time from 10.5 minutes to 5 seconds by commenting out top-level example code in a library file.

@nophead (Member) commented Apr 9, 2019

Yes, I moved some test code from a library module into a separate test file that uses the library and got a speed-up.

@rcolyer (Member) commented Sep 18, 2021

I prepared a testcase to help benchmark and illustrate the behavior of this recomputing issue:

slow.scad:

```
//include <slowused.scad>
use <slowused.scad>

x = [for (i = [0:100000]) let ($i = i) foo(i)];
echo(x[0]);
```

slowused.scad:

```
howslow = 25;
demodata1 = [for (i = [1:howslow]) rands(0, i, 1)[0]];
$demodata2 = [for (i = [1:howslow]) rands(0, i, 1)[0]];
$i = is_undef($i) ? -1 : $i;
function output2(s) = echo(s) 0;
function output(s) =
  $i == -1 ? output2(str("Initial pass ", s)) :
  $i < 5 ? output2(str("Reprocessed ", s, " time ", $i)) :
  $i == 5 ? output2("Suppressing further output") : 0;
output1 = output("normal variables");
$output2 = output("special variables");

function foo(i) = i + 1;
```

This testcase was carefully constructed to misbehave identically in the master branch and in recent releases, so that this remaining performance problem is isolated from the recent changes, in case comparisons across versions are needed to resolve this. Commenting out the use and uncommenting the include produces the correct one-pass behavior.

For include:

```
ECHO: "Initial pass normal variables"
ECHO: "Initial pass special variables"
ECHO: 1
...
Total rendering time: 0:00:00.146
```

However, currently with use:

```
ECHO: "Reprocessed normal variables time 0"
ECHO: "Reprocessed special variables time 0"
ECHO: "Reprocessed normal variables time 1"
ECHO: "Reprocessed special variables time 1"
ECHO: "Reprocessed normal variables time 2"
ECHO: "Reprocessed special variables time 2"
ECHO: "Reprocessed normal variables time 3"
ECHO: "Reprocessed special variables time 3"
ECHO: "Reprocessed normal variables time 4"
ECHO: "Reprocessed special variables time 4"
ECHO: "Suppressing further output"
ECHO: "Suppressing further output"
ECHO: 1
...
Total rendering time: 0:00:06.532
```

Increase the value of `howslow = 25;` in the used file to see how slow calls to the trivial "add one" function foo() can get due to top-level reprocessing in the use case. I started my testing with `howslow = 10000;`, but this bug has such an impact that the value had to be turned down quite a bit to get a tolerably usable testcase.
