#line annotation costly to generate because of too much I/O #693

nddrylliog opened this Issue Sep 7, 2013 · 2 comments


@nddrylliog
Member

See:

amos at loki in ~/Dev/rock (97x)
$ time rock rock.use --onlygen
[ OK ]
rock rock.use --onlygen  11.14s user 0.24s system 99% cpu 11.396 total

versus:

amos at loki in ~/Dev/rock (97x)
$ time rock rock.use --onlygen --nolines
[ OK ]
rock rock.use --onlygen --nolines  5.44s user 0.11s system 99% cpu 5.558 total

A whopping half of the time is spent on line numbers! That's because we re-read the file every single time. We need at least some sort of caching system for .ooc files being read... or better yet, Tokens could store their line numbers directly when being read.
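The cost difference is easy to reproduce: recomputing a line number by re-reading the file for every token does O(offset) I/O per query, while caching the file contents in memory turns it into a cheap in-memory scan. A minimal Python sketch of the two approaches (function names and file layout are illustrative, not rock's actual code):

```python
import os
import tempfile

# Naive approach (what the issue describes): re-open the file and scan
# byte-by-byte up to the token's offset, counting newlines each time.
def line_of_naive(path, offset):
    line = 1
    with open(path, "rb") as f:
        for _ in range(offset):
            if f.read(1) == b"\n":
                line += 1
    return line

# Stopgap approach: read each file into memory once, then answer every
# subsequent query from the cached bytes with no further I/O.
_cache = {}

def line_of_cached(path, offset):
    if path not in _cache:
        with open(path, "rb") as f:
            _cache[path] = f.read()
    return _cache[path][:offset].count(b"\n") + 1

# Demo on a throwaway file.
fd, path = tempfile.mkstemp()
os.write(fd, b"ab\ncd\nef\n")
os.close(fd)
print(line_of_naive(path, 4), line_of_cached(path, 4))  # same answer
os.unlink(path)
```

With millions of tokens, the naive version issues millions of tiny reads and seeks, which is exactly the pathological pattern observed below, especially on Windows where per-call I/O overhead is higher.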

@nddrylliog nddrylliog was assigned Sep 7, 2013
@nddrylliog
Member

This has been observed on Windows, where it has a huge impact. We do indeed issue a few million I/O reads (O_O) because we seek around in files so much.

As a temporary stopgap measure, instead of implementing a byte offset <=> line number map, we could just read whole files in memory and work from there. It'd still be stupid but a whole lot more efficient.
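The byte offset <=> line number map mentioned above is straightforward: record the byte offset at which each line starts, then binary-search that sorted list per query, making each lookup O(log lines) with zero I/O. An illustrative Python sketch (not rock's implementation):

```python
import bisect

def build_line_index(data):
    """Byte offsets at which each line starts (line 1 starts at offset 0)."""
    starts = [0]
    for i, b in enumerate(data):
        if b == 0x0A:  # '\n' ends a line; the next line starts at i + 1
            starts.append(i + 1)
    return starts

def line_for_offset(starts, offset):
    """1-based line number of the line containing the given byte offset."""
    return bisect.bisect_right(starts, offset)

source = b"first line\nsecond line\nthird\n"
index = build_line_index(source)
print(line_for_offset(index, 0))   # offset 0 falls on line 1
print(line_for_offset(index, 11))  # start of "second line" is line 2
```

The index is built in one pass when the file is first read, so even the "read whole files into memory" stopgap gets this for free as a side effect.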

@nddrylliog nddrylliog added a commit that referenced this issue Nov 27, 2013
@nddrylliog nddrylliog Take advantage of nagaqueen's line numbers instead of recomputing them all the time. Large codegen speed-up, esp. on Windows. Closes #693
c4779b8
@nddrylliog
Member

The solution was to make nagaqueen expose the line info it already tracks, so there's no need for us to recompute it all the time ourselves!

@nddrylliog nddrylliog closed this Nov 27, 2013