From global iteration to global iteration, the model steps in tm2py are taking longer to run and using more memory. In some cases, the runtime increase is substantial. As shown in the spreadsheet attached to #179, the truck component takes 2.46 minutes in the first global iteration, but 4.93 minutes in the third. As noted in #189, invoking the transit skim component independently takes about half the runtime of invoking it as part of a full model run. The memory profiling done as part of #179 also shows the memory footprint of the model run growing over the global iterations.
Progress:
Considerations
There are numerous things we could experiment with, including:
- Put pauses in the model run to see whether memory is freed and subsequent steps run faster. That is, are we calling procedures too rapidly for the garbage collector to keep up?
- Leave components out to see which ones are bogging things down. For example, if we just run the truck component three times, does the runtime grow with each global iteration?
- Add profiling code to see which methods are consuming memory and for what period of time.
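As a rough sketch of the first two experiments combined, we could time repeated invocations of a single component while forcing garbage collection and a pause between runs. The `run_with_pause` helper and `dummy_component` stand-in below are hypothetical; a real test would call one of the tm2py components (e.g. the truck component) in place of the stand-in.

```python
import gc
import time

def run_with_pause(component, iterations=3, pause_seconds=5):
    """Invoke a component repeatedly, forcing garbage collection and
    pausing between runs; return the runtime of each pass."""
    runtimes = []
    for _ in range(iterations):
        start = time.perf_counter()
        component()
        runtimes.append(time.perf_counter() - start)
        gc.collect()               # explicitly reclaim unreachable objects
        time.sleep(pause_seconds)  # give the OS a chance to release memory

    return runtimes

# Hypothetical stand-in for a tm2py component: allocates and drops some memory.
def dummy_component():
    data = [list(range(1_000)) for _ in range(100)]
    return len(data)

print(run_with_pause(dummy_component, iterations=3, pause_seconds=0))
```

If the per-pass runtimes stay flat with the pauses and `gc.collect()` calls in place, that would point toward garbage-collection pressure rather than a leak inside the component itself.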
Option 2 seems like a reasonable and efficient way to start. It's likely that a small number of the computationally intensive components are the problem. If we can isolate them, we can add pauses first to see if that works, and then do the profiling.
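For the profiling step, the standard-library `tracemalloc` module can report the peak memory allocated during a single component invocation, which would let us compare footprints across global iterations. The wrapper and stand-in component below are a minimal sketch, not tm2py code.

```python
import tracemalloc

def profile_memory(component, label="component"):
    """Run a component once and report peak Python memory allocated."""
    tracemalloc.start()
    component()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{label}: current={current / 1e6:.2f} MB, peak={peak / 1e6:.2f} MB")
    return peak

# Hypothetical stand-in for a tm2py component.
def dummy_component():
    return [bytes(1_000) for _ in range(1_000)]

profile_memory(dummy_component, label="dummy")
```

Wrapping each component call this way inside the global-iteration loop would show which components' peaks grow from iteration to iteration. Note that `tracemalloc` only sees Python-level allocations; memory held by Emme or other native libraries would need an OS-level tool (e.g. `psutil`) instead.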
@Ennazus, @lmz, @e-lo, @i-am-sijia, @AshishKuls: thoughts?