Generate old regional analysis results (#957)
Conversation
```java
@Override
public void action(ProgressListener progressListener) throws Exception {
    DBObject query = QueryBuilder.start().or(
        QueryBuilder.start("travleTimePercentiles").is(null).get(),
```
Typo in field name: travelTimePercentiles
It does seem like a good simplification to get rid of this fallback code for very old formats. Removing some of these fields means old worker versions that rely on them would break, but these are very old versions that are already effectively deprecated and shouldn't be used.

Just thinking through how this works: In recent years we generate multi-cutoff results files, from which single-cutoff files are generated in multiple formats. The multi-cutoff files are never used directly; they are only used to generate single-cutoff files on demand. For old analyses where multi-cutoff results files do not exist (or are named differently because analyses were limited to one set of destinations), this PR doesn't attempt to generate new-style results files. It just hits all code paths where multi-cutoff results would normally be used to derive single-cutoff results files, hitting the fallback code that works in the absence of new-style results files. Once that's done, it should be possible to remove that fallback code. This is assuming we've exhaustively hit all the ways the functions containing the fallback code can be invoked.

What's the planned way to run this on staging and production before #956 is merged? I guess we would perform one deployment of the patched version, observe the logs, and perform another deployment with the post-956 version soon thereafter if everything looked good?
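The lookup order described above can be sketched as a toy example (all names and stored values here are hypothetical stand-ins, not the actual backend code): try the multi-cutoff results file first, and take the legacy single-cutoff fallback path only when it is absent.

```java
import java.util.Map;

public class SingleCutoffSource {
    // Hypothetical stand-ins for stored results files, keyed by analysis id.
    static final Map<String, String> multiCutoffFiles = Map.of("newStyle", "multi-cutoff-data");
    static final Map<String, String> legacyFiles = Map.of("oldStyle", "legacy-single-cutoff-data");

    // Derive a single-cutoff result: prefer slicing the multi-cutoff file,
    // otherwise fall back to the old-format path this PR aims to retire.
    static String getSingleCutoff(String analysisId) {
        String multi = multiCutoffFiles.get(analysisId);
        if (multi != null) {
            return "sliced:" + multi;
        }
        // Fallback for very old analyses with no multi-cutoff file.
        return "legacy:" + legacyFiles.get(analysisId);
    }
}
```

Once every old analysis has been pushed through `getSingleCutoff`-style derivation, the `legacy:` branch becomes dead code and can be deleted.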
```java
regionalAnalysis.cutoffsMinutes = cutoffs;
regionalAnalysis.travelTimePercentiles = percentiles;
regionalAnalysis.destinationPointSetIds = destinationPointSetIds;
Persistence.regionalAnalyses.put(regionalAnalysis);
```
Modified the description of #956 to reference this PR and recommend that this one be merged and run first.
```java
OpportunityDataset destinations = Persistence.opportunityDatasets.get(destinationPointSetId);
for (int cutoffMinutes : cutoffs) {
    for (int percentile : percentiles) {
        for (FileStorageFormat format : FileStorageFormat.values()) {
```
It looks like this is iterating through all 12 or so `FileStorageFormat` values and calling `getSingleCutoffGrid` with all of them. The `getSingleCutoffGrid` method only recognizes and handles three of those formats (`GRID`, `PNG`, and `GEOTIFF`) and does not write anything to the output stream for any other cases. So it looks like this might leave the output stream open and try to move an underlying empty file into storage with each of the nine other extensions.
Updated to use only those three formats in b85e57c
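The fix could look roughly like the following sketch: iterate only over the formats `getSingleCutoffGrid` actually handles. The enum here is a simplified stand-in; only `GRID`, `PNG`, and `GEOTIFF` are named in the discussion above, and the other values are illustrative guesses.

```java
import java.util.EnumSet;
import java.util.Set;

public class FormatFilter {
    // Simplified stand-in for the backend's FileStorageFormat enum;
    // only GRID, PNG, and GEOTIFF are confirmed by the review comment above.
    enum FileStorageFormat { GRID, PNG, GEOTIFF, CSV, JSON, POINTSET }

    // The three formats getSingleCutoffGrid recognizes.
    static Set<FileStorageFormat> singleCutoffFormats() {
        return EnumSet.of(FileStorageFormat.GRID, FileStorageFormat.PNG, FileStorageFormat.GEOTIFF);
    }

    // Iterate only over supported formats, so no empty files are
    // opened and moved into storage for the unsupported extensions.
    static int countGenerated() {
        int generated = 0;
        for (FileStorageFormat format : singleCutoffFormats()) {
            generated++; // stand-in for getSingleCutoffGrid(..., format)
        }
        return generated;
    }
}
```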
```java
).get();
try (DBCursor<RegionalAnalysis> cursor = Persistence.regionalAnalyses.find(query)) {
    while (cursor.hasNext()) {
        RegionalAnalysis regionalAnalysis = cursor.next();
```
It might be good to add some logging in here so we can verify the effects of the migration when it runs. Or maybe better, enable `DEBUG` level logging when we run the migration to hit the log statements inside `getSingleCutoffGrid`.
Thanks for adding some logging. To clarify, the underlying reason I was interested in logging was to observe whether the effects of the migration matched expectations. Rather than just counts, something that would tip us off if, for example, records were being missed due to an incorrect field name, or extra files were being produced, etc. This is why I mentioned enabling debug for `getSingleCutoffGrid`, as it would log each new file created and stored. Maybe we could also log the key fields of each `regionalAnalysis` record that was migrated.
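Logging the key fields per migrated record could be as simple as a one-line summary; a minimal self-contained sketch (field names borrowed from the diff above, everything else hypothetical; the real code would presumably go through slf4j):

```java
public class MigrationLog {
    // Minimal stand-in for the fields of a RegionalAnalysis record.
    static class RegionalAnalysis {
        String _id;
        String accessGroup;
        int[] cutoffsMinutes;
    }

    // Build a one-line summary so the migration's effects can be
    // cross-checked against expectations in the logs.
    static String summarize(RegionalAnalysis ra) {
        return String.format("Migrating regionalAnalysis _id=%s accessGroup=%s cutoffs=%d",
                ra._id, ra.accessGroup, ra.cutoffsMinutes.length);
    }

    public static void main(String[] args) {
        RegionalAnalysis ra = new RegionalAnalysis();
        ra._id = "abc123";
        ra.accessGroup = "example-group";
        ra.cutoffsMinutes = new int[]{30, 45, 60};
        System.out.println(summarize(ra));
    }
}
```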
As of right now, I think this approach would be best. There are changes in the dev branch that have not been deployed from PRs #953 and #954, however those changes are very minor. If we take that approach, #956 could also remove the code in this PR.
Generate all final results for old regional analyses with a task so that we can eliminate old code to handle them.
Not all `FileStorageFormat`s are valid single cutoff grid formats.
`getSingleCutoffGrid` can be run more than once on the same inputs. We should perform the database changes after all of the files are generated, so that if there is an error during one of the runs we can just run this again if needed.
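The ordering described above (generate files first, write the database record last) can be sketched as follows; everything here is hypothetical scaffolding rather than the actual task code, but it captures why the order makes the migration safely re-runnable.

```java
import java.util.ArrayList;
import java.util.List;

public class MigrationOrder {
    static List<String> steps = new ArrayList<>();

    // Idempotent: regenerating the same result files just overwrites them.
    static void generateAllResultFiles(String analysisId) {
        steps.add("files:" + analysisId);
    }

    // Only record the migration in the database once every file exists,
    // so a failure mid-run leaves the record unmigrated and re-runnable.
    static void updateDatabaseRecord(String analysisId) {
        steps.add("db:" + analysisId);
    }

    static void migrate(String analysisId) {
        generateAllResultFiles(analysisId);
        updateDatabaseRecord(analysisId);
    }
}
```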
Force-pushed from bafcfab to 353f2de
ansoncfit left a comment
One thing to consider: how will the result generation handle cases where the destination opportunity dataset referenced by a regional analysis has been deleted? It should be possible to generate the single cutoff grids using just the id of the destination dataset. The name of the dataset is only required for the `resultHumanFilename`. Fallbacks for generating human-readable filenames when referenced data are missing would be a separate change. I just wanted to flag this edge case now in case it causes failures with the migration.
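A hedged sketch of that refactor direction (the method signature and filename pattern here are guesses, not the actual backend code): derive the human-readable filename from the dataset name when it exists, and fall back to the raw dataset id when the record has been deleted.

```java
public class ResultFilename {
    // Build a single-cutoff result filename. Only the human-readable part
    // depends on the (possibly deleted) opportunity dataset record.
    static String resultHumanFilename(String datasetName, String datasetId,
                                      int cutoffMinutes, int percentile, String extension) {
        // Fall back to the dataset id if the dataset record no longer exists.
        String base = (datasetName != null) ? datasetName : datasetId;
        return base + "_" + cutoffMinutes + "min_p" + percentile + "." + extension;
    }
}
```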
@trevorgerhardt, how does the following sound?

- Log at the `DEBUG` level for the method in question, as @abyrd suggested above (add `<logger name="com.conveyal.analysis.controllers.RegionalAnalysisController" level="DEBUG" />` to logback.xml?). Or if you think logging `regionalAnalysis._id` and `regionalAnalysis.accessGroup` as currently proposed (plus the existing `INFO` and `WARN` level statements in `getSingleCutoffGrid`) will be enough detail, that's fine.
- Take a snapshot of the staging database
- Run the migration and check results on staging, reporting back on log statements and time required
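In context, the logger addition mentioned above would sit alongside the existing appenders in logback.xml; a minimal config fragment (only the logger name is taken from the comment, the surrounding structure is assumed):

```xml
<configuration>
  <!-- Existing appenders and root logger omitted. -->
  <!-- Temporary, for the migration run only: surface per-file DEBUG output. -->
  <logger name="com.conveyal.analysis.controllers.RegionalAnalysisController" level="DEBUG" />
</configuration>
```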
We may also want to do a dry run or equivalent read-only database query on production without changing anything, just to see how many records and results are involved here and how old they are. We may be addressing a limited number of very old results. If current active users confirm that these results are quite dated and will rarely or never be used, it may be reasonable to abandon them or do a partial conversion that does not cover every edge case.
Refactor methods so that `getSingleCutoffGrid` does not require the opportunity dataset to still exist in the database.
I modified the log statements to use […]. There are currently 167 entries that match the MongoDB query on staging. I will take a snapshot and then run the migration there.
The changes in this PR were run on staging, and from a local machine on production data with the results stored in our production S3 bucket. Closing.
Pull request was closed
Generate all final results for old regional analyses with a task so that we can eliminate old code which was required to handle them. See #956 for the R5 code that can be simplified.
The way regional analysis results are stored has changed twice over the years, most recently several years ago. We still handle the old style of results in multiple locations in code, but if we migrate the database entries and pre-generate all of the regional analysis results we would be able to eliminate those code paths, making future changes easier.
This should be run on staging and production before #956 is merged.