
Update Flickr large batch handling #1047

Merged
stacimc merged 3 commits into main from update/flickr-large-batch-handling on Mar 27, 2023

Conversation

@stacimc stacimc commented Mar 14, 2023

Fixes

Fast follow for Flickr backfill.
Related to WordPress/openverse#1285.

Description

This PR attempts to handle situations where the Flickr API returns excessively large batches of data. The logic is documented fairly thoroughly in the code, but here's a summary:

The Flickr API will only return 4,000 unique records for any given set of query params; after that, it simply returns duplicates indefinitely. Consequently, we have to query the API in such a way that each batch contains fewer than 4,000 records. Up until now, we have been doing this using the TimeDelineatedProviderDataIngester to break the day into small time intervals for ingestion.
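
As a rough illustration only (a hypothetical helper, not the actual TimeDelineatedProviderDataIngester code, which has its own logic for choosing interval sizes), slicing a day into fixed five-minute windows might look like:

from datetime import datetime, timedelta


def five_minute_intervals(day: datetime):
    # Yield (start, end) pairs covering one day in five-minute windows.
    # Purely illustrative; the real ingester decides how finely to slice the
    # day, and this PR adds handling for windows that are still too large.
    start = day
    day_end = day + timedelta(days=1)
    while start < day_end:
        window_end = min(start + timedelta(minutes=5), day_end)
        yield start, window_end
        start = window_end


list(five_minute_intervals(datetime(2023, 2, 26)))  # 288 windows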

There is a limit to how much this helps, though: once you reduce the interval size to about 5 minutes, shrinking it further no longer reduces the number of records. For example, given a 5-minute interval with more than 4k records, searching any 5-second interval within that range will still report more than 4k records. So reducing the timespan only works up to a certain point. However, we can still reduce the size of the result set by querying for one license type at a time, instead of all 8 license types at once.

This PR detects these large batches during ingestion, adds their intervals to a list for later processing, and skips ingestion for those batches. After 'regular' ingestion completes, each of these large intervals is reprocessed 8 times, once for each license type. It's still possible for a 5-minute interval to contain more than 4k records for a single license type, but in that case there's nothing more we can do, so we process the first 4,000 results and then continue.
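
For illustration, here is a rough sketch of that flow. All names are hypothetical stand-ins; the actual logic lives in the Flickr ingester and is documented there in the code:

MAX_UNIQUE_RECORDS = 4_000  # Flickr stops returning unique records past this
LICENSES = [f"license_{i}" for i in range(8)]  # stand-ins for the 8 license types queried

large_batches: list[tuple[str, str]] = []  # intervals deferred during the regular pass


def ingest(start_ts: str, end_ts: str, license: str | None = None) -> None:
    # Placeholder for querying the API and storing results for one interval.
    ...


def process_interval(start_ts: str, end_ts: str, detected_count: int) -> None:
    # Regular pass: if an interval reports more records than the API can
    # actually return, defer it for later instead of ingesting duplicates.
    if detected_count > MAX_UNIQUE_RECORDS:
        large_batches.append((start_ts, end_ts))
        return
    ingest(start_ts, end_ts)


def reprocess_large_batches() -> None:
    # After the regular pass, retry each oversized interval once per license.
    # If a single license still exceeds the limit, the first 4,000 results are
    # ingested and ingestion simply moves on.
    for start_ts, end_ts in large_batches:
        for license in LICENSES:
            ingest(start_ts, end_ts, license=license)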

Notes

  • It is impossible to guarantee that we will get all records, but this should dramatically increase what we're able to ingest.
  • Notably, this situation is most likely to arise on ingestion days with a lot of data, so this improvement should significantly increase the number of records we are able to ingest.
  • After merging this update, I'll re-run the failed days in production.

Testing Instructions

Try running the Flickr DAG locally. In particular, run it for one of the days that failed in production, using the DagRun conf options.

I tried a manual run with the conf:

{
    "date":"2023-02-26"
}

This day failed in production after ingesting 7,253 records. When tested locally against this branch, the run succeeded in about 10 minutes and ingested 56,927 records.

Checklist

  • My pull request has a descriptive title (not a vague title like
    Update index.md).
  • My pull request targets the default branch of the repository (main) or
    a parent feature branch.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • I added or updated tests for the changes I made (if applicable).
  • I added or updated documentation (if applicable).
  • I tried running the project locally and verified that there are no visible
    errors.

@stacimc stacimc added 🟧 priority: high Stalls work on the project or its dependents ✨ goal: improvement Improvement to an existing user-facing feature 💻 aspect: code Concerns the software code in the repository 🧱 stack: catalog Related to the catalog and Airflow DAGs labels Mar 14, 2023
@stacimc stacimc requested a review from a team as a code owner March 14, 2023 23:47
@stacimc stacimc self-assigned this Mar 14, 2023
@openverse-bot openverse-bot added this to Needs review in Openverse PRs Mar 14, 2023
@@ -235,9 +250,9 @@ def get_should_continue(self, response_json) -> bool:
                " been fetched. Consider reducing the ingestion interval."
            )
            if self.should_raise_error:
-               raise AirflowException(error_message)
+               raise Exception(error_message)
stacimc (Contributor, Author):

It's necessary to change this to an Exception, because AirflowExceptions are specifically not handled by our ingestion error handling logic (meaning they can never be skipped).

Member:

@stacimc Is this feedback worth adding to the code?

# Must be an `Exception` instead of an `AirflowException` to allow skipping.

Or something like that.

Contributor:

Or potentially even a custom, Exception-derived exception subclass?

stacimc (Contributor, Author):

Our own error handling in the superclass will re-raise this as an IngestionError already.

This isn't a problem with this specific exception so much as a general point about error handling in the ingester classes: we probably shouldn't be raising AirflowExceptions ourselves anyway.
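
A minimal sketch of the kind of Exception-derived subclass suggested above (hypothetical name, not code from this PR; as noted, the ingester base class already re-raises errors as IngestionError):

class LargeBatchError(Exception):
    # Deliberately derived from Exception rather than AirflowException so the
    # ingester's own error handling can catch it and, when configured to do
    # so, skip the offending batch instead of failing the task outright.
    pass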

@openverse-bot

Based on the high urgency of this PR, the following reviewers are being gently reminded to review this PR:

@AetherUnbound
@obulat
This reminder is being automatically generated due to the urgency configuration.

Excluding weekend[1] days, this PR was updated 2 day(s) ago. PRs labelled with high urgency are expected to be reviewed within 2 weekday(s)[2].

@stacimc, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.

Footnotes

  [1] Specifically, Saturday and Sunday.

  [2] For the purpose of these reminders we treat Monday - Friday as weekdays. Please note that the process that generates these reminders runs at midnight UTC on Monday - Friday. This means that depending on your timezone, you may be pinged outside of the expected range.

Comment on lines +188 to +191
            logger.error(
                f"{detected_count} records retrieved, but there is a"
                f" limit of {self.max_unique_records}."
            )
Member:

This log was so useful when observing this PR in Airflow. Before even reading the code, it gave a great understanding of what was happening 👍

@zackkrida zackkrida (Member) left a comment:

Really cool. On my local run I picked up 46,732 records. As always, phenomenal documentation 😍

@obulat obulat (Contributor) left a comment:

This is such a clever idea! I hope we can get most of the images now.
Thank you for the amazing documentation and tests.

Openverse PRs automation moved this from Needs review to Reviewer approved Mar 24, 2023
@AetherUnbound AetherUnbound (Contributor) left a comment:

Incredible documentation, thanks for getting this all down in the various docstrings/comments! I hope this will be useful for other folks who encounter issues with the Flickr API 😄

@stacimc stacimc merged commit 7d0ce7f into main Mar 27, 2023
29 checks passed
Openverse PRs automation moved this from Reviewer approved to Merged! Mar 27, 2023
@stacimc stacimc deleted the update/flickr-large-batch-handling branch March 27, 2023 20:30