Components not cached in Airflow? #43
Comments
That is definitely not the expected behavior. Transform should also cache its result if the inputs haven't changed, and Evaluator should not use cached results if the Trainer outputs have changed. I'll try to recreate this here in the lab.
I wasn't able to recreate this. I used the simple pipeline and ran it a few times. Except for the first run, or whenever I deleted the metadata db, it would always use the cached artifacts. I also didn't see any difference between caching for Transform and Evaluator. So, a few follow-up questions:
One thing to mention: if taxi_utils.py is changed, the cache won't hit for Transform and Trainer. Evaluator takes the ExampleGen output and the Trainer output as inputs; if those change, it shouldn't hit the cache. Could you check the pipeline output folder to see whether new outputs (new numbered subfolders under example_gen and trainer) were generated by ExampleGen or Trainer?
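A quick way to do that check is to list the numbered execution subfolders under each component's output directory. This is a minimal sketch; the `<pipeline_root>/<component>/<execution_id>/` layout and the helper name `list_execution_ids` are assumptions based on the IDs mentioned in this thread, not a documented TFX contract.

```python
import os

def list_execution_ids(component_output_dir):
    """Return the numeric execution-ID subfolders of a component's
    output directory (e.g. .../trainer/44, .../trainer/47), sorted
    ascending, ignoring files and non-numeric folders."""
    ids = []
    for name in os.listdir(component_output_dir):
        path = os.path.join(component_output_dir, name)
        if os.path.isdir(path) and name.isdigit():
            ids.append(int(name))
    return sorted(ids)
```

Comparing the result across runs shows whether a component actually produced new output or reused a cached execution.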
I'm working with TFX 0.12.0. No problem when I run the example. I'll try to modify the Trainer in the example to see whether the Evaluator uses its cached results or not.
Thanks, that's exactly what's happening, I've been modifying my model in the utils.py file. I separated the transformation and the model into 2 files + 1 utils file and now it works. I did have to add `sys.path.append(path_to_utils_folder)` in my pipeline definition to avoid a "no module named xxx" error. Is that why you made a single taxi_utils file in the example?
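The `sys.path.append` workaround above can be sketched as a small helper. `import_from_folder` is a hypothetical name for illustration; all TFX needs is for the module to be importable when the pipeline definition is parsed.

```python
import importlib
import os
import sys

def import_from_folder(module_name, folder):
    """Make a module that lives in an arbitrary folder importable by
    appending the folder to sys.path before importing it."""
    folder = os.path.abspath(folder)
    if folder not in sys.path:
        sys.path.append(folder)
    return importlib.import_module(module_name)
```

Keeping everything in one utils file next to the pipeline definition avoids the need for this path manipulation entirely.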
My ExampleGen doesn't change (I don't generate new examples), but my Trainer output changes at each modification (IDs 44-47-49-51-53-59), while the Evaluator doesn't produce new output (ID 45). I'll try with the Taxi example to see if it comes from my pipeline.
Ok, that sounds like the issue then. If anything about the component's inputs changes -- including the injected file's checksum -- then the component is rerun. If you modify the utils.py file and then run the pipeline, it will trigger a new execution of both Transform and Trainer.
Evaluator should trigger a new execution, given that the Trainer is probably producing a new model on each iteration. Can you attach the Evaluator logs for the most recent runs?
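The checksum-based caching described above can be illustrated with a small sketch. This is NOT TFX's actual implementation, just a model of the behavior: a component is rerun whenever its cache key changes, and the key covers the input artifacts, the execution properties, and the bytes of any injected module file such as taxi_utils.py.

```python
import hashlib
import json

def cache_key(input_artifact_uris, exec_properties, module_file_bytes):
    """Illustrative cache key: digest of a component's inputs.
    If the returned key matches a previous execution's key, the cached
    outputs can be reused; otherwise the component must rerun."""
    h = hashlib.sha256()
    h.update(json.dumps(sorted(input_artifact_uris)).encode())
    h.update(json.dumps(exec_properties, sort_keys=True).encode())
    h.update(module_file_bytes)  # editing utils.py changes this digest
    return h.hexdigest()
```

This is why modifying the model code inside utils.py invalidated the Transform cache too: both components took the same file's checksum as an input.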
Yes it's clearer now, thanks for this clarification.
The log is attached. There are indeed some weird lines in the log (lines 64-65), but I can still get the results in a Jupyter notebook and display them.
Hi, is it possible to send us the entire pipeline log folder in a zip file? We need to check the caching logs and executor logs for each run.
The logs are attached.
Thanks @loiccordone! This helped us detect a bug in our codebase. I will have a PR soon to address this.
Hello,
I'm running a TFX pipeline in Airflow, with a Transform component. Each time I trigger my DAG, the CsvExampleGen, StatisticsGen, SchemaGen and ExampleValidator components all use their cached outputs since no changes occurred; each one takes ~20s on my machine. On the other hand, the Transform component recomputes its outputs every time (taking ~10min) even though nothing changed.
My preprocessing_fn only calculates a z-score on dense float features (+ fills in missing values).
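For reference, the transform described here is just (x - mean) / stddev after filling missing values. Below is a minimal pure-Python illustration of that computation, not tensorflow_transform code; in the actual preprocessing_fn this would be done with `tft.scale_to_z_score`, and the `fill_value` default here is an assumption.

```python
import statistics

def z_score(values, fill_value=0.0):
    """Fill missing values (None) with fill_value, then standardize
    the dense float feature: map each x to (x - mean) / stddev.
    Uses the population standard deviation, as statistics over the
    full dataset would."""
    filled = [fill_value if v is None else v for v in values]
    mean = statistics.fmean(filled)
    std = statistics.pstdev(filled)
    if std == 0:
        return [0.0 for _ in filled]  # constant feature: all zeros
    return [(v - mean) / std for v in filled]
```

A computation this light makes the ~10min recompute all the more surprising, which is why the caching question matters here.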
I also have problems with the Evaluator component: it uses its cached results even when the Trainer outputs are different. I am forced to manually delete the Evaluator output folder before running my DAG.
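The manual workaround described above can be sketched as follows. The `<pipeline_root>/<component_name>` layout and the helper name are assumptions about this particular pipeline, not a general TFX guarantee.

```python
import os
import shutil

def clear_component_output(pipeline_root, component_name="Evaluator"):
    """Delete a component's output folder so the next pipeline run
    cannot reuse stale cached results. Returns the path that was
    targeted, whether or not it existed."""
    target = os.path.join(pipeline_root, component_name)
    if os.path.isdir(target):
        shutil.rmtree(target)
    return target
```

This forces a rerun but should not be necessary; the thread below traces the stale-cache behavior to a bug.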
Is this normal behavior?
Thanks