Conversation

@Enovotny
Collaborator

@Enovotny Enovotny commented Oct 7, 2025

Fix the message returned from a timeseries post (see linked issue).
Change print statements in the timeseries store/get code to logging.

@Enovotny Enovotny linked an issue Oct 7, 2025 that may be closed by this pull request
@Enovotny Enovotny requested review from msweier and perrymanmd October 7, 2025 20:25
 # If getting extents fails, fall back to single-threaded mode
-print(
+logging.debug(
     f"WARNING: Could not retrieve time series extents ({e}). Falling back to single-threaded mode."
Collaborator

I would actually call this an error instead of debug, because it means retrieving the timeseries extents is failing for some reason.

Collaborator Author

It could be an error, but the request still goes through; it just doesn't use the multithreading. So I don't want an error message to appear if the request completes. If we killed the entire request, then yes, we would make it an error.

Collaborator

Sounds good
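
For reference, a minimal sketch of the fallback pattern being discussed, with the failure logged at debug level as agreed above. The function and parameter names (get_extents, fetch_chunked, fetch_single) are illustrative stand-ins, not the repository's actual API:

```python
import logging

logger = logging.getLogger(__name__)

def get_timeseries(ts_path, get_extents, fetch_chunked, fetch_single):
    """Illustrative wrapper: try multi-threaded (chunked) retrieval, fall back to single-threaded."""
    try:
        start, end = get_extents(ts_path)
    except Exception as e:
        # The request still completes, just without multithreading,
        # so the failure is logged at debug rather than error.
        logger.debug(
            "Could not retrieve time series extents (%s). Falling back to single-threaded mode.", e
        )
        return fetch_single(ts_path)
    return fetch_chunked(ts_path, start, end)
```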


# Assert the expected values
assert chunks == 5, f"Expected 5 chunks, but got {chunks}"
assert threads == 5, f"Expected 5 threads, but got {threads}"
Collaborator

Do we still want to test for the number of threads, or call it good since we've tested it already?

Collaborator Author

I don't think a logging message should be used for a test. It should probably be a test of the individual function itself that chunks the data: run a test for chunk_timeseries_data to make sure that it chunks the data correctly. Actually, that should be what we do...

Collaborator

Makes sense. Maybe the actual thread handling should be a separate function too, and tested.
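
A sketch of the kind of unit test suggested here. It assumes chunk_timeseries_data takes a start, an end, and a number of chunks and returns contiguous (chunk_start, chunk_end) pairs; the real signature in the repository may differ, and the stand-in implementation below exists only to make the example runnable:

```python
from datetime import datetime

def chunk_timeseries_data(start, end, n_chunks):
    """Stand-in for illustration; the project's actual function may look different."""
    step = (end - start) / n_chunks
    return [(start + i * step, start + (i + 1) * step) for i in range(n_chunks)]

def test_chunk_timeseries_data_covers_full_range():
    start = datetime(2025, 1, 1)
    end = datetime(2025, 1, 6)
    chunks = chunk_timeseries_data(start, end, 5)

    # Expect 5 contiguous chunks spanning the full requested window.
    assert len(chunks) == 5, f"Expected 5 chunks, but got {len(chunks)}"
    assert chunks[0][0] == start
    assert chunks[-1][1] == end
    for (_, prev_end), (next_start, _) in zip(chunks, chunks[1:]):
        assert prev_end == next_start
```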

@msweier
Collaborator

msweier commented Oct 7, 2025

The multi-threading changes look good, just a couple of comments. I didn't look closely at the API changes.

Contributor

@perrymanmd perrymanmd left a comment

Just cleaning up some changes from print to logging.

 chunk_start, chunk_end = future_to_chunk[future]
-print(
+logging.error(
     f"ERROR: Failed to fetch data from {chunk_start} to {chunk_end}: {e}"
Contributor

Remove "ERROR: " and let the logging level handle it.

 actual_workers = min(max_workers, len(chunks))
-print(
+logging.debug(
     f"INFO: Storing {len(chunks)} chunks of timeseries data with {actual_workers} threads"
Contributor

Remove "INFO: " and let the logging level handle it.
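
Both comments above amount to the same change. A minimal before/after sketch (variable values are placeholders so the snippet runs; this is not the repository's exact code):

```python
import logging

logging.basicConfig(format="%(levelname)s %(name)s: %(message)s", level=logging.DEBUG)
logger = logging.getLogger(__name__)

# Placeholder values just so the example is self-contained.
chunk_start, chunk_end, err = "2025-01-01", "2025-01-02", "timeout"
chunks, actual_workers = [1, 2, 3, 4, 5], 5

# Before: the level name is duplicated inside the message text.
logger.error(f"ERROR: Failed to fetch data from {chunk_start} to {chunk_end}: {err}")

# After: the %(levelname)s in the formatter already carries that information.
logger.error("Failed to fetch data from %s to %s: %s", chunk_start, chunk_end, err)
logger.debug("Storing %d chunks of timeseries data with %d threads", len(chunks), actual_workers)
```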

@sonarqubecloud

sonarqubecloud bot commented Oct 9, 2025

@Enovotny Enovotny merged commit e9ffa06 into main Oct 9, 2025
9 checks passed

Development

Successfully merging this pull request may close these issues.

api.py errors when decoding successful timeseries post return

4 participants