Log when dropping data outside "windowPeriod"? #15
Comments
@RandomEtc it actually does log it, but at debug level instead of info. In a production setting it's not uncommon to have a flurry of late events (for example, if a server goes down for some time, then comes back up and pushes everything it has queued), so logging every occurrence wouldn't be great. We should perhaps look at turning on debug logging for the relevant classes in the demos so that it's easier to see there. If we had done that, do you think it would have helped you out? Also, the system maintains a count of the number of messages it dropped, which it emits every minute (with the demo configurations, this is emitted to the logs at debug level).
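Turning on debug logging for a specific package, as suggested above, might look like the following log4j-style snippet. The logger name here is purely a placeholder, not the actual Druid package; you would substitute the package of the classes that perform the dropping:

```properties
# Hypothetical log4j.properties fragment: keep the root logger at INFO,
# but raise one package (placeholder name below) to DEBUG so dropped-event
# messages and the per-minute dropped-count emission become visible.
log4j.rootLogger=INFO, stdout
log4j.logger.com.example.druid.realtime=DEBUG
```

Scoping DEBUG to one package keeps the logs readable while still surfacing the dropped-data messages discussed in this thread.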
It would probably have helped, yes. But if debug logging is extremely chatty, then perhaps not. It may just be a necessary learning curve at the moment, one that will go away once there are examples that can correctly import historical data.
* …handler_remove Removing the uncaught exception handler from yjava_daemon startup.
* RowBasedIndexedTable: Add specialized index types for long keys. (apache#10430)
  Two new index types are added:
  1) An int-array-based index in cases where the difference between the min and max values isn't too large and keys are unique.
  2) A Long2ObjectOpenHashMap (instead of the prior Java HashMap) in all other cases.
  In addition:
  1) RowBasedIndexBuilder, a new class, is responsible for picking which index implementation to use.
  2) The IndexedTable.Index interface is extended to support using unboxed primitives in the unique-long-keys case, and callers are updated to use the new functionality. Other key types continue to use indexes backed by Java HashMaps.
  (Follow-ups: fixup logic; add tests.)
* vectorize constant expressions with optimized selectors (apache#10440)
* Web console: switch to switches instead of checkboxes (apache#10454)
  (switch to switches; add img alt; add relative; change icons; update snapshot)
* Fix the offset setting in GoogleStorage#get (apache#10449)
  (fix the offset in get of GCP object; upgrade compute dependency; fix version; review comments)
* Fix the task id creation in CompactionTask (apache#10445)
  (review comments; ignore test for range partitioning and segment lock)
* Web console reindexing E2E test (apache#10453)
  Add an E2E test for the web console workflow of reindexing a Druid datasource to change the secondary partitioning type. The new test changes dynamic to single-dim partitions, since the autocompaction test already covers dynamic to hashed partitions. Also, run the web console E2E tests in parallel to reduce CI time, and change the naming convention for test datasources to make it easier to map them to the corresponding test run.
  Main changes:
  1) web-console/e2e-tests/reindexing.spec.ts: new E2E test
  2) web-console/e2e-tests/component/load-data/data-connector/reindex.ts: new data loader connector for the druid input source
  3) web-console/e2e-tests/component/load-data/config/partition.ts: move partition spec definitions from compaction.ts; add a new single-dim partition spec definition
* Fix UI datasources view edit action compaction (apache#10459)
  Restore the web console's ability to view a datasource's compaction configuration via the "action" menu. Refactoring done in apache#10438 introduced a regression that always caused the default compaction configuration to be shown via the "action" menu instead. A regression test is added in e2e-tests/auto-compaction.spec.ts.
* Allow using jsonpath predicates with AvroFlattener (apache#10330)
* Improve UI E2E test usability (apache#10466)
  - Update playwright to the latest version
  - Provide an environment variable to disable/enable headless mode
  - Allow running E2E tests against any Druid cluster running on standard ports (tutorial-batch.spec.ts now uses an absolute instead of a relative path for the input data)
  - Provide an environment variable to change the target web console port
  - Druid setup does not need to download ZooKeeper
* Web console: fix lookup edit dialog version setting (apache#10461)
  (fix lookup edit dialog; update snapshots; clean up test)
* fix array types from escaping into wider query engine (apache#10460)
  (oops; adjust; fix lgtm)
* Update version to 0.21.0-SNAPSHOT (apache#10450)
* [maven-release-plugin] prepare release druid-0.21.0
* [maven-release-plugin] prepare for next development iteration
* Update web-console versions
* Test UI to trigger auto compaction (apache#10469)
  In the web console E2E tests, use the new UI to trigger auto compaction instead of calling the REST API directly, so that the UI is covered by tests.
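The long-key index selection described in the RowBasedIndexedTable change above can be sketched as follows. This is an illustrative reconstruction of the heuristic, not the actual RowBasedIndexBuilder code; the class name, method name, and span cutoff below are all hypothetical, and a plain `HashMap` stands in for fastutil's `Long2ObjectOpenHashMap` to keep the example self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the index-selection heuristic: when long keys are
// unique and span a small range, an int array indexed by (key - min) gives
// O(1) lookups with no boxing; otherwise fall back to a hash map.
public class LongKeyIndexChooser {
    static final long MAX_ARRAY_SPAN = 1_000_000L; // hypothetical cutoff

    public static String chooseIndex(long[] keys) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        Map<Long, Boolean> seen = new HashMap<>();
        boolean unique = true;
        for (long k : keys) {
            min = Math.min(min, k);
            max = Math.max(max, k);
            if (seen.put(k, Boolean.TRUE) != null) {
                unique = false; // duplicate key rules out the array index
            }
        }
        if (unique && keys.length > 0 && max - min < MAX_ARRAY_SPAN) {
            return "int-array";
        }
        // The real change uses Long2ObjectOpenHashMap here to avoid boxing.
        return "hash-map";
    }
}
```

The appeal of the array variant is that a lookup is a single bounds check plus an array read, which matters on the join probe path.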
Co-authored-by: Gian Merlino <gianmerlino@gmail.com>
Co-authored-by: Clint Wylie <cwylie@apache.org>
Co-authored-by: Vadim Ogievetsky <vadim@ogievetsky.com>
Co-authored-by: Abhishek Agarwal <1477457+abhishekagarwal87@users.noreply.github.com>
Co-authored-by: Chi Cao Minh <chi.caominh@imply.io>
Co-authored-by: Lasse Krogh Mammen <lasse.mammen@gmail.com>
Co-authored-by: Jonathan Wei <jon-wei@users.noreply.github.com>
Sync remote
Following our discussion on Google Groups:
https://groups.google.com/d/msg/druid-development/ag7EyEIftqQ/QBRNoFxuWMsJ
It might help the learning curve for Druid if the logs indicated when data was dropped. I assume it's a rare occurrence on a properly configured Realtime node, so the overhead would be minimal; but when experimenting with RealtimeStandaloneMain this was a confusing side-road, and it was difficult to understand what was going on :)
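The dropping behavior discussed in this thread can be sketched as follows. This is a minimal illustration of the windowPeriod idea, not Druid's actual plumber code; all names here are hypothetical. An event is kept only if its timestamp falls within windowPeriod of the current time, and rejected events are counted rather than individually logged:

```java
// Hypothetical sketch: accept events whose timestamps lie within
// [now - windowPeriod, now + windowPeriod]; count (don't log) the rest,
// mirroring the per-minute dropped-count emission described above.
public class WindowPeriodSketch {
    private long droppedCount = 0;

    // True if the event timestamp is inside the window around "now".
    public static boolean isWithinWindow(long nowMillis, long eventMillis, long windowPeriodMillis) {
        return eventMillis >= nowMillis - windowPeriodMillis
            && eventMillis <= nowMillis + windowPeriodMillis;
    }

    // Returns true if the event would be indexed; otherwise it is dropped
    // and only the counter records that it happened.
    public boolean offer(long nowMillis, long eventMillis, long windowPeriodMillis) {
        if (isWithinWindow(nowMillis, eventMillis, windowPeriodMillis)) {
            return true;
        }
        droppedCount++;
        return false;
    }

    public long getDroppedCount() {
        return droppedCount;
    }
}
```

This is also why importing historical data through a realtime node is confusing: every historical timestamp falls outside the window, so everything is silently dropped unless the dropped-count emission is visible in the logs.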