
Releases: Sanketika-Obsrv/obsrv-core

v1.0.6.1-GA

07 Aug 10:24
65074e3

What's Changed

Migration of SQL queries to prepared statements to avoid SQL injection
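
For illustration, a minimal sketch of the pattern (table, column and method names are hypothetical, not the actual obsrv-core registry code): the dataset id is bound as a query parameter rather than concatenated into the SQL string.

```scala
import java.sql.{Connection, PreparedStatement, ResultSet}

object DatasetDao {
  // Illustrative sketch: bind the dataset id as a parameter instead of interpolating it,
  // so attacker-controlled input can never alter the query structure.
  def readDatasetStatus(conn: Connection, datasetId: String): Option[String] = {
    val stmt: PreparedStatement = conn.prepareStatement("SELECT status FROM datasets WHERE id = ?")
    try {
      stmt.setString(1, datasetId) // value is bound, not concatenated
      val rs: ResultSet = stmt.executeQuery()
      if (rs.next()) Some(rs.getString("status")) else None
    } finally stmt.close()
  }
}
```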

Full Changelog: 1.0.6-GA...1.0.6.1-GA

v1.0.6-GA

27 May 06:26
90090d2
Release 1.0.6-GA (#81)

* Pipeline Bug fixes (#74)

* Sanketika-Obsrv/issue-tracker#106:fix: Fix Postgres connection issue with dataset read and handle errors while parsing the message

* Sanketika-Obsrv/issue-tracker#107:fix: Denorm job fix to handle errors when the denorm field node contains an empty value

* Sanketika-Obsrv/issue-tracker#106:fix: Review comments fix - Changed the generic exception to the actual exception (NullPointerException)

* fix: #0000: update datasourceRef only if dataset has records

* Sanketika-Obsrv/issue-tracker#180 fix: Datasource DB schema changes to include type. (#79)

Co-authored-by: sowmya-dixit <sowmyadixit7@gamil.com>

* Hudi connector flink job implementation (#80)

* feat: Hudi Flink Implementation.
* feat: Local setup working with metastore and LocalStack.
* #0000 - feat: Hudi Sink implementation
* #0000 - feat: Hudi Sink implementation
* #0000 - feat: Initialize dataset RowType during job startup
* refactor: Integrate hudi connector with dataset registry.
* refactor: Integrate hudi connector with dataset registry.
* Sanketika-Obsrv/issue-tracker#141 refactor: Enable timestamp based partition
* Sanketika-Obsrv/issue-tracker#141 refactor: Fix Hudi connector job to handle empty datasets list for lakehouse.
* Sanketika-Obsrv/issue-tracker#141 fix: Set Timestamp based partition configurations only if partition key is of timestamp type.
* Sanketika-Obsrv/issue-tracker#170 fix: Resolve timestamp based partition without using TimestampBasedAvroKeyGenerator.
* Sanketika-Obsrv/issue-tracker#177 fix: Lakehouse connector flink job fixes.
* Sanketika-Obsrv/issue-tracker#177 fix: Dockerfile changes for hudi-connector
* Sanketika-Obsrv/issue-tracker#177 fix: Lakehouse connector flink job fixes.
* Sanketika-Obsrv/issue-tracker#177 fix: remove unused code
* Sanketika-Obsrv/issue-tracker#177 fix: remove unused code
* Sanketika-Obsrv/issue-tracker#177 fix: remove unused code
* Sanketika-Obsrv/issue-tracker#177 fix: remove commented code

---------

Co-authored-by: Manjunath Davanam <manjunath@sanketika.in>
Co-authored-by: SurabhiAngadi <surabhia@sanketika.in>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>
Co-authored-by: sowmya-dixit <sowmyadixit7@gamil.com>

v1.0.5-GA

23 Apr 13:31
b05912f
Pipeline Bug fixes (#74) (#77)

v1.0.3-GA

16 Feb 07:11
b2a183a
Release 1.0.3-GA (#72)

v1.0.2-GA

29 Jan 09:43
d2a2dea

TODO

v1.0.1-GA

12 Jan 05:50
e8c3f57

Obsrv 1.0.1 GA Release Notes

Improved Features and Refinements

Software Updates

  • Superset: Now at v3.0.2
  • Druid: Upgraded to v28.0.1
  • NodeJS: Version increased to v21.10
  • Postgres: Updated to v16.1
  • Kubernetes: Running on the latest v1.27

Dataset Management

  • Live Dataset Editing:
    • Users can seamlessly edit live datasets, enhancing flexibility and allowing for real-time adjustments.

Monitoring and Alerting

  • Connectors Alerts Configurations:
    • Enjoy enhanced control with improved configurations for connector alerts, ensuring a more responsive monitoring experience.

Documentation Updates

  • API Swagger Doc:

    • Access the Swagger documentation for comprehensive information on the Obsrv API, facilitating easier integration and usage.
  • Error Codes and Resolutions:

    • Find detailed explanations of error codes and their resolutions, helping users troubleshoot and resolve issues more efficiently.
  • Creating Datasets (Standard and Master):

    • Detailed instructions on creating datasets are now available in the documentation.

Bug Fixes and Enhancements

  • UI Bug Fixes:

    • Various UI bugs have been resolved to enhance the overall user interface and improve the user experience.
  • Dataset Import and Export API Bug Fixes:

    • Addressed issues related to the Dataset Import and Export APIs, resulting in smoother data handling and transfer processes.

v1.0.0-GA

26 Dec 09:08
e794934

Obsrv 1.0.0 GA Release Notes

Advanced Capabilities and Refinements

Dataset Management

  • Delete Draft Datasets:
    • Users can now delete draft datasets using both the API and the Obsrv Console. This feature enhances data management capabilities and provides flexibility in dataset maintenance.
  • Export and Import Dataset Configurations:
    • Added functionality to export Dataset Configurations as a JSON file, facilitating easy sharing and replication of dataset configurations.
    • Introduced the ability to import dataset configurations from a JSON file, streamlining the setup process and ensuring consistency across environments.

Automated Dashboards and Debugging

  • Automated Event Dashboards:
    • Implemented automation for the creation of dashboards specifically tailored for failed/error message events. This enhances the monitoring experience by providing quick access to relevant information.
  • Detailed Debugging with Query Store:
    • Indexed all failed events into the query store for comprehensive and detailed debugging. This feature improves troubleshooting capabilities and accelerates issue resolution.

Backup System Configuration

  • Automated Backup Configuration:
    • Automated the configuration of the backup system during installation, allowing users to define their preferred timezone. This ensures a seamless and customized backup setup for enhanced data protection.

Monitoring and Alerting

  • Enhanced Monitoring and Alerting System:
    • Improved the monitoring and alerting system for better visibility into system performance and proactive identification of potential issues. Users can now benefit from a more robust system for ensuring data integrity.

Data Sources Management

  • Aggregated and Filtered Data Sources:
    • Introduced the option to create aggregated data sources and filtered data sources through both the API and the Obsrv Console. This feature provides users with more control over data sources, enabling customization based on specific requirements.

User Interface Redesign

  • Dataset List Home Page Redesign:
    • Redesigned the Dataset List home page for a more intuitive and user-friendly experience. The updated interface enhances navigation and accessibility, improving overall usability.

Bug Fixes and Improvements

  • Improved the Schema API to support nano-second validation and the ability to index nano-second data.
  • Fixed the auto-conversion of the timestamp property to a string.
  • Fixed the auto-selection of the timestamp field from the derived property.

v1.3.1

19 Dec 07:41
8106fa2
Release 1.3.1 into Main (#49)

* testing new images

* testing new images

* testing new images

* testing new images

* testing new images

* build new image with bug fixes

* update dockerfile

* update dockerfile

* #0 fix: upgrade packages

* #0 feat: add flink dockerfiles

* feat: update all failed, invalid and duplicate topic names

* feat: update kafka topic names in test cases

* #0 fix: add individual extraction

* feat: update failed event

* Update ErrorConstants.scala

* feat: update failed event

* Issue #0 fix: upgrade ubuntu packages for vulnerabilities

* feat: add exception handling for json deserialization

* Update BaseProcessFunction.scala

* Update BaseProcessFunction.scala

* feat: update batch failed event generation

* Update ExtractionFunction.scala

* feat: update invalid json exception handling

* Issue #46 feat: update batch failed event

* Issue #46 feat: update batch failed event

* Issue #46 feat: update batch failed event

* Issue #46 feat: update batch failed event

* Issue #46 fix: remove cloning object

* Issue #46 feat: update batch failed event

* #0 fix: update github actions release condition

* Issue #46 feat: add error reasons

* Issue #46 feat: add exception stack trace

* Issue #46 feat: add exception stack trace

* Release 1.3.1 Changes (#42)

* Dataset enhancements (#38)

* feat: add connector config and connector stats update functions
* Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs
* Update DatasetModels.scala
* #0 fix: upgrade packages
* #0 feat: add flink dockerfiles
* #0 fix: add individual extraction

---------

Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>

* #0000 [SV] - Fallback to local redis instance if embedded redis is not starting
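
A rough sketch of that fallback, assuming the embedded-redis `RedisServer` API used in tests and an already-running local Redis (ports and names are illustrative):

```scala
import redis.embedded.RedisServer

object RedisProvider {
  // Illustrative only: try to boot the embedded Redis used for tests; if it fails to start
  // (e.g. unsupported OS/arch), fall back to a locally running Redis on the default port.
  def startOrFallback(embeddedPort: Int = 6341, localPort: Int = 6379): Int = {
    try {
      val server = new RedisServer(embeddedPort)
      server.start()
      embeddedPort
    } catch {
      case _: Exception =>
        localPort // assume a local redis-server is already listening here
    }
  }
}
```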

* Update DatasetModels.scala

* #0000 - refactor the denormalization logic
1. Do not fail the denormalization if the denorm key is missing
2. Add a clear message indicating whether the denorm is successful, failed or partially successful
3. Handle denorm for both text and number fields
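
A simplified sketch of the intended behaviour (status values, types and the in-memory lookup are illustrative simplifications, not the actual obsrv-core model):

```scala
sealed trait DenormStatus
case object DenormSuccess extends DenormStatus
case object DenormPartialSuccess extends DenormStatus
case object DenormFailed extends DenormStatus

object Denormalizer {
  // Attempt denormalization for each configured denorm field; a missing or empty key no longer
  // fails the whole event, it only marks that field as failed. Keys may be text or numbers.
  def denormalize(event: Map[String, Any],
                  denormFields: Map[String, Map[String, Any]]): (Map[String, Any], DenormStatus) = {
    val results = denormFields.keys.map { field =>
      event.get(field).map(_.toString) match {          // handles both text and number keys
        case Some(key) if key.nonEmpty => field -> denormFields(field).get(key)
        case _                         => field -> None // missing/empty key: mark as failed, don't throw
      }
    }.toMap
    val enriched = event ++ results.collect { case (f, Some(v)) => s"${f}_denorm" -> v }
    val status =
      if (results.values.forall(_.isDefined)) DenormSuccess
      else if (results.values.exists(_.isDefined)) DenormPartialSuccess
      else DenormFailed
    (enriched, status)
  }
}
```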

* #0000 - refactor:
1. Created an enum for dataset status and ignore events if the dataset is not in Live status
2. Created an output tag for denorm failed stats
3. Parse event validation failed messages into a case class
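
The status gate and the denorm-failed side output could look roughly like this sketch using Flink's Scala `OutputTag` (identifier names are assumptions):

```scala
import org.apache.flink.streaming.api.scala._

object DatasetStatus extends Enumeration {
  val Draft, Live, Retired = Value
}

object DenormTags {
  // Side output for denorm failure stats; only events belonging to Live datasets are processed.
  val denormFailedTag: OutputTag[String] = OutputTag[String]("denorm-failed-events")

  def shouldProcess(status: DatasetStatus.Value): Boolean = status == DatasetStatus.Live
}
```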

* #0000 - refactor:
1. Updated the DruidRouter job to publish data to router topics dynamically
2. Updated the framework to create a dynamicKafkaSink object

* #0000 - mega refactoring:
1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
2. Created BaseDatasetProcessFunction for all Flink functions to extend, which dynamically resolves dataset config, initializes metrics and handles common failures
3. Refactored serde - merged map and string serialization into one function and parameterized the function
4. Moved failed events sinking into a common base class
5. Master dataset processor can now do denormalization with another master dataset as well

* #0000 - mega refactoring:
1. Made calls to getAllDatasets and getAllDatasetSources to always query postgres
2. Created BaseDatasetProcessFunction for all Flink functions to extend, which dynamically resolves dataset config, initializes metrics and handles common failures
3. Refactored serde - merged map and string serialization into one function and parameterized the function
4. Moved failed events sinking into a common base class
5. Master dataset processor can now do denormalization with another master dataset as well
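
The BaseDatasetProcessFunction idea described above, reduced to a sketch (class, field and helper names are assumptions rather than the actual obsrv-core API):

```scala
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

// Minimal stand-in for a dataset registry entry.
case class Dataset(id: String, status: String, timezone: String)

abstract class BaseDatasetProcess[R] extends ProcessFunction[Map[String, AnyRef], R] {

  // Resolve the dataset for an incoming event, e.g. via a registry lookup backed by Postgres.
  def resolveDataset(event: Map[String, AnyRef]): Option[Dataset]

  // Dataset-aware processing implemented by each concrete job (validator, denorm, router, ...).
  def processEvent(dataset: Dataset, event: Map[String, AnyRef],
                   ctx: ProcessFunction[Map[String, AnyRef], R]#Context, out: Collector[R]): Unit

  override def processElement(event: Map[String, AnyRef],
                              ctx: ProcessFunction[Map[String, AnyRef], R]#Context,
                              out: Collector[R]): Unit = {
    resolveDataset(event) match {
      case Some(dataset) if dataset.status == "Live" => processEvent(dataset, event, ctx, out)
      case _ => // common handling: emit to the failed-events sink and update metrics (omitted)
    }
  }
}
```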

* #0000 - mega refactoring:
1. Added validation to check that the event has a timestamp key and that it is neither blank nor invalid
2. Added timezone handling to store the data in druid in the TZ specified by the dataset
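
A sketch of that timestamp handling (field names and the choice of java.time APIs are assumptions):

```scala
import java.time.{Instant, ZoneId, ZonedDateTime}
import scala.util.Try

object TimestampResolver {
  // Validate that the event carries a usable timestamp key and convert it into the
  // dataset's configured timezone before indexing into Druid.
  def resolve(event: Map[String, Any], tsKey: String, datasetTz: String): Either[String, ZonedDateTime] = {
    event.get(tsKey) match {
      case None => Left(s"Missing timestamp key '$tsKey'")
      case Some(value) if value == null || value.toString.trim.isEmpty => Left("Blank timestamp value")
      case Some(value) =>
        Try(Instant.parse(value.toString))                          // ISO-8601 string, e.g. 2023-12-01T10:15:30Z
          .orElse(Try(Instant.ofEpochMilli(value.toString.toLong))) // or epoch milliseconds
          .toEither
          .left.map(_ => s"Invalid timestamp value: $value")
          .map(_.atZone(ZoneId.of(datasetTz)))                      // e.g. "Asia/Kolkata"
    }
  }
}
```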


* #0000 - minor refactoring: Updated DatasetRegistry.getDatasetSourceConfig to getAllDatasetSourceConfig

* #0000 - mega refactoring: Refactored logs, error messages and metrics

* #0000 - mega refactoring: Fix unit tests

* #0000 - refactoring:
1. Introduced transformation mode to enable lenient transformations
2. Proper exception handling for transformer job
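
The transformation-mode idea as a sketch (mode names and the transform signature are assumptions):

```scala
sealed trait TransformationMode
case object Strict extends TransformationMode  // any failing transformation fails the event
case object Lenient extends TransformationMode // failing transformations are skipped, the event flows on

object Transformer {
  // Apply a list of field transformations according to the configured mode.
  def apply(event: Map[String, Any],
            transforms: List[Map[String, Any] => Map[String, Any]],
            mode: TransformationMode): Either[Throwable, Map[String, Any]] = {
    transforms.foldLeft[Either[Throwable, Map[String, Any]]](Right(event)) {
      case (Right(acc), t) =>
        try Right(t(acc))
        catch {
          case ex: Exception => if (mode == Lenient) Right(acc) else Left(ex) // lenient: keep going
        }
      case (failed, _) => failed // strict mode already failed: short-circuit
    }
  }
}
```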

* #0000 - refactoring: Fix test cases and code

* #0000 - refactoring: upgrade embedded redis to work with macos sonoma m2

* #0000 - refactoring: Denormalizer test cases and bug fixes. Code coverage is 100% now

* #0000 - refactoring: Router test cases and bug fixes. Code coverage is 100% now

* #0000 - refactoring: Validator test cases and bug fixes. Code coverage is 100% now

* #0000 - refactoring: Framework test cases and bug fixes

* #0000 - refactoring: kafka connector test cases and bug fixes. Code coverage is 100% now

* #0000 - refactoring: improve code coverage and fix bugs

* #0000 - refactoring: improve code coverage and fix bugs --- Now the code coverage is 100%

* #0000 - refactoring: organize imports

* #0000 - refactoring:
1. transformer test cases and bug fixes - code coverage is 100%

* #0000 - refactoring: test cases and bug fixes

---------

Co-authored-by: shiva-rakshith <rakshiths@sanketika.in>
Co-authored-by: Aniket Sakinala <aniket@sanketika.in>
Co-authored-by: Manjunath Davanam <manjunath@sanketika.in>
Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>
Co-authored-by: Anand Parthasarathy <anandp504@gmail.com>

* #000:feat: Removed the provided scope of the kafka-client in the framework (#40)

* #0000 - feat: Add dataset-type to system events (#41)

* #0000 - feat: Add dataset-type to system events

* #0000 - feat: Modify tests for dataset-type in system events

* #0000 - feat: Remove unused getDatasetType function

* #0000 - feat: Remove unused pom test dependencies

* #0000 - feat: Remove unused pom test dependencies

---------

Co-authored-by: Santhosh <santhosh@sanketika.in>
Co-authored-by: shiva-rakshith <rakshiths@sanketika.in>
Co-authored-by: Aniket Sakinala <aniket@sanketika.in>
Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>
Co-authored-by: Anand Parthasarathy <anandp504@gmail.com>

* Main conflicts fixes (#44)

* feat: add connector config and connector stats update functions

* Issue #33 feat: add documentation for Dataset, Datasources, Data In and Query APIs

* Update DatasetModels.scala

* Release 1.3.0 into Main branch (#34)

* testing new images

* testing new images

* testing new images

* testing new images

* testing new images

* build new image with bug fixes

* update dockerfile

* update dockerfile

* #0 fix: upgrade packages

* #0 feat: add flink dockerfiles

* #0 fix: add individual extraction

* Issue #0 fix: upgrade ubuntu packages for vulnerabilities

* #0 fix: update github actions release condition

---------

Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>

* Update DatasetModels.scala

* Issue #2 feat: Remove kafka connector code

* feat: add function to get all datasets

* #000:feat: Resolve conflicts

---------

Co-authored-by: shiva-rakshith <rakshiths@sanketika.in>
Co-authored-by: Aniket Sakinala <aniket@sanketika.in>
Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>
Co-authored-by: Santhosh <santhosh@sanketika.in>
Co-authored-by: Anand Parthasarathy <anandp504@gmail.com>
Co-authored-by: Ravi Mula <ravismula@users.noreply.github.com>

* #0000 - fix: Fix null dataset_type in DruidRouterFunction (#48)

---------

Co-authored-by: ManojKrishnaChintaluri <manojc@sanketika.in>
Co-authored-by: Praveen <66662436+pveleneni@users.noreply.github.com>
Co-authored-by: shiva-rakshith <rakshiths@sanketika.in>
Co-authored-by: Sowmya N Dixit <sowmyadixit7@gmail.com>
Co-authored-by: Santhosh <santhosh@sanketika.in>
Co-authored-by: Aniket Sakinala <aniket@sanketika.in>
Co-authored-by: Anand Parthasarathy <anandp504@gmail.com>
Co-authored-by: Ravi Mula <ravismula@users.noreply.github.com>

v1.3.0

29 Nov 11:53
5cdc215
Merge pull request #29 from shiva-rakshith/failed-events

Issue #46 feat: Enhance pipeline jobs to push all failed and invalid events into a single topic

v1.2.0

29 Nov 11:52
  1. Included the master dataset transformation
  2. Vulnerability fixes