Releases: GuillaumeTrain/PyDataCore
1.1.2 : First implementation of async data-ready signaling in the Data and DataPool classes
- Added support for asynchronous data-ready signaling in the Data and DataPool classes to streamline data flow and improve synchronization.
- Data objects can now signal when they are ready for processing, allowing better coordination in data pipelines.
Future enhancement:
- Plan to extend async data-ready functionality to chunked data handling within mixins such as ChunkableMixin and FileRamMixin, using data.mark_data_ready() for each chunk. This feature is deferred to maintain current test stability.
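The data-ready mechanism above could be sketched with an asyncio.Event. This is an illustrative sketch only: mark_data_ready() is named in the notes, but wait_until_ready() and the internal layout are assumptions, not PyDataCore's actual API.

```python
import asyncio

class Data:
    """Minimal data holder that can signal readiness to async consumers.
    Hypothetical sketch; only mark_data_ready() is named in the release notes."""
    def __init__(self, data_id):
        self.data_id = data_id
        self._ready = asyncio.Event()

    def mark_data_ready(self):
        # Wake every coroutine currently awaiting this data object.
        self._ready.set()

    async def wait_until_ready(self):
        # Assumed consumer-side counterpart to mark_data_ready().
        await self._ready.wait()

async def demo():
    data = Data("sig-1")

    async def producer():
        await asyncio.sleep(0)          # simulate work before the data is ready
        data.mark_data_ready()

    async def consumer():
        await data.wait_until_ready()   # blocks until the producer signals
        return data.data_id

    _, ready_id = await asyncio.gather(producer(), consumer())
    return ready_id

ready_id = asyncio.run(demo())
```

The consumer never polls: it suspends on the event and resumes exactly when the producer calls mark_data_ready(), which is the coordination the release notes describe.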
Full Changelog: 1.1.1...1.1.2
1.1.1 : Add TempLimitsData class and enhance tests for temporal and frequency limits
Full Changelog: 1.1.0...1.1.1
FreqLimitsData Enhancements:
Added freq_min and freq_max properties to track the minimum and maximum frequencies.
Improved add_limit_point to enforce ascending frequency order, updating freq_min and freq_max dynamically.
Added validation to set_interpolation_type for accepted values (linear or log).
Updated docstrings for clarity and consistency in frequency limit handling.
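The FreqLimitsData behaviors listed above can be sketched as follows. The internal storage, exact signatures, and the (freq, level) tuple shape are assumptions for illustration; only the method and property names come from the notes.

```python
class FreqLimitsData:
    """Illustrative sketch of the frequency-limits container; details assumed."""
    def __init__(self):
        self._points = []               # assumed storage: (freq, level) tuples
        self._interpolation = "linear"

    @property
    def freq_min(self):
        # Ascending insertion order means the first point holds the minimum.
        return self._points[0][0] if self._points else None

    @property
    def freq_max(self):
        return self._points[-1][0] if self._points else None

    def add_limit_point(self, freq, level):
        # Enforce strictly ascending frequency order, as described in the notes.
        if self._points and freq <= self._points[-1][0]:
            raise ValueError("frequencies must be added in ascending order")
        self._points.append((freq, level))

    def set_interpolation_type(self, kind):
        # Only the two accepted values from the notes are allowed.
        if kind not in ("linear", "log"):
            raise ValueError("interpolation must be 'linear' or 'log'")
        self._interpolation = kind

limits = FreqLimitsData()
limits.add_limit_point(20.0, -10.0)
limits.add_limit_point(20000.0, -3.0)
```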
New TempLimitsData Class:
Created the TempLimitsData class to manage temporal limits, storing tuples of (level, transparency_time, release_time).
Added add_limit_point to validate and store temporal limit points, with enforced ascending order for transparency_time.
Included time_min and time_max properties to store the minimum and maximum time ranges dynamically.
Implemented clear_limit_points to reset temporal data points and get_limits_in_range to filter data within a specified time range.
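The TempLimitsData API described above might look like this. The tuple layout (level, transparency_time, release_time) is taken from the notes; filtering on transparency_time and the rest of the internals are assumptions.

```python
class TempLimitsData:
    """Sketch of the temporal-limits container; internals are assumptions."""
    def __init__(self):
        self._points = []   # (level, transparency_time, release_time) tuples

    def add_limit_point(self, level, transparency_time, release_time):
        # Enforce strictly ascending transparency_time, per the notes.
        if self._points and transparency_time <= self._points[-1][1]:
            raise ValueError("transparency_time must be ascending")
        self._points.append((level, transparency_time, release_time))

    @property
    def time_min(self):
        return self._points[0][1] if self._points else None

    @property
    def time_max(self):
        return self._points[-1][1] if self._points else None

    def clear_limit_points(self):
        self._points.clear()

    def get_limits_in_range(self, t_start, t_end):
        # Assumed filter on transparency_time within [t_start, t_end].
        return [p for p in self._points if t_start <= p[1] <= t_end]

temp = TempLimitsData()
temp.add_limit_point(-6.0, 0.1, 0.5)
temp.add_limit_point(-3.0, 0.2, 0.6)
temp.add_limit_point(0.0, 0.4, 0.9)
in_range = temp.get_limits_in_range(0.15, 0.3)
```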
Testing:
Added test_temporal_data.py to validate TempLimitsData functionality, including:
Adding multiple temporal limit points.
Checking minimum and maximum time boundaries.
Testing get_limits_in_range to retrieve points within a specified time range.
Enhanced existing frequency limit tests to ensure comprehensive validation.
1.1.0 : Fixed 1.0.9
Description:
1.1.0 fixes major mistakes in the project structure and in the syntax of Data_Type; it can be seen as the debugged 1.0.9 release.
This commit refactors several components of the DataPool, FFTSData, and ChunkableMixin classes to improve data management, streamline object instantiation, and enhance error handling. Key changes include:
FFTSData Refactor:
The FFTSData class now stores only the data_id of FreqSignalData objects instead of the entire object. This approach enables better memory management and modularity.
Added a new property, fft_ids, to retrieve a list of all data_ids associated with FreqSignalData.
Adjusted add_fft_signal() to append only the data_id, and fft_signals to retrieve full objects via datapool using data_id.
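The id-based pattern above can be sketched as below. The datapool here is a hypothetical stub; only fft_ids, add_fft_signal(), and fft_signals are named in the notes.

```python
class FFTSData:
    """Sketch: stores data_ids only and resolves objects via the datapool."""
    def __init__(self, datapool):
        self._datapool = datapool
        self._fft_ids = []              # identifiers only, not full objects

    @property
    def fft_ids(self):
        # List of all data_ids associated with FreqSignalData objects.
        return list(self._fft_ids)

    def add_fft_signal(self, data_id):
        # Append only the identifier; the full object stays in the pool.
        self._fft_ids.append(data_id)

    @property
    def fft_signals(self):
        # Resolve full objects on demand through the datapool lookup.
        return [self._datapool.get_data_object(i) for i in self._fft_ids]

class PoolStub:
    """Hypothetical stand-in for DataPool, for illustration only."""
    def __init__(self):
        self._registry = {"fft-0": "FreqSignalData(fft-0)"}

    def get_data_object(self, data_id):
        return self._registry[data_id]

ffts = FFTSData(PoolStub())
ffts.add_fft_signal("fft-0")
```

Because only ids are held, an FFTSData instance stays small no matter how large the underlying spectra are, which is the memory benefit the notes describe.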
DataPool Enhancements:
Enhanced register_data() to handle variable kwargs more flexibly, allowing data_size_in_bytes and number_of_elements to be passed selectively as needed.
Refactored the get_data() method by removing the auto-acknowledgment functionality to give subscribers explicit control over data access and processing.
Added get_data_object() to directly retrieve Data objects using data_id, with comprehensive permission checks and without automatic acknowledgment.
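A minimal sketch of the get_data_object() behavior described above, assuming a registry plus per-data subscriber sets; the actual DataPool internals, error types, and the subscriber_id parameter are assumptions.

```python
class DataPool:
    """Sketch of id-based retrieval with permission checks; details assumed."""
    def __init__(self):
        self._registry = {}      # data_id -> data object
        self._allowed = {}       # data_id -> set of permitted subscriber ids

    def register_data(self, data_id, data, subscribers=()):
        # Simplified registration; the real method takes richer kwargs.
        self._registry[data_id] = data
        self._allowed[data_id] = set(subscribers)

    def get_data_object(self, data_id, subscriber_id):
        if data_id not in self._registry:
            raise KeyError(f"unknown data_id: {data_id}")
        if subscriber_id not in self._allowed[data_id]:
            raise PermissionError(f"{subscriber_id} may not read {data_id}")
        # Deliberately no acknowledgment here: per the notes, subscribers
        # keep explicit control over when data is acknowledged.
        return self._registry[data_id]

pool = DataPool()
pool.register_data("d1", [1, 2, 3], subscribers={"proc-a"})
obj = pool.get_data_object("d1", "proc-a")
```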
ChunkableMixin Adjustment:
Refined read_chunked_data() to include additional conditions for reading chunks, improving control over data stored in files.
Testing and Validation:
The test script has been updated to verify the proper storage and retrieval of FreqSignalData instances within FFTSData.
Implemented tabulate for a structured display of data_registry status, ensuring data storage and associations are correctly maintained.
Setup for Release 1.0.9:
Updated setup.py to version 1.0.9 to reflect this refactoring and functional enhancement.
These changes enhance the flexibility, clarity, and efficiency of data handling within the DataPool and make FFTSData more modular and suitable for FFT-related data management.
Full Changelog: 1.0.9...1.1.0
1.0.9 : Refactor DataPool and FFTSData Classes with Enhanced Handling for FFT Signals
Full Changelog: 1.0.8...1.0.9
Description:
This commit refactors several components of the DataPool, FFTSData, and ChunkableMixin classes to improve data management, streamline object instantiation, and enhance error handling. Key changes include:
FFTSData Refactor:
The FFTSData class now stores only the data_id of FreqSignalData objects instead of the entire object. This approach enables better memory management and modularity.
Added a new property, fft_ids, to retrieve a list of all data_ids associated with FreqSignalData.
Adjusted add_fft_signal() to append only the data_id, and fft_signals to retrieve full objects via datapool using data_id.
DataPool Enhancements:
Enhanced register_data() to handle variable kwargs more flexibly, allowing data_size_in_bytes and number_of_elements to be passed selectively as needed.
Refactored the get_data() method by removing the auto-acknowledgment functionality to give subscribers explicit control over data access and processing.
Added get_data_object() to directly retrieve Data objects using data_id, with comprehensive permission checks and without automatic acknowledgment.
ChunkableMixin Adjustment:
Refined read_chunked_data() to include additional conditions for reading chunks, improving control over data stored in files.
Testing and Validation:
The test script has been updated to verify the proper storage and retrieval of FreqSignalData instances within FFTSData.
Implemented tabulate for a structured display of data_registry status, ensuring data storage and associations are correctly maintained.
Setup for Release 1.0.9:
Updated setup.py to version 1.0.9 to reflect this refactoring and functional enhancement.
These changes enhance the flexibility, clarity, and efficiency of data handling within the DataPool and make FFTSData more modular and suitable for FFT-related data management.
1.0.8
Changed the FFTSData attribute freq_step to df for compatibility with methods shared with FreqSignalData.
Bumped the release to 1.0.8 so it can be used in other projects.
Full Changelog: 1.0.7...1.0.8
1.0.7
Refactor and fix data storage and chunk handling for file and RAM-based signals.
- Added proper calculation and assignment of data_size_in_bytes and num_samples during data storage.
- Refactored the store_data_from_object and store_data_from_data_generator methods to ensure accurate tracking of sample size and data size.
- Removed redundant manual size calculations in the store_data method.
- Improved error handling and debug outputs when dealing with file-based data.
- Ensured consistent behavior between file-based and RAM-based data storage and retrieval.
- Updated chunk reading logic to handle large datasets more efficiently.
This commit resolves issues with incorrect chunk handling for file-stored signals and ensures that file size and sample size are correctly calculated and managed.
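The size bookkeeping described above could look like this. The helper name and the float32 sample format are illustrative assumptions, not PyDataCore's actual storage code.

```python
import struct

def store_samples(samples, sample_format="f"):
    """Hypothetical helper: pack samples to bytes and report the
    bookkeeping fields the release notes mention."""
    # Pack all samples at once; "f" means 4-byte little-endian-ish C floats.
    raw = struct.pack(f"{len(samples)}{sample_format}", *samples)
    num_samples = len(samples)
    # Derived from the packed buffer, not computed by hand:
    # len(raw) == num_samples * struct.calcsize(sample_format)
    data_size_in_bytes = len(raw)
    return raw, num_samples, data_size_in_bytes

raw, num_samples, data_size_in_bytes = store_samples([0.0, 0.5, 1.0])
```

Computing data_size_in_bytes from the actual packed buffer, instead of a separate manual calculation, is one way the redundant size math mentioned above can be eliminated.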
Full Changelog: 1.0.6...1.0.7
1.0.6
Fix: Correct handling of num_samples in the DataPool store_data method
Full Changelog: 1.0.5...1.0.6
1.0.5
Fix: Correct handling of num_samples in data storage and chunking
- Added proper updates to num_samples when storing data from generators or objects.
- Ensured that num_samples is correctly incremented during chunk-by-chunk storage, both in RAM and file-based data storage.
- Updated the store_data_from_object and store_data_from_data_generator methods to handle large datasets, ensuring an accurate count of samples.
- Refined ChunkableMixin to ensure that chunk-based operations properly update and track the number of samples stored.
- Fixed potential issues with reading incomplete chunks by properly handling the remaining bytes during file-based data retrieval.
- Added more robust data handling for mixed RAM and file-based operations, with enhanced logging for debugging.
These changes address the issues with zero num_samples and improve the overall reliability of chunked data handling in PyDataCore.
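The chunk-by-chunk counting fix can be sketched as follows. Only the method name store_data_from_data_generator comes from the notes; the RAM-list storage and the chunking helper are assumptions.

```python
def store_data_from_data_generator(gen):
    """Sketch: accumulate chunks while keeping num_samples accurate,
    including a final partial chunk (the zero-num_samples bug fixed above)."""
    storage = []
    num_samples = 0
    for chunk in gen:
        storage.extend(chunk)
        num_samples += len(chunk)   # count real samples, even in short chunks
    return storage, num_samples

def make_chunks(samples, chunk_size):
    # Simple chunking generator; the last chunk may be shorter.
    for i in range(0, len(samples), chunk_size):
        yield samples[i:i + chunk_size]

samples = list(range(10))           # two full chunks of 4, one partial of 2
stored, num_samples = store_data_from_data_generator(make_chunks(samples, 4))
```

Incrementing num_samples by the actual chunk length, rather than assuming every chunk is full, is what keeps the count correct when the last chunk is partial.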
Full Changelog: 1.0.4...1.0.5
1.0.4
Full Changelog: 1.0.3...1.0.4
Merge remote-tracking branch 'origin/main'
Added the ability to read specific chunks of large data stored in files
Added the read_specific_chunk method to ChunkableMixin to allow reading a specific chunk directly from a file.
Improved handling of incomplete chunks (at the end of a file) to guarantee that no data is lost during reading.
Validated and tested with various data types (TEMPORAL_SIGNAL, FREQ_SIGNAL, etc.) and varied file and chunk sizes.
Updated README.md to reflect the new file-based handling of large datasets.
Added the get_data_chunk(self, data_id, chunk_index, chunk_size=1024) method to the DataPool class to retrieve a specific chunk of a data object.
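The seek-based chunk read behind these methods could be sketched like this. The function shape (a file object rather than a data_id) and the error handling are assumptions; only the chunk_index/chunk_size semantics come from the notes.

```python
import io

def read_specific_chunk(f, chunk_index, chunk_size=1024):
    """Sketch: read one chunk directly from a file-like object, returning
    a short final chunk instead of dropping the trailing bytes."""
    f.seek(chunk_index * chunk_size)
    chunk = f.read(chunk_size)      # may be shorter than chunk_size at EOF
    if not chunk:
        raise IndexError(f"chunk {chunk_index} is past end of file")
    return chunk

buf = io.BytesIO(bytes(range(10)))  # 10 bytes -> chunks of 4, 4, then 2
first = read_specific_chunk(buf, 0, chunk_size=4)
last = read_specific_chunk(buf, 2, chunk_size=4)
```

Returning whatever read() yields at the end of the file, instead of requiring a full chunk_size, is what guarantees the incomplete final chunk is not lost.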
1.0.3
Build failed ....
Merge remote-tracking branch 'origin/main'
Added the ability to read specific chunks of large data stored in files
- Added the read_specific_chunk method to ChunkableMixin to allow reading a specific chunk directly from a file.
- Improved handling of incomplete chunks (at the end of a file) to guarantee that no data is lost during reading.
- Validated and tested with various data types (TEMPORAL_SIGNAL, FREQ_SIGNAL, etc.) and varied file and chunk sizes.
- Updated README.md to reflect the new file-based handling of large datasets.
- Added the get_data_chunk(self, data_id, chunk_index, chunk_size=1024) method to the DataPool class to retrieve a specific chunk of a data object.
Full Changelog: 1.0.2...1.0.3