feat: consolidate DataSpark web/core/cli capabilities #9
Merged

markhazleton merged 14 commits into main on Mar 31, 2026
Conversation
feat: Add feature specification for DataSpark platform consolidation

- Introduced a comprehensive feature specification document outlining user stories, acceptance criteria, functional requirements, and success criteria for the consolidation of sql2csv, DataAnalysisDemo, and related repositories into a unified DataSpark platform.
- Document includes detailed user scenarios for data upload, exploratory analysis, chart creation, pivot tables, SQLite database tools, AI insights, statistical analysis, CLI automation, and more.
- Established clear requirements for data ingestion, analysis, visualization, and API integration, ensuring a robust foundation for the DataSpark platform.

docs: Create consolidation recommendation document for DataSpark

- Added a detailed analysis and recommendation document for the consolidation of the sql2csv and DataAnalysisDemo repositories.
- Provided an executive summary, repository comparison, feature overlap matrix, and a phased approach for rebranding and feature porting.
- Outlined unique features from DataAnalysisDemo worth porting and recommended a unified architecture for the DataSpark solution.
docs(spec): add DataSpark consolidation plan, tasks, and design artifacts

- plan.md: technical context, constitution check (PASS), project structure
- research.md: 8 topics resolved (namespace rename, SQL fix, API auth, samples)
- data-model.md: 10 entities with fields, types, constraints, validation rules
- contracts/web-api.md: 10 REST API endpoints with request/response schemas
- contracts/cli.md: 4 CLI commands (discover, export, schema, generate)
- quickstart.md: developer getting-started guide
- tasks.md: 112 tasks across 14 phases, organized by 11 user stories
- Updated copilot agent context with tech stack from plan

Feature: 001-dataspark-consolidation
docs(spec): apply analysis remediation across spec, plan, tasks, contracts

Addresses all 16 findings from /speckit.analyze:

CRITICAL:
- C1: Add T113 (integration tests FR-043) + T114 (CI coverage gate) to tasks.md
- C2: Add T115 (constitution.md rename post Phase 1) to tasks.md

HIGH:
- H1: FR-014 updated to 5 formats (add JPEG); T048 updated to match
- H2: plan.md corrective action #2 corrected from UnifiedAnalysisService -> DatabaseAnalysisService line ~729 (both Gate Result and Principle VI note)
- H3: T041/T099 scope split clarified; T099 now depends on T041

MEDIUM:
- M1: FR-003 'first rows' -> 'first 10 rows' (matches edge case spec)
- M2: Add T117 (GitHub archival for DataAnalysisDemo) to tasks.md
- M3: Add T118 (pivot 50K perf test SC-005) to tasks.md
- M4: Add T119 (CLI batch test SC-006) to tasks.md
- M5: plan.md Principle V note corrected: only Sql2Csv.Tests missing TreatWarningsAsErrors
- M6: FR-007 now defines data quality score; T037 updated to verify it
- M7: Add T116 (GitHub repo rename) to tasks.md

LOW:
- L1: T028 tightened to 'controller actions'; T103 tightened to 'view forms'
- L2: T007/T009 ordering dependency noted; T009 excludes RootNamespace/AssemblyName
- L3: T017 now covers both .github/copilot-instructions.md files
- L4: T065 specifies ZIP output; contracts/web-api.md adds GET /api/Database/export-all

Total tasks: 112 -> 119
Add unit tests for SchemaService and implement API key authentication middleware

- Created SchemaServiceTests to validate the functionality of SchemaService methods, including GetTablesAsync, GetTableNamesAsync, and GenerateSchemaReportAsync.
- Added tests for handling null and invalid connection strings, cancellation tokens, and ensuring correct row counts and schema report formats.
- Implemented ApiKeyAuthMiddleware to enforce API key authentication on API routes, including error handling for missing or invalid keys.
- Added a solution file to organize the project structure and included a strong name key file for assembly signing.
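In outline, an ApiKeyAuthMiddleware like the one described above typically guards `/api` routes and short-circuits requests with a missing or invalid key. The following is an illustrative sketch only, not the PR's actual code; the `X-Api-Key` header name and the `Api:Key` configuration key are assumptions.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;

// Sketch of an API-key gate for API routes. Header and config names are assumed.
public class ApiKeyAuthMiddleware
{
    private const string HeaderName = "X-Api-Key"; // hypothetical header name
    private readonly RequestDelegate _next;
    private readonly string? _expectedKey;

    public ApiKeyAuthMiddleware(RequestDelegate next, IConfiguration config)
    {
        _next = next;
        _expectedKey = config["Api:Key"]; // hypothetical configuration key
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Only guard API routes; everything else passes through untouched.
        if (!context.Request.Path.StartsWithSegments("/api"))
        {
            await _next(context);
            return;
        }

        if (!context.Request.Headers.TryGetValue(HeaderName, out var provided))
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            await context.Response.WriteAsync("Missing API key.");
            return;
        }

        if (string.IsNullOrEmpty(_expectedKey) ||
            !string.Equals(provided.ToString(), _expectedKey, StringComparison.Ordinal))
        {
            context.Response.StatusCode = StatusCodes.Status403Forbidden;
            await context.Response.WriteAsync("Invalid API key.");
            return;
        }

        await _next(context);
    }
}
```

Registered with `app.UseMiddleware<ApiKeyAuthMiddleware>()` before the endpoint routing for API controllers, this keeps non-API pages (views, static assets) unauthenticated while rejecting unauthenticated API calls early.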
feat(dataspark): implement database tools, AI hardening, analytics, and CLI consolidation

docs(spec): add pull request review for DataSpark consolidation with compliance assessment and recommendations

fix(pr-9): address remaining findings and enhance security measures in implementation plan

fix(pr-9): update review metadata and address async I/O discipline violations in DataSpark.Core services

feat(core): implement database discovery summary service and export packaging service with logging
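The export packaging service in the final commit, combined with finding L4 above (T065 specifies ZIP output), suggests a service that bundles exported files into a single ZIP archive with logging. A minimal sketch under those assumptions; the class and method names (`ExportPackagingService`, `PackageAsync`) are hypothetical and the real implementation may differ:

```csharp
using System.IO.Compression;
using Microsoft.Extensions.Logging;

// Hypothetical sketch: packages a directory of exported files into one ZIP archive.
public class ExportPackagingService
{
    private readonly ILogger<ExportPackagingService> _logger;

    public ExportPackagingService(ILogger<ExportPackagingService> logger)
        => _logger = logger;

    public Task<string> PackageAsync(
        string exportDirectory, string zipPath, CancellationToken ct = default)
    {
        return Task.Run(() =>
        {
            ct.ThrowIfCancellationRequested();

            // Overwrite a stale archive from a previous export run.
            if (File.Exists(zipPath))
                File.Delete(zipPath);

            ZipFile.CreateFromDirectory(
                exportDirectory, zipPath,
                CompressionLevel.Optimal, includeBaseDirectory: false);

            _logger.LogInformation(
                "Packaged {Directory} into {ZipPath}", exportDirectory, zipPath);
            return zipPath;
        }, ct);
    }
}
```

This shape would also back the `GET /api/Database/export-all` endpoint added to contracts/web-api.md, which returns the packaged archive.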
markhazleton added a commit that referenced this pull request on Mar 31, 2026
* feat: Add feature specification for DataSpark platform consolidation
* docs(spec): add DataSpark consolidation plan, tasks, and design artifacts
* docs(spec): apply analysis remediation across spec, plan, tasks, contracts
* Add unit tests for SchemaService and implement API key authentication middleware
* feat(dataspark): implement database tools, AI hardening, analytics, and CLI consolidation
* docs(spec): add pull request review for DataSpark consolidation with compliance assessment and recommendations
* fix(pr-9): address review findings for security, architecture, async, and logging
* fix(pr-9): close remaining security and architecture review findings
* docs(pr-review): update PR #9 follow-up review report
* fix(pr-9): address remaining findings and enhance security measures in implementation plan
* fix(pr-9): update review metadata and address async I/O discipline violations in DataSpark.Core services
* fix(core,web,tests): resolve PR #9 review findings
* fix(pr-9): update review metadata and improve assessment details in PR documentation
* feat(core): implement database discovery summary service and export packaging service with logging