Release: Merge release into master from: release/2.56.4#14607
Merged
Conversation
…3-2.57.0-dev (#14579)

* Update versions in application files
* Update versions in application files

---------

Co-authored-by: DefectDojo release bot <dojo-release-bot@users.noreply.github.com>
Co-authored-by: Ross E Esposito <ross@defectdojo.com>
* feat: Add JFrog Xray API Summary Artifact Scan configuration
* docs: Document upgrade process for DefectDojo 2.56.3

  Added documentation for upgrading to DefectDojo version 2.56.3, focusing on JFrog Xray API Summary Artifact parser deduplication.

* Update release notes for version 2.56.4

---------

Co-authored-by: valentijnscholten <valentijnscholten@gmail.com>
* Update versions in application files
* chore(deps): bump pyopenssl from 25.3.0 to 26.0.0

  Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 25.3.0 to 26.0.0.
  - [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
  - [Commits](pyca/pyopenssl@25.3.0...26.0.0)

  ---
  updated-dependencies:
  - dependency-name: pyopenssl
    dependency-version: 26.0.0
    dependency-type: direct:production
  ...

  Signed-off-by: dependabot[bot] <support@github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: DefectDojo release bot <dojo-release-bot@users.noreply.github.com>
Co-authored-by: Ross E Esposito <ross@defectdojo.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [picomatch](https://github.com/micromatch/picomatch). These dependencies needed to be updated together.

Updates `picomatch` from 2.3.1 to 2.3.2
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](micromatch/picomatch@2.3.1...2.3.2)

Updates `picomatch` from 4.0.3 to 4.0.4
- [Release notes](https://github.com/micromatch/picomatch/releases)
- [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md)
- [Commits](micromatch/picomatch@4.0.3...4.0.4)

---
updated-dependencies:
- dependency-name: picomatch
  dependency-version: 2.3.2
  dependency-type: indirect
- dependency-name: picomatch
  dependency-version: 4.0.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* add semi-large sample for jfrog xray unified
* rename
* add larger acunetix scan
* add larger acunetix scan
Bumps [requests](https://github.com/psf/requests) from 2.32.5 to 2.33.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.32.5...v2.33.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-version: 2.33.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…iants (#14593)

* Standardize CI tests on Debian AMD64 and document supported image variants

  Reduce the CI matrix to only build and test the Debian AMD64 Django image, which is the officially supported configuration. Alpine and ARM64 images are still built for release but are no longer tested in CI.

  - Add Docker Image Variants section to installation docs
  - Remove ARM64 from build and REST framework test matrices
  - Exclude django-alpine from test build workflow
  - Switch performance tests from alpine to debian
  - Remove alpine from integration and REST framework OS matrices

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Restore ARM64 builds and unit tests, keep integration tests AMD64-only

  ARM64 should still be built and unit tested in CI. Only integration, performance, and k8s tests are restricted to AMD64. Update the installation docs to reflect the three support tiers.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Update .github/workflows/build-docker-images-for-testing.yml

  Co-authored-by: valentijnscholten <valentijnscholten@gmail.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: valentijnscholten <valentijnscholten@gmail.com>
…#14569)

* fix: handle missing status_finding_non_special prefetch in reimporter

  When a finding is created during reimport (no match found) and added to the candidate dictionaries for same-batch matching, it lacks the status_finding_non_special prefetch attribute that is only set by build_candidate_scope_queryset. If a subsequent finding in the same batch matches against this newly created finding, accessing existing_finding.status_finding_non_special raises AttributeError.

  Add EndpointManager.get_non_special_endpoint_statuses(), which returns the prefetched attribute when available, falling back to an equivalent DB query otherwise. Use it at both access sites: default_reimporter.py (process_matched_mitigated_finding) and endpoint_manager.py (update_endpoint_status).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* add test case
* ruff
* remove mock
* Remove unused import from test_reimport_prefetch.py

---------

Co-authored-by: seantechco <admin@seantech.co>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Valentijn Scholten <valentijnscholten@gmail.com>
… managers (#14572)

Add PluggableContextTask between DojoAsyncTask and PgHistoryTask that loads context managers from the CELERY_TASK_CONTEXT_MANAGERS setting. This allows plugins (e.g. Pro) to wrap all Celery tasks with custom context managers without relying on Celery signals (which don't fire in prefork workers).

Also propagate the sync kwarg from process_findings to dojo_dispatch_task in both DefaultImporter and DefaultReImporter so callers can force post_process_findings_batch to run in-process.
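The wrapping mechanism can be sketched with `contextlib.ExitStack`. This is a hedged illustration of the idea, assuming the setting is a list of context-manager factories; the real setting presumably holds dotted import paths resolved from Django settings, and all names below are illustrative.

```python
import contextlib

events = []


@contextlib.contextmanager
def audit_context():
    # Illustrative context manager; a plugin could register something similar.
    events.append("enter")
    try:
        yield
    finally:
        events.append("exit")


# Stand-in for the CELERY_TASK_CONTEXT_MANAGERS setting (hypothetical shape).
TASK_CONTEXT_MANAGERS = [audit_context]


def run_with_task_context(task_body):
    # Enter every configured context manager, in order, around the task body.
    # ExitStack guarantees exits run in reverse order even on exceptions.
    with contextlib.ExitStack() as stack:
        for factory in TASK_CONTEXT_MANAGERS:
            stack.enter_context(factory())
        return task_body()


result = run_with_task_context(lambda: events.append("task") or "done")
print(events)   # ['enter', 'task', 'exit']
print(result)   # done
```

Because the stack is entered inside the task's own process, this works in prefork workers where Celery worker signals are unreliable, which is the motivation stated in the commit.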
Removes django-linear-migrations from requirements-dev.txt and INSTALLED_APPS, and deletes max_migration.txt. The tool caused startup failures when the package was not installed but DD_DEBUG=True was set.
* Optimize prepare_duplicates_for_delete and add test coverage

  Replace the per-original O(n×m) loop with a single bulk UPDATE for the inside-scope duplicate reset. Outside-scope reconfiguration still runs per-original but now uses .iterator() and .exists() to avoid loading full querysets into memory.

  Also adds WARN-level logging to fix_loop_duplicates for visibility into how often duplicate loops occur in production, and a comment on removeLoop explaining the optimization opportunity.

* fix: remove unused import and fix docstring lint warning
* perf: eliminate per-original queries with prefetch and bulk reset

  Remove redundant .exclude() and .exists() calls by leveraging the bulk UPDATE that already unlinks inside-scope duplicates. Add prefetch_related to fetch all reverse relations in a single query.

* add comment
* perf: replace per-object async delete with SQL cascade walker

  Replace the per-object obj.delete() approach in async_delete_crawl_task with a recursive SQL cascade walker that compiles QuerySets to raw SQL and walks model._meta.related_objects bottom-up. This auto-discovers all FK relations at runtime, including those added by plugins.

  Key changes:
  - New dojo/utils_cascade_delete.py: cascade_delete() utility
  - New dojo/signals.py: pre_bulk_delete_findings signal for extensibility
  - New bulk_clear_finding_m2m() in finding/helper.py for M2M cleanup with FileUpload disk cleanup and orphaned Notes deletion
  - Rewritten async_delete_crawl_task with chunked cascade deletion
  - Removed async_delete_chunk_task (no longer needed)
  - Product grading recalculated once at the end instead of per-object

* perf: replace mass_model_updater with single UPDATE in reconfigure_duplicate_cluster

  Use QuerySet.update() instead of mass_model_updater to re-point duplicates to the new original: a single SQL query instead of loading all findings into Python and calling bulk_update.
* cleanup: remove dead code from duplicate handling

  Remove reset_duplicate_before_delete, reset_duplicates_before_delete, and set_new_original — all replaced by the bulk UPDATE in prepare_duplicates_for_delete and .update() in reconfigure_duplicate_cluster. Remove the unused mass_model_updater import.

* fix: delete outside-scope duplicates before main scope to avoid FK violations

  When bulk-deleting findings in chunks, an original in an earlier chunk could fail to delete because its duplicate (higher ID) in a later chunk still references it via the duplicate_finding FK. Fix by deleting outside-scope duplicates first, then the main scope. Also moves the pre_bulk_delete_findings signal into bulk_delete_findings so it fires automatically.

* fix: use bool type for DD_DUPLICATE_CLUSTER_CASCADE_DELETE env var
* fix: replace save_no_options with .update() in reconfigure_duplicate_cluster

  Avoids triggering Finding.save() signals (pre_save_changed, execute_prioritization_calculations) when reconfiguring duplicate clusters during deletion. Adds tests for cross-engagement duplicate reconfiguration and product deletion with duplicates.

* refactor: scope prepare_duplicates_for_delete to the full object, not per-engagement

  Adds product= and product_type= parameters so the entire deletion scope is handled in one call, avoiding unnecessary reconfiguration of findings that are about to be deleted anyway. Uses subqueries instead of materializing ID sets, and chunks the originals loop with prefetch to bound memory. Reverts finding_delete to use ORM .delete() for single-finding cascade deletes.

* refactor: remove ASYNC_DELETE_MAPPING, use FINDING_SCOPE_FILTERS

  Replace the model_list-based mapping with a simple scope filter dict. prepare_duplicates_for_delete now accepts a single object and derives the scope via FINDING_SCOPE_FILTERS. Removes the redundant non-Finding model deletion loop — cascade_delete on the top-level object handles all remaining children. Cleans up the async_delete class.
* fix: resolve ruff lint violations in helper and tests
* remove obsolete test
* perf: add bulk_delete_findings, fix CASCADE_DELETE and scope expansion

  - Add bulk_delete_findings() wrapper: M2M cleanup + chunked cascade_delete
  - reconfigure_duplicate_cluster: return early when CASCADE_DELETE=True instead of calling Django .delete(), which fires signals per finding
  - finding_delete: use bulk_delete_findings when CASCADE_DELETE=True
  - async_delete_crawl_task: expand scope to include outside-scope duplicates, use bulk_delete_findings instead of manual M2M + cascade_delete calls
  - Fix test to use the async_delete class instead of direct task import

* fix: handle M2M and tag cleanup in cascade_delete

  Adds generic M2M through-table cleanup to cascade_delete so tags and other M2M relations are cleared before row deletion. Introduces bulk_remove_all_tags in tag_utils to properly decrement tagulous tag counts during bulk deletion. Adds a test for product deletion with tagged objects.

* refactor: auto-discover TagFields in bulk_remove_all_tags

  Instead of hardcoding field names, iterate over all fields on the model and select those with tag_options. This avoids unexpected side effects when callers pass a specific tag_field_name parameter.

* perf: address PR review feedback for large-scale delete safety

  - Stream finding IDs via iterator() + batched instead of materializing the full ID list into memory. Prevents OOM on 4.5M+ finding deletes.
  - Add SET LOCAL statement_timeout (300s) and deadlock error logging to cascade_delete SQL execution. Prevents runaway queries from holding locks indefinitely and surfaces deadlock errors in logs.
  - Reuse the scope_ids subquery variable and replace .exists() + .count() with a single .count() call to avoid evaluating the subquery twice.
  - Add a comment explaining why FileUpload uses per-object ORM delete (custom delete() removes files from disk; file attachments are rare).
  - Scope fix_loop_duplicates to the deletion set instead of scanning the full findings table. The double self-join is cheap when filtered to only findings in the scope being deleted.
  - Document that pre_bulk_delete_findings signal receivers must not materialize the full queryset (use .filter()/.iterator() instead).
  - Add a skip_m2m_for parameter to cascade_delete so bulk_delete_findings can tell it that Finding M2M was already cleaned by bulk_clear_finding_m2m, avoiding redundant tag count aggregation queries.

* refactor: rename cascade_delete to cascade_delete_related_objects

  The function now only deletes related objects, not the root record. This allows async_delete_task to call obj.delete() on the top-level object via the ORM, which fires Django signals (post_delete notifications, pghistory audit, Pro signals like product_post_delete). bulk_delete_findings uses execute_delete_sql to delete the finding rows themselves after cascade_delete_related_objects cleans children.

* Update dojo/settings/settings.dist.py

  Co-authored-by: Cody Maffucci <46459665+Maffooch@users.noreply.github.com>

---------

Co-authored-by: Cody Maffucci <46459665+Maffooch@users.noreply.github.com>
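The bottom-up walk that cascade_delete_related_objects performs can be illustrated with a toy relation graph. This is a sketch only: the real walker discovers children at runtime via model._meta.related_objects and issues compiled SQL DELETEs, while the dict and model names below are hypothetical.

```python
# Toy FK graph: each model maps to the models that reference it.
CHILD_RELATIONS = {
    "Product": ["Engagement"],
    "Engagement": ["Test"],
    "Test": ["Finding"],
    "Finding": [],
}

deleted_tables = []


def cascade_delete_related_objects(model):
    # Depth-first: delete grandchildren before children so no surviving row
    # ever references a missing parent. The root object itself is left for
    # obj.delete() so Django's post_delete signals and audit hooks still fire.
    for child in CHILD_RELATIONS[model]:
        cascade_delete_related_objects(child)
        deleted_tables.append(child)


cascade_delete_related_objects("Product")
print(deleted_tables)   # ['Finding', 'Test', 'Engagement']
```

A production version would also need a visited set (reverse relations can form diamonds) plus the chunking, statement timeout, and M2M cleanup described in the commits above; the recursion order is the essential idea.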
* Add scan_date to import settings if overridden
* Fix datetime JSON serialization in import_settings

  Convert scan_date to an ISO-format string before storing it in the import_settings dict, which gets JSON-serialized. Raw datetime objects cause "TypeError: Object of type datetime is not JSON serializable".

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add unit tests for scan_date in import_settings

  Tests both cases:
  - User supplies scan_date: verifies it is stored as an ISO string
  - No scan_date supplied: verifies it is stored as None

  Both tests also verify import_settings remains JSON-serializable.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix timezone import in scan_date test

  Use datetime.timezone.utc instead of django.utils.timezone.utc, which doesn't exist.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix ruff lint: import sorting and use datetime.UTC alias

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Finding.Meta.ordering includes multiple columns (numerical_severity, date,
title, epss_score, epss_percentile). When Django generates
SELECT DISTINCT test_id ... ORDER BY those columns, PostgreSQL requires
them in the SELECT list, so Django silently adds them. The DISTINCT then
operates on the full tuple instead of test_id alone, causing the same test
to appear multiple times in the iterator and be processed repeatedly.
Fix by calling .order_by("test_id") before .values_list().distinct() to
override the model-level ordering, so the query stays SELECT DISTINCT test_id
ORDER BY test_id.
…licate-test-ids fix(dedupe): prevent duplicate test processing in batch dedupe command
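The effect of the fix — `.order_by("test_id")` before `.values_list().distinct()` — can be simulated without a database. The rows below are toy data (column names echo the Finding ordering fields, values are invented) showing why DISTINCT over the full ordering tuple repeats test IDs while DISTINCT on test_id alone does not:

```python
# One test has findings at two severities; another has one.
rows = [
    {"test_id": 1, "numerical_severity": "S1"},
    {"test_id": 1, "numerical_severity": "S3"},
    {"test_id": 2, "numerical_severity": "S1"},
]

# With the model-level ordering in force, Django adds the ordering columns
# to the SELECT, so DISTINCT operates on the whole (test_id, severity) tuple:
tuple_distinct = sorted({(r["test_id"], r["numerical_severity"]) for r in rows})
print([t[0] for t in tuple_distinct])   # [1, 1, 2]  -> test 1 processed twice

# After .order_by("test_id") overrides the ordering, DISTINCT applies to
# test_id alone and each test appears exactly once:
id_distinct = sorted({r["test_id"] for r in rows})
print(id_distinct)   # [1, 2]
```

The same reasoning applies to any Django model whose Meta.ordering lists several columns: a `.distinct()` on a narrower `.values_list()` silently inherits those columns unless the ordering is overridden or cleared.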
This pull request modifies sensitive code paths (dojo/importers/default_reimporter.py and dojo/importers/endpoint_manager.py), which the scanner flagged as risky edits; configure allowed authors or paths in .dryrunsecurity.yaml if these changes are intentional.
🔴 Configured Codepaths Edit in
| Vulnerability | Configured Codepaths Edit |
|---|---|
| Description | Sensitive edits detected for this file. Sensitive file paths and allowed authors can be configured in .dryrunsecurity.yaml. |
🔴 Configured Codepaths Edit in dojo/importers/endpoint_manager.py (drs_3e66e55f)
| Vulnerability | Configured Codepaths Edit |
|---|---|
| Description | Sensitive edits detected for this file. Sensitive file paths and allowed authors can be configured in .dryrunsecurity.yaml. |
We've notified @mtesauro.
Comment to provide feedback on these findings.
Report false positive: @dryrunsecurity fp [FINDING ID] [FEEDBACK]
Report low-impact: @dryrunsecurity nit [FINDING ID] [FEEDBACK]
Example: @dryrunsecurity fp drs_90eda195 This code is not user-facing
All finding details can be found in the DryRun Security Dashboard.
Release triggered by rossops