
Feat/azure devops deploy steps #2013

Merged
moshloop merged 3 commits into main from feat/azure-devops-deploy-steps
Mar 18, 2026

Conversation

@moshloop
Member

@moshloop moshloop commented Mar 18, 2026

Summary by CodeRabbit

  • New Features

    • Azure DevOps integration now includes release description, reason, variables, deploy steps and artifact summaries per environment
    • Release change type now reflects environment name
    • Log results now support grouped log output
  • Bug Fixes

    • Cleanup job correctly increments success counts when removing stale items
  • Tests

    • New end-to-end tests and fixtures cover relationship matching: matched, unmatched, parent, and lookup scenarios

…re devops scraper

Fetch complete release definition JSON for better config tracking, expose deploy steps in environment details, and use environment names as change types for more granular release tracking.
Adds support for extracting release and environment-level configuration variables (excluding secrets), artifact definitions with pipeline/version/branch references, and release reason/description fields from Azure DevOps releases. Includes new data structures for artifacts and configuration variables, and helper functions to flatten and summarize this metadata.
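The secret-excluding variable flattening described above could look roughly like the following minimal Go sketch. The ConfigurationVariable shape, the field names (Value, IsSecret), and the override order are assumptions based on this description, not the actual implementation.

```go
package main

import "fmt"

// ConfigurationVariable mirrors the shape of an Azure DevOps release
// variable; IsSecret marks variables whose values must not be persisted.
// (Field names are assumed for illustration.)
type ConfigurationVariable struct {
	Value    string
	IsSecret bool
}

// flattenVariables merges release-level and environment-level variables
// into a flat map, skipping secrets. Later levels (e.g. environment)
// override earlier ones (e.g. release) on key collision.
func flattenVariables(levels ...map[string]ConfigurationVariable) map[string]string {
	out := map[string]string{}
	for _, level := range levels {
		for name, v := range level {
			if v.IsSecret {
				continue // never expose secret values in scraped config
			}
			out[name] = v.Value
		}
	}
	return out
}

func main() {
	release := map[string]ConfigurationVariable{
		"REGION":  {Value: "eu-west-1"},
		"API_KEY": {Value: "s3cret", IsSecret: true},
	}
	env := map[string]ConfigurationVariable{
		"REGION": {Value: "us-east-1"}, // environment override wins
	}
	fmt.Println(flattenVariables(release, env)) // map[REGION:us-east-1]
}
```

Secrets are dropped entirely rather than masked, so a redacted placeholder never leaks into config diffs.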
@coderabbitai
Contributor

coderabbitai bot commented Mar 18, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a81efabd-75d4-45b9-bba7-1a15e62a36bb

📥 Commits

Reviewing files that changed from the base of the PR and between 16743b1 and de0d230.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (4)
  • go.mod
  • jobs/cleanup.go
  • scrapers/logs/logs.go
  • scrapers/stale.go

Walkthrough

Enhances Azure DevOps release scraping to fetch full release definitions and enrich release results with artifacts, variables, deploy steps and metadata; adds relationship extraction fixtures and e2e specs; expands extract e2e test harness to pre-populate and assert relationships; simplifies stale-item timeout handling and updates dependencies.

Changes

  • Test Fixtures - Relationships (fixtures/data/relationships_basic.json, fixtures/data/relationships_unmatched.json)
    Added JSON fixtures for matched and unmatched service→deployment relationship scenarios.
  • Test Specifications - Relationships (scrapers/extract/testdata/e2e/relationships_*.yaml)
    Added four e2e YAML specs verifying relationship lookup, matched/unmatched, and parent relationship behaviors.
  • Extract E2E Test Harness (scrapers/extract_e2e_test.go)
    Added Name to the pre-populated config struct, extended pre-population/teardown to cover config_changes/config_items, and injected serialized ConfigRelationship results into the test env for assertions.
  • Azure DevOps Client Types (scrapers/azure/devops/client.go)
    Added types DeployStep, ConfigurationVariable, ArtifactSourceRef, and ReleaseArtifact; extended Release/ReleaseEnvironment with variables, deploy steps, and artifacts; added GetReleaseDefinition method(s).
  • Azure DevOps Release Processing (scrapers/azure/devops/releases.go, scrapers/azure/devops/pipelines_test.go, scrapers/azure/devops/releases_test.go)
    Changed the buildReleaseResult signature to accept defJSON map[string]any; the scraper now fetches release definition JSON and uses it as result.Config when present; populated ChangeType from env.Name; added helpers flattenVariables and summarizeArtifacts; updated and extended tests accordingly.
  • Logs Output Structure (scrapers/logs/logs.go)
    Added a Groups []*logs.LogGroup field to LogResult to include log group collections in output.
  • Stale Item Cleanup / Config (jobs/cleanup.go, scrapers/stale.go)
    Simplified stale-item age handling: removed intermediate coalesce logic, changed DefaultStaleTimeout from string to time.Duration, clamped durations, and adjusted deletion bookkeeping to increment success counts.
  • Dependency Updates (go.mod)
    Bumped several flanksource modules and added JSON-related indirect dependencies (jsonparser, jsonschema, easyjson, go-ordered-map, etc.).

Sequence Diagrams

sequenceDiagram
    participant Scraper as Release Scraper
    participant Client as AzureDevopsClient
    participant API as Azure DevOps API
    participant Processor as buildReleaseResult

    Scraper->>Client: GetReleaseDefinition(ctx, project, defID)
    Client->>API: Fetch release definition JSON
    API-->>Client: defJSON (map[string]any)
    Client-->>Scraper: defJSON

    Scraper->>Client: GetReleases(ctx, project, defID)
    Client->>API: Fetch releases list
    API-->>Client: []Release
    Client-->>Scraper: releases

    Scraper->>Processor: buildReleaseResult(ctx, config, project, def, defJSON, releases, cutoff)
    Processor->>Processor: flattenVariables(release.Variables, env.Variables)
    Processor->>Processor: summarizeArtifacts(release.Artifacts)
    Processor->>Processor: Populate result.Config from defJSON
    Processor-->>Scraper: ScrapeResult (enriched)
sequenceDiagram
    participant Test as E2E Test
    participant Loader as Fixture Loader
    participant Store as Config Store
    participant Extractor as Extract Scraper
    participant Repo as ConfigRelationship Repo

    Test->>Loader: Load fixtures/data/relationships_basic.json
    Loader-->>Test: Service entity
    Test->>Store: Pre-populate Deployment (deploy-web-frontend)
    Store-->>Test: Stored

    Test->>Extractor: Run extract with transform (backend_ref → App::Deployment)
    Extractor->>Repo: Query ConfigRelationship for scraper
    Repo-->>Extractor: []ConfigRelationship
    Extractor-->>Test: Serialized relationships injected into env
    Test->>Test: Assert relationship count/properties

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 16.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): the title 'Feat/azure devops deploy steps' directly describes the main change: adding deploy steps support to Azure DevOps scrapers, as evidenced by new DeployStep types and fields in Release-related structures.


Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
scrapers/stale.go (1)

19-36: ⚠️ Potential issue | 🟠 Major

Pipeline failure: staleDuration from scrape start is never used.

The static analysis correctly identifies that staleDuration computed from time.Since(start) on line 22 is always overwritten in the switch statement. According to the summary, this PR removed the comparison logic that previously enforced taking the greater of the parsed value and scrape duration.

If the scrape-start-based duration is no longer needed, remove the dead code (lines 19-24). Otherwise, if the intent was to preserve the "don't delete items scraped recently" behavior, the comparison logic needs to be restored.

Option 1: Remove dead code if scrape duration is no longer needed
 func DeleteStaleConfigItems(ctx context.Context, staleTimeout string, scraperID uuid.UUID) (int64, error) {
 	var staleDuration time.Duration
-	if val := ctx.Value(contextKeyScrapeStart); val != nil {
-		if start, ok := val.(time.Time); ok {
-			staleDuration = time.Since(start)
-		}
-	}
 
 	switch staleTimeout {
 	case "keep":
Option 2: Restore max comparison if scrape duration should be respected
 	switch staleTimeout {
 	case "keep":
 		return 0, nil
 	case "":
-		staleDuration = ctx.Properties().Duration("config.retention.stale_item_age", DefaultStaleTimeout)
+		configDuration := ctx.Properties().Duration("config.retention.stale_item_age", DefaultStaleTimeout)
+		if configDuration > staleDuration {
+			staleDuration = configDuration
+		}
 	default:
 		if parsed, err := duration.ParseDuration(staleTimeout); err != nil {
 			return 0, fmt.Errorf("failed to parse stale timeout %s: %w", staleTimeout, err)
 		} else {
-			staleDuration = time.Duration(parsed)
+			if time.Duration(parsed) > staleDuration {
+				staleDuration = time.Duration(parsed)
+			}
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scrapers/stale.go` around lines 19 - 36, The variable staleDuration is
computed from ctx.Value(contextKeyScrapeStart) but then always overwritten by
the switch on staleTimeout; either remove the unused scrape-start code (the
ctx.Value(contextKeyScrapeStart) block and the initial staleDuration
declaration) or restore the original “take the max” behavior by keeping the
scrape-start-derived staleDuration and, after resolving staleTimeout (using
ctx.Properties().Duration or duration.ParseDuration), set staleDuration =
max(staleDuration, parsedDuration) so the logic in functions using staleDuration
(and symbols staleTimeout, duration.ParseDuration, ctx.Properties().Duration,
DefaultStaleTimeout) preserves the “don’t delete items scraped more recently”
rule.
scrapers/azure/devops/pipelines_test.go (1)

367-378: ⚠️ Potential issue | 🔴 Critical

Replace env.Name with the mapped changeType variable in the ChangeType assignment.

The code computes a status-based changeType value via releaseEnvStatusToChangeType[env.Status] and uses it in conditional logic (line 274), but then incorrectly assigns env.Name to ChangeType instead. This causes:

  1. Semantic mismatch: ChangeType now holds environment names ("Staging", "Prod") instead of status constants (ChangeTypeSucceeded, ChangeTypeInProgress) used elsewhere in the codebase (see run_state.go for pipeline runs).

  2. Downstream breakage: Code in db/update.go:366 calls events.GetSeverity(change.ChangeType), which expects status constants, not environment names. This will produce incorrect severity levels.

  3. Inconsistent semantics: Approval changes correctly use ChangeType: changeType for status values; environment changes should do the same. The actual deployment status should remain in details["status"].

The fix is straightforward: change line ~330 from ChangeType: env.Name, to ChangeType: changeType, to use the computed status value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scrapers/azure/devops/pipelines_test.go` around lines 367 - 378, In
buildReleaseResult, the ChangeType for environment changes is incorrectly set to
env.Name (environment label) instead of the mapped status constant; locate the
block where env is processed (uses releaseEnvStatusToChangeType[env.Status]
assigned to changeType) and replace the assignment to the ChangeType field so it
uses changeType (the mapped status) rather than env.Name, leaving
details["status"] populated with env.Status/name as before; this keeps semantics
consistent with approval changes (which already use ChangeType: changeType) and
ensures downstream callers like events.GetSeverity receive the expected status
constants.
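The fix this finding describes amounts to assigning the mapped status constant rather than the environment name. A minimal Go sketch of that mapping follows; the map keys and constant values are assumed for illustration, and only the helper's behavior mirrors what the review asks for.

```go
package main

import "fmt"

// Status-based change-type constants, as used elsewhere in the codebase
// for pipeline runs (values assumed for illustration).
const (
	ChangeTypeSucceeded  = "Succeeded"
	ChangeTypeInProgress = "InProgress"
)

// releaseEnvStatusToChangeType maps an Azure DevOps environment status to
// the status constant that downstream severity mapping expects.
var releaseEnvStatusToChangeType = map[string]string{
	"succeeded":  ChangeTypeSucceeded,
	"inProgress": ChangeTypeInProgress,
}

// changeTypeFor returns the mapped status constant, keeping the environment
// name (e.g. "Staging", "Prod") out of ChangeType; the name belongs in the
// change details instead. Unmapped statuses pass through unchanged.
func changeTypeFor(envStatus string) string {
	if ct, ok := releaseEnvStatusToChangeType[envStatus]; ok {
		return ct
	}
	return envStatus
}

func main() {
	fmt.Println(changeTypeFor("succeeded")) // "Succeeded", not an env name
}
```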
🧹 Nitpick comments (3)
scrapers/extract_e2e_test.go (1)

252-268: Cleanup order is correct but has minor redundancy.

The teardown properly handles FK constraints by deleting config_relationships first. However, there's minor redundancy: lines 256-258 delete config_changes for pre-populated items by config_id, while line 265 also deletes config_changes for all items by scraper_id. This is harmless (DELETE is idempotent) but could be simplified.

♻️ Optional: Remove redundant config_changes deletion
 for _, id := range createdItems {
-  DefaultContext.DB().Where("config_id = ?", id).Delete(&models.ConfigChange{})
   DefaultContext.DB().Delete(&models.ConfigItem{}, "id = ?", id)
 }

Since line 265 already deletes all config_changes for items belonging to this scraper, the per-item deletion in the loop is redundant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scrapers/extract_e2e_test.go` around lines 252 - 268, The teardown currently
deletes per-item ConfigChange records inside the createdItems loop via
DefaultContext.DB().Where("config_id = ?", id).Delete(&models.ConfigChange{}),
but later there is a bulk Exec that already deletes config_changes for all items
of this scraper (the Exec "DELETE FROM config_changes WHERE config_id IN (SELECT
id FROM config_items WHERE scraper_id = ?)"). Remove the per-item deletion line
from the loop (leave deletion of ConfigItem and other per-item cleanup intact)
so only the bulk Exec handles config_changes; reference symbols: createdItems,
models.ConfigChange, DefaultContext.DB().Where(...).Delete, and the Exec("DELETE
FROM config_changes WHERE config_id IN (SELECT id FROM config_items WHERE
scraper_id = ?)").
scrapers/azure/devops/releases_test.go (1)

487-492: Consider using a safer type assertion in test.

The unchecked type assertion details["artifacts"].([]map[string]any) will panic if the type doesn't match. While in test code a panic effectively fails the test, using Gomega's type-safe assertions would provide clearer error messages.

♻️ Suggested improvement
-		artifacts := details["artifacts"].([]map[string]any)
-		Expect(artifacts).To(HaveLen(1))
-		Expect(artifacts[0]["definition"]).To(Equal("my-pipeline"))
+		Expect(details).To(HaveKey("artifacts"))
+		artifacts, ok := details["artifacts"].([]map[string]any)
+		Expect(ok).To(BeTrue(), "artifacts should be []map[string]any")
+		Expect(artifacts).To(HaveLen(1))
+		Expect(artifacts[0]["definition"]).To(Equal("my-pipeline"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scrapers/azure/devops/releases_test.go` around lines 487 - 492, The test
currently does an unchecked type assertion on details["artifacts"] which can
panic; change it to a safe assertion by first retrieving the value and asserting
its type with Gomega (e.g., use a type-check Expect on details["artifacts"] or
assert ok after a comma-ok cast) before accessing fields, then continue with
Expect checks on artifacts[0]["definition"], artifacts[0]["version"],
artifacts[0]["branch"], and artifacts[0]["isPrimary"]; reference the details map
access and the artifacts variable in the releases_test.go assertions to locate
where to add the safe type check.
scrapers/azure/devops/releases.go (1)

97-105: Consider soft-failing GetReleaseDefinition to improve resilience.

The current implementation fails the entire scrape if fetching a release definition fails. Consider whether this should be a soft failure (log warning, use nil for defJSON, continue) to improve resilience when a single definition is temporarily unavailable.

♻️ Alternative: soft-fail approach
 		defJSON, err := releaseClient.GetReleaseDefinition(ctx, project.Name, def.ID)
 		if err != nil {
-			return results.Errorf(err, "failed to get release definition %d", def.ID)
+			ctx.Logger.Warnf("failed to get release definition %d: %v (proceeding without definition config)", def.ID, err)
+			defJSON = nil
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scrapers/azure/devops/releases.go` around lines 97 - 105, The call to
releaseClient.GetReleaseDefinition currently returns an error that aborts the
whole scrape; change this to soft-fail by catching the error from
releaseClient.GetReleaseDefinition(ctx, project.Name, def.ID), logging a warning
(including def.ID and project.Name and the error), setting defJSON to nil, and
continuing so buildReleaseResult(ctx, config, project, def, defJSON, releases,
cutoff) runs with a nil defJSON; ensure buildReleaseResult can handle a nil
defJSON (or add a nil check before calling it) and do not return the error from
the failed GetReleaseDefinition so a single missing definition doesn't fail the
entire scrape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@scrapers/azure/devops/pipelines_test.go`:
- Around line 367-378: In buildReleaseResult, the ChangeType for environment
changes is incorrectly set to env.Name (environment label) instead of the mapped
status constant; locate the block where env is processed (uses
releaseEnvStatusToChangeType[env.Status] assigned to changeType) and replace the
assignment to the ChangeType field so it uses changeType (the mapped status)
rather than env.Name, leaving details["status"] populated with env.Status/name
as before; this keeps semantics consistent with approval changes (which already
use ChangeType: changeType) and ensures downstream callers like
events.GetSeverity receive the expected status constants.

In `@scrapers/stale.go`:
- Around line 19-36: The variable staleDuration is computed from
ctx.Value(contextKeyScrapeStart) but then always overwritten by the switch on
staleTimeout; either remove the unused scrape-start code (the
ctx.Value(contextKeyScrapeStart) block and the initial staleDuration
declaration) or restore the original “take the max” behavior by keeping the
scrape-start-derived staleDuration and, after resolving staleTimeout (using
ctx.Properties().Duration or duration.ParseDuration), set staleDuration =
max(staleDuration, parsedDuration) so the logic in functions using staleDuration
(and symbols staleTimeout, duration.ParseDuration, ctx.Properties().Duration,
DefaultStaleTimeout) preserves the “don’t delete items scraped more recently”
rule.

---

Nitpick comments:
In `@scrapers/azure/devops/releases_test.go`:
- Around line 487-492: The test currently does an unchecked type assertion on
details["artifacts"] which can panic; change it to a safe assertion by first
retrieving the value and asserting its type with Gomega (e.g., use a type-check
Expect on details["artifacts"] or assert ok after a comma-ok cast) before
accessing fields, then continue with Expect checks on
artifacts[0]["definition"], artifacts[0]["version"], artifacts[0]["branch"], and
artifacts[0]["isPrimary"]; reference the details map access and the artifacts
variable in the releases_test.go assertions to locate where to add the safe type
check.

In `@scrapers/azure/devops/releases.go`:
- Around line 97-105: The call to releaseClient.GetReleaseDefinition currently
returns an error that aborts the whole scrape; change this to soft-fail by
catching the error from releaseClient.GetReleaseDefinition(ctx, project.Name,
def.ID), logging a warning (including def.ID and project.Name and the error),
setting defJSON to nil, and continuing so buildReleaseResult(ctx, config,
project, def, defJSON, releases, cutoff) runs with a nil defJSON; ensure
buildReleaseResult can handle a nil defJSON (or add a nil check before calling
it) and do not return the error from the failed GetReleaseDefinition so a single
missing definition doesn't fail the entire scrape.

In `@scrapers/extract_e2e_test.go`:
- Around line 252-268: The teardown currently deletes per-item ConfigChange
records inside the createdItems loop via DefaultContext.DB().Where("config_id =
?", id).Delete(&models.ConfigChange{}), but later there is a bulk Exec that
already deletes config_changes for all items of this scraper (the Exec "DELETE
FROM config_changes WHERE config_id IN (SELECT id FROM config_items WHERE
scraper_id = ?)"). Remove the per-item deletion line from the loop (leave
deletion of ConfigItem and other per-item cleanup intact) so only the bulk Exec
handles config_changes; reference symbols: createdItems, models.ConfigChange,
DefaultContext.DB().Where(...).Delete, and the Exec("DELETE FROM config_changes
WHERE config_id IN (SELECT id FROM config_items WHERE scraper_id = ?)").

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 10160dcd-4e0d-42be-b733-b779851e8385

📥 Commits

Reviewing files that changed from the base of the PR and between 1a2bbc2 and 16743b1.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (15)
  • fixtures/data/relationships_basic.json
  • fixtures/data/relationships_unmatched.json
  • go.mod
  • jobs/cleanup.go
  • scrapers/azure/devops/client.go
  • scrapers/azure/devops/pipelines_test.go
  • scrapers/azure/devops/releases.go
  • scrapers/azure/devops/releases_test.go
  • scrapers/extract/testdata/e2e/relationships_lookup.yaml
  • scrapers/extract/testdata/e2e/relationships_matched.yaml
  • scrapers/extract/testdata/e2e/relationships_parent.yaml
  • scrapers/extract/testdata/e2e/relationships_unmatched.yaml
  • scrapers/extract_e2e_test.go
  • scrapers/logs/logs.go
  • scrapers/stale.go

…dling

Updates commons, duty, gomplate, and is-healthy packages. Refactors stale item age handling to use centralized property resolution instead of repeated fallback logic. Adds LogGroup to log scraper results and simplifies duration parsing in stale config cleanup.
@moshloop moshloop force-pushed the feat/azure-devops-deploy-steps branch from 16743b1 to de0d230 on March 18, 2026 12:57
@moshloop moshloop enabled auto-merge (rebase) March 18, 2026 12:57
@moshloop moshloop merged commit be5035d into main Mar 18, 2026
12 of 14 checks passed
@moshloop moshloop deleted the feat/azure-devops-deploy-steps branch March 18, 2026 13:07
