
Conversation

GitTimeraider (Owner)

No description provided.

@Copilot Copilot AI review requested due to automatic review settings September 12, 2025 17:48
@GitTimeraider GitTimeraider merged commit 8265474 into main Sep 12, 2025
4 checks passed

@Copilot Copilot AI left a comment

Pull Request Overview

This PR implements comprehensive fixes to prevent multiple backup runs for the same repository, addressing issues with duplicate backup job creation and execution. The changes add multiple layers of protection including database-level checks, file-based locking, and automatic cleanup mechanisms.

Key changes:

  • Added a file-based locking mechanism to prevent concurrent backup executions
  • Implemented automatic cleanup of orphaned temp directories and duplicate jobs
  • Enhanced duplicate prevention with extended time windows and stuck-job detection
  • Added a periodic health-check job to monitor and clean up scheduler inconsistencies (a minimal sketch follows this list)
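
As a rough illustration of the last point, a periodic health-check job could be registered as in the sketch below. This assumes APScheduler's BackgroundScheduler; the function name, job id, and interval are illustrative, not taken from app.py.

from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def scheduler_health_check():
    # Hypothetical duties, per the PR description: mark BackupJob rows stuck
    # in 'running' past a timeout as failed, and drop duplicate scheduler
    # entries for the same repository.
    pass

scheduler.add_job(
    scheduler_health_check,
    trigger='interval',
    minutes=10,                  # illustrative interval
    id='scheduler_health_check',
    replace_existing=True,       # avoid registering the job twice on restart
)
scheduler.start()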

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File              | Description
backup_service.py | Added auto-cleanup for orphaned temp directories before starting new backups
app.py            | Extensive changes including file-based locking, job tracking, automatic cleanup of duplicates and stuck jobs, and periodic health monitoring
Comments suppressed due to low confidence (1)

app.py:1

  • The code references the BackupJob.created_at field, but based on the visible context the BackupJob model may not define it; the fields visible at backup-job creation are started_at, status, etc. If the field is missing, this will raise an AttributeError.
import os
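
If the model really does lack the field, one possible fix is to add the column. A minimal sketch, assuming BackupJob is a Flask-SQLAlchemy model (the surrounding fields are guessed from the review comment, not from the diff):

from datetime import datetime
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class BackupJob(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    status = db.Column(db.String(32))     # mentioned in the review
    started_at = db.Column(db.DateTime)   # mentioned in the review
    # The field the new code queries; the default stamps creation time.
    created_at = db.Column(db.DateTime, default=datetime.utcnow)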


Comment on lines +46 to +50
# Auto-cleanup: Check for and clean up any orphaned temp directories
user_backup_dir = self.backup_base_dir / f"user_{repository.user_id}"
repo_backup_dir = user_backup_dir / repository.name
if repo_backup_dir.exists():
    self._cleanup_temp_directories(repo_backup_dir)

Copilot AI Sep 12, 2025

The method _cleanup_temp_directories is called but not defined in the shown code. This will result in an AttributeError when the backup service attempts to clean up temp directories.
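
For reference, a hypothetical implementation of such a helper might look like the following; the 'tmp_' prefix is an assumption, since the real naming scheme is not visible in this diff.

import shutil
from pathlib import Path

def _cleanup_temp_directories(self, repo_backup_dir: Path) -> None:
    # Remove leftover temp directories from interrupted backup runs.
    # Assumes temp dirs are created with a 'tmp_' prefix (hypothetical).
    for entry in repo_backup_dir.iterdir():
        if entry.is_dir() and entry.name.startswith('tmp_'):
            shutil.rmtree(entry, ignore_errors=True)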

Comment on lines +834 to +837
try:
    os.unlink(lock_file_path)
except:
    pass

Copilot AI Sep 12, 2025

A bare except: clause suppresses all exceptions, including SystemExit and KeyboardInterrupt. Use a specific exception type such as OSError or FileNotFoundError instead.

Suggested change
-try:
-    os.unlink(lock_file_path)
-except:
-    pass
+try:
+    os.unlink(lock_file_path)
+except OSError:
+    pass
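
An equivalent and arguably clearer idiom for this ignore-if-missing deletion is the standard library's contextlib.suppress:

import contextlib
import os

# Same effect as try/except OSError: pass, with the intent spelled out.
with contextlib.suppress(OSError):
    os.unlink(lock_file_path)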


Comment on lines +822 to +839
try:
    lock_file = open(lock_file_path, 'w')
    fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    logger.info(f"Acquired file lock for repository {repo.name}")

    try:
        logger.info(f"Starting scheduled backup for repository: {repo.name}")
        backup_service.backup_repository(repo)
        logger.info(f"Completed scheduled backup for repository: {repo.name}")
    finally:
        fcntl.flock(lock_file.fileno(), fcntl.LOCK_UN)
        lock_file.close()
        try:
            os.unlink(lock_file_path)
        except:
            pass

except (IOError, OSError) as lock_error:

Copilot AI Sep 12, 2025

The file handle should be managed with a context manager (with statement) to ensure proper cleanup even if an exception occurs before the explicit close() call.

Suggested change
-try:
-    lock_file = open(lock_file_path, 'w')
-    fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
-    logger.info(f"Acquired file lock for repository {repo.name}")
-    try:
-        logger.info(f"Starting scheduled backup for repository: {repo.name}")
-        backup_service.backup_repository(repo)
-        logger.info(f"Completed scheduled backup for repository: {repo.name}")
-    finally:
-        fcntl.flock(lock_file.fileno(), fcntl.LOCK_UN)
-        lock_file.close()
-        try:
-            os.unlink(lock_file_path)
-        except:
-            pass
-except (IOError, OSError) as lock_error:
+try:
+    with open(lock_file_path, 'w') as lock_file:
+        fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
+        logger.info(f"Acquired file lock for repository {repo.name}")
+        try:
+            logger.info(f"Starting scheduled backup for repository: {repo.name}")
+            backup_service.backup_repository(repo)
+            logger.info(f"Completed scheduled backup for repository: {repo.name}")
+        finally:
+            fcntl.flock(lock_file.fileno(), fcntl.LOCK_UN)
+            try:
+                os.unlink(lock_file_path)
+            except:
+                pass
+except (IOError, OSError) as lock_error:
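
Going one step further, the acquire/release/unlink dance could be factored into a small reusable context manager. A sketch; repo_lock is a hypothetical helper, not part of this PR:

import fcntl
import os
from contextlib import contextmanager

@contextmanager
def repo_lock(lock_file_path):
    # Holds a non-blocking exclusive flock for the duration of the block;
    # raises BlockingIOError (an OSError) if another process holds the lock.
    with open(lock_file_path, 'w') as lock_file:
        fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        try:
            yield
        finally:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_UN)
            try:
                os.unlink(lock_file_path)
            except OSError:
                pass

The scheduled-backup body would then shrink to a single with repo_lock(lock_file_path): block around backup_service.backup_repository(repo).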


-# Additional check: ensure no backup started in the last 30 seconds to prevent rapid duplicates
-recent_cutoff = datetime.utcnow() - timedelta(seconds=30)
+# 2. Check for very recent backups (within last 2 minutes) to prevent rapid duplicates
+recent_cutoff = datetime.utcnow() - timedelta(minutes=2)

Copilot AI Sep 12, 2025

[nitpick] The time window for preventing duplicates has changed from 30 seconds to 2 minutes without explanation. This magic number should be made configurable, or the reason for choosing 2 minutes should at least be documented.
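
One way to address this is a named, environment-overridable constant; BACKUP_DUPLICATE_WINDOW_SECONDS is an invented setting name, shown only as a sketch:

import os
from datetime import datetime, timedelta

# Default of 2 minutes: long enough to absorb scheduler double-fires, short
# enough not to block a legitimate retry (rationale assumed, not from the PR).
DUPLICATE_WINDOW_SECONDS = int(os.environ.get('BACKUP_DUPLICATE_WINDOW_SECONDS', '120'))

recent_cutoff = datetime.utcnow() - timedelta(seconds=DUPLICATE_WINDOW_SECONDS)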

