yogg17/NodeLock
NodeLock | Network Node Manager

NodeLock is an internal-grade network node allocation platform designed to eliminate reservation chaos for shared IP-addressable infrastructure (for example OLTs, lab nodes, and troubleshooting chassis).

Impact Snapshot

  • Reduced reservation coordination overhead (mail + chat ping-pong) by approximately 90% in the team where it is currently adopted.
  • Introduced time-bound, auditable allocation records with clear ownership and return accountability.
  • Enforced one-active-reservation-per-vault semantics to prevent double booking and operational ambiguity.

1. Problem Statement

In network labs and integration teams, expensive and scarce network nodes are usually shared across multiple engineers and sub-teams. Without a centralized reservation ledger, teams experience recurring operational issues:

  • Manual coordination overhead across chat/mail.
  • Lost accountability for who owns a setup at a given time.
  • Scheduling conflicts and accidental overlap.
  • Missing audit trail for post-incident and root-cause workflows.

NodeLock addresses this by introducing a structured reservation lifecycle with owner assignment, borrow windows, controlled return, and archival history.


2. Introduction

NodeLock is a Flask-based web application that manages the lifecycle of network node usage:

  1. Node inventory registration.
  2. Ownership assignment (lend window).
  3. Borrowing/reservation within allowed windows.
  4. Return flow with authentication and archival.
  5. Defaulter reporting and notification trigger.

The platform is built with a controller-centric application layer and SQL-backed persistence. The design intentionally favors explicit business transitions over hidden automation, which makes behavior understandable and operable for production support teams.


3. Guidelines and Operating Principles

Reservation Guidelines

  • Standard Reservation: all network IP nodes must be reserved using a valid Employee ID.
  • Timely Release: users must release setups at the end of their reservation window.
  • Conflict Resolution: if a required node is occupied, coordinate handoff with the active user before reallocation.
  • Accountability: the primary owner remains accountable for setup integrity during the ownership window.

Engineering Principles Enforced by NodeLock

  • Deterministic state transitions for reserve and return.
  • Validation-first form handling before mutation.
  • Atomic DB commit boundaries around critical transitions.
  • Recoverable failure handling via flash messaging + logging.

4. Module Explanation

NodeLock is organized around four high-value controller modules that map directly to business capabilities.

4.1 Admin Controller

Responsibility:

  • Establish privileged session (ADMIN) via key-based authentication.
  • Gate administrative operations such as inventory creation/edit flows.

Core behavior:

  • Uses bcrypt hash verification (ADMIN_BCRYPT_HASH) from environment.
  • Creates session principal only on successful credential verification.

Why it matters:

  • Keeps privileged write operations behind explicit auth, while preserving lightweight UX for internal tooling.

4.2 Node Controller

Responsibility:

  • Manage canonical node inventory (ID, IP, title, location, identifier, content).
  • Provide create, list, update operations for network assets.

Core behavior:

  • Input completeness validation + IP format validation.
  • Session-aware admin checks for mutating paths.
  • Distinct setup counting for inventory-level observability.

Why it matters:

  • Defines the source-of-truth inventory set from which vault ownership slots are instantiated.
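The completeness and IP-format checks could look like the following sketch. Field names and the `validate_node_form` helper are illustrative assumptions; the stdlib `ipaddress` module stands in for whatever IP validation the controller actually uses.

```python
# Illustrative sketch of validation-first form handling for node
# create/edit; field names are assumptions, not the repository's code.
import ipaddress

def validate_node_form(form: dict) -> list[str]:
    """Return a list of validation errors for a node create/edit form."""
    errors = []
    for field in ("ip", "title", "location", "identifier"):
        if not form.get(field, "").strip():
            errors.append(f"missing required field: {field}")
    if form.get("ip"):
        try:
            # Accepts both IPv4 and IPv6; raises ValueError otherwise.
            ipaddress.ip_address(form["ip"].strip())
        except ValueError:
            errors.append(f"invalid IP address: {form['ip']}")
    return errors
```

Running validation before any mutation keeps malformed inventory rows out of the source-of-truth table.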

4.3 Vault Controller (Core Engine)

Responsibility:

  • Manage ownership, reservation, return, deletion, and defaulter reporting.
  • Enforce business invariants around time windows and reservation exclusivity.

Core behavior:

  • Ownership (lend) and reservation (borrow) are validated against date windows.
  • Reservation flow includes anti-conflict checks:
    • server-side current state check (already reserved guard),
    • hidden vault id consistency check against route id.
  • Return flow archives transaction and resets mutable reservation fields.
  • Notification pipeline renders defaulter templates and sends report mail.

Why it matters:

  • This controller implements the atomic operational lifecycle of shared-node usage.
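The two anti-conflict guards in the reservation flow can be sketched as a pure decision function. The `VaultState` dataclass and argument names are illustrative stand-ins for the ORM row and request data.

```python
# Illustrative sketch of the borrow path's guards: re-check current
# state (already-reserved) and form/route id coherence (anti-tamper).
from dataclasses import dataclass

@dataclass
class VaultState:
    vault_id: int
    reserved: bool

def can_reserve(current: VaultState, route_id: int,
                form_vault_id: int) -> tuple[bool, str]:
    """Decide whether a borrow request may proceed, with a reason."""
    if form_vault_id != route_id or current.vault_id != route_id:
        return False, "vault id mismatch"   # hidden-id coherence guard
    if current.reserved:
        return False, "already reserved"    # stale-form guard
    return True, "ok"
```

Because the state is re-fetched at request time, a form rendered minutes earlier cannot silently double-book a vault that was reserved in the meantime.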

4.4 Archive Controller

Responsibility:

  • Provide read-only access to historical reservation/return outcomes.

Core behavior:

  • Loads archived records for operational audit and historical accountability.

Why it matters:

  • Converts transient reservation events into a durable audit trail.

5. Features

End-to-End Reservation Flow

  1. Admin/user identifies a node from inventory.
  2. Ownership is created in vault with owner identity and lend window.
  3. Borrow request is submitted within ownership boundaries.
  4. Validation checks:
    • required fields,
    • employee identity presence,
    • date-window admissibility,
    • anti-tamper vault id check,
    • already-reserved guard.
  5. On return, system archives transaction and restores vault availability.
  6. Defaulter view and optional report mail support operational follow-up.

Authentication and Access Features

  • Session-based admin access control.
  • Environment-backed bcrypt authentication hash.
  • Employee ID-centric identity referencing in reservation paths.

Archive and Traceability Features

  • Historical archive records for post-incident diagnosis.
  • Flash + structured logging instrumentation across flow branches.

6. Novelty and Design Advantages

6.1 Node List + Vault Split (Key Architectural Moat)

A major design decision is splitting inventory entities (Node) from temporal allocation entities (NodeVault).

  • Node List stores static asset metadata.
  • Vault stores dynamic ownership/borrow state with time windows.

Why this is novel and practical:

  • One node can participate in one or more ownership contexts over time without duplicating static metadata in inventory workflows.
  • It reduces data redundancy and keeps mutable reservation context isolated from immutable node identity.
  • It simplifies incident analysis by making "what the node is" independent from "who used it and when".

6.2 Human-Centric Identity Keying

  • Employee ID is used as a memorable operational key for reservation ownership and borrowing.
  • This reduces friction in fast-paced lab operations compared to opaque identity tokens.

6.3 Atomicity Principle

Reservation and return state transitions are committed as explicit DB transactions.

  • On success: commit establishes canonical state.
  • On failure: rollback prevents partial update artifacts.

The result is predictable behavior under operational and validation failures.
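The commit/rollback boundary can be sketched as below. The sketch uses stdlib `sqlite3` for self-containment; the real application goes through a Flask-SQLAlchemy session, and the table/column names here are illustrative assumptions. Note how the `reserved = 0` predicate also enforces the single-reservation invariant inside the same transaction.

```python
# Illustrative sketch of an atomic reserve transition using sqlite3;
# the real app uses a Flask-SQLAlchemy session. Schema is assumed.
import sqlite3

def reserve_vault(conn: sqlite3.Connection, vault_id: int, borrower: str) -> bool:
    """Mark a vault reserved inside one transaction; roll back on any failure."""
    try:
        cur = conn.execute(
            "UPDATE vault SET reserved = 1, borrower = ? "
            "WHERE id = ? AND reserved = 0",
            (borrower, vault_id),
        )
        if cur.rowcount != 1:
            conn.rollback()   # already reserved or unknown id: no-op
            return False
        conn.commit()         # success: commit establishes canonical state
        return True
    except sqlite3.Error:
        conn.rollback()       # failure: no partial update artifacts
        return False
```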

6.4 Single-Reservation Invariant

A vault record can hold only one active reservation at a time.

  • Request-time state re-check prevents stale-form assumptions.
  • Hidden route/form id coherence check reduces tamper/confusion risk.

7. Architecture (MVP-Oriented)

This implementation aligns with a pragmatic Model-View-Presenter (MVP) interpretation:

  • Model: SQLAlchemy-backed entity classes (Node, NodeVault, Archive) + data session lifecycle.
  • View: Jinja templates for create/lend/borrow/return/archive/report pages.
  • Presenter (Controller Layer): Flask controller modules orchestrating validation, business rules, persistence, and response rendering.

7.1 High-Level Component Map

```mermaid
flowchart TD
    U[User/Admin] --> V[Views\nJinja Templates]
    V --> C[Controllers\nAdmin/Node/Vault/Archive]
    C --> M[Models\nNode/NodeVault/Archive]
    M --> D[(SQLite DB\ninstance/nodelock.db)]
    C --> J[(data/auth.json)]
    C --> L[(logs/*.log)]
    C --> E[SMTP Mail Engine]
```

7.2 Request Routing Layer

Blueprint registration maps URL namespaces to controller handlers:

  • Node routes: create/show/edit inventory entries.
  • Vault routes: lend/borrow/return/delete/report operations.
  • Admin routes: authenticate and session promotion.
  • Archive routes: read-only historical listing.

7.3 Data Model Responsibilities

Node:

  • Canonical inventory metadata (id, ip, title, location, identifier, content).

NodeVault:

  • Ownership and reservation lifecycle state.
  • Owner and borrower identity fields.
  • lend/borrow window boundaries and availability state.

Archive:

  • Immutable historical record of completed usage.

7.4 Flow-Level Architecture

A) Lend (Ownership Creation)

  1. Validate node + owner + date inputs.
  2. Resolve owner metadata from auth dataset.
  3. Persist NodeVault row with Available state.

B) Borrow (Reservation)

  1. Re-fetch current vault state.
  2. Reject if already reserved.
  3. Validate requested borrow window within lend window.
  4. Validate identity and request integrity.
  5. Persist borrower fields and mark reserved.
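The date-window admissibility check in step 3 could look like the following sketch; the argument names are illustrative assumptions, with all boundaries as `datetime.date` values.

```python
# Illustrative sketch of the borrow-window check: the requested window
# must be well-ordered and fit entirely inside the lend window.
from datetime import date

def window_admissible(borrow_start: date, borrow_end: date,
                      lend_start: date, lend_end: date) -> bool:
    """True only if the borrow window is valid and inside the lend window."""
    if borrow_start > borrow_end:
        return False
    return lend_start <= borrow_start and borrow_end <= lend_end
```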

C) Return

  1. Authenticate return request.
  2. Persist archival snapshot.
  3. Reset mutable borrower fields.
  4. Restore vault availability.
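The archive-then-reset ordering of the return flow can be sketched with plain dicts standing in for the ORM rows; the field names and `archive_and_release` helper are illustrative assumptions.

```python
# Illustrative sketch of the return flow: snapshot the completed
# transaction first, then clear only the mutable borrower fields.
def archive_and_release(vault: dict, archive: list) -> None:
    """Append an archival snapshot, then restore vault availability."""
    archive.append({
        "vault_id": vault["id"],
        "borrower": vault["borrower"],
        "borrow_start": vault["borrow_start"],
        "borrow_end": vault["borrow_end"],
    })
    # Ownership metadata stays intact; only reservation state resets.
    vault.update(borrower=None, borrow_start=None,
                 borrow_end=None, reserved=False)
```

Archiving before the reset means an interrupted return can be retried without losing the historical record.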

D) Defaulter Reporting

  1. Query overdue borrow windows.
  2. Render report template.
  3. Trigger SMTP notification path.
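The notification step could build its message as in the sketch below. The sender/recipient values, subject line, and `build_defaulter_mail` helper are illustrative assumptions; only the use of an email MIME pipeline handed to SMTP reflects the description above.

```python
# Illustrative sketch of the mail step: wrap a rendered defaulter
# report in a MIME message; subject and helper name are assumptions.
import smtplib
from email.mime.text import MIMEText

def build_defaulter_mail(html_report: str, sender: str,
                         recipients: list[str]) -> MIMEText:
    """Wrap a rendered defaulter report in a MIME message ready for SMTP."""
    msg = MIMEText(html_report, "html")
    msg["Subject"] = "NodeLock: overdue reservations report"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    return msg

# Sending (not executed here) would hand the message to an SMTP host:
# with smtplib.SMTP(smtp_host) as s:
#     s.sendmail(sender, recipients, msg.as_string())
```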

7.5 Logging and Operability

The Vault controller now contains structured logging at:

  • function entry points,
  • validation outcomes,
  • branch decisions (reserve refusal, id mismatch, auth failure),
  • transaction success/failure,
  • mail trigger status.

This improves debuggability and operational visibility without changing business behavior.


8. Tech Stack

  • Language: Python 3.x
  • Web Framework: Flask
  • ORM / Persistence: SQLAlchemy + Flask-SQLAlchemy
  • Migrations: Flask-Migrate / Alembic
  • Templates: Jinja2
  • Database: SQLite (instance/nodelock.db)
  • Session backend: filesystem-based Flask session
  • Auth verification: bcrypt (admin), MD5 (temporary path in vault return auth)
  • Notification: SMTP (email MIME pipeline)
  • Process control: Bash launcher with PID management

9. Installation and Local Setup

9.1 Prerequisites

  • Python 3.x
  • pip
  • virtualenv (recommended)

9.2 Clone and Bootstrap

```shell
git clone <your-repo-url>
cd equipment_dashboard
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

9.3 Runtime Directories

Create required runtime folders if missing:

```shell
mkdir -p logs instance flask_session
```

9.4 Environment Setup

Configure .env (example):

```
ADMIN_BCRYPT_HASH=<bcrypt-hash>
```

Generate a bcrypt hash:

```shell
python3 -c "import bcrypt; print(bcrypt.hashpw(b'YOUR_ADMIN_KEY', bcrypt.gensalt()).decode())"
```

9.5 Config Setup

Create local application config from the sanitized template:

```shell
cp mock-config.py config.py
```

Then update local values in config.py if needed for your environment.

Important:

  • config.py is intentionally ignored by git.
  • Keep secrets in local env variables or local config only.

9.6 Database Initialization

Initialize schema:

```shell
python db_init.py
```

This creates the SQLite schema under instance/nodelock.db through model metadata registration.

9.7 Optional Git Hygiene

Use .gitignore to avoid committing local runtime artifacts and secrets:

  • .env
  • venv/
  • instance/*.db
  • logs/*.log
  • flask_session/
  • __pycache__/

9.8 Launch Script Usage

launch-application.sh supports process control with PID tracking:

```shell
./launch-application.sh start dev
./launch-application.sh start test
./launch-application.sh status
./launch-application.sh stop
```

Behavior:

  • Starts Flask in background.
  • Stores PID in nodelock.pid.
  • Writes runtime log stream to logs/nodelock.log.
  • Supports graceful stop via SIGTERM.

10. Controller Workflow Deep Dive

10.1 Admin Workflow

  1. User submits admin key.
  2. Controller validates against ADMIN_BCRYPT_HASH.
  3. On success, session["user"] = "ADMIN".

Operational guarantee:

  • Administrative write paths are protected by session role checks.

10.2 Node Workflow

  1. Admin creates/edits inventory entries.
  2. Controller validates field completeness + IP format.
  3. Data persists to Node table.

Operational guarantee:

  • Inventory remains normalized and independently manageable.

10.3 Vault Workflow

Ownership path:

  1. Select node.
  2. Create ownership slot with lend window.

Borrow path:

  1. User submits borrow request.
  2. System revalidates current vault state from DB.
  3. Rejects already-reserved or invalid id mismatch.
  4. Enforces date window constraints.
  5. Marks vault reserved and stores borrower metadata.

Return path:

  1. Authenticate employee return.
  2. Archive full transaction.
  3. Reset vault to available.

Operational guarantee:

  • One reservation at a time per vault record.
  • Controlled lifecycle with reversibility and auditability.

10.4 Archive Workflow

  1. Read archived rows.
  2. Render historical allocation timeline for governance and diagnostics.

11. Disclaimer

This tool originated to solve OLT reservation and sharing challenges in a Nokia context. The published version is a generalized rebuild of the same architecture and operational principles for broader network resource allocation use cases.

I chose MD5 for MVP identity matching to keep the return flow lightweight; admin-level authentication uses bcrypt for industry-standard salted hashing.


12. Future Hardening Roadmap (Post-v1)

  • Replace temporary MD5 path in vault return authentication with stronger scheme.
  • Move SMTP host/recipients fully to environment configuration.
  • Add integration tests for lend/borrow/return edge cases.
  • Add optimistic locking/version checks for high-concurrency reservation race windows.
  • Introduce role tiers beyond single ADMIN principal.
