mqMonitor is a real-time process pipeline monitoring system built with .NET 8, RabbitMQ, PostgreSQL, and a React 19 frontend. It provides full observability over distributed process pipelines using event-driven architecture, the Saga pattern (choreography), and CQRS.
Processes flow through configurable pipeline stages (e.g., Report > Account > Routine > Payment > Notification > Audit), with each stage handled by an independent worker. The monitor captures all lifecycle events, projects them into a read model, and pushes real-time updates to the frontend via SignalR.
```
                  ┌────────────────────────────────────────┐
                  │         React Frontend (Vite)          │
                  │     Kanban Board + Process Details     │
                  └───────────────────┬────────────────────┘
                                      │ SignalR WebSocket
                  ┌───────────────────┴────────────────────┐
                  │          Monitor API (.NET 8)          │
                  │   REST API + Event Projection + Saga   │
                  │   Cancel Consumer + Compensation       │
                  └───────────────────┬────────────────────┘
                                      │
                                      ▼
                           PostgreSQL (Read Model)

┌───────────┐
│ Producer  │───▶  RabbitMQ Pipeline Exchange
│   (CLI)   │               │
└───────────┘         ┌─────┴─────┐
                      ▼           ▼
                ┌───────────┐ ┌───────────┐
                │  Stage 1  │ │  Stage N  │  ◀─ Independent Workers
                │  Worker   │ │  Worker   │     (one per pipeline stage)
                └───────────┘ └───────────┘
```
This is the main extension point of mqMonitor. Each pipeline stage needs a worker that consumes messages from its queue, processes them, and forwards to the next stage.
Add your new stage to the `Pipeline.Stages` array in `MqMonitor.API/appsettings.json`:
```json
{
  "Pipeline": {
    "PipelineExchange": "processes.pipeline",
    "Stages": [
      { "Name": "report", "DisplayName": "Report", "QueueName": "processes.report", "RoutingKey": "pipeline.report", "MaxPriority": 10, "PrefetchCount": 1, "DlqName": "processes.report.dlq", "RetryDelayMs": 5000, "MaxRetries": 3 },
      { "Name": "account", "DisplayName": "Account", "QueueName": "processes.account", "RoutingKey": "pipeline.account", "MaxPriority": 10, "PrefetchCount": 1, "DlqName": "processes.account.dlq", "RetryDelayMs": 5000, "MaxRetries": 3 },

      // Add your new stage here:
      { "Name": "myStage", "DisplayName": "My Stage", "QueueName": "processes.my-stage", "RoutingKey": "pipeline.myStage", "MaxPriority": 10, "PrefetchCount": 1, "DlqName": "processes.my-stage.dlq", "RetryDelayMs": 5000, "MaxRetries": 3 }
    ]
  }
}
```

Stage configuration fields:
| Field | Description |
|---|---|
| `Name` | Unique identifier used internally (lowercase, no spaces) |
| `DisplayName` | Human-readable name shown in the UI |
| `QueueName` | RabbitMQ queue name (convention: `processes.<name>`) |
| `RoutingKey` | Routing key for the pipeline exchange (convention: `pipeline.<name>`) |
| `MaxPriority` | Max message priority level (1-10) |
| `PrefetchCount` | Max unacknowledged messages delivered to the worker at once (QoS prefetch) |
| `DlqName` | Dead Letter Queue name (convention: `processes.<name>.dlq`) |
| `RetryDelayMs` | Delay in ms before retrying a failed message |
| `MaxRetries` | Max retry attempts before sending to the DLQ |
Note: The topology setup (`RabbitMqTopologySetup`) automatically creates the queue, DLQ, retry queue, and all bindings from this configuration. No manual RabbitMQ setup is needed.
```bash
# From the repository root:
mkdir examples/MqMonitor.Example.MyStageWorker
```

Create the `.csproj` file:
```xml
<!-- examples/MqMonitor.Example.MyStageWorker/MqMonitor.Example.MyStageWorker.csproj -->
<Project Sdk="Microsoft.NET.Sdk.Worker">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.1" />
  </ItemGroup>
  <ItemGroup>
    <ProjectReference Include="..\..\MqMonitor.Application\MqMonitor.Application.csproj" />
    <ProjectReference Include="..\..\MqMonitor.Domain\MqMonitor.Domain.csproj" />
    <ProjectReference Include="..\..\MqMonitor.Infra\MqMonitor.Infra.csproj" />
  </ItemGroup>
</Project>
```

Add to the solution:
```bash
dotnet sln add examples/MqMonitor.Example.MyStageWorker/MqMonitor.Example.MyStageWorker.csproj
```

Create `Program.cs`:

```csharp
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Options;
using MqMonitor.Application;
using MqMonitor.Domain.Enums;
using MqMonitor.Domain.Messaging.Interfaces;
using MqMonitor.Infra.Configuration;
using MqMonitor.Infra.Messaging.Contracts;
using MqMonitor.Infra.RabbitMq;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddMqMonitor(builder.Configuration);
var host = builder.Build();

// Configure RabbitMQ topology (creates queues if they don't exist)
using (var scope = host.Services.CreateScope())
{
    var topology = scope.ServiceProvider.GetRequiredService<RabbitMqTopologySetup>();
    topology.Configure();
}

// Resolve services
var connectionFactory = host.Services.GetRequiredService<RabbitMqConnectionFactory>();
var publisher = host.Services.GetRequiredService<IMessagePublisher>();
var pipelineSettings = host.Services.GetRequiredService<IOptions<PipelineSettings>>().Value;
var logger = host.Services.GetRequiredService<ILogger<Program>>();

// ─── CONFIGURE THESE 3 VALUES ────────────────────────────────
const string TARGET_STAGE = "myStage";   // Must match the Name in appsettings
const string? NEXT_STAGE = "nextStage";  // Name of the next stage, or null if this is the final stage
const int ERROR_PERCENTAGE = 10;         // Simulated error rate (0-100)
// ─────────────────────────────────────────────────────────────

var stage = pipelineSettings.Stages.First(s => s.Name == TARGET_STAGE);
var workerName = $"worker-{TARGET_STAGE}-{Environment.MachineName}-{Guid.NewGuid().ToString()[..8]}";
logger.LogInformation("Starting {Worker} for stage '{Stage}'", workerName, TARGET_STAGE);

var channel = connectionFactory.CreateChannel();
channel.BasicQos(prefetchSize: 0, prefetchCount: (ushort)stage.PrefetchCount, global: false);

var consumer = new AsyncEventingBasicConsumer(channel);
consumer.Received += async (_, ea) =>
{
    var retryCount = 0;
    if (ea.BasicProperties.Headers?.TryGetValue("x-retry-count", out var rc) == true)
        retryCount = Convert.ToInt32(rc);

    try
    {
        var processEvent = JsonSerializer.Deserialize<ProcessEvent>(
            Encoding.UTF8.GetString(ea.Body.ToArray()));
        if (processEvent == null)
        {
            channel.BasicReject(ea.DeliveryTag, requeue: false);
            return;
        }

        // 1. Notify: stage started
        publisher.PublishEvent(new ProcessEvent
        {
            ProcessId = processEvent.ProcessId,
            Status = ProcessStatusEnum.StageStarted.ToConstant(),
            Worker = workerName,
            CurrentStage = TARGET_STAGE,
            Message = processEvent.Message,
            Priority = processEvent.Priority,
            Timestamp = DateTime.UtcNow
        }, RabbitMqConstants.ProcessStageStarted);

        // ──────────────────────────────────────────────
        // 2. YOUR BUSINESS LOGIC HERE
        //    Replace the simulated delay with real work.
        await Task.Delay(Random.Shared.Next(5000, 30001));
        // ──────────────────────────────────────────────

        // 3. Check for failure (replace with real error handling)
        if (Random.Shared.Next(100) < ERROR_PERCENTAGE)
        {
            var errorMsg = $"Failure at stage '{TARGET_STAGE}'";
            publisher.PublishEvent(new ProcessEvent
            {
                ProcessId = processEvent.ProcessId,
                Status = ProcessStatusEnum.Failed.ToConstant(),
                Worker = workerName, CurrentStage = TARGET_STAGE,
                ErrorMessage = errorMsg, Message = processEvent.Message,
                Priority = processEvent.Priority, Timestamp = DateTime.UtcNow
            }, RabbitMqConstants.ProcessFailed);

            // Trigger saga compensation for all completed stages
            publisher.PublishEvent(new ProcessEvent
            {
                ProcessId = processEvent.ProcessId,
                Status = ProcessStatusEnum.Compensating.ToConstant(),
                Worker = workerName, CurrentStage = TARGET_STAGE,
                ErrorMessage = errorMsg, Message = processEvent.Message,
                Priority = processEvent.Priority, Timestamp = DateTime.UtcNow
            }, RabbitMqConstants.ProcessCompensating);
        }
        else if (NEXT_STAGE != null)
        {
            // 4a. Stage completed → forward to next stage
            publisher.PublishEvent(new ProcessEvent
            {
                ProcessId = processEvent.ProcessId,
                Status = ProcessStatusEnum.StageCompleted.ToConstant(),
                Worker = workerName, CurrentStage = TARGET_STAGE,
                NextStage = NEXT_STAGE, Message = processEvent.Message,
                Priority = processEvent.Priority, Timestamp = DateTime.UtcNow
            }, RabbitMqConstants.ProcessStageCompleted);

            publisher.PublishToPipeline(new ProcessEvent
            {
                ProcessId = processEvent.ProcessId,
                Status = ProcessStatusEnum.Queued.ToConstant(),
                CurrentStage = NEXT_STAGE, Message = processEvent.Message,
                Priority = processEvent.Priority, Timestamp = DateTime.UtcNow
            }, $"pipeline.{NEXT_STAGE}", (byte)processEvent.Priority);
        }
        else
        {
            // 4b. Final stage → process finished
            publisher.PublishEvent(new ProcessEvent
            {
                ProcessId = processEvent.ProcessId,
                Status = ProcessStatusEnum.Finished.ToConstant(),
                Worker = workerName, CurrentStage = TARGET_STAGE,
                Message = processEvent.Message, Priority = processEvent.Priority,
                Timestamp = DateTime.UtcNow
            }, RabbitMqConstants.ProcessFinished);
        }

        channel.BasicAck(ea.DeliveryTag, multiple: false);
    }
    catch (Exception ex)
    {
        logger.LogError(ex, "[{Worker}] Error processing message", workerName);
        if (retryCount < stage.MaxRetries)
        {
            // Retry: publish to per-stage retry queue (TTL auto-routes back)
            var retryQueueName = $"{stage.QueueName}.retry";
            var props = channel.CreateBasicProperties();
            props.Persistent = true;
            props.Headers = new Dictionary<string, object> { { "x-retry-count", retryCount + 1 } };
            if (ea.BasicProperties.Priority > 0) props.Priority = ea.BasicProperties.Priority;
            channel.BasicPublish(exchange: "", routingKey: retryQueueName, basicProperties: props, body: ea.Body);
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        }
        else
        {
            // Max retries exceeded → reject sends to DLQ via dead letter exchange
            channel.BasicReject(ea.DeliveryTag, requeue: false);
        }
    }
};

channel.BasicConsume(queue: stage.QueueName, autoAck: false, consumer: consumer);
logger.LogInformation("[{Worker}] Listening on queue '{Queue}'", workerName, stage.QueueName);

await host.RunAsync();
```

Add the worker service to `docker-compose.yml`:
```yaml
my-stage-worker:
  build:
    context: .
    dockerfile: examples/Dockerfile   # Shared Dockerfile for all workers
    args:
      PROJECT_NAME: MqMonitor.Example.MyStageWorker
  container_name: mqmonitor-my-stage-worker
  environment:
    RabbitMq__HostName: rabbitmq
    RabbitMq__Port: 5672
    RabbitMq__UserName: ${RABBITMQ_DEFAULT_USER}
    RabbitMq__Password: ${RABBITMQ_DEFAULT_PASS}
  depends_on:
    rabbitmq:
      condition: service_healthy
```

Rebuild and start the stack:

```bash
docker compose up -d --build
```

Verify in the RabbitMQ Management UI (http://localhost:15672) that your new queue has 1 consumer connected.
```
Message arrives on stage queue
  ▼ Worker publishes process.stage.started
  ▼ Worker executes business logic
  ▼ On SUCCESS:
      Has next stage → publishes process.stage.completed + forwards to next queue
      Final stage    → publishes process.finished
  ▼ On FAILURE:
      Publishes process.failed + process.compensating (triggers saga compensation)
  ▼ On EXCEPTION:
      Retries via per-stage retry queue (TTL auto-routes back)
      After max retries → BasicReject sends to DLQ
```
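The exception path above reduces to a small routing decision on the `x-retry-count` header. This Python sketch is illustrative only (not the actual C# worker); `"retry-queue"` stands for `processes.<stage>.retry` and `"dlq"` for the `BasicReject` path to `processes.<stage>.dlq`:

```python
def route_failed_message(headers: dict, max_retries: int = 3) -> tuple[str, dict]:
    """Mirror the worker's catch block: retry until MaxRetries, then dead-letter.

    Returns (destination, headers to republish with).
    """
    retries = int(headers.get("x-retry-count", 0))
    if retries < max_retries:
        # Republish to the retry queue with the counter bumped;
        # the queue's TTL routes the message back to the stage queue.
        return "retry-queue", {**headers, "x-retry-count": retries + 1}
    # Max retries exceeded: BasicReject(requeue=False) dead-letters via the DLX.
    return "dlq", headers
```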
- Real-time monitoring – SignalR WebSocket pushes updates instantly to the frontend
- Dynamic pipeline – Stages configured in `appsettings.json`; RabbitMQ topology is created automatically
- Kanban board – Visual process flow across pipeline stages with live updates
- Saga pattern (choreography) – Automatic compensation in reverse order on failures
- Process cancellation – Cancel running processes with automatic saga compensation
- Metrics dashboard – Process counts, stage breakdown, success/failure rates
- Retry + DLQ – Per-stage retry queues with TTL and Dead Letter Queues
- Priority queues – Message priority support across all pipeline stages
- Clean Architecture – 8-project solution with clear dependency boundaries
- Docker ready – Full stack with one `docker compose up`
- .NET 8 – API, workers, and producer
- ASP.NET Core – REST API with Swagger
- Entity Framework Core 8 – PostgreSQL ORM with Code First migrations
- RabbitMQ.Client 6.8 – Message broker integration
- AutoMapper 13 – Object mapping between layers
- SignalR – Real-time WebSocket communication
- React 19 – UI framework
- TypeScript 5.9 – Type-safe development
- Vite 7 – Build tool and dev server
- Tailwind CSS 4 – Utility-first styling
- @microsoft/signalr – Real-time WebSocket client
- Radix UI – Accessible dialog components
- Lucide React – Icon library
- Sonner – Toast notifications
- RabbitMQ 3 – Message broker with Management UI
- PostgreSQL 16 – Process state database
- Docker & Docker Compose – Container orchestration
- Nginx – Frontend static file serving
```
mqMonitor/
├── MqMonitor.DTO/                  # Data Transfer Objects (no dependencies)
├── MqMonitor.Domain/               # Entities, enums, interfaces
│   ├── Entities/                   # ProcessExecutionModel, SagaStepModel, EventLogModel
│   ├── Enums/                      # ProcessStatusEnum
│   ├── Messaging/Interfaces/       # IMessagePublisher
│   └── Services/Interfaces/        # IProcessQueryService, IEventProjectionService
├── MqMonitor.Infra.Interfaces/     # Repository interfaces
├── MqMonitor.Infra/                # Infrastructure implementations
│   ├── Configuration/              # RabbitMqConstants, PipelineSettings
│   ├── Context/                    # MonitorDbContext (EF Core)
│   ├── Mapping/Profiles/           # AutoMapper profiles
│   ├── Messaging/Contracts/        # ProcessEvent, CancelProcessCommand
│   ├── RabbitMq/                   # ConnectionFactory, Publisher, TopologySetup
│   ├── Repository/                 # EF Core repositories
│   └── Services/                   # EventProjectionService, ProcessQueryService
├── MqMonitor.Application/          # DI composition root (Initializer.cs)
├── MqMonitor.API/                  # ASP.NET Core Web API
│   ├── Controllers/                # ProcessesController, QueuesController
│   ├── Consumers/                  # ProcessEventConsumer, CancelCommandConsumer, CompensationConsumer
│   ├── Hubs/                       # MonitorHub (SignalR)
│   └── Services/                   # QueueStatsBackgroundService
├── examples/                       # Example workers and tools
│   ├── Dockerfile                  # Shared multi-stage Dockerfile
│   ├── MqMonitor.Producer/         # CLI tool for sending test processes
│   ├── MqMonitor.Example.ReportWorker/
│   ├── MqMonitor.Example.AccountWorker/
│   ├── MqMonitor.Example.RoutineWorker/
│   ├── MqMonitor.Example.PaymentWorker/
│   ├── MqMonitor.Example.NotificationWorker/
│   └── MqMonitor.Example.AuditWorker/
├── mqmonitor-app/                  # React frontend
│   ├── src/
│   │   ├── components/             # kanban/, process/, queue/, table/, ui/
│   │   ├── contexts/               # ProcessContext, QueueContext
│   │   ├── hooks/                  # useProcess, useQueue
│   │   ├── pages/                  # DashboardPage, ProcessPage
│   │   ├── services/               # processService, queueService
│   │   └── types/                  # TypeScript interfaces
│   └── Dockerfile                  # Node build + Nginx serve
├── scripts/                        # init-db.sql
├── docker-compose.yml              # Full stack orchestration
├── .env.example                    # Environment template
└── mqMonitor.sln                   # .NET solution file
```
```
MqMonitor.DTO (no deps)
 └── MqMonitor.Domain
      └── MqMonitor.Infra.Interfaces
           └── MqMonitor.Infra
                └── MqMonitor.Application
                     ├── MqMonitor.API
                     └── examples/
                          ├── MqMonitor.Producer
                          └── MqMonitor.Example.*
```
```bash
cp .env.example .env
```

```ini
# PostgreSQL
POSTGRES_DB=process_monitor
POSTGRES_USER=monitor
POSTGRES_PASSWORD=your_secure_password_here
POSTGRES_PORT=5432

# RabbitMQ
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=your_secure_password_here
RABBITMQ_PORT=5672
RABBITMQ_MANAGEMENT_PORT=15672

# Monitor API
MONITOR_API_PORT=5000
ASPNETCORE_ENVIRONMENT=Development

# Frontend
FRONTEND_PORT=3000
```

IMPORTANT: Never commit the `.env` file with real credentials. Only `.env.example` should be version controlled.
```bash
# 1. Copy and configure environment
cp .env.example .env

# 2. Build and start all services
docker compose up -d --build

# 3. Verify all containers are running
docker compose ps
```

| Service | URL | Description |
|---|---|---|
| Frontend | http://localhost:3000 | React monitoring dashboard |
| API | http://localhost:5000 | REST API + Swagger |
| Swagger | http://localhost:5000/swagger | API documentation |
| RabbitMQ Management | http://localhost:15672 | Queue management UI |
| SignalR Hub | ws://localhost:5000/hubs/monitor | Real-time WebSocket |
| Action | Command |
|---|---|
| Start all services | `docker compose up -d` |
| Start with rebuild | `docker compose up -d --build` |
| Stop all services | `docker compose stop` |
| View status | `docker compose ps` |
| View logs (all) | `docker compose logs -f` |
| View API logs | `docker compose logs -f monitor` |
| View worker logs | `docker compose logs -f report-worker` |
| Remove containers | `docker compose down` |
| Remove containers + volumes | `docker compose down -v` |
- .NET 8 SDK
- Node.js 20+
- PostgreSQL 16
- RabbitMQ 3.x with Management plugin
```bash
# 1. Restore and build
dotnet restore mqMonitor.sln
dotnet build mqMonitor.sln

# 2. Run database migrations (from the API project)
cd MqMonitor.API
dotnet ef database update
cd ..

# 3. Start the API (includes monitor, cancel, and compensation consumers)
dotnet run --project MqMonitor.API

# 4. Start example workers (one terminal per worker)
dotnet run --project examples/MqMonitor.Example.ReportWorker
dotnet run --project examples/MqMonitor.Example.AccountWorker
dotnet run --project examples/MqMonitor.Example.RoutineWorker
dotnet run --project examples/MqMonitor.Example.PaymentWorker
dotnet run --project examples/MqMonitor.Example.NotificationWorker
dotnet run --project examples/MqMonitor.Example.AuditWorker
```

Start the frontend:

```bash
cd mqmonitor-app
npm install
npm run dev
```

The frontend is available at http://localhost:5173
```bash
dotnet run --project examples/MqMonitor.Producer
```

```
=== Process Producer (Pipeline) ===
Commands:
  send <stage> [count] [priority] - Send processes to a pipeline stage
  stages                          - List available stages
  quit                            - Exit

> send report 5       # Send 5 processes starting at Report stage
> send account 3 8    # Send 3 processes to Account with priority 8
> stages              # List all configured pipeline stages
```
Full Swagger documentation available at /swagger when running in Development mode.
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/processes` | List all processes (filters: `?stage=X`, `?status=Y`) |
| POST | `/api/processes` | Create a new process |
| GET | `/api/processes/{id}` | Get process details |
| GET | `/api/processes/{id}/events` | Get event history |
| GET | `/api/processes/{id}/saga` | Get saga step timeline |
| PUT | `/api/processes/{id}/priority` | Update process priority |
| POST | `/api/processes/{id}/cancel` | Cancel a running process |
| GET | `/api/processes/metrics` | Get execution metrics |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/queues` | Get all queue stats |
| GET | `/api/queues/{name}` | Get specific queue stats |
| GET | `/api/queues/pipeline` | Get pipeline overview |
| GET | `/api/queues/stages` | Get configured stages |
Connect to `/hubs/monitor` for real-time updates.
| Method | Description |
|---|---|
| `SubscribeToAll()` | Receive all process updates |
| `SubscribeToProcess(processId)` | Receive updates for a specific process |
| `SubscribeToQueue(queueName)` | Receive queue stats for a specific queue |
| Server Event | Payload | Description |
|---|---|---|
| `ProcessUpdated` | `ProcessExecutionInfo` | Process state changed |
| `QueueStatsUpdated` | `QueueStatusInfo[]` | Queue stats refreshed (every 5s) |
The system implements an event-driven architecture where each process flows through configurable pipeline stages:
```
┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
│  Report  │ → │ Account  │ → │ Routine  │ → │ Payment  │ → │  Notif.  │ → │  Audit   │
│  Worker  │   │  Worker  │   │  Worker  │   │  Worker  │   │  Worker  │   │  Worker  │
└────┬─────┘   └────┬─────┘   └────┬─────┘   └────┬─────┘   └────┬─────┘   └────┬─────┘
     │              │              │              │              │              │
     └──────────────┴──────────────┴──────────────┴──────────────┴──────────────┘
                                          │
                                   events exchange
                                     (process.#)
                                          │
                               ┌──────────┴──────────┐
                               │     Monitor API     │
                               │    ProcessEvent     │ → PostgreSQL (read model)
                               │      Consumer       │ → SignalR (real-time push)
                               └─────────────────────┘
```
Each process execution tracks its saga steps. On failure, the system automatically compensates completed stages in reverse order:
```
Normal flow:   Report ✓ → Account ✓ → Routine ✓ → Payment ✗ (FAILED)
Compensation:  Routine → Account → Report   (reverse order)
Final state:   All completed steps marked as COMPENSATED
```
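The reverse-order rule is simple enough to state as code. This Python sketch is illustrative only (the real logic lives in the compensation consumer and the workers), and assumes saga steps are recorded with a `stage` and a `status`:

```python
def compensate(saga_steps: list[dict]) -> list[str]:
    """Given saga steps in execution order, return the stages to compensate.

    Only COMPLETED steps are undone, newest first, mirroring the
    reverse-order compensation rule of the choreographed saga.
    """
    return [s["stage"] for s in reversed(saga_steps) if s["status"] == "COMPLETED"]
```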
| Exchange | Type | Purpose |
|---|---|---|
| `processes.events` | Topic | Process lifecycle events (`process.#`) |
| `processes.commands` | Topic | Control commands (`cancel.process`) |
| `processes.pipeline` | Topic | Stage-to-stage routing (`pipeline.<stage>`) |
| `processes.dlx` | Topic | Dead Letter Exchange |
| Queue | Binding | Consumer |
|---|---|---|
| `processes.monitor` | `process.#` on events exchange | ProcessEventConsumer (API) |
| `processes.cancel` | `cancel.process` on commands exchange | CancelCommandConsumer (API) |
| `processes.compensation` | `process.compensating` on events exchange | CompensationConsumer (API) |
| `processes.<stage>` | `pipeline.<stage>` on pipeline exchange | Stage Worker |
| `processes.<stage>.dlq` | `pipeline.<stage>.#` on DLX | None (investigation) |
| `processes.<stage>.retry` | TTL routes back to pipeline exchange | Auto (RabbitMQ TTL) |
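The bindings above rely on RabbitMQ topic-exchange wildcards: `*` matches exactly one dot-separated word, `#` matches zero or more words. A rough Python approximation (simplified: it does not handle the zero-word `#` edge case exactly as the broker does):

```python
import re

def topic_matches(pattern: str, routing_key: str) -> bool:
    """Approximate RabbitMQ topic matching: '*' = one word, '#' = any tail."""
    escaped = re.escape(pattern)
    escaped = escaped.replace(r"\*", r"[^.]+")  # one word, no dots
    escaped = escaped.replace(r"\#", r".*")     # any number of words
    return re.fullmatch(escaped, routing_key) is not None
```

For example, `process.#` on the events exchange catches `process.stage.started`, `process.failed`, and every other lifecycle event, which is how the monitor queue sees everything.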
| Pattern | Implementation |
|---|---|
| CQRS | Write via RabbitMQ events, read via projected PostgreSQL model |
| Event Sourcing (append-only log) | event_logs table stores all raw events |
| Saga (Choreography) | Workers decide flow; compensation in reverse step order |
| Idempotent Consumer | Deduplication by EventId in EventProjectionService |
| Competing Consumers | Multiple worker instances can share the same stage queue |
| Dead Letter Queue | Failed messages routed to per-stage DLQ via DLX |
| Retry with TTL | Per-stage retry queue with configurable delay, auto-routes back |
| Priority Queue | RabbitMQ x-max-priority on all pipeline queues |
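The idempotent-consumer row deserves a concrete shape. This Python sketch is illustrative only (not the actual `EventProjectionService`): under RabbitMQ's at-least-once delivery, a duplicate event must be detected by its `EventId` and skipped:

```python
class EventProjection:
    """Sketch of idempotent projection: apply each EventId at most once."""

    def __init__(self):
        self.seen: set[str] = set()   # in the real service this check hits the event_logs table
        self.applied: list[str] = []

    def project(self, event: dict) -> bool:
        """Return True if the event was applied, False if it was a duplicate."""
        eid = event["EventId"]
        if eid in self.seen:
            return False  # duplicate delivery: safe to ack and drop
        self.seen.add(eid)
        self.applied.append(eid)
        return True
```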
| Table | PK | Purpose |
|---|---|---|
| `process_executions` | `process_id` | Current state of each process (read model) |
| `event_logs` | `event_id` | Append-only log of all events (event store) |
| `saga_steps` | `step_id` | Saga step tracking per process |
Backup:

```bash
docker compose exec postgres pg_dump -U monitor process_monitor > backup.sql
```

Restore:

```bash
docker compose exec -T postgres psql -U monitor process_monitor < backup.sql
```

Check:

```bash
docker compose logs -f <worker-name>
```

Common causes:
- RabbitMQ not healthy yet – workers depend on the `service_healthy` condition
- Stage name in worker doesn't match the `appsettings.json` configuration
- Environment variables `RabbitMq__HostName` and credentials not set
Check: RabbitMQ Management UI > Queues > Consumers column
Common causes:
- Old Docker containers still running: `docker ps -a | grep mqmonitor`
- Fix: `docker compose down && docker compose up -d --build`
Common causes:
- Worker crashed during processing – the message was not acknowledged
- Fix: check the worker logs; the message will be redelivered after the worker restarts
Check: Browser DevTools > Console for SignalR connection errors
Common causes:
- API not running or CORS not configured for frontend URL
- SignalR connection failed – check `ws://localhost:5000/hubs/monitor`
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Make your changes
- Build the solution (`dotnet build mqMonitor.sln`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Follow Clean Architecture dependency rules (inner layers never reference outer ones)
- Domain entities use private setters with factory methods (`Create`, `Reconstruct`)
- All RabbitMQ constants are defined in `RabbitMqConstants.cs`
- AutoMapper profiles: 2 per entity (EF ↔ Domain, Domain ↔ DTO)
- Database: snake_case table/column names via EF conventions
Developed by Rodrigo Landim Carneiro
This project is licensed under the MIT License β see the LICENSE file for details.
- Built with .NET 8
- Messaging by RabbitMQ
- Database by PostgreSQL
- Frontend by React + Vite
- Real-time by SignalR
If you find this project useful, please consider giving it a star!