Run Book / System Operation Manual
Service or system overview
Service or system name:
What business need is met by this service or system? What expectations do we have about availability and performance?
(e.g. Provides reliable automated reconciliation of logistics transactions from the previous 24 hours)
What kind of system is this? Web-connected order processing? Back-end batch system? Internal HTTP-based API? ETL control system?
(e.g. Internal API for order reconciliation based on Ruby and RabbitMQ, deployed in Docker containers on Kubernetes)
Service Level Agreements (SLAs)
What explicit or implicit expectations are there from users or clients about the availability of the service or system?
(e.g. Contractual 99.9% service availability outside of the 03:00-05:00 maintenance window)
Which team owns and runs this service or system?
(e.g. The Sneaky Sharks team (Bangalore) develops and runs this service: firstname.lastname@example.org / #sneaky-sharks on Slack / Extension 9265)
Contributing applications, daemons, services, middleware
Which distinct software applications, daemons, services, etc. make up the service or system? What external dependencies does it have?
(e.g. Ruby app + RabbitMQ for source messages + PostgreSQL for reconciled transactions)
Hours of operation
During what hours does the service or system actually need to operate? Can portions or features of the system be unavailable at times if needed?
Hours of operation - core features
(e.g. 03:00-01:00 GMT+0)
Hours of operation - secondary features
(e.g. 07:00-23:00 GMT+0)
Data and processing flows
How and where does data flow through the system? What controls or triggers data flows?
(e.g. mobile requests / scheduled batch jobs / inbound IoT sensor data)
Infrastructure and network design
What servers, containers, schedulers, devices, vLANs, firewalls, etc. are needed?
(e.g. '10+ Ubuntu 14 VMs on AWS IaaS + 2 AWS Regions + 2 VPCs per Region + Route53')
Resilience, Fault Tolerance (FT) and High Availability (HA)
How is the system resilient to failure? What mechanisms for tolerating faults are implemented? How is the system/service made highly available?
(e.g. 2 Active-Active data centres across two cities + two or more nodes at each layer)
Throttling and partial shutdown
How can the system be throttled or partially shut down, e.g. to avoid flooding other dependent systems? Can the throughput be limited to (say) 100 requests per second? What kind of connection back-off schemes are in place?
Throttling and partial shutdown - external requests
(e.g. Commercial API gateway allows throttling control)
Throttling and partial shutdown - internal components
(e.g. Exponential backoff on all HTTP-based services + /health healthcheck endpoints on all services)
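The exponential backoff mentioned above can be sketched in bash; this is a minimal illustration, not the system's actual implementation, and the command being retried is supplied by the caller:

```shell
#!/usr/bin/env bash
# Sketch: exponential backoff with a 60-second cap.

backoff_delay() {
  # $1 = attempt number (1-based); prints the delay in seconds.
  local attempt=$1
  local delay=$(( 2 ** (attempt - 1) ))
  [ "$delay" -gt 60 ] && delay=60
  echo "$delay"
}

retry_with_backoff() {
  # $1 = max attempts; remaining args = command to run.
  local max_attempts=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$(backoff_delay "$attempt")"
    attempt=$(( attempt + 1 ))
  done
}
```

For example, `retry_with_backoff 5 curl -fsS http://internal-service/health` would wait 1, 2, 4 and 8 seconds between the five attempts.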
Expected traffic and load
Details of the expected throughput/traffic: call volumes, peak periods, quiet periods. What factors drive the load: bookings, page views, number of items in Basket, etc.?
(e.g. Max: 1000 requests per second with 400 concurrent users - Friday @ 16:00 to Sunday @ 18:00, driven by likelihood of barbecue activity in the neighborhood)
Hot or peak periods
Cool or quiet periods
Environmental differences
What are the main differences between Production/Live and other environments? What kinds of things might therefore not be tested in upstream environments?
(e.g. Self-signed HTTPS certificates in Pre-Production - certificate expiry may not be detected properly in Production)
Tools
What tools are available to help operate the system?
(e.g. Use the queue-cleardown.sh script to safely clear down the processing queue nightly)
Required resources
What compute, storage, database, metrics, logging, and scaling resources are needed? What are the minimum and expected maximum sizes (in CPU cores, RAM, GB disk space, GBit/sec, etc.)?
Required resources - compute
(e.g. Min: 4 VMs with 2 vCPU each. Max: around 40 VMs)
Required resources - storage
(e.g. Min: 10GB Azure blob storage. Max: around 500GB Azure blob storage)
Required resources - database
(e.g. Min: 500GB Standard Tier RDS. Max: around 2TB Standard Tier RDS)
Required resources - metrics
(e.g. Min: 100 metrics per node per minute. Max: around 6000 metrics per node per minute)
Required resources - logging
(e.g. Min: 60 log lines per node per minute (100KB). Max: around 6000 log lines per node per minute (1MB))
Required resources - other
(e.g. Min: 10 encryption requests per node per minute. Max: around 100 encryption requests per node per minute)
Security and access control
Password and PII security
What kind of security is in place for passwords and Personally Identifiable Information (PII)? Are the passwords hashed with a strong hash function and salted?
(e.g. Passwords are hashed with a 10-character salt and SHA-256)
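As an illustration of the salted-hash approach in the example (not this system's actual scheme), `openssl passwd -5` produces a salted SHA-256 crypt hash; purpose-built password schemes such as bcrypt or Argon2 are preferable in practice:

```shell
#!/usr/bin/env bash
# Sketch: salted SHA-256 crypt hashing via OpenSSL (>= 1.1.1).

hash_password() {
  # $1 = password, $2 = salt; prints a hash of the form $5$<salt>$<digest>
  openssl passwd -5 -salt "$2" "$1"
}
```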
Ongoing security checks
How will the system be monitored for security issues?
(e.g. The ABC tool scans for reported CVE issues and reports via the ABC dashboard)
Configuration management
How is configuration managed for the system?
(e.g. CloudInit bootstraps the installation of Puppet - Puppet then drives all system and application level configuration except for the XYZ service, which is configured via App.config files in Subversion)
Secrets management
How are configuration secrets managed?
(e.g. Secrets are managed with Hashicorp Vault with 3 shards for the master key)
System backup and restore
Which parts of the system need to be backed up?
(e.g. Only the CoreTransactions database in PostgreSQL and the Puppet master database need to be backed up)
How does backup happen? Is service affected? Should the system be [partially] shut down first?
(e.g. Backup happens from the read replica - live service is not affected)
How does restore happen? Is service affected? Should the system be [partially] shut down first?
(e.g. The Booking service must be switched off before Restore happens otherwise transactions will be lost)
Monitoring and alerting
Log aggregation solution
What log aggregation & search solution will be used?
(e.g. The system will use the existing in-house ELK cluster. 2000-6000 messages per minute expected at normal load levels)
Log message format
What kind of log message format will be used? Structured logging with JSON? log4j-style single-line output?
(e.g. Log messages will use log4j compatible single-line format with wrapped stack traces)
Events and error messages
What significant events, state transitions and error events may be logged?
(e.g. IDs 1000-1999: Database events; IDs 2000-2999: message bus events; IDs 3000-3999: user-initiated action events; ...)
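The ID-range convention in the example above lends itself to a simple lookup when triaging logs; this sketch mirrors the example ranges only (they are illustrative, not this system's real event IDs):

```shell
#!/usr/bin/env bash
# Sketch: map an event ID to its category using the example's 1000-wide ranges.

event_category() {
  # $1 = numeric event ID
  case $(( $1 / 1000 )) in
    1) echo "database" ;;
    2) echo "message-bus" ;;
    3) echo "user-action" ;;
    *) echo "other" ;;
  esac
}
```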
Significant metrics
What significant metrics will be generated?
(e.g. Usual VM stats (CPU, disk, threads, etc.) + around 200 application technical metrics + around 400 user-level metrics)
Health checks
How is the health of dependencies (components and systems) assessed? How does the system report its own health?
Health of dependencies
(e.g. /health HTTP endpoint for internal components that expose it. Other systems and external endpoints: typically an HTTP 200 check, plus synthetic checks for some services)
Health of service
(e.g. /health HTTP endpoint: 200 --> basic health, 500 --> bad configuration + /health/deps for checking dependencies)
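A minimal poller for the status-code convention in the example could look like this; the service URL is a placeholder and the 200/500 mapping is taken from the example above:

```shell
#!/usr/bin/env bash
# Sketch: interpret the /health status code described in the example.

classify_health() {
  # $1 = HTTP status code returned by GET /health
  case "$1" in
    200) echo "healthy" ;;
    500) echo "bad-configuration" ;;
    *)   echo "unknown" ;;
  esac
}

check_health() {
  # $1 = base URL of the service, e.g. http://localhost:8080
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1/health")
  classify_health "$code"
}
```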
Deployment and rollback
How is the software deployed? How does roll-back happen?
(e.g. We use GoCD to coordinate deployments, triggering a Chef run pulling RPMs from the internal yum repo)
Batch processing
What kind of batch processing takes place?
(e.g. Files are pushed via SFTP to the media server. The system processes up to 100 of these per hour on a
Power cycling
What needs to happen when machines are power-cycled?
(e.g. *** WARNING: we have not investigated this scenario yet! ***)
Routine and sanity checks
What kind of checks need to happen on a regular basis?
(e.g. /health endpoints should be checked every 60 seconds, plus the synthetic transaction checks run every 5 minutes via Pingdom)
Troubleshooting
How should troubleshooting happen? What tools are available?
(e.g. Use a combination of the /health endpoint checks and the abc-*.sh scripts for diagnosing typical problems)
Patching
How should patches be deployed and tested?
Normal patch cycle
(e.g. Use the standard OS patch test cycle together with deployment via Jenkins and Capistrano)
Zero-day security patches
(e.g. Use the early-warning notifications from UpGuard plus deployment via Jenkins and Capistrano)
Daylight-saving time changes
Is the software affected by daylight-saving time changes (both client and server)?
(e.g. Server clocks all set to UTC+0. All date/time data converted to UTC with offset before processing)
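The convert-to-UTC-before-processing approach in the example can be sketched as a one-liner; this assumes GNU date (as found on typical Linux servers):

```shell
#!/usr/bin/env bash
# Sketch: normalise a timestamp-with-offset to UTC before processing.
# Assumes GNU date.

to_utc() {
  # $1 = timestamp including offset, e.g. "2024-06-01 12:00:00 +0200"
  date -u -d "$1" '+%Y-%m-%dT%H:%M:%SZ'
}
```

Keeping server clocks on UTC and converting at the edges like this sidesteps daylight-saving transitions entirely.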
Data cleardown
Which data needs to be cleared down? How often? Which tools or scripts control cleardown?
(e.g. The abc-cleardown.ps1 script is run nightly to clear down the document cache)
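An age-based cleardown like the one in the example can be sketched with find; the directory and retention period here are placeholders, not this system's real values:

```shell
#!/usr/bin/env bash
# Sketch: delete cached files older than a given number of days.

cleardown_cache() {
  # $1 = cache directory, $2 = retention in days
  find "$1" -type f -mtime +"$2" -print -delete
}
```

For example, `cleardown_cache /var/cache/documents 7` would remove files not modified in the last week, printing each path as it goes.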
Log rotation
Is log rotation needed? How is it controlled?
(e.g. The Windows Event Log ABC Service is set to a maximum size of 512MB)
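On Linux hosts the equivalent control is usually delegated to logrotate; an illustrative stanza (the path and limits are assumptions, not taken from this system):

```
/var/log/myservice/*.log {
    size 100M
    rotate 7
    compress
    missingok
    notifempty
}
```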
Failover and Recovery procedures
What needs to happen when parts of the system are failed over to standby systems? What needs to happen during recovery?
Troubleshooting Failover and Recovery
What tools or scripts are available to troubleshoot failover and recovery operations?
(e.g. Start with running SELECT state_desc FROM sys.database_mirroring_endpoints on the PRIMARY node and then use the scripts in the db-failover Git repo)