Rajendra Prasad Reddy Penumalli edited this page Oct 4, 2020 · 35 revisions

Performance Engineering = Performance Testing (which requires monitoring server performance) + Performance Analysis + Performance Tuning

Performance Roles

 * Performance Tester
 * Performance Engineer
 * Performance Architect

Performance Testing vs Performance Engineering

 * In Performance Testing, the testing cycle includes:
   * Requirement gathering
   * Scripting
   * Execution
   * Result sharing
   * Report generation
 * Performance Engineering is a post-Performance-Testing activity that includes:
   * Analyzing results with the aim of finding performance bottlenecks
   * Providing solutions to resolve the identified issues

Application Architectures:

 * Traditional application with 3-tier architecture
 * Single monolithic architecture
 * Complete micro-services architecture
 * Hybrid (monolithic + micro-services)

SDLC Models: Impact on PT and PE

 * Waterfall
 * Agile (quite challenging due to rapid application delivery and DevOps adoption)

Performance Testing

Performance testing measures the quality attributes of a system, such as scalability, reliability, and resource usage.

 * Procedure
   * UI Layer
   * Application Layer
   * DB Layer 
 * Tools
 * Real Use Cases
 * Common Issues and Causes

Performance Testing Types

 * Capacity Test
 * Load Test
 * Stress Test
 * Fail-Over Test
 * Endurance Test
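
The main difference between these test types is the load model each applies. A minimal sketch in Python (the function name and user counts are illustrative, not taken from any tool):

```python
def step_profile(start, step, steps, hold_each):
    """Virtual-user count at each minute of a stepped ramp-up."""
    profile = []
    for i in range(steps):
        profile.extend([start + i * step] * hold_each)
    return profile

# Load test: ramp to the expected peak (200 users here) and hold each step.
load_test = step_profile(start=50, step=50, steps=4, hold_each=5)
# Stress test: keep ramping past the expected peak to find the breaking point.
stress_test = step_profile(start=50, step=50, steps=8, hold_each=5)

print(max(load_test), max(stress_test))  # → 200 400
```

An endurance test would use the same peak as the load test but hold it for hours instead of minutes.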

Choosing a PT tool is based on:

 * Monitoring requirements
 * Analysis requirements
 * Reporting requirements
 * Platform requirements
 * Team skills

Performance Testing Tools

 * JMeter    : Java based
 * locust.io : Python based
 * k6.io     : JavaScript based
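
At their core, all of these tools fire concurrent requests and record response times. A tool-free, stdlib-only sketch of that idea (the throwaway local server stands in for a real application under test; this is not any tool's API):

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep request logging quiet

# Throwaway local target; a real test points at the application under test.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        assert resp.status == 200        # response assertion, as in a real script
    return time.perf_counter() - start   # response time in seconds

# 5 "virtual users", 20 requests total.
with ThreadPoolExecutor(max_workers=5) as pool:
    timings = list(pool.map(one_request, range(20)))
server.shutdown()

print("%d requests, avg %.4fs" % (len(timings), sum(timings) / len(timings)))
```

The real tools add what this sketch lacks: load profiles, think times, correlation, distributed generation, and reporting.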

Performance Testing Procedure

 * Targets and Requirements
   * Talk to stakeholders (product owners)
   * Read non-functional requirements and SLAs
   * Decide/set SLAs
 * Test Environment
   * Use a separate test-execution environment that is free from noise
   * It must be an exact replica of production
 * Test Strategy
   * Analyze typical application usage patterns (logs will help with this)
   * Plan a test for each usage pattern with identical workloads
   * Brainstorm, discuss, and finalize the load model for each type of test
   * Plan to run tests from different geographic regions if possible and required (based on the client base)
 * Scripting
   * Record and create scripts
   * Add think times
   * Add assertions
   * Add correlation for dynamic data
 * Test Data
   * Use production data or production-like data
   * Use appropriate date and time functions
 * Workflow Modelling
   * Design a workflow for each scenario
 * Environment Check
   * First manually check and verify the application
 * Execution, Monitoring, and Collecting Metrics
   * Execute the load test
   * Monitor DB and web servers for CPU, RAM, and bandwidth
   * Check results against key performance indicators
 * Results Analysis
   * Log any additional information needed
 * Reporting Results
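
The "check results against key performance indicators" step can be sketched as follows (the sample timings, the nearest-rank percentile, and the 2-second SLA are illustrative):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Stand-in for response times (seconds) collected during execution.
response_times = [0.4, 0.6, 0.5, 1.9, 0.7, 0.8, 2.4, 0.5, 0.6, 0.9]

avg = sum(response_times) / len(response_times)
p95 = percentile(response_times, 95)

# Check results against KPIs: here, average and 95th percentile vs. the SLA.
kpis = {"avg_under_2s": avg < 2.0, "p95_under_2s": p95 < 2.0}
print(kpis)
```

Averages alone hide tail latency, which is why percentile KPIs are usually checked alongside them.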

Performance Analysis

 * Procedure
   * UI Layer
   * Application layer
   * DB Layer 
 * Tools
   * Profilers
   * Analyzers
 * Real Use Cases
 * Common Issues and Causes

Performance Tuning

 * Code Tuning/Refactoring
 * Memory Tuning
 * CPU Tuning
 * IO Tuning
 * Procedure
   * UI Layer
   * Application layer
   * DB Layer 
 * Tools
 * Real Use Cases
 * Common Issues and Causes

Java

* Java heap dump
* Java thread dump
* Java Memory Efficient Coding Practices
* Coding Practices to avoid
  * Deadlock
  * Memory Leak
  * Connection Leak
* GC algorithms
* Hardware
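
The deadlock pattern listed above is language-agnostic even though this section focuses on Java. A small Python sketch of the standard avoidance practice, acquiring shared locks in one fixed global order so a circular wait cannot form (the transfer scenario is illustrative):

```python
import threading

# Two shared locks; the classic deadlock is thread 1 taking A then B
# while thread 2 takes B then A. The fix: a single global lock order.
lock_a = threading.Lock()
lock_b = threading.Lock()
balance = {"a": 100, "b": 100}

def transfer(amount):
    # Always acquire lock_a before lock_b, in every thread.
    with lock_a:
        with lock_b:
            balance["a"] -= amount
            balance["b"] += amount

threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # → {'a': 50, 'b': 150}
```

In Java, the same discipline applies to `synchronized` blocks and `java.util.concurrent` locks.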

Server Monitoring: Baselines vs Trend Lines

Defining thresholds or baselines:

Sample

 * CPU percentage should never go over 60%
 * GC pause should never be higher than 0.7 sec
 * Memory usage should always be under 70%
 * Avg. response time should always be under 2 sec
 * Active connections should be less than 300
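
These sample baselines can be encoded as machine-checkable thresholds; a sketch (the metric names and the snapshot values are illustrative):

```python
# Each baseline from the sample above as a pass/fail predicate.
THRESHOLDS = {
    "cpu_pct":      lambda v: v <= 60,
    "gc_pause_sec": lambda v: v <= 0.7,
    "memory_pct":   lambda v: v < 70,
    "avg_resp_sec": lambda v: v < 2,
    "active_conns": lambda v: v < 300,
}

def check(snapshot):
    """Return the names of metrics that violate their baseline."""
    return [name for name, ok in THRESHOLDS.items()
            if name in snapshot and not ok(snapshot[name])]

violations = check({"cpu_pct": 72, "gc_pause_sec": 0.3,
                    "memory_pct": 65, "avg_resp_sec": 2.5,
                    "active_conns": 120})
print(violations)  # cpu_pct and avg_resp_sec exceed their baselines
```

In practice the snapshot would come from a monitoring or APM tool rather than a hand-built dict.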

APM Tools

 * AppDynamics
 * New Relic

Sample Applications:

References:
