Sample code for EasySMF
This repository contains sample code for EasySMF:JE.

EasySMF:JE is a commercial product developed by Black Hill Software which provides a Java API to map z/OS SMF records. To run these samples, you will require the EasySMF:JE jar file and an EasySMF:JE license key.

A free, 30-day trial is available. You can request a trial here: EasySMF 30 Day Trial

Getting Started

There is a Tutorial to help you get started and understand how EasySMF processes SMF records. It demonstrates the basic principles behind reading SMF records and extracting their data sections and fields.

The Tutorial can be found here: EasySMF Tutorial

Sample Reports

There are a number of sample reports to show how Java can be used to process SMF data.

  1. Counting SMF Records
  2. Analyzing Duplicate SMF records
  3. Summarizing Data by Jobname
  4. Highest Contributors to R4HA Peak
  5. User Key Common
  6. A/B (Before/After) Comparison
  7. Dataset Activity

Counting SMF Records


SMF record counts by type and subtype.
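The counting step can be sketched as a tally keyed by record type and subtype. This is an illustrative, self-contained sketch; the real sample reads the type and subtype fields through the EasySMF:JE API.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the counting logic: tally records by (type, subtype).
// The record source is assumed; only the aggregation is shown here.
public class RecordCounter {
    // TreeMap keeps the report keys in sorted order, e.g. "30.1", "70.1"
    private final Map<String, Integer> counts = new TreeMap<>();

    public void count(int type, int subtype) {
        counts.merge(type + "." + subtype, 1, Integer::sum);
    }

    public int get(int type, int subtype) {
        return counts.getOrDefault(type + "." + subtype, 0);
    }
}
```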

Analyzing Duplicate SMF records

These reports were prompted by a question about whether it was possible to remove duplicate SMF records when data had been duplicated due to a processing error.

That led to another question: Are legitimate duplicate SMF records likely to occur?

The short answer is: yes, duplicate SMF records occur surprisingly often. The timestamps in SMF records are not granular enough to prevent duplicate records from being written when the same event occurs multiple times. Duplicates seemed particularly common for:

  • Type 30 subtype 1 (Job Start) records. These were probably generated by Unix processes where address spaces are reused multiple times and the job ID does not change.
  • RACF records



SmfDeDup reports whether an SMF dataset contains duplicate records, with duplicate counts by record type.

Optionally, it can write a new dataset/file with the duplicates removed (even though they might be legitimate data), and write the duplicates to another file for further analysis.
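The core de-duplication idea can be sketched as follows: treat a record's raw bytes as its identity, so a byte-identical repeat is reported as a duplicate. This is an assumption about the technique, not the SmfDeDup source itself.

```java
import java.nio.ByteBuffer;
import java.util.HashSet;
import java.util.Set;

// Sketch of duplicate detection: a record's full byte content is its identity.
// ByteBuffer provides content-based equals/hashCode, making it a usable set key.
public class DeDupSketch {
    private final Set<ByteBuffer> seen = new HashSet<>();
    private int duplicates = 0;

    /** Returns true if the record is new; false (and counts it) if a duplicate. */
    public boolean add(byte[] record) {
        if (seen.add(ByteBuffer.wrap(record.clone()))) {
            return true;
        }
        duplicates++;
        return false;
    }

    public int duplicateCount() {
        return duplicates;
    }
}
```

In a real run, records returned as new would be written to the output dataset and the duplicates routed to the analysis file.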



SmfReportDups attempts to better answer the question of whether data has been duplicated due to a processing error.

Records are grouped by SMF ID, record type and minute and a count is kept of unique and duplicate records.

Duplicate data is flagged for any minute where the number of duplicates is greater than or equal to the number of unique records.

Duplicates are checked:

  • for each SMF ID to find instances where all data from a system is duplicated
  • by SMF ID and record type to find instances where particular record types are duplicated, e.g. if a record type is copied to a separate dataset which is then copied back into the main stream.
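The flagging rule described above can be sketched for a single (SMF ID, record type, minute) group. The grouping key and record identity are assumptions for illustration; only the unique-versus-duplicate rule is shown.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the per-minute flagging rule: count unique and duplicate records
// within one (SMF ID, record type, minute) group, and flag the group when
// duplicates >= uniques.
public class DupGroup {
    private final Set<String> seen = new HashSet<>();
    private int uniqueCount = 0;
    private int dupCount = 0;

    public void add(String recordKey) {
        if (seen.add(recordKey)) {
            uniqueCount++;
        } else {
            dupCount++;
        }
    }

    public boolean flagged() {
        // Guard against flagging an empty group.
        return uniqueCount > 0 && dupCount >= uniqueCount;
    }
}
```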

Summarizing Data by Jobname


Summary of CP time, zIIP time, Connect time and EXCP count by job name.
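The aggregation step can be sketched as totals accumulated per job name. The field layout is illustrative; the real sample pulls these values from type 30 records via EasySMF:JE.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the summary step: accumulate CP time, zIIP time, connect time
// and EXCP count keyed by job name.
public class JobTotals {
    // Per-job totals: {cpSeconds, ziipSeconds, connectSeconds, excps}
    private final Map<String, double[]> totals = new TreeMap<>();

    public void add(String jobName, double cpSeconds, double ziipSeconds,
            double connectSeconds, long excps) {
        double[] t = totals.computeIfAbsent(jobName, k -> new double[4]);
        t[0] += cpSeconds;
        t[1] += ziipSeconds;
        t[2] += connectSeconds;
        t[3] += excps;
    }

    public double cpSeconds(String jobName) {
        double[] t = totals.get(jobName);
        return t == null ? 0 : t[0];
    }

    public long excps(String jobName) {
        double[] t = totals.get(jobName);
        return t == null ? 0 : (long) t[3];
    }
}
```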

Highest Contributors to R4HA Peak


Show the jobs and address spaces that used the most CPU during the R4HA peak periods.

The program reads type 30 and type 70 SMF records. CPU information from the type 30 records is collected by job name and hour, and the type 70 records are used to find the R4HA peaks.

For each of the top 5 peaks on each system, sum the CP time used by each job in the 4 hours leading up to that peak and report the top 5 jobs in the list. An estimated MSU value is also calculated, based on each job's percentage of the total CPU time multiplied by the actual MSU.

Do the same thing for zIIP on CP time.
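The MSU attribution step can be expressed as a one-line formula: a job's estimated MSU is its CP time as a fraction of all CP time in the 4-hour window, multiplied by the measured MSU at the peak. The method name is illustrative.

```java
// Sketch of the MSU attribution described above:
//   estimated MSU = peak MSU * (job CP time / total CP time in the window)
public class MsuEstimate {
    public static double estimate(double jobCpSeconds, double totalCpSeconds,
            double peakMsu) {
        if (totalCpSeconds <= 0) {
            return 0.0; // no CPU time recorded in the window
        }
        return peakMsu * (jobCpSeconds / totalCpSeconds);
    }
}
```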

User Key Common


Search type 30 records for jobs with User Key Common flags set.

Common storage in a user key is not supported after z/OS 2.3. APAR OA53355 introduced flags in the type 30 SMF record that are set if a task uses user key common storage.
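The check itself amounts to testing flag bits in the type 30 record. The mask below is a placeholder, not the real bit position; the real sample uses the named flag accessors provided by EasySMF:JE.

```java
// Illustrative flag test only: OA53355 added bits to a type 30 flag field
// that are set when a job uses user key common storage.
public class UserKeyCommonCheck {
    // Hypothetical bit position, for illustration only.
    public static final int USER_KEY_COMMON_MASK = 0b0000_0100;

    public static boolean usedUserKeyCommon(int smf30Flags) {
        return (smf30Flags & USER_KEY_COMMON_MASK) != 0;
    }
}
```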

A/B (Before/After) Comparison


Produce a report by program name showing changes in zIIP%, zIIP on CP% and CPU milliseconds per I/O before and after a specified date. This type of report may help to evaluate the impact of e.g. hardware or configuration changes.
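The metrics being compared can be sketched as simple ratios. These formulas are the natural reading of the report columns, not the exact sample code: zIIP% is zIIP time over total CPU time, and CPU milliseconds per I/O is CPU time over the EXCP count.

```java
// Sketch of the per-program metrics compared before and after the cut-over date.
public class AbMetrics {
    /** zIIP time as a percentage of total (CP + zIIP) CPU time. */
    public static double ziipPercent(double cpSeconds, double ziipSeconds) {
        double total = cpSeconds + ziipSeconds;
        return total == 0 ? 0 : 100.0 * ziipSeconds / total;
    }

    /** CPU milliseconds consumed per I/O (EXCP). */
    public static double cpuMsPerIo(double cpuSeconds, long excps) {
        return excps == 0 ? 0 : cpuSeconds * 1000.0 / excps;
    }
}
```

The A/B report would compute each metric once over the records before the date and once over the records after it, then show the difference per program name.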

Dataset Activity


List activity against datasets (read, write, update, etc.). Additional documentation is available here: Dataset Reports


Thanks for your interest.

Contributions can be made using the standard GitHub fork/pull request process.
