fasthat, a faster heap dump analyser

Overview

fasthat is a fork of OpenJDK 6's jhat that enhances our ability to analyse large heap dumps (typically 4 to 8 GB) that we frequently work with at On-Site.

Features

Above and beyond the features already present in jhat, fasthat is enhanced in the following ways:

  • Performance:
  • Faster loading of large heap dumps, by using a deque, rather than a vector, for the FIFO in the tree-walking phase.
    • Faster execution of OQL, by using real Rhino rather than the in-JDK Rhino.
  • Functionality:
    • Referrer chains: The histogram now has the ability to look at only objects that refer to instances of a specific class. For example, if your heap dump is full of Strings, you can now look at a histogram of just the objects that refer to any String objects.
    • Language-specific models: Various object/collection types from OpenJDK, specific versions of JRuby, and Guava can now have their contents viewed conveniently in the object view.
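
The deque-vs-vector point can be illustrated with a minimal sketch: a breadth-first walk over a toy object graph, standing in for the tree-walking phase (the graph and method names here are illustrative, not fasthat's actual code). `ArrayDeque` gives unsynchronized O(1) operations at both ends, whereas `Vector` takes a monitor lock on every call and `remove(0)` shifts every remaining element.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DequeWalk {
    // Breadth-first walk using an ArrayDeque as the FIFO, the same shape
    // as a heap-dump reachability pass. With a Vector, each removeFirst()
    // would be a synchronized O(n) remove(0) instead.
    static int countReachable(Map<Integer, List<Integer>> graph, int root) {
        Deque<Integer> queue = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        queue.addLast(root);
        seen.add(root);
        while (!queue.isEmpty()) {
            int node = queue.removeFirst();
            for (int child : graph.getOrDefault(node, List.of())) {
                if (seen.add(child)) {   // true only on first visit
                    queue.addLast(child);
                }
            }
        }
        return seen.size();
    }

    public static void main(String[] args) {
        // Toy graph: 1 -> {2, 3}, 2 -> {4}, 3 -> {4}
        Map<Integer, List<Integer>> graph = Map.of(
            1, List.of(2, 3),
            2, List.of(4),
            3, List.of(4));
        System.out.println(countReachable(graph, 1)); // prints 4
    }
}
```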

As I solve more challenges in analysing our heap dumps, I'll be adding more features.

Dependencies

fasthat uses the following libraries:

  • Guava
  • Rhino

I always try to keep up to date with the latest released versions. At the time of writing, I'm using Guava 14.0.1 and Rhino 1.7R4.

I develop and build using Eclipse. Mike Virata-Stone has created an Ant build.xml that you may find useful, but I have not tested it. In particular, you may need to update it to work with the latest Guava jars.

Future directions

Since this isn't my full-time project, there are many things I'd like to improve on that I haven't yet got around to:

  • Performance:
    • Look into ways to make heap dump loading more concurrent. Right now, on our 16-core machine, sometimes the CPU usage is 1300% and sometimes it's 100%. The more of those 100% we can turn into 1300%, the better.
    • Audit the code to find and fix any weird concurrency bugs.
    • Figure out what is shareable per-thread in Rhino, and what must be distinct. Currently I've taken the very conservative approach of creating a new Rhino instance for each OQL query, but that is probably way over the top.
    • Make all the model and script operations interruptible, so that they can stop running as soon as the user hits the Stop button. Right now, the interruption only happens when the page is being written out (after all the computation has already been done and squandered).
  • Language-specific models:
    • Allow real tracing through JRuby classes, etc. In particular, this means having JRuby classes be selectable via the histogram.
    • Allow inspection of JRuby stack traces.
    • Make the object views use language-specific models much more pervasively.
    • Enable OQL queries on language-specific object properties.
    • Implement unpacking of more object types, especially for JRuby and Guava.
    • Implement models for OpenJDK 7 and JRuby 1.7.
    • Implement better support for detecting Guava data structures given that Guava doesn't have a version signature in the heap dump.
  • Miscellaneous:
    • Merge in changes to the Rhino JSR-223 engine from OpenJDK 7.
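
The interruptibility goal above can be sketched as follows: a hypothetical long-running computation that polls the thread's interrupt flag on each iteration, so it can stop as soon as the user hits the Stop button rather than after all the work is done (the names here are illustrative, not fasthat's actual code).

```java
public class InterruptibleScan {
    // A stand-in for a long model/script operation. Polling
    // Thread.interrupted() inside the loop lets the computation bail out
    // promptly when the request-handling thread is interrupted.
    static long sumUpTo(long n) throws InterruptedException {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            if (Thread.interrupted()) {
                // Thread.interrupted() clears the flag, so restore it
                // before propagating the cancellation.
                Thread.currentThread().interrupt();
                throw new InterruptedException("computation cancelled");
            }
            total += i;
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        // Uninterrupted run completes normally.
        System.out.println(sumUpTo(10)); // prints 55
    }
}
```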

Contact and licensing

fasthat is maintained by Chris Jester-Young.

All of the code from OpenJDK is licensed under GPLv2 with the Classpath Exception. All of the new code (not originating from OpenJDK) is licensed under GPLv2 or later, with the Classpath Exception.
