- Introduced proc_readline() and proc_splitline() using linecache for top-plugins

- Introduced proc_readlines() and proc_splitlines() using linecache for top-plugins
- Introduced proc_pidlist() for top-plugins
- New tchg() function to format the time depending on width
commit 04b08fba60bc3d1300fda3ebe6634c15cc6be24e 1 parent 699acf3
@dagwieers authored
6 ChangeLog
@@ -1,4 +1,4 @@
-* 0.7.0svn - ... - release 26/11/2009
+* 0.7.0svn - ... - release 10/02/2010
- Fix external plugins on python 2.2 and older (eg. RHEL3)
- Documentation improvements
- Implement linecache for top-plugins (caching for statistics)
@@ -7,6 +7,10 @@
- Added --profile option to get profiling statistics when you exit dstat
- Show a message with the default options when no stats are being specified
- Improved page allocation numbers in vm plugin (Hirofumi Ogawa)
+- Introduced proc_readline() and proc_splitline() using linecache for top-plugins
+- Introduced proc_readlines() and proc_splitlines() using linecache for top-plugins
+- Introduced proc_pidlist() for top-plugins
+- New tchg() function to format the time depending on width
* 0.7.0 - Tokyo - release 25/11/2009
- Fixed dstat_disk plugin for total calculation on 2.6.25+ kernels (Noel J. Bergman)
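The last bullet's tchg() behavior, degrading precision step by step until the value fits the column, can be sketched standalone. This is an illustrative reproduction using Python 3 floor division (the diff itself is Python 2):

```python
def tchg(minutes, width):
    "Render a duration given in minutes, dropping precision to fit width."
    ret = '%2dh%02d' % (minutes // 60, minutes % 60)   # hours + minutes
    if len(ret) > width:
        ret = '%2dh' % (minutes // 60)                 # hours only
    if len(ret) > width:
        ret = '%2dd' % (minutes // 60 // 24)           # days
    if len(ret) > width:
        ret = '%2dw' % (minutes // 60 // 24 // 7)      # weeks
    return ret

print(tchg(125, 5))  # → ' 2h05'
print(tchg(125, 4))  # → ' 2h'
```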
2  README
@@ -32,4 +32,4 @@ help and your feedback to fix the remaining problems.
If you have improvements or bugreports, please send them to:
- <dag@wieers.com>
+ <dag@wieers.com>
1  TODO
@@ -19,7 +19,6 @@ contact me as well. :) Send an email to: Dag Wieers <dag@wieers.com>
+ Look into adding sched_setscheduler() calls for improved priority
### General improvements
-+ Base debug runtime stats on schedstat of own pid (when possible), make plugin out of it
+ Implement better (?) protection against counter rollovers (see mail from Sebastien Prud'homme)
### Documentation (help welcome!)
24 docs/counter-rollovers.html
@@ -4,7 +4,7 @@
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta name="generator" content="AsciiDoc 8.5.1" />
-<title>All you ever wanted to know about counter-rollovers and dstat</title>
+<title>All you ever wanted to know about counter-rollovers in Dstat</title>
<style type="text/css">
/* Debug borders */
p, li, dt, dd, div, pre, h1, h2, h3, h4, h5, h6 {
@@ -544,12 +544,12 @@
</head>
<body>
<div id="header">
-<h1>All you ever wanted to know about counter-rollovers and dstat</h1>
+<h1>All you ever wanted to know about counter-rollovers in Dstat</h1>
</div>
<div id="content">
<h2 id="_what_you_need_to_know_about_counter_rollovers">What you need to know about counter rollovers</h2>
<div class="sectionbody">
-<div class="paragraph"><p>Unfortunately, dstat is susceptible for counter rollovers, which may give
+<div class="paragraph"><p>Unfortunately, Dstat is susceptible to counter rollovers, which may give
you bogus performance output. Linux currently implements counters as 32bit
values (not sure on 64bit platforms). This means a counter can go up to
2^32 (= 4294967296 = 4G) values.</p></div>
@@ -559,19 +559,19 @@ <h2 id="_what_you_need_to_know_about_counter_rollovers">What you need to know ab
interfaces, this happens after 1.6 seconds.</p></div>
<div class="paragraph"><p>Since /proc is updated every second, this becomes almost impossible to catch.</p></div>
</div>
-<h2 id="_how_does_this_impact_dstat">How does this impact dstat ?</h2>
+<h2 id="_how_does_this_impact_dstat">How does this impact Dstat ?</h2>
<div class="sectionbody">
-<div class="paragraph"><p>Currently dstat has a problem if you specify delays that are too big. I.e.
-using 60 or 120 seconds delay in dstat will make dstat check these counters
+<div class="paragraph"><p>Currently Dstat has a problem if you specify delays that are too big. I.e.
+using 60 or 120 seconds delay in Dstat will make Dstat check these counters
only once per minute or every two minutes. In the case the value is reset,
it might be lower than the previous value (which causes negative values) or
worse, the value is actually higher (which will go unnoticed and you get
-bogus information and dstat won&#8217;t know).</p></div>
+bogus information and Dstat won&#8217;t know).</p></div>
<div class="paragraph"><p>This is very problematic, and it&#8217;s important you are aware of this.</p></div>
</div>
<h2 id="_what_are_the_solutions">What are the solutions ?</h2>
<div class="sectionbody">
-<div class="paragraph"><p>The only fix for dstat is to check more often than the specified delay.
+<div class="paragraph"><p>The only fix for Dstat is to check more often than the specified delay.
Unfortunately, this requires a re-design (or an ugly hack).</p></div>
<div class="paragraph"><p>There are plans to use 64bit counters on Linux and/or changing the output from
using bytes to kbytes. None of this is sure. (add pointers to threads)</p></div>
@@ -583,9 +583,9 @@ <h2 id="_what_are_the_solutions">What are the solutions ?</h2>
<h2 id="_what_can_i_do">What can I do ?</h2>
<div class="sectionbody">
<div class="paragraph"><p>Since this is Open Source, you are free to fix this and send me the fix. Or
-help with a redesign of dstat to overcome this problem. Also look at the
-TODO file to see what other changes are expected in a redesign of dstat.</p></div>
-<div class="paragraph"><p>Since I have a lot of other responsibilities and am currently not using dstat
+help with a redesign of Dstat to overcome this problem. Also look at the
+TODO file to see what other changes are expected in a redesign of Dstat.</p></div>
+<div class="paragraph"><p>Since I have a lot of other responsibilities and am currently not using Dstat
for something where this problem matters much, I will have no time to look at
it closely (unless the fix or the redesign is made fairly simple). It all
depends on how quick I think I can fix/redesign it and how much time I have.</p></div>
@@ -603,7 +603,7 @@ <h2 id="_what_can_i_do">What can I do ?</h2>
<div id="footnotes"><hr /></div>
<div id="footer">
<div id="footer-text">
-Last updated 2006-12-12 16:38:30 CEST
+Last updated 2010-02-11 11:26:02 CEST
</div>
</div>
</body>
31 docs/counter-rollovers.txt
@@ -1,9 +1,7 @@
-All you ever wanted to know about counter-rollovers and dstat
-=============================================================
+= All you ever wanted to know about counter-rollovers in Dstat
-What you need to know about counter rollovers
----------------------------------------------
-Unfortunately, dstat is susceptible for counter rollovers, which may give
+== What you need to know about counter rollovers
+Unfortunately, Dstat is susceptible to counter rollovers, which may give
you bogus performance output. Linux currently implements counters as 32bit
values (not sure on 64bit platforms). This means a counter can go up to
2^32 (= 4294967296 = 4G) values.
@@ -16,21 +14,19 @@ interfaces, this happens after 1.6 seconds.
Since /proc is updated every second, this becomes almost impossible to catch.
-How does this impact dstat ?
-----------------------------
-Currently dstat has a problem if you specify delays that are too big. I.e.
-using 60 or 120 seconds delay in dstat will make dstat check these counters
+== How does this impact Dstat ?
+Currently Dstat has a problem if you specify delays that are too big. I.e.
+using 60 or 120 seconds delay in Dstat will make Dstat check these counters
only once per minute or every two minutes. In the case the value is reset,
it might be lower than the previous value (which causes negative values) or
worse, the value is actually higher (which will go unnoticed and you get
-bogus information and dstat won't know).
+bogus information and Dstat won't know).
This is very problematic, and it's important you are aware of this.
-What are the solutions ?
-------------------------
-The only fix for dstat is to check more often than the specified delay.
+== What are the solutions ?
+The only fix for Dstat is to check more often than the specified delay.
Unfortunately, this requires a re-design (or an ugly hack).
There are plans to use 64bit counters on Linux and/or changing the output from
@@ -43,13 +39,12 @@ re-calculate the negative values (by adding 2^32 to them).
If the rollovers happen only sporadically, you can just ignore those values.
-What can I do ?
----------------
+== What can I do ?
Since this is Open Source, you are free to fix this and send me the fix. Or
-help with a redesign of dstat to overcome this problem. Also look at the
-TODO file to see what other changes are expected in a redesign of dstat.
+help with a redesign of Dstat to overcome this problem. Also look at the
+TODO file to see what other changes are expected in a redesign of Dstat.
-Since I have a lot of other responsibilities and am currently not using dstat
+Since I have a lot of other responsibilities and am currently not using Dstat
for something where this problem matters much, I will have no time to look at
it closely (unless the fix or the redesign is made fairly simple). It all
depends on how quick I think I can fix/redesign it and how much time I have.
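The fix suggested above for sporadic rollovers, adding 2^32 to a negative delta, looks like this in practice. `rollover_delta` is a hypothetical helper for illustration only, not part of Dstat:

```python
def rollover_delta(prev, curr, width=32):
    """Delta between two samples of a fixed-width kernel counter,
    correcting for at most one rollover between the samples."""
    delta = curr - prev
    if delta < 0:
        # The counter wrapped past 2**width between samples; undo the wrap.
        delta += 2 ** width
    return delta

# A 32-bit byte counter sampled just before and just after wrapping:
print(rollover_delta(4294967000, 200))  # → 496, not -4294966800
```

Note this only recovers single rollovers; if the counter wraps more than once between samples (the "worse" case described above), the loss is undetectable from the samples alone.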
24 docs/dstat.1
@@ -2,7 +2,7 @@
.\" It was generated using the DocBook XSL Stylesheets (version 1.69.1).
.\" Instead of manually editing it, you probably should edit the DocBook XML
.\" source for it and then use the DocBook XSL Stylesheets to regenerate it.
-.TH "DSTAT" "1" "11/26/2009" "\ 0.7.0" "\ "
+.TH "DSTAT" "1" "02/11/2010" "\ 0.7.0" "\ "
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
@@ -37,13 +37,13 @@ Users of Sleuthkit might find Sleuthkit\(cqs dstat being renamed to datastat to
enable cpu stats (system, user, idle, wait, hardware interrupt, software interrupt)
.TP
\-C 0,3,total
-include cpu0, cpu3 and total
+include cpu0, cpu3 and total (when using \-c/\-\-cpu)
.TP
\-d, \-\-disk
enable disk stats (read, write)
.TP
\-D total,hda
-include hda and total
+include total and hda (when using \-d/\-\-disk)
.TP
\-g, \-\-page
enable page stats (page in, page out)
@@ -52,7 +52,7 @@ enable page stats (page in, page out)
enable interrupt stats
.TP
\-I 5,10
-include interrupt 5 and 10
+include interrupt 5 and 10 (when using \-i/\-\-int)
.TP
\-l, \-\-load
enable load average stats (1 min, 5 mins, 15mins)
@@ -64,7 +64,7 @@ enable memory stats (used, buffers, cache, free)
enable network stats (receive, send)
.TP
\-N eth1,total
-include eth1 and total
+include eth1 and total (when using \-n/\-\-net)
.TP
\-p, \-\-proc
enable process stats (runnable, uninterruptible, new)
@@ -76,7 +76,7 @@ enable I/O request stats (read, write requests)
enable swap stats (used, free)
.TP
\-S swap1,total
-include swap1 and total
+include swap1 and total (when using \-s/\-\-swap)
.TP
\-t, \-\-time
enable time/date output
@@ -160,6 +160,9 @@ disable intermediate updates when delay > 1
.TP
\-\-output file
write CSV output to file
+.TP
+\-\-profile
+show profiling statistics when exiting dstat
.SH "PLUGINS"
While anyone can create their own dstat plugins (and contribute them) dstat ships with a number of plugins already that extend its capabilities greatly. Here is an overview of the plugins dstat ships with:
.sp
@@ -179,6 +182,9 @@ number of dbus connections (needs python\-dbus)
\-\-disk\-util
per disk utilization in percentage
.TP
+\-\-dstat
+show dstat cputime consumption and latency
+.TP
\-\-fan
fan speed (needs ACPI)
.TP
@@ -254,6 +260,9 @@ show power usage
\-\-proc\-count
show total number of processes
.TP
+\-\-qmail
+show qmail queue sizes (needs qmail)
+.TP
\-\-rpc
show RPC client calls stats
.TP
@@ -275,6 +284,9 @@ system temperature sensors
\-\-top\-bio
show most expensive block I/O process
.TP
+\-\-top\-childwait
+show the process waiting the most for a child
+.TP
\-\-top\-cpu
show most expensive CPU process
.TP
44 docs/dstat.1.html
@@ -622,7 +622,7 @@ <h2 id="_options">OPTIONS</h2>
</dt>
<dd>
<p>
- include cpu0, cpu3 and total
+ include cpu0, cpu3 and total (when using -c/--cpu)
</p>
</dd>
<dt class="hdlist1">
@@ -638,7 +638,7 @@ <h2 id="_options">OPTIONS</h2>
</dt>
<dd>
<p>
- include hda and total
+ include total and hda (when using -d/--disk)
</p>
</dd>
<dt class="hdlist1">
@@ -662,7 +662,7 @@ <h2 id="_options">OPTIONS</h2>
</dt>
<dd>
<p>
- include interrupt 5 and 10
+ include interrupt 5 and 10 (when using -i/--int)
</p>
</dd>
<dt class="hdlist1">
@@ -694,7 +694,7 @@ <h2 id="_options">OPTIONS</h2>
</dt>
<dd>
<p>
- include eth1 and total
+ include eth1 and total (when using -n/--net)
</p>
</dd>
<dt class="hdlist1">
@@ -726,7 +726,7 @@ <h2 id="_options">OPTIONS</h2>
</dt>
<dd>
<p>
- include swap1 and total
+ include swap1 and total (when using -s/--swap)
</p>
</dd>
<dt class="hdlist1">
@@ -939,6 +939,14 @@ <h2 id="_options">OPTIONS</h2>
write CSV output to file
</p>
</dd>
+<dt class="hdlist1">
+--profile
+</dt>
+<dd>
+<p>
+ show profiling statistics when exiting dstat
+</p>
+</dd>
</dl></div>
</div>
<h2 id="_plugins">PLUGINS</h2>
@@ -988,6 +996,14 @@ <h2 id="_plugins">PLUGINS</h2>
</p>
</dd>
<dt class="hdlist1">
+--dstat
+</dt>
+<dd>
+<p>
+ show dstat cputime consumption and latency
+</p>
+</dd>
+<dt class="hdlist1">
--fan
</dt>
<dd>
@@ -1188,6 +1204,14 @@ <h2 id="_plugins">PLUGINS</h2>
</p>
</dd>
<dt class="hdlist1">
+--qmail
+</dt>
+<dd>
+<p>
+ show qmail queue sizes (needs qmail)
+</p>
+</dd>
+<dt class="hdlist1">
--rpc
</dt>
<dd>
@@ -1244,6 +1268,14 @@ <h2 id="_plugins">PLUGINS</h2>
</p>
</dd>
<dt class="hdlist1">
+--top-childwait
+</dt>
+<dd>
+<p>
+ show the process waiting the most for a child
+</p>
+</dd>
+<dt class="hdlist1">
--top-cpu
</dt>
<dd>
@@ -1511,7 +1543,7 @@ <h2 id="_author">AUTHOR</h2>
<div id="footer">
<div id="footer-text">
Version 0.7.0<br />
-Last updated 2009-11-26 03:35:03 CEST
+Last updated 2010-02-11 14:02:23 CEST
</div>
</div>
</body>
16 docs/dstat.1.txt
@@ -44,13 +44,13 @@ information.
interrupt)
-C 0,3,total::
- include cpu0, cpu3 and total
+ include cpu0, cpu3 and total (when using -c/--cpu)
-d, --disk::
enable disk stats (read, write)
-D total,hda::
- include hda and total
+ include total and hda (when using -d/--disk)
-g, --page::
enable page stats (page in, page out)
@@ -59,7 +59,7 @@ information.
enable interrupt stats
-I 5,10::
- include interrupt 5 and 10
+ include interrupt 5 and 10 (when using -i/--int)
-l, --load::
enable load average stats (1 min, 5 mins, 15mins)
@@ -71,7 +71,7 @@ information.
enable network stats (receive, send)
-N eth1,total::
- include eth1 and total
+ include eth1 and total (when using -n/--net)
-p, --proc::
enable process stats (runnable, uninterruptible, new)
@@ -83,7 +83,7 @@ information.
enable swap stats (used, free)
-S swap1,total::
- include swap1 and total
+ include swap1 and total (when using -s/--swap)
-t, --time::
enable time/date output
@@ -189,6 +189,9 @@ Here is an overview of the plugins dstat ships with:
--disk-util::
per disk utilization in percentage
+--dstat::
+ show dstat cputime consumption and latency
+
--fan::
fan speed (needs ACPI)
@@ -288,6 +291,9 @@ Here is an overview of the plugins dstat ships with:
--top-bio::
show most expensive block I/O process
+--top-childwait::
+ show the process waiting the most for a child
+
--top-cpu::
show most expensive CPU process
12 docs/examples.html
@@ -549,16 +549,16 @@
<div id="content">
<div id="preamble">
<div class="sectionbody">
-<div class="paragraph"><p>I&#8217;ve written a few examples that make use of the dstat classes.</p></div>
+<div class="paragraph"><p>I&#8217;ve written a few examples that make use of the Dstat classes.</p></div>
<div class="paragraph"><p>The following examples currently exist:</p></div>
<div class="literalblock">
<div class="content">
-<pre><tt>read.py - shows how to access dstat data
-mstat.py - small sub-second ministat tool</tt></pre>
+<pre><tt>read.py - shows how to access dstat data
+mstat.py - small sub-second ministat tool</tt></pre>
</div></div>
-<div class="paragraph"><p>Please send other examples or tools that make use of dstat classes
+<div class="paragraph"><p>Please send other examples or tools that make use of Dstat classes
or changes to extend the current infrastructure.</p></div>
-<div class="paragraph"><p>I&#8217;m not particularly happy with the current interface to dstat,
+<div class="paragraph"><p>I&#8217;m not particularly happy with the current interface to Dstat,
so any hints on how to improve it are welcome. Also look at the
TODO for future changes.</p></div>
<div class="admonitionblock">
@@ -575,7 +575,7 @@
<div id="footnotes"><hr /></div>
<div id="footer">
<div id="footer-text">
-Last updated 2006-06-16 09:01:03 CEST
+Last updated 2010-02-11 11:26:39 CEST
</div>
</div>
</body>
13 docs/examples.txt
@@ -1,17 +1,16 @@
-Dstat examples
-==============
+= Dstat examples
-I've written a few examples that make use of the dstat classes.
+I've written a few examples that make use of the Dstat classes.
The following examples currently exist:
- read.py - shows how to access dstat data
- mstat.py - small sub-second ministat tool
+ read.py - shows how to access dstat data
+ mstat.py - small sub-second ministat tool
-Please send other examples or tools that make use of dstat classes
+Please send other examples or tools that make use of Dstat classes
or changes to extend the current infrastructure.
-I'm not particularly happy with the current interface to dstat,
+I'm not particularly happy with the current interface to Dstat,
so any hints on how to improve it are welcome. Also look at the
TODO for future changes.
View
59 docs/performance.html
@@ -549,27 +549,64 @@
<div id="content">
<h2 id="_introduction">Introduction</h2>
<div class="sectionbody">
-<div class="paragraph"><p>Since dstat is written in python, it is not optimized for performance.</p></div>
-<div class="paragraph"><p>When doing performance analysis, it is always important to verify that
-the monitoring tool is not messing with the performance numbers.
+<div class="paragraph"><p>Since Dstat is written in python, it is not optimized for performance.
+But that doesn&#8217;t mean that Dstat performs badly, it performs quite well
+given it&#8217;s written in python, and a lot of dedication went into profiling
+and optimizing Dstat and Dstat plugins.</p></div>
+<div class="paragraph"><p>But when doing performance analysis, it is always important to verify
+that the monitoring tool is not interfering with the performance numbers.
(eg. writing to disk, using cpu/memory/network, increasing load)</p></div>
-<div class="paragraph"><p>Depending on the stats being used and the load on the server itself
+</div>
+<h2 id="_compare_with_baseline">Compare with baseline</h2>
+<div class="sectionbody">
+<div class="paragraph"><p>Depending on the plugins being used and the load on the server itself
the impact Dstat has on the system you are monitoring might be
considerable. A lot of plugins are pretty fast (less than 0.1ms on
-an modest 1.2Ghz laptop, but some plugins may use up to 3ms using
-up to 2% of your CPU).</p></div>
+a modest 1.2GHz laptop), but some plugins may use up to 3ms or even
+up to 2% of your CPU. (eg. each top-plugin scans the process-list)</p></div>
<div class="paragraph"><p>Before performing any tests please verify for yourself what impact
Dstat has on your test results and keep that in mind when analysing
-the results afterwards.</p></div>
-<div class="paragraph"><p>In case the impact is higher than expected, reduce the number of stats
-and remove expensive stats or even look at the plugin you&#8217;re using and
-send me optimisations.</p></div>
+the results afterwards. Especially if you suspect Dstat of
+influencing your results, do a baseline run with and without the Dstat
+commandline.</p></div>
+</div>
+<h2 id="_selection_of_plugins">Selection of plugins</h2>
+<div class="sectionbody">
+<div class="paragraph"><p>In case the impact is higher than expected, reduce the number of plugins
+and remove expensive plugins, or even better, look at the plugin you&#8217;re
+using and send me optimizations.</p></div>
<div class="paragraph"><p>Newer python versions are also faster than older ones, and hardware is
only becoming faster at a pace that these considerations may not hold
any longer.</p></div>
+</div>
+<h2 id="_debugging_and_profiling_dstat">Debugging and profiling Dstat</h2>
+<div class="sectionbody">
<div class="paragraph"><p>If you need feedback about plugin performance, use the --debug option
to profile different plugins. If you use -t together with --debug, you
can see the time deviation on your system in relation to load/plugins.</p></div>
+<div class="paragraph"><p>If you want to profile certain plugins, you can use the --profile option
+which provides you with detailed information of the function calls that
+are the most expensive.</p></div>
+<div class="paragraph"><p>You can also run the dstat plugin (--dstat) to look what overhead (cputime)
+and response (latency) Dstat has during runtime, which can be very useful
+to compare with your baseline and the system in idle state.</p></div>
+<div class="paragraph"><p>One common way to profile a single plugin is to use the following
+commandline:</p></div>
+<div class="literalblock">
+<div class="content">
+<pre><tt>dstat -t --dstat --debug --profile
+dstat -t --dstat --top-cpu --debug --profile</tt></pre>
+</div></div>
+<div class="paragraph"><p>The default profiling infrastructure is quite expensive, so it is important
+that you first make a baseline including the profiling itself, then
+compare it against the same commandline including the plugin you want to
+profile.</p></div>
+</div>
+<h2 id="_improving_dstat_8217_s_footprint_even_more">Improving Dstat&#8217;s footprint even more</h2>
+<div class="sectionbody">
+<div class="paragraph"><p>Another way to win a few CPU cycles is to pre-compile the Dstat plugins
+by running the compileall.py script that comes with python on your
+plugins directory. It can save about 10% in execution time.</p></div>
<div class="paragraph"><p>Remember that invisible plugins (that run out of your terminal window)
do take up cycles because the information is still being collected and
possibly written to CSV output.</p></div>
@@ -600,7 +637,7 @@ <h2 id="_performance_tuning">Performance tuning</h2>
<div id="footnotes"><hr /></div>
<div id="footer">
<div id="footer-text">
-Last updated 2009-11-24 02:49:07 CEST
+Last updated 2010-02-11 11:24:41 CEST
</div>
</div>
</body>
61 docs/performance.txt
@@ -1,36 +1,66 @@
-Dstat performance
-=================
+= Dstat performance
-Introduction
-------------
-Since dstat is written in python, it is not optimized for performance.
+== Introduction
+Since Dstat is written in python, it is not optimized for performance.
+But that doesn't mean that Dstat performs badly, it performs quite well
+given it's written in python, and a lot of dedication went into profiling
+and optimizing Dstat and Dstat plugins.
-When doing performance analysis, it is always important to verify that
-the monitoring tool is not messing with the performance numbers.
+But when doing performance analysis, it is always important to verify
+that the monitoring tool is not interfering with the performance numbers.
(eg. writing to disk, using cpu/memory/network, increasing load)
-Depending on the stats being used and the load on the server itself
+== Compare with baseline
+Depending on the plugins being used and the load on the server itself
the impact Dstat has on the system you are monitoring might be
considerable. A lot of plugins are pretty fast (less than 0.1ms on
-an modest 1.2Ghz laptop, but some plugins may use up to 3ms using
-up to 2% of your CPU).
+a modest 1.2GHz laptop), but some plugins may use up to 3ms or even
+up to 2% of your CPU. (eg. each top-plugin scans the process-list)
Before performing any tests please verify for yourself what impact
Dstat has on your test results and keep that in mind when analysing
-the results afterwards.
+the results afterwards. Especially if you suspect Dstat of
+influencing your results, do a baseline run with and without the Dstat
+commandline.
-In case the impact is higher than expected, reduce the number of stats
-and remove expensive stats or even look at the plugin you're using and
-send me optimisations.
+== Selection of plugins
+In case the impact is higher than expected, reduce the number of plugins
+and remove expensive plugins, or even better, look at the plugin you're
+using and send me optimizations.
Newer python versions are also faster than older ones, and hardware is
only becoming faster at a pace that these considerations may not hold
any longer.
+== Debugging and profiling Dstat
If you need feedback about plugin performance, use the --debug option
to profile different plugins. If you use -t together with --debug, you
can see the time deviation on your system in relation to load/plugins.
+If you want to profile certain plugins, you can use the --profile option
+which provides you with detailed information of the function calls that
+are the most expensive.
+
+You can also run the dstat plugin (--dstat) to look what overhead (cputime)
+and response (latency) Dstat has during runtime, which can be very useful
+to compare with your baseline and the system in idle state.
+
+One common way to profile a single plugin is to use the following
+commandline:
+
+ dstat -t --dstat --debug --profile
+ dstat -t --dstat --top-cpu --debug --profile
+
+The default profiling infrastructure is quite expensive, so it is important
+that you first make a baseline including the profiling itself, then
+compare it against the same commandline including the plugin you want to
+profile.
+
+== Improving Dstat's footprint even more
+Another way to win a few CPU cycles is to pre-compile the Dstat plugins
+by running the compileall.py script that comes with python on your
+plugins directory. It can save about 10% in execution time.
+
Remember that invisible plugins (that run out of your terminal window)
do take up cycles because the information is still being collected and
possibly written to CSV output.
@@ -40,8 +70,7 @@ the system, but I have no experience with writing python modules in C.
Any feedback on this is welcomed.
-Performance tuning
-------------------
+== Performance tuning
The following documents may be useful to tune a system for performance
* http://people.redhat.com/alikins/system_tuning.html[]
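The pre-compilation trick mentioned above can also be driven from Python's bundled compileall module rather than the standalone compileall.py script. A minimal sketch; the plugin directory below is only an assumed example path, point it at wherever your Dstat plugins actually live:

```python
import compileall

# Byte-compile every plugin once so later runs load cached bytecode
# instead of re-parsing the source each time dstat starts.
# '/usr/share/dstat' is an assumed install location, not a Dstat default.
compileall.compile_dir('/usr/share/dstat', quiet=True)
```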
2  docs/screen.html
@@ -603,7 +603,7 @@
<div id="footnotes"><hr /></div>
<div id="footer">
<div id="footer-text">
-Last updated 2006-12-12 16:39:55 CEST
+Last updated 2010-02-11 13:58:16 CEST
</div>
</div>
</body>
3  docs/screen.txt
@@ -1,5 +1,4 @@
-Configuring screen to display multiple dstat for different systems
-==================================================================
+= Configuring screen to display multiple dstat for different systems
Here is an example of how I monitor 5 nodes in a cluster with a minimum
of effort using screen:
89 dstat
@@ -17,7 +17,7 @@
from __future__ import generators
try:
- import sys, os, time, sched, re
+ import sys, os, time, sched, re, getopt
import types, resource, getpass, glob, linecache
except KeyboardInterrupt:
pass
@@ -111,10 +111,9 @@ class Options:
}
try:
- import getopt
opts, args = getopt.getopt(args, 'acdfghilmno:prstTvyC:D:I:M:N:S:V',
['all', 'all-plugins', 'bw', 'blackonwhite', 'debug',
- 'filesystem', 'float', 'full', 'gonuts', 'help', 'integer',
+ 'filesystem', 'float', 'full', 'help', 'integer',
'list', 'mods', 'modules', 'nocolor', 'noheaders', 'noupdate',
'output=', 'pidfile=', 'profile', 'version', 'vmstat'] + allplugins)
except getopt.error, exc:
@@ -1605,6 +1604,7 @@ char = {
}
def set_theme():
+ "Provide a set of colors to use"
if op.blackonwhite:
theme = {
'title': ansi['darkblue'],
@@ -1741,7 +1741,11 @@ def matchpipe(fileobj, string, tmout = 0.001):
raise Exception, 'Nothing found during matchpipe data collection'
return None
-def linecache_readlines(filename):
+def proc_readlines(filename):
+ "Return the lines of a file, one by one"
+# for line in open(filename).readlines():
+# yield line
+
### Implemented linecache (for top-plugins)
i = 1
while True:
@@ -1750,7 +1754,11 @@ def linecache_readlines(filename):
yield line
i += 1
-def linecache_splitlines(filename):
+def proc_splitlines(filename):
+ "Return the split lines of a file, one by one"
+# for line in open(filename).readlines():
+# yield line.split()
+
### Implemented linecache (for top-plugins)
i = 1
while True:
@@ -1759,6 +1767,33 @@ def linecache_splitlines(filename):
yield line.split()
i += 1
+def proc_readline(filename):
+ "Return the first line of a file"
+# return open(filename).read()
+ return linecache.getline(filename, 1)
+
+def proc_splitline(filename):
+ "Return the first line of a file, split into fields"
+# return open(filename).read().split()
+ return linecache.getline(filename, 1).split()
+
+### FIXME: Should we cache this within every step ?
+def proc_pidlist():
+ "Return a list of process IDs"
+ dstat_pid = str(os.getpid())
+ for pid in os.listdir('/proc/'):
+ try:
+ ### Is it a pid ?
+ int(pid)
+
+ ### Filter out dstat
+ if pid == dstat_pid: continue
+
+ yield pid
+
+ except ValueError:
+ continue
+
def dchg(var, width, base):
"Convert decimal to string given base and length"
c = 0
@@ -1798,6 +1833,17 @@ def fchg(var, width, base):
c = -1
return ret, c
+def tchg(var, width):
+ "Convert time string to given length"
+ ret = '%2dh%02d' % (var / 60, var % 60)
+ if len(ret) > width:
+ ret = '%2dh' % (var / 60)
+ if len(ret) > width:
+ ret = '%2dd' % (var / 60 / 24)
+ if len(ret) > width:
+ ret = '%2dw' % (var / 60 / 24 / 7)
+ return ret
+
def cprintlist(varlist, type, width, scale):
"Return all columns color printed"
ret = sep = ''
@@ -1852,7 +1898,7 @@ def cprint(var, type = 'f', width = 4, scale = 1000):
elif type in ('s'):
ret, c = str(var), ctext
elif type in ('t'):
- ret, c = '%2dh%02d' % (var / 60, var % 60), ctext
+ ret, c = tchg(var, width), ctext
else:
raise Exception, 'Type %s not known to dstat.' % type
@@ -1886,6 +1932,7 @@ def cprint(var, type = 'f', width = 4, scale = 1000):
return ret
def header(totlist, vislist):
+ "Return the header for a set of module counters"
line = ''
### Process title
for o in vislist:
@@ -1905,6 +1952,7 @@ def header(totlist, vislist):
return line + '\n'
def csvheader(totlist):
+ "Return the CSV header for a set of module counters"
line = ''
### Process title
for o in totlist:
@@ -1972,6 +2020,7 @@ def gettermsize():
return termsize
def gettermcolor(color=True):
+ "Return whether the system can use colors or not"
if color and sys.stdout.isatty():
try:
import curses
@@ -1985,11 +2034,13 @@ def gettermcolor(color=True):
### We only want to filter out paths, not ksoftirqd/1
def basename(name):
+ "Perform basename on paths only"
if name[0] in ('/', '.'):
return os.path.basename(name)
return name
def getnamebypid(pid, name):
+ "Return the name of a process by taking best guesses and exclusion"
ret = None
try:
# cmdline = open('/proc/%s/cmdline' % pid).read().split('\0')
@@ -2106,6 +2157,19 @@ def dev(maj, min):
def exit(ret):
sys.stdout.write(ansi['reset'])
+
+ if op.profile:
+ rows, cols = gettermsize()
+ import pstats
+ p = pstats.Stats('dstat_profile.log')
+# p.sort_stats('name')
+# p.print_stats()
+ p.sort_stats('cumulative').print_stats(rows - 12)
+# p.sort_stats('time').print_stats(rows - 12)
+# p.sort_stats('file').print_stats('__init__')
+# p.sort_stats('time', 'cum').print_stats(.5, 'init')
+# p.print_callees()
+
sys.exit(ret)
def listplugins():
@@ -2158,6 +2222,7 @@ def showplugins():
print mod
def main():
+ "Initialization of the program, terminal, internal structures"
global pagesize, cpunr, hz, ansi, theme, outputfile
global totlist, inittime
global update, missed
@@ -2318,6 +2383,7 @@ def main():
sys.stdout.write('\n')
def perform(update):
+ "Inner loop that calculates counters and constructs output"
global totlist, oldvislist, vislist, showheader, rows, cols
global elapsed, totaltime, starttime
global loop, step, missed
@@ -2453,17 +2519,6 @@ if __name__ == '__main__':
if op.pidfile and os.path.exists(op.pidfile):
os.remove(op.pidfile)
- if op.profile:
- rows, cols = gettermsize()
- import pstats
- p = pstats.Stats('dstat_profile.log')
-# p.sort_stats('name')
-# p.print_stats()
- p.sort_stats('cumulative').print_stats(rows - 12)
-# p.sort_stats('time').print_stats(rows - 12)
-# p.sort_stats('file').print_stats('__init__')
-# p.sort_stats('time', 'cum').print_stats(.5, 'init')
-# p.print_callees()
exit(0)
else:
op = Options('')
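The proc_pidlist(), proc_splitline() and proc_splitlines() helpers that this commit introduces are not part of the hunks shown here. A minimal sketch of how they might look, assuming a linecache-based implementation as the plugin diffs below suggest (names match the calls, bodies are a guess):

```python
import os
import linecache

ownpid = str(os.getpid())

def proc_pidlist():
    "Return the numeric /proc entries (pids), excluding the dstat process itself"
    pids = []
    for pid in os.listdir('/proc/'):
        try:
            int(pid)                    # keep numeric entries only
        except ValueError:
            continue
        if pid != ownpid:               # filter out dstat itself
            pids.append(pid)
    return pids

def proc_splitline(filename, linenr=1):
    "Split one (cached) line of a /proc file into words"
    return linecache.getline(filename, linenr).split()

def proc_splitlines(filename):
    "Split every (cached) line of a /proc file into word lists"
    return [line.split() for line in linecache.getlines(filename)]
```

With linecache doing the caching, repeated reads of the same /proc file within one interval avoid reopening it, which is the point of this change for the top-plugins.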
7 plugins/dstat_dstat.py
@@ -2,11 +2,14 @@
class dstat_plugin(dstat):
"""
- Provide more information related to the dstat process
+ Provide more information related to the dstat process.
+
+ The dstat cputime is the total cputime dstat requires per second. On a
+ system with one cpu and one core, the total cputime is 1000ms. On a system
+ with 2 cores the total is 2000ms.
"""
def __init__(self):
self.name = 'dstat'
- self.nick = ('time', 'latency')
self.vars = ('cputime', 'latency')
self.type = 'd'
self.width = 4
21 plugins/dstat_top_bio.py
@@ -12,7 +12,6 @@ def __init__(self):
self.type = 's'
self.width = 22
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -22,14 +21,8 @@ def check(self):
def extract(self):
self.val['usage'] = 0.0
self.val['block i/o process'] = ''
- for pid in os.listdir('/proc/'):
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset2.has_key(pid):
self.pidset2[pid] = {'read_bytes:': 0, 'write_bytes:': 0}
@@ -37,20 +30,16 @@ def extract(self):
self.pidset1[pid] = {'read_bytes:': 0, 'write_bytes:': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# for line in open('/proc/%s/io' % pid).readlines():
-# l = line.split()
- for l in linecache_splitlines('/proc/%s/io' % pid):
+ for l in proc_splitlines('/proc/%s/io' % pid):
if len(l) != 2: continue
self.pidset2[pid][l[0]] = int(l[1])
-
- except ValueError:
- continue
except IOError:
continue
+ except IndexError:
+ continue
read_usage = (self.pidset2[pid]['read_bytes:'] - self.pidset1[pid]['read_bytes:']) * 1.0 / elapsed
write_usage = (self.pidset2[pid]['write_bytes:'] - self.pidset1[pid]['write_bytes:']) * 1.0 / elapsed
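The top_bio plugin derives per-second throughput by differencing the cumulative read_bytes/write_bytes counters between two snapshots, as the last two lines above show. A self-contained sketch of that pattern (the helper name is hypothetical):

```python
def io_rates(snap1, snap2, elapsed):
    "Per-second read/write rates from two cumulative /proc/<pid>/io snapshots"
    read_usage = (snap2['read_bytes:'] - snap1['read_bytes:']) * 1.0 / elapsed
    write_usage = (snap2['write_bytes:'] - snap1['write_bytes:']) * 1.0 / elapsed
    return read_usage, write_usage

# Example: 8192 bytes read and 4096 bytes written over a 2-second interval
rates = io_rates({'read_bytes:': 0, 'write_bytes:': 1024},
                 {'read_bytes:': 8192, 'write_bytes:': 5120}, 2)
```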
56 plugins/dstat_top_childwait.py
@@ -0,0 +1,56 @@
+### Dstat plugin for the process waiting most on its children
+### Displays the name of the process whose children consume the most CPU time
+###
+### Authority: dag@wieers.com
+
+global cpunr
+
+class dstat_plugin(dstat):
+ def __init__(self):
+ self.name = 'most waiting for'
+ self.vars = ('child process',)
+ self.type = 's'
+ self.width = 16
+ self.scale = 0
+
+ def extract(self):
+ self.val['max'] = 0.0
+ for pid in proc_pidlist():
+ try:
+ ### Using dopen() will cause too many open files
+ l = proc_splitline('/proc/%s/stat' % pid)
+ except IOError:
+ continue
+
+ if len(l) < 17: continue
+
+ ### Reset previous value if it doesn't exist
+ if not self.set1.has_key(pid):
+ self.set1[pid] = 0
+
+ self.set2[pid] = int(l[15]) + int(l[16])
+ usage = (self.set2[pid] - self.set1[pid]) * 1.0 / elapsed / cpunr
+
+ ### Is it a new topper ?
+ if usage <= self.val['max']: continue
+
+ self.val['max'] = usage
+ self.val['name'] = getnamebypid(pid, l[1][1:-1])
+ self.val['pid'] = pid
+
+ ### Debug (show PID)
+# self.val['process'] = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name'])
+
+ if step == op.delay:
+ self.set1.update(self.set2)
+
+ def show(self):
+ if self.val['max'] == 0.0:
+ return '%-*s' % (self.width, '')
+ else:
+ return '%s%-*s%s' % (theme['default'], self.width-3, self.val['name'][0:self.width-3], cprint(self.val['max'], 'p', 3, 34))
+
+ def showcsv(self):
+ return '%s / %d%%' % (self.val['name'], self.val['max'])
+
+# vim:ts=4:sw=4:et
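The new childwait plugin sums fields 16 and 17 of /proc/<pid>/stat (cutime and cstime, the CPU ticks accumulated by waited-for children), which land at indexes 15 and 16 after split(). A sketch against a sample line; note that a naive split() misparses comm names containing spaces, a limitation the plugins share:

```python
def child_ticks(statline):
    "Sum cutime and cstime (fields 16 and 17) of a /proc/<pid>/stat line"
    l = statline.split()            # note: breaks if comm contains spaces
    return int(l[15]) + int(l[16])

# Sample line with cutime=30 and cstime=25 (trailing fields abridged)
sample = '9999 (worker) S 1 9999 9999 0 -1 4194304 100 200 0 0 50 40 30 25'
```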
19 plugins/dstat_top_cpu.py
@@ -14,26 +14,15 @@ def __init__(self):
self.type = 's'
self.width = 16
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def extract(self):
self.val['max'] = 0.0
self.val['cpu process'] = ''
- for pid in os.listdir('/proc/'):
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Using dopen() will cause too many open files
-# l = open('/proc/%s/stat' % pid).read().split()
- l = linecache.getline('/proc/%s/stat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/stat' % pid)
except IOError:
continue
@@ -53,8 +42,8 @@ def extract(self):
self.val['max'] = usage
self.val['pid'] = pid
-# self.val['name'] = getnamebypid(pid, name)
- self.val['name'] = name
+ self.val['name'] = getnamebypid(pid, name)
+# self.val['name'] = name
if self.val['max'] != 0.0:
self.val['cpu process'] = '%-*s%s' % (self.width-3, self.val['name'][0:self.width-3], cprint(self.val['max'], 'f', 3, 34))
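The usage figure behind this hunk is the utime+stime tick delta divided by the interval and the number of CPUs; at the customary USER_HZ of 100 the result reads directly as a percentage of total capacity. A sketch under that assumption (function name hypothetical):

```python
def cpu_usage(ticks2, ticks1, elapsed, cpunr):
    "Per-process CPU usage in percent of total capacity, assuming USER_HZ == 100"
    return (ticks2 - ticks1) * 1.0 / elapsed / cpunr
```

For example, a process burning 150 ticks (1.5 CPU-seconds) in a 1-second interval on a 2-CPU box comes out at 75 percent of total capacity.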
25 plugins/dstat_top_cputime.py
@@ -7,6 +7,9 @@ class dstat_plugin(dstat):
"""
Name and total amount of CPU time consumed in milliseconds of the process
that has the highest total amount of cputime for the measured timeframe.
+
+ On a system with one CPU and one core, the total cputime is 1000ms. On a
+ system with two cores the total cputime is 2000ms.
"""
def __init__(self):
@@ -15,7 +18,6 @@ def __init__(self):
self.type = 's'
self.width = 17
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -24,31 +26,22 @@ def check(self):
def extract(self):
self.val['result'] = 0
- self.val['process'] = ''
- for pid in os.listdir('/proc/'):
+ self.val['cputime process'] = ''
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset1.has_key(pid):
self.pidset1[pid] = {'run_ticks': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# l = open('/proc/%s/schedstat' % pid).read().split()
- l = linecache.getline('/proc/%s/schedstat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/schedstat' % pid)
except IOError:
continue
+ except IndexError:
+ continue
if len(l) != 3: continue
31 plugins/dstat_top_cputime_avg.py
@@ -7,13 +7,21 @@
### http://eaglet.rain.com/rick/linux/schedstat/
class dstat_plugin(dstat):
+ """
+ Name and average amount of CPU time consumed in milliseconds of the process
+ that has the highest average amount of cputime for the different slices for
+ the measured timeframe.
+
+ On a system with one CPU and one core, the total cputime is 1000ms. On a
+ system with two cores the total cputime is 2000ms.
+ """
+
def __init__(self):
self.name = 'highest average'
self.vars = ('cputime process',)
self.type = 's'
self.width = 17
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -22,31 +30,22 @@ def check(self):
def extract(self):
self.val['result'] = 0
- self.val['process'] = ''
- for pid in os.listdir('/proc/'):
+ self.val['cputime process'] = ''
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset1.has_key(pid):
self.pidset1[pid] = {'run_ticks': 0, 'ran': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# l = open('/proc/%s/schedstat' % pid).read().split()
- l = linecache.getline('/proc/%s/schedstat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/schedstat' % pid)
except IOError:
continue
+ except IndexError:
+ continue
if len(l) != 3: continue
23 plugins/dstat_top_io.py
@@ -10,7 +10,6 @@ def __init__(self):
self.type = 's'
self.width = 22
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -20,14 +19,8 @@ def check(self):
def extract(self):
self.val['usage'] = 0.0
self.val['i/o process'] = ''
- for pid in os.listdir('/proc/'):
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset2.has_key(pid):
self.pidset2[pid] = {'rchar:': 0, 'wchar:': 0}
@@ -35,20 +28,16 @@ def extract(self):
self.pidset1[pid] = {'rchar:': 0, 'wchar:': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# for line in open('/proc/%s/io' % pid).readlines():
-# l = line.split()
- for l in linecache_splitlines('/proc/%s/io' % pid):
+ for l in proc_splitlines('/proc/%s/io' % pid):
if len(l) != 2: continue
self.pidset2[pid][l[0]] = int(l[1])
-
- except ValueError:
- continue
except IOError:
continue
+ except IndexError:
+ continue
read_usage = (self.pidset2[pid]['rchar:'] - self.pidset1[pid]['rchar:']) * 1.0 / elapsed
write_usage = (self.pidset2[pid]['wchar:'] - self.pidset1[pid]['wchar:']) * 1.0 / elapsed
@@ -72,7 +61,7 @@ def extract(self):
self.val['i/o process'] = '%-*s%s %s' % (self.width-11, self.val['name'][0:self.width-11], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024))
### Debug (show PID)
-# self.val['i/o process'] = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name'])
+# self.val['i/o process'] = '%*s %-*s%s %s' % (5, self.val['pid'], self.width-17, self.val['name'][0:self.width-17], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024))
def showcsv(self):
return '%s / %d:%d' % (self.val['name'], self.val['read_usage'], self.val['write_usage'])
22 plugins/dstat_top_latency.py
@@ -19,7 +19,6 @@ def __init__(self):
self.type = 's'
self.width = 17
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -28,31 +27,22 @@ def check(self):
def extract(self):
self.val['result'] = 0
- self.val['process'] = ''
- for pid in os.listdir('/proc/'):
+ self.val['latency process'] = ''
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset1.has_key(pid):
self.pidset1[pid] = {'wait_ticks': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# l = open('/proc/%s/schedstat' % pid).read().split()
- l = linecache.getline('/proc/%s/schedstat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/schedstat' % pid)
except IOError:
continue
+ except IndexError:
+ continue
if len(l) != 3: continue
22 plugins/dstat_top_latency_avg.py
@@ -13,7 +13,6 @@ def __init__(self):
self.type = 's'
self.width = 17
self.scale = 0
- self.pid = str(os.getpid())
self.pidset1 = {}; self.pidset2 = {}
def check(self):
@@ -22,31 +21,22 @@ def check(self):
def extract(self):
self.val['result'] = 0
- self.val['process'] = ''
- for pid in os.listdir('/proc/'):
+ self.val['latency process'] = ''
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Reset values
if not self.pidset1.has_key(pid):
self.pidset1[pid] = {'wait_ticks': 0, 'ran': 0}
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Extract counters
-# l = open('/proc/%s/schedstat' % pid).read().split()
- l = linecache.getline('/proc/%s/stat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/schedstat' % pid)
except IOError:
continue
+ except IndexError:
+ continue
if len(l) != 3: continue
15 plugins/dstat_top_mem.py
@@ -14,24 +14,13 @@ def __init__(self):
self.type = 's'
self.width = 17
self.scale = 0
- self.pid = str(os.getpid())
def extract(self):
self.val['max'] = 0.0
- for pid in os.listdir('/proc/'):
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Using dopen() will cause too many open files
-# l = open('/proc/%s/stat' % pid).read().split()
- l = linecache.getline('/proc/%s/stat' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/stat' % pid)
except IOError:
continue
20 plugins/dstat_top_oom.py
@@ -13,7 +13,6 @@ def __init__(self):
self.type = 's'
self.width = 18
self.scale = 0
- self.pid = str(os.getpid())
def check(self):
if not os.access('/proc/self/oom_score', os.R_OK):
@@ -22,26 +21,17 @@ def check(self):
def extract(self):
self.val['max'] = 0.0
self.val['kill score'] = ''
- for pid in os.listdir('/proc/'):
+ for pid in proc_pidlist():
try:
- ### Is it a pid ?
- int(pid)
-
- ### Filter out dstat
- if pid == self.pid: continue
-
### Extract name
-# name = open('/proc/%s/stat' % pid).read().split()[1][1:-1]
- name = linecache.getline('/proc/%s/stat' % pid, 1).split()[1][1:-1]
+ name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]
### Using dopen() will cause too many open files
-# l = open('/proc/%s/oom_score' % pid).read().split()
- l = linecache.getline('/proc/%s/oom_score' % pid, 1).split()
-
- except ValueError:
- continue
+ l = proc_splitline('/proc/%s/oom_score' % pid)
except IOError:
continue
+ except IndexError:
+ continue
if len(l) < 1: continue
oom_score = int(l[0])
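Once each oom_score is parsed as above, the plugin keeps the process with the highest score via a plain running maximum. The selection, isolated from the /proc reading (pid/score pairs supplied by the caller; helper name hypothetical):

```python
def top_oom_score(scores):
    "Return the (pid, score) pair with the highest OOM kill score"
    best_pid, best_score = '', 0
    for pid, score in scores.items():
        if score > best_score:
            best_pid, best_score = pid, score
    return best_pid, best_score
```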