
Commit 8cf44c4

Update readme with longer example
1 parent 2b4bdfc commit 8cf44c4

1 file changed (+13 -13 lines)

examples/debug_python_with_pyroscope.md

Lines changed: 13 additions & 13 deletions
@@ -23,10 +23,11 @@ During the period of 100% CPU utilization, you can assume:
 
 The question is: **which part of the code is responsible for the increase in CPU utilization?** That's where flame graphs come in!
 
-## How to use flame graphs to debug performance issues and save money
+## How to use flame graphs to debug performance issues (and save $66,000 on servers)
 
 Let's say the flame graph below represents the timespan that corresponds with the "incident" in the picture above where CPU usage spiked. During this spike, the server's CPUs were spending:
 - 75% of time in `foo()`
 - 25% of time in `bar()`
+- $100,000 on server costs
 
 ![pyro_python_blog_example_00-01](https://user-images.githubusercontent.com/23323466/105620812-75197b00-5db5-11eb-92af-33e356d9bb42.png)
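To see why the widest frame is the one to attack first, it helps to weight each function's share of the flame graph by a hypothetical speed-up. The 75%/25% split below comes from the flame graph above; the "make it twice as fast" scenario is only an illustration, not something this commit claims.

```python
# Share of total CPU time per function, read off the flame graph above.
foo_share, bar_share = 0.75, 0.25

# Hypothetical: double the speed of one function and see how much of the
# *total* CPU time that removes.
saved_if_foo_halved = foo_share / 2   # 0.375 -> 37.5% of all CPU time
saved_if_bar_halved = bar_share / 2   # 0.125 -> 12.5% of all CPU time

print(saved_if_foo_halved, saved_if_bar_halved)
```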

@@ -41,7 +42,7 @@ In this case, `foo()` is taking up 75% of the total time range, so we can improv
 ## Creating a flame graph and Table with Pyroscope
 To recreate this example with actual code, we'll use Pyroscope - an open-source continuous profiler that was built specifically for debugging performance issues. To simulate the server doing work, I've created a `work(duration)` function that simulates doing work for the duration passed in. This way, we can replicate `foo()` taking 75% of time and `bar()` taking 25% of the time by producing this flame graph from the code below:
 
-![image](https://user-images.githubusercontent.com/23323466/105621021-f96cfd80-5db7-11eb-8ceb-055ffd4bbcd1.png)
+<img width="897" alt="foo_75_bar_25_minutes_30" src="https://user-images.githubusercontent.com/23323466/105665338-acf2f200-5e8b-11eb-87b7-d94b7bdda0fc.png">
 
 
 ```python
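Only the opening fence of the README's first code block lands inside the hunk above; the rest of it sits in the collapsed region before the next hunk. Pieced together from the lines that hunk removes and renames, the pre-optimization script presumably reads roughly like this; the `run_forever()` driver and the `__main__` guard are assumptions added to make the sketch runnable, and hooking the process up to the Pyroscope agent is not shown anywhere in this diff.

```python
# Sketch of the "before" code block that the diff collapses. work(), foo(),
# and bar() are taken from lines visible in the next hunk; run_forever()
# and the entry point are assumed.

# where each iteration simulates CPU time
def work(n):
    i = 0
    while i < n:
        i += 1

# This would simulate a CPU running for roughly 7.5 seconds (same scale as the comments below)
def foo():
    work(75000)

# This would simulate a CPU running for 2.5 seconds
def bar():
    work(25000)

def run_forever():
    while True:
        foo()
        bar()

if __name__ == "__main__":
    run_forever()
```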
@@ -61,25 +62,24 @@ def bar():
 ```
 Then, let's say you optimize your code to decrease `foo()` time from 75000 to 8000, but left all other portions of the code the same. The new code and flame graph would look like:
 
-![image](https://user-images.githubusercontent.com/23323466/105621075-a9db0180-5db8-11eb-9716-a9b643b9ff5e.png)
+<img width="935" alt="foo_25_bar_75_minutes_10" src="https://user-images.githubusercontent.com/23323466/105665392-cd22b100-5e8b-11eb-97cc-4dfcceb44cdc.png">
 
 ```python
-# where each iteration simulates CPU time
-def work(n):
-    i = 0
-    while i < n:
-        i += 1
-
 # This would simulate a CPU running for 0.8 seconds
-def a():
+def foo():
     # work(75000)
     work(8000)
 
 # This would simulate a CPU running for 2.5 seconds
-def b():
+def bar():
     work(25000)
 ```
+## Improving `foo()` saved us $66,000
+Thanks to the flame graphs, we were able to identify immediately that `foo()` was the bottleneck in our code. After optimizing it, we were able to significantly decrease our CPU utilization.
+
+![image](https://user-images.githubusercontent.com/23323466/105666001-1a535280-5e8d-11eb-9407-c63955ba86a1.png)
+
+
+This means your total CPU utilization decreased 66%. If you were paying $100,000 for your servers, you could now manage the same load for just $34,000.
 
-This means your total CPU utilization decreased 66%. If you were paying $100,000 dollars for your servers, you could now manage the same load for just $66,000.
 
-![image](https://user-images.githubusercontent.com/23323466/105621350-659d3080-5dbb-11eb-8a25-bf358458e5ac.png)
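As a quick sanity check of the 66% figure, you can redo the arithmetic from the `work()` iteration counts in the two code blocks; the $100,000 bill is the README's running example, and the exact result comes out to roughly two thirds, which the README rounds to 66% and $34,000.

```python
# Back-of-the-envelope check of the savings claim, treating work() iterations
# as a proxy for CPU time. Dollar figures are the README's illustrative numbers.
before = 75_000 + 25_000        # foo() + bar() before the optimization
after = 8_000 + 25_000          # foo() + bar() after the optimization

reduction = 1 - after / before  # ~0.67, quoted as 66% in the README
server_bill = 100_000
print(f"CPU work reduced by {reduction:.0%}")
print(f"Remaining bill: ${server_bill * after / before:,.0f}")
```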
