diff --git a/locale/en/docs/guides/diagnostics/index.md b/locale/en/docs/guides/diagnostics/index.md
index 713f170f8c0d6..7a0b1af78ace7 100644
--- a/locale/en/docs/guides/diagnostics/index.md
+++ b/locale/en/docs/guides/diagnostics/index.md
@@ -5,13 +5,16 @@ layout: docs.hbs
 
 # Diagnostics Guide
 
-These guides were created in the [Diagnostics Working Group](https://github.com/nodejs/diagnostics)
-with the objective to provide a guidance when diagnosing an issue
-in the user application.
-The documentation project is organized based on user journey.
-Those journeys are a coherent set of step-by-step procedures,
-that a user follows for problem determination of reported issues.
+These guides were created by the [Diagnostics Working Group][] with the
+objective of providing guidance when diagnosing an issue in a user's
+application.
+
+The documentation project is organized around user journeys. Those journeys
+are a coherent set of step-by-step procedures that a user can follow to
+root-cause their issues.
 
 This is the available set of diagnostics guides:
 
 * [Memory](/en/docs/guides/diagnostics/memory)
+
+[Diagnostics Working Group]: https://github.com/nodejs/diagnostics
diff --git a/locale/en/docs/guides/diagnostics/memory/index.md b/locale/en/docs/guides/diagnostics/memory/index.md
index 669f984a5f335..c807b2d45928a 100644
--- a/locale/en/docs/guides/diagnostics/memory/index.md
+++ b/locale/en/docs/guides/diagnostics/memory/index.md
@@ -11,11 +11,10 @@ In this document you can learn about how to debug memory related issues.
 * [My process runs out of memory](#my-process-runs-out-of-memory)
   * [Symptoms](#symptoms)
   * [Side Effects](#side-effects)
-  * [Debugging](#debugging)
 * [My process utilizes memory inefficiently](#my-process-utilizes-memory-inefficiently)
   * [Symptoms](#symptoms-1)
   * [Side Effects](#side-effects-1)
-  * [Debugging](#debugging-1)
+* [Debugging](#debugging)
 
 ## My process runs out of memory
 
@@ -29,28 +28,17 @@ efficient way of finding a memory leak is essential.
 The user observes continuously increasing memory usage _(can be fast or slow,
 over days or even weeks)_ then sees the process crashing and restarting by the
 process manager. The process is maybe running slower than before and the
-restarts make certain requests to fail _(load balancer responds with 502)_.
+restarts cause some requests to fail _(load balancer responds with 502)_.
 
 ### Side Effects
 
-* Process restarts due to the memory exhaustion and request are dropped on the
-  floor
+* Process restarts due to the memory exhaustion and requests are dropped
+  on the floor
* Increased GC activity leads to higher CPU usage and slower response time
* GC blocking the Event Loop causing slowness
* Increased memory swapping slows down the process (GC activity)
* May not have enough available memory to get a Heap Snapshot
 
-### Debugging
-
-To debug a memory issue we need to be able to see how much space our specific
-type of objects take, and what variables retain them to get garbage collected.
-For the effective debugging we also need to know the allocation pattern of our
-variables over time.
-
-* [Using Heap Profiler](/en/docs/guides/diagnostics/memory/using-heap-profiler/)
-* [Using Heap Snapshot](/en/docs/guides/diagnostics/memory/using-heap-snapshot/)
-* [GC Traces](/en/docs/guides/diagnostics/memory/using-gc-traces)
-
 ## My process utilizes memory inefficiently
 
 ### Symptoms
 
@@ -63,12 +51,12 @@ garbage collector activity.
* An elevated number of page faults * Higher GC activity and CPU usage -### Debugging +## Debugging -To debug a memory issue we need to be able to see how much space our specific -type of objects take, and what variables retain them to get garbage collected. -For the effective debugging we also need to know the allocation pattern of our -variables over time. +Most memory issues can be solved by determining how much space our specific +type of objects take and what variables are preventing them from being garbage +collected. It can also help to know the allocation pattern of our program over +time. * [Using Heap Profiler](/en/docs/guides/diagnostics/memory/using-heap-profiler/) * [Using Heap Snapshot](/en/docs/guides/diagnostics/memory/using-heap-snapshot/) diff --git a/locale/en/docs/guides/diagnostics/memory/using-gc-traces.md b/locale/en/docs/guides/diagnostics/memory/using-gc-traces.md index b86169e55f99a..07589480e85ca 100644 --- a/locale/en/docs/guides/diagnostics/memory/using-gc-traces.md +++ b/locale/en/docs/guides/diagnostics/memory/using-gc-traces.md @@ -11,7 +11,7 @@ one thing it's that when GC is running, your code is not. You may want to know how often and how long the garbage collection is running, and what is the outcome. -## Runnig with garbage collection traces +## Running with garbage collection traces You can see traces for garbage collection in console output of your process using the `--trace_gc` flag. @@ -21,10 +21,9 @@ $ node --trace_gc app.js ``` You might want to avoid getting traces from the entire lifetime of your -process running on a server. In that case, set the flag from within the process, -and switch it off once the need for tracing is over. - -Here's how to print GC events to stdout for one minute. +process. In that case, you can set the flag from within the process, and switch +it off once the need for tracing is over. For example, here's how to print GC +events to stdout for one minute: ```js const v8 = require('v8'); @@ -34,7 +33,7 @@ setTimeout(() => { v8.setFlagsFromString('--notrace_gc'); }, 60e3); ### Examining a trace with `--trace_gc` -Obtained traces of garbage collection looks like the following lines. +The output traces look like the following: ``` [19278:0x5408db0] 44 ms: Scavenge 2.3 (3.0) -> 1.9 (4.0) MB, 1.2 / 0.0 ms (average mu = 1.000, current mu = 1.000) allocation failure @@ -42,7 +41,7 @@ Obtained traces of garbage collection looks like the following lines. [23521:0x10268b000] 120 ms: Mark-sweep 100.7 (122.7) -> 100.6 (122.7) MB, 0.15 / 0.0 ms (average mu = 0.132, current mu = 0.137) deserialize GC in old space requested ``` -This is how to interpret the trace data (for the second line): +Let's look at the second line. Here is how to interpret the trace data:
120 | -Time since the process start in ms | +Time since the thread start in ms |
Mark-sweep | @@ -67,25 +66,31 @@ This is how to interpret the trace data (for the second line):||
100.7 | -Heap used before GC in MB | +Heap used before GC in MiB |
122.7 | -Total heap before GC in MB | +Total heap before GC in MiB |
100.6 | -Heap used after GC in MB | +Heap used after GC in MiB |
122.7 | -Total heap after GC in MB | +Total heap after GC in MiB |
0.15 / 0.0 - (average mu = 0.132, current mu = 0.137) |
+ 0.15 | Time spent in GC in ms |
0.0 | +Time spent in GC callbacks in ms | +|
(average mu = 0.132, current mu = 0.137) | +Mutator utilization (from 0-1) | |
deserialize GC in old space requested
</td>
<td>
Reason for GC
</td>
</tr>
</table>
@@ -104,8 +109,8 @@ const { PerformanceObserver } = require('perf_hooks');
 const obs = new PerformanceObserver((list) => {
   const entry = list.getEntries()[0];
   /*
-  The entry would be an instance of PerformanceEntry containing
-  metrics of garbage collection.
+  The entry is an instance of PerformanceEntry containing
+  metrics of a single garbage collection event.
   For example:
   PerformanceEntry {
     name: 'gc',
@@ -117,7 +122,7 @@ const obs = new PerformanceObserver((list) => {
   */
 });
 
-// Subscribe notifications of GCs
+// Subscribe to notifications of GCs
 obs.observe({ entryTypes: ['gc'] });
 
 // Stop subscription
@@ -178,39 +183,41 @@ For more information, you can refer to
 
 ## Examples of diagnosing memory issues with trace option:
 
 A. How to get context of bad allocations
-1. Suppose we observe that the old space is continously increasing.
-2. But due to heavy gc, the heap maximum is not hit, but the process is slow.
+1. Suppose we observe that the old space is continuously increasing.
+2. Due to heavy GC, the heap maximum is not hit, but the process is slow.
 3. Review the trace data and figure out how much is the total heap before and
-after the gc.
-4. Reduce `--max-old-space-size` such that the total heap is closer to the
-limit.
-5. Allow the program to run, hit the out of memory.
+   after the GC.
+4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
+   limit.
+5. Allow the program to run until it runs out of memory.
 6. The produced log shows the failing context.
 
 B. How to assert whether there is a memory leak when heap growth is observed
-1. Suppose we observe that the old space is continously increasing.
-2. Due to heavy gc, the heap maximum is not hit, but the process is slow.
+1. Suppose we observe that the old space is continuously increasing.
+2. Due to heavy GC, the heap maximum is not hit, but the process is slow.
 3. Review the trace data and figure out how much is the total heap before and
-after the gc.
-4. Reduce `--max-old-space-size` such that the total heap is closer to the
-limit.
+   after the GC.
+4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
+   limit.
 5. Allow the program to run, see if it hits the out of memory.
-6. If it hits OOM, increment the heap size by ~10% or so and repeat few times.
-If the same pattern is observed, it is indicative of a memory leak.
-7. If there is no OOM, then freeze the heap size to that value - A packed heap
-reduces memory footprint and compaction latency.
-
-C. How to assert whether too many gcs are happening or too many gcs are causing
-an overhead
-1. Review the trace data, specifically around time between consecutive gcs.
-2. Review the trace data, specifically around time spent in gc.
-3. If the time between two gc is less than the time spent in gc, the
-application is severely starving.
-4. If the time between two gcs and the time spent in gc are very high, probably
-the application can use a smaller heap.
-5. If the time between two gcs are much greater than the time spent in gc,
-application is relatively healthy.
+6. If it hits out-of-memory, increment the heap size by ~10% or so and repeat a
+   few times. If the same pattern is observed, it is indicative of a memory
+   leak.
+7. If there is no out-of-memory error, then freeze the heap size to that value.
+   A packed heap reduces memory footprint and compaction latency.
+
+C. How to assert whether too many GCs are happening or too many GCs are causing
+   an overhead
+1. Review the trace data, specifically around time between consecutive GCs.
+2. Review the trace data, specifically around time spent in GC.
+3. If the time between two GCs is less than the time spent in GC, the
+   application is severely starving.
+4. If the time between two GCs and the time spent in GC is very high,
+   the application can probably use a smaller heap.
+5. If the time between two GCs is much greater than the time spent in GC,
+   the application is relatively healthy.
 
-[performance hooks]: https://nodejs.org/api/perf_hooks.html
 [PerformanceEntry]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceentry
 [PerformanceObserver]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceobserver
+[`--max-old-space-size`]: https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes
+[performance hooks]: https://nodejs.org/api/perf_hooks.html
diff --git a/locale/en/docs/guides/diagnostics/memory/using-heap-profiler.md b/locale/en/docs/guides/diagnostics/memory/using-heap-profiler.md
index 03915402fa27d..8a6ce6a85b5d9 100644
--- a/locale/en/docs/guides/diagnostics/memory/using-heap-profiler.md
+++ b/locale/en/docs/guides/diagnostics/memory/using-heap-profiler.md
@@ -5,20 +5,15 @@ layout: docs.hbs
 
 # Using Heap Profiler
 
-To debug a memory issue we need to be able to see how much space our specific
-type of objects take, and what variables retain them to get garbage collected.
-For the effective debugging we also need to know the allocation pattern of our
-variables over time.
-
-The heap profiler acts on top of V8 towards to bring snapshots of memory over
-time. In this document, we will cover the memory profiling using:
+The heap profiler acts on top of V8 to capture allocations over time. In this
+document, we will cover memory profiling using:
 
 1. Allocation Timeline
 2. Sampling Heap Profiler
 
-Unlike heap dump that was cover in the [Using Heap Snapshot][],
-the idea of using real-time profiling is to understand allocations in a given
-time frame.
+Unlike heap dumps, which were covered in the [Using Heap Snapshot][] guide, the
+idea of using real-time profiling is to understand allocations over a period of
+time.
 
 ## Heap Profiler - Allocation Timeline
 
@@ -26,7 +21,8 @@ Heap Profiler is similar to the Sampling Heap Profiler, except it will trace
 every allocation. It has higher overhead than the Sampling Heap Profiler so
 it’s not recommended to use in production.
 
-> You can use [@mmarchini/observe][] to do it programmatically.
+> You can use [@mmarchini/observe][] to start and stop the profiler
+> programmatically.
 
 ### How To
 
@@ -36,44 +32,43 @@ Start the application:
 node --inspect index.js
 ```
 
-> `--inspect-brk` is an better choice for scripts.
+> `--inspect-brk` is a better choice for scripts.
 
 Connect to the dev-tools instance in chrome and then:
 
-* Select `memory` tab
-* Select `Allocation instrumentation timeline`
-* Start profiling
+* Select the `Memory` tab.
+* Select `Allocation instrumentation timeline`.
+* Start profiling.
 
 ![heap profiler tutorial step 1][heap profiler tutorial 1]
 
-After it, the heap profiling is running, it is strongly recommended to run
-samples in order to identify memory issues, for this example, we will use
-`Apache Benchmark` to produce load in the application.
-
-> In this example, we are assuming the heap profiling under web application.
+Once the heap profiling is running, it is strongly recommended to run samples
+in order to identify memory issues. For example, if we were heap profiling a
+web application, we could use `Apache Benchmark` to produce load:
 
 ```console
 $ ab -n 1000 -c 5 http://localhost:3000
 ```
 
-Hence, press stop button when the load expected is complete
+Then, press the stop button when the load is complete:
 
 ![heap profiler tutorial step 2][heap profiler tutorial 2]
 
-Then look at the snapshot data towards to memory allocation.
+Finally, look at the snapshot data:
 
 ![heap profiler tutorial step 3][heap profiler tutorial 3]
 
-Check the [usefull links](#usefull-links) section for futher information
+Check the [useful links](#useful-links) section for further information
 about memory terminology.
 
 ## Sampling Heap Profiler
 
-Sampling Heap Profiler tracks memory allocation pattern and reserved space
-over time. As it’s sampling based it has a low enough overhead to use it in
+Sampling Heap Profiler tracks the memory allocation pattern and reserved space
+over time. Since it is sampling based, its overhead is low enough to use in
 production systems.
 
-> You can use the module [`heap-profiler`][] to do it programmatically.
+> You can use the module [`heap-profiler`][] to start and stop the heap
+> profiler programmatically.
 
 ### How To
 
@@ -87,15 +82,15 @@
 $ node --inspect index.js
 ```
 
 Connect to the dev-tools instance and then:
 
-1. Select `memory` tab
-2. Select `Allocation sampling`
-3. Start profiling
+1. Select the `Memory` tab.
+2. Select `Allocation sampling`.
+3. Start profiling.
 
 ![heap profiler tutorial 4][heap profiler tutorial 4]
 
 Produce some load and stop the profiler. It will generate a summary with
-allocation based in the stacktrace, you can lookup to the functions with more
-heap allocations in a timespan, see the example below:
+allocations based on their stack traces. You can focus on the functions with
+the most heap allocations; see the example below:
 
 ![heap profiler tutorial 5][heap profiler tutorial 5]
 
diff --git a/locale/en/docs/guides/diagnostics/memory/using-heap-snapshot.md b/locale/en/docs/guides/diagnostics/memory/using-heap-snapshot.md
index a4ba8cc06abc6..ad18686eb003f 100644
--- a/locale/en/docs/guides/diagnostics/memory/using-heap-snapshot.md
+++ b/locale/en/docs/guides/diagnostics/memory/using-heap-snapshot.md
@@ -24,20 +24,22 @@ availability.
 
 ### Get the Heap Snapshot
 
-1. via inspector
-2. via external signal and commandline flag
-3. via writeHeapSnapshot call withing the process
-4. via inspector protocol
+There are multiple ways to obtain a heap snapshot:
+
+1. via the inspector,
+2. via an external signal and command-line flag,
+3. via a `writeHeapSnapshot` call within the process,
+4. via the inspector protocol.
 
 #### 1. Use memory profiling in inspector
 
 > Works in all actively maintained versions of Node.js
 
-Run node with `--inspect` flag. Open inspector.
+Run node with the `--inspect` flag and open the inspector.
 
 ![open inspector][open inspector image]
 
 The simplest way to get a Heap Snapshot is to connect a inspector to your
-process running locally and go to Memory tab, choose to take a heap snapshot.
+process running locally. Then go to the Memory tab and take a heap snapshot.
 
 ![take a heap snapshot][take a heap snapshot image]
 
@@ -45,7 +47,7 @@
 #### 2. Use `--heapsnapshot-signal` flag
 
 > Works in v12.0.0 or later
 
-You can start node with a commandline flag enabling reacting to a signal to
+You can start node with a command-line flag enabling reacting to a signal to
 create a heap snapshot.
 
 ```
@@ -61,7 +63,7 @@ $ ls
 Heap.20190718.133405.15554.0.001.heapsnapshot
 ```
 
-For details, see the latest documentation of [heapsnapshot-signal flag][]
+For details, see the latest documentation of [heapsnapshot-signal flag][].
 
 #### 3. Use `writeHeapSnapshot` function
 
@@ -75,14 +77,14 @@ server, you can implement getting it using:
 require('v8').writeHeapSnapshot();
 ```
 
-Check [writeHeapSnapshot docs][] for file name options
+Check [`writeHeapSnapshot` docs][] for file name options.
 
 You need to have a way to invoke it without stopping the process, so calling it
-in a http handler or as a reaction to a signal from the operating system
-is advised. Be careful not to expose the http endpoint triggering a snapshot.
+in an HTTP handler or as a reaction to a signal from the operating system
+is advised. Be careful not to expose the HTTP endpoint triggering a snapshot.
 It should not be possible for anybody else to access it.
 
-For versions of Node.js before v11.13.0 you can use the [heapdump package][]
+For versions of Node.js before v11.13.0 you can use the [heapdump package][].
 
 #### 4. Trigger Heap Snapshot using inspector protocol
 
@@ -91,7 +93,7 @@ process.
 
 It's not necessary to run the actual inspector from Chromium to use the API.
 
-Here's an example snapshot trigger in bash, using `websocat` and `jq`
+Here's an example snapshot trigger in bash, using `websocat` and `jq`:
 
 ```bash
 #!/bin/bash
@@ -119,41 +121,42 @@ done < <(cat out | tail -n +2 | head -n -1)
 exec 3>&-
 ```
 
-Not exhaustive list of memory profiling tools usable with inspector protocol:
+Here is a non-exhaustive list of memory profiling tools usable with the
+inspector protocol:
 
 * [OpenProfiling for Node.js][openprofiling]
 
 ## How to find a memory leak with Heap Snapshots
 
-To find a memory leak one compares two snapshots. It's important to make sure
-the snapshots diff doesn't contain unnecessary information.
+You can find a memory leak by comparing two snapshots. It's important to make
+sure the snapshot difference does not contain unnecessary information.
 
 Following steps should produce a clean diff between snapshots.
 
 1. Let the process load all sources and finish bootstrapping. It should take a
-few seconds at most.
+   few seconds at most.
 2. Start using the functionality you suspect of leaking memory. It's likely it
-makes some initial allocations that are not the leaking ones.
+   makes some initial allocations that are not the leaking ones.
 3. Take one heap snapshot.
 4. Continue using the functionality for a while, preferably without running
-anything else in between.
+   anything else in between.
 5. Take another heap snapshot. The difference between the two should mostly
-contain what was leaking.
+   contain what was leaking.
 6. Open Chromium/Chrome dev tools and go to *Memory* tab
-7. Load the older snapshot file first, newer one second
-![Load button in tools][load button image]
-8. Select the newer snapshot and switch mode in a dropdown at the top from
-*Summary* to *Comparison*. ![Comparison dropdown][comparison image]
+7. Load the older snapshot file first, and the newer one second.
+   ![Load button in tools][load button image]
+8. Select the newer snapshot and switch mode in the dropdown at the top from
+   *Summary* to *Comparison*. ![Comparison dropdown][comparison image]
 9. Look for large positive deltas and explore the references that caused
-them in the bottom panel.
+   them in the bottom panel.
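+
+The two snapshots from steps 3 and 5 do not have to come from the dev-tools UI.
+As a minimal sketch of the `writeHeapSnapshot` approach from section 3
+(assuming Node.js v11.13.0 or later and a platform where `SIGUSR2` is
+available), you could trigger each snapshot with a signal:
+
+```js
+const v8 = require('v8');
+
+// Illustration only: on each SIGUSR2, write a .heapsnapshot file into the
+// current working directory. writeHeapSnapshot() returns the file name.
+process.on('SIGUSR2', () => {
+  const filename = v8.writeHeapSnapshot();
+  console.log(`Heap snapshot written to ${filename}`);
+});
+```
+
+Send the signal (`kill -USR2 <pid>`) once after warm-up and once after
+exercising the suspected leak, then load the two files in the *Memory* tab as
+described above.
+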
-Practice capturing heap snapshots and finding memory leaks with -[a heap snapshot exercise][heapsnapshot exercise] +You can practice capturing heap snapshots and finding memory leaks with [this +heap snapshot exercise][heapsnapshot exercise]. [open inspector image]: /static/images/docs/guides/diagnostics/tools.png [take a heap snapshot image]: /static/images/docs/guides/diagnostics/snapshot.png [heapsnapshot-signal flag]: https://nodejs.org/api/cli.html#--heapsnapshot-signalsignal [heapdump package]: https://www.npmjs.com/package/heapdump -[writeHeapSnapshot docs]: https://nodejs.org/api/v8.html#v8_v8_writeheapsnapshot_filename +[`writeHeapSnapshot` docs]: https://nodejs.org/api/v8.html#v8_v8_writeheapsnapshot_filename [openprofiling]: https://github.com/vmarchaud/openprofiling-node [load button image]: /static/images/docs/guides/diagnostics/load-snapshot.png [comparison image]: /static/images/docs/guides/diagnostics/compare.png
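+
+The HTTP-handler variant mentioned in section 3 could look roughly like the
+sketch below. The route and port are made-up examples, and the server only
+listens on localhost so that the endpoint is not reachable from the outside:
+
+```js
+const http = require('http');
+const v8 = require('v8');
+
+// Example-only diagnostics endpoint; never expose it publicly.
+http.createServer((req, res) => {
+  if (req.method === 'GET' && req.url === '/debug/heap-snapshot') {
+    const filename = v8.writeHeapSnapshot();
+    res.end(`Wrote ${filename}\n`);
+  } else {
+    res.statusCode = 404;
+    res.end();
+  }
+}).listen(9230, '127.0.0.1');
+```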