docs: Add a Linux badge and fix typos and grammar #72

Merged · 4 commits · May 4, 2022
31 changes: 16 additions & 15 deletions README.md
@@ -4,6 +4,7 @@

---

![OS Linux](https://img.shields.io/badge/OS-Linux-blue)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/memray)
![PyPI - Implementation](https://img.shields.io/pypi/implementation/memray)
![PyPI](https://img.shields.io/pypi/v/memray)
@@ -22,7 +23,7 @@ Notable features:

- 🕵️‍♀️ Traces every function call so it can accurately represent the call stack, unlike sampling profilers.
- ℭ Also handles native calls in C/C++ libraries so the entire call stack is present in the results.
- 🏎 Blazing fast! Profiling causes minimal slowdown in the application. Tracking native code is somewhat slower,
- 🏎 Blazing fast! Profiling slows the application only slightly. Tracking native code is somewhat slower,
but this can be enabled or disabled on demand.
- 📈 It can generate various reports about the collected memory usage data, like flame graphs.
- 🧵 Works with Python threads.
@@ -32,7 +33,7 @@ Memray can help with the following problems:

- Analyze allocations in applications to help discover the cause of high memory usage.
- Find memory leaks.
- Find hotspots in code which cause a lot of allocations.
- Find hotspots in code that cause a lot of allocations.

Note that Memray only works on Linux and cannot be installed on other platforms.

@@ -78,7 +79,7 @@ You can find the latest documentation available [here](https://bloomberg.github.

# Usage

There are many ways to use Memray. The easiest way is to use it as a command line tool to run your script, application or library.
There are many ways to use Memray. The easiest way is to use it as a command line tool to run your script, application, or library.

```
usage: memray [-h] [-v] {run,flamegraph,table,live,tree,parse,summary,stats} ...
@@ -97,19 +98,19 @@ positional arguments:
{run,flamegraph,table,live,tree,parse,summary,stats}
Mode of operation
run Run the specified application and track memory usage
flamegraph Generate an HTML flame graph for peak memory usage.
table Generate an HTML table with all records in the peak memory usage.
live Remotely monitor allocations in a text-based interface.
tree Generate an tree view in the terminal for peak memory usage.
parse Debug a results file by parsing and printing each record in it.
flamegraph Generate an HTML flame graph for peak memory usage
table Generate an HTML table with all records in the peak memory usage
live Remotely monitor allocations in a text-based interface
tree Generate a tree view in the terminal for peak memory usage
parse Debug a results file by parsing and printing each record in it
summary Generate a terminal-based summary report of the functions that allocate most memory
stats Generate high level stats of the memory usage in the terminal

optional arguments:
-h, --help show this help message and exit
-v, --verbose Increase verbosity. Option is additive, can be specified up to 3 times.
-h, --help Show this help message and exit
-v, --verbose Increase verbosity. Option is additive and can be specified up to 3 times

Please submit feedback, ideas and bugs by filing a new issue at https://github.com/bloomberg/memray/issues
Please submit feedback, ideas, and bug reports by filing a new issue at https://github.com/bloomberg/memray/issues
```

To use Memray over a script or a single python file you can use
@@ -189,7 +190,7 @@ To learn more on how the plugin can be used and configured check out [the plugin

# Native mode

Memray supports tracking native C/C++ functions as well as Python functions. This can be especially useful when profiling applications that have C extensions (such as `numpy` or `pandas`) as this gives holistic vision of how much memory is allocated by the extension and how much is allocated by Python itself.
Memray supports tracking native C/C++ functions as well as Python functions. This can be especially useful when profiling applications that have C extensions (such as `numpy` or `pandas`) as this gives a holistic vision of how much memory is allocated by the extension and how much is allocated by Python itself.

To activate native tracking, you need to provide the `--native` argument when using the `run` subcommand:

@@ -266,13 +267,13 @@ Memray is Apache-2.0 licensed, as found in the [LICENSE](LICENSE) file.

- [Code of Conduct](https://github.com/bloomberg/.github/blob/main/CODE_OF_CONDUCT.md)

This project has adopted a Code of Conduct. If you have any concerns about the Code, or behavior which you have experienced in the project, please contact us at opensource@bloomberg.net.
This project has adopted a Code of Conduct. If you have any concerns about the Code, or behavior that you have experienced in the project, please contact us at opensource@bloomberg.net.

# Security Policy

- [Security Policy](https://github.com/bloomberg/memray/security/policy)

If you believe you have identified a security vulnerability in this project, please send email to the project team at opensource@bloomberg.net, detailing the suspected issue and any methods you've found to reproduce it.
If you believe you have identified a security vulnerability in this project, please send an email to the project team at opensource@bloomberg.net, detailing the suspected issue and any methods you've found to reproduce it.

Please do NOT open an issue in the GitHub repository, as we'd prefer to keep vulnerability reports private until we've had an opportunity to review and address them.

@@ -302,7 +303,7 @@ You must use your real name (sorry, no pseudonyms, and no anonymous contributions)

## Steps

- Create an Issue, selecting 'Feature Request', and explain the proposed change.
- Create an Issue, select 'Feature Request', and explain the proposed change.
- Follow the guidelines in the issue template presented to you.
- Submit the Issue.
- Submit a Pull Request and link it to the Issue by including "#<issue number>" in the Pull Request summary.
2 changes: 1 addition & 1 deletion docs/api.rst
@@ -7,7 +7,7 @@ Memray exposes an API that can be used to programmatically activate or
deactivate tracking of a Python process's memory allocations. You do this by
creating a `Tracker` object and using it as a context manager in a ``with``
statement. While the body of the ``with`` statement runs, tracking will be
enabled, with output being sent a destination you specify when creating the
enabled, with output being sent to a destination you specify when creating the
`Tracker`. When the ``with`` block ends, tracking will be disabled and the
output will be flushed and closed.

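The context-manager behavior documented in the api.rst hunk above (tracking starts when the ``with`` block is entered, output is flushed and closed when it ends) can be sketched with a toy stand-in. ``FakeTracker`` below is illustrative only; the real class is `memray.Tracker`, which takes the output destination as an argument:

```python
class FakeTracker:
    """Toy stand-in for memray.Tracker; illustrative only."""

    def __init__(self, destination):
        self.destination = destination
        self.log = []

    def __enter__(self):
        # Tracking is enabled while the body of the with statement runs.
        self.log.append("tracking enabled")
        return self

    def __exit__(self, exc_type, exc, tb):
        # When the with block ends, output is flushed and closed.
        self.log.append("output flushed and closed")
        return False  # propagate any exception from the with body

with FakeTracker("memray-output.bin") as tracker:
    tracker.log.append("allocations recorded")

print(tracker.log)
# → ['tracking enabled', 'allocations recorded', 'output flushed and closed']
```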
14 changes: 7 additions & 7 deletions docs/flamegraph.rst
@@ -60,7 +60,7 @@ Flame graphs can be interpreted as follows:
mean more memory was allocated by the given node and those are the
most important to understand first.

- Major forks in the flame graph (when a node splits in several ones in
- Major forks in the flame graph (when a node splits into several ones in
the next level) can be useful to study: these nodes can indicate
a logical grouping of code, where a function processes work in stages,
each with its own function. It can also be caused by a conditional
@@ -146,9 +146,9 @@ flame graph looks like this:
The top edge shows that function ``g()`` allocates the most memory,
``d()`` is wider, but its exposed top edge is smaller, which means that
``d()`` itself allocated less memory than the one allocated by the
functions called by it. Functions including ``b()`` and ``c()`` do not
not allocate memory themselves directly; rather, their child functions
did the allocation.
functions called by it. Functions including ``b()`` and ``c()`` do
not allocate memory themselves directly; rather, the functions they
called did the allocating.

Functions beneath ``g()`` show its ancestry: ``g()`` was called by
``f()``, which was called by ``d()``, and so on.
@@ -173,7 +173,7 @@ memory allocated by ``missing()`` didn't contribute at all to the total
amount of memory. This is because the memory allocated by ``missing()``
is deallocated as soon as the call ends.

With this information we know that if you need to chose a place to start
With this information, we know that if you need to choose a place to start
looking for optimizations, you should start looking at ``g()``, then
``a()`` and then ``i()`` (in that order) as these are the places that
allocated the most memory when the program reached its maximum. Of
@@ -230,13 +230,13 @@ if ``--split-threads`` is used, the allocation patterns of individual
threads can be analyzed.

When opening the report, the same merged thread view is presented, but
a new "Filter Thread" drop down will be shown. This can be used to
a new "Filter Thread" dropdown will be shown. This can be used to
select a specific thread to display a flame graph for that one thread:

.. image:: _static/images/filter_thread_dropdown.png

To go back to the merged view, the "Reset" entry can be used in the
drop down menu.
dropdown menu.

Note that the root node (displayed as **memray**) is always present
and is displayed as thread 0.
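The width-versus-exposed-top-edge relationship the flamegraph.rst hunks above describe can be sketched with toy numbers: a node's width reflects the total bytes allocated in its subtree, while its exposed top edge reflects only what it allocated directly. The call tree and sizes here are invented for illustration, not Memray output:

```python
# Toy call tree: b() allocates nothing itself, so its width comes
# entirely from the functions it called.
tree = {
    "name": "b", "self_bytes": 0,
    "children": [
        {"name": "f", "self_bytes": 0,
         "children": [{"name": "g", "self_bytes": 4000, "children": []}]},
        {"name": "h", "self_bytes": 1000, "children": []},
    ],
}

def total_bytes(node):
    # Width of the node: direct allocations plus everything below it.
    return node["self_bytes"] + sum(total_bytes(c) for c in node["children"])

print(total_bytes(tree))    # width of b(): 5000
print(tree["self_bytes"])   # exposed top edge of b(): 0
```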
6 changes: 3 additions & 3 deletions docs/python_allocators.rst
@@ -16,7 +16,7 @@ How ``pymalloc`` works
Requests greater than 512 bytes are routed to the system's allocator. This
means that even if ``pymalloc`` is active, it will only affect requests for 512
bytes or less. For those small requests, the ``pymalloc`` allocator will
allocate big chunks of memory from the system allocator, and then subdivide
allocate big chunks of memory from the system allocator and then subdivide
those big chunks.

.. image:: _static/images/pymalloc.png
@@ -27,7 +27,7 @@

- Arenas: These are chunks of memory that ``pymalloc`` directly requests from
the system allocator using ``mmap``. Arenas are always a multiple of
4 kilobytes. Arenas are subdivided in pools of different types.
4 kilobytes. Arenas are subdivided into pools of different types.
- Pools: Pools contain fixed size blocks of memory. Each pool only contains
blocks of a single consistent size, though different pools have blocks of
different sizes. Pools are used to easily find, allocate, and free memory
@@ -75,7 +75,7 @@ existing memory that has previously been used for other Python objects. This
has two main consequences:

- Requests for **small** amounts of memory that can be satisfied from an
existing arena won't result in a call to the system allocator, and therefore
existing arena won't result in a call to the system allocator and therefore
won't appear in the profiler reports at all.

- Requests for **small** amounts of memory that can't be satisfied from an
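The size-based routing described in the python_allocators.rst hunks above can be sketched as a simple predicate. The 512-byte threshold comes from the text; the constant and function names below are ours for illustration, not a CPython API:

```python
SMALL_REQUEST_THRESHOLD = 512  # bytes; larger requests bypass pymalloc

def handled_by_pymalloc(nbytes: int) -> bool:
    # Small, nonzero requests are served from pymalloc's arenas;
    # anything larger is routed to the system allocator instead.
    return 0 < nbytes <= SMALL_REQUEST_THRESHOLD

print(handled_by_pymalloc(512))  # True: exactly at the threshold
print(handled_by_pymalloc(513))  # False: routed to the system allocator
```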
4 changes: 3 additions & 1 deletion docs/run.rst
@@ -43,7 +43,9 @@ disabled. Some of the most important allocations happen when operating on NumPy

.. image:: _static/images/mandelbrot_operation_non_native.png

Here, we can see some that the allocation happens when doing some math on NumPy arrays but unfortunately this doesn't inform us a of what exact operation is allocating memory or how temporaries are being used. We also don't know if the memory was allocated by NumPy or by the interpreter itself. By using the native tracking mode with Memray we can get a much richer report:
Here, we can see that the allocation happens when doing some math on NumPy arrays but unfortunately this doesn't inform us
of what exact operation is allocating memory or how temporaries are being used. We also don't know if the memory was
allocated by NumPy or by the interpreter itself. By using the native tracking mode with Memray we can get a much richer report:

.. image:: _static/images/mandelbrot_operation_native.png

2 changes: 1 addition & 1 deletion docs/stats.rst
@@ -2,7 +2,7 @@ Stats Reporter
==============

The stats reporter generates high level statistics about the tracked process's
memory allocations. By default it computes these statistics for the moment when
memory allocations. By default, it computes these statistics for the moment when
the tracked process's memory usage was at its peak, but it can optionally
compute the stats for *all* allocations instead.

2 changes: 1 addition & 1 deletion docs/tree.rst
@@ -13,7 +13,7 @@ representation:
* Only the 10 source locations responsible for the most allocated bytes are
displayed. This is configurable with the ``--biggest-allocs`` command line
parameter.
* The total memory and percentage shown in the root node of the tree is
* The total memory and percentage shown in the root node of the tree are
calculated based only on the allocations that are shown. Since any allocation
not big enough to be shown will not be included there, the reported total
memory of the root node is normally less than the process's peak memory size.
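The "biggest allocations" selection the tree.rst hunk above documents can be sketched with `heapq.nlargest`. The locations and byte counts below are made up; Memray reads the real ones from the capture file:

```python
import heapq

# Invented allocation totals per source location.
allocations = {
    "app.py:10": 64_000,
    "util.py:3": 4_096,
    "model.py:77": 1_048_576,
    "io.py:5": 512,
}

# Keep only the N locations responsible for the most allocated bytes,
# as the --biggest-allocs option does (N defaults to 10).
biggest = heapq.nlargest(2, allocations.items(), key=lambda item: item[1])
print(biggest)  # → [('model.py:77', 1048576), ('app.py:10', 64000)]
```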
4 changes: 2 additions & 2 deletions src/memray/commands/__init__.py
@@ -25,7 +25,7 @@

_EPILOG = textwrap.dedent(
"""\
Please submit feedback, ideas and bugs by filing a new issue at
Please submit feedback, ideas, and bug reports by filing a new issue at
https://github.com/bloomberg/memray/issues
"""
)
@@ -77,7 +77,7 @@ def get_argument_parser() -> argparse.ArgumentParser:
"--verbose",
action="count",
default=0,
help="Increase verbosity. Option is additive, can be specified up to 3 times.",
help="Increase verbosity. Option is additive and can be specified up to 3 times",
)

subparsers = parser.add_subparsers(
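The additive `--verbose` flag in the hunk above relies on argparse's `count` action: each occurrence increments the value, starting from the default. A minimal sketch of that behavior (the parser name is illustrative):

```python
import argparse

# Mirrors the option defined above: "count" increments once per -v.
parser = argparse.ArgumentParser(prog="demo")
parser.add_argument(
    "-v", "--verbose", action="count", default=0,
    help="Increase verbosity. Option is additive and can be specified up to 3 times",
)

print(parser.parse_args([]).verbose)        # 0
print(parser.parse_args(["-v"]).verbose)    # 1
print(parser.parse_args(["-vvv"]).verbose)  # 3
```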
2 changes: 1 addition & 1 deletion src/memray/commands/flamegraph.py
@@ -7,7 +7,7 @@


class FlamegraphCommand(HighWatermarkCommand):
"""Generate an HTML flame graph for peak memory usage."""
"""Generate an HTML flame graph for peak memory usage"""

def __init__(self) -> None:
super().__init__(
2 changes: 1 addition & 1 deletion src/memray/commands/live.py
@@ -72,7 +72,7 @@ def readkey() -> str: # pragma: no cover


class LiveCommand:
"""Remotely monitor allocations in a text-based interface."""
"""Remotely monitor allocations in a text-based interface"""

def prepare_parser(self, parser: argparse.ArgumentParser) -> None:
parser.add_argument(
2 changes: 1 addition & 1 deletion src/memray/commands/parse.py
@@ -6,7 +6,7 @@


class ParseCommand:
"""Debug a results file by parsing and printing each record in it."""
"""Debug a results file by parsing and printing each record in it"""

def prepare_parser(self, parser: argparse.ArgumentParser) -> None:
parser.add_argument("results", help="Results of the tracker run")
2 changes: 1 addition & 1 deletion src/memray/commands/table.py
@@ -7,7 +7,7 @@


class TableCommand(HighWatermarkCommand):
"""Generate an HTML table with all records in the peak memory usage."""
"""Generate an HTML table with all records in the peak memory usage"""

def __init__(self) -> None:
super().__init__(
2 changes: 1 addition & 1 deletion src/memray/commands/tree.py
@@ -11,7 +11,7 @@


class TreeCommand:
"""Generate an tree view in the terminal for peak memory usage."""
"""Generate a tree view in the terminal for peak memory usage"""

def prepare_parser(self, parser: argparse.ArgumentParser) -> None:
parser.add_argument("results", help="Results of the tracker run")