
📊 taskmon - Function-Level Performance Monitoring

Simple, powerful Python function monitoring with beautiful terminal output.

Track CPU, memory, and performance of any Python function with a simple decorator. Perfect for finding memory leaks, bottlenecks, and understanding your code's resource usage.

✨ Features

  • 🎯 Function-level tracking - Monitor any function with a simple @monitor() decorator
  • 📊 Real-time metrics - CPU, memory, execution time, and throughput
  • 🎨 Beautiful output - Color-coded, visual terminal display
  • 🔍 Memory leak detection - Automatic detection of memory growth
  • 📈 Nested function support - Track call hierarchies with visual indentation
  • 🚀 Zero configuration - Works out of the box
  • 📝 Aggregated statistics - See summaries across all function calls
  • 🧵 Thread-safe - Works with multi-threaded applications

🚀 Quick Start

Installation

pip install taskmon

Or install from source:

git clone https://github.com/kadirtutenn/taskmon
cd taskmon
pip install -e .

Basic Usage

from taskmon import monitor

@monitor()
def process_data(items):
    result = []
    for item in items:
        result.append(item * 2)
    return result

# Call your function normally
data = process_data([1, 2, 3, 4, 5])

Output:

┌─ 🚀 process_data() started [call_1_1738574400123]
│  📊 Baseline: CPU 2.3% | MEM 156.2MB | THR 4
  └─ ✅ process_data() completed in 0.02s
     📊 CPU: 3.1% [███░░░░░░░] | MEM: 🟢 +0.8MB (peak: 157.0MB)
     ⚡ Processed: 5 items (250.0 items/s)

📚 Usage Examples

1. Monitor Any Function

from taskmon import monitor

@monitor()
def calculate_fibonacci(n):
    if n <= 1:
        return n
    return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)

result = calculate_fibonacci(10)

2. Track Code Sections

from taskmon import monitor, track

@monitor()
def data_pipeline():
    with track("load_data"):
        data = load_from_database()
    
    with track("process"):
        processed = process_data(data)
    
    with track("save"):
        save_to_database(processed)

data_pipeline()

Output shows nested structure:

┌─ 🚀 data_pipeline() started
│  📊 Baseline: CPU 2.1% | MEM 145.3MB | THR 4
  ├─ 🚀 load_data started
  │  📊 Baseline: CPU 2.3% | MEM 145.5MB | THR 4
    └─ ✅ load_data completed in 1.23s
       📊 CPU: 15.2% [█████░░░░░] | MEM: 🟢 +12.3MB (peak: 157.8MB)

  ├─ 🚀 process started
  │  📊 Baseline: CPU 3.1% | MEM 157.8MB | THR 4
    └─ ✅ process completed in 2.45s
       📊 CPU: 45.6% [████████░░] | MEM: 🟡 +45.2MB (peak: 203.0MB)

  └─ ✅ data_pipeline() completed in 3.89s
     📊 CPU: 21.3% [██████░░░░] | MEM: 🟢 +9.1MB (peak: 203.0MB)

3. Track Items Processed

from taskmon import monitor

@monitor()
def process_batch(items):
    results = []
    for item in items:
        results.append(transform_item(item))
    return results

# Automatically tracks len(results)
items = ["item1", "item2", "item3"]
process_batch(items)

Output:

└─ ✅ process_batch() completed in 5.23s
   📊 CPU: 12.3% [████░░░░░░] | MEM: 🟢 +2.1MB (peak: 158.4MB)
   ⚡ Processed: 3 items (0.6 items/s)

4. View Statistics

import time

from taskmon import monitor, print_summary

@monitor()
def expensive_function(n):
    time.sleep(n)
    return n * 2

# Call multiple times
for i in range(5):
    expensive_function(i)

# Print summary
print_summary()

Output:

================================================================================
📊 FUNCTION MONITORING SUMMARY
================================================================================

Function                                    Calls   Total Time   Avg Time   Failures
--------------------------------------------------------------------------------
__main__.expensive_function                     5       10.00s      2.000s      0 (0.0%)

================================================================================

5. Find Memory Leaks

from taskmon import monitor

@monitor()
def potential_leak():
    data = []
    for i in range(1000000):
        data.append(i)
    # Oops, forgot to clean up!
    return len(data)

potential_leak()

Output (notice the 🔴 red indicator):

└─ ✅ potential_leak() completed in 0.15s
   📊 CPU: 85.2% [█████████░] | MEM: 🔴 +156.3MB (peak: 312.5MB)
   ⚡ Processed: 1,000,000 items (6,666,666.7 items/s)

6. Real-World Example: Space Mission Data Pipeline

from taskmon import monitor, track

@monitor()
def plan_mission_data_collection():
    with track("Load Configuration"):
        satellites = get_satellite_config_cached()
    
    with track("Count Signals"):
        total_signals = get_total_signal_count()
    
    # Process in chunks
    chunk_size = 50000
    for offset in range(0, total_signals, chunk_size):
        with track(f"Process Chunk {offset//chunk_size + 1}"):
            process_signal_chunk(offset, chunk_size, satellites)

plan_mission_data_collection()

Output shows where memory leaks occur:

┌─ 🚀 plan_mission_data_collection() started
  ├─ 🚀 Load Configuration started
    └─ ✅ Load Configuration completed in 0.05s
       📊 CPU: 2.1% | MEM: 🟢 +1.2MB

  ├─ 🚀 Count Signals started
    └─ ✅ Count Signals completed in 0.23s
       📊 CPU: 5.3% | MEM: 🟢 +0.1MB

  ├─ 🚀 Process Chunk 1 started
    └─ ✅ Process Chunk 1 completed in 12.34s
       📊 CPU: 45.2% | MEM: 🟢 +2.3MB

  ├─ 🚀 Process Chunk 2 started
    └─ ✅ Process Chunk 2 completed in 11.89s
       📊 CPU: 43.1% | MEM: 🔴 +125.6MB  <-- Memory leak here!

  └─ ✅ plan_mission_data_collection() completed in 45.67s
     📊 CPU: 38.5% | MEM: 🔴 +128.4MB

🎨 Visual Indicators

Status Icons

  • 🚀 Function started
  • ✅ Successfully completed
  • ❌ Failed with error
  • ⚠️ Warning

Memory Status

  • 🟢 Green: < 10MB change (normal)
  • 🟡 Yellow: 10-100MB change (monitor)
  • 🔴 Red: > 100MB change (memory leak!)
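
These documented cutoffs can be sketched as a tiny classifier. This is an illustration built only from the thresholds above; the function name `memory_status` is hypothetical and taskmon's internal implementation may differ (for instance in how it rounds or treats negative deltas, which this sketch folds in via `abs`):

```python
def memory_status(delta_mb: float) -> str:
    """Map a memory delta (MB) to a traffic-light indicator,
    following the thresholds documented in the README."""
    magnitude = abs(delta_mb)
    if magnitude < 10:
        return "🟢"  # normal
    if magnitude <= 100:
        return "🟡"  # worth monitoring
    return "🔴"      # likely memory leak
```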

CPU Bars

CPU: 15.2% [█████░░░░░]  <- Visual bar
CPU: 45.6% [████████░░]
CPU: 85.2% [█████████░]
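
One plausible way to render such a bar is a linear mapping of the percentage onto ten segments. The sketch below (the name `cpu_bar` is hypothetical) shows that idea; note that the sample bars above do not all follow a strictly linear scale, so treat this as an approximation rather than taskmon's exact rendering:

```python
def cpu_bar(percent: float, width: int = 10) -> str:
    """Render a CPU percentage as a fixed-width block bar,
    e.g. cpu_bar(30.0) -> 'CPU: 30.0% [███░░░░░░░]'."""
    filled = min(width, round(percent / 100 * width))
    return f"CPU: {percent:.1f}% [{'█' * filled}{'░' * (width - filled)}]"
```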

📊 API Reference

@monitor(items_attr=None, print_summary=False)

Decorator to monitor function performance.

Parameters:

  • items_attr (str, optional): Attribute name to track as items_processed
  • print_summary (bool): Print summary after function completes

Example:

@monitor(items_attr='count', print_summary=True)
def process_data():
    return {"count": 100, "data": [...]}

track(section_name, items=0)

Context manager for monitoring code sections.

Parameters:

  • section_name (str): Name of the section
  • items (int): Number of items processed

Example:

with track("database_query", items=100):
    results = db.query(...)

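For intuition, a track()-style section timer can be built in a few lines with contextlib. The sketch below (`track_sketch` is a hypothetical stand-in, not taskmon's implementation) only measures wall time and prints throughput; the real track also reports CPU and memory:

```python
import time
from contextlib import contextmanager

@contextmanager
def track_sketch(section_name, items=0):
    """Minimal track()-like context manager: time a named section
    and report items/s when an item count is given."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        line = f"✅ {section_name} completed in {elapsed:.2f}s"
        if items and elapsed > 0:
            line += f" ({items / elapsed:.1f} items/s)"
        print(line)
```
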
get_stats(function_name=None)

Get function statistics.

Parameters:

  • function_name (str, optional): Specific function name, or None for all

Returns: Dict with statistics

Example:

stats = get_stats("my_module.my_function")
print(f"Total calls: {stats['total_calls']}")
print(f"Avg CPU: {stats['avg_cpu']:.1f}%")

print_summary()

Print summary of all monitored functions.

clear_stats()

Clear all statistics.

get_active_calls()

Get list of currently active function calls.

Returns: List of FunctionMetrics objects

🔧 Advanced Usage

Custom Items Tracking

@monitor()
def process_batch(items):
    count = 0
    for item in items:
        process(item)
        count += 1
    return count  # Automatically tracked
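
One way such automatic tracking could work is to inspect the return value: treat an integer as a count and use len() for sized collections. The `monitor_sketch` decorator below is a hypothetical illustration of that idea under those assumptions, not taskmon's actual code:

```python
import functools
import time

def monitor_sketch(func):
    """Sketch of a monitor()-style decorator that times the call
    and infers an item count from the return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if isinstance(result, int):
            items = result            # e.g. a returned count
        elif hasattr(result, "__len__"):
            items = len(result)       # e.g. a returned list
        else:
            items = 0
        print(f"{func.__name__}: {elapsed:.3f}s, {items} items")
        return result
    return wrapper
```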

Nested Function Monitoring

@monitor()
def parent_function():
    child_function_1()
    child_function_2()

@monitor()
def child_function_1():
    pass

@monitor()
def child_function_2():
    pass

Output shows hierarchy:

┌─ 🚀 parent_function() started
  ├─ 🚀 child_function_1() started
    └─ ✅ child_function_1() completed
  ├─ 🚀 child_function_2() started
    └─ ✅ child_function_2() completed
  └─ ✅ parent_function() completed

Integration with Celery

from celery import shared_task
from taskmon import monitor

@shared_task
@monitor()
def celery_task(data):
    process(data)

Keep @monitor() innermost so it wraps the task body itself rather than Celery's task wrapper.

Programmatic Access

from taskmon import get_active_calls, get_stats

# Get currently running functions
active = get_active_calls()
for call in active:
    print(f"{call.function_name}: {call.memory_delta_mb:.1f}MB")

# Get historical stats
stats = get_stats()
for func_name, metrics in stats.items():
    if metrics['avg_cpu'] > 50:
        print(f"High CPU function: {func_name}")

🎯 Use Cases

1. Find Memory Leaks

@monitor()
def suspect_function():
    # Your code
    pass

Look for 🔴 indicators showing large memory growth.

2. Identify Bottlenecks

@monitor()
def pipeline():
    with track("step1"):
        slow_step()
    with track("step2"):
        fast_step()

See which sections take the most time.

3. Optimize Batch Processing

@monitor()
def process_chunks(chunks):
    for i, chunk in enumerate(chunks):
        with track(f"chunk_{i}"):
            process(chunk)

Identify which chunks are slow or leak memory.

4. Monitor Production Code

@monitor()
def critical_api_endpoint():
    # Monitor in production
    pass

Track performance metrics over time.

📈 Performance Overhead

taskmon is designed to be lightweight:

  • CPU overhead: < 1% in most cases
  • Memory overhead: < 1MB
  • Sampling: Background thread samples every 1 second
  • Thread-safe: Safe for multi-threaded applications
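The once-per-second sampling can be pictured as a daemon thread polling process metrics. The sketch below uses the stdlib tracemalloc module instead of psutil (which taskmon builds on) so it stays self-contained; the `Sampler` class is a hypothetical stand-in, not taskmon's sampler:

```python
import threading
import time
import tracemalloc

class Sampler:
    """Background thread that periodically samples traced memory
    and remembers the peak, mimicking interval-based sampling."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self.peak_bytes = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            current, _ = tracemalloc.get_traced_memory()
            self.peak_bytes = max(self.peak_bytes, current)
            self._stop.wait(self.interval)  # sleep, but wake early on stop

    def start(self):
        tracemalloc.start()
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
        tracemalloc.stop()
```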

πŸ› Troubleshooting

Output not showing colors?

Set your terminal to support ANSI colors:

export TERM=xterm-256color

Too much output?

Disable monitoring for specific functions:

# Just remove the decorator
def my_function():
    pass

Want quieter output?

taskmon writes to standard output, so redirect it when you need a quiet run; contextlib.redirect_stdout restores stdout when the block exits:

from contextlib import redirect_stdout

with open('monitor.log', 'w') as f, redirect_stdout(f):
    data_pipeline()

🤝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

πŸ“ License

MIT License - see LICENSE file for details.

πŸ™ Acknowledgments

Built with:

  • psutil - Process and system utilities

📞 Support

Made with ❤️ for Python developers who care about performance
