Merged
3 changes: 3 additions & 0 deletions .gitignore
@@ -8,6 +8,9 @@ htmlcov/
# Rust
target/

+# UV
+uv.lock
+
# Project specific
site/
*.so
20 changes: 10 additions & 10 deletions .readthedocs.yaml
@@ -3,16 +3,16 @@ version: 2
build:
  os: ubuntu-24.04
  tools:
-    python: "3.11"
+    python: "3.12"
  jobs:
-    pre_build:
-      - curl -LsSf https://astral.sh/uv/install.sh | sh
-      - source $HOME/.cargo/env
-      - uv sync
+    pre_create_environment:
+      - asdf plugin add uv
+      - asdf install uv latest
+      - asdf global uv latest
+    create_environment:
+      - uv venv "${READTHEDOCS_VIRTUALENV_PATH}"
+    install:
+      - UV_PROJECT_ENVIRONMENT="${READTHEDOCS_VIRTUALENV_PATH}" uv sync --group docs

mkdocs:
  configuration: mkdocs.yml
-
-python:
-  install:
-    - requirements: docs/requirements.txt
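Pieced together from the added lines above, the resulting `.readthedocs.yaml` would read roughly as follows (a sketch; it assumes a `docs` dependency group is defined in the project's `pyproject.toml`):

```yaml
version: 2

build:
  os: ubuntu-24.04
  tools:
    python: "3.12"
  jobs:
    # Install uv via asdf before the build environment is created
    pre_create_environment:
      - asdf plugin add uv
      - asdf install uv latest
      - asdf global uv latest
    # Let uv manage the virtualenv Read the Docs expects
    create_environment:
      - uv venv "${READTHEDOCS_VIRTUALENV_PATH}"
    install:
      - UV_PROJECT_ENVIRONMENT="${READTHEDOCS_VIRTUALENV_PATH}" uv sync --group docs

mkdocs:
  configuration: mkdocs.yml
```

This replaces the previous `python.install.requirements` approach, so `docs/requirements.txt` is no longer consulted.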
35 changes: 22 additions & 13 deletions docs/how-to-guides/cli-recipes.md
@@ -126,7 +126,7 @@ fi
cel -i

# Example session:
-CEL> :context user='{"name": "Alice", "role": "admin", "verified": true}'
+CEL> context {"user": {"name": "Alice", "role": "admin", "verified": true}}
Context updated: user

CEL> user.name
@@ -138,27 +138,29 @@ true
CEL> user.verified && user.role in ["admin", "moderator"]
true

-CEL> :load-context permissions.json
-Context loaded from permissions.json
+CEL> load permissions.json
+Loaded context from permissions.json

CEL> "write" in permissions
true

-CEL> :history
+CEL> history
1: user.name
2: user.role == "admin"
3: user.verified && user.role in ["admin", "moderator"]
4: "write" in permissions

-CEL> :exit
+CEL> exit
```
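For cross-checking REPL output, the admin check in the session above maps directly onto ordinary Python; a minimal sketch with the same sample data:

```python
# Plain-Python equivalent of the CEL expression:
#   user.verified && user.role in ["admin", "moderator"]
user = {"name": "Alice", "role": "admin", "verified": True}
allowed = user["verified"] and user["role"] in ["admin", "moderator"]
print(allowed)  # True
```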

### Rapid Prototyping

```bash
-# Test expressions quickly
+# Test expressions quickly:
cel -i
-CEL> :context data='{"users": [{"name": "Alice", "active": true}, {"name": "Bob", "active": false}]}'
+CEL> context {"data": {"users": [{"name": "Alice", "active": true}, {"name": "Bob", "active": false}]}}
+Context updated: data
+
CEL> data.users.filter(u, u.active).map(u, u.name)
["Alice"]
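The `filter`/`map` chain above behaves like a Python list comprehension; a sketch for cross-checking the expected result:

```python
# Plain-Python equivalent of the CEL expression:
#   data.users.filter(u, u.active).map(u, u.name)
data = {"users": [{"name": "Alice", "active": True},
                  {"name": "Bob", "active": False}]}
active_names = [u["name"] for u in data["users"] if u["active"]]
print(active_names)  # ['Alice']
```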

@@ -455,9 +457,11 @@ cel --verbose 'expression' --context-file context.json
# Validate JSON files before using them
cat context.json | python -m json.tool

-# Test expressions step by step in interactive mode
+# Test expressions step by step in interactive mode:
cel -i
-CEL> :context test_data='{"user": {"name": "Alice"}}'
+CEL> context {"user": {"name": "Alice"}}
+Context updated: user
+
CEL> has(user)
true
CEL> has(user.name)
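The `has()` probes above correspond to ordinary membership checks in Python; a minimal sketch with the same context:

```python
# Plain-Python analogues of has(user) and has(user.name)
context = {"user": {"name": "Alice"}}
print("user" in context)                   # True, like has(user)
print("name" in context.get("user", {}))   # True, like has(user.name)
```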
@@ -500,8 +504,11 @@ cel 'expression' --context-file "$(pwd)/context.json"

# 4. Test context loading in interactive mode
cel -i
-CEL> :load-context context.json
-CEL> :show-context
+CEL> load context.json
+Loaded context from context.json
+
+CEL> context
+{"user": {"name": "Alice"}}
```

#### Type conversion issues
@@ -517,9 +524,11 @@ cel 'string(age) + " years"' --context '{"age": 30}'
# Instead of: int_string > 10
cel 'int(int_string) > 10' --context '{"int_string": "15"}'

-# Check types in interactive mode
+# Check types in interactive mode:
cel -i
-CEL> :context data='{"value": "123"}'
+CEL> context {"data": {"value": "123"}}
+Context updated: data
+
CEL> typeof(data.value)
string
CEL> int(data.value)
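The conversion recipes above mirror Python's own `str`/`int` casts; a quick sketch of the same two checks:

```python
# Mirrors the CLI recipes: string(age) + " years"  and  int(int_string) > 10
age = 30
label = str(age) + " years"
int_string = "15"
check = int(int_string) > 10
print(label, check)  # 30 years True
```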
106 changes: 84 additions & 22 deletions docs/how-to-guides/production-patterns-best-practices.md
@@ -462,36 +462,98 @@ import time
from cel import evaluate

def benchmark_cel_performance():
-    # Simple expressions
-    simple_expr = "x + y * 2"
-    context = {"x": 10, "y": 20}
+    """Comprehensive CEL performance benchmark matching documented claims."""

-    iterations = 1000  # Reduced for testing
-    start_time = time.perf_counter()
+    # Test scenarios matching the performance table
+    test_cases = [
+        {
+            "name": "Simple expressions",
+            "expression": "x + y * 2",
+            "context": {"x": 10, "y": 20},
+            "expected": 50,
+            "iterations": 10000
+        },
+        {
+            "name": "Complex expressions",
+            "expression": "user.active && user.role in ['admin', 'editor'] && has(user.permissions) && user.permissions.size() > 0",
+            "context": {
+                "user": {
+                    "active": True,
+                    "role": "admin",
+                    "permissions": ["read", "write", "delete"]
+                }
+            },
+            "expected": True,
+            "iterations": 5000
+        },
+        {
+            "name": "Function calls",
+            "expression": "double(x) + square(y)",
+            "context": {
+                "x": 5,
+                "y": 3,
+                "double": lambda x: x * 2,
+                "square": lambda x: x * x
+            },
+            "expected": 19,  # double(5) + square(3) = 10 + 9
+            "iterations": 3000
+        }
+    ]

-    for _ in range(iterations):
-        result = evaluate(simple_expr, context)
+    results = []

-    end_time = time.perf_counter()
-    avg_time_us = ((end_time - start_time) / iterations) * 1_000_000
-    throughput = iterations / (end_time - start_time)
-
-    # Verify the benchmark ran correctly
-    assert avg_time_us > 0
-    assert throughput > 0
+    for test_case in test_cases:
+        print(f"\nBenchmarking: {test_case['name']}")
+
+        # Verify the expression works correctly
+        result = evaluate(test_case["expression"], test_case["context"])
+        assert result == test_case["expected"], f"Expected {test_case['expected']}, got {result}"
+
+        # Warmup
+        for _ in range(100):
+            evaluate(test_case["expression"], test_case["context"])
+
+        # Benchmark
+        start_time = time.perf_counter()
+        for _ in range(test_case["iterations"]):
+            evaluate(test_case["expression"], test_case["context"])
+        end_time = time.perf_counter()
+
+        # Calculate metrics
+        total_time = end_time - start_time
+        avg_time_us = (total_time / test_case["iterations"]) * 1_000_000
+        throughput = test_case["iterations"] / total_time
+
+        result_data = {
+            "name": test_case["name"],
+            "avg_time_us": avg_time_us,
+            "throughput": throughput,
+            "iterations": test_case["iterations"]
+        }
+        results.append(result_data)
+
+        print(f"  Average time: {avg_time_us:.1f} μs")
+        print(f"  Throughput: {throughput:,.0f} ops/sec")

-    # Test that the expression actually works
-    result = evaluate(simple_expr, context)
-    assert result == 50  # 10 + 20 * 2
+    return results

-# Run the benchmark
-benchmark_cel_performance()
+# Run the benchmark and display results
+if __name__ == "__main__":
+    print("CEL Performance Benchmark")
+    print("=" * 40)
+    results = benchmark_cel_performance()
+
+    print("\nSummary:")
+    print("-" * 40)
+    for result in results:
+        print(f"{result['name']:20} | {result['avg_time_us']:6.1f} μs | {result['throughput']:8,.0f} ops/sec")
```
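The timing pattern in the new script (warmup pass, `perf_counter` loop, then μs-per-op and ops/sec) can be factored into a reusable helper. A sketch, demonstrated with a pure-Python stand-in workload since real numbers depend on your hardware and the `cel` build:

```python
import time

def bench(fn, iterations=10_000, warmup=100):
    """Warm up, then time fn() and return (avg_microseconds, ops_per_sec)."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    total = time.perf_counter() - start
    return (total / iterations) * 1_000_000, iterations / total

# Stand-in workload; in practice you would pass e.g.
#   lambda: evaluate("x + y * 2", {"x": 10, "y": 20})
avg_us, ops = bench(lambda: 10 + 20 * 2)
print(f"{avg_us:.2f} us/op, {ops:,.0f} ops/sec")
```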

**Expected Results**:
-- **Modern hardware**: 5-50 μs per evaluation
-- **Simple expressions**: 50,000+ ops/sec
-- **Complex expressions**: 10,000+ ops/sec
+- **Simple expressions**: 5-15 μs per evaluation, 50,000+ ops/sec
+- **Complex expressions**: 15-40 μs per evaluation, 25,000+ ops/sec
+- **Function calls**: 20-50 μs per evaluation, 20,000+ ops/sec

**Learn More**: See [Performance Benchmarking Examples](https://github.com/hardbyte/python-common-expression-language/tree/main/examples/performance) for comprehensive benchmarking scripts.
