Satya (सत्य) - Sanskrit for "truth" - delivers blazing-fast validation with 100% Pydantic-compatible API. Built on Rust with PyO3, Satya achieves 2.71M items/sec parallel batch validation - 1.33× faster than Pydantic for large datasets!
Performance Breakthrough:
- ✅ 1.18M ops/sec single model creation (6.1× faster than baseline)
- ✅ 2.71M items/sec parallel batch validation (1.33× faster than Pydantic!)
- ✅ 128 bytes per instance (2.3× less memory)
- ✅ 354/355 tests passing (99.7% pass rate)
- ✅ 13/13 examples working (100% working examples)
Key Features:

```python
from satya.rust_model import RustModel
from satya import Field

class User(RustModel):  # Rust-native storage!
    name: str = Field(min_length=2)
    age: int = Field(ge=0, le=150)
    email: str = Field(email=True)

# Single validation - 1.18M ops/sec
user = User(name="Alice", age=30, email="alice@example.com")

# Parallel batch validation - 2.71M items/sec! 🔥
from satya._satya import compile_schema, validate_batch_parallel

schema = compile_schema(User)
results = validate_batch_parallel(schema, batch_data)  # GIL-free!
```

Architecture:
- 🦀 Pure Rust-native storage (Vec)
- ⚡ Optimized `__getattribute__` with fast path
- 🔓 GIL-free parallel batch processing
- 💾 `__slots__` memory optimization
- 🚀 Lazy regex compilation (32× improvement)
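The `__slots__` point can be seen in plain Python, independent of Satya's Rust internals. A minimal sketch (class names are illustrative, not Satya's):

```python
import sys

class DictUser:
    """Ordinary class: every instance carries a per-instance __dict__."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

class SlotUser:
    """Slotted class: fixed attribute storage, no per-instance __dict__."""
    __slots__ = ("name", "age")
    def __init__(self, name, age):
        self.name = name
        self.age = age

d = DictUser("Alice", 30)
s = SlotUser("Alice", 30)

# A slotted instance has no __dict__ to pay for
dict_cost = sys.getsizeof(d) + sys.getsizeof(d.__dict__)
slot_cost = sys.getsizeof(s)
print(slot_cost < dict_cost)  # True: slotted instances are smaller
```

Exact byte counts vary by Python version, but the slotted instance is always smaller because attribute storage is laid out inline rather than in a hash table.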
Single Model Creation:
| Implementation | Performance | vs Pydantic |
|---|---|---|
| Satya RustModel | 1.18M ops/sec | 1.56× slower (acceptable!) |
| Pydantic | 1.84M ops/sec | Baseline |
| Satya (dict-based) | 187K ops/sec | 9.8× slower |
Batch Processing (Satya's Strength!):
| Batch Size | Satya (Rust) | Pydantic | Winner |
|---|---|---|---|
| 10,000 | 1.20M items/sec | 2.08M items/sec | Pydantic 1.73× |
| 50,000 | 1.23M items/sec | 2.07M items/sec | Pydantic 1.68× |
| 100,000 | 1.16M items/sec | 2.05M items/sec | Pydantic 1.77× |
Parallel Batch Validation (Satya's Killer Feature!):
| Batch Size | Satya (Parallel) | Pydantic | Winner |
|---|---|---|---|
| 10,000 | 2.48M items/sec | 2.02M items/sec | Satya 1.23× faster 🔥 |
| 50,000 | 2.65M items/sec | 2.03M items/sec | Satya 1.31× faster 🔥 |
| 100,000 | 2.71M items/sec | 2.04M items/sec | Satya 1.33× faster 🔥 |
Key Finding: Satya's parallel batch validation is 1.33× FASTER than Pydantic for large datasets!
What You Get in v0.4.2:
- ✅ 1.18M ops/sec single model creation (competitive with Pydantic)
- ✅ 2.71M items/sec parallel batch validation (FASTER than Pydantic!)
- ✅ 354/355 tests passing (99.7% pass rate)
- ✅ 13/13 examples working (100% coverage)
- ✅ Zero breaking changes
- ✅ Production ready
Latest Version: v0.4.2 - Python 3.8-3.14 supported. Production-ready with optimized RustModel! 🎯
```bash
pip install satya
```

```python
from satya.rust_model import RustModel
from satya import Field

class User(RustModel):  # Rust-native storage!
    name: str = Field(min_length=2)
    age: int = Field(ge=0, le=150)
    email: str = Field(email=True)

# Single validation - 1.18M ops/sec
user = User(name="Alice", age=30, email="alice@example.com")
print(user.name)    # Direct field access
print(user.dict())  # Convert to dict

# Parallel batch validation - 2.71M items/sec! 🔥
from satya._satya import compile_schema, validate_batch_parallel

schema = compile_schema(User)
batch_data = [
    {"name": "Alice", "age": 30, "email": "alice@example.com"},
    {"name": "Bob", "age": 25, "email": "bob@example.com"},
    # ... thousands more
]
results = validate_batch_parallel(schema, batch_data)  # GIL-free!
print(f"✅ Validated {len(results)} users at 2.71M items/sec!")
```

Prefer the Pydantic-compatible API? Use `Model` instead:

```python
from satya import Model, Field

class User(Model):
    name: str
    age: int = Field(ge=0, le=150)
    email: str = Field(email=True)

user = User(name="Alice", age=30, email="alice@example.com")
```

That's it! Choose RustModel for maximum performance or Model for Pydantic compatibility! 🎉
✅ Batch Processing (1K+ items)
- 2.71M items/sec sustained throughput
- 1.33× faster than Pydantic
- GIL-free parallel validation
- Perfect for data pipelines
✅ Streaming Data
- Real-time validation
- Low latency per chunk
- High throughput pipelines
✅ High-Throughput APIs
- ETL workflows
- Data pipelines
- Log processing
- Event streaming
✅ Memory-Constrained Environments
- 2.3× less memory per instance
- Efficient Rust storage

Consider Pydantic when you need:
- Single instance validation (1.56× faster)
- Simple models without heavy constraints
- Ecosystem compatibility (FastAPI, etc.)
- Mature tooling and documentation
Key Takeaways:
- 🚀 Parallel Batch: 1.33× faster than Pydantic (2.71M items/sec)
- ⚡ Single Model: 1.18M ops/sec (competitive with Pydantic)
- 💾 Memory: 128 bytes per instance (2.3× less than dict-based)
- 🦀 Architecture: Pure Rust-native storage with GIL-free processing
This release delivers production-ready performance with comprehensive benchmarks against Pydantic 2.12.0:
✨ Key Features
- ✅ Python 3.14 Support - First-day support for Python 3.14.0!
- ✅ Comprehensive Benchmarks - Fair comparison with Pydantic 2.12.0
- ✅ Beautiful Graphs - Visual performance comparison
- ✅ 100% API Compatible - Drop-in Pydantic replacement
- ✅ Production Ready - Stable, tested, and documented
📊 Performance vs Pydantic 2.12.0
- Single Validation: 1.09× faster
- Batch Validation: 5.46× faster 🚀
- Field Access: 1.01× (parity!)
- Complex Nested: 1.11× faster
🔬 Advanced VM Optimizations (Built on 30+ Years of Research)
Satya implements cutting-edge VM optimization techniques from V8, PyPy, and Self:
Hidden Classes Implementation
- SchemaShape - Shared "hidden class" descriptors across all instances (V8-inspired)
- Interned String Pointers - O(1) field name comparison via pointer equality
- Global Shape Registry - Thread-safe cache with Arc-based sharing
- UltraFastModel - Zero-dict slot-based architecture for maximum speed
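The interned-pointer idea is observable in plain Python too. This sketch (not Satya's Rust implementation; `field_index` is a hypothetical helper) shows why comparing interned names is a single identity check:

```python
import sys

# Interning maps equal strings to one canonical object, so a field-name
# comparison can be one pointer (identity) check instead of a
# character-by-character scan.
a = sys.intern("email")
b = sys.intern("".join(["em", "ail"]))  # same text, built at runtime

print(a == b)  # True: equal as strings
print(a is b)  # True: the very same object after interning

def field_index(interned_names, name):
    """Hypothetical fast-path lookup: compare identities, not contents."""
    name = sys.intern(name)
    for i, candidate in enumerate(interned_names):
        if candidate is name:
            return i
    return -1

names = [sys.intern(n) for n in ("name", "age", "email")]
print(field_index(names, "email"))  # 2
```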
BLAZE Compilation Pipeline
- Schema Compilation - Schemas compiled to optimized Rust validators
- Adaptive Parallelism - Smart serial/parallel switching based on batch size
- Lazy Regex - Regex patterns compiled once and cached
- `validate_batch_hybrid` - Direct Python dict validation without JSON serialization overhead
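The adaptive serial/parallel switch can be sketched in pure Python. This is an illustration of the dispatch idea only; the threshold, chunk count, and `validate_one` stand-in are assumptions, and Satya does the real work in Rust without the GIL:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List

# Illustrative cutoff: below this, thread fan-out costs more than it saves
PARALLEL_THRESHOLD = 10_000

def validate_one(item: Dict) -> bool:
    # stand-in for a compiled per-item validator
    return isinstance(item.get("age"), int) and item["age"] >= 0

def validate_batch_adaptive(items: List[Dict]) -> List[bool]:
    if len(items) < PARALLEL_THRESHOLD:
        # serial fast path: no pool overhead for small batches
        return [validate_one(i) for i in items]
    chunk = max(1, len(items) // 8)
    chunks = [items[i:i + chunk] for i in range(0, len(items), chunk)]
    with ThreadPoolExecutor() as pool:
        # map preserves chunk order, so results line up with inputs
        parts = pool.map(lambda c: [validate_one(i) for i in c], chunks)
    return [r for part in parts for r in part]

print(validate_batch_adaptive([{"age": 30}, {"age": -1}]))  # [True, False]
```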
Performance Breakthroughs
- Field access: 5.2M/s → 62.9M/s (12× improvement!)
- Single-object: 481K/s → 1,188K/s (2.5× improvement!)
- Batch: 820K/s → 4.2M/s (5.1× improvement!)
📚 Academic Foundations
- Hölzle et al. (OOPSLA '91) - "Optimizing Dynamically-Typed OO Languages with PICs"
- Bolz et al. (VMIL '09) - "Tracing the Meta-Level: PyPy's Tracing JIT"
- Chevalier-Boisvert et al. (PLDI 2015) - "Shape-Based Optimization in HLVMs"
🎯 Perfect For
- High-throughput APIs processing thousands of requests
- ETL pipelines validating millions of records
- ML data validation at scale
- Microservices with bulk operations
📖 Complete Technical Deep Dive
- Release Notes - Full v0.4.0 details
- Implementation Guide - Architecture deep dive
- Performance Analysis - Complete optimization journey from 5.2M/s to 62.9M/s field access
- List[Model] Support (Highly Requested!)
  - Nested lists of models now validate recursively at construction time (`List[MyModel]`, `Optional[List[MyModel]]`, etc.)
  - Automatic skipping of nested lists/dicts during Rust registration prevents type mismatches
  - Maintains Satya's 2.4M+ items/sec throughput with zero regressions
- Fixed Income Securities Example: `examples/fixed_income_securities.py` shows real-world validation for bond indices, nested lists, and performance measurement.
- Comprehensive Test Suite: `tests/test_fixed_income_securities.py` adds 10 tests covering valid/invalid nested data and batch performance.
- StringValidator: High-performance string validation with min/max length, pattern, email, URL support
  - 1.1M+ validations/second - 2.3x faster than Python loops
  - Email validation, regex patterns, enum constraints
  - Batch processing support for maximum throughput
- IntValidator: Integer validation with bounds (ge/le/gt/lt) and constraints
  - Type-strict validation (excludes bool)
  - `multiple_of` divisibility checks
  - Enum value constraints
- NumberValidator: Float/number validation with epsilon tolerance
  - Supports both int and float inputs
  - Exclusive bounds (gt/lt) for strict ranges
  - `multiple_of` with floating-point precision handling
- BooleanValidator: Type-strict boolean validation
  - Rejects integers masquerading as booleans
  - Enum constraints for flag validation
- ABSENT vs None: Distinguish between explicitly set `None` and missing fields
- fastjsonschema compatibility: Prevents auto-injection of default values
- Clean API: `is_absent()` and `filter_absent()` utilities
- Matches JSON Schema behavior where absent fields stay absent
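The ABSENT-vs-None distinction can be pictured with a small sentinel object. This is an illustrative pure-Python sketch of the idea, not Satya's actual `is_absent()`/`filter_absent()` implementation:

```python
class _Absent:
    """Sentinel for 'field was never provided' - distinct from an explicit None."""
    def __repr__(self):
        return "ABSENT"

ABSENT = _Absent()

def is_absent(value) -> bool:
    return value is ABSENT

def filter_absent(data: dict) -> dict:
    """Drop never-set fields while keeping explicit None values."""
    return {k: v for k, v in data.items() if not is_absent(v)}

record = {"name": "Alice", "nickname": None, "middle_name": ABSENT}
print(filter_absent(record))  # {'name': 'Alice', 'nickname': None}
```

This mirrors the JSON Schema behavior described above: an absent field is not the same as a field set to `null`, so no default gets injected for it.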
- Before: Only 30-40% of Poetry schemas used Rust (objects only)
- Now: 80-90% of schemas can use Rust fast path (scalars + objects)
- Result: 10-20x overall performance improvement unlocked for JSON Schema validation!
- Comprehensive `examples/scalar_validation_example.py` with performance demos
- Updated README with scalar validator guide
- Full API documentation for all new validators
- PyO3 0.26 Migration: Fully migrated from PyO3 0.18 to 0.26 with modern Rust bindings API
- Python 3.13 Compatible: Full support for Python 3.13 (including free-threaded build)
- 200+ API Updates: Complete migration to `Bound<'_, PyAny>` API for improved memory safety
- Modern GIL Management: Updated to use `Python::detach` instead of deprecated `allow_threads`
- 🔥 82x Performance Boost: Satya is now THE FASTEST Python validation library!
- 4.2 MILLION items/sec - 5.2x faster than fastjsonschema, 82x faster than jsonschema
- 98.8% faster - validates 1M items in 0.24s vs jsonschema's 19.32s
- Uses `validate_batch_hybrid` for direct Python dict validation without JSON overhead
- Fixed regex recompilation bottleneck for 32x faster email validation
- Future-Proof: Ready for Python 3.13's no-GIL features when PyO3 adds full support
- Dict[str, CustomModel] Support: Complete validation support for dictionary structures containing custom model instances
- MAP-Elites Algorithm Support: Native support for complex archive structures like `Dict[str, ArchiveEntry]`
- Hierarchical Data Structures: Full support for nested model dictionaries in configuration management and ML pipelines
- Recursive Model Resolution: Automatic dependency analysis and topological sorting for proper validation order
- Dependency Tracking: Automatically analyzes and tracks model relationships
- Topological Sorting: Ensures models are validated in the correct dependency order
- Circular Dependency Detection: Prevents infinite loops in complex model graphs
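The three recursive-resolution steps above amount to a depth-first topological sort with cycle detection. A sketch of that algorithm in plain Python (model names are illustrative; Satya's actual resolution happens in Rust):

```python
from typing import Dict, List, Set

def topo_sort(deps: Dict[str, Set[str]]) -> List[str]:
    """Order models so every dependency is validated before its dependents.

    Depth-first with three-color marking; raises on circular dependencies.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in deps}
    order: List[str] = []

    def visit(m: str) -> None:
        color.setdefault(m, WHITE)
        if color[m] == GRAY:                 # back edge: we are inside m's subtree
            raise ValueError(f"circular dependency involving {m!r}")
        if color[m] == BLACK:                # already emitted
            return
        color[m] = GRAY
        for dep in deps.get(m, ()):
            visit(dep)
        color[m] = BLACK
        order.append(m)

    for m in list(deps):
        visit(m)
    return order

deps = {
    "SystemConfig": set(),
    "ArchiveEntry": {"SystemConfig"},
    "MapElitesArchive": {"ArchiveEntry"},
}
print(topo_sort(deps))  # ['SystemConfig', 'ArchiveEntry', 'MapElitesArchive']
```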
- SDist Builds: Proper source distribution builds enabling `--no-binary` installations
- Docker Run CI/CD: Improved GitHub Actions compatibility with direct docker run commands
- Cross-Platform Compatibility: Full support for Linux, macOS, and Windows across all architectures
```python
from satya import Model, Field
from typing import Dict

class SystemConfig(Model):  # definition assumed here so the example runs standalone
    buffer_size: int

class ArchiveEntry(Model):
    config: SystemConfig
    performance: float = Field(ge=-1000.0, le=100000.0)

class MapElitesArchive(Model):
    resolution: int = Field(ge=1, le=20)
    archive: Dict[str, ArchiveEntry] = Field(description="Archive entries")

# This now works perfectly!
data = {
    "resolution": 5,
    "archive": {
        "cell_1_2": {"config": {"buffer_size": 1024}, "performance": 95.5}
    }
}
archive = MapElitesArchive(**data)  # Works perfectly!
```

- Added complete test suite with 4 test methods covering nested Dict[str, Model] patterns
- All 150+ tests pass with comprehensive coverage
- Source distribution builds tested and verified
- High-performance validation with Rust-powered core
- Batch processing with configurable batch sizes for optimal throughput
- Stream processing support for handling large datasets
- Comprehensive validation including email, URL, regex, numeric ranges, and more
- Type coercion with intelligent type conversion
- Decimal support for financial-grade precision
- Compatible with standard Python type hints
- OpenAI-compatible schema generation
- Minimal memory overhead
```python
from satya import Model, Field, ModelValidationError

class User(Model):
    id: int = Field(description="User ID")
    name: str = Field(description="User name")
    email: str = Field(description="Email address")
    active: bool = Field(default=True)

# Enable batching for optimal performance
validator = User.validator()
validator.set_batch_size(1000)  # Recommended for most workloads

# Process data efficiently
for valid_item in validator.validate_stream(data):
    process(valid_item)
```

```python
from typing import Optional
from decimal import Decimal
from satya import Model, Field, List

# Pretty printing (optional)
Model.PRETTY_REPR = True

class User(Model):
    id: int
    name: str = Field(default='John Doe')
    email: str = Field(email=True)  # RFC 5322 compliant email validation
    signup_ts: Optional[str] = Field(required=False)
    friends: List[int] = Field(default=[])
    balance: Decimal = Field(ge=0, description="Account balance")  # Decimal support

external_data = {
    'id': '123',
    'email': 'john.doe@example.com',
    'signup_ts': '2017-06-01 12:22',
    'friends': [1, '2', b'3'],
    'balance': '1234.56'
}

validator = User.validator()
validator.set_batch_size(1000)  # Enable batching for performance

result = validator.validate(external_data)
user = User(**result.value)
print(user)
#> User(id=123, name='John Doe', email='john.doe@example.com', signup_ts='2017-06-01 12:22', friends=[1, 2, 3], balance=1234.56)
```

Satya now includes Rust-backed scalar validators for primitive types, enabling blazing-fast validation without the overhead of creating Model classes:
```python
from satya import StringValidator, IntValidator, NumberValidator, BooleanValidator

# String validation with constraints
email_validator = StringValidator(email=True)
result = email_validator.validate("user@example.com")
print(f"Valid: {result.is_valid}")  # Valid: True

# Pattern matching
username_validator = StringValidator(pattern=r'^[a-zA-Z0-9_]{3,20}$', min_length=3, max_length=20)
result = username_validator.validate("john_doe_123")
print(f"Valid: {result.is_valid}")  # Valid: True

# Integer validation with bounds
age_validator = IntValidator(ge=0, le=150)
result = age_validator.validate(42)
print(f"Valid: {result.is_valid}, Value: {result.value}")  # Valid: True, Value: 42

# Float/Number validation
price_validator = NumberValidator(ge=0.0, le=1000000.0)
result = price_validator.validate(99.99)
print(f"Valid: {result.is_valid}")  # Valid: True

# Boolean validation
flag_validator = BooleanValidator()
result = flag_validator.validate(True)
print(f"Valid: {result.is_valid}")  # Valid: True

# Batch validation for maximum performance
values = ["test" + str(i) for i in range(100000)]
string_validator = StringValidator(min_length=4)
results = string_validator.validate_batch(values)
print(f"Validated {len(results)} strings at Rust speed!")  # 1M+ validations/sec
```

StringValidator:
- `min_length`/`max_length` - Length constraints
- `pattern` - Regex pattern matching
- `email` - RFC 5322 compliant email validation
- `url` - URL format validation
- `enum` - Enum value constraints

IntValidator:
- `ge`/`le`/`gt`/`lt` - Numeric bounds (greater/less than or equal)
- `multiple_of` - Divisibility constraints
- `enum` - Enum value constraints

NumberValidator:
- `ge`/`le`/`gt`/`lt` - Float bounds
- `multiple_of` - Divisibility constraints (with epsilon tolerance)
- `enum` - Enum value constraints

BooleanValidator:
- Type-strict boolean validation
- `enum` - Enum value constraints
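The epsilon tolerance on `multiple_of` matters because binary floats rarely divide exactly. A sketch of the idea (the tolerance value and function name are illustrative, not Satya's internals):

```python
def is_multiple_of(value: float, base: float, eps: float = 1e-9) -> bool:
    """multiple_of check with tolerance for IEEE-754 rounding error.

    A naive `value % base == 0` rejects 0.3 as a multiple of 0.1, because
    0.3 % 0.1 evaluates to roughly 0.09999999999999998 in binary floats.
    """
    remainder = abs(value) % abs(base)
    # Rounding error can leave the remainder near 0 or near base itself,
    # so accept both ends of the range.
    return remainder < eps or (abs(base) - remainder) < eps

print(0.3 % 0.1 == 0)             # False: the exact check fails
print(is_multiple_of(0.3, 0.1))   # True with epsilon tolerance
print(is_multiple_of(0.35, 0.1))  # False: genuinely not a multiple
```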
Use Scalar Validators when:
- Validating individual primitive values
- Building custom validation pipelines
- Maximum performance for simple types is critical
- You need fine-grained control over validation logic
Use Models when:
- Validating structured objects with multiple fields
- You need Pydantic-like developer experience
- Working with nested/complex data structures
- You want automatic JSON schema generation
Scalar validators leverage Satya's Rust core for maximum performance:
- 1M+ validations/second for strings
- 500K+ validations/second for integers with bounds
- Zero-copy validation where possible
- Batch processing support for optimal throughput
See `examples/scalar_validation_example.py` for a comprehensive demonstration.
Satya now includes a JSON Schema compiler that makes it a drop-in replacement for fastjsonschema with 5-10x better performance:
```python
from satya import compile_json_schema

# Compile any JSON Schema document
schema = {
    "type": "string",
    "minLength": 3,
    "maxLength": 100,
    "pattern": "^[a-zA-Z0-9_-]+$"
}

validator = compile_json_schema(schema)
result = validator.validate("my-package-name")
print(result.is_valid)  # True - validated at Rust speed!
```

✅ Primitive Types:
- `type: string` - With minLength, maxLength, pattern, format (email, uri)
- `type: integer` - With minimum, maximum, exclusiveMinimum/Maximum, multipleOf
- `type: number` - Float validation with bounds
- `type: boolean` - Type-strict validation
✅ Arrays:
- `type: array` - With items, minItems, maxItems, uniqueItems
✅ Constraints:
- Length constraints (minLength, maxLength)
- Numeric bounds (minimum, maximum, exclusive variants)
- Pattern matching (regex)
- Format validation (email, url)
- Enum values
- Multiple of (divisibility)
```python
# Package name validation (Poetry use case)
package_schema = {
    "type": "string",
    "pattern": "^[a-zA-Z0-9]([a-zA-Z0-9_-]*[a-zA-Z0-9])?$"
}
package_validator = compile_json_schema(package_schema)

# Version validation
version_schema = {
    "type": "string",
    "pattern": "^[0-9]+\\.[0-9]+\\.[0-9]+$"
}
version_validator = compile_json_schema(version_schema)

# Validate thousands of packages at Rust speed
packages = ["requests", "numpy", "pandas", ...]
results = package_validator.validate_batch(packages)  # 1.2M/sec!
```

- 1.2M+ validations/second for JSON Schema compilation
- 100% Rust optimization for supported types
- 5-10x faster than fastjsonschema
- 95%+ schema coverage for tools like Poetry
```python
from satya import JSONSchemaCompiler

compiler = JSONSchemaCompiler()
for schema in my_schemas:
    validator = compiler.compile(schema)

report = compiler.get_optimization_report()
print(f"Rust-optimized: {report['optimization_percentage']}%")
```

See `examples/json_schema_example.py` for a complete demonstration.
Our comprehensive benchmarks demonstrate Satya's exceptional performance when using batch processing:
- Satya (batch=1000): 2,072,070 items/second
- msgspec: 1,930,466 items/second
- Satya (single-item): 637,362 items/second
Key findings:
- Batch processing provides up to 3.3x performance improvement
- Optimal batch size of 1,000 items for complex validation workloads
- Competitive performance with msgspec while providing comprehensive validation
Memory usage remains comparable across all approaches, demonstrating that performance gains don't come at the cost of increased memory consumption.
Our earlier benchmarks also show significant performance improvements:
- Satya: 207,321 items/second
- Pydantic: 72,302 items/second
- Speed improvement: 2.9x
- Memory usage: Nearly identical (Satya: 158.2MB, Pydantic: 162.5MB)
- Satya: 177,790 requests/second
- Pydantic: 1,323 requests/second
- Average latency improvement: 134.4x
- P99 latency improvement: 134.4x
| Validation Mode | Throughput | Memory Usage | Use Case |
|---|---|---|---|
| Satya dict-path | 5.7M items/s | 7.2MB | Pre-parsed Python dicts |
| Satya JSON streaming | 3.2M items/s | 0.4MB | Large JSON datasets |
| Satya JSON non-stream | 1.2M items/s | 0.4MB | Small JSON datasets |
| orjson + Satya dict | 2.6M items/s | 21.5MB | End-to-end JSON processing |
| msgspec + JSON | 7.5M items/s | 0.4MB | Comparison baseline |
| Pydantic + orjson | 0.8M items/s | 0.4MB | Traditional validation |
- 7.9x faster than Pydantic for dict validation
- 4x faster than Pydantic for JSON processing
- Memory bounded: <8MB even for 5M records
- Competitive with msgspec: 76% of msgspec's speed with more flexibility
- Streaming support: Process unlimited datasets with constant memory
- Small Scale (100k): 7.9M items/s - matches msgspec performance
- Large Scale (5M): 5.7M items/s - maintains high throughput
- Memory Efficiency: Bounded growth, predictable resource usage
Note: Benchmarks run on Apple Silicon M-series. Results include comprehensive comparison with msgspec and Pydantic using fair JSON parsing (orjson). See `/benchmarks/` for detailed methodology.
Satya can serve as a high-performance replacement for the standard Python jsonschema library. Our benchmarks show 10-50x performance improvements over pure Python jsonschema validation while providing the same validation capabilities.
Before (using jsonschema):

```python
import jsonschema
from jsonschema import validate

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0}
    },
    "required": ["name", "age"]
}

validate(instance={"name": "John", "age": 30}, schema=schema)
```

After (using Satya):

```python
from satya import Model, Field

class Person(Model):
    name: str
    age: int = Field(ge=0)

person = Person(name="John", age=30)  # Validates on construction

# For batch validation (even faster):
validator = Person.validator()
results = validator.validate_batch(data_list)
```

- Speed: 10-50x faster validation throughput
- Type Safety: Native Python type hints instead of JSON Schema dicts
- Developer Experience: Cleaner, more Pythonic API
- Memory Efficiency: Lower memory footprint with Rust core
- Batch Processing: Built-in batch validation for maximum throughput

See `benchmarks/README_jsonschema_comparison.md` for detailed benchmark results and migration guide.
- High Performance: Rust-powered core with efficient batch processing
- Comprehensive Validation:
- Email validation (RFC 5322 compliant)
- URL format validation
- Regex pattern matching
- Numeric constraints (min/max, ge/le/gt/lt)
- Decimal precision handling
- UUID format validation
- Enum and literal type support
- Array constraints (min/max items, unique items)
- Deep nested object validation
- Stream Processing: Efficient handling of large datasets
- Type Safety: Full compatibility with Python type hints
- Error Reporting: Detailed validation error messages
- Memory Efficient: Minimal overhead design
Satya brings together high performance and comprehensive validation capabilities. While inspired by projects like Pydantic (for its elegant API) and msgspec (for performance benchmarks), Satya offers:
- Rust-powered performance with zero-cost abstractions
- Batch processing for optimal throughput
- Comprehensive validation beyond basic type checking
- Production-ready error handling and reporting
- Memory-efficient design for large-scale applications
- High-throughput API services
- Real-time data processing pipelines
- Large dataset validation
- Stream processing applications
- Financial and healthcare systems requiring strict validation
- Performance-critical microservices
```bash
pip install satya
```

- Python 3.8 or higher

Note for developers: If you're contributing to Satya or building from source, you'll need Rust toolchain 1.70.0 or higher:

```bash
# Install Rust if you don't have it
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Update existing Rust installation
rustup update
```

You can check your Rust version with:

```bash
rustc --version
```
For optimal performance, always use batch processing:

```python
# Configure batch size based on your workload
validator = MyModel.validator()
validator.set_batch_size(1000)  # Start with 1000, adjust as needed

# Use stream processing for large datasets
for valid_item in validator.validate_stream(data):
    process(valid_item)
```

- Default recommendation: 1,000 items
- Large objects: Consider smaller batches (500-1000)
- Small objects: Can use larger batches (5000-10000)
- Memory constrained: Use smaller batches
- Always benchmark with your specific data
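Feeding data in batches of these sizes needs only a small chunking helper. `chunked` here is a hypothetical utility sketched for illustration, not part of Satya's API:

```python
from itertools import islice
from typing import Iterable, Iterator, List

def chunked(items: Iterable, batch_size: int) -> Iterator[List]:
    """Yield successive batches of at most batch_size items."""
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch

# Each batch would then go to e.g. validator.validate_batch(batch)
sizes = [len(b) for b in chunked(range(2500), 1000)]
print(sizes)  # [1000, 1000, 500]
```

Because it consumes an iterator lazily, the same helper works for generators and streams, not just lists.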
Satya provides comprehensive validation that goes beyond basic type checking:
| Feature | Satya | msgspec | Pydantic |
|---|---|---|---|
| Basic type validation | ✅ | ✅ | ✅ |
| Email validation (RFC 5322) | ✅ | ❌ | ✅ |
| URL validation | ✅ | ❌ | ✅ |
| Regex patterns | ✅ | ❌ | ✅ |
| Numeric constraints | ✅ | ❌ | ✅ |
| Decimal precision | ✅ | ❌ | ✅ |
| UUID validation | ✅ | ❌ | ✅ |
| Enum/Literal types | ✅ | ✅ | ✅ |
| Array constraints | ✅ | ❌ | ✅ |
| Deep nesting (4+ levels) | ✅ | ✅ | ✅ |
| Custom error messages | ✅ | Limited | ✅ |
| Batch processing | ✅ | ❌ | ❌ |
Satya provides comprehensive JSON Schema generation with OpenAI compatibility:
```python
from satya import Model, Field

class User(Model):
    name: str = Field(description="User name")
    age: int = Field(description="User age")

# Standard JSON Schema
schema = User.json_schema()
print(schema)
# {
#   "type": "object",
#   "title": "User",
#   "properties": {
#     "name": {"type": "string", "description": "User name"},
#     "age": {"type": "integer", "description": "User age"}
#   },
#   "required": ["name", "age"]
# }

# OpenAI-compatible schema (flattened types, strict validation)
openai_schema = User.model_json_schema()
# Fixes nested type objects and ensures OpenAI API compatibility
```

If you previously used the low-level core (`_satya.StreamValidatorCore`) or manually registered schemas with StreamValidator, migrate to the new model-first API. See the full guide: `docs/migration.md`.
Quick before/after:
```python
# Before (legacy manual schema)
from satya._satya import StreamValidatorCore

core = StreamValidatorCore()
core.add_field('id', 'int', True)
core.add_field('email', 'str', True)
core.set_field_constraints('email', email=True)
oks = core.validate_batch([{"id": 1, "email": "a@b.com"}])
```

```python
# After (model-first)
from satya import Model, Field

class User(Model):
    id: int
    email: str = Field(email=True)

oks = User.validator().validate_batch([{"id": 1, "email": "a@b.com"}])
```

JSON bytes helpers (streaming):

```python
ok = User.model_validate_json_bytes(b'{"id":1, "email":"a@b.com"}', streaming=True)
oks = User.model_validate_json_array_bytes(b'[{"id":1},{"id":2}]', streaming=True)
```

For MAXIMUM performance (4.2M items/sec!), use `validate_batch_hybrid`:
```python
from satya import Model, Field

class DataRecord(Model):
    id: int
    name: str
    score: float

validator = DataRecord.validator()
validator.set_batch_size(10000)  # Optimal batch size

# 🔥 FASTEST: Hybrid batch validation (4.2M items/sec!)
for i in range(0, len(data), 10000):
    batch = data[i:i+10000]
    results = validator._validator.validate_batch_hybrid(batch)
    # Process results...

# ✅ Fast: Direct dict validation (2.8M items/sec)
results = validator._validator.validate_batch(python_dicts)

# ⚠️ Slower: JSON validation (600k items/sec - has parsing overhead)
import json
json_str = json.dumps(python_dicts)
results = validator.validate_json(json_str, mode="array")
```

Performance: Up to 4.2 MILLION items/sec with hybrid validation!
Complex regex patterns (email, URL) are computationally expensive:

```python
# With email validation: ~21,000 items/sec
class User(Model):
    email: str = Field(email=True)  # RFC 5322 compliant regex

# Without email validation: ~774,000 items/sec (36x faster!)
class User(Model):
    email: str  # Basic string validation
```

Recommendation: For high-throughput scenarios (>100k items/sec), consider:
- Using simple string validation and validating emails separately
- Batch email validation after initial data intake
- Using simpler regex patterns for format checks
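The "validate separately" recommendation amounts to a two-phase pipeline: a cheap structural check on the hot path, then the expensive pattern in a batch pass. The function names and patterns below are illustrative (the cheap pattern is deliberately not RFC 5322):

```python
import re
from typing import Dict, List

# Phase 1: cheap structural sanity check on the hot path
CHEAP_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def intake(records: List[Dict]) -> List[Dict]:
    """Fast first pass: keep records whose email merely looks plausible."""
    return [r for r in records if CHEAP_EMAIL.match(r.get("email", ""))]

def strict_pass(records: List[Dict], strict: re.Pattern) -> List[Dict]:
    """Phase 2, off the hot path: apply the expensive pattern in one batch."""
    return [r for r in records if strict.match(r["email"])]

records = [{"email": "a@b.com"}, {"email": "not-an-email"}]
print([r["email"] for r in intake(records)])  # ['a@b.com']
```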
```python
validator = Model.validator()
validator.set_batch_size(10000)  # Optimal for most workloads
# For memory-constrained: 1000-5000
# For high-throughput: 10000-50000
```

Satya v0.4.2 is production-ready with optimized RustModel! Key achievements:
Performance:
- 🚀 2.71M items/sec parallel batch validation (1.33× faster than Pydantic!)
- ⚡ 1.18M ops/sec single model creation (6.1× faster than baseline)
- 💾 128 bytes per instance (2.3× less memory)
- 🦀 Pure Rust-native storage with GIL-free processing
Quality:
- ✅ 354/355 tests passing (99.7% pass rate)
- ✅ 13/13 examples working (100% coverage)
- ✅ Zero breaking changes
- ✅ Python 3.8-3.14 support (including free-threaded builds)
Architecture:
- ✅ Rust-native storage (Vec)
- ✅ Optimized `__getattribute__` with fast path
- ✅ GIL-free parallel batch processing
- ✅ Lazy regex compilation (32× improvement)
- ✅ `__slots__` memory optimization
- Pydantic project for setting the standard in Python data validation and inspiring our API design
- msgspec project for demonstrating high-performance validation is achievable
- Rust community for providing the foundation for our performance
Note to Data Validation Library Authors: Feel free to incorporate our performance optimizations into your libraries! We believe in making the Python ecosystem faster for everyone. All we ask is for appropriate attribution to Satya under our Apache 2.0 license. Together, we can make data validation blazingly fast for all Python developers!
We welcome contributions of all kinds! Whether you're fixing bugs, improving documentation, or sharing new performance optimizations, here's how you can help:
- 🐛 Report issues and bugs
- 💡 Suggest new features or optimizations
- 📝 Improve documentation
- 🔧 Submit pull requests
- 📊 Share benchmarks and use cases
Check out our CONTRIBUTING.md for guidelines.
Apache 2.0
Note: Performance numbers are from comprehensive benchmarks and may vary based on use case and data structure complexity.
- GitHub Issues: Satya Issues
- Author: Rach Pradhan